diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Creative Ct4810 Driver Windows 7.rar.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Creative Ct4810 Driver Windows 7.rar.md deleted file mode 100644 index bcc9e5eb32867b11b24bce32ce224e046e58ad6e..0000000000000000000000000000000000000000 --- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Creative Ct4810 Driver Windows 7.rar.md +++ /dev/null @@ -1,134 +0,0 @@ -
-

Creative Ct4810 Driver Windows 7.rar: How to Download and Install It

-

If you have an old Creative sound card and you want to use it on your Windows 7 computer, you might need a special driver to make it work. In this article, we will show you how to download and install Creative Ct4810 Driver Windows 7.rar, a compressed file that contains the driver for your sound card. We will also give you some troubleshooting tips in case you encounter any problems.

-

Introduction

-

Creative is a well-known brand of sound cards and audio devices for computers. One of their popular products is the Creative Sound Blaster CT4810, a PCI sound card that was released in the late 1990s. This sound card has a 16-bit digital audio processor and supports various sound effects and features.

-

Creative Ct4810 Driver Windows 7.rar


Download https://byltly.com/2uKyvx



-

What is Creative Ct4810 Driver Windows 7.rar?

-

Creative Ct4810 Driver Windows 7.rar is a compressed file that contains the driver for the Creative Sound Blaster CT4810 sound card. A driver is software that allows your computer to communicate with a hardware device. Without a driver, your sound card might not work properly, or at all, on your computer.

-

Why do you need Creative Ct4810 Driver Windows 7.rar?

-

If you have a Creative Sound Blaster CT4810 sound card and you want to use it on a Windows 7 computer, you will probably need Creative Ct4810 Driver Windows 7.rar, because the driver that originally shipped with the card was written for a much older operating system and might not be compatible with Windows 7. A newer driver is needed to make the card work on the newer system.

-

Creative Ct4810 Sound Card Driver for Windows 7
-How to Install Creative Ct4810 Driver on Windows 7
-Download Creative Ct4810 Driver for Windows 7 32-bit
-Creative Ct4810 Driver Windows 7 64-bit Free Download
-Creative Ct4810 Driver Windows 7.zip File
-Creative Ct4810 Driver Update for Windows 7
-Creative Ct4810 Driver Compatibility with Windows 7
-Creative Ct4810 Driver Error on Windows 7
-Creative Ct4810 Driver Not Working on Windows 7
-Creative Ct4810 Driver Missing on Windows 7
-Creative Ct4810 Driver Fix for Windows 7
-Creative Ct4810 Driver Software for Windows 7
-Creative Ct4810 Driver Setup for Windows 7
-Creative Ct4810 Driver Installation Guide for Windows 7
-Creative Ct4810 Driver Troubleshooting for Windows 7
-Creative Ct4810 Driver Support for Windows 7
-Creative Ct4810 Driver Features for Windows 7
-Creative Ct4810 Driver Review for Windows 7
-Creative Ct4810 Driver Alternatives for Windows 7
-Creative Ct4810 Driver Comparison for Windows 7
-Best Creative Ct4810 Driver for Windows 7
-Latest Creative Ct4810 Driver for Windows 7
-Old Creative Ct4810 Driver for Windows 7
-Original Creative Ct4810 Driver for Windows 7
-Official Creative Ct4810 Driver for Windows 7
-Unofficial Creative Ct4810 Driver for Windows 7
-Modified Creative Ct4810 Driver for Windows 7
-Customized Creative Ct4810 Driver for Windows 7
-Enhanced Creative Ct4810 Driver for Windows 7
-Improved Creative Ct4810 Driver for Windows 7
-Optimized Creative Ct4810 Driver for Windows 7
-Tested Creative Ct4810 Driver for Windows 7
-Verified Creative Ct4810 Driver for Windows 7
-Safe Creative Ct4810 Driver for Windows 7
-Secure Creative Ct4810 Driver for Windows 7
-Reliable Creative Ct4810 Driver for Windows 7
-Fast Creative Ct4810 Driver for Windows 7
-Easy Creative Ct4810 Driver for Windows 7
-Simple Creative Ct4810 Driver for Windows 7
-User-friendly Creative Ct4810 Driver for Windows 7
-Advanced Creative Ct4810 Driver for Windows 7
-Professional Creative Ct4810 Driver for Windows 7
-Premium Creative Ct4810 Driver for Windows 7
-Free Creative Ct4810 Driver for Windows 7
-Cheap Creative Ct4810 Driver for Windows 7
-Discounted Creative Ct4810 Driver for Windows 7
-Affordable Creative Ct4810 Driver for Windows 7
-Quality Creative Ct4810 Driver for Windows 7
-High-performance Creative Ct4810 Driver for Windows 7
-Low-latency Creative Ct4810 Driver for Windows 7

-

How to download Creative Ct4810 Driver Windows 7.rar?

-

You can download Creative Ct4810 Driver Windows 7.rar from various online sources, such as file-sharing websites or forums. However, you should be careful when downloading files from unknown sources, as they might contain viruses or malware that can harm your computer. You should always scan the files with an antivirus program before opening them.
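If you would rather verify a download yourself than rely on the hosting site's claims, you can compute the file's checksum and compare it against a hash published by a source you trust. The short Python sketch below does that; the file name and the expected hash are placeholders for illustration, not values published for this driver.

```python
import hashlib

def sha256_of(path, chunk_size=1 << 20):
    """Return the SHA-256 hex digest of a file, reading it in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Placeholder values: substitute the real file name and a hash you trust.
downloaded_file = "ct4810_wdm.zip"
expected_hash = "0" * 64  # hypothetical reference hash

actual = sha256_of(downloaded_file)
print("SHA-256:", actual)
print("Matches expected hash:", actual == expected_hash)
```

A matching checksum only confirms the file was not corrupted or swapped after the hash was published; it complements, but does not replace, an antivirus scan.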

-

One of the websites that offer Creative Ct4810 Driver Windows 7.rar is https://www.driverguide.com/driver/detail.php?driverid=133039. This website claims to have tested and verified the file for safety and compatibility. However, we cannot guarantee the accuracy or reliability of this website, so use it at your own risk.

-

To download Creative Ct4810 Driver Windows 7.rar from this website, follow these steps:

-
    -
  1. Go to https://www.driverguide.com/driver/detail.php?driverid=133039.
  2. Click on the green "Download Now" button.
  3. Wait for the download to start. You might need to create an account or sign in to access the file.
  4. Save the file to your computer. The file name should be ct4810_wdm.zip.
-

Installation Guide

-

After downloading Creative Ct4810 Driver Windows 7.rar, you need to extract it and install it on your computer. Here are the steps to do that:

-

How to extract Creative Ct4810 Driver Windows 7.rar?

-

To extract Creative Ct4810 Driver Windows 7.rar, you need a program that can open compressed files, such as WinRAR or 7-Zip. You can download these programs from their official websites for free.

-

To extract Creative Ct4810 Driver Windows 7.rar using WinRAR, follow these steps:

-
    -
  1. Right-click on the ct4810_wdm.zip file and select "Extract Here".
  2. A new folder named ct4810_wdm should appear in the same location as the zip file.
  3. Open the ct4810_wdm folder and look for a file named ct-4810.exe. This is the setup file for the driver. (A scripted alternative to these steps is sketched right after this list.)
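If you prefer to script the extraction instead of using WinRAR, note that the downloaded file is an ordinary .zip archive (despite the .rar in the article title), so Python's standard-library zipfile module can open it; it cannot open real RAR archives. A minimal sketch, with example paths:

```python
import zipfile
from pathlib import Path

archive = Path("ct4810_wdm.zip")   # the downloaded archive (example path)
target = Path("ct4810_wdm")        # folder to extract into

target.mkdir(exist_ok=True)
with zipfile.ZipFile(archive) as zf:
    zf.extractall(target)

# List the extracted files and flag the driver's setup file.
for entry in sorted(target.iterdir()):
    marker = "  <-- setup file" if entry.name.lower() == "ct-4810.exe" else ""
    print(entry.name + marker)
```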
-

How to install Creative Ct4810 Driver Windows 7.rar?

-

To install Creative Ct4810 Driver Windows 7.rar, you need to run the setup file and follow the instructions on the screen. Here are the steps to do that:

-

Step 1: Run the setup file

-

Double-click on the ct-4810.exe file in the ct4810_wdm folder. A window should pop up asking for your permission to run the program. Click on "Yes" or "Run" to continue.

-

Step 2: Follow the instructions on the screen

-

The setup wizard should guide you through the installation process. You might need to agree to some terms and conditions, choose a destination folder, and select some options. Follow the instructions on the screen and click on "Next" or "Finish" when prompted.

-

Step 3: Restart your computer

-

After completing the installation, you might need to restart your computer for the changes to take effect. Click on "Yes" or "Restart" when asked by the setup wizard or by your computer.

-

Troubleshooting Tips

-

If you have installed Creative Ct4810 Driver Windows 7.rar but your sound card still does not work properly or at all on your computer, you might need some troubleshooting tips. Here are some possible solutions:

-

What to do if Creative Ct4810 Driver Windows 7.rar does not work?

-

Check the compatibility mode

-

Sometimes, older drivers might not work well with newer operating systems unless they are run in compatibility mode. Compatibility mode is a feature that allows you to run programs as if they were running on an older version of Windows.

-

To check if compatibility mode is enabled for Creative Ct4810 Driver Windows 7.rar, follow these steps:

-
    -
  1. Right-click on the ct-4810.exe file in the ct4810_wdm folder and select "Properties".
  2. Click on the "Compatibility" tab.
  3. Look for a checkbox that says "Run this program in compatibility mode for:".
  4. If this checkbox is checked, make sure that it is set to "Windows XP (Service Pack 3)" or another compatible version of Windows.
  5. If this checkbox is not checked, check it and set it to "Windows XP (Service Pack 3)" or another compatible version of Windows.
  6. Click on "Apply" and then "OK".
  7. Try running the setup file again and see if it works. (If you prefer to apply the compatibility layer from a script, see the sketch right after this list.)
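The Properties dialog is the normal way to set compatibility mode. If you want to launch the installer with the same compatibility layer from a script, Windows also honours the __COMPAT_LAYER environment variable for processes started with it; WINXPSP3 is the layer that corresponds to the "Windows XP (Service Pack 3)" option. A hedged Python sketch, with an example path:

```python
import os
import subprocess

setup_exe = r"C:\Users\you\Downloads\ct4810_wdm\ct-4810.exe"  # example path

# Copy the current environment and request the Windows XP SP3 compatibility layer.
env = os.environ.copy()
env["__COMPAT_LAYER"] = "WINXPSP3"

# Launch the installer; it runs as if the Compatibility tab were set to Windows XP SP3.
subprocess.run([setup_exe], env=env, check=False)
```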
-

Update your sound card driver

-

Sometimes, newer drivers might be available for your sound card that can improve its performance and compatibility with Windows 7. You can check for updates from Creative's official website or from other sources online.

-

To check for updates from Creative's official website, follow these steps:

-
    -
  1. Go to https://support.creative.com/Products/ProductDetails.aspx?catID=1&subCatID=207&prodID=4851&prodName=Sound%20Blaster%20PCI%20128&subCatName=Others&CatName=Sound+Blaster&VARSET=prodfaq:PRODFAQ_4851,VARSET=CategoryID:1.
  2. This is the product page for your sound card model. Look for a section that says "Latest Downloads".
  3. If there are any updates available for your sound card driver, click on them and follow the instructions on how to download and install them.
  4. If there are no updates available for your sound card driver, try looking for updates from other sources online.
-

Contact Creative support

If none of the steps above solve the problem, you can contact Creative support directly, for example through their online chat. You can find their contact information on their website: https://support.creative.com/ContactUs.aspx.

-

Conclusion

-

In this article, we have shown you how to download and install Creative Ct4810 Driver Windows 7.rar, a compressed file that contains the driver for your Creative Sound Blaster CT4810 sound card. We have also given you some troubleshooting tips in case you encounter any problems. We hope that this article has helped you to make your sound card work on your Windows 7 computer.

-

If you have any questions or feedback, please leave a comment below. We would love to hear from you!

-

FAQs

-

Here are some frequently asked questions about Creative Ct4810 Driver Windows 7.rar:

-
    -
  1. What is the size of Creative Ct4810 Driver Windows 7.rar?

    The size of Creative Ct4810 Driver Windows 7.rar is about 4.8 MB.

    -
  2. Is Creative Ct4810 Driver Windows 7.rar safe to download and install?

    Creative Ct4810 Driver Windows 7.rar is safe to download and install if you get it from a trusted source, such as Creative's official website or a reputable file-sharing website. However, you should always scan the file with an antivirus program before opening it to make sure that it does not contain any viruses or malware.

    -
  3. Does Creative Ct4810 Driver Windows 7.rar work on other versions of Windows?

    Creative Ct4810 Driver Windows 7.rar might work on other versions of Windows, such as Windows Vista or Windows 8, but it is not guaranteed. You might need to use compatibility mode or look for other drivers that are compatible with your operating system.

    -
  4. Does Creative Ct4810 Driver Windows 7.rar work on other models of sound cards?

    Creative Ct4810 Driver Windows 7.rar is designed specifically for the Creative Sound Blaster CT4810 sound card. It might not work on other models of sound cards, even if they are from the same brand or series. You should look for the driver that matches your sound card model.

    -
  5. Where can I find more information about Creative Ct4810 Driver Windows 7.rar?

    You can find more information about Creative Ct4810 Driver Windows 7.rar on Creative's official website: https://support.creative.com/Products/ProductDetails.aspx?catID=1&subCatID=207&prodID=4851&prodName=Sound%20Blaster%20PCI%20128&subCatName=Others&CatName=Sound+Blaster&VARSET=prodfaq:PRODFAQ_4851,VARSET=CategoryID:1. You can also search online for reviews, tutorials, or forums that discuss this topic.

    -
-

0a6ba089eb
-
-
\ No newline at end of file diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Fritzing 0.9.10 A Beginners Tutorial on Creating and Documenting Your Own Circuits.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Fritzing 0.9.10 A Beginners Tutorial on Creating and Documenting Your Own Circuits.md deleted file mode 100644 index aa8b93d2f653fe30b0432bbfdbd6020df8496383..0000000000000000000000000000000000000000 --- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Fritzing 0.9.10 A Beginners Tutorial on Creating and Documenting Your Own Circuits.md +++ /dev/null @@ -1,37 +0,0 @@ -
-

Fritzing 0.9.10: A Powerful Tool for Electronics Design

-

Fritzing is a software application that allows you to create, simulate and document electronic circuits. Whether you are a beginner or a professional, Fritzing can help you turn your ideas into reality.

-

In this article, we will introduce you to the latest version of Fritzing, 0.9.10, which was released on May 22, 2022. We will also show you some of the features and benefits of using Fritzing for your electronics projects.

-

fritzing 0.9.10


Download File –––––>>> https://byltly.com/2uKxoG



-

What is new in Fritzing 0.9.10?

-

Fritzing 0.9.10 is a maintenance release that fixes several bugs and improves the performance and stability of the application. It also adds some new features and enhancements, such as:

- -

How to install Fritzing 0.9.10?

-

Fritzing 0.9.10 is available for Windows (32-bit and 64-bit), Mac OS X (High Sierra to Monterey) and Linux (64-bit). You can download it from the official website for a suggested donation of 8€ (around US$10). This way you can support the development and maintenance of Fritzing.

-

To install Fritzing on your computer, follow these steps:

-
    -
  1. Run the downloaded installer file and follow the instructions. On Windows, you may need to confirm the admin rights (\"UAC\") request to allow the installation of the Visual C++ Redistributable from Microsoft.
  2. On Mac OS X, open the downloaded *.dmg file and move Fritzing to your applications folder. You can then launch Fritzing from there.
  3. On Linux, add the executable permission to the downloaded AppImage file and start it. On Ubuntu 22.04, you may need to install the libfuse2 library to support AppImages: apt install libfuse2. (A small script covering these Linux steps is sketched right after this list.)
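On Linux, the two manual steps (add the executable permission, then start the AppImage) can also be wrapped in a short script. The sketch below assumes an example file name for the downloaded AppImage and does not cover the optional libfuse2 installation, which needs apt and administrator rights.

```python
import os
import stat
import subprocess

appimage = "Fritzing-0.9.10.AppImage"  # example file name of the downloaded AppImage

# Add the executable bits, the equivalent of `chmod +x`.
mode = os.stat(appimage).st_mode
os.chmod(appimage, mode | stat.S_IXUSR | stat.S_IXGRP | stat.S_IXOTH)

# Start Fritzing.
subprocess.run([os.path.abspath(appimage)], check=False)
```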
-

If you have any problems with the installation, do not hesitate to contact the Fritzing team via the contact form on their website. You can also check the installation instructions and the known issues on their website for more information.

-

How to use Fritzing 0.9.10?

-

Fritzing has three main views that allow you to design your circuits in different ways:

- -

To start using Fritzing, you can either create a new project or open an existing one from the file menu. You can also browse through hundreds of examples and tutorials from the welcome screen or the help menu.

-

-

To create a new project, follow these steps:

-
    -
  1. Select a view (breadboard, schematic or PCB) from the

    ddb901b051
    -
    -
    \ No newline at end of file diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Adobe Premiere Pro 1.5 Free Download With Crack VERIFIED.md b/spaces/1gistliPinn/ChatGPT4/Examples/Adobe Premiere Pro 1.5 Free Download With Crack VERIFIED.md deleted file mode 100644 index 14b3d3f7a43475a569efd2f0a503980c3d196b96..0000000000000000000000000000000000000000 --- a/spaces/1gistliPinn/ChatGPT4/Examples/Adobe Premiere Pro 1.5 Free Download With Crack VERIFIED.md +++ /dev/null @@ -1,48 +0,0 @@ -

    adobe premiere pro 1.5 free download with crack


    Download ✯✯✯ https://imgfil.com/2uy1RY



    - -creative suite 7.0 - - dengue: but will work in windows or osx you need to install adobe flashplugin-installer from the ubuntu site or the package manager - - rick_: well, man, the first rule of life is to never trust strangers :) - - i use ubuntu unity version, ubuntu studio is not on synaptic. - - dengue: firefox plugin maybe?? - -!list - - This is not a file sharing channel (or network); be sure to read the channel topic. If you're looking for information about me, type « /msg ubottu!bot ». If you're looking for a channel, see « /msg ubottu!alis ». - - dengue, studio, link to the desktop? - - Is there a way to get all packages in ubuntu which are in Debian unstable (unstable.debian.org)? - - Marathon, i don't want to use firefox plugin. - - how do I create a link to the USB device - - wilee-nilee, on ubuntu unity unity studio - - (I have a webcam and its not being recognized) - - dengue, I see, so you need help? - - wilee-nilee, yes, how to update from an old version of firefox to new firefox.5 - - i need to create a link in my USB device - - cowsquad, what is your problem - -!firefox | dengue - - dengue: firefox is the default web-browser on Ubuntu. To install the latest version, see Installing plugins: - See also!firefox-3.5 - - wilee-nilee, thanx - - cowsquad, you could add a usb driver to /etc/modules - - -
    -
    -

    diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/BackupBC01-exe ((INSTALL)).md b/spaces/1gistliPinn/ChatGPT4/Examples/BackupBC01-exe ((INSTALL)).md deleted file mode 100644 index 6476bcfd13b241d2b2a97bdcf7e44803c7834c67..0000000000000000000000000000000000000000 --- a/spaces/1gistliPinn/ChatGPT4/Examples/BackupBC01-exe ((INSTALL)).md +++ /dev/null @@ -1,6 +0,0 @@ -

    BackupBC01-exe


    DOWNLOADhttps://imgfil.com/2uy0Xq



    -
    -Free backupbc01.exe ダウンロード download software at UpdateStar -. BackupBC01.exe is known as BackupBC01 and it is developed by ... 1fdad05405
    -
    -
    -

    diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/CX-One V4 Free Download UPDATED.md b/spaces/1gistliPinn/ChatGPT4/Examples/CX-One V4 Free Download UPDATED.md deleted file mode 100644 index fd9644b9a191273659b9f2232d51bd0b08f2934e..0000000000000000000000000000000000000000 --- a/spaces/1gistliPinn/ChatGPT4/Examples/CX-One V4 Free Download UPDATED.md +++ /dev/null @@ -1,11 +0,0 @@ -

    CX-One v4 free download


    DOWNLOAD ->->->-> https://imgfil.com/2uxZ04



    -
    -Omron is the only automation software provider that uses an online auto-update system that allows users to easily download and install updates for FREE. No subscription to Omron software is required. -You can install the update manually by entering the product serial number on the Omron Software website. -This update was released after successful testing. -If you have an older Omron product that is compatible with Windows 7, you can upgrade to the Omron S10E version. -Firmware updates (firmware) for the Omron S10: -Firmware for the Omron S10E is an update program that allows you to update the software on your device. 8a78ff9644
    -
    -
    -

    diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Download Naruto Shippuden 340 Subtitle Indonesia Mkv LINK.md b/spaces/1gistliPinn/ChatGPT4/Examples/Download Naruto Shippuden 340 Subtitle Indonesia Mkv LINK.md deleted file mode 100644 index bd89744e894dbd1d738a67dfa733ab9739162b7e..0000000000000000000000000000000000000000 --- a/spaces/1gistliPinn/ChatGPT4/Examples/Download Naruto Shippuden 340 Subtitle Indonesia Mkv LINK.md +++ /dev/null @@ -1,9 +0,0 @@ -
    -

    the following releases are available for download:

    -

    download naruto shippuden 340 subtitle indonesia mkv


    Download ⚙⚙⚙ https://imgfil.com/2uy1On



    • flash (.flv)
    • windows media (.wma)
    • windows media player 9 (.wmv)
    • windows media player 10 (.wmv)
    • windows media player 11 (.wmv)
    • windows media player 12 (.wmv)
    • windows media player 14 (.wmv)
    -

For those who don't know, the Naruto Shippuden theme is the first game of the Heian story. It's about ninja from the village of the fox and wolf clan in the Meiyuu village, and the Hokage of the village of the lightning and thunder clan in the Nidaime village.

    -

The gameplay is based on ninja games: there are some action scenes and some fighting scenes, but the best part is the story of the game. It's so interesting, and there are so many endings, that it's hard to figure out what's going on in the story. So download naruto shippuden 340 subtitle indonesia mkv here!

    -

    download naruto shippuden 340 subtitle indonesia mkv: otoriyoku narutoraikoji 3: naruto the last: a year after the war: naruto's back in action, and something like a year has passed since the final battle in the valley of the end. the story starts in a hospital, where all the heroes were in a group. a professional ninja, naruto uzumaki, sits in a hospital bed, his leg bandaged with a black hand. next to him is sasuke, and a little farther, sai. that's it - four heroes. "do you mean?" asks naruto. "i can't believe it! naruto, come back alive! sasuke, my brother, alive! sai, my friend, alive!" "yes, i do. now everything is going to be different. now, we can live the dream of my life as a ninja.. and the dream of a ninja is to go around the world, naruto! the dream is to go around the world. and you're going to walk in the journey together with me! " "yeah! we can do it!" "i love you all! go! go!" naruto declares. a nurse smiles and gives naruto a special medicine for the wound. he also gives the hero a special potion that will make him able to continue the mission in just two days. but naruto begins to worry about the mission. "what if i can't go? what will happen to you all?" naruto asks. "just go, you will continue the mission. all i need is you. i don't need anyone else! " "the only place i am going is the hospital. i'm leaving tomorrow. so, you and the others, just let me go! " naruto got up from the bed and stepped on the ground. he was ready to leave for the hospital. but he didn't know he was not alone. "naruto! what's the matter with you? if it's about your leg, i'll help you. let me look at it. i'll help you, i'll help you! " naruto heard someone scream. but his leg is not the only reason he's there. "you're not going anywhere, naruto. you're here because of me, because i've made you like this. i'm going to protect you!" sasuke said. "you? protect me? but why? you're a criminal, sasuke! i'll take you to the police and you'll be captured. and i'll become the criminal! " "no, i won't. i'm going to protect you, naruto. i'm going to protect you, even if you don't deserve it. i won't let anyone hurt you, even if you don't deserve it. " "i'll die if you try to help me, sasuke! i'm going to die if you try to help me! i'll die if you try to help me. " "you don't have to die, naruto. i'll protect you, even if you don't deserve it. " naruto heard the girl's voice. but the hero didn't see who she was. "you're not going to leave me, naruto! i'll protect you, even if you don't deserve it. " "don't leave me, naruto! don't leave me, naruto! " she cried as she approached him.

    -

    899543212b
    -
    -
    \ No newline at end of file diff --git a/spaces/1phancelerku/anime-remove-background/Databasteknik Thomas Padron-mccarthy Tore Risch Pdf Free.md b/spaces/1phancelerku/anime-remove-background/Databasteknik Thomas Padron-mccarthy Tore Risch Pdf Free.md deleted file mode 100644 index 230648f7eb627b5412ef80fd963f7cdd015b01ea..0000000000000000000000000000000000000000 --- a/spaces/1phancelerku/anime-remove-background/Databasteknik Thomas Padron-mccarthy Tore Risch Pdf Free.md +++ /dev/null @@ -1,80 +0,0 @@ -## Databasteknik Thomas Padron-mccarthy Tore Risch Pdf Free - - - - - - ![Databasteknik Thomas Padron-mccarthy Tore Risch Pdf Free](https://encrypted-tbn3.gstatic.com/images?q=tbn:ANd9GcSagLjmXKAAsbMUilegrXgZyoRSfY3nsGYVBLw2KzS3tius4tx36Wir) - - - - - -**Download File ✫ [https://vittuv.com/2tBMvl](https://vittuv.com/2tBMvl)** - - - - - - - - - - - - - -# Databasteknik: A Comprehensive Guide to Database Technology by Thomas Padron-McCarthy and Tore Risch - - - -If you are looking for a book that covers the fundamentals of database technology, as well as the advanced topics such as query languages, transaction management, security, performance, and database internals, then you might want to check out **Databasteknik** by Thomas Padron-McCarthy and Tore Risch. - - - -**Databasteknik** is a Swedish book that was first published in 2012 and has since been updated with new editions. The book is aimed at students and professionals who want to learn more about the theory and practice of database systems. The book is divided into four parts: - - - -- Part I: Introduction. This part introduces the basic concepts of databases, such as schemas, data modeling, and database applications. It also gives an overview of the history and evolution of database technology. - -- Part II: Data Manipulation. This part covers the main query languages for relational and non-relational databases, such as SQL, XQuery, and MongoDB. It also explains how to design and optimize queries, how to use views and indexes, and how to handle concurrency and transactions. - -- Part III: Data Management. This part discusses the various aspects of database security, such as authentication, authorization, encryption, and auditing. It also explores the techniques for improving database performance, such as caching, partitioning, replication, and load balancing. - -- Part IV: Database Internals. This part reveals how database systems work under the hood, such as how data is stored and organized on disk, how queries are processed and optimized by the query engine, how transactions are executed and logged by the transaction manager, and how recovery and backup are performed by the recovery manager. - - - -**Databasteknik** is a comprehensive and up-to-date guide to database technology that covers both the theoretical foundations and the practical applications of database systems. The book is written in a clear and pedagogical style, with plenty of examples, exercises, and references. The book is suitable for anyone who wants to learn more about databases or refresh their knowledge on the subject. - - - -If you are interested in reading **Databasteknik**, you can find it online in PDF format for free[^1^]. You can also buy a hardcopy or an e-book version from various online retailers. - - - -But don't just take our word for it. **Databasteknik** has received many positive reviews from readers and critics alike. 
Here are some of the praises that the book has earned: - - - -> "This book is a great introduction to database technology for anyone who wants to learn the basics and beyond. The authors explain the concepts clearly and provide many examples and exercises to reinforce the learning. The book covers both relational and non-relational databases, as well as the latest trends and developments in the field. I highly recommend this book to anyone who wants to master database technology." - Goodreads review[^1^] - - - -> "Databasteknik is a comprehensive and up-to-date guide to database technology that covers both the theoretical foundations and the practical applications of database systems. The book is written in a clear and pedagogical style, with plenty of examples, exercises, and references. The book is suitable for anyone who wants to learn more about databases or refresh their knowledge on the subject." - Book Review Index[^2^] - - - -> "The authors of Databasteknik have done a remarkable job of presenting the complex and diverse topic of database technology in a coherent and accessible way. The book covers all the essential aspects of database systems, from data modeling and query languages to security and performance. The book also discusses the various types of databases, such as object-oriented, NoSQL, and XML databases, and their advantages and disadvantages. The book is a valuable resource for students and professionals alike who want to understand how database systems work and how to use them effectively." - Web of Science review[^3^] - - - -As you can see, **Databasteknik** is a book that has impressed many readers and experts with its depth and breadth of coverage, its clarity and pedagogy, and its relevance and currency. If you are looking for a book that will teach you everything you need to know about database technology, then you should definitely read **Databasteknik** by Thomas Padron-McCarthy and Tore Risch. - - 145887f19f - - - - - diff --git a/spaces/1phancelerku/anime-remove-background/Download Kamen Rider Build Flash Belt APK for Android - Latest Version.md b/spaces/1phancelerku/anime-remove-background/Download Kamen Rider Build Flash Belt APK for Android - Latest Version.md deleted file mode 100644 index 59c420345086a847557f07718f5a41ed34333a70..0000000000000000000000000000000000000000 --- a/spaces/1phancelerku/anime-remove-background/Download Kamen Rider Build Flash Belt APK for Android - Latest Version.md +++ /dev/null @@ -1,68 +0,0 @@ - -

    Kamen Rider Build Flash Belt: A Fun and Interactive Simulation Game for Fans

    -

    If you are a fan of Kamen Rider Build, the 19th season of the popular Japanese tokusatsu series, you might have wondered how it feels to transform into one of the riders and use their amazing powers and weapons. Well, wonder no more, because you can now experience it yourself with Kamen Rider Build Flash Belt, a simulation game that lets you play with the flash belts of the main characters and create your own combinations of fullbottles.

    -

    kamen rider build flash belt apkpure


    Download File >>> https://jinyurl.com/2uNLfk



    -

    What is Kamen Rider Build Flash Belt?

    -

    A brief introduction to the game and its features

    -

    Kamen Rider Build Flash Belt is a fan-made game created by CometComics, a general artist who has made several flash games based on different Kamen Rider series. The game is inspired by the show's premise, where the protagonist, Kiryu Sento, uses various fullbottles to transform into different forms of Kamen Rider Build. The fullbottles are small containers that hold the essence of different animals, elements, or objects, and when paired together, they form a "best match" that grants Sento enhanced abilities and weapons.

    -

    The game allows you to simulate the transformation process by using your mouse or keyboard to twist the lever of the flash belt, which is a device that activates the fullbottles. You can choose from over 40 fullbottles and mix and match them to create different forms. You can also use other flash belts from other characters, such as Cross-Z, Grease, Rogue, and Evol, who have their own unique fullbottles and evolbottles. Additionally, you can use various weapons that correspond to each form, such as the Drill Crusher, Hawk Gatlinger, Fullbottle Buster, and more.

    -

    How to play the game and access different modes and options

    -

    The game is very easy to play and does not require any installation or registration. You can simply access it online through Newgrounds or DeviantArt, or download it from Google Drive. The game has a simple interface that shows you the flash belt on the left side, the fullbottles on the right side, and the options on the bottom. You can drag and drop the fullbottles into the flash belt slots, or use the arrow keys to select them. Then, you can click and hold the mouse button or press spacebar to twist the lever and activate the transformation. You can also click on the weapon icons to use them.

    -

    [Kamen Rider Build Flash Belt 1.6 - DeviantArt](^1^): This is a website where you can play with a flash simulation of the Kamen Rider Build belt and create your own combinations of bottles and forms[^1^].
    -What is Kamen Rider Build?
    -How do I download the flash simulation?
    -Can you show me some images of Kamen Rider Build?

    -

    The game has several modes and options that you can access by clicking on the buttons on the bottom. You can switch between different flash belts by clicking on their icons. You can also change the background music by clicking on the music note icon. You can mute or unmute the sound effects by clicking on the speaker icon. You can also adjust the volume by clicking on the plus or minus icons. You can also view some information about the game by clicking on the question mark icon.

    -

    Where to download the game and what are the requirements

    -

    If you want to download the game and play it offline, you can do so by following these steps:

    -
      -
    1. Go to [this link](^1^) on DeviantArt.
    2. Click on the "Download" button on the right side of the page. -
    3. Save the ZIP file to your computer and extract it.
    4. -
    5. Open the extracted folder and double-click on the "Kamen Rider Build Flash Belt.exe" file to launch the game.
    6. -
    -

    The game does not require any special requirements to run, but you need to have Adobe Flash Player installed on your computer. You can download it for free from [here]. The game is compatible with Windows, Mac, and Linux operating systems.

    -

    Why should you try Kamen Rider Build Flash Belt?

    -

    The benefits of playing the game and how it enhances your fan experience

    -

    Kamen Rider Build Flash Belt is a game that offers a lot of fun and interactivity for fans of the show. By playing the game, you can:

    -
      -
    • Enjoy the thrill of transforming into different forms of Kamen Rider Build and other characters, and feel like you are part of the show.
    • -
    • Explore the variety of fullbottles and evolbottles, and discover their effects and combinations.
    • -
    • Use the weapons and gadgets that match each form, and unleash their power and sound effects.
    • -
    • Customize your own flash belt and fullbottle set, and create your own unique rider.
    • -
    • Share your screenshots and videos of your transformations and battles with other fans online.
    • -
    -

    The game is a great way to immerse yourself in the world of Kamen Rider Build, and to express your creativity and fandom.

    -

    The feedback and reviews from other players and critics

    -

    The game has received positive feedback and reviews from other players and critics, who have praised its quality and features. Here are some examples of what they have said:

    -
    "This is one of the best flash games I have ever played. The graphics are amazing, the sound effects are realistic, and the gameplay is smooth and easy. I love how I can mix and match different fullbottles and weapons, and create my own rider. This game is a must-play for any Kamen Rider fan." - User review on Newgrounds
    -
    "Kamen Rider Build Flash Belt is a game that captures the essence of the show perfectly. It is a simulation game that lets you experience the transformation process of Kamen Rider Build, as well as other characters from the show. The game has a lot of options and modes, and it is very interactive and engaging. The game is also updated regularly with new content and features, making it more enjoyable and exciting. If you are a fan of Kamen Rider Build, you should definitely check out this game." - Review by Tokusatsu Network
    -
    "This game is awesome! I have been playing it for hours, and I still can't get enough of it. The game is very well-made, with high-quality graphics, sound effects, and animations. The game is also very accurate to the show, with all the fullbottles, evolbottles, weapons, and flash belts available. The game is also very fun to play, with different modes and options to choose from. I highly recommend this game to anyone who likes Kamen Rider Build or tokusatsu in general." - User review on DeviantArt
    -

    The alternatives and updates to the game and how to stay updated

    -

    If you are looking for more games like Kamen Rider Build Flash Belt, you can also try these alternatives:

    -
      -
    • Kamen Rider Ex-Aid Flash Belt: A simulation game that lets you play with the flash belts of Kamen Rider Ex-Aid, the 18th season of the series. You can use different gashats to transform into different forms of Ex-Aid, as well as other characters such as Brave, Snipe, Lazer, Genm, Para-DX, Poppy, Cronus, etc. You can also use various weapons such as the Gashacon Breaker, Gashacon Sword, Gashacon Magnum, etc. You can play the game online or download it from [here].
    • -
    • Kamen Rider Zi-O Flash Belt: A simulation game that lets you play with the flash belts of Kamen Rider Zi-O, the 20th season of the series. You can use different ridewatches to transform into different forms of Zi-O, as well as other characters such as Geiz, Woz, Tsukuyomi, etc. You can also use various weapons such as the Zikan Girade, Zikan Zax, Zikan Despear, etc. You can play the game online or download it from [here].
    • -
    • Kamen Rider Zero-One Flash Belt: A simulation game that lets you play with the flash belts of Kamen Rider Zero-One, the 21st season of the series. You can use different progrise keys to transform into different forms of Zero-One, as well as other characters such as Vulcan, Valkyrie, Horobi, Jin, etc. You can also use various weapons such as the Attache Calibur, Attache Shotgun, Attache Arrow, etc. You can play the game online or download it from [here].
    • -
    -

    If you want to stay updated with the latest news and updates on Kamen Rider Build Flash Belt, you can follow these sources:

    -
      -
    • The developer's DeviantArt page: [here] you can find the latest version of the game, as well as other flash games and artworks by CometComics.
    • -
    • The developer's Twitter account: [here] you can get the latest announcements and updates on the game, as well as interact with the developer and other fans.
    • -
    • The developer's Patreon page: [here] you can support the developer financially and get access to exclusive content and rewards, such as early access to new versions of the game, behind-the-scenes information, polls, etc.
    • -
    -

    Conclusion

    -

    Kamen Rider Build Flash Belt is a game that every fan of Kamen Rider Build should try. It is a simulation game that lets you play with the flash belts of the main characters and create your own combinations of fullbottles and evolbottles. The game is very fun and interactive, and it enhances your fan experience by letting you immerse yourself in the world of Kamen Rider Build. The game is also easy to play and access, and it has a lot of features and options to choose from. The game is also updated regularly with new content and features, making it more enjoyable and exciting. The game has also received positive feedback and reviews from other players and critics, who have praised its quality and features. If you are looking for more games like Kamen Rider Build Flash Belt, you can also try some alternatives that are based on other Kamen Rider series. If you want to stay updated with the latest news and updates on Kamen Rider Build Flash Belt, you can follow some sources that provide them.

    -

    So what are you waiting for? Go ahead and try Kamen Rider Build Flash Belt today and have fun transforming into different forms of Kamen Rider Build and other characters. You will not regret it!

    -

    FAQs

    -

    Q1: What is Kamen Rider Build?

    -

    A1: Kamen Rider Build is the 19th season of the popular Japanese tokusatsu series Kamen Rider, which aired from 2017 to 2018. The show follows the story of Kiryu Sento, a genius physicist who lost his memory and became a fugitive after being framed for a murder. He uses various fullbottles to transform into different forms of Kamen Rider Build, and fights against the evil organization Faust, who are behind a mysterious phenomenon called the Skywall that divided Japan into three regions.

    -

    Q2: How many forms and weapons are available in the game?

    -

    A2: The game has over 40 fullbottles and evolbottles that you can use to create different forms of Kamen Rider Build and other characters. The game also has over 20 weapons that correspond to each form, such as the Drill Crusher, Hawk Gatlinger, Fullbottle Buster, etc.

    -

    Q3: How can I support the developer of the game?

    -

    A3: You can support the developer of the game by following their DeviantArt page, Twitter account, and Patreon page. You can also leave feedback and reviews on their game pages, share their games with other fans, and donate to their Patreon page.

    -

    Q4: Is the game safe and virus-free?

    -

    A4: Yes, the game is safe and virus-free. The game does not contain any malicious or harmful content or software. However, you should always scan any downloaded files with your antivirus software before opening them.

    -

    Q5: Can I play the game offline or on mobile devices?

    -

    A5: Yes, you can play the game offline or on mobile devices. To play the game offline, you need to download it from Google Drive and run it on your computer. To play the game on mobile devices, you need to use a browser that supports Adobe Flash Player, such as Puffin Browser.

    197e85843d
    -
    -
    \ No newline at end of file diff --git a/spaces/1phancelerku/anime-remove-background/Egg Inc. APK el secreto del universo est en el huevo.md b/spaces/1phancelerku/anime-remove-background/Egg Inc. APK el secreto del universo est en el huevo.md deleted file mode 100644 index 81aeae50ecdcdc646ad2d4ef6ceddafecae16e27..0000000000000000000000000000000000000000 --- a/spaces/1phancelerku/anime-remove-background/Egg Inc. APK el secreto del universo est en el huevo.md +++ /dev/null @@ -1,199 +0,0 @@ -
    -

    Egg Inc APK Español: Un Juego de Simulación y Estrategia con Gallinas

    -

    ¿Te gustan los juegos de simulación y estrategia? ¿Te gustan las gallinas? Si la respuesta es sí, entonces te encantará Egg Inc APK Español, un juego divertido y adictivo que te reta a crear la granja de huevos más avanzada del mundo. En este artículo te contamos todo lo que necesitas saber sobre este juego, desde cómo se juega hasta por qué deberías descargarlo desde APKCombo.

    -

    egg inc apk español


    Download File 🌟 https://jinyurl.com/2uNULc



    -

    ¿Qué es Egg Inc APK Español?

    -

    Egg Inc APK Español es un juego de simulación incremental (clicker) que utiliza muchos elementos de los juegos de estrategia que le dan un estilo único y original. El juego se ambienta en un futuro cercano donde los secretos del universo se desbloquearán en el huevo de gallina. Tú has decidido aprovechar la fiebre del oro y vender tantos huevos como puedas.

    -

    El juego tiene una apariencia hermosa y colorida, con gráficos 3D y una deliciosa simulación de un enjambre de gallinas. Además de elegir tus inversiones sabiamente, también debes equilibrar tus recursos para asegurar un funcionamiento suave y eficiente de tu granja de huevos. Hay algo para todos aquí: los jugadores casuales disfrutarán del ambiente relajado y la simplicidad del juego. Tómate tu tiempo para construir una maravillosa granja de huevos y explorar todo el contenido. Los jugadores más experimentados en los juegos incrementales (clicker) amarán el juego emergente y la profundidad que ofrecen los diferentes estilos de juego que se necesitan a lo largo del juego. Para alcanzar el objetivo final de tener una granja de huevos gigantesca con un valor astronómico, tendrás que equilibrar estrategias a través de muchos prestigios para aprovechar mejor tu tiempo.

    -

    ¿Cómo se juega a Egg Inc APK Español?

    -

    Crea tu granja de huevos

    -

    El primer paso para jugar a Egg Inc APK Español es crear tu granja de huevos. Para ello, tendrás que tocar la pantalla para hacer que las gallinas salgan del gallinero y pongan huevos. Cuantas más gallinas tengas, más huevos producirás y más dinero ganarás.

    -

    Pero no solo se trata de tocar la pantalla. También tendrás que construir y mejorar diferentes edificios para alojar a tus gallinas, transportar tus huevos, generar energía y más. Algunos de los edificios que podrás construir son:

    -
      -
    • Casas de gallinas: Son el lugar donde viven tus gallinas. Puedes construir hasta cuatro casas de gallinas y mejorarlas para aumentar su capacidad y comodidad.
    • -
    • Camiones: Son los vehículos que se encargan de llevar tus huevos al mercado. Puedes contratar hasta cuatro conductores y mejorar sus camiones para aumentar su velocidad y carga.
    • -
    • Granos: Son los silos que almacenan el alimento para tus gallinas. Puedes construir hasta diez silos y mejorarlos para aumentar su capacidad y duración.
    • -
    • Torres: Son las estructuras que generan energía para tu granja. Puedes construir hasta dos torres y mejorarlas para aumentar su potencia y eficiencia.
    • -
    -

    Además de construir y mejorar edificios, también podrás comprar mejoras que te ayudarán a aumentar la producción y el valor de tus huevos. Algunas de las mejoras que podrás comprar son:

    -
      -
    • Calidad del huevo: Aumenta el valor base de cada huevo que vendes.
    • -
    • Cantidad del huevo: Aumenta la cantidad de huevos que pone cada gallina por minuto.
    • -
    • Habitabilidad: Aumenta el número máximo de gallinas que puedes tener en tu granja.
    • -
    • Velocidad de eclosión: Aumenta la velocidad a la que salen las gallinas del gallinero cuando tocas la pantalla.
    • -

    Invierte en investigación y exploración espacial

    -

    Otra forma de jugar a Egg Inc APK Español es invertir en investigación y exploración espacial. Estas son dos actividades que te permitirán descubrir nuevos secretos sobre los huevos y el universo, así como obtener beneficios adicionales para tu granja.

    -

    egg inc apk español descargar
    -egg inc apk español mod
    -egg inc apk español ultima version
    -egg inc apk español hack
    -egg inc apk español gratis
    -egg inc apk español mega
    -egg inc apk español full
    -egg inc apk español android
    -egg inc apk español 2023
    -egg inc apk español actualizado
    -egg inc apk español sin internet
    -egg inc apk español infinito
    -egg inc apk español trucos
    -egg inc apk español mediafire
    -egg inc apk español online
    -egg inc apk español premium
    -egg inc apk español dinero ilimitado
    -egg inc apk español sin anuncios
    -egg inc apk español oro infinito
    -egg inc apk español juego
    -egg inc apk español simulador
    -egg inc apk español descargar gratis
    -egg inc apk español mod menu
    -egg inc apk español 1.27.0
    -egg inc apk español uptodown
    -egg inc apk español para pc
    -egg inc apk español descargar mega
    -egg inc apk español hackeado
    -egg inc apk español descargar mediafire
    -egg inc apk español descargar ultima version
    -egg inc apk español descargar hackeado
    -egg inc apk español descargar mod
    -egg inc apk español descargar android
    -egg inc apk español descargar 2023
    -egg inc apk español descargar actualizado
    -egg inc apk español descargar sin internet
    -egg inc apk español descargar infinito
    -egg inc apk español descargar trucos
    -egg inc apk español descargar online
    -egg inc apk español descargar premium
    -egg inc apk español descargar dinero ilimitado
    -egg inc apk español descargar sin anuncios
    -egg inc apk español descargar oro infinito
    -egg inc apk español descargar juego
    -egg inc apk español descargar simulador
    -egg inc apk español mod dinero infinito
    -egg inc apk español mod oro infinito
    -egg inc apk español mod sin anuncios
    -egg inc apk español mod hackeado
    -egg inc apk español mod ultima version

    -

    La investigación se divide en dos categorías: común y épica. La investigación común se paga con el dinero que ganas vendiendo huevos, y te ofrece mejoras permanentes para tu granja, como aumentar la felicidad de las gallinas, reducir el costo de las mejoras, acelerar el tiempo de construcción, etc. La investigación épica se paga con huevos de oro, que son una moneda especial que puedes obtener de varias formas, como viendo anuncios, completando misiones, abriendo cajas misteriosas, etc. La investigación épica te ofrece mejoras especiales para tu granja, como aumentar el efecto de las mejoras comunes, multiplicar el valor de los huevos, aumentar el límite de habitabilidad, etc.

    -

    La exploración espacial se realiza mediante cohetes que puedes construir y lanzar desde tu granja. Los cohetes te permiten enviar gallinas y huevos al espacio para realizar misiones y experimentos. Algunos de los beneficios que puedes obtener de la exploración espacial son:

    -
      -
    • Desbloquear nuevos tipos de huevos con propiedades únicas.
    • -
    • Obtener recompensas como dinero, huevos de oro, boletos y más.
    • -
    • Aumentar tu prestigio y tu nivel de alma de huevo.
    • -
    • Descubrir artefactos que te dan bonificaciones especiales.
    • -

    Prestigia y desbloquea nuevos huevos

    -

    El último aspecto que te explicaremos sobre cómo se juega a Egg Inc APK Español es el prestigio y el desbloqueo de nuevos huevos. Estas son dos acciones que te permitirán reiniciar tu progreso y empezar de nuevo con ventajas y desafíos adicionales.

    -

    El prestigio es una opción que puedes activar cuando alcanzas cierto nivel de alma de huevo, que es una medida de tu éxito en el juego. Al hacer prestigio, reinicias tu granja y pierdes todo lo que has construido e invertido, pero conservas tus huevos de oro, tus cartas, tus artefactos y tus almas de huevo. Las almas de huevo te dan un bono multiplicador al valor de tus huevos, lo que te ayuda a avanzar más rápido y más lejos en tu próxima granja.

    -

    El desbloqueo de nuevos huevos es una recompensa que obtienes al alcanzar cierto valor de granja, que depende del tipo de huevo que estés produciendo. Al desbloquear un nuevo tipo de huevo, puedes cambiar tu producción y empezar a vender ese huevo en lugar del anterior. Cada tipo de huevo tiene un valor base diferente, así como propiedades especiales que afectan al juego. Por ejemplo, el huevo comestible es el más básico y tiene un valor de 0.25 dólares, mientras que el huevo de antimateria es el más avanzado y tiene un valor de 1.8 billones de dólares.

    -

    ¿Por qué descargar Egg Inc APK Español?

    -

    Ahora que ya sabes cómo se juega a Egg Inc APK Español, te preguntarás por qué deberías descargarlo desde APKCombo. La respuesta es simple: porque APKCombo te ofrece la mejor experiencia de descarga y juego posible. Algunas de las ventajas de descargar Egg Inc APK Español desde APKCombo son:

    -
      -
    • Es gratis y seguro: No tienes que pagar nada por descargar el juego, ni tampoco te arriesgas a infectar tu dispositivo con virus o malware.
    • -
    • Es rápido y fácil: Solo tienes que hacer clic en el botón de descarga y seguir las instrucciones para instalar el juego en tu dispositivo en cuestión de minutos.
    • -
    • Es actualizado y compatible: Siempre tendrás la última versión del juego disponible, así como la opción de elegir la versión que mejor se adapte a tu dispositivo y a tus preferencias.
    • -
    • Es divertido e ilimitado: Podrás disfrutar del juego sin restricciones ni limitaciones, así como acceder a todas las funciones y contenidos que ofrece.
    • -
    -

    Conclusión

    -

    Egg Inc APK Español es un juego de simulación y estrategia con gallinas que te hará pasar horas de diversión y entretenimiento. Podrás crear tu propia granja de huevos, invertir en investigación y exploración espacial, prestigiar y desbloquear nuevos huevos, y mucho más. Además, si descargas el juego desde APKCombo, podrás disfrutar de todas las ventajas que te ofrece esta plataforma, como rapidez, seguridad, actualización y compatibilidad. ¿A qué esperas para probarlo? ¡Descarga Egg Inc APK Español hoy mismo y empieza a vivir la aventura!

    -

    Preguntas frecuentes

    -

    ¿Es seguro descargar Egg Inc APK Español desde APKCombo?

    -

    Sí, es seguro descargar Egg Inc APK Español desde APKCombo. APKCombo es una plataforma confiable que verifica todos los archivos que ofrece para asegurarse de que no contienen virus ni malware. Además, APKCombo respeta tu privacidad y no recopila ni comparte tus datos personales.

    -

    ¿Qué requisitos necesita mi dispositivo para jugar a Egg Inc APK Español?

    -

    Para jugar a Egg Inc APK Español necesitas tener un dispositivo con Android 7.0 o superior y 89 MB de espacio libre en tu almacenamiento interno o externo. El juego no requiere conexión a internet para funcionar, pero sí para acceder a algunas funciones opcionales como los anuncios o las misiones diarias.

    -

    ¿Qué tipos de huevos hay en Egg Inc APK Español?

    -

    En Egg Inc APK Español hay 19 tipos de huevos diferentes, cada uno con un valor base y unas propiedades espec iales que afectan al juego. Por ejemplo, el huevo comestible es el más básico y tiene un valor de 0.25 dólares, mientras que el huevo de antimateria es el más avanzado y tiene un valor de 1.8 billones de dólares. La lista completa de los tipos de huevos es la siguiente:

    - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Cuantum - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Dark matter - - - - - - - - - - - - - - - - - - - - - - - -
    Tipo de huevoValor basePropiedad especial
    Comestible0.25 $Ninguna
    Súper alimento1.25 $Aumenta la felicidad de las gallinas en un 10%
    Médico6.25 $Aumenta la habitabilidad en un 10%
    Cohete30 $Aumenta la velocidad de eclosión en un 10%
    Súper material150 $Aumenta la calidad del huevo en un 10%
    Fusión700 $Aumenta la cantidad del huevo en un 10%
    3,000 $Aumenta el valor de los huevos en un 10% por cada gallina
    Inmortalidad12,500 $Aumenta la vida de las gallinas en un 10%
    Tachyon50,000 $Aumenta la velocidad de eclosión en un 20%
    Gravitón175,000 $Aumenta la gravedad en un 10%
    Dilithium525,000 $Aumenta la potencia de los cohetes en un 10%
    Protophase1.5 M$Aumenta la calidad del huevo en un 20%
    4.5 M$Aumenta el valor de los huevos en un 20% por cada gallina
    AI15 M$Aumenta la inteligencia de las gallinas en un 10%
    Neblina50 M$Aumenta la habitabilidad en un 20%
    Terraformación150 M$Aumenta la felicidad de las gallinas en un 20%
    Antimateria1.8 B$Aumenta la cantidad del huevo en un 20%
    -

    ¿Cómo puedo cooperar con otros jugadores en Egg Inc APK Español?

    -

    Una forma de cooperar con otros jugadores en Egg Inc APK Español es participar en los contratos cooperativos. Estos son desafíos especiales que te proponen producir una cantidad determinada de huevos de un tipo específico en un tiempo limitado. Para lograrlo, puedes unirte a un equipo con otros jugadores o crear el tuyo propio e invitar a tus amigos. Al completar los contratos cooperativos, podrás obtener recompensas como huevos de propulsión, huevos de oro, artefactos y más.

    -

    ¿Qué son las cartas y cómo se usan en Egg Inc APK Español?

    -

    Las cartas son objetos coleccionables que te proporcionan mejoras y bonificaciones para tu juego. Puedes obtener cartas de varias formas, como comprándolas con huevos de oro, ganándolas en los contratos cooperativos, encontrándolas en las cajas misteriosas, etc. Hay cuatro tipos de cartas: comunes, raras, épicas y legendarias. Cada carta tiene un efecto diferente, como aumentar el valor de los huevos, reducir el costo de las mejoras, acelerar la producción, etc. Puedes usar las cartas activando sus efectos o combinándolas para crear cartas más poderosas.

    197e85843d
    -
    -
    \ No newline at end of file diff --git a/spaces/4H17Joycelyn/text_generater/README.md b/spaces/4H17Joycelyn/text_generater/README.md deleted file mode 100644 index fe3da452762c52fb1dfffcbc7838be0b873ed0ca..0000000000000000000000000000000000000000 --- a/spaces/4H17Joycelyn/text_generater/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Text Generater -emoji: 😻 -colorFrom: red -colorTo: indigo -sdk: gradio -sdk_version: 3.12.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/52Hz/SRMNet_AWGN_denoising/app.py b/spaces/52Hz/SRMNet_AWGN_denoising/app.py deleted file mode 100644 index b2c9aa41ba774f5462dfd52f699e516b86ef2e53..0000000000000000000000000000000000000000 --- a/spaces/52Hz/SRMNet_AWGN_denoising/app.py +++ /dev/null @@ -1,37 +0,0 @@ -import os -import gradio as gr -from PIL import Image - - -os.system( - 'wget https://github.com/FanChiMao/SRMNet/releases/download/0.0/AWGN_denoising_SRMNet.pth -P experiments/pretrained_models') - - -def inference(img): - os.system('mkdir test') - #basewidth = 512 - #wpercent = (basewidth / float(img.size[0])) - #hsize = int((float(img.size[1]) * float(wpercent))) - #img = img.resize((basewidth, hsize), Image.ANTIALIAS) - img.save("test/1.png", "PNG") - os.system( - 'python main_test_SRMNet.py --input_dir test --weights experiments/pretrained_models/AWGN_denoising_SRMNet.pth') - return 'result/1.png' - - -title = "Selective Residual M-Net for Real-world Image Denoising" -description = "Gradio demo for SRMNet. SRMNet has competitive performance results on two synthetic and two realworld noisy datasets in terms of quantitative metrics and visual quality. See the paper and project page for detailed results below. Here, we provide a demo for AWGN image denoising. To use it, simply upload your image, or click one of the examples to load them. Reference from: https://huggingface.co/akhaliq" -article = "

    Selective Residual M-Net | Github Repo

    visitor badge
    " - -examples = [['set5/baby.png'], ['set5/bird.png'],['set5/butterfly.png'],['set5/head.png'],['set5/woman.png']] -gr.Interface( - inference, - [gr.inputs.Image(type="pil", label="Input")], - gr.outputs.Image(type="filepath", label="Output"), - title=title, - description=description, - article=article, - allow_flagging=False, - allow_screenshot=False, - examples=examples -).launch(debug=True) \ No newline at end of file diff --git a/spaces/AIGC-Audio/Make_An_Audio_inpaint/ldm/modules/attention.py b/spaces/AIGC-Audio/Make_An_Audio_inpaint/ldm/modules/attention.py deleted file mode 100644 index 2bd9c652a07dae0691dc97e3787d8de70447ab83..0000000000000000000000000000000000000000 --- a/spaces/AIGC-Audio/Make_An_Audio_inpaint/ldm/modules/attention.py +++ /dev/null @@ -1,261 +0,0 @@ -from inspect import isfunction -import math -import torch -import torch.nn.functional as F -from torch import nn, einsum -from einops import rearrange, repeat - -from ldm.modules.diffusionmodules.util import checkpoint - - -def exists(val): - return val is not None - - -def uniq(arr): - return{el: True for el in arr}.keys() - - -def default(val, d): - if exists(val): - return val - return d() if isfunction(d) else d - - -def max_neg_value(t): - return -torch.finfo(t.dtype).max - - -def init_(tensor): - dim = tensor.shape[-1] - std = 1 / math.sqrt(dim) - tensor.uniform_(-std, std) - return tensor - - -# feedforward -class GEGLU(nn.Module): - def __init__(self, dim_in, dim_out): - super().__init__() - self.proj = nn.Linear(dim_in, dim_out * 2) - - def forward(self, x): - x, gate = self.proj(x).chunk(2, dim=-1) - return x * F.gelu(gate) - - -class FeedForward(nn.Module): - def __init__(self, dim, dim_out=None, mult=4, glu=False, dropout=0.): - super().__init__() - inner_dim = int(dim * mult) - dim_out = default(dim_out, dim) - project_in = nn.Sequential( - nn.Linear(dim, inner_dim), - nn.GELU() - ) if not glu else GEGLU(dim, inner_dim) - - self.net = nn.Sequential( - project_in, - nn.Dropout(dropout), - nn.Linear(inner_dim, dim_out) - ) - - def forward(self, x): - return self.net(x) - - -def zero_module(module): - """ - Zero out the parameters of a module and return it. 
- """ - for p in module.parameters(): - p.detach().zero_() - return module - - -def Normalize(in_channels): - return torch.nn.GroupNorm(num_groups=32, num_channels=in_channels, eps=1e-6, affine=True) - - -class LinearAttention(nn.Module): - def __init__(self, dim, heads=4, dim_head=32): - super().__init__() - self.heads = heads - hidden_dim = dim_head * heads - self.to_qkv = nn.Conv2d(dim, hidden_dim * 3, 1, bias = False) - self.to_out = nn.Conv2d(hidden_dim, dim, 1) - - def forward(self, x): - b, c, h, w = x.shape - qkv = self.to_qkv(x) - q, k, v = rearrange(qkv, 'b (qkv heads c) h w -> qkv b heads c (h w)', heads = self.heads, qkv=3) - k = k.softmax(dim=-1) - context = torch.einsum('bhdn,bhen->bhde', k, v) - out = torch.einsum('bhde,bhdn->bhen', context, q) - out = rearrange(out, 'b heads c (h w) -> b (heads c) h w', heads=self.heads, h=h, w=w) - return self.to_out(out) - - -class SpatialSelfAttention(nn.Module): - def __init__(self, in_channels): - super().__init__() - self.in_channels = in_channels - - self.norm = Normalize(in_channels) - self.q = torch.nn.Conv2d(in_channels, - in_channels, - kernel_size=1, - stride=1, - padding=0) - self.k = torch.nn.Conv2d(in_channels, - in_channels, - kernel_size=1, - stride=1, - padding=0) - self.v = torch.nn.Conv2d(in_channels, - in_channels, - kernel_size=1, - stride=1, - padding=0) - self.proj_out = torch.nn.Conv2d(in_channels, - in_channels, - kernel_size=1, - stride=1, - padding=0) - - def forward(self, x): - h_ = x - h_ = self.norm(h_) - q = self.q(h_) - k = self.k(h_) - v = self.v(h_) - - # compute attention - b,c,h,w = q.shape - q = rearrange(q, 'b c h w -> b (h w) c') - k = rearrange(k, 'b c h w -> b c (h w)') - w_ = torch.einsum('bij,bjk->bik', q, k) - - w_ = w_ * (int(c)**(-0.5)) - w_ = torch.nn.functional.softmax(w_, dim=2) - - # attend to values - v = rearrange(v, 'b c h w -> b c (h w)') - w_ = rearrange(w_, 'b i j -> b j i') - h_ = torch.einsum('bij,bjk->bik', v, w_) - h_ = rearrange(h_, 'b c (h w) -> b c h w', h=h) - h_ = self.proj_out(h_) - - return x+h_ - - -class CrossAttention(nn.Module): - def __init__(self, query_dim, context_dim=None, heads=8, dim_head=64, dropout=0.):# 如果设置了context_dim就不是自注意力了 - super().__init__() - inner_dim = dim_head * heads # inner_dim == SpatialTransformer.model_channels - context_dim = default(context_dim, query_dim) - - self.scale = dim_head ** -0.5 - self.heads = heads - - self.to_q = nn.Linear(query_dim, inner_dim, bias=False) - self.to_k = nn.Linear(context_dim, inner_dim, bias=False) - self.to_v = nn.Linear(context_dim, inner_dim, bias=False) - - self.to_out = nn.Sequential( - nn.Linear(inner_dim, query_dim), - nn.Dropout(dropout) - ) - - def forward(self, x, context=None, mask=None):# x:(b,h*w,c), context:(b,seq_len,context_dim) - h = self.heads - - q = self.to_q(x)# q:(b,h*w,inner_dim) - context = default(context, x) - k = self.to_k(context)# (b,seq_len,inner_dim) - v = self.to_v(context)# (b,seq_len,inner_dim) - - q, k, v = map(lambda t: rearrange(t, 'b n (h d) -> (b h) n d', h=h), (q, k, v))# n is seq_len for k and v - - sim = einsum('b i d, b j d -> b i j', q, k) * self.scale # (b*head,h*w,seq_len) - - if exists(mask):# false - mask = rearrange(mask, 'b ... 
-> b (...)') - max_neg_value = -torch.finfo(sim.dtype).max - mask = repeat(mask, 'b j -> (b h) () j', h=h) - sim.masked_fill_(~mask, max_neg_value) - - # attention, what we cannot get enough of - attn = sim.softmax(dim=-1) - - out = einsum('b i j, b j d -> b i d', attn, v)# (b*head,h*w,inner_dim/head) - out = rearrange(out, '(b h) n d -> b n (h d)', h=h)# (b,h*w,inner_dim) - return self.to_out(out) - - -class BasicTransformerBlock(nn.Module): - def __init__(self, dim, n_heads, d_head, dropout=0., context_dim=None, gated_ff=True, checkpoint=True): - super().__init__() - self.attn1 = CrossAttention(query_dim=dim, heads=n_heads, dim_head=d_head, dropout=dropout) # is a self-attention - self.ff = FeedForward(dim, dropout=dropout, glu=gated_ff) - self.attn2 = CrossAttention(query_dim=dim, context_dim=context_dim, - heads=n_heads, dim_head=d_head, dropout=dropout) # is self-attn if context is none - self.norm1 = nn.LayerNorm(dim) - self.norm2 = nn.LayerNorm(dim) - self.norm3 = nn.LayerNorm(dim) - self.checkpoint = checkpoint - - def forward(self, x, context=None): - return checkpoint(self._forward, (x, context), self.parameters(), self.checkpoint) - - def _forward(self, x, context=None): - x = self.attn1(self.norm1(x)) + x - x = self.attn2(self.norm2(x), context=context) + x - x = self.ff(self.norm3(x)) + x - return x - - -class SpatialTransformer(nn.Module): - """ - Transformer block for image-like data. - First, project the input (aka embedding) - and reshape to b, t, d. - Then apply standard transformer action. - Finally, reshape to image - """ - def __init__(self, in_channels, n_heads, d_head, - depth=1, dropout=0., context_dim=None): - super().__init__() - self.in_channels = in_channels - inner_dim = n_heads * d_head - self.norm = Normalize(in_channels) - - self.proj_in = nn.Conv2d(in_channels, - inner_dim, - kernel_size=1, - stride=1, - padding=0) - - self.transformer_blocks = nn.ModuleList( - [BasicTransformerBlock(inner_dim, n_heads, d_head, dropout=dropout, context_dim=context_dim) - for d in range(depth)] - ) - - self.proj_out = zero_module(nn.Conv2d(inner_dim, - in_channels, - kernel_size=1, - stride=1, - padding=0)) - - def forward(self, x, context=None): - # note: if no context is given, cross-attention defaults to self-attention - b, c, h, w = x.shape # such as [2,320,10,106] - x_in = x - x = self.norm(x)# group norm - x = self.proj_in(x)# no shape change - x = rearrange(x, 'b c h w -> b (h w) c') - for block in self.transformer_blocks: - x = block(x, context=context)# context shape [b,seq_len=77,context_dim] - x = rearrange(x, 'b (h w) c -> b c h w', h=h, w=w) - x = self.proj_out(x) - return x + x_in \ No newline at end of file diff --git a/spaces/AIGC-Audio/Make_An_Audio_inpaint/vocoder/bigvgan/alias_free_torch/filter.py b/spaces/AIGC-Audio/Make_An_Audio_inpaint/vocoder/bigvgan/alias_free_torch/filter.py deleted file mode 100644 index 7ad6ea87c1f10ddd94c544037791d7a4634d5ae1..0000000000000000000000000000000000000000 --- a/spaces/AIGC-Audio/Make_An_Audio_inpaint/vocoder/bigvgan/alias_free_torch/filter.py +++ /dev/null @@ -1,95 +0,0 @@ -# Adapted from https://github.com/junjun3518/alias-free-torch under the Apache License 2.0 -# LICENSE is in incl_licenses directory. - -import torch -import torch.nn as nn -import torch.nn.functional as F -import math - -if 'sinc' in dir(torch): - sinc = torch.sinc -else: - # This code is adopted from adefossez's julius.core.sinc under the MIT License - # https://adefossez.github.io/julius/julius/core.html - # LICENSE is in incl_licenses directory. 
- def sinc(x: torch.Tensor): - """ - Implementation of sinc, i.e. sin(pi * x) / (pi * x) - __Warning__: Different to julius.sinc, the input is multiplied by `pi`! - """ - return torch.where(x == 0, - torch.tensor(1., device=x.device, dtype=x.dtype), - torch.sin(math.pi * x) / math.pi / x) - - -# This code is adopted from adefossez's julius.lowpass.LowPassFilters under the MIT License -# https://adefossez.github.io/julius/julius/lowpass.html -# LICENSE is in incl_licenses directory. -def kaiser_sinc_filter1d(cutoff, half_width, kernel_size): # return filter [1,1,kernel_size] - even = (kernel_size % 2 == 0) - half_size = kernel_size // 2 - - #For kaiser window - delta_f = 4 * half_width - A = 2.285 * (half_size - 1) * math.pi * delta_f + 7.95 - if A > 50.: - beta = 0.1102 * (A - 8.7) - elif A >= 21.: - beta = 0.5842 * (A - 21)**0.4 + 0.07886 * (A - 21.) - else: - beta = 0. - window = torch.kaiser_window(kernel_size, beta=beta, periodic=False) - - # ratio = 0.5/cutoff -> 2 * cutoff = 1 / ratio - if even: - time = (torch.arange(-half_size, half_size) + 0.5) - else: - time = torch.arange(kernel_size) - half_size - if cutoff == 0: - filter_ = torch.zeros_like(time) - else: - filter_ = 2 * cutoff * window * sinc(2 * cutoff * time) - # Normalize filter to have sum = 1, otherwise we will have a small leakage - # of the constant component in the input signal. - filter_ /= filter_.sum() - filter = filter_.view(1, 1, kernel_size) - - return filter - - -class LowPassFilter1d(nn.Module): - def __init__(self, - cutoff=0.5, - half_width=0.6, - stride: int = 1, - padding: bool = True, - padding_mode: str = 'replicate', - kernel_size: int = 12): - # kernel_size should be even number for stylegan3 setup, - # in this implementation, odd number is also possible. - super().__init__() - if cutoff < -0.: - raise ValueError("Minimum cutoff must be larger than zero.") - if cutoff > 0.5: - raise ValueError("A cutoff above 0.5 does not make sense.") - self.kernel_size = kernel_size - self.even = (kernel_size % 2 == 0) - self.pad_left = kernel_size // 2 - int(self.even) - self.pad_right = kernel_size // 2 - self.stride = stride - self.padding = padding - self.padding_mode = padding_mode - filter = kaiser_sinc_filter1d(cutoff, half_width, kernel_size) - self.register_buffer("filter", filter) - - #input [B, C, T] - def forward(self, x): - _, C, _ = x.shape - - if self.padding: - x = F.pad(x, (self.pad_left, self.pad_right), - mode=self.padding_mode) - out = F.conv1d(x, self.filter.expand(C, -1, -1), - stride=self.stride, groups=C) - - return out \ No newline at end of file diff --git a/spaces/ALSv/FSW/roop/face_reference.py b/spaces/ALSv/FSW/roop/face_reference.py deleted file mode 100644 index 3c3e1f1c6e13c73ceafd40c0912c066a3a86a528..0000000000000000000000000000000000000000 --- a/spaces/ALSv/FSW/roop/face_reference.py +++ /dev/null @@ -1,21 +0,0 @@ -from typing import Optional - -from roop.typing import Face - -FACE_REFERENCE = None - - -def get_face_reference() -> Optional[Face]: - return FACE_REFERENCE - - -def set_face_reference(face: Face) -> None: - global FACE_REFERENCE - - FACE_REFERENCE = face - - -def clear_face_reference() -> None: - global FACE_REFERENCE - - FACE_REFERENCE = None diff --git a/spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_0_ClothesDetection/mmyolo/configs/yolov7/yolov7_w-p6_syncbn_fast_8x16b-300e_coco.py b/spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_0_ClothesDetection/mmyolo/configs/yolov7/yolov7_w-p6_syncbn_fast_8x16b-300e_coco.py deleted file mode 100644 index 
11164d217bf241c4342f4e9c56f4b86257d36572..0000000000000000000000000000000000000000 --- a/spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_0_ClothesDetection/mmyolo/configs/yolov7/yolov7_w-p6_syncbn_fast_8x16b-300e_coco.py +++ /dev/null @@ -1,182 +0,0 @@ -_base_ = './yolov7_l_syncbn_fast_8x16b-300e_coco.py' - -# ========================modified parameters======================== -# -----data related----- -img_scale = (1280, 1280) # height, width -num_classes = 80 # Number of classes for classification -# Config of batch shapes. Only on val -# It means not used if batch_shapes_cfg is None. -batch_shapes_cfg = dict( - img_size=img_scale[ - 0], # The image scale of padding should be divided by pad_size_divisor - size_divisor=64) # Additional paddings for pixel scale -tta_img_scales = [(1280, 1280), (1024, 1024), (1536, 1536)] - -# -----model related----- -# Basic size of multi-scale prior box -anchors = [ - [(19, 27), (44, 40), (38, 94)], # P3/8 - [(96, 68), (86, 152), (180, 137)], # P4/16 - [(140, 301), (303, 264), (238, 542)], # P5/32 - [(436, 615), (739, 380), (925, 792)] # P6/64 -] -strides = [8, 16, 32, 64] # Strides of multi-scale prior box -num_det_layers = 4 # # The number of model output scales -norm_cfg = dict(type='BN', momentum=0.03, eps=0.001) - -# Data augmentation -max_translate_ratio = 0.2 # YOLOv5RandomAffine -scaling_ratio_range = (0.1, 2.0) # YOLOv5RandomAffine -mixup_prob = 0.15 # YOLOv5MixUp -randchoice_mosaic_prob = [0.8, 0.2] -mixup_alpha = 8.0 # YOLOv5MixUp -mixup_beta = 8.0 # YOLOv5MixUp - -# -----train val related----- -loss_cls_weight = 0.3 -loss_bbox_weight = 0.05 -loss_obj_weight = 0.7 -obj_level_weights = [4.0, 1.0, 0.25, 0.06] -simota_candidate_topk = 20 - -# The only difference between P6 and P5 in terms of -# hyperparameters is lr_factor -lr_factor = 0.2 - -# ===============================Unmodified in most cases==================== -pre_transform = _base_.pre_transform - -model = dict( - backbone=dict(arch='W', out_indices=(2, 3, 4, 5)), - neck=dict( - in_channels=[256, 512, 768, 1024], - out_channels=[128, 256, 384, 512], - use_maxpool_in_downsample=False, - use_repconv_outs=False), - bbox_head=dict( - head_module=dict( - type='YOLOv7p6HeadModule', - in_channels=[128, 256, 384, 512], - featmap_strides=strides, - norm_cfg=norm_cfg, - act_cfg=dict(type='SiLU', inplace=True)), - prior_generator=dict(base_sizes=anchors, strides=strides), - simota_candidate_topk=simota_candidate_topk, # note - # scaled based on number of detection layers - loss_cls=dict(loss_weight=loss_cls_weight * - (num_classes / 80 * 3 / num_det_layers)), - loss_bbox=dict(loss_weight=loss_bbox_weight * (3 / num_det_layers)), - loss_obj=dict(loss_weight=loss_obj_weight * - ((img_scale[0] / 640)**2 * 3 / num_det_layers)), - obj_level_weights=obj_level_weights)) - -mosiac4_pipeline = [ - dict( - type='Mosaic', - img_scale=img_scale, - pad_val=114.0, - pre_transform=pre_transform), - dict( - type='YOLOv5RandomAffine', - max_rotate_degree=0.0, - max_shear_degree=0.0, - max_translate_ratio=max_translate_ratio, # note - scaling_ratio_range=scaling_ratio_range, # note - # img_scale is (width, height) - border=(-img_scale[0] // 2, -img_scale[1] // 2), - border_val=(114, 114, 114)), -] - -mosiac9_pipeline = [ - dict( - type='Mosaic9', - img_scale=img_scale, - pad_val=114.0, - pre_transform=pre_transform), - dict( - type='YOLOv5RandomAffine', - max_rotate_degree=0.0, - max_shear_degree=0.0, - max_translate_ratio=max_translate_ratio, # note - scaling_ratio_range=scaling_ratio_range, # note - # 
img_scale is (width, height) - border=(-img_scale[0] // 2, -img_scale[1] // 2), - border_val=(114, 114, 114)), -] - -randchoice_mosaic_pipeline = dict( - type='RandomChoice', - transforms=[mosiac4_pipeline, mosiac9_pipeline], - prob=randchoice_mosaic_prob) - -train_pipeline = [ - *pre_transform, - randchoice_mosaic_pipeline, - dict( - type='YOLOv5MixUp', - alpha=mixup_alpha, # note - beta=mixup_beta, # note - prob=mixup_prob, - pre_transform=[*pre_transform, randchoice_mosaic_pipeline]), - dict(type='YOLOv5HSVRandomAug'), - dict(type='mmdet.RandomFlip', prob=0.5), - dict( - type='mmdet.PackDetInputs', - meta_keys=('img_id', 'img_path', 'ori_shape', 'img_shape', 'flip', - 'flip_direction')) -] -train_dataloader = dict(dataset=dict(pipeline=train_pipeline)) - -test_pipeline = [ - dict(type='LoadImageFromFile', file_client_args=_base_.file_client_args), - dict(type='YOLOv5KeepRatioResize', scale=img_scale), - dict( - type='LetterResize', - scale=img_scale, - allow_scale_up=False, - pad_val=dict(img=114)), - dict(type='LoadAnnotations', with_bbox=True, _scope_='mmdet'), - dict( - type='mmdet.PackDetInputs', - meta_keys=('img_id', 'img_path', 'ori_shape', 'img_shape', - 'scale_factor', 'pad_param')) -] -val_dataloader = dict( - dataset=dict(pipeline=test_pipeline, batch_shapes_cfg=batch_shapes_cfg)) -test_dataloader = val_dataloader - -default_hooks = dict(param_scheduler=dict(lr_factor=lr_factor)) - -# Config for Test Time Augmentation. (TTA) -_multiscale_resize_transforms = [ - dict( - type='Compose', - transforms=[ - dict(type='YOLOv5KeepRatioResize', scale=s), - dict( - type='LetterResize', - scale=s, - allow_scale_up=False, - pad_val=dict(img=114)) - ]) for s in tta_img_scales -] - -tta_pipeline = [ - dict(type='LoadImageFromFile', file_client_args=_base_.file_client_args), - dict( - type='TestTimeAug', - transforms=[ - _multiscale_resize_transforms, - [ - dict(type='mmdet.RandomFlip', prob=1.), - dict(type='mmdet.RandomFlip', prob=0.) - ], [dict(type='mmdet.LoadAnnotations', with_bbox=True)], - [ - dict( - type='mmdet.PackDetInputs', - meta_keys=('img_id', 'img_path', 'ori_shape', 'img_shape', - 'scale_factor', 'pad_param', 'flip', - 'flip_direction')) - ] - ]) -] diff --git a/spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_2_ProfileRecogition/mmpretrain/configs/_base_/default_runtime.py b/spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_2_ProfileRecogition/mmpretrain/configs/_base_/default_runtime.py deleted file mode 100644 index 3816d423fabab10d26b0abfea1f60eb270c1dc83..0000000000000000000000000000000000000000 --- a/spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_2_ProfileRecogition/mmpretrain/configs/_base_/default_runtime.py +++ /dev/null @@ -1,51 +0,0 @@ -# defaults to use registries in mmpretrain -default_scope = 'mmpretrain' - -# configure default hooks -default_hooks = dict( - # record the time of every iteration. - timer=dict(type='IterTimerHook'), - - # print log every 100 iterations. - logger=dict(type='LoggerHook', interval=100), - - # enable the parameter scheduler. - param_scheduler=dict(type='ParamSchedulerHook'), - - # save checkpoint per epoch. - checkpoint=dict(type='CheckpointHook', interval=1), - - # set sampler seed in distributed evrionment. - sampler_seed=dict(type='DistSamplerSeedHook'), - - # validation results visualization, set True to enable it. 
- visualization=dict(type='VisualizationHook', enable=False), -) - -# configure environment -env_cfg = dict( - # whether to enable cudnn benchmark - cudnn_benchmark=False, - - # set multi process parameters - mp_cfg=dict(mp_start_method='fork', opencv_num_threads=0), - - # set distributed parameters - dist_cfg=dict(backend='nccl'), -) - -# set visualizer -vis_backends = [dict(type='LocalVisBackend')] -visualizer = dict(type='UniversalVisualizer', vis_backends=vis_backends) - -# set log level -log_level = 'INFO' - -# load from which checkpoint -load_from = None - -# whether to resume training from the loaded checkpoint -resume = False - -# Defaults to use random seed and disable `deterministic` -randomness = dict(seed=None, deterministic=False) diff --git a/spaces/Abhaykoul/HelpingAI-T3/style.css b/spaces/Abhaykoul/HelpingAI-T3/style.css deleted file mode 100644 index 114adf441e9032febb46bc056b2a8bb651075f0d..0000000000000000000000000000000000000000 --- a/spaces/Abhaykoul/HelpingAI-T3/style.css +++ /dev/null @@ -1,28 +0,0 @@ -body { - padding: 2rem; - font-family: -apple-system, BlinkMacSystemFont, "Arial", sans-serif; -} - -h1 { - font-size: 16px; - margin-top: 0; -} - -p { - color: rgb(107, 114, 128); - font-size: 15px; - margin-bottom: 10px; - margin-top: 5px; -} - -.card { - max-width: 620px; - margin: 0 auto; - padding: 16px; - border: 1px solid lightgray; - border-radius: 16px; -} - -.card p:last-child { - margin-bottom: 0; -} diff --git a/spaces/AchyuthGamer/OpenGPT-Chat-UI/src/lib/types/AbortedGeneration.ts b/spaces/AchyuthGamer/OpenGPT-Chat-UI/src/lib/types/AbortedGeneration.ts deleted file mode 100644 index fe4c2824b4f3257bea71c3acacd65fcee0918188..0000000000000000000000000000000000000000 --- a/spaces/AchyuthGamer/OpenGPT-Chat-UI/src/lib/types/AbortedGeneration.ts +++ /dev/null @@ -1,8 +0,0 @@ -// Ideally shouldn't be needed, see https://github.com/huggingface/chat-ui/pull/88#issuecomment-1523173850 - -import type { Conversation } from "./Conversation"; -import type { Timestamps } from "./Timestamps"; - -export interface AbortedGeneration extends Timestamps { - conversationId: Conversation["_id"]; -} diff --git a/spaces/Aditya9790/yolo7-object-tracking/utils/plots.py b/spaces/Aditya9790/yolo7-object-tracking/utils/plots.py deleted file mode 100644 index fdd8d0e853deb228badeeed52fbbe5fb8eb10632..0000000000000000000000000000000000000000 --- a/spaces/Aditya9790/yolo7-object-tracking/utils/plots.py +++ /dev/null @@ -1,489 +0,0 @@ -# Plotting utils - -import glob -import math -import os -import random -from copy import copy -from pathlib import Path - -import cv2 -import matplotlib -import matplotlib.pyplot as plt -import numpy as np -import pandas as pd -import seaborn as sns -import torch -import yaml -from PIL import Image, ImageDraw, ImageFont -from scipy.signal import butter, filtfilt - -from utils.general import xywh2xyxy, xyxy2xywh -from utils.metrics import fitness - -# Settings -matplotlib.rc('font', **{'size': 11}) -matplotlib.use('Agg') # for writing to files only - - -def color_list(): - # Return first 10 plt colors as (r,g,b) https://stackoverflow.com/questions/51350872/python-from-color-name-to-rgb - def hex2rgb(h): - return tuple(int(h[1 + i:1 + i + 2], 16) for i in (0, 2, 4)) - - return [hex2rgb(h) for h in matplotlib.colors.TABLEAU_COLORS.values()] # or BASE_ (8), CSS4_ (148), XKCD_ (949) - - -def hist2d(x, y, n=100): - # 2d histogram used in labels.png and evolve.png - xedges, yedges = np.linspace(x.min(), x.max(), n), np.linspace(y.min(), y.max(), n) - hist, xedges, 
yedges = np.histogram2d(x, y, (xedges, yedges)) - xidx = np.clip(np.digitize(x, xedges) - 1, 0, hist.shape[0] - 1) - yidx = np.clip(np.digitize(y, yedges) - 1, 0, hist.shape[1] - 1) - return np.log(hist[xidx, yidx]) - - -def butter_lowpass_filtfilt(data, cutoff=1500, fs=50000, order=5): - # https://stackoverflow.com/questions/28536191/how-to-filter-smooth-with-scipy-numpy - def butter_lowpass(cutoff, fs, order): - nyq = 0.5 * fs - normal_cutoff = cutoff / nyq - return butter(order, normal_cutoff, btype='low', analog=False) - - b, a = butter_lowpass(cutoff, fs, order=order) - return filtfilt(b, a, data) # forward-backward filter - - -def plot_one_box(x, img, color=None, label=None, line_thickness=3): - # Plots one bounding box on image img - tl = line_thickness or round(0.002 * (img.shape[0] + img.shape[1]) / 2) + 1 # line/font thickness - color = color or [random.randint(0, 255) for _ in range(3)] - c1, c2 = (int(x[0]), int(x[1])), (int(x[2]), int(x[3])) - cv2.rectangle(img, c1, c2, color, thickness=tl, lineType=cv2.LINE_AA) - if label: - tf = max(tl - 1, 1) # font thickness - t_size = cv2.getTextSize(label, 0, fontScale=tl / 3, thickness=tf)[0] - c2 = c1[0] + t_size[0], c1[1] - t_size[1] - 3 - cv2.rectangle(img, c1, c2, color, -1, cv2.LINE_AA) # filled - cv2.putText(img, label, (c1[0], c1[1] - 2), 0, tl / 3, [225, 255, 255], thickness=tf, lineType=cv2.LINE_AA) - - -def plot_one_box_PIL(box, img, color=None, label=None, line_thickness=None): - img = Image.fromarray(img) - draw = ImageDraw.Draw(img) - line_thickness = line_thickness or max(int(min(img.size) / 200), 2) - draw.rectangle(box, width=line_thickness, outline=tuple(color)) # plot - if label: - fontsize = max(round(max(img.size) / 40), 12) - font = ImageFont.truetype("Arial.ttf", fontsize) - txt_width, txt_height = font.getsize(label) - draw.rectangle([box[0], box[1] - txt_height + 4, box[0] + txt_width, box[1]], fill=tuple(color)) - draw.text((box[0], box[1] - txt_height + 1), label, fill=(255, 255, 255), font=font) - return np.asarray(img) - - -def plot_wh_methods(): # from utils.plots import *; plot_wh_methods() - # Compares the two methods for width-height anchor multiplication - # https://github.com/ultralytics/yolov3/issues/168 - x = np.arange(-4.0, 4.0, .1) - ya = np.exp(x) - yb = torch.sigmoid(torch.from_numpy(x)).numpy() * 2 - - fig = plt.figure(figsize=(6, 3), tight_layout=True) - plt.plot(x, ya, '.-', label='YOLOv3') - plt.plot(x, yb ** 2, '.-', label='YOLOR ^2') - plt.plot(x, yb ** 1.6, '.-', label='YOLOR ^1.6') - plt.xlim(left=-4, right=4) - plt.ylim(bottom=0, top=6) - plt.xlabel('input') - plt.ylabel('output') - plt.grid() - plt.legend() - fig.savefig('comparison.png', dpi=200) - - -def output_to_target(output): - # Convert model output to target format [batch_id, class_id, x, y, w, h, conf] - targets = [] - for i, o in enumerate(output): - for *box, conf, cls in o.cpu().numpy(): - targets.append([i, cls, *list(*xyxy2xywh(np.array(box)[None])), conf]) - return np.array(targets) - - -def plot_images(images, targets, paths=None, fname='images.jpg', names=None, max_size=640, max_subplots=16): - # Plot image grid with labels - - if isinstance(images, torch.Tensor): - images = images.cpu().float().numpy() - if isinstance(targets, torch.Tensor): - targets = targets.cpu().numpy() - - # un-normalise - if np.max(images[0]) <= 1: - images *= 255 - - tl = 3 # line thickness - tf = max(tl - 1, 1) # font thickness - bs, _, h, w = images.shape # batch size, _, height, width - bs = min(bs, max_subplots) # limit plot images - ns = 
np.ceil(bs ** 0.5) # number of subplots (square) - - # Check if we should resize - scale_factor = max_size / max(h, w) - if scale_factor < 1: - h = math.ceil(scale_factor * h) - w = math.ceil(scale_factor * w) - - colors = color_list() # list of colors - mosaic = np.full((int(ns * h), int(ns * w), 3), 255, dtype=np.uint8) # init - for i, img in enumerate(images): - if i == max_subplots: # if last batch has fewer images than we expect - break - - block_x = int(w * (i // ns)) - block_y = int(h * (i % ns)) - - img = img.transpose(1, 2, 0) - if scale_factor < 1: - img = cv2.resize(img, (w, h)) - - mosaic[block_y:block_y + h, block_x:block_x + w, :] = img - if len(targets) > 0: - image_targets = targets[targets[:, 0] == i] - boxes = xywh2xyxy(image_targets[:, 2:6]).T - classes = image_targets[:, 1].astype('int') - labels = image_targets.shape[1] == 6 # labels if no conf column - conf = None if labels else image_targets[:, 6] # check for confidence presence (label vs pred) - - if boxes.shape[1]: - if boxes.max() <= 1.01: # if normalized with tolerance 0.01 - boxes[[0, 2]] *= w # scale to pixels - boxes[[1, 3]] *= h - elif scale_factor < 1: # absolute coords need scale if image scales - boxes *= scale_factor - boxes[[0, 2]] += block_x - boxes[[1, 3]] += block_y - for j, box in enumerate(boxes.T): - cls = int(classes[j]) - color = colors[cls % len(colors)] - cls = names[cls] if names else cls - if labels or conf[j] > 0.25: # 0.25 conf thresh - label = '%s' % cls if labels else '%s %.1f' % (cls, conf[j]) - plot_one_box(box, mosaic, label=label, color=color, line_thickness=tl) - - # Draw image filename labels - if paths: - label = Path(paths[i]).name[:40] # trim to 40 char - t_size = cv2.getTextSize(label, 0, fontScale=tl / 3, thickness=tf)[0] - cv2.putText(mosaic, label, (block_x + 5, block_y + t_size[1] + 5), 0, tl / 3, [220, 220, 220], thickness=tf, - lineType=cv2.LINE_AA) - - # Image border - cv2.rectangle(mosaic, (block_x, block_y), (block_x + w, block_y + h), (255, 255, 255), thickness=3) - - if fname: - r = min(1280. 
/ max(h, w) / ns, 1.0) # ratio to limit image size - mosaic = cv2.resize(mosaic, (int(ns * w * r), int(ns * h * r)), interpolation=cv2.INTER_AREA) - # cv2.imwrite(fname, cv2.cvtColor(mosaic, cv2.COLOR_BGR2RGB)) # cv2 save - Image.fromarray(mosaic).save(fname) # PIL save - return mosaic - - -def plot_lr_scheduler(optimizer, scheduler, epochs=300, save_dir=''): - # Plot LR simulating training for full epochs - optimizer, scheduler = copy(optimizer), copy(scheduler) # do not modify originals - y = [] - for _ in range(epochs): - scheduler.step() - y.append(optimizer.param_groups[0]['lr']) - plt.plot(y, '.-', label='LR') - plt.xlabel('epoch') - plt.ylabel('LR') - plt.grid() - plt.xlim(0, epochs) - plt.ylim(0) - plt.savefig(Path(save_dir) / 'LR.png', dpi=200) - plt.close() - - -def plot_test_txt(): # from utils.plots import *; plot_test() - # Plot test.txt histograms - x = np.loadtxt('test.txt', dtype=np.float32) - box = xyxy2xywh(x[:, :4]) - cx, cy = box[:, 0], box[:, 1] - - fig, ax = plt.subplots(1, 1, figsize=(6, 6), tight_layout=True) - ax.hist2d(cx, cy, bins=600, cmax=10, cmin=0) - ax.set_aspect('equal') - plt.savefig('hist2d.png', dpi=300) - - fig, ax = plt.subplots(1, 2, figsize=(12, 6), tight_layout=True) - ax[0].hist(cx, bins=600) - ax[1].hist(cy, bins=600) - plt.savefig('hist1d.png', dpi=200) - - -def plot_targets_txt(): # from utils.plots import *; plot_targets_txt() - # Plot targets.txt histograms - x = np.loadtxt('targets.txt', dtype=np.float32).T - s = ['x targets', 'y targets', 'width targets', 'height targets'] - fig, ax = plt.subplots(2, 2, figsize=(8, 8), tight_layout=True) - ax = ax.ravel() - for i in range(4): - ax[i].hist(x[i], bins=100, label='%.3g +/- %.3g' % (x[i].mean(), x[i].std())) - ax[i].legend() - ax[i].set_title(s[i]) - plt.savefig('targets.jpg', dpi=200) - - -def plot_study_txt(path='', x=None): # from utils.plots import *; plot_study_txt() - # Plot study.txt generated by test.py - fig, ax = plt.subplots(2, 4, figsize=(10, 6), tight_layout=True) - # ax = ax.ravel() - - fig2, ax2 = plt.subplots(1, 1, figsize=(8, 4), tight_layout=True) - # for f in [Path(path) / f'study_coco_{x}.txt' for x in ['yolor-p6', 'yolor-w6', 'yolor-e6', 'yolor-d6']]: - for f in sorted(Path(path).glob('study*.txt')): - y = np.loadtxt(f, dtype=np.float32, usecols=[0, 1, 2, 3, 7, 8, 9], ndmin=2).T - x = np.arange(y.shape[1]) if x is None else np.array(x) - s = ['P', 'R', 'mAP@.5', 'mAP@.5:.95', 't_inference (ms/img)', 't_NMS (ms/img)', 't_total (ms/img)'] - # for i in range(7): - # ax[i].plot(x, y[i], '.-', linewidth=2, markersize=8) - # ax[i].set_title(s[i]) - - j = y[3].argmax() + 1 - ax2.plot(y[6, 1:j], y[3, 1:j] * 1E2, '.-', linewidth=2, markersize=8, - label=f.stem.replace('study_coco_', '').replace('yolo', 'YOLO')) - - ax2.plot(1E3 / np.array([209, 140, 97, 58, 35, 18]), [34.6, 40.5, 43.0, 47.5, 49.7, 51.5], - 'k.-', linewidth=2, markersize=8, alpha=.25, label='EfficientDet') - - ax2.grid(alpha=0.2) - ax2.set_yticks(np.arange(20, 60, 5)) - ax2.set_xlim(0, 57) - ax2.set_ylim(30, 55) - ax2.set_xlabel('GPU Speed (ms/img)') - ax2.set_ylabel('COCO AP val') - ax2.legend(loc='lower right') - plt.savefig(str(Path(path).name) + '.png', dpi=300) - - -def plot_labels(labels, names=(), save_dir=Path(''), loggers=None): - # plot dataset labels - print('Plotting labels... 
') - c, b = labels[:, 0], labels[:, 1:].transpose() # classes, boxes - nc = int(c.max() + 1) # number of classes - colors = color_list() - x = pd.DataFrame(b.transpose(), columns=['x', 'y', 'width', 'height']) - - # seaborn correlogram - sns.pairplot(x, corner=True, diag_kind='auto', kind='hist', diag_kws=dict(bins=50), plot_kws=dict(pmax=0.9)) - plt.savefig(save_dir / 'labels_correlogram.jpg', dpi=200) - plt.close() - - # matplotlib labels - matplotlib.use('svg') # faster - ax = plt.subplots(2, 2, figsize=(8, 8), tight_layout=True)[1].ravel() - ax[0].hist(c, bins=np.linspace(0, nc, nc + 1) - 0.5, rwidth=0.8) - ax[0].set_ylabel('instances') - if 0 < len(names) < 30: - ax[0].set_xticks(range(len(names))) - ax[0].set_xticklabels(names, rotation=90, fontsize=10) - else: - ax[0].set_xlabel('classes') - sns.histplot(x, x='x', y='y', ax=ax[2], bins=50, pmax=0.9) - sns.histplot(x, x='width', y='height', ax=ax[3], bins=50, pmax=0.9) - - # rectangles - labels[:, 1:3] = 0.5 # center - labels[:, 1:] = xywh2xyxy(labels[:, 1:]) * 2000 - img = Image.fromarray(np.ones((2000, 2000, 3), dtype=np.uint8) * 255) - for cls, *box in labels[:1000]: - ImageDraw.Draw(img).rectangle(box, width=1, outline=colors[int(cls) % 10]) # plot - ax[1].imshow(img) - ax[1].axis('off') - - for a in [0, 1, 2, 3]: - for s in ['top', 'right', 'left', 'bottom']: - ax[a].spines[s].set_visible(False) - - plt.savefig(save_dir / 'labels.jpg', dpi=200) - matplotlib.use('Agg') - plt.close() - - # loggers - for k, v in loggers.items() or {}: - if k == 'wandb' and v: - v.log({"Labels": [v.Image(str(x), caption=x.name) for x in save_dir.glob('*labels*.jpg')]}, commit=False) - - -def plot_evolution(yaml_file='data/hyp.finetune.yaml'): # from utils.plots import *; plot_evolution() - # Plot hyperparameter evolution results in evolve.txt - with open(yaml_file) as f: - hyp = yaml.load(f, Loader=yaml.SafeLoader) - x = np.loadtxt('evolve.txt', ndmin=2) - f = fitness(x) - # weights = (f - f.min()) ** 2 # for weighted results - plt.figure(figsize=(10, 12), tight_layout=True) - matplotlib.rc('font', **{'size': 8}) - for i, (k, v) in enumerate(hyp.items()): - y = x[:, i + 7] - # mu = (y * weights).sum() / weights.sum() # best weighted result - mu = y[f.argmax()] # best single result - plt.subplot(6, 5, i + 1) - plt.scatter(y, f, c=hist2d(y, f, 20), cmap='viridis', alpha=.8, edgecolors='none') - plt.plot(mu, f.max(), 'k+', markersize=15) - plt.title('%s = %.3g' % (k, mu), fontdict={'size': 9}) # limit to 40 characters - if i % 5 != 0: - plt.yticks([]) - print('%15s: %.3g' % (k, mu)) - plt.savefig('evolve.png', dpi=200) - print('\nPlot saved as evolve.png') - - -def profile_idetection(start=0, stop=0, labels=(), save_dir=''): - # Plot iDetection '*.txt' per-image logs. 
from utils.plots import *; profile_idetection() - ax = plt.subplots(2, 4, figsize=(12, 6), tight_layout=True)[1].ravel() - s = ['Images', 'Free Storage (GB)', 'RAM Usage (GB)', 'Battery', 'dt_raw (ms)', 'dt_smooth (ms)', 'real-world FPS'] - files = list(Path(save_dir).glob('frames*.txt')) - for fi, f in enumerate(files): - try: - results = np.loadtxt(f, ndmin=2).T[:, 90:-30] # clip first and last rows - n = results.shape[1] # number of rows - x = np.arange(start, min(stop, n) if stop else n) - results = results[:, x] - t = (results[0] - results[0].min()) # set t0=0s - results[0] = x - for i, a in enumerate(ax): - if i < len(results): - label = labels[fi] if len(labels) else f.stem.replace('frames_', '') - a.plot(t, results[i], marker='.', label=label, linewidth=1, markersize=5) - a.set_title(s[i]) - a.set_xlabel('time (s)') - # if fi == len(files) - 1: - # a.set_ylim(bottom=0) - for side in ['top', 'right']: - a.spines[side].set_visible(False) - else: - a.remove() - except Exception as e: - print('Warning: Plotting error for %s; %s' % (f, e)) - - ax[1].legend() - plt.savefig(Path(save_dir) / 'idetection_profile.png', dpi=200) - - -def plot_results_overlay(start=0, stop=0): # from utils.plots import *; plot_results_overlay() - # Plot training 'results*.txt', overlaying train and val losses - s = ['train', 'train', 'train', 'Precision', 'mAP@0.5', 'val', 'val', 'val', 'Recall', 'mAP@0.5:0.95'] # legends - t = ['Box', 'Objectness', 'Classification', 'P-R', 'mAP-F1'] # titles - for f in sorted(glob.glob('results*.txt') + glob.glob('../../Downloads/results*.txt')): - results = np.loadtxt(f, usecols=[2, 3, 4, 8, 9, 12, 13, 14, 10, 11], ndmin=2).T - n = results.shape[1] # number of rows - x = range(start, min(stop, n) if stop else n) - fig, ax = plt.subplots(1, 5, figsize=(14, 3.5), tight_layout=True) - ax = ax.ravel() - for i in range(5): - for j in [i, i + 5]: - y = results[j, x] - ax[i].plot(x, y, marker='.', label=s[j]) - # y_smooth = butter_lowpass_filtfilt(y) - # ax[i].plot(x, np.gradient(y_smooth), marker='.', label=s[j]) - - ax[i].set_title(t[i]) - ax[i].legend() - ax[i].set_ylabel(f) if i == 0 else None # add filename - fig.savefig(f.replace('.txt', '.png'), dpi=200) - - -def plot_results(start=0, stop=0, bucket='', id=(), labels=(), save_dir=''): - # Plot training 'results*.txt'. from utils.plots import *; plot_results(save_dir='runs/train/exp') - fig, ax = plt.subplots(2, 5, figsize=(12, 6), tight_layout=True) - ax = ax.ravel() - s = ['Box', 'Objectness', 'Classification', 'Precision', 'Recall', - 'val Box', 'val Objectness', 'val Classification', 'mAP@0.5', 'mAP@0.5:0.95'] - if bucket: - # files = ['https://storage.googleapis.com/%s/results%g.txt' % (bucket, x) for x in id] - files = ['results%g.txt' % x for x in id] - c = ('gsutil cp ' + '%s ' * len(files) + '.') % tuple('gs://%s/results%g.txt' % (bucket, x) for x in id) - os.system(c) - else: - files = list(Path(save_dir).glob('results*.txt')) - assert len(files), 'No results.txt files found in %s, nothing to plot.' 
% os.path.abspath(save_dir) - for fi, f in enumerate(files): - try: - results = np.loadtxt(f, usecols=[2, 3, 4, 8, 9, 12, 13, 14, 10, 11], ndmin=2).T - n = results.shape[1] # number of rows - x = range(start, min(stop, n) if stop else n) - for i in range(10): - y = results[i, x] - if i in [0, 1, 2, 5, 6, 7]: - y[y == 0] = np.nan # don't show zero loss values - # y /= y[0] # normalize - label = labels[fi] if len(labels) else f.stem - ax[i].plot(x, y, marker='.', label=label, linewidth=2, markersize=8) - ax[i].set_title(s[i]) - # if i in [5, 6, 7]: # share train and val loss y axes - # ax[i].get_shared_y_axes().join(ax[i], ax[i - 5]) - except Exception as e: - print('Warning: Plotting error for %s; %s' % (f, e)) - - ax[1].legend() - fig.savefig(Path(save_dir) / 'results.png', dpi=200) - - -def output_to_keypoint(output): - # Convert model output to target format [batch_id, class_id, x, y, w, h, conf] - targets = [] - for i, o in enumerate(output): - kpts = o[:,6:] - o = o[:,:6] - for index, (*box, conf, cls) in enumerate(o.detach().cpu().numpy()): - targets.append([i, cls, *list(*xyxy2xywh(np.array(box)[None])), conf, *list(kpts.detach().cpu().numpy()[index])]) - return np.array(targets) - - -def plot_skeleton_kpts(im, kpts, steps, orig_shape=None): - #Plot the skeleton and keypointsfor coco datatset - palette = np.array([[255, 128, 0], [255, 153, 51], [255, 178, 102], - [230, 230, 0], [255, 153, 255], [153, 204, 255], - [255, 102, 255], [255, 51, 255], [102, 178, 255], - [51, 153, 255], [255, 153, 153], [255, 102, 102], - [255, 51, 51], [153, 255, 153], [102, 255, 102], - [51, 255, 51], [0, 255, 0], [0, 0, 255], [255, 0, 0], - [255, 255, 255]]) - - skeleton = [[16, 14], [14, 12], [17, 15], [15, 13], [12, 13], [6, 12], - [7, 13], [6, 7], [6, 8], [7, 9], [8, 10], [9, 11], [2, 3], - [1, 2], [1, 3], [2, 4], [3, 5], [4, 6], [5, 7]] - - pose_limb_color = palette[[9, 9, 9, 9, 7, 7, 7, 0, 0, 0, 0, 0, 16, 16, 16, 16, 16, 16, 16]] - pose_kpt_color = palette[[16, 16, 16, 16, 16, 0, 0, 0, 0, 0, 0, 9, 9, 9, 9, 9, 9]] - radius = 5 - num_kpts = len(kpts) // steps - - for kid in range(num_kpts): - r, g, b = pose_kpt_color[kid] - x_coord, y_coord = kpts[steps * kid], kpts[steps * kid + 1] - if not (x_coord % 640 == 0 or y_coord % 640 == 0): - if steps == 3: - conf = kpts[steps * kid + 2] - if conf < 0.5: - continue - cv2.circle(im, (int(x_coord), int(y_coord)), radius, (int(r), int(g), int(b)), -1) - - for sk_id, sk in enumerate(skeleton): - r, g, b = pose_limb_color[sk_id] - pos1 = (int(kpts[(sk[0]-1)*steps]), int(kpts[(sk[0]-1)*steps+1])) - pos2 = (int(kpts[(sk[1]-1)*steps]), int(kpts[(sk[1]-1)*steps+1])) - if steps == 3: - conf1 = kpts[(sk[0]-1)*steps+2] - conf2 = kpts[(sk[1]-1)*steps+2] - if conf1<0.5 or conf2<0.5: - continue - if pos1[0]%640 == 0 or pos1[1]%640==0 or pos1[0]<0 or pos1[1]<0: - continue - if pos2[0] % 640 == 0 or pos2[1] % 640 == 0 or pos2[0]<0 or pos2[1]<0: - continue - cv2.line(im, pos1, pos2, (int(r), int(g), int(b)), thickness=2) diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/slider/GetStartPoint.js b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/slider/GetStartPoint.js deleted file mode 100644 index c2177d572805b853d27ec1a7496b013e00c4078c..0000000000000000000000000000000000000000 --- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/slider/GetStartPoint.js +++ /dev/null @@ -1,27 +0,0 @@ -import GetThumbAlignPoint from './GetThumbAlignPoint.js'; - -const AlignLeft = Phaser.Display.Align.LEFT_CENTER; 
-const AlignTop = Phaser.Display.Align.TOP_CENTER; - -var GetStartPoint = function (out) { - if (out === undefined) { - out = tmpPoint; - } - if (this.childrenMap.thumb) { - var align = (this.orientation === 0) ? AlignLeft : AlignTop; - GetThumbAlignPoint.call(this, align, out); - } else { - if (this.orientation === 0) { - out.x = this.innerLeft + 1; // Add 1 pixel margin - out.y = this.centerY; - } else { - out.x = this.centerX; - out.y = this.innerTop + 1; // Add 1 pixel margin - } - } - return out; -} - -var tmpPoint = {}; - -export default GetStartPoint; \ No newline at end of file diff --git a/spaces/AlexWang/lama/bin/filter_sharded_dataset.py b/spaces/AlexWang/lama/bin/filter_sharded_dataset.py deleted file mode 100644 index b3c2b490e88bb3b55c6bb717e08f97f7a396d5fa..0000000000000000000000000000000000000000 --- a/spaces/AlexWang/lama/bin/filter_sharded_dataset.py +++ /dev/null @@ -1,69 +0,0 @@ -#!/usr/bin/env python3 - - -import math -import os -import random - -import braceexpand -import webdataset as wds - -DEFAULT_CATS_FILE = os.path.join(os.path.dirname(__file__), '..', 'configs', 'places2-categories_157.txt') - -def is_good_key(key, cats): - return any(c in key for c in cats) - - -def main(args): - if args.categories == 'nofilter': - good_categories = None - else: - with open(args.categories, 'r') as f: - good_categories = set(line.strip().split(' ')[0] for line in f if line.strip()) - - all_input_files = list(braceexpand.braceexpand(args.infile)) - chunk_size = int(math.ceil(len(all_input_files) / args.n_read_streams)) - - input_iterators = [iter(wds.Dataset(all_input_files[start : start + chunk_size]).shuffle(args.shuffle_buffer)) - for start in range(0, len(all_input_files), chunk_size)] - output_datasets = [wds.ShardWriter(args.outpattern.format(i)) for i in range(args.n_write_streams)] - - good_readers = list(range(len(input_iterators))) - step_i = 0 - good_samples = 0 - bad_samples = 0 - while len(good_readers) > 0: - if step_i % args.print_freq == 0: - print(f'Iterations done {step_i}; readers alive {good_readers}; good samples {good_samples}; bad samples {bad_samples}') - - step_i += 1 - - ri = random.choice(good_readers) - try: - sample = next(input_iterators[ri]) - except StopIteration: - good_readers = list(set(good_readers) - {ri}) - continue - - if good_categories is not None and not is_good_key(sample['__key__'], good_categories): - bad_samples += 1 - continue - - wi = random.randint(0, args.n_write_streams - 1) - output_datasets[wi].write(sample) - good_samples += 1 - - -if __name__ == '__main__': - import argparse - - aparser = argparse.ArgumentParser() - aparser.add_argument('--categories', type=str, default=DEFAULT_CATS_FILE) - aparser.add_argument('--shuffle-buffer', type=int, default=10000) - aparser.add_argument('--n-read-streams', type=int, default=10) - aparser.add_argument('--n-write-streams', type=int, default=10) - aparser.add_argument('--print-freq', type=int, default=1000) - aparser.add_argument('infile', type=str) - aparser.add_argument('outpattern', type=str) - - main(aparser.parse_args()) diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/pipelines/stable_diffusion/test_onnx_stable_diffusion_inpaint.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/pipelines/stable_diffusion/test_onnx_stable_diffusion_inpaint.py deleted file mode 100644 index 6004067887ea3ad604cbbb18663c735ffcc83be3..0000000000000000000000000000000000000000 --- 
a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/pipelines/stable_diffusion/test_onnx_stable_diffusion_inpaint.py +++ /dev/null @@ -1,141 +0,0 @@ -# coding=utf-8 -# Copyright 2023 HuggingFace Inc. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -import unittest - -import numpy as np - -from diffusers import LMSDiscreteScheduler, OnnxStableDiffusionInpaintPipeline -from diffusers.utils.testing_utils import ( - is_onnx_available, - load_image, - nightly, - require_onnxruntime, - require_torch_gpu, -) - -from ..test_pipelines_onnx_common import OnnxPipelineTesterMixin - - -if is_onnx_available(): - import onnxruntime as ort - - -class OnnxStableDiffusionPipelineFastTests(OnnxPipelineTesterMixin, unittest.TestCase): - # FIXME: add fast tests - pass - - -@nightly -@require_onnxruntime -@require_torch_gpu -class OnnxStableDiffusionInpaintPipelineIntegrationTests(unittest.TestCase): - @property - def gpu_provider(self): - return ( - "CUDAExecutionProvider", - { - "gpu_mem_limit": "15000000000", # 15GB - "arena_extend_strategy": "kSameAsRequested", - }, - ) - - @property - def gpu_options(self): - options = ort.SessionOptions() - options.enable_mem_pattern = False - return options - - def test_inference_default_pndm(self): - init_image = load_image( - "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main" - "/in_paint/overture-creations-5sI6fQgYIuo.png" - ) - mask_image = load_image( - "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main" - "/in_paint/overture-creations-5sI6fQgYIuo_mask.png" - ) - pipe = OnnxStableDiffusionInpaintPipeline.from_pretrained( - "runwayml/stable-diffusion-inpainting", - revision="onnx", - safety_checker=None, - feature_extractor=None, - provider=self.gpu_provider, - sess_options=self.gpu_options, - ) - pipe.set_progress_bar_config(disable=None) - - prompt = "A red cat sitting on a park bench" - - generator = np.random.RandomState(0) - output = pipe( - prompt=prompt, - image=init_image, - mask_image=mask_image, - guidance_scale=7.5, - num_inference_steps=10, - generator=generator, - output_type="np", - ) - images = output.images - image_slice = images[0, 255:258, 255:258, -1] - - assert images.shape == (1, 512, 512, 3) - expected_slice = np.array([0.2514, 0.3007, 0.3517, 0.1790, 0.2382, 0.3167, 0.1944, 0.2273, 0.2464]) - - assert np.abs(image_slice.flatten() - expected_slice).max() < 1e-3 - - def test_inference_k_lms(self): - init_image = load_image( - "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main" - "/in_paint/overture-creations-5sI6fQgYIuo.png" - ) - mask_image = load_image( - "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main" - "/in_paint/overture-creations-5sI6fQgYIuo_mask.png" - ) - lms_scheduler = LMSDiscreteScheduler.from_pretrained( - "runwayml/stable-diffusion-inpainting", subfolder="scheduler", revision="onnx" - ) - pipe = OnnxStableDiffusionInpaintPipeline.from_pretrained( - "runwayml/stable-diffusion-inpainting", - 
revision="onnx", - scheduler=lms_scheduler, - safety_checker=None, - feature_extractor=None, - provider=self.gpu_provider, - sess_options=self.gpu_options, - ) - pipe.set_progress_bar_config(disable=None) - - prompt = "A red cat sitting on a park bench" - - generator = np.random.RandomState(0) - output = pipe( - prompt=prompt, - image=init_image, - mask_image=mask_image, - guidance_scale=7.5, - num_inference_steps=20, - generator=generator, - output_type="np", - ) - images = output.images - image_slice = images[0, 255:258, 255:258, -1] - - assert images.shape == (1, 512, 512, 3) - expected_slice = np.array([0.0086, 0.0077, 0.0083, 0.0093, 0.0107, 0.0139, 0.0094, 0.0097, 0.0125]) - - assert np.abs(image_slice.flatten() - expected_slice).max() < 1e-3 diff --git a/spaces/Andy1621/uniformer_image_detection/configs/cascade_rcnn/cascade_rcnn_x101_64x4d_fpn_1x_coco.py b/spaces/Andy1621/uniformer_image_detection/configs/cascade_rcnn/cascade_rcnn_x101_64x4d_fpn_1x_coco.py deleted file mode 100644 index b249bfa0df6037f1433ef6d41f7da16b10645aa2..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_detection/configs/cascade_rcnn/cascade_rcnn_x101_64x4d_fpn_1x_coco.py +++ /dev/null @@ -1,14 +0,0 @@ -_base_ = './cascade_rcnn_r50_fpn_1x_coco.py' -model = dict( - type='CascadeRCNN', - pretrained='open-mmlab://resnext101_64x4d', - backbone=dict( - type='ResNeXt', - depth=101, - groups=64, - base_width=4, - num_stages=4, - out_indices=(0, 1, 2, 3), - frozen_stages=1, - norm_cfg=dict(type='BN', requires_grad=True), - style='pytorch')) diff --git a/spaces/Andy1621/uniformer_image_detection/mmdet/models/detectors/fsaf.py b/spaces/Andy1621/uniformer_image_detection/mmdet/models/detectors/fsaf.py deleted file mode 100644 index 9f10fa1ae10f31e6cb5de65505b14a4fc97dd022..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_detection/mmdet/models/detectors/fsaf.py +++ /dev/null @@ -1,17 +0,0 @@ -from ..builder import DETECTORS -from .single_stage import SingleStageDetector - - -@DETECTORS.register_module() -class FSAF(SingleStageDetector): - """Implementation of `FSAF `_""" - - def __init__(self, - backbone, - neck, - bbox_head, - train_cfg=None, - test_cfg=None, - pretrained=None): - super(FSAF, self).__init__(backbone, neck, bbox_head, train_cfg, - test_cfg, pretrained) diff --git a/spaces/Andy1621/uniformer_image_detection/mmdet/models/roi_heads/mask_heads/htc_mask_head.py b/spaces/Andy1621/uniformer_image_detection/mmdet/models/roi_heads/mask_heads/htc_mask_head.py deleted file mode 100644 index 330b778ebad8d48d55d09ddd42baa70ec10ae463..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_detection/mmdet/models/roi_heads/mask_heads/htc_mask_head.py +++ /dev/null @@ -1,43 +0,0 @@ -from mmcv.cnn import ConvModule - -from mmdet.models.builder import HEADS -from .fcn_mask_head import FCNMaskHead - - -@HEADS.register_module() -class HTCMaskHead(FCNMaskHead): - - def __init__(self, with_conv_res=True, *args, **kwargs): - super(HTCMaskHead, self).__init__(*args, **kwargs) - self.with_conv_res = with_conv_res - if self.with_conv_res: - self.conv_res = ConvModule( - self.conv_out_channels, - self.conv_out_channels, - 1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg) - - def init_weights(self): - super(HTCMaskHead, self).init_weights() - if self.with_conv_res: - self.conv_res.init_weights() - - def forward(self, x, res_feat=None, return_logits=True, return_feat=True): - if res_feat is not None: - assert self.with_conv_res - res_feat = 
self.conv_res(res_feat) - x = x + res_feat - for conv in self.convs: - x = conv(x) - res_feat = x - outs = [] - if return_logits: - x = self.upsample(x) - if self.upsample_method == 'deconv': - x = self.relu(x) - mask_pred = self.conv_logits(x) - outs.append(mask_pred) - if return_feat: - outs.append(res_feat) - return outs if len(outs) > 1 else outs[0] diff --git a/spaces/Andy1621/uniformer_image_detection/tools/analysis_tools/analyze_results.py b/spaces/Andy1621/uniformer_image_detection/tools/analysis_tools/analyze_results.py deleted file mode 100644 index fc6b4d9252178cb24a2266ac52aa77223e4f0d7a..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_detection/tools/analysis_tools/analyze_results.py +++ /dev/null @@ -1,202 +0,0 @@ -import argparse -import os.path as osp - -import mmcv -import numpy as np -from mmcv import Config, DictAction - -from mmdet.core.evaluation import eval_map -from mmdet.core.visualization import imshow_gt_det_bboxes -from mmdet.datasets import build_dataset, get_loading_pipeline - - -def bbox_map_eval(det_result, annotation): - """Evaluate mAP of single image det result. - - Args: - det_result (list[list]): [[cls1_det, cls2_det, ...], ...]. - The outer list indicates images, and the inner list indicates - per-class detected bboxes. - annotation (dict): Ground truth annotations where keys of - annotations are: - - - bboxes: numpy array of shape (n, 4) - - labels: numpy array of shape (n, ) - - bboxes_ignore (optional): numpy array of shape (k, 4) - - labels_ignore (optional): numpy array of shape (k, ) - - Returns: - float: mAP - """ - - # use only bbox det result - if isinstance(det_result, tuple): - bbox_det_result = [det_result[0]] - else: - bbox_det_result = [det_result] - # mAP - iou_thrs = np.linspace( - .5, 0.95, int(np.round((0.95 - .5) / .05)) + 1, endpoint=True) - mean_aps = [] - for thr in iou_thrs: - mean_ap, _ = eval_map( - bbox_det_result, [annotation], iou_thr=thr, logger='silent') - mean_aps.append(mean_ap) - return sum(mean_aps) / len(mean_aps) - - -class ResultVisualizer(object): - """Display and save evaluation results. - - Args: - show (bool): Whether to show the image. Default: True - wait_time (float): Value of waitKey param. Default: 0. - score_thr (float): Minimum score of bboxes to be shown. - Default: 0 - """ - - def __init__(self, show=False, wait_time=0, score_thr=0): - self.show = show - self.wait_time = wait_time - self.score_thr = score_thr - - def _save_image_gts_results(self, dataset, results, mAPs, out_dir=None): - mmcv.mkdir_or_exist(out_dir) - - for mAP_info in mAPs: - index, mAP = mAP_info - data_info = dataset.prepare_train_img(index) - - # calc save file path - filename = data_info['filename'] - if data_info['img_prefix'] is not None: - filename = osp.join(data_info['img_prefix'], filename) - else: - filename = data_info['filename'] - fname, name = osp.splitext(osp.basename(filename)) - save_filename = fname + '_' + str(round(mAP, 3)) + name - out_file = osp.join(out_dir, save_filename) - imshow_gt_det_bboxes( - data_info['img'], - data_info, - results[index], - dataset.CLASSES, - show=self.show, - score_thr=self.score_thr, - wait_time=self.wait_time, - out_file=out_file) - - def evaluate_and_show(self, - dataset, - results, - topk=20, - show_dir='work_dir', - eval_fn=None): - """Evaluate and show results. - - Args: - dataset (Dataset): A PyTorch dataset. 
- results (list): Det results from test results pkl file - topk (int): Number of the highest topk and - lowest topk after evaluation index sorting. Default: 20 - show_dir (str, optional): The filename to write the image. - Default: 'work_dir' - eval_fn (callable, optional): Eval function, Default: None - """ - - assert topk > 0 - if (topk * 2) > len(dataset): - topk = len(dataset) // 2 - - if eval_fn is None: - eval_fn = bbox_map_eval - else: - assert callable(eval_fn) - - prog_bar = mmcv.ProgressBar(len(results)) - _mAPs = {} - for i, (result, ) in enumerate(zip(results)): - # self.dataset[i] should not call directly - # because there is a risk of mismatch - data_info = dataset.prepare_train_img(i) - mAP = eval_fn(result, data_info['ann_info']) - _mAPs[i] = mAP - prog_bar.update() - - # descending select topk image - _mAPs = list(sorted(_mAPs.items(), key=lambda kv: kv[1])) - good_mAPs = _mAPs[-topk:] - bad_mAPs = _mAPs[:topk] - - good_dir = osp.abspath(osp.join(show_dir, 'good')) - bad_dir = osp.abspath(osp.join(show_dir, 'bad')) - self._save_image_gts_results(dataset, results, good_mAPs, good_dir) - self._save_image_gts_results(dataset, results, bad_mAPs, bad_dir) - - -def parse_args(): - parser = argparse.ArgumentParser( - description='MMDet eval image prediction result for each') - parser.add_argument('config', help='test config file path') - parser.add_argument( - 'prediction_path', help='prediction path where test pkl result') - parser.add_argument( - 'show_dir', help='directory where painted images will be saved') - parser.add_argument('--show', action='store_true', help='show results') - parser.add_argument( - '--wait-time', - type=float, - default=0, - help='the interval of show (s), 0 is block') - parser.add_argument( - '--topk', - default=20, - type=int, - help='saved Number of the highest topk ' - 'and lowest topk after index sorting') - parser.add_argument( - '--show-score-thr', - type=float, - default=0, - help='score threshold (default: 0.)') - parser.add_argument( - '--cfg-options', - nargs='+', - action=DictAction, - help='override some settings in the used config, the key-value pair ' - 'in xxx=yyy format will be merged into config file. If the value to ' - 'be overwritten is a list, it should be like key="[a,b]" or key=a,b ' - 'It also allows nested list/tuple values, e.g. key="[(a,b),(c,d)]" ' - 'Note that the quotation marks are necessary and that no white space ' - 'is allowed.') - args = parser.parse_args() - return args - - -def main(): - args = parse_args() - - mmcv.check_file_exist(args.prediction_path) - - cfg = Config.fromfile(args.config) - if args.cfg_options is not None: - cfg.merge_from_dict(args.cfg_options) - cfg.data.test.test_mode = True - # import modules from string list. 
- if cfg.get('custom_imports', None): - from mmcv.utils import import_modules_from_strings - import_modules_from_strings(**cfg['custom_imports']) - - cfg.data.test.pop('samples_per_gpu', 0) - cfg.data.test.pipeline = get_loading_pipeline(cfg.data.train.pipeline) - dataset = build_dataset(cfg.data.test) - outputs = mmcv.load(args.prediction_path) - - result_visualizer = ResultVisualizer(args.show, args.wait_time, - args.show_score_thr) - result_visualizer.evaluate_and_show( - dataset, outputs, topk=args.topk, show_dir=args.show_dir) - - -if __name__ == '__main__': - main() diff --git a/spaces/Andy1621/uniformerv2_demo/README.md b/spaces/Andy1621/uniformerv2_demo/README.md deleted file mode 100644 index e7e0f5286dcaf85773e71b0afacfded4557032b8..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformerv2_demo/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Uniformerv2_demo -emoji: 📹 -colorFrom: pink -colorTo: green -sdk: gradio -sdk_version: 3.0.3 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference diff --git a/spaces/Angelaangie/personal-chat-gpt/README.md b/spaces/Angelaangie/personal-chat-gpt/README.md deleted file mode 100644 index e9e1a18eefcb861207920ce4d2534c74e315e4cc..0000000000000000000000000000000000000000 --- a/spaces/Angelaangie/personal-chat-gpt/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Personal Chat Gpt -emoji: 📉 -colorFrom: purple -colorTo: blue -sdk: gradio -sdk_version: 3.16.1 -app_file: app.py -pinned: false -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Anonymous-123/ImageNet-Editing/object_removal/TFill/train.py b/spaces/Anonymous-123/ImageNet-Editing/object_removal/TFill/train.py deleted file mode 100644 index 161cb33da069597e0586b3d293265d107ff7f74d..0000000000000000000000000000000000000000 --- a/spaces/Anonymous-123/ImageNet-Editing/object_removal/TFill/train.py +++ /dev/null @@ -1,63 +0,0 @@ -import time -from options.train_options import TrainOptions -from dataloader.data_loader import dataloader -from model import create_model -from util.visualizer import Visualizer - - -if __name__ == '__main__': - opt = TrainOptions().parse() # get training options - dataset = dataloader(opt) # create a dataset - dataset_size = len(dataset) * opt.batch_size - print('training images = %d' % dataset_size) - model = create_model(opt) # create a model given opt.model and other options - visualizer = Visualizer(opt) # create a visualizer - - total_iters = opt.iter_count # the total number of training iterations - epoch = 0 - max_iteration = opt.n_iter + opt.n_iter_decay - - while (total_iters < max_iteration): - epoch_start_time = time.time() # timer for entire epoch - iter_data_time = time.time() # timer for data loading per iteration - epoch += 1 # the number of training iterations in current epoch, reset to 0 every epoch - epoch_iter = 0 - visualizer.reset() # reset the visualizer - - for i, data in enumerate(dataset): - iter_start_time = time.time() - if total_iters % opt.print_freq == 0: - t_data = iter_start_time - iter_data_time - if total_iters == 0: - model.setup(opt) - model.parallelize() - total_iters += opt.batch_size - epoch_iter += opt.batch_size - - model.set_input(data) # unpack data from dataset and apply preprocessing - model.optimize_parameters() - - if total_iters % opt.display_freq == 0: # display images on visdom and save images to a HTML file 
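A brief aside on the training loop above: display, logging, and checkpointing are all gated on the running iteration counter hitting a multiple of their configured frequency. A standalone sketch of that pattern (batch size and frequencies below are hypothetical):

```python
# Iteration-frequency gating, as used by the display/print/save checks above.
batch_size, print_freq, display_freq = 4, 8, 12
total_iters = 0

for _ in range(9):                      # stand-in for iterating over the dataloader
    total_iters += batch_size
    if total_iters % print_freq == 0:
        print(f"iter {total_iters}: print/log current losses")
    if total_iters % display_freq == 0:
        print(f"iter {total_iters}: push current visuals to the visualizer")
```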
- save_result = total_iters % opt.update_html_freq == 0 - model.log_imgs() - visualizer.display_current_results(model.get_current_visuals(), epoch, save_result) - - if total_iters % opt.print_freq == 0: # print training losses and save logging information to the disk - losses = model.get_current_losses() - t_comp = (time.time() - iter_start_time) / opt.batch_size - visualizer.print_current_losses(epoch, total_iters, losses, t_comp, t_data) - if opt.display_id is None or opt.display_id > 0: - visualizer.plot_current_losses(epoch, float(epoch_iter) / dataset_size, losses) - - if total_iters % opt.save_latest_freq == 0: # cache our latest model every iterations - print('saving the latest model (epoch %d, total_iters %d)' % (epoch, total_iters)) - print(opt.name) # it's useful to occasionally show the experiment name on console - model.save_networks('latest') - - if total_iters % opt.save_iters_freq == 0: # cache our model every epochs - print('saving the model at the end of iters %d' % (total_iters)) - model.save_networks('latest') - model.save_networks(total_iters) - - print('End of iters %d / %d \t Time Taken: %d sec' % (total_iters, max_iteration, time.time() - epoch_start_time)) - model.update_learning_rate() \ No newline at end of file diff --git a/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmcv/cnn/bricks/upsample.py b/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmcv/cnn/bricks/upsample.py deleted file mode 100644 index a1a353767d0ce8518f0d7289bed10dba0178ed12..0000000000000000000000000000000000000000 --- a/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmcv/cnn/bricks/upsample.py +++ /dev/null @@ -1,84 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch.nn as nn -import torch.nn.functional as F - -from ..utils import xavier_init -from .registry import UPSAMPLE_LAYERS - -UPSAMPLE_LAYERS.register_module('nearest', module=nn.Upsample) -UPSAMPLE_LAYERS.register_module('bilinear', module=nn.Upsample) - - -@UPSAMPLE_LAYERS.register_module(name='pixel_shuffle') -class PixelShufflePack(nn.Module): - """Pixel Shuffle upsample layer. - - This module packs `F.pixel_shuffle()` and a nn.Conv2d module together to - achieve a simple upsampling with pixel shuffle. - - Args: - in_channels (int): Number of input channels. - out_channels (int): Number of output channels. - scale_factor (int): Upsample ratio. - upsample_kernel (int): Kernel size of the conv layer to expand the - channels. - """ - - def __init__(self, in_channels, out_channels, scale_factor, - upsample_kernel): - super(PixelShufflePack, self).__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.scale_factor = scale_factor - self.upsample_kernel = upsample_kernel - self.upsample_conv = nn.Conv2d( - self.in_channels, - self.out_channels * scale_factor * scale_factor, - self.upsample_kernel, - padding=(self.upsample_kernel - 1) // 2) - self.init_weights() - - def init_weights(self): - xavier_init(self.upsample_conv, distribution='uniform') - - def forward(self, x): - x = self.upsample_conv(x) - x = F.pixel_shuffle(x, self.scale_factor) - return x - - -def build_upsample_layer(cfg, *args, **kwargs): - """Build upsample layer. - - Args: - cfg (dict): The upsample layer config, which should contain: - - - type (str): Layer type. - - scale_factor (int): Upsample ratio, which is not applicable to - deconv. - - layer args: Args needed to instantiate a upsample layer. 
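The `PixelShufflePack` module above wraps a common two-step upsampling pattern: a convolution expands the channel count by `scale_factor**2`, then `F.pixel_shuffle` folds those extra channels into a larger spatial grid. A self-contained plain-PyTorch sketch of the same idea:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Conv expands channels by scale**2, pixel_shuffle rearranges them into space.
in_channels, out_channels, scale = 16, 8, 2
conv = nn.Conv2d(in_channels, out_channels * scale * scale, kernel_size=3, padding=1)

x = torch.randn(1, in_channels, 32, 32)
y = F.pixel_shuffle(conv(x), scale)
print(y.shape)   # torch.Size([1, 8, 64, 64])
```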
- args (argument list): Arguments passed to the ``__init__`` - method of the corresponding conv layer. - kwargs (keyword arguments): Keyword arguments passed to the - ``__init__`` method of the corresponding conv layer. - - Returns: - nn.Module: Created upsample layer. - """ - if not isinstance(cfg, dict): - raise TypeError(f'cfg must be a dict, but got {type(cfg)}') - if 'type' not in cfg: - raise KeyError( - f'the cfg dict must contain the key "type", but got {cfg}') - cfg_ = cfg.copy() - - layer_type = cfg_.pop('type') - if layer_type not in UPSAMPLE_LAYERS: - raise KeyError(f'Unrecognized upsample type {layer_type}') - else: - upsample = UPSAMPLE_LAYERS.get(layer_type) - - if upsample is nn.Upsample: - cfg_['mode'] = layer_type - layer = upsample(*args, **kwargs, **cfg_) - return layer diff --git a/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmcv/runner/hooks/logger/neptune.py b/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmcv/runner/hooks/logger/neptune.py deleted file mode 100644 index 7a38772b0c93a8608f32c6357b8616e77c139dc9..0000000000000000000000000000000000000000 --- a/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmcv/runner/hooks/logger/neptune.py +++ /dev/null @@ -1,82 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from ...dist_utils import master_only -from ..hook import HOOKS -from .base import LoggerHook - - -@HOOKS.register_module() -class NeptuneLoggerHook(LoggerHook): - """Class to log metrics to NeptuneAI. - - It requires `neptune-client` to be installed. - - Args: - init_kwargs (dict): a dict contains the initialization keys as below: - - project (str): Name of a project in a form of - namespace/project_name. If None, the value of - NEPTUNE_PROJECT environment variable will be taken. - - api_token (str): User’s API token. - If None, the value of NEPTUNE_API_TOKEN environment - variable will be taken. Note: It is strongly recommended - to use NEPTUNE_API_TOKEN environment variable rather than - placing your API token in plain text in your source code. - - name (str, optional, default is 'Untitled'): Editable name of - the run. Name is displayed in the run's Details and in - Runs table as a column. - Check https://docs.neptune.ai/api-reference/neptune#init for - more init arguments. - interval (int): Logging interval (every k iterations). - ignore_last (bool): Ignore the log of last iterations in each epoch - if less than `interval`. - reset_flag (bool): Whether to clear the output buffer after logging - by_epoch (bool): Whether EpochBasedRunner is used. - - .. 
_NeptuneAI: - https://docs.neptune.ai/you-should-know/logging-metadata - """ - - def __init__(self, - init_kwargs=None, - interval=10, - ignore_last=True, - reset_flag=True, - with_step=True, - by_epoch=True): - - super(NeptuneLoggerHook, self).__init__(interval, ignore_last, - reset_flag, by_epoch) - self.import_neptune() - self.init_kwargs = init_kwargs - self.with_step = with_step - - def import_neptune(self): - try: - import neptune.new as neptune - except ImportError: - raise ImportError( - 'Please run "pip install neptune-client" to install neptune') - self.neptune = neptune - self.run = None - - @master_only - def before_run(self, runner): - if self.init_kwargs: - self.run = self.neptune.init(**self.init_kwargs) - else: - self.run = self.neptune.init() - - @master_only - def log(self, runner): - tags = self.get_loggable_tags(runner) - if tags: - for tag_name, tag_value in tags.items(): - if self.with_step: - self.run[tag_name].log( - tag_value, step=self.get_iter(runner)) - else: - tags['global_step'] = self.get_iter(runner) - self.run[tag_name].log(tags) - - @master_only - def after_run(self, runner): - self.run.stop() diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/_distutils/command/install_scripts.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/_distutils/command/install_scripts.py deleted file mode 100644 index f09bd644207e5c5a891d3605cb6aff4f00d70c8a..0000000000000000000000000000000000000000 --- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/_distutils/command/install_scripts.py +++ /dev/null @@ -1,61 +0,0 @@ -"""distutils.command.install_scripts - -Implements the Distutils 'install_scripts' command, for installing -Python scripts.""" - -# contributed by Bastian Kleineidam - -import os -from distutils.core import Command -from distutils import log -from stat import ST_MODE - - -class install_scripts(Command): - - description = "install scripts (Python or otherwise)" - - user_options = [ - ('install-dir=', 'd', "directory to install scripts to"), - ('build-dir=', 'b', "build directory (where to install from)"), - ('force', 'f', "force installation (overwrite existing files)"), - ('skip-build', None, "skip the build steps"), - ] - - boolean_options = ['force', 'skip-build'] - - def initialize_options(self): - self.install_dir = None - self.force = 0 - self.build_dir = None - self.skip_build = None - - def finalize_options(self): - self.set_undefined_options('build', ('build_scripts', 'build_dir')) - self.set_undefined_options( - 'install', - ('install_scripts', 'install_dir'), - ('force', 'force'), - ('skip_build', 'skip_build'), - ) - - def run(self): - if not self.skip_build: - self.run_command('build_scripts') - self.outfiles = self.copy_tree(self.build_dir, self.install_dir) - if os.name == 'posix': - # Set the executable bits (owner, group, and world) on - # all the scripts we just installed. 
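The loop that follows adjusts each installed script's mode. As a quick worked example of that bit arithmetic, OR-ing in read+execute for owner/group/world (0o555) and then masking to the low permission bits (0o7777):

```python
# Worked example of the mode computation used in the loop just below.
current_mode = 0o100644                      # a regular file installed as rw-r--r--
new_mode = (current_mode | 0o555) & 0o7777   # add r+x for everyone, drop file-type bits
print(oct(new_mode))                         # 0o755 -> rwxr-xr-x
```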
- for file in self.get_outputs(): - if self.dry_run: - log.info("changing mode of %s", file) - else: - mode = ((os.stat(file)[ST_MODE]) | 0o555) & 0o7777 - log.info("changing mode of %s to %o", file, mode) - os.chmod(file, mode) - - def get_inputs(self): - return self.distribution.scripts or [] - - def get_outputs(self): - return self.outfiles or [] diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/_distutils/text_file.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/_distutils/text_file.py deleted file mode 100644 index 7274d4b16e1bee16751515f42793ebefdd769b96..0000000000000000000000000000000000000000 --- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/_distutils/text_file.py +++ /dev/null @@ -1,287 +0,0 @@ -"""text_file - -provides the TextFile class, which gives an interface to text files -that (optionally) takes care of stripping comments, ignoring blank -lines, and joining lines with backslashes.""" - -import sys - - -class TextFile: - """Provides a file-like object that takes care of all the things you - commonly want to do when processing a text file that has some - line-by-line syntax: strip comments (as long as "#" is your - comment character), skip blank lines, join adjacent lines by - escaping the newline (ie. backslash at end of line), strip - leading and/or trailing whitespace. All of these are optional - and independently controllable. - - Provides a 'warn()' method so you can generate warning messages that - report physical line number, even if the logical line in question - spans multiple physical lines. Also provides 'unreadline()' for - implementing line-at-a-time lookahead. - - Constructor is called as: - - TextFile (filename=None, file=None, **options) - - It bombs (RuntimeError) if both 'filename' and 'file' are None; - 'filename' should be a string, and 'file' a file object (or - something that provides 'readline()' and 'close()' methods). It is - recommended that you supply at least 'filename', so that TextFile - can include it in warning messages. If 'file' is not supplied, - TextFile creates its own using 'io.open()'. - - The options are all boolean, and affect the value returned by - 'readline()': - strip_comments [default: true] - strip from "#" to end-of-line, as well as any whitespace - leading up to the "#" -- unless it is escaped by a backslash - lstrip_ws [default: false] - strip leading whitespace from each line before returning it - rstrip_ws [default: true] - strip trailing whitespace (including line terminator!) from - each line before returning it - skip_blanks [default: true} - skip lines that are empty *after* stripping comments and - whitespace. (If both lstrip_ws and rstrip_ws are false, - then some lines may consist of solely whitespace: these will - *not* be skipped, even if 'skip_blanks' is true.) - join_lines [default: false] - if a backslash is the last non-newline character on a line - after stripping comments and whitespace, join the following line - to it to form one "logical line"; if N consecutive lines end - with a backslash, then N+1 physical lines will be joined to - form one logical line. 
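A toy sketch of the `join_lines` rule just described (this is not the `TextFile` implementation itself, only the joining idea): physical lines ending in a backslash are merged with the following line into one logical line.

```python
# Backslash continuation: N trailing-backslash lines plus the next line
# collapse into a single logical line.
physical = ["first part \\", "second part \\", "third part", "plain line"]

logical, buildup = [], ""
for line in physical:
    if line.endswith("\\"):
        buildup += line[:-1]
        continue
    logical.append(buildup + line)
    buildup = ""

print(logical)  # ['first part second part third part', 'plain line']
```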
- collapse_join [default: false] - strip leading whitespace from lines that are joined to their - predecessor; only matters if (join_lines and not lstrip_ws) - errors [default: 'strict'] - error handler used to decode the file content - - Note that since 'rstrip_ws' can strip the trailing newline, the - semantics of 'readline()' must differ from those of the builtin file - object's 'readline()' method! In particular, 'readline()' returns - None for end-of-file: an empty string might just be a blank line (or - an all-whitespace line), if 'rstrip_ws' is true but 'skip_blanks' is - not.""" - - default_options = { - 'strip_comments': 1, - 'skip_blanks': 1, - 'lstrip_ws': 0, - 'rstrip_ws': 1, - 'join_lines': 0, - 'collapse_join': 0, - 'errors': 'strict', - } - - def __init__(self, filename=None, file=None, **options): - """Construct a new TextFile object. At least one of 'filename' - (a string) and 'file' (a file-like object) must be supplied. - They keyword argument options are described above and affect - the values returned by 'readline()'.""" - if filename is None and file is None: - raise RuntimeError( - "you must supply either or both of 'filename' and 'file'" - ) - - # set values for all options -- either from client option hash - # or fallback to default_options - for opt in self.default_options.keys(): - if opt in options: - setattr(self, opt, options[opt]) - else: - setattr(self, opt, self.default_options[opt]) - - # sanity check client option hash - for opt in options.keys(): - if opt not in self.default_options: - raise KeyError("invalid TextFile option '%s'" % opt) - - if file is None: - self.open(filename) - else: - self.filename = filename - self.file = file - self.current_line = 0 # assuming that file is at BOF! - - # 'linebuf' is a stack of lines that will be emptied before we - # actually read from the file; it's only populated by an - # 'unreadline()' operation - self.linebuf = [] - - def open(self, filename): - """Open a new file named 'filename'. This overrides both the - 'filename' and 'file' arguments to the constructor.""" - self.filename = filename - self.file = open(self.filename, errors=self.errors) - self.current_line = 0 - - def close(self): - """Close the current file and forget everything we know about it - (filename, current line number).""" - file = self.file - self.file = None - self.filename = None - self.current_line = None - file.close() - - def gen_error(self, msg, line=None): - outmsg = [] - if line is None: - line = self.current_line - outmsg.append(self.filename + ", ") - if isinstance(line, (list, tuple)): - outmsg.append("lines %d-%d: " % tuple(line)) - else: - outmsg.append("line %d: " % line) - outmsg.append(str(msg)) - return "".join(outmsg) - - def error(self, msg, line=None): - raise ValueError("error: " + self.gen_error(msg, line)) - - def warn(self, msg, line=None): - """Print (to stderr) a warning message tied to the current logical - line in the current file. If the current logical line in the - file spans multiple physical lines, the warning refers to the - whole range, eg. "lines 3-5". If 'line' supplied, it overrides - the current line number; it may be a list or tuple to indicate a - range of physical lines, or an integer for a single physical - line.""" - sys.stderr.write("warning: " + self.gen_error(msg, line) + "\n") - - def readline(self): # noqa: C901 - """Read and return a single logical line from the current file (or - from an internal buffer if lines have previously been "unread" - with 'unreadline()'). 
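A hedged usage sketch of the `TextFile` class defined in this module. The import path assumes the stdlib `distutils` copy (Python 3.11 or earlier); the vendored `setuptools._distutils.text_file` module that this diff removes provides the same class.

```python
import io
from distutils.text_file import TextFile  # stdlib copy; same API as the vendored module

source = io.StringIO(
    "alpha   # trailing comment\n"
    "\n"
    "beta \\\n"
    "gamma\n"
)
tf = TextFile(filename="<example>", file=source,
              strip_comments=1, skip_blanks=1, join_lines=1, rstrip_ws=1)
print(tf.readlines())   # ['alpha', 'beta gamma']
```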
If the 'join_lines' option is true, this - may involve reading multiple physical lines concatenated into a - single string. Updates the current line number, so calling - 'warn()' after 'readline()' emits a warning about the physical - line(s) just read. Returns None on end-of-file, since the empty - string can occur if 'rstrip_ws' is true but 'strip_blanks' is - not.""" - # If any "unread" lines waiting in 'linebuf', return the top - # one. (We don't actually buffer read-ahead data -- lines only - # get put in 'linebuf' if the client explicitly does an - # 'unreadline()'. - if self.linebuf: - line = self.linebuf[-1] - del self.linebuf[-1] - return line - - buildup_line = '' - - while True: - # read the line, make it None if EOF - line = self.file.readline() - if line == '': - line = None - - if self.strip_comments and line: - - # Look for the first "#" in the line. If none, never - # mind. If we find one and it's the first character, or - # is not preceded by "\", then it starts a comment -- - # strip the comment, strip whitespace before it, and - # carry on. Otherwise, it's just an escaped "#", so - # unescape it (and any other escaped "#"'s that might be - # lurking in there) and otherwise leave the line alone. - - pos = line.find("#") - if pos == -1: # no "#" -- no comments - pass - - # It's definitely a comment -- either "#" is the first - # character, or it's elsewhere and unescaped. - elif pos == 0 or line[pos - 1] != "\\": - # Have to preserve the trailing newline, because it's - # the job of a later step (rstrip_ws) to remove it -- - # and if rstrip_ws is false, we'd better preserve it! - # (NB. this means that if the final line is all comment - # and has no trailing newline, we will think that it's - # EOF; I think that's OK.) - eol = (line[-1] == '\n') and '\n' or '' - line = line[0:pos] + eol - - # If all that's left is whitespace, then skip line - # *now*, before we try to join it to 'buildup_line' -- - # that way constructs like - # hello \\ - # # comment that should be ignored - # there - # result in "hello there". - if line.strip() == "": - continue - else: # it's an escaped "#" - line = line.replace("\\#", "#") - - # did previous line end with a backslash? then accumulate - if self.join_lines and buildup_line: - # oops: end of file - if line is None: - self.warn("continuation line immediately precedes " "end-of-file") - return buildup_line - - if self.collapse_join: - line = line.lstrip() - line = buildup_line + line - - # careful: pay attention to line number when incrementing it - if isinstance(self.current_line, list): - self.current_line[1] = self.current_line[1] + 1 - else: - self.current_line = [self.current_line, self.current_line + 1] - # just an ordinary line, read it as usual - else: - if line is None: # eof - return None - - # still have to be careful about incrementing the line number! - if isinstance(self.current_line, list): - self.current_line = self.current_line[1] + 1 - else: - self.current_line = self.current_line + 1 - - # strip whitespace however the client wants (leading and - # trailing, or one or the other, or neither) - if self.lstrip_ws and self.rstrip_ws: - line = line.strip() - elif self.lstrip_ws: - line = line.lstrip() - elif self.rstrip_ws: - line = line.rstrip() - - # blank line (whether we rstrip'ed or not)? 
skip to next line - # if appropriate - if (line == '' or line == '\n') and self.skip_blanks: - continue - - if self.join_lines: - if line[-1] == '\\': - buildup_line = line[:-1] - continue - - if line[-2:] == '\\\n': - buildup_line = line[0:-2] + '\n' - continue - - # well, I guess there's some actual content there: return it - return line - - def readlines(self): - """Read and return the list of all logical lines remaining in the - current file.""" - lines = [] - while True: - line = self.readline() - if line is None: - return lines - lines.append(line) - - def unreadline(self, line): - """Push 'line' (a string) onto an internal buffer that will be - checked by future 'readline()' calls. Handy for implementing - a parser with line-at-a-time lookahead.""" - self.linebuf.append(line) diff --git a/spaces/Awesimo/jojogan/e4e/models/encoders/helpers.py b/spaces/Awesimo/jojogan/e4e/models/encoders/helpers.py deleted file mode 100644 index c4a58b34ea5ca6912fe53c63dede0a8696f5c024..0000000000000000000000000000000000000000 --- a/spaces/Awesimo/jojogan/e4e/models/encoders/helpers.py +++ /dev/null @@ -1,140 +0,0 @@ -from collections import namedtuple -import torch -import torch.nn.functional as F -from torch.nn import Conv2d, BatchNorm2d, PReLU, ReLU, Sigmoid, MaxPool2d, AdaptiveAvgPool2d, Sequential, Module - -""" -ArcFace implementation from [TreB1eN](https://github.com/TreB1eN/InsightFace_Pytorch) -""" - - -class Flatten(Module): - def forward(self, input): - return input.view(input.size(0), -1) - - -def l2_norm(input, axis=1): - norm = torch.norm(input, 2, axis, True) - output = torch.div(input, norm) - return output - - -class Bottleneck(namedtuple('Block', ['in_channel', 'depth', 'stride'])): - """ A named tuple describing a ResNet block. """ - - -def get_block(in_channel, depth, num_units, stride=2): - return [Bottleneck(in_channel, depth, stride)] + [Bottleneck(depth, depth, 1) for i in range(num_units - 1)] - - -def get_blocks(num_layers): - if num_layers == 50: - blocks = [ - get_block(in_channel=64, depth=64, num_units=3), - get_block(in_channel=64, depth=128, num_units=4), - get_block(in_channel=128, depth=256, num_units=14), - get_block(in_channel=256, depth=512, num_units=3) - ] - elif num_layers == 100: - blocks = [ - get_block(in_channel=64, depth=64, num_units=3), - get_block(in_channel=64, depth=128, num_units=13), - get_block(in_channel=128, depth=256, num_units=30), - get_block(in_channel=256, depth=512, num_units=3) - ] - elif num_layers == 152: - blocks = [ - get_block(in_channel=64, depth=64, num_units=3), - get_block(in_channel=64, depth=128, num_units=8), - get_block(in_channel=128, depth=256, num_units=36), - get_block(in_channel=256, depth=512, num_units=3) - ] - else: - raise ValueError("Invalid number of layers: {}. 
Must be one of [50, 100, 152]".format(num_layers)) - return blocks - - -class SEModule(Module): - def __init__(self, channels, reduction): - super(SEModule, self).__init__() - self.avg_pool = AdaptiveAvgPool2d(1) - self.fc1 = Conv2d(channels, channels // reduction, kernel_size=1, padding=0, bias=False) - self.relu = ReLU(inplace=True) - self.fc2 = Conv2d(channels // reduction, channels, kernel_size=1, padding=0, bias=False) - self.sigmoid = Sigmoid() - - def forward(self, x): - module_input = x - x = self.avg_pool(x) - x = self.fc1(x) - x = self.relu(x) - x = self.fc2(x) - x = self.sigmoid(x) - return module_input * x - - -class bottleneck_IR(Module): - def __init__(self, in_channel, depth, stride): - super(bottleneck_IR, self).__init__() - if in_channel == depth: - self.shortcut_layer = MaxPool2d(1, stride) - else: - self.shortcut_layer = Sequential( - Conv2d(in_channel, depth, (1, 1), stride, bias=False), - BatchNorm2d(depth) - ) - self.res_layer = Sequential( - BatchNorm2d(in_channel), - Conv2d(in_channel, depth, (3, 3), (1, 1), 1, bias=False), PReLU(depth), - Conv2d(depth, depth, (3, 3), stride, 1, bias=False), BatchNorm2d(depth) - ) - - def forward(self, x): - shortcut = self.shortcut_layer(x) - res = self.res_layer(x) - return res + shortcut - - -class bottleneck_IR_SE(Module): - def __init__(self, in_channel, depth, stride): - super(bottleneck_IR_SE, self).__init__() - if in_channel == depth: - self.shortcut_layer = MaxPool2d(1, stride) - else: - self.shortcut_layer = Sequential( - Conv2d(in_channel, depth, (1, 1), stride, bias=False), - BatchNorm2d(depth) - ) - self.res_layer = Sequential( - BatchNorm2d(in_channel), - Conv2d(in_channel, depth, (3, 3), (1, 1), 1, bias=False), - PReLU(depth), - Conv2d(depth, depth, (3, 3), stride, 1, bias=False), - BatchNorm2d(depth), - SEModule(depth, 16) - ) - - def forward(self, x): - shortcut = self.shortcut_layer(x) - res = self.res_layer(x) - return res + shortcut - - -def _upsample_add(x, y): - """Upsample and add two feature maps. - Args: - x: (Variable) top feature map to be upsampled. - y: (Variable) lateral feature map. - Returns: - (Variable) added feature map. - Note in PyTorch, when input size is odd, the upsampled feature map - with `F.upsample(..., scale_factor=2, mode='nearest')` - maybe not equal to the lateral feature map size. - e.g. - original input size: [N,_,15,15] -> - conv2d feature map size: [N,_,8,8] -> - upsampled feature map size: [N,_,16,16] - So we choose bilinear upsample which supports arbitrary output sizes. 
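A quick check of the docstring's point above about odd input sizes: a fixed `scale_factor=2` upsample of an 8x8 top map gives 16x16 and cannot be added to a 15x15 lateral map, while resizing to the lateral map's exact size works. A minimal PyTorch sketch:

```python
import torch
import torch.nn.functional as F

top = torch.randn(1, 8, 8, 8)        # top feature map to be upsampled
lateral = torch.randn(1, 8, 15, 15)  # lateral feature map from an odd-sized input

doubled = F.interpolate(top, scale_factor=2, mode='nearest')
fused = F.interpolate(top, size=lateral.shape[-2:], mode='bilinear',
                      align_corners=True) + lateral

print(doubled.shape)  # torch.Size([1, 8, 16, 16]) -- mismatched with 15x15
print(fused.shape)    # torch.Size([1, 8, 15, 15])
```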
- """ - _, _, H, W = y.size() - return F.interpolate(x, size=(H, W), mode='bilinear', align_corners=True) + y diff --git a/spaces/BIASLab/sars-cov-2-classification-fcgr/src/cgr.py b/spaces/BIASLab/sars-cov-2-classification-fcgr/src/cgr.py deleted file mode 100644 index 4516b958c0b7dbf4286c83880f2ae7c2b097ab96..0000000000000000000000000000000000000000 --- a/spaces/BIASLab/sars-cov-2-classification-fcgr/src/cgr.py +++ /dev/null @@ -1,77 +0,0 @@ -"From original work: CGR for gene structure" -from typing import Dict, Optional -from collections import namedtuple - -# coordinates for x+iy -Coord = namedtuple("Coord", ["x","y"]) - -# coordinates for a CGR encoding -CGRCoords = namedtuple("CGRCoords", ["N","x","y"]) - -# coordinates for each nucleotide in the 2d-plane -DEFAULT_COORDS = dict(A=Coord(1,1),C=Coord(-1,1),G=Coord(-1,-1),T=Coord(1,-1)) - -class CGR: - "Chaos Game Representation for DNA" - def __init__(self, coords: Optional[Dict[chr,tuple]]=None): - self.nucleotide_coords = DEFAULT_COORDS if coords is None else coords - self.cgr_coords = CGRCoords(0,0,0) - - def nucleotide_by_coords(self,x,y): - "Get nucleotide by coordinates (x,y)" - # filter nucleotide by coordinates - filtered = dict(filter(lambda item: item[1] == Coord(x,y), self.nucleotide_coords.items())) - - return list(filtered.keys())[0] - - def forward(self, nucleotide: str): - "Compute next CGR coordinates" - x = (self.cgr_coords.x + self.nucleotide_coords.get(nucleotide).x)/2 - y = (self.cgr_coords.y + self.nucleotide_coords.get(nucleotide).y)/2 - - # update cgr_coords - self.cgr_coords = CGRCoords(self.cgr_coords.N+1,x,y) - - def backward(self,): - "Compute last CGR coordinates. Current nucleotide can be inferred from (x,y)" - # get current nucleotide based on coordinates - n_x,n_y = self.coords_current_nucleotide() - nucleotide = self.nucleotide_by_coords(n_x,n_y) - - # update coordinates to the previous one - x = 2*self.cgr_coords.x - n_x - y = 2*self.cgr_coords.y - n_y - - # update cgr_coords - self.cgr_coords = CGRCoords(self.cgr_coords.N-1,x,y) - - return nucleotide - - def coords_current_nucleotide(self,): - x = 1 if self.cgr_coords.x>0 else -1 - y = 1 if self.cgr_coords.y>0 else -1 - return x,y - - def encode(self, sequence: str): - "From DNA sequence to CGR" - # reset starting position to (0,0,0) - self.reset_coords() - for nucleotide in sequence: - self.forward(nucleotide) - return self.cgr_coords - - def reset_coords(self,): - self.cgr_coords = CGRCoords(0,0,0) - - def decode(self, N:int, x:int, y:int)->str: - "From CGR to DNA sequence" - self.cgr_coords = CGRCoords(N,x,y) - - # decoded sequence - sequence = [] - - # Recover the entire genome - while self.cgr_coords.N>0: - nucleotide = self.backward() - sequence.append(nucleotide) - return "".join(sequence[::-1]) \ No newline at end of file diff --git a/spaces/Banbri/zcvzcv/src/app/interface/display/index.tsx b/spaces/Banbri/zcvzcv/src/app/interface/display/index.tsx deleted file mode 100644 index 26ba8d02a6afd446981aeca6c1c24b267ab467f1..0000000000000000000000000000000000000000 --- a/spaces/Banbri/zcvzcv/src/app/interface/display/index.tsx +++ /dev/null @@ -1,12 +0,0 @@ -import { RenderedScene } from "@/types" - -export function Display ({ rendered }: { rendered: RenderedScene }) { - return ( - <> - - - ) -} \ No newline at end of file diff --git a/spaces/Bart92/RVC_HF/lib/uvr5_pack/lib_v5/nets_123812KB.py b/spaces/Bart92/RVC_HF/lib/uvr5_pack/lib_v5/nets_123812KB.py deleted file mode 100644 index 
becbfae85683a13bbb19d3ea6c840da24e61e01e..0000000000000000000000000000000000000000 --- a/spaces/Bart92/RVC_HF/lib/uvr5_pack/lib_v5/nets_123812KB.py +++ /dev/null @@ -1,122 +0,0 @@ -import torch -from torch import nn -import torch.nn.functional as F - -from . import layers_123821KB as layers - - -class BaseASPPNet(nn.Module): - def __init__(self, nin, ch, dilations=(4, 8, 16)): - super(BaseASPPNet, self).__init__() - self.enc1 = layers.Encoder(nin, ch, 3, 2, 1) - self.enc2 = layers.Encoder(ch, ch * 2, 3, 2, 1) - self.enc3 = layers.Encoder(ch * 2, ch * 4, 3, 2, 1) - self.enc4 = layers.Encoder(ch * 4, ch * 8, 3, 2, 1) - - self.aspp = layers.ASPPModule(ch * 8, ch * 16, dilations) - - self.dec4 = layers.Decoder(ch * (8 + 16), ch * 8, 3, 1, 1) - self.dec3 = layers.Decoder(ch * (4 + 8), ch * 4, 3, 1, 1) - self.dec2 = layers.Decoder(ch * (2 + 4), ch * 2, 3, 1, 1) - self.dec1 = layers.Decoder(ch * (1 + 2), ch, 3, 1, 1) - - def __call__(self, x): - h, e1 = self.enc1(x) - h, e2 = self.enc2(h) - h, e3 = self.enc3(h) - h, e4 = self.enc4(h) - - h = self.aspp(h) - - h = self.dec4(h, e4) - h = self.dec3(h, e3) - h = self.dec2(h, e2) - h = self.dec1(h, e1) - - return h - - -class CascadedASPPNet(nn.Module): - def __init__(self, n_fft): - super(CascadedASPPNet, self).__init__() - self.stg1_low_band_net = BaseASPPNet(2, 32) - self.stg1_high_band_net = BaseASPPNet(2, 32) - - self.stg2_bridge = layers.Conv2DBNActiv(34, 16, 1, 1, 0) - self.stg2_full_band_net = BaseASPPNet(16, 32) - - self.stg3_bridge = layers.Conv2DBNActiv(66, 32, 1, 1, 0) - self.stg3_full_band_net = BaseASPPNet(32, 64) - - self.out = nn.Conv2d(64, 2, 1, bias=False) - self.aux1_out = nn.Conv2d(32, 2, 1, bias=False) - self.aux2_out = nn.Conv2d(32, 2, 1, bias=False) - - self.max_bin = n_fft // 2 - self.output_bin = n_fft // 2 + 1 - - self.offset = 128 - - def forward(self, x, aggressiveness=None): - mix = x.detach() - x = x.clone() - - x = x[:, :, : self.max_bin] - - bandw = x.size()[2] // 2 - aux1 = torch.cat( - [ - self.stg1_low_band_net(x[:, :, :bandw]), - self.stg1_high_band_net(x[:, :, bandw:]), - ], - dim=2, - ) - - h = torch.cat([x, aux1], dim=1) - aux2 = self.stg2_full_band_net(self.stg2_bridge(h)) - - h = torch.cat([x, aux1, aux2], dim=1) - h = self.stg3_full_band_net(self.stg3_bridge(h)) - - mask = torch.sigmoid(self.out(h)) - mask = F.pad( - input=mask, - pad=(0, 0, 0, self.output_bin - mask.size()[2]), - mode="replicate", - ) - - if self.training: - aux1 = torch.sigmoid(self.aux1_out(aux1)) - aux1 = F.pad( - input=aux1, - pad=(0, 0, 0, self.output_bin - aux1.size()[2]), - mode="replicate", - ) - aux2 = torch.sigmoid(self.aux2_out(aux2)) - aux2 = F.pad( - input=aux2, - pad=(0, 0, 0, self.output_bin - aux2.size()[2]), - mode="replicate", - ) - return mask * mix, aux1 * mix, aux2 * mix - else: - if aggressiveness: - mask[:, :, : aggressiveness["split_bin"]] = torch.pow( - mask[:, :, : aggressiveness["split_bin"]], - 1 + aggressiveness["value"] / 3, - ) - mask[:, :, aggressiveness["split_bin"] :] = torch.pow( - mask[:, :, aggressiveness["split_bin"] :], - 1 + aggressiveness["value"], - ) - - return mask * mix - - def predict(self, x_mag, aggressiveness=None): - h = self.forward(x_mag, aggressiveness) - - if self.offset > 0: - h = h[:, :, :, self.offset : -self.offset] - assert h.size()[3] > 0 - - return h diff --git a/spaces/Bazedgul/YoutubeVideo-Transcript-Summarization/app.py b/spaces/Bazedgul/YoutubeVideo-Transcript-Summarization/app.py deleted file mode 100644 index 
a0aef7b442989179e45a73215dada86526eefd3c..0000000000000000000000000000000000000000 --- a/spaces/Bazedgul/YoutubeVideo-Transcript-Summarization/app.py +++ /dev/null @@ -1,23 +0,0 @@ -import gradio as gr -import transformers -import youtube_transcript_api -from transformers import pipeline -from youtube_transcript_api import YouTubeTranscriptApi -from datasets import Dataset - -summarizer = pipeline("summarization",model="facebook/bart-large-cnn") - -def greet(link): - try: - unique_id = link.split("=")[-1] - sub = YouTubeTranscriptApi.get_transcript(unique_id) - subtitle = " ".join([w['text'] for w in sub]) - summary = summarizer(subtitle, max_length=180, min_length=30, do_sample=False) - return summary[0]['summary_text'] - except: - return 'Invalid URL' - -demo=gr.Interface(fn=greet, inputs="text", outputs="text") - -if __name__ == "__main__": - demo.launch() \ No newline at end of file diff --git a/spaces/Benson/text-generation/Examples/Aplicacin Descargar Tirador De Burbujas.md b/spaces/Benson/text-generation/Examples/Aplicacin Descargar Tirador De Burbujas.md deleted file mode 100644 index 1dffc2723d922c7ff468213d0df5b3080ac9b569..0000000000000000000000000000000000000000 --- a/spaces/Benson/text-generation/Examples/Aplicacin Descargar Tirador De Burbujas.md +++ /dev/null @@ -1,94 +0,0 @@ -
    -

    App Download Bubble Shooter: Cómo jugar y disfrutar de este divertido juego

    -

    Si usted está buscando un juego de puzzle divertido y adictivo que puede mantenerlo entretenido durante horas, usted debe probar juegos de disparos de burbujas. Los juegos de disparos de burbujas son juegos simples pero desafiantes que requieren que coincidas y hagas estallar burbujas coloridas en la pantalla. En este artículo, te contaremos todo lo que necesitas saber sobre los juegos de disparos de burbujas, cómo descargarlos en tu dispositivo, cómo jugarlos y ganar niveles, y por qué deberías jugarlos.

    -

    ¿Qué es Bubble Shooter?

    -

    Bubble shooter es un tipo de juego de puzzle que consiste en disparar burbujas desde un cañón o un lanzador en la parte inferior de la pantalla. El objetivo es hacer coincidir tres o más burbujas del mismo color para hacerlas estallar y limpiar el tablero. El juego termina cuando se borran todas las burbujas o cuando las burbujas llegan a la parte inferior de la pantalla.

    -

    aplicación descargar tirador de burbujas


    Downloadhttps://bltlly.com/2v6LZ8



    -

    La historia y popularidad del juego

    -

    Los juegos de disparos de burbujas tienen una larga historia que se remonta a la década de 1980. El primer juego de disparos de burbujas se llamó Puzzle Bobble, que fue lanzado por Taito en 1986. Fue un spin-off del popular juego de árcade Bubble Bobble, que contó con dos lindos dragones llamados Bub y Bob. Puzzle Bobble se convirtió en un gran éxito en Japón y más tarde en otros países, generando muchas secuelas y clones.

    -

    Desde entonces, los juegos de disparos de burbujas han evolucionado y mejorado, añadiendo nuevas características, gráficos, sonidos y elementos de juego. Hoy en día, los juegos de disparos de burbujas son uno de los géneros más populares de los juegos casuales, con millones de jugadores en todo el mundo. Puedes encontrar cientos de juegos de disparos de burbujas en varias plataformas, como navegadores web, dispositivos móviles, consolas y computadoras.

    -

    La jugabilidad y características del juego

    - -

    Los juegos de disparos de burbujas también tienen varias características que los hacen más divertidos y emocionantes. Por ejemplo, algunos juegos tienen diferentes niveles con diferentes diseños, obstáculos, metas y dificultades. Algunos juegos tienen diferentes modos de juego, como el modo puzzle, el modo árcade, el modo clásico, etc. Algunos juegos tienen potenciadores y potenciadores que pueden ayudarte a limpiar el tablero más rápido o superar desafíos. Algunos juegos tienen tablas de clasificación, logros, recompensas, bonos diarios, etc.

    -

    Cómo descargar Bubble Shooter en su dispositivo?

    -

    Si quieres jugar juegos de disparos de burbujas en tu dispositivo, necesitas descargarlos desde la tienda de aplicaciones o el navegador web. Aquí hay algunos pasos sobre cómo descargar juegos de disparos de burbujas en su dispositivo:

    -

    Para usuarios de Android

    -
      -
    • Ir a Google Play Store en su dispositivo.
    • -
    • Buscar "tirador de burbujas" o cualquier juego de disparos de burbujas específico que desea jugar.
    • -
    • Seleccione el juego que desea descargar y toque en "Instalar".
    • -
    • Espere a que finalice el proceso de descarga e instalación.
    • -
    • ¡Abre el juego y disfruta!
    • -
    -

    Para usuarios de iOS

    -
      -
    • Ir a App Store en su dispositivo.
    • -
    • Buscar "tirador de burbujas" o cualquier juego de disparos de burbujas específico que desea jugar.
    • -
    • Seleccione el juego que desea descargar y toque en "Obtener".
    • -
    • Introduzca su ID de Apple y contraseña si se le solicita.
    • -
    • Espere a que finalice el proceso de descarga e instalación.
    • -
    • ¡Abre el juego y disfruta!
    • -
    -

    Cómo jugar Bubble Shooter y ganar niveles?

    -

    Ahora que has descargado juegos de disparos de burbujas en tu dispositivo, estás listo para jugar y divertirte. Pero, ¿cómo se juega juegos de burbujas tirador y ganar niveles? Aquí hay algunos consejos y trucos sobre cómo jugar juegos de burbujas tirador y ganar niveles:

    -

    Las reglas y consejos básicos

    -
      -
    • La regla básica de los juegos de disparos de burbujas es hacer coincidir tres o más burbujas del mismo color para hacerlas estallar y limpiar el tablero.
    • - -
    • Puedes ver la siguiente burbuja en tu lanzador y planificar tus movimientos en consecuencia. También puede intercambiar la burbuja actual con la siguiente pulsando o haciendo clic en el lanzador.
    • -
    • Deberías intentar hacer estallar tantas burbujas como sea posible con cada toma, ya que esto te dará más puntos y despejará el tablero más rápido.
    • -
    • También debe tratar de hacer estallar las burbujas que están sosteniendo otras burbujas, ya que esto hará que caigan y pop, así, creando una reacción en cadena.
    • -
    • Debes evitar disparar burbujas que no coincidan con ningún color en el tablero, ya que esto agregará más burbujas y hará que el tablero esté más lleno.
    • -
    • También debe prestar atención a la línea de fondo del tablero, ya que esto indica lo cerca que están las burbujas para llegar a la parte inferior de la pantalla. Si las burbujas tocan la línea de fondo, perderás el juego.
    • -
    -

    Los diferentes modos de juego y desafíos

    -

    Los juegos de disparos de burbujas tienen diferentes modos de juego y desafíos que pueden hacer que el juego sea más interesante y desafiante. Algunos de los modos de juego y desafíos comunes son:

    -
      -
    • Modo de rompecabezas: En este modo, usted tiene un número limitado de burbujas para disparar y un objetivo específico para lograr, como limpiar un cierto número de burbujas, hacer estallar un cierto color de burbujas, liberar a los animales atrapados, etc. Necesitas usar tus habilidades de estrategia y lógica para completar cada nivel.
    • -
    • Modo árcade: En este modo, tienes burbujas ilimitadas para disparar, pero el tablero sigue bajando con cada disparo. Necesitas ser rápido y preciso para limpiar el tablero antes de que llegue a la parte inferior de la pantalla.
    • -
    • Modo clásico: En este modo, tienes un juego clásico de disparos de burbujas sin características especiales ni objetivos. Solo tienes que borrar todas las burbujas en el tablero y anotar tantos puntos como sea posible.
    • - -
    -

    Los potenciadores y potenciadores

    -

    Los juegos de disparos de burbujas también tienen potenciadores y potenciadores que pueden ayudarte a limpiar el tablero más rápido o superar desafíos. Algunos de los potenciadores y potenciadores comunes son:

    -
      -
    • Bomba: Este encendido puede explotar y estallar todas las burbujas en un gran radio a su alrededor. Puedes usarlo para limpiar una gran área de burbujas o romper obstáculos.
    • -
    • Bola de fuego: Este encendido puede quemar y hacer estallar todas las burbujas en una línea recta. Puede usarlo para limpiar una larga fila de burbujas o alcanzar puntos difíciles de conseguir.
    • -
    • Arco iris: Este encendido puede cambiar su color para que coincida con cualquier burbuja en el tablero. Puede usarlo para crear coincidencias con cualquier color de burbujas o objetivos específicos de color completos.
    • -
    • Cambiador de color: Este amplificador puede cambiar el color de todas las burbujas en el tablero a un color. Puede usarlo para borrar todas las burbujas en el tablero o crear reacciones en cadena masivas.
    • -
    • Burbujas adicionales: Este refuerzo puede darle burbujas adicionales para disparar. Puede usarlo cuando se quede sin burbujas o necesite más opciones.
    • -
    • Otros boosters: Algunos juegos de disparos de burbujas tienen otros boosters que pueden variar dependiendo del tema o estilo del juego. Por ejemplo, algunos juegos tienen imanes, láseres, estrellas, etc.
    • -
    -

    ¿Por qué debería jugar Bubble Shooter?

    -

    Los juegos de disparos de burbujas no solo son divertidos y entretenidos, sino también beneficiosos para su cerebro y estado de ánimo. Aquí hay algunas razones por las que debe jugar juegos de disparos de burbujas:

    -

    -

    Los beneficios de jugar juegos de disparos de burbujas

    -
      -
    • Los juegos de disparos de burbujas pueden mejorar tus habilidades cognitivas, como la memoria, la concentración, la atención, la resolución de problemas y la lógica. Estas habilidades son esenciales para su salud mental y rendimiento en las tareas y actividades diarias.
    • - -
    • Los juegos de disparos de burbujas también pueden aumentar su estado de ánimo y reducir su estrés, ya que pueden proporcionarle una sensación de logro, satisfacción y relajación. También pueden distraerte de pensamientos y emociones negativas y ayudarte a lidiar con la ansiedad y la depresión.
    • -
    -

    Los aspectos divertidos y relajantes del juego

    -
      -
    • Los juegos de disparos de burbujas son divertidos y relajantes porque son fáciles de jugar y agradables de ver. Puedes jugar en cualquier momento y en cualquier lugar, ya que no requieren mucho tiempo o esfuerzo. También puede jugar a su propio ritmo y nivel de dificultad, ya que tienen varias opciones y ajustes para adaptarse a sus preferencias.
    • -
    • Los juegos de disparos de burbujas también son divertidos y relajantes porque tienen gráficos coloridos, sonidos alegres y personajes lindos. Puede admirar las hermosas burbujas y fondos, escuchar la música relajante y efectos de sonido, e interactuar con los adorables animales y criaturas.
    • -
    • Los juegos de disparos de burbujas también son divertidos y relajantes porque tienen infinitas posibilidades y variaciones. Nunca puedes aburrirte o quedarte sin niveles para jugar, ya que tienen cientos o miles de niveles con diferentes objetivos y desafíos. También puedes probar diferentes modos de juego y potenciadores para darle vida a tu juego.
    • -
    -

    Los aspectos sociales y competitivos del juego

    -
      -
    • Los juegos de disparos de burbujas son sociales y competitivos porque te permiten conectarte e interactuar con otros jugadores de todo el mundo. Usted puede jugar con o contra sus amigos o familiares, o unirse a una comunidad de fans del tirador de burbujas. También puedes chatear, compartir, comentar y seguir a otros jugadores en las plataformas de redes sociales.
    • - -
    • Los juegos de disparos de burbujas también son sociales y competitivos porque te motivan a mejorar tus habilidades y rendimiento en el juego. Puedes aprender de las estrategias y consejos de otros jugadores, o buscar comentarios y consejos de ellos. También puede establecer sus propios objetivos y recompensas, o ganar recompensas del juego en sí.
    • -
    -

    Conclusión

    -

    Los juegos de disparos de burbujas son uno de los mejores tipos de juegos de puzzle que puedes jugar en tu dispositivo. Son divertidos, adictivos, desafiantes, beneficiosos, relajantes, sociales y competitivos. Pueden mantenerte entretenido durante horas y hacerte feliz e inteligente. Si todavía no has probado los juegos de disparos de burbujas, deberías descargarlos ahora y disfrutar de este increíble juego.

    -

    Preguntas frecuentes

    -
      -
    • P: ¿Cómo puedo descargar juegos de disparos de burbujas en mi dispositivo?
      A: Puedes descargar juegos de disparos de burbujas en tu dispositivo desde la tienda de aplicaciones o el navegador web. Solo tienes que buscar "bubble shooter" o cualquier juego de burbujas específico que quieras jugar, seleccionar el juego que quieres descargar y seguir las instrucciones en la pantalla.
    • -
    • Q: ¿Cómo puedo jugar juegos de disparos de burbujas?
      A: Para jugar juegos de disparos de burbujas, es necesario disparar burbujas desde un lanzador en la parte inferior de la pantalla en el racimo de burbujas en la parte superior de la pantalla. El objetivo es hacer coincidir tres o más burbujas del mismo color para hacerlas estallar y limpiar el tablero.
    • -
    • P: ¿Cuáles son algunos consejos y trucos para ganar juegos de disparos de burbujas?
      A: Algunos consejos y trucos para ganar juegos de tirador de burbujas son: apuntar cuidadosamente, utilizar las paredes para rebotar sus burbujas, planificar sus movimientos por delante, pop tantas burbujas como sea posible con cada disparo, burbujas pop que están sosteniendo otras burbujas, evitar disparar burbujas que no coinciden con ningún color en el tablero, prestar atención a la línea de fondo del tablero, utilizar potenciadores y potenciadores sabiamente, probar diferentes modos de juego y desafíos, y divertirse!
    • - -
    • P: ¿Cuáles son algunos de los beneficios de jugar juegos de burbujas?
      A: Jugar juegos de disparos de burbujas puede mejorar tus habilidades cognitivas, mejorar tu creatividad e imaginación, aumentar tu estado de ánimo y reducir el estrés, y conectarte con otros jugadores.
    • -

    64aa2da5cf
    -
    -
    \ No newline at end of file diff --git a/spaces/Benson/text-generation/Examples/Descargar 2016 Dj Mix.md b/spaces/Benson/text-generation/Examples/Descargar 2016 Dj Mix.md deleted file mode 100644 index f5075de77920cc2d53b1de54a7c2b5dfb78ce54c..0000000000000000000000000000000000000000 --- a/spaces/Benson/text-generation/Examples/Descargar 2016 Dj Mix.md +++ /dev/null @@ -1,131 +0,0 @@ - -

    Cómo descargar SketchUp 2020 gratis

    -

    Si está buscando un software de modelado 3D potente y fácil de usar, es posible que desee probar SketchUp 2020. SketchUp es una herramienta popular para crear, editar y compartir modelos 3D para diversos fines, como arquitectura, diseño de interiores, ingeniería, paisajismo, juegos y más. En este artículo, te mostraremos cómo descargar SketchUp 2020 de forma gratuita e instalarlo en tu ordenador. También le daremos una visión general de las características y mejoras de SketchUp 2020, así como algunos consejos sobre cómo comenzar a usarlo y aprender más sobre él.

    -

    Qué es SketchUp 2020 y por qué deberías probarlo

    -

    SketchUp 2020 es la última versión de SketchUp Pro, la versión premium de SketchUp que ofrece características y capacidades más avanzadas que la versión gratuita basada en la web. SketchUp Pro es un software de suscripción que cuesta $299 por año o $41.66 por mes. Sin embargo, también puede descargar una versión de prueba gratuita de SketchUp Pro durante 30 días y utilizar todas sus funciones sin limitaciones.

    -

    descargar 2016 dj mix


    Download File ——— https://bltlly.com/2v6Lo9



    -

    Características y mejoras de SketchUp 2020

    -

    SketchUp 2020 viene con muchas características y mejoras que lo hacen más intuitivo, eficiente y divertido de usar. Algunos de los aspectos más destacados son:

    -
      -
    • Outliner: Ahora puede organizar mejor su modelo usando Outliner para alternar entre la visibilidad de grupos y componentes. También puede guardar el estado de visibilidad de los objetos ocultos por escena.
    • -
    • Empuñaduras en cajas delimitadoras: Ahora puedes mover y girar objetos fácilmente usando empuñaduras en sus cajas delimitadoras. También puede recorrer diferentes puntos de agarre presionando la tecla de flecha hacia abajo.
    • -
    • Objetos ocultos: Ahora puede editar objetos ocultos seleccionándolos en Outliner. Aparecerán como una malla que puedes modificar sin afectar a otros objetos visibles.
    • - -
    • LayOut: Ahora puede crear documentos más profesionales con LayOut, la aplicación complementaria de SketchUp que le permite crear presentaciones 2D a partir de sus modelos 3D. Algunas de las nuevas características incluyen control de peso de línea mejorado, mejor soporte de DWG, estilos de línea personalizados y más.
    • -
    -

    Puede obtener más información sobre las nuevas características y mejoras de SketchUp 2020 de estas fuentes .

    -

    Requisitos del sistema SketchUp 2020 y compatibilidad

    -

    Antes de descargar SketchUp 2020, debe asegurarse de que su computadora cumple con los requisitos mínimos o recomendados del sistema para su funcionamiento sin problemas. Estos son los requisitos del sistema para los sistemas operativos Windows y Mac :

| | Mínimo | Recomendado |
| --- | --- | --- |
| CPU | Procesador de 1 GHz o de generación actual procesador Apple M1 | Procesador de 2+ GHz o de generación actual procesador Apple M1 |
| GPU | Tarjeta de video de clase 3D con 512 MB de memoria y admite aceleración de hardware | Tarjeta de video de clase 3D con 1 GB de memoria y admite aceleración de hardware |
| RAM | 4 GB | 8+ GB |
| Almacenamiento | 500 MB de espacio en disco disponible | 700+ MB de espacio en disco disponible |
| OS | Windows 10, Windows 8+, Windows 7, macOS 10.15+ (Catalina), macOS 10.14+ (Mojave), macOS 10.13+ (High Sierra) | Windows 10, macOS 11+ (Big Sur), macOS 10.15+ (Catalina) |
| Internet | Se requiere una conexión a Internet para instalar y autorizar SketchUp y usar algunas de las funciones. | Se requiere una conexión a Internet para instalar y autorizar SketchUp y usar algunas de las características. |

    También debe comprobar la compatibilidad de SketchUp 2020 con otros programas y extensiones que utiliza, como motores de renderizado, programas CAD, herramientas BIM, etc. Puede encontrar una lista de software y extensiones compatibles aquí.

    - -

    Ahora que sabes lo que es SketchUp 2020 y lo que puede hacer, es posible que se pregunte cómo descargarlo de forma gratuita. Hay dos formas de hacerlo: descargando la versión de prueba gratuita desde el sitio web oficial o descargando el instalador sin conexión para Windows.

    -

    -

    Descargar SketchUp 2020 prueba gratuita desde el sitio web oficial

    -

    La forma más fácil de descargar SketchUp 2020 gratis es obtener la prueba gratuita desde el sitio web oficial. Estos son los pasos para hacer eso:

    -
      -
    1. Vaya al sitio web de SketchUp y haga clic en el botón Descargar prueba gratuita.
    2. -
    3. Seleccione su industria, rol y nivel de experiencia en los menús desplegables y haga clic en Continuar.
    4. -
    5. Ingrese su nombre, dirección de correo electrónico, país y acepte los términos y condiciones. Luego haga clic en Enviar.
    6. -
    7. Recibirá un correo electrónico con un enlace para descargar SketchUp 2020. Haga clic en el enlace y guarde el archivo en su computadora.
    8. -
    9. Has descargado SketchUp 2020 gratis. Ahora puedes instalarlo en tu ordenador.
    10. -
    -

    Tenga en cuenta que la prueba gratuita caducará después de 30 días y tendrá que comprar una suscripción o iniciar sesión con una cuenta existente para continuar usando SketchUp Pro.

    -

    Descargar SketchUp 2020 instalador sin conexión para Windows

    -

    Si prefiere descargar el instalador sin conexión SketchUp 2020 para Windows, puede hacerlo siguiendo estos pasos:

    -
      -
    1. Ve a esta página y desplázate hacia abajo para encontrar la sección SketchUp Pro 2020 - Windows (64 bits).
    2. -
    3. Haga clic en el botón Descargar ahora y guarde el archivo en su computadora.
    4. -
    5. Ha descargado correctamente el instalador sin conexión SketchUp 2020 para Windows. Ahora puede instalarlo en su computadora.
    6. -
    -

    Tenga en cuenta que este método solo funciona para los usuarios de Windows y que todavía necesitará una conexión a Internet para activar su licencia o iniciar sesión con su cuenta.

    -

    Cómo instalar SketchUp 2020 en su computadora

    - -

    Ejecute el instalador de SketchUp 2020 y siga las instrucciones

    -

    Si descargó la versión de prueba gratuita desde el sitio web oficial, tendrá un archivo llamado SketchUpPro-2020-en.exe. Si ha descargado el instalador sin conexión para Windows, tendrá un archivo llamado SketchUpPro-2020-2-172-22215-en-x64.exe. Haga doble clic en el archivo para ejecutar el instalador y siga las instrucciones en la pantalla. Es posible que necesite otorgar permiso o ingresar su contraseña si su sistema se lo solicita.

    -

    El instalador le guiará a través del proceso de instalación, que puede tardar unos minutos dependiendo de su sistema. Puede elegir la carpeta de destino, el idioma, los componentes y los accesos directos para SketchUp 2020. También puede optar por instalar LayOut, Style Builder y Trimble Connect si lo desea.

    -

    Cuando la instalación esté completa, haga clic en Finalizar.

    Activate your SketchUp 2020 license or sign in with your account

    After installing SketchUp 2020 on your computer, you need to activate your license or sign in with your account before using it. These are the steps:

    1. Launch SketchUp 2020 from your desktop or Start menu.
    2. You will see a welcome screen with two options: Start Trial and Sign In.
    3. To use the free trial, click Start Trial. You will see a countdown of the days remaining in your trial period, and you can use every SketchUp Pro feature for 30 days without restriction.
    4. If you have a subscription or a classic license, click Sign In. You will be redirected to a web page where you can enter your email address and password; if you do not have an account, you can create one for free.
    5. After signing in, you will see a confirmation that your license is activated or that you are signed in with your account. You can now use SketchUp 2020 according to your plan.

    How to start using SketchUp 2020 and learn more about it

    Now that you have downloaded and installed SketchUp 2020 for free, you may be wondering how to start using it and how to learn more. Here are some tips to get you going.

    Launch SketchUp 2020 and explore the interface and tools

    When you launch SketchUp 2020, you will see a blank workspace with a default 3D model of a person. You can use the mouse and keyboard to navigate around the model and zoom in and out, and you can change the perspective to view it from different angles.

    You will also see a toolbar at the top of the screen with various tools and icons. Use these tools to create, modify, measure, and annotate your 3D models; more tools are available from the drop-down menus or by right-clicking the model.

    You can customize the toolbar by adding or removing tools, changing their order, or docking them in different locations, and you can switch between tool sets by clicking the arrow icon at the far right of the toolbar.

    You can also open other panels and windows from the Window menu, such as Outliner, Entity Info, Layers, Materials, Styles, and Scenes. These panels help you organize, edit, and enhance your 3D models.

    Access SketchUp 2020 help, tutorials, and community resources

    If you need help or guidance on using SketchUp 2020, several resources are available from the Help menu inside the program, including:

    • SketchUp Help Center: the official online help center for SketchUp, with articles, videos, tips, and FAQs on a wide range of SketchUp topics. You can search for specific topics or browse by category.
    • SketchUp Forum: the official online community for SketchUp users, where you can ask questions, share ideas, get feedback, and learn from other users. You can join existing discussions or start your own.
    • SketchUp YouTube Channel: the official YouTube channel for SketchUp, featuring videos on new features, tips and tricks, case studies, live events, and more.

    You can also find further resources from other sources, such as blogs, podcasts, books, and magazines that cover SketchUp and related topics.

    Conclusion and FAQs

    In this article, we have shown you how to download SketchUp 2020 for free and install it on your computer. We have also given you an overview of SketchUp 2020's features and improvements, along with some tips on getting started and learning more about it.

    We hope this article has been helpful and that you enjoyed learning about SketchUp 2020. It is a powerful, easy-to-use 3D modeling program suited to many purposes, including architecture, interior design, engineering, landscaping, and games. You can download SketchUp 2020 for free, use it for 30 days without limitations, and draw on a range of resources to learn and improve your skills.

    If you have any questions or comments about SketchUp 2020, feel free to leave a comment below or contact us through our website. We would love to hear from you and help with your 3D modeling needs.

    Here are some frequently asked questions you may find useful:

    Q: How do I uninstall SketchUp 2020 from my computer?

    A: To uninstall SketchUp 2020, follow these steps:

    1. Go to the Control Panel on your computer and select Programs and Features.
    2. Select SketchUp 2020 in the list, click the Uninstall button, and follow the on-screen instructions.
    3. That is it: SketchUp 2020 has been uninstalled from your computer.

    Q: How do I update SketchUp 2020 to the latest version?

    A: To update SketchUp 2020 to the latest version, follow these steps:

    1. Launch SketchUp 2020 on your computer and go to the Help menu.
    2. Select Check for Update and wait a few seconds.
    3. If a new version is available, you will see a message with a download link. Click the link and save the file to your computer.
    4. Run the file and follow the on-screen instructions to install the update.
    5. That is it: SketchUp 2020 has been updated to the latest version.

    Q: How do I export my SketchUp 2020 model to other formats?

    A: To export your SketchUp 2020 model to another format, such as PDF, DWG, or STL, follow these steps:

    1. Select the model, or the part of the model, that you want to export in SketchUp 2020.
    2. Go to the File menu and select Export.
    3. Choose the target format from the submenu. For example, to export your model as a PDF file, select 2D Graphic and then choose PDF.
    4. In the dialog box that appears, choose the name, location, and options for the exported file, then click Export.
    5. That is it: your SketchUp 2020 model has been exported to another format.

    Q: How do I import other models or files into SketchUp 2020?

    A: To import other models or files into SketchUp 2020, such as images, CAD files, or 3D models, follow these steps:

    1. Go to the File menu and select Import.
    2. Choose the file you want to import and any relevant options, then click Import.
    3. That is it: the model or file has been imported into SketchUp 2020, and you can move, scale, rotate, or edit it as you wish.

    Q: How do I share my SketchUp 2020 model with others?

    A: To share your SketchUp 2020 model with others, follow these steps:

    1. Select the model, or the part of the model, that you want to share in SketchUp 2020.
    2. Go to the File menu and select Share Model.
    3. In the dialog box that appears, sign in with your Trimble account or create one for free. You will also see options for your shared model, such as title, description, tags, and privacy.
    4. Click Upload.
    5. That is it: your SketchUp 2020 model has been shared. You will see a link to your model in the 3D Warehouse, where it can be viewed, downloaded, or embedded, and you can share that link with others via email, social media, or other platforms.

    -
    -
    \ No newline at end of file diff --git a/spaces/Benson/text-generation/Examples/Descargar Cinco Noches En Freddy 39s 3 Apk.md b/spaces/Benson/text-generation/Examples/Descargar Cinco Noches En Freddy 39s 3 Apk.md deleted file mode 100644 index ffba4ee2aef04f688a985d3ce71978869c81dc04..0000000000000000000000000000000000000000 --- a/spaces/Benson/text-generation/Examples/Descargar Cinco Noches En Freddy 39s 3 Apk.md +++ /dev/null @@ -1,101 +0,0 @@ - -

    How to download the Five Nights at Freddy's 3 APK

    If you are a fan of horror games, you may have heard of Five Nights at Freddy's, a popular series of survival horror games developed by Scott Cawthon. The third installment in the series, Five Nights at Freddy's 3, was released in 2015 and received positive reviews from critics and players alike. In this article, we will explain what Five Nights at Freddy's 3 is, what an APK file is, and how to download and install the Five Nights at Freddy's 3 APK on your devices.

    What is Five Nights at Freddy's 3?

    The game's plot and gameplay

    Five Nights at Freddy's 3 takes place thirty years after the events of the first game, in a horror attraction called "Fazbear's Fright". The player takes the role of a security guard who must survive five nights (plus a bonus sixth night) while being hunted by a decrepit animatronic called Springtrap and by phantom versions of the original animatronics. The player must monitor two sets of cameras, one for the rooms and hallways and one for the ventilation ducts, and use devices such as audio cues, ventilation controls, and a reboot panel to keep Springtrap from reaching the office. Unlike the previous games, only one animatronic can kill the player, but the phantoms can startle the player and make the systems malfunction.

    download five nights at freddy's 3 apk

    Download File ✸✸✸ https://bltlly.com/2v6LNI

    The game's features and requirements

    Five Nights at Freddy's 3 has several features that set it apart from its predecessors, such as:

    • A new animatronic, Springtrap, whose appearance changes depending on its position and damage.
    • A new mechanic, audio cues, which can be used to lure Springtrap away from the player.
    • A new system, ventilation, which affects the player's vision and breathing if it is not maintained properly.
    • A new mode, Nightmare mode, which can be unlocked after completing the sixth night.

    The game requires Android 4.1 or higher on mobile devices, Windows XP or higher on PCs, and iOS 8.0 or later on Apple devices. It also needs 250 MB of storage space and 1 GB of RAM.

    What is an APK file?

    The definition and purpose of an APK file

    APK stands for Android Package Kit (also called an Android application package). It is the file format the Android operating system uses to distribute and install applications. An APK file contains all of an app's components, such as its code, resources, assets, certificates, and manifest. APK files can be downloaded from various sources, such as the Google Play Store, third-party websites, or directly from developers.
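
    Because an APK is essentially a ZIP archive with a defined layout, you can peek inside one using Python's standard zipfile module. The sketch below just lists a few well-known entries; the file name is a placeholder.

```python
import zipfile

APK = "five-nights-at-freddys-3.apk"  # placeholder file name

with zipfile.ZipFile(APK) as apk:
    names = apk.namelist()
    print("Total entries:", len(names))
    # Entries a typical APK contains: the binary manifest, compiled code, and resources.
    for expected in ("AndroidManifest.xml", "classes.dex", "resources.arsc"):
        print(expected, "->", "present" if expected in names else "missing")
```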

    The advantages and risks of installing APK files

    Installing APK files can have some advantages over installing apps from the Google Play Store, such as:

    • Accessing apps that are not available in your region or on your device.
    • Getting updates faster than waiting for the official releases.
    • Trying beta versions or modified versions of apps.
    • Saving bandwidth by downloading smaller files.

    However, installing APK files also comes with risks, such as:

    • Exposing your device to malware or viruses that can damage your data or system.
    • Violating the terms and conditions of the app's developers or publishers.
    • Breaking the functionality or compatibility of the app or the device.
    • Voiding the warranty or support for the device or the app.

    It is therefore important to be careful and cautious when downloading and installing APK files. Always check the source, permissions, and reviews of an APK file before installing it, scan it with antivirus software, and back up your data first.
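
    One practical precaution along these lines is comparing the downloaded file against a checksum published by the source, when one is available. A minimal Python sketch (with placeholder file name and checksum) looks like this:

```python
import hashlib

APK = "five-nights-at-freddys-3.apk"  # placeholder file name
# Placeholder value: replace with the SHA-256 published by the download source.
EXPECTED_SHA256 = "0" * 64

def sha256_of(path: str) -> str:
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):  # read in 1 MiB chunks
            digest.update(chunk)
    return digest.hexdigest()

actual = sha256_of(APK)
print("SHA-256:", actual)
print("Matches published checksum:", actual == EXPECTED_SHA256)
```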

    How to download and install the Five Nights at Freddy's 3 APK

    One trusted source for downloading the Five Nights at Freddy's 3 APK is APKPure.com, a website that provides safe, verified APK files for various apps and games. To download the APK file from APKPure.com, follow these steps:

    1. Go to APKPure.com in your browser.
    2. Search for "Five Nights at Freddy's 3" in the search bar.
    3. Select the game from the search results and click "Download APK".
    4. Choose a download location and wait for the download to finish.

    Steps to enable unknown sources and install the APK file on Android devices

    To install the APK file on your Android device, you need to enable unknown sources, which allows you to install apps from sources other than the Google Play Store. To enable unknown sources, follow these steps:

    1. Go to "Settings" on your device.
    2. Select "Security" or "Privacy", depending on your device model.
    3. Find and enable "Unknown sources" or "Install unknown apps".
    4. Confirm your choice by tapping "OK" or "Allow".

    To install the APK file on your Android device, follow these steps (an adb-based alternative is sketched after the list):

    1. Go to the download location of the APK file on your device.
    2. Tap the APK file and select "Install".
    3. Wait for the installation to complete, then tap "Open" or "Done".
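
    If your device is connected over USB with USB debugging enabled, the same installation can also be driven from a PC through the Android Debug Bridge (adb). The sketch below simply shells out to adb from Python; it assumes adb is installed and on your PATH, and the APK file name is a placeholder.

```python
import subprocess

APK = "five-nights-at-freddys-3.apk"  # placeholder file name

# List connected devices first so a missing or unauthorized device is easy to spot.
subprocess.run(["adb", "devices"], check=True)

# -r reinstalls the app (keeping its data) if it is already present.
subprocess.run(["adb", "install", "-r", APK], check=True)
print("Install command sent via adb")
```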

    Steps to use an emulator and open the APK file on Windows devices

    To open the APK file on a Windows device, you need an emulator, which is software that simulates an Android environment on your PC. One popular emulator is BlueStacks, which you can download from BlueStacks.com. To use BlueStacks to open the APK file on your Windows device, follow these steps (a scripted adb alternative is sketched after the list):

    1. Go to BlueStacks.com, download the installer, and install BlueStacks on your PC.
    2. Launch BlueStacks and sign in with your Google account.
    3. Drag and drop the APK file onto BlueStacks, or click "Install APK" in the sidebar.
    4. Select the APK file from your PC and click "Open".
    5. Wait for BlueStacks to install and launch the game.
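
    Emulators such as BlueStacks can usually also be reached over adb, which gives you a scripted alternative to drag-and-drop. In the sketch below, the address and port are assumptions (5555 is a common default; check the emulator's settings for the real ADB address), and adb must be installed and on your PATH.

```python
import subprocess

EMULATOR_ADDR = "127.0.0.1:5555"      # assumed ADB address; confirm it in the emulator's settings
APK = "five-nights-at-freddys-3.apk"  # placeholder file name

# Attach adb to the running emulator, then install the APK into it.
subprocess.run(["adb", "connect", EMULATOR_ADDR], check=True)
subprocess.run(["adb", "-s", EMULATOR_ADDR, "install", "-r", APK], check=True)
print("APK installed into the emulator at", EMULATOR_ADDR)
```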

    Conclusion

    In this article, we have explained what Five Nights at Freddy's 3 is, what an APK file is, and how to download and install the Five Nights at Freddy's 3 APK on your devices. We hope this article helps you enjoy this exciting horror game. That said, remember to be careful and responsible when downloading and installing APK files, as they can pose risks to your device or data. If you have any questions or comments, feel free to leave a comment below.

    FAQs

    Q: Is Five Nights at Freddy's 3 free?

    A: No. Five Nights at Freddy's 3 costs $2.99 on the Google Play Store, $4.99 on Steam, and $7.99 on the App Store. However, you can download it for free using an APK file from a trusted source.

    Q: Is Five Nights at Freddy's 3 scary?

    A: Five Nights at Freddy's 3 is a horror game that involves jump scares, creepy sounds, and disturbing imagery. It is not suitable for children or for people who are easily frightened or have heart problems. If you are looking for a challenge and a thrill, you may enjoy it.

    Q: How do I uninstall the Five Nights at Freddy's 3 APK?

    A: To uninstall the Five Nights at Freddy's 3 APK, follow these steps:

    1. Go to "Settings" on your device.
    2. Select "Apps" or "Applications", depending on your device model.
    3. Find and tap "Five Nights at Freddy's 3".
    4. Tap "Uninstall" and confirm your choice.

    Q: How do I update the Five Nights at Freddy's 3 APK?

    A: To update the Five Nights at Freddy's 3 APK, follow these steps:

    1. Check whether a newer version of the APK is available. If there is, download the new APK file and install it the same way as the original.
    2. If there is not, wait for the official update on the Google Play Store or Steam, or check other sources for updates.

    Q: How do I play Five Nights at Freddy's 3 online?

    A: Five Nights at Freddy's 3 is a single-player game with no online mode. However, you can play it in your browser through a browser-based version hosted on a site such as Gamejolt.com. To play it online, follow these steps:

    1. Go to Gamejolt.com and search for the game.
    2. Select the game from the search results and click "Play Game".
    3. Wait for the game to load and enjoy.

    -
    -
    \ No newline at end of file diff --git a/spaces/BetterAPI/BetterChat_new/Dockerfile b/spaces/BetterAPI/BetterChat_new/Dockerfile deleted file mode 100644 index ca877fc91d9ca6ae35548f374ae2b5062e2b547e..0000000000000000000000000000000000000000 --- a/spaces/BetterAPI/BetterChat_new/Dockerfile +++ /dev/null @@ -1,17 +0,0 @@ -# read the doc: https://huggingface.co/docs/hub/spaces-sdks-docker -# you will also find guides on how best to write your Dockerfile - -FROM node:19 - -RUN npm install -g pm2 - -WORKDIR /app - -COPY --link --chown=1000 . . - -RUN npm i - -RUN --mount=type=secret,id=DOTENV_LOCAL,dst=.env.local npm run build -CMD pm2 kill -CMD echo $CPU_CORES -CMD pm2 start build/index.js -i $CPU_CORES --no-daemon diff --git a/spaces/Big-Web/MMSD/env/Scripts/Activate.ps1 b/spaces/Big-Web/MMSD/env/Scripts/Activate.ps1 deleted file mode 100644 index b785b9482747c402ae323bb4b635567ce4aacef4..0000000000000000000000000000000000000000 --- a/spaces/Big-Web/MMSD/env/Scripts/Activate.ps1 +++ /dev/null @@ -1,502 +0,0 @@ -<# -.Synopsis -Activate a Python virtual environment for the current PowerShell session. - -.Description -Pushes the python executable for a virtual environment to the front of the -$Env:PATH environment variable and sets the prompt to signify that you are -in a Python virtual environment. Makes use of the command line switches as -well as the `pyvenv.cfg` file values present in the virtual environment. - -.Parameter VenvDir -Path to the directory that contains the virtual environment to activate. The -default value for this is the parent of the directory that the Activate.ps1 -script is located within. - -.Parameter Prompt -The prompt prefix to display when this virtual environment is activated. By -default, this prompt is the name of the virtual environment folder (VenvDir) -surrounded by parentheses and followed by a single space (ie. '(.venv) '). - -.Example -Activate.ps1 -Activates the Python virtual environment that contains the Activate.ps1 script. - -.Example -Activate.ps1 -Verbose -Activates the Python virtual environment that contains the Activate.ps1 script, -and shows extra information about the activation as it executes. - -.Example -Activate.ps1 -VenvDir C:\Users\MyUser\Common\.venv -Activates the Python virtual environment located in the specified location. - -.Example -Activate.ps1 -Prompt "MyPython" -Activates the Python virtual environment that contains the Activate.ps1 script, -and prefixes the current prompt with the specified string (surrounded in -parentheses) while the virtual environment is active. - -.Notes -On Windows, it may be required to enable this Activate.ps1 script by setting the -execution policy for the user. You can do this by issuing the following PowerShell -command: - -PS C:\> Set-ExecutionPolicy -ExecutionPolicy RemoteSigned -Scope CurrentUser - -For more information on Execution Policies: -https://go.microsoft.com/fwlink/?LinkID=135170 - -#> -Param( - [Parameter(Mandatory = $false)] - [String] - $VenvDir, - [Parameter(Mandatory = $false)] - [String] - $Prompt -) - -<# Function declarations --------------------------------------------------- #> - -<# -.Synopsis -Remove all shell session elements added by the Activate script, including the -addition of the virtual environment's Python executable from the beginning of -the PATH variable. - -.Parameter NonDestructive -If present, do not remove this function from the global namespace for the -session. 
- -#> -function global:deactivate ([switch]$NonDestructive) { - # Revert to original values - - # The prior prompt: - if (Test-Path -Path Function:_OLD_VIRTUAL_PROMPT) { - Copy-Item -Path Function:_OLD_VIRTUAL_PROMPT -Destination Function:prompt - Remove-Item -Path Function:_OLD_VIRTUAL_PROMPT - } - - # The prior PYTHONHOME: - if (Test-Path -Path Env:_OLD_VIRTUAL_PYTHONHOME) { - Copy-Item -Path Env:_OLD_VIRTUAL_PYTHONHOME -Destination Env:PYTHONHOME - Remove-Item -Path Env:_OLD_VIRTUAL_PYTHONHOME - } - - # The prior PATH: - if (Test-Path -Path Env:_OLD_VIRTUAL_PATH) { - Copy-Item -Path Env:_OLD_VIRTUAL_PATH -Destination Env:PATH - Remove-Item -Path Env:_OLD_VIRTUAL_PATH - } - - # Just remove the VIRTUAL_ENV altogether: - if (Test-Path -Path Env:VIRTUAL_ENV) { - Remove-Item -Path env:VIRTUAL_ENV - } - - # Just remove VIRTUAL_ENV_PROMPT altogether. - if (Test-Path -Path Env:VIRTUAL_ENV_PROMPT) { - Remove-Item -Path env:VIRTUAL_ENV_PROMPT - } - - # Just remove the _PYTHON_VENV_PROMPT_PREFIX altogether: - if (Get-Variable -Name "_PYTHON_VENV_PROMPT_PREFIX" -ErrorAction SilentlyContinue) { - Remove-Variable -Name _PYTHON_VENV_PROMPT_PREFIX -Scope Global -Force - } - - # Leave deactivate function in the global namespace if requested: - if (-not $NonDestructive) { - Remove-Item -Path function:deactivate - } -} - -<# -.Description -Get-PyVenvConfig parses the values from the pyvenv.cfg file located in the -given folder, and returns them in a map. - -For each line in the pyvenv.cfg file, if that line can be parsed into exactly -two strings separated by `=` (with any amount of whitespace surrounding the =) -then it is considered a `key = value` line. The left hand string is the key, -the right hand is the value. - -If the value starts with a `'` or a `"` then the first and last character is -stripped from the value before being captured. - -.Parameter ConfigDir -Path to the directory that contains the `pyvenv.cfg` file. -#> -function Get-PyVenvConfig( - [String] - $ConfigDir -) { - Write-Verbose "Given ConfigDir=$ConfigDir, obtain values in pyvenv.cfg" - - # Ensure the file exists, and issue a warning if it doesn't (but still allow the function to continue). - $pyvenvConfigPath = Join-Path -Resolve -Path $ConfigDir -ChildPath 'pyvenv.cfg' -ErrorAction Continue - - # An empty map will be returned if no config file is found. - $pyvenvConfig = @{ } - - if ($pyvenvConfigPath) { - - Write-Verbose "File exists, parse `key = value` lines" - $pyvenvConfigContent = Get-Content -Path $pyvenvConfigPath - - $pyvenvConfigContent | ForEach-Object { - $keyval = $PSItem -split "\s*=\s*", 2 - if ($keyval[0] -and $keyval[1]) { - $val = $keyval[1] - - # Remove extraneous quotations around a string value. 
- if ("'""".Contains($val.Substring(0, 1))) { - $val = $val.Substring(1, $val.Length - 2) - } - - $pyvenvConfig[$keyval[0]] = $val - Write-Verbose "Adding Key: '$($keyval[0])'='$val'" - } - } - } - return $pyvenvConfig -} - - -<# Begin Activate script --------------------------------------------------- #> - -# Determine the containing directory of this script -$VenvExecPath = Split-Path -Parent $MyInvocation.MyCommand.Definition -$VenvExecDir = Get-Item -Path $VenvExecPath - -Write-Verbose "Activation script is located in path: '$VenvExecPath'" -Write-Verbose "VenvExecDir Fullname: '$($VenvExecDir.FullName)" -Write-Verbose "VenvExecDir Name: '$($VenvExecDir.Name)" - -# Set values required in priority: CmdLine, ConfigFile, Default -# First, get the location of the virtual environment, it might not be -# VenvExecDir if specified on the command line. -if ($VenvDir) { - Write-Verbose "VenvDir given as parameter, using '$VenvDir' to determine values" -} -else { - Write-Verbose "VenvDir not given as a parameter, using parent directory name as VenvDir." - $VenvDir = $VenvExecDir.Parent.FullName.TrimEnd("\\/") - Write-Verbose "VenvDir=$VenvDir" -} - -# Next, read the `pyvenv.cfg` file to determine any required value such -# as `prompt`. -$pyvenvCfg = Get-PyVenvConfig -ConfigDir $VenvDir - -# Next, set the prompt from the command line, or the config file, or -# just use the name of the virtual environment folder. -if ($Prompt) { - Write-Verbose "Prompt specified as argument, using '$Prompt'" -} -else { - Write-Verbose "Prompt not specified as argument to script, checking pyvenv.cfg value" - if ($pyvenvCfg -and $pyvenvCfg['prompt']) { - Write-Verbose " Setting based on value in pyvenv.cfg='$($pyvenvCfg['prompt'])'" - $Prompt = $pyvenvCfg['prompt']; - } - else { - Write-Verbose " Setting prompt based on parent's directory's name. (Is the directory name passed to venv module when creating the virtual environment)" - Write-Verbose " Got leaf-name of $VenvDir='$(Split-Path -Path $venvDir -Leaf)'" - $Prompt = Split-Path -Path $venvDir -Leaf - } -} - -Write-Verbose "Prompt = '$Prompt'" -Write-Verbose "VenvDir='$VenvDir'" - -# Deactivate any currently active virtual environment, but leave the -# deactivate function in place. -deactivate -nondestructive - -# Now set the environment variable VIRTUAL_ENV, used by many tools to determine -# that there is an activated venv. 
-$env:VIRTUAL_ENV = $VenvDir - -if (-not $Env:VIRTUAL_ENV_DISABLE_PROMPT) { - - Write-Verbose "Setting prompt to '$Prompt'" - - # Set the prompt to include the env name - # Make sure _OLD_VIRTUAL_PROMPT is global - function global:_OLD_VIRTUAL_PROMPT { "" } - Copy-Item -Path function:prompt -Destination function:_OLD_VIRTUAL_PROMPT - New-Variable -Name _PYTHON_VENV_PROMPT_PREFIX -Description "Python virtual environment prompt prefix" -Scope Global -Option ReadOnly -Visibility Public -Value $Prompt - - function global:prompt { - Write-Host -NoNewline -ForegroundColor Green "($_PYTHON_VENV_PROMPT_PREFIX) " - _OLD_VIRTUAL_PROMPT - } - $env:VIRTUAL_ENV_PROMPT = $Prompt -} - -# Clear PYTHONHOME -if (Test-Path -Path Env:PYTHONHOME) { - Copy-Item -Path Env:PYTHONHOME -Destination Env:_OLD_VIRTUAL_PYTHONHOME - Remove-Item -Path Env:PYTHONHOME -} - -# Add the venv to the PATH -Copy-Item -Path Env:PATH -Destination Env:_OLD_VIRTUAL_PATH -$Env:PATH = "$VenvExecDir$([System.IO.Path]::PathSeparator)$Env:PATH" - -# SIG # Begin signature block -# MIIvIQYJKoZIhvcNAQcCoIIvEjCCLw4CAQExDzANBglghkgBZQMEAgEFADB5Bgor -# BgEEAYI3AgEEoGswaTA0BgorBgEEAYI3AgEeMCYCAwEAAAQQH8w7YFlLCE63JNLG -# KX7zUQIBAAIBAAIBAAIBAAIBADAxMA0GCWCGSAFlAwQCAQUABCBnL745ElCYk8vk -# dBtMuQhLeWJ3ZGfzKW4DHCYzAn+QB6CCE8MwggWQMIIDeKADAgECAhAFmxtXno4h -# MuI5B72nd3VcMA0GCSqGSIb3DQEBDAUAMGIxCzAJBgNVBAYTAlVTMRUwEwYDVQQK -# EwxEaWdpQ2VydCBJbmMxGTAXBgNVBAsTEHd3dy5kaWdpY2VydC5jb20xITAfBgNV -# BAMTGERpZ2lDZXJ0IFRydXN0ZWQgUm9vdCBHNDAeFw0xMzA4MDExMjAwMDBaFw0z -# ODAxMTUxMjAwMDBaMGIxCzAJBgNVBAYTAlVTMRUwEwYDVQQKEwxEaWdpQ2VydCBJ -# bmMxGTAXBgNVBAsTEHd3dy5kaWdpY2VydC5jb20xITAfBgNVBAMTGERpZ2lDZXJ0 -# IFRydXN0ZWQgUm9vdCBHNDCCAiIwDQYJKoZIhvcNAQEBBQADggIPADCCAgoCggIB -# AL/mkHNo3rvkXUo8MCIwaTPswqclLskhPfKK2FnC4SmnPVirdprNrnsbhA3EMB/z -# G6Q4FutWxpdtHauyefLKEdLkX9YFPFIPUh/GnhWlfr6fqVcWWVVyr2iTcMKyunWZ -# anMylNEQRBAu34LzB4TmdDttceItDBvuINXJIB1jKS3O7F5OyJP4IWGbNOsFxl7s -# Wxq868nPzaw0QF+xembud8hIqGZXV59UWI4MK7dPpzDZVu7Ke13jrclPXuU15zHL -# 2pNe3I6PgNq2kZhAkHnDeMe2scS1ahg4AxCN2NQ3pC4FfYj1gj4QkXCrVYJBMtfb -# BHMqbpEBfCFM1LyuGwN1XXhm2ToxRJozQL8I11pJpMLmqaBn3aQnvKFPObURWBf3 -# JFxGj2T3wWmIdph2PVldQnaHiZdpekjw4KISG2aadMreSx7nDmOu5tTvkpI6nj3c -# AORFJYm2mkQZK37AlLTSYW3rM9nF30sEAMx9HJXDj/chsrIRt7t/8tWMcCxBYKqx -# YxhElRp2Yn72gLD76GSmM9GJB+G9t+ZDpBi4pncB4Q+UDCEdslQpJYls5Q5SUUd0 -# viastkF13nqsX40/ybzTQRESW+UQUOsxxcpyFiIJ33xMdT9j7CFfxCBRa2+xq4aL -# T8LWRV+dIPyhHsXAj6KxfgommfXkaS+YHS312amyHeUbAgMBAAGjQjBAMA8GA1Ud -# EwEB/wQFMAMBAf8wDgYDVR0PAQH/BAQDAgGGMB0GA1UdDgQWBBTs1+OC0nFdZEzf -# Lmc/57qYrhwPTzANBgkqhkiG9w0BAQwFAAOCAgEAu2HZfalsvhfEkRvDoaIAjeNk -# aA9Wz3eucPn9mkqZucl4XAwMX+TmFClWCzZJXURj4K2clhhmGyMNPXnpbWvWVPjS -# PMFDQK4dUPVS/JA7u5iZaWvHwaeoaKQn3J35J64whbn2Z006Po9ZOSJTROvIXQPK -# 7VB6fWIhCoDIc2bRoAVgX+iltKevqPdtNZx8WorWojiZ83iL9E3SIAveBO6Mm0eB -# cg3AFDLvMFkuruBx8lbkapdvklBtlo1oepqyNhR6BvIkuQkRUNcIsbiJeoQjYUIp -# 5aPNoiBB19GcZNnqJqGLFNdMGbJQQXE9P01wI4YMStyB0swylIQNCAmXHE/A7msg -# dDDS4Dk0EIUhFQEI6FUy3nFJ2SgXUE3mvk3RdazQyvtBuEOlqtPDBURPLDab4vri -# RbgjU2wGb2dVf0a1TD9uKFp5JtKkqGKX0h7i7UqLvBv9R0oN32dmfrJbQdA75PQ7 -# 9ARj6e/CVABRoIoqyc54zNXqhwQYs86vSYiv85KZtrPmYQ/ShQDnUBrkG5WdGaG5 -# nLGbsQAe79APT0JsyQq87kP6OnGlyE0mpTX9iV28hWIdMtKgK1TtmlfB2/oQzxm3 -# i0objwG2J5VT6LaJbVu8aNQj6ItRolb58KaAoNYes7wPD1N1KarqE3fk3oyBIa0H -# EEcRrYc9B9F1vM/zZn4wggawMIIEmKADAgECAhAIrUCyYNKcTJ9ezam9k67ZMA0G -# CSqGSIb3DQEBDAUAMGIxCzAJBgNVBAYTAlVTMRUwEwYDVQQKEwxEaWdpQ2VydCBJ -# bmMxGTAXBgNVBAsTEHd3dy5kaWdpY2VydC5jb20xITAfBgNVBAMTGERpZ2lDZXJ0 -# 
IFRydXN0ZWQgUm9vdCBHNDAeFw0yMTA0MjkwMDAwMDBaFw0zNjA0MjgyMzU5NTla -# MGkxCzAJBgNVBAYTAlVTMRcwFQYDVQQKEw5EaWdpQ2VydCwgSW5jLjFBMD8GA1UE -# AxM4RGlnaUNlcnQgVHJ1c3RlZCBHNCBDb2RlIFNpZ25pbmcgUlNBNDA5NiBTSEEz -# ODQgMjAyMSBDQTEwggIiMA0GCSqGSIb3DQEBAQUAA4ICDwAwggIKAoICAQDVtC9C -# 0CiteLdd1TlZG7GIQvUzjOs9gZdwxbvEhSYwn6SOaNhc9es0JAfhS0/TeEP0F9ce -# 2vnS1WcaUk8OoVf8iJnBkcyBAz5NcCRks43iCH00fUyAVxJrQ5qZ8sU7H/Lvy0da -# E6ZMswEgJfMQ04uy+wjwiuCdCcBlp/qYgEk1hz1RGeiQIXhFLqGfLOEYwhrMxe6T -# SXBCMo/7xuoc82VokaJNTIIRSFJo3hC9FFdd6BgTZcV/sk+FLEikVoQ11vkunKoA -# FdE3/hoGlMJ8yOobMubKwvSnowMOdKWvObarYBLj6Na59zHh3K3kGKDYwSNHR7Oh -# D26jq22YBoMbt2pnLdK9RBqSEIGPsDsJ18ebMlrC/2pgVItJwZPt4bRc4G/rJvmM -# 1bL5OBDm6s6R9b7T+2+TYTRcvJNFKIM2KmYoX7BzzosmJQayg9Rc9hUZTO1i4F4z -# 8ujo7AqnsAMrkbI2eb73rQgedaZlzLvjSFDzd5Ea/ttQokbIYViY9XwCFjyDKK05 -# huzUtw1T0PhH5nUwjewwk3YUpltLXXRhTT8SkXbev1jLchApQfDVxW0mdmgRQRNY -# mtwmKwH0iU1Z23jPgUo+QEdfyYFQc4UQIyFZYIpkVMHMIRroOBl8ZhzNeDhFMJlP -# /2NPTLuqDQhTQXxYPUez+rbsjDIJAsxsPAxWEQIDAQABo4IBWTCCAVUwEgYDVR0T -# AQH/BAgwBgEB/wIBADAdBgNVHQ4EFgQUaDfg67Y7+F8Rhvv+YXsIiGX0TkIwHwYD -# VR0jBBgwFoAU7NfjgtJxXWRM3y5nP+e6mK4cD08wDgYDVR0PAQH/BAQDAgGGMBMG -# A1UdJQQMMAoGCCsGAQUFBwMDMHcGCCsGAQUFBwEBBGswaTAkBggrBgEFBQcwAYYY -# aHR0cDovL29jc3AuZGlnaWNlcnQuY29tMEEGCCsGAQUFBzAChjVodHRwOi8vY2Fj -# ZXJ0cy5kaWdpY2VydC5jb20vRGlnaUNlcnRUcnVzdGVkUm9vdEc0LmNydDBDBgNV -# HR8EPDA6MDigNqA0hjJodHRwOi8vY3JsMy5kaWdpY2VydC5jb20vRGlnaUNlcnRU -# cnVzdGVkUm9vdEc0LmNybDAcBgNVHSAEFTATMAcGBWeBDAEDMAgGBmeBDAEEATAN -# BgkqhkiG9w0BAQwFAAOCAgEAOiNEPY0Idu6PvDqZ01bgAhql+Eg08yy25nRm95Ry -# sQDKr2wwJxMSnpBEn0v9nqN8JtU3vDpdSG2V1T9J9Ce7FoFFUP2cvbaF4HZ+N3HL -# IvdaqpDP9ZNq4+sg0dVQeYiaiorBtr2hSBh+3NiAGhEZGM1hmYFW9snjdufE5Btf -# Q/g+lP92OT2e1JnPSt0o618moZVYSNUa/tcnP/2Q0XaG3RywYFzzDaju4ImhvTnh -# OE7abrs2nfvlIVNaw8rpavGiPttDuDPITzgUkpn13c5UbdldAhQfQDN8A+KVssIh -# dXNSy0bYxDQcoqVLjc1vdjcshT8azibpGL6QB7BDf5WIIIJw8MzK7/0pNVwfiThV -# 9zeKiwmhywvpMRr/LhlcOXHhvpynCgbWJme3kuZOX956rEnPLqR0kq3bPKSchh/j -# wVYbKyP/j7XqiHtwa+aguv06P0WmxOgWkVKLQcBIhEuWTatEQOON8BUozu3xGFYH -# Ki8QxAwIZDwzj64ojDzLj4gLDb879M4ee47vtevLt/B3E+bnKD+sEq6lLyJsQfmC -# XBVmzGwOysWGw/YmMwwHS6DTBwJqakAwSEs0qFEgu60bhQjiWQ1tygVQK+pKHJ6l -# /aCnHwZ05/LWUpD9r4VIIflXO7ScA+2GRfS0YW6/aOImYIbqyK+p/pQd52MbOoZW -# eE4wggd3MIIFX6ADAgECAhAHHxQbizANJfMU6yMM0NHdMA0GCSqGSIb3DQEBCwUA -# MGkxCzAJBgNVBAYTAlVTMRcwFQYDVQQKEw5EaWdpQ2VydCwgSW5jLjFBMD8GA1UE -# AxM4RGlnaUNlcnQgVHJ1c3RlZCBHNCBDb2RlIFNpZ25pbmcgUlNBNDA5NiBTSEEz -# ODQgMjAyMSBDQTEwHhcNMjIwMTE3MDAwMDAwWhcNMjUwMTE1MjM1OTU5WjB8MQsw -# CQYDVQQGEwJVUzEPMA0GA1UECBMGT3JlZ29uMRIwEAYDVQQHEwlCZWF2ZXJ0b24x -# IzAhBgNVBAoTGlB5dGhvbiBTb2Z0d2FyZSBGb3VuZGF0aW9uMSMwIQYDVQQDExpQ -# eXRob24gU29mdHdhcmUgRm91bmRhdGlvbjCCAiIwDQYJKoZIhvcNAQEBBQADggIP -# ADCCAgoCggIBAKgc0BTT+iKbtK6f2mr9pNMUTcAJxKdsuOiSYgDFfwhjQy89koM7 -# uP+QV/gwx8MzEt3c9tLJvDccVWQ8H7mVsk/K+X+IufBLCgUi0GGAZUegEAeRlSXx -# xhYScr818ma8EvGIZdiSOhqjYc4KnfgfIS4RLtZSrDFG2tN16yS8skFa3IHyvWdb -# D9PvZ4iYNAS4pjYDRjT/9uzPZ4Pan+53xZIcDgjiTwOh8VGuppxcia6a7xCyKoOA -# GjvCyQsj5223v1/Ig7Dp9mGI+nh1E3IwmyTIIuVHyK6Lqu352diDY+iCMpk9Zanm -# SjmB+GMVs+H/gOiofjjtf6oz0ki3rb7sQ8fTnonIL9dyGTJ0ZFYKeb6BLA66d2GA -# LwxZhLe5WH4Np9HcyXHACkppsE6ynYjTOd7+jN1PRJahN1oERzTzEiV6nCO1M3U1 -# HbPTGyq52IMFSBM2/07WTJSbOeXjvYR7aUxK9/ZkJiacl2iZI7IWe7JKhHohqKuc -# eQNyOzxTakLcRkzynvIrk33R9YVqtB4L6wtFxhUjvDnQg16xot2KVPdfyPAWd81w -# tZADmrUtsZ9qG79x1hBdyOl4vUtVPECuyhCxaw+faVjumapPUnwo8ygflJJ74J+B -# Yxf6UuD7m8yzsfXWkdv52DjL74TxzuFTLHPyARWCSCAbzn3ZIly+qIqDAgMBAAGj -# ggIGMIICAjAfBgNVHSMEGDAWgBRoN+Drtjv4XxGG+/5hewiIZfROQjAdBgNVHQ4E -# 
FgQUt/1Teh2XDuUj2WW3siYWJgkZHA8wDgYDVR0PAQH/BAQDAgeAMBMGA1UdJQQM -# MAoGCCsGAQUFBwMDMIG1BgNVHR8Ega0wgaowU6BRoE+GTWh0dHA6Ly9jcmwzLmRp -# Z2ljZXJ0LmNvbS9EaWdpQ2VydFRydXN0ZWRHNENvZGVTaWduaW5nUlNBNDA5NlNI -# QTM4NDIwMjFDQTEuY3JsMFOgUaBPhk1odHRwOi8vY3JsNC5kaWdpY2VydC5jb20v -# RGlnaUNlcnRUcnVzdGVkRzRDb2RlU2lnbmluZ1JTQTQwOTZTSEEzODQyMDIxQ0Ex -# LmNybDA+BgNVHSAENzA1MDMGBmeBDAEEATApMCcGCCsGAQUFBwIBFhtodHRwOi8v -# d3d3LmRpZ2ljZXJ0LmNvbS9DUFMwgZQGCCsGAQUFBwEBBIGHMIGEMCQGCCsGAQUF -# BzABhhhodHRwOi8vb2NzcC5kaWdpY2VydC5jb20wXAYIKwYBBQUHMAKGUGh0dHA6 -# Ly9jYWNlcnRzLmRpZ2ljZXJ0LmNvbS9EaWdpQ2VydFRydXN0ZWRHNENvZGVTaWdu -# aW5nUlNBNDA5NlNIQTM4NDIwMjFDQTEuY3J0MAwGA1UdEwEB/wQCMAAwDQYJKoZI -# hvcNAQELBQADggIBABxv4AeV/5ltkELHSC63fXAFYS5tadcWTiNc2rskrNLrfH1N -# s0vgSZFoQxYBFKI159E8oQQ1SKbTEubZ/B9kmHPhprHya08+VVzxC88pOEvz68nA -# 82oEM09584aILqYmj8Pj7h/kmZNzuEL7WiwFa/U1hX+XiWfLIJQsAHBla0i7QRF2 -# de8/VSF0XXFa2kBQ6aiTsiLyKPNbaNtbcucaUdn6vVUS5izWOXM95BSkFSKdE45O -# q3FForNJXjBvSCpwcP36WklaHL+aHu1upIhCTUkzTHMh8b86WmjRUqbrnvdyR2yd -# I5l1OqcMBjkpPpIV6wcc+KY/RH2xvVuuoHjlUjwq2bHiNoX+W1scCpnA8YTs2d50 -# jDHUgwUo+ciwpffH0Riq132NFmrH3r67VaN3TuBxjI8SIZM58WEDkbeoriDk3hxU -# 8ZWV7b8AW6oyVBGfM06UgkfMb58h+tJPrFx8VI/WLq1dTqMfZOm5cuclMnUHs2uq -# rRNtnV8UfidPBL4ZHkTcClQbCoz0UbLhkiDvIS00Dn+BBcxw/TKqVL4Oaz3bkMSs -# M46LciTeucHY9ExRVt3zy7i149sd+F4QozPqn7FrSVHXmem3r7bjyHTxOgqxRCVa -# 18Vtx7P/8bYSBeS+WHCKcliFCecspusCDSlnRUjZwyPdP0VHxaZg2unjHY3rMYIa -# tDCCGrACAQEwfTBpMQswCQYDVQQGEwJVUzEXMBUGA1UEChMORGlnaUNlcnQsIElu -# Yy4xQTA/BgNVBAMTOERpZ2lDZXJ0IFRydXN0ZWQgRzQgQ29kZSBTaWduaW5nIFJT -# QTQwOTYgU0hBMzg0IDIwMjEgQ0ExAhAHHxQbizANJfMU6yMM0NHdMA0GCWCGSAFl -# AwQCAQUAoIHIMBkGCSqGSIb3DQEJAzEMBgorBgEEAYI3AgEEMBwGCisGAQQBgjcC -# AQsxDjAMBgorBgEEAYI3AgEVMC8GCSqGSIb3DQEJBDEiBCBnAZ6P7YvTwq0fbF62 -# o7E75R0LxsW5OtyYiFESQckLhjBcBgorBgEEAYI3AgEMMU4wTKBGgEQAQgB1AGkA -# bAB0ADoAIABSAGUAbABlAGEAcwBlAF8AdgAzAC4AMQAxAC4AMwBfADIAMAAyADMA -# MAA0ADAANAAuADAAMaECgAAwDQYJKoZIhvcNAQEBBQAEggIAbmsoeVnvqR4l7EsR -# nUNDQhIoOsioPo5dRYtGRoY3gWX6NnIWzyYo3nlX//xY6JbfZ8oyaqLZULFMkLWm -# +c70FKdQS5yI9auu/DOqmZ0AcPsLXEc7rJZagpBDgi6xCvAyvpAHj1FUcGGzWsE+ -# Qp8LkKU5AApLcHpBci3eZYUpiwoTNvDCQLYIv5j5mh8Fb8j2D/sUt2coONsqLllY -# BB1Cpko4g9CEfJKtXKb8g0U8+giDAxt/0r6AMdeqlx9ysFB0Nil+tneagBTQ4vQl -# pl5mztf7JVkzasgDNvNcFMo04crUW5g5oErl3e/bO63v1duN7ZuJBJvKs9aDrogI -# KOLwYbTYa1Y5wHCsz8HCgd3pfRxQgwWL0+zx7+MKpqlvo20JmFG5H8wj3tcdc1FW -# QeOVYzVijkeGqRb21HTNHKuTfV4Gw3cLdT4oOENY3JdkJ+oqnAiSwC1p/Fm3pizG -# wkc3D+JjNYg6UT+9PdWqLtsjaBODM1lB22Bpx/nnPCnUG8WEx9cwi39zJdV+atcZ -# eTKc+Ahpyxot3az6yv9w83+7wIdnSWBWQwAHonjwx1jjMiiDpLyHblqxt/jgejkV -# VQEam7XX3KOKI3CHDC8k3M4V6QTnCTIX/WLslIO57hwUGtOAGmww6/q2NOqKeqpH -# 1B6f/CLtXwSh0d4raerISKVQjYChghc9MIIXOQYKKwYBBAGCNwMDATGCFykwghcl -# BgkqhkiG9w0BBwKgghcWMIIXEgIBAzEPMA0GCWCGSAFlAwQCAQUAMHcGCyqGSIb3 -# DQEJEAEEoGgEZjBkAgEBBglghkgBhv1sBwEwMTANBglghkgBZQMEAgEFAAQgkIZK -# LoYk5fC9ubz0LiZd/QHnskSNa7ucOHGD7CWySe4CEHn5OnRbrbmQtyWnJjmOhREY -# DzIwMjMwNDA1MDAwMjQ1WqCCEwcwggbAMIIEqKADAgECAhAMTWlyS5T6PCpKPSkH -# gD1aMA0GCSqGSIb3DQEBCwUAMGMxCzAJBgNVBAYTAlVTMRcwFQYDVQQKEw5EaWdp -# Q2VydCwgSW5jLjE7MDkGA1UEAxMyRGlnaUNlcnQgVHJ1c3RlZCBHNCBSU0E0MDk2 -# IFNIQTI1NiBUaW1lU3RhbXBpbmcgQ0EwHhcNMjIwOTIxMDAwMDAwWhcNMzMxMTIx -# MjM1OTU5WjBGMQswCQYDVQQGEwJVUzERMA8GA1UEChMIRGlnaUNlcnQxJDAiBgNV -# BAMTG0RpZ2lDZXJ0IFRpbWVzdGFtcCAyMDIyIC0gMjCCAiIwDQYJKoZIhvcNAQEB -# BQADggIPADCCAgoCggIBAM/spSY6xqnya7uNwQ2a26HoFIV0MxomrNAcVR4eNm28 -# klUMYfSdCXc9FZYIL2tkpP0GgxbXkZI4HDEClvtysZc6Va8z7GGK6aYo25BjXL2J -# U+A6LYyHQq4mpOS7eHi5ehbhVsbAumRTuyoW51BIu4hpDIjG8b7gL307scpTjUCD -# 
HufLckkoHkyAHoVW54Xt8mG8qjoHffarbuVm3eJc9S/tjdRNlYRo44DLannR0hCR -# RinrPibytIzNTLlmyLuqUDgN5YyUXRlav/V7QG5vFqianJVHhoV5PgxeZowaCiS+ -# nKrSnLb3T254xCg/oxwPUAY3ugjZNaa1Htp4WB056PhMkRCWfk3h3cKtpX74LRsf -# 7CtGGKMZ9jn39cFPcS6JAxGiS7uYv/pP5Hs27wZE5FX/NurlfDHn88JSxOYWe1p+ -# pSVz28BqmSEtY+VZ9U0vkB8nt9KrFOU4ZodRCGv7U0M50GT6Vs/g9ArmFG1keLuY -# /ZTDcyHzL8IuINeBrNPxB9ThvdldS24xlCmL5kGkZZTAWOXlLimQprdhZPrZIGwY -# UWC6poEPCSVT8b876asHDmoHOWIZydaFfxPZjXnPYsXs4Xu5zGcTB5rBeO3GiMiw -# bjJ5xwtZg43G7vUsfHuOy2SJ8bHEuOdTXl9V0n0ZKVkDTvpd6kVzHIR+187i1Dp3 -# AgMBAAGjggGLMIIBhzAOBgNVHQ8BAf8EBAMCB4AwDAYDVR0TAQH/BAIwADAWBgNV -# HSUBAf8EDDAKBggrBgEFBQcDCDAgBgNVHSAEGTAXMAgGBmeBDAEEAjALBglghkgB -# hv1sBwEwHwYDVR0jBBgwFoAUuhbZbU2FL3MpdpovdYxqII+eyG8wHQYDVR0OBBYE -# FGKK3tBh/I8xFO2XC809KpQU31KcMFoGA1UdHwRTMFEwT6BNoEuGSWh0dHA6Ly9j -# cmwzLmRpZ2ljZXJ0LmNvbS9EaWdpQ2VydFRydXN0ZWRHNFJTQTQwOTZTSEEyNTZU -# aW1lU3RhbXBpbmdDQS5jcmwwgZAGCCsGAQUFBwEBBIGDMIGAMCQGCCsGAQUFBzAB -# hhhodHRwOi8vb2NzcC5kaWdpY2VydC5jb20wWAYIKwYBBQUHMAKGTGh0dHA6Ly9j -# YWNlcnRzLmRpZ2ljZXJ0LmNvbS9EaWdpQ2VydFRydXN0ZWRHNFJTQTQwOTZTSEEy -# NTZUaW1lU3RhbXBpbmdDQS5jcnQwDQYJKoZIhvcNAQELBQADggIBAFWqKhrzRvN4 -# Vzcw/HXjT9aFI/H8+ZU5myXm93KKmMN31GT8Ffs2wklRLHiIY1UJRjkA/GnUypsp -# +6M/wMkAmxMdsJiJ3HjyzXyFzVOdr2LiYWajFCpFh0qYQitQ/Bu1nggwCfrkLdcJ -# iXn5CeaIzn0buGqim8FTYAnoo7id160fHLjsmEHw9g6A++T/350Qp+sAul9Kjxo6 -# UrTqvwlJFTU2WZoPVNKyG39+XgmtdlSKdG3K0gVnK3br/5iyJpU4GYhEFOUKWaJr -# 5yI+RCHSPxzAm+18SLLYkgyRTzxmlK9dAlPrnuKe5NMfhgFknADC6Vp0dQ094XmI -# vxwBl8kZI4DXNlpflhaxYwzGRkA7zl011Fk+Q5oYrsPJy8P7mxNfarXH4PMFw1nf -# J2Ir3kHJU7n/NBBn9iYymHv+XEKUgZSCnawKi8ZLFUrTmJBFYDOA4CPe+AOk9kVH -# 5c64A0JH6EE2cXet/aLol3ROLtoeHYxayB6a1cLwxiKoT5u92ByaUcQvmvZfpyeX -# upYuhVfAYOd4Vn9q78KVmksRAsiCnMkaBXy6cbVOepls9Oie1FqYyJ+/jbsYXEP1 -# 0Cro4mLueATbvdH7WwqocH7wl4R44wgDXUcsY6glOJcB0j862uXl9uab3H4szP8X -# TE0AotjWAQ64i+7m4HJViSwnGWH2dwGMMIIGrjCCBJagAwIBAgIQBzY3tyRUfNhH -# rP0oZipeWzANBgkqhkiG9w0BAQsFADBiMQswCQYDVQQGEwJVUzEVMBMGA1UEChMM -# RGlnaUNlcnQgSW5jMRkwFwYDVQQLExB3d3cuZGlnaWNlcnQuY29tMSEwHwYDVQQD -# ExhEaWdpQ2VydCBUcnVzdGVkIFJvb3QgRzQwHhcNMjIwMzIzMDAwMDAwWhcNMzcw -# MzIyMjM1OTU5WjBjMQswCQYDVQQGEwJVUzEXMBUGA1UEChMORGlnaUNlcnQsIElu -# Yy4xOzA5BgNVBAMTMkRpZ2lDZXJ0IFRydXN0ZWQgRzQgUlNBNDA5NiBTSEEyNTYg -# VGltZVN0YW1waW5nIENBMIICIjANBgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEA -# xoY1BkmzwT1ySVFVxyUDxPKRN6mXUaHW0oPRnkyibaCwzIP5WvYRoUQVQl+kiPNo -# +n3znIkLf50fng8zH1ATCyZzlm34V6gCff1DtITaEfFzsbPuK4CEiiIY3+vaPcQX -# f6sZKz5C3GeO6lE98NZW1OcoLevTsbV15x8GZY2UKdPZ7Gnf2ZCHRgB720RBidx8 -# ald68Dd5n12sy+iEZLRS8nZH92GDGd1ftFQLIWhuNyG7QKxfst5Kfc71ORJn7w6l -# Y2zkpsUdzTYNXNXmG6jBZHRAp8ByxbpOH7G1WE15/tePc5OsLDnipUjW8LAxE6lX -# KZYnLvWHpo9OdhVVJnCYJn+gGkcgQ+NDY4B7dW4nJZCYOjgRs/b2nuY7W+yB3iIU -# 2YIqx5K/oN7jPqJz+ucfWmyU8lKVEStYdEAoq3NDzt9KoRxrOMUp88qqlnNCaJ+2 -# RrOdOqPVA+C/8KI8ykLcGEh/FDTP0kyr75s9/g64ZCr6dSgkQe1CvwWcZklSUPRR -# 8zZJTYsg0ixXNXkrqPNFYLwjjVj33GHek/45wPmyMKVM1+mYSlg+0wOI/rOP015L -# dhJRk8mMDDtbiiKowSYI+RQQEgN9XyO7ZONj4KbhPvbCdLI/Hgl27KtdRnXiYKNY -# CQEoAA6EVO7O6V3IXjASvUaetdN2udIOa5kM0jO0zbECAwEAAaOCAV0wggFZMBIG -# A1UdEwEB/wQIMAYBAf8CAQAwHQYDVR0OBBYEFLoW2W1NhS9zKXaaL3WMaiCPnshv -# MB8GA1UdIwQYMBaAFOzX44LScV1kTN8uZz/nupiuHA9PMA4GA1UdDwEB/wQEAwIB -# hjATBgNVHSUEDDAKBggrBgEFBQcDCDB3BggrBgEFBQcBAQRrMGkwJAYIKwYBBQUH -# MAGGGGh0dHA6Ly9vY3NwLmRpZ2ljZXJ0LmNvbTBBBggrBgEFBQcwAoY1aHR0cDov -# L2NhY2VydHMuZGlnaWNlcnQuY29tL0RpZ2lDZXJ0VHJ1c3RlZFJvb3RHNC5jcnQw -# QwYDVR0fBDwwOjA4oDagNIYyaHR0cDovL2NybDMuZGlnaWNlcnQuY29tL0RpZ2lD -# ZXJ0VHJ1c3RlZFJvb3RHNC5jcmwwIAYDVR0gBBkwFzAIBgZngQwBBAIwCwYJYIZI -# 
AYb9bAcBMA0GCSqGSIb3DQEBCwUAA4ICAQB9WY7Ak7ZvmKlEIgF+ZtbYIULhsBgu -# EE0TzzBTzr8Y+8dQXeJLKftwig2qKWn8acHPHQfpPmDI2AvlXFvXbYf6hCAlNDFn -# zbYSlm/EUExiHQwIgqgWvalWzxVzjQEiJc6VaT9Hd/tydBTX/6tPiix6q4XNQ1/t -# YLaqT5Fmniye4Iqs5f2MvGQmh2ySvZ180HAKfO+ovHVPulr3qRCyXen/KFSJ8NWK -# cXZl2szwcqMj+sAngkSumScbqyQeJsG33irr9p6xeZmBo1aGqwpFyd/EjaDnmPv7 -# pp1yr8THwcFqcdnGE4AJxLafzYeHJLtPo0m5d2aR8XKc6UsCUqc3fpNTrDsdCEkP -# lM05et3/JWOZJyw9P2un8WbDQc1PtkCbISFA0LcTJM3cHXg65J6t5TRxktcma+Q4 -# c6umAU+9Pzt4rUyt+8SVe+0KXzM5h0F4ejjpnOHdI/0dKNPH+ejxmF/7K9h+8kad -# dSweJywm228Vex4Ziza4k9Tm8heZWcpw8De/mADfIBZPJ/tgZxahZrrdVcA6KYaw -# mKAr7ZVBtzrVFZgxtGIJDwq9gdkT/r+k0fNX2bwE+oLeMt8EifAAzV3C+dAjfwAL -# 5HYCJtnwZXZCpimHCUcr5n8apIUP/JiW9lVUKx+A+sDyDivl1vupL0QVSucTDh3b -# NzgaoSv27dZ8/DCCBY0wggR1oAMCAQICEA6bGI750C3n79tQ4ghAGFowDQYJKoZI -# hvcNAQEMBQAwZTELMAkGA1UEBhMCVVMxFTATBgNVBAoTDERpZ2lDZXJ0IEluYzEZ -# MBcGA1UECxMQd3d3LmRpZ2ljZXJ0LmNvbTEkMCIGA1UEAxMbRGlnaUNlcnQgQXNz -# dXJlZCBJRCBSb290IENBMB4XDTIyMDgwMTAwMDAwMFoXDTMxMTEwOTIzNTk1OVow -# YjELMAkGA1UEBhMCVVMxFTATBgNVBAoTDERpZ2lDZXJ0IEluYzEZMBcGA1UECxMQ -# d3d3LmRpZ2ljZXJ0LmNvbTEhMB8GA1UEAxMYRGlnaUNlcnQgVHJ1c3RlZCBSb290 -# IEc0MIICIjANBgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAv+aQc2jeu+RdSjww -# IjBpM+zCpyUuySE98orYWcLhKac9WKt2ms2uexuEDcQwH/MbpDgW61bGl20dq7J5 -# 8soR0uRf1gU8Ug9SH8aeFaV+vp+pVxZZVXKvaJNwwrK6dZlqczKU0RBEEC7fgvMH -# hOZ0O21x4i0MG+4g1ckgHWMpLc7sXk7Ik/ghYZs06wXGXuxbGrzryc/NrDRAX7F6 -# Zu53yEioZldXn1RYjgwrt0+nMNlW7sp7XeOtyU9e5TXnMcvak17cjo+A2raRmECQ -# ecN4x7axxLVqGDgDEI3Y1DekLgV9iPWCPhCRcKtVgkEy19sEcypukQF8IUzUvK4b -# A3VdeGbZOjFEmjNAvwjXWkmkwuapoGfdpCe8oU85tRFYF/ckXEaPZPfBaYh2mHY9 -# WV1CdoeJl2l6SPDgohIbZpp0yt5LHucOY67m1O+SkjqePdwA5EUlibaaRBkrfsCU -# tNJhbesz2cXfSwQAzH0clcOP9yGyshG3u3/y1YxwLEFgqrFjGESVGnZifvaAsPvo -# ZKYz0YkH4b235kOkGLimdwHhD5QMIR2yVCkliWzlDlJRR3S+Jqy2QXXeeqxfjT/J -# vNNBERJb5RBQ6zHFynIWIgnffEx1P2PsIV/EIFFrb7GrhotPwtZFX50g/KEexcCP -# orF+CiaZ9eRpL5gdLfXZqbId5RsCAwEAAaOCATowggE2MA8GA1UdEwEB/wQFMAMB -# Af8wHQYDVR0OBBYEFOzX44LScV1kTN8uZz/nupiuHA9PMB8GA1UdIwQYMBaAFEXr -# oq/0ksuCMS1Ri6enIZ3zbcgPMA4GA1UdDwEB/wQEAwIBhjB5BggrBgEFBQcBAQRt -# MGswJAYIKwYBBQUHMAGGGGh0dHA6Ly9vY3NwLmRpZ2ljZXJ0LmNvbTBDBggrBgEF -# BQcwAoY3aHR0cDovL2NhY2VydHMuZGlnaWNlcnQuY29tL0RpZ2lDZXJ0QXNzdXJl -# ZElEUm9vdENBLmNydDBFBgNVHR8EPjA8MDqgOKA2hjRodHRwOi8vY3JsMy5kaWdp -# Y2VydC5jb20vRGlnaUNlcnRBc3N1cmVkSURSb290Q0EuY3JsMBEGA1UdIAQKMAgw -# BgYEVR0gADANBgkqhkiG9w0BAQwFAAOCAQEAcKC/Q1xV5zhfoKN0Gz22Ftf3v1cH -# vZqsoYcs7IVeqRq7IviHGmlUIu2kiHdtvRoU9BNKei8ttzjv9P+Aufih9/Jy3iS8 -# UgPITtAq3votVs/59PesMHqai7Je1M/RQ0SbQyHrlnKhSLSZy51PpwYDE3cnRNTn -# f+hZqPC/Lwum6fI0POz3A8eHqNJMQBk1RmppVLC4oVaO7KTVPeix3P0c2PR3WlxU -# jG/voVA9/HYJaISfb8rbII01YBwCA8sgsKxYoA5AY8WYIsGyWfVVa88nq2x2zm8j -# LfR+cWojayL/ErhULSd+2DrZ8LaHlv1b0VysGMNNn3O3AamfV6peKOK5lDGCA3Yw -# ggNyAgEBMHcwYzELMAkGA1UEBhMCVVMxFzAVBgNVBAoTDkRpZ2lDZXJ0LCBJbmMu -# MTswOQYDVQQDEzJEaWdpQ2VydCBUcnVzdGVkIEc0IFJTQTQwOTYgU0hBMjU2IFRp -# bWVTdGFtcGluZyBDQQIQDE1pckuU+jwqSj0pB4A9WjANBglghkgBZQMEAgEFAKCB -# 0TAaBgkqhkiG9w0BCQMxDQYLKoZIhvcNAQkQAQQwHAYJKoZIhvcNAQkFMQ8XDTIz -# MDQwNTAwMDI0NVowKwYLKoZIhvcNAQkQAgwxHDAaMBgwFgQU84ciTYYzgpI1qZS8 -# vY+W6f4cfHMwLwYJKoZIhvcNAQkEMSIEIJBlriw3QJw5qq5ADo60uCRAA2a3vjKn -# zaGl8ppJqVo5MDcGCyqGSIb3DQEJEAIvMSgwJjAkMCIEIMf04b4yKIkgq+ImOr4a -# xPxP5ngcLWTQTIB1V6Ajtbb6MA0GCSqGSIb3DQEBAQUABIICADC4cqKeH7lb0Ll+ -# iZIDw+mU6vcA3C8vUPR4KdqQmVlEkjfKdHBpHOI1eRkXwesD+BkrXpRX/NMNKm5w -# eKlymuuS70/NOX03BgnP4A9p4TqSZJcLvrP5VUc7VlMaVwkNj47vft4OF9A7PFs4 -# 3e8BJmhhkXDh1j+MdQ5URPGsla8uYm74Cn/T2WPNZ5FFQ8nkoVz93x1c5wUYEruB -# 
uIyFKwZshDnsYsHetZoBMpWDspcXj0kKAplBW0hUw6kgX7qBKX7doTcZPXP00VM8 -# vYnpQkJPGrTZ4S/cN0D5k0ZTXTCTDtOpFaZLbG29OgSFxD/TslfXkf1t8GiuzXvk -# u6xLEPxBW9N4yrun+jUjXr0921HEg7BKRr77bGS9v9b4mfzThomjtdcL3bweU5RE -# 3Bg4qVrgNF9Io8L/n39U7Zd5LG4Nacd+Uv+B1x6sfyQP+vGvY0UEiJUhkGy0ymzm -# RBtsPmJanvIovpkYebSccueoeC08/AUf2LxZ6lfGxkJp95vNj4pWToYRXY2dj5JE -# 6nX7mLYn3mWMbXniPhtpnYJeDahE2cuB3pqbhZSlGpOtF7fEPSCBq9P3YMnuyRun -# thsRTf8xc30muj6bRejnUJj0bNQaByZKAhEENnqH0TXBF7yasT1H3/PyC1pgyzx8 -# swIsvJFXCqG2u9lftpHuQYmHPDoq -# SIG # End signature block diff --git a/spaces/BraydenMoore/MARCI-NFL-Betting/Source/Train/xgboost_ATS.py b/spaces/BraydenMoore/MARCI-NFL-Betting/Source/Train/xgboost_ATS.py deleted file mode 100644 index 609c9608f76bc06a18817b145259eb875ec593de..0000000000000000000000000000000000000000 --- a/spaces/BraydenMoore/MARCI-NFL-Betting/Source/Train/xgboost_ATS.py +++ /dev/null @@ -1,73 +0,0 @@ -import xgboost as xgb -import pandas as pd -import pickle as pkl -import numpy as np -from tqdm import tqdm -from IPython.display import clear_output -from sklearn.metrics import accuracy_score -from sklearn.model_selection import train_test_split -import os - -current_directory = os.path.dirname(os.path.abspath(__file__)) -parent_directory = os.path.dirname(current_directory) -data_directory = os.path.join(parent_directory, 'Data') -model_directory = os.path.join(parent_directory, 'Models') -pickle_directory = os.path.join(parent_directory, 'Pickles') - -file_path = os.path.join(data_directory, 'gbg_and_odds.csv') -data = pd.read_csv(file_path).dropna() - -margin = data['Home-Team-Cover'] - -data.drop(columns=['Home-Team-Win','Home-Team-Cover','Over','Season','home_team','away_team','game_date','Key','Home Score','Away Score','Home Odds Close','Away Odds Close','Home Winnings','Away Winnings', 'Home Odds', 'Away Odds'], inplace=True) -features = [i for i in data.columns if i!='game_id'] -print(features) -acc_results = [] - -for x in tqdm(range(100)): - X_train, X_test, y_train, y_test = train_test_split(data, margin, test_size=.1) - - train_games = X_train['game_id'] - test_games = X_test['game_id'] - - X_train.drop(columns=['game_id'], inplace=True) - X_test.drop(columns=['game_id'], inplace=True) - - train = xgb.DMatrix(X_train.astype(float).values, label=y_train) - test = xgb.DMatrix(X_test.astype(float).values, label=y_test) - - param = { - 'max_depth': 6, - 'eta': 0.01, - 'objective': 'multi:softprob', - 'num_class': 3 - } - epochs = 500 - - model = xgb.train(param, train, epochs) - predictions = model.predict(test) - y = [] - for z in predictions: - y.append(np.argmax(z)) - - acc = round(accuracy_score(y_test, y)*100, 1) - acc_results.append(acc) - clear_output(wait=True) - print(f"Best accuracy: {max(acc_results)}%") - - # only save results if they are the best so far - if acc == max(acc_results): - file_path = os.path.join(pickle_directory, 'train_games_ATS_no_odds.pkl') - with open(file_path,'wb') as f: - pkl.dump(train_games,f) - - file_path = os.path.join(pickle_directory, 'test_games_ATS_no_odds.pkl') - with open(file_path,'wb') as f: - pkl.dump(test_games,f) - - file_path = os.path.join(model_directory, f'xgboost_ATS_no_odds_{acc}%.json') - model.save_model(file_path) - -importances = (model.get_score(importance_type='gain')) -print(pd.DataFrame(zip(features,importances.values())).sort_values(1,ascending=False)) -print('Done') diff --git a/spaces/CVH-vn1210/make_hair/minigpt4/runners/runner_base.py b/spaces/CVH-vn1210/make_hair/minigpt4/runners/runner_base.py deleted file mode 100644 index 
5f667f213d3874e3b616080df22de9ff91a9844b..0000000000000000000000000000000000000000 --- a/spaces/CVH-vn1210/make_hair/minigpt4/runners/runner_base.py +++ /dev/null @@ -1,658 +0,0 @@ -""" - Copyright (c) 2022, salesforce.com, inc. - All rights reserved. - SPDX-License-Identifier: BSD-3-Clause - For full license text, see the LICENSE file in the repo root or https://opensource.org/licenses/BSD-3-Clause -""" - -import datetime -import json -import logging -import os -import time -from pathlib import Path - -import torch -import torch.distributed as dist -import webdataset as wds -from minigpt4.common.dist_utils import ( - download_cached_file, - get_rank, - get_world_size, - is_main_process, - main_process, -) -from minigpt4.common.registry import registry -from minigpt4.common.utils import is_url -from minigpt4.datasets.data_utils import concat_datasets, reorg_datasets_by_split, ChainDataset -from minigpt4.datasets.datasets.dataloader_utils import ( - IterLoader, - MultiIterLoader, - PrefetchLoader, -) -from torch.nn.parallel import DistributedDataParallel as DDP -from torch.utils.data import DataLoader, DistributedSampler - - -@registry.register_runner("runner_base") -class RunnerBase: - """ - A runner class to train and evaluate a model given a task and datasets. - - The runner uses pytorch distributed data parallel by default. Future release - will support other distributed frameworks. - """ - - def __init__(self, cfg, task, model, datasets, job_id): - self.config = cfg - self.job_id = job_id - - self.task = task - self.datasets = datasets - - self._model = model - - self._wrapped_model = None - self._device = None - self._optimizer = None - self._scaler = None - self._dataloaders = None - self._lr_sched = None - - self.start_epoch = 0 - - # self.setup_seeds() - self.setup_output_dir() - - @property - def device(self): - if self._device is None: - self._device = torch.device(self.config.run_cfg.device) - - return self._device - - @property - def use_distributed(self): - return self.config.run_cfg.distributed - - @property - def model(self): - """ - A property to get the DDP-wrapped model on the device. 
- """ - # move model to device - if self._model.device != self.device: - self._model = self._model.to(self.device) - - # distributed training wrapper - if self.use_distributed: - if self._wrapped_model is None: - self._wrapped_model = DDP( - self._model, device_ids=[self.config.run_cfg.gpu] - ) - else: - self._wrapped_model = self._model - - return self._wrapped_model - - @property - def optimizer(self): - # TODO make optimizer class and configurations - if self._optimizer is None: - num_parameters = 0 - p_wd, p_non_wd = [], [] - for n, p in self.model.named_parameters(): - if not p.requires_grad: - continue # frozen weights - print(n) - if p.ndim < 2 or "bias" in n or "ln" in n or "bn" in n: - p_non_wd.append(p) - else: - p_wd.append(p) - num_parameters += p.data.nelement() - logging.info("number of trainable parameters: %d" % num_parameters) - optim_params = [ - { - "params": p_wd, - "weight_decay": float(self.config.run_cfg.weight_decay), - }, - {"params": p_non_wd, "weight_decay": 0}, - ] - beta2 = self.config.run_cfg.get("beta2", 0.999) - self._optimizer = torch.optim.AdamW( - optim_params, - lr=float(self.config.run_cfg.init_lr), - weight_decay=float(self.config.run_cfg.weight_decay), - betas=(0.9, beta2), - ) - - return self._optimizer - - @property - def scaler(self): - amp = self.config.run_cfg.get("amp", False) - - if amp: - if self._scaler is None: - self._scaler = torch.cuda.amp.GradScaler() - - return self._scaler - - @property - def lr_scheduler(self): - """ - A property to get and create learning rate scheduler by split just in need. - """ - if self._lr_sched is None: - lr_sched_cls = registry.get_lr_scheduler_class(self.config.run_cfg.lr_sched) - - # max_epoch = self.config.run_cfg.max_epoch - max_epoch = self.max_epoch - # min_lr = self.config.run_cfg.min_lr - min_lr = self.min_lr - # init_lr = self.config.run_cfg.init_lr - init_lr = self.init_lr - - # optional parameters - decay_rate = self.config.run_cfg.get("lr_decay_rate", None) - warmup_start_lr = self.config.run_cfg.get("warmup_lr", -1) - warmup_steps = self.config.run_cfg.get("warmup_steps", 0) - iters_per_epoch = self.config.run_cfg.get("iters_per_epoch", None) - - if iters_per_epoch is None: - try: - iters_per_epoch = len(self.dataloaders['train']) - except (AttributeError, TypeError): - iters_per_epoch = 10000 - - self._lr_sched = lr_sched_cls( - optimizer=self.optimizer, - max_epoch=max_epoch, - iters_per_epoch=iters_per_epoch, - min_lr=min_lr, - init_lr=init_lr, - decay_rate=decay_rate, - warmup_start_lr=warmup_start_lr, - warmup_steps=warmup_steps, - ) - - return self._lr_sched - - @property - def dataloaders(self) -> dict: - """ - A property to get and create dataloaders by split just in need. - - If no train_dataset_ratio is provided, concatenate map-style datasets and - chain wds.DataPipe datasets separately. Training set becomes a tuple - (ConcatDataset, ChainDataset), both are optional but at least one of them is - required. The resultant ConcatDataset and ChainDataset will be sampled evenly. - - If train_dataset_ratio is provided, create a MultiIterLoader to sample - each dataset by ratios during training. - - Currently do not support multiple datasets for validation and test. - - Returns: - dict: {split_name: (tuples of) dataloader} - """ - if self._dataloaders is None: - - # concatenate map-style datasets and chain wds.DataPipe datasets separately - # training set becomes a tuple (ConcatDataset, ChainDataset), both are - # optional but at least one of them is required. 
The resultant ConcatDataset - # and ChainDataset will be sampled evenly. - logging.info( - "dataset_ratios not specified, datasets will be concatenated (map-style datasets) or chained (webdataset.DataPipeline)." - ) - - datasets = reorg_datasets_by_split(self.datasets) - self.datasets = datasets - # self.datasets = concat_datasets(datasets) - - # print dataset statistics after concatenation/chaining - for split_name in self.datasets: - if isinstance(self.datasets[split_name], tuple) or isinstance( - self.datasets[split_name], list - ): - # mixed wds.DataPipeline and torch.utils.data.Dataset - num_records = sum( - [ - len(d) - if not type(d) in [wds.DataPipeline, ChainDataset] - else 0 - for d in self.datasets[split_name] - ] - ) - - else: - if hasattr(self.datasets[split_name], "__len__"): - # a single map-style dataset - num_records = len(self.datasets[split_name]) - else: - # a single wds.DataPipeline - num_records = -1 - logging.info( - "Only a single wds.DataPipeline dataset, no __len__ attribute." - ) - - if num_records >= 0: - logging.info( - "Loaded {} records for {} split from the dataset.".format( - num_records, split_name - ) - ) - - # create dataloaders - split_names = sorted(self.datasets.keys()) - - datasets = [self.datasets[split] for split in split_names] - is_trains = [split in self.train_splits for split in split_names] - - batch_sizes = [ - self.config.run_cfg.batch_size_train - if split == "train" - else self.config.run_cfg.batch_size_eval - for split in split_names - ] - - collate_fns = [] - for dataset in datasets: - if isinstance(dataset, tuple) or isinstance(dataset, list): - collate_fns.append([getattr(d, "collater", None) for d in dataset]) - else: - collate_fns.append(getattr(dataset, "collater", None)) - - dataloaders = self.create_loaders( - datasets=datasets, - num_workers=self.config.run_cfg.num_workers, - batch_sizes=batch_sizes, - is_trains=is_trains, - collate_fns=collate_fns, - ) - - self._dataloaders = {k: v for k, v in zip(split_names, dataloaders)} - - return self._dataloaders - - @property - def cuda_enabled(self): - return self.device.type == "cuda" - - @property - def max_epoch(self): - return int(self.config.run_cfg.max_epoch) - - @property - def log_freq(self): - log_freq = self.config.run_cfg.get("log_freq", 50) - return int(log_freq) - - @property - def init_lr(self): - return float(self.config.run_cfg.init_lr) - - @property - def min_lr(self): - return float(self.config.run_cfg.min_lr) - - @property - def accum_grad_iters(self): - return int(self.config.run_cfg.get("accum_grad_iters", 1)) - - @property - def valid_splits(self): - valid_splits = self.config.run_cfg.get("valid_splits", []) - - if len(valid_splits) == 0: - logging.info("No validation splits found.") - - return valid_splits - - @property - def test_splits(self): - test_splits = self.config.run_cfg.get("test_splits", []) - - return test_splits - - @property - def train_splits(self): - train_splits = self.config.run_cfg.get("train_splits", []) - - if len(train_splits) == 0: - logging.info("Empty train splits.") - - return train_splits - - @property - def evaluate_only(self): - """ - Set to True to skip training. 
- """ - return self.config.run_cfg.evaluate - - @property - def use_dist_eval_sampler(self): - return self.config.run_cfg.get("use_dist_eval_sampler", True) - - @property - def resume_ckpt_path(self): - return self.config.run_cfg.get("resume_ckpt_path", None) - - @property - def train_loader(self): - train_dataloader = self.dataloaders["train"] - - return train_dataloader - - def setup_output_dir(self): - lib_root = Path(registry.get_path("library_root")) - - output_dir = lib_root / self.config.run_cfg.output_dir / self.job_id - result_dir = output_dir / "result" - - output_dir.mkdir(parents=True, exist_ok=True) - result_dir.mkdir(parents=True, exist_ok=True) - - registry.register_path("result_dir", str(result_dir)) - registry.register_path("output_dir", str(output_dir)) - - self.result_dir = result_dir - self.output_dir = output_dir - - def train(self): - start_time = time.time() - best_agg_metric = 0 - best_epoch = 0 - - self.log_config() - - # resume from checkpoint if specified - if not self.evaluate_only and self.resume_ckpt_path is not None: - self._load_checkpoint(self.resume_ckpt_path) - - for cur_epoch in range(self.start_epoch, self.max_epoch): - # training phase - if not self.evaluate_only: - logging.info("Start training") - train_stats = self.train_epoch(cur_epoch) - self.log_stats(split_name="train", stats=train_stats) - - # evaluation phase - if len(self.valid_splits) > 0: - for split_name in self.valid_splits: - logging.info("Evaluating on {}.".format(split_name)) - - val_log = self.eval_epoch( - split_name=split_name, cur_epoch=cur_epoch - ) - if val_log is not None: - if is_main_process(): - assert ( - "agg_metrics" in val_log - ), "No agg_metrics found in validation log." - - agg_metrics = val_log["agg_metrics"] - if agg_metrics > best_agg_metric and split_name == "val": - best_epoch, best_agg_metric = cur_epoch, agg_metrics - - self._save_checkpoint(cur_epoch, is_best=True) - - val_log.update({"best_epoch": best_epoch}) - self.log_stats(val_log, split_name) - - else: - # if no validation split is provided, we just save the checkpoint at the end of each epoch. - if not self.evaluate_only: - self._save_checkpoint(cur_epoch, is_best=False) - - if self.evaluate_only: - break - - if self.config.run_cfg.distributed: - dist.barrier() - - # testing phase - test_epoch = "best" if len(self.valid_splits) > 0 else cur_epoch - self.evaluate(cur_epoch=test_epoch, skip_reload=self.evaluate_only) - - total_time = time.time() - start_time - total_time_str = str(datetime.timedelta(seconds=int(total_time))) - logging.info("Training time {}".format(total_time_str)) - - def evaluate(self, cur_epoch="best", skip_reload=False): - test_logs = dict() - - if len(self.test_splits) > 0: - for split_name in self.test_splits: - test_logs[split_name] = self.eval_epoch( - split_name=split_name, cur_epoch=cur_epoch, skip_reload=skip_reload - ) - - return test_logs - - def train_epoch(self, epoch): - # train - self.model.train() - - return self.task.train_epoch( - epoch=epoch, - model=self.model, - data_loader=self.train_loader, - optimizer=self.optimizer, - scaler=self.scaler, - lr_scheduler=self.lr_scheduler, - cuda_enabled=self.cuda_enabled, - log_freq=self.log_freq, - accum_grad_iters=self.accum_grad_iters, - ) - - @torch.no_grad() - def eval_epoch(self, split_name, cur_epoch, skip_reload=False): - """ - Evaluate the model on a given split. - - Args: - split_name (str): name of the split to evaluate on. - cur_epoch (int): current epoch. 
- skip_reload_best (bool): whether to skip reloading the best checkpoint. - During training, we will reload the best checkpoint for validation. - During testing, we will use provided weights and skip reloading the best checkpoint . - """ - data_loader = self.dataloaders.get(split_name, None) - assert data_loader, "data_loader for split {} is None.".format(split_name) - - # TODO In validation, you need to compute loss as well as metrics - # TODO consider moving to model.before_evaluation() - model = self.unwrap_dist_model(self.model) - if not skip_reload and cur_epoch == "best": - model = self._reload_best_model(model) - model.eval() - - self.task.before_evaluation( - model=model, - dataset=self.datasets[split_name], - ) - results = self.task.evaluation(model, data_loader) - - if results is not None: - return self.task.after_evaluation( - val_result=results, - split_name=split_name, - epoch=cur_epoch, - ) - - def unwrap_dist_model(self, model): - if self.use_distributed: - return model.module - else: - return model - - def create_loaders( - self, - datasets, - num_workers, - batch_sizes, - is_trains, - collate_fns, - dataset_ratios=None, - ): - """ - Create dataloaders for training and validation. - """ - - def _create_loader(dataset, num_workers, bsz, is_train, collate_fn): - # create a single dataloader for each split - if isinstance(dataset, ChainDataset) or isinstance( - dataset, wds.DataPipeline - ): - # wds.WebdDataset instance are chained together - # webdataset.DataPipeline has its own sampler and collate_fn - loader = iter( - DataLoader( - dataset, - batch_size=bsz, - num_workers=num_workers, - pin_memory=True, - ) - ) - else: - # map-style dataset are concatenated together - # setup distributed sampler - if self.use_distributed: - sampler = DistributedSampler( - dataset, - shuffle=is_train, - num_replicas=get_world_size(), - rank=get_rank(), - ) - if not self.use_dist_eval_sampler: - # e.g. retrieval evaluation - sampler = sampler if is_train else None - else: - sampler = None - - loader = DataLoader( - dataset, - batch_size=bsz, - num_workers=num_workers, - pin_memory=True, - sampler=sampler, - shuffle=sampler is None and is_train, - collate_fn=collate_fn, - drop_last=True if is_train else False, - ) - loader = PrefetchLoader(loader) - - if is_train: - loader = IterLoader(loader, use_distributed=self.use_distributed) - - return loader - - loaders = [] - - for dataset, bsz, is_train, collate_fn in zip( - datasets, batch_sizes, is_trains, collate_fns - ): - if isinstance(dataset, list) or isinstance(dataset, tuple): - if hasattr(dataset[0], 'sample_ratio') and dataset_ratios is None: - dataset_ratios = [d.sample_ratio for d in dataset] - loader = MultiIterLoader( - loaders=[ - _create_loader(d, num_workers, bsz, is_train, collate_fn[i]) - for i, d in enumerate(dataset) - ], - ratios=dataset_ratios, - ) - else: - loader = _create_loader(dataset, num_workers, bsz, is_train, collate_fn) - - loaders.append(loader) - - return loaders - - @main_process - def _save_checkpoint(self, cur_epoch, is_best=False): - """ - Save the checkpoint at the current epoch. 
- """ - model_no_ddp = self.unwrap_dist_model(self.model) - param_grad_dic = { - k: v.requires_grad for (k, v) in model_no_ddp.named_parameters() - } - state_dict = model_no_ddp.state_dict() - for k in list(state_dict.keys()): - if k in param_grad_dic.keys() and not param_grad_dic[k]: - # delete parameters that do not require gradient - del state_dict[k] - save_obj = { - "model": state_dict, - "optimizer": self.optimizer.state_dict(), - "config": self.config.to_dict(), - "scaler": self.scaler.state_dict() if self.scaler else None, - "epoch": cur_epoch, - } - save_to = os.path.join( - self.output_dir, - "checkpoint_{}.pth".format("best" if is_best else cur_epoch), - ) - logging.info("Saving checkpoint at epoch {} to {}.".format(cur_epoch, save_to)) - torch.save(save_obj, save_to) - - def _reload_best_model(self, model): - """ - Load the best checkpoint for evaluation. - """ - checkpoint_path = os.path.join(self.output_dir, "checkpoint_best.pth") - - logging.info("Loading checkpoint from {}.".format(checkpoint_path)) - checkpoint = torch.load(checkpoint_path, map_location="cpu") - try: - model.load_state_dict(checkpoint["model"]) - except RuntimeError as e: - logging.warning( - """ - Key mismatch when loading checkpoint. This is expected if only part of the model is saved. - Trying to load the model with strict=False. - """ - ) - model.load_state_dict(checkpoint["model"], strict=False) - return model - - def _load_checkpoint(self, url_or_filename): - """ - Resume from a checkpoint. - """ - if is_url(url_or_filename): - cached_file = download_cached_file( - url_or_filename, check_hash=False, progress=True - ) - checkpoint = torch.load(cached_file, map_location=self.device, strict=False) - elif os.path.isfile(url_or_filename): - checkpoint = torch.load(url_or_filename, map_location=self.device, strict=False) - else: - raise RuntimeError("checkpoint url or path is invalid") - - state_dict = checkpoint["model"] - self.unwrap_dist_model(self.model).load_state_dict(state_dict) - - self.optimizer.load_state_dict(checkpoint["optimizer"]) - if self.scaler and "scaler" in checkpoint: - self.scaler.load_state_dict(checkpoint["scaler"]) - - self.start_epoch = checkpoint["epoch"] + 1 - logging.info("Resume checkpoint from {}".format(url_or_filename)) - - @main_process - def log_stats(self, stats, split_name): - if isinstance(stats, dict): - log_stats = {**{f"{split_name}_{k}": v for k, v in stats.items()}} - with open(os.path.join(self.output_dir, "log.txt"), "a") as f: - f.write(json.dumps(log_stats) + "\n") - elif isinstance(stats, list): - pass - - @main_process - def log_config(self): - with open(os.path.join(self.output_dir, "log.txt"), "a") as f: - f.write(json.dumps(self.config.to_dict(), indent=4) + "\n") diff --git a/spaces/CVPR/LIVE/thrust/thrust/detail/integer_traits.h b/spaces/CVPR/LIVE/thrust/thrust/detail/integer_traits.h deleted file mode 100644 index 97ab4f94da2272829be05545121e1ec2b186cf46..0000000000000000000000000000000000000000 --- a/spaces/CVPR/LIVE/thrust/thrust/detail/integer_traits.h +++ /dev/null @@ -1,132 +0,0 @@ -/* - * Copyright 2008-2013 NVIDIA Corporation - * - * Licensed under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. 
- * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -#pragma once - -#include -#include -#include - -namespace thrust -{ - -namespace detail -{ - -template - class integer_traits -{ - public: - static const bool is_integral = false; -}; - -template - class integer_traits_base -{ - public: - static const bool is_integral = true; - static const T const_min = min_val; - static const T const_max = max_val; -}; - - -template<> - class integer_traits - : public std::numeric_limits, - public integer_traits_base -{}; - - -template<> - class integer_traits - : public std::numeric_limits, - public integer_traits_base -{}; - - -template<> - class integer_traits - : public std::numeric_limits, - public integer_traits_base -{}; - - -template<> - class integer_traits - : public std::numeric_limits, - public integer_traits_base -{}; - - -template<> - class integer_traits - : public std::numeric_limits, - public integer_traits_base -{}; - - -template<> - class integer_traits - : public std::numeric_limits, - public integer_traits_base -{}; - - -template<> - class integer_traits - : public std::numeric_limits, - public integer_traits_base -{}; - - -template<> - class integer_traits - : public std::numeric_limits, - public integer_traits_base -{}; - - -template<> - class integer_traits - : public std::numeric_limits, - public integer_traits_base -{}; - - -template<> - class integer_traits - : public std::numeric_limits, - public integer_traits_base -{}; - - -template<> - class integer_traits - : public std::numeric_limits, - public integer_traits_base -{}; - - -template<> - class integer_traits - : public std::numeric_limits, - public integer_traits_base -{}; - -} // end detail - -} // end thrust - diff --git a/spaces/CVPR/LIVE/thrust/thrust/future.h b/spaces/CVPR/LIVE/thrust/thrust/future.h deleted file mode 100644 index 12bebf8c6e041484b43d5a97759cccd730fc82f3..0000000000000000000000000000000000000000 --- a/spaces/CVPR/LIVE/thrust/thrust/future.h +++ /dev/null @@ -1,179 +0,0 @@ -/* - * Copyright 2008-2013 NVIDIA Corporation - * - * Licensed under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -/*! \file thrust/future.h - * \brief `thrust::future`, an asynchronous value type. - */ - -#pragma once - -#include -#include -#include - -#if THRUST_CPP_DIALECT >= 2011 && !defined(THRUST_LEGACY_GCC) - -#include -#include - -#include - -/* -// #include the host system's pointer.h header. -#define __THRUST_HOST_SYSTEM_POINTER_HEADER <__THRUST_HOST_SYSTEM_ROOT/pointer.h> - #include __THRUST_HOST_SYSTEM_POINTER_HEADER -#undef __THRUST_HOST_SYSTEM_POINTER_HEADER -*/ - -// #include the device system's pointer.h header. 
-#define __THRUST_DEVICE_SYSTEM_POINTER_HEADER <__THRUST_DEVICE_SYSTEM_ROOT/pointer.h> - #include __THRUST_DEVICE_SYSTEM_POINTER_HEADER -#undef __THRUST_DEVICE_SYSTEM_POINTER_HEADER - -/* -// #include the host system's future.h header. -#define __THRUST_HOST_SYSTEM_FUTURE_HEADER <__THRUST_HOST_SYSTEM_ROOT/future.h> - #include __THRUST_HOST_SYSTEM_FUTURE_HEADER -#undef __THRUST_HOST_SYSTEM_FUTURE_HEADER -*/ - -// #include the device system's future.h header. -#define __THRUST_DEVICE_SYSTEM_FUTURE_HEADER <__THRUST_DEVICE_SYSTEM_ROOT/future.h> - #include __THRUST_DEVICE_SYSTEM_FUTURE_HEADER -#undef __THRUST_DEVICE_SYSTEM_FUTURE_HEADER - -namespace thrust -{ - -/////////////////////////////////////////////////////////////////////////////// - -// `select_unique_(future|event)_type` is a hook for choosing the -// `unique_eager_event`/`unique_eager_future` type for a system. `decltype` is -// used to determine the return type of an ADL call to -// `select_unique_eager_(future|event)_type(system)`; that return type should -// be the correct event/future type for `system`. Overloads should only be -// declared, not defined. - -namespace unimplemented -{ - -struct no_unique_eager_event_type_found {}; - -inline __host__ -no_unique_eager_event_type_found -unique_eager_event_type(...) noexcept; - -struct no_unique_eager_future_type_found {}; - -template -__host__ -no_unique_eager_future_type_found -unique_eager_future_type(...) noexcept; - -} // namespace unimplemented - -namespace unique_eager_event_type_detail -{ - -using unimplemented::unique_eager_event_type; - -template -using select = decltype( - unique_eager_event_type(std::declval()) -); - -} // namespace unique_eager_event_type_detail - -namespace unique_eager_future_type_detail -{ - -using unimplemented::unique_eager_future_type; - -template -using select = decltype( - unique_eager_future_type(std::declval()) -); - -} // namespace unique_eager_future_type_detail - -/////////////////////////////////////////////////////////////////////////////// - -template -using unique_eager_event = unique_eager_event_type_detail::select; - -template -using event = unique_eager_event; - -/////////////////////////////////////////////////////////////////////////////// - -template -using unique_eager_future = unique_eager_future_type_detail::select; - -template -using future = unique_eager_future; - -/* -/////////////////////////////////////////////////////////////////////////////// - -using host_unique_eager_event = unique_eager_event_type_detail::select< - thrust::system::__THRUST_HOST_SYSTEM_NAMESPACE::tag ->; -using host_event = host_unique_eager_event; - -/////////////////////////////////////////////////////////////////////////////// - -template -using host_unique_eager_future = unique_eager_future_type_detail::select< - thrust::system::__THRUST_HOST_SYSTEM_NAMESPACE::tag, T ->; -template -using host_future = host_unique_eager_future; -*/ - -/////////////////////////////////////////////////////////////////////////////// - -using device_unique_eager_event = unique_eager_event_type_detail::select< - thrust::system::__THRUST_DEVICE_SYSTEM_NAMESPACE::tag ->; - -using device_event = device_unique_eager_event; - -/////////////////////////////////////////////////////////////////////////////// - -template -using device_unique_eager_future = unique_eager_future_type_detail::select< - thrust::system::__THRUST_DEVICE_SYSTEM_NAMESPACE::tag, T ->; - -template -using device_future = device_unique_eager_future; - 
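// --- Editor's illustrative sketch (not part of the original thrust/future.h) ---
// The comment block above describes how `unique_eager_future`/`unique_eager_event`
// are chosen: `decltype` of an unqualified (ADL) call to
// `unique_eager_future_type(system)` yields the system-specific type, and overloads
// only need to be declared. The standalone sketch below shows that mechanism with
// made-up namespaces `sys_a`, `sys_b`, and `lib`; it is simplified (the real header
// also threads the value type T through the future selection).
#include <iostream>
#include <type_traits>
#include <utility>  // std::declval

namespace sys_a {
  struct tag {};
  struct future_a {};
  future_a unique_eager_future_type(tag);  // declaration only; found via ADL on sys_a::tag
}
namespace sys_b {
  struct tag {};
  struct future_b {};
  future_b unique_eager_future_type(tag);  // declaration only; found via ADL on sys_b::tag
}

namespace lib {
  // decltype of an unqualified call: argument-dependent lookup picks the overload
  // that matches System, so each system controls which future type is selected.
  template <typename System>
  using select_future = decltype(unique_eager_future_type(std::declval<System>()));
}

int main() {
  static_assert(std::is_same<lib::select_future<sys_a::tag>, sys_a::future_a>::value,
                "sys_a selects future_a");
  static_assert(std::is_same<lib::select_future<sys_b::tag>, sys_b::future_b>::value,
                "sys_b selects future_b");
  std::cout << "per-system future types selected at compile time\n";
  return 0;
}
// --- end of editor's sketch ---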
-/////////////////////////////////////////////////////////////////////////////// - -struct new_stream_t final {}; - -THRUST_INLINE_CONSTANT new_stream_t new_stream{}; - -/////////////////////////////////////////////////////////////////////////////// - -using thrust::system::__THRUST_DEVICE_SYSTEM_NAMESPACE::when_all; - -/////////////////////////////////////////////////////////////////////////////// - -} // end namespace thrust - -#endif - diff --git a/spaces/CVPR/LIVE/thrust/thrust/system/cuda/detail/guarded_driver_types.h b/spaces/CVPR/LIVE/thrust/thrust/system/cuda/detail/guarded_driver_types.h deleted file mode 100644 index 076964071cf78458de27fe54de3caf932ce93b40..0000000000000000000000000000000000000000 --- a/spaces/CVPR/LIVE/thrust/thrust/system/cuda/detail/guarded_driver_types.h +++ /dev/null @@ -1,63 +0,0 @@ -/* - * Copyright 2008-2013 NVIDIA Corporation - * - * Licensed under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -#pragma once - -#include - -// the purpose of this header is to #include without causing -// warnings from redefinitions of __host__ and __device__. -// carefully save their definitions and restore them -// can't tell exactly when push_macro & pop_macro were introduced to gcc; assume 4.5.0 - - -#if !defined(__GNUC__) || ((10000 * __GNUC__ + 100 * __GNUC_MINOR__ + __GNUC_PATCHLEVEL__) >= 40500) -# ifdef __host__ -# pragma push_macro("__host__") -# undef __host__ -# define THRUST_HOST_NEEDS_RESTORATION -# endif -# ifdef __device__ -# pragma push_macro("__device__") -# undef __device__ -# define THRUST_DEVICE_NEEDS_RESTORATION -# endif -#else // GNUC pre 4.5.0 -# if !defined(__DRIVER_TYPES_H__) -# ifdef __host__ -# undef __host__ -# endif -# ifdef __device__ -# undef __device__ -# endif -# endif // __DRIVER_TYPES_H__ -#endif // __GNUC__ - - -#include - - -#if !defined(__GNUC__) || ((10000 * __GNUC__ + 100 * __GNUC_MINOR__ + __GNUC_PATCHLEVEL__) >= 40500) -# ifdef THRUST_HOST_NEEDS_RESTORATION -# pragma pop_macro("__host__") -# undef THRUST_HOST_NEEDS_RESTORATION -# endif -# ifdef THRUST_DEVICE_NEEDS_RESTORATION -# pragma pop_macro("__device__") -# undef THRUST_DEVICE_NEEDS_RESTORATION -# endif -#endif // __GNUC__ - diff --git a/spaces/CVPR/LIVE/thrust/thrust/system/detail/sequential/stable_radix_sort.h b/spaces/CVPR/LIVE/thrust/thrust/system/detail/sequential/stable_radix_sort.h deleted file mode 100644 index 9f7482ccf9d89a16c2123a78a8d1389a880b7632..0000000000000000000000000000000000000000 --- a/spaces/CVPR/LIVE/thrust/thrust/system/detail/sequential/stable_radix_sort.h +++ /dev/null @@ -1,56 +0,0 @@ -/* - * Copyright 2008-2013 NVIDIA Corporation - * - * Licensed under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. 
- * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -#pragma once - -#include -#include - -namespace thrust -{ -namespace system -{ -namespace detail -{ -namespace sequential -{ - - -template -__host__ __device__ -void stable_radix_sort(sequential::execution_policy &exec, - RandomAccessIterator begin, - RandomAccessIterator end); - - -template -__host__ __device__ -void stable_radix_sort_by_key(sequential::execution_policy &exec, - RandomAccessIterator1 keys_begin, - RandomAccessIterator1 keys_end, - RandomAccessIterator2 values_begin); - - -} // end namespace sequential -} // end namespace detail -} // end namespace system -} // end namespace thrust - -#include - diff --git a/spaces/Chaitanya01/InvestingPlatform/README.md b/spaces/Chaitanya01/InvestingPlatform/README.md deleted file mode 100644 index 4c02605cb0bf4875a46705582fd007c8907eefc4..0000000000000000000000000000000000000000 --- a/spaces/Chaitanya01/InvestingPlatform/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Wizards Streamlit App -emoji: 👀 -colorFrom: indigo -colorTo: yellow -sdk: streamlit -sdk_version: 1.10.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/ChristopherMarais/Andrew_AI-BB_classification-beta/mysite/mysite/__init__.py b/spaces/ChristopherMarais/Andrew_AI-BB_classification-beta/mysite/mysite/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/CofAI/chat/g4f/typing.py b/spaces/CofAI/chat/g4f/typing.py deleted file mode 100644 index e41a567ae49dd26d2ace2a3732b0e8f0bbbaa4b0..0000000000000000000000000000000000000000 --- a/spaces/CofAI/chat/g4f/typing.py +++ /dev/null @@ -1,3 +0,0 @@ -from typing import Dict, NewType, Union, Optional, List, get_type_hints - -sha256 = NewType('sha_256_hash', str) \ No newline at end of file diff --git a/spaces/DEBO-PROJECT/DEBO-V1/bots/perfect_case_bot.py b/spaces/DEBO-PROJECT/DEBO-V1/bots/perfect_case_bot.py deleted file mode 100644 index 62d5fd26df6497d7a0f5160bffe85d56b1e1b305..0000000000000000000000000000000000000000 --- a/spaces/DEBO-PROJECT/DEBO-V1/bots/perfect_case_bot.py +++ /dev/null @@ -1,28 +0,0 @@ - - - -def perfect_case_selector(perfect_case_selected): - - if perfect_case_selected == "This house supports the creation of an international court with a mandate to prosecute leaders for health crimes": - perfect_case_url = "https://www.youtube.com/live/s8g4BLdhQQw?feature=share&t=782" - perfect_case_text_path = "./texts/model_speech1.txt" - - elif perfect_case_selected == "This house believes that governments would be justified in heavily pursuing long-termism": - perfect_case_url = "https://www.youtube.com/live/D-JXK_yw1bI?feature=share&t=1154" - perfect_case_text_path = "./texts/model_speech2.txt" - - elif perfect_case_selected == "THBT international discussion forums should not self-censor* in an attempt to increase inclusivity to people from countries with stringent freedom-of-speech rules.": - - perfect_case_url = "https://www.youtube.com/live/N2fXz3nfdfs?feature=share&t=1373" - 
perfect_case_text_path = "./texts/model_speech3.txt" - - - with open(perfect_case_text_path, "r") as f: - perfect_case_text = f.read() - - perfect_case_result = { - "perfect_case_url": perfect_case_url, - "perfect_case_text": perfect_case_text - } - - return perfect_case_result \ No newline at end of file diff --git a/spaces/DEBO-PROJECT/DEBO-V1/modules/history_modules.py b/spaces/DEBO-PROJECT/DEBO-V1/modules/history_modules.py deleted file mode 100644 index 6d929d80b8cc2319570ad4f58aba2942ed9e56a9..0000000000000000000000000000000000000000 --- a/spaces/DEBO-PROJECT/DEBO-V1/modules/history_modules.py +++ /dev/null @@ -1,32 +0,0 @@ -from modules.db_modules import get_lastest_item - - -def get_history( - table, - name_of_partition_key, - value_of_partition_key, - session_num - ): - - history_list = get_lastest_item( - table=table, - name_of_partition_key=name_of_partition_key, - value_of_partition_key=value_of_partition_key, - ) - - if history_list==[]: - history = "" - history_num = 0 - else: - history = "" - history_dummy_list = [] - for dic in history_list: - if dic['session_num'] == session_num: - history_dummy_list.append("User: " + dic['user_prompt']) - history_dummy_list.append("Bot: " + dic['bot_response']) - - history_num = int( len(history_dummy_list) / 2 ) # user와 bot이 한 세트이므로, 절반으로 나누기 - - history = "\n".join(history_dummy_list) - - return history, history_num diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/aiohttp/connector.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/aiohttp/connector.py deleted file mode 100644 index bf40689d81b53cd34550e9d8949767385ecd916d..0000000000000000000000000000000000000000 --- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/aiohttp/connector.py +++ /dev/null @@ -1,1453 +0,0 @@ -import asyncio -import functools -import random -import sys -import traceback -import warnings -from collections import defaultdict, deque -from contextlib import suppress -from http.cookies import SimpleCookie -from itertools import cycle, islice -from time import monotonic -from types import TracebackType -from typing import ( - TYPE_CHECKING, - Any, - Awaitable, - Callable, - DefaultDict, - Dict, - Iterator, - List, - Optional, - Set, - Tuple, - Type, - Union, - cast, -) - -import attr - -from . 
import hdrs, helpers -from .abc import AbstractResolver -from .client_exceptions import ( - ClientConnectionError, - ClientConnectorCertificateError, - ClientConnectorError, - ClientConnectorSSLError, - ClientHttpProxyError, - ClientProxyConnectionError, - ServerFingerprintMismatch, - UnixClientConnectorError, - cert_errors, - ssl_errors, -) -from .client_proto import ResponseHandler -from .client_reqrep import ClientRequest, Fingerprint, _merge_ssl_params -from .helpers import ( - PY_36, - ceil_timeout, - get_running_loop, - is_ip_address, - noop, - sentinel, -) -from .http import RESPONSES -from .locks import EventResultOrError -from .resolver import DefaultResolver - -try: - import ssl - - SSLContext = ssl.SSLContext -except ImportError: # pragma: no cover - ssl = None # type: ignore[assignment] - SSLContext = object # type: ignore[misc,assignment] - - -__all__ = ("BaseConnector", "TCPConnector", "UnixConnector", "NamedPipeConnector") - - -if TYPE_CHECKING: # pragma: no cover - from .client import ClientTimeout - from .client_reqrep import ConnectionKey - from .tracing import Trace - - -class _DeprecationWaiter: - __slots__ = ("_awaitable", "_awaited") - - def __init__(self, awaitable: Awaitable[Any]) -> None: - self._awaitable = awaitable - self._awaited = False - - def __await__(self) -> Any: - self._awaited = True - return self._awaitable.__await__() - - def __del__(self) -> None: - if not self._awaited: - warnings.warn( - "Connector.close() is a coroutine, " - "please use await connector.close()", - DeprecationWarning, - ) - - -class Connection: - - _source_traceback = None - _transport = None - - def __init__( - self, - connector: "BaseConnector", - key: "ConnectionKey", - protocol: ResponseHandler, - loop: asyncio.AbstractEventLoop, - ) -> None: - self._key = key - self._connector = connector - self._loop = loop - self._protocol: Optional[ResponseHandler] = protocol - self._callbacks: List[Callable[[], None]] = [] - - if loop.get_debug(): - self._source_traceback = traceback.extract_stack(sys._getframe(1)) - - def __repr__(self) -> str: - return f"Connection<{self._key}>" - - def __del__(self, _warnings: Any = warnings) -> None: - if self._protocol is not None: - if PY_36: - kwargs = {"source": self} - else: - kwargs = {} - _warnings.warn(f"Unclosed connection {self!r}", ResourceWarning, **kwargs) - if self._loop.is_closed(): - return - - self._connector._release(self._key, self._protocol, should_close=True) - - context = {"client_connection": self, "message": "Unclosed connection"} - if self._source_traceback is not None: - context["source_traceback"] = self._source_traceback - self._loop.call_exception_handler(context) - - @property - def loop(self) -> asyncio.AbstractEventLoop: - warnings.warn( - "connector.loop property is deprecated", DeprecationWarning, stacklevel=2 - ) - return self._loop - - @property - def transport(self) -> Optional[asyncio.Transport]: - if self._protocol is None: - return None - return self._protocol.transport - - @property - def protocol(self) -> Optional[ResponseHandler]: - return self._protocol - - def add_callback(self, callback: Callable[[], None]) -> None: - if callback is not None: - self._callbacks.append(callback) - - def _notify_release(self) -> None: - callbacks, self._callbacks = self._callbacks[:], [] - - for cb in callbacks: - with suppress(Exception): - cb() - - def close(self) -> None: - self._notify_release() - - if self._protocol is not None: - self._connector._release(self._key, self._protocol, should_close=True) - self._protocol = 
None - - def release(self) -> None: - self._notify_release() - - if self._protocol is not None: - self._connector._release( - self._key, self._protocol, should_close=self._protocol.should_close - ) - self._protocol = None - - @property - def closed(self) -> bool: - return self._protocol is None or not self._protocol.is_connected() - - -class _TransportPlaceholder: - """placeholder for BaseConnector.connect function""" - - def close(self) -> None: - pass - - -class BaseConnector: - """Base connector class. - - keepalive_timeout - (optional) Keep-alive timeout. - force_close - Set to True to force close and do reconnect - after each request (and between redirects). - limit - The total number of simultaneous connections. - limit_per_host - Number of simultaneous connections to one host. - enable_cleanup_closed - Enables clean-up closed ssl transports. - Disabled by default. - loop - Optional event loop. - """ - - _closed = True # prevent AttributeError in __del__ if ctor was failed - _source_traceback = None - - # abort transport after 2 seconds (cleanup broken connections) - _cleanup_closed_period = 2.0 - - def __init__( - self, - *, - keepalive_timeout: Union[object, None, float] = sentinel, - force_close: bool = False, - limit: int = 100, - limit_per_host: int = 0, - enable_cleanup_closed: bool = False, - loop: Optional[asyncio.AbstractEventLoop] = None, - ) -> None: - - if force_close: - if keepalive_timeout is not None and keepalive_timeout is not sentinel: - raise ValueError( - "keepalive_timeout cannot " "be set if force_close is True" - ) - else: - if keepalive_timeout is sentinel: - keepalive_timeout = 15.0 - - loop = get_running_loop(loop) - - self._closed = False - if loop.get_debug(): - self._source_traceback = traceback.extract_stack(sys._getframe(1)) - - self._conns: Dict[ConnectionKey, List[Tuple[ResponseHandler, float]]] = {} - self._limit = limit - self._limit_per_host = limit_per_host - self._acquired: Set[ResponseHandler] = set() - self._acquired_per_host: DefaultDict[ - ConnectionKey, Set[ResponseHandler] - ] = defaultdict(set) - self._keepalive_timeout = cast(float, keepalive_timeout) - self._force_close = force_close - - # {host_key: FIFO list of waiters} - self._waiters = defaultdict(deque) # type: ignore[var-annotated] - - self._loop = loop - self._factory = functools.partial(ResponseHandler, loop=loop) - - self.cookies: SimpleCookie[str] = SimpleCookie() - - # start keep-alive connection cleanup task - self._cleanup_handle: Optional[asyncio.TimerHandle] = None - - # start cleanup closed transports task - self._cleanup_closed_handle: Optional[asyncio.TimerHandle] = None - self._cleanup_closed_disabled = not enable_cleanup_closed - self._cleanup_closed_transports: List[Optional[asyncio.Transport]] = [] - self._cleanup_closed() - - def __del__(self, _warnings: Any = warnings) -> None: - if self._closed: - return - if not self._conns: - return - - conns = [repr(c) for c in self._conns.values()] - - self._close() - - if PY_36: - kwargs = {"source": self} - else: - kwargs = {} - _warnings.warn(f"Unclosed connector {self!r}", ResourceWarning, **kwargs) - context = { - "connector": self, - "connections": conns, - "message": "Unclosed connector", - } - if self._source_traceback is not None: - context["source_traceback"] = self._source_traceback - self._loop.call_exception_handler(context) - - def __enter__(self) -> "BaseConnector": - warnings.warn( - '"with Connector():" is deprecated, ' - 'use "async with Connector():" instead', - DeprecationWarning, - ) - return self - - def 
__exit__(self, *exc: Any) -> None: - self._close() - - async def __aenter__(self) -> "BaseConnector": - return self - - async def __aexit__( - self, - exc_type: Optional[Type[BaseException]] = None, - exc_value: Optional[BaseException] = None, - exc_traceback: Optional[TracebackType] = None, - ) -> None: - await self.close() - - @property - def force_close(self) -> bool: - """Ultimately close connection on releasing if True.""" - return self._force_close - - @property - def limit(self) -> int: - """The total number for simultaneous connections. - - If limit is 0 the connector has no limit. - The default limit size is 100. - """ - return self._limit - - @property - def limit_per_host(self) -> int: - """The limit for simultaneous connections to the same endpoint. - - Endpoints are the same if they are have equal - (host, port, is_ssl) triple. - """ - return self._limit_per_host - - def _cleanup(self) -> None: - """Cleanup unused transports.""" - if self._cleanup_handle: - self._cleanup_handle.cancel() - # _cleanup_handle should be unset, otherwise _release() will not - # recreate it ever! - self._cleanup_handle = None - - now = self._loop.time() - timeout = self._keepalive_timeout - - if self._conns: - connections = {} - deadline = now - timeout - for key, conns in self._conns.items(): - alive = [] - for proto, use_time in conns: - if proto.is_connected(): - if use_time - deadline < 0: - transport = proto.transport - proto.close() - if key.is_ssl and not self._cleanup_closed_disabled: - self._cleanup_closed_transports.append(transport) - else: - alive.append((proto, use_time)) - else: - transport = proto.transport - proto.close() - if key.is_ssl and not self._cleanup_closed_disabled: - self._cleanup_closed_transports.append(transport) - - if alive: - connections[key] = alive - - self._conns = connections - - if self._conns: - self._cleanup_handle = helpers.weakref_handle( - self, "_cleanup", timeout, self._loop - ) - - def _drop_acquired_per_host( - self, key: "ConnectionKey", val: ResponseHandler - ) -> None: - acquired_per_host = self._acquired_per_host - if key not in acquired_per_host: - return - conns = acquired_per_host[key] - conns.remove(val) - if not conns: - del self._acquired_per_host[key] - - def _cleanup_closed(self) -> None: - """Double confirmation for transport close. - - Some broken ssl servers may leave socket open without proper close. 
- """ - if self._cleanup_closed_handle: - self._cleanup_closed_handle.cancel() - - for transport in self._cleanup_closed_transports: - if transport is not None: - transport.abort() - - self._cleanup_closed_transports = [] - - if not self._cleanup_closed_disabled: - self._cleanup_closed_handle = helpers.weakref_handle( - self, "_cleanup_closed", self._cleanup_closed_period, self._loop - ) - - def close(self) -> Awaitable[None]: - """Close all opened transports.""" - self._close() - return _DeprecationWaiter(noop()) - - def _close(self) -> None: - if self._closed: - return - - self._closed = True - - try: - if self._loop.is_closed(): - return - - # cancel cleanup task - if self._cleanup_handle: - self._cleanup_handle.cancel() - - # cancel cleanup close task - if self._cleanup_closed_handle: - self._cleanup_closed_handle.cancel() - - for data in self._conns.values(): - for proto, t0 in data: - proto.close() - - for proto in self._acquired: - proto.close() - - for transport in self._cleanup_closed_transports: - if transport is not None: - transport.abort() - - finally: - self._conns.clear() - self._acquired.clear() - self._waiters.clear() - self._cleanup_handle = None - self._cleanup_closed_transports.clear() - self._cleanup_closed_handle = None - - @property - def closed(self) -> bool: - """Is connector closed. - - A readonly property. - """ - return self._closed - - def _available_connections(self, key: "ConnectionKey") -> int: - """ - Return number of available connections. - - The limit, limit_per_host and the connection key are taken into account. - - If it returns less than 1 means that there are no connections - available. - """ - if self._limit: - # total calc available connections - available = self._limit - len(self._acquired) - - # check limit per host - if ( - self._limit_per_host - and available > 0 - and key in self._acquired_per_host - ): - acquired = self._acquired_per_host.get(key) - assert acquired is not None - available = self._limit_per_host - len(acquired) - - elif self._limit_per_host and key in self._acquired_per_host: - # check limit per host - acquired = self._acquired_per_host.get(key) - assert acquired is not None - available = self._limit_per_host - len(acquired) - else: - available = 1 - - return available - - async def connect( - self, req: "ClientRequest", traces: List["Trace"], timeout: "ClientTimeout" - ) -> Connection: - """Get from pool or create new connection.""" - key = req.connection_key - available = self._available_connections(key) - - # Wait if there are no available connections or if there are/were - # waiters (i.e. don't steal connection from a waiter about to wake up) - if available <= 0 or key in self._waiters: - fut = self._loop.create_future() - - # This connection will now count towards the limit. 
- self._waiters[key].append(fut) - - if traces: - for trace in traces: - await trace.send_connection_queued_start() - - try: - await fut - except BaseException as e: - if key in self._waiters: - # remove a waiter even if it was cancelled, normally it's - # removed when it's notified - try: - self._waiters[key].remove(fut) - except ValueError: # fut may no longer be in list - pass - - raise e - finally: - if key in self._waiters and not self._waiters[key]: - del self._waiters[key] - - if traces: - for trace in traces: - await trace.send_connection_queued_end() - - proto = self._get(key) - if proto is None: - placeholder = cast(ResponseHandler, _TransportPlaceholder()) - self._acquired.add(placeholder) - self._acquired_per_host[key].add(placeholder) - - if traces: - for trace in traces: - await trace.send_connection_create_start() - - try: - proto = await self._create_connection(req, traces, timeout) - if self._closed: - proto.close() - raise ClientConnectionError("Connector is closed.") - except BaseException: - if not self._closed: - self._acquired.remove(placeholder) - self._drop_acquired_per_host(key, placeholder) - self._release_waiter() - raise - else: - if not self._closed: - self._acquired.remove(placeholder) - self._drop_acquired_per_host(key, placeholder) - - if traces: - for trace in traces: - await trace.send_connection_create_end() - else: - if traces: - # Acquire the connection to prevent race conditions with limits - placeholder = cast(ResponseHandler, _TransportPlaceholder()) - self._acquired.add(placeholder) - self._acquired_per_host[key].add(placeholder) - for trace in traces: - await trace.send_connection_reuseconn() - self._acquired.remove(placeholder) - self._drop_acquired_per_host(key, placeholder) - - self._acquired.add(proto) - self._acquired_per_host[key].add(proto) - return Connection(self, key, proto, self._loop) - - def _get(self, key: "ConnectionKey") -> Optional[ResponseHandler]: - try: - conns = self._conns[key] - except KeyError: - return None - - t1 = self._loop.time() - while conns: - proto, t0 = conns.pop() - if proto.is_connected(): - if t1 - t0 > self._keepalive_timeout: - transport = proto.transport - proto.close() - # only for SSL transports - if key.is_ssl and not self._cleanup_closed_disabled: - self._cleanup_closed_transports.append(transport) - else: - if not conns: - # The very last connection was reclaimed: drop the key - del self._conns[key] - return proto - else: - transport = proto.transport - proto.close() - if key.is_ssl and not self._cleanup_closed_disabled: - self._cleanup_closed_transports.append(transport) - - # No more connections: drop the key - del self._conns[key] - return None - - def _release_waiter(self) -> None: - """ - Iterates over all waiters until one to be released is found. - - The one to be released is not finsihed and - belongs to a host that has available connections. - """ - if not self._waiters: - return - - # Having the dict keys ordered this avoids to iterate - # at the same order at each call. 
- queues = list(self._waiters.keys()) - random.shuffle(queues) - - for key in queues: - if self._available_connections(key) < 1: - continue - - waiters = self._waiters[key] - while waiters: - waiter = waiters.popleft() - if not waiter.done(): - waiter.set_result(None) - return - - def _release_acquired(self, key: "ConnectionKey", proto: ResponseHandler) -> None: - if self._closed: - # acquired connection is already released on connector closing - return - - try: - self._acquired.remove(proto) - self._drop_acquired_per_host(key, proto) - except KeyError: # pragma: no cover - # this may be result of undetermenistic order of objects - # finalization due garbage collection. - pass - else: - self._release_waiter() - - def _release( - self, - key: "ConnectionKey", - protocol: ResponseHandler, - *, - should_close: bool = False, - ) -> None: - if self._closed: - # acquired connection is already released on connector closing - return - - self._release_acquired(key, protocol) - - if self._force_close: - should_close = True - - if should_close or protocol.should_close: - transport = protocol.transport - protocol.close() - - if key.is_ssl and not self._cleanup_closed_disabled: - self._cleanup_closed_transports.append(transport) - else: - conns = self._conns.get(key) - if conns is None: - conns = self._conns[key] = [] - conns.append((protocol, self._loop.time())) - - if self._cleanup_handle is None: - self._cleanup_handle = helpers.weakref_handle( - self, "_cleanup", self._keepalive_timeout, self._loop - ) - - async def _create_connection( - self, req: "ClientRequest", traces: List["Trace"], timeout: "ClientTimeout" - ) -> ResponseHandler: - raise NotImplementedError() - - -class _DNSCacheTable: - def __init__(self, ttl: Optional[float] = None) -> None: - self._addrs_rr: Dict[Tuple[str, int], Tuple[Iterator[Dict[str, Any]], int]] = {} - self._timestamps: Dict[Tuple[str, int], float] = {} - self._ttl = ttl - - def __contains__(self, host: object) -> bool: - return host in self._addrs_rr - - def add(self, key: Tuple[str, int], addrs: List[Dict[str, Any]]) -> None: - self._addrs_rr[key] = (cycle(addrs), len(addrs)) - - if self._ttl: - self._timestamps[key] = monotonic() - - def remove(self, key: Tuple[str, int]) -> None: - self._addrs_rr.pop(key, None) - - if self._ttl: - self._timestamps.pop(key, None) - - def clear(self) -> None: - self._addrs_rr.clear() - self._timestamps.clear() - - def next_addrs(self, key: Tuple[str, int]) -> List[Dict[str, Any]]: - loop, length = self._addrs_rr[key] - addrs = list(islice(loop, length)) - # Consume one more element to shift internal state of `cycle` - next(loop) - return addrs - - def expired(self, key: Tuple[str, int]) -> bool: - if self._ttl is None: - return False - - return self._timestamps[key] + self._ttl < monotonic() - - -class TCPConnector(BaseConnector): - """TCP connector. - - verify_ssl - Set to True to check ssl certifications. - fingerprint - Pass the binary sha256 - digest of the expected certificate in DER format to verify - that the certificate the server presents matches. See also - https://en.wikipedia.org/wiki/Transport_Layer_Security#Certificate_pinning - resolver - Enable DNS lookups and use this - resolver - use_dns_cache - Use memory cache for DNS lookups. - ttl_dns_cache - Max seconds having cached a DNS entry, None forever. - family - socket address family - local_addr - local tuple of (host, port) to bind socket to - - keepalive_timeout - (optional) Keep-alive timeout. 
- force_close - Set to True to force close and do reconnect - after each request (and between redirects). - limit - The total number of simultaneous connections. - limit_per_host - Number of simultaneous connections to one host. - enable_cleanup_closed - Enables clean-up closed ssl transports. - Disabled by default. - loop - Optional event loop. - """ - - def __init__( - self, - *, - verify_ssl: bool = True, - fingerprint: Optional[bytes] = None, - use_dns_cache: bool = True, - ttl_dns_cache: Optional[int] = 10, - family: int = 0, - ssl_context: Optional[SSLContext] = None, - ssl: Union[None, bool, Fingerprint, SSLContext] = None, - local_addr: Optional[Tuple[str, int]] = None, - resolver: Optional[AbstractResolver] = None, - keepalive_timeout: Union[None, float, object] = sentinel, - force_close: bool = False, - limit: int = 100, - limit_per_host: int = 0, - enable_cleanup_closed: bool = False, - loop: Optional[asyncio.AbstractEventLoop] = None, - ): - super().__init__( - keepalive_timeout=keepalive_timeout, - force_close=force_close, - limit=limit, - limit_per_host=limit_per_host, - enable_cleanup_closed=enable_cleanup_closed, - loop=loop, - ) - - self._ssl = _merge_ssl_params(ssl, verify_ssl, ssl_context, fingerprint) - if resolver is None: - resolver = DefaultResolver(loop=self._loop) - self._resolver = resolver - - self._use_dns_cache = use_dns_cache - self._cached_hosts = _DNSCacheTable(ttl=ttl_dns_cache) - self._throttle_dns_events: Dict[Tuple[str, int], EventResultOrError] = {} - self._family = family - self._local_addr = local_addr - - def close(self) -> Awaitable[None]: - """Close all ongoing DNS calls.""" - for ev in self._throttle_dns_events.values(): - ev.cancel() - - return super().close() - - @property - def family(self) -> int: - """Socket family like AF_INET.""" - return self._family - - @property - def use_dns_cache(self) -> bool: - """True if local DNS caching is enabled.""" - return self._use_dns_cache - - def clear_dns_cache( - self, host: Optional[str] = None, port: Optional[int] = None - ) -> None: - """Remove specified host/port or clear all dns local cache.""" - if host is not None and port is not None: - self._cached_hosts.remove((host, port)) - elif host is not None or port is not None: - raise ValueError("either both host and port " "or none of them are allowed") - else: - self._cached_hosts.clear() - - async def _resolve_host( - self, host: str, port: int, traces: Optional[List["Trace"]] = None - ) -> List[Dict[str, Any]]: - if is_ip_address(host): - return [ - { - "hostname": host, - "host": host, - "port": port, - "family": self._family, - "proto": 0, - "flags": 0, - } - ] - - if not self._use_dns_cache: - - if traces: - for trace in traces: - await trace.send_dns_resolvehost_start(host) - - res = await self._resolver.resolve(host, port, family=self._family) - - if traces: - for trace in traces: - await trace.send_dns_resolvehost_end(host) - - return res - - key = (host, port) - - if (key in self._cached_hosts) and (not self._cached_hosts.expired(key)): - # get result early, before any await (#4014) - result = self._cached_hosts.next_addrs(key) - - if traces: - for trace in traces: - await trace.send_dns_cache_hit(host) - return result - - if key in self._throttle_dns_events: - # get event early, before any await (#4014) - event = self._throttle_dns_events[key] - if traces: - for trace in traces: - await trace.send_dns_cache_hit(host) - await event.wait() - else: - # update dict early, before any await (#4014) - self._throttle_dns_events[key] = 
EventResultOrError(self._loop) - if traces: - for trace in traces: - await trace.send_dns_cache_miss(host) - try: - - if traces: - for trace in traces: - await trace.send_dns_resolvehost_start(host) - - addrs = await self._resolver.resolve(host, port, family=self._family) - if traces: - for trace in traces: - await trace.send_dns_resolvehost_end(host) - - self._cached_hosts.add(key, addrs) - self._throttle_dns_events[key].set() - except BaseException as e: - # any DNS exception, independently of the implementation - # is set for the waiters to raise the same exception. - self._throttle_dns_events[key].set(exc=e) - raise - finally: - self._throttle_dns_events.pop(key) - - return self._cached_hosts.next_addrs(key) - - async def _create_connection( - self, req: "ClientRequest", traces: List["Trace"], timeout: "ClientTimeout" - ) -> ResponseHandler: - """Create connection. - - Has same keyword arguments as BaseEventLoop.create_connection. - """ - if req.proxy: - _, proto = await self._create_proxy_connection(req, traces, timeout) - else: - _, proto = await self._create_direct_connection(req, traces, timeout) - - return proto - - @staticmethod - @functools.lru_cache(None) - def _make_ssl_context(verified: bool) -> SSLContext: - if verified: - return ssl.create_default_context() - else: - sslcontext = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) - sslcontext.options |= ssl.OP_NO_SSLv2 - sslcontext.options |= ssl.OP_NO_SSLv3 - sslcontext.check_hostname = False - sslcontext.verify_mode = ssl.CERT_NONE - try: - sslcontext.options |= ssl.OP_NO_COMPRESSION - except AttributeError as attr_err: - warnings.warn( - "{!s}: The Python interpreter is compiled " - "against OpenSSL < 1.0.0. Ref: " - "https://docs.python.org/3/library/ssl.html" - "#ssl.OP_NO_COMPRESSION".format(attr_err), - ) - sslcontext.set_default_verify_paths() - return sslcontext - - def _get_ssl_context(self, req: "ClientRequest") -> Optional[SSLContext]: - """Logic to get the correct SSL context - - 0. if req.ssl is false, return None - - 1. if ssl_context is specified in req, use it - 2. if _ssl_context is specified in self, use it - 3. otherwise: - 1. if verify_ssl is not specified in req, use self.ssl_context - (will generate a default context according to self.verify_ssl) - 2. if verify_ssl is True in req, generate a default SSL context - 3. 
if verify_ssl is False in req, generate a SSL context that - won't verify - """ - if req.is_ssl(): - if ssl is None: # pragma: no cover - raise RuntimeError("SSL is not supported.") - sslcontext = req.ssl - if isinstance(sslcontext, ssl.SSLContext): - return sslcontext - if sslcontext is not None: - # not verified or fingerprinted - return self._make_ssl_context(False) - sslcontext = self._ssl - if isinstance(sslcontext, ssl.SSLContext): - return sslcontext - if sslcontext is not None: - # not verified or fingerprinted - return self._make_ssl_context(False) - return self._make_ssl_context(True) - else: - return None - - def _get_fingerprint(self, req: "ClientRequest") -> Optional["Fingerprint"]: - ret = req.ssl - if isinstance(ret, Fingerprint): - return ret - ret = self._ssl - if isinstance(ret, Fingerprint): - return ret - return None - - async def _wrap_create_connection( - self, - *args: Any, - req: "ClientRequest", - timeout: "ClientTimeout", - client_error: Type[Exception] = ClientConnectorError, - **kwargs: Any, - ) -> Tuple[asyncio.Transport, ResponseHandler]: - try: - async with ceil_timeout(timeout.sock_connect): - return await self._loop.create_connection(*args, **kwargs) # type: ignore[return-value] # noqa - except cert_errors as exc: - raise ClientConnectorCertificateError(req.connection_key, exc) from exc - except ssl_errors as exc: - raise ClientConnectorSSLError(req.connection_key, exc) from exc - except OSError as exc: - if exc.errno is None and isinstance(exc, asyncio.TimeoutError): - raise - raise client_error(req.connection_key, exc) from exc - - def _fail_on_no_start_tls(self, req: "ClientRequest") -> None: - """Raise a :py:exc:`RuntimeError` on missing ``start_tls()``. - - One case is that :py:meth:`asyncio.loop.start_tls` is not yet - implemented under Python 3.6. It is necessary for TLS-in-TLS so - that it is possible to send HTTPS queries through HTTPS proxies. - - This doesn't affect regular HTTP requests, though. - """ - if not req.is_ssl(): - return - - proxy_url = req.proxy - assert proxy_url is not None - if proxy_url.scheme != "https": - return - - self._check_loop_for_start_tls() - - def _check_loop_for_start_tls(self) -> None: - try: - self._loop.start_tls - except AttributeError as attr_exc: - raise RuntimeError( - "An HTTPS request is being sent through an HTTPS proxy. " - "This needs support for TLS in TLS but it is not implemented " - "in your runtime for the stdlib asyncio.\n\n" - "Please upgrade to Python 3.7 or higher. For more details, " - "please see:\n" - "* https://bugs.python.org/issue37179\n" - "* https://github.com/python/cpython/pull/28073\n" - "* https://docs.aiohttp.org/en/stable/" - "client_advanced.html#proxy-support\n" - "* https://github.com/aio-libs/aiohttp/discussions/6044\n", - ) from attr_exc - - def _loop_supports_start_tls(self) -> bool: - try: - self._check_loop_for_start_tls() - except RuntimeError: - return False - else: - return True - - def _warn_about_tls_in_tls( - self, - underlying_transport: asyncio.Transport, - req: "ClientRequest", - ) -> None: - """Issue a warning if the requested URL has HTTPS scheme.""" - if req.request_info.url.scheme != "https": - return - - asyncio_supports_tls_in_tls = getattr( - underlying_transport, - "_start_tls_compatible", - False, - ) - - if asyncio_supports_tls_in_tls: - return - - warnings.warn( - "An HTTPS request is being sent through an HTTPS proxy. " - "This support for TLS in TLS is known to be disabled " - "in the stdlib asyncio. 
This is why you'll probably see " - "an error in the log below.\n\n" - "It is possible to enable it via monkeypatching under " - "Python 3.7 or higher. For more details, see:\n" - "* https://bugs.python.org/issue37179\n" - "* https://github.com/python/cpython/pull/28073\n\n" - "You can temporarily patch this as follows:\n" - "* https://docs.aiohttp.org/en/stable/client_advanced.html#proxy-support\n" - "* https://github.com/aio-libs/aiohttp/discussions/6044\n", - RuntimeWarning, - source=self, - # Why `4`? At least 3 of the calls in the stack originate - # from the methods in this class. - stacklevel=3, - ) - - async def _start_tls_connection( - self, - underlying_transport: asyncio.Transport, - req: "ClientRequest", - timeout: "ClientTimeout", - client_error: Type[Exception] = ClientConnectorError, - ) -> Tuple[asyncio.BaseTransport, ResponseHandler]: - """Wrap the raw TCP transport with TLS.""" - tls_proto = self._factory() # Create a brand new proto for TLS - - # Safety of the `cast()` call here is based on the fact that - # internally `_get_ssl_context()` only returns `None` when - # `req.is_ssl()` evaluates to `False` which is never gonna happen - # in this code path. Of course, it's rather fragile - # maintainability-wise but this is to be solved separately. - sslcontext = cast(ssl.SSLContext, self._get_ssl_context(req)) - - try: - async with ceil_timeout(timeout.sock_connect): - try: - tls_transport = await self._loop.start_tls( - underlying_transport, - tls_proto, - sslcontext, - server_hostname=req.host, - ssl_handshake_timeout=timeout.total, - ) - except BaseException: - # We need to close the underlying transport since - # `start_tls()` probably failed before it had a - # chance to do this: - underlying_transport.close() - raise - except cert_errors as exc: - raise ClientConnectorCertificateError(req.connection_key, exc) from exc - except ssl_errors as exc: - raise ClientConnectorSSLError(req.connection_key, exc) from exc - except OSError as exc: - if exc.errno is None and isinstance(exc, asyncio.TimeoutError): - raise - raise client_error(req.connection_key, exc) from exc - except TypeError as type_err: - # Example cause looks like this: - # TypeError: transport is not supported by start_tls() - - raise ClientConnectionError( - "Cannot initialize a TLS-in-TLS connection to host " - f"{req.host!s}:{req.port:d} through an underlying connection " - f"to an HTTPS proxy {req.proxy!s} ssl:{req.ssl or 'default'} " - f"[{type_err!s}]" - ) from type_err - else: - tls_proto.connection_made( - tls_transport - ) # Kick the state machine of the new TLS protocol - - return tls_transport, tls_proto - - async def _create_direct_connection( - self, - req: "ClientRequest", - traces: List["Trace"], - timeout: "ClientTimeout", - *, - client_error: Type[Exception] = ClientConnectorError, - ) -> Tuple[asyncio.Transport, ResponseHandler]: - sslcontext = self._get_ssl_context(req) - fingerprint = self._get_fingerprint(req) - - host = req.url.raw_host - assert host is not None - port = req.port - assert port is not None - host_resolved = asyncio.ensure_future( - self._resolve_host(host, port, traces=traces), loop=self._loop - ) - try: - # Cancelling this lookup should not cancel the underlying lookup - # or else the cancel event will get broadcast to all the waiters - # across all connections. 
- hosts = await asyncio.shield(host_resolved) - except asyncio.CancelledError: - - def drop_exception(fut: "asyncio.Future[List[Dict[str, Any]]]") -> None: - with suppress(Exception, asyncio.CancelledError): - fut.result() - - host_resolved.add_done_callback(drop_exception) - raise - except OSError as exc: - if exc.errno is None and isinstance(exc, asyncio.TimeoutError): - raise - # in case of proxy it is not ClientProxyConnectionError - # it is problem of resolving proxy ip itself - raise ClientConnectorError(req.connection_key, exc) from exc - - last_exc: Optional[Exception] = None - - for hinfo in hosts: - host = hinfo["host"] - port = hinfo["port"] - - try: - transp, proto = await self._wrap_create_connection( - self._factory, - host, - port, - timeout=timeout, - ssl=sslcontext, - family=hinfo["family"], - proto=hinfo["proto"], - flags=hinfo["flags"], - server_hostname=hinfo["hostname"] if sslcontext else None, - local_addr=self._local_addr, - req=req, - client_error=client_error, - ) - except ClientConnectorError as exc: - last_exc = exc - continue - - if req.is_ssl() and fingerprint: - try: - fingerprint.check(transp) - except ServerFingerprintMismatch as exc: - transp.close() - if not self._cleanup_closed_disabled: - self._cleanup_closed_transports.append(transp) - last_exc = exc - continue - - return transp, proto - else: - assert last_exc is not None - raise last_exc - - async def _create_proxy_connection( - self, req: "ClientRequest", traces: List["Trace"], timeout: "ClientTimeout" - ) -> Tuple[asyncio.BaseTransport, ResponseHandler]: - self._fail_on_no_start_tls(req) - runtime_has_start_tls = self._loop_supports_start_tls() - - headers: Dict[str, str] = {} - if req.proxy_headers is not None: - headers = req.proxy_headers # type: ignore[assignment] - headers[hdrs.HOST] = req.headers[hdrs.HOST] - - url = req.proxy - assert url is not None - proxy_req = ClientRequest( - hdrs.METH_GET, - url, - headers=headers, - auth=req.proxy_auth, - loop=self._loop, - ssl=req.ssl, - ) - - # create connection to proxy server - transport, proto = await self._create_direct_connection( - proxy_req, [], timeout, client_error=ClientProxyConnectionError - ) - - # Many HTTP proxies has buggy keepalive support. Let's not - # reuse connection but close it after processing every - # response. - proto.force_close() - - auth = proxy_req.headers.pop(hdrs.AUTHORIZATION, None) - if auth is not None: - if not req.is_ssl(): - req.headers[hdrs.PROXY_AUTHORIZATION] = auth - else: - proxy_req.headers[hdrs.PROXY_AUTHORIZATION] = auth - - if req.is_ssl(): - if runtime_has_start_tls: - self._warn_about_tls_in_tls(transport, req) - - # For HTTPS requests over HTTP proxy - # we must notify proxy to tunnel connection - # so we send CONNECT command: - # CONNECT www.python.org:443 HTTP/1.1 - # Host: www.python.org - # - # next we must do TLS handshake and so on - # to do this we must wrap raw socket into secure one - # asyncio handles this perfectly - proxy_req.method = hdrs.METH_CONNECT - proxy_req.url = req.url - key = attr.evolve( - req.connection_key, proxy=None, proxy_auth=None, proxy_headers_hash=None - ) - conn = Connection(self, key, proto, self._loop) - proxy_resp = await proxy_req.send(conn) - try: - protocol = conn._protocol - assert protocol is not None - - # read_until_eof=True will ensure the connection isn't closed - # once the response is received and processed allowing - # START_TLS to work on the connection below. 
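Before the tunnelling code continues below, a minimal sketch of the client-side call that exercises this proxy path, using aiohttp's public API (the target URL, proxy address, and credentials are placeholders):

    import asyncio
    import aiohttp

    async def fetch_via_proxy() -> None:
        # An "http://" proxy URL yields a plain CONNECT tunnel; an
        # "https://" proxy URL takes the TLS-in-TLS (loop.start_tls) path
        # handled by the connector code around this sketch.
        async with aiohttp.ClientSession() as session:
            async with session.get(
                "https://example.org/",
                proxy="http://proxy.internal:3128",              # placeholder proxy
                proxy_auth=aiohttp.BasicAuth("user", "secret"),  # placeholder credentials
            ) as resp:
                print(resp.status)

    asyncio.run(fetch_via_proxy())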
- protocol.set_response_params(read_until_eof=runtime_has_start_tls) - resp = await proxy_resp.start(conn) - except BaseException: - proxy_resp.close() - conn.close() - raise - else: - conn._protocol = None - conn._transport = None - try: - if resp.status != 200: - message = resp.reason - if message is None: - message = RESPONSES[resp.status][0] - raise ClientHttpProxyError( - proxy_resp.request_info, - resp.history, - status=resp.status, - message=message, - headers=resp.headers, - ) - if not runtime_has_start_tls: - rawsock = transport.get_extra_info("socket", default=None) - if rawsock is None: - raise RuntimeError( - "Transport does not expose socket instance" - ) - # Duplicate the socket, so now we can close proxy transport - rawsock = rawsock.dup() - except BaseException: - # It shouldn't be closed in `finally` because it's fed to - # `loop.start_tls()` and the docs say not to touch it after - # passing there. - transport.close() - raise - finally: - if not runtime_has_start_tls: - transport.close() - - if not runtime_has_start_tls: - # HTTP proxy with support for upgrade to HTTPS - sslcontext = self._get_ssl_context(req) - return await self._wrap_create_connection( - self._factory, - timeout=timeout, - ssl=sslcontext, - sock=rawsock, - server_hostname=req.host, - req=req, - ) - - return await self._start_tls_connection( - # Access the old transport for the last time before it's - # closed and forgotten forever: - transport, - req=req, - timeout=timeout, - ) - finally: - proxy_resp.close() - - return transport, proto - - -class UnixConnector(BaseConnector): - """Unix socket connector. - - path - Unix socket path. - keepalive_timeout - (optional) Keep-alive timeout. - force_close - Set to True to force close and do reconnect - after each request (and between redirects). - limit - The total number of simultaneous connections. - limit_per_host - Number of simultaneous connections to one host. - loop - Optional event loop. - """ - - def __init__( - self, - path: str, - force_close: bool = False, - keepalive_timeout: Union[object, float, None] = sentinel, - limit: int = 100, - limit_per_host: int = 0, - loop: Optional[asyncio.AbstractEventLoop] = None, - ) -> None: - super().__init__( - force_close=force_close, - keepalive_timeout=keepalive_timeout, - limit=limit, - limit_per_host=limit_per_host, - loop=loop, - ) - self._path = path - - @property - def path(self) -> str: - """Path to unix socket.""" - return self._path - - async def _create_connection( - self, req: "ClientRequest", traces: List["Trace"], timeout: "ClientTimeout" - ) -> ResponseHandler: - try: - async with ceil_timeout(timeout.sock_connect): - _, proto = await self._loop.create_unix_connection( - self._factory, self._path - ) - except OSError as exc: - if exc.errno is None and isinstance(exc, asyncio.TimeoutError): - raise - raise UnixClientConnectorError(self.path, req.connection_key, exc) from exc - - return cast(ResponseHandler, proto) - - -class NamedPipeConnector(BaseConnector): - """Named pipe connector. - - Only supported by the proactor event loop. - See also: https://docs.python.org/3.7/library/asyncio-eventloop.html - - path - Windows named pipe path. - keepalive_timeout - (optional) Keep-alive timeout. - force_close - Set to True to force close and do reconnect - after each request (and between redirects). - limit - The total number of simultaneous connections. - limit_per_host - Number of simultaneous connections to one host. - loop - Optional event loop. 
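The `UnixConnector` defined above is simply handed to a `ClientSession`; a minimal usage sketch (socket path and URL are placeholders, e.g. a local daemon listening on a Unix socket):

    import asyncio
    import aiohttp

    async def query_unix_socket() -> None:
        connector = aiohttp.UnixConnector(path="/var/run/example.sock")  # placeholder path
        async with aiohttp.ClientSession(connector=connector) as session:
            # The host in the URL is cosmetic; the bytes travel over the
            # Unix socket held by the connector.
            async with session.get("http://localhost/status") as resp:
                print(resp.status)

    asyncio.run(query_unix_socket())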
- """ - - def __init__( - self, - path: str, - force_close: bool = False, - keepalive_timeout: Union[object, float, None] = sentinel, - limit: int = 100, - limit_per_host: int = 0, - loop: Optional[asyncio.AbstractEventLoop] = None, - ) -> None: - super().__init__( - force_close=force_close, - keepalive_timeout=keepalive_timeout, - limit=limit, - limit_per_host=limit_per_host, - loop=loop, - ) - if not isinstance( - self._loop, asyncio.ProactorEventLoop # type: ignore[attr-defined] - ): - raise RuntimeError( - "Named Pipes only available in proactor " "loop under windows" - ) - self._path = path - - @property - def path(self) -> str: - """Path to the named pipe.""" - return self._path - - async def _create_connection( - self, req: "ClientRequest", traces: List["Trace"], timeout: "ClientTimeout" - ) -> ResponseHandler: - try: - async with ceil_timeout(timeout.sock_connect): - _, proto = await self._loop.create_pipe_connection( # type: ignore[attr-defined] # noqa: E501 - self._factory, self._path - ) - # the drain is required so that the connection_made is called - # and transport is set otherwise it is not set before the - # `assert conn.transport is not None` - # in client.py's _request method - await asyncio.sleep(0) - # other option is to manually set transport like - # `proto.transport = trans` - except OSError as exc: - if exc.errno is None and isinstance(exc, asyncio.TimeoutError): - raise - raise ClientConnectorError(req.connection_key, exc) from exc - - return cast(ResponseHandler, proto) diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/altair/vegalite/v5/schema/mixins.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/altair/vegalite/v5/schema/mixins.py deleted file mode 100644 index 569daefb8f3f00c519d350de98e542c7562db1b6..0000000000000000000000000000000000000000 --- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/altair/vegalite/v5/schema/mixins.py +++ /dev/null @@ -1,1292 +0,0 @@ -# The contents of this file are automatically written by -# tools/generate_schema_wrapper.py. Do not modify directly. -import sys - -from . 
import core -from altair.utils import use_signature -from altair.utils.schemapi import Undefined - -if sys.version_info >= (3, 11): - from typing import Self -else: - from typing_extensions import Self - - -class MarkMethodMixin: - """A mixin class that defines mark methods""" - - def mark_arc(self, align=Undefined, angle=Undefined, aria=Undefined, ariaRole=Undefined, - ariaRoleDescription=Undefined, aspect=Undefined, bandSize=Undefined, - baseline=Undefined, binSpacing=Undefined, blend=Undefined, clip=Undefined, - color=Undefined, continuousBandSize=Undefined, cornerRadius=Undefined, - cornerRadiusBottomLeft=Undefined, cornerRadiusBottomRight=Undefined, - cornerRadiusEnd=Undefined, cornerRadiusTopLeft=Undefined, - cornerRadiusTopRight=Undefined, cursor=Undefined, description=Undefined, dir=Undefined, - discreteBandSize=Undefined, dx=Undefined, dy=Undefined, ellipsis=Undefined, - fill=Undefined, fillOpacity=Undefined, filled=Undefined, font=Undefined, - fontSize=Undefined, fontStyle=Undefined, fontWeight=Undefined, height=Undefined, - href=Undefined, innerRadius=Undefined, interpolate=Undefined, invalid=Undefined, - limit=Undefined, line=Undefined, lineBreak=Undefined, lineHeight=Undefined, - opacity=Undefined, order=Undefined, orient=Undefined, outerRadius=Undefined, - padAngle=Undefined, point=Undefined, radius=Undefined, radius2=Undefined, - radius2Offset=Undefined, radiusOffset=Undefined, shape=Undefined, size=Undefined, - smooth=Undefined, stroke=Undefined, strokeCap=Undefined, strokeDash=Undefined, - strokeDashOffset=Undefined, strokeJoin=Undefined, strokeMiterLimit=Undefined, - strokeOffset=Undefined, strokeOpacity=Undefined, strokeWidth=Undefined, - style=Undefined, tension=Undefined, text=Undefined, theta=Undefined, theta2=Undefined, - theta2Offset=Undefined, thetaOffset=Undefined, thickness=Undefined, - timeUnitBandPosition=Undefined, timeUnitBandSize=Undefined, tooltip=Undefined, - url=Undefined, width=Undefined, x=Undefined, x2=Undefined, x2Offset=Undefined, - xOffset=Undefined, y=Undefined, y2=Undefined, y2Offset=Undefined, yOffset=Undefined, - **kwds) -> Self: - """Set the chart's mark to 'arc' (see :class:`MarkDef`) - """ - kwds = dict(align=align, angle=angle, aria=aria, ariaRole=ariaRole, - ariaRoleDescription=ariaRoleDescription, aspect=aspect, bandSize=bandSize, - baseline=baseline, binSpacing=binSpacing, blend=blend, clip=clip, color=color, - continuousBandSize=continuousBandSize, cornerRadius=cornerRadius, - cornerRadiusBottomLeft=cornerRadiusBottomLeft, - cornerRadiusBottomRight=cornerRadiusBottomRight, cornerRadiusEnd=cornerRadiusEnd, - cornerRadiusTopLeft=cornerRadiusTopLeft, cornerRadiusTopRight=cornerRadiusTopRight, - cursor=cursor, description=description, dir=dir, discreteBandSize=discreteBandSize, - dx=dx, dy=dy, ellipsis=ellipsis, fill=fill, fillOpacity=fillOpacity, filled=filled, - font=font, fontSize=fontSize, fontStyle=fontStyle, fontWeight=fontWeight, - height=height, href=href, innerRadius=innerRadius, interpolate=interpolate, - invalid=invalid, limit=limit, line=line, lineBreak=lineBreak, lineHeight=lineHeight, - opacity=opacity, order=order, orient=orient, outerRadius=outerRadius, - padAngle=padAngle, point=point, radius=radius, radius2=radius2, - radius2Offset=radius2Offset, radiusOffset=radiusOffset, shape=shape, size=size, - smooth=smooth, stroke=stroke, strokeCap=strokeCap, strokeDash=strokeDash, - strokeDashOffset=strokeDashOffset, strokeJoin=strokeJoin, - strokeMiterLimit=strokeMiterLimit, strokeOffset=strokeOffset, - strokeOpacity=strokeOpacity, 
strokeWidth=strokeWidth, style=style, tension=tension, - text=text, theta=theta, theta2=theta2, theta2Offset=theta2Offset, - thetaOffset=thetaOffset, thickness=thickness, - timeUnitBandPosition=timeUnitBandPosition, timeUnitBandSize=timeUnitBandSize, - tooltip=tooltip, url=url, width=width, x=x, x2=x2, x2Offset=x2Offset, - xOffset=xOffset, y=y, y2=y2, y2Offset=y2Offset, yOffset=yOffset, **kwds) - copy = self.copy(deep=False) - if any(val is not Undefined for val in kwds.values()): - copy.mark = core.MarkDef(type="arc", **kwds) - else: - copy.mark = "arc" - return copy - - def mark_area(self, align=Undefined, angle=Undefined, aria=Undefined, ariaRole=Undefined, - ariaRoleDescription=Undefined, aspect=Undefined, bandSize=Undefined, - baseline=Undefined, binSpacing=Undefined, blend=Undefined, clip=Undefined, - color=Undefined, continuousBandSize=Undefined, cornerRadius=Undefined, - cornerRadiusBottomLeft=Undefined, cornerRadiusBottomRight=Undefined, - cornerRadiusEnd=Undefined, cornerRadiusTopLeft=Undefined, - cornerRadiusTopRight=Undefined, cursor=Undefined, description=Undefined, - dir=Undefined, discreteBandSize=Undefined, dx=Undefined, dy=Undefined, - ellipsis=Undefined, fill=Undefined, fillOpacity=Undefined, filled=Undefined, - font=Undefined, fontSize=Undefined, fontStyle=Undefined, fontWeight=Undefined, - height=Undefined, href=Undefined, innerRadius=Undefined, interpolate=Undefined, - invalid=Undefined, limit=Undefined, line=Undefined, lineBreak=Undefined, - lineHeight=Undefined, opacity=Undefined, order=Undefined, orient=Undefined, - outerRadius=Undefined, padAngle=Undefined, point=Undefined, radius=Undefined, - radius2=Undefined, radius2Offset=Undefined, radiusOffset=Undefined, shape=Undefined, - size=Undefined, smooth=Undefined, stroke=Undefined, strokeCap=Undefined, - strokeDash=Undefined, strokeDashOffset=Undefined, strokeJoin=Undefined, - strokeMiterLimit=Undefined, strokeOffset=Undefined, strokeOpacity=Undefined, - strokeWidth=Undefined, style=Undefined, tension=Undefined, text=Undefined, - theta=Undefined, theta2=Undefined, theta2Offset=Undefined, thetaOffset=Undefined, - thickness=Undefined, timeUnitBandPosition=Undefined, timeUnitBandSize=Undefined, - tooltip=Undefined, url=Undefined, width=Undefined, x=Undefined, x2=Undefined, - x2Offset=Undefined, xOffset=Undefined, y=Undefined, y2=Undefined, y2Offset=Undefined, - yOffset=Undefined, **kwds) -> Self: - """Set the chart's mark to 'area' (see :class:`MarkDef`) - """ - kwds = dict(align=align, angle=angle, aria=aria, ariaRole=ariaRole, - ariaRoleDescription=ariaRoleDescription, aspect=aspect, bandSize=bandSize, - baseline=baseline, binSpacing=binSpacing, blend=blend, clip=clip, color=color, - continuousBandSize=continuousBandSize, cornerRadius=cornerRadius, - cornerRadiusBottomLeft=cornerRadiusBottomLeft, - cornerRadiusBottomRight=cornerRadiusBottomRight, cornerRadiusEnd=cornerRadiusEnd, - cornerRadiusTopLeft=cornerRadiusTopLeft, cornerRadiusTopRight=cornerRadiusTopRight, - cursor=cursor, description=description, dir=dir, discreteBandSize=discreteBandSize, - dx=dx, dy=dy, ellipsis=ellipsis, fill=fill, fillOpacity=fillOpacity, filled=filled, - font=font, fontSize=fontSize, fontStyle=fontStyle, fontWeight=fontWeight, - height=height, href=href, innerRadius=innerRadius, interpolate=interpolate, - invalid=invalid, limit=limit, line=line, lineBreak=lineBreak, lineHeight=lineHeight, - opacity=opacity, order=order, orient=orient, outerRadius=outerRadius, - padAngle=padAngle, point=point, radius=radius, radius2=radius2, - 
radius2Offset=radius2Offset, radiusOffset=radiusOffset, shape=shape, size=size, - smooth=smooth, stroke=stroke, strokeCap=strokeCap, strokeDash=strokeDash, - strokeDashOffset=strokeDashOffset, strokeJoin=strokeJoin, - strokeMiterLimit=strokeMiterLimit, strokeOffset=strokeOffset, - strokeOpacity=strokeOpacity, strokeWidth=strokeWidth, style=style, tension=tension, - text=text, theta=theta, theta2=theta2, theta2Offset=theta2Offset, - thetaOffset=thetaOffset, thickness=thickness, - timeUnitBandPosition=timeUnitBandPosition, timeUnitBandSize=timeUnitBandSize, - tooltip=tooltip, url=url, width=width, x=x, x2=x2, x2Offset=x2Offset, - xOffset=xOffset, y=y, y2=y2, y2Offset=y2Offset, yOffset=yOffset, **kwds) - copy = self.copy(deep=False) - if any(val is not Undefined for val in kwds.values()): - copy.mark = core.MarkDef(type="area", **kwds) - else: - copy.mark = "area" - return copy - - def mark_bar(self, align=Undefined, angle=Undefined, aria=Undefined, ariaRole=Undefined, - ariaRoleDescription=Undefined, aspect=Undefined, bandSize=Undefined, - baseline=Undefined, binSpacing=Undefined, blend=Undefined, clip=Undefined, - color=Undefined, continuousBandSize=Undefined, cornerRadius=Undefined, - cornerRadiusBottomLeft=Undefined, cornerRadiusBottomRight=Undefined, - cornerRadiusEnd=Undefined, cornerRadiusTopLeft=Undefined, - cornerRadiusTopRight=Undefined, cursor=Undefined, description=Undefined, dir=Undefined, - discreteBandSize=Undefined, dx=Undefined, dy=Undefined, ellipsis=Undefined, - fill=Undefined, fillOpacity=Undefined, filled=Undefined, font=Undefined, - fontSize=Undefined, fontStyle=Undefined, fontWeight=Undefined, height=Undefined, - href=Undefined, innerRadius=Undefined, interpolate=Undefined, invalid=Undefined, - limit=Undefined, line=Undefined, lineBreak=Undefined, lineHeight=Undefined, - opacity=Undefined, order=Undefined, orient=Undefined, outerRadius=Undefined, - padAngle=Undefined, point=Undefined, radius=Undefined, radius2=Undefined, - radius2Offset=Undefined, radiusOffset=Undefined, shape=Undefined, size=Undefined, - smooth=Undefined, stroke=Undefined, strokeCap=Undefined, strokeDash=Undefined, - strokeDashOffset=Undefined, strokeJoin=Undefined, strokeMiterLimit=Undefined, - strokeOffset=Undefined, strokeOpacity=Undefined, strokeWidth=Undefined, - style=Undefined, tension=Undefined, text=Undefined, theta=Undefined, theta2=Undefined, - theta2Offset=Undefined, thetaOffset=Undefined, thickness=Undefined, - timeUnitBandPosition=Undefined, timeUnitBandSize=Undefined, tooltip=Undefined, - url=Undefined, width=Undefined, x=Undefined, x2=Undefined, x2Offset=Undefined, - xOffset=Undefined, y=Undefined, y2=Undefined, y2Offset=Undefined, yOffset=Undefined, - **kwds) -> Self: - """Set the chart's mark to 'bar' (see :class:`MarkDef`) - """ - kwds = dict(align=align, angle=angle, aria=aria, ariaRole=ariaRole, - ariaRoleDescription=ariaRoleDescription, aspect=aspect, bandSize=bandSize, - baseline=baseline, binSpacing=binSpacing, blend=blend, clip=clip, color=color, - continuousBandSize=continuousBandSize, cornerRadius=cornerRadius, - cornerRadiusBottomLeft=cornerRadiusBottomLeft, - cornerRadiusBottomRight=cornerRadiusBottomRight, cornerRadiusEnd=cornerRadiusEnd, - cornerRadiusTopLeft=cornerRadiusTopLeft, cornerRadiusTopRight=cornerRadiusTopRight, - cursor=cursor, description=description, dir=dir, discreteBandSize=discreteBandSize, - dx=dx, dy=dy, ellipsis=ellipsis, fill=fill, fillOpacity=fillOpacity, filled=filled, - font=font, fontSize=fontSize, fontStyle=fontStyle, fontWeight=fontWeight, - 
height=height, href=href, innerRadius=innerRadius, interpolate=interpolate, - invalid=invalid, limit=limit, line=line, lineBreak=lineBreak, lineHeight=lineHeight, - opacity=opacity, order=order, orient=orient, outerRadius=outerRadius, - padAngle=padAngle, point=point, radius=radius, radius2=radius2, - radius2Offset=radius2Offset, radiusOffset=radiusOffset, shape=shape, size=size, - smooth=smooth, stroke=stroke, strokeCap=strokeCap, strokeDash=strokeDash, - strokeDashOffset=strokeDashOffset, strokeJoin=strokeJoin, - strokeMiterLimit=strokeMiterLimit, strokeOffset=strokeOffset, - strokeOpacity=strokeOpacity, strokeWidth=strokeWidth, style=style, tension=tension, - text=text, theta=theta, theta2=theta2, theta2Offset=theta2Offset, - thetaOffset=thetaOffset, thickness=thickness, - timeUnitBandPosition=timeUnitBandPosition, timeUnitBandSize=timeUnitBandSize, - tooltip=tooltip, url=url, width=width, x=x, x2=x2, x2Offset=x2Offset, - xOffset=xOffset, y=y, y2=y2, y2Offset=y2Offset, yOffset=yOffset, **kwds) - copy = self.copy(deep=False) - if any(val is not Undefined for val in kwds.values()): - copy.mark = core.MarkDef(type="bar", **kwds) - else: - copy.mark = "bar" - return copy - - def mark_image(self, align=Undefined, angle=Undefined, aria=Undefined, ariaRole=Undefined, - ariaRoleDescription=Undefined, aspect=Undefined, bandSize=Undefined, - baseline=Undefined, binSpacing=Undefined, blend=Undefined, clip=Undefined, - color=Undefined, continuousBandSize=Undefined, cornerRadius=Undefined, - cornerRadiusBottomLeft=Undefined, cornerRadiusBottomRight=Undefined, - cornerRadiusEnd=Undefined, cornerRadiusTopLeft=Undefined, - cornerRadiusTopRight=Undefined, cursor=Undefined, description=Undefined, - dir=Undefined, discreteBandSize=Undefined, dx=Undefined, dy=Undefined, - ellipsis=Undefined, fill=Undefined, fillOpacity=Undefined, filled=Undefined, - font=Undefined, fontSize=Undefined, fontStyle=Undefined, fontWeight=Undefined, - height=Undefined, href=Undefined, innerRadius=Undefined, interpolate=Undefined, - invalid=Undefined, limit=Undefined, line=Undefined, lineBreak=Undefined, - lineHeight=Undefined, opacity=Undefined, order=Undefined, orient=Undefined, - outerRadius=Undefined, padAngle=Undefined, point=Undefined, radius=Undefined, - radius2=Undefined, radius2Offset=Undefined, radiusOffset=Undefined, shape=Undefined, - size=Undefined, smooth=Undefined, stroke=Undefined, strokeCap=Undefined, - strokeDash=Undefined, strokeDashOffset=Undefined, strokeJoin=Undefined, - strokeMiterLimit=Undefined, strokeOffset=Undefined, strokeOpacity=Undefined, - strokeWidth=Undefined, style=Undefined, tension=Undefined, text=Undefined, - theta=Undefined, theta2=Undefined, theta2Offset=Undefined, thetaOffset=Undefined, - thickness=Undefined, timeUnitBandPosition=Undefined, timeUnitBandSize=Undefined, - tooltip=Undefined, url=Undefined, width=Undefined, x=Undefined, x2=Undefined, - x2Offset=Undefined, xOffset=Undefined, y=Undefined, y2=Undefined, y2Offset=Undefined, - yOffset=Undefined, **kwds) -> Self: - """Set the chart's mark to 'image' (see :class:`MarkDef`) - """ - kwds = dict(align=align, angle=angle, aria=aria, ariaRole=ariaRole, - ariaRoleDescription=ariaRoleDescription, aspect=aspect, bandSize=bandSize, - baseline=baseline, binSpacing=binSpacing, blend=blend, clip=clip, color=color, - continuousBandSize=continuousBandSize, cornerRadius=cornerRadius, - cornerRadiusBottomLeft=cornerRadiusBottomLeft, - cornerRadiusBottomRight=cornerRadiusBottomRight, cornerRadiusEnd=cornerRadiusEnd, - 
cornerRadiusTopLeft=cornerRadiusTopLeft, cornerRadiusTopRight=cornerRadiusTopRight, - cursor=cursor, description=description, dir=dir, discreteBandSize=discreteBandSize, - dx=dx, dy=dy, ellipsis=ellipsis, fill=fill, fillOpacity=fillOpacity, filled=filled, - font=font, fontSize=fontSize, fontStyle=fontStyle, fontWeight=fontWeight, - height=height, href=href, innerRadius=innerRadius, interpolate=interpolate, - invalid=invalid, limit=limit, line=line, lineBreak=lineBreak, lineHeight=lineHeight, - opacity=opacity, order=order, orient=orient, outerRadius=outerRadius, - padAngle=padAngle, point=point, radius=radius, radius2=radius2, - radius2Offset=radius2Offset, radiusOffset=radiusOffset, shape=shape, size=size, - smooth=smooth, stroke=stroke, strokeCap=strokeCap, strokeDash=strokeDash, - strokeDashOffset=strokeDashOffset, strokeJoin=strokeJoin, - strokeMiterLimit=strokeMiterLimit, strokeOffset=strokeOffset, - strokeOpacity=strokeOpacity, strokeWidth=strokeWidth, style=style, tension=tension, - text=text, theta=theta, theta2=theta2, theta2Offset=theta2Offset, - thetaOffset=thetaOffset, thickness=thickness, - timeUnitBandPosition=timeUnitBandPosition, timeUnitBandSize=timeUnitBandSize, - tooltip=tooltip, url=url, width=width, x=x, x2=x2, x2Offset=x2Offset, - xOffset=xOffset, y=y, y2=y2, y2Offset=y2Offset, yOffset=yOffset, **kwds) - copy = self.copy(deep=False) - if any(val is not Undefined for val in kwds.values()): - copy.mark = core.MarkDef(type="image", **kwds) - else: - copy.mark = "image" - return copy - - def mark_line(self, align=Undefined, angle=Undefined, aria=Undefined, ariaRole=Undefined, - ariaRoleDescription=Undefined, aspect=Undefined, bandSize=Undefined, - baseline=Undefined, binSpacing=Undefined, blend=Undefined, clip=Undefined, - color=Undefined, continuousBandSize=Undefined, cornerRadius=Undefined, - cornerRadiusBottomLeft=Undefined, cornerRadiusBottomRight=Undefined, - cornerRadiusEnd=Undefined, cornerRadiusTopLeft=Undefined, - cornerRadiusTopRight=Undefined, cursor=Undefined, description=Undefined, - dir=Undefined, discreteBandSize=Undefined, dx=Undefined, dy=Undefined, - ellipsis=Undefined, fill=Undefined, fillOpacity=Undefined, filled=Undefined, - font=Undefined, fontSize=Undefined, fontStyle=Undefined, fontWeight=Undefined, - height=Undefined, href=Undefined, innerRadius=Undefined, interpolate=Undefined, - invalid=Undefined, limit=Undefined, line=Undefined, lineBreak=Undefined, - lineHeight=Undefined, opacity=Undefined, order=Undefined, orient=Undefined, - outerRadius=Undefined, padAngle=Undefined, point=Undefined, radius=Undefined, - radius2=Undefined, radius2Offset=Undefined, radiusOffset=Undefined, shape=Undefined, - size=Undefined, smooth=Undefined, stroke=Undefined, strokeCap=Undefined, - strokeDash=Undefined, strokeDashOffset=Undefined, strokeJoin=Undefined, - strokeMiterLimit=Undefined, strokeOffset=Undefined, strokeOpacity=Undefined, - strokeWidth=Undefined, style=Undefined, tension=Undefined, text=Undefined, - theta=Undefined, theta2=Undefined, theta2Offset=Undefined, thetaOffset=Undefined, - thickness=Undefined, timeUnitBandPosition=Undefined, timeUnitBandSize=Undefined, - tooltip=Undefined, url=Undefined, width=Undefined, x=Undefined, x2=Undefined, - x2Offset=Undefined, xOffset=Undefined, y=Undefined, y2=Undefined, y2Offset=Undefined, - yOffset=Undefined, **kwds) -> Self: - """Set the chart's mark to 'line' (see :class:`MarkDef`) - """ - kwds = dict(align=align, angle=angle, aria=aria, ariaRole=ariaRole, - ariaRoleDescription=ariaRoleDescription, aspect=aspect, 
bandSize=bandSize, - baseline=baseline, binSpacing=binSpacing, blend=blend, clip=clip, color=color, - continuousBandSize=continuousBandSize, cornerRadius=cornerRadius, - cornerRadiusBottomLeft=cornerRadiusBottomLeft, - cornerRadiusBottomRight=cornerRadiusBottomRight, cornerRadiusEnd=cornerRadiusEnd, - cornerRadiusTopLeft=cornerRadiusTopLeft, cornerRadiusTopRight=cornerRadiusTopRight, - cursor=cursor, description=description, dir=dir, discreteBandSize=discreteBandSize, - dx=dx, dy=dy, ellipsis=ellipsis, fill=fill, fillOpacity=fillOpacity, filled=filled, - font=font, fontSize=fontSize, fontStyle=fontStyle, fontWeight=fontWeight, - height=height, href=href, innerRadius=innerRadius, interpolate=interpolate, - invalid=invalid, limit=limit, line=line, lineBreak=lineBreak, lineHeight=lineHeight, - opacity=opacity, order=order, orient=orient, outerRadius=outerRadius, - padAngle=padAngle, point=point, radius=radius, radius2=radius2, - radius2Offset=radius2Offset, radiusOffset=radiusOffset, shape=shape, size=size, - smooth=smooth, stroke=stroke, strokeCap=strokeCap, strokeDash=strokeDash, - strokeDashOffset=strokeDashOffset, strokeJoin=strokeJoin, - strokeMiterLimit=strokeMiterLimit, strokeOffset=strokeOffset, - strokeOpacity=strokeOpacity, strokeWidth=strokeWidth, style=style, tension=tension, - text=text, theta=theta, theta2=theta2, theta2Offset=theta2Offset, - thetaOffset=thetaOffset, thickness=thickness, - timeUnitBandPosition=timeUnitBandPosition, timeUnitBandSize=timeUnitBandSize, - tooltip=tooltip, url=url, width=width, x=x, x2=x2, x2Offset=x2Offset, - xOffset=xOffset, y=y, y2=y2, y2Offset=y2Offset, yOffset=yOffset, **kwds) - copy = self.copy(deep=False) - if any(val is not Undefined for val in kwds.values()): - copy.mark = core.MarkDef(type="line", **kwds) - else: - copy.mark = "line" - return copy - - def mark_point(self, align=Undefined, angle=Undefined, aria=Undefined, ariaRole=Undefined, - ariaRoleDescription=Undefined, aspect=Undefined, bandSize=Undefined, - baseline=Undefined, binSpacing=Undefined, blend=Undefined, clip=Undefined, - color=Undefined, continuousBandSize=Undefined, cornerRadius=Undefined, - cornerRadiusBottomLeft=Undefined, cornerRadiusBottomRight=Undefined, - cornerRadiusEnd=Undefined, cornerRadiusTopLeft=Undefined, - cornerRadiusTopRight=Undefined, cursor=Undefined, description=Undefined, - dir=Undefined, discreteBandSize=Undefined, dx=Undefined, dy=Undefined, - ellipsis=Undefined, fill=Undefined, fillOpacity=Undefined, filled=Undefined, - font=Undefined, fontSize=Undefined, fontStyle=Undefined, fontWeight=Undefined, - height=Undefined, href=Undefined, innerRadius=Undefined, interpolate=Undefined, - invalid=Undefined, limit=Undefined, line=Undefined, lineBreak=Undefined, - lineHeight=Undefined, opacity=Undefined, order=Undefined, orient=Undefined, - outerRadius=Undefined, padAngle=Undefined, point=Undefined, radius=Undefined, - radius2=Undefined, radius2Offset=Undefined, radiusOffset=Undefined, shape=Undefined, - size=Undefined, smooth=Undefined, stroke=Undefined, strokeCap=Undefined, - strokeDash=Undefined, strokeDashOffset=Undefined, strokeJoin=Undefined, - strokeMiterLimit=Undefined, strokeOffset=Undefined, strokeOpacity=Undefined, - strokeWidth=Undefined, style=Undefined, tension=Undefined, text=Undefined, - theta=Undefined, theta2=Undefined, theta2Offset=Undefined, thetaOffset=Undefined, - thickness=Undefined, timeUnitBandPosition=Undefined, timeUnitBandSize=Undefined, - tooltip=Undefined, url=Undefined, width=Undefined, x=Undefined, x2=Undefined, - 
x2Offset=Undefined, xOffset=Undefined, y=Undefined, y2=Undefined, y2Offset=Undefined, - yOffset=Undefined, **kwds) -> Self: - """Set the chart's mark to 'point' (see :class:`MarkDef`) - """ - kwds = dict(align=align, angle=angle, aria=aria, ariaRole=ariaRole, - ariaRoleDescription=ariaRoleDescription, aspect=aspect, bandSize=bandSize, - baseline=baseline, binSpacing=binSpacing, blend=blend, clip=clip, color=color, - continuousBandSize=continuousBandSize, cornerRadius=cornerRadius, - cornerRadiusBottomLeft=cornerRadiusBottomLeft, - cornerRadiusBottomRight=cornerRadiusBottomRight, cornerRadiusEnd=cornerRadiusEnd, - cornerRadiusTopLeft=cornerRadiusTopLeft, cornerRadiusTopRight=cornerRadiusTopRight, - cursor=cursor, description=description, dir=dir, discreteBandSize=discreteBandSize, - dx=dx, dy=dy, ellipsis=ellipsis, fill=fill, fillOpacity=fillOpacity, filled=filled, - font=font, fontSize=fontSize, fontStyle=fontStyle, fontWeight=fontWeight, - height=height, href=href, innerRadius=innerRadius, interpolate=interpolate, - invalid=invalid, limit=limit, line=line, lineBreak=lineBreak, lineHeight=lineHeight, - opacity=opacity, order=order, orient=orient, outerRadius=outerRadius, - padAngle=padAngle, point=point, radius=radius, radius2=radius2, - radius2Offset=radius2Offset, radiusOffset=radiusOffset, shape=shape, size=size, - smooth=smooth, stroke=stroke, strokeCap=strokeCap, strokeDash=strokeDash, - strokeDashOffset=strokeDashOffset, strokeJoin=strokeJoin, - strokeMiterLimit=strokeMiterLimit, strokeOffset=strokeOffset, - strokeOpacity=strokeOpacity, strokeWidth=strokeWidth, style=style, tension=tension, - text=text, theta=theta, theta2=theta2, theta2Offset=theta2Offset, - thetaOffset=thetaOffset, thickness=thickness, - timeUnitBandPosition=timeUnitBandPosition, timeUnitBandSize=timeUnitBandSize, - tooltip=tooltip, url=url, width=width, x=x, x2=x2, x2Offset=x2Offset, - xOffset=xOffset, y=y, y2=y2, y2Offset=y2Offset, yOffset=yOffset, **kwds) - copy = self.copy(deep=False) - if any(val is not Undefined for val in kwds.values()): - copy.mark = core.MarkDef(type="point", **kwds) - else: - copy.mark = "point" - return copy - - def mark_rect(self, align=Undefined, angle=Undefined, aria=Undefined, ariaRole=Undefined, - ariaRoleDescription=Undefined, aspect=Undefined, bandSize=Undefined, - baseline=Undefined, binSpacing=Undefined, blend=Undefined, clip=Undefined, - color=Undefined, continuousBandSize=Undefined, cornerRadius=Undefined, - cornerRadiusBottomLeft=Undefined, cornerRadiusBottomRight=Undefined, - cornerRadiusEnd=Undefined, cornerRadiusTopLeft=Undefined, - cornerRadiusTopRight=Undefined, cursor=Undefined, description=Undefined, - dir=Undefined, discreteBandSize=Undefined, dx=Undefined, dy=Undefined, - ellipsis=Undefined, fill=Undefined, fillOpacity=Undefined, filled=Undefined, - font=Undefined, fontSize=Undefined, fontStyle=Undefined, fontWeight=Undefined, - height=Undefined, href=Undefined, innerRadius=Undefined, interpolate=Undefined, - invalid=Undefined, limit=Undefined, line=Undefined, lineBreak=Undefined, - lineHeight=Undefined, opacity=Undefined, order=Undefined, orient=Undefined, - outerRadius=Undefined, padAngle=Undefined, point=Undefined, radius=Undefined, - radius2=Undefined, radius2Offset=Undefined, radiusOffset=Undefined, shape=Undefined, - size=Undefined, smooth=Undefined, stroke=Undefined, strokeCap=Undefined, - strokeDash=Undefined, strokeDashOffset=Undefined, strokeJoin=Undefined, - strokeMiterLimit=Undefined, strokeOffset=Undefined, strokeOpacity=Undefined, - 
strokeWidth=Undefined, style=Undefined, tension=Undefined, text=Undefined, - theta=Undefined, theta2=Undefined, theta2Offset=Undefined, thetaOffset=Undefined, - thickness=Undefined, timeUnitBandPosition=Undefined, timeUnitBandSize=Undefined, - tooltip=Undefined, url=Undefined, width=Undefined, x=Undefined, x2=Undefined, - x2Offset=Undefined, xOffset=Undefined, y=Undefined, y2=Undefined, y2Offset=Undefined, - yOffset=Undefined, **kwds) -> Self: - """Set the chart's mark to 'rect' (see :class:`MarkDef`) - """ - kwds = dict(align=align, angle=angle, aria=aria, ariaRole=ariaRole, - ariaRoleDescription=ariaRoleDescription, aspect=aspect, bandSize=bandSize, - baseline=baseline, binSpacing=binSpacing, blend=blend, clip=clip, color=color, - continuousBandSize=continuousBandSize, cornerRadius=cornerRadius, - cornerRadiusBottomLeft=cornerRadiusBottomLeft, - cornerRadiusBottomRight=cornerRadiusBottomRight, cornerRadiusEnd=cornerRadiusEnd, - cornerRadiusTopLeft=cornerRadiusTopLeft, cornerRadiusTopRight=cornerRadiusTopRight, - cursor=cursor, description=description, dir=dir, discreteBandSize=discreteBandSize, - dx=dx, dy=dy, ellipsis=ellipsis, fill=fill, fillOpacity=fillOpacity, filled=filled, - font=font, fontSize=fontSize, fontStyle=fontStyle, fontWeight=fontWeight, - height=height, href=href, innerRadius=innerRadius, interpolate=interpolate, - invalid=invalid, limit=limit, line=line, lineBreak=lineBreak, lineHeight=lineHeight, - opacity=opacity, order=order, orient=orient, outerRadius=outerRadius, - padAngle=padAngle, point=point, radius=radius, radius2=radius2, - radius2Offset=radius2Offset, radiusOffset=radiusOffset, shape=shape, size=size, - smooth=smooth, stroke=stroke, strokeCap=strokeCap, strokeDash=strokeDash, - strokeDashOffset=strokeDashOffset, strokeJoin=strokeJoin, - strokeMiterLimit=strokeMiterLimit, strokeOffset=strokeOffset, - strokeOpacity=strokeOpacity, strokeWidth=strokeWidth, style=style, tension=tension, - text=text, theta=theta, theta2=theta2, theta2Offset=theta2Offset, - thetaOffset=thetaOffset, thickness=thickness, - timeUnitBandPosition=timeUnitBandPosition, timeUnitBandSize=timeUnitBandSize, - tooltip=tooltip, url=url, width=width, x=x, x2=x2, x2Offset=x2Offset, - xOffset=xOffset, y=y, y2=y2, y2Offset=y2Offset, yOffset=yOffset, **kwds) - copy = self.copy(deep=False) - if any(val is not Undefined for val in kwds.values()): - copy.mark = core.MarkDef(type="rect", **kwds) - else: - copy.mark = "rect" - return copy - - def mark_rule(self, align=Undefined, angle=Undefined, aria=Undefined, ariaRole=Undefined, - ariaRoleDescription=Undefined, aspect=Undefined, bandSize=Undefined, - baseline=Undefined, binSpacing=Undefined, blend=Undefined, clip=Undefined, - color=Undefined, continuousBandSize=Undefined, cornerRadius=Undefined, - cornerRadiusBottomLeft=Undefined, cornerRadiusBottomRight=Undefined, - cornerRadiusEnd=Undefined, cornerRadiusTopLeft=Undefined, - cornerRadiusTopRight=Undefined, cursor=Undefined, description=Undefined, - dir=Undefined, discreteBandSize=Undefined, dx=Undefined, dy=Undefined, - ellipsis=Undefined, fill=Undefined, fillOpacity=Undefined, filled=Undefined, - font=Undefined, fontSize=Undefined, fontStyle=Undefined, fontWeight=Undefined, - height=Undefined, href=Undefined, innerRadius=Undefined, interpolate=Undefined, - invalid=Undefined, limit=Undefined, line=Undefined, lineBreak=Undefined, - lineHeight=Undefined, opacity=Undefined, order=Undefined, orient=Undefined, - outerRadius=Undefined, padAngle=Undefined, point=Undefined, radius=Undefined, - 
radius2=Undefined, radius2Offset=Undefined, radiusOffset=Undefined, shape=Undefined, - size=Undefined, smooth=Undefined, stroke=Undefined, strokeCap=Undefined, - strokeDash=Undefined, strokeDashOffset=Undefined, strokeJoin=Undefined, - strokeMiterLimit=Undefined, strokeOffset=Undefined, strokeOpacity=Undefined, - strokeWidth=Undefined, style=Undefined, tension=Undefined, text=Undefined, - theta=Undefined, theta2=Undefined, theta2Offset=Undefined, thetaOffset=Undefined, - thickness=Undefined, timeUnitBandPosition=Undefined, timeUnitBandSize=Undefined, - tooltip=Undefined, url=Undefined, width=Undefined, x=Undefined, x2=Undefined, - x2Offset=Undefined, xOffset=Undefined, y=Undefined, y2=Undefined, y2Offset=Undefined, - yOffset=Undefined, **kwds) -> Self: - """Set the chart's mark to 'rule' (see :class:`MarkDef`) - """ - kwds = dict(align=align, angle=angle, aria=aria, ariaRole=ariaRole, - ariaRoleDescription=ariaRoleDescription, aspect=aspect, bandSize=bandSize, - baseline=baseline, binSpacing=binSpacing, blend=blend, clip=clip, color=color, - continuousBandSize=continuousBandSize, cornerRadius=cornerRadius, - cornerRadiusBottomLeft=cornerRadiusBottomLeft, - cornerRadiusBottomRight=cornerRadiusBottomRight, cornerRadiusEnd=cornerRadiusEnd, - cornerRadiusTopLeft=cornerRadiusTopLeft, cornerRadiusTopRight=cornerRadiusTopRight, - cursor=cursor, description=description, dir=dir, discreteBandSize=discreteBandSize, - dx=dx, dy=dy, ellipsis=ellipsis, fill=fill, fillOpacity=fillOpacity, filled=filled, - font=font, fontSize=fontSize, fontStyle=fontStyle, fontWeight=fontWeight, - height=height, href=href, innerRadius=innerRadius, interpolate=interpolate, - invalid=invalid, limit=limit, line=line, lineBreak=lineBreak, lineHeight=lineHeight, - opacity=opacity, order=order, orient=orient, outerRadius=outerRadius, - padAngle=padAngle, point=point, radius=radius, radius2=radius2, - radius2Offset=radius2Offset, radiusOffset=radiusOffset, shape=shape, size=size, - smooth=smooth, stroke=stroke, strokeCap=strokeCap, strokeDash=strokeDash, - strokeDashOffset=strokeDashOffset, strokeJoin=strokeJoin, - strokeMiterLimit=strokeMiterLimit, strokeOffset=strokeOffset, - strokeOpacity=strokeOpacity, strokeWidth=strokeWidth, style=style, tension=tension, - text=text, theta=theta, theta2=theta2, theta2Offset=theta2Offset, - thetaOffset=thetaOffset, thickness=thickness, - timeUnitBandPosition=timeUnitBandPosition, timeUnitBandSize=timeUnitBandSize, - tooltip=tooltip, url=url, width=width, x=x, x2=x2, x2Offset=x2Offset, - xOffset=xOffset, y=y, y2=y2, y2Offset=y2Offset, yOffset=yOffset, **kwds) - copy = self.copy(deep=False) - if any(val is not Undefined for val in kwds.values()): - copy.mark = core.MarkDef(type="rule", **kwds) - else: - copy.mark = "rule" - return copy - - def mark_text(self, align=Undefined, angle=Undefined, aria=Undefined, ariaRole=Undefined, - ariaRoleDescription=Undefined, aspect=Undefined, bandSize=Undefined, - baseline=Undefined, binSpacing=Undefined, blend=Undefined, clip=Undefined, - color=Undefined, continuousBandSize=Undefined, cornerRadius=Undefined, - cornerRadiusBottomLeft=Undefined, cornerRadiusBottomRight=Undefined, - cornerRadiusEnd=Undefined, cornerRadiusTopLeft=Undefined, - cornerRadiusTopRight=Undefined, cursor=Undefined, description=Undefined, - dir=Undefined, discreteBandSize=Undefined, dx=Undefined, dy=Undefined, - ellipsis=Undefined, fill=Undefined, fillOpacity=Undefined, filled=Undefined, - font=Undefined, fontSize=Undefined, fontStyle=Undefined, fontWeight=Undefined, - 
height=Undefined, href=Undefined, innerRadius=Undefined, interpolate=Undefined, - invalid=Undefined, limit=Undefined, line=Undefined, lineBreak=Undefined, - lineHeight=Undefined, opacity=Undefined, order=Undefined, orient=Undefined, - outerRadius=Undefined, padAngle=Undefined, point=Undefined, radius=Undefined, - radius2=Undefined, radius2Offset=Undefined, radiusOffset=Undefined, shape=Undefined, - size=Undefined, smooth=Undefined, stroke=Undefined, strokeCap=Undefined, - strokeDash=Undefined, strokeDashOffset=Undefined, strokeJoin=Undefined, - strokeMiterLimit=Undefined, strokeOffset=Undefined, strokeOpacity=Undefined, - strokeWidth=Undefined, style=Undefined, tension=Undefined, text=Undefined, - theta=Undefined, theta2=Undefined, theta2Offset=Undefined, thetaOffset=Undefined, - thickness=Undefined, timeUnitBandPosition=Undefined, timeUnitBandSize=Undefined, - tooltip=Undefined, url=Undefined, width=Undefined, x=Undefined, x2=Undefined, - x2Offset=Undefined, xOffset=Undefined, y=Undefined, y2=Undefined, y2Offset=Undefined, - yOffset=Undefined, **kwds) -> Self: - """Set the chart's mark to 'text' (see :class:`MarkDef`) - """ - kwds = dict(align=align, angle=angle, aria=aria, ariaRole=ariaRole, - ariaRoleDescription=ariaRoleDescription, aspect=aspect, bandSize=bandSize, - baseline=baseline, binSpacing=binSpacing, blend=blend, clip=clip, color=color, - continuousBandSize=continuousBandSize, cornerRadius=cornerRadius, - cornerRadiusBottomLeft=cornerRadiusBottomLeft, - cornerRadiusBottomRight=cornerRadiusBottomRight, cornerRadiusEnd=cornerRadiusEnd, - cornerRadiusTopLeft=cornerRadiusTopLeft, cornerRadiusTopRight=cornerRadiusTopRight, - cursor=cursor, description=description, dir=dir, discreteBandSize=discreteBandSize, - dx=dx, dy=dy, ellipsis=ellipsis, fill=fill, fillOpacity=fillOpacity, filled=filled, - font=font, fontSize=fontSize, fontStyle=fontStyle, fontWeight=fontWeight, - height=height, href=href, innerRadius=innerRadius, interpolate=interpolate, - invalid=invalid, limit=limit, line=line, lineBreak=lineBreak, lineHeight=lineHeight, - opacity=opacity, order=order, orient=orient, outerRadius=outerRadius, - padAngle=padAngle, point=point, radius=radius, radius2=radius2, - radius2Offset=radius2Offset, radiusOffset=radiusOffset, shape=shape, size=size, - smooth=smooth, stroke=stroke, strokeCap=strokeCap, strokeDash=strokeDash, - strokeDashOffset=strokeDashOffset, strokeJoin=strokeJoin, - strokeMiterLimit=strokeMiterLimit, strokeOffset=strokeOffset, - strokeOpacity=strokeOpacity, strokeWidth=strokeWidth, style=style, tension=tension, - text=text, theta=theta, theta2=theta2, theta2Offset=theta2Offset, - thetaOffset=thetaOffset, thickness=thickness, - timeUnitBandPosition=timeUnitBandPosition, timeUnitBandSize=timeUnitBandSize, - tooltip=tooltip, url=url, width=width, x=x, x2=x2, x2Offset=x2Offset, - xOffset=xOffset, y=y, y2=y2, y2Offset=y2Offset, yOffset=yOffset, **kwds) - copy = self.copy(deep=False) - if any(val is not Undefined for val in kwds.values()): - copy.mark = core.MarkDef(type="text", **kwds) - else: - copy.mark = "text" - return copy - - def mark_tick(self, align=Undefined, angle=Undefined, aria=Undefined, ariaRole=Undefined, - ariaRoleDescription=Undefined, aspect=Undefined, bandSize=Undefined, - baseline=Undefined, binSpacing=Undefined, blend=Undefined, clip=Undefined, - color=Undefined, continuousBandSize=Undefined, cornerRadius=Undefined, - cornerRadiusBottomLeft=Undefined, cornerRadiusBottomRight=Undefined, - cornerRadiusEnd=Undefined, cornerRadiusTopLeft=Undefined, - 
cornerRadiusTopRight=Undefined, cursor=Undefined, description=Undefined, - dir=Undefined, discreteBandSize=Undefined, dx=Undefined, dy=Undefined, - ellipsis=Undefined, fill=Undefined, fillOpacity=Undefined, filled=Undefined, - font=Undefined, fontSize=Undefined, fontStyle=Undefined, fontWeight=Undefined, - height=Undefined, href=Undefined, innerRadius=Undefined, interpolate=Undefined, - invalid=Undefined, limit=Undefined, line=Undefined, lineBreak=Undefined, - lineHeight=Undefined, opacity=Undefined, order=Undefined, orient=Undefined, - outerRadius=Undefined, padAngle=Undefined, point=Undefined, radius=Undefined, - radius2=Undefined, radius2Offset=Undefined, radiusOffset=Undefined, shape=Undefined, - size=Undefined, smooth=Undefined, stroke=Undefined, strokeCap=Undefined, - strokeDash=Undefined, strokeDashOffset=Undefined, strokeJoin=Undefined, - strokeMiterLimit=Undefined, strokeOffset=Undefined, strokeOpacity=Undefined, - strokeWidth=Undefined, style=Undefined, tension=Undefined, text=Undefined, - theta=Undefined, theta2=Undefined, theta2Offset=Undefined, thetaOffset=Undefined, - thickness=Undefined, timeUnitBandPosition=Undefined, timeUnitBandSize=Undefined, - tooltip=Undefined, url=Undefined, width=Undefined, x=Undefined, x2=Undefined, - x2Offset=Undefined, xOffset=Undefined, y=Undefined, y2=Undefined, y2Offset=Undefined, - yOffset=Undefined, **kwds) -> Self: - """Set the chart's mark to 'tick' (see :class:`MarkDef`) - """ - kwds = dict(align=align, angle=angle, aria=aria, ariaRole=ariaRole, - ariaRoleDescription=ariaRoleDescription, aspect=aspect, bandSize=bandSize, - baseline=baseline, binSpacing=binSpacing, blend=blend, clip=clip, color=color, - continuousBandSize=continuousBandSize, cornerRadius=cornerRadius, - cornerRadiusBottomLeft=cornerRadiusBottomLeft, - cornerRadiusBottomRight=cornerRadiusBottomRight, cornerRadiusEnd=cornerRadiusEnd, - cornerRadiusTopLeft=cornerRadiusTopLeft, cornerRadiusTopRight=cornerRadiusTopRight, - cursor=cursor, description=description, dir=dir, discreteBandSize=discreteBandSize, - dx=dx, dy=dy, ellipsis=ellipsis, fill=fill, fillOpacity=fillOpacity, filled=filled, - font=font, fontSize=fontSize, fontStyle=fontStyle, fontWeight=fontWeight, - height=height, href=href, innerRadius=innerRadius, interpolate=interpolate, - invalid=invalid, limit=limit, line=line, lineBreak=lineBreak, lineHeight=lineHeight, - opacity=opacity, order=order, orient=orient, outerRadius=outerRadius, - padAngle=padAngle, point=point, radius=radius, radius2=radius2, - radius2Offset=radius2Offset, radiusOffset=radiusOffset, shape=shape, size=size, - smooth=smooth, stroke=stroke, strokeCap=strokeCap, strokeDash=strokeDash, - strokeDashOffset=strokeDashOffset, strokeJoin=strokeJoin, - strokeMiterLimit=strokeMiterLimit, strokeOffset=strokeOffset, - strokeOpacity=strokeOpacity, strokeWidth=strokeWidth, style=style, tension=tension, - text=text, theta=theta, theta2=theta2, theta2Offset=theta2Offset, - thetaOffset=thetaOffset, thickness=thickness, - timeUnitBandPosition=timeUnitBandPosition, timeUnitBandSize=timeUnitBandSize, - tooltip=tooltip, url=url, width=width, x=x, x2=x2, x2Offset=x2Offset, - xOffset=xOffset, y=y, y2=y2, y2Offset=y2Offset, yOffset=yOffset, **kwds) - copy = self.copy(deep=False) - if any(val is not Undefined for val in kwds.values()): - copy.mark = core.MarkDef(type="tick", **kwds) - else: - copy.mark = "tick" - return copy - - def mark_trail(self, align=Undefined, angle=Undefined, aria=Undefined, ariaRole=Undefined, - ariaRoleDescription=Undefined, aspect=Undefined, 
bandSize=Undefined, - baseline=Undefined, binSpacing=Undefined, blend=Undefined, clip=Undefined, - color=Undefined, continuousBandSize=Undefined, cornerRadius=Undefined, - cornerRadiusBottomLeft=Undefined, cornerRadiusBottomRight=Undefined, - cornerRadiusEnd=Undefined, cornerRadiusTopLeft=Undefined, - cornerRadiusTopRight=Undefined, cursor=Undefined, description=Undefined, - dir=Undefined, discreteBandSize=Undefined, dx=Undefined, dy=Undefined, - ellipsis=Undefined, fill=Undefined, fillOpacity=Undefined, filled=Undefined, - font=Undefined, fontSize=Undefined, fontStyle=Undefined, fontWeight=Undefined, - height=Undefined, href=Undefined, innerRadius=Undefined, interpolate=Undefined, - invalid=Undefined, limit=Undefined, line=Undefined, lineBreak=Undefined, - lineHeight=Undefined, opacity=Undefined, order=Undefined, orient=Undefined, - outerRadius=Undefined, padAngle=Undefined, point=Undefined, radius=Undefined, - radius2=Undefined, radius2Offset=Undefined, radiusOffset=Undefined, shape=Undefined, - size=Undefined, smooth=Undefined, stroke=Undefined, strokeCap=Undefined, - strokeDash=Undefined, strokeDashOffset=Undefined, strokeJoin=Undefined, - strokeMiterLimit=Undefined, strokeOffset=Undefined, strokeOpacity=Undefined, - strokeWidth=Undefined, style=Undefined, tension=Undefined, text=Undefined, - theta=Undefined, theta2=Undefined, theta2Offset=Undefined, thetaOffset=Undefined, - thickness=Undefined, timeUnitBandPosition=Undefined, timeUnitBandSize=Undefined, - tooltip=Undefined, url=Undefined, width=Undefined, x=Undefined, x2=Undefined, - x2Offset=Undefined, xOffset=Undefined, y=Undefined, y2=Undefined, y2Offset=Undefined, - yOffset=Undefined, **kwds) -> Self: - """Set the chart's mark to 'trail' (see :class:`MarkDef`) - """ - kwds = dict(align=align, angle=angle, aria=aria, ariaRole=ariaRole, - ariaRoleDescription=ariaRoleDescription, aspect=aspect, bandSize=bandSize, - baseline=baseline, binSpacing=binSpacing, blend=blend, clip=clip, color=color, - continuousBandSize=continuousBandSize, cornerRadius=cornerRadius, - cornerRadiusBottomLeft=cornerRadiusBottomLeft, - cornerRadiusBottomRight=cornerRadiusBottomRight, cornerRadiusEnd=cornerRadiusEnd, - cornerRadiusTopLeft=cornerRadiusTopLeft, cornerRadiusTopRight=cornerRadiusTopRight, - cursor=cursor, description=description, dir=dir, discreteBandSize=discreteBandSize, - dx=dx, dy=dy, ellipsis=ellipsis, fill=fill, fillOpacity=fillOpacity, filled=filled, - font=font, fontSize=fontSize, fontStyle=fontStyle, fontWeight=fontWeight, - height=height, href=href, innerRadius=innerRadius, interpolate=interpolate, - invalid=invalid, limit=limit, line=line, lineBreak=lineBreak, lineHeight=lineHeight, - opacity=opacity, order=order, orient=orient, outerRadius=outerRadius, - padAngle=padAngle, point=point, radius=radius, radius2=radius2, - radius2Offset=radius2Offset, radiusOffset=radiusOffset, shape=shape, size=size, - smooth=smooth, stroke=stroke, strokeCap=strokeCap, strokeDash=strokeDash, - strokeDashOffset=strokeDashOffset, strokeJoin=strokeJoin, - strokeMiterLimit=strokeMiterLimit, strokeOffset=strokeOffset, - strokeOpacity=strokeOpacity, strokeWidth=strokeWidth, style=style, tension=tension, - text=text, theta=theta, theta2=theta2, theta2Offset=theta2Offset, - thetaOffset=thetaOffset, thickness=thickness, - timeUnitBandPosition=timeUnitBandPosition, timeUnitBandSize=timeUnitBandSize, - tooltip=tooltip, url=url, width=width, x=x, x2=x2, x2Offset=x2Offset, - xOffset=xOffset, y=y, y2=y2, y2Offset=y2Offset, yOffset=yOffset, **kwds) - copy = 
self.copy(deep=False) - if any(val is not Undefined for val in kwds.values()): - copy.mark = core.MarkDef(type="trail", **kwds) - else: - copy.mark = "trail" - return copy - - def mark_circle(self, align=Undefined, angle=Undefined, aria=Undefined, ariaRole=Undefined, - ariaRoleDescription=Undefined, aspect=Undefined, bandSize=Undefined, - baseline=Undefined, binSpacing=Undefined, blend=Undefined, clip=Undefined, - color=Undefined, continuousBandSize=Undefined, cornerRadius=Undefined, - cornerRadiusBottomLeft=Undefined, cornerRadiusBottomRight=Undefined, - cornerRadiusEnd=Undefined, cornerRadiusTopLeft=Undefined, - cornerRadiusTopRight=Undefined, cursor=Undefined, description=Undefined, - dir=Undefined, discreteBandSize=Undefined, dx=Undefined, dy=Undefined, - ellipsis=Undefined, fill=Undefined, fillOpacity=Undefined, filled=Undefined, - font=Undefined, fontSize=Undefined, fontStyle=Undefined, fontWeight=Undefined, - height=Undefined, href=Undefined, innerRadius=Undefined, interpolate=Undefined, - invalid=Undefined, limit=Undefined, line=Undefined, lineBreak=Undefined, - lineHeight=Undefined, opacity=Undefined, order=Undefined, orient=Undefined, - outerRadius=Undefined, padAngle=Undefined, point=Undefined, radius=Undefined, - radius2=Undefined, radius2Offset=Undefined, radiusOffset=Undefined, shape=Undefined, - size=Undefined, smooth=Undefined, stroke=Undefined, strokeCap=Undefined, - strokeDash=Undefined, strokeDashOffset=Undefined, strokeJoin=Undefined, - strokeMiterLimit=Undefined, strokeOffset=Undefined, strokeOpacity=Undefined, - strokeWidth=Undefined, style=Undefined, tension=Undefined, text=Undefined, - theta=Undefined, theta2=Undefined, theta2Offset=Undefined, thetaOffset=Undefined, - thickness=Undefined, timeUnitBandPosition=Undefined, timeUnitBandSize=Undefined, - tooltip=Undefined, url=Undefined, width=Undefined, x=Undefined, x2=Undefined, - x2Offset=Undefined, xOffset=Undefined, y=Undefined, y2=Undefined, - y2Offset=Undefined, yOffset=Undefined, **kwds) -> Self: - """Set the chart's mark to 'circle' (see :class:`MarkDef`) - """ - kwds = dict(align=align, angle=angle, aria=aria, ariaRole=ariaRole, - ariaRoleDescription=ariaRoleDescription, aspect=aspect, bandSize=bandSize, - baseline=baseline, binSpacing=binSpacing, blend=blend, clip=clip, color=color, - continuousBandSize=continuousBandSize, cornerRadius=cornerRadius, - cornerRadiusBottomLeft=cornerRadiusBottomLeft, - cornerRadiusBottomRight=cornerRadiusBottomRight, cornerRadiusEnd=cornerRadiusEnd, - cornerRadiusTopLeft=cornerRadiusTopLeft, cornerRadiusTopRight=cornerRadiusTopRight, - cursor=cursor, description=description, dir=dir, discreteBandSize=discreteBandSize, - dx=dx, dy=dy, ellipsis=ellipsis, fill=fill, fillOpacity=fillOpacity, filled=filled, - font=font, fontSize=fontSize, fontStyle=fontStyle, fontWeight=fontWeight, - height=height, href=href, innerRadius=innerRadius, interpolate=interpolate, - invalid=invalid, limit=limit, line=line, lineBreak=lineBreak, lineHeight=lineHeight, - opacity=opacity, order=order, orient=orient, outerRadius=outerRadius, - padAngle=padAngle, point=point, radius=radius, radius2=radius2, - radius2Offset=radius2Offset, radiusOffset=radiusOffset, shape=shape, size=size, - smooth=smooth, stroke=stroke, strokeCap=strokeCap, strokeDash=strokeDash, - strokeDashOffset=strokeDashOffset, strokeJoin=strokeJoin, - strokeMiterLimit=strokeMiterLimit, strokeOffset=strokeOffset, - strokeOpacity=strokeOpacity, strokeWidth=strokeWidth, style=style, tension=tension, - text=text, theta=theta, theta2=theta2, 
theta2Offset=theta2Offset, - thetaOffset=thetaOffset, thickness=thickness, - timeUnitBandPosition=timeUnitBandPosition, timeUnitBandSize=timeUnitBandSize, - tooltip=tooltip, url=url, width=width, x=x, x2=x2, x2Offset=x2Offset, - xOffset=xOffset, y=y, y2=y2, y2Offset=y2Offset, yOffset=yOffset, **kwds) - copy = self.copy(deep=False) - if any(val is not Undefined for val in kwds.values()): - copy.mark = core.MarkDef(type="circle", **kwds) - else: - copy.mark = "circle" - return copy - - def mark_square(self, align=Undefined, angle=Undefined, aria=Undefined, ariaRole=Undefined, - ariaRoleDescription=Undefined, aspect=Undefined, bandSize=Undefined, - baseline=Undefined, binSpacing=Undefined, blend=Undefined, clip=Undefined, - color=Undefined, continuousBandSize=Undefined, cornerRadius=Undefined, - cornerRadiusBottomLeft=Undefined, cornerRadiusBottomRight=Undefined, - cornerRadiusEnd=Undefined, cornerRadiusTopLeft=Undefined, - cornerRadiusTopRight=Undefined, cursor=Undefined, description=Undefined, - dir=Undefined, discreteBandSize=Undefined, dx=Undefined, dy=Undefined, - ellipsis=Undefined, fill=Undefined, fillOpacity=Undefined, filled=Undefined, - font=Undefined, fontSize=Undefined, fontStyle=Undefined, fontWeight=Undefined, - height=Undefined, href=Undefined, innerRadius=Undefined, interpolate=Undefined, - invalid=Undefined, limit=Undefined, line=Undefined, lineBreak=Undefined, - lineHeight=Undefined, opacity=Undefined, order=Undefined, orient=Undefined, - outerRadius=Undefined, padAngle=Undefined, point=Undefined, radius=Undefined, - radius2=Undefined, radius2Offset=Undefined, radiusOffset=Undefined, shape=Undefined, - size=Undefined, smooth=Undefined, stroke=Undefined, strokeCap=Undefined, - strokeDash=Undefined, strokeDashOffset=Undefined, strokeJoin=Undefined, - strokeMiterLimit=Undefined, strokeOffset=Undefined, strokeOpacity=Undefined, - strokeWidth=Undefined, style=Undefined, tension=Undefined, text=Undefined, - theta=Undefined, theta2=Undefined, theta2Offset=Undefined, thetaOffset=Undefined, - thickness=Undefined, timeUnitBandPosition=Undefined, timeUnitBandSize=Undefined, - tooltip=Undefined, url=Undefined, width=Undefined, x=Undefined, x2=Undefined, - x2Offset=Undefined, xOffset=Undefined, y=Undefined, y2=Undefined, - y2Offset=Undefined, yOffset=Undefined, **kwds) -> Self: - """Set the chart's mark to 'square' (see :class:`MarkDef`) - """ - kwds = dict(align=align, angle=angle, aria=aria, ariaRole=ariaRole, - ariaRoleDescription=ariaRoleDescription, aspect=aspect, bandSize=bandSize, - baseline=baseline, binSpacing=binSpacing, blend=blend, clip=clip, color=color, - continuousBandSize=continuousBandSize, cornerRadius=cornerRadius, - cornerRadiusBottomLeft=cornerRadiusBottomLeft, - cornerRadiusBottomRight=cornerRadiusBottomRight, cornerRadiusEnd=cornerRadiusEnd, - cornerRadiusTopLeft=cornerRadiusTopLeft, cornerRadiusTopRight=cornerRadiusTopRight, - cursor=cursor, description=description, dir=dir, discreteBandSize=discreteBandSize, - dx=dx, dy=dy, ellipsis=ellipsis, fill=fill, fillOpacity=fillOpacity, filled=filled, - font=font, fontSize=fontSize, fontStyle=fontStyle, fontWeight=fontWeight, - height=height, href=href, innerRadius=innerRadius, interpolate=interpolate, - invalid=invalid, limit=limit, line=line, lineBreak=lineBreak, lineHeight=lineHeight, - opacity=opacity, order=order, orient=orient, outerRadius=outerRadius, - padAngle=padAngle, point=point, radius=radius, radius2=radius2, - radius2Offset=radius2Offset, radiusOffset=radiusOffset, shape=shape, size=size, - smooth=smooth, 
stroke=stroke, strokeCap=strokeCap, strokeDash=strokeDash, - strokeDashOffset=strokeDashOffset, strokeJoin=strokeJoin, - strokeMiterLimit=strokeMiterLimit, strokeOffset=strokeOffset, - strokeOpacity=strokeOpacity, strokeWidth=strokeWidth, style=style, tension=tension, - text=text, theta=theta, theta2=theta2, theta2Offset=theta2Offset, - thetaOffset=thetaOffset, thickness=thickness, - timeUnitBandPosition=timeUnitBandPosition, timeUnitBandSize=timeUnitBandSize, - tooltip=tooltip, url=url, width=width, x=x, x2=x2, x2Offset=x2Offset, - xOffset=xOffset, y=y, y2=y2, y2Offset=y2Offset, yOffset=yOffset, **kwds) - copy = self.copy(deep=False) - if any(val is not Undefined for val in kwds.values()): - copy.mark = core.MarkDef(type="square", **kwds) - else: - copy.mark = "square" - return copy - - def mark_geoshape(self, align=Undefined, angle=Undefined, aria=Undefined, ariaRole=Undefined, - ariaRoleDescription=Undefined, aspect=Undefined, bandSize=Undefined, - baseline=Undefined, binSpacing=Undefined, blend=Undefined, clip=Undefined, - color=Undefined, continuousBandSize=Undefined, cornerRadius=Undefined, - cornerRadiusBottomLeft=Undefined, cornerRadiusBottomRight=Undefined, - cornerRadiusEnd=Undefined, cornerRadiusTopLeft=Undefined, - cornerRadiusTopRight=Undefined, cursor=Undefined, description=Undefined, - dir=Undefined, discreteBandSize=Undefined, dx=Undefined, dy=Undefined, - ellipsis=Undefined, fill=Undefined, fillOpacity=Undefined, filled=Undefined, - font=Undefined, fontSize=Undefined, fontStyle=Undefined, fontWeight=Undefined, - height=Undefined, href=Undefined, innerRadius=Undefined, interpolate=Undefined, - invalid=Undefined, limit=Undefined, line=Undefined, lineBreak=Undefined, - lineHeight=Undefined, opacity=Undefined, order=Undefined, orient=Undefined, - outerRadius=Undefined, padAngle=Undefined, point=Undefined, radius=Undefined, - radius2=Undefined, radius2Offset=Undefined, radiusOffset=Undefined, - shape=Undefined, size=Undefined, smooth=Undefined, stroke=Undefined, - strokeCap=Undefined, strokeDash=Undefined, strokeDashOffset=Undefined, - strokeJoin=Undefined, strokeMiterLimit=Undefined, strokeOffset=Undefined, - strokeOpacity=Undefined, strokeWidth=Undefined, style=Undefined, - tension=Undefined, text=Undefined, theta=Undefined, theta2=Undefined, - theta2Offset=Undefined, thetaOffset=Undefined, thickness=Undefined, - timeUnitBandPosition=Undefined, timeUnitBandSize=Undefined, tooltip=Undefined, - url=Undefined, width=Undefined, x=Undefined, x2=Undefined, x2Offset=Undefined, - xOffset=Undefined, y=Undefined, y2=Undefined, y2Offset=Undefined, - yOffset=Undefined, **kwds) -> Self: - """Set the chart's mark to 'geoshape' (see :class:`MarkDef`) - """ - kwds = dict(align=align, angle=angle, aria=aria, ariaRole=ariaRole, - ariaRoleDescription=ariaRoleDescription, aspect=aspect, bandSize=bandSize, - baseline=baseline, binSpacing=binSpacing, blend=blend, clip=clip, color=color, - continuousBandSize=continuousBandSize, cornerRadius=cornerRadius, - cornerRadiusBottomLeft=cornerRadiusBottomLeft, - cornerRadiusBottomRight=cornerRadiusBottomRight, cornerRadiusEnd=cornerRadiusEnd, - cornerRadiusTopLeft=cornerRadiusTopLeft, cornerRadiusTopRight=cornerRadiusTopRight, - cursor=cursor, description=description, dir=dir, discreteBandSize=discreteBandSize, - dx=dx, dy=dy, ellipsis=ellipsis, fill=fill, fillOpacity=fillOpacity, filled=filled, - font=font, fontSize=fontSize, fontStyle=fontStyle, fontWeight=fontWeight, - height=height, href=href, innerRadius=innerRadius, interpolate=interpolate, - 
invalid=invalid, limit=limit, line=line, lineBreak=lineBreak, lineHeight=lineHeight, - opacity=opacity, order=order, orient=orient, outerRadius=outerRadius, - padAngle=padAngle, point=point, radius=radius, radius2=radius2, - radius2Offset=radius2Offset, radiusOffset=radiusOffset, shape=shape, size=size, - smooth=smooth, stroke=stroke, strokeCap=strokeCap, strokeDash=strokeDash, - strokeDashOffset=strokeDashOffset, strokeJoin=strokeJoin, - strokeMiterLimit=strokeMiterLimit, strokeOffset=strokeOffset, - strokeOpacity=strokeOpacity, strokeWidth=strokeWidth, style=style, tension=tension, - text=text, theta=theta, theta2=theta2, theta2Offset=theta2Offset, - thetaOffset=thetaOffset, thickness=thickness, - timeUnitBandPosition=timeUnitBandPosition, timeUnitBandSize=timeUnitBandSize, - tooltip=tooltip, url=url, width=width, x=x, x2=x2, x2Offset=x2Offset, - xOffset=xOffset, y=y, y2=y2, y2Offset=y2Offset, yOffset=yOffset, **kwds) - copy = self.copy(deep=False) - if any(val is not Undefined for val in kwds.values()): - copy.mark = core.MarkDef(type="geoshape", **kwds) - else: - copy.mark = "geoshape" - return copy - - def mark_boxplot(self, box=Undefined, clip=Undefined, color=Undefined, extent=Undefined, - invalid=Undefined, median=Undefined, opacity=Undefined, orient=Undefined, - outliers=Undefined, rule=Undefined, size=Undefined, ticks=Undefined, **kwds) -> Self: - """Set the chart's mark to 'boxplot' (see :class:`BoxPlotDef`) - """ - kwds = dict(box=box, clip=clip, color=color, extent=extent, invalid=invalid, median=median, - opacity=opacity, orient=orient, outliers=outliers, rule=rule, size=size, - ticks=ticks, **kwds) - copy = self.copy(deep=False) - if any(val is not Undefined for val in kwds.values()): - copy.mark = core.BoxPlotDef(type="boxplot", **kwds) - else: - copy.mark = "boxplot" - return copy - - def mark_errorbar(self, clip=Undefined, color=Undefined, extent=Undefined, opacity=Undefined, - orient=Undefined, rule=Undefined, size=Undefined, thickness=Undefined, - ticks=Undefined, **kwds) -> Self: - """Set the chart's mark to 'errorbar' (see :class:`ErrorBarDef`) - """ - kwds = dict(clip=clip, color=color, extent=extent, opacity=opacity, orient=orient, rule=rule, - size=size, thickness=thickness, ticks=ticks, **kwds) - copy = self.copy(deep=False) - if any(val is not Undefined for val in kwds.values()): - copy.mark = core.ErrorBarDef(type="errorbar", **kwds) - else: - copy.mark = "errorbar" - return copy - - def mark_errorband(self, band=Undefined, borders=Undefined, clip=Undefined, color=Undefined, - extent=Undefined, interpolate=Undefined, opacity=Undefined, orient=Undefined, - tension=Undefined, **kwds) -> Self: - """Set the chart's mark to 'errorband' (see :class:`ErrorBandDef`) - """ - kwds = dict(band=band, borders=borders, clip=clip, color=color, extent=extent, - interpolate=interpolate, opacity=opacity, orient=orient, tension=tension, **kwds) - copy = self.copy(deep=False) - if any(val is not Undefined for val in kwds.values()): - copy.mark = core.ErrorBandDef(type="errorband", **kwds) - else: - copy.mark = "errorband" - return copy - - -class ConfigMethodMixin: - """A mixin class that defines config methods""" - - @use_signature(core.Config) - def configure(self, *args, **kwargs) -> Self: - copy = self.copy(deep=False) - copy.config = core.Config(*args, **kwargs) - return copy - - @use_signature(core.RectConfig) - def configure_arc(self, *args, **kwargs) -> Self: - copy = self.copy(deep=['config']) - if copy.config is Undefined: - copy.config = core.Config() - 
copy.config["arc"] = core.RectConfig(*args, **kwargs) - return copy - - @use_signature(core.AreaConfig) - def configure_area(self, *args, **kwargs) -> Self: - copy = self.copy(deep=['config']) - if copy.config is Undefined: - copy.config = core.Config() - copy.config["area"] = core.AreaConfig(*args, **kwargs) - return copy - - @use_signature(core.AxisConfig) - def configure_axis(self, *args, **kwargs) -> Self: - copy = self.copy(deep=['config']) - if copy.config is Undefined: - copy.config = core.Config() - copy.config["axis"] = core.AxisConfig(*args, **kwargs) - return copy - - @use_signature(core.AxisConfig) - def configure_axisBand(self, *args, **kwargs) -> Self: - copy = self.copy(deep=['config']) - if copy.config is Undefined: - copy.config = core.Config() - copy.config["axisBand"] = core.AxisConfig(*args, **kwargs) - return copy - - @use_signature(core.AxisConfig) - def configure_axisBottom(self, *args, **kwargs) -> Self: - copy = self.copy(deep=['config']) - if copy.config is Undefined: - copy.config = core.Config() - copy.config["axisBottom"] = core.AxisConfig(*args, **kwargs) - return copy - - @use_signature(core.AxisConfig) - def configure_axisDiscrete(self, *args, **kwargs) -> Self: - copy = self.copy(deep=['config']) - if copy.config is Undefined: - copy.config = core.Config() - copy.config["axisDiscrete"] = core.AxisConfig(*args, **kwargs) - return copy - - @use_signature(core.AxisConfig) - def configure_axisLeft(self, *args, **kwargs) -> Self: - copy = self.copy(deep=['config']) - if copy.config is Undefined: - copy.config = core.Config() - copy.config["axisLeft"] = core.AxisConfig(*args, **kwargs) - return copy - - @use_signature(core.AxisConfig) - def configure_axisPoint(self, *args, **kwargs) -> Self: - copy = self.copy(deep=['config']) - if copy.config is Undefined: - copy.config = core.Config() - copy.config["axisPoint"] = core.AxisConfig(*args, **kwargs) - return copy - - @use_signature(core.AxisConfig) - def configure_axisQuantitative(self, *args, **kwargs) -> Self: - copy = self.copy(deep=['config']) - if copy.config is Undefined: - copy.config = core.Config() - copy.config["axisQuantitative"] = core.AxisConfig(*args, **kwargs) - return copy - - @use_signature(core.AxisConfig) - def configure_axisRight(self, *args, **kwargs) -> Self: - copy = self.copy(deep=['config']) - if copy.config is Undefined: - copy.config = core.Config() - copy.config["axisRight"] = core.AxisConfig(*args, **kwargs) - return copy - - @use_signature(core.AxisConfig) - def configure_axisTemporal(self, *args, **kwargs) -> Self: - copy = self.copy(deep=['config']) - if copy.config is Undefined: - copy.config = core.Config() - copy.config["axisTemporal"] = core.AxisConfig(*args, **kwargs) - return copy - - @use_signature(core.AxisConfig) - def configure_axisTop(self, *args, **kwargs) -> Self: - copy = self.copy(deep=['config']) - if copy.config is Undefined: - copy.config = core.Config() - copy.config["axisTop"] = core.AxisConfig(*args, **kwargs) - return copy - - @use_signature(core.AxisConfig) - def configure_axisX(self, *args, **kwargs) -> Self: - copy = self.copy(deep=['config']) - if copy.config is Undefined: - copy.config = core.Config() - copy.config["axisX"] = core.AxisConfig(*args, **kwargs) - return copy - - @use_signature(core.AxisConfig) - def configure_axisXBand(self, *args, **kwargs) -> Self: - copy = self.copy(deep=['config']) - if copy.config is Undefined: - copy.config = core.Config() - copy.config["axisXBand"] = core.AxisConfig(*args, **kwargs) - return copy - - 
@use_signature(core.AxisConfig) - def configure_axisXDiscrete(self, *args, **kwargs) -> Self: - copy = self.copy(deep=['config']) - if copy.config is Undefined: - copy.config = core.Config() - copy.config["axisXDiscrete"] = core.AxisConfig(*args, **kwargs) - return copy - - @use_signature(core.AxisConfig) - def configure_axisXPoint(self, *args, **kwargs) -> Self: - copy = self.copy(deep=['config']) - if copy.config is Undefined: - copy.config = core.Config() - copy.config["axisXPoint"] = core.AxisConfig(*args, **kwargs) - return copy - - @use_signature(core.AxisConfig) - def configure_axisXQuantitative(self, *args, **kwargs) -> Self: - copy = self.copy(deep=['config']) - if copy.config is Undefined: - copy.config = core.Config() - copy.config["axisXQuantitative"] = core.AxisConfig(*args, **kwargs) - return copy - - @use_signature(core.AxisConfig) - def configure_axisXTemporal(self, *args, **kwargs) -> Self: - copy = self.copy(deep=['config']) - if copy.config is Undefined: - copy.config = core.Config() - copy.config["axisXTemporal"] = core.AxisConfig(*args, **kwargs) - return copy - - @use_signature(core.AxisConfig) - def configure_axisY(self, *args, **kwargs) -> Self: - copy = self.copy(deep=['config']) - if copy.config is Undefined: - copy.config = core.Config() - copy.config["axisY"] = core.AxisConfig(*args, **kwargs) - return copy - - @use_signature(core.AxisConfig) - def configure_axisYBand(self, *args, **kwargs) -> Self: - copy = self.copy(deep=['config']) - if copy.config is Undefined: - copy.config = core.Config() - copy.config["axisYBand"] = core.AxisConfig(*args, **kwargs) - return copy - - @use_signature(core.AxisConfig) - def configure_axisYDiscrete(self, *args, **kwargs) -> Self: - copy = self.copy(deep=['config']) - if copy.config is Undefined: - copy.config = core.Config() - copy.config["axisYDiscrete"] = core.AxisConfig(*args, **kwargs) - return copy - - @use_signature(core.AxisConfig) - def configure_axisYPoint(self, *args, **kwargs) -> Self: - copy = self.copy(deep=['config']) - if copy.config is Undefined: - copy.config = core.Config() - copy.config["axisYPoint"] = core.AxisConfig(*args, **kwargs) - return copy - - @use_signature(core.AxisConfig) - def configure_axisYQuantitative(self, *args, **kwargs) -> Self: - copy = self.copy(deep=['config']) - if copy.config is Undefined: - copy.config = core.Config() - copy.config["axisYQuantitative"] = core.AxisConfig(*args, **kwargs) - return copy - - @use_signature(core.AxisConfig) - def configure_axisYTemporal(self, *args, **kwargs) -> Self: - copy = self.copy(deep=['config']) - if copy.config is Undefined: - copy.config = core.Config() - copy.config["axisYTemporal"] = core.AxisConfig(*args, **kwargs) - return copy - - @use_signature(core.BarConfig) - def configure_bar(self, *args, **kwargs) -> Self: - copy = self.copy(deep=['config']) - if copy.config is Undefined: - copy.config = core.Config() - copy.config["bar"] = core.BarConfig(*args, **kwargs) - return copy - - @use_signature(core.BoxPlotConfig) - def configure_boxplot(self, *args, **kwargs) -> Self: - copy = self.copy(deep=['config']) - if copy.config is Undefined: - copy.config = core.Config() - copy.config["boxplot"] = core.BoxPlotConfig(*args, **kwargs) - return copy - - @use_signature(core.MarkConfig) - def configure_circle(self, *args, **kwargs) -> Self: - copy = self.copy(deep=['config']) - if copy.config is Undefined: - copy.config = core.Config() - copy.config["circle"] = core.MarkConfig(*args, **kwargs) - return copy - - @use_signature(core.CompositionConfig) - 
def configure_concat(self, *args, **kwargs) -> Self: - copy = self.copy(deep=['config']) - if copy.config is Undefined: - copy.config = core.Config() - copy.config["concat"] = core.CompositionConfig(*args, **kwargs) - return copy - - @use_signature(core.ErrorBandConfig) - def configure_errorband(self, *args, **kwargs) -> Self: - copy = self.copy(deep=['config']) - if copy.config is Undefined: - copy.config = core.Config() - copy.config["errorband"] = core.ErrorBandConfig(*args, **kwargs) - return copy - - @use_signature(core.ErrorBarConfig) - def configure_errorbar(self, *args, **kwargs) -> Self: - copy = self.copy(deep=['config']) - if copy.config is Undefined: - copy.config = core.Config() - copy.config["errorbar"] = core.ErrorBarConfig(*args, **kwargs) - return copy - - @use_signature(core.CompositionConfig) - def configure_facet(self, *args, **kwargs) -> Self: - copy = self.copy(deep=['config']) - if copy.config is Undefined: - copy.config = core.Config() - copy.config["facet"] = core.CompositionConfig(*args, **kwargs) - return copy - - @use_signature(core.MarkConfig) - def configure_geoshape(self, *args, **kwargs) -> Self: - copy = self.copy(deep=['config']) - if copy.config is Undefined: - copy.config = core.Config() - copy.config["geoshape"] = core.MarkConfig(*args, **kwargs) - return copy - - @use_signature(core.HeaderConfig) - def configure_header(self, *args, **kwargs) -> Self: - copy = self.copy(deep=['config']) - if copy.config is Undefined: - copy.config = core.Config() - copy.config["header"] = core.HeaderConfig(*args, **kwargs) - return copy - - @use_signature(core.HeaderConfig) - def configure_headerColumn(self, *args, **kwargs) -> Self: - copy = self.copy(deep=['config']) - if copy.config is Undefined: - copy.config = core.Config() - copy.config["headerColumn"] = core.HeaderConfig(*args, **kwargs) - return copy - - @use_signature(core.HeaderConfig) - def configure_headerFacet(self, *args, **kwargs) -> Self: - copy = self.copy(deep=['config']) - if copy.config is Undefined: - copy.config = core.Config() - copy.config["headerFacet"] = core.HeaderConfig(*args, **kwargs) - return copy - - @use_signature(core.HeaderConfig) - def configure_headerRow(self, *args, **kwargs) -> Self: - copy = self.copy(deep=['config']) - if copy.config is Undefined: - copy.config = core.Config() - copy.config["headerRow"] = core.HeaderConfig(*args, **kwargs) - return copy - - @use_signature(core.RectConfig) - def configure_image(self, *args, **kwargs) -> Self: - copy = self.copy(deep=['config']) - if copy.config is Undefined: - copy.config = core.Config() - copy.config["image"] = core.RectConfig(*args, **kwargs) - return copy - - @use_signature(core.LegendConfig) - def configure_legend(self, *args, **kwargs) -> Self: - copy = self.copy(deep=['config']) - if copy.config is Undefined: - copy.config = core.Config() - copy.config["legend"] = core.LegendConfig(*args, **kwargs) - return copy - - @use_signature(core.LineConfig) - def configure_line(self, *args, **kwargs) -> Self: - copy = self.copy(deep=['config']) - if copy.config is Undefined: - copy.config = core.Config() - copy.config["line"] = core.LineConfig(*args, **kwargs) - return copy - - @use_signature(core.MarkConfig) - def configure_mark(self, *args, **kwargs) -> Self: - copy = self.copy(deep=['config']) - if copy.config is Undefined: - copy.config = core.Config() - copy.config["mark"] = core.MarkConfig(*args, **kwargs) - return copy - - @use_signature(core.MarkConfig) - def configure_point(self, *args, **kwargs) -> Self: - copy = 
self.copy(deep=['config']) - if copy.config is Undefined: - copy.config = core.Config() - copy.config["point"] = core.MarkConfig(*args, **kwargs) - return copy - - @use_signature(core.ProjectionConfig) - def configure_projection(self, *args, **kwargs) -> Self: - copy = self.copy(deep=['config']) - if copy.config is Undefined: - copy.config = core.Config() - copy.config["projection"] = core.ProjectionConfig(*args, **kwargs) - return copy - - @use_signature(core.RangeConfig) - def configure_range(self, *args, **kwargs) -> Self: - copy = self.copy(deep=['config']) - if copy.config is Undefined: - copy.config = core.Config() - copy.config["range"] = core.RangeConfig(*args, **kwargs) - return copy - - @use_signature(core.RectConfig) - def configure_rect(self, *args, **kwargs) -> Self: - copy = self.copy(deep=['config']) - if copy.config is Undefined: - copy.config = core.Config() - copy.config["rect"] = core.RectConfig(*args, **kwargs) - return copy - - @use_signature(core.MarkConfig) - def configure_rule(self, *args, **kwargs) -> Self: - copy = self.copy(deep=['config']) - if copy.config is Undefined: - copy.config = core.Config() - copy.config["rule"] = core.MarkConfig(*args, **kwargs) - return copy - - @use_signature(core.ScaleConfig) - def configure_scale(self, *args, **kwargs) -> Self: - copy = self.copy(deep=['config']) - if copy.config is Undefined: - copy.config = core.Config() - copy.config["scale"] = core.ScaleConfig(*args, **kwargs) - return copy - - @use_signature(core.SelectionConfig) - def configure_selection(self, *args, **kwargs) -> Self: - copy = self.copy(deep=['config']) - if copy.config is Undefined: - copy.config = core.Config() - copy.config["selection"] = core.SelectionConfig(*args, **kwargs) - return copy - - @use_signature(core.MarkConfig) - def configure_square(self, *args, **kwargs) -> Self: - copy = self.copy(deep=['config']) - if copy.config is Undefined: - copy.config = core.Config() - copy.config["square"] = core.MarkConfig(*args, **kwargs) - return copy - - @use_signature(core.MarkConfig) - def configure_text(self, *args, **kwargs) -> Self: - copy = self.copy(deep=['config']) - if copy.config is Undefined: - copy.config = core.Config() - copy.config["text"] = core.MarkConfig(*args, **kwargs) - return copy - - @use_signature(core.TickConfig) - def configure_tick(self, *args, **kwargs) -> Self: - copy = self.copy(deep=['config']) - if copy.config is Undefined: - copy.config = core.Config() - copy.config["tick"] = core.TickConfig(*args, **kwargs) - return copy - - @use_signature(core.TitleConfig) - def configure_title(self, *args, **kwargs) -> Self: - copy = self.copy(deep=['config']) - if copy.config is Undefined: - copy.config = core.Config() - copy.config["title"] = core.TitleConfig(*args, **kwargs) - return copy - - @use_signature(core.LineConfig) - def configure_trail(self, *args, **kwargs) -> Self: - copy = self.copy(deep=['config']) - if copy.config is Undefined: - copy.config = core.Config() - copy.config["trail"] = core.LineConfig(*args, **kwargs) - return copy - - @use_signature(core.ViewConfig) - def configure_view(self, *args, **kwargs) -> Self: - copy = self.copy(deep=['config']) - if copy.config is Undefined: - copy.config = core.Config() - copy.config["view"] = core.ViewConfig(*args, **kwargs) - return copy \ No newline at end of file diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fastapi/security/__init__.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fastapi/security/__init__.py deleted file mode 100644 index 
3aa6bf21e44f3069adb94242fbba5c8160532a1c..0000000000000000000000000000000000000000 --- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fastapi/security/__init__.py +++ /dev/null @@ -1,15 +0,0 @@ -from .api_key import APIKeyCookie as APIKeyCookie -from .api_key import APIKeyHeader as APIKeyHeader -from .api_key import APIKeyQuery as APIKeyQuery -from .http import HTTPAuthorizationCredentials as HTTPAuthorizationCredentials -from .http import HTTPBasic as HTTPBasic -from .http import HTTPBasicCredentials as HTTPBasicCredentials -from .http import HTTPBearer as HTTPBearer -from .http import HTTPDigest as HTTPDigest -from .oauth2 import OAuth2 as OAuth2 -from .oauth2 import OAuth2AuthorizationCodeBearer as OAuth2AuthorizationCodeBearer -from .oauth2 import OAuth2PasswordBearer as OAuth2PasswordBearer -from .oauth2 import OAuth2PasswordRequestForm as OAuth2PasswordRequestForm -from .oauth2 import OAuth2PasswordRequestFormStrict as OAuth2PasswordRequestFormStrict -from .oauth2 import SecurityScopes as SecurityScopes -from .open_id_connect_url import OpenIdConnect as OpenIdConnect diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/misc/plistlib/__init__.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/misc/plistlib/__init__.py deleted file mode 100644 index 066eef38fc720265366afee9a8cd415fc560459e..0000000000000000000000000000000000000000 --- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/misc/plistlib/__init__.py +++ /dev/null @@ -1,681 +0,0 @@ -import collections.abc -import re -from typing import ( - Any, - Callable, - Dict, - List, - Mapping, - MutableMapping, - Optional, - Sequence, - Type, - Union, - IO, -) -import warnings -from io import BytesIO -from datetime import datetime -from base64 import b64encode, b64decode -from numbers import Integral -from types import SimpleNamespace -from functools import singledispatch - -from fontTools.misc import etree - -from fontTools.misc.textTools import tostr - - -# By default, we -# - deserialize elements as bytes and -# - serialize bytes as elements. -# Before, on Python 2, we -# - deserialized elements as plistlib.Data objects, in order to -# distinguish them from the built-in str type (which is bytes on python2) -# - serialized bytes as elements (they must have only contained -# ASCII characters in this case) -# You can pass use_builtin_types=[True|False] to the load/dump etc. functions -# to enforce a specific treatment. -# NOTE that unicode type always maps to element, and plistlib.Data -# always maps to element, regardless of use_builtin_types. -USE_BUILTIN_TYPES = True - -XML_DECLARATION = b"""""" - -PLIST_DOCTYPE = ( - b'' -) - - -# Date should conform to a subset of ISO 8601: -# YYYY '-' MM '-' DD 'T' HH ':' MM ':' SS 'Z' -_date_parser = re.compile( - r"(?P\d\d\d\d)" - r"(?:-(?P\d\d)" - r"(?:-(?P\d\d)" - r"(?:T(?P\d\d)" - r"(?::(?P\d\d)" - r"(?::(?P\d\d))" - r"?)?)?)?)?Z", - re.ASCII, -) - - -def _date_from_string(s: str) -> datetime: - order = ("year", "month", "day", "hour", "minute", "second") - m = _date_parser.match(s) - if m is None: - raise ValueError(f"Expected ISO 8601 date string, but got '{s:r}'.") - gd = m.groupdict() - lst = [] - for key in order: - val = gd[key] - if val is None: - break - lst.append(int(val)) - # NOTE: mypy doesn't know that lst is 6 elements long. 
- return datetime(*lst) # type:ignore - - -def _date_to_string(d: datetime) -> str: - return "%04d-%02d-%02dT%02d:%02d:%02dZ" % ( - d.year, - d.month, - d.day, - d.hour, - d.minute, - d.second, - ) - - -class Data: - """Represents binary data when ``use_builtin_types=False.`` - - This class wraps binary data loaded from a plist file when the - ``use_builtin_types`` argument to the loading function (:py:func:`fromtree`, - :py:func:`load`, :py:func:`loads`) is false. - - The actual binary data is retrieved using the ``data`` attribute. - """ - - def __init__(self, data: bytes) -> None: - if not isinstance(data, bytes): - raise TypeError("Expected bytes, found %s" % type(data).__name__) - self.data = data - - @classmethod - def fromBase64(cls, data: Union[bytes, str]) -> "Data": - return cls(b64decode(data)) - - def asBase64(self, maxlinelength: int = 76, indent_level: int = 1) -> bytes: - return _encode_base64( - self.data, maxlinelength=maxlinelength, indent_level=indent_level - ) - - def __eq__(self, other: Any) -> bool: - if isinstance(other, self.__class__): - return self.data == other.data - elif isinstance(other, bytes): - return self.data == other - else: - return NotImplemented - - def __repr__(self) -> str: - return "%s(%s)" % (self.__class__.__name__, repr(self.data)) - - -def _encode_base64( - data: bytes, maxlinelength: Optional[int] = 76, indent_level: int = 1 -) -> bytes: - data = b64encode(data) - if data and maxlinelength: - # split into multiple lines right-justified to 'maxlinelength' chars - indent = b"\n" + b" " * indent_level - max_length = max(16, maxlinelength - len(indent)) - chunks = [] - for i in range(0, len(data), max_length): - chunks.append(indent) - chunks.append(data[i : i + max_length]) - chunks.append(indent) - data = b"".join(chunks) - return data - - -# Mypy does not support recursive type aliases as of 0.782, Pylance does. -# https://github.com/python/mypy/issues/731 -# https://devblogs.microsoft.com/python/pylance-introduces-five-new-features-that-enable-type-magic-for-python-developers/#1-support-for-recursive-type-aliases -PlistEncodable = Union[ - bool, - bytes, - Data, - datetime, - float, - Integral, - Mapping[str, Any], - Sequence[Any], - str, -] - - -class PlistTarget: - """Event handler using the ElementTree Target API that can be - passed to a XMLParser to produce property list objects from XML. - It is based on the CPython plistlib module's _PlistParser class, - but does not use the expat parser. - - >>> from fontTools.misc import etree - >>> parser = etree.XMLParser(target=PlistTarget()) - >>> result = etree.XML( - ... "" - ... " something" - ... " blah" - ... "", - ... 
parser=parser) - >>> result == {"something": "blah"} - True - - Links: - https://github.com/python/cpython/blob/main/Lib/plistlib.py - http://lxml.de/parsing.html#the-target-parser-interface - """ - - def __init__( - self, - use_builtin_types: Optional[bool] = None, - dict_type: Type[MutableMapping[str, Any]] = dict, - ) -> None: - self.stack: List[PlistEncodable] = [] - self.current_key: Optional[str] = None - self.root: Optional[PlistEncodable] = None - if use_builtin_types is None: - self._use_builtin_types = USE_BUILTIN_TYPES - else: - if use_builtin_types is False: - warnings.warn( - "Setting use_builtin_types to False is deprecated and will be " - "removed soon.", - DeprecationWarning, - ) - self._use_builtin_types = use_builtin_types - self._dict_type = dict_type - - def start(self, tag: str, attrib: Mapping[str, str]) -> None: - self._data: List[str] = [] - handler = _TARGET_START_HANDLERS.get(tag) - if handler is not None: - handler(self) - - def end(self, tag: str) -> None: - handler = _TARGET_END_HANDLERS.get(tag) - if handler is not None: - handler(self) - - def data(self, data: str) -> None: - self._data.append(data) - - def close(self) -> PlistEncodable: - if self.root is None: - raise ValueError("No root set.") - return self.root - - # helpers - - def add_object(self, value: PlistEncodable) -> None: - if self.current_key is not None: - stack_top = self.stack[-1] - if not isinstance(stack_top, collections.abc.MutableMapping): - raise ValueError("unexpected element: %r" % stack_top) - stack_top[self.current_key] = value - self.current_key = None - elif not self.stack: - # this is the root object - self.root = value - else: - stack_top = self.stack[-1] - if not isinstance(stack_top, list): - raise ValueError("unexpected element: %r" % stack_top) - stack_top.append(value) - - def get_data(self) -> str: - data = "".join(self._data) - self._data = [] - return data - - -# event handlers - - -def start_dict(self: PlistTarget) -> None: - d = self._dict_type() - self.add_object(d) - self.stack.append(d) - - -def end_dict(self: PlistTarget) -> None: - if self.current_key: - raise ValueError("missing value for key '%s'" % self.current_key) - self.stack.pop() - - -def end_key(self: PlistTarget) -> None: - if self.current_key or not isinstance(self.stack[-1], collections.abc.Mapping): - raise ValueError("unexpected key") - self.current_key = self.get_data() - - -def start_array(self: PlistTarget) -> None: - a: List[PlistEncodable] = [] - self.add_object(a) - self.stack.append(a) - - -def end_array(self: PlistTarget) -> None: - self.stack.pop() - - -def end_true(self: PlistTarget) -> None: - self.add_object(True) - - -def end_false(self: PlistTarget) -> None: - self.add_object(False) - - -def end_integer(self: PlistTarget) -> None: - self.add_object(int(self.get_data())) - - -def end_real(self: PlistTarget) -> None: - self.add_object(float(self.get_data())) - - -def end_string(self: PlistTarget) -> None: - self.add_object(self.get_data()) - - -def end_data(self: PlistTarget) -> None: - if self._use_builtin_types: - self.add_object(b64decode(self.get_data())) - else: - self.add_object(Data.fromBase64(self.get_data())) - - -def end_date(self: PlistTarget) -> None: - self.add_object(_date_from_string(self.get_data())) - - -_TARGET_START_HANDLERS: Dict[str, Callable[[PlistTarget], None]] = { - "dict": start_dict, - "array": start_array, -} - -_TARGET_END_HANDLERS: Dict[str, Callable[[PlistTarget], None]] = { - "dict": end_dict, - "array": end_array, - "key": end_key, - "true": end_true, - 
"false": end_false, - "integer": end_integer, - "real": end_real, - "string": end_string, - "data": end_data, - "date": end_date, -} - - -# functions to build element tree from plist data - - -def _string_element(value: str, ctx: SimpleNamespace) -> etree.Element: - el = etree.Element("string") - el.text = value - return el - - -def _bool_element(value: bool, ctx: SimpleNamespace) -> etree.Element: - if value: - return etree.Element("true") - return etree.Element("false") - - -def _integer_element(value: int, ctx: SimpleNamespace) -> etree.Element: - if -1 << 63 <= value < 1 << 64: - el = etree.Element("integer") - el.text = "%d" % value - return el - raise OverflowError(value) - - -def _real_element(value: float, ctx: SimpleNamespace) -> etree.Element: - el = etree.Element("real") - el.text = repr(value) - return el - - -def _dict_element( - d: Mapping[str, PlistEncodable], ctx: SimpleNamespace -) -> etree.Element: - el = etree.Element("dict") - items = d.items() - if ctx.sort_keys: - items = sorted(items) # type: ignore - ctx.indent_level += 1 - for key, value in items: - if not isinstance(key, str): - if ctx.skipkeys: - continue - raise TypeError("keys must be strings") - k = etree.SubElement(el, "key") - k.text = tostr(key, "utf-8") - el.append(_make_element(value, ctx)) - ctx.indent_level -= 1 - return el - - -def _array_element( - array: Sequence[PlistEncodable], ctx: SimpleNamespace -) -> etree.Element: - el = etree.Element("array") - if len(array) == 0: - return el - ctx.indent_level += 1 - for value in array: - el.append(_make_element(value, ctx)) - ctx.indent_level -= 1 - return el - - -def _date_element(date: datetime, ctx: SimpleNamespace) -> etree.Element: - el = etree.Element("date") - el.text = _date_to_string(date) - return el - - -def _data_element(data: bytes, ctx: SimpleNamespace) -> etree.Element: - el = etree.Element("data") - # NOTE: mypy is confused about whether el.text should be str or bytes. - el.text = _encode_base64( # type: ignore - data, - maxlinelength=(76 if ctx.pretty_print else None), - indent_level=ctx.indent_level, - ) - return el - - -def _string_or_data_element(raw_bytes: bytes, ctx: SimpleNamespace) -> etree.Element: - if ctx.use_builtin_types: - return _data_element(raw_bytes, ctx) - else: - try: - string = raw_bytes.decode(encoding="ascii", errors="strict") - except UnicodeDecodeError: - raise ValueError( - "invalid non-ASCII bytes; use unicode string instead: %r" % raw_bytes - ) - return _string_element(string, ctx) - - -# The following is probably not entirely correct. The signature should take `Any` -# and return `NoReturn`. At the time of this writing, neither mypy nor Pyright -# can deal with singledispatch properly and will apply the signature of the base -# function to all others. Being slightly dishonest makes it type-check and return -# usable typing information for the optimistic case. 
-@singledispatch -def _make_element(value: PlistEncodable, ctx: SimpleNamespace) -> etree.Element: - raise TypeError("unsupported type: %s" % type(value)) - - -_make_element.register(str)(_string_element) -_make_element.register(bool)(_bool_element) -_make_element.register(Integral)(_integer_element) -_make_element.register(float)(_real_element) -_make_element.register(collections.abc.Mapping)(_dict_element) -_make_element.register(list)(_array_element) -_make_element.register(tuple)(_array_element) -_make_element.register(datetime)(_date_element) -_make_element.register(bytes)(_string_or_data_element) -_make_element.register(bytearray)(_data_element) -_make_element.register(Data)(lambda v, ctx: _data_element(v.data, ctx)) - - -# Public functions to create element tree from plist-compatible python -# data structures and viceversa, for use when (de)serializing GLIF xml. - - -def totree( - value: PlistEncodable, - sort_keys: bool = True, - skipkeys: bool = False, - use_builtin_types: Optional[bool] = None, - pretty_print: bool = True, - indent_level: int = 1, -) -> etree.Element: - """Convert a value derived from a plist into an XML tree. - - Args: - value: Any kind of value to be serialized to XML. - sort_keys: Whether keys of dictionaries should be sorted. - skipkeys (bool): Whether to silently skip non-string dictionary - keys. - use_builtin_types (bool): If true, byte strings will be - encoded in Base-64 and wrapped in a ``data`` tag; if - false, they will be either stored as ASCII strings or an - exception raised if they cannot be decoded as such. Defaults - to ``True`` if not present. Deprecated. - pretty_print (bool): Whether to indent the output. - indent_level (int): Level of indentation when serializing. - - Returns: an ``etree`` ``Element`` object. - - Raises: - ``TypeError`` - if non-string dictionary keys are serialized - and ``skipkeys`` is false. - ``ValueError`` - if non-ASCII binary data is present - and `use_builtin_types` is false. - """ - if use_builtin_types is None: - use_builtin_types = USE_BUILTIN_TYPES - else: - use_builtin_types = use_builtin_types - context = SimpleNamespace( - sort_keys=sort_keys, - skipkeys=skipkeys, - use_builtin_types=use_builtin_types, - pretty_print=pretty_print, - indent_level=indent_level, - ) - return _make_element(value, context) - - -def fromtree( - tree: etree.Element, - use_builtin_types: Optional[bool] = None, - dict_type: Type[MutableMapping[str, Any]] = dict, -) -> Any: - """Convert an XML tree to a plist structure. - - Args: - tree: An ``etree`` ``Element``. - use_builtin_types: If True, binary data is deserialized to - bytes strings. If False, it is wrapped in :py:class:`Data` - objects. Defaults to True if not provided. Deprecated. - dict_type: What type to use for dictionaries. - - Returns: An object (usually a dictionary). - """ - target = PlistTarget(use_builtin_types=use_builtin_types, dict_type=dict_type) - for action, element in etree.iterwalk(tree, events=("start", "end")): - if action == "start": - target.start(element.tag, element.attrib) - elif action == "end": - # if there are no children, parse the leaf's data - if not len(element): - # always pass str, not None - target.data(element.text or "") - target.end(element.tag) - return target.close() - - -# python3 plistlib API - - -def load( - fp: IO[bytes], - use_builtin_types: Optional[bool] = None, - dict_type: Type[MutableMapping[str, Any]] = dict, -) -> Any: - """Load a plist file into an object. - - Args: - fp: An opened file. 
- use_builtin_types: If True, binary data is deserialized to - bytes strings. If False, it is wrapped in :py:class:`Data` - objects. Defaults to True if not provided. Deprecated. - dict_type: What type to use for dictionaries. - - Returns: - An object (usually a dictionary) representing the top level of - the plist file. - """ - - if not hasattr(fp, "read"): - raise AttributeError("'%s' object has no attribute 'read'" % type(fp).__name__) - target = PlistTarget(use_builtin_types=use_builtin_types, dict_type=dict_type) - parser = etree.XMLParser(target=target) - result = etree.parse(fp, parser=parser) - # lxml returns the target object directly, while ElementTree wraps - # it as the root of an ElementTree object - try: - return result.getroot() - except AttributeError: - return result - - -def loads( - value: bytes, - use_builtin_types: Optional[bool] = None, - dict_type: Type[MutableMapping[str, Any]] = dict, -) -> Any: - """Load a plist file from a string into an object. - - Args: - value: A bytes string containing a plist. - use_builtin_types: If True, binary data is deserialized to - bytes strings. If False, it is wrapped in :py:class:`Data` - objects. Defaults to True if not provided. Deprecated. - dict_type: What type to use for dictionaries. - - Returns: - An object (usually a dictionary) representing the top level of - the plist file. - """ - - fp = BytesIO(value) - return load(fp, use_builtin_types=use_builtin_types, dict_type=dict_type) - - -def dump( - value: PlistEncodable, - fp: IO[bytes], - sort_keys: bool = True, - skipkeys: bool = False, - use_builtin_types: Optional[bool] = None, - pretty_print: bool = True, -) -> None: - """Write a Python object to a plist file. - - Args: - value: An object to write. - fp: A file opened for writing. - sort_keys (bool): Whether keys of dictionaries should be sorted. - skipkeys (bool): Whether to silently skip non-string dictionary - keys. - use_builtin_types (bool): If true, byte strings will be - encoded in Base-64 and wrapped in a ``data`` tag; if - false, they will be either stored as ASCII strings or an - exception raised if they cannot be represented. Defaults - pretty_print (bool): Whether to indent the output. - indent_level (int): Level of indentation when serializing. - - Raises: - ``TypeError`` - if non-string dictionary keys are serialized - and ``skipkeys`` is false. - ``ValueError`` - if non-representable binary data is present - and `use_builtin_types` is false. - """ - - if not hasattr(fp, "write"): - raise AttributeError("'%s' object has no attribute 'write'" % type(fp).__name__) - root = etree.Element("plist", version="1.0") - el = totree( - value, - sort_keys=sort_keys, - skipkeys=skipkeys, - use_builtin_types=use_builtin_types, - pretty_print=pretty_print, - ) - root.append(el) - tree = etree.ElementTree(root) - # we write the doctype ourselves instead of using the 'doctype' argument - # of 'write' method, becuse lxml will force adding a '\n' even when - # pretty_print is False. - if pretty_print: - header = b"\n".join((XML_DECLARATION, PLIST_DOCTYPE, b"")) - else: - header = XML_DECLARATION + PLIST_DOCTYPE - fp.write(header) - tree.write( # type: ignore - fp, - encoding="utf-8", - pretty_print=pretty_print, - xml_declaration=False, - ) - - -def dumps( - value: PlistEncodable, - sort_keys: bool = True, - skipkeys: bool = False, - use_builtin_types: Optional[bool] = None, - pretty_print: bool = True, -) -> bytes: - """Write a Python object to a string in plist format. - - Args: - value: An object to write. 
- sort_keys (bool): Whether keys of dictionaries should be sorted. - skipkeys (bool): Whether to silently skip non-string dictionary - keys. - use_builtin_types (bool): If true, byte strings will be - encoded in Base-64 and wrapped in a ``data`` tag; if - false, they will be either stored as strings or an - exception raised if they cannot be represented. Defaults - pretty_print (bool): Whether to indent the output. - indent_level (int): Level of indentation when serializing. - - Returns: - string: A plist representation of the Python object. - - Raises: - ``TypeError`` - if non-string dictionary keys are serialized - and ``skipkeys`` is false. - ``ValueError`` - if non-representable binary data is present - and `use_builtin_types` is false. - """ - fp = BytesIO() - dump( - value, - fp, - sort_keys=sort_keys, - skipkeys=skipkeys, - use_builtin_types=use_builtin_types, - pretty_print=pretty_print, - ) - return fp.getvalue() diff --git a/spaces/DerrylNessie/MangaCleaner/README.md b/spaces/DerrylNessie/MangaCleaner/README.md deleted file mode 100644 index 7963bcbbebcdece7f095a8be2824b509a619e517..0000000000000000000000000000000000000000 --- a/spaces/DerrylNessie/MangaCleaner/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Wakania -emoji: 😻 -colorFrom: gray -colorTo: gray -sdk: gradio -sdk_version: 2.8.14 -app_file: app.py -pinned: false -license: afl-3.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference diff --git a/spaces/DragGan/DragGan/stylegan_human/torch_utils/ops/upfirdn2d.cpp b/spaces/DragGan/DragGan/stylegan_human/torch_utils/ops/upfirdn2d.cpp deleted file mode 100644 index 42bdd483490a555266c8f9b9dd6684464b2088bc..0000000000000000000000000000000000000000 --- a/spaces/DragGan/DragGan/stylegan_human/torch_utils/ops/upfirdn2d.cpp +++ /dev/null @@ -1,105 +0,0 @@ -// Copyright (c) SenseTime Research. All rights reserved. - -// Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved. -// -// NVIDIA CORPORATION and its licensors retain all intellectual property -// and proprietary rights in and to this software, related documentation -// and any modifications thereto. Any use, reproduction, disclosure or -// distribution of this software and related documentation without an express -// license agreement from NVIDIA CORPORATION is strictly prohibited. - -#include -#include -#include -#include "upfirdn2d.h" - -//------------------------------------------------------------------------ - -static torch::Tensor upfirdn2d(torch::Tensor x, torch::Tensor f, int upx, int upy, int downx, int downy, int padx0, int padx1, int pady0, int pady1, bool flip, float gain) -{ - // Validate arguments. - TORCH_CHECK(x.is_cuda(), "x must reside on CUDA device"); - TORCH_CHECK(f.device() == x.device(), "f must reside on the same device as x"); - TORCH_CHECK(f.dtype() == torch::kFloat, "f must be float32"); - TORCH_CHECK(x.numel() <= INT_MAX, "x is too large"); - TORCH_CHECK(f.numel() <= INT_MAX, "f is too large"); - TORCH_CHECK(x.dim() == 4, "x must be rank 4"); - TORCH_CHECK(f.dim() == 2, "f must be rank 2"); - TORCH_CHECK(f.size(0) >= 1 && f.size(1) >= 1, "f must be at least 1x1"); - TORCH_CHECK(upx >= 1 && upy >= 1, "upsampling factor must be at least 1"); - TORCH_CHECK(downx >= 1 && downy >= 1, "downsampling factor must be at least 1"); - - // Create output tensor. 
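- // The output extent follows the standard upfirdn relation, as computed just below:
- // outW = (inW * upx + padx0 + padx1 - filterW + downx) / downx (integer division), likewise for outH;
- // i.e. upsample by `up`, pad, convolve with f, then keep every `down`-th sample. Both extents must be >= 1.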
- const at::cuda::OptionalCUDAGuard device_guard(device_of(x)); - int outW = ((int)x.size(3) * upx + padx0 + padx1 - (int)f.size(1) + downx) / downx; - int outH = ((int)x.size(2) * upy + pady0 + pady1 - (int)f.size(0) + downy) / downy; - TORCH_CHECK(outW >= 1 && outH >= 1, "output must be at least 1x1"); - torch::Tensor y = torch::empty({x.size(0), x.size(1), outH, outW}, x.options(), x.suggest_memory_format()); - TORCH_CHECK(y.numel() <= INT_MAX, "output is too large"); - - // Initialize CUDA kernel parameters. - upfirdn2d_kernel_params p; - p.x = x.data_ptr(); - p.f = f.data_ptr(); - p.y = y.data_ptr(); - p.up = make_int2(upx, upy); - p.down = make_int2(downx, downy); - p.pad0 = make_int2(padx0, pady0); - p.flip = (flip) ? 1 : 0; - p.gain = gain; - p.inSize = make_int4((int)x.size(3), (int)x.size(2), (int)x.size(1), (int)x.size(0)); - p.inStride = make_int4((int)x.stride(3), (int)x.stride(2), (int)x.stride(1), (int)x.stride(0)); - p.filterSize = make_int2((int)f.size(1), (int)f.size(0)); - p.filterStride = make_int2((int)f.stride(1), (int)f.stride(0)); - p.outSize = make_int4((int)y.size(3), (int)y.size(2), (int)y.size(1), (int)y.size(0)); - p.outStride = make_int4((int)y.stride(3), (int)y.stride(2), (int)y.stride(1), (int)y.stride(0)); - p.sizeMajor = (p.inStride.z == 1) ? p.inSize.w : p.inSize.w * p.inSize.z; - p.sizeMinor = (p.inStride.z == 1) ? p.inSize.z : 1; - - // Choose CUDA kernel. - upfirdn2d_kernel_spec spec; - AT_DISPATCH_FLOATING_TYPES_AND_HALF(x.scalar_type(), "upfirdn2d_cuda", [&] - { - spec = choose_upfirdn2d_kernel(p); - }); - - // Set looping options. - p.loopMajor = (p.sizeMajor - 1) / 16384 + 1; - p.loopMinor = spec.loopMinor; - p.loopX = spec.loopX; - p.launchMinor = (p.sizeMinor - 1) / p.loopMinor + 1; - p.launchMajor = (p.sizeMajor - 1) / p.loopMajor + 1; - - // Compute grid size. - dim3 blockSize, gridSize; - if (spec.tileOutW < 0) // large - { - blockSize = dim3(4, 32, 1); - gridSize = dim3( - ((p.outSize.y - 1) / blockSize.x + 1) * p.launchMinor, - (p.outSize.x - 1) / (blockSize.y * p.loopX) + 1, - p.launchMajor); - } - else // small - { - blockSize = dim3(256, 1, 1); - gridSize = dim3( - ((p.outSize.y - 1) / spec.tileOutH + 1) * p.launchMinor, - (p.outSize.x - 1) / (spec.tileOutW * p.loopX) + 1, - p.launchMajor); - } - - // Launch CUDA kernel. 
- void* args[] = {&p}; - AT_CUDA_CHECK(cudaLaunchKernel(spec.kernel, gridSize, blockSize, args, 0, at::cuda::getCurrentCUDAStream())); - return y; -} - -//------------------------------------------------------------------------ - -PYBIND11_MODULE(TORCH_EXTENSION_NAME, m) -{ - m.def("upfirdn2d", &upfirdn2d); -} - -//------------------------------------------------------------------------ diff --git a/spaces/EcoCy/LoRA-DreamBooth-Training-UI/constants.py b/spaces/EcoCy/LoRA-DreamBooth-Training-UI/constants.py deleted file mode 100644 index baaebbae71058fbb4faed35fd00e7559305dc409..0000000000000000000000000000000000000000 --- a/spaces/EcoCy/LoRA-DreamBooth-Training-UI/constants.py +++ /dev/null @@ -1,6 +0,0 @@ -import enum - - -class UploadTarget(enum.Enum): - PERSONAL_PROFILE = 'Personal Profile' - LORA_LIBRARY = 'LoRA Library' diff --git a/spaces/FantasticGNU/AnomalyGPT/model/ImageBind/__init__.py b/spaces/FantasticGNU/AnomalyGPT/model/ImageBind/__init__.py deleted file mode 100644 index d872d0725710d6dde3af3b6e05382922f074338b..0000000000000000000000000000000000000000 --- a/spaces/FantasticGNU/AnomalyGPT/model/ImageBind/__init__.py +++ /dev/null @@ -1,2 +0,0 @@ -from .models import imagebind_model -from .models.imagebind_model import ModalityType diff --git a/spaces/FlippFuzz/whisper-webui/docs/colab.md b/spaces/FlippFuzz/whisper-webui/docs/colab.md deleted file mode 100644 index 3fcdb835327238764fb643b9bbd2e27b6e14f58c..0000000000000000000000000000000000000000 --- a/spaces/FlippFuzz/whisper-webui/docs/colab.md +++ /dev/null @@ -1,20 +0,0 @@ -# Running Whisper on Google Colab - -If you don't have a decent GPU or any experience in running command-line applications, you might want to try this Google Colab instead: - -* [Google Colab - Whisper WebUI GPU](https://colab.research.google.com/drive/1qeTSvi7Bt_5RMm88ipW4fkcsMOKlDDss?usp=sharing) -* [Screenshots](https://imgur.com/a/ZfY6uBO) - -The runtime (Runtime -> Change runtime type -> Hardware accelerator) should already be set top GPU. But if not, change it to GPU. - -Then, sign in to Google if you haven't already. Next, click on "Connect" at the top right. - -Under "Checking out WebUI from Git", click on the [play icon](https://imgur.com/a/81gOLyD) that appears in "[ ]" at the left. If you get a warning, click "Run anyway". - -After this step has completed, it should be get a green check mark. Then move on to the next section under "Installing dependencies", and click in "[ ]" again. This might take approximately 30 seconds. - -Once this has completed, scroll down to the "Run WebUI" section, and click on "[ ]". This will launch the WebUI in a shared link (expires in 72 hours). To open the UI, click on the link next to "Running on public URL", which will be something like https://12xxx.gradio.app/ - -The audio length in this version is not restricted, and it will run much faster as it is backed by a GPU. You can also run it using the "Large" model. Also note that it might take some time to start the model the first time, as it may need to download a 2.8 GB file on Google's servers. - -Once you're done, you can close the WebUI session by clicking the animated close button under "Run WebUI". You can also do this if you encounter any errors and need to restart the UI. You should also go to "Manage Sessions" and terminate the session, otherwise you may end up using all your free compute credits. 
\ No newline at end of file diff --git a/spaces/FloydianSound/Redline_Diffusion_V1-5/README.md b/spaces/FloydianSound/Redline_Diffusion_V1-5/README.md deleted file mode 100644 index 02dc44a37e2aa1b7958b860000f29ac3909924af..0000000000000000000000000000000000000000 --- a/spaces/FloydianSound/Redline_Diffusion_V1-5/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Redline Diffusion V1-5 -emoji: 🐠 -colorFrom: purple -colorTo: red -sdk: gradio -sdk_version: 3.16.1b1 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/GAIR/Factool/factool/__init__.py b/spaces/GAIR/Factool/factool/__init__.py deleted file mode 100644 index efe3ef5eb2059cc8cbe82838206ab7d54e1d62c6..0000000000000000000000000000000000000000 --- a/spaces/GAIR/Factool/factool/__init__.py +++ /dev/null @@ -1,2 +0,0 @@ -from factool.factool import Factool -from factool.tasks import TaskType \ No newline at end of file diff --git a/spaces/GaenKoki/voicevox/generate_licenses.py b/spaces/GaenKoki/voicevox/generate_licenses.py deleted file mode 100644 index da41db0c01e20dc8cf935418bb59a5c4923c56ae..0000000000000000000000000000000000000000 --- a/spaces/GaenKoki/voicevox/generate_licenses.py +++ /dev/null @@ -1,337 +0,0 @@ -import json -import os -import subprocess -import urllib.request -from dataclasses import asdict, dataclass -from pathlib import Path -from typing import List, Optional - - -@dataclass -class License: - name: str - version: Optional[str] - license: Optional[str] - text: str - - -def generate_licenses() -> List[License]: - licenses: List[License] = [] - - # openjtalk - # https://sourceforge.net/projects/open-jtalk/files/Open%20JTalk/open_jtalk-1.11/ - licenses.append( - License( - name="Open JTalk", - version="1.11", - license="Modified BSD license", - text=Path("docs/licenses/open_jtalk/COPYING").read_text(), - ) - ) - licenses.append( - License( - name="MeCab", - version=None, - license="Modified BSD license", - text=Path("docs/licenses/open_jtalk/mecab/COPYING").read_text(), - ) - ) - licenses.append( - License( - name="NAIST Japanese Dictionary", - version=None, - license="Modified BSD license", - text=Path("docs/licenses//open_jtalk/mecab-naist-jdic/COPYING").read_text(), - ) - ) - with urllib.request.urlopen( - "https://raw.githubusercontent.com/r9y9/pyopenjtalk/master/pyopenjtalk/htsvoice/LICENSE_mei_normal.htsvoice" # noqa: B950 - ) as res: - licenses.append( - License( - name='HTS Voice "Mei"', - version=None, - license="Creative Commons Attribution 3.0 license", - text=res.read().decode(), - ) - ) - - # VOICEVOX CORE - with urllib.request.urlopen( - "https://raw.githubusercontent.com/VOICEVOX/voicevox_core/main/LICENSE" - ) as res: - licenses.append( - License( - name="VOICEVOX CORE", - version=None, - license="MIT license", - text=res.read().decode(), - ) - ) - - # VOICEVOX ENGINE - with urllib.request.urlopen( - "https://raw.githubusercontent.com/VOICEVOX/voicevox_engine/master/LGPL_LICENSE" - ) as res: - licenses.append( - License( - name="VOICEVOX ENGINE", - version=None, - license="LGPL license", - text=res.read().decode(), - ) - ) - - # world - with urllib.request.urlopen( - "https://raw.githubusercontent.com/mmorise/World/master/LICENSE.txt" - ) as res: - licenses.append( - License( - name="world", - version=None, - license="Modified BSD license", - text=res.read().decode(), - ) - ) - - # pytorch - with urllib.request.urlopen( - "https://raw.githubusercontent.com/pytorch/pytorch/master/LICENSE" - ) 
as res: - licenses.append( - License( - name="PyTorch", - version="1.9.0", - license="BSD-style license", - text=res.read().decode(), - ) - ) - - # onnxruntime - with urllib.request.urlopen( - "https://raw.githubusercontent.com/microsoft/onnxruntime/master/LICENSE" - ) as res: - licenses.append( - License( - name="ONNX Runtime", - version="1.13.1", - license="MIT license", - text=res.read().decode(), - ) - ) - - # Python - python_version = "3.11.3" - with urllib.request.urlopen( - f"https://raw.githubusercontent.com/python/cpython/v{python_version}/LICENSE" - ) as res: - licenses.append( - License( - name="Python", - version=python_version, - license="Python Software Foundation License", - text=res.read().decode(), - ) - ) - - # pip - try: - pip_licenses_output = subprocess.run( - "pip-licenses " - "--from=mixed " - "--format=json " - "--with-urls " - "--with-license-file " - "--no-license-path ", - shell=True, - capture_output=True, - check=True, - env=os.environ, - ).stdout.decode() - except subprocess.CalledProcessError as err: - raise Exception( - f"command output:\n{err.stderr and err.stderr.decode()}" - ) from err - - licenses_json = json.loads(pip_licenses_output) - for license_json in licenses_json: - license = License( - name=license_json["Name"], - version=license_json["Version"], - license=license_json["License"], - text=license_json["LicenseText"], - ) - # FIXME: assert license type - if license.text == "UNKNOWN": - if license.name.lower() == "core" and license.version == "0.0.0": - continue - elif license.name.lower() == "future": - with urllib.request.urlopen( - "https://raw.githubusercontent.com/PythonCharmers/python-future/master/LICENSE.txt" # noqa: B950 - ) as res: - license.text = res.read().decode() - elif license.name.lower() == "pefile": - with urllib.request.urlopen( - "https://raw.githubusercontent.com/erocarrera/pefile/master/LICENSE" # noqa: B950 - ) as res: - license.text = res.read().decode() - elif license.name.lower() == "pyopenjtalk": - with urllib.request.urlopen( - "https://raw.githubusercontent.com/r9y9/pyopenjtalk/master/LICENSE.md" - ) as res: - license.text = res.read().decode() - elif license.name.lower() == "python-multipart": - with urllib.request.urlopen( - "https://raw.githubusercontent.com/andrew-d/python-multipart/master/LICENSE.txt" # noqa: B950 - ) as res: - license.text = res.read().decode() - elif license.name.lower() == "romkan": - with urllib.request.urlopen( - "https://raw.githubusercontent.com/soimort/python-romkan/master/LICENSE" - ) as res: - license.text = res.read().decode() - elif license.name.lower() == "distlib": - with urllib.request.urlopen( - "https://bitbucket.org/pypa/distlib/raw/7d93712134b28401407da27382f2b6236c87623a/LICENSE.txt" # noqa: B950 - ) as res: - license.text = res.read().decode() - elif license.name.lower() == "jsonschema": - with urllib.request.urlopen( - "https://raw.githubusercontent.com/python-jsonschema/jsonschema/dbc398245a583cb2366795dc529ae042d10c1577/COPYING" - ) as res: - license.text = res.read().decode() - elif license.name.lower() == "lockfile": - with urllib.request.urlopen( - "https://opendev.org/openstack/pylockfile/raw/tag/0.12.2/LICENSE" - ) as res: - license.text = res.read().decode() - elif license.name.lower() == "platformdirs": - with urllib.request.urlopen( - "https://raw.githubusercontent.com/platformdirs/platformdirs/aa671aaa97913c7b948567f4d9c77d4f98bfa134/LICENSE" - ) as res: - license.text = res.read().decode() - elif license.name.lower() == "webencodings": - with 
urllib.request.urlopen( - "https://raw.githubusercontent.com/gsnedders/python-webencodings/fa2cb5d75ab41e63ace691bc0825d3432ba7d694/LICENSE" - ) as res: - license.text = res.read().decode() - else: - # ライセンスがpypiに無い - raise Exception(f"No License info provided for {license.name}") - licenses.append(license) - - # OpenBLAS - with urllib.request.urlopen( - "https://raw.githubusercontent.com/xianyi/OpenBLAS/develop/LICENSE" - ) as res: - licenses.append( - License( - name="OpenBLAS", - version=None, - license="BSD 3-clause license", - text=res.read().decode(), - ) - ) - - # libsndfile-binaries - with urllib.request.urlopen( - "https://raw.githubusercontent.com/bastibe/libsndfile-binaries/84cb164928f17c7ca0c1e5c40342c20ce2b90e8c/COPYING" # noqa: B950 - ) as res: - licenses.append( - License( - name="libsndfile-binaries", - version="1.0.28", - license="LGPL-2.1 license", - text=res.read().decode(), - ) - ) - - # libogg - with urllib.request.urlopen( - "https://raw.githubusercontent.com/xiph/ogg/v1.3.2/COPYING" - ) as res: - licenses.append( - License( - name="libogg", - version="1.3.2", - license="BSD 3-clause license", - text=res.read().decode(), - ) - ) - - # libvorbis - with urllib.request.urlopen( - "https://raw.githubusercontent.com/xiph/vorbis/v1.3.5/COPYING" - ) as res: - licenses.append( - License( - name="libvorbis", - version="1.3.5", - license="BSD 3-clause license", - text=res.read().decode(), - ) - ) - - # libflac - with urllib.request.urlopen( - "https://raw.githubusercontent.com/xiph/flac/1.3.2/COPYING.Xiph" - ) as res: - licenses.append( - License( - name="FLAC", - version="1.3.2", - license="Xiph.org's BSD-like license", - text=res.read().decode(), - ) - ) - - # cuda - # license text from CUDA 11.6.2 - # https://developer.nvidia.com/cuda-11-6-2-download-archive?target_os=Windows&target_arch=x86_64&target_version=10&target_type=exe_local # noqa: B950 - # https://developer.download.nvidia.com/compute/cuda/11.6.2/local_installers/cuda_11.6.2_511.65_windows.exe # noqa: B950 - # cuda_11.6.2_511.65_windows.exe (cuda_documentation/Doc/EULA.txt) - licenses.append( - License( - name="CUDA Toolkit", - version="11.6.2", - license=None, - text=Path("docs/licenses/cuda/EULA.txt").read_text(encoding="utf8"), - ) - ) - # cudnn - # license text from - # cuDNN v8.4.1 (May 27th, 2022), for CUDA 11.x, cuDNN Library for Windows - # https://developer.nvidia.com/rdp/cudnn-archive # noqa: B950 - # https://developer.download.nvidia.com/compute/redist/cudnn/v8.4.1/local_installers/11.6/cudnn-windows-x86_64-8.4.1.50_cuda11.6-archive.zip # noqa: B950 - # cudnn-windows-x86_64-8.4.1.50_cuda11.6-archive.zip (cudnn-windows-x86_64-8.4.1.50_cuda11.6-archive/LICENSE) # noqa: B950 - licenses.append( - License( - name="cuDNN", - version="8.4.1", - license=None, - text=Path("docs/licenses/cudnn/LICENSE").read_text(encoding="utf8"), - ) - ) - - return licenses - - -if __name__ == "__main__": - import argparse - import sys - - parser = argparse.ArgumentParser() - parser.add_argument("-o", "--output_path", type=str) - args = parser.parse_args() - - output_path = args.output_path - - licenses = generate_licenses() - - # dump - out = Path(output_path).open("w") if output_path else sys.stdout - json.dump( - [asdict(license) for license in licenses], - out, - ) diff --git a/spaces/GaenKoki/voicevox/make_docs.py b/spaces/GaenKoki/voicevox/make_docs.py deleted file mode 100644 index d10bd1aa40887783ba8cb90dabda031dce213be0..0000000000000000000000000000000000000000 --- a/spaces/GaenKoki/voicevox/make_docs.py +++ /dev/null 
@@ -1,33 +0,0 @@ -import json - -from voicevox_engine.dev.core import mock as core -from voicevox_engine.dev.synthesis_engine.mock import MockSynthesisEngine -from voicevox_engine.setting import USER_SETTING_PATH, SettingLoader - -if __name__ == "__main__": - import run - - app = run.generate_app( - synthesis_engines={"mock": MockSynthesisEngine(speakers=core.metas())}, - latest_core_version="mock", - setting_loader=SettingLoader(USER_SETTING_PATH), - ) - with open("docs/api/index.html", "w") as f: - f.write( - """ - - - voicevox_engine API Document - - - - -
    - - - -""" - % json.dumps(app.openapi()) - ) diff --git a/spaces/GeneralNewSense/Text-to-Music/utils.py b/spaces/GeneralNewSense/Text-to-Music/utils.py deleted file mode 100644 index d302528fd6fc9be8d782f78b6c44f4d894147d07..0000000000000000000000000000000000000000 --- a/spaces/GeneralNewSense/Text-to-Music/utils.py +++ /dev/null @@ -1,50 +0,0 @@ -import json -import numpy as np -import httpx - -from constants import MUBERT_TAGS, MUBERT_LICENSE, MUBERT_MODE, MUBERT_TOKEN - - -def get_mubert_tags_embeddings(w2v_model): - return w2v_model.encode(MUBERT_TAGS) - - -def get_pat(email: str): - r = httpx.post('https://api-b2b.mubert.com/v2/GetServiceAccess', - json={ - "method": "GetServiceAccess", - "params": { - "email": email, - "license": MUBERT_LICENSE, - "token": MUBERT_TOKEN, - "mode": MUBERT_MODE, - } - }) - - rdata = json.loads(r.text) - assert rdata['status'] == 1, "probably incorrect e-mail" - pat = rdata['data']['pat'] - return pat - - -def find_similar(em, embeddings, method='cosine'): - scores = [] - for ref in embeddings: - if method == 'cosine': - scores.append(1 - np.dot(ref, em) / (np.linalg.norm(ref) * np.linalg.norm(em))) - if method == 'norm': - scores.append(np.linalg.norm(ref - em)) - return np.array(scores), np.argsort(scores) - - -def get_tags_for_prompts(w2v_model, mubert_tags_embeddings, prompts, top_n=3, debug=False): - prompts_embeddings = w2v_model.encode(prompts) - ret = [] - for i, pe in enumerate(prompts_embeddings): - scores, idxs = find_similar(pe, mubert_tags_embeddings) - top_tags = MUBERT_TAGS[idxs[:top_n]] - top_prob = 1 - scores[idxs[:top_n]] - if debug: - print(f"Prompt: {prompts[i]}\nTags: {', '.join(top_tags)}\nScores: {top_prob}\n\n\n") - ret.append((prompts[i], list(top_tags))) - return ret diff --git a/spaces/Gertie01/MusicLM/app.py b/spaces/Gertie01/MusicLM/app.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/_base_/datasets/stare.py b/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/_base_/datasets/stare.py deleted file mode 100644 index 3f71b25488cc11a6b4d582ac52b5a24e1ad1cf8e..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/_base_/datasets/stare.py +++ /dev/null @@ -1,59 +0,0 @@ -# dataset settings -dataset_type = 'STAREDataset' -data_root = 'data/STARE' -img_norm_cfg = dict( - mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True) -img_scale = (605, 700) -crop_size = (128, 128) -train_pipeline = [ - dict(type='LoadImageFromFile'), - dict(type='LoadAnnotations'), - dict(type='Resize', img_scale=img_scale, ratio_range=(0.5, 2.0)), - dict(type='RandomCrop', crop_size=crop_size, cat_max_ratio=0.75), - dict(type='RandomFlip', prob=0.5), - dict(type='PhotoMetricDistortion'), - dict(type='Normalize', **img_norm_cfg), - dict(type='Pad', size=crop_size, pad_val=0, seg_pad_val=255), - dict(type='DefaultFormatBundle'), - dict(type='Collect', keys=['img', 'gt_semantic_seg']) -] -test_pipeline = [ - dict(type='LoadImageFromFile'), - dict( - type='MultiScaleFlipAug', - img_scale=img_scale, - # img_ratios=[0.5, 0.75, 1.0, 1.25, 1.5, 1.75, 2.0], - flip=False, - transforms=[ - dict(type='Resize', keep_ratio=True), - dict(type='RandomFlip'), - dict(type='Normalize', **img_norm_cfg), - dict(type='ImageToTensor', keys=['img']), - dict(type='Collect', keys=['img']) - ]) -] - -data = dict( - samples_per_gpu=4, - workers_per_gpu=4, - train=dict( 
- type='RepeatDataset', - times=40000, - dataset=dict( - type=dataset_type, - data_root=data_root, - img_dir='images/training', - ann_dir='annotations/training', - pipeline=train_pipeline)), - val=dict( - type=dataset_type, - data_root=data_root, - img_dir='images/validation', - ann_dir='annotations/validation', - pipeline=test_pipeline), - test=dict( - type=dataset_type, - data_root=data_root, - img_dir='images/validation', - ann_dir='annotations/validation', - pipeline=test_pipeline)) diff --git a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/ann/ann_r50-d8_769x769_80k_cityscapes.py b/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/ann/ann_r50-d8_769x769_80k_cityscapes.py deleted file mode 100644 index d1cc072b152986102286f503e3d7b92999bf414c..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/ann/ann_r50-d8_769x769_80k_cityscapes.py +++ /dev/null @@ -1,9 +0,0 @@ -_base_ = [ - '../_base_/models/ann_r50-d8.py', - '../_base_/datasets/cityscapes_769x769.py', '../_base_/default_runtime.py', - '../_base_/schedules/schedule_80k.py' -] -model = dict( - decode_head=dict(align_corners=True), - auxiliary_head=dict(align_corners=True), - test_cfg=dict(mode='slide', crop_size=(769, 769), stride=(513, 513))) diff --git a/spaces/GrandaddyShmax/AudioCraft_Plus/docs/MBD.md b/spaces/GrandaddyShmax/AudioCraft_Plus/docs/MBD.md deleted file mode 100644 index 296d08407bac9155380a48bdc9faa5798db32bcb..0000000000000000000000000000000000000000 --- a/spaces/GrandaddyShmax/AudioCraft_Plus/docs/MBD.md +++ /dev/null @@ -1,117 +0,0 @@ -# MultiBand Diffusion - -AudioCraft provides the code and models for MultiBand Diffusion, [From Discrete Tokens to High Fidelity Audio using MultiBand Diffusion][arxiv]. -MultiBand diffusion is a collection of 4 models that can decode tokens from -
the EnCodec tokenizer into waveform audio. - - Open In Colab - -
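As a quick orientation, here is a minimal sketch of that decoding flow, assembled only from the API calls documented in the sections below (it is an illustrative sketch, not an additional official example):

```python
# Minimal sketch: generate MusicGen tokens, then decode them with MultiBand Diffusion.
from audiocraft.models import MusicGen, MultiBandDiffusion
from audiocraft.data.audio import audio_write

model = MusicGen.get_pretrained('facebook/musicgen-melody')  # token generator
mbd = MultiBandDiffusion.get_mbd_musicgen()                  # diffusion decoder

model.set_generation_params(duration=8)  # generate 8 seconds
wav, tokens = model.generate_unconditional(1, return_tokens=True)  # keep the tokens
wav_diffusion = mbd.tokens_to_wav(tokens)  # decode the tokens to waveform with MBD

audio_write('mbd_sample', wav_diffusion[0].cpu(), model.sample_rate,
            strategy="loudness", loudness_compressor=True)
```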
    - - -## Installation - -Please follow the AudioCraft installation instructions from the [README](../README.md). - - -## Usage - -We offer a number of way to use MultiBand Diffusion: -1. The MusicGen demo includes a toggle to try diffusion decoder. You can use the demo locally by running [`python -m demos.musicgen_app --share`](../demos/musicgen_app.py), or through the [MusicGen Colab](https://colab.research.google.com/drive/1JlTOjB-G0A2Hz3h8PK63vLZk4xdCI5QB?usp=sharing). -2. You can play with MusicGen by running the jupyter notebook at [`demos/musicgen_demo.ipynb`](../demos/musicgen_demo.ipynb) locally (if you have a GPU). - -## API - -We provide a simple API and pre-trained models for MusicGen and for EnCodec at 24 khz for 3 bitrates (1.5 kbps, 3 kbps and 6 kbps). - -See after a quick example for using MultiBandDiffusion with the MusicGen API: - -```python -import torchaudio -from audiocraft.models import MusicGen, MultiBandDiffusion -from audiocraft.data.audio import audio_write - -model = MusicGen.get_pretrained('facebook/musicgen-melody') -mbd = MultiBandDiffusion.get_mbd_musicgen() -model.set_generation_params(duration=8) # generate 8 seconds. -wav, tokens = model.generate_unconditional(4, return_tokens=True) # generates 4 unconditional audio samples and keep the tokens for MBD generation -descriptions = ['happy rock', 'energetic EDM', 'sad jazz'] -wav_diffusion = mbd.tokens_to_wav(tokens) -wav, tokens = model.generate(descriptions, return_tokens=True) # generates 3 samples and keep the tokens. -wav_diffusion = mbd.tokens_to_wav(tokens) -melody, sr = torchaudio.load('./assets/bach.mp3') -# Generates using the melody from the given audio and the provided descriptions, returns audio and audio tokens. -wav, tokens = model.generate_with_chroma(descriptions, melody[None].expand(3, -1, -1), sr, return_tokens=True) -wav_diffusion = mbd.tokens_to_wav(tokens) - -for idx, one_wav in enumerate(wav): - # Will save under {idx}.wav and {idx}_diffusion.wav, with loudness normalization at -14 db LUFS for comparing the methods. - audio_write(f'{idx}', one_wav.cpu(), model.sample_rate, strategy="loudness", loudness_compressor=True) - audio_write(f'{idx}_diffusion', wav_diffusion[idx].cpu(), model.sample_rate, strategy="loudness", loudness_compressor=True) -``` - -For the compression task (and to compare with [EnCodec](https://github.com/facebookresearch/encodec)): - -```python -import torch -from audiocraft.models import MultiBandDiffusion -from encodec import EncodecModel -from audiocraft.data.audio import audio_read, audio_write - -bandwidth = 3.0 # 1.5, 3.0, 6.0 -mbd = MultiBandDiffusion.get_mbd_24khz(bw=bandwidth) -encodec = EncodecModel.get_encodec_24khz() - -somepath = '' -wav, sr = audio_read(somepath) -with torch.no_grad(): - compressed_encodec = encodec(wav) - compressed_diffusion = mbd.regenerate(wav, sample_rate=sr) - -audio_write('sample_encodec', compressed_encodec.squeeze(0).cpu(), mbd.sample_rate, strategy="loudness", loudness_compressor=True) -audio_write('sample_diffusion', compressed_diffusion.squeeze(0).cpu(), mbd.sample_rate, strategy="loudness", loudness_compressor=True) -``` - - -## Training - -The [DiffusionSolver](../audiocraft/solvers/diffusion.py) implements our diffusion training pipeline. -It generates waveform audio conditioned on the embeddings extracted from a pre-trained EnCodec model -(see [EnCodec documentation](./ENCODEC.md) for more details on how to train such model). - -Note that **we do NOT provide any of the datasets** used for training our diffusion models. 
-We provide a dummy dataset containing just a few examples for illustrative purposes. - -### Example configurations and grids - -One can train diffusion models as described in the paper by using this [dora grid](../audiocraft/grids/diffusion/4_bands_base_32khz.py). -```shell -# 4 bands MBD trainning -dora grid diffusion.4_bands_base_32khz -``` - -### Learn more - -Learn more about AudioCraft training pipelines in the [dedicated section](./TRAINING.md). - - -## Citation - -``` -@article{sanroman2023fromdi, - title={From Discrete Tokens to High-Fidelity Audio Using Multi-Band Diffusion}, - author={San Roman, Robin and Adi, Yossi and Deleforge, Antoine and Serizel, Romain and Synnaeve, Gabriel and Défossez, Alexandre}, - journal={arXiv preprint arXiv:}, - year={2023} -} -``` - - -## License - -See license information in the [README](../README.md). - - -[arxiv]: https://dl.fbaipublicfiles.com/encodec/Diffusion/paper.pdf -[mbd_samples]: https://ai.honu.io/papers/mbd/ diff --git a/spaces/GreenTeaLatte/ComfyUI-cpu/Dockerfile b/spaces/GreenTeaLatte/ComfyUI-cpu/Dockerfile deleted file mode 100644 index 68f9a53cef8155fa224a4b379fafd210d2f4ca8f..0000000000000000000000000000000000000000 --- a/spaces/GreenTeaLatte/ComfyUI-cpu/Dockerfile +++ /dev/null @@ -1,151 +0,0 @@ -# FROM nvidia/cuda:11.8.0-cudnn8-runtime-ubuntu22.04 -FROM ubuntu:22.04 - -ENV DEBIAN_FRONTEND=noninteractive \ - TZ=America/Los_Angeles - -ARG USE_PERSISTENT_DATA - -RUN apt-get update && apt-get install -y \ - git \ - make build-essential libssl-dev zlib1g-dev \ - libbz2-dev libreadline-dev libsqlite3-dev wget curl llvm \ - libncursesw5-dev xz-utils tk-dev libxml2-dev libxmlsec1-dev libffi-dev liblzma-dev git-lfs \ - ffmpeg libsm6 libxext6 cmake libgl1-mesa-glx \ - && rm -rf /var/lib/apt/lists/* \ - && git lfs install - -WORKDIR /code - -COPY ./requirements.txt /code/requirements.txt - -# User -RUN useradd -m -u 1000 user -USER user -ENV HOME=/home/user \ - PATH=/home/user/.local/bin:$PATH - -# Pyenv -RUN curl https://pyenv.run | bash -ENV PATH=$HOME/.pyenv/shims:$HOME/.pyenv/bin:$PATH - -ARG PYTHON_VERSION=3.10.12 -# Python -RUN pyenv install $PYTHON_VERSION && \ - pyenv global $PYTHON_VERSION && \ - pyenv rehash && \ - pip install --no-cache-dir --upgrade pip setuptools wheel && \ - pip install --no-cache-dir \ - datasets \ - huggingface-hub "protobuf<4" "click<8.1" - -RUN pip install --no-cache-dir --upgrade -r /code/requirements.txt - -# Set the working directory to /data if USE_PERSISTENT_DATA is set, otherwise set to $HOME/app -WORKDIR $HOME/app - -# Copy the current directory contents into the container at $HOME/app setting the owner to the user - -RUN git clone https://github.com/comfyanonymous/ComfyUI . && \ - pip install xformers!=0.0.18 --no-cache-dir -r requirements.txt --extra-index-url https://download.pytorch.org/whl/cu117 - -# Checkpoints - -RUN echo "Downloading checkpoints..." 
-# SDXL -RUN wget -c https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0/resolve/main/sd_xl_base_1.0.safetensors -P ./models/checkpoints/ -RUN wget -c https://huggingface.co/stabilityai/stable-diffusion-xl-refiner-1.0/resolve/main/sd_xl_refiner_1.0.safetensors -P ./models/checkpoints/ -# RUN wget -c https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0/resolve/main/sd_xl_base_1.0_0.9vae.safetensors -P ./models/checkpoints/ - -# SD1.5 -# RUN wget -c https://huggingface.co/runwayml/stable-diffusion-v1-5/resolve/main/v1-5-pruned-emaonly.ckpt -P ./models/checkpoints/ -RUN wget -c https://huggingface.co/stabilityai/stable-diffusion-2-1-base/resolve/main/v2-1_512-ema-pruned.safetensors -P ./models/checkpoints/ -# RUN wget -c https://huggingface.co/stabilityai/stable-diffusion-2-1/resolve/main/v2-1_768-ema-pruned.safetensors -P ./models/checkpoints/ -# Some SD1.5 anime style -# RUN wget -c https://huggingface.co/WarriorMama777/OrangeMixs/resolve/main/Models/AbyssOrangeMix2/AbyssOrangeMix2_hard.safetensors -P ./models/checkpoints/ -# RUN wget -c https://huggingface.co/WarriorMama777/OrangeMixs/resolve/main/Models/AbyssOrangeMix3/AOM3A1_orangemixs.safetensors -P ./models/checkpoints/ -# RUN wget -c https://huggingface.co/WarriorMama777/OrangeMixs/resolve/main/Models/AbyssOrangeMix3/AOM3A3_orangemixs.safetensors -P ./models/checkpoints/ -# RUN wget -c https://huggingface.co/Linaqruf/anything-v3.0/resolve/main/anything-v3-fp16-pruned.safetensors -P ./models/checkpoints/ -# Waifu Diffusion 1.5 (anime style SD2.x 768-v) -# RUN wget -c https://huggingface.co/waifu-diffusion/wd-1-5-beta2/resolve/main/checkpoints/wd-1-5-beta2-fp16.safetensors -P ./models/checkpoints/ -# unCLIP models -# RUN wget -c https://huggingface.co/comfyanonymous/illuminatiDiffusionV1_v11_unCLIP/resolve/main/illuminatiDiffusionV1_v11-unclip-h-fp16.safetensors -P ./models/checkpoints/ -# RUN wget -c https://huggingface.co/comfyanonymous/wd-1.5-beta2_unCLIP/resolve/main/wd-1-5-beta2-aesthetic-unclip-h-fp16.safetensors -P ./models/checkpoints/ -# --- -# VAE -RUN wget -c https://huggingface.co/stabilityai/sd-vae-ft-mse-original/resolve/main/vae-ft-mse-840000-ema-pruned.safetensors -P ./models/vae/ -# RUN wget -c https://huggingface.co/WarriorMama777/OrangeMixs/resolve/main/VAEs/orangemix.vae.pt -P ./models/vae/ -# RUN wget -c https://huggingface.co/hakurei/waifu-diffusion-v1-4/resolve/main/vae/kl-f8-anime2.ckpt -P ./models/vae/ -# Loras -# RUN wget -c https://civitai.com/api/download/models/10350 -O ./models/loras/theovercomer8sContrastFix_sd21768.safetensors #theovercomer8sContrastFix SD2.x 768-v -# RUN wget -c https://civitai.com/api/download/models/10638 -O ./models/loras/theovercomer8sContrastFix_sd15.safetensors #theovercomer8sContrastFix SD1.x -# T2I-Adapter -# RUN wget -c https://huggingface.co/TencentARC/T2I-Adapter/resolve/main/models/t2iadapter_depth_sd14v1.pth -P ./models/controlnet/ -# RUN wget -c https://huggingface.co/TencentARC/T2I-Adapter/resolve/main/models/t2iadapter_seg_sd14v1.pth -P ./models/controlnet/ -# RUN wget -c https://huggingface.co/TencentARC/T2I-Adapter/resolve/main/models/t2iadapter_sketch_sd14v1.pth -P ./models/controlnet/ -# RUN wget -c https://huggingface.co/TencentARC/T2I-Adapter/resolve/main/models/t2iadapter_keypose_sd14v1.pth -P ./models/controlnet/ -# RUN wget -c https://huggingface.co/TencentARC/T2I-Adapter/resolve/main/models/t2iadapter_openpose_sd14v1.pth -P ./models/controlnet/ -# RUN wget -c 
https://huggingface.co/TencentARC/T2I-Adapter/resolve/main/models/t2iadapter_color_sd14/v1.pth -P ./models/controlnet/ -# RUN wget -c https://huggingface.co/TencentARC/T2I-Adapter/resolve/main/models/t2iadapter_canny_sd14v1.pth -P ./models/controlnet/ -# T2I Styles Model -# RUN wget -c https://huggingface.co/TencentARC/T2I-Adapter/resolve/main/models/t2iadapter_style_sd14v1.pth -P ./models/style_models/ -# CLIPVision model (needed for styles model) -# RUN wget -c https://huggingface.co/openai/clip-vit-large-patch14/resolve/main/pytorch_model.bin -O ./models/clip_vision/clip_vit14.bin -# ControlNet -# RUN wget -c https://huggingface.co/comfyanonymous/ControlNet-v1-1_fp16_safetensors/resolve/main/control_v11e_sd15_ip2p_fp16.safetensors -P ./models/controlnet/ -# RUN wget -c https://huggingface.co/comfyanonymous/ControlNet-v1-1_fp16_safetensors/resolve/main/control_v11e_sd15_shuffle_fp16.safetensors -P ./models/controlnet/ -# RUN wget -c https://huggingface.co/comfyanonymous/ControlNet-v1-1_fp16_safetensors/resolve/main/control_v11p_sd15_canny_fp16.safetensors -P ./models/controlnet/ -# RUN wget -c https://huggingface.co/comfyanonymous/ControlNet-v1-1_fp16_safetensors/resolve/main/control_v11f1p_sd15_depth_fp16.safetensors -P ./models/controlnet/ -# RUN wget -c https://huggingface.co/comfyanonymous/ControlNet-v1-1_fp16_safetensors/resolve/main/control_v11p_sd15_inpaint_fp16.safetensors -P ./models/controlnet/ -RUN wget -c https://huggingface.co/thibaud/controlnet-sd21/resolve/main/control_v11p_sd21_lineart.safetensors -P ./models/controlnet/ -#RUN wget -c https://huggingface.co/comfyanonymous/ControlNet-v1-1_fp16_safetensors/resolve/main/control_v11p_sd15_lineart_fp16.safetensors -P ./models/controlnet/ -# RUN wget -c https://huggingface.co/comfyanonymous/ControlNet-v1-1_fp16_safetensors/resolve/main/control_v11p_sd15_mlsd_fp16.safetensors -P ./models/controlnet/ -# RUN wget -c https://huggingface.co/comfyanonymous/ControlNet-v1-1_fp16_safetensors/resolve/main/control_v11p_sd15_normalbae_fp16.safetensors -P ./models/controlnet/ -# RUN wget -c https://huggingface.co/comfyanonymous/ControlNet-v1-1_fp16_safetensors/resolve/main/control_v11p_sd15_openpose_fp16.safetensors -P ./models/controlnet/ -# RUN wget -c https://huggingface.co/comfyanonymous/ControlNet-v1-1_fp16_safetensors/resolve/main/control_v11p_sd15_scribble_fp16.safetensors -P ./models/controlnet/ -# RUN wget -c https://huggingface.co/comfyanonymous/ControlNet-v1-1_fp16_safetensors/resolve/main/control_v11p_sd15_seg_fp16.safetensors -P ./models/controlnet/ -# RUN wget -c https://huggingface.co/comfyanonymous/ControlNet-v1-1_fp16_safetensors/resolve/main/control_v11p_sd15_softedge_fp16.safetensors -P ./models/controlnet/ -# RUN wget -c https://huggingface.co/comfyanonymous/ControlNet-v1-1_fp16_safetensors/resolve/main/control_v11p_sd15s2_lineart_anime_fp16.safetensors -P ./models/controlnet/ -# RUN wget -c https://huggingface.co/comfyanonymous/ControlNet-v1-1_fp16_safetensors/resolve/main/control_v11u_sd15_tile_fp16.safetensors -P ./models/controlnet/ -RUN wget -c https://huggingface.co/thibaud/controlnet-openpose-sdxl-1.0/resolve/main/OpenPoseXL2.safetensors -P ./models/controlnet/ - -# https://huggingface.co/stabilityai/control-lora -RUN wget -c https://huggingface.co/stabilityai/control-lora/resolve/main/control-LoRAs-rank256/control-lora-canny-rank256.safetensors -P ./models/controlnet/ -RUN wget -c https://huggingface.co/stabilityai/control-lora/resolve/main/control-LoRAs-rank256/control-lora-depth-rank256.safetensors -P 
./models/controlnet/ -RUN wget -c https://huggingface.co/stabilityai/control-lora/resolve/main/control-LoRAs-rank256/control-lora-recolor-rank256.safetensors -P ./models/controlnet/ -RUN wget -c https://huggingface.co/stabilityai/control-lora/resolve/main/control-LoRAs-rank256/control-lora-sketch-rank256.safetensors -P ./models/controlnet/ - -RUN wget -c https://huggingface.co/stabilityai/control-lora/resolve/main/control-LoRAs-rank128/control-lora-canny-rank128.safetensors -P ./models/controlnet/ -RUN wget -c https://huggingface.co/stabilityai/control-lora/resolve/main/control-LoRAs-rank128/control-lora-depth-rank128.safetensors -P ./models/controlnet/ -RUN wget -c https://huggingface.co/stabilityai/control-lora/resolve/main/control-LoRAs-rank128/control-lora-recolor-rank128.safetensors -P ./models/controlnet -RUN wget -c https://huggingface.co/stabilityai/control-lora/resolve/main/control-LoRAs-rank128/control-lora-sketch-rank128-metadata.safetensors -P ./models/controlnet/ - - -# RUN wget -c https://huggingface.co/thibaud/controlnet-openpose-sdxl-1.0/resolve/main/diffusion_pytorch_model.bin -O ./models/controlnet/OpenPoseXL2.bin -# GLIGEN -RUN wget -c https://huggingface.co/comfyanonymous/GLIGEN_pruned_safetensors/resolve/main/gligen_sd14_textbox_pruned_fp16.safetensors -P ./models/gligen/ -# ESRGAN upscale model -RUN wget -c https://github.com/xinntao/Real-ESRGAN/releases/download/v0.1.0/RealESRGAN_x4plus.pth -P ./models/upscale_models/ -RUN wget -c https://huggingface.co/sberbank-ai/Real-ESRGAN/resolve/main/RealESRGAN_x2.pth -P ./models/upscale_models/ -RUN wget -c https://huggingface.co/sberbank-ai/Real-ESRGAN/resolve/main/RealESRGAN_x4.pth -P ./models/upscale_models/ - -RUN echo "Done" - -# instal custom nodes -RUN echo "Installing custom nodes..." 
-# Controlnet Preprocessor nodes by Fannovel16 -RUN cd custom_nodes && git clone https://github.com/Fannovel16/comfy_controlnet_preprocessors && cd comfy_controlnet_preprocessors && python install.py --no_download_ckpts -RUN cd custom_nodes && git clone https://github.com/Fannovel16/comfyui_controlnet_aux && cd comfyui_controlnet_aux && pip install -r requirements.txt -RUN cd custom_nodes && git clone https://github.com/Stability-AI/stability-ComfyUI-nodes && cd stability-ComfyUI-nodes && pip install -r requirements.txt -RUN cd custom_nodes && git clone https://github.com/EllangoK/ComfyUI-post-processing-nodes -# ComfyUI Manager -# RUN cd custom_nodes && git clone https://github.com/ltdrdata/ComfyUI-Manager.git - -RUN echo "Done" - -# CMD ["python", "main.py", "--listen", "0.0.0.0", "--port", "7860", "--output-directory", "${USE_PERSISTENT_DATA:+/data/}"] -CMD ["python", "main.py", "--cpu", "--listen", "0.0.0.0", "--port", "7860", "--output-directory", "${USE_PERSISTENT_DATA:+/data/}"] - - - - diff --git a/spaces/GroveStreet/GTA_SOVITS/app.py b/spaces/GroveStreet/GTA_SOVITS/app.py deleted file mode 100644 index a3a7e8f9b5248ddef35ff90ba96f29c15de2c85b..0000000000000000000000000000000000000000 --- a/spaces/GroveStreet/GTA_SOVITS/app.py +++ /dev/null @@ -1,127 +0,0 @@ -import argparse -import logging -import os -import re -import subprocess -import gradio.processing_utils as gr_pu -import gradio as gr -import librosa -import numpy as np -import soundfile -from scipy.io import wavfile - -from inference.infer_tool import Svc - -logging.getLogger('numba').setLevel(logging.WARNING) -logging.getLogger('markdown_it').setLevel(logging.WARNING) -logging.getLogger('urllib3').setLevel(logging.WARNING) -logging.getLogger('matplotlib').setLevel(logging.WARNING) - -sampling_rate = 44100 - - -def create_fn(model, spk): - def svc_fn(input_audio, vc_transform, auto_f0, f0p): - if input_audio is None: - return 0, None - sr, audio = input_audio - audio = (audio / np.iinfo(audio.dtype).max).astype(np.float32) - if len(audio.shape) > 1: - audio = librosa.to_mono(audio.transpose(1, 0)) - temp_path = "temp.wav" - soundfile.write(temp_path, audio, sr, format="wav") - out_audio = model.slice_inference(raw_audio_path=temp_path, - spk=spk, - slice_db=-40, - cluster_infer_ratio=0, - noice_scale=0.4, - clip_seconds=10, - tran=vc_transform, - f0_predictor=f0p, - auto_predict_f0=auto_f0) - model.clear_empty() - os.remove(temp_path) - return sr, out_audio - - def tts_fn(input_text, gender, tts_rate, vc_transform, auto_f0, f0p): - if input_text == '': - return 0, None - input_text = re.sub(r"[\n\,\(\) ]", "", input_text) - voice = "zh-CN-XiaoyiNeural" if gender == '女' else "zh-CN-YunxiNeural" - ratestr = "+{:.0%}".format(tts_rate) if tts_rate >= 0 else "{:.0%}".format(tts_rate) - temp_path = "temp.wav" - p = subprocess.Popen("edge-tts " + - " --text " + input_text + - " --write-media " + temp_path + - " --voice " + voice + - " --rate=" + ratestr, shell=True, - stdout=subprocess.PIPE, - stdin=subprocess.PIPE) - p.wait() - audio, sr = librosa.load(temp_path) - audio = librosa.resample(audio, orig_sr=sr, target_sr=sampling_rate) - os.remove(temp_path) - temp_path = "temp.wav" - wavfile.write(temp_path, sampling_rate, (audio * np.iinfo(np.int16).max).astype(np.int16)) - sr, audio = gr_pu.audio_from_file(temp_path) - input_audio = (sr, audio) - return svc_fn(input_audio, vc_transform, auto_f0, f0p) - - return svc_fn, tts_fn - - -if __name__ == '__main__': - parser = argparse.ArgumentParser() - parser.add_argument('--device', 
type=str, default='cpu') - parser.add_argument('--api', action="store_true", default=False) - parser.add_argument("--share", action="store_true", default=False, help="share gradio app") - args = parser.parse_args() - models = [] - for f in os.listdir("models"): - name = f - model = Svc(fr"models/{f}/{f}.pth", f"models/{f}/config_{f}.json", device=args.device) - cover = f"models/{f}/cover.png" if os.path.exists(f"models/{f}/cover.png") else f"models/{f}/cover.jpg" - models.append((name, cover, create_fn(model, name))) - with gr.Blocks() as app: - gr.Markdown( - "#
Game Character Voice Generation\n" - "##
Model author: Bilibili [Cyber蝈蝈总](https://space.bilibili.com/37706580)\n" - "
    使用此处资源创作的作品,请显著标明出处,CJ有两个模型,carl1更清晰,carl2音域广\n" - ) - with gr.Tabs(): - for (name, cover, (svc_fn, tts_fn)) in models: - with gr.TabItem(name): - with gr.Row(): - with gr.Column(): - with gr.Row(): - vc_transform = gr.Number(label="音高调整 (正负半音,12为1个八度)", value=0) - f0_predictor = gr.Radio(label="f0预测器 (对电音有影响)", - choices=['crepe', 'harvest', 'dio', 'pm'], value='crepe') - auto_f0 = gr.Checkbox(label="自动音高预测 (文本转语音或正常说话可选,会导致唱歌跑调)", - value=False) - with gr.Tabs(): - with gr.TabItem('语音转语音'): - svc_input = gr.Audio( - label="上传干声 (已支持无限长音频,处理时间约为原音频时间的5倍)") - svc_submit = gr.Button("生成", variant="primary") - - with gr.TabItem('文本转语音'): - tts_input = gr.Textbox(label='说话内容', value='', - placeholder='已支持无限长内容,处理时间约为说完原内容时间的5倍') - with gr.Row(): - gender = gr.Radio(label='说话人性别 (男音调低,女音调高)', value='男', - choices=['男', '女']) - tts_rate = gr.Number(label='语速 (正负, 单位百分比)', value=0) - tts_submit = gr.Button("生成", variant="primary") - - with gr.Column(): - gr.Markdown( - '
    ' - f'' if cover else "" - '
    ' - ) - vc_output = gr.Audio(label="输出音频") - svc_submit.click(svc_fn, [svc_input, vc_transform, auto_f0, f0_predictor], vc_output) - tts_submit.click(tts_fn, [tts_input, gender, tts_rate, vc_transform, auto_f0, f0_predictor], - vc_output) - app.queue(concurrency_count=1, api_open=args.api).launch(share=args.share) diff --git a/spaces/HESOAYM/ElviraMulti/run_macOS.command b/spaces/HESOAYM/ElviraMulti/run_macOS.command deleted file mode 100644 index 2d26597ae47519f42336ccffc16646713a192ae1..0000000000000000000000000000000000000000 --- a/spaces/HESOAYM/ElviraMulti/run_macOS.command +++ /dev/null @@ -1,31 +0,0 @@ -#!/bin/bash - -# 获取脚本所在目录 -script_dir=$(dirname "$(readlink -f "$0")") - -# 将工作目录更改为脚本所在目录 -cd "$script_dir" || exit - -# 检查Git仓库是否有更新 -git remote update -pwd - -if ! git status -uno | grep 'up to date' > /dev/null; then - # 如果有更新,关闭当前运行的服务器 - pkill -f ChuanhuChatbot.py - - # 拉取最新更改 - git pull - - # 安装依赖 - pip3 install -r requirements.txt - - # 重新启动服务器 - nohup python3 ChuanhuChatbot.py & -fi - -# 检查ChuanhuChatbot.py是否在运行 -if ! pgrep -f ChuanhuChatbot.py > /dev/null; then - # 如果没有运行,启动服务器 - nohup python3 ChuanhuChatbot.py & -fi diff --git a/spaces/Harveenchadha/Vakyansh-Malayalam-TTS/ttsv/scripts/hifi/prepare_data.sh b/spaces/Harveenchadha/Vakyansh-Malayalam-TTS/ttsv/scripts/hifi/prepare_data.sh deleted file mode 100644 index d620cfeb93d8de9b2f750ad9bd52a937b0b88c33..0000000000000000000000000000000000000000 --- a/spaces/Harveenchadha/Vakyansh-Malayalam-TTS/ttsv/scripts/hifi/prepare_data.sh +++ /dev/null @@ -1,10 +0,0 @@ -input_wav_path='/home/harveen/en/iitm_data/english/wav_22k' #give multiple folders separated by comma(,) -gender='male' - -output_data_path='../../data/hifi/'$gender - -valid_samples=100 -test_samples=10 - -mkdir -p $output_data_path -python ../../utils/hifi/prepare_iitm_data_hifi.py -i $input_wav_path -v $valid_samples -t $test_samples -d $output_data_path diff --git a/spaces/Harveenchadha/Vakyansh-Odia-TTS/ttsv/src/glow_tts/train.py b/spaces/Harveenchadha/Vakyansh-Odia-TTS/ttsv/src/glow_tts/train.py deleted file mode 100644 index 79bf515a707b309e82e9686c140658f23acf1b91..0000000000000000000000000000000000000000 --- a/spaces/Harveenchadha/Vakyansh-Odia-TTS/ttsv/src/glow_tts/train.py +++ /dev/null @@ -1,286 +0,0 @@ -import os -import json -import argparse -import math -import torch -from torch import nn, optim -from torch.nn import functional as F -from torch.utils.data import DataLoader -from torch.utils.tensorboard import SummaryWriter -import torch.multiprocessing as mp -import torch.distributed as dist -from apex.parallel import DistributedDataParallel as DDP -from apex import amp - -from data_utils import TextMelLoader, TextMelCollate -import models -import commons -import utils - - -global_step = 0 - - -def main(): - """Assume Single Node Multi GPUs Training Only""" - assert torch.cuda.is_available(), "CPU training is not allowed." 
- - n_gpus = torch.cuda.device_count() - os.environ["MASTER_ADDR"] = "localhost" - os.environ["MASTER_PORT"] = "80000" - - hps = utils.get_hparams() - mp.spawn( - train_and_eval, - nprocs=n_gpus, - args=( - n_gpus, - hps, - ), - ) - - -def train_and_eval(rank, n_gpus, hps): - global global_step - if rank == 0: - logger = utils.get_logger(hps.log_dir) - logger.info(hps) - utils.check_git_hash(hps.log_dir) - writer = SummaryWriter(log_dir=hps.log_dir) - writer_eval = SummaryWriter(log_dir=os.path.join(hps.log_dir, "eval")) - - dist.init_process_group( - backend="nccl", init_method="env://", world_size=n_gpus, rank=rank - ) - torch.manual_seed(hps.train.seed) - torch.cuda.set_device(rank) - - train_dataset = TextMelLoader(hps.data.training_files, hps.data) - train_sampler = torch.utils.data.distributed.DistributedSampler( - train_dataset, num_replicas=n_gpus, rank=rank, shuffle=True - ) - collate_fn = TextMelCollate(1) - train_loader = DataLoader( - train_dataset, - num_workers=8, - shuffle=False, - batch_size=hps.train.batch_size, - pin_memory=True, - drop_last=True, - collate_fn=collate_fn, - sampler=train_sampler, - ) - if rank == 0: - val_dataset = TextMelLoader(hps.data.validation_files, hps.data) - val_loader = DataLoader( - val_dataset, - num_workers=8, - shuffle=False, - batch_size=hps.train.batch_size, - pin_memory=True, - drop_last=True, - collate_fn=collate_fn, - ) - symbols = hps.data.punc + hps.data.chars - generator = models.FlowGenerator( - n_vocab=len(symbols) + getattr(hps.data, "add_blank", False), - out_channels=hps.data.n_mel_channels, - **hps.model - ).cuda(rank) - optimizer_g = commons.Adam( - generator.parameters(), - scheduler=hps.train.scheduler, - dim_model=hps.model.hidden_channels, - warmup_steps=hps.train.warmup_steps, - lr=hps.train.learning_rate, - betas=hps.train.betas, - eps=hps.train.eps, - ) - if hps.train.fp16_run: - generator, optimizer_g._optim = amp.initialize( - generator, optimizer_g._optim, opt_level="O1" - ) - generator = DDP(generator) - epoch_str = 1 - global_step = 0 - try: - _, _, _, epoch_str = utils.load_checkpoint( - utils.latest_checkpoint_path(hps.model_dir, "G_*.pth"), - generator, - optimizer_g, - ) - epoch_str += 1 - optimizer_g.step_num = (epoch_str - 1) * len(train_loader) - optimizer_g._update_learning_rate() - global_step = (epoch_str - 1) * len(train_loader) - except: - if hps.train.ddi and os.path.isfile(os.path.join(hps.model_dir, "ddi_G.pth")): - _ = utils.load_checkpoint( - os.path.join(hps.model_dir, "ddi_G.pth"), generator, optimizer_g - ) - - for epoch in range(epoch_str, hps.train.epochs + 1): - if rank == 0: - train( - rank, epoch, hps, generator, optimizer_g, train_loader, logger, writer - ) - evaluate( - rank, - epoch, - hps, - generator, - optimizer_g, - val_loader, - logger, - writer_eval, - ) - if epoch % hps.train.save_epoch == 0: - utils.save_checkpoint( - generator, - optimizer_g, - hps.train.learning_rate, - epoch, - os.path.join(hps.model_dir, "G_{}.pth".format(epoch)), - ) - else: - train(rank, epoch, hps, generator, optimizer_g, train_loader, None, None) - - -def train(rank, epoch, hps, generator, optimizer_g, train_loader, logger, writer): - train_loader.sampler.set_epoch(epoch) - global global_step - - generator.train() - for batch_idx, (x, x_lengths, y, y_lengths) in enumerate(train_loader): - x, x_lengths = x.cuda(rank, non_blocking=True), x_lengths.cuda( - rank, non_blocking=True - ) - y, y_lengths = y.cuda(rank, non_blocking=True), y_lengths.cuda( - rank, non_blocking=True - ) - - # Train Generator - 
optimizer_g.zero_grad() - - ( - (z, z_m, z_logs, logdet, z_mask), - (x_m, x_logs, x_mask), - (attn, logw, logw_), - ) = generator(x, x_lengths, y, y_lengths, gen=False) - l_mle = commons.mle_loss(z, z_m, z_logs, logdet, z_mask) - l_length = commons.duration_loss(logw, logw_, x_lengths) - - loss_gs = [l_mle, l_length] - loss_g = sum(loss_gs) - - if hps.train.fp16_run: - with amp.scale_loss(loss_g, optimizer_g._optim) as scaled_loss: - scaled_loss.backward() - grad_norm = commons.clip_grad_value_( - amp.master_params(optimizer_g._optim), 5 - ) - else: - loss_g.backward() - grad_norm = commons.clip_grad_value_(generator.parameters(), 5) - optimizer_g.step() - - if rank == 0: - if batch_idx % hps.train.log_interval == 0: - (y_gen, *_), *_ = generator.module(x[:1], x_lengths[:1], gen=True) - logger.info( - "Train Epoch: {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}".format( - epoch, - batch_idx * len(x), - len(train_loader.dataset), - 100.0 * batch_idx / len(train_loader), - loss_g.item(), - ) - ) - logger.info( - [x.item() for x in loss_gs] + [global_step, optimizer_g.get_lr()] - ) - - scalar_dict = { - "loss/g/total": loss_g, - "learning_rate": optimizer_g.get_lr(), - "grad_norm": grad_norm, - } - scalar_dict.update( - {"loss/g/{}".format(i): v for i, v in enumerate(loss_gs)} - ) - utils.summarize( - writer=writer, - global_step=global_step, - images={ - "y_org": utils.plot_spectrogram_to_numpy( - y[0].data.cpu().numpy() - ), - "y_gen": utils.plot_spectrogram_to_numpy( - y_gen[0].data.cpu().numpy() - ), - "attn": utils.plot_alignment_to_numpy( - attn[0, 0].data.cpu().numpy() - ), - }, - scalars=scalar_dict, - ) - global_step += 1 - - if rank == 0: - logger.info("====> Epoch: {}".format(epoch)) - - -def evaluate(rank, epoch, hps, generator, optimizer_g, val_loader, logger, writer_eval): - if rank == 0: - global global_step - generator.eval() - losses_tot = [] - with torch.no_grad(): - for batch_idx, (x, x_lengths, y, y_lengths) in enumerate(val_loader): - x, x_lengths = x.cuda(rank, non_blocking=True), x_lengths.cuda( - rank, non_blocking=True - ) - y, y_lengths = y.cuda(rank, non_blocking=True), y_lengths.cuda( - rank, non_blocking=True - ) - - ( - (z, z_m, z_logs, logdet, z_mask), - (x_m, x_logs, x_mask), - (attn, logw, logw_), - ) = generator(x, x_lengths, y, y_lengths, gen=False) - l_mle = commons.mle_loss(z, z_m, z_logs, logdet, z_mask) - l_length = commons.duration_loss(logw, logw_, x_lengths) - - loss_gs = [l_mle, l_length] - loss_g = sum(loss_gs) - - if batch_idx == 0: - losses_tot = loss_gs - else: - losses_tot = [x + y for (x, y) in zip(losses_tot, loss_gs)] - - if batch_idx % hps.train.log_interval == 0: - logger.info( - "Eval Epoch: {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}".format( - epoch, - batch_idx * len(x), - len(val_loader.dataset), - 100.0 * batch_idx / len(val_loader), - loss_g.item(), - ) - ) - logger.info([x.item() for x in loss_gs]) - - losses_tot = [x / len(val_loader) for x in losses_tot] - loss_tot = sum(losses_tot) - scalar_dict = {"loss/g/total": loss_tot} - scalar_dict.update({"loss/g/{}".format(i): v for i, v in enumerate(losses_tot)}) - utils.summarize( - writer=writer_eval, global_step=global_step, scalars=scalar_dict - ) - logger.info("====> Epoch: {}".format(epoch)) - - -if __name__ == "__main__": - main() diff --git a/spaces/Harveenchadha/en_to_indic_translation/indic_nlp_library/indicnlp/common.py b/spaces/Harveenchadha/en_to_indic_translation/indic_nlp_library/indicnlp/common.py deleted file mode 100644 index 
feff2e790d709f859da975b2d11e338eb91d943c..0000000000000000000000000000000000000000 --- a/spaces/Harveenchadha/en_to_indic_translation/indic_nlp_library/indicnlp/common.py +++ /dev/null @@ -1,58 +0,0 @@ -# -# Copyright (c) 2013-present, Anoop Kunchukuttan -# All rights reserved. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. -# - -import os - -""" -Path to the Indic NLP Resources directory -""" -INDIC_RESOURCES_PATH='' - -def init(): - """ - Initialize the module. The following actions are performed: - - - Checks of INDIC_RESOURCES_PATH variable is set. If not, checks if it can beb initialized from - INDIC_RESOURCES_PATH environment variable. If that fails, an exception is raised - """ - global INDIC_RESOURCES_PATH - try: - if INDIC_RESOURCES_PATH=='': - INDIC_RESOURCES_PATH=os.environ['INDIC_RESOURCES_PATH'] - except Exception as e: - raise IndicNlpException('INDIC_RESOURCES_PATH not set') - - if INDIC_RESOURCES_PATH=='': - raise IndicNlpException('INDIC_RESOURCES_PATH not set') - - - -def get_resources_path(): - """ - Get the path to the Indic NLP Resources directory - """ - return INDIC_RESOURCES_PATH - -def set_resources_path(resources_path): - """ - Set the path to the Indic NLP Resources directory - """ - global INDIC_RESOURCES_PATH - INDIC_RESOURCES_PATH=resources_path - -class IndicNlpException(Exception): - """ - Exceptions thrown by Indic NLP Library components are instances of this class. - 'msg' attribute contains exception details. - """ - def __init__(self, msg): - self.msg = msg - - def __str__(self): - return repr(self.msg) - diff --git a/spaces/Hitmanny/BigGAN-text-to-image/README.md b/spaces/Hitmanny/BigGAN-text-to-image/README.md deleted file mode 100644 index 1aa2dab27c0df3031bd0f63d6d1c5681a0563c13..0000000000000000000000000000000000000000 --- a/spaces/Hitmanny/BigGAN-text-to-image/README.md +++ /dev/null @@ -1,11 +0,0 @@ ---- -title: BigGAN Text To Image -emoji: 🌖 -colorFrom: green -colorTo: blue -sdk: gradio -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference diff --git a/spaces/ICML2022/OFA/fairseq/fairseq/criterions/tacotron2_loss.py b/spaces/ICML2022/OFA/fairseq/fairseq/criterions/tacotron2_loss.py deleted file mode 100644 index 8c7b655c8c52f8fa478b4568850ec8f741dab78e..0000000000000000000000000000000000000000 --- a/spaces/ICML2022/OFA/fairseq/fairseq/criterions/tacotron2_loss.py +++ /dev/null @@ -1,210 +0,0 @@ -# Copyright (c) 2017-present, Facebook, Inc. -# All rights reserved. -# -# This source code is licensed under the license found in the LICENSE file in -# the root directory of this source tree. An additional grant of patent rights -# can be found in the PATENTS file in the same directory. 
- -import logging -from typing import Any, Dict, List -from functools import lru_cache -from dataclasses import dataclass, field - -import torch -from omegaconf import II - -from fairseq import metrics, utils -from fairseq.criterions import FairseqCriterion, register_criterion -from fairseq.dataclass import FairseqDataclass -from fairseq.data.data_utils import lengths_to_mask -import torch.nn.functional as F - - -logger = logging.getLogger(__name__) - - -@dataclass -class Tacotron2CriterionConfig(FairseqDataclass): - bce_pos_weight: float = field( - default=1.0, - metadata={"help": "weight of positive examples for BCE loss"}, - ) - n_frames_per_step: int = field( - default=0, - metadata={"help": "Number of frames per decoding step"}, - ) - use_guided_attention_loss: bool = field( - default=False, - metadata={"help": "use guided attention loss"}, - ) - guided_attention_loss_sigma: float = field( - default=0.4, - metadata={"help": "weight of positive examples for BCE loss"}, - ) - ctc_weight: float = field( - default=0.0, metadata={"help": "weight for CTC loss"} - ) - sentence_avg: bool = II("optimization.sentence_avg") - - -class GuidedAttentionLoss(torch.nn.Module): - """ - Efficiently Trainable Text-to-Speech System Based on Deep Convolutional - Networks with Guided Attention (https://arxiv.org/abs/1710.08969) - """ - - def __init__(self, sigma): - super().__init__() - self.sigma = sigma - - @staticmethod - @lru_cache(maxsize=8) - def _get_weight(s_len, t_len, sigma): - grid_x, grid_y = torch.meshgrid(torch.arange(t_len), torch.arange(s_len)) - grid_x = grid_x.to(s_len.device) - grid_y = grid_y.to(s_len.device) - w = (grid_y.float() / s_len - grid_x.float() / t_len) ** 2 - return 1.0 - torch.exp(-w / (2 * (sigma ** 2))) - - def _get_weights(self, src_lens, tgt_lens): - bsz, max_s_len, max_t_len = len(src_lens), max(src_lens), max(tgt_lens) - weights = torch.zeros((bsz, max_t_len, max_s_len)) - for i, (s_len, t_len) in enumerate(zip(src_lens, tgt_lens)): - weights[i, :t_len, :s_len] = self._get_weight(s_len, t_len, - self.sigma) - return weights - - @staticmethod - def _get_masks(src_lens, tgt_lens): - in_masks = lengths_to_mask(src_lens) - out_masks = lengths_to_mask(tgt_lens) - return out_masks.unsqueeze(2) & in_masks.unsqueeze(1) - - def forward(self, attn, src_lens, tgt_lens, reduction="mean"): - weights = self._get_weights(src_lens, tgt_lens).to(attn.device) - masks = self._get_masks(src_lens, tgt_lens).to(attn.device) - loss = (weights * attn.transpose(1, 2)).masked_select(masks) - loss = torch.sum(loss) if reduction == "sum" else torch.mean(loss) - return loss - - -@register_criterion("tacotron2", dataclass=Tacotron2CriterionConfig) -class Tacotron2Criterion(FairseqCriterion): - def __init__(self, task, sentence_avg, n_frames_per_step, - use_guided_attention_loss, guided_attention_loss_sigma, - bce_pos_weight, ctc_weight): - super().__init__(task) - self.sentence_avg = sentence_avg - self.n_frames_per_step = n_frames_per_step - self.bce_pos_weight = bce_pos_weight - - self.guided_attn = None - if use_guided_attention_loss: - self.guided_attn = GuidedAttentionLoss(guided_attention_loss_sigma) - self.ctc_weight = ctc_weight - - def forward(self, model, sample, reduction="mean"): - bsz, max_len, _ = sample["target"].size() - feat_tgt = sample["target"] - feat_len = sample["target_lengths"].view(bsz, 1).expand(-1, max_len) - eos_tgt = torch.arange(max_len).to(sample["target"].device) - eos_tgt = eos_tgt.view(1, max_len).expand(bsz, -1) - eos_tgt = (eos_tgt == (feat_len - 1)).float() - 
src_tokens = sample["net_input"]["src_tokens"] - src_lens = sample["net_input"]["src_lengths"] - tgt_lens = sample["target_lengths"] - - feat_out, eos_out, extra = model( - src_tokens=src_tokens, - src_lengths=src_lens, - prev_output_tokens=sample["net_input"]["prev_output_tokens"], - incremental_state=None, - target_lengths=tgt_lens, - speaker=sample["speaker"] - ) - - l1_loss, mse_loss, eos_loss = self.compute_loss( - extra["feature_out"], feat_out, eos_out, feat_tgt, eos_tgt, - tgt_lens, reduction, - ) - attn_loss = torch.tensor(0.).type_as(l1_loss) - if self.guided_attn is not None: - attn_loss = self.guided_attn(extra['attn'], src_lens, tgt_lens, reduction) - ctc_loss = torch.tensor(0.).type_as(l1_loss) - if self.ctc_weight > 0.: - net_output = (feat_out, eos_out, extra) - lprobs = model.get_normalized_probs(net_output, log_probs=True) - lprobs = lprobs.transpose(0, 1) # T x B x C - src_mask = lengths_to_mask(src_lens) - src_tokens_flat = src_tokens.masked_select(src_mask) - ctc_loss = F.ctc_loss( - lprobs, src_tokens_flat, tgt_lens, src_lens, - reduction=reduction, zero_infinity=True - ) * self.ctc_weight - loss = l1_loss + mse_loss + eos_loss + attn_loss + ctc_loss - - sample_size = sample["nsentences"] if self.sentence_avg \ - else sample["ntokens"] - logging_output = { - "loss": utils.item(loss.data), - "ntokens": sample["ntokens"], - "nsentences": sample["nsentences"], - "sample_size": sample_size, - "l1_loss": utils.item(l1_loss.data), - "mse_loss": utils.item(mse_loss.data), - "eos_loss": utils.item(eos_loss.data), - "attn_loss": utils.item(attn_loss.data), - "ctc_loss": utils.item(ctc_loss.data), - } - return loss, sample_size, logging_output - - def compute_loss(self, feat_out, feat_out_post, eos_out, feat_tgt, - eos_tgt, tgt_lens, reduction="mean"): - mask = lengths_to_mask(tgt_lens) - _eos_out = eos_out[mask].squeeze() - _eos_tgt = eos_tgt[mask] - _feat_tgt = feat_tgt[mask] - _feat_out = feat_out[mask] - _feat_out_post = feat_out_post[mask] - - l1_loss = ( - F.l1_loss(_feat_out, _feat_tgt, reduction=reduction) + - F.l1_loss(_feat_out_post, _feat_tgt, reduction=reduction) - ) - mse_loss = ( - F.mse_loss(_feat_out, _feat_tgt, reduction=reduction) + - F.mse_loss(_feat_out_post, _feat_tgt, reduction=reduction) - ) - eos_loss = F.binary_cross_entropy_with_logits( - _eos_out, _eos_tgt, pos_weight=torch.tensor(self.bce_pos_weight), - reduction=reduction - ) - return l1_loss, mse_loss, eos_loss - - @classmethod - def reduce_metrics(cls, logging_outputs: List[Dict[str, Any]]) -> None: - ns = [log.get("sample_size", 0) for log in logging_outputs] - ntot = sum(ns) - ws = [n / (ntot + 1e-8) for n in ns] - for key in ["loss", "l1_loss", "mse_loss", "eos_loss", "attn_loss", "ctc_loss"]: - vals = [log.get(key, 0) for log in logging_outputs] - val = sum(val * w for val, w in zip(vals, ws)) - metrics.log_scalar(key, val, ntot, round=3) - metrics.log_scalar("sample_size", ntot, len(logging_outputs)) - - # inference metrics - if "targ_frames" not in logging_outputs[0]: - return - n = sum(log.get("targ_frames", 0) for log in logging_outputs) - for key, new_key in [ - ("mcd_loss", "mcd_loss"), - ("pred_frames", "pred_ratio"), - ("nins", "ins_rate"), - ("ndel", "del_rate"), - ]: - val = sum(log.get(key, 0) for log in logging_outputs) - metrics.log_scalar(new_key, val / n, n, round=3) - - @staticmethod - def logging_outputs_can_be_summed() -> bool: - return False diff --git a/spaces/ICML2022/OFA/fairseq/fairseq/data/audio/data_cfg.py b/spaces/ICML2022/OFA/fairseq/fairseq/data/audio/data_cfg.py 
deleted file mode 100644 index 95b403ad9c617afb5656131693c92b9cc3befd3b..0000000000000000000000000000000000000000 --- a/spaces/ICML2022/OFA/fairseq/fairseq/data/audio/data_cfg.py +++ /dev/null @@ -1,139 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -from pathlib import Path -from typing import Dict, Optional - - -class S2TDataConfig(object): - """Wrapper class for data config YAML""" - - def __init__(self, yaml_path: Path): - try: - import yaml - except ImportError: - print("Please install PyYAML: pip install PyYAML") - self.config = {} - if yaml_path.is_file(): - try: - with open(yaml_path) as f: - self.config = yaml.load(f, Loader=yaml.FullLoader) - except Exception as e: - raise Exception( - f"Failed to load config from {yaml_path.as_posix()}: {e}" - ) - else: - raise FileNotFoundError(f"{yaml_path.as_posix()} not found") - self.root = yaml_path.parent - - def _auto_convert_to_abs_path(self, x): - if isinstance(x, str): - if not Path(x).exists() and (self.root / x).exists(): - return (self.root / x).as_posix() - elif isinstance(x, dict): - return {k: self._auto_convert_to_abs_path(v) for k, v in x.items()} - return x - - @property - def vocab_filename(self): - """fairseq vocabulary file under data root""" - return self.config.get("vocab_filename", "dict.txt") - - @property - def speaker_set_filename(self): - """fairseq vocabulary file under data root""" - return self.config.get("speaker_set_filename", None) - - @property - def shuffle(self) -> bool: - """Shuffle dataset samples before batching""" - return self.config.get("shuffle", False) - - @property - def pre_tokenizer(self) -> Dict: - """Pre-tokenizer to apply before subword tokenization. Returning - a dictionary with `tokenizer` providing the tokenizer name and - the other items providing the tokenizer-specific arguments. - Tokenizers are defined in `fairseq.data.encoders.*`""" - tokenizer = self.config.get("pre_tokenizer", {"tokenizer": None}) - return self._auto_convert_to_abs_path(tokenizer) - - @property - def bpe_tokenizer(self) -> Dict: - """Subword tokenizer to apply after pre-tokenization. Returning - a dictionary with `bpe` providing the tokenizer name and - the other items providing the tokenizer-specific arguments. - Tokenizers are defined in `fairseq.data.encoders.*`""" - tokenizer = self.config.get("bpe_tokenizer", {"bpe": None}) - return self._auto_convert_to_abs_path(tokenizer) - - @property - def prepend_tgt_lang_tag(self) -> bool: - """Prepend target lang ID token as the target BOS (e.g. for to-many - multilingual setting). During inference, this requires `--prefix-size 1` - to force BOS to be lang ID token.""" - return self.config.get("prepend_tgt_lang_tag", False) - - @property - def input_feat_per_channel(self): - """The dimension of input features (per audio channel)""" - return self.config.get("input_feat_per_channel", 80) - - @property - def input_channels(self): - """The number of channels in the input audio""" - return self.config.get("input_channels", 1) - - @property - def sample_rate(self): - return self.config.get("sample_rate", 16_000) - - @property - def sampling_alpha(self): - """Hyper-parameter alpha = 1/T for temperature-based resampling. 
- (alpha = 1 for no resampling)""" - return self.config.get("sampling_alpha", 1.0) - - @property - def use_audio_input(self): - """Needed by the dataset loader to see if the model requires - raw audio as inputs.""" - return self.config.get("use_audio_input", False) - - @property - def use_sample_rate(self): - """Needed by the dataset loader to see if the model requires - raw audio with specific sample rate as inputs.""" - return self.config.get("use_sample_rate", 16000) - - @property - def audio_root(self): - """Audio paths in the manifest TSV can be relative and this provides - the root path. Set this to empty string when using absolute paths.""" - return self.config.get("audio_root", "") - - def get_feature_transforms(self, split, is_train): - """Split-specific feature transforms. Allowing train set - wildcard `_train`, evaluation set wildcard `_eval` and general - wildcard `*` for matching.""" - from copy import deepcopy - - cfg = deepcopy(self.config) - _cur = cfg.get("transforms", {}) - cur = _cur.get(split) - cur = _cur.get("_train") if cur is None and is_train else cur - cur = _cur.get("_eval") if cur is None and not is_train else cur - cur = _cur.get("*") if cur is None else cur - cfg["transforms"] = cur - return cfg - - @property - def global_cmvn_stats_npz(self) -> Optional[str]: - path = self.config.get("global_cmvn", {}).get("stats_npz_path", None) - return self._auto_convert_to_abs_path(path) - - @property - def vocoder(self) -> Optional[Dict[str, str]]: - return self.config.get("vocoder", None) diff --git a/spaces/ICML2022/OFA/fairseq/fairseq/models/huggingface/hf_gpt2.py b/spaces/ICML2022/OFA/fairseq/fairseq/models/huggingface/hf_gpt2.py deleted file mode 100644 index 3a8eb78198f5808557092f814e92f1c9d72933ec..0000000000000000000000000000000000000000 --- a/spaces/ICML2022/OFA/fairseq/fairseq/models/huggingface/hf_gpt2.py +++ /dev/null @@ -1,168 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
- -import logging -import os -import sys -from typing import Dict, List, Optional - -import torch -from fairseq.models import ( - FairseqIncrementalDecoder, - FairseqLanguageModel, - register_model, - register_model_architecture, -) - - -logger = logging.getLogger(__name__) - - -DEFAULT_MAX_TARGET_POSITIONS = 1024 - - -@register_model("hf_gpt2") -class HuggingFaceGPT2LanguageModel(FairseqLanguageModel): - def __init__(self, decoder): - super().__init__(decoder) - - @staticmethod - def add_args(parser): - """Add model-specific arguments to the parser.""" - # fmt: off - parser.add_argument('--embed-dim', type=int, metavar='N', - help='embedding dimension') - parser.add_argument('--num-attention-heads', type=int, metavar='N', - help='num attention heads') - parser.add_argument('--num-layers', type=int, metavar='N', - help='num layers') - parser.add_argument('--dropout', type=float, metavar='D', - help='dropout probability for all fully connected layers ' - 'in the embeddings, encoder, and pooler') - parser.add_argument('--attention-dropout', type=float, metavar='D', - help='dropout probability for attention weights') - # fmt: on - - @classmethod - def build_model(cls, args, task): - """Build a new model instance.""" - default_architecture(args) - return cls(HuggingFaceGPT2Decoder(args, task)) - - -class HuggingFaceGPT2Decoder(FairseqIncrementalDecoder): - def __init__(self, args, task): - try: - from transformers import GPT2Config, GPT2LMHeadModel - except ImportError: - raise ImportError( - "\n\nPlease install huggingface/transformers with:" - "\n\n pip install transformers" - ) - - super().__init__(task.target_dictionary) - - config = GPT2Config( - vocab_size=len(task.target_dictionary), - n_positions=args.max_target_positions + 1, - n_ctx=args.max_target_positions, - n_embd=args.embed_dim, - n_layer=args.num_layers, - n_head=args.num_attention_heads, - resid_pdrop=args.dropout, - embd_pdrop=args.dropout, - attn_pdrop=args.attention_dropout, - layer_norm_epsilon=1e-6, - ) - self.model = GPT2LMHeadModel(config) - - # set zero embedding for padding symbol - self.pad_idx = task.target_dictionary.pad() - self.model.transformer.wte.weight.data[self.pad_idx].zero_() - self.model.transformer.wpe.weight.data[0].zero_() - - def forward( - self, - prev_output_tokens, - src_lengths=None, - incremental_state: Optional[Dict[str, List[torch.Tensor]]] = None, - encoder_out=None, - ): - features = self.extract_features(prev_output_tokens, incremental_state) - lm_logits = self.model.lm_head(features) - return (lm_logits,) - - def extract_features( - self, - prev_output_tokens, - incremental_state: Optional[Dict[str, List[torch.Tensor]]] = None, - ): - if incremental_state: - past = self.get_incremental_state("past") - else: - past = None - - # don't attend to padding symbols - attention_mask = prev_output_tokens.ne(self.pad_idx).int() - - # set position ids to exclude padding symbols - position_ids = attention_mask * ( - torch.arange(1, 1 + prev_output_tokens.size(1)) - .to(prev_output_tokens) - .repeat(prev_output_tokens.size(0), 1) - ) - - outputs = self.model.transformer( - input_ids=prev_output_tokens, - past=past, - attention_mask=attention_mask, - position_ids=position_ids, - ) - last_hidden_states = outputs[0] - - if incremental_state: - self.set_incremental_state(incremental_state, "past", outputs[1]) - - return last_hidden_states - - def max_positions(self): - return self.model.config.n_positions - 1 - - -@register_model_architecture("hf_gpt2", "hf_gpt2") -def default_architecture(args): - if 
getattr(args, "max_target_positions", None) is None: - args.max_target_positions = getattr( - args, "tokens_per_sample", DEFAULT_MAX_TARGET_POSITIONS - ) - args.embed_dim = getattr(args, "embed_dim", 768) - args.num_attention_heads = getattr(args, "num_attention_heads", 12) - args.num_layers = getattr(args, "num_layers", 12) - args.dropout = getattr(args, "dropout", 0.1) - args.attention_dropout = getattr(args, "attention_dropout", 0.1) - - -@register_model_architecture("hf_gpt2", "hf_gpt2_medium") -def hf_gpt2_medium(args): - args.embed_dim = getattr(args, "embed_dim", 1024) - args.num_attention_heads = getattr(args, "num_attention_heads", 16) - args.num_layers = getattr(args, "num_layers", 24) - default_architecture(args) - - -@register_model_architecture("hf_gpt2", "hf_gpt2_large") -def hf_gpt2_large(args): - args.embed_dim = getattr(args, "embed_dim", 1280) - args.num_attention_heads = getattr(args, "num_attention_heads", 20) - args.num_layers = getattr(args, "num_layers", 36) - default_architecture(args) - - -@register_model_architecture("hf_gpt2", "hf_gpt2_xl") -def hf_gpt2_xl(args): - args.embed_dim = getattr(args, "embed_dim", 1600) - args.num_attention_heads = getattr(args, "num_attention_heads", 25) - args.num_layers = getattr(args, "num_layers", 48) - default_architecture(args) diff --git a/spaces/Iceclear/StableSR/StableSR/basicsr/archs/inception.py b/spaces/Iceclear/StableSR/StableSR/basicsr/archs/inception.py deleted file mode 100644 index de1abef67270dc1aba770943b53577029141f527..0000000000000000000000000000000000000000 --- a/spaces/Iceclear/StableSR/StableSR/basicsr/archs/inception.py +++ /dev/null @@ -1,307 +0,0 @@ -# Modified from https://github.com/mseitzer/pytorch-fid/blob/master/pytorch_fid/inception.py # noqa: E501 -# For FID metric - -import os -import torch -import torch.nn as nn -import torch.nn.functional as F -from torch.utils.model_zoo import load_url -from torchvision import models - -# Inception weights ported to Pytorch from -# http://download.tensorflow.org/models/image/imagenet/inception-2015-12-05.tgz -FID_WEIGHTS_URL = 'https://github.com/mseitzer/pytorch-fid/releases/download/fid_weights/pt_inception-2015-12-05-6726825d.pth' # noqa: E501 -LOCAL_FID_WEIGHTS = 'experiments/pretrained_models/pt_inception-2015-12-05-6726825d.pth' # noqa: E501 - - -class InceptionV3(nn.Module): - """Pretrained InceptionV3 network returning feature maps""" - - # Index of default block of inception to return, - # corresponds to output of final average pooling - DEFAULT_BLOCK_INDEX = 3 - - # Maps feature dimensionality to their output blocks indices - BLOCK_INDEX_BY_DIM = { - 64: 0, # First max pooling features - 192: 1, # Second max pooling features - 768: 2, # Pre-aux classifier features - 2048: 3 # Final average pooling features - } - - def __init__(self, - output_blocks=(DEFAULT_BLOCK_INDEX), - resize_input=True, - normalize_input=True, - requires_grad=False, - use_fid_inception=True): - """Build pretrained InceptionV3. - - Args: - output_blocks (list[int]): Indices of blocks to return features of. - Possible values are: - - 0: corresponds to output of first max pooling - - 1: corresponds to output of second max pooling - - 2: corresponds to output which is fed to aux classifier - - 3: corresponds to output of final average pooling - resize_input (bool): If true, bilinearly resizes input to width and - height 299 before feeding input to model. 
As the network - without fully connected layers is fully convolutional, it - should be able to handle inputs of arbitrary size, so resizing - might not be strictly needed. Default: True. - normalize_input (bool): If true, scales the input from range (0, 1) - to the range the pretrained Inception network expects, - namely (-1, 1). Default: True. - requires_grad (bool): If true, parameters of the model require - gradients. Possibly useful for finetuning the network. - Default: False. - use_fid_inception (bool): If true, uses the pretrained Inception - model used in Tensorflow's FID implementation. - If false, uses the pretrained Inception model available in - torchvision. The FID Inception model has different weights - and a slightly different structure from torchvision's - Inception model. If you want to compute FID scores, you are - strongly advised to set this parameter to true to get - comparable results. Default: True. - """ - super(InceptionV3, self).__init__() - - self.resize_input = resize_input - self.normalize_input = normalize_input - self.output_blocks = sorted(output_blocks) - self.last_needed_block = max(output_blocks) - - assert self.last_needed_block <= 3, ('Last possible output block index is 3') - - self.blocks = nn.ModuleList() - - if use_fid_inception: - inception = fid_inception_v3() - else: - try: - inception = models.inception_v3(pretrained=True, init_weights=False) - except TypeError: - # pytorch < 1.5 does not have init_weights for inception_v3 - inception = models.inception_v3(pretrained=True) - - # Block 0: input to maxpool1 - block0 = [ - inception.Conv2d_1a_3x3, inception.Conv2d_2a_3x3, inception.Conv2d_2b_3x3, - nn.MaxPool2d(kernel_size=3, stride=2) - ] - self.blocks.append(nn.Sequential(*block0)) - - # Block 1: maxpool1 to maxpool2 - if self.last_needed_block >= 1: - block1 = [inception.Conv2d_3b_1x1, inception.Conv2d_4a_3x3, nn.MaxPool2d(kernel_size=3, stride=2)] - self.blocks.append(nn.Sequential(*block1)) - - # Block 2: maxpool2 to aux classifier - if self.last_needed_block >= 2: - block2 = [ - inception.Mixed_5b, - inception.Mixed_5c, - inception.Mixed_5d, - inception.Mixed_6a, - inception.Mixed_6b, - inception.Mixed_6c, - inception.Mixed_6d, - inception.Mixed_6e, - ] - self.blocks.append(nn.Sequential(*block2)) - - # Block 3: aux classifier to final avgpool - if self.last_needed_block >= 3: - block3 = [ - inception.Mixed_7a, inception.Mixed_7b, inception.Mixed_7c, - nn.AdaptiveAvgPool2d(output_size=(1, 1)) - ] - self.blocks.append(nn.Sequential(*block3)) - - for param in self.parameters(): - param.requires_grad = requires_grad - - def forward(self, x): - """Get Inception feature maps. - - Args: - x (Tensor): Input tensor of shape (b, 3, h, w). - Values are expected to be in range (-1, 1). You can also input - (0, 1) with setting normalize_input = True. - - Returns: - list[Tensor]: Corresponding to the selected output block, sorted - ascending by index. - """ - output = [] - - if self.resize_input: - x = F.interpolate(x, size=(299, 299), mode='bilinear', align_corners=False) - - if self.normalize_input: - x = 2 * x - 1 # Scale from range (0, 1) to range (-1, 1) - - for idx, block in enumerate(self.blocks): - x = block(x) - if idx in self.output_blocks: - output.append(x) - - if idx == self.last_needed_block: - break - - return output - - -def fid_inception_v3(): - """Build pretrained Inception model for FID computation. 
- - The Inception model for FID computation uses a different set of weights - and has a slightly different structure than torchvision's Inception. - - This method first constructs torchvision's Inception and then patches the - necessary parts that are different in the FID Inception model. - """ - try: - inception = models.inception_v3(num_classes=1008, aux_logits=False, pretrained=False, init_weights=False) - except TypeError: - # pytorch < 1.5 does not have init_weights for inception_v3 - inception = models.inception_v3(num_classes=1008, aux_logits=False, pretrained=False) - - inception.Mixed_5b = FIDInceptionA(192, pool_features=32) - inception.Mixed_5c = FIDInceptionA(256, pool_features=64) - inception.Mixed_5d = FIDInceptionA(288, pool_features=64) - inception.Mixed_6b = FIDInceptionC(768, channels_7x7=128) - inception.Mixed_6c = FIDInceptionC(768, channels_7x7=160) - inception.Mixed_6d = FIDInceptionC(768, channels_7x7=160) - inception.Mixed_6e = FIDInceptionC(768, channels_7x7=192) - inception.Mixed_7b = FIDInceptionE_1(1280) - inception.Mixed_7c = FIDInceptionE_2(2048) - - if os.path.exists(LOCAL_FID_WEIGHTS): - state_dict = torch.load(LOCAL_FID_WEIGHTS, map_location=lambda storage, loc: storage) - else: - state_dict = load_url(FID_WEIGHTS_URL, progress=True) - - inception.load_state_dict(state_dict) - return inception - - -class FIDInceptionA(models.inception.InceptionA): - """InceptionA block patched for FID computation""" - - def __init__(self, in_channels, pool_features): - super(FIDInceptionA, self).__init__(in_channels, pool_features) - - def forward(self, x): - branch1x1 = self.branch1x1(x) - - branch5x5 = self.branch5x5_1(x) - branch5x5 = self.branch5x5_2(branch5x5) - - branch3x3dbl = self.branch3x3dbl_1(x) - branch3x3dbl = self.branch3x3dbl_2(branch3x3dbl) - branch3x3dbl = self.branch3x3dbl_3(branch3x3dbl) - - # Patch: Tensorflow's average pool does not use the padded zero's in - # its average calculation - branch_pool = F.avg_pool2d(x, kernel_size=3, stride=1, padding=1, count_include_pad=False) - branch_pool = self.branch_pool(branch_pool) - - outputs = [branch1x1, branch5x5, branch3x3dbl, branch_pool] - return torch.cat(outputs, 1) - - -class FIDInceptionC(models.inception.InceptionC): - """InceptionC block patched for FID computation""" - - def __init__(self, in_channels, channels_7x7): - super(FIDInceptionC, self).__init__(in_channels, channels_7x7) - - def forward(self, x): - branch1x1 = self.branch1x1(x) - - branch7x7 = self.branch7x7_1(x) - branch7x7 = self.branch7x7_2(branch7x7) - branch7x7 = self.branch7x7_3(branch7x7) - - branch7x7dbl = self.branch7x7dbl_1(x) - branch7x7dbl = self.branch7x7dbl_2(branch7x7dbl) - branch7x7dbl = self.branch7x7dbl_3(branch7x7dbl) - branch7x7dbl = self.branch7x7dbl_4(branch7x7dbl) - branch7x7dbl = self.branch7x7dbl_5(branch7x7dbl) - - # Patch: Tensorflow's average pool does not use the padded zero's in - # its average calculation - branch_pool = F.avg_pool2d(x, kernel_size=3, stride=1, padding=1, count_include_pad=False) - branch_pool = self.branch_pool(branch_pool) - - outputs = [branch1x1, branch7x7, branch7x7dbl, branch_pool] - return torch.cat(outputs, 1) - - -class FIDInceptionE_1(models.inception.InceptionE): - """First InceptionE block patched for FID computation""" - - def __init__(self, in_channels): - super(FIDInceptionE_1, self).__init__(in_channels) - - def forward(self, x): - branch1x1 = self.branch1x1(x) - - branch3x3 = self.branch3x3_1(x) - branch3x3 = [ - self.branch3x3_2a(branch3x3), - self.branch3x3_2b(branch3x3), - 
] - branch3x3 = torch.cat(branch3x3, 1) - - branch3x3dbl = self.branch3x3dbl_1(x) - branch3x3dbl = self.branch3x3dbl_2(branch3x3dbl) - branch3x3dbl = [ - self.branch3x3dbl_3a(branch3x3dbl), - self.branch3x3dbl_3b(branch3x3dbl), - ] - branch3x3dbl = torch.cat(branch3x3dbl, 1) - - # Patch: Tensorflow's average pool does not use the padded zero's in - # its average calculation - branch_pool = F.avg_pool2d(x, kernel_size=3, stride=1, padding=1, count_include_pad=False) - branch_pool = self.branch_pool(branch_pool) - - outputs = [branch1x1, branch3x3, branch3x3dbl, branch_pool] - return torch.cat(outputs, 1) - - -class FIDInceptionE_2(models.inception.InceptionE): - """Second InceptionE block patched for FID computation""" - - def __init__(self, in_channels): - super(FIDInceptionE_2, self).__init__(in_channels) - - def forward(self, x): - branch1x1 = self.branch1x1(x) - - branch3x3 = self.branch3x3_1(x) - branch3x3 = [ - self.branch3x3_2a(branch3x3), - self.branch3x3_2b(branch3x3), - ] - branch3x3 = torch.cat(branch3x3, 1) - - branch3x3dbl = self.branch3x3dbl_1(x) - branch3x3dbl = self.branch3x3dbl_2(branch3x3dbl) - branch3x3dbl = [ - self.branch3x3dbl_3a(branch3x3dbl), - self.branch3x3dbl_3b(branch3x3dbl), - ] - branch3x3dbl = torch.cat(branch3x3dbl, 1) - - # Patch: The FID Inception model uses max pooling instead of average - # pooling. This is likely an error in this specific Inception - # implementation, as other Inception models use average pooling here - # (which matches the description in the paper). - branch_pool = F.max_pool2d(x, kernel_size=3, stride=1, padding=1) - branch_pool = self.branch_pool(branch_pool) - - outputs = [branch1x1, branch3x3, branch3x3dbl, branch_pool] - return torch.cat(outputs, 1) diff --git a/spaces/Iceclear/StableSR/StableSR/taming/modules/losses/__init__.py b/spaces/Iceclear/StableSR/StableSR/taming/modules/losses/__init__.py deleted file mode 100644 index d09caf9eb805f849a517f1b23503e1a4d6ea1ec5..0000000000000000000000000000000000000000 --- a/spaces/Iceclear/StableSR/StableSR/taming/modules/losses/__init__.py +++ /dev/null @@ -1,2 +0,0 @@ -from taming.modules.losses.vqperceptual import DummyLoss - diff --git a/spaces/Illumotion/Koboldcpp/examples/reason-act.sh b/spaces/Illumotion/Koboldcpp/examples/reason-act.sh deleted file mode 100644 index 046c48db584bc31f9c5fd57f84c21d84932a3511..0000000000000000000000000000000000000000 --- a/spaces/Illumotion/Koboldcpp/examples/reason-act.sh +++ /dev/null @@ -1,16 +0,0 @@ -#!/bin/bash - -cd `dirname $0` -cd .. - -# get -m model parameter otherwise defer to default -if [ "$1" == "-m" ]; then - MODEL="-m $2 " -fi - -./main $MODEL --color \ - -f ./prompts/reason-act.txt \ - -i --interactive-first \ - --top_k 10000 --temp 0.2 --repeat_penalty 1 -t 7 -c 2048 \ - -r "Question:" -r "Observation:" --in-prefix " " \ - -n -1 diff --git a/spaces/Illumotion/Koboldcpp/otherarch/ggml_v1.h b/spaces/Illumotion/Koboldcpp/otherarch/ggml_v1.h deleted file mode 100644 index f333b580e39b2ace49cfbccbf9d18aef9ac21f44..0000000000000000000000000000000000000000 --- a/spaces/Illumotion/Koboldcpp/otherarch/ggml_v1.h +++ /dev/null @@ -1,753 +0,0 @@ -#pragma once - -// -// GGML Tensor Library -// -// This documentation is still a work in progress. 
-// If you wish some specific topics to be covered, feel free to drop a comment: -// -// https://github.com/ggerganov/whisper.cpp/issues/40 -// -// ## Overview -// -// This library implements: -// -// - a set of tensor operations -// - automatic differentiation -// - basic optimization algorithms -// -// The aim of this library is to provide a minimalistic approach for various machine learning tasks. This includes, -// but is not limited to, the following: -// -// - linear regression -// - support vector machines -// - neural networks -// -// The library allows the user to define a certain function using the available tensor operations. This function -// definition is represented internally via a computation graph. Each tensor operation in the function definition -// corresponds to a node in the graph. Having the computation graph defined, the user can choose to compute the -// function's value and/or its gradient with respect to the input variables. Optionally, the function can be optimized -// using one of the available optimization algorithms. -// -// For example, here we define the function: f(x) = a*x^2 + b -// -// { -// struct ggml_v1_init_params params = { -// .mem_size = 16*1024*1024, -// .mem_buffer = NULL, -// }; -// -// // memory allocation happens here -// struct ggml_v1_context * ctx = ggml_v1_init(params); -// -// struct ggml_v1_tensor * x = ggml_v1_new_tensor_1d(ctx, GGML_V1_TYPE_F32, 1); -// -// ggml_v1_set_param(ctx, x); // x is an input variable -// -// struct ggml_v1_tensor * a = ggml_v1_new_tensor_1d(ctx, GGML_V1_TYPE_F32, 1); -// struct ggml_v1_tensor * b = ggml_v1_new_tensor_1d(ctx, GGML_V1_TYPE_F32, 1); -// struct ggml_v1_tensor * x2 = ggml_v1_mul(ctx, x, x); -// struct ggml_v1_tensor * f = ggml_v1_add(ctx, ggml_v1_mul(ctx, a, x2), b); -// -// ... -// } -// -// Notice that the function definition above does not involve any actual computation. The computation is performed only -// when the user explicitly requests it. For example, to compute the function's value at x = 2.0: -// -// { -// ... -// -// struct ggml_v1_cgraph gf = ggml_v1_build_forward(f); -// -// // set the input variable and parameter values -// ggml_v1_set_f32(x, 2.0f); -// ggml_v1_set_f32(a, 3.0f); -// ggml_v1_set_f32(b, 4.0f); -// -// ggml_v1_graph_compute(ctx0, &gf); -// -// printf("f = %f\n", ggml_v1_get_f32_1d(f, 0)); -// -// ... -// } -// -// The actual computation is performed in the ggml_v1_graph_compute() function. -// -// The ggml_v1_new_tensor_...() functions create new tensors. They are allocated in the memory buffer provided to the -// ggml_v1_init() function. You have to be careful not to exceed the memory buffer size. Therefore, you have to know -// in advance how much memory you need for your computation. Alternatively, you can allocate a large enough memory -// and after defining the computation graph, call the ggml_v1_used_mem() function to find out how much memory was -// actually needed. -// -// The ggml_v1_set_param() function marks a tensor as an input variable. This is used by the automatic -// differentiation and optimization algorithms. -// -// The described approach allows to define the function graph once and then compute its forward or backward graphs -// multiple times. All computations will use the same memory buffer allocated in the ggml_v1_init() function. This way -// the user can avoid the memory allocation overhead at runtime. -// -// The library supports multi-dimensional tensors - up to 4 dimensions. 
The FP16 and FP32 data types are first class -// citizens, but in theory the library can be extended to support FP8 and integer data types. -// -// Each tensor operation produces a new tensor. Initially the library was envisioned to support only the use of unary -// and binary operations. Most of the available operations fall into one of these two categories. With time, it became -// clear that the library needs to support more complex operations. The way to support these operations is not clear -// yet, but a few examples are demonstrated in the following operations: -// -// - ggml_v1_permute() -// - ggml_v1_conv_1d_1s() -// - ggml_v1_conv_1d_2s() -// -// For each tensor operator, the library implements a forward and backward computation function. The forward function -// computes the output tensor value given the input tensor values. The backward function computes the adjoint of the -// input tensors given the adjoint of the output tensor. For a detailed explanation of what this means, take a -// calculus class, or watch the following video: -// -// What is Automatic Differentiation? -// https://www.youtube.com/watch?v=wG_nF1awSSY -// -// -// ## Tensor data (struct ggml_v1_tensor) -// -// The tensors are stored in memory via the ggml_v1_tensor struct. The structure provides information about the size of -// the tensor, the data type, and the memory buffer where the tensor data is stored. Additionally, it contains -// pointers to the "source" tensors - i.e. the tensors that were used to compute the current tensor. For example: -// -// { -// struct ggml_v1_tensor * c = ggml_v1_add(ctx, a, b); -// -// assert(c->src[0] == a); -// assert(c->src[1] == b); -// } -// -// The multi-dimensional tensors are stored in row-major order. The ggml_v1_tensor struct contains fields for the -// number of elements in each dimension ("ne") as well as the number of bytes ("nb", a.k.a. stride). This allows -// to store tensors that are not contiguous in memory, which is useful for operations such as transposition and -// permutation. All tensor operations have to take the stride into account and not assume that the tensor is -// contiguous in memory. -// -// The data of the tensor is accessed via the "data" pointer. For example: -// -// { -// struct ggml_v1_tensor * a = ggml_v1_new_tensor_2d(ctx, GGML_V1_TYPE_F32, 2, 3); -// -// // a[1, 2] = 1.0f; -// *(float *) ((char *) a->data + 2*a->nb[1] + 1*a->nb[0]) = 1.0f; -// -// // a[2, 0] = 2.0f; -// *(float *) ((char *) a->data + 0*a->nb[1] + 2*a->nb[0]) = 2.0f; -// -// ... -// } -// -// Alternatively, there are helper functions, such as ggml_v1_get_f32_1d() and ggml_v1_set_f32_1d() that can be used. 
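// For example, a minimal sketch (assuming a valid ggml_v1_context * ctx) that fills a 2D tensor
// element by element using the "ne"/"nb" fields described above; it relies only on functions
// declared later in this header:
//
// {
//     struct ggml_v1_tensor * a = ggml_v1_new_tensor_2d(ctx, GGML_V1_TYPE_F32, 2, 3);
//
//     for (int i1 = 0; i1 < a->ne[1]; ++i1) {
//         for (int i0 = 0; i0 < a->ne[0]; ++i0) {
//             // nb[0] is the element stride and nb[1] the (possibly padded) row stride,
//             // so this addressing works even when the tensor is not contiguous in memory
//             *(float *) ((char *) a->data + i1*a->nb[1] + i0*a->nb[0]) = 0.0f;
//         }
//     }
// }
//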
-// -// ## The matrix multiplication operator (ggml_v1_mul_mat) -// -// TODO -// -// -// ## Multi-threading -// -// TODO -// -// -// ## Overview of ggml.c -// -// TODO -// -// -// ## SIMD optimizations -// -// TODO -// -// -// ## Debugging ggml -// -// TODO -// -// - -#ifdef __cplusplus -extern "C" { -#endif - -#include -#include -#include - -#define GGML_V1_MAX_DIMS 4 -#define GGML_V1_MAX_NODES 4096 -#define GGML_V1_MAX_PARAMS 16 -#define GGML_V1_MAX_CONTEXTS 64 -#define GGML_V1_MAX_OPT 4 - -#ifdef __ARM_NEON -// we use the built-in 16-bit float type -typedef __fp16 ggml_v1_fp16_t; -#else -typedef uint16_t ggml_v1_fp16_t; -#endif - -// convert FP16 <-> FP32 -float ggml_v1_fp16_to_fp32(ggml_v1_fp16_t x); -ggml_v1_fp16_t ggml_v1_fp32_to_fp16(float x); - -struct ggml_v1_object; -struct ggml_v1_context; - -enum ggml_v1_type { - GGML_V1_TYPE_Q4_0, - GGML_V1_TYPE_Q4_1, - GGML_V1_TYPE_I8, - GGML_V1_TYPE_I16, - GGML_V1_TYPE_I32, - GGML_V1_TYPE_F16, - GGML_V1_TYPE_F32, - GGML_V1_TYPE_COUNT, -}; - -// available tensor operations: -enum ggml_v1_op { - GGML_V1_OP_NONE = 0, - - GGML_V1_OP_DUP, - GGML_V1_OP_ADD, - GGML_V1_OP_SUB, - GGML_V1_OP_MUL, - GGML_V1_OP_DIV, - GGML_V1_OP_SQR, - GGML_V1_OP_SQRT, - GGML_V1_OP_SUM, - GGML_V1_OP_MEAN, - GGML_V1_OP_REPEAT, - GGML_V1_OP_ABS, - GGML_V1_OP_SGN, - GGML_V1_OP_NEG, - GGML_V1_OP_STEP, - GGML_V1_OP_RELU, - GGML_V1_OP_GELU, - GGML_V1_OP_NORM, // normalize - - GGML_V1_OP_MUL_MAT, - - GGML_V1_OP_SCALE, - GGML_V1_OP_CPY, - GGML_V1_OP_RESHAPE, - GGML_V1_OP_VIEW, - GGML_V1_OP_PERMUTE, - GGML_V1_OP_TRANSPOSE, - GGML_V1_OP_GET_ROWS, - GGML_V1_OP_DIAG_MASK_INF, - GGML_V1_OP_SOFT_MAX, - GGML_V1_OP_ROPE, - GGML_V1_OP_CONV_1D_1S, - GGML_V1_OP_CONV_1D_2S, - - GGML_V1_OP_FLASH_ATTN, - GGML_V1_OP_FLASH_FF, - - GGML_V1_OP_COUNT, -}; - -// n-dimensional tensor -struct ggml_v1_tensor { - enum ggml_v1_type type; - - int n_dims; - int ne[GGML_V1_MAX_DIMS]; // number of elements - size_t nb[GGML_V1_MAX_DIMS]; // stride in bytes: - // nb[0] = sizeof(type) - // nb[1] = nb[0] * ne[0] + padding - // nb[i] = nb[i-1] * ne[i-1] - - // compute data - enum ggml_v1_op op; - - bool is_param; - - struct ggml_v1_tensor * grad; - struct ggml_v1_tensor * src0; - struct ggml_v1_tensor * src1; - struct ggml_v1_tensor * opt[GGML_V1_MAX_OPT]; - - // thread scheduling - int n_tasks; - - // performance - int perf_runs; - int64_t perf_cycles; - int64_t perf_time_us; - - void * data; - char padding[8]; -}; - -// computation graph -struct ggml_v1_cgraph { - int n_nodes; - int n_leafs; - int n_threads; - - size_t work_size; - struct ggml_v1_tensor * work; - - struct ggml_v1_tensor * nodes[GGML_V1_MAX_NODES]; - struct ggml_v1_tensor * grads[GGML_V1_MAX_NODES]; - struct ggml_v1_tensor * leafs[GGML_V1_MAX_NODES]; - - // performance - int perf_runs; - int64_t perf_cycles; - int64_t perf_time_us; -}; - -// scratch buffer -struct ggml_v1_scratch { - size_t offs; - size_t size; - void * data; -}; - -struct ggml_v1_init_params { - // memory pool - size_t mem_size; // bytes - void * mem_buffer; // if NULL, memory will be allocated internally -}; - -void ggml_v1_time_init(void); // call this once at the beginning of the program -int64_t ggml_v1_time_ms(void); -int64_t ggml_v1_time_us(void); -int64_t ggml_v1_cycles(void); -int64_t ggml_v1_cycles_per_ms(void); - -void ggml_v1_print_object (const struct ggml_v1_object * obj); -void ggml_v1_print_objects(const struct ggml_v1_context * ctx); - -int ggml_v1_nelements(const struct ggml_v1_tensor * tensor); -size_t ggml_v1_nbytes (const struct ggml_v1_tensor * tensor); - 
-int ggml_v1_blck_size (enum ggml_v1_type type); -size_t ggml_v1_type_size (enum ggml_v1_type type); // size in bytes for all elements in a block -float ggml_v1_type_sizef(enum ggml_v1_type type); // ggml_v1_type_size()/ggml_v1_blck_size() as float - -size_t ggml_v1_element_size(const struct ggml_v1_tensor * tensor); - -struct ggml_v1_context * ggml_v1_init(struct ggml_v1_init_params params); -void ggml_v1_free(struct ggml_v1_context * ctx); - -size_t ggml_v1_used_mem(const struct ggml_v1_context * ctx); - -size_t ggml_v1_set_scratch(struct ggml_v1_context * ctx, struct ggml_v1_scratch scratch); - -struct ggml_v1_tensor * ggml_v1_new_tensor( - struct ggml_v1_context * ctx, - enum ggml_v1_type type, - int n_dims, - const int *ne); - -struct ggml_v1_tensor * ggml_v1_new_tensor_1d( - struct ggml_v1_context * ctx, - enum ggml_v1_type type, - int ne0); - -struct ggml_v1_tensor * ggml_v1_new_tensor_2d( - struct ggml_v1_context * ctx, - enum ggml_v1_type type, - int ne0, - int ne1); - -struct ggml_v1_tensor * ggml_v1_new_tensor_3d( - struct ggml_v1_context * ctx, - enum ggml_v1_type type, - int ne0, - int ne1, - int ne2); - -struct ggml_v1_tensor * ggml_v1_new_tensor_4d( - struct ggml_v1_context * ctx, - enum ggml_v1_type type, - int ne0, - int ne1, - int ne2, - int ne3); - -struct ggml_v1_tensor * ggml_v1_new_i32(struct ggml_v1_context * ctx, int32_t value); -struct ggml_v1_tensor * ggml_v1_new_f32(struct ggml_v1_context * ctx, float value); - -struct ggml_v1_tensor * ggml_v1_dup_tensor (struct ggml_v1_context * ctx, const struct ggml_v1_tensor * src); -struct ggml_v1_tensor * ggml_v1_view_tensor(struct ggml_v1_context * ctx, const struct ggml_v1_tensor * src); - -struct ggml_v1_tensor * ggml_v1_set_zero(struct ggml_v1_tensor * tensor); -struct ggml_v1_tensor * ggml_v1_set_i32 (struct ggml_v1_tensor * tensor, int32_t value); -struct ggml_v1_tensor * ggml_v1_set_f32 (struct ggml_v1_tensor * tensor, float value); - -int32_t ggml_v1_get_i32_1d(const struct ggml_v1_tensor * tensor, int i); -void ggml_v1_set_i32_1d(const struct ggml_v1_tensor * tensor, int i, int32_t value); - -float ggml_v1_get_f32_1d(const struct ggml_v1_tensor * tensor, int i); -void ggml_v1_set_f32_1d(const struct ggml_v1_tensor * tensor, int i, float value); - - void * ggml_v1_get_data (const struct ggml_v1_tensor * tensor); -float * ggml_v1_get_data_f32(const struct ggml_v1_tensor * tensor); - -// -// operations on tensors with backpropagation -// - -struct ggml_v1_tensor * ggml_v1_dup( - struct ggml_v1_context * ctx, - struct ggml_v1_tensor * a); - -struct ggml_v1_tensor * ggml_v1_add( - struct ggml_v1_context * ctx, - struct ggml_v1_tensor * a, - struct ggml_v1_tensor * b); - -struct ggml_v1_tensor * ggml_v1_sub( - struct ggml_v1_context * ctx, - struct ggml_v1_tensor * a, - struct ggml_v1_tensor * b); - -struct ggml_v1_tensor * ggml_v1_mul( - struct ggml_v1_context * ctx, - struct ggml_v1_tensor * a, - struct ggml_v1_tensor * b); - -struct ggml_v1_tensor * ggml_v1_div( - struct ggml_v1_context * ctx, - struct ggml_v1_tensor * a, - struct ggml_v1_tensor * b); - -struct ggml_v1_tensor * ggml_v1_sqr( - struct ggml_v1_context * ctx, - struct ggml_v1_tensor * a); - -struct ggml_v1_tensor * ggml_v1_sqrt( - struct ggml_v1_context * ctx, - struct ggml_v1_tensor * a); - -// return scalar -// TODO: compute sum along rows -struct ggml_v1_tensor * ggml_v1_sum( - struct ggml_v1_context * ctx, - struct ggml_v1_tensor * a); - -// mean along rows -struct ggml_v1_tensor * ggml_v1_mean( - struct ggml_v1_context * ctx, - struct ggml_v1_tensor 
* a); - -// if a is the same shape as b, and a is not parameter, return a -// otherwise, return a new tensor: repeat(a) to fit in b -struct ggml_v1_tensor * ggml_v1_repeat( - struct ggml_v1_context * ctx, - struct ggml_v1_tensor * a, - struct ggml_v1_tensor * b); - -struct ggml_v1_tensor * ggml_v1_abs( - struct ggml_v1_context * ctx, - struct ggml_v1_tensor * a); - -struct ggml_v1_tensor * ggml_v1_sgn( - struct ggml_v1_context * ctx, - struct ggml_v1_tensor * a); - -struct ggml_v1_tensor * ggml_v1_neg( - struct ggml_v1_context * ctx, - struct ggml_v1_tensor * a); - -struct ggml_v1_tensor * ggml_v1_step( - struct ggml_v1_context * ctx, - struct ggml_v1_tensor * a); - -struct ggml_v1_tensor * ggml_v1_relu( - struct ggml_v1_context * ctx, - struct ggml_v1_tensor * a); - -// TODO: double-check this computation is correct -struct ggml_v1_tensor * ggml_v1_gelu( - struct ggml_v1_context * ctx, - struct ggml_v1_tensor * a); - -// normalize along rows -// TODO: eps is hardcoded to 1e-5 for now -struct ggml_v1_tensor * ggml_v1_norm( - struct ggml_v1_context * ctx, - struct ggml_v1_tensor * a); - -// A: m rows, n columns -// B: p rows, n columns (i.e. we transpose it internally) -// result is m columns, p rows -struct ggml_v1_tensor * ggml_v1_mul_mat( - struct ggml_v1_context * ctx, - struct ggml_v1_tensor * a, - struct ggml_v1_tensor * b); - -// -// operations on tensors without backpropagation -// - -// in-place, returns view(a) -struct ggml_v1_tensor * ggml_v1_scale( - struct ggml_v1_context * ctx, - struct ggml_v1_tensor * a, - struct ggml_v1_tensor * b); - -// a -> b, return view(b) -struct ggml_v1_tensor * ggml_v1_cpy( - struct ggml_v1_context * ctx, - struct ggml_v1_tensor * a, - struct ggml_v1_tensor * b); - -// return view(a), b specifies the new shape -// TODO: when we start computing gradient, make a copy instead of view -struct ggml_v1_tensor * ggml_v1_reshape( - struct ggml_v1_context * ctx, - struct ggml_v1_tensor * a, - struct ggml_v1_tensor * b); - -// return view(a) -// TODO: when we start computing gradient, make a copy instead of view -struct ggml_v1_tensor * ggml_v1_reshape_2d( - struct ggml_v1_context * ctx, - struct ggml_v1_tensor * a, - int ne0, - int ne1); - -// return view(a) -// TODO: when we start computing gradient, make a copy instead of view -struct ggml_v1_tensor * ggml_v1_reshape_3d( - struct ggml_v1_context * ctx, - struct ggml_v1_tensor * a, - int ne0, - int ne1, - int ne2); - -// offset in bytes -struct ggml_v1_tensor * ggml_v1_view_1d( - struct ggml_v1_context * ctx, - struct ggml_v1_tensor * a, - int ne0, - size_t offset); - -struct ggml_v1_tensor * ggml_v1_view_2d( - struct ggml_v1_context * ctx, - struct ggml_v1_tensor * a, - int ne0, - int ne1, - size_t nb1, // row stride in bytes - size_t offset); - -struct ggml_v1_tensor * ggml_v1_permute( - struct ggml_v1_context * ctx, - struct ggml_v1_tensor * a, - int axis0, - int axis1, - int axis2, - int axis3); - -// alias for ggml_v1_permute(ctx, a, 1, 0, 2, 3) -struct ggml_v1_tensor * ggml_v1_transpose( - struct ggml_v1_context * ctx, - struct ggml_v1_tensor * a); - -struct ggml_v1_tensor * ggml_v1_get_rows( - struct ggml_v1_context * ctx, - struct ggml_v1_tensor * a, - struct ggml_v1_tensor * b); - -// set elements above the diagonal to -INF -// in-place, returns view(a) -struct ggml_v1_tensor * ggml_v1_diag_mask_inf( - struct ggml_v1_context * ctx, - struct ggml_v1_tensor * a, - int n_past); - -// in-place, returns view(a) -struct ggml_v1_tensor * ggml_v1_soft_max( - struct ggml_v1_context * ctx, - struct 
ggml_v1_tensor * a); - -// rotary position embedding -// in-place, returns view(a) -// if mode == 1, skip n_past elements -// TODO: avoid creating a new tensor every time -struct ggml_v1_tensor * ggml_v1_rope( - struct ggml_v1_context * ctx, - struct ggml_v1_tensor * a, - int n_past, - int n_dims, - int mode); - -// padding = 1 -// TODO: we don't support extra parameters for now -// that's why we are hard-coding the stride, padding, and dilation -// not great .. -struct ggml_v1_tensor * ggml_v1_conv_1d_1s( - struct ggml_v1_context * ctx, - struct ggml_v1_tensor * a, - struct ggml_v1_tensor * b); - -struct ggml_v1_tensor * ggml_v1_conv_1d_2s( - struct ggml_v1_context * ctx, - struct ggml_v1_tensor * a, - struct ggml_v1_tensor * b); - -struct ggml_v1_tensor * ggml_v1_flash_attn( - struct ggml_v1_context * ctx, - struct ggml_v1_tensor * q, - struct ggml_v1_tensor * k, - struct ggml_v1_tensor * v, - bool masked); - -struct ggml_v1_tensor * ggml_v1_flash_ff( - struct ggml_v1_context * ctx, - struct ggml_v1_tensor * a, - struct ggml_v1_tensor * b0, - struct ggml_v1_tensor * b1, - struct ggml_v1_tensor * c0, - struct ggml_v1_tensor * c1); - -// -// automatic differentiation -// - -void ggml_v1_set_param( - struct ggml_v1_context * ctx, - struct ggml_v1_tensor * tensor); - -void ggml_v1_build_forward_expand(struct ggml_v1_cgraph * cgraph, struct ggml_v1_tensor * tensor); - -struct ggml_v1_cgraph ggml_v1_build_forward (struct ggml_v1_tensor * tensor); -struct ggml_v1_cgraph ggml_v1_build_backward(struct ggml_v1_context * ctx, struct ggml_v1_cgraph * gf, bool keep); - -void ggml_v1_graph_compute(struct ggml_v1_context * ctx, struct ggml_v1_cgraph * cgraph); -void ggml_v1_graph_reset (struct ggml_v1_cgraph * cgraph); - -// print info and performance information for the graph -void ggml_v1_graph_print(const struct ggml_v1_cgraph * cgraph); - -// dump the graph into a file using the dot format -void ggml_v1_graph_dump_dot(const struct ggml_v1_cgraph * gb, const struct ggml_v1_cgraph * gf, const char * filename); - -// -// optimization -// - -// optimization methods -enum ggml_v1_opt_type { - GGML_V1_OPT_ADAM, - GGML_V1_OPT_LBFGS, -}; - -// linesearch methods -enum ggml_v1_linesearch { - GGML_V1_LINESEARCH_DEFAULT = 1, - - GGML_V1_LINESEARCH_BACKTRACKING_ARMIJO = 0, - GGML_V1_LINESEARCH_BACKTRACKING_WOLFE = 1, - GGML_V1_LINESEARCH_BACKTRACKING_STRONG_WOLFE = 2, -}; - -// optimization return values -enum ggml_v1_opt_result { - GGML_V1_OPT_OK = 0, - GGML_V1_OPT_DID_NOT_CONVERGE, - GGML_V1_OPT_NO_CONTEXT, - GGML_V1_OPT_INVALID_WOLFE, - GGML_V1_OPT_FAIL, - - GGML_V1_LINESEARCH_FAIL = -128, - GGML_V1_LINESEARCH_MINIMUM_STEP, - GGML_V1_LINESEARCH_MAXIMUM_STEP, - GGML_V1_LINESEARCH_MAXIMUM_ITERATIONS, - GGML_V1_LINESEARCH_INVALID_PARAMETERS, -}; - -// optimization parameters -// -// see ggml.c (ggml_v1_opt_default_params) for default values -// -struct ggml_v1_opt_params { - enum ggml_v1_opt_type type; - - int n_threads; - - // delta-based convergence test - // - // if past == 0 - disabled - // if past > 0: - // stop if |f(x) - f(x_past)| < delta * max(1, |f(x)|) - // - int past; - float delta; - - // maximum number of iterations without improvement - // - // if 0 - disabled - // if > 0: - // assume convergence if no cost improvement in this number of iterations - // - int max_no_improvement; - - bool print_forward_graph; - bool print_backward_graph; - - // ADAM parameters - struct { - int n_iter; - - float alpha; // learning rate - float beta1; - float beta2; - float eps; // epsilon for numerical stability - 
float eps_f; // epsilon for convergence test - float eps_g; // epsilon for convergence test - } adam; - - // LBFGS parameters - struct { - int m; // number of corrections to approximate the inv. Hessian - int n_iter; - int max_linesearch; - - float eps; // convergence tolerance - float ftol; // line search tolerance - float wolfe; - float min_step; - float max_step; - - enum ggml_v1_linesearch linesearch; - } lbfgs; -}; - -struct ggml_v1_opt_params ggml_v1_opt_default_params(enum ggml_v1_opt_type type); - -// optimize the function defined by the tensor f -enum ggml_v1_opt_result ggml_v1_opt( - struct ggml_v1_context * ctx, - struct ggml_v1_opt_params params, - struct ggml_v1_tensor * f); - -// -// system info -// - -int ggml_v1_cpu_has_avx(void); -int ggml_v1_cpu_has_avx2(void); -int ggml_v1_cpu_has_avx512(void); -int ggml_v1_cpu_has_fma(void); -int ggml_v1_cpu_has_neon(void); -int ggml_v1_cpu_has_arm_fma(void); -int ggml_v1_cpu_has_f16c(void); -int ggml_v1_cpu_has_fp16_va(void); -int ggml_v1_cpu_has_wasm_simd(void); -int ggml_v1_cpu_has_blas(void); -int ggml_v1_cpu_has_sse3(void); -int ggml_v1_cpu_has_vsx(void); - -#ifdef __cplusplus -} -#endif diff --git a/spaces/Inderdev07/Attendance-FaceRecognition/README.md b/spaces/Inderdev07/Attendance-FaceRecognition/README.md deleted file mode 100644 index 969f372d1f8d237205103862740aef80abcd5914..0000000000000000000000000000000000000000 --- a/spaces/Inderdev07/Attendance-FaceRecognition/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Attendance FaceRecognition -emoji: ⚡ -colorFrom: gray -colorTo: gray -sdk: streamlit -sdk_version: 1.21.0 -app_file: app.py -pinned: false -license: cc ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/InpaintAI/Inpaint-Anything/app.py b/spaces/InpaintAI/Inpaint-Anything/app.py deleted file mode 100644 index 2463f20871ad057325389168043efe13e4599025..0000000000000000000000000000000000000000 --- a/spaces/InpaintAI/Inpaint-Anything/app.py +++ /dev/null @@ -1,231 +0,0 @@ -import os -import sys -# sys.path.append(os.path.abspath(os.path.dirname(os.getcwd()))) -# os.chdir("../") -import gradio as gr -import numpy as np -from pathlib import Path -from matplotlib import pyplot as plt -import torch -import tempfile -from lama_inpaint import inpaint_img_with_lama, build_lama_model, inpaint_img_with_builded_lama -from utils import load_img_to_array, save_array_to_img, dilate_mask, \ - show_mask, show_points -from PIL import Image -sys.path.insert(0, str(Path(__file__).resolve().parent / "third_party" / "segment-anything")) -from segment_anything import SamPredictor, sam_model_registry -import argparse - -def setup_args(parser): - parser.add_argument( - "--lama_config", type=str, - default="./third_party/lama/configs/prediction/default.yaml", - help="The path to the config file of lama model. 
" - "Default: the config of big-lama", - ) - parser.add_argument( - "--lama_ckpt", type=str, - default="pretrained_models/big-lama", - help="The path to the lama checkpoint.", - ) - parser.add_argument( - "--sam_ckpt", type=str, - default="./pretrained_models/sam_vit_h_4b8939.pth", - help="The path to the SAM checkpoint to use for mask generation.", - ) -def mkstemp(suffix, dir=None): - fd, path = tempfile.mkstemp(suffix=f"{suffix}", dir=dir) - os.close(fd) - return Path(path) - - -def get_sam_feat(img): - model['sam'].set_image(img) - features = model['sam'].features - orig_h = model['sam'].orig_h - orig_w = model['sam'].orig_w - input_h = model['sam'].input_h - input_w = model['sam'].input_w - model['sam'].reset_image() - return features, orig_h, orig_w, input_h, input_w - - -def get_masked_img(img, w, h, features, orig_h, orig_w, input_h, input_w, dilate_kernel_size): - point_coords = [w, h] - point_labels = [1] - - model['sam'].is_image_set = True - model['sam'].features = features - model['sam'].orig_h = orig_h - model['sam'].orig_w = orig_w - model['sam'].input_h = input_h - model['sam'].input_w = input_w - - # model['sam'].set_image(img) # todo : update here for accelerating - masks, _, _ = model['sam'].predict( - point_coords=np.array([point_coords]), - point_labels=np.array(point_labels), - multimask_output=True, - ) - - masks = masks.astype(np.uint8) * 255 - - # dilate mask to avoid unmasked edge effect - if dilate_kernel_size is not None: - masks = [dilate_mask(mask, dilate_kernel_size) for mask in masks] - else: - masks = [mask for mask in masks] - - figs = [] - for idx, mask in enumerate(masks): - # save the pointed and masked image - tmp_p = mkstemp(".png") - dpi = plt.rcParams['figure.dpi'] - height, width = img.shape[:2] - fig = plt.figure(figsize=(width/dpi/0.77, height/dpi/0.77)) - plt.imshow(img) - plt.axis('off') - show_points(plt.gca(), [point_coords], point_labels, - size=(width*0.04)**2) - show_mask(plt.gca(), mask, random_color=False) - plt.tight_layout() - plt.savefig(tmp_p, bbox_inches='tight', pad_inches=0) - figs.append(fig) - plt.close() - return *figs, *masks - - -def get_inpainted_img(img, mask0, mask1, mask2): - lama_config = args.lama_config - device = "cuda" if torch.cuda.is_available() else "cpu" - out = [] - for mask in [mask0, mask1, mask2]: - if len(mask.shape)==3: - mask = mask[:,:,0] - img_inpainted = inpaint_img_with_builded_lama( - model['lama'], img, mask, lama_config, device=device) - out.append(img_inpainted) - return out - - -# get args -parser = argparse.ArgumentParser() -setup_args(parser) -args = parser.parse_args(sys.argv[1:]) -# build models -model = {} -# build the sam model -model_type="vit_h" -ckpt_p=args.sam_ckpt -model_sam = sam_model_registry[model_type](checkpoint=ckpt_p) -device = "cuda" if torch.cuda.is_available() else "cpu" -model_sam.to(device=device) -model['sam'] = SamPredictor(model_sam) - -# build the lama model -lama_config = args.lama_config -lama_ckpt = args.lama_ckpt -device = "cuda" if torch.cuda.is_available() else "cpu" -model['lama'] = build_lama_model(lama_config, lama_ckpt, device=device) - -button_size = (100,50) -with gr.Blocks() as demo: - features = gr.State(None) - orig_h = gr.State(None) - orig_w = gr.State(None) - input_h = gr.State(None) - input_w = gr.State(None) - - with gr.Row().style(mobile_collapse=False, equal_height=True): - with gr.Column(variant="panel"): - with gr.Row(): - gr.Markdown("## Input Image") - with gr.Row(): - img = gr.Image(label="Input Image").style(height="200px") - with 
gr.Column(variant="panel"): - with gr.Row(): - gr.Markdown("## Pointed Image") - with gr.Row(): - img_pointed = gr.Plot(label='Pointed Image') - with gr.Column(variant="panel"): - with gr.Row(): - gr.Markdown("## Control Panel") - with gr.Row(): - w = gr.Number(label="Point Coordinate W") - h = gr.Number(label="Point Coordinate H") - dilate_kernel_size = gr.Slider(label="Dilate Kernel Size", minimum=0, maximum=100, step=1, value=15) - sam_mask = gr.Button("Predict Mask", variant="primary").style(full_width=True, size="sm") - lama = gr.Button("Inpaint Image", variant="primary").style(full_width=True, size="sm") - clear_button_image = gr.Button(value="Reset", label="Reset", variant="secondary").style(full_width=True, size="sm") - - # todo: maybe we can delete this row, for it's unnecessary to show the original mask for customers - with gr.Row(variant="panel"): - with gr.Column(): - with gr.Row(): - gr.Markdown("## Segmentation Mask") - with gr.Row(): - mask_0 = gr.outputs.Image(type="numpy", label="Segmentation Mask 0").style(height="200px") - mask_1 = gr.outputs.Image(type="numpy", label="Segmentation Mask 1").style(height="200px") - mask_2 = gr.outputs.Image(type="numpy", label="Segmentation Mask 2").style(height="200px") - - with gr.Row(variant="panel"): - with gr.Column(): - with gr.Row(): - gr.Markdown("## Image with Mask") - with gr.Row(): - img_with_mask_0 = gr.Plot(label="Image with Segmentation Mask 0") - img_with_mask_1 = gr.Plot(label="Image with Segmentation Mask 1") - img_with_mask_2 = gr.Plot(label="Image with Segmentation Mask 2") - - with gr.Row(variant="panel"): - with gr.Column(): - with gr.Row(): - gr.Markdown("## Image Removed with Mask") - with gr.Row(): - img_rm_with_mask_0 = gr.outputs.Image( - type="numpy", label="Image Removed with Segmentation Mask 0").style(height="200px") - img_rm_with_mask_1 = gr.outputs.Image( - type="numpy", label="Image Removed with Segmentation Mask 1").style(height="200px") - img_rm_with_mask_2 = gr.outputs.Image( - type="numpy", label="Image Removed with Segmentation Mask 2").style(height="200px") - - - def get_select_coords(img, evt: gr.SelectData): - dpi = plt.rcParams['figure.dpi'] - height, width = img.shape[:2] - fig = plt.figure(figsize=(width/dpi/0.77, height/dpi/0.77)) - plt.imshow(img) - plt.axis('off') - plt.tight_layout() - show_points(plt.gca(), [[evt.index[0], evt.index[1]]], [1], - size=(width*0.04)**2) - return evt.index[0], evt.index[1], fig - - img.select(get_select_coords, [img], [w, h, img_pointed]) - img.upload(get_sam_feat, [img], [features, orig_h, orig_w, input_h, input_w]) - - sam_mask.click( - get_masked_img, - [img, w, h, features, orig_h, orig_w, input_h, input_w, dilate_kernel_size], - [img_with_mask_0, img_with_mask_1, img_with_mask_2, mask_0, mask_1, mask_2] - ) - - lama.click( - get_inpainted_img, - [img, mask_0, mask_1, mask_2], - [img_rm_with_mask_0, img_rm_with_mask_1, img_rm_with_mask_2] - ) - - - def reset(*args): - return [None for _ in args] - - clear_button_image.click( - reset, - [img, features, img_pointed, w, h, mask_0, mask_1, mask_2, img_with_mask_0, img_with_mask_1, img_with_mask_2, img_rm_with_mask_0, img_rm_with_mask_1, img_rm_with_mask_2], - [img, features, img_pointed, w, h, mask_0, mask_1, mask_2, img_with_mask_0, img_with_mask_1, img_with_mask_2, img_rm_with_mask_0, img_rm_with_mask_1, img_rm_with_mask_2] - ) - -if __name__ == "__main__": - demo.launch() - \ No newline at end of file diff --git a/spaces/InpaintAI/Inpaint-Anything/third_party/lama/saicinpainting/utils.py 
b/spaces/InpaintAI/Inpaint-Anything/third_party/lama/saicinpainting/utils.py deleted file mode 100644 index c2d67ed8bc793dd5113224fa322adb88f3ed9b22..0000000000000000000000000000000000000000 --- a/spaces/InpaintAI/Inpaint-Anything/third_party/lama/saicinpainting/utils.py +++ /dev/null @@ -1,177 +0,0 @@ -import bisect -import functools -import logging -import numbers -import os -import signal -import sys -import traceback -import warnings - -import torch -from pytorch_lightning import seed_everything - -LOGGER = logging.getLogger(__name__) - -import platform -if platform.system() != 'Linux': - signal.SIGUSR1 = 1 - -def check_and_warn_input_range(tensor, min_value, max_value, name): - actual_min = tensor.min() - actual_max = tensor.max() - if actual_min < min_value or actual_max > max_value: - warnings.warn(f"{name} must be in {min_value}..{max_value} range, but it ranges {actual_min}..{actual_max}") - - -def sum_dict_with_prefix(target, cur_dict, prefix, default=0): - for k, v in cur_dict.items(): - target_key = prefix + k - target[target_key] = target.get(target_key, default) + v - - -def average_dicts(dict_list): - result = {} - norm = 1e-3 - for dct in dict_list: - sum_dict_with_prefix(result, dct, '') - norm += 1 - for k in list(result): - result[k] /= norm - return result - - -def add_prefix_to_keys(dct, prefix): - return {prefix + k: v for k, v in dct.items()} - - -def set_requires_grad(module, value): - for param in module.parameters(): - param.requires_grad = value - - -def flatten_dict(dct): - result = {} - for k, v in dct.items(): - if isinstance(k, tuple): - k = '_'.join(k) - if isinstance(v, dict): - for sub_k, sub_v in flatten_dict(v).items(): - result[f'{k}_{sub_k}'] = sub_v - else: - result[k] = v - return result - - -class LinearRamp: - def __init__(self, start_value=0, end_value=1, start_iter=-1, end_iter=0): - self.start_value = start_value - self.end_value = end_value - self.start_iter = start_iter - self.end_iter = end_iter - - def __call__(self, i): - if i < self.start_iter: - return self.start_value - if i >= self.end_iter: - return self.end_value - part = (i - self.start_iter) / (self.end_iter - self.start_iter) - return self.start_value * (1 - part) + self.end_value * part - - -class LadderRamp: - def __init__(self, start_iters, values): - self.start_iters = start_iters - self.values = values - assert len(values) == len(start_iters) + 1, (len(values), len(start_iters)) - - def __call__(self, i): - segment_i = bisect.bisect_right(self.start_iters, i) - return self.values[segment_i] - - -def get_ramp(kind='ladder', **kwargs): - if kind == 'linear': - return LinearRamp(**kwargs) - if kind == 'ladder': - return LadderRamp(**kwargs) - raise ValueError(f'Unexpected ramp kind: {kind}') - - -def print_traceback_handler(sig, frame): - LOGGER.warning(f'Received signal {sig}') - bt = ''.join(traceback.format_stack()) - LOGGER.warning(f'Requested stack trace:\n{bt}') - - -def register_debug_signal_handlers(sig=signal.SIGUSR1, handler=print_traceback_handler): - LOGGER.warning(f'Setting signal {sig} handler {handler}') - signal.signal(sig, handler) - - -def handle_deterministic_config(config): - seed = dict(config).get('seed', None) - if seed is None: - return False - - seed_everything(seed) - return True - - -def get_shape(t): - if torch.is_tensor(t): - return tuple(t.shape) - elif isinstance(t, dict): - return {n: get_shape(q) for n, q in t.items()} - elif isinstance(t, (list, tuple)): - return [get_shape(q) for q in t] - elif isinstance(t, numbers.Number): - return type(t) - 
else: - raise ValueError('unexpected type {}'.format(type(t))) - - -def get_has_ddp_rank(): - master_port = os.environ.get('MASTER_PORT', None) - node_rank = os.environ.get('NODE_RANK', None) - local_rank = os.environ.get('LOCAL_RANK', None) - world_size = os.environ.get('WORLD_SIZE', None) - has_rank = master_port is not None or node_rank is not None or local_rank is not None or world_size is not None - return has_rank - - -def handle_ddp_subprocess(): - def main_decorator(main_func): - @functools.wraps(main_func) - def new_main(*args, **kwargs): - # Trainer sets MASTER_PORT, NODE_RANK, LOCAL_RANK, WORLD_SIZE - parent_cwd = os.environ.get('TRAINING_PARENT_WORK_DIR', None) - has_parent = parent_cwd is not None - has_rank = get_has_ddp_rank() - assert has_parent == has_rank, f'Inconsistent state: has_parent={has_parent}, has_rank={has_rank}' - - if has_parent: - # we are in the worker - sys.argv.extend([ - f'hydra.run.dir={parent_cwd}', - # 'hydra/hydra_logging=disabled', - # 'hydra/job_logging=disabled' - ]) - # do nothing if this is a top-level process - # TRAINING_PARENT_WORK_DIR is set in handle_ddp_parent_process after hydra initialization - - main_func(*args, **kwargs) - return new_main - return main_decorator - - -def handle_ddp_parent_process(): - parent_cwd = os.environ.get('TRAINING_PARENT_WORK_DIR', None) - has_parent = parent_cwd is not None - has_rank = get_has_ddp_rank() - assert has_parent == has_rank, f'Inconsistent state: has_parent={has_parent}, has_rank={has_rank}' - - if parent_cwd is None: - os.environ['TRAINING_PARENT_WORK_DIR'] = os.getcwd() - - return has_parent diff --git a/spaces/Intel/ldm3d/static/public/js/three-6dof.min.js b/spaces/Intel/ldm3d/static/public/js/three-6dof.min.js deleted file mode 100644 index c6eb438a16e634f7df7fc6ba698e02d72cfd9f55..0000000000000000000000000000000000000000 --- a/spaces/Intel/ldm3d/static/public/js/three-6dof.min.js +++ /dev/null @@ -1,2 +0,0 @@ -!function(e,t){"object"==typeof exports&&"undefined"!=typeof module?t(exports,require("three")):"function"==typeof define&&define.amd?define(["exports","three"],t):t((e=e||self).SixDOF={},e.THREE)}(this,(function(e,t){"use strict";var r,n,i,o="#define GLSLIFY 1\n\n/** Small util to get the depth texture */\nvec3 getDepth(sampler2D depth, vec2 uvs) {\n\n /** Return the depth texture */\n return texture2D(depth, uvs).rgb;\n}\n\n/** Small util to get the lower half of a texture (in our case the depthmap) */\nvec3 getDepthFromBottomHalf(sampler2D tex, vec2 uvs) {\n \n /** Chop the uvs to the lower half of the texture (i.e top-bottom) */\n vec2 lower_half_uvs = vec2(uvs.x, uvs.y * 0.5);\n\n /** Return the depth texture */\n return texture2D(tex, lower_half_uvs).rgb;\n}\n\n/** Small util to get the upper half of a texture (in our case the color texture) */\nvec3 getColorFromUpperHalf(sampler2D tex, vec2 uvs) {\n \n /** Chop the uvs to the lower half of the texture (i.e top-bottom) */\n vec2 upper_half_uvs = vec2(uvs.x, (uvs.y * 0.5) + 0.5);\n\n /** Return the depth texture */\n return texture2D(tex, upper_half_uvs).rgb;\n}\n\n// Uniforms\nuniform sampler2D colorTexture;\nuniform sampler2D depthTexture;\nuniform float debugDepth;\nuniform float opacity;\n\n// Varyings from vertex program\nvarying vec2 vUv;\n\n// Internal\nvec3 depth;\nvec3 color;\n\nvoid main() {\n\n/** Use compiler definitions to know which method to pick */\n#ifdef TOP_BOTTOM\n depth = getDepthFromBottomHalf(colorTexture, vUv);\n color = getColorFromUpperHalf(colorTexture, vUv);\n#endif\n\n#ifdef SEPERATE\n depth = 
getDepth(depthTexture, vUv);\n color = texture2D(colorTexture, vUv).rgb;\n#endif\n\n // Mix the depth and color based on debugDepth value\n vec3 depthColorMixer = mix(color, depth , debugDepth);\n\n // Render dat fragment\n gl_FragColor = vec4(depthColorMixer, opacity);\n}\n",s="#define GLSLIFY 1\n\n/** Small util to get the depth texture */\nvec3 getDepth(sampler2D depth, vec2 uvs) {\n\n /** Return the depth texture */\n return texture2D(depth, uvs).rgb;\n}\n\n/** Small util to get the lower half of a texture (in our case the depthmap) */\nvec3 getDepthFromBottomHalf(sampler2D tex, vec2 uvs) {\n \n /** Chop the uvs to the lower half of the texture (i.e top-bottom) */\n vec2 lower_half_uvs = vec2(uvs.x, uvs.y * 0.5);\n\n /** Return the depth texture */\n return texture2D(tex, lower_half_uvs).rgb;\n}\n\n// Uniforms\nuniform sampler2D colorTexture;\nuniform sampler2D depthTexture;\nuniform float pointSize;\nuniform float displacement;\n\n// Varyings passed to fragment\nvarying vec2 vUv;\n\n// Internal\nfloat depth;\n\nvoid main() {\n\n /** Transform and pass to fragment shader */\n vUv = uv;\n\n /** Set the GL point size for when rendering points, ignored otherwise */\n gl_PointSize = pointSize;\n\n/** Use compiler definitions to know which method to pick */\n#ifdef TOP_BOTTOM\n depth = getDepthFromBottomHalf(colorTexture, vUv).r;\n#endif\n\n#ifdef SEPERATE\n depth = getDepth(depthTexture, vUv).r;\n#endif\n\n /** \n * Invert the normals (since they are pointing outwards) and \n * move the position on the normal direction scaled by the \n * displacement which is the depth for the current vertex\n * multiplied by a `displacement` scalaer\n **/\n float disp = displacement * depth;\n vec3 offset = position + (-normal) * disp;\n\n /** Transform */\n gl_Position = projectionMatrix *\n modelViewMatrix *\n vec4(offset, 1.0);\n}",a={colorTexture:{type:"t",value:null},depthTexture:{type:"t",value:null},time:{type:"f",value:0},opacity:{type:"f",value:1},pointSize:{type:"f",value:3},debugDepth:{type:"f",value:0},displacement:{type:"f",value:1}};(r=e.MeshDensity||(e.MeshDensity={}))[r.LOW=64]="LOW",r[r.MEDIUM=128]="MEDIUM",r[r.HIGH=256]="HIGH",r[r.EXTRA_HIGH=512]="EXTRA_HIGH",r[r.EPIC=1024]="EPIC",(n=e.Style||(e.Style={}))[n.WIRE=0]="WIRE",n[n.POINTS=1]="POINTS",n[n.MESH=2]="MESH",(i=e.TextureType||(e.TextureType={}))[i.TOP_BOTTOM=0]="TOP_BOTTOM",i[i.SEPERATE=1]="SEPERATE";class u{constructor(){this.type=e.TextureType.SEPERATE,this.density=e.MeshDensity.HIGH,this.style=e.Style.MESH,this.displacement=4,this.radius=6}}class p extends t.Object3D{constructor(r,n,i){super(),this.props=new u,this.material=new t.ShaderMaterial({uniforms:a,vertexShader:s,fragmentShader:o,transparent:!0,side:t.BackSide}),this.setProps(this.props,i),this.setShaderDefines(this.material,[e.TextureType[this.props.type]]),p.geometry||(p.geometry=this.createSphereGeometry(this.props.radius,this.props.density)),this.assignTexture(this.props.type,r,n),this.displacement=this.props.displacement,super.add(this.createMesh(p.geometry,this.material,this.props.style))}setShaderDefines(e,t){t.forEach((function(t){return e.defines[t]=""}))}createSphereGeometry(e,r){return new t.SphereBufferGeometry(e,r,r)}setProps(e,t){if(t)for(var r in t)e[r]?e[r]=t[r]:console.warn("THREE.SixDOF: Provided ".concat(r," in config but it is not a valid property and being ignored"))}assignTexture(t,r,n){if(t===e.TextureType.SEPERATE){if(!n)throw new Error("When using seperate texture type, depthmap must be 
provided");this.depth=this.setDefaultTextureProps(n)}this.texture=this.setDefaultTextureProps(r)}setDefaultTextureProps(e){return e.minFilter=t.NearestFilter,e.magFilter=t.LinearFilter,e.format=t.RGBFormat,e.generateMipmaps=!1,e}createMesh(r,n,i){switch(i){case e.Style.WIRE:return this.material.wireframe||(this.material.wireframe=!0),new t.Mesh(r,n);case e.Style.MESH:return this.material.wireframe&&(this.material.wireframe=!1),new t.Mesh(r,n);case e.Style.POINTS:return new t.Points(r,n)}}toggleDepthDebug(e){this.material.uniforms.debugDepth.value=null!=e?e:!this.material.uniforms.debugDepth.value}set displacement(e){this.material.uniforms.displacement.value=e}set depth(e){this.material.uniforms.depthTexture.value=e}set texture(e){this.material.uniforms.colorTexture.value=e}set opacity(e){this.material.uniforms.opacity.value=e}set pointSize(e){this.material.uniforms.pointSize.value=e}get config(){return this.props}get opacity(){return this.material.uniforms.opacity.value}get pointSize(){return this.material.uniforms.pointSize.value}get displacement(){return this.material.uniforms.displacement.value}get texture(){return this.material.uniforms.colorTexture.value}get depth(){return this.material.uniforms.opacity.value}}p.geometry=void 0,e.Viewer=p,Object.defineProperty(e,"__esModule",{value:!0})})); -//# sourceMappingURL=three-6dof.min.js.map diff --git a/spaces/Jose-Alonso26/API-Online/index.html b/spaces/Jose-Alonso26/API-Online/index.html deleted file mode 100644 index bf21791d7b86c70b0e76238cc79fc88b0bc80a92..0000000000000000000000000000000000000000 --- a/spaces/Jose-Alonso26/API-Online/index.html +++ /dev/null @@ -1,127 +0,0 @@ - - - - - - Contactos App - - - -
    [Stripped HTML body of the deleted Contactos App index.html. Recoverable section headings: Contactos, Agregar Contacto, Listado de Contactos, Buscar Contactos; the form and table markup was not recovered.]
        - - - - diff --git a/spaces/KPCGD/bingo/tailwind.config.js b/spaces/KPCGD/bingo/tailwind.config.js deleted file mode 100644 index 03da3c3c45be6983b9f5ffa6df5f1fd0870e9636..0000000000000000000000000000000000000000 --- a/spaces/KPCGD/bingo/tailwind.config.js +++ /dev/null @@ -1,48 +0,0 @@ -/** @type {import('tailwindcss').Config} */ -module.exports = { - content: [ - './src/pages/**/*.{js,ts,jsx,tsx,mdx}', - './src/components/**/*.{js,ts,jsx,tsx,mdx}', - './src/app/**/*.{js,ts,jsx,tsx,mdx}', - './src/ui/**/*.{js,ts,jsx,tsx,mdx}', - ], - "darkMode": "class", - theme: { - extend: { - colors: { - 'primary-blue': 'rgb(var(--color-primary-blue) / )', - secondary: 'rgb(var(--color-secondary) / )', - 'primary-background': 'rgb(var(--primary-background) / )', - 'primary-text': 'rgb(var(--primary-text) / )', - 'secondary-text': 'rgb(var(--secondary-text) / )', - 'light-text': 'rgb(var(--light-text) / )', - 'primary-border': 'rgb(var(--primary-border) / )', - }, - keyframes: { - slideDownAndFade: { - from: { opacity: 0, transform: 'translateY(-2px)' }, - to: { opacity: 1, transform: 'translateY(0)' }, - }, - slideLeftAndFade: { - from: { opacity: 0, transform: 'translateX(2px)' }, - to: { opacity: 1, transform: 'translateX(0)' }, - }, - slideUpAndFade: { - from: { opacity: 0, transform: 'translateY(2px)' }, - to: { opacity: 1, transform: 'translateY(0)' }, - }, - slideRightAndFade: { - from: { opacity: 0, transform: 'translateX(2px)' }, - to: { opacity: 1, transform: 'translateX(0)' }, - }, - }, - animation: { - slideDownAndFade: 'slideDownAndFade 400ms cubic-bezier(0.16, 1, 0.3, 1)', - slideLeftAndFade: 'slideLeftAndFade 400ms cubic-bezier(0.16, 1, 0.3, 1)', - slideUpAndFade: 'slideUpAndFade 400ms cubic-bezier(0.16, 1, 0.3, 1)', - slideRightAndFade: 'slideRightAndFade 400ms cubic-bezier(0.16, 1, 0.3, 1)', - }, - }, - }, - plugins: [require('@headlessui/tailwindcss'), require('tailwind-scrollbar')], -} diff --git a/spaces/Kayson/InstructDiffusion/stable_diffusion/ldm/modules/encoders/__init__.py b/spaces/Kayson/InstructDiffusion/stable_diffusion/ldm/modules/encoders/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/Kevin676/Real-Time-Voice-Cloning/encoder/params_data.py b/spaces/Kevin676/Real-Time-Voice-Cloning/encoder/params_data.py deleted file mode 100644 index bdb1716ed45617f2b127a7fb8885afe6cc74fb71..0000000000000000000000000000000000000000 --- a/spaces/Kevin676/Real-Time-Voice-Cloning/encoder/params_data.py +++ /dev/null @@ -1,29 +0,0 @@ - -## Mel-filterbank -mel_window_length = 25 # In milliseconds -mel_window_step = 10 # In milliseconds -mel_n_channels = 40 - - -## Audio -sampling_rate = 16000 -# Number of spectrogram frames in a partial utterance -partials_n_frames = 160 # 1600 ms -# Number of spectrogram frames at inference -inference_n_frames = 80 # 800 ms - - -## Voice Activation Detection -# Window size of the VAD. Must be either 10, 20 or 30 milliseconds. -# This sets the granularity of the VAD. Should not need to be changed. -vad_window_length = 30 # In milliseconds -# Number of frames to average together when performing the moving average smoothing. -# The larger this value, the larger the VAD variations must be to not get smoothed out. -vad_moving_average_width = 8 -# Maximum number of consecutive silent frames a segment can have. 
-vad_max_silence_length = 6 - - -## Audio volume normalization -audio_norm_target_dBFS = -30 - diff --git a/spaces/KiranK7/chatBOt-4/README.md b/spaces/KiranK7/chatBOt-4/README.md deleted file mode 100644 index 1e8bb1db61a6c783cf5898d7d1fd3aae1534fb9f..0000000000000000000000000000000000000000 --- a/spaces/KiranK7/chatBOt-4/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: ChatBOt 4 -emoji: 🏆 -colorFrom: indigo -colorTo: blue -sdk: gradio -sdk_version: 3.40.1 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Kotinagendla/MyGenAIChatBot/app.py b/spaces/Kotinagendla/MyGenAIChatBot/app.py deleted file mode 100644 index a362dcc7d0ddd1eee86961f1bc3db6d894fbd3d5..0000000000000000000000000000000000000000 --- a/spaces/Kotinagendla/MyGenAIChatBot/app.py +++ /dev/null @@ -1,34 +0,0 @@ -import os -import gradio as gr -from langchain.chat_models import ChatOpenAI -from langchain import LLMChain, PromptTemplate -from langchain.memory import ConversationBufferMemory - -OPENAI_API_KEY=os.getenv('OPENAI_API_KEY') - -template = """You are a helpful assistant to answer all user queries. -{chat_history} -User: {user_message} -Chatbot:""" - -prompt = PromptTemplate( - input_variables=["chat_history", "user_message"], template=template -) - -memory = ConversationBufferMemory(memory_key="chat_history") - -llm_chain = LLMChain( - llm=ChatOpenAI(temperature='0.5', model_name="gpt-3.5-turbo"), - prompt=prompt, - verbose=True, - memory=memory, -) - -def get_text_response(user_message,history): - response = llm_chain.predict(user_message = user_message) - return response - -demo = gr.ChatInterface(get_text_response) - -if __name__ == "__main__": - demo.launch() #To create a public link, set `share=True` in `launch()`. To enable errors and logs, set `debug=True` in `launch()`. diff --git a/spaces/Kreaols/ChuanhuChatGPT/chatgpt - windows.bat b/spaces/Kreaols/ChuanhuChatGPT/chatgpt - windows.bat deleted file mode 100644 index 0b78fdc3a559abd692e3a9e9af5e482124d13a99..0000000000000000000000000000000000000000 --- a/spaces/Kreaols/ChuanhuChatGPT/chatgpt - windows.bat +++ /dev/null @@ -1,14 +0,0 @@ -@echo off -echo Opening ChuanhuChatGPT... - -REM Open powershell via bat -start powershell.exe -NoExit -Command "python ./ChuanhuChatbot.py" - -REM The web page can be accessed with delayed start http://127.0.0.1:7860/ -ping -n 5 127.0.0.1>nul - -REM access chargpt via your default browser -start "" "http://127.0.0.1:7860/" - - -echo Finished opening ChuanhuChatGPT (http://127.0.0.1:7860/). \ No newline at end of file diff --git a/spaces/Lianjd/stock_dashboard/backtrader/filters/calendardays.py b/spaces/Lianjd/stock_dashboard/backtrader/filters/calendardays.py deleted file mode 100644 index 110e3f8d8607cf3627bd00329f51e5255b03d2c2..0000000000000000000000000000000000000000 --- a/spaces/Lianjd/stock_dashboard/backtrader/filters/calendardays.py +++ /dev/null @@ -1,120 +0,0 @@ -#!/usr/bin/env python -# -*- coding: utf-8; py-indent-offset:4 -*- -############################################################################### -# -# Copyright (C) 2015-2020 Daniel Rodriguez -# -# This program is free software: you can redistribute it and/or modify -# it under the terms of the GNU General Public License as published by -# the Free Software Foundation, either version 3 of the License, or -# (at your option) any later version. 
-# -# This program is distributed in the hope that it will be useful, -# but WITHOUT ANY WARRANTY; without even the implied warranty of -# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the -# GNU General Public License for more details. -# -# You should have received a copy of the GNU General Public License -# along with this program. If not, see . -# -############################################################################### -from __future__ import (absolute_import, division, print_function, - unicode_literals) - -from datetime import date, datetime, timedelta - -from backtrader import TimeFrame -from backtrader.utils.py3 import with_metaclass -from .. import metabase - - -class CalendarDays(with_metaclass(metabase.MetaParams, object)): - ''' - Bar Filler to add missing calendar days to trading days - - Params: - - - fill_price (def: None): - - > 0: The given value to fill - 0 or None: Use the last known closing price - -1: Use the midpoint of the last bar (High-Low average) - - - fill_vol (def: float('NaN')): - - Value to use to fill the missing volume - - - fill_oi (def: float('NaN')): - - Value to use to fill the missing Open Interest - ''' - params = (('fill_price', None), - ('fill_vol', float('NaN')), - ('fill_oi', float('NaN')),) - - ONEDAY = timedelta(days=1) - lastdt = date.max - - def __init__(self, data): - pass - - def __call__(self, data): - ''' - If the data has a gap larger than 1 day amongst bars, the missing bars - are added to the stream. - - Params: - - data: the data source to filter/process - - Returns: - - False (always): this filter does not remove bars from the stream - - ''' - dt = data.datetime.date() - if (dt - self.lastdt) > self.ONEDAY: # gap in place - self._fillbars(data, dt, self.lastdt) - - self.lastdt = dt - return False # no bar has been removed from the stream - - def _fillbars(self, data, dt, lastdt): - ''' - Fills one by one bars as needed from time_start to time_end - - Invalidates the control dtime_prev if requested - ''' - tm = data.datetime.time(0) # get time part - - # Same price for all bars - if self.p.fill_price > 0: - price = self.p.fill_price - elif not self.p.fill_price: - price = data.close[-1] - elif self.p.fill_price == -1: - price = (data.high[-1] + data.low[-1]) / 2.0 - - while lastdt < dt: - lastdt += self.ONEDAY - - # Prepare an array of the needed size - bar = [float('Nan')] * data.size() - # Fill the datetime - bar[data.DateTime] = data.date2num(datetime.combine(lastdt, tm)) - - # Fill price fields - for pricetype in [data.Open, data.High, data.Low, data.Close]: - bar[pricetype] = price - - # Fill volume and open interest - bar[data.Volume] = self.p.fill_vol - bar[data.OpenInterest] = self.p.fill_oi - - # Fill extra lines the data feed may have defined beyond DateTime - for i in range(data.DateTime + 1, data.size()): - bar[i] = data.lines[i][0] - - # Add this constructed bar to the stack of the stream - data._add2stack(bar) - - # Save to stack the bar that signaled the gap - data._save2stack(erase=True) diff --git a/spaces/Lianjd/stock_dashboard/backtrader/indicators/accdecoscillator.py b/spaces/Lianjd/stock_dashboard/backtrader/indicators/accdecoscillator.py deleted file mode 100644 index 263745431575ab6c3d2195eb2f6ca18eff5a0f0c..0000000000000000000000000000000000000000 --- a/spaces/Lianjd/stock_dashboard/backtrader/indicators/accdecoscillator.py +++ /dev/null @@ -1,59 +0,0 @@ -#!/usr/bin/env python -# -*- coding: utf-8; py-indent-offset:4 -*- 
-############################################################################### -# -# Copyright (C) 2015-2020 Daniel Rodriguez -# -# This program is free software: you can redistribute it and/or modify -# it under the terms of the GNU General Public License as published by -# the Free Ssoftware Foundation, either version 3 of the License, or -# (at your option) any later version. -# -# This program is distributed in the hope that it will be useful, -# but WITHOUT ANY WARRANTY; without even the implied warranty of -# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the -# GNU General Public License for more details. -# -# You should have received a copy of the GNU General Public License -# along with this program. If not, see . -# -############################################################################### -from __future__ import (absolute_import, division, print_function, - unicode_literals) - -import backtrader as bt -from . import MovAv, AwesomeOscillator - - -__all__ = ['AccelerationDecelerationOscillator', 'AccDeOsc'] - - -class AccelerationDecelerationOscillator(bt.Indicator): - ''' - Acceleration/Deceleration Technical Indicator (AC) measures acceleration - and deceleration of the current driving force. This indicator will change - direction before any changes in the driving force, which, it its turn, will - change its direction before the price. - - Formula: - - AcdDecOsc = AwesomeOscillator - SMA(AwesomeOscillator, period) - - See: - - https://www.metatrader5.com/en/terminal/help/indicators/bw_indicators/ao - - https://www.ifcmarkets.com/en/ntx-indicators/ntx-indicators-accelerator-decelerator-oscillator - - ''' - alias = ('AccDeOsc',) - lines = ('accde', ) - - params = ( - ('period', 5), - ('movav', MovAv.SMA), - ) - - plotlines = dict(accde=dict(_method='bar', alpha=0.50, width=1.0)) - - def __init__(self): - ao = AwesomeOscillator() - self.l.accde = ao - self.p.movav(ao, period=self.p.period) - super(AccelerationDecelerationOscillator, self).__init__() diff --git a/spaces/LightSY/W2L-TD/facelib/detection/yolov5face/models/yolo.py b/spaces/LightSY/W2L-TD/facelib/detection/yolov5face/models/yolo.py deleted file mode 100644 index 70845d972f0bcfd3632fcbac096b23e1b4d4d779..0000000000000000000000000000000000000000 --- a/spaces/LightSY/W2L-TD/facelib/detection/yolov5face/models/yolo.py +++ /dev/null @@ -1,235 +0,0 @@ -import math -from copy import deepcopy -from pathlib import Path - -import torch -import yaml # for torch hub -from torch import nn - -from facelib.detection.yolov5face.models.common import ( - C3, - NMS, - SPP, - AutoShape, - Bottleneck, - BottleneckCSP, - Concat, - Conv, - DWConv, - Focus, - ShuffleV2Block, - StemBlock, -) -from facelib.detection.yolov5face.models.experimental import CrossConv, MixConv2d -from facelib.detection.yolov5face.utils.autoanchor import check_anchor_order -from facelib.detection.yolov5face.utils.general import make_divisible -from facelib.detection.yolov5face.utils.torch_utils import copy_attr, fuse_conv_and_bn - - -class Detect(nn.Module): - stride = None # strides computed during build - export = False # onnx export - - def __init__(self, nc=80, anchors=(), ch=()): # detection layer - super().__init__() - self.nc = nc # number of classes - self.no = nc + 5 + 10 # number of outputs per anchor - - self.nl = len(anchors) # number of detection layers - self.na = len(anchors[0]) // 2 # number of anchors - self.grid = [torch.zeros(1)] * self.nl # init grid - a = torch.tensor(anchors).float().view(self.nl, -1, 2) - 
self.register_buffer("anchors", a) # shape(nl,na,2) - self.register_buffer("anchor_grid", a.clone().view(self.nl, 1, -1, 1, 1, 2)) # shape(nl,1,na,1,1,2) - self.m = nn.ModuleList(nn.Conv2d(x, self.no * self.na, 1) for x in ch) # output conv - - def forward(self, x): - z = [] # inference output - if self.export: - for i in range(self.nl): - x[i] = self.m[i](x[i]) - return x - for i in range(self.nl): - x[i] = self.m[i](x[i]) # conv - bs, _, ny, nx = x[i].shape # x(bs,255,20,20) to x(bs,3,20,20,85) - x[i] = x[i].view(bs, self.na, self.no, ny, nx).permute(0, 1, 3, 4, 2).contiguous() - - if not self.training: # inference - if self.grid[i].shape[2:4] != x[i].shape[2:4]: - self.grid[i] = self._make_grid(nx, ny).to(x[i].device) - - y = torch.full_like(x[i], 0) - y[..., [0, 1, 2, 3, 4, 15]] = x[i][..., [0, 1, 2, 3, 4, 15]].sigmoid() - y[..., 5:15] = x[i][..., 5:15] - - y[..., 0:2] = (y[..., 0:2] * 2.0 - 0.5 + self.grid[i].to(x[i].device)) * self.stride[i] # xy - y[..., 2:4] = (y[..., 2:4] * 2) ** 2 * self.anchor_grid[i] # wh - - y[..., 5:7] = ( - y[..., 5:7] * self.anchor_grid[i] + self.grid[i].to(x[i].device) * self.stride[i] - ) # landmark x1 y1 - y[..., 7:9] = ( - y[..., 7:9] * self.anchor_grid[i] + self.grid[i].to(x[i].device) * self.stride[i] - ) # landmark x2 y2 - y[..., 9:11] = ( - y[..., 9:11] * self.anchor_grid[i] + self.grid[i].to(x[i].device) * self.stride[i] - ) # landmark x3 y3 - y[..., 11:13] = ( - y[..., 11:13] * self.anchor_grid[i] + self.grid[i].to(x[i].device) * self.stride[i] - ) # landmark x4 y4 - y[..., 13:15] = ( - y[..., 13:15] * self.anchor_grid[i] + self.grid[i].to(x[i].device) * self.stride[i] - ) # landmark x5 y5 - - z.append(y.view(bs, -1, self.no)) - - return x if self.training else (torch.cat(z, 1), x) - - @staticmethod - def _make_grid(nx=20, ny=20): - # yv, xv = torch.meshgrid([torch.arange(ny), torch.arange(nx)], indexing="ij") # for pytorch>=1.10 - yv, xv = torch.meshgrid([torch.arange(ny), torch.arange(nx)]) - return torch.stack((xv, yv), 2).view((1, 1, ny, nx, 2)).float() - - -class Model(nn.Module): - def __init__(self, cfg="yolov5s.yaml", ch=3, nc=None): # model, input channels, number of classes - super().__init__() - self.yaml_file = Path(cfg).name - with Path(cfg).open(encoding="utf8") as f: - self.yaml = yaml.safe_load(f) # model dict - - # Define model - ch = self.yaml["ch"] = self.yaml.get("ch", ch) # input channels - if nc and nc != self.yaml["nc"]: - self.yaml["nc"] = nc # override yaml value - - self.model, self.save = parse_model(deepcopy(self.yaml), ch=[ch]) # model, savelist - self.names = [str(i) for i in range(self.yaml["nc"])] # default names - - # Build strides, anchors - m = self.model[-1] # Detect() - if isinstance(m, Detect): - s = 128 # 2x min stride - m.stride = torch.tensor([s / x.shape[-2] for x in self.forward(torch.zeros(1, ch, s, s))]) # forward - m.anchors /= m.stride.view(-1, 1, 1) - check_anchor_order(m) - self.stride = m.stride - self._initialize_biases() # only run once - - def forward(self, x): - return self.forward_once(x) # single-scale inference, train - - def forward_once(self, x): - y = [] # outputs - for m in self.model: - if m.f != -1: # if not from previous layer - x = y[m.f] if isinstance(m.f, int) else [x if j == -1 else y[j] for j in m.f] # from earlier layers - - x = m(x) # run - y.append(x if m.i in self.save else None) # save output - - return x - - def _initialize_biases(self, cf=None): # initialize biases into Detect(), cf is class frequency - # https://arxiv.org/abs/1708.02002 section 3.3 - m = self.model[-1] # 
Detect() module - for mi, s in zip(m.m, m.stride): # from - b = mi.bias.view(m.na, -1) # conv.bias(255) to (3,85) - b.data[:, 4] += math.log(8 / (640 / s) ** 2) # obj (8 objects per 640 image) - b.data[:, 5:] += math.log(0.6 / (m.nc - 0.99)) if cf is None else torch.log(cf / cf.sum()) # cls - mi.bias = torch.nn.Parameter(b.view(-1), requires_grad=True) - - def _print_biases(self): - m = self.model[-1] # Detect() module - for mi in m.m: # from - b = mi.bias.detach().view(m.na, -1).T # conv.bias(255) to (3,85) - print(("%6g Conv2d.bias:" + "%10.3g" * 6) % (mi.weight.shape[1], *b[:5].mean(1).tolist(), b[5:].mean())) - - def fuse(self): # fuse model Conv2d() + BatchNorm2d() layers - print("Fusing layers... ") - for m in self.model.modules(): - if isinstance(m, Conv) and hasattr(m, "bn"): - m.conv = fuse_conv_and_bn(m.conv, m.bn) # update conv - delattr(m, "bn") # remove batchnorm - m.forward = m.fuseforward # update forward - elif type(m) is nn.Upsample: - m.recompute_scale_factor = None # torch 1.11.0 compatibility - return self - - def nms(self, mode=True): # add or remove NMS module - present = isinstance(self.model[-1], NMS) # last layer is NMS - if mode and not present: - print("Adding NMS... ") - m = NMS() # module - m.f = -1 # from - m.i = self.model[-1].i + 1 # index - self.model.add_module(name=str(m.i), module=m) # add - self.eval() - elif not mode and present: - print("Removing NMS... ") - self.model = self.model[:-1] # remove - return self - - def autoshape(self): # add autoShape module - print("Adding autoShape... ") - m = AutoShape(self) # wrap model - copy_attr(m, self, include=("yaml", "nc", "hyp", "names", "stride"), exclude=()) # copy attributes - return m - - -def parse_model(d, ch): # model_dict, input_channels(3) - anchors, nc, gd, gw = d["anchors"], d["nc"], d["depth_multiple"], d["width_multiple"] - na = (len(anchors[0]) // 2) if isinstance(anchors, list) else anchors # number of anchors - no = na * (nc + 5) # number of outputs = anchors * (classes + 5) - - layers, save, c2 = [], [], ch[-1] # layers, savelist, ch out - for i, (f, n, m, args) in enumerate(d["backbone"] + d["head"]): # from, number, module, args - m = eval(m) if isinstance(m, str) else m # eval strings - for j, a in enumerate(args): - try: - args[j] = eval(a) if isinstance(a, str) else a # eval strings - except: - pass - - n = max(round(n * gd), 1) if n > 1 else n # depth gain - if m in [ - Conv, - Bottleneck, - SPP, - DWConv, - MixConv2d, - Focus, - CrossConv, - BottleneckCSP, - C3, - ShuffleV2Block, - StemBlock, - ]: - c1, c2 = ch[f], args[0] - - c2 = make_divisible(c2 * gw, 8) if c2 != no else c2 - - args = [c1, c2, *args[1:]] - if m in [BottleneckCSP, C3]: - args.insert(2, n) - n = 1 - elif m is nn.BatchNorm2d: - args = [ch[f]] - elif m is Concat: - c2 = sum(ch[-1 if x == -1 else x + 1] for x in f) - elif m is Detect: - args.append([ch[x + 1] for x in f]) - if isinstance(args[1], int): # number of anchors - args[1] = [list(range(args[1] * 2))] * len(f) - else: - c2 = ch[f] - - m_ = nn.Sequential(*(m(*args) for _ in range(n))) if n > 1 else m(*args) # module - t = str(m)[8:-2].replace("__main__.", "") # module type - np = sum(x.numel() for x in m_.parameters()) # number params - m_.i, m_.f, m_.type, m_.np = i, f, t, np # attach index, 'from' index, type, number params - save.extend(x % i for x in ([f] if isinstance(f, int) else f) if x != -1) # append to savelist - layers.append(m_) - ch.append(c2) - return nn.Sequential(*layers), sorted(save) diff --git a/spaces/Longtong/foodvision_mini_video/model.py 
b/spaces/Longtong/foodvision_mini_video/model.py deleted file mode 100644 index d6c9099e121a50c40ce4957fe5be2715be234ef3..0000000000000000000000000000000000000000 --- a/spaces/Longtong/foodvision_mini_video/model.py +++ /dev/null @@ -1,33 +0,0 @@ -import torch -from torch import nn -from torchvision.models import efficientnet_b2, EfficientNet_B2_Weights - - -def create_effnetb2_model(num_classes:int=3, seed:int=42): - - torch.manual_seed(seed) - torch.cuda.manual_seed(seed) - - # 1. Setup pretrained EffNetB2,weights - weights = EfficientNet_B2_Weights.DEFAULT - - # 2. Create transforms - transform = weights.transforms() - - # 3. Create the model - model = efficientnet_b2(weights=weights) - - # 4. freeze all layers - for param in model.parameters(): - param.requires_grad=False - - # 5. number of hidden units - in_features = list(model.classifier.children())[1].in_features - - # 6. create a new classifier - model.classifier = nn.Sequential( - nn.Dropout(0.3, inplace=True), - nn.Linear(in_features=in_features, out_features=num_classes) - ) - - return model, transform diff --git a/spaces/MCkernick/Image_Restoration_Colorization/Face_Enhancement/models/networks/sync_batchnorm/__init__.py b/spaces/MCkernick/Image_Restoration_Colorization/Face_Enhancement/models/networks/sync_batchnorm/__init__.py deleted file mode 100644 index 6d9b36c74b1808b56ded68cf080a689db7e0ee4e..0000000000000000000000000000000000000000 --- a/spaces/MCkernick/Image_Restoration_Colorization/Face_Enhancement/models/networks/sync_batchnorm/__init__.py +++ /dev/null @@ -1,14 +0,0 @@ -# -*- coding: utf-8 -*- -# File : __init__.py -# Author : Jiayuan Mao -# Email : maojiayuan@gmail.com -# Date : 27/01/2018 -# -# This file is part of Synchronized-BatchNorm-PyTorch. -# https://github.com/vacancy/Synchronized-BatchNorm-PyTorch -# Distributed under MIT License. - -from .batchnorm import set_sbn_eps_mode -from .batchnorm import SynchronizedBatchNorm1d, SynchronizedBatchNorm2d, SynchronizedBatchNorm3d -from .batchnorm import patch_sync_batchnorm, convert_model -from .replicate import DataParallelWithCallback, patch_replication_callback diff --git a/spaces/MarioWasTaken/BackroomsIG/style.css b/spaces/MarioWasTaken/BackroomsIG/style.css deleted file mode 100644 index e2f0dfe6d6d82641d419629d93eb311f046bfeb6..0000000000000000000000000000000000000000 --- a/spaces/MarioWasTaken/BackroomsIG/style.css +++ /dev/null @@ -1,28 +0,0 @@ -body { - padding: 2rem; - font-family: -apple-system, BlinkMacSystemFont, "Arial", sans-serif; -} - -h1 { - font-size: 16px; - margin-top: 0; -} - -p { - color: rgb(107, 114, 128); - font-size: 15px; - margin-bottom: 10px; - margin-top: 5px; -} - -.card { - max-width: 620px; - margin: 0 auto; - padding: 16px; - border: 1px solid yellow; - border-radius: 16px; -} - -.card p:last-child { - margin-bottom: 0; -} diff --git a/spaces/MathysL/AutoGPT4/autogpt/speech/gtts.py b/spaces/MathysL/AutoGPT4/autogpt/speech/gtts.py deleted file mode 100644 index 1c3e9cae0567428582891b11eca42f82a64f5c8e..0000000000000000000000000000000000000000 --- a/spaces/MathysL/AutoGPT4/autogpt/speech/gtts.py +++ /dev/null @@ -1,22 +0,0 @@ -""" GTTS Voice. 
""" -import os - -import gtts -from playsound import playsound - -from autogpt.speech.base import VoiceBase - - -class GTTSVoice(VoiceBase): - """GTTS Voice.""" - - def _setup(self) -> None: - pass - - def _speech(self, text: str, _: int = 0) -> bool: - """Play the given text.""" - tts = gtts.gTTS(text) - tts.save("speech.mp3") - playsound("speech.mp3", True) - os.remove("speech.mp3") - return True diff --git a/spaces/Metatron/LEO/README.md b/spaces/Metatron/LEO/README.md deleted file mode 100644 index dd991024dfba5dc925c501e93efb596f97cb4321..0000000000000000000000000000000000000000 --- a/spaces/Metatron/LEO/README.md +++ /dev/null @@ -1,11 +0,0 @@ ---- -title: Zenml Server -emoji: 🧘 -colorFrom: purple -colorTo: green -sdk: docker -pinned: false -app_port: 8080 -license: creativeml-openrail-m -duplicated_from: zenml/zenml ---- diff --git a/spaces/Mikey211/computing/README.md b/spaces/Mikey211/computing/README.md deleted file mode 100644 index 64aac8d889ef998091df3446c704b455fad6b9a6..0000000000000000000000000000000000000000 --- a/spaces/Mikey211/computing/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Computing -emoji: 🚀 -colorFrom: blue -colorTo: purple -sdk: gradio -sdk_version: 3.15.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/MountLiteraSwd/mount_ai_school/README.md b/spaces/MountLiteraSwd/mount_ai_school/README.md deleted file mode 100644 index 33c8a64c858b51d14ddc26c68e6797222492376e..0000000000000000000000000000000000000000 --- a/spaces/MountLiteraSwd/mount_ai_school/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Mount Ai School -emoji: 💻 -colorFrom: indigo -colorTo: yellow -sdk: gradio -sdk_version: 3.14.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Mountchicken/MAERec-Gradio/tools/dataset_converters/textdet/vintext_converter.py b/spaces/Mountchicken/MAERec-Gradio/tools/dataset_converters/textdet/vintext_converter.py deleted file mode 100644 index fb7a364d9591bec7785a73d571670121bb985978..0000000000000000000000000000000000000000 --- a/spaces/Mountchicken/MAERec-Gradio/tools/dataset_converters/textdet/vintext_converter.py +++ /dev/null @@ -1,171 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import argparse -import os -import os.path as osp - -import mmcv -import mmengine - -from mmocr.utils import dump_ocr_data - - -def collect_files(img_dir, gt_dir): - """Collect all images and their corresponding groundtruth files. - - Args: - img_dir (str): The image directory - gt_dir (str): The groundtruth directory - - Returns: - files (list): The list of tuples (img_file, groundtruth_file) - """ - assert isinstance(img_dir, str) - assert img_dir - assert isinstance(gt_dir, str) - assert gt_dir - - ann_list, imgs_list = [], [] - for img_file in os.listdir(img_dir): - ann_file = 'gt_' + str(int(img_file[2:6])) + '.txt' - ann_list.append(osp.join(gt_dir, ann_file)) - imgs_list.append(osp.join(img_dir, img_file)) - - files = list(zip(imgs_list, ann_list)) - assert len(files), f'No images found in {img_dir}' - print(f'Loaded {len(files)} images from {img_dir}') - - return files - - -def collect_annotations(files, nproc=1): - """Collect the annotation information. 
- - Args: - files (list): The list of tuples (image_file, groundtruth_file) - nproc (int): The number of process to collect annotations - - Returns: - images (list): The list of image information dicts - """ - assert isinstance(files, list) - assert isinstance(nproc, int) - - if nproc > 1: - images = mmengine.track_parallel_progress( - load_img_info, files, nproc=nproc) - else: - images = mmengine.track_progress(load_img_info, files) - - return images - - -def load_img_info(files): - """Load the information of one image. - - Args: - files (tuple): The tuple of (img_file, groundtruth_file) - - Returns: - img_info (dict): The dict of the img and annotation information - """ - assert isinstance(files, tuple) - - img_file, gt_file = files - assert int(osp.basename(gt_file)[3:-4]) == int( - osp.basename(img_file)[2:-4]) - # read imgs while ignoring orientations - img = mmcv.imread(img_file, 'unchanged') - - img_info = dict( - file_name=osp.basename(img_file), - height=img.shape[0], - width=img.shape[1], - segm_file=osp.basename(gt_file)) - - if osp.splitext(gt_file)[1] == '.txt': - img_info = load_txt_info(gt_file, img_info) - else: - raise NotImplementedError - - return img_info - - -def load_txt_info(gt_file, img_info): - """Collect the annotation information. - - The annotation format is as the following: - x1,y1,x2,y2,x3,y3,x4,y4,text - 118,15,147,15,148,46,118,46,LƯỢNG - 149,9,165,9,165,43,150,43,TỐT - 167,9,180,9,179,43,167,42,ĐỂ - 181,12,193,12,193,43,181,43,CÓ - 195,13,215,14,215,46,196,46,VIỆC - 217,13,237,14,239,47,217,46,LÀM, - - Args: - gt_file (str): The path to ground-truth - img_info (dict): The dict of the img and annotation information - - Returns: - img_info (dict): The dict of the img and annotation information - """ - - with open(gt_file, encoding='utf-8') as f: - anno_info = [] - for line in f: - line = line.strip('\n') - ann = line.split(',') - bbox = ann[0:8] - word = line[len(','.join(bbox)) + 1:] - bbox = [int(coord) for coord in bbox] - segmentation = bbox - x_min = min(bbox[0], bbox[2], bbox[4], bbox[6]) - x_max = max(bbox[0], bbox[2], bbox[4], bbox[6]) - y_min = min(bbox[1], bbox[3], bbox[5], bbox[7]) - y_max = max(bbox[1], bbox[3], bbox[5], bbox[7]) - w = x_max - x_min - h = y_max - y_min - bbox = [x_min, y_min, w, h] - iscrowd = 1 if word == '###' else 0 - - anno = dict( - iscrowd=iscrowd, - category_id=1, - bbox=bbox, - area=w * h, - segmentation=[segmentation]) - anno_info.append(anno) - - img_info.update(anno_info=anno_info) - - return img_info - - -def parse_args(): - parser = argparse.ArgumentParser( - description='Generate training and test set of VinText ') - parser.add_argument('root_path', help='Root dir path of VinText') - parser.add_argument( - '--nproc', default=1, type=int, help='Number of processes') - args = parser.parse_args() - return args - - -def main(): - args = parse_args() - root_path = args.root_path - for split in ['training', 'test', 'unseen_test']: - print(f'Processing {split} set...') - with mmengine.Timer( - print_tmpl='It takes {}s to convert VinText annotation'): - files = collect_files( - osp.join(root_path, 'imgs', split), - osp.join(root_path, 'annotations')) - image_infos = collect_annotations(files, nproc=args.nproc) - dump_ocr_data(image_infos, - osp.join(root_path, 'instances_' + split + '.json'), - 'textdet') - - -if __name__ == '__main__': - main() diff --git a/spaces/MrBodean/VoiceClone/encoder_train.py b/spaces/MrBodean/VoiceClone/encoder_train.py deleted file mode 100644 index 
b8740a894d615aadfe529cb36068fc8e3496125f..0000000000000000000000000000000000000000 --- a/spaces/MrBodean/VoiceClone/encoder_train.py +++ /dev/null @@ -1,47 +0,0 @@ -from utils.argutils import print_args -from encoder.train import train -from pathlib import Path -import argparse - - -if __name__ == "__main__": - parser = argparse.ArgumentParser( - description="Trains the speaker encoder. You must have run encoder_preprocess.py first.", - formatter_class=argparse.ArgumentDefaultsHelpFormatter - ) - - parser.add_argument("run_id", type=str, help= \ - "Name for this model instance. If a model state from the same run ID was previously " - "saved, the training will restart from there. Pass -f to overwrite saved states and " - "restart from scratch.") - parser.add_argument("clean_data_root", type=Path, help= \ - "Path to the output directory of encoder_preprocess.py. If you left the default " - "output directory when preprocessing, it should be /SV2TTS/encoder/.") - parser.add_argument("-m", "--models_dir", type=Path, default="encoder/saved_models/", help=\ - "Path to the output directory that will contain the saved model weights, as well as " - "backups of those weights and plots generated during training.") - parser.add_argument("-v", "--vis_every", type=int, default=10, help= \ - "Number of steps between updates of the loss and the plots.") - parser.add_argument("-u", "--umap_every", type=int, default=100, help= \ - "Number of steps between updates of the umap projection. Set to 0 to never update the " - "projections.") - parser.add_argument("-s", "--save_every", type=int, default=500, help= \ - "Number of steps between updates of the model on the disk. Set to 0 to never save the " - "model.") - parser.add_argument("-b", "--backup_every", type=int, default=7500, help= \ - "Number of steps between backups of the model. Set to 0 to never make backups of the " - "model.") - parser.add_argument("-f", "--force_restart", action="store_true", help= \ - "Do not load any saved model.") - parser.add_argument("--visdom_server", type=str, default="http://localhost") - parser.add_argument("--no_visdom", action="store_true", help= \ - "Disable visdom.") - args = parser.parse_args() - - # Process the arguments - args.models_dir.mkdir(exist_ok=True) - - # Run the training - print_args(args, parser) - train(**vars(args)) - \ No newline at end of file diff --git a/spaces/NATSpeech/DiffSpeech/modules/commons/conformer/espnet_transformer_attn.py b/spaces/NATSpeech/DiffSpeech/modules/commons/conformer/espnet_transformer_attn.py deleted file mode 100644 index a479a27ea6fd4202359da435234408ba074f7577..0000000000000000000000000000000000000000 --- a/spaces/NATSpeech/DiffSpeech/modules/commons/conformer/espnet_transformer_attn.py +++ /dev/null @@ -1,186 +0,0 @@ -#!/usr/bin/env python3 -# -*- coding: utf-8 -*- - -# Copyright 2019 Shigeki Karita -# Apache 2.0 (http://www.apache.org/licenses/LICENSE-2.0) - -"""Multi-Head Attention layer definition.""" - -import math - -import numpy -import torch -from torch import nn - - -class MultiHeadedAttention(nn.Module): - """Multi-Head Attention layer. - Args: - n_head (int): The number of heads. - n_feat (int): The number of features. - dropout_rate (float): Dropout rate. 
- """ - - def __init__(self, n_head, n_feat, dropout_rate): - """Construct an MultiHeadedAttention object.""" - super(MultiHeadedAttention, self).__init__() - assert n_feat % n_head == 0 - # We assume d_v always equals d_k - self.d_k = n_feat // n_head - self.h = n_head - self.linear_q = nn.Linear(n_feat, n_feat) - self.linear_k = nn.Linear(n_feat, n_feat) - self.linear_v = nn.Linear(n_feat, n_feat) - self.linear_out = nn.Linear(n_feat, n_feat) - self.attn = None - self.dropout = nn.Dropout(p=dropout_rate) - - def forward_qkv(self, query, key, value): - """Transform query, key and value. - Args: - query (torch.Tensor): Query tensor (#batch, time1, size). - key (torch.Tensor): Key tensor (#batch, time2, size). - value (torch.Tensor): Value tensor (#batch, time2, size). - Returns: - torch.Tensor: Transformed query tensor (#batch, n_head, time1, d_k). - torch.Tensor: Transformed key tensor (#batch, n_head, time2, d_k). - torch.Tensor: Transformed value tensor (#batch, n_head, time2, d_k). - """ - n_batch = query.size(0) - q = self.linear_q(query).view(n_batch, -1, self.h, self.d_k) - k = self.linear_k(key).view(n_batch, -1, self.h, self.d_k) - v = self.linear_v(value).view(n_batch, -1, self.h, self.d_k) - q = q.transpose(1, 2) # (batch, head, time1, d_k) - k = k.transpose(1, 2) # (batch, head, time2, d_k) - v = v.transpose(1, 2) # (batch, head, time2, d_k) - - return q, k, v - - def forward_attention(self, value, scores, mask): - """Compute attention context vector. - Args: - value (torch.Tensor): Transformed value (#batch, n_head, time2, d_k). - scores (torch.Tensor): Attention score (#batch, n_head, time1, time2). - mask (torch.Tensor): Mask (#batch, 1, time2) or (#batch, time1, time2). - Returns: - torch.Tensor: Transformed value (#batch, time1, d_model) - weighted by the attention score (#batch, time1, time2). - """ - n_batch = value.size(0) - if mask is not None: - mask = mask.unsqueeze(1).eq(0) # (batch, 1, *, time2) - min_value = float( - numpy.finfo(torch.tensor(0, dtype=scores.dtype).numpy().dtype).min - ) - scores = scores.masked_fill(mask, min_value) - self.attn = torch.softmax(scores, dim=-1).masked_fill( - mask, 0.0 - ) # (batch, head, time1, time2) - else: - self.attn = torch.softmax(scores, dim=-1) # (batch, head, time1, time2) - - p_attn = self.dropout(self.attn) - x = torch.matmul(p_attn, value) # (batch, head, time1, d_k) - x = ( - x.transpose(1, 2).contiguous().view(n_batch, -1, self.h * self.d_k) - ) # (batch, time1, d_model) - - return self.linear_out(x) # (batch, time1, d_model) - - def forward(self, query, key, value, mask): - """Compute scaled dot product attention. - Args: - query (torch.Tensor): Query tensor (#batch, time1, size). - key (torch.Tensor): Key tensor (#batch, time2, size). - value (torch.Tensor): Value tensor (#batch, time2, size). - mask (torch.Tensor): Mask tensor (#batch, 1, time2) or - (#batch, time1, time2). - Returns: - torch.Tensor: Output tensor (#batch, time1, d_model). - """ - q, k, v = self.forward_qkv(query, key, value) - scores = torch.matmul(q, k.transpose(-2, -1)) / math.sqrt(self.d_k) - return self.forward_attention(v, scores, mask) - - -class RelPositionMultiHeadedAttention(MultiHeadedAttention): - """Multi-Head Attention layer with relative position encoding. - Paper: https://arxiv.org/abs/1901.02860 - Args: - n_head (int): The number of heads. - n_feat (int): The number of features. - dropout_rate (float): Dropout rate. 
- """ - - def __init__(self, n_head, n_feat, dropout_rate): - """Construct an RelPositionMultiHeadedAttention object.""" - super().__init__(n_head, n_feat, dropout_rate) - # linear transformation for positional ecoding - self.linear_pos = nn.Linear(n_feat, n_feat, bias=False) - # these two learnable bias are used in matrix c and matrix d - # as described in https://arxiv.org/abs/1901.02860 Section 3.3 - self.pos_bias_u = nn.Parameter(torch.Tensor(self.h, self.d_k)) - self.pos_bias_v = nn.Parameter(torch.Tensor(self.h, self.d_k)) - torch.nn.init.xavier_uniform_(self.pos_bias_u) - torch.nn.init.xavier_uniform_(self.pos_bias_v) - - def rel_shift(self, x, zero_triu=False): - """Compute relative positinal encoding. - Args: - x (torch.Tensor): Input tensor (batch, time, size). - zero_triu (bool): If true, return the lower triangular part of the matrix. - Returns: - torch.Tensor: Output tensor. - """ - zero_pad = torch.zeros((*x.size()[:3], 1), device=x.device, dtype=x.dtype) - x_padded = torch.cat([zero_pad, x], dim=-1) - - x_padded = x_padded.view(*x.size()[:2], x.size(3) + 1, x.size(2)) - x = x_padded[:, :, 1:].view_as(x) - - if zero_triu: - ones = torch.ones((x.size(2), x.size(3))) - x = x * torch.tril(ones, x.size(3) - x.size(2))[None, None, :, :] - - return x - - def forward(self, query, key, value, pos_emb, mask): - """Compute 'Scaled Dot Product Attention' with rel. positional encoding. - Args: - query (torch.Tensor): Query tensor (#batch, time1, size). - key (torch.Tensor): Key tensor (#batch, time2, size). - value (torch.Tensor): Value tensor (#batch, time2, size). - pos_emb (torch.Tensor): Positional embedding tensor (#batch, time2, size). - mask (torch.Tensor): Mask tensor (#batch, 1, time2) or - (#batch, time1, time2). - Returns: - torch.Tensor: Output tensor (#batch, time1, d_model). 
- """ - q, k, v = self.forward_qkv(query, key, value) - q = q.transpose(1, 2) # (batch, time1, head, d_k) - - n_batch_pos = pos_emb.size(0) - p = self.linear_pos(pos_emb).view(n_batch_pos, -1, self.h, self.d_k) - p = p.transpose(1, 2) # (batch, head, time1, d_k) - - # (batch, head, time1, d_k) - q_with_bias_u = (q + self.pos_bias_u).transpose(1, 2) - # (batch, head, time1, d_k) - q_with_bias_v = (q + self.pos_bias_v).transpose(1, 2) - - # compute attention score - # first compute matrix a and matrix c - # as described in https://arxiv.org/abs/1901.02860 Section 3.3 - # (batch, head, time1, time2) - matrix_ac = torch.matmul(q_with_bias_u, k.transpose(-2, -1)) - - # compute matrix b and matrix d - # (batch, head, time1, time2) - matrix_bd = torch.matmul(q_with_bias_v, p.transpose(-2, -1)) - matrix_bd = self.rel_shift(matrix_bd) - - scores = (matrix_ac + matrix_bd) / math.sqrt( - self.d_k - ) # (batch, head, time1, time2) - - return self.forward_attention(v, scores, mask) diff --git a/spaces/NATSpeech/PortaSpeech/mfa_usr/install_mfa.sh b/spaces/NATSpeech/PortaSpeech/mfa_usr/install_mfa.sh deleted file mode 100644 index c694cf307b60cd96c254bc2089f0745d9dd602c2..0000000000000000000000000000000000000000 --- a/spaces/NATSpeech/PortaSpeech/mfa_usr/install_mfa.sh +++ /dev/null @@ -1,6 +0,0 @@ -#!/bin/bash -set -e -pip uninstall -y typing -pip install --ignore-requires-python git+https://github.com/MontrealCorpusTools/Montreal-Forced-Aligner.git@v2.0.0b3 -mfa thirdparty download -sudo apt install -y libopenblas-base libsox-fmt-mp3 libfst8 libfst-tools \ No newline at end of file diff --git a/spaces/Nickhilearla135095/Google-Drive/app.py b/spaces/Nickhilearla135095/Google-Drive/app.py deleted file mode 100644 index b6a49ffb3ba2da6cc7af769fbde25295ae15b784..0000000000000000000000000000000000000000 --- a/spaces/Nickhilearla135095/Google-Drive/app.py +++ /dev/null @@ -1,55 +0,0 @@ -import os -import openai -import gradio as gr - -#if you have OpenAI API key as an environment variable, enable the below -#openai.api_key = os.getenv("OPENAI_API_KEY") - -#if you have OpenAI API key as a string, enable the below -openai.api_key = "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" - -start_sequence = "\nAI:" -restart_sequence = "\nHuman: " - -prompt = "The following is a conversation with an AI assistant. The assistant is helpful, creative, clever, and very friendly.\n\nHuman: Hello, who are you?\nAI: I am an AI created by OpenAI. How can I help you today?\nHuman: " - -def openai_create(prompt): - - response = openai.Completion.create( - model="text-davinci-003", - prompt=prompt, - temperature=0.9, - max_tokens=500, - top_p=1, - frequency_penalty=0, - presence_penalty=0.6, - stop=[" Human:", " AI:"] - ) - - return response.choices[0].text - - - -def chatgpt_clone(input, history): - history = history or [] - s = list(sum(history, ())) - s.append(input) - inp = ' '.join(s) - output = openai_create(inp) - history.append((input, output)) - return history, history - - -block = gr.Blocks() - - -with block: - gr.Markdown("""

        Nickhils Chat GPT

        - """) - chatbot = gr.Chatbot() - message = gr.Textbox(placeholder=prompt) - state = gr.State() - submit = gr.Button("SEND") - submit.click(chatgpt_clone, inputs=[message, state], outputs=[chatbot, state]) - -block.launch(debug = True) diff --git a/spaces/NimaBoscarino/climategan/utils_scripts/make_640_masker_validation_set.py b/spaces/NimaBoscarino/climategan/utils_scripts/make_640_masker_validation_set.py deleted file mode 100644 index 6ca26f553387d65b9bc9fd47074f7e54d10c48e4..0000000000000000000000000000000000000000 --- a/spaces/NimaBoscarino/climategan/utils_scripts/make_640_masker_validation_set.py +++ /dev/null @@ -1,198 +0,0 @@ -import sys -from pathlib import Path -from skimage.io import imread, imsave -from skimage.transform import resize -from skimage.color import rgba2rgb -from argparse import ArgumentParser -import numpy as np - -IMG_EXTENSIONS = set( - [".jpg", ".JPG", ".jpeg", ".JPEG", ".png", ".PNG", ".ppm", ".PPM", ".bmp", ".BMP"] -) - - -def is_image_file(filename): - """Check that a file's name points to a known image format - """ - if isinstance(filename, Path): - return filename.suffix in IMG_EXTENSIONS - - return Path(filename).suffix in IMG_EXTENSIONS - - -def find_images(path, recursive=False): - """ - Get a list of all images contained in a directory: - - - path.glob("*") if not recursive - - path.glob("**/*") if recursive - """ - p = Path(path) - assert p.exists() - assert p.is_dir() - pattern = "*" - if recursive: - pattern += "*/*" - - return [i for i in p.glob(pattern) if i.is_file() and is_image_file(i)] - - -def uint8(array): - return array.astype(np.uint8) - - -def crop_and_resize(image_path, label_path): - """ - Resizes an image so that it keeps the aspect ratio and the smallest dimensions - is 640, then crops this resized image in its center so that the output is 640x640 - without aspect ratio distortion - - Args: - image_path (Path or str): Path to an image - label_path (Path or str): Path to the image's associated label - - Returns: - tuple((np.ndarray, np.ndarray)): (new image, new label) - """ - dolab = label_path is not None - - img = imread(image_path) - if dolab: - lab = imread(label_path) - - if img.shape[-1] == 4: - img = uint8(rgba2rgb(img) * 255) - - if dolab and img.shape != lab.shape: - print("\nWARNING: shape mismatch. 
Entering breakpoint to investigate:") - breakpoint() - - # resize keeping aspect ratio: smallest dim is 640 - h, w = img.shape[:2] - if h < w: - size = (640, int(640 * w / h)) - else: - size = (int(640 * h / w), 640) - - r_img = resize(img, size, preserve_range=True, anti_aliasing=True) - r_img = uint8(r_img) - - if dolab: - # nearest neighbor for labels - r_lab = resize(lab, size, preserve_range=True, anti_aliasing=False, order=0) - r_lab = uint8(r_lab) - - # crop in the center - H, W = r_img.shape[:2] - - top = (H - 640) // 2 - left = (W - 640) // 2 - - rc_img = r_img[top : top + 640, left : left + 640, :] - if dolab: - rc_lab = r_lab[top : top + 640, left : left + 640, :] - else: - rc_lab = None - - return rc_img, rc_lab - - -def label(img, label, alpha=0.4): - return uint8(alpha * label + (1 - alpha) * img) - - -if __name__ == "__main__": - parser = ArgumentParser() - parser.add_argument( - "-i", "--input_dir", type=str, help="Directory to recursively read images from" - ) - parser.add_argument( - "-o", - "--output_dir", - type=str, - help="Where to writ the result of the script," - + " keeping the input dir's structure", - ) - parser.add_argument( - "--no_labels", - action="store_true", - help="Only process images, don't look for labels", - ) - parser.add_argument( - "--store_labeled", - action="store_true", - help="Store a superposition of the label and the image in out/labeled/", - ) - args = parser.parse_args() - - dolab = not args.no_labels - dolabeled = args.store_labeled - - input_base = Path(args.input_dir).expanduser().resolve() - output_base = Path(args.output_dir).expanduser().resolve() - - input_images = input_base / "imgs" - output_images = output_base / "imgs" - - if dolab: - input_labels = input_base / "labels" - output_labels = output_base / "labels" - if dolabeled: - output_labeled = output_base / "labeled" - - print("Input images:", str(input_images)) - print("Output images:", str(output_images)) - if dolab: - print("Input labels:", str(input_labels)) - print("Output labels:", str(output_labels)) - if dolabeled: - print("Output labeled:", str(output_labeled)) - else: - print("NO LABEL PROCESSING (args.no_labels is specified)") - print() - - assert input_images.exists() - if dolab: - assert input_labels.exists() - - if output_base.exists(): - if ( - "n" - in input( - "WARNING: output dir already exists." - + " Overwrite its content? 
(y/n, default: y)" - ).lower() - ): - sys.exit() - - output_images.mkdir(parents=True, exist_ok=True) - if dolab: - output_labels.mkdir(parents=True, exist_ok=True) - if dolabeled: - output_labeled.mkdir(parents=True, exist_ok=True) - - images_paths = list( - map(Path, sorted((map(str, find_images(input_images, recursive=True))))) - ) - if dolab: - labels_paths = list( - map(Path, sorted((map(str, find_images(input_labels, recursive=True))))) - ) - else: - labels_paths = [None] * len(images_paths) - - for i, (image_path, label_path) in enumerate(zip(images_paths, labels_paths)): - print( - f"Processing {i + 1 :3} / {len(images_paths)} : {image_path.name}", - end="\r", - flush=True, - ) - processed_image, processed_label = crop_and_resize(image_path, label_path) - imsave(output_images / f"{image_path.stem}.png", processed_image) - if dolab: - imsave(output_labels / f"{label_path.stem}.png", processed_label) - if dolabeled: - labeled = label(processed_image, processed_label) - imsave(output_labeled / f"{image_path.stem}.png", labeled) - - print("\nDone.") diff --git a/spaces/Nultx/stable-diffusion-webui-cpu/README.md b/spaces/Nultx/stable-diffusion-webui-cpu/README.md deleted file mode 100644 index 63b3498a69a69ebc4e5b5648050854d58529283a..0000000000000000000000000000000000000000 --- a/spaces/Nultx/stable-diffusion-webui-cpu/README.md +++ /dev/null @@ -1,14 +0,0 @@ ---- -title: Stable Diffusion Webui on Cpu -emoji: 🏃 -colorFrom: pink -colorTo: purple -sdk: gradio -sdk_version: 3.31.0 -app_file: app.py -pinned: false -python_version: 3.10.6 -duplicated_from: DreamSunny/stable-diffusion-webui-cpu ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/hubert/measure_teacher_quality.py b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/hubert/measure_teacher_quality.py deleted file mode 100644 index 92279b2214bb2ba4a99aea92098907ef4f55821b..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/hubert/measure_teacher_quality.py +++ /dev/null @@ -1,241 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
- -import numpy as np -import os.path as op -import re -from tabulate import tabulate -from collections import Counter - - -def comp_purity(p_xy, axis): - max_p = p_xy.max(axis=axis) - marg_p = p_xy.sum(axis=axis) - indv_pur = max_p / marg_p - aggr_pur = max_p.sum() - return indv_pur, aggr_pur - - -def comp_entropy(p): - return (-p * np.log(p + 1e-8)).sum() - - -def comp_norm_mutual_info(p_xy): - p_x = p_xy.sum(axis=1, keepdims=True) - p_y = p_xy.sum(axis=0, keepdims=True) - pmi = np.log(p_xy / np.matmul(p_x, p_y) + 1e-8) - mi = (p_xy * pmi).sum() - h_x = comp_entropy(p_x) - h_y = comp_entropy(p_y) - return mi, mi / h_x, mi / h_y, h_x, h_y - - -def pad(labs, n): - if n == 0: - return np.array(labs) - return np.concatenate([[labs[0]] * n, labs, [labs[-1]] * n]) - - -def comp_avg_seg_dur(labs_list): - n_frms = 0 - n_segs = 0 - for labs in labs_list: - labs = np.array(labs) - edges = np.zeros(len(labs)).astype(bool) - edges[0] = True - edges[1:] = labs[1:] != labs[:-1] - n_frms += len(edges) - n_segs += edges.astype(int).sum() - return n_frms / n_segs - - -def comp_joint_prob(uid2refs, uid2hyps): - """ - Args: - pad: padding for spliced-feature derived labels - """ - cnts = Counter() - skipped = [] - abs_frmdiff = 0 - for uid in uid2refs: - if uid not in uid2hyps: - skipped.append(uid) - continue - refs = uid2refs[uid] - hyps = uid2hyps[uid] - abs_frmdiff += abs(len(refs) - len(hyps)) - min_len = min(len(refs), len(hyps)) - refs = refs[:min_len] - hyps = hyps[:min_len] - cnts.update(zip(refs, hyps)) - tot = sum(cnts.values()) - - ref_set = sorted({ref for ref, _ in cnts.keys()}) - hyp_set = sorted({hyp for _, hyp in cnts.keys()}) - ref2pid = dict(zip(ref_set, range(len(ref_set)))) - hyp2lid = dict(zip(hyp_set, range(len(hyp_set)))) - # print(hyp_set) - p_xy = np.zeros((len(ref2pid), len(hyp2lid)), dtype=float) - for (ref, hyp), cnt in cnts.items(): - p_xy[ref2pid[ref], hyp2lid[hyp]] = cnt - p_xy /= p_xy.sum() - return p_xy, ref2pid, hyp2lid, tot, abs_frmdiff, skipped - - -def read_phn(tsv_path, rm_stress=True): - uid2phns = {} - with open(tsv_path) as f: - for line in f: - uid, phns = line.rstrip().split("\t") - phns = phns.split(",") - if rm_stress: - phns = [re.sub("[0-9]", "", phn) for phn in phns] - uid2phns[uid] = phns - return uid2phns - - -def read_lab(tsv_path, lab_path, pad_len=0, upsample=1): - """ - tsv is needed to retrieve the uids for the labels - """ - with open(tsv_path) as f: - f.readline() - uids = [op.splitext(op.basename(line.rstrip().split()[0]))[0] for line in f] - with open(lab_path) as f: - labs_list = [pad(line.rstrip().split(), pad_len).repeat(upsample) for line in f] - assert len(uids) == len(labs_list) - return dict(zip(uids, labs_list)) - - -def main_lab_lab( - tsv_dir, - lab_dir, - lab_name, - lab_sets, - ref_dir, - ref_name, - pad_len=0, - upsample=1, - verbose=False, -): - # assume tsv_dir is the same for both the reference and the hypotheses - tsv_dir = lab_dir if tsv_dir is None else tsv_dir - - uid2refs = {} - for s in lab_sets: - uid2refs.update(read_lab(f"{tsv_dir}/{s}.tsv", f"{ref_dir}/{s}.{ref_name}")) - - uid2hyps = {} - for s in lab_sets: - uid2hyps.update( - read_lab( - f"{tsv_dir}/{s}.tsv", f"{lab_dir}/{s}.{lab_name}", pad_len, upsample - ) - ) - _main(uid2refs, uid2hyps, verbose) - - -def main_phn_lab( - tsv_dir, - lab_dir, - lab_name, - lab_sets, - phn_dir, - phn_sets, - pad_len=0, - upsample=1, - verbose=False, -): - uid2refs = {} - for s in phn_sets: - uid2refs.update(read_phn(f"{phn_dir}/{s}.tsv")) - - uid2hyps = {} - tsv_dir = lab_dir if 
tsv_dir is None else tsv_dir - for s in lab_sets: - uid2hyps.update( - read_lab( - f"{tsv_dir}/{s}.tsv", f"{lab_dir}/{s}.{lab_name}", pad_len, upsample - ) - ) - _main(uid2refs, uid2hyps, verbose) - - -def _main(uid2refs, uid2hyps, verbose): - (p_xy, ref2pid, hyp2lid, tot, frmdiff, skipped) = comp_joint_prob( - uid2refs, uid2hyps - ) - ref_pur_by_hyp, ref_pur = comp_purity(p_xy, axis=0) - hyp_pur_by_ref, hyp_pur = comp_purity(p_xy, axis=1) - (mi, mi_norm_by_ref, mi_norm_by_hyp, h_ref, h_hyp) = comp_norm_mutual_info(p_xy) - outputs = { - "ref pur": ref_pur, - "hyp pur": hyp_pur, - "H(ref)": h_ref, - "H(hyp)": h_hyp, - "MI": mi, - "MI/H(ref)": mi_norm_by_ref, - "ref segL": comp_avg_seg_dur(uid2refs.values()), - "hyp segL": comp_avg_seg_dur(uid2hyps.values()), - "p_xy shape": p_xy.shape, - "frm tot": tot, - "frm diff": frmdiff, - "utt tot": len(uid2refs), - "utt miss": len(skipped), - } - print(tabulate([outputs.values()], outputs.keys(), floatfmt=".4f")) - - -if __name__ == "__main__": - """ - compute quality of labels with respect to phone or another labels if set - """ - import argparse - - parser = argparse.ArgumentParser() - parser.add_argument("tsv_dir") - parser.add_argument("lab_dir") - parser.add_argument("lab_name") - parser.add_argument("--lab_sets", default=["valid"], type=str, nargs="+") - parser.add_argument( - "--phn_dir", - default="/checkpoint/wnhsu/data/librispeech/960h/fa/raw_phn/phone_frame_align_v1", - ) - parser.add_argument( - "--phn_sets", default=["dev-clean", "dev-other"], type=str, nargs="+" - ) - parser.add_argument("--pad_len", default=0, type=int, help="padding for hypotheses") - parser.add_argument( - "--upsample", default=1, type=int, help="upsample factor for hypotheses" - ) - parser.add_argument("--ref_lab_dir", default="") - parser.add_argument("--ref_lab_name", default="") - parser.add_argument("--verbose", action="store_true") - args = parser.parse_args() - - if args.ref_lab_dir and args.ref_lab_name: - main_lab_lab( - args.tsv_dir, - args.lab_dir, - args.lab_name, - args.lab_sets, - args.ref_lab_dir, - args.ref_lab_name, - args.pad_len, - args.upsample, - args.verbose, - ) - else: - main_phn_lab( - args.tsv_dir, - args.lab_dir, - args.lab_name, - args.lab_sets, - args.phn_dir, - args.phn_sets, - args.pad_len, - args.upsample, - args.verbose, - ) diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/multilingual/data_scripts/download_wmt19_and_before.py b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/multilingual/data_scripts/download_wmt19_and_before.py deleted file mode 100644 index 3465731eb3e55047c44d1b336a97e99cb3a89a53..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/multilingual/data_scripts/download_wmt19_and_before.py +++ /dev/null @@ -1,899 +0,0 @@ -from typing import NamedTuple, List -from urllib.parse import urlparse -import os, sys -import subprocess -from subprocess import check_call, check_output -import glob -import wget -import re -import multiprocessing as mp -from functools import partial -import pathlib -from collections import OrderedDict - -WORKDIR_ROOT = os.environ.get('WORKDIR_ROOT', None) - -if WORKDIR_ROOT is None or not WORKDIR_ROOT.strip(): - print('please specify your working directory root in OS environment variable WORKDIR_ROOT. 
Exitting..."') - sys.exit(-1) - -# scripts and data locations -CWD = os.getcwd() -UTILS = f"{CWD}/utils" - -MOSES = f"{UTILS}/mosesdecoder" -SGM_TOOL = f'{MOSES}/scripts/ems/support/input-from-sgm.perl' - -TMX2CORPUS = f"{UTILS}/tmx2corpus" -TMX_TOOL = f'python {TMX2CORPUS}/tmx2corpus.py' - -to_data_path = f'{WORKDIR_ROOT}/wmt' -download_to = f'{to_data_path}/downloads' -manually_downloads = f'{to_data_path}/downloads' -extract_to = f'{to_data_path}/extracted' -#DESTDIR=${WORKDIR_ROOT}/ML50/raw/ -raw_data = f'{WORKDIR_ROOT}/ML50/raw' -#### - -class DLDataset(NamedTuple): - name: str - train_urls: List[str] - valid_urls: List[str] - test_urls: List[str] - train_files_patterns: List[str] = [] - valid_files_patterns: List[str] = [] - test_files_patterns: List[str] = [] - - - -def bar_custom(current, total, width=80): - print("Downloading: %d%% [%d / %d] Ks" % (current / total * 100, current / 1000, total / 1000), end='\r') - -def get_downloaded_file(dl_folder, url): - if isinstance(url, tuple): - url, f = url - else: - url_f = urlparse(url) - # f = os.path.split(url_f.path)[-1] - f = '_'.join(url_f.path.split('/')[1:]) - return url, f"{dl_folder}/{f}" - -def download_parts_and_combine(dl_folder, urls, filename): - parts = [] - for url_record in urls: - url, part_file = get_downloaded_file(dl_folder, url_record) - if os.path.exists(part_file): - print(f'{part_file} has already been downloaded so skip') - else: - part_file = wget.download(url, part_file, bar=bar_custom) - parts.append(part_file) - - def get_combine_cmd(parts): - #default as tar.gz.?? - return f'cat {" ".join(parts)} > {filename}' - - combine_cmd = get_combine_cmd(parts) - call(combine_cmd, debug=True) - return filename - -def download_a_url(dl_folder, url): - url, filename = get_downloaded_file(dl_folder, url) - if os.path.exists(filename): - print(f'{filename} has already been downloaded so skip') - return filename - - print(f'downloading {url} to {filename}') - if isinstance(url, list) or isinstance(url, tuple): - download_parts_and_combine(dl_folder, url, filename) - else: - wget.download(url, filename, bar=bar_custom) - print(f'dowloaded: {filename}') - return filename - -def download_files(dl_folder, urls, completed_urls={}): - for url_record in urls: - url, _ = get_downloaded_file(dl_folder, url_record) - filename = download_a_url(dl_folder, url_record) - completed_urls[str(url)] = filename - return completed_urls - -def check_need_manual_downalod(dl_folder, to_manually_download_urls): - to_be_manually_dowloaded = [] - manually_completed_urls = {} - for url_record, instruction in to_manually_download_urls: - url, filename = get_downloaded_file(dl_folder, url_record) - if not os.path.exists(filename): - print(f'{url} need to be download manually, please download it manually following {instruction}; and copy it to {filename}') - to_be_manually_dowloaded.append((url, filename)) - else: - manually_completed_urls[url] = filename - # if len(to_be_manually_dowloaded) > 0: - # raise ValueError('Missing files that need to be downloaded manually; stop the process now.') - return to_be_manually_dowloaded - -def download_dataset(to_folder, dl_dataset, completed_urls={}): - download_files(to_folder, dl_dataset.train_urls, completed_urls) - download_files(to_folder, dl_dataset.valid_urls, completed_urls) - download_files(to_folder, dl_dataset.test_urls, completed_urls) - print('completed downloading') - return completed_urls - -def call(cmd, debug=False): - if debug: - print(cmd) - check_call(cmd, shell=True) - - -def 
get_extract_name(file_path): - path = os.path.split(file_path) - return path[-1] + '_extract' #.split('.')[0] - -def extract_file(downloaded_file, extract_folder, get_extract_name=get_extract_name, debug=False): - extract_name = get_extract_name(downloaded_file) - extract_to = f'{extract_folder}/{extract_name}' - os.makedirs(extract_to, exist_ok=True) - if os.path.exists(f'{extract_to}/DONE'): - print(f'{downloaded_file} has already been extracted to {extract_to} so skip') - return extract_to - def get_extract_cmd(filename): - if filename.endswith('.tgz') or filename.endswith('tar.gz'): - return f'tar xzfv {filename} -C {extract_to}' - elif filename.endswith('.gz.tar'): - return f'tar xfv {filename} -C {extract_to}; (cd {extract_to}; gzip -d *.gz; [ $? -eq 0 ] || gzip -d */*.gz)' - elif filename.endswith('.tar'): - return f'tar xfv {filename} -C {extract_to}' - elif filename.endswith('.gz'): - return f'cp {filename} {extract_to}; (cd {extract_to}; gzip -d *.gz)' - elif filename.endswith('.zip'): - return f'unzip {filename} -d {extract_to}' - extract_cmd = get_extract_cmd(downloaded_file) - print(f'extracting {downloaded_file}') - if isinstance(extract_cmd, list): - for c in extract_cmd: - call(c, debug=debug) - else: - call(extract_cmd, debug=debug) - call(f'echo DONE > {extract_to}/DONE') - return extract_to - - -def extract_all_files( - completed_urls, extract_folder, - get_extract_name=get_extract_name, - completed_extraction={}, - debug=False): - extracted_folders = OrderedDict() - for url, downloaded_file in set(completed_urls.items()): - if downloaded_file in completed_extraction: - print(f'{downloaded_file} is already extracted; so skip') - continue - folder = extract_file(downloaded_file, extract_folder, get_extract_name, debug) - extracted_folders[url] = folder - return extracted_folders - - -def my_glob(folder): - for p in [f'{folder}/*', f'{folder}/*/*', f'{folder}/*/*/*']: - for f in glob.glob(p): - yield f - - -def sgm2raw(sgm, debug): - to_file = sgm[0:len(sgm) - len('.sgm')] - if os.path.exists(to_file): - debug and print(f'{sgm} already converted to {to_file}; so skip') - return to_file - cmd = f'{SGM_TOOL} < {sgm} > {to_file}' - call(cmd, debug) - return to_file - -def tmx2raw(tmx, debug): - to_file = tmx[0:len(tmx) - len('.tmx')] - to_folder = os.path.join(*os.path.split(tmx)[:-1]) - if os.path.exists(f'{to_folder}/bitext.en'): - debug and print(f'{tmx} already extracted to {to_file}; so skip') - return to_file - cmd = f'(cd {to_folder}; {TMX_TOOL} {tmx})' - call(cmd, debug) - return to_file - -CZENG16_REGEX = re.compile(r'.*?data.plaintext-format/0[0-9]train$') -WMT19_WIKITITLES_REGEX = re.compile(r'.*?wikititles-v1.(\w\w)-en.tsv.gz') -TSV_REGEX = re.compile(r'.*?(\w\w)-(\w\w).tsv$') - - - -def cut_wikitles(wiki_file, debug): - # different languages have different file names: - if wiki_file.endswith('wiki/fi-en/titles.fi-en'): - to_file1 = f'{wiki_file}.fi' - to_file2 = f'{wiki_file}.en' - BACKSLASH = '\\' - cmd1 = f"cat {wiki_file} | sed 's/|||/{BACKSLASH}t/g' |cut -f1 |awk '{{$1=$1}};1' > {to_file1}" - cmd2 = f"cat {wiki_file} | sed 's/|||/{BACKSLASH}t/g' |cut -f2 |awk '{{$1=$1}};1' > {to_file2}" -# elif WMT19_WIKITITLES_REGEX.match(wiki_file): -# src = WMT19_WIKITITLES_REGEX.match(wiki_file).groups()[0] -# to_file1 = f'{wiki_file}.{src}' -# to_file2 = f'{wiki_file}.en' -# cmd1 = f"cat {wiki_file} | cut -f1 |awk '{{$1=$1}};1' > {to_file1}" -# cmd2 = f"cat {wiki_file} | cut -f2 |awk '{{$1=$1}};1' > {to_file2}" - else: - return None - if os.path.exists(to_file1) and 
os.path.exists(to_file2): - debug and print(f'{wiki_file} already processed to {to_file1} and {to_file2}; so skip') - return wiki_file - - call(cmd1, debug=debug) - call(cmd2, debug=debug) - return wiki_file - -def cut_tsv(file, debug): - m = TSV_REGEX.match(file) - if m is None: - raise ValueError(f'{file} is not matching tsv pattern') - src = m.groups()[0] - tgt = m.groups()[1] - - to_file1 = f'{file}.{src}' - to_file2 = f'{file}.{tgt}' - cmd1 = f"cat {file} | cut -f1 |awk '{{$1=$1}};1' > {to_file1}" - cmd2 = f"cat {file} | cut -f2 |awk '{{$1=$1}};1' > {to_file2}" - if os.path.exists(to_file1) and os.path.exists(to_file2): - debug and print(f'{file} already processed to {to_file1} and {to_file2}; so skip') - return file - - call(cmd1, debug=debug) - call(cmd2, debug=debug) - return file - - -def convert_file_if_needed(file, debug): - if file.endswith('.sgm'): - return sgm2raw(file, debug) - elif file.endswith('.tmx'): - return tmx2raw(file, debug) - elif file.endswith('wiki/fi-en/titles.fi-en'): - return cut_wikitles(file, debug) -# elif WMT19_WIKITITLES_REGEX.match(file): -# return cut_wikitles(file, debug) - elif file.endswith('.tsv'): - return cut_tsv(file, debug) - elif CZENG16_REGEX.match(file): - return convert2czeng17(file, debug) - else: - return file - - -def convert_files_if_needed(extracted_foldrs, my_glob=my_glob, debug=False): - return { - url: list(sorted(set(convert_file_if_needed(f, debug)) for f in sorted(set(my_glob(folder))))) - for url, folder in extracted_foldrs.items() - } - -def match_patt(file_path, file_pattern, src, tgt, lang): - return file_pattern.format(src=src, tgt=tgt, lang=lang) in file_path - -def match_patts(file_path, file_patterns, src, tgt, lang): - for file_pattern in file_patterns: - params = { k: v for k, v in [('src', src), ('tgt', tgt), ('lang', lang)] if k in file_pattern} - matching = file_pattern.format(**params) - - if isinstance(file_pattern, tuple): - pattern, directions = file_pattern - if f'{src}-{tgt}' in directions and matching in file_path: - return True - else: - if matching in file_path: - return True - return False - -def extracted_glob(extracted_folder, file_patterns, src, tgt, lang): - def get_matching_pattern(file_pattern): - params = { - k: v - for k, v in [('src', src), ('tgt', tgt), ('lang', lang)] - if '{' + k + '}' in file_pattern - } - file_pattern = re.sub(r'{src:(.*?)}', r'\1' if lang == src else '', file_pattern) - file_pattern = re.sub(r'{tgt:(.*?)}', r'\1' if lang == tgt else '', file_pattern) - file_pattern = file_pattern.format(**params) - return file_pattern - for file_pattern in file_patterns: - if isinstance(file_pattern, tuple): - file_pattern, lang_pairs = file_pattern - if f'{src}-{tgt}' not in lang_pairs: - continue -# print('working on pattern: ', file_pattern, lang_pairs ) - matching_pattern = get_matching_pattern(file_pattern) - if matching_pattern is None: - continue - glob_patterns = f'{extracted_folder}/{matching_pattern}' -# print('glob_patterns: ', glob_patterns) - for f in glob.glob(glob_patterns): - yield f - -# for debug usage -def all_extracted_files(split, src, tgt, extracted_folders, split_urls): - def get_url(url): - if isinstance(url, tuple): - url, downloaded_file = url - return url - return [ - f - for url in split_urls - for f in my_glob(extracted_folders[str(get_url(url))]) - ] - -def concat_files(split, src, tgt, extracted_folders, split_urls, path_patterns, to_folder, debug=False): -# if debug: -# print('extracted files to be filtered by patterns: ', -# 
'\n\t'.join(sorted(all_extracted_files(split, src, tgt, extracted_folders, split_urls)))) - for lang in [src, tgt]: - to_file = f'{to_folder}/{split}.{src}-{tgt}.{lang}' - s_src, s_tgt, s_lang = src.split('_')[0], tgt.split('_')[0], lang.split('_')[0] - files = [] - for url in split_urls: - if isinstance(url, tuple): - url, downloaded_file = url - if str(url) not in extracted_folders: - print(f'warning: {url} not in extracted files') - for extracted_file in set( - extracted_glob( - extracted_folders[str(url)], path_patterns, - s_src, s_tgt, s_lang)): - files.append(extracted_file) - if len(files) == 0: - print('warning: ', f'No files found for split {to_file}') - continue - files = sorted(set(files)) - print(f'concating {len(files)} files into {to_file}') - cmd = ['cat'] + [f'"{f}"' for f in files] + [f'>{to_file}'] - cmd = " ".join(cmd) - call(cmd, debug=debug) - -UTILS = os.path.join(pathlib.Path(__file__).parent, 'utils') -LID_MODEL = f'{download_to}/lid.176.bin' -LID_MULTI = f'{UTILS}/fasttext_multi_filter.py' - -def lid_filter(split, src, tgt, from_folder, to_folder, debug=False): - if not os.path.exists(LID_MODEL): - call(f'wget -nc https://dl.fbaipublicfiles.com/fasttext/supervised-models/lid.176.bin -O {LID_MODEL}') - from_prefix = f'{from_folder}/{split}.{src}-{tgt}' - to_prefix = f'{to_folder}/{split}.{src}-{tgt}' - if os.path.exists(f'{from_prefix}.{src}') and os.path.exists(f'{from_prefix}.{tgt}'): - s_src, s_tgt = src.split('_')[0], tgt.split('_')[0] - cmd = ( - f'python {LID_MULTI} --model {LID_MODEL} --inputs {from_prefix}.{src} {from_prefix}.{tgt} ' - f'--langs {s_src} {s_tgt} --outputs {to_prefix}.{src} {to_prefix}.{tgt}' - ) - print(f'filtering {from_prefix}') - call(cmd, debug=debug) - -def concat_into_splits(dl_dataset, src, tgt, extracted_folders, to_folder, debug): - to_folder_tmp = f"{to_folder}_tmp" - os.makedirs(to_folder_tmp, exist_ok=True) - concat_files('train', src, tgt, - extracted_folders, - split_urls=dl_dataset.train_urls, - path_patterns=dl_dataset.train_files_patterns, - to_folder=to_folder_tmp, debug=debug) - lid_filter('train', src, tgt, to_folder_tmp, to_folder, debug) - - concat_files('valid', src, tgt, - extracted_folders, - split_urls=dl_dataset.valid_urls, - path_patterns=dl_dataset.valid_files_patterns, - to_folder=to_folder, debug=debug) - concat_files('test', src, tgt, - extracted_folders, - split_urls=dl_dataset.test_urls, - path_patterns=dl_dataset.test_files_patterns, - to_folder=to_folder, debug=debug) - - -def download_multi(dl_folder, extract_folder, urls, num_processes=8, debug=False): - pool = mp.Pool(processes=num_processes) - download_f = partial(download_a_url, dl_folder) - downloaded_files = pool.imap_unordered(download_f, urls) - pool.close() - pool.join() - -BLEU_REGEX = re.compile("^BLEU\\S* = (\\S+) ") -def run_eval_bleu(cmd): - output = check_output(cmd, shell=True, stderr=subprocess.STDOUT).decode("utf-8").strip() - print(output) - bleu = -1.0 - for line in output.strip().split('\n'): - m = BLEU_REGEX.search(line) - if m is not None: - bleu = m.groups()[0] - bleu = float(bleu) - break - return bleu - -def check_wmt_test_bleu(raw_folder, wmt_lang_pairs): - not_matchings = [] - for wmt, src_tgts in wmt_lang_pairs: - for src_tgt in src_tgts: - print(f'checking test bleus for: {src_tgt} at {wmt}') - src, tgt = src_tgt.split('-') - ssrc, stgt = src[:2], tgt[:2] - if os.path.exists(f'{raw_folder}/test.{tgt}-{src}.{src}'): - # reversed direction may have different test set - test_src = f'{raw_folder}/test.{tgt}-{src}.{src}' - else: - 
test_src = f'{raw_folder}/test.{src}-{tgt}.{src}' - cmd1 = f'cat {test_src} | sacrebleu -t "{wmt}" -l {stgt}-{ssrc}; [ $? -eq 0 ] || echo ""' - test_tgt = f'{raw_folder}/test.{src}-{tgt}.{tgt}' - cmd2 = f'cat {test_tgt} | sacrebleu -t "{wmt}" -l {ssrc}-{stgt}; [ $? -eq 0 ] || echo ""' - bleu1 = run_eval_bleu(cmd1) - if bleu1 != 100.0: - not_matchings.append(f'{wmt}:{src_tgt} source side not matching: {test_src}') - bleu2 = run_eval_bleu(cmd2) - if bleu2 != 100.0: - not_matchings.append(f'{wmt}:{src_tgt} target side not matching: {test_tgt}') - return not_matchings - -def download_and_extract( - to_folder, lang_pairs, dl_dataset, - to_manually_download_urls, - completed_urls={}, completed_extraction={}, - debug=False): - - dl_folder = f'{to_folder}/downloads' - extract_folder = f'{to_folder}/extracted' - raw_folder = f'{to_folder}/raw' - lid_filtered = f'{to_folder}/lid_filtered' - - os.makedirs(extract_folder, exist_ok=True) - os.makedirs(raw_folder, exist_ok=True) - os.makedirs(lid_filtered, exist_ok=True) - - - to_be_manually_dowloaded = check_need_manual_downalod(dl_folder, to_manually_download_urls) - - completed_urls = download_dataset( - dl_folder, dl_dataset, completed_urls) - if debug: - print('completed urls: ', completed_urls) - - - extracted_folders = extract_all_files( - completed_urls, - extract_folder=extract_folder, - completed_extraction=completed_extraction, - debug=debug) - if debug: - print('download files have been extracted to folders: ', extracted_folders) - - converted_files = convert_files_if_needed(extracted_folders, debug=False) - for src_tgt in lang_pairs: - print(f'working on {dl_dataset.name}: {src_tgt}') - src, tgt = src_tgt.split('-') - concat_into_splits(dl_dataset, - src=src, tgt=tgt, - extracted_folders=extracted_folders, - to_folder=raw_folder, debug=debug) - print('completed data into: ', raw_folder) - -def download_czang16(download_to, username=None): - wgets = [ - f'wget --user={username} --password=czeng -P {download_to} http://ufallab.ms.mff.cuni.cz/~bojar/czeng16-data/data-plaintext-format.{i}.tar' - for i in range(10)] - cmds = [] - for i, cmd in enumerate(wgets): - filename = f'{download_to}/data-plaintext-format.{i}.tar' - if os.path.exists(filename): - print(f'{filename} has already been downloaded; so skip') - continue - cmds.append(cmd) - if cmds and username is None: - raise ValueError('No czeng username is given; please register at http://ufal.mff.cuni.cz/czeng/czeng16 to obtain username to download') - for cmd in cmds: - call(cmd) - print('done with downloading czeng1.6') - -def download_czeng17_script(download_to, extract_folder, debug=False): - url = 'http://ufal.mff.cuni.cz/czeng/download.php?f=convert_czeng16_to_17.pl.zip' - filename = f'{download_to}/convert_czeng16_to_17.pl.zip' - extract_to = f'{extract_folder}/{get_extract_name(filename)}' - script_path = f'{extract_to}/convert_czeng16_to_17.pl' - - if not os.path.exists(script_path): - wget.download(url, filename, bar=bar_custom) - extract_to = extract_file(f'{download_to}/convert_czeng16_to_17.pl.zip', extract_folder, get_extract_name=get_extract_name, debug=debug) - return script_path - -czeng17_script_path = "" -def convert2czeng17(file, debug): - en_file = f'{file}.en' - cs_file = f'{file}.cs' - - if not os.path.exists(en_file) or not os.path.exists(cs_file): - cs_cmd = f'cat {file} | perl {czeng17_script_path} | cut -f3 > {cs_file}' - en_cmd = f'cat {file} | perl {czeng17_script_path} | cut -f4 > {en_file}' - call(cs_cmd, debug) - call(en_cmd, debug) - else: - print(f'already 
extracted: {en_file} and {cs_file}') - return file - -def extract_czeng17(extract_folder, debug=False): - url = 'http://ufal.mff.cuni.cz/czeng/download.php?f=convert_czeng16_to_17.pl.zip' - filename = f'{download_to}/convert_czeng16_to_17.pl.zip' - extract_to = f'{extract_folder}/{get_extract_name(filename)}' - script_path = f'{extract_to}/convert_czeng16_to_17.pl' - - if not os.path.exists(script_path): - wget.download(url, filename, bar=bar_custom) - extract_to = extract_file(f'{download_to}/convert_czeng16_to_17.pl.zip', extract_folder, get_extract_name=get_extract_name, debug=debug) - return script_path - -######### -# definitions of wmt data sources -# for es-en -# Punctuation in the official test sets will be encoded with ASCII characters (not complex Unicode characters) as much as possible. You may want to normalize your system's output before submission. You are able able to use a rawer version of the test sets that does not have this normalization. -# script to normalize punctuation: http://www.statmt.org/wmt11/normalize-punctuation.perl -wmt13_es_en = DLDataset( - name='wmt13_es-en', - train_urls=[ - 'http://www.statmt.org/wmt13/training-parallel-europarl-v7.tgz', - 'http://www.statmt.org/wmt13/training-parallel-commoncrawl.tgz', - 'http://www.statmt.org/wmt13/training-parallel-un.tgz', - 'http://www.statmt.org/wmt13/training-parallel-nc-v8.tgz', - ], - valid_urls=[ - ('http://www.statmt.org/wmt13/dev.tgz', 'wmt13_dev.tgz') - ], - test_urls=[ - ('http://www.statmt.org/wmt13/test.tgz', 'wmt13_test.tgz') - ], - train_files_patterns=[ - ('*/europarl-v7.{src}-{tgt}.{lang}', ['es-en']), - ('*commoncrawl.{src}-{tgt}.{lang}', ['es-en']), - ('*/news-commentary-v8.{src}-{tgt}.{lang}', ['es-en']), - ('un/*undoc.2000.{src}-{tgt}.{lang}', ['es-en']), - ] , - valid_files_patterns=[ - ('dev/newstest2012.{lang}', ['es-en']) - ], - test_files_patterns=[ - ('test/newstest*.{lang}', ['es-en']) - ], -) - -wmt14_de_fr_en = DLDataset( - name='wmt14_de_fr_en', - train_urls=[ - 'http://www.statmt.org/wmt13/training-parallel-europarl-v7.tgz', - 'http://www.statmt.org/wmt13/training-parallel-commoncrawl.tgz', - 'http://www.statmt.org/wmt13/training-parallel-un.tgz', - 'http://www.statmt.org/wmt14/training-parallel-nc-v9.tgz', - ('http://www.statmt.org/wmt10/training-giga-fren.tar', 'training-giga-fren.gz.tar'), #it is actuall a gz.tar - ], - valid_urls=[ - ('http://www.statmt.org/wmt14/dev.tgz', 'wmt14_dev.tgz'), - ], - test_urls=[ - ('http://www.statmt.org/wmt14/test-full.tgz', 'wmt14_test_full.tgz'), # cleaned test sets - ], - train_files_patterns=[ - ('*/europarl-v7.{src}-{tgt}.{lang}', ['fr-en', 'de-en']), - ('*commoncrawl.{src}-{tgt}.{lang}', ['fr-en', 'de-en']), - ('*/*news-commentary-v9.{src}-{tgt}.{lang}', ['fr-en', 'de-en']), - ('un/undoc.2000.{src}-{tgt}.{lang}', ['fr-en']), - ('*giga-{src}{tgt}*{lang}', ['fr-en']) - ], - valid_files_patterns=[ - ('dev/newstest2013.{lang}', ['fr-en', 'de-en']) - ], - test_files_patterns=[ - ('test-full/newstest*{src}{tgt}-{src:src}{tgt:ref}.{lang}', ['en-de', 'de-en', 'fr-en', 'en-fr']), - ], -) - -# pip install git+https://github.com/amake/tmx2corpus.git -wmt16_ro_en = DLDataset( - name='wmt16_ro-en', - train_urls=[ - ('http://data.statmt.org/wmt16/translation-task/training-parallel-ep-v8.tgz', 'wmt16_training-parallel-ep-v8.tgz'), - ('http://opus.nlpl.eu/download.php?f=SETIMES/v2/tmx/en-ro.tmx.gz', 'en-ro.tmx.gz'), - ], - valid_urls=[ - ('http://data.statmt.org/wmt16/translation-task/dev-romanian-updated.tgz', 'wmt16_dev.tgz') - ], - test_urls=[ - 
('http://data.statmt.org/wmt16/translation-task/test.tgz', 'wmt16_test.tgz') - ], - train_files_patterns=[ - ('*/*europarl-v8.{src}-{tgt}.{lang}', ['ro-en']), - ('bitext.{lang}', ['ro-en']) #setimes from tmux - ] , - valid_files_patterns=[ - ('dev/newsdev2016*{src}{tgt}*.{lang}', ['ro-en', 'ro-en']) - ], - test_files_patterns=[ - ('test/newstest*{src}{tgt}*.{lang}', ['ro-en', 'en-ro']) - ], -) - -cwmt_wmt_instruction = 'cwmt download instruction at: http://nlp.nju.edu.cn/cwmt-wmt' -wmt17_fi_lv_tr_zh_en_manual_downloads = [ - # fake urls to have unique keys for the data - ( ('http://nlp.nju.edu.cn/cwmt-wmt/CASIA2015.zip', 'CASIA2015.zip'), cwmt_wmt_instruction), - ( ('http://nlp.nju.edu.cn/cwmt-wmt/CASICT2011.zip', 'CASICT2011.zip'), cwmt_wmt_instruction), - ( ('http://nlp.nju.edu.cn/cwmt-wmt/CASICT2015.zip', 'CASICT2015.zip'), cwmt_wmt_instruction), - ( ('http://nlp.nju.edu.cn/cwmt-wmt/Datum2015.zip', 'Datum2015.zip'), cwmt_wmt_instruction), - ( ('http://nlp.nju.edu.cn/cwmt-wmt/Datum2017.zip', 'Datum2017.zip'), cwmt_wmt_instruction), - ( ('http://nlp.nju.edu.cn/cwmt-wmt/NEU2017.zip', 'NEU2017.zip'), cwmt_wmt_instruction), -] -wmt17_fi_lv_tr_zh_en = DLDataset( - name='wmt17_fi_lv_tr_zh_en', - train_urls=[ - ('http://data.statmt.org/wmt17/translation-task/training-parallel-ep-v8.tgz', 'wmt17_training-parallel-ep-v8.tgz'), - 'http://data.statmt.org/wmt17/translation-task/training-parallel-nc-v12.tgz', - 'http://www.statmt.org/wmt15/wiki-titles.tgz', - ('http://opus.nlpl.eu/download.php?f=SETIMES/v2/tmx/en-tr.tmx.gz', 'en-tr.tmx.gz'), - ('http://data.statmt.org/wmt17/translation-task/rapid2016.tgz', 'wmt17_rapid2016.tgz'), - 'http://data.statmt.org/wmt17/translation-task/leta.v1.tgz', - 'http://data.statmt.org/wmt17/translation-task/dcep.lv-en.v1.tgz', - 'http://data.statmt.org/wmt17/translation-task/books.lv-en.v1.tgz', - (('https://stuncorpusprod.blob.core.windows.net/corpusfiles/UNv1.0.en-zh.tar.gz.00', - 'https://stuncorpusprod.blob.core.windows.net/corpusfiles/UNv1.0.en-zh.tar.gz.01',), 'UNv1.0.en-zh.tar.gz'), - #manually download files: - ('http://nlp.nju.edu.cn/cwmt-wmt/CASIA2015.zip', 'CASIA2015.zip'), - ('http://nlp.nju.edu.cn/cwmt-wmt/CASICT2011.zip', 'CASICT2011.zip'), - ('http://nlp.nju.edu.cn/cwmt-wmt/CASICT2015.zip', 'CASICT2015.zip'), - ('http://nlp.nju.edu.cn/cwmt-wmt/Datum2015.zip', 'Datum2015.zip'), - ('http://nlp.nju.edu.cn/cwmt-wmt/Datum2017.zip', 'Datum2017.zip'), - ('http://nlp.nju.edu.cn/cwmt-wmt/NEU2017.zip', 'NEU2017.zip'), - ], - valid_urls=[ - ('http://data.statmt.org/wmt17/translation-task/dev.tgz', 'wmt17_dev.tgz'), - ], - test_urls=[ - #NEW: Improved translations for zh test sets - ('http://data.statmt.org/wmt17/translation-task/test-update-1.tgz', 'wmt17_test_zh_en.tgz'), - ('http://data.statmt.org/wmt17/translation-task/test.tgz', 'wmt17_test_others.tgz') - ], - train_files_patterns=[ - ('casict*/cas*{src:ch}{tgt:en}.txt', ['zh-en', 'zh-en'] ), - ('casia*/cas*{src:ch}{tgt:en}.txt', ['zh-en', 'zh-en'] ), - ('dataum*/Book*{src:cn}{tgt:en}.txt', ['zh-en', 'zh-en']), - ('neu*/NEU*{src:cn}{tgt:en}.txt', ['zh-en', 'zh-en'] ), - ('*/*UNv1.0.en-zh.{src:zh}{tgt:en}', ['zh-en']), - ('training/*news-commentary-v12.{src}-{tgt}.{lang}', ['zh-en', ]), - - ('*/*europarl-v8.{src}-{tgt}.{lang}', ['fi-en', 'lv-en']), - ('wiki/fi-en/titles.{src}-{tgt}.{lang}', ['fi-en', ]), - ('rapid2016.{tgt}-{src}.{lang}', ['fi-en', 'lv-en']), - ('*/leta.{lang}', ['lv-en']), - ('*/dcep.{lang}', ['lv-en']), - ('*/farewell.{lang}', ['lv-en']), - ('bitext.{lang}', ['tr-en']), - ] , - 
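# Pattern convention used by the *_files_patterns fields (as implemented in match_patts and
# extracted_glob above): each entry is a (glob pattern, [language pairs]) tuple; '{src}', '{tgt}'
# and '{lang}' are filled in per translation direction, and '{src:xx}' / '{tgt:xx}' expand to 'xx'
# only when the current side is the source / target, so one pattern can cover files whose names
# differ between the two directions.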
valid_files_patterns=[ - ('dev/newsdev2017*{src}{tgt}-{src:src}{tgt:ref}.{lang}', - [ - 'fi-en', 'lv-en', 'tr-en', 'zh-en', - 'en-fi', 'en-lv', 'en-tr', 'en-zh' - ]), - ('dev/newstest2016*{src}{tgt}-{src:src}{tgt:ref}.{lang}', - [ - 'fi-en', 'tr-en', - 'en-fi', 'en-tr', - ]), - ], - test_files_patterns=[ - ('test/newstest2017-{src}{tgt}-{src:src}{tgt:ref}.{lang}', - [ - 'fi-en', 'lv-en', 'tr-en', - 'en-fi', 'en-lv', 'en-tr', - ]), - ('newstest2017-{src}{tgt}-{src:src}{tgt:ref}.{lang}', - [ - 'zh-en', - 'en-zh' - ]), - ], -) - -czeng_instruction = 'download instruction at: http://ufal.mff.cuni.cz/czeng/czeng16' -#alternative: use the prepared data but detokenize it? -wmt18_cs_et_en_manual_downloads = [ -#for cs, need to register and download; Register and download CzEng 1.6. -#Better results can be obtained by using a subset of sentences, released under a new version name CzEng 1.7. - # ((f'http://ufallab.ms.mff.cuni.cz/~bojar/czeng16-data/data-plaintext-format.{i}.tar', - # f'data-plaintext-format.{i}.tar'), czeng_instruction) - # for i in range(10) -] - -wmt18_cs_et_en = DLDataset( - name='wmt18_cs_et_en', - train_urls=[ - 'http://www.statmt.org/wmt13/training-parallel-europarl-v7.tgz', - 'http://data.statmt.org/wmt18/translation-task/training-parallel-ep-v8.tgz', - 'https://s3.amazonaws.com/web-language-models/paracrawl/release1/paracrawl-release1.en-cs.zipporah0-dedup-clean.tgz', - 'https://s3.amazonaws.com/web-language-models/paracrawl/release1/paracrawl-release1.en-et.zipporah0-dedup-clean.tgz', - 'http://www.statmt.org/wmt13/training-parallel-commoncrawl.tgz', - 'http://data.statmt.org/wmt18/translation-task/training-parallel-nc-v13.tgz', - ('http://data.statmt.org/wmt18/translation-task/rapid2016.tgz', 'wmt18_rapid2016.tgz'), - # (tuple( - # (f'http://ufallab.ms.mff.cuni.cz/~bojar/czeng16-data/data-plaintext-format.{i}.tar', - # f'data-plaintext-format.{i}.tar') - # for i in range(10) - # ), - # 'czeng16_data_plaintext.gz.tar'), - ], - valid_urls=[ - ('http://data.statmt.org/wmt18/translation-task/dev.tgz', 'wmt18_dev.tgz'), - ], - test_urls=[ - ('http://data.statmt.org/wmt18/translation-task/test.tgz', 'wmt18_test.tgz'), - ], - train_files_patterns=[ - # ('*/*europarl-v7.{src}-{tgt}.{lang}', ['cs-en']), - ('*/*europarl-v8.{src}-{tgt}.{lang}', ['et-en']), - # ('*paracrawl-release1.{tgt}-{src}.zipporah0-dedup-clean.{lang}', ['cs-en', 'et-en']), - ('*paracrawl-release1.{tgt}-{src}.zipporah0-dedup-clean.{lang}', ['et-en']), - # ('*commoncrawl.{src}-{tgt}.{lang}', ['cs-en']), - # ('*/news-commentary-v13.{src}-{tgt}.{lang}', ['cs-en']), - # ('data.plaintext-format/*train.{lang}', ['cs-en']), - ('rapid2016.{tgt}-{src}.{lang}', ['et-en']), - ] , - valid_files_patterns=[ - ('dev/newsdev2018*{src}{tgt}-{src:src}{tgt:ref}.{lang}', ['et-en']), - # ('dev/newstest2017*{src}{tgt}-{src:src}{tgt:ref}.{lang}', ['cs-en']) - ], - test_files_patterns=[ - ('test/newstest2018-{src}{tgt}-{src:src}{tgt:ref}.{lang}', - # ['cs-en', 'et-en']), - ['et-en']), - ] -) - -ru_en_yandex_instruction = 'Yandex Corpus download instruction at: https://translate.yandex.ru/corpus?lang=en' -wmt19_ru_gu_kk_lt_manual_downloads = [ - (('https://translate.yandex.ru/corpus?lang=en', 'wmt19_1mcorpus.zip'), ru_en_yandex_instruction) -] -wmt19_ru_gu_kk_lt = DLDataset( - name='wmt19_ru_gu_kk_lt', - train_urls=[ - 'http://www.statmt.org/europarl/v9/training/europarl-v9.lt-en.tsv.gz', - 'https://s3.amazonaws.com/web-language-models/paracrawl/release3/en-lt.bicleaner07.tmx.gz', - 
'https://s3.amazonaws.com/web-language-models/paracrawl/release1/paracrawl-release1.en-ru.zipporah0-dedup-clean.tgz', - 'http://www.statmt.org/wmt13/training-parallel-commoncrawl.tgz', - 'http://data.statmt.org/news-commentary/v14/training/news-commentary-v14-wmt19.en-kk.tsv.gz', - 'http://data.statmt.org/news-commentary/v14/training/news-commentary-v14.en-ru.tsv.gz', - 'http://data.statmt.org/wikititles/v1/wikititles-v1.kk-en.tsv.gz', - 'http://data.statmt.org/wikititles/v1/wikititles-v1.ru-en.tsv.gz', - 'http://data.statmt.org/wikititles/v1/wikititles-v1.kk-en.tsv.gz', - 'http://data.statmt.org/wikititles/v1/wikititles-v1.lt-en.tsv.gz', - 'http://data.statmt.org/wikititles/v1/wikititles-v1.gu-en.tsv.gz', - (('https://stuncorpusprod.blob.core.windows.net/corpusfiles/UNv1.0.en-ru.tar.gz.00', - 'https://stuncorpusprod.blob.core.windows.net/corpusfiles/UNv1.0.en-ru.tar.gz.01', - 'https://stuncorpusprod.blob.core.windows.net/corpusfiles/UNv1.0.en-ru.tar.gz.02',), - 'wmt19_UNv1.0.en-ru.tar.gz'), - 'https://tilde-model.s3-eu-west-1.amazonaws.com/rapid2016.en-lt.tmx.zip', - ('https://translate.yandex.ru/corpus?lang=en', 'wmt19_1mcorpus.zip'), - ], - valid_urls=[ - ('http://data.statmt.org/wmt19/translation-task/dev.tgz', 'wmt19_dev.tgz'), - ], - test_urls=[ - ('http://data.statmt.org/wmt19/translation-task/test.tgz', 'wmt19_test.tgz'), - ], - train_files_patterns=[ - ('*europarl-v9.{src}-{tgt}.tsv.{lang}', ['lt-en']), - #paracrawl - ('*paracrawl-release1.{tgt}-{src}.zipporah0-dedup-clean.{lang}', ['ru-en']), - ('bitext.{lang}', ['lt-en',]), - ('*commoncrawl.{src}-{tgt}.{lang}', ['ru-en',]), - ('*news-commentary-v14-wmt19.{tgt}-{src}.tsv.{lang}', ['kk-en', ]), - ('*news-commentary-v14.{tgt}-{src}.tsv.{lang}', ['ru-en']), - #yandex - ('corpus.{tgt}_{src}.1m.{lang}', ['ru-en']), - ('wikititles_v1_wikititles-v1.{src}-{tgt}.tsv.{lang}', ['ru-en', 'kk-en', 'lt-en', 'gu-en']), - ('*/UNv1.0.{tgt}-{src}.{lang}', ['ru-en']), - #rapid - ('bitext.{lang}', ['lt-en']) - ], - valid_files_patterns=[ - ('dev/newsdev2019*{src}{tgt}-{src:src}{tgt:ref}.{lang}', ['gu-en', 'kk-en', 'lt-en']), - ('dev/newstest2018*{src}{tgt}-{src:src}{tgt:ref}.{lang}', ['ru-en']), - ], - test_files_patterns=[ - ('sgm/newstest2019-{src}{tgt}-{src:src}{tgt:ref}.{lang}', - ['ru-en', 'gu-en', 'kk-en', 'lt-en', 'en-ru', 'en-gu', 'en-kk', 'en-lt']), - ] -) - - -######### - -if __name__ == "__main__": - # speed up the downloads with multiple processing - dl_folder = f'{to_data_path}/downloads' - extract_folder = f'{to_data_path}/extracted' - - urls = [ - url - for dataset in [wmt13_es_en, wmt14_de_fr_en, wmt16_ro_en, wmt18_cs_et_en, wmt19_ru_gu_kk_lt] - for urls in [dataset.train_urls, dataset.valid_urls, dataset.test_urls] - for url in urls - ] - urls = set(urls) - download_multi(dl_folder, extract_folder, urls, num_processes=8, debug=True) - - # check manually downlaods - to_manually_download_urls = ( - wmt17_fi_lv_tr_zh_en_manual_downloads + wmt18_cs_et_en_manual_downloads + wmt19_ru_gu_kk_lt_manual_downloads - ) - to_be_manually_dowloaded = check_need_manual_downalod(dl_folder, to_manually_download_urls) - if len(to_be_manually_dowloaded) > 0: - print('Missing files that need to be downloaded manually; stop the process now.') - exit(-1) - - completed_urls = {} - completed_extraction = {} - def work_on_wmt(directions, wmt_data): - download_and_extract( - to_data_path, - directions, - wmt_data, - to_manually_download_urls=to_manually_download_urls, - completed_urls=completed_urls, completed_extraction=completed_extraction, debug=True) - - 
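# Each work_on_wmt call below runs the full pipeline (download -> extract -> convert ->
# concatenate into train/valid/test splits -> fastText LID filtering of the training split)
# for one WMT edition and the listed language directions; the commented-out pairs such as
# zh_CN-en_XX and cs_CZ-en_XX additionally require the CWMT / CzEng corpora that must be
# fetched manually (see the *_manual_downloads lists and download_czang16 above).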
work_on_wmt( - ['es_XX-en_XX'], - wmt13_es_en,) - work_on_wmt( - [ - 'fr_XX-en_XX', 'en_XX-fr_XX', - # 'en_XX-de_DE', 'de_DE-en_XX', - ], - wmt14_de_fr_en,) - work_on_wmt( - ['ro_RO-en_XX', 'en_XX-ro_XX'], - wmt16_ro_en,) - work_on_wmt( - [ - # 'zh_CN-en_XX', - 'lv_LV-en_XX', 'fi_FI-en_XX', 'tr_TR-en_XX', - #in case the reversed directions have different train/valid/test data - # 'en_XX-zh_CN', - 'en_XX-lv_LV', 'en_XX-fi_FI', 'en_XX-tr_TR', - ], - wmt17_fi_lv_tr_zh_en, ) - # czeng17_script_path = download_czeng17_script(download_to, extract_to, debug=False) - # cz_username = None - work_on_wmt( - [ - # 'cs_CZ-en_XX', - 'et_EE-en_XX'], - wmt18_cs_et_en,) - work_on_wmt( - [ - # 'ru_RU-en_XX', 'en_XX-ru_RU', - 'gu_IN-en_XX', 'kk_KZ-en_XX', 'lt_LT-en_XX', - #in case the reversed directions have different train/valid/test data - 'en_XX-gu_IN', 'en_XX-kk_KZ', 'en_XX-lt_LT' - ], - wmt19_ru_gu_kk_lt,) - - not_matching = check_wmt_test_bleu( - f'{to_data_path}/raw', - [ - ('wmt13', ['es_XX-en_XX']), - ('wmt14/full', ['fr_XX-en_XX',]), - ('wmt16', ['ro_RO-en_XX',]), - # ('wmt17/improved', ['zh_CN-en_XX']), - ('wmt17', [ 'lv_LV-en_XX', 'fi_FI-en_XX', 'tr_TR-en_XX']), - ('wmt18', ['cs_CZ-en_XX', 'et_EE-en_XX']), - ('wmt19', ['gu_IN-en_XX', 'kk_KZ-en_XX', 'lt_LT-en_XX']), - #'ru_RU-en_XX', - ] - ) - if len(not_matching) > 0: - print('the following datasets do not have matching test datasets:\n\t', '\n\t'.join(not_matching)) - diff --git a/spaces/OFA-Sys/OFA-Image_Caption/criterions/__init__.py b/spaces/OFA-Sys/OFA-Image_Caption/criterions/__init__.py deleted file mode 100644 index b6fb6e751cdedb2af4b1f6c0950557e187cd9519..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Image_Caption/criterions/__init__.py +++ /dev/null @@ -1,2 +0,0 @@ -from .scst_loss import ScstRewardCriterion -from .label_smoothed_cross_entropy import AjustLabelSmoothedCrossEntropyCriterion \ No newline at end of file diff --git a/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/models/roberta/__init__.py b/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/models/roberta/__init__.py deleted file mode 100644 index 4cd723ae96aec8e3182773483f123109d23b620e..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/models/roberta/__init__.py +++ /dev/null @@ -1,11 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -from .hub_interface import * # noqa -from .model import * # noqa -from .enc_dec import * # noqa -from .model_camembert import * # noqa -from .model_gottbert import * # noqa -from .model_xlmr import * # noqa diff --git a/spaces/Olivier-Truong/faster-whisper-webui-v2/src/modelCache.py b/spaces/Olivier-Truong/faster-whisper-webui-v2/src/modelCache.py deleted file mode 100644 index 680a4b386fc37e17ed2353e72d04a646ece2c4a6..0000000000000000000000000000000000000000 --- a/spaces/Olivier-Truong/faster-whisper-webui-v2/src/modelCache.py +++ /dev/null @@ -1,17 +0,0 @@ -class ModelCache: - def __init__(self): - self._cache = dict() - - def get(self, model_key: str, model_factory): - result = self._cache.get(model_key) - - if result is None: - result = model_factory() - self._cache[model_key] = result - return result - - def clear(self): - self._cache.clear() - -# A global cache of models. This is mainly used by the daemon processes to avoid loading the same model multiple times. 
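# Illustrative use of the cache (the model name and loader below are placeholders, not part of
# the original file):
#   model = GLOBAL_MODEL_CACHE.get("whisper-large-v2", lambda: load_whisper_model("large-v2"))
# get() invokes the factory only on a cache miss; later calls with the same key return the
# cached instance, and clear() empties the cache.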
-GLOBAL_MODEL_CACHE = ModelCache() \ No newline at end of file diff --git a/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/detectron2/layers/csrc/ROIAlignRotated/ROIAlignRotated.h b/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/detectron2/layers/csrc/ROIAlignRotated/ROIAlignRotated.h deleted file mode 100644 index 03f4211003f42f601f0cfcf4a690f5da4a0a1f67..0000000000000000000000000000000000000000 --- a/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/detectron2/layers/csrc/ROIAlignRotated/ROIAlignRotated.h +++ /dev/null @@ -1,115 +0,0 @@ -// Copyright (c) Facebook, Inc. and its affiliates. -#pragma once -#include - -namespace detectron2 { - -at::Tensor ROIAlignRotated_forward_cpu( - const at::Tensor& input, - const at::Tensor& rois, - const float spatial_scale, - const int pooled_height, - const int pooled_width, - const int sampling_ratio); - -at::Tensor ROIAlignRotated_backward_cpu( - const at::Tensor& grad, - const at::Tensor& rois, - const float spatial_scale, - const int pooled_height, - const int pooled_width, - const int batch_size, - const int channels, - const int height, - const int width, - const int sampling_ratio); - -#if defined(WITH_CUDA) || defined(WITH_HIP) -at::Tensor ROIAlignRotated_forward_cuda( - const at::Tensor& input, - const at::Tensor& rois, - const float spatial_scale, - const int pooled_height, - const int pooled_width, - const int sampling_ratio); - -at::Tensor ROIAlignRotated_backward_cuda( - const at::Tensor& grad, - const at::Tensor& rois, - const float spatial_scale, - const int pooled_height, - const int pooled_width, - const int batch_size, - const int channels, - const int height, - const int width, - const int sampling_ratio); -#endif - -// Interface for Python -inline at::Tensor ROIAlignRotated_forward( - const at::Tensor& input, - const at::Tensor& rois, - const double spatial_scale, - const int64_t pooled_height, - const int64_t pooled_width, - const int64_t sampling_ratio) { - if (input.is_cuda()) { -#if defined(WITH_CUDA) || defined(WITH_HIP) - return ROIAlignRotated_forward_cuda( - input, - rois, - spatial_scale, - pooled_height, - pooled_width, - sampling_ratio); -#else - AT_ERROR("Detectron2 is not compiled with GPU support!"); -#endif - } - return ROIAlignRotated_forward_cpu( - input, rois, spatial_scale, pooled_height, pooled_width, sampling_ratio); -} - -inline at::Tensor ROIAlignRotated_backward( - const at::Tensor& grad, - const at::Tensor& rois, - const double spatial_scale, - const int64_t pooled_height, - const int64_t pooled_width, - const int64_t batch_size, - const int64_t channels, - const int64_t height, - const int64_t width, - const int64_t sampling_ratio) { - if (grad.is_cuda()) { -#if defined(WITH_CUDA) || defined(WITH_HIP) - return ROIAlignRotated_backward_cuda( - grad, - rois, - spatial_scale, - pooled_height, - pooled_width, - batch_size, - channels, - height, - width, - sampling_ratio); -#else - AT_ERROR("Detectron2 is not compiled with GPU support!"); -#endif - } - return ROIAlignRotated_backward_cpu( - grad, - rois, - spatial_scale, - pooled_height, - pooled_width, - batch_size, - channels, - height, - width, - sampling_ratio); -} - -} // namespace detectron2 diff --git a/spaces/PAIR/Text2Video-Zero/annotator/uniformer/mmseg/models/__init__.py b/spaces/PAIR/Text2Video-Zero/annotator/uniformer/mmseg/models/__init__.py deleted file mode 100644 index 3cf93f8bec9cf0cef0a3bd76ca3ca92eb188f535..0000000000000000000000000000000000000000 --- 
a/spaces/PAIR/Text2Video-Zero/annotator/uniformer/mmseg/models/__init__.py +++ /dev/null @@ -1,12 +0,0 @@ -from .backbones import * # noqa: F401,F403 -from .builder import (BACKBONES, HEADS, LOSSES, SEGMENTORS, build_backbone, - build_head, build_loss, build_segmentor) -from .decode_heads import * # noqa: F401,F403 -from .losses import * # noqa: F401,F403 -from .necks import * # noqa: F401,F403 -from .segmentors import * # noqa: F401,F403 - -__all__ = [ - 'BACKBONES', 'HEADS', 'LOSSES', 'SEGMENTORS', 'build_backbone', - 'build_head', 'build_loss', 'build_segmentor' -] diff --git a/spaces/PKUWilliamYang/StyleGANEX/models/encoders/model_irse.py b/spaces/PKUWilliamYang/StyleGANEX/models/encoders/model_irse.py deleted file mode 100644 index bc41ace0ba04cf4285c283a28e6c36113a18e6d6..0000000000000000000000000000000000000000 --- a/spaces/PKUWilliamYang/StyleGANEX/models/encoders/model_irse.py +++ /dev/null @@ -1,84 +0,0 @@ -from torch.nn import Linear, Conv2d, BatchNorm1d, BatchNorm2d, PReLU, Dropout, Sequential, Module -from models.encoders.helpers import get_blocks, Flatten, bottleneck_IR, bottleneck_IR_SE, l2_norm - -""" -Modified Backbone implementation from [TreB1eN](https://github.com/TreB1eN/InsightFace_Pytorch) -""" - - -class Backbone(Module): - def __init__(self, input_size, num_layers, mode='ir', drop_ratio=0.4, affine=True): - super(Backbone, self).__init__() - assert input_size in [112, 224], "input_size should be 112 or 224" - assert num_layers in [50, 100, 152], "num_layers should be 50, 100 or 152" - assert mode in ['ir', 'ir_se'], "mode should be ir or ir_se" - blocks = get_blocks(num_layers) - if mode == 'ir': - unit_module = bottleneck_IR - elif mode == 'ir_se': - unit_module = bottleneck_IR_SE - self.input_layer = Sequential(Conv2d(3, 64, (3, 3), 1, 1, bias=False), - BatchNorm2d(64), - PReLU(64)) - if input_size == 112: - self.output_layer = Sequential(BatchNorm2d(512), - Dropout(drop_ratio), - Flatten(), - Linear(512 * 7 * 7, 512), - BatchNorm1d(512, affine=affine)) - else: - self.output_layer = Sequential(BatchNorm2d(512), - Dropout(drop_ratio), - Flatten(), - Linear(512 * 14 * 14, 512), - BatchNorm1d(512, affine=affine)) - - modules = [] - for block in blocks: - for bottleneck in block: - modules.append(unit_module(bottleneck.in_channel, - bottleneck.depth, - bottleneck.stride)) - self.body = Sequential(*modules) - - def forward(self, x): - x = self.input_layer(x) - x = self.body(x) - x = self.output_layer(x) - return l2_norm(x) - - -def IR_50(input_size): - """Constructs a ir-50 model.""" - model = Backbone(input_size, num_layers=50, mode='ir', drop_ratio=0.4, affine=False) - return model - - -def IR_101(input_size): - """Constructs a ir-101 model.""" - model = Backbone(input_size, num_layers=100, mode='ir', drop_ratio=0.4, affine=False) - return model - - -def IR_152(input_size): - """Constructs a ir-152 model.""" - model = Backbone(input_size, num_layers=152, mode='ir', drop_ratio=0.4, affine=False) - return model - - -def IR_SE_50(input_size): - """Constructs a ir_se-50 model.""" - model = Backbone(input_size, num_layers=50, mode='ir_se', drop_ratio=0.4, affine=False) - return model - - -def IR_SE_101(input_size): - """Constructs a ir_se-101 model.""" - model = Backbone(input_size, num_layers=100, mode='ir_se', drop_ratio=0.4, affine=False) - return model - - -def IR_SE_152(input_size): - """Constructs a ir_se-152 model.""" - model = Backbone(input_size, num_layers=152, mode='ir_se', drop_ratio=0.4, affine=False) - return model diff --git 
a/spaces/PascalLiu/FNeVR_demo/sync_batchnorm/__init__.py b/spaces/PascalLiu/FNeVR_demo/sync_batchnorm/__init__.py deleted file mode 100644 index bc8709d92c610b36e0bcbd7da20c1eb41dc8cfcf..0000000000000000000000000000000000000000 --- a/spaces/PascalLiu/FNeVR_demo/sync_batchnorm/__init__.py +++ /dev/null @@ -1,12 +0,0 @@ -# -*- coding: utf-8 -*- -# File : __init__.py -# Author : Jiayuan Mao -# Email : maojiayuan@gmail.com -# Date : 27/01/2018 -# -# This file is part of Synchronized-BatchNorm-PyTorch. -# https://github.com/vacancy/Synchronized-BatchNorm-PyTorch -# Distributed under MIT License. - -from .batchnorm import SynchronizedBatchNorm1d, SynchronizedBatchNorm2d, SynchronizedBatchNorm3d -from .replicate import DataParallelWithCallback, patch_replication_callback diff --git a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/language/cps/compile-bytecode.go b/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/language/cps/compile-bytecode.go deleted file mode 100644 index 27710a6d7e99d7db9120ccb16518be902b7abf84..0000000000000000000000000000000000000000 Binary files a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/language/cps/compile-bytecode.go and /dev/null differ diff --git a/spaces/PeepDaSlan9/De-limiter/dataloader/__init__.py b/spaces/PeepDaSlan9/De-limiter/dataloader/__init__.py deleted file mode 100644 index 859765e446f3c0dab9e4b7a815b6f759c7ba5f86..0000000000000000000000000000000000000000 --- a/spaces/PeepDaSlan9/De-limiter/dataloader/__init__.py +++ /dev/null @@ -1,8 +0,0 @@ -from .dataset import aug_from_str, MusdbTrainDataset, MusdbValidDataset -from .singleset import SingleTrackSet -from .delimit_dataset import ( - DelimitTrainDataset, - DelimitValidDataset, - OzoneTrainDataset, - OzoneValidDataset, -) \ No newline at end of file diff --git a/spaces/Pinwheel/GLIP-BLIP-Object-Detection-VQA/maskrcnn_benchmark/data/datasets/list_dataset.py b/spaces/Pinwheel/GLIP-BLIP-Object-Detection-VQA/maskrcnn_benchmark/data/datasets/list_dataset.py deleted file mode 100644 index 9058d35b3d4279048732074f4a8dbb6edd4c9ed0..0000000000000000000000000000000000000000 --- a/spaces/Pinwheel/GLIP-BLIP-Object-Detection-VQA/maskrcnn_benchmark/data/datasets/list_dataset.py +++ /dev/null @@ -1,36 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved. 
-""" -Simple dataset class that wraps a list of path names -""" - -from PIL import Image - -from maskrcnn_benchmark.structures.bounding_box import BoxList - - -class ListDataset(object): - def __init__(self, image_lists, transforms=None): - self.image_lists = image_lists - self.transforms = transforms - - def __getitem__(self, item): - img = Image.open(self.image_lists[item]).convert("RGB") - - # dummy target - w, h = img.size - target = BoxList([[0, 0, w, h]], img.size, mode="xyxy") - - if self.transforms is not None: - img, target = self.transforms(img, target) - - return img, target - - def __len__(self): - return len(self.image_lists) - - def get_img_info(self, item): - """ - Return the image dimensions for the image, without - loading and pre-processing it - """ - pass diff --git a/spaces/QuophyDzifa/Sepsis-prediction-App/src/main.py b/spaces/QuophyDzifa/Sepsis-prediction-App/src/main.py deleted file mode 100644 index b1e5c88af805262047b11681c1fbb73b70c274e8..0000000000000000000000000000000000000000 --- a/spaces/QuophyDzifa/Sepsis-prediction-App/src/main.py +++ /dev/null @@ -1,109 +0,0 @@ - -# Importations - -from typing import Union -from fastapi import FastAPI -import pickle -from pydantic import BaseModel -import pandas as pd -import os -import uvicorn -from fastapi import HTTPException, status -from sklearn.preprocessing import StandardScaler -from sklearn.preprocessing import LabelEncoder - -# Setup Section - -# Create FastAPI instance -app = FastAPI(title="Sepsis Prediction API", - description="API for Predicting Sespsis ") -# A function to load machine Learning components to re-use - - -def Ml_loading_components(fp): - with open(fp, "rb") as f: - object = pickle.load(f) - return (object) - - -# Loading the machine learning components -DIRPATH = os.path.dirname(os.path.realpath(__file__)) -ml_core_fp = os.path.join(DIRPATH, "ML", "ML_Model.pkl") -ml_components_dict = Ml_loading_components(fp=ml_core_fp) - - -# Defining the variables for each component -label_encoder = ml_components_dict['label_encoder'] # The label encoder -# Loaded scaler component -scaler = ml_components_dict['scaler'] -# Loaded model -model = ml_components_dict['model'] -# Defining our input variables - - -class InputData(BaseModel): - PRG: int - PL: int - BP: int - SK: int - TS: int - BMI: float - BD2: float - Age: int - - -""" -* PRG: Plasma glucose - -* PL: Blood Work Result-1 (mu U/ml) - -* PR: Blood Pressure (mmHg) - -* SK: Blood Work Result-2(mm) - -* TS: Blood Work Result-3 (muU/ml) - -* M11: Body mass index (weight in kg/(height in m)^2 - -* BD2: Blood Work Result-4 (mu U/ml) - -* Age: patients age(years) - -""" -# Index route - - -@app.get("/") -def index(): - return {'message': 'Hello, Welcome to My Sepsis Prediction FastAPI'} - - -# Create prediction endpoint -@app.post("/predict") -def predict(df: InputData): - - # Prepare the feature and structure them like in the notebook - df = pd.DataFrame([df.dict().values()], columns=df.dict().keys()) - - print(f"[Info] The inputed dataframe is : {df.to_markdown()}") - age = df['Age'] - print(age) - # Scaling the inputs - df_scaled = scaler.transform(df) - - # Prediction - raw_prediction = model.predict(df_scaled) - - if raw_prediction == 0: - raise HTTPException(status_code=status.HTTP_200_OK, - detail="The patient will Not Develop Sepsis") - elif raw_prediction == 1: - raise HTTPException(status_code=status.HTTP_200_OK, - detail="The patient Will Develop Sepsis") - else: - raise HTTPException( - status_code=status.HTTP_500_INTERNAL_SERVER_ERROR, 
detail="Prediction Error") - - -if __name__ == "__main__": - uvicorn.run("main:app", reload=True) diff --git a/spaces/RMXK/RVC_HFF/infer/lib/infer_pack/models.py b/spaces/RMXK/RVC_HFF/infer/lib/infer_pack/models.py deleted file mode 100644 index 7a387b888f63ecd6f1f1bd3ed10aa2176a944d2c..0000000000000000000000000000000000000000 --- a/spaces/RMXK/RVC_HFF/infer/lib/infer_pack/models.py +++ /dev/null @@ -1,1174 +0,0 @@ -import math -import logging - -logger = logging.getLogger(__name__) - -import numpy as np -import torch -from torch import nn -from torch.nn import AvgPool1d, Conv1d, Conv2d, ConvTranspose1d -from torch.nn import functional as F -from torch.nn.utils import remove_weight_norm, spectral_norm, weight_norm - -from infer.lib.infer_pack import attentions, commons, modules -from infer.lib.infer_pack.commons import get_padding, init_weights -has_xpu = bool(hasattr(torch, "xpu") and torch.xpu.is_available()) - -class TextEncoder256(nn.Module): - def __init__( - self, - out_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - f0=True, - ): - super().__init__() - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.emb_phone = nn.Linear(256, hidden_channels) - self.lrelu = nn.LeakyReLU(0.1, inplace=True) - if f0 == True: - self.emb_pitch = nn.Embedding(256, hidden_channels) # pitch 256 - self.encoder = attentions.Encoder( - hidden_channels, filter_channels, n_heads, n_layers, kernel_size, p_dropout - ) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, phone, pitch, lengths): - if pitch == None: - x = self.emb_phone(phone) - else: - x = self.emb_phone(phone) + self.emb_pitch(pitch) - x = x * math.sqrt(self.hidden_channels) # [b, t, h] - x = self.lrelu(x) - x = torch.transpose(x, 1, -1) # [b, h, t] - x_mask = torch.unsqueeze(commons.sequence_mask(lengths, x.size(2)), 1).to( - x.dtype - ) - x = self.encoder(x * x_mask, x_mask) - stats = self.proj(x) * x_mask - - m, logs = torch.split(stats, self.out_channels, dim=1) - return m, logs, x_mask - - -class TextEncoder768(nn.Module): - def __init__( - self, - out_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - f0=True, - ): - super().__init__() - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.emb_phone = nn.Linear(768, hidden_channels) - self.lrelu = nn.LeakyReLU(0.1, inplace=True) - if f0 == True: - self.emb_pitch = nn.Embedding(256, hidden_channels) # pitch 256 - self.encoder = attentions.Encoder( - hidden_channels, filter_channels, n_heads, n_layers, kernel_size, p_dropout - ) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, phone, pitch, lengths): - if pitch == None: - x = self.emb_phone(phone) - else: - x = self.emb_phone(phone) + self.emb_pitch(pitch) - x = x * math.sqrt(self.hidden_channels) # [b, t, h] - x = self.lrelu(x) - x = torch.transpose(x, 1, -1) # [b, h, t] - x_mask = torch.unsqueeze(commons.sequence_mask(lengths, x.size(2)), 1).to( - x.dtype - ) - x = self.encoder(x * x_mask, x_mask) - stats = self.proj(x) * x_mask - - m, logs = torch.split(stats, self.out_channels, dim=1) - return m, logs, x_mask - 
- -class ResidualCouplingBlock(nn.Module): - def __init__( - self, - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - n_flows=4, - gin_channels=0, - ): - super().__init__() - self.channels = channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.n_flows = n_flows - self.gin_channels = gin_channels - - self.flows = nn.ModuleList() - for i in range(n_flows): - self.flows.append( - modules.ResidualCouplingLayer( - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=gin_channels, - mean_only=True, - ) - ) - self.flows.append(modules.Flip()) - - def forward(self, x, x_mask, g=None, reverse=False): - if not reverse: - for flow in self.flows: - x, _ = flow(x, x_mask, g=g, reverse=reverse) - else: - for flow in reversed(self.flows): - x = flow(x, x_mask, g=g, reverse=reverse) - return x - - def remove_weight_norm(self): - for i in range(self.n_flows): - self.flows[i * 2].remove_weight_norm() - - -class PosteriorEncoder(nn.Module): - def __init__( - self, - in_channels, - out_channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=0, - ): - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.gin_channels = gin_channels - - self.pre = nn.Conv1d(in_channels, hidden_channels, 1) - self.enc = modules.WN( - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=gin_channels, - ) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, x, x_lengths, g=None): - x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to( - x.dtype - ) - x = self.pre(x) * x_mask - x = self.enc(x, x_mask, g=g) - stats = self.proj(x) * x_mask - m, logs = torch.split(stats, self.out_channels, dim=1) - z = (m + torch.randn_like(m) * torch.exp(logs)) * x_mask - return z, m, logs, x_mask - - def remove_weight_norm(self): - self.enc.remove_weight_norm() - - -class Generator(torch.nn.Module): - def __init__( - self, - initial_channel, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels=0, - ): - super(Generator, self).__init__() - self.num_kernels = len(resblock_kernel_sizes) - self.num_upsamples = len(upsample_rates) - self.conv_pre = Conv1d( - initial_channel, upsample_initial_channel, 7, 1, padding=3 - ) - resblock = modules.ResBlock1 if resblock == "1" else modules.ResBlock2 - - self.ups = nn.ModuleList() - for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)): - self.ups.append( - weight_norm( - ConvTranspose1d( - upsample_initial_channel // (2**i), - upsample_initial_channel // (2 ** (i + 1)), - k, - u, - padding=(k - u) // 2, - ) - ) - ) - - self.resblocks = nn.ModuleList() - for i in range(len(self.ups)): - ch = upsample_initial_channel // (2 ** (i + 1)) - for j, (k, d) in enumerate( - zip(resblock_kernel_sizes, resblock_dilation_sizes) - ): - self.resblocks.append(resblock(ch, k, d)) - - self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False) - self.ups.apply(init_weights) - - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1) - - def forward(self, x, g=None): - x = self.conv_pre(x) - if g is not None: - x = x + self.cond(g) - - for i in 
range(self.num_upsamples): - x = F.leaky_relu(x, modules.LRELU_SLOPE) - x = self.ups[i](x) - xs = None - for j in range(self.num_kernels): - if xs is None: - xs = self.resblocks[i * self.num_kernels + j](x) - else: - xs += self.resblocks[i * self.num_kernels + j](x) - x = xs / self.num_kernels - x = F.leaky_relu(x) - x = self.conv_post(x) - x = torch.tanh(x) - - return x - - def remove_weight_norm(self): - for l in self.ups: - remove_weight_norm(l) - for l in self.resblocks: - l.remove_weight_norm() - - -class SineGen(torch.nn.Module): - """Definition of sine generator - SineGen(samp_rate, harmonic_num = 0, - sine_amp = 0.1, noise_std = 0.003, - voiced_threshold = 0, - flag_for_pulse=False) - samp_rate: sampling rate in Hz - harmonic_num: number of harmonic overtones (default 0) - sine_amp: amplitude of sine-wavefrom (default 0.1) - noise_std: std of Gaussian noise (default 0.003) - voiced_thoreshold: F0 threshold for U/V classification (default 0) - flag_for_pulse: this SinGen is used inside PulseGen (default False) - Note: when flag_for_pulse is True, the first time step of a voiced - segment is always sin(np.pi) or cos(0) - """ - - def __init__( - self, - samp_rate, - harmonic_num=0, - sine_amp=0.1, - noise_std=0.003, - voiced_threshold=0, - flag_for_pulse=False, - ): - super(SineGen, self).__init__() - self.sine_amp = sine_amp - self.noise_std = noise_std - self.harmonic_num = harmonic_num - self.dim = self.harmonic_num + 1 - self.sampling_rate = samp_rate - self.voiced_threshold = voiced_threshold - - def _f02uv(self, f0): - # generate uv signal - uv = torch.ones_like(f0) - uv = uv * (f0 > self.voiced_threshold) - if uv.device.type == "privateuseone": # for DirectML - uv = uv.float() - return uv - - def forward(self, f0, upp): - """sine_tensor, uv = forward(f0) - input F0: tensor(batchsize=1, length, dim=1) - f0 for unvoiced steps should be 0 - output sine_tensor: tensor(batchsize=1, length, dim) - output uv: tensor(batchsize=1, length, 1) - """ - with torch.no_grad(): - f0 = f0[:, None].transpose(1, 2) - f0_buf = torch.zeros(f0.shape[0], f0.shape[1], self.dim, device=f0.device) - # fundamental component - f0_buf[:, :, 0] = f0[:, :, 0] - for idx in np.arange(self.harmonic_num): - f0_buf[:, :, idx + 1] = f0_buf[:, :, 0] * ( - idx + 2 - ) # idx + 2: the (idx+1)-th overtone, (idx+2)-th harmonic - rad_values = (f0_buf / self.sampling_rate) % 1 ###%1意味着n_har的乘积无法后处理优化 - rand_ini = torch.rand( - f0_buf.shape[0], f0_buf.shape[2], device=f0_buf.device - ) - rand_ini[:, 0] = 0 - rad_values[:, 0, :] = rad_values[:, 0, :] + rand_ini - tmp_over_one = torch.cumsum(rad_values, 1) # % 1 #####%1意味着后面的cumsum无法再优化 - tmp_over_one *= upp - tmp_over_one = F.interpolate( - tmp_over_one.transpose(2, 1), - scale_factor=upp, - mode="linear", - align_corners=True, - ).transpose(2, 1) - rad_values = F.interpolate( - rad_values.transpose(2, 1), scale_factor=upp, mode="nearest" - ).transpose( - 2, 1 - ) ####### - tmp_over_one %= 1 - tmp_over_one_idx = (tmp_over_one[:, 1:, :] - tmp_over_one[:, :-1, :]) < 0 - cumsum_shift = torch.zeros_like(rad_values) - cumsum_shift[:, 1:, :] = tmp_over_one_idx * -1.0 - sine_waves = torch.sin( - torch.cumsum(rad_values + cumsum_shift, dim=1) * 2 * np.pi - ) - sine_waves = sine_waves * self.sine_amp - uv = self._f02uv(f0) - uv = F.interpolate( - uv.transpose(2, 1), scale_factor=upp, mode="nearest" - ).transpose(2, 1) - noise_amp = uv * self.noise_std + (1 - uv) * self.sine_amp / 3 - noise = noise_amp * torch.randn_like(sine_waves) - sine_waves = sine_waves * uv + noise - return 
sine_waves, uv, noise - - -class SourceModuleHnNSF(torch.nn.Module): - """SourceModule for hn-nsf - SourceModule(sampling_rate, harmonic_num=0, sine_amp=0.1, - add_noise_std=0.003, voiced_threshod=0) - sampling_rate: sampling_rate in Hz - harmonic_num: number of harmonic above F0 (default: 0) - sine_amp: amplitude of sine source signal (default: 0.1) - add_noise_std: std of additive Gaussian noise (default: 0.003) - note that amplitude of noise in unvoiced is decided - by sine_amp - voiced_threshold: threhold to set U/V given F0 (default: 0) - Sine_source, noise_source = SourceModuleHnNSF(F0_sampled) - F0_sampled (batchsize, length, 1) - Sine_source (batchsize, length, 1) - noise_source (batchsize, length 1) - uv (batchsize, length, 1) - """ - - def __init__( - self, - sampling_rate, - harmonic_num=0, - sine_amp=0.1, - add_noise_std=0.003, - voiced_threshod=0, - is_half=True, - ): - super(SourceModuleHnNSF, self).__init__() - - self.sine_amp = sine_amp - self.noise_std = add_noise_std - self.is_half = is_half - # to produce sine waveforms - self.l_sin_gen = SineGen( - sampling_rate, harmonic_num, sine_amp, add_noise_std, voiced_threshod - ) - - # to merge source harmonics into a single excitation - self.l_linear = torch.nn.Linear(harmonic_num + 1, 1) - self.l_tanh = torch.nn.Tanh() - - def forward(self, x, upp=None): - if hasattr(self, "ddtype") == False: - self.ddtype = self.l_linear.weight.dtype - sine_wavs, uv, _ = self.l_sin_gen(x, upp) - # print(x.dtype,sine_wavs.dtype,self.l_linear.weight.dtype) - # if self.is_half: - # sine_wavs = sine_wavs.half() - # sine_merge = self.l_tanh(self.l_linear(sine_wavs.to(x))) - # print(sine_wavs.dtype,self.ddtype) - if sine_wavs.dtype != self.ddtype: - sine_wavs = sine_wavs.to(self.ddtype) - sine_merge = self.l_tanh(self.l_linear(sine_wavs)) - return sine_merge, None, None # noise, uv - - -class GeneratorNSF(torch.nn.Module): - def __init__( - self, - initial_channel, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels, - sr, - is_half=False, - ): - super(GeneratorNSF, self).__init__() - self.num_kernels = len(resblock_kernel_sizes) - self.num_upsamples = len(upsample_rates) - - self.f0_upsamp = torch.nn.Upsample(scale_factor=np.prod(upsample_rates)) - self.m_source = SourceModuleHnNSF( - sampling_rate=sr, harmonic_num=0, is_half=is_half - ) - self.noise_convs = nn.ModuleList() - self.conv_pre = Conv1d( - initial_channel, upsample_initial_channel, 7, 1, padding=3 - ) - resblock = modules.ResBlock1 if resblock == "1" else modules.ResBlock2 - - self.ups = nn.ModuleList() - for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)): - c_cur = upsample_initial_channel // (2 ** (i + 1)) - self.ups.append( - weight_norm( - ConvTranspose1d( - upsample_initial_channel // (2**i), - upsample_initial_channel // (2 ** (i + 1)), - k, - u, - padding=(k - u) // 2, - ) - ) - ) - if i + 1 < len(upsample_rates): - stride_f0 = np.prod(upsample_rates[i + 1 :]) - self.noise_convs.append( - Conv1d( - 1, - c_cur, - kernel_size=stride_f0 * 2, - stride=stride_f0, - padding=stride_f0 // 2, - ) - ) - else: - self.noise_convs.append(Conv1d(1, c_cur, kernel_size=1)) - - self.resblocks = nn.ModuleList() - for i in range(len(self.ups)): - ch = upsample_initial_channel // (2 ** (i + 1)) - for j, (k, d) in enumerate( - zip(resblock_kernel_sizes, resblock_dilation_sizes) - ): - self.resblocks.append(resblock(ch, k, d)) - - self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, 
bias=False) - self.ups.apply(init_weights) - - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1) - - self.upp = np.prod(upsample_rates) - - def forward(self, x, f0, g=None): - har_source, noi_source, uv = self.m_source(f0, self.upp) - har_source = har_source.transpose(1, 2) - x = self.conv_pre(x) - if g is not None: - x = x + self.cond(g) - - for i in range(self.num_upsamples): - x = F.leaky_relu(x, modules.LRELU_SLOPE) - x = self.ups[i](x) - x_source = self.noise_convs[i](har_source) - x = x + x_source - xs = None - for j in range(self.num_kernels): - if xs is None: - xs = self.resblocks[i * self.num_kernels + j](x) - else: - xs += self.resblocks[i * self.num_kernels + j](x) - x = xs / self.num_kernels - x = F.leaky_relu(x) - x = self.conv_post(x) - x = torch.tanh(x) - return x - - def remove_weight_norm(self): - for l in self.ups: - remove_weight_norm(l) - for l in self.resblocks: - l.remove_weight_norm() - - -sr2sr = { - "32k": 32000, - "40k": 40000, - "48k": 48000, -} - - -class SynthesizerTrnMs256NSFsid(nn.Module): - def __init__( - self, - spec_channels, - segment_size, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - spk_embed_dim, - gin_channels, - sr, - **kwargs - ): - super().__init__() - if type(sr) == type("strr"): - sr = sr2sr[sr] - self.spec_channels = spec_channels - self.inter_channels = inter_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.resblock = resblock - self.resblock_kernel_sizes = resblock_kernel_sizes - self.resblock_dilation_sizes = resblock_dilation_sizes - self.upsample_rates = upsample_rates - self.upsample_initial_channel = upsample_initial_channel - self.upsample_kernel_sizes = upsample_kernel_sizes - self.segment_size = segment_size - self.gin_channels = gin_channels - # self.hop_length = hop_length# - self.spk_embed_dim = spk_embed_dim - self.enc_p = TextEncoder256( - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - ) - self.dec = GeneratorNSF( - inter_channels, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels=gin_channels, - sr=sr, - is_half=kwargs["is_half"], - ) - self.enc_q = PosteriorEncoder( - spec_channels, - inter_channels, - hidden_channels, - 5, - 1, - 16, - gin_channels=gin_channels, - ) - self.flow = ResidualCouplingBlock( - inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels - ) - self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels) - logger.debug( - "gin_channels: " - + str(gin_channels) - + ", self.spk_embed_dim: " - + str(self.spk_embed_dim) - ) - - def remove_weight_norm(self): - self.dec.remove_weight_norm() - self.flow.remove_weight_norm() - self.enc_q.remove_weight_norm() - - def forward( - self, phone, phone_lengths, pitch, pitchf, y, y_lengths, ds - ): # 这里ds是id,[bs,1] - # print(1,pitch.shape)#[bs,t] - g = self.emb_g(ds).unsqueeze(-1) # [b, 256, 1]##1是t,广播的 - m_p, logs_p, x_mask = self.enc_p(phone, pitch, phone_lengths) - z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g) - z_p = self.flow(z, y_mask, g=g) - z_slice, ids_slice = commons.rand_slice_segments( - z, 
y_lengths, self.segment_size - ) - # print(-1,pitchf.shape,ids_slice,self.segment_size,self.hop_length,self.segment_size//self.hop_length) - pitchf = commons.slice_segments2(pitchf, ids_slice, self.segment_size) - # print(-2,pitchf.shape,z_slice.shape) - o = self.dec(z_slice, pitchf, g=g) - return o, ids_slice, x_mask, y_mask, (z, z_p, m_p, logs_p, m_q, logs_q) - - def infer(self, phone, phone_lengths, pitch, nsff0, sid, rate=None): - g = self.emb_g(sid).unsqueeze(-1) - m_p, logs_p, x_mask = self.enc_p(phone, pitch, phone_lengths) - z_p = (m_p + torch.exp(logs_p) * torch.randn_like(m_p) * 0.66666) * x_mask - if rate: - head = int(z_p.shape[2] * rate) - z_p = z_p[:, :, -head:] - x_mask = x_mask[:, :, -head:] - nsff0 = nsff0[:, -head:] - z = self.flow(z_p, x_mask, g=g, reverse=True) - o = self.dec(z * x_mask, nsff0, g=g) - return o, x_mask, (z, z_p, m_p, logs_p) - - -class SynthesizerTrnMs768NSFsid(nn.Module): - def __init__( - self, - spec_channels, - segment_size, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - spk_embed_dim, - gin_channels, - sr, - **kwargs - ): - super().__init__() - if type(sr) == type("strr"): - sr = sr2sr[sr] - self.spec_channels = spec_channels - self.inter_channels = inter_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.resblock = resblock - self.resblock_kernel_sizes = resblock_kernel_sizes - self.resblock_dilation_sizes = resblock_dilation_sizes - self.upsample_rates = upsample_rates - self.upsample_initial_channel = upsample_initial_channel - self.upsample_kernel_sizes = upsample_kernel_sizes - self.segment_size = segment_size - self.gin_channels = gin_channels - # self.hop_length = hop_length# - self.spk_embed_dim = spk_embed_dim - self.enc_p = TextEncoder768( - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - ) - self.dec = GeneratorNSF( - inter_channels, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels=gin_channels, - sr=sr, - is_half=kwargs["is_half"], - ) - self.enc_q = PosteriorEncoder( - spec_channels, - inter_channels, - hidden_channels, - 5, - 1, - 16, - gin_channels=gin_channels, - ) - self.flow = ResidualCouplingBlock( - inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels - ) - self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels) - logger.debug( - "gin_channels: " - + str(gin_channels) - + ", self.spk_embed_dim: " - + str(self.spk_embed_dim) - ) - - def remove_weight_norm(self): - self.dec.remove_weight_norm() - self.flow.remove_weight_norm() - self.enc_q.remove_weight_norm() - - def forward( - self, phone, phone_lengths, pitch, pitchf, y, y_lengths, ds - ): # 这里ds是id,[bs,1] - # print(1,pitch.shape)#[bs,t] - g = self.emb_g(ds).unsqueeze(-1) # [b, 256, 1]##1是t,广播的 - m_p, logs_p, x_mask = self.enc_p(phone, pitch, phone_lengths) - z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g) - z_p = self.flow(z, y_mask, g=g) - z_slice, ids_slice = commons.rand_slice_segments( - z, y_lengths, self.segment_size - ) - # print(-1,pitchf.shape,ids_slice,self.segment_size,self.hop_length,self.segment_size//self.hop_length) - pitchf = 
commons.slice_segments2(pitchf, ids_slice, self.segment_size) - # print(-2,pitchf.shape,z_slice.shape) - o = self.dec(z_slice, pitchf, g=g) - return o, ids_slice, x_mask, y_mask, (z, z_p, m_p, logs_p, m_q, logs_q) - - def infer(self, phone, phone_lengths, pitch, nsff0, sid, rate=None): - g = self.emb_g(sid).unsqueeze(-1) - m_p, logs_p, x_mask = self.enc_p(phone, pitch, phone_lengths) - z_p = (m_p + torch.exp(logs_p) * torch.randn_like(m_p) * 0.66666) * x_mask - if rate: - head = int(z_p.shape[2] * rate) - z_p = z_p[:, :, -head:] - x_mask = x_mask[:, :, -head:] - nsff0 = nsff0[:, -head:] - z = self.flow(z_p, x_mask, g=g, reverse=True) - o = self.dec(z * x_mask, nsff0, g=g) - return o, x_mask, (z, z_p, m_p, logs_p) - - -class SynthesizerTrnMs256NSFsid_nono(nn.Module): - def __init__( - self, - spec_channels, - segment_size, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - spk_embed_dim, - gin_channels, - sr=None, - **kwargs - ): - super().__init__() - self.spec_channels = spec_channels - self.inter_channels = inter_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.resblock = resblock - self.resblock_kernel_sizes = resblock_kernel_sizes - self.resblock_dilation_sizes = resblock_dilation_sizes - self.upsample_rates = upsample_rates - self.upsample_initial_channel = upsample_initial_channel - self.upsample_kernel_sizes = upsample_kernel_sizes - self.segment_size = segment_size - self.gin_channels = gin_channels - # self.hop_length = hop_length# - self.spk_embed_dim = spk_embed_dim - self.enc_p = TextEncoder256( - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - f0=False, - ) - self.dec = Generator( - inter_channels, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels=gin_channels, - ) - self.enc_q = PosteriorEncoder( - spec_channels, - inter_channels, - hidden_channels, - 5, - 1, - 16, - gin_channels=gin_channels, - ) - self.flow = ResidualCouplingBlock( - inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels - ) - self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels) - logger.debug( - "gin_channels: " - + str(gin_channels) - + ", self.spk_embed_dim: " - + str(self.spk_embed_dim) - ) - - def remove_weight_norm(self): - self.dec.remove_weight_norm() - self.flow.remove_weight_norm() - self.enc_q.remove_weight_norm() - - def forward(self, phone, phone_lengths, y, y_lengths, ds): # 这里ds是id,[bs,1] - g = self.emb_g(ds).unsqueeze(-1) # [b, 256, 1]##1是t,广播的 - m_p, logs_p, x_mask = self.enc_p(phone, None, phone_lengths) - z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g) - z_p = self.flow(z, y_mask, g=g) - z_slice, ids_slice = commons.rand_slice_segments( - z, y_lengths, self.segment_size - ) - o = self.dec(z_slice, g=g) - return o, ids_slice, x_mask, y_mask, (z, z_p, m_p, logs_p, m_q, logs_q) - - def infer(self, phone, phone_lengths, sid, rate=None): - g = self.emb_g(sid).unsqueeze(-1) - m_p, logs_p, x_mask = self.enc_p(phone, None, phone_lengths) - z_p = (m_p + torch.exp(logs_p) * torch.randn_like(m_p) * 0.66666) * x_mask - if rate: - head = int(z_p.shape[2] * rate) - z_p = 
z_p[:, :, -head:] - x_mask = x_mask[:, :, -head:] - z = self.flow(z_p, x_mask, g=g, reverse=True) - o = self.dec(z * x_mask, g=g) - return o, x_mask, (z, z_p, m_p, logs_p) - - -class SynthesizerTrnMs768NSFsid_nono(nn.Module): - def __init__( - self, - spec_channels, - segment_size, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - spk_embed_dim, - gin_channels, - sr=None, - **kwargs - ): - super().__init__() - self.spec_channels = spec_channels - self.inter_channels = inter_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.resblock = resblock - self.resblock_kernel_sizes = resblock_kernel_sizes - self.resblock_dilation_sizes = resblock_dilation_sizes - self.upsample_rates = upsample_rates - self.upsample_initial_channel = upsample_initial_channel - self.upsample_kernel_sizes = upsample_kernel_sizes - self.segment_size = segment_size - self.gin_channels = gin_channels - # self.hop_length = hop_length# - self.spk_embed_dim = spk_embed_dim - self.enc_p = TextEncoder768( - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - f0=False, - ) - self.dec = Generator( - inter_channels, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels=gin_channels, - ) - self.enc_q = PosteriorEncoder( - spec_channels, - inter_channels, - hidden_channels, - 5, - 1, - 16, - gin_channels=gin_channels, - ) - self.flow = ResidualCouplingBlock( - inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels - ) - self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels) - logger.debug( - "gin_channels: " - + str(gin_channels) - + ", self.spk_embed_dim: " - + str(self.spk_embed_dim) - ) - - def remove_weight_norm(self): - self.dec.remove_weight_norm() - self.flow.remove_weight_norm() - self.enc_q.remove_weight_norm() - - def forward(self, phone, phone_lengths, y, y_lengths, ds): # 这里ds是id,[bs,1] - g = self.emb_g(ds).unsqueeze(-1) # [b, 256, 1]##1是t,广播的 - m_p, logs_p, x_mask = self.enc_p(phone, None, phone_lengths) - z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g) - z_p = self.flow(z, y_mask, g=g) - z_slice, ids_slice = commons.rand_slice_segments( - z, y_lengths, self.segment_size - ) - o = self.dec(z_slice, g=g) - return o, ids_slice, x_mask, y_mask, (z, z_p, m_p, logs_p, m_q, logs_q) - - def infer(self, phone, phone_lengths, sid, rate=None): - g = self.emb_g(sid).unsqueeze(-1) - m_p, logs_p, x_mask = self.enc_p(phone, None, phone_lengths) - z_p = (m_p + torch.exp(logs_p) * torch.randn_like(m_p) * 0.66666) * x_mask - if rate: - head = int(z_p.shape[2] * rate) - z_p = z_p[:, :, -head:] - x_mask = x_mask[:, :, -head:] - z = self.flow(z_p, x_mask, g=g, reverse=True) - o = self.dec(z * x_mask, g=g) - return o, x_mask, (z, z_p, m_p, logs_p) - - -class MultiPeriodDiscriminator(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(MultiPeriodDiscriminator, self).__init__() - periods = [2, 3, 5, 7, 11, 17] - # periods = [3, 5, 7, 11, 17, 23, 37] - - discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)] - discs = discs + [ - DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in 
periods - ] - self.discriminators = nn.ModuleList(discs) - - def forward(self, y, y_hat): - y_d_rs = [] # - y_d_gs = [] - fmap_rs = [] - fmap_gs = [] - for i, d in enumerate(self.discriminators): - y_d_r, fmap_r = d(y) - y_d_g, fmap_g = d(y_hat) - # for j in range(len(fmap_r)): - # print(i,j,y.shape,y_hat.shape,fmap_r[j].shape,fmap_g[j].shape) - y_d_rs.append(y_d_r) - y_d_gs.append(y_d_g) - fmap_rs.append(fmap_r) - fmap_gs.append(fmap_g) - - return y_d_rs, y_d_gs, fmap_rs, fmap_gs - - -class MultiPeriodDiscriminatorV2(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(MultiPeriodDiscriminatorV2, self).__init__() - # periods = [2, 3, 5, 7, 11, 17] - periods = [2, 3, 5, 7, 11, 17, 23, 37] - - discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)] - discs = discs + [ - DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods - ] - self.discriminators = nn.ModuleList(discs) - - def forward(self, y, y_hat): - y_d_rs = [] # - y_d_gs = [] - fmap_rs = [] - fmap_gs = [] - for i, d in enumerate(self.discriminators): - y_d_r, fmap_r = d(y) - y_d_g, fmap_g = d(y_hat) - # for j in range(len(fmap_r)): - # print(i,j,y.shape,y_hat.shape,fmap_r[j].shape,fmap_g[j].shape) - y_d_rs.append(y_d_r) - y_d_gs.append(y_d_g) - fmap_rs.append(fmap_r) - fmap_gs.append(fmap_g) - - return y_d_rs, y_d_gs, fmap_rs, fmap_gs - - -class DiscriminatorS(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(DiscriminatorS, self).__init__() - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList( - [ - norm_f(Conv1d(1, 16, 15, 1, padding=7)), - norm_f(Conv1d(16, 64, 41, 4, groups=4, padding=20)), - norm_f(Conv1d(64, 256, 41, 4, groups=16, padding=20)), - norm_f(Conv1d(256, 1024, 41, 4, groups=64, padding=20)), - norm_f(Conv1d(1024, 1024, 41, 4, groups=256, padding=20)), - norm_f(Conv1d(1024, 1024, 5, 1, padding=2)), - ] - ) - self.conv_post = norm_f(Conv1d(1024, 1, 3, 1, padding=1)) - - def forward(self, x): - fmap = [] - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap - - -class DiscriminatorP(torch.nn.Module): - def __init__(self, period, kernel_size=5, stride=3, use_spectral_norm=False): - super(DiscriminatorP, self).__init__() - self.period = period - self.use_spectral_norm = use_spectral_norm - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList( - [ - norm_f( - Conv2d( - 1, - 32, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(kernel_size, 1), 0), - ) - ), - norm_f( - Conv2d( - 32, - 128, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(kernel_size, 1), 0), - ) - ), - norm_f( - Conv2d( - 128, - 512, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(kernel_size, 1), 0), - ) - ), - norm_f( - Conv2d( - 512, - 1024, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(kernel_size, 1), 0), - ) - ), - norm_f( - Conv2d( - 1024, - 1024, - (kernel_size, 1), - 1, - padding=(get_padding(kernel_size, 1), 0), - ) - ), - ] - ) - self.conv_post = norm_f(Conv2d(1024, 1, (3, 1), 1, padding=(1, 0))) - - def forward(self, x): - fmap = [] - - # 1d to 2d - b, c, t = x.shape - if t % self.period != 0: # pad first - n_pad = self.period - (t % self.period) - if has_xpu and x.dtype == torch.bfloat16: - x = F.pad(x.to(dtype=torch.float16), (0, n_pad), "reflect").to(dtype=torch.bfloat16) - else: - x = F.pad(x, (0, n_pad), "reflect") 
- t = t + n_pad - x = x.view(b, c, t // self.period, self.period) - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap diff --git a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/setuptools/command/bdist_egg.py b/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/setuptools/command/bdist_egg.py deleted file mode 100644 index 11a1c6be28ad008b7c083c229bb0df644ec58a0e..0000000000000000000000000000000000000000 --- a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/setuptools/command/bdist_egg.py +++ /dev/null @@ -1,457 +0,0 @@ -"""setuptools.command.bdist_egg - -Build .egg distributions""" - -from distutils.dir_util import remove_tree, mkpath -from distutils import log -from types import CodeType -import sys -import os -import re -import textwrap -import marshal - -from pkg_resources import get_build_platform, Distribution -from setuptools.extension import Library -from setuptools import Command -from .._path import ensure_directory - -from sysconfig import get_path, get_python_version - - -def _get_purelib(): - return get_path("purelib") - - -def strip_module(filename): - if '.' in filename: - filename = os.path.splitext(filename)[0] - if filename.endswith('module'): - filename = filename[:-6] - return filename - - -def sorted_walk(dir): - """Do os.walk in a reproducible way, - independent of indeterministic filesystem readdir order - """ - for base, dirs, files in os.walk(dir): - dirs.sort() - files.sort() - yield base, dirs, files - - -def write_stub(resource, pyfile): - _stub_template = textwrap.dedent(""" - def __bootstrap__(): - global __bootstrap__, __loader__, __file__ - import sys, pkg_resources, importlib.util - __file__ = pkg_resources.resource_filename(__name__, %r) - __loader__ = None; del __bootstrap__, __loader__ - spec = importlib.util.spec_from_file_location(__name__,__file__) - mod = importlib.util.module_from_spec(spec) - spec.loader.exec_module(mod) - __bootstrap__() - """).lstrip() - with open(pyfile, 'w') as f: - f.write(_stub_template % resource) - - -class bdist_egg(Command): - description = "create an \"egg\" distribution" - - user_options = [ - ('bdist-dir=', 'b', - "temporary directory for creating the distribution"), - ('plat-name=', 'p', "platform name to embed in generated filenames " - "(default: %s)" % get_build_platform()), - ('exclude-source-files', None, - "remove all .py files from the generated egg"), - ('keep-temp', 'k', - "keep the pseudo-installation tree around after " + - "creating the distribution archive"), - ('dist-dir=', 'd', - "directory to put final built distributions in"), - ('skip-build', None, - "skip rebuilding everything (for testing/debugging)"), - ] - - boolean_options = [ - 'keep-temp', 'skip-build', 'exclude-source-files' - ] - - def initialize_options(self): - self.bdist_dir = None - self.plat_name = None - self.keep_temp = 0 - self.dist_dir = None - self.skip_build = 0 - self.egg_output = None - self.exclude_source_files = None - - def finalize_options(self): - ei_cmd = self.ei_cmd = self.get_finalized_command("egg_info") - self.egg_info = ei_cmd.egg_info - - if self.bdist_dir is None: - bdist_base = self.get_finalized_command('bdist').bdist_base - self.bdist_dir = os.path.join(bdist_base, 'egg') - - if self.plat_name is None: - self.plat_name = get_build_platform() - - self.set_undefined_options('bdist', ('dist_dir', 'dist_dir')) - - if self.egg_output is None: - - # 
Compute filename of the output egg - basename = Distribution( - None, None, ei_cmd.egg_name, ei_cmd.egg_version, - get_python_version(), - self.distribution.has_ext_modules() and self.plat_name - ).egg_name() - - self.egg_output = os.path.join(self.dist_dir, basename + '.egg') - - def do_install_data(self): - # Hack for packages that install data to install's --install-lib - self.get_finalized_command('install').install_lib = self.bdist_dir - - site_packages = os.path.normcase(os.path.realpath(_get_purelib())) - old, self.distribution.data_files = self.distribution.data_files, [] - - for item in old: - if isinstance(item, tuple) and len(item) == 2: - if os.path.isabs(item[0]): - realpath = os.path.realpath(item[0]) - normalized = os.path.normcase(realpath) - if normalized == site_packages or normalized.startswith( - site_packages + os.sep - ): - item = realpath[len(site_packages) + 1:], item[1] - # XXX else: raise ??? - self.distribution.data_files.append(item) - - try: - log.info("installing package data to %s", self.bdist_dir) - self.call_command('install_data', force=0, root=None) - finally: - self.distribution.data_files = old - - def get_outputs(self): - return [self.egg_output] - - def call_command(self, cmdname, **kw): - """Invoke reinitialized command `cmdname` with keyword args""" - for dirname in INSTALL_DIRECTORY_ATTRS: - kw.setdefault(dirname, self.bdist_dir) - kw.setdefault('skip_build', self.skip_build) - kw.setdefault('dry_run', self.dry_run) - cmd = self.reinitialize_command(cmdname, **kw) - self.run_command(cmdname) - return cmd - - def run(self): # noqa: C901 # is too complex (14) # FIXME - # Generate metadata first - self.run_command("egg_info") - # We run install_lib before install_data, because some data hacks - # pull their data path from the install_lib command. 
- log.info("installing library code to %s", self.bdist_dir) - instcmd = self.get_finalized_command('install') - old_root = instcmd.root - instcmd.root = None - if self.distribution.has_c_libraries() and not self.skip_build: - self.run_command('build_clib') - cmd = self.call_command('install_lib', warn_dir=0) - instcmd.root = old_root - - all_outputs, ext_outputs = self.get_ext_outputs() - self.stubs = [] - to_compile = [] - for (p, ext_name) in enumerate(ext_outputs): - filename, ext = os.path.splitext(ext_name) - pyfile = os.path.join(self.bdist_dir, strip_module(filename) + - '.py') - self.stubs.append(pyfile) - log.info("creating stub loader for %s", ext_name) - if not self.dry_run: - write_stub(os.path.basename(ext_name), pyfile) - to_compile.append(pyfile) - ext_outputs[p] = ext_name.replace(os.sep, '/') - - if to_compile: - cmd.byte_compile(to_compile) - if self.distribution.data_files: - self.do_install_data() - - # Make the EGG-INFO directory - archive_root = self.bdist_dir - egg_info = os.path.join(archive_root, 'EGG-INFO') - self.mkpath(egg_info) - if self.distribution.scripts: - script_dir = os.path.join(egg_info, 'scripts') - log.info("installing scripts to %s", script_dir) - self.call_command('install_scripts', install_dir=script_dir, - no_ep=1) - - self.copy_metadata_to(egg_info) - native_libs = os.path.join(egg_info, "native_libs.txt") - if all_outputs: - log.info("writing %s", native_libs) - if not self.dry_run: - ensure_directory(native_libs) - libs_file = open(native_libs, 'wt') - libs_file.write('\n'.join(all_outputs)) - libs_file.write('\n') - libs_file.close() - elif os.path.isfile(native_libs): - log.info("removing %s", native_libs) - if not self.dry_run: - os.unlink(native_libs) - - write_safety_flag( - os.path.join(archive_root, 'EGG-INFO'), self.zip_safe() - ) - - if os.path.exists(os.path.join(self.egg_info, 'depends.txt')): - log.warn( - "WARNING: 'depends.txt' will not be used by setuptools 0.6!\n" - "Use the install_requires/extras_require setup() args instead." 
- ) - - if self.exclude_source_files: - self.zap_pyfiles() - - # Make the archive - make_zipfile(self.egg_output, archive_root, verbose=self.verbose, - dry_run=self.dry_run, mode=self.gen_header()) - if not self.keep_temp: - remove_tree(self.bdist_dir, dry_run=self.dry_run) - - # Add to 'Distribution.dist_files' so that the "upload" command works - getattr(self.distribution, 'dist_files', []).append( - ('bdist_egg', get_python_version(), self.egg_output)) - - def zap_pyfiles(self): - log.info("Removing .py files from temporary directory") - for base, dirs, files in walk_egg(self.bdist_dir): - for name in files: - path = os.path.join(base, name) - - if name.endswith('.py'): - log.debug("Deleting %s", path) - os.unlink(path) - - if base.endswith('__pycache__'): - path_old = path - - pattern = r'(?P.+)\.(?P[^.]+)\.pyc' - m = re.match(pattern, name) - path_new = os.path.join( - base, os.pardir, m.group('name') + '.pyc') - log.info( - "Renaming file from [%s] to [%s]" - % (path_old, path_new)) - try: - os.remove(path_new) - except OSError: - pass - os.rename(path_old, path_new) - - def zip_safe(self): - safe = getattr(self.distribution, 'zip_safe', None) - if safe is not None: - return safe - log.warn("zip_safe flag not set; analyzing archive contents...") - return analyze_egg(self.bdist_dir, self.stubs) - - def gen_header(self): - return 'w' - - def copy_metadata_to(self, target_dir): - "Copy metadata (egg info) to the target_dir" - # normalize the path (so that a forward-slash in egg_info will - # match using startswith below) - norm_egg_info = os.path.normpath(self.egg_info) - prefix = os.path.join(norm_egg_info, '') - for path in self.ei_cmd.filelist.files: - if path.startswith(prefix): - target = os.path.join(target_dir, path[len(prefix):]) - ensure_directory(target) - self.copy_file(path, target) - - def get_ext_outputs(self): - """Get a list of relative paths to C extensions in the output distro""" - - all_outputs = [] - ext_outputs = [] - - paths = {self.bdist_dir: ''} - for base, dirs, files in sorted_walk(self.bdist_dir): - for filename in files: - if os.path.splitext(filename)[1].lower() in NATIVE_EXTENSIONS: - all_outputs.append(paths[base] + filename) - for filename in dirs: - paths[os.path.join(base, filename)] = (paths[base] + - filename + '/') - - if self.distribution.has_ext_modules(): - build_cmd = self.get_finalized_command('build_ext') - for ext in build_cmd.extensions: - if isinstance(ext, Library): - continue - fullname = build_cmd.get_ext_fullname(ext.name) - filename = build_cmd.get_ext_filename(fullname) - if not os.path.basename(filename).startswith('dl-'): - if os.path.exists(os.path.join(self.bdist_dir, filename)): - ext_outputs.append(filename) - - return all_outputs, ext_outputs - - -NATIVE_EXTENSIONS = dict.fromkeys('.dll .so .dylib .pyd'.split()) - - -def walk_egg(egg_dir): - """Walk an unpacked egg's contents, skipping the metadata directory""" - walker = sorted_walk(egg_dir) - base, dirs, files = next(walker) - if 'EGG-INFO' in dirs: - dirs.remove('EGG-INFO') - yield base, dirs, files - for bdf in walker: - yield bdf - - -def analyze_egg(egg_dir, stubs): - # check for existing flag in EGG-INFO - for flag, fn in safety_flags.items(): - if os.path.exists(os.path.join(egg_dir, 'EGG-INFO', fn)): - return flag - if not can_scan(): - return False - safe = True - for base, dirs, files in walk_egg(egg_dir): - for name in files: - if name.endswith('.py') or name.endswith('.pyw'): - continue - elif name.endswith('.pyc') or name.endswith('.pyo'): - # always scan, even if 
we already know we're not safe - safe = scan_module(egg_dir, base, name, stubs) and safe - return safe - - -def write_safety_flag(egg_dir, safe): - # Write or remove zip safety flag file(s) - for flag, fn in safety_flags.items(): - fn = os.path.join(egg_dir, fn) - if os.path.exists(fn): - if safe is None or bool(safe) != flag: - os.unlink(fn) - elif safe is not None and bool(safe) == flag: - f = open(fn, 'wt') - f.write('\n') - f.close() - - -safety_flags = { - True: 'zip-safe', - False: 'not-zip-safe', -} - - -def scan_module(egg_dir, base, name, stubs): - """Check whether module possibly uses unsafe-for-zipfile stuff""" - - filename = os.path.join(base, name) - if filename[:-1] in stubs: - return True # Extension module - pkg = base[len(egg_dir) + 1:].replace(os.sep, '.') - module = pkg + (pkg and '.' or '') + os.path.splitext(name)[0] - if sys.version_info < (3, 7): - skip = 12 # skip magic & date & file size - else: - skip = 16 # skip magic & reserved? & date & file size - f = open(filename, 'rb') - f.read(skip) - code = marshal.load(f) - f.close() - safe = True - symbols = dict.fromkeys(iter_symbols(code)) - for bad in ['__file__', '__path__']: - if bad in symbols: - log.warn("%s: module references %s", module, bad) - safe = False - if 'inspect' in symbols: - for bad in [ - 'getsource', 'getabsfile', 'getsourcefile', 'getfile' - 'getsourcelines', 'findsource', 'getcomments', 'getframeinfo', - 'getinnerframes', 'getouterframes', 'stack', 'trace' - ]: - if bad in symbols: - log.warn("%s: module MAY be using inspect.%s", module, bad) - safe = False - return safe - - -def iter_symbols(code): - """Yield names and strings used by `code` and its nested code objects""" - for name in code.co_names: - yield name - for const in code.co_consts: - if isinstance(const, str): - yield const - elif isinstance(const, CodeType): - for name in iter_symbols(const): - yield name - - -def can_scan(): - if not sys.platform.startswith('java') and sys.platform != 'cli': - # CPython, PyPy, etc. - return True - log.warn("Unable to analyze compiled code on this platform.") - log.warn("Please ask the author to include a 'zip_safe'" - " setting (either True or False) in the package's setup.py") - - -# Attribute names of options for commands that might need to be convinced to -# install to the egg build directory - -INSTALL_DIRECTORY_ATTRS = [ - 'install_lib', 'install_dir', 'install_data', 'install_base' -] - - -def make_zipfile(zip_filename, base_dir, verbose=0, dry_run=0, compress=True, - mode='w'): - """Create a zip file from all the files under 'base_dir'. The output - zip file will be named 'base_dir' + ".zip". Uses either the "zipfile" - Python module (if available) or the InfoZIP "zip" utility (if installed - and found on the default search path). If neither tool is available, - raises DistutilsExecError. Returns the name of the output zip file. 
- """ - import zipfile - - mkpath(os.path.dirname(zip_filename), dry_run=dry_run) - log.info("creating '%s' and adding '%s' to it", zip_filename, base_dir) - - def visit(z, dirname, names): - for name in names: - path = os.path.normpath(os.path.join(dirname, name)) - if os.path.isfile(path): - p = path[len(base_dir) + 1:] - if not dry_run: - z.write(path, p) - log.debug("adding '%s'", p) - - compression = zipfile.ZIP_DEFLATED if compress else zipfile.ZIP_STORED - if not dry_run: - z = zipfile.ZipFile(zip_filename, mode, compression=compression) - for dirname, dirs, files in sorted_walk(base_dir): - visit(z, dirname, files) - z.close() - else: - for dirname, dirs, files in sorted_walk(base_dir): - visit(None, dirname, files) - return zip_filename diff --git a/spaces/Raspberry-ai/main/app.py b/spaces/Raspberry-ai/main/app.py deleted file mode 100644 index f196a96b49b8d234987114b408415028b0dd8f22..0000000000000000000000000000000000000000 --- a/spaces/Raspberry-ai/main/app.py +++ /dev/null @@ -1,447 +0,0 @@ -import gradio as gr -import torch -from PIL import Image -import datetime -import time -#import psutil -import math -import random -from rembg import remove -import tensorflow as tf -from config import * #The config initializes the models and MAX_SEED objects -from download_js import download_primary_image_url_js #Developed by Raghav -from ai_converter import convert_gallery_image_to_ai_js #Developed by Raghav -#from raspberry_flagging import RaspberryHuggingFaceDatasetSaver #Developed by Raghav, commenting it out b/c it seemed to cause errors -import os -import gc -import GPUtil - - - -print(os.environ["PYTORCH_CUDA_ALLOC_CONF"]) - -####################################### -# Logging for whether we are using GPU or not ## -################################### -if torch.cuda.is_available(): - print("Is CUDA available: True") - # True - print(f"CUDA device: {torch.cuda.get_device_name(torch.cuda.current_device())}") - # Tesla T4 -else: - print("Is CUDA available: False") - -start_time = time.time() - -device = "cuda" if torch.cuda.is_available() else "cpu" -print(device) - - -####################################### -# Define utility functions -####################################### - -####################################### -## Rahgav's code. I can not comment or explain -####################################### -def error_str(error, title="Error"): - return f"""#### {title} - {error}""" if error else "" - -####################################### -## This is the main function that does the inference. 
it decides to execute img to imge or txt to imge based on whether users input an image ## -####################################### -def inference(model_name, prompt, guidance, steps, width=512, height=512, seed=0, img=None, strength=0.5, - neg_prompt="", samples=4): - generator = torch.Generator(device).manual_seed(seed) if seed != 0 else None #initiate random seed generator - #this part log and empty unused cuda memory to prevent out-of-memory errors# - gc.collect() - torch.cuda.empty_cache() - GPUtil.showUtilization() - #----------------------------------# - try: - if img is not None: - return img_to_img(model_name, prompt, neg_prompt, img, strength, guidance, steps, width, height, - samples, generator), None, str(datetime.datetime.utcnow()) - else: - return txt_to_img(model_name, prompt, neg_prompt, guidance, steps, width, height, samples, - generator), None, str(datetime.datetime.utcnow()) - except Exception as e: - return None, error_str(e), str(datetime.datetime.utcnow()) - -####################################### -## function to execute txt to img ## -####################################### -def txt_to_img(model_name, prompt, neg_prompt, guidance, steps, width, height, samples, generator): - print(f"{datetime.datetime.now()} txt_to_img, model: {model_name}") - - pipe = models[model_name].pipe_t2i - pipe.to(device) - pipe.enable_xformers_memory_efficient_attention() # this is a memory saving technic recommended by HF. See: https://huggingface.co/docs/diffusers/optimization/fp16 - with torch.inference_mode(): # this is a memory saving technic recommended by HF. - result = pipe( - prompt, - negative_prompt=neg_prompt, - num_inference_steps=int(steps), - guidance_scale=guidance, - num_images_per_prompt=samples, - width=width, - height=height, - generator=generator) - pipe.to('cpu') #Downgrading the pipeline to cpu will generate some warning messages but it's the most effective way to prevent out of memory issue based on my experiments. - gc.collect() - torch.cuda.empty_cache() #clean out unused Cuda memory - return replace_nsfw_images(result, model_name) - -####################################### -## This function execute img to img ## -####################################### -def img_to_img(model_name, prompt, neg_prompt, img, strength, guidance, steps, width, height, samples, generator): - - print(f"{datetime.datetime.now()} img_to_img, model: {model_name}") - - pipe = models[model_name].pipe_i2i - pipe.to(device) - pipe.enable_xformers_memory_efficient_attention() # this is memory saving technics recommended by HF. - - #The higher resolution of the input image, the more aethetic the output image will be. - #However, if the the image is too big, processing it (i.e. turning it to an embedding vector) will cause out of memory (OOM) issue. - #Hence, if the height or width of the imput image is greater than 1300, we shrink it propotionally to be no bigger than 1300. - #1300 was chosen by a rough trial and error to see what's the highest we can go without causing out of memory error. It is by no means the perfect optimal number. - size_limit = 1300 - if max(img.height,img.width)> size_limit: - ratio = min(size_limit / img.height, size_limit/ img.width) - img = img.resize((int(img.width * ratio), int(img.height * ratio)), Image.Resampling.LANCZOS) - # Most of the time, resizing once is enough. However, if the image is very square, like 1600*1600, shrinking it to 1300*1300 may - # still not be enough to prevent the OOM issue. 
so if the shorter side of the post-resizing image is still big we resize it again. - # 1024 was chosen by a couple trials. It is by no means the perfect optimal number. - if min(img.height,img.width)> 1024: - ratio = max(1024/ img.height, 1024/ img.width) - img = img.resize((int(img.width * ratio), int(img.height * ratio)), Image.Resampling.LANCZOS) - with torch.inference_mode(): # this is memory saving technics recommended by HF. - result = pipe( - prompt=prompt, - negative_prompt=neg_prompt, - image=img, - num_inference_steps=int(steps), - strength=strength, - guidance_scale=guidance, - num_images_per_prompt=samples, - generator=generator) - pipe.to('cpu') - gc.collect() - torch.cuda.empty_cache() - return replace_nsfw_images(result, model_name) - -####################################### -##This function was supposed to re-run othe inference if any one of the return images triggers NSFW, which will return a black or blank output. -##However, I never got to prioritized working on it, so now it's just a placeholder that determines whether to remove the background -##of the output images before returning them to the users based on the use case (as specified in the config file. -##E.g. currently we only remove the background for Technical drawings. -##For inspiration image usecase, we have seen that background removal don't work perfectly everytime (e.g. when the main product shares -##similar color as the background). Hence we do not remove the background. Instead, we rely on users inputting an image with plain white background -####################################### -def replace_nsfw_images(results, model_name): - if models[model_name].background_removal: - for i in range(len(results.images)): - results.images[i] = background_rm(results.images[i]) - # if results.nsfw_content_detected[i]: - # results.images[i] = Image.open("app_default_image.jpg") - # else: - # print("no nsfw detected") - return results.images - - -####################################### -## This function generates a seed and preserves it if the users chose not to specify the seed. -## If users do specify a seed, the function simply preserves it for record keeping -####################################### -def get_fixed_seed(seed): - if seed is None or seed == '' or seed == 0: - return int(random.randrange(MAX_SEED)) - return seed - -####################################### -## This function retries the inference, each time with a lower denoising strength -## It was designed for the use case where the input image has a human model, but we want only the product in the output -## Based on a few experiments, we found that the SD can't achieve this. -## so instead we let it try a few times (determined by the 'loops' parameter, each time using the output of the previous try -## as the input, in order to "gradually remove" the human model. However, because we also don't want the final output to completely -##'forgot' and deviate away from the apparel product in the original input image, we lower then denoising_strength a little in each subsequent retry. 
-## This idea didn't work out too well, so the app currently does not actively use it (Instead, it just assume there's no human model in the input image) -####################################### -def inference_loop(model_name, prompt, guidance, steps, width=512, height=512, seed=0, img=None, strength=0.5, - neg_prompt="", samples=1, loops = 4, denoising_strength_change_factor = 0.9): - #manually create a seed in order to preserve it - history = [] - for i in range(loops): - init_strength = strength - new_output = inference(model_name, prompt, guidance, steps, width, height, seed, img, strength, - neg_prompt, samples)[0][0] - img = new_output - strength = min(max(init_strength * denoising_strength_change_factor, 0.1), 1) - history.append(new_output) - return history, None, str(datetime.datetime.utcnow()) - -####################################### -## This function removes the background of the output image using the rembg library -####################################### -def background_rm(img): - output = remove(img) - new_output = Image.new("RGBA", output.size, "WHITE") # Create a white rgba background - new_output.paste(output, (0, 0), - output) - new_output = new_output.convert('RGB') - return new_output - -####################################### -## This is the inference function that the app called. It pre-processes some input parameters like the prompt, seed, before calling the actual inference function -####################################### -def inference_master(model_name, prompt, category, guidance, steps, width=512, height=512, seed=0, img=None, scale=0.5, - neg_prompt="", has_human=False, has_background=False): - #Preprocess the prompt: - # If there's no input image (i.e. in the txt-to-image use case): we take the text prompt from the users, and add some prompt engineering - # before and after the user prompt. The type of prompts engineering we do are different for Technical drawings v.s. Inspiration Image, - # and are specified in the model dictionary in the config file - # In the edge case where users leave the text prompt blanked, the current design uses the apparel category - # as the prompt rather than simpley erroring out. Currently we assume that if users leave both the image input and prompt input blank, - # they simply want to test out the app and don't have a specific apparel category in mind. - # In this case, we set the category to be "dress" for the Technical drawings and "shoes" for Inspiration image scenario (these are specified in the model dictionary in the config file, not here) - # simply because those are the first categories we tried out the most when developing each of those two use cases. - if img == None: # text to image use case - if prompt == "" or prompt is None: - prompt = category - prompt = prompt + models[model_name].token[category]['post'] - prompt = models[model_name].token[category]['pre'] + prompt - - # We always add some default negative prompt for the users, though they don't seem to work perfectly all the time (based on a few trials, not rigorous testing). 
- # If users specify a negative prompt, we prioritize honoring it by appending the default prompt behind the user-input prompt - # because SD weighs texts in the front of the sentence more than those in the back (I can't remember where I read this from) - default_neg_prompt = "ugly face, disfigured, deformed, blurry, bad anatomy, body, women, woman, man, child, men, legs, hands, skins, head, face, human, feet, ankles, hanger, accessories, colored background, buttons, purse, leaves, fur" - if neg_prompt!="": - neg_prompt = neg_prompt+', '+default_neg_prompt - else: - neg_prompt = default_neg_prompt - - # The SD use "denoising strength" to control how similar we want the output to look like the input image. Strength = 0 means identical, - # and =1 basically ignores the input image. However, we found it counter intuitive for our designer users, so we inverse it to "similarity scale" - # Here we convert the user-selected similarity scale back to the denoising strength that SD understands - strength = 1-scale - seed = get_fixed_seed(seed) - #Log the prompt, negative prompt, and the seed for debugging purpose only. In the future, we should figure out how to present these records - #to the users in the UI as well - print(f"Prompt = {prompt}") - print(f"Neg Prompt = {neg_prompt}") - print(f"Seed = {seed}") - if has_human: - strength = 0.85 # chosen arbitrarily. we just need a high value - return inference_loop(model_name, prompt, guidance, steps, width, height, seed, img, strength, - neg_prompt) - else: - return inference(model_name, prompt, guidance, steps, width, height, seed, img, strength, - neg_prompt) - -####################################### -## This function updates the app UI based on the use case chosen -## In Technical drawings, we hide the input image and similarity scale options from the users to prevent confusions because -## we can't yet turn an input product image into a technical drawings -####################################### -def change_input_option(model_name): - categories = list(models[model_name].token.keys()) - is_technical_drawing = (model_name=='Technical drawings') - if is_technical_drawing: - return {category: gr.Dropdown.update(label="What are you designing?", choices=[c for c in categories], - value = categories[0]), - image: gr.update(visible=True), - scale: gr.update(visible=True, value = 0.8) - } - else: - return {category: gr.Dropdown.update(label="What are you designing?", choices=[c for c in categories], - value = categories[0]), - image: gr.update(visible=True), - scale: gr.update(visible=True) - } -## Raghav's code - currently muted b/c it seemed to cause some errors that I don't know how to fix## -# Set up for flagging -# os.environ["GRADIO_ALLOW_FLAGGING"] = "manual" -# hf_writer_callback = RaspberryHuggingFaceDatasetSaver( -# "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", #HF_token must be a 'write' token. -# dataset_url="https://huggingface.co/datasets/Raspberry-ai/raspberry-analytics", -# repo_id="Raspberry-ai/raspberry-analytics") -## end of Raghav's code - -####################################### -# Build the Gradio app -####################################### -with gr.Blocks(css=css) as demo: - top_description = gr.HTML(f''' -
        -To automatically generate a fashion design or technical drawing: -1. Inspiration image: Generate variations of a reference product photo, turn a hiking boot into a rain boot, or describe your dream design in words. -2. Technical drawing: Generate tech packs with stitch detail from a written description. Coming soon: turn any product photo into a tech drawing! -For best results, use 1000+ pixel plain product photos (no people modeling the outfit) and detailed written descriptions. - -Play with the options to get different designs: -- Similarity scale: controls how similar outputs will be to the input image -- Seed: change this or set it to zero to ask the AI to try again -- Guidance scale: controls how much the image generation process follows the text prompt; increase to add weight to the text prompt - - ''') - ## Specify app design ## - with gr.Row(): - # In the current UI, all user inputs are grouped in a column on the left, which takes up 45% of the canvas width. - # The outputs are presented in a column on the right that takes up 55% of the canvas width. The logic behind it was an assumption - # that the users care more about and want to see the output, so it should be big and clear. However, at the same time not too big - # that it hinders users from seeing the input choices. Hence the 45/55 split. No user survey has been conducted to verify this logic. - with gr.Column(scale=45): - with gr.Tab("Create"): - with gr.Group(): - model_name = gr.Dropdown(visible=True, label="What do you want to create?", choices=list(models.keys()), - value=list(models.keys())[0]) - prompt = gr.Textbox(label="Describe your design in words", show_label=True, max_lines=2, - placeholder="e.g. Men’s waterproof work boots") # .style(container=False) - neg_prompt = gr.Textbox(label="What to exclude", placeholder="e.g. models, accessories") - # The "category" is a legacy code. To make it user friendly, we currently do not require users to choose an apparel category by setting this option to be invisible - # because we assume a user-input text or image prompt is fairly sufficient to tell SD what to generate in most cases . - category = gr.Dropdown(visible=False, label="What are you designing?", - choices=list(list(models.values())[0].token.keys()), - value=list(list(models.values())[0].token.keys())[0], - interactive=True) - # Here we group the image and scale inputs separately from other inputs above because we may hide them from users based on the use case - with gr.Column() as details_col: - with gr.Group(): - image = gr.Image(label="Input product photo", tool="editor", type="pil", - visible=True).style(height=200) - #For similarity scale, we chose 0.3 as the default value by rough trial and errors as it seems to generate inspiration images not too different from the input image but still honor the text prompt - #We truncated it at 0.2 as the lowest value users can choose from because we started to see distorted human forms - #(e.g. extremely long legs, arms show up where the legs supposed to be) when the similarity scale goes below 0.2 for input images that have human models - scale = gr.Slider(label="How similar should the outputs be to the input image? Lower for more variations, higher for most similar", - minimum=0.2, maximum=1, step=0.05, value=0.3, visible=True) - # scale_explanation = gr.HTML(f''' - #

        - # The higher the scale, the closer the output will be to the original image - #

        - # ''') - # To make it user friendly, we currently hide the following two options from the users and just assume that the input image has no human - # model and has plain white background. In the future when we figure out how to best process human models or background, we - # may turn these options back on - # has_human = gr.Checkbox(visible=False, label="Contains models?", value=False) - # has_background = gr.Checkbox(visible=False, label="Contains background?", value=False) - - # The model_name ('Technical drawings' or 'Inspiration images' as specified in the config file) will trigger an update of the app UI as described previously in the change_input_option function - model_name.change(fn=change_input_option, inputs=model_name, outputs=[category, image, scale]) - generate = gr.Button(value="Generate").style(full_width=True) - # this hidden tab contains 4 more parameters - guidance scale, number of steps in the inference, width and hiegh of the output, - # that we don't believe users will use often. The default values are what recommended by concensus on the internet - with gr.Tab("Options"): - with gr.Group(): - with gr.Row(): - guidance = gr.Slider(label="Guidance scale", value=7.5, maximum=15) - steps = gr.Slider(label="Steps", value=50, minimum=2, maximum=75, step=1) - - with gr.Row(): - #The default of 512*512 was chosen somewhat randomly for txt-to-img. For img-to-img, SD will simply preserve the size of the - # input as the size of the output, so these two parameters are not even used by the SD img-to-imge pipeline. - width = gr.Slider(label="Image width", value=512, minimum=64, maximum=1024, step=8) - height = gr.Slider(label="Image height", value=512, minimum=64, maximum=1024, step=8) - - seed = gr.Slider(0, MAX_SEED, label='Seed (0 = random)', value=0, step=1) #note: the uppder bound MAX_SEED is specified in the config file - with gr.Column(scale=55): - image_out = gr.Gallery( - label="Output", show_label=True, elem_id="gallery" - ).style(grid=[1], height='auto') - ## Rahgav's code## - error_output = gr.Markdown() - inference_execution_time = gr.Markdown(visible=False) # Do not remove - used for triggering data analytics. See comment below for details. - with gr.Row(): - download_button = gr.Button(value="Download", elem_id="download-btn") - convert_to_ai_button = gr.Button(value="Convert to .ai", elem_id="convert-to-ai") - ## end of Rahgav's code## - ## End of app design ## - - ## Execute the app with the given input and output format## - inputs = [model_name, prompt, category, guidance, steps, width, height, seed, image, scale, neg_prompt] - outputs = [image_out, error_output, inference_execution_time] - # The SD inference will be triggered (i.e. call the inference_master function) either by users pressing - # the Enter key when they complete typing the text prompt or by clicking the 'Generate' button - prompt.submit(inference_master, inputs=inputs, outputs=outputs) - generate.click(inference_master, inputs=inputs, outputs=outputs) - ## End of app execution ## - - # List of examples for Inspiration image - # Only the parameters specified below will be shown in the UI. 
Other parameters will use the default values specified in the app design section - ex_ii = gr.Examples([ - ["rubber rain boot, slip on", 3662441489, 'caterpillar_boot.jpg','Inspiration images'], - ["", 5214608943, 'high_heels.jpg','Inspiration images'], - ["leopard print", 5214608943, 'high_heels.jpg','Inspiration images'], - #["", 9659990480, 'strawberry_dress.jpg','Inspiration images'], - ["dress", 9275268464, 'green_fur_coat.jpg','Inspiration images'], - ["shirt", 4588852266, 'colorful_jacket.jpeg','Inspiration images'], - ["jeans with many cargo pockets", 4648864138, 'revolve_jeans.jpeg','Inspiration images'], - ["jeans with Frayed hem and distressed details", 7800241265, 'revolve_jeans.jpeg','Inspiration images'], - ["ruffle maxi dress", 3678929147, 'revolve_dress.jpeg','Inspiration images'], - ["oil painting floral and leaves design on the dress", 1243336480, 'revolve_dress.jpeg','Inspiration images'], - # ["punk leather jacket", 5237416518, 'punk_jacket.jpg','Inspiration images'], - # ["dress with gigantic puff sleeves and mock neck", 1558563429, "phillip_lim_dress.jpeg",'Inspiration images' ], - ["men's hiker work boots, waterproof, composite toe protection", 8011630240, None, 'Inspiration images'], - ["women's duck boot, premium quilted nylon upper with a vulcanized rubber shell", 8011630240, None, 'Inspiration images'], - ], inputs=[prompt, seed, image, model_name], outputs=outputs, fn=inference,label = "Examples of inspiration images", cache_examples=False) - - # List of examples for Technical drawings - # Only the parameters specified below will be shown in the UI. Other parameters will use the default values specified in the app design section - ex_td = gr.Examples([ - ["midi shirt dress with the classic point collar, short roll up sleeves, button down along the front, two chest pockets, tie waist", 2773414568, None, 'Technical drawings'], - ["maxi dress, short puff sleeves, a tiered construction with 4 tiers and tasseled necktie, splitneck, elasticised cuffs, button front with 3 buttons, waist spaghetti tie, embroidered hem", - 3153349073, None, 'Technical drawings'], - [" mini dress with a cross neck wrap, sharp lines, a slashed cut out at the waist. Finished with ruched sides, this sheath-silhouetted style drapes with a jersey fabrication", - 4715243121, None, 'Technical drawings'], - ["midi babydoll bralette dress, buckle belt with belt holes, button down under below the belt along the front, curved neckline band, shoulder straps, sleeveless", - 5687562474, None, 'Technical drawings'], - ], inputs=[prompt, seed, image, model_name], outputs=outputs, fn=inference,label = "Examples of technical drawings", cache_examples=False) - - ######### Data analytics and Download functionality below - Rahgav's code ######### - # data_for_logging = [model_name, prompt, category, guidance, steps, image, neg_prompt, image_out] - # print("data for logging:", data_for_logging) - # # This needs to be called at some point prior to the first call to callback.flag() - # hf_writer_callback.setup(data_for_logging, "flagged_data_points") - # Gradio does not automatically log output because it has no way of tracking when inference has completed. See https://github.com/gradio-app/gradio/pull/2695 - # An approach to solve for this is to manually log (i.e. call the flag()) using an output component with an event listener. - # inference_execution_time, a markdown component, is updated whenever the inference method is called, which means we log after inference. 
- # See https://github.com/gradio-app/gradio/issues/2560 for reference. - # inference_execution_time.change( - # lambda *args: hf_writer_callback.flag(args), - # data_for_logging, - # None, - # show_progress=False, - # preprocess=False # See https://gradio.app/using_flagging/#flagging-with-blocks - # ) - # Download button with injected javascript to force download a file when clicked. - # Note: Passing a python function with the _js parameter set breaks download functionality. Not sure why. - download_button.click( - None, - inputs=[], - outputs=[], - _js=download_primary_image_url_js) - # Convert the primary image in the gallery to .ai file - convert_to_ai_button.click( - None, - inputs=[], - outputs=[], - _js=convert_gallery_image_to_ai_js) - - -print(f"Space built in {time.time() - start_time:.2f} seconds") -## End of Rahgav's code - -####################################### -# Launch the Gradio app -####################################### -#concurrency_count = 1 is the default value used by Gradio that I've seen other HF spaces use so I copied. I did NOT experiment other values. -#It controls the number of worker threads that will be processing requests from the queue concurrently. -#Increasing this number will increase the rate at which requests are processed, but will also increase the memory usage of the queue. -demo.queue(concurrency_count=1) -demo.launch(debug=True, share=False) - - - -# \ No newline at end of file diff --git a/spaces/Ricecake123/RVC-demo/lib/infer_pack/onnx_inference.py b/spaces/Ricecake123/RVC-demo/lib/infer_pack/onnx_inference.py deleted file mode 100644 index 6517853be49e61c427cf7cd9b5ed203f6d5f367e..0000000000000000000000000000000000000000 --- a/spaces/Ricecake123/RVC-demo/lib/infer_pack/onnx_inference.py +++ /dev/null @@ -1,145 +0,0 @@ -import onnxruntime -import librosa -import numpy as np -import soundfile - - -class ContentVec: - def __init__(self, vec_path="pretrained/vec-768-layer-12.onnx", device=None): - print("load model(s) from {}".format(vec_path)) - if device == "cpu" or device is None: - providers = ["CPUExecutionProvider"] - elif device == "cuda": - providers = ["CUDAExecutionProvider", "CPUExecutionProvider"] - elif device == "dml": - providers = ["DmlExecutionProvider"] - else: - raise RuntimeError("Unsportted Device") - self.model = onnxruntime.InferenceSession(vec_path, providers=providers) - - def __call__(self, wav): - return self.forward(wav) - - def forward(self, wav): - feats = wav - if feats.ndim == 2: # double channels - feats = feats.mean(-1) - assert feats.ndim == 1, feats.ndim - feats = np.expand_dims(np.expand_dims(feats, 0), 0) - onnx_input = {self.model.get_inputs()[0].name: feats} - logits = self.model.run(None, onnx_input)[0] - return logits.transpose(0, 2, 1) - - -def get_f0_predictor(f0_predictor, hop_length, sampling_rate, **kargs): - if f0_predictor == "pm": - from lib.infer_pack.modules.F0Predictor.PMF0Predictor import PMF0Predictor - - f0_predictor_object = PMF0Predictor( - hop_length=hop_length, sampling_rate=sampling_rate - ) - elif f0_predictor == "harvest": - from lib.infer_pack.modules.F0Predictor.HarvestF0Predictor import ( - HarvestF0Predictor, - ) - - f0_predictor_object = HarvestF0Predictor( - hop_length=hop_length, sampling_rate=sampling_rate - ) - elif f0_predictor == "dio": - from lib.infer_pack.modules.F0Predictor.DioF0Predictor import DioF0Predictor - - f0_predictor_object = DioF0Predictor( - hop_length=hop_length, sampling_rate=sampling_rate - ) - else: - raise Exception("Unknown f0 predictor") - return 
f0_predictor_object - - -class OnnxRVC: - def __init__( - self, - model_path, - sr=40000, - hop_size=512, - vec_path="vec-768-layer-12", - device="cpu", - ): - vec_path = f"pretrained/{vec_path}.onnx" - self.vec_model = ContentVec(vec_path, device) - if device == "cpu" or device is None: - providers = ["CPUExecutionProvider"] - elif device == "cuda": - providers = ["CUDAExecutionProvider", "CPUExecutionProvider"] - elif device == "dml": - providers = ["DmlExecutionProvider"] - else: - raise RuntimeError("Unsportted Device") - self.model = onnxruntime.InferenceSession(model_path, providers=providers) - self.sampling_rate = sr - self.hop_size = hop_size - - def forward(self, hubert, hubert_length, pitch, pitchf, ds, rnd): - onnx_input = { - self.model.get_inputs()[0].name: hubert, - self.model.get_inputs()[1].name: hubert_length, - self.model.get_inputs()[2].name: pitch, - self.model.get_inputs()[3].name: pitchf, - self.model.get_inputs()[4].name: ds, - self.model.get_inputs()[5].name: rnd, - } - return (self.model.run(None, onnx_input)[0] * 32767).astype(np.int16) - - def inference( - self, - raw_path, - sid, - f0_method="dio", - f0_up_key=0, - pad_time=0.5, - cr_threshold=0.02, - ): - f0_min = 50 - f0_max = 1100 - f0_mel_min = 1127 * np.log(1 + f0_min / 700) - f0_mel_max = 1127 * np.log(1 + f0_max / 700) - f0_predictor = get_f0_predictor( - f0_method, - hop_length=self.hop_size, - sampling_rate=self.sampling_rate, - threshold=cr_threshold, - ) - wav, sr = librosa.load(raw_path, sr=self.sampling_rate) - org_length = len(wav) - if org_length / sr > 50.0: - raise RuntimeError("Reached Max Length") - - wav16k = librosa.resample(wav, orig_sr=self.sampling_rate, target_sr=16000) - wav16k = wav16k - - hubert = self.vec_model(wav16k) - hubert = np.repeat(hubert, 2, axis=2).transpose(0, 2, 1).astype(np.float32) - hubert_length = hubert.shape[1] - - pitchf = f0_predictor.compute_f0(wav, hubert_length) - pitchf = pitchf * 2 ** (f0_up_key / 12) - pitch = pitchf.copy() - f0_mel = 1127 * np.log(1 + pitch / 700) - f0_mel[f0_mel > 0] = (f0_mel[f0_mel > 0] - f0_mel_min) * 254 / ( - f0_mel_max - f0_mel_min - ) + 1 - f0_mel[f0_mel <= 1] = 1 - f0_mel[f0_mel > 255] = 255 - pitch = np.rint(f0_mel).astype(np.int64) - - pitchf = pitchf.reshape(1, len(pitchf)).astype(np.float32) - pitch = pitch.reshape(1, len(pitch)) - ds = np.array([sid]).astype(np.int64) - - rnd = np.random.randn(1, 192, hubert_length).astype(np.float32) - hubert_length = np.array([hubert_length]).astype(np.int64) - - out_wav = self.forward(hubert, hubert_length, pitch, pitchf, ds, rnd).squeeze() - out_wav = np.pad(out_wav, (0, 2 * self.hop_size), "constant") - return out_wav[0:org_length] diff --git a/spaces/Riksarkivet/htr_demo/helper/text/overview/faq_discussion/faq.md b/spaces/Riksarkivet/htr_demo/helper/text/overview/faq_discussion/faq.md deleted file mode 100644 index d7d344e343b3aa530dfdd123aa7b39d3e3991287..0000000000000000000000000000000000000000 --- a/spaces/Riksarkivet/htr_demo/helper/text/overview/faq_discussion/faq.md +++ /dev/null @@ -1,13 +0,0 @@ -## Frequently Asked Questions - -**Q**: Is my data secure? Can I upload my own images? -**A**: Absolutely. Uploaded files are not saved or stored. - -**Q**: Why am I always in a queue? -**A**: This is due to hardware constraints and rate limits imposed by Hugging Face. For alternative ways to use the app, refer to the tab > **Documentation** under > **Duplication for Own Use & API**. - -**Q**: Why is Fast track so slow? 
-**A**: The current speed is due to hardware limitations and the present state of the code. However, we plan to update the application in future releases, which will significantly improve the performance of the application. - -**Q**: Is it possible to run Fast track or the API on image batches? -**A**: Not currently, but we plan to implement this feature in the future. diff --git a/spaces/Ritori/play_with_baby_llama2/README.md b/spaces/Ritori/play_with_baby_llama2/README.md deleted file mode 100644 index 35d5cd948be067fe4cce5182344521ba13adcf5c..0000000000000000000000000000000000000000 --- a/spaces/Ritori/play_with_baby_llama2/README.md +++ /dev/null @@ -1,158 +0,0 @@ ---- -title: play_with_baby_llama2 -app_file: baby_llama2.py -sdk: gradio -sdk_version: 3.38.0 ---- - -## llama2.c - -Have you ever wanted to inference a baby [Llama 2](https://ai.meta.com/llama/) model in pure C? No? Well, now you can! - - - -With this code you can train the Llama 2 LLM architecture from scratch in PyTorch, then save the weights to a raw binary file, then load that into one ~simple 500-line C file ([run.c](run.c)) that inferences the model, simply in fp32 for now. On my cloud Linux devbox a dim 288 6-layer 6-head model (~15M params) inferences at ~100 tok/s in fp32, and about the same on my M1 MacBook Air. I was somewhat pleasantly surprised that one can run reasonably sized models (few ten million params) at highly interactive rates with an approach this simple. - -Please note that this is just a weekend project: I took nanoGPT, tuned it to implement the Llama-2 architecture instead of GPT-2, and the meat of it was writing the C inference engine in [run.c](run.c). As such, this is not really meant to be a production-grade library right now. - -Hat tip to [llama.cpp](https://github.com/ggerganov/llama.cpp) for inspiring this project. I wanted something super minimal so I chose to hard-code the llama-2 architecture, stick to fp32, and just roll one inference file of pure C with no dependencies. - -## feel the magic - -Let's just run a baby Llama 2 model in C. You need a model checkpoint. Download this 15M parameter model I trained on the [TinyStories](https://huggingface.co/datasets/roneneldan/TinyStories) dataset (~58MB download) and place it into the default checkpoint directory `out`: - -```bash -wget https://karpathy.ai/llama2c/model.bin -P out -``` - -(if that doesn't work try [google drive](https://drive.google.com/file/d/1aTimLdx3JktDXxcHySNrZJOOk8Vb1qBR/view?usp=share_link)). Compile and run the C code: - -```bash -gcc -O3 -o run run.c -lm -./run out/model.bin -``` - -You'll see the text stream a sample. On my M1 MacBook Air this runs at ~100 tokens/s, not bad for super naive fp32 single-threaded C code. See [performance](#performance) for compile flags that can significantly speed this up. Sample output: - -*Once upon a time, there was a boy named Timmy. Timmy loved to play sports with his friends. He was very good at throwing and catching balls. One day, Timmy's mom gave him a new shirt to wear to a party. Timmy thought it was impressive and asked his mom to explain what a shirt could be for. "A shirt is like a special suit for a basketball game," his mom said. Timmy was happy to hear that and put on his new shirt. He felt like a soldier going to the army and shouting. From that day on, Timmy wore his new shirt every time he played sports with his friends at the party. Once upon a time, there was a little girl named Lily. She loved to play outside with her friends. 
One day, Lily and her friend Emma were playing with a ball. Emma threw the ball too hard and it hit Lily's face. Lily felt embarrassed and didn't want to play anymore. -Emma asked Lily what was wrong, and Lily told her about her memory. Emma told Lily that she was embarrassed because she had thrown the ball too hard. Lily felt bad -achieved tok/s: 98.746993347843922* - -**Update**: I've now also uploaded a bigger checkpoint. This one is dim 512, 8 layers, 8 heads and context length 1024, a ~44M param Transformer. It trained for 200K iterations batch size 32 on 4XA100 40GB GPUs in ~8 hours. You can use this bigger and more powerful checkpoint like so: - -```bash -wget https://karpathy.ai/llama2c/model44m.bin -P out44m -./run out44m/model44m.bin -``` - -On my MacBook Air compiled with $ gcc -Ofast -o run run.c -lm this ran at ~150 tok/s. Still way too fast! I have to train an even bigger checkpoint... This model samples more coherent and diverse stories: - -*Once upon a time, there was a little girl named Lily. She loved playing with her toys on top of her bed. One day, she decided to have a tea party with her stuffed animals. She poured some tea into a tiny teapot and put it on top of the teapot. Suddenly, her little brother Max came into the room and wanted to join the tea party too. Lily didn't want to share her tea and she told Max to go away. Max started to cry and Lily felt bad. She decided to yield her tea party to Max and they both shared the teapot. But then, something unexpected happened. The teapot started to shake and wiggle. Lily and Max were scared and didn't know what to do. Suddenly, the teapot started to fly towards the ceiling and landed on the top of the bed. Lily and Max were amazed and they hugged each other. They realized that sharing was much more fun than being selfish. From that day on, they always shared their tea parties and toys.* - -## howto - -It should be possible to load the weights released by Meta but I haven't tried because the inference speed, even of the 7B model, would probably be not great with this baby single-threaded C program. So in this repo we focus on more narrow applications, and train the same architecture but from scratch, in this case on the TinyStories dataset for fun. - -First let's download and pretokenize some source dataset, e.g. I like [TinyStories](https://huggingface.co/datasets/roneneldan/TinyStories) so this is the only example currently available in this repo. But it should be very easy to add datasets, see the code. - -```bash -python tinystories.py download -python tinystories.py pretokenize -``` - -Then train our model: - -```bash -python train.py -``` - -See the train.py script for more exotic launches and hyperparameter overrides. I didn't tune the hyperparameters, I expect simple hyperparameter exploration should give better models. Totally understand if you want to skip model training, for simple demo just download my pretrained model and save it into the directory `out`: - -```bash -wget https://karpathy.ai/llama2c/model.bin -P out -``` - -Once we have the model.bin file, we can inference in C. Compile the C code first: - -```bash -gcc -O3 -o run run.c -lm -``` - -You can now run it simply as - -```bash -./run out/model.bin -``` - -Watch the tokens stream by, fun! 
We can also run the PyTorch inference script for comparison (to run, add [model.ckpt](https://drive.google.com/file/d/1SM0rMxzy7babB-v4MfTg1GFqOCgWar5w/view?usp=share_link) to /out if you haven't already): - -```bash -python sample.py -``` - -Which gives the same results. More detailed testing will be done in `test_all.py`, run as: - -```bash -$ pytest -``` - -Currently you will need two files to test or sample: the [model.bin](https://drive.google.com/file/d/1aTimLdx3JktDXxcHySNrZJOOk8Vb1qBR/view?usp=share_link) file and the [model.ckpt](https://drive.google.com/file/d/1SM0rMxzy7babB-v4MfTg1GFqOCgWar5w/view?usp=share_link) file from PyTorch training I ran earlier. I have to think through running the tests without having to download 200MB of data. - -## performance - -*(NOTE: this guide is not great because I personally spend a lot of my time in Python land and don't have an amazing understanding of a lot of these features and flags. If someone does and is willing to help document and briefly describe some of these and their tradeoffs, I'd welcome a PR)* - -There are many ways to potentially speed up this code depending on your system. Here we document a few together with a high-level guide on what they do. Here's again the default way to compile, but using -O3: - -```bash -gcc -O3 -o run run.c -lm -``` - --O3 includes optimizations that are expensive in terms of compile time and memory usage. Including vectorization, loop unrolling, and predicting branches. Here's a few more to try. - -`-Ofast` Run additional optimizations which may break compliance with the C/IEEE specifications, in addition to `-O3`. See [the GCC docs](https://gcc.gnu.org/onlinedocs/gcc/Optimize-Options.html) for more information. - -`-ffast-math` breaks IEEE compliance, e.g. allowing reordering of operations, disables a bunch of checks for e.g. NaNs (assuming they don't happen), enables reciprocal approximations, disables signed zero, etc. However, there is a good reason to be suspicious of this setting, one good writeup is here: ["Beware of fast-math"](https://simonbyrne.github.io/notes/fastmath/). - -`-funsafe-math-optimizations` a more limited form of -ffast-math, that still breaks IEEE compliance but doesn't have all of the numeric/error handling changes from `-ffasth-math`. See [the GCC docs](https://gcc.gnu.org/wiki/FloatingPointMath) for more information. - -`-march=native` Compile the program to use the architecture of the machine you're compiling on rather than a more generic CPU. This may enable additional optimizations and hardware-specific tuning such as improved vector instructions/width. - -Putting a few of these together, the fastest throughput I saw so far on my MacBook Air (M1) is with: - -```bash -gcc -Ofast -o run run.c -lm -``` - -Also, I saw someone report higher throughput replacing `gcc` with `clang`. - -**OpenMP** Big improvements can also be achieved by compiling with OpenMP, which "activates" the `#pragma omp parallel for` inside the matmul. You can compile e.g. like so: - -```bash -clang -Ofast -fopenmp -march=native run.c -lm -o run -``` - -(I believe you can swap clang/gcc, and may try to leave out -march=native). Then when you run inference, make sure to use OpenMP flags to set the number of threads, e.g.: - -```bash -OMP_NUM_THREADS=4 ./run out/model.bin -``` - -Depending on your system resources you may want to tweak these hyperparameters. (TODO: I am not intimitely familiar with OpenMP and its configuration, if someone would like to flesh out this section I would welcome a PR). 
- -## unsorted todos - -- why is there a leading space in C sampling code when we `./run`? -- todo multiquery support? doesn't seem as useful for smaller models that run on CPU (?) -- todo support inferencing beyond max_seq_len steps, have to think through the kv cache -- why is MFU so low (~10%) on my A100 40GB for training? -- weird errors with torch.compile and wandb when using DDP -- make more better tests to decrease yolo - -## ack - -I trained the llama2.c storyteller models on a 4X A100 40GB box graciously provided by the excellent [Lambda labs](https://lambdalabs.com/service/gpu-cloud), thank you. - -## License - -MIT diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet_null/models/roi_heads/double_roi_head.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet_null/models/roi_heads/double_roi_head.py deleted file mode 100644 index a1aa6c8244a889fbbed312a89574c3e11be294f0..0000000000000000000000000000000000000000 --- a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet_null/models/roi_heads/double_roi_head.py +++ /dev/null @@ -1,33 +0,0 @@ -from ..builder import HEADS -from .standard_roi_head import StandardRoIHead - - -@HEADS.register_module() -class DoubleHeadRoIHead(StandardRoIHead): - """RoI head for Double Head RCNN. - - https://arxiv.org/abs/1904.06493 - """ - - def __init__(self, reg_roi_scale_factor, **kwargs): - super(DoubleHeadRoIHead, self).__init__(**kwargs) - self.reg_roi_scale_factor = reg_roi_scale_factor - - def _bbox_forward(self, x, rois): - """Box head forward function used in both training and testing time.""" - bbox_cls_feats = self.bbox_roi_extractor( - x[:self.bbox_roi_extractor.num_inputs], rois) - bbox_reg_feats = self.bbox_roi_extractor( - x[:self.bbox_roi_extractor.num_inputs], - rois, - roi_scale_factor=self.reg_roi_scale_factor) - if self.with_shared_head: - bbox_cls_feats = self.shared_head(bbox_cls_feats) - bbox_reg_feats = self.shared_head(bbox_reg_feats) - cls_score, bbox_pred = self.bbox_head(bbox_cls_feats, bbox_reg_feats) - - bbox_results = dict( - cls_score=cls_score, - bbox_pred=bbox_pred, - bbox_feats=bbox_cls_feats) - return bbox_results diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmseg/models/utils/up_conv_block.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmseg/models/utils/up_conv_block.py deleted file mode 100644 index 378469da76cb7bff6a639e7877b3c275d50490fb..0000000000000000000000000000000000000000 --- a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmseg/models/utils/up_conv_block.py +++ /dev/null @@ -1,101 +0,0 @@ -import torch -import torch.nn as nn -from annotator.uniformer.mmcv.cnn import ConvModule, build_upsample_layer - - -class UpConvBlock(nn.Module): - """Upsample convolution block in decoder for UNet. - - This upsample convolution block consists of one upsample module - followed by one convolution block. The upsample module expands the - high-level low-resolution feature map and the convolution block fuses - the upsampled high-level low-resolution feature map and the low-level - high-resolution feature map from encoder. - - Args: - conv_block (nn.Sequential): Sequential of convolutional layers. - in_channels (int): Number of input channels of the high-level - skip_channels (int): Number of input channels of the low-level - high-resolution feature map from encoder. - out_channels (int): Number of output channels. - num_convs (int): Number of convolutional layers in the conv_block. - Default: 2. 
- stride (int): Stride of convolutional layer in conv_block. Default: 1. - dilation (int): Dilation rate of convolutional layer in conv_block. - Default: 1. - with_cp (bool): Use checkpoint or not. Using checkpoint will save some - memory while slowing down the training speed. Default: False. - conv_cfg (dict | None): Config dict for convolution layer. - Default: None. - norm_cfg (dict | None): Config dict for normalization layer. - Default: dict(type='BN'). - act_cfg (dict | None): Config dict for activation layer in ConvModule. - Default: dict(type='ReLU'). - upsample_cfg (dict): The upsample config of the upsample module in - decoder. Default: dict(type='InterpConv'). If the size of - high-level feature map is the same as that of skip feature map - (low-level feature map from encoder), it does not need upsample the - high-level feature map and the upsample_cfg is None. - dcn (bool): Use deformable convolution in convolutional layer or not. - Default: None. - plugins (dict): plugins for convolutional layers. Default: None. - """ - - def __init__(self, - conv_block, - in_channels, - skip_channels, - out_channels, - num_convs=2, - stride=1, - dilation=1, - with_cp=False, - conv_cfg=None, - norm_cfg=dict(type='BN'), - act_cfg=dict(type='ReLU'), - upsample_cfg=dict(type='InterpConv'), - dcn=None, - plugins=None): - super(UpConvBlock, self).__init__() - assert dcn is None, 'Not implemented yet.' - assert plugins is None, 'Not implemented yet.' - - self.conv_block = conv_block( - in_channels=2 * skip_channels, - out_channels=out_channels, - num_convs=num_convs, - stride=stride, - dilation=dilation, - with_cp=with_cp, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg, - dcn=None, - plugins=None) - if upsample_cfg is not None: - self.upsample = build_upsample_layer( - cfg=upsample_cfg, - in_channels=in_channels, - out_channels=skip_channels, - with_cp=with_cp, - norm_cfg=norm_cfg, - act_cfg=act_cfg) - else: - self.upsample = ConvModule( - in_channels, - skip_channels, - kernel_size=1, - stride=1, - padding=0, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg) - - def forward(self, skip, x): - """Forward function.""" - - x = self.upsample(x) - out = torch.cat([skip, x], dim=1) - out = self.conv_block(out) - - return out diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer_base/mmseg/core/utils/misc.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer_base/mmseg/core/utils/misc.py deleted file mode 100644 index eb862a82bd47c8624db3dd5c6fb6ad8a03b62466..0000000000000000000000000000000000000000 --- a/spaces/Robert001/UniControl-Demo/annotator/uniformer_base/mmseg/core/utils/misc.py +++ /dev/null @@ -1,17 +0,0 @@ -def add_prefix(inputs, prefix): - """Add prefix for dict. - - Args: - inputs (dict): The input dict with str keys. - prefix (str): The prefix to add. - - Returns: - - dict: The dict with keys updated with ``prefix``. 
- """ - - outputs = dict() - for name, value in inputs.items(): - outputs[f'{prefix}.{name}'] = value - - return outputs diff --git a/spaces/Rongjiehuang/ProDiff/utils/audio.py b/spaces/Rongjiehuang/ProDiff/utils/audio.py deleted file mode 100644 index aba7ab926cf793d085bbdc70c97f376001183fe1..0000000000000000000000000000000000000000 --- a/spaces/Rongjiehuang/ProDiff/utils/audio.py +++ /dev/null @@ -1,56 +0,0 @@ -import subprocess -import matplotlib - -matplotlib.use('Agg') -import librosa -import librosa.filters -import numpy as np -from scipy import signal -from scipy.io import wavfile - - -def save_wav(wav, path, sr, norm=False): - if norm: - wav = wav / np.abs(wav).max() - wav *= 32767 - # proposed by @dsmiller - wavfile.write(path, sr, wav.astype(np.int16)) - - -def get_hop_size(hparams): - hop_size = hparams['hop_size'] - if hop_size is None: - assert hparams['frame_shift_ms'] is not None - hop_size = int(hparams['frame_shift_ms'] / 1000 * hparams['audio_sample_rate']) - return hop_size - - -########################################################################################### -def _stft(y, hparams): - return librosa.stft(y=y, n_fft=hparams['fft_size'], hop_length=get_hop_size(hparams), - win_length=hparams['win_size'], pad_mode='constant') - - -def _istft(y, hparams): - return librosa.istft(y, hop_length=get_hop_size(hparams), win_length=hparams['win_size']) - - -def librosa_pad_lr(x, fsize, fshift, pad_sides=1): - '''compute right padding (final frame) or both sides padding (first and final frames) - ''' - assert pad_sides in (1, 2) - # return int(fsize // 2) - pad = (x.shape[0] // fshift + 1) * fshift - x.shape[0] - if pad_sides == 1: - return 0, pad - else: - return pad // 2, pad // 2 + pad % 2 - - -# Conversions -def amp_to_db(x): - return 20 * np.log10(np.maximum(1e-5, x)) - - -def normalize(S, hparams): - return (S - hparams['min_level_db']) / -hparams['min_level_db'] diff --git a/spaces/Sense-X/uniformer_image_demo/imagenet_class_index.py b/spaces/Sense-X/uniformer_image_demo/imagenet_class_index.py deleted file mode 100644 index 5407d1471197fd9bfa466c47d8dfb6683cb9f551..0000000000000000000000000000000000000000 --- a/spaces/Sense-X/uniformer_image_demo/imagenet_class_index.py +++ /dev/null @@ -1,1002 +0,0 @@ -imagenet_classnames = { - "0": ["n01440764", "tench"], - "1": ["n01443537", "goldfish"], - "2": ["n01484850", "great_white_shark"], - "3": ["n01491361", "tiger_shark"], - "4": ["n01494475", "hammerhead"], - "5": ["n01496331", "electric_ray"], - "6": ["n01498041", "stingray"], - "7": ["n01514668", "cock"], - "8": ["n01514859", "hen"], - "9": ["n01518878", "ostrich"], - "10": ["n01530575", "brambling"], - "11": ["n01531178", "goldfinch"], - "12": ["n01532829", "house_finch"], - "13": ["n01534433", "junco"], - "14": ["n01537544", "indigo_bunting"], - "15": ["n01558993", "robin"], - "16": ["n01560419", "bulbul"], - "17": ["n01580077", "jay"], - "18": ["n01582220", "magpie"], - "19": ["n01592084", "chickadee"], - "20": ["n01601694", "water_ouzel"], - "21": ["n01608432", "kite"], - "22": ["n01614925", "bald_eagle"], - "23": ["n01616318", "vulture"], - "24": ["n01622779", "great_grey_owl"], - "25": ["n01629819", "European_fire_salamander"], - "26": ["n01630670", "common_newt"], - "27": ["n01631663", "eft"], - "28": ["n01632458", "spotted_salamander"], - "29": ["n01632777", "axolotl"], - "30": ["n01641577", "bullfrog"], - "31": ["n01644373", "tree_frog"], - "32": ["n01644900", "tailed_frog"], - "33": ["n01664065", "loggerhead"], - "34": ["n01665541", "leatherback_turtle"], 
- "35": ["n01667114", "mud_turtle"], - "36": ["n01667778", "terrapin"], - "37": ["n01669191", "box_turtle"], - "38": ["n01675722", "banded_gecko"], - "39": ["n01677366", "common_iguana"], - "40": ["n01682714", "American_chameleon"], - "41": ["n01685808", "whiptail"], - "42": ["n01687978", "agama"], - "43": ["n01688243", "frilled_lizard"], - "44": ["n01689811", "alligator_lizard"], - "45": ["n01692333", "Gila_monster"], - "46": ["n01693334", "green_lizard"], - "47": ["n01694178", "African_chameleon"], - "48": ["n01695060", "Komodo_dragon"], - "49": ["n01697457", "African_crocodile"], - "50": ["n01698640", "American_alligator"], - "51": ["n01704323", "triceratops"], - "52": ["n01728572", "thunder_snake"], - "53": ["n01728920", "ringneck_snake"], - "54": ["n01729322", "hognose_snake"], - "55": ["n01729977", "green_snake"], - "56": ["n01734418", "king_snake"], - "57": ["n01735189", "garter_snake"], - "58": ["n01737021", "water_snake"], - "59": ["n01739381", "vine_snake"], - "60": ["n01740131", "night_snake"], - "61": ["n01742172", "boa_constrictor"], - "62": ["n01744401", "rock_python"], - "63": ["n01748264", "Indian_cobra"], - "64": ["n01749939", "green_mamba"], - "65": ["n01751748", "sea_snake"], - "66": ["n01753488", "horned_viper"], - "67": ["n01755581", "diamondback"], - "68": ["n01756291", "sidewinder"], - "69": ["n01768244", "trilobite"], - "70": ["n01770081", "harvestman"], - "71": ["n01770393", "scorpion"], - "72": ["n01773157", "black_and_gold_garden_spider"], - "73": ["n01773549", "barn_spider"], - "74": ["n01773797", "garden_spider"], - "75": ["n01774384", "black_widow"], - "76": ["n01774750", "tarantula"], - "77": ["n01775062", "wolf_spider"], - "78": ["n01776313", "tick"], - "79": ["n01784675", "centipede"], - "80": ["n01795545", "black_grouse"], - "81": ["n01796340", "ptarmigan"], - "82": ["n01797886", "ruffed_grouse"], - "83": ["n01798484", "prairie_chicken"], - "84": ["n01806143", "peacock"], - "85": ["n01806567", "quail"], - "86": ["n01807496", "partridge"], - "87": ["n01817953", "African_grey"], - "88": ["n01818515", "macaw"], - "89": ["n01819313", "sulphur-crested_cockatoo"], - "90": ["n01820546", "lorikeet"], - "91": ["n01824575", "coucal"], - "92": ["n01828970", "bee_eater"], - "93": ["n01829413", "hornbill"], - "94": ["n01833805", "hummingbird"], - "95": ["n01843065", "jacamar"], - "96": ["n01843383", "toucan"], - "97": ["n01847000", "drake"], - "98": ["n01855032", "red-breasted_merganser"], - "99": ["n01855672", "goose"], - "100": ["n01860187", "black_swan"], - "101": ["n01871265", "tusker"], - "102": ["n01872401", "echidna"], - "103": ["n01873310", "platypus"], - "104": ["n01877812", "wallaby"], - "105": ["n01882714", "koala"], - "106": ["n01883070", "wombat"], - "107": ["n01910747", "jellyfish"], - "108": ["n01914609", "sea_anemone"], - "109": ["n01917289", "brain_coral"], - "110": ["n01924916", "flatworm"], - "111": ["n01930112", "nematode"], - "112": ["n01943899", "conch"], - "113": ["n01944390", "snail"], - "114": ["n01945685", "slug"], - "115": ["n01950731", "sea_slug"], - "116": ["n01955084", "chiton"], - "117": ["n01968897", "chambered_nautilus"], - "118": ["n01978287", "Dungeness_crab"], - "119": ["n01978455", "rock_crab"], - "120": ["n01980166", "fiddler_crab"], - "121": ["n01981276", "king_crab"], - "122": ["n01983481", "American_lobster"], - "123": ["n01984695", "spiny_lobster"], - "124": ["n01985128", "crayfish"], - "125": ["n01986214", "hermit_crab"], - "126": ["n01990800", "isopod"], - "127": ["n02002556", "white_stork"], - "128": ["n02002724", 
"black_stork"], - "129": ["n02006656", "spoonbill"], - "130": ["n02007558", "flamingo"], - "131": ["n02009229", "little_blue_heron"], - "132": ["n02009912", "American_egret"], - "133": ["n02011460", "bittern"], - "134": ["n02012849", "crane"], - "135": ["n02013706", "limpkin"], - "136": ["n02017213", "European_gallinule"], - "137": ["n02018207", "American_coot"], - "138": ["n02018795", "bustard"], - "139": ["n02025239", "ruddy_turnstone"], - "140": ["n02027492", "red-backed_sandpiper"], - "141": ["n02028035", "redshank"], - "142": ["n02033041", "dowitcher"], - "143": ["n02037110", "oystercatcher"], - "144": ["n02051845", "pelican"], - "145": ["n02056570", "king_penguin"], - "146": ["n02058221", "albatross"], - "147": ["n02066245", "grey_whale"], - "148": ["n02071294", "killer_whale"], - "149": ["n02074367", "dugong"], - "150": ["n02077923", "sea_lion"], - "151": ["n02085620", "Chihuahua"], - "152": ["n02085782", "Japanese_spaniel"], - "153": ["n02085936", "Maltese_dog"], - "154": ["n02086079", "Pekinese"], - "155": ["n02086240", "Shih-Tzu"], - "156": ["n02086646", "Blenheim_spaniel"], - "157": ["n02086910", "papillon"], - "158": ["n02087046", "toy_terrier"], - "159": ["n02087394", "Rhodesian_ridgeback"], - "160": ["n02088094", "Afghan_hound"], - "161": ["n02088238", "basset"], - "162": ["n02088364", "beagle"], - "163": ["n02088466", "bloodhound"], - "164": ["n02088632", "bluetick"], - "165": ["n02089078", "black-and-tan_coonhound"], - "166": ["n02089867", "Walker_hound"], - "167": ["n02089973", "English_foxhound"], - "168": ["n02090379", "redbone"], - "169": ["n02090622", "borzoi"], - "170": ["n02090721", "Irish_wolfhound"], - "171": ["n02091032", "Italian_greyhound"], - "172": ["n02091134", "whippet"], - "173": ["n02091244", "Ibizan_hound"], - "174": ["n02091467", "Norwegian_elkhound"], - "175": ["n02091635", "otterhound"], - "176": ["n02091831", "Saluki"], - "177": ["n02092002", "Scottish_deerhound"], - "178": ["n02092339", "Weimaraner"], - "179": ["n02093256", "Staffordshire_bullterrier"], - "180": ["n02093428", "American_Staffordshire_terrier"], - "181": ["n02093647", "Bedlington_terrier"], - "182": ["n02093754", "Border_terrier"], - "183": ["n02093859", "Kerry_blue_terrier"], - "184": ["n02093991", "Irish_terrier"], - "185": ["n02094114", "Norfolk_terrier"], - "186": ["n02094258", "Norwich_terrier"], - "187": ["n02094433", "Yorkshire_terrier"], - "188": ["n02095314", "wire-haired_fox_terrier"], - "189": ["n02095570", "Lakeland_terrier"], - "190": ["n02095889", "Sealyham_terrier"], - "191": ["n02096051", "Airedale"], - "192": ["n02096177", "cairn"], - "193": ["n02096294", "Australian_terrier"], - "194": ["n02096437", "Dandie_Dinmont"], - "195": ["n02096585", "Boston_bull"], - "196": ["n02097047", "miniature_schnauzer"], - "197": ["n02097130", "giant_schnauzer"], - "198": ["n02097209", "standard_schnauzer"], - "199": ["n02097298", "Scotch_terrier"], - "200": ["n02097474", "Tibetan_terrier"], - "201": ["n02097658", "silky_terrier"], - "202": ["n02098105", "soft-coated_wheaten_terrier"], - "203": ["n02098286", "West_Highland_white_terrier"], - "204": ["n02098413", "Lhasa"], - "205": ["n02099267", "flat-coated_retriever"], - "206": ["n02099429", "curly-coated_retriever"], - "207": ["n02099601", "golden_retriever"], - "208": ["n02099712", "Labrador_retriever"], - "209": ["n02099849", "Chesapeake_Bay_retriever"], - "210": ["n02100236", "German_short-haired_pointer"], - "211": ["n02100583", "vizsla"], - "212": ["n02100735", "English_setter"], - "213": ["n02100877", "Irish_setter"], - "214": 
["n02101006", "Gordon_setter"], - "215": ["n02101388", "Brittany_spaniel"], - "216": ["n02101556", "clumber"], - "217": ["n02102040", "English_springer"], - "218": ["n02102177", "Welsh_springer_spaniel"], - "219": ["n02102318", "cocker_spaniel"], - "220": ["n02102480", "Sussex_spaniel"], - "221": ["n02102973", "Irish_water_spaniel"], - "222": ["n02104029", "kuvasz"], - "223": ["n02104365", "schipperke"], - "224": ["n02105056", "groenendael"], - "225": ["n02105162", "malinois"], - "226": ["n02105251", "briard"], - "227": ["n02105412", "kelpie"], - "228": ["n02105505", "komondor"], - "229": ["n02105641", "Old_English_sheepdog"], - "230": ["n02105855", "Shetland_sheepdog"], - "231": ["n02106030", "collie"], - "232": ["n02106166", "Border_collie"], - "233": ["n02106382", "Bouvier_des_Flandres"], - "234": ["n02106550", "Rottweiler"], - "235": ["n02106662", "German_shepherd"], - "236": ["n02107142", "Doberman"], - "237": ["n02107312", "miniature_pinscher"], - "238": ["n02107574", "Greater_Swiss_Mountain_dog"], - "239": ["n02107683", "Bernese_mountain_dog"], - "240": ["n02107908", "Appenzeller"], - "241": ["n02108000", "EntleBucher"], - "242": ["n02108089", "boxer"], - "243": ["n02108422", "bull_mastiff"], - "244": ["n02108551", "Tibetan_mastiff"], - "245": ["n02108915", "French_bulldog"], - "246": ["n02109047", "Great_Dane"], - "247": ["n02109525", "Saint_Bernard"], - "248": ["n02109961", "Eskimo_dog"], - "249": ["n02110063", "malamute"], - "250": ["n02110185", "Siberian_husky"], - "251": ["n02110341", "dalmatian"], - "252": ["n02110627", "affenpinscher"], - "253": ["n02110806", "basenji"], - "254": ["n02110958", "pug"], - "255": ["n02111129", "Leonberg"], - "256": ["n02111277", "Newfoundland"], - "257": ["n02111500", "Great_Pyrenees"], - "258": ["n02111889", "Samoyed"], - "259": ["n02112018", "Pomeranian"], - "260": ["n02112137", "chow"], - "261": ["n02112350", "keeshond"], - "262": ["n02112706", "Brabancon_griffon"], - "263": ["n02113023", "Pembroke"], - "264": ["n02113186", "Cardigan"], - "265": ["n02113624", "toy_poodle"], - "266": ["n02113712", "miniature_poodle"], - "267": ["n02113799", "standard_poodle"], - "268": ["n02113978", "Mexican_hairless"], - "269": ["n02114367", "timber_wolf"], - "270": ["n02114548", "white_wolf"], - "271": ["n02114712", "red_wolf"], - "272": ["n02114855", "coyote"], - "273": ["n02115641", "dingo"], - "274": ["n02115913", "dhole"], - "275": ["n02116738", "African_hunting_dog"], - "276": ["n02117135", "hyena"], - "277": ["n02119022", "red_fox"], - "278": ["n02119789", "kit_fox"], - "279": ["n02120079", "Arctic_fox"], - "280": ["n02120505", "grey_fox"], - "281": ["n02123045", "tabby"], - "282": ["n02123159", "tiger_cat"], - "283": ["n02123394", "Persian_cat"], - "284": ["n02123597", "Siamese_cat"], - "285": ["n02124075", "Egyptian_cat"], - "286": ["n02125311", "cougar"], - "287": ["n02127052", "lynx"], - "288": ["n02128385", "leopard"], - "289": ["n02128757", "snow_leopard"], - "290": ["n02128925", "jaguar"], - "291": ["n02129165", "lion"], - "292": ["n02129604", "tiger"], - "293": ["n02130308", "cheetah"], - "294": ["n02132136", "brown_bear"], - "295": ["n02133161", "American_black_bear"], - "296": ["n02134084", "ice_bear"], - "297": ["n02134418", "sloth_bear"], - "298": ["n02137549", "mongoose"], - "299": ["n02138441", "meerkat"], - "300": ["n02165105", "tiger_beetle"], - "301": ["n02165456", "ladybug"], - "302": ["n02167151", "ground_beetle"], - "303": ["n02168699", "long-horned_beetle"], - "304": ["n02169497", "leaf_beetle"], - "305": ["n02172182", 
"dung_beetle"], - "306": ["n02174001", "rhinoceros_beetle"], - "307": ["n02177972", "weevil"], - "308": ["n02190166", "fly"], - "309": ["n02206856", "bee"], - "310": ["n02219486", "ant"], - "311": ["n02226429", "grasshopper"], - "312": ["n02229544", "cricket"], - "313": ["n02231487", "walking_stick"], - "314": ["n02233338", "cockroach"], - "315": ["n02236044", "mantis"], - "316": ["n02256656", "cicada"], - "317": ["n02259212", "leafhopper"], - "318": ["n02264363", "lacewing"], - "319": ["n02268443", "dragonfly"], - "320": ["n02268853", "damselfly"], - "321": ["n02276258", "admiral"], - "322": ["n02277742", "ringlet"], - "323": ["n02279972", "monarch"], - "324": ["n02280649", "cabbage_butterfly"], - "325": ["n02281406", "sulphur_butterfly"], - "326": ["n02281787", "lycaenid"], - "327": ["n02317335", "starfish"], - "328": ["n02319095", "sea_urchin"], - "329": ["n02321529", "sea_cucumber"], - "330": ["n02325366", "wood_rabbit"], - "331": ["n02326432", "hare"], - "332": ["n02328150", "Angora"], - "333": ["n02342885", "hamster"], - "334": ["n02346627", "porcupine"], - "335": ["n02356798", "fox_squirrel"], - "336": ["n02361337", "marmot"], - "337": ["n02363005", "beaver"], - "338": ["n02364673", "guinea_pig"], - "339": ["n02389026", "sorrel"], - "340": ["n02391049", "zebra"], - "341": ["n02395406", "hog"], - "342": ["n02396427", "wild_boar"], - "343": ["n02397096", "warthog"], - "344": ["n02398521", "hippopotamus"], - "345": ["n02403003", "ox"], - "346": ["n02408429", "water_buffalo"], - "347": ["n02410509", "bison"], - "348": ["n02412080", "ram"], - "349": ["n02415577", "bighorn"], - "350": ["n02417914", "ibex"], - "351": ["n02422106", "hartebeest"], - "352": ["n02422699", "impala"], - "353": ["n02423022", "gazelle"], - "354": ["n02437312", "Arabian_camel"], - "355": ["n02437616", "llama"], - "356": ["n02441942", "weasel"], - "357": ["n02442845", "mink"], - "358": ["n02443114", "polecat"], - "359": ["n02443484", "black-footed_ferret"], - "360": ["n02444819", "otter"], - "361": ["n02445715", "skunk"], - "362": ["n02447366", "badger"], - "363": ["n02454379", "armadillo"], - "364": ["n02457408", "three-toed_sloth"], - "365": ["n02480495", "orangutan"], - "366": ["n02480855", "gorilla"], - "367": ["n02481823", "chimpanzee"], - "368": ["n02483362", "gibbon"], - "369": ["n02483708", "siamang"], - "370": ["n02484975", "guenon"], - "371": ["n02486261", "patas"], - "372": ["n02486410", "baboon"], - "373": ["n02487347", "macaque"], - "374": ["n02488291", "langur"], - "375": ["n02488702", "colobus"], - "376": ["n02489166", "proboscis_monkey"], - "377": ["n02490219", "marmoset"], - "378": ["n02492035", "capuchin"], - "379": ["n02492660", "howler_monkey"], - "380": ["n02493509", "titi"], - "381": ["n02493793", "spider_monkey"], - "382": ["n02494079", "squirrel_monkey"], - "383": ["n02497673", "Madagascar_cat"], - "384": ["n02500267", "indri"], - "385": ["n02504013", "Indian_elephant"], - "386": ["n02504458", "African_elephant"], - "387": ["n02509815", "lesser_panda"], - "388": ["n02510455", "giant_panda"], - "389": ["n02514041", "barracouta"], - "390": ["n02526121", "eel"], - "391": ["n02536864", "coho"], - "392": ["n02606052", "rock_beauty"], - "393": ["n02607072", "anemone_fish"], - "394": ["n02640242", "sturgeon"], - "395": ["n02641379", "gar"], - "396": ["n02643566", "lionfish"], - "397": ["n02655020", "puffer"], - "398": ["n02666196", "abacus"], - "399": ["n02667093", "abaya"], - "400": ["n02669723", "academic_gown"], - "401": ["n02672831", "accordion"], - "402": ["n02676566", "acoustic_guitar"], - 
"403": ["n02687172", "aircraft_carrier"], - "404": ["n02690373", "airliner"], - "405": ["n02692877", "airship"], - "406": ["n02699494", "altar"], - "407": ["n02701002", "ambulance"], - "408": ["n02704792", "amphibian"], - "409": ["n02708093", "analog_clock"], - "410": ["n02727426", "apiary"], - "411": ["n02730930", "apron"], - "412": ["n02747177", "ashcan"], - "413": ["n02749479", "assault_rifle"], - "414": ["n02769748", "backpack"], - "415": ["n02776631", "bakery"], - "416": ["n02777292", "balance_beam"], - "417": ["n02782093", "balloon"], - "418": ["n02783161", "ballpoint"], - "419": ["n02786058", "Band_Aid"], - "420": ["n02787622", "banjo"], - "421": ["n02788148", "bannister"], - "422": ["n02790996", "barbell"], - "423": ["n02791124", "barber_chair"], - "424": ["n02791270", "barbershop"], - "425": ["n02793495", "barn"], - "426": ["n02794156", "barometer"], - "427": ["n02795169", "barrel"], - "428": ["n02797295", "barrow"], - "429": ["n02799071", "baseball"], - "430": ["n02802426", "basketball"], - "431": ["n02804414", "bassinet"], - "432": ["n02804610", "bassoon"], - "433": ["n02807133", "bathing_cap"], - "434": ["n02808304", "bath_towel"], - "435": ["n02808440", "bathtub"], - "436": ["n02814533", "beach_wagon"], - "437": ["n02814860", "beacon"], - "438": ["n02815834", "beaker"], - "439": ["n02817516", "bearskin"], - "440": ["n02823428", "beer_bottle"], - "441": ["n02823750", "beer_glass"], - "442": ["n02825657", "bell_cote"], - "443": ["n02834397", "bib"], - "444": ["n02835271", "bicycle-built-for-two"], - "445": ["n02837789", "bikini"], - "446": ["n02840245", "binder"], - "447": ["n02841315", "binoculars"], - "448": ["n02843684", "birdhouse"], - "449": ["n02859443", "boathouse"], - "450": ["n02860847", "bobsled"], - "451": ["n02865351", "bolo_tie"], - "452": ["n02869837", "bonnet"], - "453": ["n02870880", "bookcase"], - "454": ["n02871525", "bookshop"], - "455": ["n02877765", "bottlecap"], - "456": ["n02879718", "bow"], - "457": ["n02883205", "bow_tie"], - "458": ["n02892201", "brass"], - "459": ["n02892767", "brassiere"], - "460": ["n02894605", "breakwater"], - "461": ["n02895154", "breastplate"], - "462": ["n02906734", "broom"], - "463": ["n02909870", "bucket"], - "464": ["n02910353", "buckle"], - "465": ["n02916936", "bulletproof_vest"], - "466": ["n02917067", "bullet_train"], - "467": ["n02927161", "butcher_shop"], - "468": ["n02930766", "cab"], - "469": ["n02939185", "caldron"], - "470": ["n02948072", "candle"], - "471": ["n02950826", "cannon"], - "472": ["n02951358", "canoe"], - "473": ["n02951585", "can_opener"], - "474": ["n02963159", "cardigan"], - "475": ["n02965783", "car_mirror"], - "476": ["n02966193", "carousel"], - "477": ["n02966687", "carpenter's_kit"], - "478": ["n02971356", "carton"], - "479": ["n02974003", "car_wheel"], - "480": ["n02977058", "cash_machine"], - "481": ["n02978881", "cassette"], - "482": ["n02979186", "cassette_player"], - "483": ["n02980441", "castle"], - "484": ["n02981792", "catamaran"], - "485": ["n02988304", "CD_player"], - "486": ["n02992211", "cello"], - "487": ["n02992529", "cellular_telephone"], - "488": ["n02999410", "chain"], - "489": ["n03000134", "chainlink_fence"], - "490": ["n03000247", "chain_mail"], - "491": ["n03000684", "chain_saw"], - "492": ["n03014705", "chest"], - "493": ["n03016953", "chiffonier"], - "494": ["n03017168", "chime"], - "495": ["n03018349", "china_cabinet"], - "496": ["n03026506", "Christmas_stocking"], - "497": ["n03028079", "church"], - "498": ["n03032252", "cinema"], - "499": ["n03041632", "cleaver"], - "500": 
["n03042490", "cliff_dwelling"], - "501": ["n03045698", "cloak"], - "502": ["n03047690", "clog"], - "503": ["n03062245", "cocktail_shaker"], - "504": ["n03063599", "coffee_mug"], - "505": ["n03063689", "coffeepot"], - "506": ["n03065424", "coil"], - "507": ["n03075370", "combination_lock"], - "508": ["n03085013", "computer_keyboard"], - "509": ["n03089624", "confectionery"], - "510": ["n03095699", "container_ship"], - "511": ["n03100240", "convertible"], - "512": ["n03109150", "corkscrew"], - "513": ["n03110669", "cornet"], - "514": ["n03124043", "cowboy_boot"], - "515": ["n03124170", "cowboy_hat"], - "516": ["n03125729", "cradle"], - "517": ["n03126707", "crane"], - "518": ["n03127747", "crash_helmet"], - "519": ["n03127925", "crate"], - "520": ["n03131574", "crib"], - "521": ["n03133878", "Crock_Pot"], - "522": ["n03134739", "croquet_ball"], - "523": ["n03141823", "crutch"], - "524": ["n03146219", "cuirass"], - "525": ["n03160309", "dam"], - "526": ["n03179701", "desk"], - "527": ["n03180011", "desktop_computer"], - "528": ["n03187595", "dial_telephone"], - "529": ["n03188531", "diaper"], - "530": ["n03196217", "digital_clock"], - "531": ["n03197337", "digital_watch"], - "532": ["n03201208", "dining_table"], - "533": ["n03207743", "dishrag"], - "534": ["n03207941", "dishwasher"], - "535": ["n03208938", "disk_brake"], - "536": ["n03216828", "dock"], - "537": ["n03218198", "dogsled"], - "538": ["n03220513", "dome"], - "539": ["n03223299", "doormat"], - "540": ["n03240683", "drilling_platform"], - "541": ["n03249569", "drum"], - "542": ["n03250847", "drumstick"], - "543": ["n03255030", "dumbbell"], - "544": ["n03259280", "Dutch_oven"], - "545": ["n03271574", "electric_fan"], - "546": ["n03272010", "electric_guitar"], - "547": ["n03272562", "electric_locomotive"], - "548": ["n03290653", "entertainment_center"], - "549": ["n03291819", "envelope"], - "550": ["n03297495", "espresso_maker"], - "551": ["n03314780", "face_powder"], - "552": ["n03325584", "feather_boa"], - "553": ["n03337140", "file"], - "554": ["n03344393", "fireboat"], - "555": ["n03345487", "fire_engine"], - "556": ["n03347037", "fire_screen"], - "557": ["n03355925", "flagpole"], - "558": ["n03372029", "flute"], - "559": ["n03376595", "folding_chair"], - "560": ["n03379051", "football_helmet"], - "561": ["n03384352", "forklift"], - "562": ["n03388043", "fountain"], - "563": ["n03388183", "fountain_pen"], - "564": ["n03388549", "four-poster"], - "565": ["n03393912", "freight_car"], - "566": ["n03394916", "French_horn"], - "567": ["n03400231", "frying_pan"], - "568": ["n03404251", "fur_coat"], - "569": ["n03417042", "garbage_truck"], - "570": ["n03424325", "gasmask"], - "571": ["n03425413", "gas_pump"], - "572": ["n03443371", "goblet"], - "573": ["n03444034", "go-kart"], - "574": ["n03445777", "golf_ball"], - "575": ["n03445924", "golfcart"], - "576": ["n03447447", "gondola"], - "577": ["n03447721", "gong"], - "578": ["n03450230", "gown"], - "579": ["n03452741", "grand_piano"], - "580": ["n03457902", "greenhouse"], - "581": ["n03459775", "grille"], - "582": ["n03461385", "grocery_store"], - "583": ["n03467068", "guillotine"], - "584": ["n03476684", "hair_slide"], - "585": ["n03476991", "hair_spray"], - "586": ["n03478589", "half_track"], - "587": ["n03481172", "hammer"], - "588": ["n03482405", "hamper"], - "589": ["n03483316", "hand_blower"], - "590": ["n03485407", "hand-held_computer"], - "591": ["n03485794", "handkerchief"], - "592": ["n03492542", "hard_disc"], - "593": ["n03494278", "harmonica"], - "594": ["n03495258", "harp"], 
- "595": ["n03496892", "harvester"], - "596": ["n03498962", "hatchet"], - "597": ["n03527444", "holster"], - "598": ["n03529860", "home_theater"], - "599": ["n03530642", "honeycomb"], - "600": ["n03532672", "hook"], - "601": ["n03534580", "hoopskirt"], - "602": ["n03535780", "horizontal_bar"], - "603": ["n03538406", "horse_cart"], - "604": ["n03544143", "hourglass"], - "605": ["n03584254", "iPod"], - "606": ["n03584829", "iron"], - "607": ["n03590841", "jack-o'-lantern"], - "608": ["n03594734", "jean"], - "609": ["n03594945", "jeep"], - "610": ["n03595614", "jersey"], - "611": ["n03598930", "jigsaw_puzzle"], - "612": ["n03599486", "jinrikisha"], - "613": ["n03602883", "joystick"], - "614": ["n03617480", "kimono"], - "615": ["n03623198", "knee_pad"], - "616": ["n03627232", "knot"], - "617": ["n03630383", "lab_coat"], - "618": ["n03633091", "ladle"], - "619": ["n03637318", "lampshade"], - "620": ["n03642806", "laptop"], - "621": ["n03649909", "lawn_mower"], - "622": ["n03657121", "lens_cap"], - "623": ["n03658185", "letter_opener"], - "624": ["n03661043", "library"], - "625": ["n03662601", "lifeboat"], - "626": ["n03666591", "lighter"], - "627": ["n03670208", "limousine"], - "628": ["n03673027", "liner"], - "629": ["n03676483", "lipstick"], - "630": ["n03680355", "Loafer"], - "631": ["n03690938", "lotion"], - "632": ["n03691459", "loudspeaker"], - "633": ["n03692522", "loupe"], - "634": ["n03697007", "lumbermill"], - "635": ["n03706229", "magnetic_compass"], - "636": ["n03709823", "mailbag"], - "637": ["n03710193", "mailbox"], - "638": ["n03710637", "maillot"], - "639": ["n03710721", "maillot"], - "640": ["n03717622", "manhole_cover"], - "641": ["n03720891", "maraca"], - "642": ["n03721384", "marimba"], - "643": ["n03724870", "mask"], - "644": ["n03729826", "matchstick"], - "645": ["n03733131", "maypole"], - "646": ["n03733281", "maze"], - "647": ["n03733805", "measuring_cup"], - "648": ["n03742115", "medicine_chest"], - "649": ["n03743016", "megalith"], - "650": ["n03759954", "microphone"], - "651": ["n03761084", "microwave"], - "652": ["n03763968", "military_uniform"], - "653": ["n03764736", "milk_can"], - "654": ["n03769881", "minibus"], - "655": ["n03770439", "miniskirt"], - "656": ["n03770679", "minivan"], - "657": ["n03773504", "missile"], - "658": ["n03775071", "mitten"], - "659": ["n03775546", "mixing_bowl"], - "660": ["n03776460", "mobile_home"], - "661": ["n03777568", "Model_T"], - "662": ["n03777754", "modem"], - "663": ["n03781244", "monastery"], - "664": ["n03782006", "monitor"], - "665": ["n03785016", "moped"], - "666": ["n03786901", "mortar"], - "667": ["n03787032", "mortarboard"], - "668": ["n03788195", "mosque"], - "669": ["n03788365", "mosquito_net"], - "670": ["n03791053", "motor_scooter"], - "671": ["n03792782", "mountain_bike"], - "672": ["n03792972", "mountain_tent"], - "673": ["n03793489", "mouse"], - "674": ["n03794056", "mousetrap"], - "675": ["n03796401", "moving_van"], - "676": ["n03803284", "muzzle"], - "677": ["n03804744", "nail"], - "678": ["n03814639", "neck_brace"], - "679": ["n03814906", "necklace"], - "680": ["n03825788", "nipple"], - "681": ["n03832673", "notebook"], - "682": ["n03837869", "obelisk"], - "683": ["n03838899", "oboe"], - "684": ["n03840681", "ocarina"], - "685": ["n03841143", "odometer"], - "686": ["n03843555", "oil_filter"], - "687": ["n03854065", "organ"], - "688": ["n03857828", "oscilloscope"], - "689": ["n03866082", "overskirt"], - "690": ["n03868242", "oxcart"], - "691": ["n03868863", "oxygen_mask"], - "692": ["n03871628", "packet"], - 
"693": ["n03873416", "paddle"], - "694": ["n03874293", "paddlewheel"], - "695": ["n03874599", "padlock"], - "696": ["n03876231", "paintbrush"], - "697": ["n03877472", "pajama"], - "698": ["n03877845", "palace"], - "699": ["n03884397", "panpipe"], - "700": ["n03887697", "paper_towel"], - "701": ["n03888257", "parachute"], - "702": ["n03888605", "parallel_bars"], - "703": ["n03891251", "park_bench"], - "704": ["n03891332", "parking_meter"], - "705": ["n03895866", "passenger_car"], - "706": ["n03899768", "patio"], - "707": ["n03902125", "pay-phone"], - "708": ["n03903868", "pedestal"], - "709": ["n03908618", "pencil_box"], - "710": ["n03908714", "pencil_sharpener"], - "711": ["n03916031", "perfume"], - "712": ["n03920288", "Petri_dish"], - "713": ["n03924679", "photocopier"], - "714": ["n03929660", "pick"], - "715": ["n03929855", "pickelhaube"], - "716": ["n03930313", "picket_fence"], - "717": ["n03930630", "pickup"], - "718": ["n03933933", "pier"], - "719": ["n03935335", "piggy_bank"], - "720": ["n03937543", "pill_bottle"], - "721": ["n03938244", "pillow"], - "722": ["n03942813", "ping-pong_ball"], - "723": ["n03944341", "pinwheel"], - "724": ["n03947888", "pirate"], - "725": ["n03950228", "pitcher"], - "726": ["n03954731", "plane"], - "727": ["n03956157", "planetarium"], - "728": ["n03958227", "plastic_bag"], - "729": ["n03961711", "plate_rack"], - "730": ["n03967562", "plow"], - "731": ["n03970156", "plunger"], - "732": ["n03976467", "Polaroid_camera"], - "733": ["n03976657", "pole"], - "734": ["n03977966", "police_van"], - "735": ["n03980874", "poncho"], - "736": ["n03982430", "pool_table"], - "737": ["n03983396", "pop_bottle"], - "738": ["n03991062", "pot"], - "739": ["n03992509", "potter's_wheel"], - "740": ["n03995372", "power_drill"], - "741": ["n03998194", "prayer_rug"], - "742": ["n04004767", "printer"], - "743": ["n04005630", "prison"], - "744": ["n04008634", "projectile"], - "745": ["n04009552", "projector"], - "746": ["n04019541", "puck"], - "747": ["n04023962", "punching_bag"], - "748": ["n04026417", "purse"], - "749": ["n04033901", "quill"], - "750": ["n04033995", "quilt"], - "751": ["n04037443", "racer"], - "752": ["n04039381", "racket"], - "753": ["n04040759", "radiator"], - "754": ["n04041544", "radio"], - "755": ["n04044716", "radio_telescope"], - "756": ["n04049303", "rain_barrel"], - "757": ["n04065272", "recreational_vehicle"], - "758": ["n04067472", "reel"], - "759": ["n04069434", "reflex_camera"], - "760": ["n04070727", "refrigerator"], - "761": ["n04074963", "remote_control"], - "762": ["n04081281", "restaurant"], - "763": ["n04086273", "revolver"], - "764": ["n04090263", "rifle"], - "765": ["n04099969", "rocking_chair"], - "766": ["n04111531", "rotisserie"], - "767": ["n04116512", "rubber_eraser"], - "768": ["n04118538", "rugby_ball"], - "769": ["n04118776", "rule"], - "770": ["n04120489", "running_shoe"], - "771": ["n04125021", "safe"], - "772": ["n04127249", "safety_pin"], - "773": ["n04131690", "saltshaker"], - "774": ["n04133789", "sandal"], - "775": ["n04136333", "sarong"], - "776": ["n04141076", "sax"], - "777": ["n04141327", "scabbard"], - "778": ["n04141975", "scale"], - "779": ["n04146614", "school_bus"], - "780": ["n04147183", "schooner"], - "781": ["n04149813", "scoreboard"], - "782": ["n04152593", "screen"], - "783": ["n04153751", "screw"], - "784": ["n04154565", "screwdriver"], - "785": ["n04162706", "seat_belt"], - "786": ["n04179913", "sewing_machine"], - "787": ["n04192698", "shield"], - "788": ["n04200800", "shoe_shop"], - "789": ["n04201297", 
"shoji"], - "790": ["n04204238", "shopping_basket"], - "791": ["n04204347", "shopping_cart"], - "792": ["n04208210", "shovel"], - "793": ["n04209133", "shower_cap"], - "794": ["n04209239", "shower_curtain"], - "795": ["n04228054", "ski"], - "796": ["n04229816", "ski_mask"], - "797": ["n04235860", "sleeping_bag"], - "798": ["n04238763", "slide_rule"], - "799": ["n04239074", "sliding_door"], - "800": ["n04243546", "slot"], - "801": ["n04251144", "snorkel"], - "802": ["n04252077", "snowmobile"], - "803": ["n04252225", "snowplow"], - "804": ["n04254120", "soap_dispenser"], - "805": ["n04254680", "soccer_ball"], - "806": ["n04254777", "sock"], - "807": ["n04258138", "solar_dish"], - "808": ["n04259630", "sombrero"], - "809": ["n04263257", "soup_bowl"], - "810": ["n04264628", "space_bar"], - "811": ["n04265275", "space_heater"], - "812": ["n04266014", "space_shuttle"], - "813": ["n04270147", "spatula"], - "814": ["n04273569", "speedboat"], - "815": ["n04275548", "spider_web"], - "816": ["n04277352", "spindle"], - "817": ["n04285008", "sports_car"], - "818": ["n04286575", "spotlight"], - "819": ["n04296562", "stage"], - "820": ["n04310018", "steam_locomotive"], - "821": ["n04311004", "steel_arch_bridge"], - "822": ["n04311174", "steel_drum"], - "823": ["n04317175", "stethoscope"], - "824": ["n04325704", "stole"], - "825": ["n04326547", "stone_wall"], - "826": ["n04328186", "stopwatch"], - "827": ["n04330267", "stove"], - "828": ["n04332243", "strainer"], - "829": ["n04335435", "streetcar"], - "830": ["n04336792", "stretcher"], - "831": ["n04344873", "studio_couch"], - "832": ["n04346328", "stupa"], - "833": ["n04347754", "submarine"], - "834": ["n04350905", "suit"], - "835": ["n04355338", "sundial"], - "836": ["n04355933", "sunglass"], - "837": ["n04356056", "sunglasses"], - "838": ["n04357314", "sunscreen"], - "839": ["n04366367", "suspension_bridge"], - "840": ["n04367480", "swab"], - "841": ["n04370456", "sweatshirt"], - "842": ["n04371430", "swimming_trunks"], - "843": ["n04371774", "swing"], - "844": ["n04372370", "switch"], - "845": ["n04376876", "syringe"], - "846": ["n04380533", "table_lamp"], - "847": ["n04389033", "tank"], - "848": ["n04392985", "tape_player"], - "849": ["n04398044", "teapot"], - "850": ["n04399382", "teddy"], - "851": ["n04404412", "television"], - "852": ["n04409515", "tennis_ball"], - "853": ["n04417672", "thatch"], - "854": ["n04418357", "theater_curtain"], - "855": ["n04423845", "thimble"], - "856": ["n04428191", "thresher"], - "857": ["n04429376", "throne"], - "858": ["n04435653", "tile_roof"], - "859": ["n04442312", "toaster"], - "860": ["n04443257", "tobacco_shop"], - "861": ["n04447861", "toilet_seat"], - "862": ["n04456115", "torch"], - "863": ["n04458633", "totem_pole"], - "864": ["n04461696", "tow_truck"], - "865": ["n04462240", "toyshop"], - "866": ["n04465501", "tractor"], - "867": ["n04467665", "trailer_truck"], - "868": ["n04476259", "tray"], - "869": ["n04479046", "trench_coat"], - "870": ["n04482393", "tricycle"], - "871": ["n04483307", "trimaran"], - "872": ["n04485082", "tripod"], - "873": ["n04486054", "triumphal_arch"], - "874": ["n04487081", "trolleybus"], - "875": ["n04487394", "trombone"], - "876": ["n04493381", "tub"], - "877": ["n04501370", "turnstile"], - "878": ["n04505470", "typewriter_keyboard"], - "879": ["n04507155", "umbrella"], - "880": ["n04509417", "unicycle"], - "881": ["n04515003", "upright"], - "882": ["n04517823", "vacuum"], - "883": ["n04522168", "vase"], - "884": ["n04523525", "vault"], - "885": ["n04525038", "velvet"], - 
"886": ["n04525305", "vending_machine"], - "887": ["n04532106", "vestment"], - "888": ["n04532670", "viaduct"], - "889": ["n04536866", "violin"], - "890": ["n04540053", "volleyball"], - "891": ["n04542943", "waffle_iron"], - "892": ["n04548280", "wall_clock"], - "893": ["n04548362", "wallet"], - "894": ["n04550184", "wardrobe"], - "895": ["n04552348", "warplane"], - "896": ["n04553703", "washbasin"], - "897": ["n04554684", "washer"], - "898": ["n04557648", "water_bottle"], - "899": ["n04560804", "water_jug"], - "900": ["n04562935", "water_tower"], - "901": ["n04579145", "whiskey_jug"], - "902": ["n04579432", "whistle"], - "903": ["n04584207", "wig"], - "904": ["n04589890", "window_screen"], - "905": ["n04590129", "window_shade"], - "906": ["n04591157", "Windsor_tie"], - "907": ["n04591713", "wine_bottle"], - "908": ["n04592741", "wing"], - "909": ["n04596742", "wok"], - "910": ["n04597913", "wooden_spoon"], - "911": ["n04599235", "wool"], - "912": ["n04604644", "worm_fence"], - "913": ["n04606251", "wreck"], - "914": ["n04612504", "yawl"], - "915": ["n04613696", "yurt"], - "916": ["n06359193", "web_site"], - "917": ["n06596364", "comic_book"], - "918": ["n06785654", "crossword_puzzle"], - "919": ["n06794110", "street_sign"], - "920": ["n06874185", "traffic_light"], - "921": ["n07248320", "book_jacket"], - "922": ["n07565083", "menu"], - "923": ["n07579787", "plate"], - "924": ["n07583066", "guacamole"], - "925": ["n07584110", "consomme"], - "926": ["n07590611", "hot_pot"], - "927": ["n07613480", "trifle"], - "928": ["n07614500", "ice_cream"], - "929": ["n07615774", "ice_lolly"], - "930": ["n07684084", "French_loaf"], - "931": ["n07693725", "bagel"], - "932": ["n07695742", "pretzel"], - "933": ["n07697313", "cheeseburger"], - "934": ["n07697537", "hotdog"], - "935": ["n07711569", "mashed_potato"], - "936": ["n07714571", "head_cabbage"], - "937": ["n07714990", "broccoli"], - "938": ["n07715103", "cauliflower"], - "939": ["n07716358", "zucchini"], - "940": ["n07716906", "spaghetti_squash"], - "941": ["n07717410", "acorn_squash"], - "942": ["n07717556", "butternut_squash"], - "943": ["n07718472", "cucumber"], - "944": ["n07718747", "artichoke"], - "945": ["n07720875", "bell_pepper"], - "946": ["n07730033", "cardoon"], - "947": ["n07734744", "mushroom"], - "948": ["n07742313", "Granny_Smith"], - "949": ["n07745940", "strawberry"], - "950": ["n07747607", "orange"], - "951": ["n07749582", "lemon"], - "952": ["n07753113", "fig"], - "953": ["n07753275", "pineapple"], - "954": ["n07753592", "banana"], - "955": ["n07754684", "jackfruit"], - "956": ["n07760859", "custard_apple"], - "957": ["n07768694", "pomegranate"], - "958": ["n07802026", "hay"], - "959": ["n07831146", "carbonara"], - "960": ["n07836838", "chocolate_sauce"], - "961": ["n07860988", "dough"], - "962": ["n07871810", "meat_loaf"], - "963": ["n07873807", "pizza"], - "964": ["n07875152", "potpie"], - "965": ["n07880968", "burrito"], - "966": ["n07892512", "red_wine"], - "967": ["n07920052", "espresso"], - "968": ["n07930864", "cup"], - "969": ["n07932039", "eggnog"], - "970": ["n09193705", "alp"], - "971": ["n09229709", "bubble"], - "972": ["n09246464", "cliff"], - "973": ["n09256479", "coral_reef"], - "974": ["n09288635", "geyser"], - "975": ["n09332890", "lakeside"], - "976": ["n09399592", "promontory"], - "977": ["n09421951", "sandbar"], - "978": ["n09428293", "seashore"], - "979": ["n09468604", "valley"], - "980": ["n09472597", "volcano"], - "981": ["n09835506", "ballplayer"], - "982": ["n10148035", "groom"], - "983": ["n10565667", 
"scuba_diver"], - "984": ["n11879895", "rapeseed"], - "985": ["n11939491", "daisy"], - "986": ["n12057211", "yellow_lady's_slipper"], - "987": ["n12144580", "corn"], - "988": ["n12267677", "acorn"], - "989": ["n12620546", "hip"], - "990": ["n12768682", "buckeye"], - "991": ["n12985857", "coral_fungus"], - "992": ["n12998815", "agaric"], - "993": ["n13037406", "gyromitra"], - "994": ["n13040303", "stinkhorn"], - "995": ["n13044778", "earthstar"], - "996": ["n13052670", "hen-of-the-woods"], - "997": ["n13054560", "bolete"], - "998": ["n13133613", "ear"], - "999": ["n15075141", "toilet_tissue"] -} \ No newline at end of file diff --git a/spaces/ServerX/PorcoDiaz/infer_uvr5.py b/spaces/ServerX/PorcoDiaz/infer_uvr5.py deleted file mode 100644 index 8c8c05429a1d65dd8b198f16a8ea8c6e68991c07..0000000000000000000000000000000000000000 --- a/spaces/ServerX/PorcoDiaz/infer_uvr5.py +++ /dev/null @@ -1,363 +0,0 @@ -import os, sys, torch, warnings, pdb - -now_dir = os.getcwd() -sys.path.append(now_dir) -from json import load as ll - -warnings.filterwarnings("ignore") -import librosa -import importlib -import numpy as np -import hashlib, math -from tqdm import tqdm -from lib.uvr5_pack.lib_v5 import spec_utils -from lib.uvr5_pack.utils import _get_name_params, inference -from lib.uvr5_pack.lib_v5.model_param_init import ModelParameters -import soundfile as sf -from lib.uvr5_pack.lib_v5.nets_new import CascadedNet -from lib.uvr5_pack.lib_v5 import nets_61968KB as nets - - -class _audio_pre_: - def __init__(self, agg, model_path, device, is_half): - self.model_path = model_path - self.device = device - self.data = { - # Processing Options - "postprocess": False, - "tta": False, - # Constants - "window_size": 512, - "agg": agg, - "high_end_process": "mirroring", - } - mp = ModelParameters("lib/uvr5_pack/lib_v5/modelparams/4band_v2.json") - model = nets.CascadedASPPNet(mp.param["bins"] * 2) - cpk = torch.load(model_path, map_location="cpu") - model.load_state_dict(cpk) - model.eval() - if is_half: - model = model.half().to(device) - else: - model = model.to(device) - - self.mp = mp - self.model = model - - def _path_audio_(self, music_file, ins_root=None, vocal_root=None, format="flac"): - if ins_root is None and vocal_root is None: - return "No save root." 
- name = os.path.basename(music_file) - if ins_root is not None: - os.makedirs(ins_root, exist_ok=True) - if vocal_root is not None: - os.makedirs(vocal_root, exist_ok=True) - X_wave, y_wave, X_spec_s, y_spec_s = {}, {}, {}, {} - bands_n = len(self.mp.param["band"]) - # print(bands_n) - for d in range(bands_n, 0, -1): - bp = self.mp.param["band"][d] - if d == bands_n: # high-end band - ( - X_wave[d], - _, - ) = librosa.core.load( - music_file, - bp["sr"], - False, - dtype=np.float32, - res_type=bp["res_type"], - ) - if X_wave[d].ndim == 1: - X_wave[d] = np.asfortranarray([X_wave[d], X_wave[d]]) - else: # lower bands - X_wave[d] = librosa.core.resample( - X_wave[d + 1], - self.mp.param["band"][d + 1]["sr"], - bp["sr"], - res_type=bp["res_type"], - ) - # Stft of wave source - X_spec_s[d] = spec_utils.wave_to_spectrogram_mt( - X_wave[d], - bp["hl"], - bp["n_fft"], - self.mp.param["mid_side"], - self.mp.param["mid_side_b2"], - self.mp.param["reverse"], - ) - # pdb.set_trace() - if d == bands_n and self.data["high_end_process"] != "none": - input_high_end_h = (bp["n_fft"] // 2 - bp["crop_stop"]) + ( - self.mp.param["pre_filter_stop"] - self.mp.param["pre_filter_start"] - ) - input_high_end = X_spec_s[d][ - :, bp["n_fft"] // 2 - input_high_end_h : bp["n_fft"] // 2, : - ] - - X_spec_m = spec_utils.combine_spectrograms(X_spec_s, self.mp) - aggresive_set = float(self.data["agg"] / 100) - aggressiveness = { - "value": aggresive_set, - "split_bin": self.mp.param["band"][1]["crop_stop"], - } - with torch.no_grad(): - pred, X_mag, X_phase = inference( - X_spec_m, self.device, self.model, aggressiveness, self.data - ) - # Postprocess - if self.data["postprocess"]: - pred_inv = np.clip(X_mag - pred, 0, np.inf) - pred = spec_utils.mask_silence(pred, pred_inv) - y_spec_m = pred * X_phase - v_spec_m = X_spec_m - y_spec_m - - if ins_root is not None: - if self.data["high_end_process"].startswith("mirroring"): - input_high_end_ = spec_utils.mirroring( - self.data["high_end_process"], y_spec_m, input_high_end, self.mp - ) - wav_instrument = spec_utils.cmb_spectrogram_to_wave( - y_spec_m, self.mp, input_high_end_h, input_high_end_ - ) - else: - wav_instrument = spec_utils.cmb_spectrogram_to_wave(y_spec_m, self.mp) - print("%s instruments done" % name) - if format in ["wav", "flac"]: - sf.write( - os.path.join( - ins_root, - "instrument_{}_{}.{}".format(name, self.data["agg"], format), - ), - (np.array(wav_instrument) * 32768).astype("int16"), - self.mp.param["sr"], - ) # - else: - path = os.path.join( - ins_root, "instrument_{}_{}.wav".format(name, self.data["agg"]) - ) - sf.write( - path, - (np.array(wav_instrument) * 32768).astype("int16"), - self.mp.param["sr"], - ) - if os.path.exists(path): - os.system( - "ffmpeg -i %s -vn %s -q:a 2 -y" - % (path, path[:-4] + ".%s" % format) - ) - if vocal_root is not None: - if self.data["high_end_process"].startswith("mirroring"): - input_high_end_ = spec_utils.mirroring( - self.data["high_end_process"], v_spec_m, input_high_end, self.mp - ) - wav_vocals = spec_utils.cmb_spectrogram_to_wave( - v_spec_m, self.mp, input_high_end_h, input_high_end_ - ) - else: - wav_vocals = spec_utils.cmb_spectrogram_to_wave(v_spec_m, self.mp) - print("%s vocals done" % name) - if format in ["wav", "flac"]: - sf.write( - os.path.join( - vocal_root, - "vocal_{}_{}.{}".format(name, self.data["agg"], format), - ), - (np.array(wav_vocals) * 32768).astype("int16"), - self.mp.param["sr"], - ) - else: - path = os.path.join( - vocal_root, "vocal_{}_{}.wav".format(name, self.data["agg"]) - ) - 
sf.write( - path, - (np.array(wav_vocals) * 32768).astype("int16"), - self.mp.param["sr"], - ) - if os.path.exists(path): - os.system( - "ffmpeg -i %s -vn %s -q:a 2 -y" - % (path, path[:-4] + ".%s" % format) - ) - - -class _audio_pre_new: - def __init__(self, agg, model_path, device, is_half): - self.model_path = model_path - self.device = device - self.data = { - # Processing Options - "postprocess": False, - "tta": False, - # Constants - "window_size": 512, - "agg": agg, - "high_end_process": "mirroring", - } - mp = ModelParameters("lib/uvr5_pack/lib_v5/modelparams/4band_v3.json") - nout = 64 if "DeReverb" in model_path else 48 - model = CascadedNet(mp.param["bins"] * 2, nout) - cpk = torch.load(model_path, map_location="cpu") - model.load_state_dict(cpk) - model.eval() - if is_half: - model = model.half().to(device) - else: - model = model.to(device) - - self.mp = mp - self.model = model - - def _path_audio_( - self, music_file, vocal_root=None, ins_root=None, format="flac" - ): # 3个VR模型vocal和ins是反的 - if ins_root is None and vocal_root is None: - return "No save root." - name = os.path.basename(music_file) - if ins_root is not None: - os.makedirs(ins_root, exist_ok=True) - if vocal_root is not None: - os.makedirs(vocal_root, exist_ok=True) - X_wave, y_wave, X_spec_s, y_spec_s = {}, {}, {}, {} - bands_n = len(self.mp.param["band"]) - # print(bands_n) - for d in range(bands_n, 0, -1): - bp = self.mp.param["band"][d] - if d == bands_n: # high-end band - ( - X_wave[d], - _, - ) = librosa.core.load( - music_file, - bp["sr"], - False, - dtype=np.float32, - res_type=bp["res_type"], - ) - if X_wave[d].ndim == 1: - X_wave[d] = np.asfortranarray([X_wave[d], X_wave[d]]) - else: # lower bands - X_wave[d] = librosa.core.resample( - X_wave[d + 1], - self.mp.param["band"][d + 1]["sr"], - bp["sr"], - res_type=bp["res_type"], - ) - # Stft of wave source - X_spec_s[d] = spec_utils.wave_to_spectrogram_mt( - X_wave[d], - bp["hl"], - bp["n_fft"], - self.mp.param["mid_side"], - self.mp.param["mid_side_b2"], - self.mp.param["reverse"], - ) - # pdb.set_trace() - if d == bands_n and self.data["high_end_process"] != "none": - input_high_end_h = (bp["n_fft"] // 2 - bp["crop_stop"]) + ( - self.mp.param["pre_filter_stop"] - self.mp.param["pre_filter_start"] - ) - input_high_end = X_spec_s[d][ - :, bp["n_fft"] // 2 - input_high_end_h : bp["n_fft"] // 2, : - ] - - X_spec_m = spec_utils.combine_spectrograms(X_spec_s, self.mp) - aggresive_set = float(self.data["agg"] / 100) - aggressiveness = { - "value": aggresive_set, - "split_bin": self.mp.param["band"][1]["crop_stop"], - } - with torch.no_grad(): - pred, X_mag, X_phase = inference( - X_spec_m, self.device, self.model, aggressiveness, self.data - ) - # Postprocess - if self.data["postprocess"]: - pred_inv = np.clip(X_mag - pred, 0, np.inf) - pred = spec_utils.mask_silence(pred, pred_inv) - y_spec_m = pred * X_phase - v_spec_m = X_spec_m - y_spec_m - - if ins_root is not None: - if self.data["high_end_process"].startswith("mirroring"): - input_high_end_ = spec_utils.mirroring( - self.data["high_end_process"], y_spec_m, input_high_end, self.mp - ) - wav_instrument = spec_utils.cmb_spectrogram_to_wave( - y_spec_m, self.mp, input_high_end_h, input_high_end_ - ) - else: - wav_instrument = spec_utils.cmb_spectrogram_to_wave(y_spec_m, self.mp) - print("%s instruments done" % name) - if format in ["wav", "flac"]: - sf.write( - os.path.join( - ins_root, - "instrument_{}_{}.{}".format(name, self.data["agg"], format), - ), - (np.array(wav_instrument) * 32768).astype("int16"), 
- self.mp.param["sr"], - ) # - else: - path = os.path.join( - ins_root, "instrument_{}_{}.wav".format(name, self.data["agg"]) - ) - sf.write( - path, - (np.array(wav_instrument) * 32768).astype("int16"), - self.mp.param["sr"], - ) - if os.path.exists(path): - os.system( - "ffmpeg -i %s -vn %s -q:a 2 -y" - % (path, path[:-4] + ".%s" % format) - ) - if vocal_root is not None: - if self.data["high_end_process"].startswith("mirroring"): - input_high_end_ = spec_utils.mirroring( - self.data["high_end_process"], v_spec_m, input_high_end, self.mp - ) - wav_vocals = spec_utils.cmb_spectrogram_to_wave( - v_spec_m, self.mp, input_high_end_h, input_high_end_ - ) - else: - wav_vocals = spec_utils.cmb_spectrogram_to_wave(v_spec_m, self.mp) - print("%s vocals done" % name) - if format in ["wav", "flac"]: - sf.write( - os.path.join( - vocal_root, - "vocal_{}_{}.{}".format(name, self.data["agg"], format), - ), - (np.array(wav_vocals) * 32768).astype("int16"), - self.mp.param["sr"], - ) - else: - path = os.path.join( - vocal_root, "vocal_{}_{}.wav".format(name, self.data["agg"]) - ) - sf.write( - path, - (np.array(wav_vocals) * 32768).astype("int16"), - self.mp.param["sr"], - ) - if os.path.exists(path): - os.system( - "ffmpeg -i %s -vn %s -q:a 2 -y" - % (path, path[:-4] + ".%s" % format) - ) - - -if __name__ == "__main__": - device = "cuda" - is_half = True - # model_path = "uvr5_weights/2_HP-UVR.pth" - # model_path = "uvr5_weights/VR-DeEchoDeReverb.pth" - # model_path = "uvr5_weights/VR-DeEchoNormal.pth" - model_path = "uvr5_weights/DeEchoNormal.pth" - # pre_fun = _audio_pre_(model_path=model_path, device=device, is_half=True,agg=10) - pre_fun = _audio_pre_new(model_path=model_path, device=device, is_half=True, agg=10) - audio_path = "雪雪伴奏对消HP5.wav" - save_path = "opt" - pre_fun._path_audio_(audio_path, save_path, save_path) diff --git a/spaces/Silentlin/DiffSinger/modules/parallel_wavegan/utils/utils.py b/spaces/Silentlin/DiffSinger/modules/parallel_wavegan/utils/utils.py deleted file mode 100644 index d48a5ed28e8555d4b8cfb15fdee86426bbb9e368..0000000000000000000000000000000000000000 --- a/spaces/Silentlin/DiffSinger/modules/parallel_wavegan/utils/utils.py +++ /dev/null @@ -1,169 +0,0 @@ -# -*- coding: utf-8 -*- - -# Copyright 2019 Tomoki Hayashi -# MIT License (https://opensource.org/licenses/MIT) - -"""Utility functions.""" - -import fnmatch -import logging -import os -import sys - -import h5py -import numpy as np - - -def find_files(root_dir, query="*.wav", include_root_dir=True): - """Find files recursively. - - Args: - root_dir (str): Root root_dir to find. - query (str): Query to find. - include_root_dir (bool): If False, root_dir name is not included. - - Returns: - list: List of found filenames. - - """ - files = [] - for root, dirnames, filenames in os.walk(root_dir, followlinks=True): - for filename in fnmatch.filter(filenames, query): - files.append(os.path.join(root, filename)) - if not include_root_dir: - files = [file_.replace(root_dir + "/", "") for file_ in files] - - return files - - -def read_hdf5(hdf5_name, hdf5_path): - """Read hdf5 dataset. - - Args: - hdf5_name (str): Filename of hdf5 file. - hdf5_path (str): Dataset name in hdf5 file. - - Return: - any: Dataset values. - - """ - if not os.path.exists(hdf5_name): - logging.error(f"There is no such a hdf5 file ({hdf5_name}).") - sys.exit(1) - - hdf5_file = h5py.File(hdf5_name, "r") - - if hdf5_path not in hdf5_file: - logging.error(f"There is no such a data in hdf5 file. 
({hdf5_path})") - sys.exit(1) - - hdf5_data = hdf5_file[hdf5_path][()] - hdf5_file.close() - - return hdf5_data - - -def write_hdf5(hdf5_name, hdf5_path, write_data, is_overwrite=True): - """Write dataset to hdf5. - - Args: - hdf5_name (str): Hdf5 dataset filename. - hdf5_path (str): Dataset path in hdf5. - write_data (ndarray): Data to write. - is_overwrite (bool): Whether to overwrite dataset. - - """ - # convert to numpy array - write_data = np.array(write_data) - - # check folder existence - folder_name, _ = os.path.split(hdf5_name) - if not os.path.exists(folder_name) and len(folder_name) != 0: - os.makedirs(folder_name) - - # check hdf5 existence - if os.path.exists(hdf5_name): - # if already exists, open with r+ mode - hdf5_file = h5py.File(hdf5_name, "r+") - # check dataset existence - if hdf5_path in hdf5_file: - if is_overwrite: - logging.warning("Dataset in hdf5 file already exists. " - "recreate dataset in hdf5.") - hdf5_file.__delitem__(hdf5_path) - else: - logging.error("Dataset in hdf5 file already exists. " - "if you want to overwrite, please set is_overwrite = True.") - hdf5_file.close() - sys.exit(1) - else: - # if not exists, open with w mode - hdf5_file = h5py.File(hdf5_name, "w") - - # write data to hdf5 - hdf5_file.create_dataset(hdf5_path, data=write_data) - hdf5_file.flush() - hdf5_file.close() - - -class HDF5ScpLoader(object): - """Loader class for a fests.scp file of hdf5 file. - - Examples: - key1 /some/path/a.h5:feats - key2 /some/path/b.h5:feats - key3 /some/path/c.h5:feats - key4 /some/path/d.h5:feats - ... - >>> loader = HDF5ScpLoader("hdf5.scp") - >>> array = loader["key1"] - - key1 /some/path/a.h5 - key2 /some/path/b.h5 - key3 /some/path/c.h5 - key4 /some/path/d.h5 - ... - >>> loader = HDF5ScpLoader("hdf5.scp", "feats") - >>> array = loader["key1"] - - """ - - def __init__(self, feats_scp, default_hdf5_path="feats"): - """Initialize HDF5 scp loader. - - Args: - feats_scp (str): Kaldi-style feats.scp file with hdf5 format. - default_hdf5_path (str): Path in hdf5 file. If the scp contain the info, not used. - - """ - self.default_hdf5_path = default_hdf5_path - with open(feats_scp) as f: - lines = [line.replace("\n", "") for line in f.readlines()] - self.data = {} - for line in lines: - key, value = line.split() - self.data[key] = value - - def get_path(self, key): - """Get hdf5 file path for a given key.""" - return self.data[key] - - def __getitem__(self, key): - """Get ndarray for a given key.""" - p = self.data[key] - if ":" in p: - return read_hdf5(*p.split(":")) - else: - return read_hdf5(p, self.default_hdf5_path) - - def __len__(self): - """Return the length of the scp file.""" - return len(self.data) - - def __iter__(self): - """Return the iterator of the scp file.""" - return iter(self.data) - - def keys(self): - """Return the keys of the scp file.""" - return self.data.keys() diff --git a/spaces/SuYuanS/AudioCraft_Plus/audiocraft/models/__init__.py b/spaces/SuYuanS/AudioCraft_Plus/audiocraft/models/__init__.py deleted file mode 100644 index be6bfe4b787a132aeaabaed1c3437c9ecd5c656c..0000000000000000000000000000000000000000 --- a/spaces/SuYuanS/AudioCraft_Plus/audiocraft/models/__init__.py +++ /dev/null @@ -1,18 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. -""" -Models for EnCodec, AudioGen, MusicGen, as well as the generic LMModel. -""" -# flake8: noqa -from . 
import builders, loaders -from .encodec import ( - CompressionModel, EncodecModel, DAC, - HFEncodecModel, HFEncodecCompressionModel) -from .audiogen import AudioGen -from .lm import LMModel -from .multibanddiffusion import MultiBandDiffusion -from .musicgen import MusicGen -from .unet import DiffusionUnet diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/terminal/prompts.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/terminal/prompts.py deleted file mode 100644 index 3f5c07b980ef0b65dae36bd2970836fb1d9d2769..0000000000000000000000000000000000000000 --- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/terminal/prompts.py +++ /dev/null @@ -1,108 +0,0 @@ -"""Terminal input and output prompts.""" - -from pygments.token import Token -import sys - -from IPython.core.displayhook import DisplayHook - -from prompt_toolkit.formatted_text import fragment_list_width, PygmentsTokens -from prompt_toolkit.shortcuts import print_formatted_text -from prompt_toolkit.enums import EditingMode - - -class Prompts(object): - def __init__(self, shell): - self.shell = shell - - def vi_mode(self): - if (getattr(self.shell.pt_app, 'editing_mode', None) == EditingMode.VI - and self.shell.prompt_includes_vi_mode): - mode = str(self.shell.pt_app.app.vi_state.input_mode) - if mode.startswith('InputMode.'): - mode = mode[10:13].lower() - elif mode.startswith('vi-'): - mode = mode[3:6] - return '['+mode+'] ' - return '' - - - def in_prompt_tokens(self): - return [ - (Token.Prompt, self.vi_mode() ), - (Token.Prompt, 'In ['), - (Token.PromptNum, str(self.shell.execution_count)), - (Token.Prompt, ']: '), - ] - - def _width(self): - return fragment_list_width(self.in_prompt_tokens()) - - def continuation_prompt_tokens(self, width=None): - if width is None: - width = self._width() - return [ - (Token.Prompt, (' ' * (width - 5)) + '...: '), - ] - - def rewrite_prompt_tokens(self): - width = self._width() - return [ - (Token.Prompt, ('-' * (width - 2)) + '> '), - ] - - def out_prompt_tokens(self): - return [ - (Token.OutPrompt, 'Out['), - (Token.OutPromptNum, str(self.shell.execution_count)), - (Token.OutPrompt, ']: '), - ] - -class ClassicPrompts(Prompts): - def in_prompt_tokens(self): - return [ - (Token.Prompt, '>>> '), - ] - - def continuation_prompt_tokens(self, width=None): - return [ - (Token.Prompt, '... ') - ] - - def rewrite_prompt_tokens(self): - return [] - - def out_prompt_tokens(self): - return [] - -class RichPromptDisplayHook(DisplayHook): - """Subclass of base display hook using coloured prompt""" - def write_output_prompt(self): - sys.stdout.write(self.shell.separate_out) - # If we're not displaying a prompt, it effectively ends with a newline, - # because the output will be left-aligned. 
- self.prompt_end_newline = True - - if self.do_full_cache: - tokens = self.shell.prompts.out_prompt_tokens() - prompt_txt = ''.join(s for t, s in tokens) - if prompt_txt and not prompt_txt.endswith('\n'): - # Ask for a newline before multiline output - self.prompt_end_newline = False - - if self.shell.pt_app: - print_formatted_text(PygmentsTokens(tokens), - style=self.shell.pt_app.app.style, end='', - ) - else: - sys.stdout.write(prompt_txt) - - def write_format_data(self, format_dict, md_dict=None) -> None: - if self.shell.mime_renderers: - - for mime, handler in self.shell.mime_renderers.items(): - if mime in format_dict: - handler(format_dict[mime], None) - return - - super().write_format_data(format_dict, md_dict) - diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/terminal/tests/test_interactivshell.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/terminal/tests/test_interactivshell.py deleted file mode 100644 index ae7da217f1dffde0133685ea50baeda1b7091c92..0000000000000000000000000000000000000000 --- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/terminal/tests/test_interactivshell.py +++ /dev/null @@ -1,255 +0,0 @@ -# -*- coding: utf-8 -*- -"""Tests for the TerminalInteractiveShell and related pieces.""" -# Copyright (c) IPython Development Team. -# Distributed under the terms of the Modified BSD License. - -import sys -import unittest -import os - -from prompt_toolkit.auto_suggest import AutoSuggestFromHistory - - -from IPython.testing import tools as tt - -from IPython.terminal.ptutils import _elide, _adjust_completion_text_based_on_context -from IPython.terminal.shortcuts.auto_suggest import NavigableAutoSuggestFromHistory - - -class TestAutoSuggest(unittest.TestCase): - def test_changing_provider(self): - ip = get_ipython() - ip.autosuggestions_provider = None - self.assertEqual(ip.auto_suggest, None) - ip.autosuggestions_provider = "AutoSuggestFromHistory" - self.assertIsInstance(ip.auto_suggest, AutoSuggestFromHistory) - ip.autosuggestions_provider = "NavigableAutoSuggestFromHistory" - self.assertIsInstance(ip.auto_suggest, NavigableAutoSuggestFromHistory) - - -class TestElide(unittest.TestCase): - def test_elide(self): - _elide("concatenate((a1, a2, ...), axis", "") # do not raise - _elide("concatenate((a1, a2, ..), . axis", "") # do not raise - self.assertEqual( - _elide("aaaa.bbbb.ccccc.dddddd.eeeee.fffff.gggggg.hhhhhh", ""), - "aaaa.b…g.hhhhhh", - ) - - test_string = os.sep.join(["", 10 * "a", 10 * "b", 10 * "c", ""]) - expect_string = ( - os.sep + "a" + "\N{HORIZONTAL ELLIPSIS}" + "b" + os.sep + 10 * "c" - ) - self.assertEqual(_elide(test_string, ""), expect_string) - - def test_elide_typed_normal(self): - self.assertEqual( - _elide( - "the quick brown fox jumped over the lazy dog", - "the quick brown fox", - min_elide=10, - ), - "the…fox jumped over the lazy dog", - ) - - def test_elide_typed_short_match(self): - """ - if the match is too short we don't elide. - avoid the "the...the" - """ - self.assertEqual( - _elide("the quick brown fox jumped over the lazy dog", "the", min_elide=10), - "the quick brown fox jumped over the lazy dog", - ) - - def test_elide_typed_no_match(self): - """ - if the match is too short we don't elide. 
- avoid the "the...the" - """ - # here we typed red instead of brown - self.assertEqual( - _elide( - "the quick brown fox jumped over the lazy dog", - "the quick red fox", - min_elide=10, - ), - "the quick brown fox jumped over the lazy dog", - ) - - -class TestContextAwareCompletion(unittest.TestCase): - def test_adjust_completion_text_based_on_context(self): - # Adjusted case - self.assertEqual( - _adjust_completion_text_based_on_context("arg1=", "func1(a=)", 7), "arg1" - ) - - # Untouched cases - self.assertEqual( - _adjust_completion_text_based_on_context("arg1=", "func1(a)", 7), "arg1=" - ) - self.assertEqual( - _adjust_completion_text_based_on_context("arg1=", "func1(a", 7), "arg1=" - ) - self.assertEqual( - _adjust_completion_text_based_on_context("%magic", "func1(a=)", 7), "%magic" - ) - self.assertEqual( - _adjust_completion_text_based_on_context("func2", "func1(a=)", 7), "func2" - ) - - -# Decorator for interaction loop tests ----------------------------------------- - - -class mock_input_helper(object): - """Machinery for tests of the main interact loop. - - Used by the mock_input decorator. - """ - def __init__(self, testgen): - self.testgen = testgen - self.exception = None - self.ip = get_ipython() - - def __enter__(self): - self.orig_prompt_for_code = self.ip.prompt_for_code - self.ip.prompt_for_code = self.fake_input - return self - - def __exit__(self, etype, value, tb): - self.ip.prompt_for_code = self.orig_prompt_for_code - - def fake_input(self): - try: - return next(self.testgen) - except StopIteration: - self.ip.keep_running = False - return u'' - except: - self.exception = sys.exc_info() - self.ip.keep_running = False - return u'' - -def mock_input(testfunc): - """Decorator for tests of the main interact loop. - - Write the test as a generator, yield-ing the input strings, which IPython - will see as if they were typed in at the prompt. - """ - def test_method(self): - testgen = testfunc(self) - with mock_input_helper(testgen) as mih: - mih.ip.interact() - - if mih.exception is not None: - # Re-raise captured exception - etype, value, tb = mih.exception - import traceback - traceback.print_tb(tb, file=sys.stdout) - del tb # Avoid reference loop - raise value - - return test_method - -# Test classes ----------------------------------------------------------------- - -class InteractiveShellTestCase(unittest.TestCase): - def rl_hist_entries(self, rl, n): - """Get last n readline history entries as a list""" - return [rl.get_history_item(rl.get_current_history_length() - x) - for x in range(n - 1, -1, -1)] - - @mock_input - def test_inputtransformer_syntaxerror(self): - ip = get_ipython() - ip.input_transformers_post.append(syntax_error_transformer) - - try: - #raise Exception - with tt.AssertPrints('4', suppress=False): - yield u'print(2*2)' - - with tt.AssertPrints('SyntaxError: input contains', suppress=False): - yield u'print(2345) # syntaxerror' - - with tt.AssertPrints('16', suppress=False): - yield u'print(4*4)' - - finally: - ip.input_transformers_post.remove(syntax_error_transformer) - - def test_repl_not_plain_text(self): - ip = get_ipython() - formatter = ip.display_formatter - assert formatter.active_types == ['text/plain'] - - # terminal may have arbitrary mimetype handler to open external viewer - # or inline images. 
- assert formatter.ipython_display_formatter.enabled - - class Test(object): - def __repr__(self): - return "" % id(self) - - def _repr_html_(self): - return '' - - # verify that HTML repr isn't computed - obj = Test() - data, _ = formatter.format(obj) - self.assertEqual(data, {'text/plain': repr(obj)}) - - class Test2(Test): - def _ipython_display_(self): - from IPython.display import display, HTML - - display(HTML("")) - - # verify that mimehandlers are called - called = False - - def handler(data, metadata): - print("Handler called") - nonlocal called - called = True - - ip.display_formatter.active_types.append("text/html") - ip.display_formatter.formatters["text/html"].enabled = True - ip.mime_renderers["text/html"] = handler - try: - obj = Test() - display(obj) - finally: - ip.display_formatter.formatters["text/html"].enabled = False - del ip.mime_renderers["text/html"] - - assert called == True - - -def syntax_error_transformer(lines): - """Transformer that throws SyntaxError if 'syntaxerror' is in the code.""" - for line in lines: - pos = line.find('syntaxerror') - if pos >= 0: - e = SyntaxError('input contains "syntaxerror"') - e.text = line - e.offset = pos + 1 - raise e - return lines - - -class TerminalMagicsTestCase(unittest.TestCase): - def test_paste_magics_blankline(self): - """Test that code with a blank line doesn't get split (gh-3246).""" - ip = get_ipython() - s = ('def pasted_func(a):\n' - ' b = a+1\n' - '\n' - ' return b') - - tm = ip.magics_manager.registry['TerminalMagics'] - tm.store_or_execute(s, name=None) - - self.assertEqual(ip.user_ns['pasted_func'](54), 55) diff --git a/spaces/TH5314/newbing/src/components/turn-counter.tsx b/spaces/TH5314/newbing/src/components/turn-counter.tsx deleted file mode 100644 index 08a9e488f044802a8600f4d195b106567c35aab4..0000000000000000000000000000000000000000 --- a/spaces/TH5314/newbing/src/components/turn-counter.tsx +++ /dev/null @@ -1,23 +0,0 @@ -import React from 'react' -import { Throttling } from '@/lib/bots/bing/types' - -export interface TurnCounterProps { - throttling?: Throttling -} - -export function TurnCounter({ throttling }: TurnCounterProps) { - if (!throttling) { - return null - } - - return ( -
        -
        - {throttling.numUserMessagesInConversation} - - {throttling.maxNumUserMessagesInConversation} -
        -
        -
        - ) -} diff --git a/spaces/TYH71/gradio-ml-skeleton/src/__init__.py b/spaces/TYH71/gradio-ml-skeleton/src/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/chardet/utf8prober.py b/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/chardet/utf8prober.py deleted file mode 100644 index d96354d97c2195320d0acc1717a5876eafbea2af..0000000000000000000000000000000000000000 --- a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/chardet/utf8prober.py +++ /dev/null @@ -1,82 +0,0 @@ -######################## BEGIN LICENSE BLOCK ######################## -# The Original Code is mozilla.org code. -# -# The Initial Developer of the Original Code is -# Netscape Communications Corporation. -# Portions created by the Initial Developer are Copyright (C) 1998 -# the Initial Developer. All Rights Reserved. -# -# Contributor(s): -# Mark Pilgrim - port to Python -# -# This library is free software; you can redistribute it and/or -# modify it under the terms of the GNU Lesser General Public -# License as published by the Free Software Foundation; either -# version 2.1 of the License, or (at your option) any later version. -# -# This library is distributed in the hope that it will be useful, -# but WITHOUT ANY WARRANTY; without even the implied warranty of -# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU -# Lesser General Public License for more details. -# -# You should have received a copy of the GNU Lesser General Public -# License along with this library; if not, write to the Free Software -# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA -# 02110-1301 USA -######################### END LICENSE BLOCK ######################### - -from typing import Union - -from .charsetprober import CharSetProber -from .codingstatemachine import CodingStateMachine -from .enums import MachineState, ProbingState -from .mbcssm import UTF8_SM_MODEL - - -class UTF8Prober(CharSetProber): - ONE_CHAR_PROB = 0.5 - - def __init__(self) -> None: - super().__init__() - self.coding_sm = CodingStateMachine(UTF8_SM_MODEL) - self._num_mb_chars = 0 - self.reset() - - def reset(self) -> None: - super().reset() - self.coding_sm.reset() - self._num_mb_chars = 0 - - @property - def charset_name(self) -> str: - return "utf-8" - - @property - def language(self) -> str: - return "" - - def feed(self, byte_str: Union[bytes, bytearray]) -> ProbingState: - for c in byte_str: - coding_state = self.coding_sm.next_state(c) - if coding_state == MachineState.ERROR: - self._state = ProbingState.NOT_ME - break - if coding_state == MachineState.ITS_ME: - self._state = ProbingState.FOUND_IT - break - if coding_state == MachineState.START: - if self.coding_sm.get_current_charlen() >= 2: - self._num_mb_chars += 1 - - if self.state == ProbingState.DETECTING: - if self.get_confidence() > self.SHORTCUT_THRESHOLD: - self._state = ProbingState.FOUND_IT - - return self.state - - def get_confidence(self) -> float: - unlike = 0.99 - if self._num_mb_chars < 6: - unlike *= self.ONE_CHAR_PROB**self._num_mb_chars - return 1.0 - unlike - return unlike diff --git a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pkg_resources/_vendor/zipp.py b/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pkg_resources/_vendor/zipp.py deleted file mode 100644 index 
26b723c1fd3e25740e0268b8c9b50905c58c3d4a..0000000000000000000000000000000000000000 --- a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pkg_resources/_vendor/zipp.py +++ /dev/null @@ -1,329 +0,0 @@ -import io -import posixpath -import zipfile -import itertools -import contextlib -import sys -import pathlib - -if sys.version_info < (3, 7): - from collections import OrderedDict -else: - OrderedDict = dict - - -__all__ = ['Path'] - - -def _parents(path): - """ - Given a path with elements separated by - posixpath.sep, generate all parents of that path. - - >>> list(_parents('b/d')) - ['b'] - >>> list(_parents('/b/d/')) - ['/b'] - >>> list(_parents('b/d/f/')) - ['b/d', 'b'] - >>> list(_parents('b')) - [] - >>> list(_parents('')) - [] - """ - return itertools.islice(_ancestry(path), 1, None) - - -def _ancestry(path): - """ - Given a path with elements separated by - posixpath.sep, generate all elements of that path - - >>> list(_ancestry('b/d')) - ['b/d', 'b'] - >>> list(_ancestry('/b/d/')) - ['/b/d', '/b'] - >>> list(_ancestry('b/d/f/')) - ['b/d/f', 'b/d', 'b'] - >>> list(_ancestry('b')) - ['b'] - >>> list(_ancestry('')) - [] - """ - path = path.rstrip(posixpath.sep) - while path and path != posixpath.sep: - yield path - path, tail = posixpath.split(path) - - -_dedupe = OrderedDict.fromkeys -"""Deduplicate an iterable in original order""" - - -def _difference(minuend, subtrahend): - """ - Return items in minuend not in subtrahend, retaining order - with O(1) lookup. - """ - return itertools.filterfalse(set(subtrahend).__contains__, minuend) - - -class CompleteDirs(zipfile.ZipFile): - """ - A ZipFile subclass that ensures that implied directories - are always included in the namelist. - """ - - @staticmethod - def _implied_dirs(names): - parents = itertools.chain.from_iterable(map(_parents, names)) - as_dirs = (p + posixpath.sep for p in parents) - return _dedupe(_difference(as_dirs, names)) - - def namelist(self): - names = super(CompleteDirs, self).namelist() - return names + list(self._implied_dirs(names)) - - def _name_set(self): - return set(self.namelist()) - - def resolve_dir(self, name): - """ - If the name represents a directory, return that name - as a directory (with the trailing slash). - """ - names = self._name_set() - dirname = name + '/' - dir_match = name not in names and dirname in names - return dirname if dir_match else name - - @classmethod - def make(cls, source): - """ - Given a source (filename or zipfile), return an - appropriate CompleteDirs subclass. - """ - if isinstance(source, CompleteDirs): - return source - - if not isinstance(source, zipfile.ZipFile): - return cls(_pathlib_compat(source)) - - # Only allow for FastLookup when supplied zipfile is read-only - if 'r' not in source.mode: - cls = CompleteDirs - - source.__class__ = cls - return source - - -class FastLookup(CompleteDirs): - """ - ZipFile subclass to ensure implicit - dirs exist and are resolved rapidly. - """ - - def namelist(self): - with contextlib.suppress(AttributeError): - return self.__names - self.__names = super(FastLookup, self).namelist() - return self.__names - - def _name_set(self): - with contextlib.suppress(AttributeError): - return self.__lookup - self.__lookup = super(FastLookup, self)._name_set() - return self.__lookup - - -def _pathlib_compat(path): - """ - For path-like objects, convert to a filename for compatibility - on Python 3.6.1 and earlier. 
- """ - try: - return path.__fspath__() - except AttributeError: - return str(path) - - -class Path: - """ - A pathlib-compatible interface for zip files. - - Consider a zip file with this structure:: - - . - ├── a.txt - └── b - ├── c.txt - └── d - └── e.txt - - >>> data = io.BytesIO() - >>> zf = zipfile.ZipFile(data, 'w') - >>> zf.writestr('a.txt', 'content of a') - >>> zf.writestr('b/c.txt', 'content of c') - >>> zf.writestr('b/d/e.txt', 'content of e') - >>> zf.filename = 'mem/abcde.zip' - - Path accepts the zipfile object itself or a filename - - >>> root = Path(zf) - - From there, several path operations are available. - - Directory iteration (including the zip file itself): - - >>> a, b = root.iterdir() - >>> a - Path('mem/abcde.zip', 'a.txt') - >>> b - Path('mem/abcde.zip', 'b/') - - name property: - - >>> b.name - 'b' - - join with divide operator: - - >>> c = b / 'c.txt' - >>> c - Path('mem/abcde.zip', 'b/c.txt') - >>> c.name - 'c.txt' - - Read text: - - >>> c.read_text() - 'content of c' - - existence: - - >>> c.exists() - True - >>> (b / 'missing.txt').exists() - False - - Coercion to string: - - >>> import os - >>> str(c).replace(os.sep, posixpath.sep) - 'mem/abcde.zip/b/c.txt' - - At the root, ``name``, ``filename``, and ``parent`` - resolve to the zipfile. Note these attributes are not - valid and will raise a ``ValueError`` if the zipfile - has no filename. - - >>> root.name - 'abcde.zip' - >>> str(root.filename).replace(os.sep, posixpath.sep) - 'mem/abcde.zip' - >>> str(root.parent) - 'mem' - """ - - __repr = "{self.__class__.__name__}({self.root.filename!r}, {self.at!r})" - - def __init__(self, root, at=""): - """ - Construct a Path from a ZipFile or filename. - - Note: When the source is an existing ZipFile object, - its type (__class__) will be mutated to a - specialized type. If the caller wishes to retain the - original type, the caller should either create a - separate ZipFile object or pass a filename. - """ - self.root = FastLookup.make(root) - self.at = at - - def open(self, mode='r', *args, pwd=None, **kwargs): - """ - Open this entry as text or binary following the semantics - of ``pathlib.Path.open()`` by passing arguments through - to io.TextIOWrapper(). 
- """ - if self.is_dir(): - raise IsADirectoryError(self) - zip_mode = mode[0] - if not self.exists() and zip_mode == 'r': - raise FileNotFoundError(self) - stream = self.root.open(self.at, zip_mode, pwd=pwd) - if 'b' in mode: - if args or kwargs: - raise ValueError("encoding args invalid for binary operation") - return stream - return io.TextIOWrapper(stream, *args, **kwargs) - - @property - def name(self): - return pathlib.Path(self.at).name or self.filename.name - - @property - def suffix(self): - return pathlib.Path(self.at).suffix or self.filename.suffix - - @property - def suffixes(self): - return pathlib.Path(self.at).suffixes or self.filename.suffixes - - @property - def stem(self): - return pathlib.Path(self.at).stem or self.filename.stem - - @property - def filename(self): - return pathlib.Path(self.root.filename).joinpath(self.at) - - def read_text(self, *args, **kwargs): - with self.open('r', *args, **kwargs) as strm: - return strm.read() - - def read_bytes(self): - with self.open('rb') as strm: - return strm.read() - - def _is_child(self, path): - return posixpath.dirname(path.at.rstrip("/")) == self.at.rstrip("/") - - def _next(self, at): - return self.__class__(self.root, at) - - def is_dir(self): - return not self.at or self.at.endswith("/") - - def is_file(self): - return self.exists() and not self.is_dir() - - def exists(self): - return self.at in self.root._name_set() - - def iterdir(self): - if not self.is_dir(): - raise ValueError("Can't listdir a file") - subs = map(self._next, self.root.namelist()) - return filter(self._is_child, subs) - - def __str__(self): - return posixpath.join(self.root.filename, self.at) - - def __repr__(self): - return self.__repr.format(self=self) - - def joinpath(self, *other): - next = posixpath.join(self.at, *map(_pathlib_compat, other)) - return self._next(self.root.resolve_dir(next)) - - __truediv__ = joinpath - - @property - def parent(self): - if not self.at: - return self.filename.parent - parent_at = posixpath.dirname(self.at.rstrip('/')) - if parent_at: - parent_at += '/' - return self._next(parent_at) diff --git a/spaces/Tatiana2u1/Tatiana/README.md b/spaces/Tatiana2u1/Tatiana/README.md deleted file mode 100644 index fa6fadcdb9b4a2cc4c7e18c39165aa763fb8fec3..0000000000000000000000000000000000000000 --- a/spaces/Tatiana2u1/Tatiana/README.md +++ /dev/null @@ -1,10 +0,0 @@ ---- -title: Tatiana -emoji: 🏢 -colorFrom: red -colorTo: yellow -sdk: docker -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/TechnoByte/wd-v1-4-tags/app.py b/spaces/TechnoByte/wd-v1-4-tags/app.py deleted file mode 100644 index 14f920b64badc2b7943b9896b8eb931d2353c358..0000000000000000000000000000000000000000 --- a/spaces/TechnoByte/wd-v1-4-tags/app.py +++ /dev/null @@ -1,268 +0,0 @@ -from __future__ import annotations - -import argparse -import functools -import html -import os - -import gradio as gr -import huggingface_hub -import numpy as np -import onnxruntime as rt -import pandas as pd -import piexif -import piexif.helper -import PIL.Image - -from Utils import dbimutils - -TITLE = "WaifuDiffusion v1.4 Tags" - -HF_TOKEN = os.environ["HF_TOKEN"] -MOAT_MODEL_REPO = "SmilingWolf/wd-v1-4-moat-tagger-v2" -SWIN_MODEL_REPO = "SmilingWolf/wd-v1-4-swinv2-tagger-v2" -CONV_MODEL_REPO = "SmilingWolf/wd-v1-4-convnext-tagger-v2" -CONV2_MODEL_REPO = "SmilingWolf/wd-v1-4-convnextv2-tagger-v2" -VIT_MODEL_REPO = "SmilingWolf/wd-v1-4-vit-tagger-v2" -MODEL_FILENAME = "model.onnx" 
-LABEL_FILENAME = "selected_tags.csv" - - -def parse_args() -> argparse.Namespace: - parser = argparse.ArgumentParser() - parser.add_argument("--score-slider-step", type=float, default=0.05) - parser.add_argument("--score-general-threshold", type=float, default=0.35) - parser.add_argument("--score-character-threshold", type=float, default=0.85) - parser.add_argument("--share", action="store_true") - return parser.parse_args() - - -def load_model(model_repo: str, model_filename: str) -> rt.InferenceSession: - path = huggingface_hub.hf_hub_download( - model_repo, model_filename, use_auth_token=HF_TOKEN - ) - model = rt.InferenceSession(path) - return model - - -def change_model(model_name): - global loaded_models - - if model_name == "MOAT": - model = load_model(MOAT_MODEL_REPO, MODEL_FILENAME) - elif model_name == "SwinV2": - model = load_model(SWIN_MODEL_REPO, MODEL_FILENAME) - elif model_name == "ConvNext": - model = load_model(CONV_MODEL_REPO, MODEL_FILENAME) - elif model_name == "ConvNextV2": - model = load_model(CONV2_MODEL_REPO, MODEL_FILENAME) - elif model_name == "ViT": - model = load_model(VIT_MODEL_REPO, MODEL_FILENAME) - - loaded_models[model_name] = model - return loaded_models[model_name] - - -def load_labels() -> list[str]: - path = huggingface_hub.hf_hub_download( - MOAT_MODEL_REPO, LABEL_FILENAME, use_auth_token=HF_TOKEN - ) - df = pd.read_csv(path) - - tag_names = df["name"].tolist() - rating_indexes = list(np.where(df["category"] == 9)[0]) - general_indexes = list(np.where(df["category"] == 0)[0]) - character_indexes = list(np.where(df["category"] == 4)[0]) - return tag_names, rating_indexes, general_indexes, character_indexes - - -def plaintext_to_html(text): - text = ( - "
<p>" + "<br>\n".join([f"{html.escape(x)}" for x in text.split("\n")]) + "</p>

        " - ) - return text - - -def predict( - image: PIL.Image.Image, - model_name: str, - general_threshold: float, - character_threshold: float, - tag_names: list[str], - rating_indexes: list[np.int64], - general_indexes: list[np.int64], - character_indexes: list[np.int64], -): - global loaded_models - - rawimage = image - - model = loaded_models[model_name] - if model is None: - model = change_model(model_name) - - _, height, width, _ = model.get_inputs()[0].shape - - # Alpha to white - image = image.convert("RGBA") - new_image = PIL.Image.new("RGBA", image.size, "WHITE") - new_image.paste(image, mask=image) - image = new_image.convert("RGB") - image = np.asarray(image) - - # PIL RGB to OpenCV BGR - image = image[:, :, ::-1] - - image = dbimutils.make_square(image, height) - image = dbimutils.smart_resize(image, height) - image = image.astype(np.float32) - image = np.expand_dims(image, 0) - - input_name = model.get_inputs()[0].name - label_name = model.get_outputs()[0].name - probs = model.run([label_name], {input_name: image})[0] - - labels = list(zip(tag_names, probs[0].astype(float))) - - # First 4 labels are actually ratings: pick one with argmax - ratings_names = [labels[i] for i in rating_indexes] - rating = dict(ratings_names) - - # Then we have general tags: pick any where prediction confidence > threshold - general_names = [labels[i] for i in general_indexes] - general_res = [x for x in general_names if x[1] > general_threshold] - general_res = dict(general_res) - - # Everything else is characters: pick any where prediction confidence > threshold - character_names = [labels[i] for i in character_indexes] - character_res = [x for x in character_names if x[1] > character_threshold] - character_res = dict(character_res) - - b = dict(sorted(general_res.items(), key=lambda item: item[1], reverse=True)) - a = ( - ", ".join(list(b.keys())) - .replace("_", " ") - .replace("(", "\(") - .replace(")", "\)") - ) - c = ", ".join(list(b.keys())) - - items = rawimage.info - geninfo = "" - - if "exif" in rawimage.info: - exif = piexif.load(rawimage.info["exif"]) - exif_comment = (exif or {}).get("Exif", {}).get(piexif.ExifIFD.UserComment, b"") - try: - exif_comment = piexif.helper.UserComment.load(exif_comment) - except ValueError: - exif_comment = exif_comment.decode("utf8", errors="ignore") - - items["exif comment"] = exif_comment - geninfo = exif_comment - - for field in [ - "jfif", - "jfif_version", - "jfif_unit", - "jfif_density", - "dpi", - "exif", - "loop", - "background", - "timestamp", - "duration", - ]: - items.pop(field, None) - - geninfo = items.get("parameters", geninfo) - - info = f""" -

<p><h4>PNG Info</h4></p>
        -""" - for key, text in items.items(): - info += ( - f""" -
<div> -<p><b>{plaintext_to_html(str(key))}</b></p> -<p>{plaintext_to_html(str(text))}</p> -</div>
        -""".strip() - + "\n" - ) - - if len(info) == 0: - message = "Nothing found in the image." - info = f"

<div><p>{message}</p></div>
        " - - return (a, c, rating, character_res, general_res, info) - - -def main(): - global loaded_models - loaded_models = { - "MOAT": None, - "SwinV2": None, - "ConvNext": None, - "ConvNextV2": None, - "ViT": None, - } - - args = parse_args() - - change_model("MOAT") - - tag_names, rating_indexes, general_indexes, character_indexes = load_labels() - - func = functools.partial( - predict, - tag_names=tag_names, - rating_indexes=rating_indexes, - general_indexes=general_indexes, - character_indexes=character_indexes, - ) - - gr.Interface( - fn=func, - inputs=[ - gr.Image(type="pil", label="Input"), - gr.Radio( - ["MOAT", "SwinV2", "ConvNext", "ConvNextV2", "ViT"], - value="MOAT", - label="Model", - ), - gr.Slider( - 0, - 1, - step=args.score_slider_step, - value=args.score_general_threshold, - label="General Tags Threshold", - ), - gr.Slider( - 0, - 1, - step=args.score_slider_step, - value=args.score_character_threshold, - label="Character Tags Threshold", - ), - ], - outputs=[ - gr.Textbox(label="Output (string)"), - gr.Textbox(label="Output (raw string)"), - gr.Label(label="Rating"), - gr.Label(label="Output (characters)"), - gr.Label(label="Output (tags)"), - gr.HTML(), - ], - examples=[], - title=TITLE, - allow_flagging="never", - theme='TechnoByte/soft-improved', - ).launch( - enable_queue=True, - share=args.share, - ) - - -if __name__ == "__main__": - main() diff --git a/spaces/TencentARC/Caption-Anything/caption_anything/utils/parser.py b/spaces/TencentARC/Caption-Anything/caption_anything/utils/parser.py deleted file mode 100644 index 6abf61535a437616ced23a7df38eb4ea1b295fa2..0000000000000000000000000000000000000000 --- a/spaces/TencentARC/Caption-Anything/caption_anything/utils/parser.py +++ /dev/null @@ -1,35 +0,0 @@ -import argparse - -def parse_augment(): - parser = argparse.ArgumentParser() - parser.add_argument('--captioner', type=str, default="blip2") - parser.add_argument('--segmenter', type=str, default="huge") - parser.add_argument('--text_refiner', type=str, default="base") - parser.add_argument('--segmenter_checkpoint', type=str, default=None, help="SAM checkpoint path") - parser.add_argument('--seg_crop_mode', type=str, default="wo_bg", choices=['wo_bg', 'w_bg'], - help="whether to add or remove background of the image when captioning") - parser.add_argument('--clip_filter', action="store_true", help="use clip to filter bad captions") - parser.add_argument('--context_captions', action="store_true", - help="use surrounding captions to enhance current caption (TODO)") - parser.add_argument('--disable_regular_box', action="store_true", default=False, - help="crop image with a regular box") - parser.add_argument('--device', type=str, default="cuda:0") - parser.add_argument('--port', type=int, default=6086, help="only useful when running gradio applications") - parser.add_argument('--debug', action="store_true") - parser.add_argument('--gradio_share', action="store_true") - parser.add_argument('--disable_gpt', action="store_true") - parser.add_argument('--enable_reduce_tokens', action="store_true", default=False) - parser.add_argument('--disable_reuse_features', action="store_true", default=False) - parser.add_argument('--enable_morphologyex', action="store_true", default=False) - parser.add_argument('--chat_tools_dict', type=str, default='VisualQuestionAnswering_cuda:0', help='Visual ChatGPT tools, only useful when running gradio applications') - - parser.add_argument('--pred_iou_thresh', type=float, default=0.88, help="sam post-precessing") - 
parser.add_argument('--min_mask_region_area', type=int, default=0, help="sam post-precessing") - parser.add_argument('--stability_score_thresh', type=float, default=0.95, help='sam post-processing') - parser.add_argument('--box_nms_thresh', type=float, default=0.7, help='sam post-processing') - - args = parser.parse_args() - - if args.debug: - print(args) - return args diff --git a/spaces/UserXTheUnknown/stablediffusion-infinity/PyPatchMatch/csrc/masked_image.cpp b/spaces/UserXTheUnknown/stablediffusion-infinity/PyPatchMatch/csrc/masked_image.cpp deleted file mode 100644 index 448a776b3cda9f39f4dd0ad908f1b135c647ca8f..0000000000000000000000000000000000000000 --- a/spaces/UserXTheUnknown/stablediffusion-infinity/PyPatchMatch/csrc/masked_image.cpp +++ /dev/null @@ -1,138 +0,0 @@ -#include "masked_image.h" -#include -#include - -const cv::Size MaskedImage::kDownsampleKernelSize = cv::Size(6, 6); -const int MaskedImage::kDownsampleKernel[6] = {1, 5, 10, 10, 5, 1}; - -bool MaskedImage::contains_mask(int y, int x, int patch_size) const { - auto mask_size = size(); - for (int dy = -patch_size; dy <= patch_size; ++dy) { - for (int dx = -patch_size; dx <= patch_size; ++dx) { - int yy = y + dy, xx = x + dx; - if (yy >= 0 && yy < mask_size.height && xx >= 0 && xx < mask_size.width) { - if (is_masked(yy, xx) && !is_globally_masked(yy, xx)) return true; - } - } - } - return false; -} - -MaskedImage MaskedImage::downsample() const { - const auto &kernel_size = MaskedImage::kDownsampleKernelSize; - const auto &kernel = MaskedImage::kDownsampleKernel; - - const auto size = this->size(); - const auto new_size = cv::Size(size.width / 2, size.height / 2); - - auto ret = MaskedImage(new_size.width, new_size.height); - if (!m_global_mask.empty()) ret.init_global_mask_mat(); - for (int y = 0; y < size.height - 1; y += 2) { - for (int x = 0; x < size.width - 1; x += 2) { - int r = 0, g = 0, b = 0, ksum = 0; - bool is_gmasked = true; - - for (int dy = -kernel_size.height / 2 + 1; dy <= kernel_size.height / 2; ++dy) { - for (int dx = -kernel_size.width / 2 + 1; dx <= kernel_size.width / 2; ++dx) { - int yy = y + dy, xx = x + dx; - if (yy >= 0 && yy < size.height && xx >= 0 && xx < size.width) { - if (!is_globally_masked(yy, xx)) { - is_gmasked = false; - } - if (!is_masked(yy, xx)) { - auto source_ptr = get_image(yy, xx); - int k = kernel[kernel_size.height / 2 - 1 + dy] * kernel[kernel_size.width / 2 - 1 + dx]; - r += source_ptr[0] * k, g += source_ptr[1] * k, b += source_ptr[2] * k; - ksum += k; - } - } - } - } - - if (ksum > 0) r /= ksum, g /= ksum, b /= ksum; - - if (!m_global_mask.empty()) { - ret.set_global_mask(y / 2, x / 2, is_gmasked); - } - if (ksum > 0) { - auto target_ptr = ret.get_mutable_image(y / 2, x / 2); - target_ptr[0] = r, target_ptr[1] = g, target_ptr[2] = b; - ret.set_mask(y / 2, x / 2, 0); - } else { - ret.set_mask(y / 2, x / 2, 1); - } - } - } - - return ret; -} - -MaskedImage MaskedImage::upsample(int new_w, int new_h) const { - const auto size = this->size(); - auto ret = MaskedImage(new_w, new_h); - if (!m_global_mask.empty()) ret.init_global_mask_mat(); - for (int y = 0; y < new_h; ++y) { - for (int x = 0; x < new_w; ++x) { - int yy = y * size.height / new_h; - int xx = x * size.width / new_w; - - if (is_globally_masked(yy, xx)) { - ret.set_global_mask(y, x, 1); - ret.set_mask(y, x, 1); - } else { - if (!m_global_mask.empty()) ret.set_global_mask(y, x, 0); - - if (is_masked(yy, xx)) { - ret.set_mask(y, x, 1); - } else { - auto source_ptr = get_image(yy, xx); - auto target_ptr = 
ret.get_mutable_image(y, x); - for (int c = 0; c < 3; ++c) - target_ptr[c] = source_ptr[c]; - ret.set_mask(y, x, 0); - } - } - } - } - - return ret; -} - -MaskedImage MaskedImage::upsample(int new_w, int new_h, const cv::Mat &new_global_mask) const { - auto ret = upsample(new_w, new_h); - ret.set_global_mask_mat(new_global_mask); - return ret; -} - -void MaskedImage::compute_image_gradients() { - if (m_image_grad_computed) { - return; - } - - const auto size = m_image.size(); - m_image_grady = cv::Mat(size, CV_8UC3); - m_image_gradx = cv::Mat(size, CV_8UC3); - m_image_grady = cv::Scalar::all(0); - m_image_gradx = cv::Scalar::all(0); - - for (int i = 1; i < size.height - 1; ++i) { - const auto *ptr = m_image.ptr(i, 0); - const auto *ptry1 = m_image.ptr(i + 1, 0); - const auto *ptry2 = m_image.ptr(i - 1, 0); - const auto *ptrx1 = m_image.ptr(i, 0) + 3; - const auto *ptrx2 = m_image.ptr(i, 0) - 3; - auto *mptry = m_image_grady.ptr(i, 0); - auto *mptrx = m_image_gradx.ptr(i, 0); - for (int j = 3; j < size.width * 3 - 3; ++j) { - mptry[j] = (ptry1[j] / 2 - ptry2[j] / 2) + 128; - mptrx[j] = (ptrx1[j] / 2 - ptrx2[j] / 2) + 128; - } - } - - m_image_grad_computed = true; -} - -void MaskedImage::compute_image_gradients() const { - const_cast(this)->compute_image_gradients(); -} - diff --git a/spaces/VickyKira/NASAGPT/client/css/conversation.css b/spaces/VickyKira/NASAGPT/client/css/conversation.css deleted file mode 100644 index d20f178c45e8ccbfc9539f99914b25fc572045bd..0000000000000000000000000000000000000000 --- a/spaces/VickyKira/NASAGPT/client/css/conversation.css +++ /dev/null @@ -1,158 +0,0 @@ -.conversation { - width: 60%; - margin: 0px 16px; - display: flex; - flex-direction: column; -} - -.conversation #messages { - width: 100%; - display: flex; - flex-direction: column; - overflow: auto; - overflow-wrap: break-word; - padding-bottom: 8px; -} - -.conversation .user-input { - max-height: 180px; - margin: 16px 0px; -} - -.conversation .user-input input { - font-size: 1rem; - background: none; - border: none; - outline: none; - color: var(--colour-3); -} - -.conversation .user-input input::placeholder { - color: var(--user-input); -} - -.conversation-title { - color: var(--colour-3); - font-size: 14px; -} - -.conversation .user-input textarea { - font-size: 1rem; - width: 100%; - height: 100%; - padding: 12px; - background: none; - border: none; - outline: none; - color: var(--colour-3); - resize: vertical; - max-height: 150px; - min-height: 80px; -} - -.box { - backdrop-filter: blur(20px); - -webkit-backdrop-filter: blur(20px); - background-color: var(--blur-bg); - height: 100%; - width: 100%; - border-radius: var(--border-radius-1); - border: 1px solid var(--blur-border); -} - -.box.input-box { - position: relative; - align-items: center; - padding: 8px; - cursor: pointer; -} - -#send-button { - position: absolute; - bottom: 25%; - right: 10px; - z-index: 1; - padding: 16px; -} - -#cursor { - line-height: 17px; - margin-left: 3px; - -webkit-animation: blink 0.8s infinite; - animation: blink 0.8s infinite; - width: 7px; - height: 15px; -} - -@keyframes blink { - 0% { - background: #ffffff00; - } - - 50% { - background: white; - } - - 100% { - background: #ffffff00; - } -} - -@-webkit-keyframes blink { - 0% { - background: #ffffff00; - } - - 50% { - background: white; - } - - 100% { - background: #ffffff00; - } -} - -/* scrollbar */ -.conversation #messages::-webkit-scrollbar { - width: 4px; - padding: 8px 0px; -} - -.conversation #messages::-webkit-scrollbar-track { - background-color: 
#ffffff00; -} - -.conversation #messages::-webkit-scrollbar-thumb { - background-color: #555555; - border-radius: 10px; -} - -@media screen and (max-width: 990px) { - .conversation { - width: 100%; - height: 90%; - } -} - -@media screen and (max-height: 720px) { - .conversation.box { - height: 70%; - } - - .conversation .user-input textarea { - font-size: 0.875rem; - } -} - -@media screen and (max-width: 360px) { - .box { - border-radius: 0; - } - .conversation { - margin: 0; - margin-top: 48px; - } - .conversation .user-input { - margin: 2px 0 8px 0; - } -} diff --git a/spaces/VoiceHero69/changer/webui/modules/implementations/__init__.py b/spaces/VoiceHero69/changer/webui/modules/implementations/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/WZUN666/vits-uma-genshin-honkai/models.py b/spaces/WZUN666/vits-uma-genshin-honkai/models.py deleted file mode 100644 index 8353b867f441de7e4d05aef980e672899c3a8889..0000000000000000000000000000000000000000 --- a/spaces/WZUN666/vits-uma-genshin-honkai/models.py +++ /dev/null @@ -1,533 +0,0 @@ -import math -import torch -from torch import nn -from torch.nn import functional as F - -import commons -import modules -import attentions -import monotonic_align - -from torch.nn import Conv1d, ConvTranspose1d, Conv2d -from torch.nn.utils import weight_norm, remove_weight_norm, spectral_norm -from commons import init_weights, get_padding - - -class StochasticDurationPredictor(nn.Module): - def __init__(self, in_channels, filter_channels, kernel_size, p_dropout, n_flows=4, gin_channels=0): - super().__init__() - filter_channels = in_channels # it needs to be removed from future version. - self.in_channels = in_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.n_flows = n_flows - self.gin_channels = gin_channels - - self.log_flow = modules.Log() - self.flows = nn.ModuleList() - self.flows.append(modules.ElementwiseAffine(2)) - for i in range(n_flows): - self.flows.append(modules.ConvFlow(2, filter_channels, kernel_size, n_layers=3)) - self.flows.append(modules.Flip()) - - self.post_pre = nn.Conv1d(1, filter_channels, 1) - self.post_proj = nn.Conv1d(filter_channels, filter_channels, 1) - self.post_convs = modules.DDSConv(filter_channels, kernel_size, n_layers=3, p_dropout=p_dropout) - self.post_flows = nn.ModuleList() - self.post_flows.append(modules.ElementwiseAffine(2)) - for i in range(4): - self.post_flows.append(modules.ConvFlow(2, filter_channels, kernel_size, n_layers=3)) - self.post_flows.append(modules.Flip()) - - self.pre = nn.Conv1d(in_channels, filter_channels, 1) - self.proj = nn.Conv1d(filter_channels, filter_channels, 1) - self.convs = modules.DDSConv(filter_channels, kernel_size, n_layers=3, p_dropout=p_dropout) - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, filter_channels, 1) - - def forward(self, x, x_mask, w=None, g=None, reverse=False, noise_scale=1.0): - x = torch.detach(x) - x = self.pre(x) - if g is not None: - g = torch.detach(g) - x = x + self.cond(g) - x = self.convs(x, x_mask) - x = self.proj(x) * x_mask - - if not reverse: - flows = self.flows - assert w is not None - - logdet_tot_q = 0 - h_w = self.post_pre(w) - h_w = self.post_convs(h_w, x_mask) - h_w = self.post_proj(h_w) * x_mask - e_q = torch.randn(w.size(0), 2, w.size(2)).to(device=x.device, dtype=x.dtype) * x_mask - z_q = e_q - for flow in self.post_flows: - z_q, logdet_q = flow(z_q, x_mask, g=(x + 
h_w)) - logdet_tot_q += logdet_q - z_u, z1 = torch.split(z_q, [1, 1], 1) - u = torch.sigmoid(z_u) * x_mask - z0 = (w - u) * x_mask - logdet_tot_q += torch.sum((F.logsigmoid(z_u) + F.logsigmoid(-z_u)) * x_mask, [1,2]) - logq = torch.sum(-0.5 * (math.log(2*math.pi) + (e_q**2)) * x_mask, [1,2]) - logdet_tot_q - - logdet_tot = 0 - z0, logdet = self.log_flow(z0, x_mask) - logdet_tot += logdet - z = torch.cat([z0, z1], 1) - for flow in flows: - z, logdet = flow(z, x_mask, g=x, reverse=reverse) - logdet_tot = logdet_tot + logdet - nll = torch.sum(0.5 * (math.log(2*math.pi) + (z**2)) * x_mask, [1,2]) - logdet_tot - return nll + logq # [b] - else: - flows = list(reversed(self.flows)) - flows = flows[:-2] + [flows[-1]] # remove a useless vflow - z = torch.randn(x.size(0), 2, x.size(2)).to(device=x.device, dtype=x.dtype) * noise_scale - for flow in flows: - z = flow(z, x_mask, g=x, reverse=reverse) - z0, z1 = torch.split(z, [1, 1], 1) - logw = z0 - return logw - - -class DurationPredictor(nn.Module): - def __init__(self, in_channels, filter_channels, kernel_size, p_dropout, gin_channels=0): - super().__init__() - - self.in_channels = in_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.gin_channels = gin_channels - - self.drop = nn.Dropout(p_dropout) - self.conv_1 = nn.Conv1d(in_channels, filter_channels, kernel_size, padding=kernel_size//2) - self.norm_1 = modules.LayerNorm(filter_channels) - self.conv_2 = nn.Conv1d(filter_channels, filter_channels, kernel_size, padding=kernel_size//2) - self.norm_2 = modules.LayerNorm(filter_channels) - self.proj = nn.Conv1d(filter_channels, 1, 1) - - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, in_channels, 1) - - def forward(self, x, x_mask, g=None): - x = torch.detach(x) - if g is not None: - g = torch.detach(g) - x = x + self.cond(g) - x = self.conv_1(x * x_mask) - x = torch.relu(x) - x = self.norm_1(x) - x = self.drop(x) - x = self.conv_2(x * x_mask) - x = torch.relu(x) - x = self.norm_2(x) - x = self.drop(x) - x = self.proj(x * x_mask) - return x * x_mask - - -class TextEncoder(nn.Module): - def __init__(self, - n_vocab, - out_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout): - super().__init__() - self.n_vocab = n_vocab - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - - self.emb = nn.Embedding(n_vocab, hidden_channels) - nn.init.normal_(self.emb.weight, 0.0, hidden_channels**-0.5) - - self.encoder = attentions.Encoder( - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout) - self.proj= nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, x, x_lengths): - x = self.emb(x) * math.sqrt(self.hidden_channels) # [b, t, h] - x = torch.transpose(x, 1, -1) # [b, h, t] - x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to(x.dtype) - - x = self.encoder(x * x_mask, x_mask) - stats = self.proj(x) * x_mask - - m, logs = torch.split(stats, self.out_channels, dim=1) - return x, m, logs, x_mask - - -class ResidualCouplingBlock(nn.Module): - def __init__(self, - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - n_flows=4, - gin_channels=0): - super().__init__() - self.channels = channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - 
self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.n_flows = n_flows - self.gin_channels = gin_channels - - self.flows = nn.ModuleList() - for i in range(n_flows): - self.flows.append(modules.ResidualCouplingLayer(channels, hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=gin_channels, mean_only=True)) - self.flows.append(modules.Flip()) - - def forward(self, x, x_mask, g=None, reverse=False): - if not reverse: - for flow in self.flows: - x, _ = flow(x, x_mask, g=g, reverse=reverse) - else: - for flow in reversed(self.flows): - x = flow(x, x_mask, g=g, reverse=reverse) - return x - - -class PosteriorEncoder(nn.Module): - def __init__(self, - in_channels, - out_channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=0): - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.gin_channels = gin_channels - - self.pre = nn.Conv1d(in_channels, hidden_channels, 1) - self.enc = modules.WN(hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=gin_channels) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, x, x_lengths, g=None): - x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to(x.dtype) - x = self.pre(x) * x_mask - x = self.enc(x, x_mask, g=g) - stats = self.proj(x) * x_mask - m, logs = torch.split(stats, self.out_channels, dim=1) - z = (m + torch.randn_like(m) * torch.exp(logs)) * x_mask - return z, m, logs, x_mask - - -class Generator(torch.nn.Module): - def __init__(self, initial_channel, resblock, resblock_kernel_sizes, resblock_dilation_sizes, upsample_rates, upsample_initial_channel, upsample_kernel_sizes, gin_channels=0): - super(Generator, self).__init__() - self.num_kernels = len(resblock_kernel_sizes) - self.num_upsamples = len(upsample_rates) - self.conv_pre = Conv1d(initial_channel, upsample_initial_channel, 7, 1, padding=3) - resblock = modules.ResBlock1 if resblock == '1' else modules.ResBlock2 - - self.ups = nn.ModuleList() - for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)): - self.ups.append(weight_norm( - ConvTranspose1d(upsample_initial_channel//(2**i), upsample_initial_channel//(2**(i+1)), - k, u, padding=(k-u)//2))) - - self.resblocks = nn.ModuleList() - for i in range(len(self.ups)): - ch = upsample_initial_channel//(2**(i+1)) - for j, (k, d) in enumerate(zip(resblock_kernel_sizes, resblock_dilation_sizes)): - self.resblocks.append(resblock(ch, k, d)) - - self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False) - self.ups.apply(init_weights) - - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1) - - def forward(self, x, g=None): - x = self.conv_pre(x) - if g is not None: - x = x + self.cond(g) - - for i in range(self.num_upsamples): - x = F.leaky_relu(x, modules.LRELU_SLOPE) - x = self.ups[i](x) - xs = None - for j in range(self.num_kernels): - if xs is None: - xs = self.resblocks[i*self.num_kernels+j](x) - else: - xs += self.resblocks[i*self.num_kernels+j](x) - x = xs / self.num_kernels - x = F.leaky_relu(x) - x = self.conv_post(x) - x = torch.tanh(x) - - return x - - def remove_weight_norm(self): - print('Removing weight norm...') - for l in self.ups: - remove_weight_norm(l) - for l in self.resblocks: - l.remove_weight_norm() - - -class DiscriminatorP(torch.nn.Module): - def __init__(self, period, 
kernel_size=5, stride=3, use_spectral_norm=False): - super(DiscriminatorP, self).__init__() - self.period = period - self.use_spectral_norm = use_spectral_norm - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList([ - norm_f(Conv2d(1, 32, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(32, 128, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(128, 512, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(512, 1024, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(1024, 1024, (kernel_size, 1), 1, padding=(get_padding(kernel_size, 1), 0))), - ]) - self.conv_post = norm_f(Conv2d(1024, 1, (3, 1), 1, padding=(1, 0))) - - def forward(self, x): - fmap = [] - - # 1d to 2d - b, c, t = x.shape - if t % self.period != 0: # pad first - n_pad = self.period - (t % self.period) - x = F.pad(x, (0, n_pad), "reflect") - t = t + n_pad - x = x.view(b, c, t // self.period, self.period) - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap - - -class DiscriminatorS(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(DiscriminatorS, self).__init__() - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList([ - norm_f(Conv1d(1, 16, 15, 1, padding=7)), - norm_f(Conv1d(16, 64, 41, 4, groups=4, padding=20)), - norm_f(Conv1d(64, 256, 41, 4, groups=16, padding=20)), - norm_f(Conv1d(256, 1024, 41, 4, groups=64, padding=20)), - norm_f(Conv1d(1024, 1024, 41, 4, groups=256, padding=20)), - norm_f(Conv1d(1024, 1024, 5, 1, padding=2)), - ]) - self.conv_post = norm_f(Conv1d(1024, 1, 3, 1, padding=1)) - - def forward(self, x): - fmap = [] - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap - - -class MultiPeriodDiscriminator(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(MultiPeriodDiscriminator, self).__init__() - periods = [2,3,5,7,11] - - discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)] - discs = discs + [DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods] - self.discriminators = nn.ModuleList(discs) - - def forward(self, y, y_hat): - y_d_rs = [] - y_d_gs = [] - fmap_rs = [] - fmap_gs = [] - for i, d in enumerate(self.discriminators): - y_d_r, fmap_r = d(y) - y_d_g, fmap_g = d(y_hat) - y_d_rs.append(y_d_r) - y_d_gs.append(y_d_g) - fmap_rs.append(fmap_r) - fmap_gs.append(fmap_g) - - return y_d_rs, y_d_gs, fmap_rs, fmap_gs - - - -class SynthesizerTrn(nn.Module): - """ - Synthesizer for Training - """ - - def __init__(self, - n_vocab, - spec_channels, - segment_size, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - n_speakers=0, - gin_channels=0, - use_sdp=True, - **kwargs): - - super().__init__() - self.n_vocab = n_vocab - self.spec_channels = spec_channels - self.inter_channels = inter_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - 
self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.resblock = resblock - self.resblock_kernel_sizes = resblock_kernel_sizes - self.resblock_dilation_sizes = resblock_dilation_sizes - self.upsample_rates = upsample_rates - self.upsample_initial_channel = upsample_initial_channel - self.upsample_kernel_sizes = upsample_kernel_sizes - self.segment_size = segment_size - self.n_speakers = n_speakers - self.gin_channels = gin_channels - - self.use_sdp = use_sdp - - self.enc_p = TextEncoder(n_vocab, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout) - self.dec = Generator(inter_channels, resblock, resblock_kernel_sizes, resblock_dilation_sizes, upsample_rates, upsample_initial_channel, upsample_kernel_sizes, gin_channels=gin_channels) - self.enc_q = PosteriorEncoder(spec_channels, inter_channels, hidden_channels, 5, 1, 16, gin_channels=gin_channels) - self.flow = ResidualCouplingBlock(inter_channels, hidden_channels, 5, 1, 4, gin_channels=gin_channels) - - if use_sdp: - self.dp = StochasticDurationPredictor(hidden_channels, 192, 3, 0.5, 4, gin_channels=gin_channels) - else: - self.dp = DurationPredictor(hidden_channels, 256, 3, 0.5, gin_channels=gin_channels) - - if n_speakers > 1: - self.emb_g = nn.Embedding(n_speakers, gin_channels) - - def forward(self, x, x_lengths, y, y_lengths, sid=None): - - x, m_p, logs_p, x_mask = self.enc_p(x, x_lengths) - if self.n_speakers > 0: - g = self.emb_g(sid).unsqueeze(-1) # [b, h, 1] - else: - g = None - - z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g) - z_p = self.flow(z, y_mask, g=g) - - with torch.no_grad(): - # negative cross-entropy - s_p_sq_r = torch.exp(-2 * logs_p) # [b, d, t] - neg_cent1 = torch.sum(-0.5 * math.log(2 * math.pi) - logs_p, [1], keepdim=True) # [b, 1, t_s] - neg_cent2 = torch.matmul(-0.5 * (z_p ** 2).transpose(1, 2), s_p_sq_r) # [b, t_t, d] x [b, d, t_s] = [b, t_t, t_s] - neg_cent3 = torch.matmul(z_p.transpose(1, 2), (m_p * s_p_sq_r)) # [b, t_t, d] x [b, d, t_s] = [b, t_t, t_s] - neg_cent4 = torch.sum(-0.5 * (m_p ** 2) * s_p_sq_r, [1], keepdim=True) # [b, 1, t_s] - neg_cent = neg_cent1 + neg_cent2 + neg_cent3 + neg_cent4 - - attn_mask = torch.unsqueeze(x_mask, 2) * torch.unsqueeze(y_mask, -1) - attn = monotonic_align.maximum_path(neg_cent, attn_mask.squeeze(1)).unsqueeze(1).detach() - - w = attn.sum(2) - if self.use_sdp: - l_length = self.dp(x, x_mask, w, g=g) - l_length = l_length / torch.sum(x_mask) - else: - logw_ = torch.log(w + 1e-6) * x_mask - logw = self.dp(x, x_mask, g=g) - l_length = torch.sum((logw - logw_)**2, [1,2]) / torch.sum(x_mask) # for averaging - - # expand prior - m_p = torch.matmul(attn.squeeze(1), m_p.transpose(1, 2)).transpose(1, 2) - logs_p = torch.matmul(attn.squeeze(1), logs_p.transpose(1, 2)).transpose(1, 2) - - z_slice, ids_slice = commons.rand_slice_segments(z, y_lengths, self.segment_size) - o = self.dec(z_slice, g=g) - return o, l_length, attn, ids_slice, x_mask, y_mask, (z, z_p, m_p, logs_p, m_q, logs_q) - - def infer(self, x, x_lengths, sid=None, noise_scale=1, length_scale=1, noise_scale_w=1., max_len=None): - x, m_p, logs_p, x_mask = self.enc_p(x, x_lengths) - if self.n_speakers > 0: - g = self.emb_g(sid).unsqueeze(-1) # [b, h, 1] - else: - g = None - - if self.use_sdp: - logw = self.dp(x, x_mask, g=g, reverse=True, noise_scale=noise_scale_w) - else: - logw = self.dp(x, x_mask, g=g) - w = torch.exp(logw) * x_mask * length_scale - w_ceil = torch.ceil(w) - y_lengths = torch.clamp_min(torch.sum(w_ceil, [1, 2]), 1).long() - 
y_mask = torch.unsqueeze(commons.sequence_mask(y_lengths, None), 1).to(x_mask.dtype) - attn_mask = torch.unsqueeze(x_mask, 2) * torch.unsqueeze(y_mask, -1) - attn = commons.generate_path(w_ceil, attn_mask) - - m_p = torch.matmul(attn.squeeze(1), m_p.transpose(1, 2)).transpose(1, 2) # [b, t', t], [b, t, d] -> [b, d, t'] - logs_p = torch.matmul(attn.squeeze(1), logs_p.transpose(1, 2)).transpose(1, 2) # [b, t', t], [b, t, d] -> [b, d, t'] - - z_p = m_p + torch.randn_like(m_p) * torch.exp(logs_p) * noise_scale - z = self.flow(z_p, y_mask, g=g, reverse=True) - o = self.dec((z * y_mask)[:,:,:max_len], g=g) - return o, attn, y_mask, (z, z_p, m_p, logs_p) - - def voice_conversion(self, y, y_lengths, sid_src, sid_tgt): - assert self.n_speakers > 0, "n_speakers have to be larger than 0." - g_src = self.emb_g(sid_src).unsqueeze(-1) - g_tgt = self.emb_g(sid_tgt).unsqueeze(-1) - z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g_src) - z_p = self.flow(z, y_mask, g=g_src) - z_hat = self.flow(z_p, y_mask, g=g_tgt, reverse=True) - o_hat = self.dec(z_hat * y_mask, g=g_tgt) - return o_hat, y_mask, (z, z_p, z_hat) - diff --git a/spaces/Wayben/ChatGPT/custom.css b/spaces/Wayben/ChatGPT/custom.css deleted file mode 100644 index 5143eb138ea2469d8c457c71cb210fd3fb7cbe15..0000000000000000000000000000000000000000 --- a/spaces/Wayben/ChatGPT/custom.css +++ /dev/null @@ -1,162 +0,0 @@ -:root { - --chatbot-color-light: #F3F3F3; - --chatbot-color-dark: #121111; -} - -/* status_display */ -#status_display { - display: flex; - min-height: 2.5em; - align-items: flex-end; - justify-content: flex-end; -} -#status_display p { - font-size: .85em; - font-family: monospace; - color: var(--body-text-color-subdued); -} - -#chuanhu_chatbot, #status_display { - transition: all 0.6s; -} -/* list */ -ol:not(.options), ul:not(.options) { - padding-inline-start: 2em !important; -} - -/* 亮色 */ -#chuanhu_chatbot { - background-color: var(--chatbot-color-light) !important; -} -[data-testid = "bot"] { - background-color: #FFFFFF !important; -} -[data-testid = "user"] { - background-color: #95EC69 !important; -} -/* 对话气泡 */ -[class *= "message"] { - border-radius: var(--radius-xl) !important; - border: none; - padding: var(--spacing-xl) !important; - font-size: var(--text-md) !important; - line-height: var(--line-md) !important; - min-height: calc(var(--text-md)*var(--line-md) + 2*var(--spacing-xl)); - min-width: calc(var(--text-md)*var(--line-md) + 2*var(--spacing-xl)); -} -[data-testid = "bot"] { - max-width: 85%; - border-bottom-left-radius: 0 !important; -} -[data-testid = "user"] { - max-width: 85%; - width: auto !important; - border-bottom-right-radius: 0 !important; -} -/* 表格 */ -table { - margin: 1em 0; - border-collapse: collapse; - empty-cells: show; -} -td,th { - border: 1.2px solid var(--border-color-primary) !important; - padding: 0.2em; -} -thead { - background-color: rgba(175,184,193,0.2); -} -thead th { - padding: .5em .2em; -} -/* 行内代码 */ -code { - display: inline; - white-space: break-spaces; - border-radius: 6px; - margin: 0 2px 0 2px; - padding: .2em .4em .1em .4em; - background-color: rgba(175,184,193,0.2); -} -/* 代码块 */ -pre code { - display: block; - overflow: auto; - white-space: pre; - background-color: hsla(0, 0%, 0%, 80%)!important; - border-radius: 10px; - padding: 1.4em 1.2em 0em 1.4em; - margin: 1.2em 2em 1.2em 0.5em; - color: #FFF; - box-shadow: 6px 6px 16px hsla(0, 0%, 0%, 0.2); -} -/* 代码高亮样式 */ -.highlight .hll { background-color: #49483e } -.highlight .c { color: #75715e } /* Comment */ -.highlight .err 
{ color: #960050; background-color: #1e0010 } /* Error */ -.highlight .k { color: #66d9ef } /* Keyword */ -.highlight .l { color: #ae81ff } /* Literal */ -.highlight .n { color: #f8f8f2 } /* Name */ -.highlight .o { color: #f92672 } /* Operator */ -.highlight .p { color: #f8f8f2 } /* Punctuation */ -.highlight .ch { color: #75715e } /* Comment.Hashbang */ -.highlight .cm { color: #75715e } /* Comment.Multiline */ -.highlight .cp { color: #75715e } /* Comment.Preproc */ -.highlight .cpf { color: #75715e } /* Comment.PreprocFile */ -.highlight .c1 { color: #75715e } /* Comment.Single */ -.highlight .cs { color: #75715e } /* Comment.Special */ -.highlight .gd { color: #f92672 } /* Generic.Deleted */ -.highlight .ge { font-style: italic } /* Generic.Emph */ -.highlight .gi { color: #a6e22e } /* Generic.Inserted */ -.highlight .gs { font-weight: bold } /* Generic.Strong */ -.highlight .gu { color: #75715e } /* Generic.Subheading */ -.highlight .kc { color: #66d9ef } /* Keyword.Constant */ -.highlight .kd { color: #66d9ef } /* Keyword.Declaration */ -.highlight .kn { color: #f92672 } /* Keyword.Namespace */ -.highlight .kp { color: #66d9ef } /* Keyword.Pseudo */ -.highlight .kr { color: #66d9ef } /* Keyword.Reserved */ -.highlight .kt { color: #66d9ef } /* Keyword.Type */ -.highlight .ld { color: #e6db74 } /* Literal.Date */ -.highlight .m { color: #ae81ff } /* Literal.Number */ -.highlight .s { color: #e6db74 } /* Literal.String */ -.highlight .na { color: #a6e22e } /* Name.Attribute */ -.highlight .nb { color: #f8f8f2 } /* Name.Builtin */ -.highlight .nc { color: #a6e22e } /* Name.Class */ -.highlight .no { color: #66d9ef } /* Name.Constant */ -.highlight .nd { color: #a6e22e } /* Name.Decorator */ -.highlight .ni { color: #f8f8f2 } /* Name.Entity */ -.highlight .ne { color: #a6e22e } /* Name.Exception */ -.highlight .nf { color: #a6e22e } /* Name.Function */ -.highlight .nl { color: #f8f8f2 } /* Name.Label */ -.highlight .nn { color: #f8f8f2 } /* Name.Namespace */ -.highlight .nx { color: #a6e22e } /* Name.Other */ -.highlight .py { color: #f8f8f2 } /* Name.Property */ -.highlight .nt { color: #f92672 } /* Name.Tag */ -.highlight .nv { color: #f8f8f2 } /* Name.Variable */ -.highlight .ow { color: #f92672 } /* Operator.Word */ -.highlight .w { color: #f8f8f2 } /* Text.Whitespace */ -.highlight .mb { color: #ae81ff } /* Literal.Number.Bin */ -.highlight .mf { color: #ae81ff } /* Literal.Number.Float */ -.highlight .mh { color: #ae81ff } /* Literal.Number.Hex */ -.highlight .mi { color: #ae81ff } /* Literal.Number.Integer */ -.highlight .mo { color: #ae81ff } /* Literal.Number.Oct */ -.highlight .sa { color: #e6db74 } /* Literal.String.Affix */ -.highlight .sb { color: #e6db74 } /* Literal.String.Backtick */ -.highlight .sc { color: #e6db74 } /* Literal.String.Char */ -.highlight .dl { color: #e6db74 } /* Literal.String.Delimiter */ -.highlight .sd { color: #e6db74 } /* Literal.String.Doc */ -.highlight .s2 { color: #e6db74 } /* Literal.String.Double */ -.highlight .se { color: #ae81ff } /* Literal.String.Escape */ -.highlight .sh { color: #e6db74 } /* Literal.String.Heredoc */ -.highlight .si { color: #e6db74 } /* Literal.String.Interpol */ -.highlight .sx { color: #e6db74 } /* Literal.String.Other */ -.highlight .sr { color: #e6db74 } /* Literal.String.Regex */ -.highlight .s1 { color: #e6db74 } /* Literal.String.Single */ -.highlight .ss { color: #e6db74 } /* Literal.String.Symbol */ -.highlight .bp { color: #f8f8f2 } /* Name.Builtin.Pseudo */ -.highlight .fm { color: #a6e22e } /* 
Name.Function.Magic */ -.highlight .vc { color: #f8f8f2 } /* Name.Variable.Class */ -.highlight .vg { color: #f8f8f2 } /* Name.Variable.Global */ -.highlight .vi { color: #f8f8f2 } /* Name.Variable.Instance */ -.highlight .vm { color: #f8f8f2 } /* Name.Variable.Magic */ -.highlight .il { color: #ae81ff } /* Literal.Number.Integer.Long */ diff --git a/spaces/WhyLIM/ChatGPT-academic/check_proxy.py b/spaces/WhyLIM/ChatGPT-academic/check_proxy.py deleted file mode 100644 index d6263ad981272b0a798bf278a9e83b99e6928711..0000000000000000000000000000000000000000 --- a/spaces/WhyLIM/ChatGPT-academic/check_proxy.py +++ /dev/null @@ -1,22 +0,0 @@ - -def check_proxy(proxies): - import requests - proxies_https = proxies['https'] if proxies is not None else '无' - try: - response = requests.get("https://ipapi.co/json/", proxies=proxies, timeout=4) - data = response.json() - print(f'查询代理的地理位置,返回的结果是{data}') - country = data['country_name'] - result = f"代理配置 {proxies_https}, 代理所在地:{country}" - print(result) - return result - except: - result = f"代理配置 {proxies_https}, 代理所在地查询超时,代理可能无效" - print(result) - return result - - -if __name__ == '__main__': - try: from config_private import proxies # 放自己的秘密如API和代理网址 os.path.exists('config_private.py') - except: from config import proxies - check_proxy(proxies) \ No newline at end of file diff --git a/spaces/Wrathless/Dkrotzer-MusicalMagic/audiocraft/modules/transformer.py b/spaces/Wrathless/Dkrotzer-MusicalMagic/audiocraft/modules/transformer.py deleted file mode 100644 index be6a5e420fc53eebe9947aa5dde7bfebd3cb4dad..0000000000000000000000000000000000000000 --- a/spaces/Wrathless/Dkrotzer-MusicalMagic/audiocraft/modules/transformer.py +++ /dev/null @@ -1,704 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -""" -Transformer model, with streaming support, xformer attention support -and easy causal attention with a potentially finite receptive field. - -See `StreamingTransformer` for more information. - -Unlike regular PyTorch Transformer, we make the hard choice that batches are first. -""" - -import typing as tp - -from einops import rearrange -import torch -import torch.nn as nn -from torch.nn import functional as F -from torch.utils.checkpoint import checkpoint as torch_checkpoint -from xformers import ops - -from .rope import RotaryEmbedding -from .streaming import StreamingModule - - -def _is_profiled() -> bool: - # Return true if we are currently running with a xformers profiler activated. - try: - from xformers.profiler import profiler - except ImportError: - return False - return profiler._Profiler._CURRENT_PROFILER is not None - - -def create_norm_fn(norm_type: str, dim: int, **kwargs) -> nn.Module: - """Create normalization module for transformer encoder layer. - - Args: - norm_type (str): Normalization method. - dim (int): Dimension of the normalized layer. - **kwargs (dict): Additional parameters for normalization layer. - Returns: - nn.Module: Normalization module. - """ - if norm_type == 'layer_norm': - return nn.LayerNorm(dim, eps=1e-5, **kwargs) - else: - raise ValueError(f"Unknown norm type: {norm_type}") - - -def create_sin_embedding(positions: torch.Tensor, dim: int, max_period: float = 10000, - dtype: torch.dtype = torch.float32) -> torch.Tensor: - """Create sinusoidal positional embedding, with shape `[B, T, C]`. - - Args: - positions (torch.Tensor): LongTensor of positions. 
- dim (int): Dimension of the embedding. - max_period (float): Maximum period of the cosine/sine functions. - dtype (torch.dtype or str): dtype to use to generate the embedding. - Returns: - torch.Tensor: Sinusoidal positional embedding. - """ - # We aim for BTC format - assert dim % 2 == 0 - half_dim = dim // 2 - positions = positions.to(dtype) - adim = torch.arange(half_dim, device=positions.device, dtype=dtype).view(1, 1, -1) - max_period_tensor = torch.full([], max_period, device=positions.device, dtype=dtype) # avoid sync point - phase = positions / (max_period_tensor ** (adim / (half_dim - 1))) - return torch.cat([torch.cos(phase), torch.sin(phase)], dim=-1) - - -def expand_repeated_kv(x: torch.Tensor, n_rep: int) -> torch.Tensor: - """torch.repeat_interleave(x, dim=2, repeats=n_rep) from xlformers""" - bs, slen, n_kv_heads, head_dim = x.shape - if n_rep == 1: - return x - return ( - x[:, :, :, None, :] - .expand(bs, slen, n_kv_heads, n_rep, head_dim) - .reshape(bs, slen, n_kv_heads * n_rep, head_dim) - ) - - -class LayerScale(nn.Module): - """Layer scale from [Touvron et al 2021] (https://arxiv.org/pdf/2103.17239.pdf). - This rescales diagonaly the residual outputs close to 0, with a learnt scale. - - Args: - channels (int): Number of channels. - init (float): Initial scale. - channel_last (bool): If True, expect `[*, C]` shaped tensors, otherwise, `[*, C, T]`. - device (torch.device or None): Device on which to initialize the module. - dtype (torch.dtype or None): dtype to use to initialize the module. - """ - def __init__(self, channels: int, init: float = 1e-4, channel_last: bool = True, - device=None, dtype=None): - super().__init__() - self.channel_last = channel_last - self.scale = nn.Parameter( - torch.full((channels,), init, - requires_grad=True, device=device, dtype=dtype)) - - def forward(self, x: torch.Tensor): - if self.channel_last: - return self.scale * x - else: - return self.scale[:, None] * x - - -class StreamingMultiheadAttention(StreamingModule): - """Similar to `nn.MultiheadAttention` but with support for streaming, causal evaluation. - - Args: - embed_dim (int): Dimension to project to. - num_heads (int): Number of heads. - dropout (float): Dropout level. - bias (bool): Use bias in projections. - causal (bool): Causal mask applied automatically. - past_context (int or None): Receptive field for the causal mask, infinite if None. - custom (bool): Use custom MHA implementation, for testing / benchmarking. - memory_efficient (bool): Use xformers based memory efficient attention. - attention_as_float32 (bool): Perform the attention as float32 - (especially important with memory_efficient as autocast won't do this automatically). - rope (`RotaryEmbedding` or None): Rope embedding to use. - cross_attention: Should be true when used as a cross attention. - All keys and values must be available at once, streaming is only for the queries. - Cannot be used with `causal` or `rope` (as it wouldn't make sens to - intepret the time steps in the keys relative to those in the queries). - safe_streaming (bool): Bug fix, will go away with xformers update. - qk_layer_norm (bool): Layer normalization applied to queries and keys before dot product. - kv_repeat (int): If > 1, will repeat keys and queries multiple times (need to divide num_heads). - This will lead to faster decoding time on A100 or other GPUs with tensorcore. - device (torch.device or None): Sevice on which to initialize. - dtype (torch.dtype or None): dtype to use. 
- """ - def __init__(self, embed_dim: int, num_heads: int, dropout: float = 0.0, bias: bool = True, - causal: bool = False, past_context: tp.Optional[int] = None, custom: bool = False, - memory_efficient: bool = False, attention_as_float32: bool = False, - rope: tp.Optional[RotaryEmbedding] = None, cross_attention: bool = False, - safe_streaming: bool = True, qk_layer_norm: bool = False, kv_repeat: int = 1, - device=None, dtype=None): - super().__init__() - factory_kwargs = {'device': device, 'dtype': dtype} - if past_context is not None: - assert causal - - self.embed_dim = embed_dim - self.causal = causal - self.past_context = past_context - self.memory_efficient = memory_efficient - self.attention_as_float32 = attention_as_float32 - self.rope = rope - self.cross_attention = cross_attention - self.safe_streaming = safe_streaming - self.num_heads = num_heads - self.dropout = dropout - self.kv_repeat = kv_repeat - if cross_attention: - assert not causal, "Causal cannot work with cross attention." - assert rope is None, "Rope cannot work with cross attention." - - if memory_efficient: - _verify_xformers_memory_efficient_compat() - - self.custom = _is_custom(custom, memory_efficient) - if self.custom: - out_dim = embed_dim - assert num_heads % kv_repeat == 0 - assert not cross_attention or kv_repeat == 1 - num_kv = num_heads // kv_repeat - kv_dim = (embed_dim // num_heads) * num_kv - out_dim += 2 * kv_dim - in_proj = nn.Linear(embed_dim, out_dim, bias=bias, **factory_kwargs) - # We try to follow the default PyTorch MHA convention, to easily compare results. - self.in_proj_weight = in_proj.weight - self.in_proj_bias = in_proj.bias - if bias: - self.in_proj_bias.data.zero_() # Following Pytorch convention - self.out_proj = nn.Linear(embed_dim, embed_dim, bias=bias, **factory_kwargs) - if bias: - self.out_proj.bias.data.zero_() - else: - assert not qk_layer_norm - assert kv_repeat == 1 - self.mha = nn.MultiheadAttention( - embed_dim, num_heads, dropout=dropout, bias=bias, batch_first=True, - **factory_kwargs) - self.qk_layer_norm = qk_layer_norm - if qk_layer_norm: - assert self.custom - assert kv_repeat == 1 - ln_dim = embed_dim - self.q_layer_norm = nn.LayerNorm(ln_dim) - self.k_layer_norm = nn.LayerNorm(ln_dim) - - def _load_from_state_dict(self, state_dict, prefix, *args, **kwargs): - if not self.custom: - # Support compat with regular MHA - keys = [n for n, _ in self.mha.named_parameters()] - for key in keys: - if prefix + key in state_dict: - state_dict[prefix + "mha." + key] = state_dict.pop(prefix + key) - super()._load_from_state_dict(state_dict, prefix, *args, **kwargs) - - def _get_mask(self, current_steps: int, device: torch.device, dtype: torch.dtype): - # Return a causal mask, accounting for potentially stored past keys/values - # We actually return a bias for the attention score, as this has the same - # convention both in the builtin MHA in Pytorch, and Xformers functions. - if self.memory_efficient: - from xformers.ops import LowerTriangularMask - if current_steps == 1: - # If we only have one step, then we do not need a mask. 
- return None - elif 'past_keys' in self._streaming_state: - raise RuntimeError('Not supported at the moment') - else: - # Then we can safely use a lower triangular mask - return LowerTriangularMask() - if self._streaming_state: - past_keys = self._streaming_state['past_keys'] - past_steps = past_keys.shape[1] - else: - past_steps = 0 - - queries_pos = torch.arange( - past_steps, current_steps + past_steps, device=device).view(-1, 1) - keys_pos = torch.arange(past_steps + current_steps, device=device).view(1, -1) - delta = queries_pos - keys_pos - valid = delta >= 0 - if self.past_context is not None: - valid &= (delta <= self.past_context) - return torch.where( - valid, - torch.zeros([], device=device, dtype=dtype), - torch.full([], float('-inf'), device=device, dtype=dtype)) - - def _complete_kv(self, k, v): - if self.cross_attention: - # With cross attention we assume all keys and values - # are already available, and streaming is with respect - # to the queries only. - return k, v - # Complete the key/value pair using the streaming state. - if self._streaming_state: - pk = self._streaming_state['past_keys'] - nk = torch.cat([pk, k], dim=1) - if v is k: - nv = nk - else: - pv = self._streaming_state['past_values'] - nv = torch.cat([pv, v], dim=1) - else: - nk = k - nv = v - - assert nk.shape[1] == nv.shape[1] - offset = 0 - if self.past_context is not None: - offset = max(0, nk.shape[1] - self.past_context) - if self._is_streaming: - self._streaming_state['past_keys'] = nk[:, offset:] - if v is not k: - self._streaming_state['past_values'] = nv[:, offset:] - if 'offset' in self._streaming_state: - self._streaming_state['offset'] += offset - else: - self._streaming_state['offset'] = torch.tensor(0) - return nk, nv - - def _apply_rope(self, query: torch.Tensor, key: torch.Tensor): - # Apply rope embeddings to query and key tensors. - assert self.rope is not None - if 'past_keys' in self._streaming_state: - past_keys_offset = self._streaming_state['past_keys'].shape[1] - else: - past_keys_offset = 0 - if 'offset' in self._streaming_state: - past_context_offset = int(self._streaming_state['offset'].item()) - else: - past_context_offset = 0 - streaming_offset = past_context_offset + past_keys_offset - return self.rope.rotate_qk(query, key, start=streaming_offset) - - def forward(self, query: torch.Tensor, key: torch.Tensor, value: torch.Tensor, - key_padding_mask=None, need_weights=False, attn_mask=None, - average_attn_weights=True, is_causal=False): - assert attn_mask is None - assert not is_causal, ("new param added in torch 2.0.1 not supported, " - "use the causal args in the constructor.") - - dtype = query.dtype - if self._is_streaming: - assert self.causal or self.cross_attention, \ - "Streaming only available for causal or cross attention" - - if self.causal: - # At the moment we specialize only for the self-attention case. - assert query.shape[1] == key.shape[1], "Causal only for same length query / key / value" - assert value.shape[1] == key.shape[1], "Causal only for same length query / key / value" - attn_mask = self._get_mask(query.shape[1], query.device, query.dtype) - - if self.custom: - # custom implementation - assert need_weights is False - assert key_padding_mask is None - if self.cross_attention: - # Different queries, keys, values, we have to spit manually the weights - # before applying the linear. 
- dim = self.in_proj_weight.shape[0] // 3 - if self.in_proj_bias is None: - bias_q, bias_k, bias_v = None, None, None - else: - bias_q = self.in_proj_bias[:dim] - bias_k = self.in_proj_bias[dim: 2 * dim] - bias_v = self.in_proj_bias[2 * dim:] - q = nn.functional.linear(query, self.in_proj_weight[:dim], bias_q) - # todo: when streaming, we could actually save k, v and check the shape actually match. - k = nn.functional.linear(key, self.in_proj_weight[dim: 2 * dim], bias_k) - v = nn.functional.linear(value, self.in_proj_weight[2 * dim:], bias_v) - if self.qk_layer_norm is True: - q = self.q_layer_norm(q) - k = self.k_layer_norm(k) - # q, k, v = [rearrange(x, "b t (h d) -> (b h) t d", h=self.num_heads) for x in [q, k, v]] - q, k, v = [rearrange(x, "b t (h d) -> b t h d", h=self.num_heads) for x in [q, k, v]] - else: - if not _is_profiled(): - # profiling breaks that propertysomehow. - assert query is key, "specialized implementation" - assert value is key, "specialized implementation" - projected = nn.functional.linear(query, self.in_proj_weight, self.in_proj_bias) - if self.kv_repeat == 1: - packed = rearrange(projected, "b t (p h d) -> b t p h d", p=3, h=self.num_heads) - q, k, v = ops.unbind(packed, dim=2) - else: - embed_dim = self.embed_dim - per_head_dim = (embed_dim // self.num_heads) - kv_heads = self.num_heads // self.kv_repeat - q = projected[:, :, :embed_dim] - start = embed_dim - end = start + per_head_dim * kv_heads - k = projected[:, :, start: end] - v = projected[:, :, end:] - q = rearrange(q, "b t (h d) -> b t h d", h=self.num_heads) - k = rearrange(k, "b t (h d) -> b t h d", h=kv_heads) - v = rearrange(v, "b t (h d) -> b t h d", h=kv_heads) - - if self.qk_layer_norm is True: - assert self.kv_repeat == 1 - q, k = [rearrange(x, "b t h d -> b t (h d)") for x in [q, k]] - q = self.q_layer_norm(q) - k = self.k_layer_norm(k) - q, k = [rearrange(x, "b t (h d) -> b t h d", h=self.num_heads) for x in [q, k]] - if self.rope: - q, k = self._apply_rope(q, k) - k, v = self._complete_kv(k, v) - if self.kv_repeat > 1: - k = expand_repeated_kv(k, self.kv_repeat) - v = expand_repeated_kv(v, self.kv_repeat) - if self.attention_as_float32: - q, k, v = [x.float() for x in [q, k, v]] - if self.memory_efficient: - p = self.dropout if self.training else 0 - x = ops.memory_efficient_attention(q, k, v, attn_mask, p=p) - else: - # We include the dot product as float32, for consistency - # with the other implementations that include that step - # as part of the attention. Note that when using `autocast`, - # the einsums would be done as bfloat16, but the softmax - # would be done as bfloat16, so `attention_as_float32` will - # extend a bit the range of operations done in float32, - # although this should make no difference. 
- q = q / q.shape[-1] ** 0.5 - if self._is_streaming and self.safe_streaming and q.device.type == 'cuda': - with torch.autocast(device_type=q.device.type, dtype=torch.float32): - pre_w = torch.einsum("bqhc,bkhc->bhqk", q, k) - else: - pre_w = torch.einsum("bqhc,bkhc->bhqk", q, k) - if attn_mask is not None: - pre_w = pre_w + attn_mask - w = torch.softmax(pre_w, dim=-1) - w = F.dropout(w, self.dropout, training=self.training).to(v) - x = torch.einsum("bhqk,bkhc->bqhc", w, v) - x = x.to(dtype) - x = rearrange(x, "b t h d -> b t (h d)", h=self.num_heads) - x = self.out_proj(x) - else: - key, value = self._complete_kv(key, value) - if self.attention_as_float32: - query, key, value = [x.float() for x in [query, key, value]] - x, _ = self.mha( - query, key, value, key_padding_mask, - need_weights, attn_mask, average_attn_weights) - x = x.to(dtype) - - return x, None - - -class StreamingTransformerLayer(nn.TransformerEncoderLayer): - """TransformerLayer with Streaming / Causal support. - This also integrates cross_attention, when passing `cross_attention=True`, - rather than having two separate classes like in PyTorch. - - Args: - d_model (int): Dimension of the data. - num_heads (int): Number of heads. - dim_feedforward (int): Intermediate dimension of FF module. - dropout (float): Dropout both for MHA and FF. - bias_ff (bool): Use bias for FF. - bias_attn (bool): Use bias for MHA. - causal (bool): Causal mask applied automatically. - past_context (int or None): Receptive field for the causal mask, infinite if None. - custom (bool): Use custom MHA implementation, for testing / benchmarking. - memory_efficient (bool): Use xformers based memory efficient attention. - attention_as_float32 (bool): Perform the attention as float32 - (especially important with memory_efficient as autocast won't do this automatically). - qk_layer_norm (bool): Layer normalization applied to queries and keys before dot product in attention. - qk_layer_norm_cross (bool): Same for the cross attention. - cross_attention (bool): If True, expect to get secondary input for cross-attention. - Cross attention will use the default MHA, as it typically won't require - special treatment. - layer_scale (float or None): If not None, LayerScale will be used with - the given value as initial scale. - rope (`RotaryEmbedding` or None): Rope embedding to use. - attention_dropout (float or None): If not None, separate the value of the dimension dropout - in FFN and of the attention dropout. - kv_repeat (int): If > 1, will repeat keys and queries multiple times (need to divide num_heads). - This will lead to faster decoding time on A100 or other GPUs with tensorcore. - device (torch.device or None): Device on which to initialize. - dtype (torch.dtype or None): dtype to use. - **kwargs: See `nn.TransformerEncoderLayer`. 
- """ - def __init__(self, d_model: int, num_heads: int, dim_feedforward: int = 2048, dropout: float = 0.1, - bias_ff: bool = True, bias_attn: bool = True, causal: bool = False, - past_context: tp.Optional[int] = None, custom: bool = False, - memory_efficient: bool = False, attention_as_float32: bool = False, - qk_layer_norm: bool = False, qk_layer_norm_cross: bool = False, - cross_attention: bool = False, layer_scale: tp.Optional[float] = None, - rope: tp.Optional[RotaryEmbedding] = None, attention_dropout: tp.Optional[float] = None, - kv_repeat: int = 1, norm: str = 'layer_norm', device=None, dtype=None, **kwargs): - super().__init__(d_model, num_heads, dim_feedforward, dropout, - device=device, dtype=dtype, batch_first=True, **kwargs) - factory_kwargs = {'device': device, 'dtype': dtype} - # Redefine self_attn to our streaming multi-head attention - attn_kwargs: tp.Dict[str, tp.Any] = { - 'embed_dim': d_model, - 'num_heads': num_heads, - 'dropout': dropout if attention_dropout is None else attention_dropout, - 'bias': bias_attn, - 'custom': custom, - 'memory_efficient': memory_efficient, - 'attention_as_float32': attention_as_float32, - } - self.self_attn: StreamingMultiheadAttention = StreamingMultiheadAttention( - causal=causal, past_context=past_context, rope=rope, qk_layer_norm=qk_layer_norm, - kv_repeat=kv_repeat, **attn_kwargs, **factory_kwargs) # type: ignore - # Redefine feedforward layers to expose bias parameter - self.linear1 = nn.Linear(d_model, dim_feedforward, bias=bias_ff, **factory_kwargs) - self.linear2 = nn.Linear(dim_feedforward, d_model, bias=bias_ff, **factory_kwargs) - - self.layer_scale_1: nn.Module - self.layer_scale_2: nn.Module - if layer_scale is None: - self.layer_scale_1 = nn.Identity() - self.layer_scale_2 = nn.Identity() - else: - self.layer_scale_1 = LayerScale(d_model, layer_scale, **factory_kwargs) - self.layer_scale_2 = LayerScale(d_model, layer_scale, **factory_kwargs) - - self.cross_attention: tp.Optional[nn.Module] = None - if cross_attention: - self.cross_attention = StreamingMultiheadAttention( - cross_attention=True, qk_layer_norm=qk_layer_norm_cross, - **attn_kwargs, **factory_kwargs) - # Norm and dropout - self.dropout_cross = nn.Dropout(dropout) - # eps value matching that used in PyTorch reference implementation. - self.norm_cross = nn.LayerNorm(d_model, eps=1e-5, **factory_kwargs) - self.layer_scale_cross: nn.Module - if layer_scale is None: - self.layer_scale_cross = nn.Identity() - else: - self.layer_scale_cross = LayerScale(d_model, layer_scale, **factory_kwargs) - self.norm1 = create_norm_fn(norm, d_model, **factory_kwargs) # type: ignore - self.norm2 = create_norm_fn(norm, d_model, **factory_kwargs) # type: ignore - - def _cross_attention_block(self, src: torch.Tensor, - cross_attention_src: torch.Tensor) -> torch.Tensor: - assert self.cross_attention is not None - # queries are from src, keys and values from cross_attention_src. 
- x = self.cross_attention( - src, cross_attention_src, cross_attention_src, need_weights=False)[0] - return self.dropout_cross(x) # type: ignore - - def forward(self, src: torch.Tensor, src_mask: tp.Optional[torch.Tensor] = None, # type: ignore - src_key_padding_mask: tp.Optional[torch.Tensor] = None, - cross_attention_src: tp.Optional[torch.Tensor] = None): - if self.cross_attention is None: - assert cross_attention_src is None - else: - assert cross_attention_src is not None - x = src - if self.norm_first: - x = x + self.layer_scale_1( - self._sa_block(self.norm1(x), src_mask, src_key_padding_mask)) - if cross_attention_src is not None: - x = x + self.layer_scale_cross( - self._cross_attention_block( - self.norm_cross(x), cross_attention_src)) - x = x + self.layer_scale_2(self._ff_block(self.norm2(x))) - else: - x = self.norm1(x + self.layer_scale_1( - self._sa_block(x, src_mask, src_key_padding_mask))) - if cross_attention_src is not None: - x = self.norm_cross( - x + self.layer_scale_cross( - self._cross_attention_block(src, cross_attention_src))) - x = self.norm2(x + self.layer_scale_2(self._ff_block(x))) - return x - - -class StreamingTransformer(StreamingModule): - """Transformer with Streaming / Causal support. - - Args: - d_model (int): Dimension of the data. - num_heads (int): Number of heads. - dim_feedforward (int): Intermediate dimension of FF module. - dropout (float): Dropout both for MHA and FF. - bias_ff (bool): Use bias for FF. - bias_attn (bool): Use bias for MHA. - causal (bool): Causal mask applied automatically. - past_context (int or None): Receptive field for the causal mask, infinite if None. - custom (bool): Use custom MHA implementation, for testing / benchmarking. - memory_efficient (bool): Use xformers based memory efficient attention. - attention_as_float32 (bool): Perform the attention as float32 - (especially important with memory_efficient as autocast won't do this automatically). - cross_attention (bool): If True, expect to get secondary input for cross-attention. - layer_scale (float or None): If not None, LayerScale will be used - with the given value as initial scale. - positional_embedding (str): Positional embedding strategy (sin, rope, or sin_rope). - max_period (float): Maximum period of the time embedding. - positional_scale (float): Scale of positional embedding, set to 0 to deactivate. - xpos (bool): Apply xpos exponential decay to positional embedding (rope only). - lr (float or None): learning rate override through the `make_optim_group` API. - weight_decay (float or None): Weight_decay override through the `make_optim_group` API. - layer_class: (subclass of `StreamingTransformerLayer): class to use - to initialize the layers, allowing further customization outside of Audiocraft. - checkpointing (str): Checkpointing strategy to reduce memory usage. - No checkpointing if set to 'none'. Per layer checkpointing using PyTorch - if set to 'torch' (entire layer checkpointed, i.e. linears are evaluated twice, - minimal memory usage, but maximal runtime). Finally, `xformers_default` provide - a policy for opting-out some operations of the checkpointing like - linear layers and attention, providing a middle ground between speed and memory. - device (torch.device or None): Device on which to initialize. - dtype (torch.dtype or None): dtype to use. - **kwargs: See `nn.TransformerEncoderLayer`. 
- """ - def __init__(self, d_model: int, num_heads: int, num_layers: int, dim_feedforward: int = 2048, - dropout: float = 0.1, bias_ff: bool = True, bias_attn: bool = True, - causal: bool = False, past_context: tp.Optional[int] = None, - custom: bool = False, memory_efficient: bool = False, attention_as_float32: bool = False, - cross_attention: bool = False, layer_scale: tp.Optional[float] = None, - positional_embedding: str = 'sin', max_period: float = 10_000, positional_scale: float = 1., - xpos: bool = False, lr: tp.Optional[float] = None, weight_decay: tp.Optional[float] = None, - layer_class: tp.Type[StreamingTransformerLayer] = StreamingTransformerLayer, - checkpointing: str = 'none', device=None, dtype=None, **kwargs): - super().__init__() - assert d_model % num_heads == 0 - - self.positional_embedding = positional_embedding - self.max_period = max_period - self.positional_scale = positional_scale - self.weight_decay = weight_decay - self.lr = lr - - assert positional_embedding in ['sin', 'rope', 'sin_rope'] - self.rope: tp.Optional[RotaryEmbedding] = None - if self.positional_embedding in ['rope', 'sin_rope']: - assert _is_custom(custom, memory_efficient) - self.rope = RotaryEmbedding(d_model // num_heads, max_period=max_period, - xpos=xpos, scale=positional_scale, device=device) - - self.checkpointing = checkpointing - - assert checkpointing in ['none', 'torch', 'xformers_default', 'xformers_mm'] - if self.checkpointing.startswith('xformers'): - _verify_xformers_internal_compat() - - self.layers = nn.ModuleList() - for idx in range(num_layers): - self.layers.append( - layer_class( - d_model=d_model, num_heads=num_heads, dim_feedforward=dim_feedforward, - dropout=dropout, bias_ff=bias_ff, bias_attn=bias_attn, - causal=causal, past_context=past_context, custom=custom, - memory_efficient=memory_efficient, attention_as_float32=attention_as_float32, - cross_attention=cross_attention, layer_scale=layer_scale, rope=self.rope, - device=device, dtype=dtype, **kwargs)) - - if self.checkpointing != 'none': - for layer in self.layers: - # see audiocraft/optim/fsdp.py, magic signal to indicate this requires fixing the - # backward hook inside of FSDP... - layer._magma_checkpointed = True # type: ignore - assert layer.layer_drop == 0., "Need further checking" # type: ignore - - def _apply_layer(self, layer, *args, **kwargs): - method = self.checkpointing - if method == 'none': - return layer(*args, **kwargs) - elif method == 'torch': - return torch_checkpoint(layer, *args, use_reentrant=False, **kwargs) - elif method.startswith('xformers'): - from xformers.checkpoint_fairinternal import checkpoint, _get_default_policy - if method == 'xformers_default': - # those operations will be saved, and not recomputed. - # According to Francisco we can get smarter policies but this is a good start. - allow_list = [ - "xformers.efficient_attention_forward_cutlass.default", - "xformers_flash.flash_fwd.default", - "aten.addmm.default", - "aten.mm.default", - ] - elif method == 'xformers_mm': - # those operations will be saved, and not recomputed. - # According to Francisco we can get smarter policies but this is a good start. 
- allow_list = [ - "aten.addmm.default", - "aten.mm.default", - ] - else: - raise ValueError(f"xformers checkpointing xformers policy {method} is not known.") - policy_fn = _get_default_policy(allow_list) - return checkpoint(layer, *args, policy_fn=policy_fn, **kwargs) - else: - raise ValueError(f"Checkpointing method {method} is unknown.") - - def forward(self, x: torch.Tensor, *args, **kwargs): - B, T, C = x.shape - - if 'offsets' in self._streaming_state: - offsets = self._streaming_state['offsets'] - else: - offsets = torch.zeros(B, dtype=torch.long, device=x.device) - - if self.positional_embedding in ['sin', 'sin_rope']: - positions = torch.arange(T, device=x.device).view(1, -1, 1) - positions = positions + offsets.view(-1, 1, 1) - pos_emb = create_sin_embedding(positions, C, max_period=self.max_period, dtype=x.dtype) - x = x + self.positional_scale * pos_emb - - for layer in self.layers: - x = self._apply_layer(layer, x, *args, **kwargs) - - if self._is_streaming: - self._streaming_state['offsets'] = offsets + T - - return x - - def make_optim_group(self): - group = {"params": list(self.parameters())} - if self.lr is not None: - group["lr"] = self.lr - if self.weight_decay is not None: - group["weight_decay"] = self.weight_decay - return group - - -# special attention attention related function - -def _verify_xformers_memory_efficient_compat(): - try: - from xformers.ops import memory_efficient_attention, LowerTriangularMask # noqa - except ImportError: - raise ImportError( - "xformers is not installed. Please install it and try again.\n" - "To install on AWS and Azure, run \n" - "FORCE_CUDA=1 TORCH_CUDA_ARCH_LIST='8.0'\\\n" - "pip install -U git+https://git@github.com/fairinternal/xformers.git#egg=xformers\n" - "To install on FAIR Cluster, run \n" - "FORCE_CUDA=1 TORCH_CUDA_ARCH_LIST='6.0;7.0'\\\n" - "pip install -U git+https://git@github.com/fairinternal/xformers.git#egg=xformers\n") - - -def _verify_xformers_internal_compat(): - try: - from xformers.checkpoint_fairinternal import checkpoint, _get_default_policy # noqa - except ImportError: - raise ImportError( - "Francisco's fairinternal xformers is not installed. 
Please install it and try again.\n" - "To install on AWS and Azure, run \n" - "FORCE_CUDA=1 TORCH_CUDA_ARCH_LIST='8.0'\\\n" - "pip install -U git+https://git@github.com/fairinternal/xformers.git#egg=xformers\n" - "To install on FAIR Cluster, run \n" - "FORCE_CUDA=1 TORCH_CUDA_ARCH_LIST='6.0;7.0'\\\n" - "pip install -U git+https://git@github.com/fairinternal/xformers.git#egg=xformers\n") - - -def _is_custom(custom: bool, memory_efficient: bool): - return custom or memory_efficient diff --git a/spaces/XFcontinue/bingo/Dockerfile b/spaces/XFcontinue/bingo/Dockerfile deleted file mode 100644 index c677b05b75f7e4b2beee8c97fb47957a0861a83e..0000000000000000000000000000000000000000 --- a/spaces/XFcontinue/bingo/Dockerfile +++ /dev/null @@ -1,7 +0,0 @@ -FROM weaigc/bingo:latest - -ARG DEBIAN_FRONTEND=noninteractive - -ENV BING_HEADER "" - -CMD npm start diff --git a/spaces/Xhaheen/stable-diffusion-depth2img-test/Dockerfile b/spaces/Xhaheen/stable-diffusion-depth2img-test/Dockerfile deleted file mode 100644 index 520ed0021f743919019b6f16cf4d4a13766eefca..0000000000000000000000000000000000000000 --- a/spaces/Xhaheen/stable-diffusion-depth2img-test/Dockerfile +++ /dev/null @@ -1,52 +0,0 @@ -FROM nvidia/cuda:11.3.1-cudnn8-devel-ubuntu18.04 -CMD nvidia-smi - -ENV DEBIAN_FRONTEND noninteractive -RUN apt-get update && apt-get install -y \ - git \ - make build-essential libssl-dev zlib1g-dev \ - libbz2-dev libreadline-dev libsqlite3-dev wget curl llvm \ - libncursesw5-dev xz-utils tk-dev libxml2-dev libxmlsec1-dev libffi-dev liblzma-dev \ - ffmpeg libsm6 libxext6 cmake libgl1-mesa-glx \ - && rm -rf /var/lib/apt/lists/* - && git lfs install - - -RUN useradd -ms /bin/bash user -USER user - -ENV HOME=/home/user \ - PATH=/home/user/.local/bin:$PATH - -RUN curl https://pyenv.run | bash -ENV PATH=$HOME/.pyenv/shims:$HOME/.pyenv/bin:$PATH -RUN pyenv install 3.8.15 && \ - pyenv global 3.8.15 && \ - pyenv rehash && \ - pip install --no-cache-dir --upgrade pip setuptools wheel - -ENV WORKDIR=/code -WORKDIR $WORKDIR -RUN chown -R user:user $WORKDIR -RUN chmod -R 777 $WORKDIR - -COPY requirements.txt $WORKDIR/requirements.txt -RUN pip install --no-cache-dir --upgrade -r $WORKDIR/requirements.txt -RUN pip install ninja - -RUN curl https://github.com/isl-org/DPT/releases/download/1_0/dpt_hybrid-midas-501f0c75.pt --create-dirs -o $WORKDIR/midas_models/dpt_hybrid-midas-501f0c75.pt -RUN curl https://github.com/isl-org/DPT/releases/download/1_0/dpt_large-midas-2f21e586.pt --create-dirs -o $WORKDIR/midas_models/dpt_large-midas-2f21e586.pt - -COPY . . 
- -ARG TORCH_CUDA_ARCH_LIST=7.5+PTX - -USER root -RUN chown -R user:user $HOME -RUN chmod -R 777 $HOME -RUN chown -R user:user $WORKDIR -RUN chmod -R 777 $WORKDIR - -USER user - -CMD ["uvicorn", "app:app", "--host", "0.0.0.0", "--port", "7860"] \ No newline at end of file diff --git a/spaces/XzJosh/Bella-Bert-VITS2/transforms.py b/spaces/XzJosh/Bella-Bert-VITS2/transforms.py deleted file mode 100644 index 4793d67ca5a5630e0ffe0f9fb29445c949e64dae..0000000000000000000000000000000000000000 --- a/spaces/XzJosh/Bella-Bert-VITS2/transforms.py +++ /dev/null @@ -1,193 +0,0 @@ -import torch -from torch.nn import functional as F - -import numpy as np - - -DEFAULT_MIN_BIN_WIDTH = 1e-3 -DEFAULT_MIN_BIN_HEIGHT = 1e-3 -DEFAULT_MIN_DERIVATIVE = 1e-3 - - -def piecewise_rational_quadratic_transform(inputs, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=False, - tails=None, - tail_bound=1., - min_bin_width=DEFAULT_MIN_BIN_WIDTH, - min_bin_height=DEFAULT_MIN_BIN_HEIGHT, - min_derivative=DEFAULT_MIN_DERIVATIVE): - - if tails is None: - spline_fn = rational_quadratic_spline - spline_kwargs = {} - else: - spline_fn = unconstrained_rational_quadratic_spline - spline_kwargs = { - 'tails': tails, - 'tail_bound': tail_bound - } - - outputs, logabsdet = spline_fn( - inputs=inputs, - unnormalized_widths=unnormalized_widths, - unnormalized_heights=unnormalized_heights, - unnormalized_derivatives=unnormalized_derivatives, - inverse=inverse, - min_bin_width=min_bin_width, - min_bin_height=min_bin_height, - min_derivative=min_derivative, - **spline_kwargs - ) - return outputs, logabsdet - - -def searchsorted(bin_locations, inputs, eps=1e-6): - bin_locations[..., -1] += eps - return torch.sum( - inputs[..., None] >= bin_locations, - dim=-1 - ) - 1 - - -def unconstrained_rational_quadratic_spline(inputs, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=False, - tails='linear', - tail_bound=1., - min_bin_width=DEFAULT_MIN_BIN_WIDTH, - min_bin_height=DEFAULT_MIN_BIN_HEIGHT, - min_derivative=DEFAULT_MIN_DERIVATIVE): - inside_interval_mask = (inputs >= -tail_bound) & (inputs <= tail_bound) - outside_interval_mask = ~inside_interval_mask - - outputs = torch.zeros_like(inputs) - logabsdet = torch.zeros_like(inputs) - - if tails == 'linear': - unnormalized_derivatives = F.pad(unnormalized_derivatives, pad=(1, 1)) - constant = np.log(np.exp(1 - min_derivative) - 1) - unnormalized_derivatives[..., 0] = constant - unnormalized_derivatives[..., -1] = constant - - outputs[outside_interval_mask] = inputs[outside_interval_mask] - logabsdet[outside_interval_mask] = 0 - else: - raise RuntimeError('{} tails are not implemented.'.format(tails)) - - outputs[inside_interval_mask], logabsdet[inside_interval_mask] = rational_quadratic_spline( - inputs=inputs[inside_interval_mask], - unnormalized_widths=unnormalized_widths[inside_interval_mask, :], - unnormalized_heights=unnormalized_heights[inside_interval_mask, :], - unnormalized_derivatives=unnormalized_derivatives[inside_interval_mask, :], - inverse=inverse, - left=-tail_bound, right=tail_bound, bottom=-tail_bound, top=tail_bound, - min_bin_width=min_bin_width, - min_bin_height=min_bin_height, - min_derivative=min_derivative - ) - - return outputs, logabsdet - -def rational_quadratic_spline(inputs, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=False, - left=0., right=1., bottom=0., top=1., - min_bin_width=DEFAULT_MIN_BIN_WIDTH, - min_bin_height=DEFAULT_MIN_BIN_HEIGHT, - 
min_derivative=DEFAULT_MIN_DERIVATIVE): - if torch.min(inputs) < left or torch.max(inputs) > right: - raise ValueError('Input to a transform is not within its domain') - - num_bins = unnormalized_widths.shape[-1] - - if min_bin_width * num_bins > 1.0: - raise ValueError('Minimal bin width too large for the number of bins') - if min_bin_height * num_bins > 1.0: - raise ValueError('Minimal bin height too large for the number of bins') - - widths = F.softmax(unnormalized_widths, dim=-1) - widths = min_bin_width + (1 - min_bin_width * num_bins) * widths - cumwidths = torch.cumsum(widths, dim=-1) - cumwidths = F.pad(cumwidths, pad=(1, 0), mode='constant', value=0.0) - cumwidths = (right - left) * cumwidths + left - cumwidths[..., 0] = left - cumwidths[..., -1] = right - widths = cumwidths[..., 1:] - cumwidths[..., :-1] - - derivatives = min_derivative + F.softplus(unnormalized_derivatives) - - heights = F.softmax(unnormalized_heights, dim=-1) - heights = min_bin_height + (1 - min_bin_height * num_bins) * heights - cumheights = torch.cumsum(heights, dim=-1) - cumheights = F.pad(cumheights, pad=(1, 0), mode='constant', value=0.0) - cumheights = (top - bottom) * cumheights + bottom - cumheights[..., 0] = bottom - cumheights[..., -1] = top - heights = cumheights[..., 1:] - cumheights[..., :-1] - - if inverse: - bin_idx = searchsorted(cumheights, inputs)[..., None] - else: - bin_idx = searchsorted(cumwidths, inputs)[..., None] - - input_cumwidths = cumwidths.gather(-1, bin_idx)[..., 0] - input_bin_widths = widths.gather(-1, bin_idx)[..., 0] - - input_cumheights = cumheights.gather(-1, bin_idx)[..., 0] - delta = heights / widths - input_delta = delta.gather(-1, bin_idx)[..., 0] - - input_derivatives = derivatives.gather(-1, bin_idx)[..., 0] - input_derivatives_plus_one = derivatives[..., 1:].gather(-1, bin_idx)[..., 0] - - input_heights = heights.gather(-1, bin_idx)[..., 0] - - if inverse: - a = (((inputs - input_cumheights) * (input_derivatives - + input_derivatives_plus_one - - 2 * input_delta) - + input_heights * (input_delta - input_derivatives))) - b = (input_heights * input_derivatives - - (inputs - input_cumheights) * (input_derivatives - + input_derivatives_plus_one - - 2 * input_delta)) - c = - input_delta * (inputs - input_cumheights) - - discriminant = b.pow(2) - 4 * a * c - assert (discriminant >= 0).all() - - root = (2 * c) / (-b - torch.sqrt(discriminant)) - outputs = root * input_bin_widths + input_cumwidths - - theta_one_minus_theta = root * (1 - root) - denominator = input_delta + ((input_derivatives + input_derivatives_plus_one - 2 * input_delta) - * theta_one_minus_theta) - derivative_numerator = input_delta.pow(2) * (input_derivatives_plus_one * root.pow(2) - + 2 * input_delta * theta_one_minus_theta - + input_derivatives * (1 - root).pow(2)) - logabsdet = torch.log(derivative_numerator) - 2 * torch.log(denominator) - - return outputs, -logabsdet - else: - theta = (inputs - input_cumwidths) / input_bin_widths - theta_one_minus_theta = theta * (1 - theta) - - numerator = input_heights * (input_delta * theta.pow(2) - + input_derivatives * theta_one_minus_theta) - denominator = input_delta + ((input_derivatives + input_derivatives_plus_one - 2 * input_delta) - * theta_one_minus_theta) - outputs = input_cumheights + numerator / denominator - - derivative_numerator = input_delta.pow(2) * (input_derivatives_plus_one * theta.pow(2) - + 2 * input_delta * theta_one_minus_theta - + input_derivatives * (1 - theta).pow(2)) - logabsdet = torch.log(derivative_numerator) - 2 * 
torch.log(denominator) - - return outputs, logabsdet diff --git a/spaces/XzJosh/Jianmo-Bert-VITS2/bert_gen.py b/spaces/XzJosh/Jianmo-Bert-VITS2/bert_gen.py deleted file mode 100644 index 44814715396ffc3abe84a12c74d66293c356eb4f..0000000000000000000000000000000000000000 --- a/spaces/XzJosh/Jianmo-Bert-VITS2/bert_gen.py +++ /dev/null @@ -1,53 +0,0 @@ -import torch -from torch.utils.data import DataLoader -from multiprocessing import Pool -import commons -import utils -from data_utils import TextAudioSpeakerLoader, TextAudioSpeakerCollate -from tqdm import tqdm -import warnings - -from text import cleaned_text_to_sequence, get_bert - -config_path = 'configs/config.json' -hps = utils.get_hparams_from_file(config_path) - -def process_line(line): - _id, spk, language_str, text, phones, tone, word2ph = line.strip().split("|") - phone = phones.split(" ") - tone = [int(i) for i in tone.split(" ")] - word2ph = [int(i) for i in word2ph.split(" ")] - w2pho = [i for i in word2ph] - word2ph = [i for i in word2ph] - phone, tone, language = cleaned_text_to_sequence(phone, tone, language_str) - - if hps.data.add_blank: - phone = commons.intersperse(phone, 0) - tone = commons.intersperse(tone, 0) - language = commons.intersperse(language, 0) - for i in range(len(word2ph)): - word2ph[i] = word2ph[i] * 2 - word2ph[0] += 1 - wav_path = f'{_id}' - - bert_path = wav_path.replace(".wav", ".bert.pt") - try: - bert = torch.load(bert_path) - assert bert.shape[-1] == len(phone) - except: - bert = get_bert(text, word2ph, language_str) - assert bert.shape[-1] == len(phone) - torch.save(bert, bert_path) - - -if __name__ == '__main__': - lines = [] - with open(hps.data.training_files, encoding='utf-8' ) as f: - lines.extend(f.readlines()) - - with open(hps.data.validation_files, encoding='utf-8' ) as f: - lines.extend(f.readlines()) - - with Pool(processes=12) as pool: #A100 40GB suitable config,if coom,please decrease the processess number. 
- for _ in tqdm(pool.imap_unordered(process_line, lines)): - pass diff --git a/spaces/XzJosh/LAPLACE-Bert-VITS2/bert_gen.py b/spaces/XzJosh/LAPLACE-Bert-VITS2/bert_gen.py deleted file mode 100644 index 44814715396ffc3abe84a12c74d66293c356eb4f..0000000000000000000000000000000000000000 --- a/spaces/XzJosh/LAPLACE-Bert-VITS2/bert_gen.py +++ /dev/null @@ -1,53 +0,0 @@ -import torch -from torch.utils.data import DataLoader -from multiprocessing import Pool -import commons -import utils -from data_utils import TextAudioSpeakerLoader, TextAudioSpeakerCollate -from tqdm import tqdm -import warnings - -from text import cleaned_text_to_sequence, get_bert - -config_path = 'configs/config.json' -hps = utils.get_hparams_from_file(config_path) - -def process_line(line): - _id, spk, language_str, text, phones, tone, word2ph = line.strip().split("|") - phone = phones.split(" ") - tone = [int(i) for i in tone.split(" ")] - word2ph = [int(i) for i in word2ph.split(" ")] - w2pho = [i for i in word2ph] - word2ph = [i for i in word2ph] - phone, tone, language = cleaned_text_to_sequence(phone, tone, language_str) - - if hps.data.add_blank: - phone = commons.intersperse(phone, 0) - tone = commons.intersperse(tone, 0) - language = commons.intersperse(language, 0) - for i in range(len(word2ph)): - word2ph[i] = word2ph[i] * 2 - word2ph[0] += 1 - wav_path = f'{_id}' - - bert_path = wav_path.replace(".wav", ".bert.pt") - try: - bert = torch.load(bert_path) - assert bert.shape[-1] == len(phone) - except: - bert = get_bert(text, word2ph, language_str) - assert bert.shape[-1] == len(phone) - torch.save(bert, bert_path) - - -if __name__ == '__main__': - lines = [] - with open(hps.data.training_files, encoding='utf-8' ) as f: - lines.extend(f.readlines()) - - with open(hps.data.validation_files, encoding='utf-8' ) as f: - lines.extend(f.readlines()) - - with Pool(processes=12) as pool: #A100 40GB suitable config,if coom,please decrease the processess number. 
- for _ in tqdm(pool.imap_unordered(process_line, lines)): - pass diff --git a/spaces/XzJosh/Nana7mi-Bert-VITS2/text/japanese.py b/spaces/XzJosh/Nana7mi-Bert-VITS2/text/japanese.py deleted file mode 100644 index ddedafa0c5b7986068dc6c91637a86febc3923a9..0000000000000000000000000000000000000000 --- a/spaces/XzJosh/Nana7mi-Bert-VITS2/text/japanese.py +++ /dev/null @@ -1,104 +0,0 @@ -# modified from https://github.com/CjangCjengh/vits/blob/main/text/japanese.py -import re -import sys - -import pyopenjtalk - -from text import symbols - -# Regular expression matching Japanese without punctuation marks: -_japanese_characters = re.compile( - r'[A-Za-z\d\u3005\u3040-\u30ff\u4e00-\u9fff\uff11-\uff19\uff21-\uff3a\uff41-\uff5a\uff66-\uff9d]') - -# Regular expression matching non-Japanese characters or punctuation marks: -_japanese_marks = re.compile( - r'[^A-Za-z\d\u3005\u3040-\u30ff\u4e00-\u9fff\uff11-\uff19\uff21-\uff3a\uff41-\uff5a\uff66-\uff9d]') - -# List of (symbol, Japanese) pairs for marks: -_symbols_to_japanese = [(re.compile('%s' % x[0]), x[1]) for x in [ - ('%', 'パーセント') -]] - - -# List of (consonant, sokuon) pairs: -_real_sokuon = [(re.compile('%s' % x[0]), x[1]) for x in [ - (r'Q([↑↓]*[kg])', r'k#\1'), - (r'Q([↑↓]*[tdjʧ])', r't#\1'), - (r'Q([↑↓]*[sʃ])', r's\1'), - (r'Q([↑↓]*[pb])', r'p#\1') -]] - -# List of (consonant, hatsuon) pairs: -_real_hatsuon = [(re.compile('%s' % x[0]), x[1]) for x in [ - (r'N([↑↓]*[pbm])', r'm\1'), - (r'N([↑↓]*[ʧʥj])', r'n^\1'), - (r'N([↑↓]*[tdn])', r'n\1'), - (r'N([↑↓]*[kg])', r'ŋ\1') -]] - - - -def post_replace_ph(ph): - rep_map = { - ':': ',', - ';': ',', - ',': ',', - '。': '.', - '!': '!', - '?': '?', - '\n': '.', - "·": ",", - '、': ",", - '...': '…', - 'v': "V" - } - if ph in rep_map.keys(): - ph = rep_map[ph] - if ph in symbols: - return ph - if ph not in symbols: - ph = 'UNK' - return ph - -def symbols_to_japanese(text): - for regex, replacement in _symbols_to_japanese: - text = re.sub(regex, replacement, text) - return text - - -def preprocess_jap(text): - '''Reference https://r9y9.github.io/ttslearn/latest/notebooks/ch10_Recipe-Tacotron.html''' - text = symbols_to_japanese(text) - sentences = re.split(_japanese_marks, text) - marks = re.findall(_japanese_marks, text) - text = [] - for i, sentence in enumerate(sentences): - if re.match(_japanese_characters, sentence): - p = pyopenjtalk.g2p(sentence) - text += p.split(" ") - - if i < len(marks): - text += [marks[i].replace(' ', '')] - return text - -def text_normalize(text): - # todo: jap text normalize - return text - -def g2p(norm_text): - phones = preprocess_jap(norm_text) - phones = [post_replace_ph(i) for i in phones] - # todo: implement tones and word2ph - tones = [0 for i in phones] - word2ph = [1 for i in phones] - return phones, tones, word2ph - - -if __name__ == '__main__': - for line in open("../../../Downloads/transcript_utf8.txt").readlines(): - text = line.split(":")[1] - phones, tones, word2ph = g2p(text) - for p in phones: - if p == "z": - print(text, phones) - sys.exit(0) diff --git a/spaces/XzJosh/XingTong-Bert-VITS2/commons.py b/spaces/XzJosh/XingTong-Bert-VITS2/commons.py deleted file mode 100644 index 9ad0444b61cbadaa388619986c2889c707d873ce..0000000000000000000000000000000000000000 --- a/spaces/XzJosh/XingTong-Bert-VITS2/commons.py +++ /dev/null @@ -1,161 +0,0 @@ -import math -import numpy as np -import torch -from torch import nn -from torch.nn import functional as F - - -def init_weights(m, mean=0.0, std=0.01): - classname = m.__class__.__name__ - if classname.find("Conv") != -1: - 
m.weight.data.normal_(mean, std) - - -def get_padding(kernel_size, dilation=1): - return int((kernel_size*dilation - dilation)/2) - - -def convert_pad_shape(pad_shape): - l = pad_shape[::-1] - pad_shape = [item for sublist in l for item in sublist] - return pad_shape - - -def intersperse(lst, item): - result = [item] * (len(lst) * 2 + 1) - result[1::2] = lst - return result - - -def kl_divergence(m_p, logs_p, m_q, logs_q): - """KL(P||Q)""" - kl = (logs_q - logs_p) - 0.5 - kl += 0.5 * (torch.exp(2. * logs_p) + ((m_p - m_q)**2)) * torch.exp(-2. * logs_q) - return kl - - -def rand_gumbel(shape): - """Sample from the Gumbel distribution, protect from overflows.""" - uniform_samples = torch.rand(shape) * 0.99998 + 0.00001 - return -torch.log(-torch.log(uniform_samples)) - - -def rand_gumbel_like(x): - g = rand_gumbel(x.size()).to(dtype=x.dtype, device=x.device) - return g - - -def slice_segments(x, ids_str, segment_size=4): - ret = torch.zeros_like(x[:, :, :segment_size]) - for i in range(x.size(0)): - idx_str = ids_str[i] - idx_end = idx_str + segment_size - ret[i] = x[i, :, idx_str:idx_end] - return ret - - -def rand_slice_segments(x, x_lengths=None, segment_size=4): - b, d, t = x.size() - if x_lengths is None: - x_lengths = t - ids_str_max = x_lengths - segment_size + 1 - ids_str = (torch.rand([b]).to(device=x.device) * ids_str_max).to(dtype=torch.long) - ret = slice_segments(x, ids_str, segment_size) - return ret, ids_str - - -def get_timing_signal_1d( - length, channels, min_timescale=1.0, max_timescale=1.0e4): - position = torch.arange(length, dtype=torch.float) - num_timescales = channels // 2 - log_timescale_increment = ( - math.log(float(max_timescale) / float(min_timescale)) / - (num_timescales - 1)) - inv_timescales = min_timescale * torch.exp( - torch.arange(num_timescales, dtype=torch.float) * -log_timescale_increment) - scaled_time = position.unsqueeze(0) * inv_timescales.unsqueeze(1) - signal = torch.cat([torch.sin(scaled_time), torch.cos(scaled_time)], 0) - signal = F.pad(signal, [0, 0, 0, channels % 2]) - signal = signal.view(1, channels, length) - return signal - - -def add_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4): - b, channels, length = x.size() - signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale) - return x + signal.to(dtype=x.dtype, device=x.device) - - -def cat_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4, axis=1): - b, channels, length = x.size() - signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale) - return torch.cat([x, signal.to(dtype=x.dtype, device=x.device)], axis) - - -def subsequent_mask(length): - mask = torch.tril(torch.ones(length, length)).unsqueeze(0).unsqueeze(0) - return mask - - -@torch.jit.script -def fused_add_tanh_sigmoid_multiply(input_a, input_b, n_channels): - n_channels_int = n_channels[0] - in_act = input_a + input_b - t_act = torch.tanh(in_act[:, :n_channels_int, :]) - s_act = torch.sigmoid(in_act[:, n_channels_int:, :]) - acts = t_act * s_act - return acts - - -def convert_pad_shape(pad_shape): - l = pad_shape[::-1] - pad_shape = [item for sublist in l for item in sublist] - return pad_shape - - -def shift_1d(x): - x = F.pad(x, convert_pad_shape([[0, 0], [0, 0], [1, 0]]))[:, :, :-1] - return x - - -def sequence_mask(length, max_length=None): - if max_length is None: - max_length = length.max() - x = torch.arange(max_length, dtype=length.dtype, device=length.device) - return x.unsqueeze(0) < length.unsqueeze(1) - - -def generate_path(duration, mask): - """ - 
duration: [b, 1, t_x] - mask: [b, 1, t_y, t_x] - """ - device = duration.device - - b, _, t_y, t_x = mask.shape - cum_duration = torch.cumsum(duration, -1) - - cum_duration_flat = cum_duration.view(b * t_x) - path = sequence_mask(cum_duration_flat, t_y).to(mask.dtype) - path = path.view(b, t_x, t_y) - path = path - F.pad(path, convert_pad_shape([[0, 0], [1, 0], [0, 0]]))[:, :-1] - path = path.unsqueeze(1).transpose(2,3) * mask - return path - - -def clip_grad_value_(parameters, clip_value, norm_type=2): - if isinstance(parameters, torch.Tensor): - parameters = [parameters] - parameters = list(filter(lambda p: p.grad is not None, parameters)) - norm_type = float(norm_type) - if clip_value is not None: - clip_value = float(clip_value) - - total_norm = 0 - for p in parameters: - param_norm = p.grad.data.norm(norm_type) - total_norm += param_norm.item() ** norm_type - if clip_value is not None: - p.grad.data.clamp_(min=-clip_value, max=clip_value) - total_norm = total_norm ** (1. / norm_type) - return total_norm diff --git a/spaces/YUANAI/DiffspeechResearch/utils/nn/schedulers.py b/spaces/YUANAI/DiffspeechResearch/utils/nn/schedulers.py deleted file mode 100644 index c91969dd8e01a8342488e060592700f3957c3651..0000000000000000000000000000000000000000 --- a/spaces/YUANAI/DiffspeechResearch/utils/nn/schedulers.py +++ /dev/null @@ -1,57 +0,0 @@ -class NoneSchedule(object): - def __init__(self, optimizer, lr): - self.optimizer = optimizer - self.constant_lr = lr - self.step(0) - - def step(self, num_updates): - self.lr = self.constant_lr - for param_group in self.optimizer.param_groups: - param_group['lr'] = self.lr - return self.lr - - def get_lr(self): - return self.optimizer.param_groups[0]['lr'] - - def get_last_lr(self): - return self.get_lr() - - -class RSQRTSchedule(NoneSchedule): - def __init__(self, optimizer, lr, warmup_updates, hidden_size): - self.optimizer = optimizer - self.constant_lr = lr - self.warmup_updates = warmup_updates - self.hidden_size = hidden_size - self.lr = lr - for param_group in optimizer.param_groups: - param_group['lr'] = self.lr - self.step(0) - - def step(self, num_updates): - constant_lr = self.constant_lr - warmup = min(num_updates / self.warmup_updates, 1.0) - rsqrt_decay = max(self.warmup_updates, num_updates) ** -0.5 - rsqrt_hidden = self.hidden_size ** -0.5 - self.lr = max(constant_lr * warmup * rsqrt_decay * rsqrt_hidden, 1e-7) - for param_group in self.optimizer.param_groups: - param_group['lr'] = self.lr - return self.lr - - -class WarmupSchedule(NoneSchedule): - def __init__(self, optimizer, lr, warmup_updates): - self.optimizer = optimizer - self.constant_lr = self.lr = lr - self.warmup_updates = warmup_updates - for param_group in optimizer.param_groups: - param_group['lr'] = self.lr - self.step(0) - - def step(self, num_updates): - constant_lr = self.constant_lr - warmup = min(num_updates / self.warmup_updates, 1.0) - self.lr = max(constant_lr * warmup, 1e-7) - for param_group in self.optimizer.param_groups: - param_group['lr'] = self.lr - return self.lr diff --git a/spaces/YazawaSunrise/so-vits-svc-LoveLive/flask_api.py b/spaces/YazawaSunrise/so-vits-svc-LoveLive/flask_api.py deleted file mode 100644 index 8cc236a1c34c9ddeddea99bcea13024fb0ccc90b..0000000000000000000000000000000000000000 --- a/spaces/YazawaSunrise/so-vits-svc-LoveLive/flask_api.py +++ /dev/null @@ -1,56 +0,0 @@ -import io -import logging - -import soundfile -import torch -import torchaudio -from flask import Flask, request, send_file -from flask_cors import CORS - -from 
inference.infer_tool import Svc, RealTimeVC - -app = Flask(__name__) - -CORS(app) - -logging.getLogger('numba').setLevel(logging.WARNING) - - -@app.route("/voiceChangeModel", methods=["POST"]) -def voice_change_model(): - request_form = request.form - wave_file = request.files.get("sample", None) - # 变调信息 - f_pitch_change = float(request_form.get("fPitchChange", 0)) - # DAW所需的采样率 - daw_sample = int(float(request_form.get("sampleRate", 0))) - speaker_id = int(float(request_form.get("sSpeakId", 0))) - # http获得wav文件并转换 - input_wav_path = io.BytesIO(wave_file.read()) - - # 模型推理 - if raw_infer: - out_audio, out_sr = svc_model.infer(speaker_id, f_pitch_change, input_wav_path) - tar_audio = torchaudio.functional.resample(out_audio, svc_model.target_sample, daw_sample) - else: - out_audio = svc.process(svc_model, speaker_id, f_pitch_change, input_wav_path) - tar_audio = torchaudio.functional.resample(torch.from_numpy(out_audio), svc_model.target_sample, daw_sample) - # 返回音频 - out_wav_path = io.BytesIO() - soundfile.write(out_wav_path, tar_audio.cpu().numpy(), daw_sample, format="wav") - out_wav_path.seek(0) - return send_file(out_wav_path, download_name="temp.wav", as_attachment=True) - - -if __name__ == '__main__': - # 启用则为直接切片合成,False为交叉淡化方式 - # vst插件调整0.3-0.5s切片时间可以降低延迟,直接切片方法会有连接处爆音、交叉淡化会有轻微重叠声音 - # 自行选择能接受的方法,或将vst最大切片时间调整为1s,此处设为Ture,延迟大音质稳定一些 - raw_infer = True - # 每个模型和config是唯一对应的 - model_name = "logs/32k/G_174000-Copy1.pth" - config_name = "configs/config.json" - svc_model = Svc(model_name, config_name) - svc = RealTimeVC() - # 此处与vst插件对应,不建议更改 - app.run(port=6842, host="0.0.0.0", debug=False, threaded=False) diff --git a/spaces/Yiqin/ChatVID/model/fastchat/model/__init__.py b/spaces/Yiqin/ChatVID/model/fastchat/model/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/Yumko/Idk/README.md b/spaces/Yumko/Idk/README.md deleted file mode 100644 index 4336e70817d0bfc314d6dd826b5afba9dad3f42b..0000000000000000000000000000000000000000 --- a/spaces/Yumko/Idk/README.md +++ /dev/null @@ -1,10 +0,0 @@ ---- -title: Idk -emoji: 📊 -colorFrom: purple -colorTo: blue -sdk: docker -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/ZeroTwo3/WavJourney/add_voice_preset.py b/spaces/ZeroTwo3/WavJourney/add_voice_preset.py deleted file mode 100644 index 70cbd91ae5afb25fdad3f6cb1a23bc4a96a34569..0000000000000000000000000000000000000000 --- a/spaces/ZeroTwo3/WavJourney/add_voice_preset.py +++ /dev/null @@ -1,21 +0,0 @@ -import argparse -import voice_presets - -def main(): - # Argument Parsing - parser = argparse.ArgumentParser(description="Add Voice Preset") - parser.add_argument("--id", required=True, help="ID of the voice") - parser.add_argument("--desc", required=True, help="Description of the voice") - parser.add_argument("--wav-path", required=True, help="Path to the .wav file") - parser.add_argument("--session-id", required=True, help="session_id, if set to '' then it's system voice presets") - args = parser.parse_args() - - if args.session_id: - print(voice_presets.add_session_voice_preset(args.id, args.desc, args.wav_path, args.session_id)) - else: - print(voice_presets.add_system_voice_preset(args.id, args.desc, args.wav_path)) - - - -if __name__ == "__main__": - main() diff --git a/spaces/Zitang/Self-attention-based-V1MT-motion-model/FFV1MT_MS.py b/spaces/Zitang/Self-attention-based-V1MT-motion-model/FFV1MT_MS.py deleted 
file mode 100644 index db5038486bf4eb3fe02b96d9daf547c2820ca17a..0000000000000000000000000000000000000000 --- a/spaces/Zitang/Self-attention-based-V1MT-motion-model/FFV1MT_MS.py +++ /dev/null @@ -1,323 +0,0 @@ -import numpy as np -import torch -from torch import nn -from torch.nn import functional as F -from MT import FeatureTransformer -from torch.cuda.amp import autocast as autocast -from flow_tools import viz_img_seq, save_img_seq, plt_show_img_flow -from copy import deepcopy -from V1 import V1 -import matplotlib.pyplot as plt -from io import BytesIO -from PIL import Image - -def conv(in_planes, out_planes, kernel_size=3, stride=1, dilation=1, isReLU=True): - if isReLU: - return nn.Sequential( - nn.Conv2d(in_planes, out_planes, kernel_size=kernel_size, stride=stride, - dilation=dilation, - padding=((kernel_size - 1) * dilation) // 2, bias=True), - nn.GELU() - ) - else: - return nn.Sequential( - nn.Conv2d(in_planes, out_planes, kernel_size=kernel_size, stride=stride, - dilation=dilation, - padding=((kernel_size - 1) * dilation) // 2, bias=True) - ) - - - -def plt_attention(attention, h, w): - col = len(attention) // 2 - fig = plt.figure(figsize=(10, 8)) - - for i in range(len(attention)): - viz = attention[i][0, :, :, h, w].detach().cpu().numpy() - # viz = viz[7:-7, 7:-7] - if i == 0: - viz_all = viz - else: - viz_all = viz_all + viz - - ax1 = fig.add_subplot(2, col, i + 1) - img = ax1.imshow(viz, cmap="rainbow", interpolation="bilinear") - ax1.scatter(w, h, color='grey', s=300, alpha=0.5) - ax1.scatter(w, h, color='red', s=150, alpha=0.5) - plt.title(" Iteration %d" % (i + 1)) - if i == len(attention) - 1: - plt.title(" Final Iteration") - plt.xticks([]) - plt.yticks([]) - - - # tight layout - plt.tight_layout() - # save the figure - buf = BytesIO() - plt.savefig(buf, format='png') - buf.seek(0) - plt.close() - # convert the figure to an array - img = Image.open(buf) - img = np.array(img) - return img - - -class FlowDecoder(nn.Module): - # can reduce 25% of training time. 
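    # The decoder below is a stack of 1x1 convolutions with dense skip
    # concatenations; it maps the motion feature tensor to a 2-channel
    # optical-flow field via `predict_flow`.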
- def __init__(self, ch_in): - super(FlowDecoder, self).__init__() - self.conv1 = conv(ch_in, 256, kernel_size=1) - self.conv2 = conv(256, 128, kernel_size=1) - self.conv3 = conv(256 + 128, 96, kernel_size=1) - self.conv4 = conv(96 + 128, 64, kernel_size=1) - self.conv5 = conv(96 + 64, 32, kernel_size=1) - - self.feat_dim = 32 - self.predict_flow = conv(64 + 32, 2, isReLU=False) - - def forward(self, x): - x1 = self.conv1(x) - x2 = self.conv2(x1) - x3 = self.conv3(torch.cat([x1, x2], dim=1)) - x4 = self.conv4(torch.cat([x2, x3], dim=1)) - x5 = self.conv5(torch.cat([x3, x4], dim=1)) - flow = self.predict_flow(torch.cat([x4, x5], dim=1)) - return flow - - -class FFV1DNN(nn.Module): - def __init__(self, - num_scales=8, - num_cells=256, - upsample_factor=8, - feature_channels=256, - scale_factor=16, - num_layers=6, - ): - super(FFV1DNN, self).__init__() - self.ffv1 = V1(spatial_num=num_cells // num_scales, scale_num=num_scales, scale_factor=scale_factor, - kernel_radius=7, num_ft=num_cells // num_scales, - kernel_size=6, average_time=True) - self.v1_kz = 7 - self.scale_factor = scale_factor - scale_each_level = np.exp(1 / (num_scales - 1) * np.log(1 / scale_factor)) - self.scale_num = num_scales - self.scale_each_level = scale_each_level - v1_channel = self.ffv1.num_after_st - self.num_scales = num_scales - self.MT_channel = feature_channels - assert self.MT_channel == v1_channel - self.feature_channels = feature_channels - - self.upsample_factor = upsample_factor - self.num_layers = num_layers - # convex upsampling: concat feature0 and flow as input - self.upsampler_1 = nn.Sequential(nn.Conv2d(2 + feature_channels, 256, 3, 1, 1), - nn.ReLU(inplace=True), - nn.Conv2d(256, 256, 3, 1, 1), - nn.ReLU(inplace=True), - nn.Conv2d(256, upsample_factor ** 2 * 9, 3, 1, 1)) - self.decoder = FlowDecoder(feature_channels) - self.conv_feat = nn.ModuleList([conv(v1_channel, feature_channels, 1) for i in range(num_scales)]) - self.MT = FeatureTransformer(d_model=feature_channels, num_layers=self.num_layers) - - # 2*2*8*scale` - def upsample_flow(self, flow, feature, upsampler=None, bilinear=False, upsample_factor=4): - if bilinear: - up_flow = F.interpolate(flow, scale_factor=upsample_factor, - mode='bilinear', align_corners=True) * upsample_factor - else: - # convex upsampling - concat = torch.cat((flow, feature), dim=1) - mask = upsampler(concat) - b, flow_channel, h, w = flow.shape - mask = mask.view(b, 1, 9, upsample_factor, upsample_factor, h, w) # [B, 1, 9, K, K, H, W] - mask = torch.softmax(mask, dim=2) - - up_flow = F.unfold(upsample_factor * flow, [3, 3], padding=1) - up_flow = up_flow.view(b, flow_channel, 9, 1, 1, h, w) # [B, 2, 9, 1, 1, H, W] - - up_flow = torch.sum(mask * up_flow, dim=2) # [B, 2, K, K, H, W] - up_flow = up_flow.permute(0, 1, 4, 2, 5, 3) # [B, 2, K, H, K, W] - up_flow = up_flow.reshape(b, flow_channel, upsample_factor * h, - upsample_factor * w) # [B, 2, K*H, K*W] - - return up_flow - - def forward(self, image_list, mix_enable=True, layer=6): - if layer is not None: - self.MT.num_layers = layer - self.num_layers = layer - results_dict = {} - padding = self.v1_kz * self.scale_factor - with torch.no_grad(): - if image_list[0].max() > 10: - image_list = [img / 255.0 for img in image_list] # [B, 1, H, W] 0-1 - if image_list[0].shape[1] == 3: - # convert to gray using transform Gray = R*0.299 + G*0.587 + B*0.114 - image_list = [img[:, 0, :, :] * 0.299 + img[:, 1, :, :] * 0.587 + img[:, 2, :, :] * 0.114 for img in - image_list] - image_list = [img.unsqueeze(1) for img in image_list] - - 
B, _, H, W = image_list[0].shape - MT_size = (H // 8, W // 8) - with autocast(enabled=mix_enable): - # with torch.no_grad(): # TODO: only for test wheather a trainable V1 is needed. - st_component = self.ffv1(image_list) - # viz_img_seq(image_scale, if_debug=True) - if self.num_layers == 0: - motion_feature = [st_component] - flows = [self.decoder(feature) for feature in motion_feature] - flows_up = [self.upsample_flow(flow, feature=None, bilinear=True, upsample_factor=8) for flow in flows] - results_dict["flow_seq"] = flows_up - return results_dict - motion_feature, attn = self.MT.forward_save_mem(st_component) - flow_v1 = self.decoder(st_component) - - flows = [flow_v1] + [self.decoder(feature) for feature in motion_feature] - flows_bi = [self.upsample_flow(flow, feature=None, bilinear=True, upsample_factor=8) for flow in flows] - flows_up = [flows_bi[0]] + \ - [self.upsample_flow(flows, upsampler=self.upsampler_1, feature=attn, upsample_factor=8) for - flows, attn in zip(flows[1:], attn)] - assert len(flows_bi) == len(flows_up) - results_dict["flow_seq"] = flows_up - results_dict["flow_seq_bi"] = flows_bi - return results_dict - - def forward_test(self, image_list, mix_enable=True, layer=6): - if layer is not None: - self.MT.num_layers = layer - self.num_layers = layer - results_dict = {} - padding = self.v1_kz * self.scale_factor - with torch.no_grad(): - if image_list[0].max() > 10: - image_list = [img / 255.0 for img in image_list] # [B, 1, H, W] 0-1 - - B, _, H, W = image_list[0].shape - MT_size = (H // 8, W // 8) - with autocast(enabled=mix_enable): - st_component = self.ffv1(image_list) - # viz_img_seq(image_scale, if_debug=True) - if self.num_layers == 0: - motion_feature = [st_component] - flows = [self.decoder(feature) for feature in motion_feature] - flows_up = [self.upsample_flow(flow, feature=None, bilinear=True, upsample_factor=8) for flow in flows] - results_dict["flow_seq"] = flows_up - return results_dict - motion_feature, attn, _ = self.MT.forward_save_mem(st_component) - flow_v1 = self.decoder(st_component) - flows = [flow_v1] + [self.decoder(feature) for feature in motion_feature] - flows_bi = [self.upsample_flow(flow, feature=None, bilinear=True, upsample_factor=8) for flow in flows] - flows_up = [flows_bi[0]] + \ - [self.upsample_flow(flows, upsampler=self.upsampler_1, feature=attn, upsample_factor=8) for - flows, attn in zip(flows[1:], attn)] - assert len(flows_bi) == len(flows_up) - results_dict["flow_seq"] = flows_up - results_dict["flow_seq_bi"] = flows_bi - return results_dict - - def forward_viz(self, image_list, layer=None, x=50, y=50): - x = x / 100 - y = y / 100 - if layer is not None: - self.MT.num_layers = layer - results_dict = {} - padding = self.v1_kz * self.scale_factor - with torch.no_grad(): - if image_list[0].max() > 10: - image_list = [img / 255.0 for img in image_list] # [B, 1, H, W] 0-1 - if image_list[0].shape[1] == 3: - # convert to gray using transform Gray = R*0.299 + G*0.587 + B*0.114 - image_list = [img[:, 0, :, :] * 0.299 + img[:, 1, :, :] * 0.587 + img[:, 2, :, :] * 0.114 for img in - image_list] - image_list = [img.unsqueeze(1) for img in image_list] - image_list_ori = deepcopy(image_list) - - B, _, H, W = image_list[0].shape - MT_size = (H // 8, W // 8) - with autocast(enabled=True): - st_component = self.ffv1(image_list) - activation = self.ffv1.visualize_activation(st_component) - # viz_img_seq(image_scale, if_debug=True) - motion_feature, attn, attn_viz = self.MT(st_component) - flow_v1 = self.decoder(st_component) - - flows = 
[flow_v1] + [self.decoder(feature) for feature in motion_feature] - flows_bi = [self.upsample_flow(flow, feature=None, bilinear=True, upsample_factor=8) for flow in flows] - flows_up = [flows_bi[0]] + \ - [self.upsample_flow(flows, upsampler=self.upsampler_1, feature=attn, upsample_factor=8) for - flows, attn in zip(flows[1:], attn)] - assert len(flows_bi) == len(flows_up) - results_dict["flow_seq"] = flows_up - # select 1,3,5,7 - flows_up = [flows_up[i] for i in [0, 2, 4]] + [flows_up[-1]] - attn_viz = [attn_viz[i] for i in [0, 2, 4]] + [attn_viz[-1]] - flow = plt_show_img_flow(image_list_ori, flows_up) - h = int(MT_size[0] * y) - w = int(MT_size[1] * x) - attention = plt_attention(attn_viz, h=h, w=w) - print("done") - plt.clf() - plt.cla() - plt.close() - results_dict["activation"] = activation - plt.clf() - plt.cla() - plt.close() - results_dict["attention"] = attention - plt.clf() - plt.cla() - plt.close() - results_dict["flow"] = flow - plt.clf() - plt.cla() - plt.close() - - return results_dict - - def num_parameters(self): - return sum( - [p.data.nelement() if p.requires_grad else 0 for p in self.parameters()]) - - def init_weights(self): - for layer in self.named_modules(): - if isinstance(layer, nn.Conv2d): - nn.init.kaiming_normal_(layer.weight) - if layer.bias is not None: - nn.init.constant_(layer.bias, 0) - if isinstance(layer, nn.Conv1d): - nn.init.kaiming_normal_(layer.weight) - if layer.bias is not None: - nn.init.constant_(layer.bias, 0) - - elif isinstance(layer, nn.ConvTranspose2d): - nn.init.kaiming_normal_(layer.weight) - if layer.bias is not None: - nn.init.constant_(layer.bias, 0) - - @staticmethod - def demo(file=None): - import time - from utils import torch_utils as utils - frame_list = [torch.randn([4, 1, 512, 512], device="cuda")] * 11 - model = FFV1DNN(num_scales=8, scale_factor=16, num_cells=256, upsample_factor=8, num_layers=6, - feature_channels=256).cuda() - if file is not None: - model = utils.restore_model(model, file) - print(model.num_parameters()) - for i in range(100): - start = time.time() - output = model.forward_viz(frame_list, layer=7) - # print(output["flow_seq"][-1]) - torch.mean(output["flow_seq"][-1]).backward() - print(torch.any(torch.isnan(output["flow_seq"][-1]))) - end = time.time() - print(end - start) - print("#================================++#") - - -if __name__ == '__main__': - FFV1DNN.demo(None) diff --git a/spaces/abdvl/datahub_qa_bot/docs/managed-datahub/welcome-acryl.md b/spaces/abdvl/datahub_qa_bot/docs/managed-datahub/welcome-acryl.md deleted file mode 100644 index 42971a5c81a85fc23f5e447c9635965800841e89..0000000000000000000000000000000000000000 --- a/spaces/abdvl/datahub_qa_bot/docs/managed-datahub/welcome-acryl.md +++ /dev/null @@ -1,59 +0,0 @@ -# Getting Started with Acryl DataHub - - -Welcome to the Acryl DataHub! We at Acryl are on a mission to make data reliable by bringing clarity to the who, what, when, & how of your data ecosystem. We're thrilled to be on this journey with you; and cannot wait to see what we build together! - -Close communication is not only welcomed, but highly encouraged. For all questions, concerns, & feedback, please reach out to us directly at support@acryl.io. - -## Prerequisites - -Before you go further, you'll need to have a DataHub instance provisioned. The Acryl integrations team will provide you the following once it has been deployed: - -1. The URL for your Acryl instance (https://your-domain-name.acryl.io) -2. 
Admin account credentials for logging into the DataHub UI - -Once you have these, you're ready to go. - -:::info -If you wish to have a private connection to your DataHub instance, Acryl supports [AWS PrivateLink](https://aws.amazon.com/privatelink/) to complete this connection to your existing AWS account. Please see more details [here](integrations/aws-privatelink.md). -::: - -### Logging In - -Acryl DataHub currently supports the following means to log into a DataHub instance: - -1. **Admin account**: With each deployment of DataHub comes a master admin account. It has a randomly generated password that can be accessed by reaching out to Acryl Integrations team (support@acryl.io). To log in with an admin account, navigate to https://your-domain.acryl.io/login -2. **OIDC**: Acryl DataHub also supports OIDC integration with the Identity Provider of your choice (Okta, Google, etc). To set this up, Acryl integrations team will require the following: -3. _Client ID_ - A unique identifier for your application with the identity provider -4. _Client Secret_ - A shared secret to use for exchange between you and your identity provider. To send this over securely, we recommend using [onetimesecret.com](https://onetimesecret.com/) to create a link. -5. _Discovery URL_ - A URL where the OIDC API of your identity provider can be discovered. This should suffixed by `.well-known/openid-configuration`. Sometimes, identity providers will not explicitly include this URL in their setup guides, though this endpoint will exist as per the OIDC specification. For more info see [here](http://openid.net/specs/openid-connect-discovery-1\_0.html). - -The callback URL to register in your Identity Provider will be - -``` -https://your-acryl-domain.acryl.io/callback/oidc -``` - -_Note that we do not yet support LDAP or SAML authentication. Please let us know if either of these integrations would be useful for your organization._ - -## Getting Started - -Acryl DataHub is first and foremost a metadata Search & Discovery product. As such, the two most important parts of the experience are - -1. Ingesting metadata -2. Discovering metadata - -### Ingesting Metadata - -Acryl DataHub employs a push-based metadata ingestion model. In practice, this means running an Acryl-provided agent inside your organization's infrastructure, and pushing that data out to your DataHub instance in the cloud. One benefit of this approach is that metadata can be aggregated across any number of distributed sources, regardless of form or location. - -This approach comes with another benefit: security. By managing your own instance of the agent, you can keep the secrets and credentials within your walled garden. Skip uploading secrets & keys into a third-party cloud tool. - -To push metadata into DataHub, Acryl provide's an ingestion framework written in Python. Typically, push jobs are run on a schedule at an interval of your choosing. For our step-by-step guide on ingestion, click [here](docs/managed-datahub/metadata-ingestion-with-acryl/ingestion.md). - -### Discovering Metadata - -There are 2 primary ways to find metadata: search and browse. Both can be accessed via the DataHub home page. - -By default, we provide rich search capabilities across your ingested metadata. This includes the ability to search by tags, descriptions, column names, column descriptions, and more using the global search bar found on the home page. 
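As a rough illustration of the push-based model described above, a scheduled push job might look like the sketch below. The module paths, class names, and the `/gms` endpoint are assumptions drawn from the open-source `acryl-datahub` Python package rather than an Acryl-provided recipe, so check them against the ingestion framework's documentation for your deployment.

```python
# Minimal sketch of a push-based ingestion job (assumed acryl-datahub APIs).
from datahub.emitter.mcp import MetadataChangeProposalWrapper
from datahub.emitter.rest_emitter import DatahubRestEmitter
from datahub.metadata.schema_classes import DatasetPropertiesClass

# The agent runs inside your own infrastructure and pushes to your cloud
# instance; the URL and token below are placeholders, not real credentials.
emitter = DatahubRestEmitter(
    gms_server="https://your-domain-name.acryl.io/gms",
    token="<personal-access-token>",
)

# Emit one metadata aspect for one dataset; a real job would iterate over the
# sources you want to ingest and run on a schedule (cron, Airflow, etc.).
proposal = MetadataChangeProposalWrapper(
    entityUrn="urn:li:dataset:(urn:li:dataPlatform:postgres,public.orders,PROD)",
    aspect=DatasetPropertiesClass(description="Orders fact table"),
)
emitter.emit(proposal)
```

Because the agent only needs outbound access to your instance, source credentials stay inside your own infrastructure, which is the security property noted above.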
- diff --git a/spaces/abhaskumarsinha/MinimalGPT-Felis_Catus/subword/bpe_toy.py b/spaces/abhaskumarsinha/MinimalGPT-Felis_Catus/subword/bpe_toy.py deleted file mode 100644 index 0421b255861cb56eb40bf58a8225807cc396e968..0000000000000000000000000000000000000000 --- a/spaces/abhaskumarsinha/MinimalGPT-Felis_Catus/subword/bpe_toy.py +++ /dev/null @@ -1,51 +0,0 @@ -#!/usr/bin/env python -# -*- coding: utf-8 -*- -# Author: Rico Sennrich - -"""Use byte pair encoding (BPE) to learn a variable-length encoding of the vocabulary in a text. -Unlike the original BPE, it does not compress the plain text, but can be used to reduce the vocabulary -of a text to a configurable number of symbols, with only a small increase in the number of tokens. -This is an (inefficient) toy implementation that shows the algorithm. For processing large datasets, -indexing and incremental updates can be used to speed up the implementation (see learn_bpe.py). - -Reference: -Rico Sennrich, Barry Haddow and Alexandra Birch (2016). Neural Machine Translation of Rare Words with Subword Units. -Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (ACL 2016). Berlin, Germany. -""" - - -import re -import sys -import collections - -def get_stats(vocab): - pairs = collections.defaultdict(int) - for word, freq in vocab.items(): - symbols = word.split() - for i in range(len(symbols)-1): - pairs[symbols[i],symbols[i+1]] += freq - return pairs - -def merge_vocab(pair, v_in): - v_out = {} - bigram_pattern = re.escape(' '.join(pair)) - p = re.compile(r'(?' : 5, 'l o w e r' : 2, - 'n e w e s t' : 6, 'w i d e s t' : 3} -num_merges = 15 -for i in range(num_merges): - pairs = get_stats(vocab) - try: - best = max(pairs, key=pairs.get) - except ValueError: - break - if pairs[best] < 2: - sys.stderr.write('no pair has frequency > 1. 
Stopping\n') - break - vocab = merge_vocab(best, vocab) - print(best) diff --git a/spaces/abokbot/wikipedia-search-engine/app.py b/spaces/abokbot/wikipedia-search-engine/app.py deleted file mode 100644 index 5d530e06bed6b8aaa4639b2d4abeb9793d42e993..0000000000000000000000000000000000000000 --- a/spaces/abokbot/wikipedia-search-engine/app.py +++ /dev/null @@ -1,106 +0,0 @@ -import streamlit as st -from datasets import load_dataset -from sentence_transformers import SentenceTransformer, CrossEncoder, util -import torch -from huggingface_hub import hf_hub_download - -embedding_path = "abokbot/wikipedia-embedding" - -st.header("Wikipedia Search Engine app") - -st_model_load = st.text('Loading embeddings, encoders and dataset (takes about 5min)') - -@st.cache_resource -def load_embedding(): - print("Loading embedding...") - path = hf_hub_download(repo_id="abokbot/wikipedia-embedding", filename="wikipedia_en_embedding.pt") - wikipedia_embedding = torch.load(path, map_location=torch.device('cpu')) - print("Embedding loaded!") - return wikipedia_embedding - -wikipedia_embedding = load_embedding() - -@st.cache_resource -def load_encoders(): - print("Loading encoders...") - bi_encoder = SentenceTransformer('msmarco-MiniLM-L-6-v3') - bi_encoder.max_seq_length = 256 #Truncate long passages to 256 tokens - top_k = 32 - cross_encoder = CrossEncoder('cross-encoder/ms-marco-TinyBERT-L-2-v2') - print("Encoders loaded!") - return bi_encoder, cross_encoder - -bi_encoder, cross_encoder = load_encoders() - -@st.cache_resource -def load_wikipedia_dataset(): - print("Loading wikipedia dataset...") - dataset = load_dataset("abokbot/wikipedia-first-paragraph")["train"] - print("Dataset loaded!") - return dataset - -dataset = load_wikipedia_dataset() -st.success('Search engine ready') -st_model_load.text("") - -if 'text' not in st.session_state: - st.session_state.text = "" -st.markdown("Enter query") -st_text_area = st.text_area( - 'E.g. What is the hashing trick? 
or Largest city in Morocco', - value=st.session_state.text, - height=25 -) - - -def search(): - st.session_state.text = st_text_area - query = st_text_area - print("Input question:", query) - - ##### Sematic Search ##### - print("Semantic Search") - # Encode the query using the bi-encoder and find potentially relevant passages - top_k = 32 - question_embedding = bi_encoder.encode(query, convert_to_tensor=True) - hits = util.semantic_search(question_embedding, wikipedia_embedding, top_k=top_k) - hits = hits[0] # Get the hits for the first query - - ##### Re-Ranking ##### - # Now, score all retrieved passages with the cross_encoder - print("Re-Ranking") - cross_inp = [[query, dataset[hit['corpus_id']]["text"]] for hit in hits] - cross_scores = cross_encoder.predict(cross_inp) - - # Sort results by the cross-encoder scores - for idx in range(len(cross_scores)): - hits[idx]['cross-score'] = cross_scores[idx] - - hits = sorted(hits, key=lambda x: x['cross-score'], reverse=True) - # Output of top-3 hits from re-ranker - print("\n-------------------------\n") - print("Top-3 Cross-Encoder Re-ranker hits") - results = [] - for hit in hits[:3]: - results.append( - { - "score": round(hit['cross-score'], 3), - "title": dataset[hit['corpus_id']]["title"], - "abstract": dataset[hit['corpus_id']]["text"].replace("\n", " "), - "link": dataset[hit['corpus_id']]["url"] - } - ) - return results - - -# search button -st_search_button = st.button('Search') -if st_search_button: - results = search() - st.subheader("Top-3 Search results") - for i, result in enumerate(results): - st.markdown(f"#### Result {i+1}") - st.markdown("**Wikipedia article:** " + result["title"]) - st.markdown("**Link:** " + result["link"]) - st.markdown("**First paragraph:** " + result["abstract"]) - st.text("") \ No newline at end of file diff --git a/spaces/adwod/Streamlite_ViT_2000/README.md b/spaces/adwod/Streamlite_ViT_2000/README.md deleted file mode 100644 index 5c8879f86aaaaa6b7b27ec85cf8421f267560606..0000000000000000000000000000000000000000 --- a/spaces/adwod/Streamlite_ViT_2000/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Streamlite ViT 2000 -emoji: 🦀 -colorFrom: purple -colorTo: red -sdk: streamlit -sdk_version: 1.17.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/aimstack/aim/Dockerfile b/spaces/aimstack/aim/Dockerfile deleted file mode 100644 index 8413c7bea4e5782045058219730e82279899e13a..0000000000000000000000000000000000000000 --- a/spaces/aimstack/aim/Dockerfile +++ /dev/null @@ -1,30 +0,0 @@ -FROM python:3.9 - -RUN useradd -m -u 1000 aim_user - -# Switch to the "aim_user" user -USER aim_user - -# Set home to the user's home directory -ENV HOME=/home/aim_user \ - PATH=/home/aim_user/.local/bin:$PATH - -# Set the working directory to the user's home directory -WORKDIR $HOME - -# install the `aim` package on the latest version -RUN pip install aim - -RUN aim telemetry off - -ENTRYPOINT ["/bin/sh", "-c"] - -COPY aim_repo.tar.gz . -RUN tar xvzf aim_repo.tar.gz -# have to run `aim init` in the directory that stores aim data for -# otherwise `aim up` will prompt for confirmation to create the directory itself. -# We run aim listening on 0.0.0.0 to expose all ports. Also, we run -# using `--dev` to print verbose logs. Port 43800 is the default port of -# `aim up` but explicit is better than implicit. 
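# Port 7860 is used in the CMD below (rather than aim's 43800 default) because
# Hugging Face Spaces routes incoming traffic to 7860 unless a different
# app_port is configured in the Space's README.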
-CMD ["aim up --host 0.0.0.0 --port 7860 --workers 2"] - diff --git a/spaces/akhaliq/Mask2Former/datasets/prepare_ade20k_pan_seg.py b/spaces/akhaliq/Mask2Former/datasets/prepare_ade20k_pan_seg.py deleted file mode 100644 index 8b1b03281dd4307c6dad2dc7e2505aacad3e72a3..0000000000000000000000000000000000000000 --- a/spaces/akhaliq/Mask2Former/datasets/prepare_ade20k_pan_seg.py +++ /dev/null @@ -1,500 +0,0 @@ -#!/usr/bin/env python3 -# -*- coding: utf-8 -*- -# Copyright (c) Facebook, Inc. and its affiliates. -import glob -import json -import os -from collections import Counter - -import numpy as np -import tqdm -from panopticapi.utils import IdGenerator, save_json -from PIL import Image - -ADE20K_SEM_SEG_CATEGORIES = [ - "wall", - "building", - "sky", - "floor", - "tree", - "ceiling", - "road, route", - "bed", - "window ", - "grass", - "cabinet", - "sidewalk, pavement", - "person", - "earth, ground", - "door", - "table", - "mountain, mount", - "plant", - "curtain", - "chair", - "car", - "water", - "painting, picture", - "sofa", - "shelf", - "house", - "sea", - "mirror", - "rug", - "field", - "armchair", - "seat", - "fence", - "desk", - "rock, stone", - "wardrobe, closet, press", - "lamp", - "tub", - "rail", - "cushion", - "base, pedestal, stand", - "box", - "column, pillar", - "signboard, sign", - "chest of drawers, chest, bureau, dresser", - "counter", - "sand", - "sink", - "skyscraper", - "fireplace", - "refrigerator, icebox", - "grandstand, covered stand", - "path", - "stairs", - "runway", - "case, display case, showcase, vitrine", - "pool table, billiard table, snooker table", - "pillow", - "screen door, screen", - "stairway, staircase", - "river", - "bridge, span", - "bookcase", - "blind, screen", - "coffee table", - "toilet, can, commode, crapper, pot, potty, stool, throne", - "flower", - "book", - "hill", - "bench", - "countertop", - "stove", - "palm, palm tree", - "kitchen island", - "computer", - "swivel chair", - "boat", - "bar", - "arcade machine", - "hovel, hut, hutch, shack, shanty", - "bus", - "towel", - "light", - "truck", - "tower", - "chandelier", - "awning, sunshade, sunblind", - "street lamp", - "booth", - "tv", - "plane", - "dirt track", - "clothes", - "pole", - "land, ground, soil", - "bannister, banister, balustrade, balusters, handrail", - "escalator, moving staircase, moving stairway", - "ottoman, pouf, pouffe, puff, hassock", - "bottle", - "buffet, counter, sideboard", - "poster, posting, placard, notice, bill, card", - "stage", - "van", - "ship", - "fountain", - "conveyer belt, conveyor belt, conveyer, conveyor, transporter", - "canopy", - "washer, automatic washer, washing machine", - "plaything, toy", - "pool", - "stool", - "barrel, cask", - "basket, handbasket", - "falls", - "tent", - "bag", - "minibike, motorbike", - "cradle", - "oven", - "ball", - "food, solid food", - "step, stair", - "tank, storage tank", - "trade name", - "microwave", - "pot", - "animal", - "bicycle", - "lake", - "dishwasher", - "screen", - "blanket, cover", - "sculpture", - "hood, exhaust hood", - "sconce", - "vase", - "traffic light", - "tray", - "trash can", - "fan", - "pier", - "crt screen", - "plate", - "monitor", - "bulletin board", - "shower", - "radiator", - "glass, drinking glass", - "clock", - "flag", # noqa -] - -PALETTE = [ - [120, 120, 120], - [180, 120, 120], - [6, 230, 230], - [80, 50, 50], - [4, 200, 3], - [120, 120, 80], - [140, 140, 140], - [204, 5, 255], - [230, 230, 230], - [4, 250, 7], - [224, 5, 255], - [235, 255, 7], - [150, 5, 61], - [120, 120, 70], - [8, 255, 51], - 
[255, 6, 82], - [143, 255, 140], - [204, 255, 4], - [255, 51, 7], - [204, 70, 3], - [0, 102, 200], - [61, 230, 250], - [255, 6, 51], - [11, 102, 255], - [255, 7, 71], - [255, 9, 224], - [9, 7, 230], - [220, 220, 220], - [255, 9, 92], - [112, 9, 255], - [8, 255, 214], - [7, 255, 224], - [255, 184, 6], - [10, 255, 71], - [255, 41, 10], - [7, 255, 255], - [224, 255, 8], - [102, 8, 255], - [255, 61, 6], - [255, 194, 7], - [255, 122, 8], - [0, 255, 20], - [255, 8, 41], - [255, 5, 153], - [6, 51, 255], - [235, 12, 255], - [160, 150, 20], - [0, 163, 255], - [140, 140, 200], - [250, 10, 15], - [20, 255, 0], - [31, 255, 0], - [255, 31, 0], - [255, 224, 0], - [153, 255, 0], - [0, 0, 255], - [255, 71, 0], - [0, 235, 255], - [0, 173, 255], - [31, 0, 255], - [11, 200, 200], - [255, 82, 0], - [0, 255, 245], - [0, 61, 255], - [0, 255, 112], - [0, 255, 133], - [255, 0, 0], - [255, 163, 0], - [255, 102, 0], - [194, 255, 0], - [0, 143, 255], - [51, 255, 0], - [0, 82, 255], - [0, 255, 41], - [0, 255, 173], - [10, 0, 255], - [173, 255, 0], - [0, 255, 153], - [255, 92, 0], - [255, 0, 255], - [255, 0, 245], - [255, 0, 102], - [255, 173, 0], - [255, 0, 20], - [255, 184, 184], - [0, 31, 255], - [0, 255, 61], - [0, 71, 255], - [255, 0, 204], - [0, 255, 194], - [0, 255, 82], - [0, 10, 255], - [0, 112, 255], - [51, 0, 255], - [0, 194, 255], - [0, 122, 255], - [0, 255, 163], - [255, 153, 0], - [0, 255, 10], - [255, 112, 0], - [143, 255, 0], - [82, 0, 255], - [163, 255, 0], - [255, 235, 0], - [8, 184, 170], - [133, 0, 255], - [0, 255, 92], - [184, 0, 255], - [255, 0, 31], - [0, 184, 255], - [0, 214, 255], - [255, 0, 112], - [92, 255, 0], - [0, 224, 255], - [112, 224, 255], - [70, 184, 160], - [163, 0, 255], - [153, 0, 255], - [71, 255, 0], - [255, 0, 163], - [255, 204, 0], - [255, 0, 143], - [0, 255, 235], - [133, 255, 0], - [255, 0, 235], - [245, 0, 255], - [255, 0, 122], - [255, 245, 0], - [10, 190, 212], - [214, 255, 0], - [0, 204, 255], - [20, 0, 255], - [255, 255, 0], - [0, 153, 255], - [0, 41, 255], - [0, 255, 204], - [41, 0, 255], - [41, 255, 0], - [173, 0, 255], - [0, 245, 255], - [71, 0, 255], - [122, 0, 255], - [0, 255, 184], - [0, 92, 255], - [184, 255, 0], - [0, 133, 255], - [255, 214, 0], - [25, 194, 194], - [102, 255, 0], - [92, 0, 255], -] - - -if __name__ == "__main__": - dataset_dir = os.getenv("DETECTRON2_DATASETS", "datasets") - - for name, dirname in [("train", "training"), ("val", "validation")]: - image_dir = os.path.join(dataset_dir, f"ADEChallengeData2016/images/{dirname}/") - semantic_dir = os.path.join(dataset_dir, f"ADEChallengeData2016/annotations/{dirname}/") - instance_dir = os.path.join( - dataset_dir, f"ADEChallengeData2016/annotations_instance/{dirname}/" - ) - - # folder to store panoptic PNGs - out_folder = os.path.join(dataset_dir, f"ADEChallengeData2016/ade20k_panoptic_{name}/") - # json with segmentations information - out_file = os.path.join(dataset_dir, f"ADEChallengeData2016/ade20k_panoptic_{name}.json") - - if not os.path.isdir(out_folder): - print("Creating folder {} for panoptic segmentation PNGs".format(out_folder)) - os.mkdir(out_folder) - - # json config - config_file = "datasets/ade20k_instance_imgCatIds.json" - with open(config_file) as f: - config = json.load(f) - - # load catid mapping - mapping_file = "datasets/ade20k_instance_catid_mapping.txt" - with open(mapping_file) as f: - map_id = {} - for i, line in enumerate(f.readlines()): - if i == 0: - continue - ins_id, sem_id, _ = line.strip().split() - # shift id by 1 because we want it to start from 0! 
- # ignore_label becomes 255 - map_id[int(ins_id) - 1] = int(sem_id) - 1 - - ADE20K_150_CATEGORIES = [] - for cat_id, cat_name in enumerate(ADE20K_SEM_SEG_CATEGORIES): - ADE20K_150_CATEGORIES.append( - { - "name": cat_name, - "id": cat_id, - "isthing": int(cat_id in map_id.values()), - "color": PALETTE[cat_id], - } - ) - categories_dict = {cat["id"]: cat for cat in ADE20K_150_CATEGORIES} - - panoptic_json_categories = ADE20K_150_CATEGORIES[:] - panoptic_json_images = [] - panoptic_json_annotations = [] - - filenames = sorted(glob.glob(os.path.join(image_dir, "*.jpg"))) - for idx, filename in enumerate(tqdm.tqdm(filenames)): - panoptic_json_image = {} - panoptic_json_annotation = {} - - image_id = os.path.basename(filename).split(".")[0] - - panoptic_json_image["id"] = image_id - panoptic_json_image["file_name"] = os.path.basename(filename) - - original_format = np.array(Image.open(filename)) - panoptic_json_image["width"] = original_format.shape[1] - panoptic_json_image["height"] = original_format.shape[0] - - pan_seg = np.zeros( - (original_format.shape[0], original_format.shape[1], 3), dtype=np.uint8 - ) - id_generator = IdGenerator(categories_dict) - - filename_semantic = os.path.join(semantic_dir, image_id + ".png") - filename_instance = os.path.join(instance_dir, image_id + ".png") - - sem_seg = np.asarray(Image.open(filename_semantic)) - ins_seg = np.asarray(Image.open(filename_instance)) - - assert sem_seg.dtype == np.uint8 - assert ins_seg.dtype == np.uint8 - - semantic_cat_ids = sem_seg - 1 - instance_cat_ids = ins_seg[..., 0] - 1 - # instance id starts from 1! - # because 0 is reserved as VOID label - instance_ins_ids = ins_seg[..., 1] - - segm_info = [] - - # NOTE: there is some overlap between semantic and instance annotation - # thus we paste stuffs first - - # process stuffs - for semantic_cat_id in np.unique(semantic_cat_ids): - if semantic_cat_id == 255: - continue - if categories_dict[semantic_cat_id]["isthing"]: - continue - mask = semantic_cat_ids == semantic_cat_id - # should not have any overlap - assert pan_seg[mask].sum() == 0 - - segment_id, color = id_generator.get_id_and_color(semantic_cat_id) - pan_seg[mask] = color - - area = np.sum(mask) # segment area computation - # bbox computation for a segment - hor = np.sum(mask, axis=0) - hor_idx = np.nonzero(hor)[0] - x = hor_idx[0] - width = hor_idx[-1] - x + 1 - vert = np.sum(mask, axis=1) - vert_idx = np.nonzero(vert)[0] - y = vert_idx[0] - height = vert_idx[-1] - y + 1 - bbox = [int(x), int(y), int(width), int(height)] - - segm_info.append( - { - "id": int(segment_id), - "category_id": int(semantic_cat_id), - "area": int(area), - "bbox": bbox, - "iscrowd": 0, - } - ) - - # process things - for thing_id in np.unique(instance_ins_ids): - if thing_id == 0: - continue - mask = instance_ins_ids == thing_id - instance_cat_id = np.unique(instance_cat_ids[mask]) - assert len(instance_cat_id) == 1 - - semantic_cat_id = map_id[instance_cat_id[0]] - - segment_id, color = id_generator.get_id_and_color(semantic_cat_id) - pan_seg[mask] = color - - area = np.sum(mask) # segment area computation - # bbox computation for a segment - hor = np.sum(mask, axis=0) - hor_idx = np.nonzero(hor)[0] - x = hor_idx[0] - width = hor_idx[-1] - x + 1 - vert = np.sum(mask, axis=1) - vert_idx = np.nonzero(vert)[0] - y = vert_idx[0] - height = vert_idx[-1] - y + 1 - bbox = [int(x), int(y), int(width), int(height)] - - segm_info.append( - { - "id": int(segment_id), - "category_id": int(semantic_cat_id), - "area": int(area), - "bbox": bbox, - 
"iscrowd": 0, - } - ) - - panoptic_json_annotation = { - "image_id": image_id, - "file_name": image_id + ".png", - "segments_info": segm_info, - } - - Image.fromarray(pan_seg).save(os.path.join(out_folder, image_id + ".png")) - - panoptic_json_images.append(panoptic_json_image) - panoptic_json_annotations.append(panoptic_json_annotation) - - # save this - d = { - "images": panoptic_json_images, - "annotations": panoptic_json_annotations, - "categories": panoptic_json_categories, - } - - save_json(d, out_file) diff --git a/spaces/akhaliq/VQMIVC/ParallelWaveGAN/egs/jsss/voc1/path.sh b/spaces/akhaliq/VQMIVC/ParallelWaveGAN/egs/jsss/voc1/path.sh deleted file mode 100644 index b0ca27c615f70aa29e240222ec370f8ad4e7b45a..0000000000000000000000000000000000000000 --- a/spaces/akhaliq/VQMIVC/ParallelWaveGAN/egs/jsss/voc1/path.sh +++ /dev/null @@ -1,33 +0,0 @@ -# cuda related -export CUDA_HOME=/usr/local/cuda-10.0 -export LD_LIBRARY_PATH="${CUDA_HOME}/lib64:${LD_LIBRARY_PATH}" - -# path related -export PRJ_ROOT="${PWD}/../../.." -if [ -e "${PRJ_ROOT}/tools/venv/bin/activate" ]; then - # shellcheck disable=SC1090 - . "${PRJ_ROOT}/tools/venv/bin/activate" -fi - -# python related -export OMP_NUM_THREADS=1 -export PYTHONIOENCODING=UTF-8 -export MPL_BACKEND=Agg - -# check installation -if ! command -v parallel-wavegan-train > /dev/null; then - echo "Error: It seems setup is not finished." >&2 - echo "Error: Please setup your environment by following README.md" >&2 - return 1 -fi -if ! command -v jq > /dev/null; then - echo "Error: It seems jq is not installed." >&2 - echo "Error: Please install via \`sudo apt-get install jq\`." >&2 - echo "Error: If you do not have sudo, please download from https://stedolan.github.io/jq/download/." >&2 - return 1 -fi -if ! command -v yq > /dev/null; then - echo "Error: It seems yq is not installed." >&2 - echo "Error: Please install via \`pip install yq\`." >&2 - return 1 -fi diff --git a/spaces/akhaliq/stylegan3_clip/gui_utils/imgui_window.py b/spaces/akhaliq/stylegan3_clip/gui_utils/imgui_window.py deleted file mode 100644 index aaf7caa76f0e1261e490bce8ef1c8267a1e5c31d..0000000000000000000000000000000000000000 --- a/spaces/akhaliq/stylegan3_clip/gui_utils/imgui_window.py +++ /dev/null @@ -1,103 +0,0 @@ -# Copyright (c) 2021, NVIDIA CORPORATION & AFFILIATES. All rights reserved. -# -# NVIDIA CORPORATION and its licensors retain all intellectual property -# and proprietary rights in and to this software, related documentation -# and any modifications thereto. Any use, reproduction, disclosure or -# distribution of this software and related documentation without an express -# license agreement from NVIDIA CORPORATION is strictly prohibited. - -import os -import imgui -import imgui.integrations.glfw - -from . import glfw_window -from . import imgui_utils -from . import text_utils - -#---------------------------------------------------------------------------- - -class ImguiWindow(glfw_window.GlfwWindow): - def __init__(self, *, title='ImguiWindow', font=None, font_sizes=range(14,24), **glfw_kwargs): - if font is None: - font = text_utils.get_default_font() - font_sizes = {int(size) for size in font_sizes} - super().__init__(title=title, **glfw_kwargs) - - # Init fields. - self._imgui_context = None - self._imgui_renderer = None - self._imgui_fonts = None - self._cur_font_size = max(font_sizes) - - # Delete leftover imgui.ini to avoid unexpected behavior. - if os.path.isfile('imgui.ini'): - os.remove('imgui.ini') - - # Init ImGui. 
- self._imgui_context = imgui.create_context() - self._imgui_renderer = _GlfwRenderer(self._glfw_window) - self._attach_glfw_callbacks() - imgui.get_io().ini_saving_rate = 0 # Disable creating imgui.ini at runtime. - imgui.get_io().mouse_drag_threshold = 0 # Improve behavior with imgui_utils.drag_custom(). - self._imgui_fonts = {size: imgui.get_io().fonts.add_font_from_file_ttf(font, size) for size in font_sizes} - self._imgui_renderer.refresh_font_texture() - - def close(self): - self.make_context_current() - self._imgui_fonts = None - if self._imgui_renderer is not None: - self._imgui_renderer.shutdown() - self._imgui_renderer = None - if self._imgui_context is not None: - #imgui.destroy_context(self._imgui_context) # Commented out to avoid creating imgui.ini at the end. - self._imgui_context = None - super().close() - - def _glfw_key_callback(self, *args): - super()._glfw_key_callback(*args) - self._imgui_renderer.keyboard_callback(*args) - - @property - def font_size(self): - return self._cur_font_size - - @property - def spacing(self): - return round(self._cur_font_size * 0.4) - - def set_font_size(self, target): # Applied on next frame. - self._cur_font_size = min((abs(key - target), key) for key in self._imgui_fonts.keys())[1] - - def begin_frame(self): - # Begin glfw frame. - super().begin_frame() - - # Process imgui events. - self._imgui_renderer.mouse_wheel_multiplier = self._cur_font_size / 10 - if self.content_width > 0 and self.content_height > 0: - self._imgui_renderer.process_inputs() - - # Begin imgui frame. - imgui.new_frame() - imgui.push_font(self._imgui_fonts[self._cur_font_size]) - imgui_utils.set_default_style(spacing=self.spacing, indent=self.font_size, scrollbar=self.font_size+4) - - def end_frame(self): - imgui.pop_font() - imgui.render() - imgui.end_frame() - self._imgui_renderer.render(imgui.get_draw_data()) - super().end_frame() - -#---------------------------------------------------------------------------- -# Wrapper class for GlfwRenderer to fix a mouse wheel bug on Linux. - -class _GlfwRenderer(imgui.integrations.glfw.GlfwRenderer): - def __init__(self, *args, **kwargs): - super().__init__(*args, **kwargs) - self.mouse_wheel_multiplier = 1 - - def scroll_callback(self, window, x_offset, y_offset): - self.io.mouse_wheel += y_offset * self.mouse_wheel_multiplier - -#---------------------------------------------------------------------------- diff --git a/spaces/alamin655/websurfx/public/templates/settings.html b/spaces/alamin655/websurfx/public/templates/settings.html deleted file mode 100644 index 3c9721395cc941814bfe98058a803216cd50d4c0..0000000000000000000000000000000000000000 --- a/spaces/alamin655/websurfx/public/templates/settings.html +++ /dev/null @@ -1,22 +0,0 @@ -{{>header this}} -
-        Settings
-        {{> general_tab}} {{> user_interface_tab}} {{> engines_tab}} {{> cookies_tab}}
        - - -{{>footer}} diff --git a/spaces/alamin655/websurfx/src/cache/cacher.rs b/spaces/alamin655/websurfx/src/cache/cacher.rs deleted file mode 100644 index 12f88ffb0729df77867b784449a9ba6424f3d3a2..0000000000000000000000000000000000000000 --- a/spaces/alamin655/websurfx/src/cache/cacher.rs +++ /dev/null @@ -1,267 +0,0 @@ -//! This module provides the functionality to cache the aggregated results fetched and aggregated -//! from the upstream search engines in a json format. - -use error_stack::Report; -#[cfg(feature = "memory-cache")] -use mini_moka::sync::Cache as MokaCache; -#[cfg(feature = "memory-cache")] -use std::time::Duration; -use tokio::sync::Mutex; - -use crate::{config::parser::Config, models::aggregation_models::SearchResults}; - -use super::error::CacheError; -#[cfg(feature = "redis-cache")] -use super::redis_cacher::RedisCache; - -/// Different implementations for caching, currently it is possible to cache in-memory or in Redis. -#[derive(Clone)] -pub enum Cache { - /// Caching is disabled - Disabled, - #[cfg(all(feature = "redis-cache", not(feature = "memory-cache")))] - /// Encapsulates the Redis based cache - Redis(RedisCache), - #[cfg(all(feature = "memory-cache", not(feature = "redis-cache")))] - /// Contains the in-memory cache. - InMemory(MokaCache), - #[cfg(all(feature = "redis-cache", feature = "memory-cache"))] - /// Contains both the in-memory cache and Redis based cache - Hybrid(RedisCache, MokaCache), -} - -impl Cache { - /// A function that builds the cache from the given configuration. - /// - /// # Arguments - /// - /// * `config` - It takes the config struct as an argument. - /// - /// # Returns - /// - /// It returns a newly initialized variant based on the feature enabled by the user. - pub async fn build(_config: &Config) -> Self { - #[cfg(all(feature = "redis-cache", feature = "memory-cache"))] - { - log::info!("Using a hybrid cache"); - Cache::new_hybrid( - RedisCache::new(&_config.redis_url, 5) - .await - .expect("Redis cache configured"), - ) - } - #[cfg(all(feature = "redis-cache", not(feature = "memory-cache")))] - { - log::info!("Listening redis server on {}", &_config.redis_url); - Cache::new( - RedisCache::new(&_config.redis_url, 5) - .await - .expect("Redis cache configured"), - ) - } - #[cfg(all(feature = "memory-cache", not(feature = "redis-cache")))] - { - log::info!("Using an in-memory cache"); - Cache::new_in_memory() - } - #[cfg(not(any(feature = "memory-cache", feature = "redis-cache")))] - { - log::info!("Caching is disabled"); - Cache::Disabled - } - } - - /// A function that initializes a new connection pool struct. - /// - /// # Arguments - /// - /// * `redis_cache` - It takes the newly initialized connection pool struct as an argument. - /// - /// # Returns - /// - /// It returns a `Redis` variant with the newly initialized connection pool struct. - #[cfg(all(feature = "redis-cache", not(feature = "memory-cache")))] - pub fn new(redis_cache: RedisCache) -> Self { - Cache::Redis(redis_cache) - } - - /// A function that initializes the `in memory` cache which is used to cache the results in - /// memory with the search engine thus improving performance by making retrieval and caching of - /// results faster. - /// - /// # Returns - /// - /// It returns a `InMemory` variant with the newly initialized in memory cache type. 
- #[cfg(all(feature = "memory-cache", not(feature = "redis-cache")))] - pub fn new_in_memory() -> Self { - let cache = MokaCache::builder() - .max_capacity(1000) - .time_to_live(Duration::from_secs(60)) - .build(); - Cache::InMemory(cache) - } - - /// A function that initializes both in memory cache and redis client connection for being used - /// for managing hybrid cache which increases resiliancy of the search engine by allowing the - /// cache to switch to `in memory` caching if the `redis` cache server is temporarily - /// unavailable. - /// - /// # Arguments - /// - /// * `redis_cache` - It takes `redis` client connection struct as an argument. - /// - /// # Returns - /// - /// It returns a tuple variant `Hybrid` storing both the in-memory cache type and the `redis` - /// client connection struct. - #[cfg(all(feature = "redis-cache", feature = "memory-cache"))] - pub fn new_hybrid(redis_cache: RedisCache) -> Self { - let cache = MokaCache::builder() - .max_capacity(1000) - .time_to_live(Duration::from_secs(60)) - .build(); - Cache::Hybrid(redis_cache, cache) - } - - /// A function which fetches the cached json results as json string. - /// - /// # Arguments - /// - /// * `url` - It takes an url as a string. - /// - /// # Error - /// - /// Returns the `SearchResults` from the cache if the program executes normally otherwise - /// returns a `CacheError` if the results cannot be retrieved from the cache. - pub async fn cached_json(&mut self, _url: &str) -> Result> { - match self { - Cache::Disabled => Err(Report::new(CacheError::MissingValue)), - #[cfg(all(feature = "redis-cache", not(feature = "memory-cache")))] - Cache::Redis(redis_cache) => { - let json = redis_cache.cached_json(_url).await?; - Ok(serde_json::from_str::(&json) - .map_err(|_| CacheError::SerializationError)?) - } - #[cfg(all(feature = "memory-cache", not(feature = "redis-cache")))] - Cache::InMemory(in_memory) => match in_memory.get(&_url.to_string()) { - Some(res) => Ok(res), - None => Err(Report::new(CacheError::MissingValue)), - }, - #[cfg(all(feature = "redis-cache", feature = "memory-cache"))] - Cache::Hybrid(redis_cache, in_memory) => match redis_cache.cached_json(_url).await { - Ok(res) => Ok(serde_json::from_str::(&res) - .map_err(|_| CacheError::SerializationError)?), - Err(_) => match in_memory.get(&_url.to_string()) { - Some(res) => Ok(res), - None => Err(Report::new(CacheError::MissingValue)), - }, - }, - } - } - - /// A function which caches the results by using the `url` as the key and - /// `json results` as the value and stores it in the cache - /// - /// # Arguments - /// - /// * `json_results` - It takes the json results string as an argument. - /// * `url` - It takes the url as a String. - /// - /// # Error - /// - /// Returns a unit type if the program caches the given search results without a failure - /// otherwise it returns a `CacheError` if the search results cannot be cached due to a - /// failure. 
- pub async fn cache_results( - &mut self, - _search_results: &SearchResults, - _url: &str, - ) -> Result<(), Report> { - match self { - Cache::Disabled => Ok(()), - #[cfg(all(feature = "redis-cache", not(feature = "memory-cache")))] - Cache::Redis(redis_cache) => { - let json = serde_json::to_string(_search_results) - .map_err(|_| CacheError::SerializationError)?; - redis_cache.cache_results(&json, _url).await - } - #[cfg(all(feature = "memory-cache", not(feature = "redis-cache")))] - Cache::InMemory(cache) => { - cache.insert(_url.to_string(), _search_results.clone()); - Ok(()) - } - #[cfg(all(feature = "memory-cache", feature = "redis-cache"))] - Cache::Hybrid(redis_cache, cache) => { - let json = serde_json::to_string(_search_results) - .map_err(|_| CacheError::SerializationError)?; - match redis_cache.cache_results(&json, _url).await { - Ok(_) => Ok(()), - Err(_) => { - cache.insert(_url.to_string(), _search_results.clone()); - Ok(()) - } - } - } - } - } -} - -/// A structure to efficiently share the cache between threads - as it is protected by a Mutex. -pub struct SharedCache { - /// The internal cache protected from concurrent access by a mutex - cache: Mutex, -} - -impl SharedCache { - /// A function that creates a new `SharedCache` from a Cache implementation. - /// - /// # Arguments - /// - /// * `cache` - It takes the `Cache` enum variant as an argument with the prefered cache type. - /// - /// Returns a newly constructed `SharedCache` struct. - pub fn new(cache: Cache) -> Self { - Self { - cache: Mutex::new(cache), - } - } - - /// A getter function which retrieves the cached SearchResulsts from the internal cache. - /// - /// # Arguments - /// - /// * `url` - It takes the search url as an argument which will be used as the key to fetch the - /// cached results from the cache. - /// - /// # Error - /// - /// Returns a `SearchResults` struct containing the search results from the cache if nothing - /// goes wrong otherwise returns a `CacheError`. - pub async fn cached_json(&self, url: &str) -> Result> { - let mut mut_cache = self.cache.lock().await; - mut_cache.cached_json(url).await - } - - /// A setter function which caches the results by using the `url` as the key and - /// `SearchResults` as the value. - /// - /// # Arguments - /// - /// * `search_results` - It takes the `SearchResults` as an argument which are results that - /// needs to be cached. - /// * `url` - It takes the search url as an argument which will be used as the key for storing - /// results in the cache. - /// - /// # Error - /// - /// Returns an unit type if the results are cached succesfully otherwise returns a `CacheError` - /// on a failure. 
- pub async fn cache_results( - &self, - search_results: &SearchResults, - url: &str, - ) -> Result<(), Report> { - let mut mut_cache = self.cache.lock().await; - mut_cache.cache_results(search_results, url).await - } -} diff --git a/spaces/ali-ghamdan/gfp-Gans/scripts/convert_gfpganv_to_clean.py b/spaces/ali-ghamdan/gfp-Gans/scripts/convert_gfpganv_to_clean.py deleted file mode 100644 index 8fdccb6195c29e78cec2ac8dcc6f9ccb604e35ca..0000000000000000000000000000000000000000 --- a/spaces/ali-ghamdan/gfp-Gans/scripts/convert_gfpganv_to_clean.py +++ /dev/null @@ -1,164 +0,0 @@ -import argparse -import math -import torch - -from gfpgan.archs.gfpganv1_clean_arch import GFPGANv1Clean - - -def modify_checkpoint(checkpoint_bilinear, checkpoint_clean): - for ori_k, ori_v in checkpoint_bilinear.items(): - if 'stylegan_decoder' in ori_k: - if 'style_mlp' in ori_k: # style_mlp_layers - lr_mul = 0.01 - prefix, name, idx, var = ori_k.split('.') - idx = (int(idx) * 2) - 1 - crt_k = f'{prefix}.{name}.{idx}.{var}' - if var == 'weight': - _, c_in = ori_v.size() - scale = (1 / math.sqrt(c_in)) * lr_mul - crt_v = ori_v * scale * 2**0.5 - else: - crt_v = ori_v * lr_mul * 2**0.5 - checkpoint_clean[crt_k] = crt_v - elif 'modulation' in ori_k: # modulation in StyleConv - lr_mul = 1 - crt_k = ori_k - var = ori_k.split('.')[-1] - if var == 'weight': - _, c_in = ori_v.size() - scale = (1 / math.sqrt(c_in)) * lr_mul - crt_v = ori_v * scale - else: - crt_v = ori_v * lr_mul - checkpoint_clean[crt_k] = crt_v - elif 'style_conv' in ori_k: - # StyleConv in style_conv1 and style_convs - if 'activate' in ori_k: # FusedLeakyReLU - # eg. style_conv1.activate.bias - # eg. style_convs.13.activate.bias - split_rlt = ori_k.split('.') - if len(split_rlt) == 4: - prefix, name, _, var = split_rlt - crt_k = f'{prefix}.{name}.{var}' - elif len(split_rlt) == 5: - prefix, name, idx, _, var = split_rlt - crt_k = f'{prefix}.{name}.{idx}.{var}' - crt_v = ori_v * 2**0.5 # 2**0.5 used in FusedLeakyReLU - c = crt_v.size(0) - checkpoint_clean[crt_k] = crt_v.view(1, c, 1, 1) - elif 'modulated_conv' in ori_k: - # eg. style_conv1.modulated_conv.weight - # eg. style_convs.13.modulated_conv.weight - _, c_out, c_in, k1, k2 = ori_v.size() - scale = 1 / math.sqrt(c_in * k1 * k2) - crt_k = ori_k - checkpoint_clean[crt_k] = ori_v * scale - elif 'weight' in ori_k: - crt_k = ori_k - checkpoint_clean[crt_k] = ori_v * 2**0.5 - elif 'to_rgb' in ori_k: # StyleConv in to_rgb1 and to_rgbs - if 'modulated_conv' in ori_k: - # eg. to_rgb1.modulated_conv.weight - # eg. 
to_rgbs.5.modulated_conv.weight - _, c_out, c_in, k1, k2 = ori_v.size() - scale = 1 / math.sqrt(c_in * k1 * k2) - crt_k = ori_k - checkpoint_clean[crt_k] = ori_v * scale - else: - crt_k = ori_k - checkpoint_clean[crt_k] = ori_v - else: - crt_k = ori_k - checkpoint_clean[crt_k] = ori_v - # end of 'stylegan_decoder' - elif 'conv_body_first' in ori_k or 'final_conv' in ori_k: - # key name - name, _, var = ori_k.split('.') - crt_k = f'{name}.{var}' - # weight and bias - if var == 'weight': - c_out, c_in, k1, k2 = ori_v.size() - scale = 1 / math.sqrt(c_in * k1 * k2) - checkpoint_clean[crt_k] = ori_v * scale * 2**0.5 - else: - checkpoint_clean[crt_k] = ori_v * 2**0.5 - elif 'conv_body' in ori_k: - if 'conv_body_up' in ori_k: - ori_k = ori_k.replace('conv2.weight', 'conv2.1.weight') - ori_k = ori_k.replace('skip.weight', 'skip.1.weight') - name1, idx1, name2, _, var = ori_k.split('.') - crt_k = f'{name1}.{idx1}.{name2}.{var}' - if name2 == 'skip': - c_out, c_in, k1, k2 = ori_v.size() - scale = 1 / math.sqrt(c_in * k1 * k2) - checkpoint_clean[crt_k] = ori_v * scale / 2**0.5 - else: - if var == 'weight': - c_out, c_in, k1, k2 = ori_v.size() - scale = 1 / math.sqrt(c_in * k1 * k2) - checkpoint_clean[crt_k] = ori_v * scale - else: - checkpoint_clean[crt_k] = ori_v - if 'conv1' in ori_k: - checkpoint_clean[crt_k] *= 2**0.5 - elif 'toRGB' in ori_k: - crt_k = ori_k - if 'weight' in ori_k: - c_out, c_in, k1, k2 = ori_v.size() - scale = 1 / math.sqrt(c_in * k1 * k2) - checkpoint_clean[crt_k] = ori_v * scale - else: - checkpoint_clean[crt_k] = ori_v - elif 'final_linear' in ori_k: - crt_k = ori_k - if 'weight' in ori_k: - _, c_in = ori_v.size() - scale = 1 / math.sqrt(c_in) - checkpoint_clean[crt_k] = ori_v * scale - else: - checkpoint_clean[crt_k] = ori_v - elif 'condition' in ori_k: - crt_k = ori_k - if '0.weight' in ori_k: - c_out, c_in, k1, k2 = ori_v.size() - scale = 1 / math.sqrt(c_in * k1 * k2) - checkpoint_clean[crt_k] = ori_v * scale * 2**0.5 - elif '0.bias' in ori_k: - checkpoint_clean[crt_k] = ori_v * 2**0.5 - elif '2.weight' in ori_k: - c_out, c_in, k1, k2 = ori_v.size() - scale = 1 / math.sqrt(c_in * k1 * k2) - checkpoint_clean[crt_k] = ori_v * scale - elif '2.bias' in ori_k: - checkpoint_clean[crt_k] = ori_v - - return checkpoint_clean - - -if __name__ == '__main__': - parser = argparse.ArgumentParser() - parser.add_argument('--ori_path', type=str, help='Path to the original model') - parser.add_argument('--narrow', type=float, default=1) - parser.add_argument('--channel_multiplier', type=float, default=2) - parser.add_argument('--save_path', type=str) - args = parser.parse_args() - - ori_ckpt = torch.load(args.ori_path)['params_ema'] - - net = GFPGANv1Clean( - 512, - num_style_feat=512, - channel_multiplier=args.channel_multiplier, - decoder_load_path=None, - fix_decoder=False, - # for stylegan decoder - num_mlp=8, - input_is_latent=True, - different_w=True, - narrow=args.narrow, - sft_half=True) - crt_ckpt = net.state_dict() - - crt_ckpt = modify_checkpoint(ori_ckpt, crt_ckpt) - print(f'Save to {args.save_path}.') - torch.save(dict(params_ema=crt_ckpt), args.save_path, _use_new_zipfile_serialization=False) diff --git a/spaces/allknowingroger/Image-Models-Test29/app.py b/spaces/allknowingroger/Image-Models-Test29/app.py deleted file mode 100644 index 9790a8b1377fc8dec58cefda8b9f12ff1bccc89a..0000000000000000000000000000000000000000 --- a/spaces/allknowingroger/Image-Models-Test29/app.py +++ /dev/null @@ -1,144 +0,0 @@ -import gradio as gr -# import os -# import sys -# from pathlib import 
Path -import time - -models =[ - "digiplay/YabaLMixTrue25D_V1.0", - "lukemarsden/lora-trained-xl", - "WALIDALI/bekinorrev", - "Daniil-plotnikov/russian-vision-v4", - "LucianStorm/panda", - "SvenN/sdxl-emoji", - "digiplay/OLDFish_2348_diffusers", - "digiplay/BlankCanvas_v1", - "digiplay/Juggernaut_final", -] - - -model_functions = {} -model_idx = 1 -for model_path in models: - try: - model_functions[model_idx] = gr.Interface.load(f"models/{model_path}", live=False, preprocess=True, postprocess=False) - except Exception as error: - def the_fn(txt): - return None - model_functions[model_idx] = gr.Interface(fn=the_fn, inputs=["text"], outputs=["image"]) - model_idx+=1 - - -def send_it_idx(idx): - def send_it_fn(prompt): - output = (model_functions.get(str(idx)) or model_functions.get(str(1)))(prompt) - return output - return send_it_fn - -def get_prompts(prompt_text): - return prompt_text - -def clear_it(val): - if int(val) != 0: - val = 0 - else: - val = 0 - pass - return val - -def all_task_end(cnt,t_stamp): - to = t_stamp + 60 - et = time.time() - if et > to and t_stamp != 0: - d = gr.update(value=0) - tog = gr.update(value=1) - #print(f'to: {to} et: {et}') - else: - if cnt != 0: - d = gr.update(value=et) - else: - d = gr.update(value=0) - tog = gr.update(value=0) - #print (f'passing: to: {to} et: {et}') - pass - return d, tog - -def all_task_start(): - print("\n\n\n\n\n\n\n") - t = time.gmtime() - t_stamp = time.time() - current_time = time.strftime("%H:%M:%S", t) - return gr.update(value=t_stamp), gr.update(value=t_stamp), gr.update(value=0) - -def clear_fn(): - nn = len(models) - return tuple([None, *[None for _ in range(nn)]]) - - - -with gr.Blocks(title="SD Models") as my_interface: - with gr.Column(scale=12): - # with gr.Row(): - # gr.Markdown("""- Primary prompt: 你想画的内容(英文单词,如 a cat, 加英文逗号效果更好;点 Improve 按钮进行完善)\n- Real prompt: 完善后的提示词,出现后再点右边的 Run 按钮开始运行""") - with gr.Row(): - with gr.Row(scale=6): - primary_prompt=gr.Textbox(label="Prompt", value="") - # real_prompt=gr.Textbox(label="Real prompt") - with gr.Row(scale=6): - # improve_prompts_btn=gr.Button("Improve") - with gr.Row(): - run=gr.Button("Run",variant="primary") - clear_btn=gr.Button("Clear") - with gr.Row(): - sd_outputs = {} - model_idx = 1 - for model_path in models: - with gr.Column(scale=3, min_width=320): - with gr.Box(): - sd_outputs[model_idx] = gr.Image(label=model_path) - pass - model_idx += 1 - pass - pass - - with gr.Row(visible=False): - start_box=gr.Number(interactive=False) - end_box=gr.Number(interactive=False) - tog_box=gr.Textbox(value=0,interactive=False) - - start_box.change( - all_task_end, - [start_box, end_box], - [start_box, tog_box], - every=1, - show_progress=False) - - primary_prompt.submit(all_task_start, None, [start_box, end_box, tog_box]) - run.click(all_task_start, None, [start_box, end_box, tog_box]) - runs_dict = {} - model_idx = 1 - for model_path in models: - runs_dict[model_idx] = run.click(model_functions[model_idx], inputs=[primary_prompt], outputs=[sd_outputs[model_idx]]) - model_idx += 1 - pass - pass - - # improve_prompts_btn_clicked=improve_prompts_btn.click( - # get_prompts, - # inputs=[primary_prompt], - # outputs=[primary_prompt], - # cancels=list(runs_dict.values())) - clear_btn.click( - clear_fn, - None, - [primary_prompt, *list(sd_outputs.values())], - cancels=[*list(runs_dict.values())]) - tog_box.change( - clear_it, - tog_box, - tog_box, - cancels=[*list(runs_dict.values())]) - -my_interface.queue(concurrency_count=600, status_update_rate=1) 
-my_interface.launch(inline=True, show_api=False) - \ No newline at end of file diff --git a/spaces/amankishore/sjc/voxnerf/vox.py b/spaces/amankishore/sjc/voxnerf/vox.py deleted file mode 100644 index 42f23fc5f6f365d755af677c09a46eb202af7d56..0000000000000000000000000000000000000000 --- a/spaces/amankishore/sjc/voxnerf/vox.py +++ /dev/null @@ -1,271 +0,0 @@ -import numpy as np -import torch -import torch.nn as nn -import torch.nn.functional as F -from einops import rearrange -from my.registry import Registry - -VOXRF_REGISTRY = Registry("VoxRF") - - -def to_grid_samp_coords(xyz_sampled, aabb): - # output range is [-1, 1] - aabbSize = aabb[1] - aabb[0] - return (xyz_sampled - aabb[0]) / aabbSize * 2 - 1 - - -def add_non_state_tsr(nn_module, key, val): - # tsr added here does not appear in module's state_dict; - nn_module.register_buffer(key, val, persistent=False) - - -@VOXRF_REGISTRY.register() -class VoxRF(nn.Module): - def __init__( - self, aabb, grid_size, step_ratio=0.5, - density_shift=-10, ray_march_weight_thres=0.0001, c=3, - blend_bg_texture=True, bg_texture_hw=64 - ): - assert aabb.shape == (2, 3) - xyz = grid_size - del grid_size - - super().__init__() - add_non_state_tsr(self, "aabb", torch.tensor(aabb, dtype=torch.float32)) - add_non_state_tsr(self, "grid_size", torch.LongTensor(xyz)) - - self.density_shift = density_shift - self.ray_march_weight_thres = ray_march_weight_thres - self.step_ratio = step_ratio - - zyx = xyz[::-1] - self.density = torch.nn.Parameter( - torch.zeros((1, 1, *zyx)) - ) - self.color = torch.nn.Parameter( - torch.randn((1, c, *zyx)) - ) - - self.blend_bg_texture = blend_bg_texture - self.bg = torch.nn.Parameter( - torch.randn((1, c, bg_texture_hw, bg_texture_hw)) - ) - - self.c = c - self.alphaMask = None - self.feats2color = lambda feats: torch.sigmoid(feats) - - self.d_scale = torch.nn.Parameter(torch.tensor(0.0)) - - @property - def device(self): - return self.density.device - - def compute_density_feats(self, xyz_sampled): - xyz_sampled = to_grid_samp_coords(xyz_sampled, self.aabb) - n = xyz_sampled.shape[0] - xyz_sampled = xyz_sampled.reshape(1, n, 1, 1, 3) - σ = F.grid_sample(self.density, xyz_sampled).view(n) - # We notice that DreamFusion also uses an exp scaling on densities. - # The technique here is developed BEFORE DreamFusion came out, - # and forms part of our upcoming technical report discussing invariant - # scaling for volume rendering. The reseach was presented to our - # funding agency (TRI) on Aug. 25th, and discussed with a few researcher friends - # during the period. 
- σ = σ * torch.exp(self.d_scale) - σ = F.softplus(σ + self.density_shift) - return σ - - def compute_app_feats(self, xyz_sampled): - xyz_sampled = to_grid_samp_coords(xyz_sampled, self.aabb) - n = xyz_sampled.shape[0] - xyz_sampled = xyz_sampled.reshape(1, n, 1, 1, 3) - feats = F.grid_sample(self.color, xyz_sampled).view(self.c, n) - feats = feats.T - return feats - - def compute_bg(self, uv): - n = uv.shape[0] - uv = uv.reshape(1, n, 1, 2) - feats = F.grid_sample(self.bg, uv).view(self.c, n) - feats = feats.T - return feats - - def get_per_voxel_length(self): - aabb_size = self.aabb[1] - self.aabb[0] - # NOTE I am not -1 on grid_size here; - # I interpret a voxel as a square and val sits at the center; like pixel - # this is consistent with align_corners=False - vox_xyz_length = aabb_size / self.grid_size - return vox_xyz_length - - def get_num_samples(self, max_size=None): - # funny way to set step size; whatever - unit = torch.mean(self.get_per_voxel_length()) - step_size = unit * self.step_ratio - step_size = step_size.item() # get the float - - if max_size is None: - aabb_size = self.aabb[1] - self.aabb[0] - aabb_diag = torch.norm(aabb_size) - max_size = aabb_diag - - num_samples = int((max_size / step_size).item()) + 1 - return num_samples, step_size - - @torch.no_grad() - def resample(self, target_xyz: list): - zyx = target_xyz[::-1] - self.density = self._resamp_param(self.density, zyx) - self.color = self._resamp_param(self.color, zyx) - target_xyz = torch.LongTensor(target_xyz).to(self.aabb.device) - add_non_state_tsr(self, "grid_size", target_xyz) - - @staticmethod - def _resamp_param(param, target_size): - return torch.nn.Parameter(F.interpolate( - param.data, size=target_size, mode="trilinear" - )) - - @torch.no_grad() - def compute_volume_alpha(self): - xyz = self.grid_size.tolist() - unit_xyz = self.get_per_voxel_length() - xs, ys, zs = torch.meshgrid( - *[torch.arange(nd) for nd in xyz], indexing="ij" - ) - pts = torch.stack([xs, ys, zs], dim=-1).to(unit_xyz.device) # [nx, ny, nz, 3] - pts = self.aabb[0] + (pts + 0.5) * unit_xyz - pts = pts.reshape(-1, 3) - # could potentially filter with alpha mask itself if exists - σ = self.compute_density_feats(pts) - d = torch.mean(unit_xyz) - α = 1 - torch.exp(-σ * d) - α = rearrange(α.view(xyz), "x y z -> 1 1 z y x") - α = α.contiguous() - return α - - @torch.no_grad() - def make_alpha_mask(self): - α = self.compute_volume_alpha() - ks = 3 - α = F.max_pool3d(α, kernel_size=ks, padding=ks // 2, stride=1) - α = (α > 0.08).float() - vol_mask = AlphaMask(self.aabb, α) - self.alphaMask = vol_mask - - def state_dict(self, *args, **kwargs): - state = super().state_dict(*args, **kwargs) - if self.alphaMask is not None: - state['alpha_mask'] = self.alphaMask.export_state() - return state - - def load_state_dict(self, state_dict): - if 'alpha_mask' in state_dict.keys(): - state = state_dict.pop("alpha_mask") - self.alphaMask = AlphaMask.from_state(state) - return super().load_state_dict(state_dict, strict=True) - - -@VOXRF_REGISTRY.register() -class V_SJC(VoxRF): - """ - For SJC, when sampling density σ, add a gaussian ball offset - """ - def __init__(self, *args, **kwargs): - super().__init__(*args, **kwargs) - # rendering color in [-1, 1] range, since score models all operate on centered img - self.feats2color = lambda feats: torch.sigmoid(feats) * 2 - 1 - - def opt_params(self): - groups = [] - for name, param in self.named_parameters(): - # print(f"{name} {param.shape}") - grp = {"params": param} - if name in ["bg"]: - grp["lr"] = 0.0001 
- if name in ["density"]: - # grp["lr"] = 0. - pass - groups.append(grp) - return groups - - def annealed_opt_params(self, base_lr, σ): - groups = [] - for name, param in self.named_parameters(): - # print(f"{name} {param.shape}") - grp = {"params": param, "lr": base_lr * σ} - if name in ["density"]: - grp["lr"] = base_lr * σ - if name in ["d_scale"]: - grp["lr"] = 0. - if name in ["color"]: - grp["lr"] = base_lr * σ - if name in ["bg"]: - grp["lr"] = 0.01 - groups.append(grp) - return groups - - -@VOXRF_REGISTRY.register() -class V_SD(V_SJC): - def __init__(self, *args, **kwargs): - super().__init__(*args, **kwargs) - # rendering in feature space; no sigmoid thresholding - self.feats2color = lambda feats: feats - - -class AlphaMask(nn.Module): - def __init__(self, aabb, alphas): - super().__init__() - zyx = list(alphas.shape[-3:]) - add_non_state_tsr(self, "alphas", alphas.view(1, 1, *zyx)) - xyz = zyx[::-1] - add_non_state_tsr(self, "grid_size", torch.LongTensor(xyz)) - add_non_state_tsr(self, "aabb", aabb) - - def sample_alpha(self, xyz_pts): - xyz_pts = to_grid_samp_coords(xyz_pts, self.aabb) - xyz_pts = xyz_pts.view(1, -1, 1, 1, 3) - α = F.grid_sample(self.alphas, xyz_pts).view(-1) - return α - - def export_state(self): - state = {} - alphas = self.alphas.bool().cpu().numpy() - state['shape'] = alphas.shape - state['mask'] = np.packbits(alphas.reshape(-1)) - state['aabb'] = self.aabb.cpu() - return state - - @classmethod - def from_state(cls, state): - shape = state['shape'] - mask = state['mask'] - aabb = state['aabb'] - - length = np.prod(shape) - alphas = torch.from_numpy( - np.unpackbits(mask)[:length].reshape(shape) - ) - amask = cls(aabb, alphas.float()) - return amask - - -def test(): - device = torch.device("cuda:1") - - aabb = 1.5 * np.array([ - [-1, -1, -1], - [1, 1, 1] - ]) - model = VoxRF(aabb, [10, 20, 30]) - model.to(device) - print(model.density.shape) - print(model.grid_size) - - return - - -if __name__ == "__main__": - test() diff --git a/spaces/amir0900/s/library.py b/spaces/amir0900/s/library.py deleted file mode 100644 index 6edbc144f026dac0419d43b1d05988fef60e6eb6..0000000000000000000000000000000000000000 --- a/spaces/amir0900/s/library.py +++ /dev/null @@ -1,3 +0,0 @@ -import os - -os.system("pip install rubpy==5.0.5") \ No newline at end of file diff --git a/spaces/archietram/Medical_Image_Classifier/app.py b/spaces/archietram/Medical_Image_Classifier/app.py deleted file mode 100644 index 68f06df5d4747a8fefb9df22022fa0e0646110eb..0000000000000000000000000000000000000000 --- a/spaces/archietram/Medical_Image_Classifier/app.py +++ /dev/null @@ -1,22 +0,0 @@ -from fastai.vision.all import * -import gradio as gr -import pathlib -pathlib.WindowsPath = pathlib.PosixPath - -learn = load_learner("export.pkl") - -categories = ('CT', 'MRI', "Ultrasound", "X-Ray") - -def classify_image(img): - pred,idx,probs = learn.predict(img) - return dict(zip(categories, map(float,probs))) - -image = gr.inputs.Image(shape=(192,192)) -label = gr.outputs.Label() -examples = ['mri.jpg','ct.jpg','ultrasound.jpg', 'xray.jpg'] -title = 'Medical Image Classifier' -description = 'You need to know whether you are dealing with an MRI , X-Ray, CT, or Ultrasound image, and you need an answer fast? Then you have come to the right place. Upload the picture to classify it.' -article = "Author: Archie Tram. 
" - -intf = gr.Interface(fn=classify_image, inputs=image, outputs=label, examples=examples, title=title, description=description, article=article) -intf.launch(inline=False) \ No newline at end of file diff --git a/spaces/ardha27/rvc_TTS/config.py b/spaces/ardha27/rvc_TTS/config.py deleted file mode 100644 index 4038dad0ac30ba03b6271499f4e37bbc745a2032..0000000000000000000000000000000000000000 --- a/spaces/ardha27/rvc_TTS/config.py +++ /dev/null @@ -1,115 +0,0 @@ -import argparse -import sys -import torch -from multiprocessing import cpu_count - - -class Config: - def __init__(self): - self.device = "cuda:0" - self.is_half = True - self.n_cpu = 0 - self.gpu_name = None - self.gpu_mem = None - ( - self.python_cmd, - self.listen_port, - self.iscolab, - self.noparallel, - self.noautoopen, - ) = self.arg_parse() - self.x_pad, self.x_query, self.x_center, self.x_max = self.device_config() - - @staticmethod - def arg_parse() -> tuple: - exe = sys.executable or "python" - parser = argparse.ArgumentParser() - parser.add_argument("--port", type=int, default=7865, help="Listen port") - parser.add_argument("--pycmd", type=str, default=exe, help="Python command") - parser.add_argument("--colab", action="store_true", help="Launch in colab") - parser.add_argument( - "--noparallel", action="store_true", help="Disable parallel processing" - ) - parser.add_argument( - "--noautoopen", - action="store_true", - help="Do not open in browser automatically", - ) - cmd_opts = parser.parse_args() - - cmd_opts.port = cmd_opts.port if 0 <= cmd_opts.port <= 65535 else 7865 - - return ( - cmd_opts.pycmd, - cmd_opts.port, - cmd_opts.colab, - cmd_opts.noparallel, - cmd_opts.noautoopen, - ) - - # has_mps is only available in nightly pytorch (for now) and MasOS 12.3+. - # check `getattr` and try it for compatibility - @staticmethod - def has_mps() -> bool: - if not torch.backends.mps.is_available(): - return False - try: - torch.zeros(1).to(torch.device("mps")) - return True - except Exception: - return False - - def device_config(self) -> tuple: - if torch.cuda.is_available(): - i_device = int(self.device.split(":")[-1]) - self.gpu_name = torch.cuda.get_device_name(i_device) - if ( - ("16" in self.gpu_name and "V100" not in self.gpu_name.upper()) - or "P40" in self.gpu_name.upper() - or "1060" in self.gpu_name - or "1070" in self.gpu_name - or "1080" in self.gpu_name - ): - print("Found GPU", self.gpu_name, ", force to fp32") - self.is_half = False - else: - print("Found GPU", self.gpu_name) - self.gpu_mem = int( - torch.cuda.get_device_properties(i_device).total_memory - / 1024 - / 1024 - / 1024 - + 0.4 - ) - elif self.has_mps(): - print("No supported Nvidia GPU found, use MPS instead") - self.device = "mps" - self.is_half = False - else: - print("No supported Nvidia GPU found, use CPU instead") - self.device = "cpu" - self.is_half = False - - if self.n_cpu == 0: - self.n_cpu = cpu_count() - - if self.is_half: - # 6G显存配置 - x_pad = 3 - x_query = 10 - x_center = 60 - x_max = 65 - else: - # 5G显存配置 - x_pad = 1 - x_query = 6 - x_center = 38 - x_max = 41 - - if self.gpu_mem != None and self.gpu_mem <= 4: - x_pad = 1 - x_query = 5 - x_center = 30 - x_max = 32 - - return x_pad, x_query, x_center, x_max diff --git a/spaces/artificialguybr/video-dubbing/TTS/tests/text_tests/test_text_cleaners.py b/spaces/artificialguybr/video-dubbing/TTS/tests/text_tests/test_text_cleaners.py deleted file mode 100644 index fcfa71e77dde8daa6002aa71a56e4f8ca96a51a7..0000000000000000000000000000000000000000 --- 
a/spaces/artificialguybr/video-dubbing/TTS/tests/text_tests/test_text_cleaners.py +++ /dev/null @@ -1,21 +0,0 @@ -#!/usr/bin/env python3 - -from TTS.tts.utils.text.cleaners import english_cleaners, phoneme_cleaners - - -def test_time() -> None: - assert english_cleaners("It's 11:00") == "it's eleven a m" - assert english_cleaners("It's 9:01") == "it's nine oh one a m" - assert english_cleaners("It's 16:00") == "it's four p m" - assert english_cleaners("It's 00:00 am") == "it's twelve a m" - - -def test_currency() -> None: - assert phoneme_cleaners("It's $10.50") == "It's ten dollars fifty cents" - assert phoneme_cleaners("£1.1") == "one pound sterling one penny" - assert phoneme_cleaners("¥1") == "one yen" - - -def test_expand_numbers() -> None: - assert phoneme_cleaners("-1") == "minus one" - assert phoneme_cleaners("1") == "one" diff --git a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/PIL/PdfParser.py b/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/PIL/PdfParser.py deleted file mode 100644 index fd5cc5a61e3262017a39565d550bbe26a649510a..0000000000000000000000000000000000000000 --- a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/PIL/PdfParser.py +++ /dev/null @@ -1,998 +0,0 @@ -import calendar -import codecs -import collections -import mmap -import os -import re -import time -import zlib - - -# see 7.9.2.2 Text String Type on page 86 and D.3 PDFDocEncoding Character Set -# on page 656 -def encode_text(s): - return codecs.BOM_UTF16_BE + s.encode("utf_16_be") - - -PDFDocEncoding = { - 0x16: "\u0017", - 0x18: "\u02D8", - 0x19: "\u02C7", - 0x1A: "\u02C6", - 0x1B: "\u02D9", - 0x1C: "\u02DD", - 0x1D: "\u02DB", - 0x1E: "\u02DA", - 0x1F: "\u02DC", - 0x80: "\u2022", - 0x81: "\u2020", - 0x82: "\u2021", - 0x83: "\u2026", - 0x84: "\u2014", - 0x85: "\u2013", - 0x86: "\u0192", - 0x87: "\u2044", - 0x88: "\u2039", - 0x89: "\u203A", - 0x8A: "\u2212", - 0x8B: "\u2030", - 0x8C: "\u201E", - 0x8D: "\u201C", - 0x8E: "\u201D", - 0x8F: "\u2018", - 0x90: "\u2019", - 0x91: "\u201A", - 0x92: "\u2122", - 0x93: "\uFB01", - 0x94: "\uFB02", - 0x95: "\u0141", - 0x96: "\u0152", - 0x97: "\u0160", - 0x98: "\u0178", - 0x99: "\u017D", - 0x9A: "\u0131", - 0x9B: "\u0142", - 0x9C: "\u0153", - 0x9D: "\u0161", - 0x9E: "\u017E", - 0xA0: "\u20AC", -} - - -def decode_text(b): - if b[: len(codecs.BOM_UTF16_BE)] == codecs.BOM_UTF16_BE: - return b[len(codecs.BOM_UTF16_BE) :].decode("utf_16_be") - else: - return "".join(PDFDocEncoding.get(byte, chr(byte)) for byte in b) - - -class PdfFormatError(RuntimeError): - """An error that probably indicates a syntactic or semantic error in the - PDF file structure""" - - pass - - -def check_format_condition(condition, error_message): - if not condition: - raise PdfFormatError(error_message) - - -class IndirectReference( - collections.namedtuple("IndirectReferenceTuple", ["object_id", "generation"]) -): - def __str__(self): - return "%s %s R" % self - - def __bytes__(self): - return self.__str__().encode("us-ascii") - - def __eq__(self, other): - return ( - other.__class__ is self.__class__ - and other.object_id == self.object_id - and other.generation == self.generation - ) - - def __ne__(self, other): - return not (self == other) - - def __hash__(self): - return hash((self.object_id, self.generation)) - - -class IndirectObjectDef(IndirectReference): - def __str__(self): - return "%s %s obj" % self - - -class XrefTable: - def __init__(self): - self.existing_entries = {} # object ID => (offset, generation) - self.new_entries = {} # object ID => (offset, 
generation) - self.deleted_entries = {0: 65536} # object ID => generation - self.reading_finished = False - - def __setitem__(self, key, value): - if self.reading_finished: - self.new_entries[key] = value - else: - self.existing_entries[key] = value - if key in self.deleted_entries: - del self.deleted_entries[key] - - def __getitem__(self, key): - try: - return self.new_entries[key] - except KeyError: - return self.existing_entries[key] - - def __delitem__(self, key): - if key in self.new_entries: - generation = self.new_entries[key][1] + 1 - del self.new_entries[key] - self.deleted_entries[key] = generation - elif key in self.existing_entries: - generation = self.existing_entries[key][1] + 1 - self.deleted_entries[key] = generation - elif key in self.deleted_entries: - generation = self.deleted_entries[key] - else: - raise IndexError( - "object ID " + str(key) + " cannot be deleted because it doesn't exist" - ) - - def __contains__(self, key): - return key in self.existing_entries or key in self.new_entries - - def __len__(self): - return len( - set(self.existing_entries.keys()) - | set(self.new_entries.keys()) - | set(self.deleted_entries.keys()) - ) - - def keys(self): - return ( - set(self.existing_entries.keys()) - set(self.deleted_entries.keys()) - ) | set(self.new_entries.keys()) - - def write(self, f): - keys = sorted(set(self.new_entries.keys()) | set(self.deleted_entries.keys())) - deleted_keys = sorted(set(self.deleted_entries.keys())) - startxref = f.tell() - f.write(b"xref\n") - while keys: - # find a contiguous sequence of object IDs - prev = None - for index, key in enumerate(keys): - if prev is None or prev + 1 == key: - prev = key - else: - contiguous_keys = keys[:index] - keys = keys[index:] - break - else: - contiguous_keys = keys - keys = None - f.write(b"%d %d\n" % (contiguous_keys[0], len(contiguous_keys))) - for object_id in contiguous_keys: - if object_id in self.new_entries: - f.write(b"%010d %05d n \n" % self.new_entries[object_id]) - else: - this_deleted_object_id = deleted_keys.pop(0) - check_format_condition( - object_id == this_deleted_object_id, - f"expected the next deleted object ID to be {object_id}, " - f"instead found {this_deleted_object_id}", - ) - try: - next_in_linked_list = deleted_keys[0] - except IndexError: - next_in_linked_list = 0 - f.write( - b"%010d %05d f \n" - % (next_in_linked_list, self.deleted_entries[object_id]) - ) - return startxref - - -class PdfName: - def __init__(self, name): - if isinstance(name, PdfName): - self.name = name.name - elif isinstance(name, bytes): - self.name = name - else: - self.name = name.encode("us-ascii") - - def name_as_str(self): - return self.name.decode("us-ascii") - - def __eq__(self, other): - return ( - isinstance(other, PdfName) and other.name == self.name - ) or other == self.name - - def __hash__(self): - return hash(self.name) - - def __repr__(self): - return f"PdfName({repr(self.name)})" - - @classmethod - def from_pdf_stream(cls, data): - return cls(PdfParser.interpret_name(data)) - - allowed_chars = set(range(33, 127)) - {ord(c) for c in "#%/()<>[]{}"} - - def __bytes__(self): - result = bytearray(b"/") - for b in self.name: - if b in self.allowed_chars: - result.append(b) - else: - result.extend(b"#%02X" % b) - return bytes(result) - - -class PdfArray(list): - def __bytes__(self): - return b"[ " + b" ".join(pdf_repr(x) for x in self) + b" ]" - - -class PdfDict(collections.UserDict): - def __setattr__(self, key, value): - if key == "data": - collections.UserDict.__setattr__(self, key, value) - 
else: - self[key.encode("us-ascii")] = value - - def __getattr__(self, key): - try: - value = self[key.encode("us-ascii")] - except KeyError as e: - raise AttributeError(key) from e - if isinstance(value, bytes): - value = decode_text(value) - if key.endswith("Date"): - if value.startswith("D:"): - value = value[2:] - - relationship = "Z" - if len(value) > 17: - relationship = value[14] - offset = int(value[15:17]) * 60 - if len(value) > 20: - offset += int(value[18:20]) - - format = "%Y%m%d%H%M%S"[: len(value) - 2] - value = time.strptime(value[: len(format) + 2], format) - if relationship in ["+", "-"]: - offset *= 60 - if relationship == "+": - offset *= -1 - value = time.gmtime(calendar.timegm(value) + offset) - return value - - def __bytes__(self): - out = bytearray(b"<<") - for key, value in self.items(): - if value is None: - continue - value = pdf_repr(value) - out.extend(b"\n") - out.extend(bytes(PdfName(key))) - out.extend(b" ") - out.extend(value) - out.extend(b"\n>>") - return bytes(out) - - -class PdfBinary: - def __init__(self, data): - self.data = data - - def __bytes__(self): - return b"<%s>" % b"".join(b"%02X" % b for b in self.data) - - -class PdfStream: - def __init__(self, dictionary, buf): - self.dictionary = dictionary - self.buf = buf - - def decode(self): - try: - filter = self.dictionary.Filter - except AttributeError: - return self.buf - if filter == b"FlateDecode": - try: - expected_length = self.dictionary.DL - except AttributeError: - expected_length = self.dictionary.Length - return zlib.decompress(self.buf, bufsize=int(expected_length)) - else: - raise NotImplementedError( - f"stream filter {repr(self.dictionary.Filter)} unknown/unsupported" - ) - - -def pdf_repr(x): - if x is True: - return b"true" - elif x is False: - return b"false" - elif x is None: - return b"null" - elif isinstance(x, (PdfName, PdfDict, PdfArray, PdfBinary)): - return bytes(x) - elif isinstance(x, int): - return str(x).encode("us-ascii") - elif isinstance(x, float): - return str(x).encode("us-ascii") - elif isinstance(x, time.struct_time): - return b"(D:" + time.strftime("%Y%m%d%H%M%SZ", x).encode("us-ascii") + b")" - elif isinstance(x, dict): - return bytes(PdfDict(x)) - elif isinstance(x, list): - return bytes(PdfArray(x)) - elif isinstance(x, str): - return pdf_repr(encode_text(x)) - elif isinstance(x, bytes): - # XXX escape more chars? 
handle binary garbage - x = x.replace(b"\\", b"\\\\") - x = x.replace(b"(", b"\\(") - x = x.replace(b")", b"\\)") - return b"(" + x + b")" - else: - return bytes(x) - - -class PdfParser: - """Based on - https://www.adobe.com/content/dam/acom/en/devnet/acrobat/pdfs/PDF32000_2008.pdf - Supports PDF up to 1.4 - """ - - def __init__(self, filename=None, f=None, buf=None, start_offset=0, mode="rb"): - if buf and f: - raise RuntimeError("specify buf or f or filename, but not both buf and f") - self.filename = filename - self.buf = buf - self.f = f - self.start_offset = start_offset - self.should_close_buf = False - self.should_close_file = False - if filename is not None and f is None: - self.f = f = open(filename, mode) - self.should_close_file = True - if f is not None: - self.buf = buf = self.get_buf_from_file(f) - self.should_close_buf = True - if not filename and hasattr(f, "name"): - self.filename = f.name - self.cached_objects = {} - if buf: - self.read_pdf_info() - else: - self.file_size_total = self.file_size_this = 0 - self.root = PdfDict() - self.root_ref = None - self.info = PdfDict() - self.info_ref = None - self.page_tree_root = {} - self.pages = [] - self.orig_pages = [] - self.pages_ref = None - self.last_xref_section_offset = None - self.trailer_dict = {} - self.xref_table = XrefTable() - self.xref_table.reading_finished = True - if f: - self.seek_end() - - def __enter__(self): - return self - - def __exit__(self, exc_type, exc_value, traceback): - self.close() - return False # do not suppress exceptions - - def start_writing(self): - self.close_buf() - self.seek_end() - - def close_buf(self): - try: - self.buf.close() - except AttributeError: - pass - self.buf = None - - def close(self): - if self.should_close_buf: - self.close_buf() - if self.f is not None and self.should_close_file: - self.f.close() - self.f = None - - def seek_end(self): - self.f.seek(0, os.SEEK_END) - - def write_header(self): - self.f.write(b"%PDF-1.4\n") - - def write_comment(self, s): - self.f.write(f"% {s}\n".encode()) - - def write_catalog(self): - self.del_root() - self.root_ref = self.next_object_id(self.f.tell()) - self.pages_ref = self.next_object_id(0) - self.rewrite_pages() - self.write_obj(self.root_ref, Type=PdfName(b"Catalog"), Pages=self.pages_ref) - self.write_obj( - self.pages_ref, - Type=PdfName(b"Pages"), - Count=len(self.pages), - Kids=self.pages, - ) - return self.root_ref - - def rewrite_pages(self): - pages_tree_nodes_to_delete = [] - for i, page_ref in enumerate(self.orig_pages): - page_info = self.cached_objects[page_ref] - del self.xref_table[page_ref.object_id] - pages_tree_nodes_to_delete.append(page_info[PdfName(b"Parent")]) - if page_ref not in self.pages: - # the page has been deleted - continue - # make dict keys into strings for passing to write_page - stringified_page_info = {} - for key, value in page_info.items(): - # key should be a PdfName - stringified_page_info[key.name_as_str()] = value - stringified_page_info["Parent"] = self.pages_ref - new_page_ref = self.write_page(None, **stringified_page_info) - for j, cur_page_ref in enumerate(self.pages): - if cur_page_ref == page_ref: - # replace the page reference with the new one - self.pages[j] = new_page_ref - # delete redundant Pages tree nodes from xref table - for pages_tree_node_ref in pages_tree_nodes_to_delete: - while pages_tree_node_ref: - pages_tree_node = self.cached_objects[pages_tree_node_ref] - if pages_tree_node_ref.object_id in self.xref_table: - del self.xref_table[pages_tree_node_ref.object_id] - 
pages_tree_node_ref = pages_tree_node.get(b"Parent", None) - self.orig_pages = [] - - def write_xref_and_trailer(self, new_root_ref=None): - if new_root_ref: - self.del_root() - self.root_ref = new_root_ref - if self.info: - self.info_ref = self.write_obj(None, self.info) - start_xref = self.xref_table.write(self.f) - num_entries = len(self.xref_table) - trailer_dict = {b"Root": self.root_ref, b"Size": num_entries} - if self.last_xref_section_offset is not None: - trailer_dict[b"Prev"] = self.last_xref_section_offset - if self.info: - trailer_dict[b"Info"] = self.info_ref - self.last_xref_section_offset = start_xref - self.f.write( - b"trailer\n" - + bytes(PdfDict(trailer_dict)) - + b"\nstartxref\n%d\n%%%%EOF" % start_xref - ) - - def write_page(self, ref, *objs, **dict_obj): - if isinstance(ref, int): - ref = self.pages[ref] - if "Type" not in dict_obj: - dict_obj["Type"] = PdfName(b"Page") - if "Parent" not in dict_obj: - dict_obj["Parent"] = self.pages_ref - return self.write_obj(ref, *objs, **dict_obj) - - def write_obj(self, ref, *objs, **dict_obj): - f = self.f - if ref is None: - ref = self.next_object_id(f.tell()) - else: - self.xref_table[ref.object_id] = (f.tell(), ref.generation) - f.write(bytes(IndirectObjectDef(*ref))) - stream = dict_obj.pop("stream", None) - if stream is not None: - dict_obj["Length"] = len(stream) - if dict_obj: - f.write(pdf_repr(dict_obj)) - for obj in objs: - f.write(pdf_repr(obj)) - if stream is not None: - f.write(b"stream\n") - f.write(stream) - f.write(b"\nendstream\n") - f.write(b"endobj\n") - return ref - - def del_root(self): - if self.root_ref is None: - return - del self.xref_table[self.root_ref.object_id] - del self.xref_table[self.root[b"Pages"].object_id] - - @staticmethod - def get_buf_from_file(f): - if hasattr(f, "getbuffer"): - return f.getbuffer() - elif hasattr(f, "getvalue"): - return f.getvalue() - else: - try: - return mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) - except ValueError: # cannot mmap an empty file - return b"" - - def read_pdf_info(self): - self.file_size_total = len(self.buf) - self.file_size_this = self.file_size_total - self.start_offset - self.read_trailer() - self.root_ref = self.trailer_dict[b"Root"] - self.info_ref = self.trailer_dict.get(b"Info", None) - self.root = PdfDict(self.read_indirect(self.root_ref)) - if self.info_ref is None: - self.info = PdfDict() - else: - self.info = PdfDict(self.read_indirect(self.info_ref)) - check_format_condition(b"Type" in self.root, "/Type missing in Root") - check_format_condition( - self.root[b"Type"] == b"Catalog", "/Type in Root is not /Catalog" - ) - check_format_condition(b"Pages" in self.root, "/Pages missing in Root") - check_format_condition( - isinstance(self.root[b"Pages"], IndirectReference), - "/Pages in Root is not an indirect reference", - ) - self.pages_ref = self.root[b"Pages"] - self.page_tree_root = self.read_indirect(self.pages_ref) - self.pages = self.linearize_page_tree(self.page_tree_root) - # save the original list of page references - # in case the user modifies, adds or deletes some pages - # and we need to rewrite the pages and their list - self.orig_pages = self.pages[:] - - def next_object_id(self, offset=None): - try: - # TODO: support reuse of deleted objects - reference = IndirectReference(max(self.xref_table.keys()) + 1, 0) - except ValueError: - reference = IndirectReference(1, 0) - if offset is not None: - self.xref_table[reference.object_id] = (offset, 0) - return reference - - delimiter = rb"[][()<>{}/%]" - delimiter_or_ws = 
rb"[][()<>{}/%\000\011\012\014\015\040]" - whitespace = rb"[\000\011\012\014\015\040]" - whitespace_or_hex = rb"[\000\011\012\014\015\0400-9a-fA-F]" - whitespace_optional = whitespace + b"*" - whitespace_mandatory = whitespace + b"+" - # No "\012" aka "\n" or "\015" aka "\r": - whitespace_optional_no_nl = rb"[\000\011\014\040]*" - newline_only = rb"[\r\n]+" - newline = whitespace_optional_no_nl + newline_only + whitespace_optional_no_nl - re_trailer_end = re.compile( - whitespace_mandatory - + rb"trailer" - + whitespace_optional - + rb"<<(.*>>)" - + newline - + rb"startxref" - + newline - + rb"([0-9]+)" - + newline - + rb"%%EOF" - + whitespace_optional - + rb"$", - re.DOTALL, - ) - re_trailer_prev = re.compile( - whitespace_optional - + rb"trailer" - + whitespace_optional - + rb"<<(.*?>>)" - + newline - + rb"startxref" - + newline - + rb"([0-9]+)" - + newline - + rb"%%EOF" - + whitespace_optional, - re.DOTALL, - ) - - def read_trailer(self): - search_start_offset = len(self.buf) - 16384 - if search_start_offset < self.start_offset: - search_start_offset = self.start_offset - m = self.re_trailer_end.search(self.buf, search_start_offset) - check_format_condition(m, "trailer end not found") - # make sure we found the LAST trailer - last_match = m - while m: - last_match = m - m = self.re_trailer_end.search(self.buf, m.start() + 16) - if not m: - m = last_match - trailer_data = m.group(1) - self.last_xref_section_offset = int(m.group(2)) - self.trailer_dict = self.interpret_trailer(trailer_data) - self.xref_table = XrefTable() - self.read_xref_table(xref_section_offset=self.last_xref_section_offset) - if b"Prev" in self.trailer_dict: - self.read_prev_trailer(self.trailer_dict[b"Prev"]) - - def read_prev_trailer(self, xref_section_offset): - trailer_offset = self.read_xref_table(xref_section_offset=xref_section_offset) - m = self.re_trailer_prev.search( - self.buf[trailer_offset : trailer_offset + 16384] - ) - check_format_condition(m, "previous trailer not found") - trailer_data = m.group(1) - check_format_condition( - int(m.group(2)) == xref_section_offset, - "xref section offset in previous trailer doesn't match what was expected", - ) - trailer_dict = self.interpret_trailer(trailer_data) - if b"Prev" in trailer_dict: - self.read_prev_trailer(trailer_dict[b"Prev"]) - - re_whitespace_optional = re.compile(whitespace_optional) - re_name = re.compile( - whitespace_optional - + rb"/([!-$&'*-.0-;=?-Z\\^-z|~]+)(?=" - + delimiter_or_ws - + rb")" - ) - re_dict_start = re.compile(whitespace_optional + rb"<<") - re_dict_end = re.compile(whitespace_optional + rb">>" + whitespace_optional) - - @classmethod - def interpret_trailer(cls, trailer_data): - trailer = {} - offset = 0 - while True: - m = cls.re_name.match(trailer_data, offset) - if not m: - m = cls.re_dict_end.match(trailer_data, offset) - check_format_condition( - m and m.end() == len(trailer_data), - "name not found in trailer, remaining data: " - + repr(trailer_data[offset:]), - ) - break - key = cls.interpret_name(m.group(1)) - value, offset = cls.get_value(trailer_data, m.end()) - trailer[key] = value - check_format_condition( - b"Size" in trailer and isinstance(trailer[b"Size"], int), - "/Size not in trailer or not an integer", - ) - check_format_condition( - b"Root" in trailer and isinstance(trailer[b"Root"], IndirectReference), - "/Root not in trailer or not an indirect reference", - ) - return trailer - - re_hashes_in_name = re.compile(rb"([^#]*)(#([0-9a-fA-F]{2}))?") - - @classmethod - def interpret_name(cls, raw, as_text=False): - 
name = b"" - for m in cls.re_hashes_in_name.finditer(raw): - if m.group(3): - name += m.group(1) + bytearray.fromhex(m.group(3).decode("us-ascii")) - else: - name += m.group(1) - if as_text: - return name.decode("utf-8") - else: - return bytes(name) - - re_null = re.compile(whitespace_optional + rb"null(?=" + delimiter_or_ws + rb")") - re_true = re.compile(whitespace_optional + rb"true(?=" + delimiter_or_ws + rb")") - re_false = re.compile(whitespace_optional + rb"false(?=" + delimiter_or_ws + rb")") - re_int = re.compile( - whitespace_optional + rb"([-+]?[0-9]+)(?=" + delimiter_or_ws + rb")" - ) - re_real = re.compile( - whitespace_optional - + rb"([-+]?([0-9]+\.[0-9]*|[0-9]*\.[0-9]+))(?=" - + delimiter_or_ws - + rb")" - ) - re_array_start = re.compile(whitespace_optional + rb"\[") - re_array_end = re.compile(whitespace_optional + rb"]") - re_string_hex = re.compile( - whitespace_optional + rb"<(" + whitespace_or_hex + rb"*)>" - ) - re_string_lit = re.compile(whitespace_optional + rb"\(") - re_indirect_reference = re.compile( - whitespace_optional - + rb"([-+]?[0-9]+)" - + whitespace_mandatory - + rb"([-+]?[0-9]+)" - + whitespace_mandatory - + rb"R(?=" - + delimiter_or_ws - + rb")" - ) - re_indirect_def_start = re.compile( - whitespace_optional - + rb"([-+]?[0-9]+)" - + whitespace_mandatory - + rb"([-+]?[0-9]+)" - + whitespace_mandatory - + rb"obj(?=" - + delimiter_or_ws - + rb")" - ) - re_indirect_def_end = re.compile( - whitespace_optional + rb"endobj(?=" + delimiter_or_ws + rb")" - ) - re_comment = re.compile( - rb"(" + whitespace_optional + rb"%[^\r\n]*" + newline + rb")*" - ) - re_stream_start = re.compile(whitespace_optional + rb"stream\r?\n") - re_stream_end = re.compile( - whitespace_optional + rb"endstream(?=" + delimiter_or_ws + rb")" - ) - - @classmethod - def get_value(cls, data, offset, expect_indirect=None, max_nesting=-1): - if max_nesting == 0: - return None, None - m = cls.re_comment.match(data, offset) - if m: - offset = m.end() - m = cls.re_indirect_def_start.match(data, offset) - if m: - check_format_condition( - int(m.group(1)) > 0, - "indirect object definition: object ID must be greater than 0", - ) - check_format_condition( - int(m.group(2)) >= 0, - "indirect object definition: generation must be non-negative", - ) - check_format_condition( - expect_indirect is None - or expect_indirect - == IndirectReference(int(m.group(1)), int(m.group(2))), - "indirect object definition different than expected", - ) - object, offset = cls.get_value(data, m.end(), max_nesting=max_nesting - 1) - if offset is None: - return object, None - m = cls.re_indirect_def_end.match(data, offset) - check_format_condition(m, "indirect object definition end not found") - return object, m.end() - check_format_condition( - not expect_indirect, "indirect object definition not found" - ) - m = cls.re_indirect_reference.match(data, offset) - if m: - check_format_condition( - int(m.group(1)) > 0, - "indirect object reference: object ID must be greater than 0", - ) - check_format_condition( - int(m.group(2)) >= 0, - "indirect object reference: generation must be non-negative", - ) - return IndirectReference(int(m.group(1)), int(m.group(2))), m.end() - m = cls.re_dict_start.match(data, offset) - if m: - offset = m.end() - result = {} - m = cls.re_dict_end.match(data, offset) - while not m: - key, offset = cls.get_value(data, offset, max_nesting=max_nesting - 1) - if offset is None: - return result, None - value, offset = cls.get_value(data, offset, max_nesting=max_nesting - 1) - result[key] = value - 
if offset is None: - return result, None - m = cls.re_dict_end.match(data, offset) - offset = m.end() - m = cls.re_stream_start.match(data, offset) - if m: - try: - stream_len = int(result[b"Length"]) - except (TypeError, KeyError, ValueError) as e: - raise PdfFormatError( - "bad or missing Length in stream dict (%r)" - % result.get(b"Length", None) - ) from e - stream_data = data[m.end() : m.end() + stream_len] - m = cls.re_stream_end.match(data, m.end() + stream_len) - check_format_condition(m, "stream end not found") - offset = m.end() - result = PdfStream(PdfDict(result), stream_data) - else: - result = PdfDict(result) - return result, offset - m = cls.re_array_start.match(data, offset) - if m: - offset = m.end() - result = [] - m = cls.re_array_end.match(data, offset) - while not m: - value, offset = cls.get_value(data, offset, max_nesting=max_nesting - 1) - result.append(value) - if offset is None: - return result, None - m = cls.re_array_end.match(data, offset) - return result, m.end() - m = cls.re_null.match(data, offset) - if m: - return None, m.end() - m = cls.re_true.match(data, offset) - if m: - return True, m.end() - m = cls.re_false.match(data, offset) - if m: - return False, m.end() - m = cls.re_name.match(data, offset) - if m: - return PdfName(cls.interpret_name(m.group(1))), m.end() - m = cls.re_int.match(data, offset) - if m: - return int(m.group(1)), m.end() - m = cls.re_real.match(data, offset) - if m: - # XXX Decimal instead of float??? - return float(m.group(1)), m.end() - m = cls.re_string_hex.match(data, offset) - if m: - # filter out whitespace - hex_string = bytearray( - b for b in m.group(1) if b in b"0123456789abcdefABCDEF" - ) - if len(hex_string) % 2 == 1: - # append a 0 if the length is not even - yes, at the end - hex_string.append(ord(b"0")) - return bytearray.fromhex(hex_string.decode("us-ascii")), m.end() - m = cls.re_string_lit.match(data, offset) - if m: - return cls.get_literal_string(data, m.end()) - # return None, offset # fallback (only for debugging) - raise PdfFormatError("unrecognized object: " + repr(data[offset : offset + 32])) - - re_lit_str_token = re.compile( - rb"(\\[nrtbf()\\])|(\\[0-9]{1,3})|(\\(\r\n|\r|\n))|(\r\n|\r|\n)|(\()|(\))" - ) - escaped_chars = { - b"n": b"\n", - b"r": b"\r", - b"t": b"\t", - b"b": b"\b", - b"f": b"\f", - b"(": b"(", - b")": b")", - b"\\": b"\\", - ord(b"n"): b"\n", - ord(b"r"): b"\r", - ord(b"t"): b"\t", - ord(b"b"): b"\b", - ord(b"f"): b"\f", - ord(b"("): b"(", - ord(b")"): b")", - ord(b"\\"): b"\\", - } - - @classmethod - def get_literal_string(cls, data, offset): - nesting_depth = 0 - result = bytearray() - for m in cls.re_lit_str_token.finditer(data, offset): - result.extend(data[offset : m.start()]) - if m.group(1): - result.extend(cls.escaped_chars[m.group(1)[1]]) - elif m.group(2): - result.append(int(m.group(2)[1:], 8)) - elif m.group(3): - pass - elif m.group(5): - result.extend(b"\n") - elif m.group(6): - result.extend(b"(") - nesting_depth += 1 - elif m.group(7): - if nesting_depth == 0: - return bytes(result), m.end() - result.extend(b")") - nesting_depth -= 1 - offset = m.end() - raise PdfFormatError("unfinished literal string") - - re_xref_section_start = re.compile(whitespace_optional + rb"xref" + newline) - re_xref_subsection_start = re.compile( - whitespace_optional - + rb"([0-9]+)" - + whitespace_mandatory - + rb"([0-9]+)" - + whitespace_optional - + newline_only - ) - re_xref_entry = re.compile(rb"([0-9]{10}) ([0-9]{5}) ([fn])( \r| \n|\r\n)") - - def read_xref_table(self, xref_section_offset): 
- subsection_found = False - m = self.re_xref_section_start.match( - self.buf, xref_section_offset + self.start_offset - ) - check_format_condition(m, "xref section start not found") - offset = m.end() - while True: - m = self.re_xref_subsection_start.match(self.buf, offset) - if not m: - check_format_condition( - subsection_found, "xref subsection start not found" - ) - break - subsection_found = True - offset = m.end() - first_object = int(m.group(1)) - num_objects = int(m.group(2)) - for i in range(first_object, first_object + num_objects): - m = self.re_xref_entry.match(self.buf, offset) - check_format_condition(m, "xref entry not found") - offset = m.end() - is_free = m.group(3) == b"f" - generation = int(m.group(2)) - if not is_free: - new_entry = (int(m.group(1)), generation) - check_format_condition( - i not in self.xref_table or self.xref_table[i] == new_entry, - "xref entry duplicated (and not identical)", - ) - self.xref_table[i] = new_entry - return offset - - def read_indirect(self, ref, max_nesting=-1): - offset, generation = self.xref_table[ref[0]] - check_format_condition( - generation == ref[1], - f"expected to find generation {ref[1]} for object ID {ref[0]} in xref " - f"table, instead found generation {generation} at offset {offset}", - ) - value = self.get_value( - self.buf, - offset + self.start_offset, - expect_indirect=IndirectReference(*ref), - max_nesting=max_nesting, - )[0] - self.cached_objects[ref] = value - return value - - def linearize_page_tree(self, node=None): - if node is None: - node = self.page_tree_root - check_format_condition( - node[b"Type"] == b"Pages", "/Type of page tree node is not /Pages" - ) - pages = [] - for kid in node[b"Kids"]: - kid_object = self.read_indirect(kid) - if kid_object[b"Type"] == b"Page": - pages.append(kid) - else: - pages.extend(self.linearize_page_tree(node=kid_object)) - return pages diff --git a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/altair/examples/binned_scatterplot.py b/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/altair/examples/binned_scatterplot.py deleted file mode 100644 index 413aacba2ce24da741b6c1f386c7334659676845..0000000000000000000000000000000000000000 --- a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/altair/examples/binned_scatterplot.py +++ /dev/null @@ -1,16 +0,0 @@ -""" -Binned Scatterplot ------------------- -This example shows how to make a binned scatterplot. -""" -# category: scatter plots -import altair as alt -from vega_datasets import data - -source = data.movies.url - -alt.Chart(source).mark_circle().encode( - alt.X('IMDB_Rating:Q', bin=True), - alt.Y('Rotten_Tomatoes_Rating:Q', bin=True), - size='count()' -) diff --git a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/fairseq/checkpoint_utils.py b/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/fairseq/checkpoint_utils.py deleted file mode 100644 index 138b4d1eb2d886e046e1b64c8765c328108580d3..0000000000000000000000000000000000000000 --- a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/fairseq/checkpoint_utils.py +++ /dev/null @@ -1,905 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
- -import ast -import collections -import contextlib -import inspect -import logging -import os -import re -import time -import traceback -from collections import OrderedDict -from pathlib import Path -from typing import Any, Dict, Optional, Union - -import numpy as np -import torch -from fairseq.data import data_utils -from fairseq.dataclass.configs import CheckpointConfig -from fairseq.dataclass.utils import ( - convert_namespace_to_omegaconf, - overwrite_args_by_name, -) -from fairseq.distributed.fully_sharded_data_parallel import FSDP, has_FSDP -from fairseq.file_io import PathManager -from fairseq.models import FairseqDecoder, FairseqEncoder -from omegaconf import DictConfig, OmegaConf, open_dict - -logger = logging.getLogger(__name__) - - -def save_checkpoint(cfg: CheckpointConfig, trainer, epoch_itr, val_loss): - from fairseq import meters - - # only one worker should attempt to create the required dir - if trainer.data_parallel_rank == 0: - os.makedirs(cfg.save_dir, exist_ok=True) - - prev_best = getattr(save_checkpoint, "best", val_loss) - if val_loss is not None: - best_function = max if cfg.maximize_best_checkpoint_metric else min - save_checkpoint.best = best_function(val_loss, prev_best) - - if cfg.no_save: - return - - trainer.consolidate_optimizer() # TODO(SS): do we need this if no_save_optimizer_state - - if not trainer.should_save_checkpoint_on_current_rank: - if trainer.always_call_state_dict_during_save_checkpoint: - trainer.state_dict() - return - - write_timer = meters.StopwatchMeter() - write_timer.start() - - epoch = epoch_itr.epoch - end_of_epoch = epoch_itr.end_of_epoch() - updates = trainer.get_num_updates() - - logger.info(f"Preparing to save checkpoint for epoch {epoch} @ {updates} updates") - - def is_better(a, b): - return a >= b if cfg.maximize_best_checkpoint_metric else a <= b - - suffix = trainer.checkpoint_suffix - checkpoint_conds = collections.OrderedDict() - checkpoint_conds["checkpoint{}{}.pt".format(epoch, suffix)] = ( - end_of_epoch and not cfg.no_epoch_checkpoints and epoch % cfg.save_interval == 0 - ) - checkpoint_conds["checkpoint_{}_{}{}.pt".format(epoch, updates, suffix)] = ( - not end_of_epoch - and cfg.save_interval_updates > 0 - and updates % cfg.save_interval_updates == 0 - ) - checkpoint_conds["checkpoint_best{}.pt".format(suffix)] = val_loss is not None and ( - not hasattr(save_checkpoint, "best") - or is_better(val_loss, save_checkpoint.best) - ) - if val_loss is not None and cfg.keep_best_checkpoints > 0: - worst_best = getattr(save_checkpoint, "best", None) - chkpts = checkpoint_paths( - cfg.save_dir, - pattern=r"checkpoint\.best_{}_(\d+\.?\d*){}\.pt".format( - cfg.best_checkpoint_metric, suffix - ), - ) - if len(chkpts) > 0: - p = chkpts[-1] if cfg.maximize_best_checkpoint_metric else chkpts[0] - worst_best = float(p.rsplit("_")[-1].replace("{}.pt".format(suffix), "")) - # add random digits to resolve ties - with data_utils.numpy_seed(epoch, updates, val_loss): - rand_sfx = np.random.randint(0, cfg.keep_best_checkpoints) - - checkpoint_conds[ - "checkpoint.best_{}_{:.3f}{}{}.pt".format( - cfg.best_checkpoint_metric, val_loss, rand_sfx, suffix - ) - ] = worst_best is None or is_better(val_loss, worst_best) - checkpoint_conds[ - "checkpoint_last{}.pt".format(suffix) - ] = not cfg.no_last_checkpoints - - extra_state = {"train_iterator": epoch_itr.state_dict(), "val_loss": val_loss} - if hasattr(save_checkpoint, "best"): - extra_state.update({"best": save_checkpoint.best}) - - checkpoints = [ - os.path.join(cfg.save_dir, fn) for fn, cond 
in checkpoint_conds.items() if cond - ] - if len(checkpoints) > 0 and trainer.should_save_checkpoint_on_current_rank: - trainer.save_checkpoint(checkpoints[0], extra_state) - for cp in checkpoints[1:]: - if cfg.write_checkpoints_asynchronously: - # TODO[ioPath]: Need to implement a delayed asynchronous - # file copying/moving feature. - logger.warning( - f"ioPath is not copying {checkpoints[0]} to {cp} " - "since async write mode is on." - ) - else: - assert PathManager.copy( - checkpoints[0], cp, overwrite=True - ), f"Failed to copy {checkpoints[0]} to {cp}" - - write_timer.stop() - logger.info( - "Saved checkpoint {} (epoch {} @ {} updates, score {}) (writing took {} seconds)".format( - checkpoints[0], epoch, updates, val_loss, write_timer.sum - ) - ) - - if not end_of_epoch and cfg.keep_interval_updates > 0: - # remove old checkpoints; checkpoints are sorted in descending order - if cfg.keep_interval_updates_pattern == -1: - checkpoints = checkpoint_paths( - cfg.save_dir, pattern=r"checkpoint_\d+_(\d+){}\.pt".format(suffix) - ) - else: - checkpoints = checkpoint_paths( - cfg.save_dir, - pattern=r"checkpoint_\d+_(\d+){}\.pt".format(suffix), - keep_match=True, - ) - checkpoints = [ - x[0] - for x in checkpoints - if x[1] % cfg.keep_interval_updates_pattern != 0 - ] - - for old_chk in checkpoints[cfg.keep_interval_updates :]: - if os.path.lexists(old_chk): - os.remove(old_chk) - elif PathManager.exists(old_chk): - PathManager.rm(old_chk) - - if cfg.keep_last_epochs > 0: - # remove old epoch checkpoints; checkpoints are sorted in descending order - checkpoints = checkpoint_paths( - cfg.save_dir, pattern=r"checkpoint(\d+){}\.pt".format(suffix) - ) - for old_chk in checkpoints[cfg.keep_last_epochs :]: - if os.path.lexists(old_chk): - os.remove(old_chk) - elif PathManager.exists(old_chk): - PathManager.rm(old_chk) - - if cfg.keep_best_checkpoints > 0: - # only keep the best N checkpoints according to validation metric - checkpoints = checkpoint_paths( - cfg.save_dir, - pattern=r"checkpoint\.best_{}_(\d+\.?\d*){}\.pt".format( - cfg.best_checkpoint_metric, suffix - ), - ) - if not cfg.maximize_best_checkpoint_metric: - checkpoints = checkpoints[::-1] - for old_chk in checkpoints[cfg.keep_best_checkpoints :]: - if os.path.lexists(old_chk): - os.remove(old_chk) - elif PathManager.exists(old_chk): - PathManager.rm(old_chk) - - -def load_checkpoint(cfg: CheckpointConfig, trainer, **passthrough_args): - """ - Load a checkpoint and restore the training iterator. - - *passthrough_args* will be passed through to - ``trainer.get_train_iterator``. 
- """ - - reset_optimizer = cfg.reset_optimizer - reset_lr_scheduler = cfg.reset_lr_scheduler - optimizer_overrides = ast.literal_eval(cfg.optimizer_overrides) - reset_meters = cfg.reset_meters - reset_dataloader = cfg.reset_dataloader - - if cfg.finetune_from_model is not None and ( - reset_optimizer or reset_lr_scheduler or reset_meters or reset_dataloader - ): - raise ValueError( - "--finetune-from-model can not be set together with either --reset-optimizer" - " or reset_lr_scheduler or reset_meters or reset_dataloader" - ) - - suffix = trainer.checkpoint_suffix - if ( - cfg.restore_file == "checkpoint_last.pt" - ): # default value of restore_file is 'checkpoint_last.pt' - checkpoint_path = os.path.join( - cfg.save_dir, "checkpoint_last{}.pt".format(suffix) - ) - first_launch = not PathManager.exists(checkpoint_path) - if first_launch and getattr(cfg, "continue_once", None) is not None: - checkpoint_path = cfg.continue_once - elif cfg.finetune_from_model is not None and first_launch: - # if there is no last checkpoint to restore, start the finetune from pretrained model - # else just use usual logic to load checkpoint, e.g. restart from last checkpoint and etc. - if PathManager.exists(cfg.finetune_from_model): - checkpoint_path = cfg.finetune_from_model - reset_optimizer = True - reset_lr_scheduler = True - reset_meters = True - reset_dataloader = True - logger.info( - f"loading pretrained model from {checkpoint_path}: " - "optimizer, lr scheduler, meters, dataloader will be reset" - ) - else: - raise ValueError( - f"--finetune-from-model {cfg.finetune_from_model} does not exist" - ) - elif suffix is not None: - checkpoint_path = cfg.restore_file.replace(".pt", suffix + ".pt") - else: - checkpoint_path = cfg.restore_file - - if cfg.restore_file != "checkpoint_last.pt" and cfg.finetune_from_model: - raise ValueError( - "--finetune-from-model and --restore-file (non-default value) " - "can not be specified together: " + str(cfg) - ) - - extra_state = trainer.load_checkpoint( - checkpoint_path, - reset_optimizer, - reset_lr_scheduler, - optimizer_overrides, - reset_meters=reset_meters, - ) - - if ( - extra_state is not None - and "best" in extra_state - and not reset_optimizer - and not reset_meters - ): - save_checkpoint.best = extra_state["best"] - - if extra_state is not None and not reset_dataloader: - # restore iterator from checkpoint - itr_state = extra_state["train_iterator"] - epoch_itr = trainer.get_train_iterator( - epoch=itr_state["epoch"], load_dataset=True, **passthrough_args - ) - epoch_itr.load_state_dict(itr_state) - else: - epoch_itr = trainer.get_train_iterator( - epoch=1, load_dataset=True, **passthrough_args - ) - - trainer.lr_step(epoch_itr.epoch) - - return extra_state, epoch_itr - - -def load_checkpoint_to_cpu(path, arg_overrides=None, load_on_all_ranks=False): - """Loads a checkpoint to CPU (with upgrading for backward compatibility). - - If doing single-GPU training or if the checkpoint is only being loaded by at - most one process on each node (current default behavior is for only rank 0 - to read the checkpoint from disk), load_on_all_ranks should be False to - avoid errors from torch.distributed not having been initialized or - torch.distributed.barrier() hanging. - - If all processes on each node may be loading the checkpoint - simultaneously, load_on_all_ranks should be set to True to avoid I/O - conflicts. - - There's currently no support for > 1 but < all processes loading the - checkpoint on each node. 
- """ - local_path = PathManager.get_local_path(path) - # The locally cached file returned by get_local_path() may be stale for - # remote files that are periodically updated/overwritten (ex: - # checkpoint_last.pt) - so we remove the local copy, sync across processes - # (if needed), and then download a fresh copy. - if local_path != path and PathManager.path_requires_pathmanager(path): - try: - os.remove(local_path) - except FileNotFoundError: - # With potentially multiple processes removing the same file, the - # file being missing is benign (missing_ok isn't available until - # Python 3.8). - pass - if load_on_all_ranks: - torch.distributed.barrier() - local_path = PathManager.get_local_path(path) - - with open(local_path, "rb") as f: - state = torch.load(f, map_location=torch.device("cpu")) - - if "args" in state and state["args"] is not None and arg_overrides is not None: - args = state["args"] - for arg_name, arg_val in arg_overrides.items(): - setattr(args, arg_name, arg_val) - - if "cfg" in state and state["cfg"] is not None: - - # hack to be able to set Namespace in dict config. this should be removed when we update to newer - # omegaconf version that supports object flags, or when we migrate all existing models - from omegaconf import __version__ as oc_version - from omegaconf import _utils - - if oc_version < "2.2": - old_primitive = _utils.is_primitive_type - _utils.is_primitive_type = lambda _: True - - state["cfg"] = OmegaConf.create(state["cfg"]) - - _utils.is_primitive_type = old_primitive - OmegaConf.set_struct(state["cfg"], True) - else: - state["cfg"] = OmegaConf.create(state["cfg"], flags={"allow_objects": True}) - - if arg_overrides is not None: - overwrite_args_by_name(state["cfg"], arg_overrides) - - state = _upgrade_state_dict(state) - return state - - -def load_model_ensemble( - filenames, - arg_overrides: Optional[Dict[str, Any]] = None, - task=None, - strict=True, - suffix="", - num_shards=1, - state=None, -): - """Loads an ensemble of models. 
- - Args: - filenames (List[str]): checkpoint files to load - arg_overrides (Dict[str,Any], optional): override model args that - were used during model training - task (fairseq.tasks.FairseqTask, optional): task to use for loading - """ - assert not ( - strict and num_shards > 1 - ), "Cannot load state dict with strict=True and checkpoint shards > 1" - ensemble, args, _task = load_model_ensemble_and_task( - filenames, - arg_overrides, - task, - strict, - suffix, - num_shards, - state, - ) - return ensemble, args - - -def get_maybe_sharded_checkpoint_filename( - filename: str, suffix: str, shard_idx: int, num_shards: int -) -> str: - orig_filename = filename - filename = filename.replace(".pt", suffix + ".pt") - fsdp_filename = filename[:-3] + f"-shard{shard_idx}.pt" - model_parallel_filename = orig_filename[:-3] + f"_part{shard_idx}.pt" - if PathManager.exists(fsdp_filename): - return fsdp_filename - elif num_shards > 1: - return model_parallel_filename - else: - return filename - - -def load_model_ensemble_and_task( - filenames, - arg_overrides: Optional[Dict[str, Any]] = None, - task=None, - strict=True, - suffix="", - num_shards=1, - state=None, -): - assert state is None or len(filenames) == 1 - - from fairseq import tasks - - assert not ( - strict and num_shards > 1 - ), "Cannot load state dict with strict=True and checkpoint shards > 1" - ensemble = [] - cfg = None - for filename in filenames: - orig_filename = filename - model_shard_state = {"shard_weights": [], "shard_metadata": []} - assert num_shards > 0 - st = time.time() - for shard_idx in range(num_shards): - filename = get_maybe_sharded_checkpoint_filename( - orig_filename, suffix, shard_idx, num_shards - ) - - if not PathManager.exists(filename): - raise IOError("Model file not found: {}".format(filename)) - if state is None: - state = load_checkpoint_to_cpu(filename, arg_overrides) - if "args" in state and state["args"] is not None: - cfg = convert_namespace_to_omegaconf(state["args"]) - elif "cfg" in state and state["cfg"] is not None: - cfg = state["cfg"] - else: - raise RuntimeError( - f"Neither args nor cfg exist in state keys = {state.keys()}" - ) - - if task is None: - task = tasks.setup_task(cfg.task) - - if "task_state" in state: - task.load_state_dict(state["task_state"]) - - if "fsdp_metadata" in state and num_shards > 1: - model_shard_state["shard_weights"].append(state["model"]) - model_shard_state["shard_metadata"].append(state["fsdp_metadata"]) - # check FSDP import before the code goes too far - if not has_FSDP: - raise ImportError( - "Cannot find FullyShardedDataParallel. 
" - "Please install fairscale with: pip install fairscale" - ) - if shard_idx == num_shards - 1: - consolidated_model_state = FSDP.consolidate_shard_weights( - shard_weights=model_shard_state["shard_weights"], - shard_metadata=model_shard_state["shard_metadata"], - ) - model = task.build_model(cfg.model) - if ( - "optimizer_history" in state - and len(state["optimizer_history"]) > 0 - and "num_updates" in state["optimizer_history"][-1] - ): - model.set_num_updates( - state["optimizer_history"][-1]["num_updates"] - ) - model.load_state_dict( - consolidated_model_state, strict=strict, model_cfg=cfg.model - ) - else: - # model parallel checkpoint or unsharded checkpoint - # support old external tasks - - argspec = inspect.getfullargspec(task.build_model) - if "from_checkpoint" in argspec.args: - model = task.build_model(cfg.model, from_checkpoint=True) - else: - model = task.build_model(cfg.model) - if ( - "optimizer_history" in state - and len(state["optimizer_history"]) > 0 - and "num_updates" in state["optimizer_history"][-1] - ): - model.set_num_updates(state["optimizer_history"][-1]["num_updates"]) - model.load_state_dict( - state["model"], strict=strict, model_cfg=cfg.model - ) - - # reset state so it gets loaded for the next model in ensemble - state = None - if shard_idx % 10 == 0 and shard_idx > 0: - elapsed = time.time() - st - logger.info( - f"Loaded {shard_idx} shards in {elapsed:.2f}s, {elapsed / (shard_idx+1):.2f}s/shard" - ) - - # build model for ensemble - ensemble.append(model) - return ensemble, cfg, task - - -def load_model_ensemble_and_task_from_hf_hub( - model_id, - cache_dir: Optional[str] = None, - arg_overrides: Optional[Dict[str, Any]] = None, - **kwargs: Any, -): - try: - from huggingface_hub import snapshot_download - except ImportError: - raise ImportError( - "You need to install huggingface_hub to use `load_from_hf_hub`. " - "See https://pypi.org/project/huggingface-hub/ for installation." - ) - - library_name = "fairseq" - cache_dir = cache_dir or (Path.home() / ".cache" / library_name).as_posix() - cache_dir = snapshot_download( - model_id, cache_dir=cache_dir, library_name=library_name, **kwargs - ) - - _arg_overrides = arg_overrides or {} - _arg_overrides["data"] = cache_dir - return load_model_ensemble_and_task( - [p.as_posix() for p in Path(cache_dir).glob("*.pt")], - arg_overrides=_arg_overrides, - ) - - -def checkpoint_paths(path, pattern=r"checkpoint(\d+)\.pt", keep_match=False): - """Retrieves all checkpoints found in `path` directory. - - Checkpoints are identified by matching filename to the specified pattern. If - the pattern contains groups, the result will be sorted by the first group in - descending order. 
- """ - pt_regexp = re.compile(pattern) - files = PathManager.ls(path) - - entries = [] - for i, f in enumerate(files): - m = pt_regexp.fullmatch(f) - if m is not None: - idx = float(m.group(1)) if len(m.groups()) > 0 else i - entries.append((idx, m.group(0))) - if keep_match: - return [(os.path.join(path, x[1]), x[0]) for x in sorted(entries, reverse=True)] - else: - return [os.path.join(path, x[1]) for x in sorted(entries, reverse=True)] - - -def torch_persistent_save(obj, filename, async_write: bool = False): - if async_write: - with PathManager.opena(filename, "wb") as f: - _torch_persistent_save(obj, f) - else: - if PathManager.supports_rename(filename): - # do atomic save - with PathManager.open(filename + ".tmp", "wb") as f: - _torch_persistent_save(obj, f) - PathManager.rename(filename + ".tmp", filename) - else: - # fallback to non-atomic save - with PathManager.open(filename, "wb") as f: - _torch_persistent_save(obj, f) - - -def _torch_persistent_save(obj, f): - if isinstance(f, str): - with PathManager.open(f, "wb") as h: - torch_persistent_save(obj, h) - return - for i in range(3): - try: - return torch.save(obj, f) - except Exception: - if i == 2: - logger.error(traceback.format_exc()) - raise - - -def _upgrade_state_dict(state): - """Helper for upgrading old model checkpoints.""" - - # add optimizer_history - if "optimizer_history" not in state: - state["optimizer_history"] = [ - {"criterion_name": "CrossEntropyCriterion", "best_loss": state["best_loss"]} - ] - state["last_optimizer_state"] = state["optimizer"] - del state["optimizer"] - del state["best_loss"] - # move extra_state into sub-dictionary - if "epoch" in state and "extra_state" not in state: - state["extra_state"] = { - "epoch": state["epoch"], - "batch_offset": state["batch_offset"], - "val_loss": state["val_loss"], - } - del state["epoch"] - del state["batch_offset"] - del state["val_loss"] - # reduce optimizer history's memory usage (only keep the last state) - if "optimizer" in state["optimizer_history"][-1]: - state["last_optimizer_state"] = state["optimizer_history"][-1]["optimizer"] - for optim_hist in state["optimizer_history"]: - del optim_hist["optimizer"] - # record the optimizer class name - if "optimizer_name" not in state["optimizer_history"][-1]: - state["optimizer_history"][-1]["optimizer_name"] = "FairseqNAG" - # move best_loss into lr_scheduler_state - if "lr_scheduler_state" not in state["optimizer_history"][-1]: - state["optimizer_history"][-1]["lr_scheduler_state"] = { - "best": state["optimizer_history"][-1]["best_loss"] - } - del state["optimizer_history"][-1]["best_loss"] - # keep track of number of updates - if "num_updates" not in state["optimizer_history"][-1]: - state["optimizer_history"][-1]["num_updates"] = 0 - # use stateful training data iterator - if "train_iterator" not in state["extra_state"]: - state["extra_state"]["train_iterator"] = { - "epoch": state["extra_state"].get("epoch", 0), - "iterations_in_epoch": state["extra_state"].get("batch_offset", 0), - } - - # backward compatibility, cfg updates - if "args" in state and state["args"] is not None: - # old model checkpoints may not have separate source/target positions - if hasattr(state["args"], "max_positions") and not hasattr( - state["args"], "max_source_positions" - ): - state["args"].max_source_positions = state["args"].max_positions - state["args"].max_target_positions = state["args"].max_positions - # default to translation task - if not hasattr(state["args"], "task"): - state["args"].task = "translation" - # --raw-text 
and --lazy-load are deprecated - if getattr(state["args"], "raw_text", False): - state["args"].dataset_impl = "raw" - elif getattr(state["args"], "lazy_load", False): - state["args"].dataset_impl = "lazy" - # epochs start at 1 - if state["extra_state"]["train_iterator"] is not None: - state["extra_state"]["train_iterator"]["epoch"] = max( - state["extra_state"]["train_iterator"].get("epoch", 1), 1 - ) - # --remove-bpe ==> --postprocess - if hasattr(state["args"], "remove_bpe"): - state["args"].post_process = state["args"].remove_bpe - # --min-lr ==> --stop-min-lr - if hasattr(state["args"], "min_lr"): - state["args"].stop_min_lr = state["args"].min_lr - del state["args"].min_lr - # binary_cross_entropy / kd_binary_cross_entropy => wav2vec criterion - if hasattr(state["args"], "criterion") and state["args"].criterion in [ - "binary_cross_entropy", - "kd_binary_cross_entropy", - ]: - state["args"].criterion = "wav2vec" - # remove log_keys if it's None (criteria will supply a default value of []) - if hasattr(state["args"], "log_keys") and state["args"].log_keys is None: - delattr(state["args"], "log_keys") - # speech_pretraining => audio pretraining - if ( - hasattr(state["args"], "task") - and state["args"].task == "speech_pretraining" - ): - state["args"].task = "audio_pretraining" - # audio_cpc => wav2vec - if hasattr(state["args"], "arch") and state["args"].arch == "audio_cpc": - state["args"].arch = "wav2vec" - # convert legacy float learning rate to List[float] - if hasattr(state["args"], "lr") and isinstance(state["args"].lr, float): - state["args"].lr = [state["args"].lr] - # convert task data arg to a string instead of List[string] - if ( - hasattr(state["args"], "data") - and isinstance(state["args"].data, list) - and len(state["args"].data) > 0 - ): - state["args"].data = state["args"].data[0] - - state["cfg"] = convert_namespace_to_omegaconf(state["args"]) - - if "cfg" in state and state["cfg"] is not None: - cfg = state["cfg"] - with open_dict(cfg): - # any upgrades for Hydra-based configs - if ( - "task" in cfg - and "eval_wer_config" in cfg.task - and isinstance(cfg.task.eval_wer_config.print_alignment, bool) - ): - cfg.task.eval_wer_config.print_alignment = "hard" - if "generation" in cfg and isinstance(cfg.generation.print_alignment, bool): - cfg.generation.print_alignment = ( - "hard" if cfg.generation.print_alignment else None - ) - if ( - "model" in cfg - and "w2v_args" in cfg.model - and cfg.model.w2v_args is not None - and ( - hasattr(cfg.model.w2v_args, "task") or "task" in cfg.model.w2v_args - ) - and hasattr(cfg.model.w2v_args.task, "eval_wer_config") - and cfg.model.w2v_args.task.eval_wer_config is not None - and isinstance( - cfg.model.w2v_args.task.eval_wer_config.print_alignment, bool - ) - ): - cfg.model.w2v_args.task.eval_wer_config.print_alignment = "hard" - - return state - - -def prune_state_dict(state_dict, model_cfg: Optional[DictConfig]): - """Prune the given state_dict if desired for LayerDrop - (https://arxiv.org/abs/1909.11556). - - Training with LayerDrop allows models to be robust to pruning at inference - time. This function prunes state_dict to allow smaller models to be loaded - from a larger model and re-maps the existing state_dict for this to occur. - - It's called by functions that load models from checkpoints and does not - need to be called directly. 
- """ - arch = None - if model_cfg is not None: - arch = ( - model_cfg._name - if isinstance(model_cfg, DictConfig) - else getattr(model_cfg, "arch", None) - ) - - if not model_cfg or arch is None or arch == "ptt_transformer": - # args should not be none, but don't crash if it is. - return state_dict - - encoder_layers_to_keep = getattr(model_cfg, "encoder_layers_to_keep", None) - decoder_layers_to_keep = getattr(model_cfg, "decoder_layers_to_keep", None) - - if not encoder_layers_to_keep and not decoder_layers_to_keep: - return state_dict - - # apply pruning - logger.info( - "Pruning model to specified layer configuration - this works best if the model was trained with LayerDrop" - ) - - def create_pruning_pass(layers_to_keep, layer_name): - keep_layers = sorted( - int(layer_string) for layer_string in layers_to_keep.split(",") - ) - mapping_dict = {} - for i in range(len(keep_layers)): - mapping_dict[str(keep_layers[i])] = str(i) - - regex = re.compile(r"^{layer}.*\.layers\.(\d+)".format(layer=layer_name)) - return {"substitution_regex": regex, "mapping_dict": mapping_dict} - - pruning_passes = [] - if encoder_layers_to_keep: - pruning_passes.append(create_pruning_pass(encoder_layers_to_keep, "encoder")) - if decoder_layers_to_keep: - pruning_passes.append(create_pruning_pass(decoder_layers_to_keep, "decoder")) - - new_state_dict = {} - for layer_name in state_dict.keys(): - match = re.search(r"\.layers\.(\d+)\.", layer_name) - # if layer has no number in it, it is a supporting layer, such as an - # embedding - if not match: - new_state_dict[layer_name] = state_dict[layer_name] - continue - - # otherwise, layer should be pruned. - original_layer_number = match.group(1) - # figure out which mapping dict to replace from - for pruning_pass in pruning_passes: - if original_layer_number in pruning_pass["mapping_dict"] and pruning_pass[ - "substitution_regex" - ].search(layer_name): - new_layer_number = pruning_pass["mapping_dict"][original_layer_number] - substitution_match = pruning_pass["substitution_regex"].search( - layer_name - ) - new_state_key = ( - layer_name[: substitution_match.start(1)] - + new_layer_number - + layer_name[substitution_match.end(1) :] - ) - new_state_dict[new_state_key] = state_dict[layer_name] - - # Since layers are now pruned, *_layers_to_keep are no longer needed. - # This is more of "It would make it work fix" rather than a proper fix. - if isinstance(model_cfg, DictConfig): - context = open_dict(model_cfg) - else: - context = contextlib.ExitStack() - with context: - if hasattr(model_cfg, "encoder_layers_to_keep"): - model_cfg.encoder_layers_to_keep = None - if hasattr(model_cfg, "decoder_layers_to_keep"): - model_cfg.decoder_layers_to_keep = None - - return new_state_dict - - -def load_pretrained_component_from_model( - component: Union[FairseqEncoder, FairseqDecoder], - checkpoint: str, - strict: bool = True, -): - """ - Load a pretrained FairseqEncoder or FairseqDecoder from checkpoint into the - provided `component` object. If state_dict fails to load, there may be a - mismatch in the architecture of the corresponding `component` found in the - `checkpoint` file. 
- """ - if not PathManager.exists(checkpoint): - raise IOError("Model file not found: {}".format(checkpoint)) - state = load_checkpoint_to_cpu(checkpoint) - if isinstance(component, FairseqEncoder): - component_type = "encoder" - elif isinstance(component, FairseqDecoder): - component_type = "decoder" - else: - raise ValueError( - "component to load must be either a FairseqEncoder or " - "FairseqDecoder. Loading other component types are not supported." - ) - component_state_dict = OrderedDict() - for key in state["model"].keys(): - if key.startswith(component_type): - # encoder.input_layers.0.0.weight --> input_layers.0.0.weight - component_subkey = key[len(component_type) + 1 :] - component_state_dict[component_subkey] = state["model"][key] - component.load_state_dict(component_state_dict, strict=strict) - return component - - -def verify_checkpoint_directory(save_dir: str) -> None: - if not os.path.exists(save_dir): - os.makedirs(save_dir, exist_ok=True) - temp_file_path = os.path.join(save_dir, "dummy") - try: - with open(temp_file_path, "w"): - pass - except OSError as e: - logger.warning( - "Unable to access checkpoint save directory: {}".format(save_dir) - ) - raise e - else: - os.remove(temp_file_path) - - -def save_ema_as_checkpoint(src_path, dst_path): - state = load_ema_from_checkpoint(src_path) - torch_persistent_save(state, dst_path) - - -def load_ema_from_checkpoint(fpath): - """Loads exponential moving averaged (EMA) checkpoint from input and - returns a model with ema weights. - - Args: - fpath: A string path of checkpoint to load from. - - Returns: - A dict of string keys mapping to various values. The 'model' key - from the returned dict should correspond to an OrderedDict mapping - string parameter names to torch Tensors. - """ - params_dict = collections.OrderedDict() - new_state = None - - with PathManager.open(fpath, "rb") as f: - new_state = torch.load( - f, - map_location=( - lambda s, _: torch.serialization.default_restore_location(s, "cpu") - ), - ) - - # EMA model is stored in a separate "extra state" - model_params = new_state["extra_state"]["ema"] - - for key in list(model_params.keys()): - p = model_params[key] - if isinstance(p, torch.HalfTensor): - p = p.float() - if key not in params_dict: - params_dict[key] = p.clone() - # NOTE: clone() is needed in case of p is a shared parameter - else: - raise ValueError("Key {} is repeated in EMA model params.".format(key)) - - if len(params_dict) == 0: - raise ValueError( - f"Input checkpoint path '{fpath}' does not contain " - "ema model weights, is this model trained with EMA?" - ) - - new_state["model"] = params_dict - return new_state diff --git a/spaces/at2507/SM_NLP_RecoSys/Data/Mentor_interviews/Haebichan Jung.html b/spaces/at2507/SM_NLP_RecoSys/Data/Mentor_interviews/Haebichan Jung.html deleted file mode 100644 index 97981659ec173054fba14292566e0e2fdb210b3d..0000000000000000000000000000000000000000 --- a/spaces/at2507/SM_NLP_RecoSys/Data/Mentor_interviews/Haebichan Jung.html +++ /dev/null @@ -1,134 +0,0 @@ - - - - Haebichan Jung - - - - -
        -

        Haebichan Jung

        - -
        -

        Application

        As a preface, I met and worked with Jeremy Harris at TowardsDataScience.com (TDS) while building the YouTube/podcast platform for Ludo Benistant, the TDS founder. Edouard approached me many years ago about becoming a mentor. The role resonated with me then, since I broke into the data science industry after working as an English teacher at a public school. Since my career transition, I’ve been passionate about mentoring others on how to follow a similar path and succeed in landing a data science job.

        To dive deeper into my mentorship experience, I’ve mentored 5+ students from UC Berkeley (my alma mater) over the last 3 years, helping them secure data science roles through the university’s internal data alumni program. Furthermore, I have written 5+ highly successful articles on TowardsDataScience (TDS) about finding a job as a Data Scientist (one of my articles has been read over 100,000 times).

        While working as a project lead at TDS, I have also interviewed many leaders in data science (such as the head of Data Science at Patreon) about landing a data science job. Many readers and audience members have reached out to tell me that my content played an important role in helping them perform well during interviews. 

        To summarize, I have two reasons for wanting to become a mentor at SharpestMinds: 

        1. I have a deep passion for, and in-depth knowledge of, data science mentorship. I transitioned into Data Science from a non-data-related field, and thus have frontline experience of how to break into the data industry: from how to write resumes and how to perform well during interviews, to which pitfalls to avoid. 

        2. I deeply align with the company’s mission of nurturing junior and aspiring Data Scientists and helping them succeed in landing a dream job. It’s what I have been doing on the side out of passion, especially at TDS and through my alma mater’s programs. 

        I am very excited to take this passion to the next level by becoming a mentor at SharpestMinds. 


        Interview


        How did you hear about SM?
        • Worked with Jer a bit at TDS
        • Met the team at TMLS
        • Now has the bandwidth to actually do it

        Mentorship experience?
        • Mentored 4-5 people at UC Berkeley through a non-profit
          • resume workshops
          • 6-8 weeks
          • coffee chats
          • small projects (Tableau visualization)
          • Interview steps and advice
          • focusing on the seniors
        • Been mentoring on the side / community building etc.

        What are beginners lacking?
        • Technical interviews
        • A lot of bad information on the internet
          • TDS had a lot of this problem
          • A lot of emphasis on projects (quantity), research, certifications
          • Not enough emphasis on interviewing
        And how can you help?
        • Already have built a lot of resources (e.g. on YouTube) interviewing leaders in the industry
        • Leading ppl to the right resources
        • How do you shorten that step
        • How to structure your resume
        • Confidence building! 
        -
        -


        Questions about SM?
        • ISA - is that something prepared for us? How much do I need to do?
        • How does the pairing work? 
        • Do you have stats on what the average offer looks like (in terms of length, % etc.)?


        -
        - -
        - - - \ No newline at end of file diff --git a/spaces/awacke1/ASRSpeechRecognition1/app.py b/spaces/awacke1/ASRSpeechRecognition1/app.py deleted file mode 100644 index 1ce0aef68b42e23dc404fa2776e6c4160164939c..0000000000000000000000000000000000000000 --- a/spaces/awacke1/ASRSpeechRecognition1/app.py +++ /dev/null @@ -1,138 +0,0 @@ -import gradio as gr -import torch -import time -import librosa -import soundfile -import nemo.collections.asr as nemo_asr -import tempfile -import os -import uuid - -from transformers import BlenderbotTokenizer, BlenderbotForConditionalGeneration -import torch - -# PersistDataset ----- -import os -import csv -import gradio as gr -from gradio import inputs, outputs -import huggingface_hub -from huggingface_hub import Repository, hf_hub_download, upload_file -from datetime import datetime - -# --------------------------------------------- -# Dataset and Token links - change awacke1 to your own HF id, and add a HF_TOKEN copy to your repo for write permissions -# This should allow you to save your results to your own Dataset hosted on HF. - -DATASET_REPO_URL = "https://huggingface.co/datasets/awacke1/ASRLive.csv" -DATASET_REPO_ID = "awacke1/ASRLive.csv" -DATA_FILENAME = "ASRLive.csv" -DATA_FILE = os.path.join("data", DATA_FILENAME) -HF_TOKEN = os.environ.get("HF_TOKEN") - -PersistToDataset = False -#PersistToDataset = True # uncomment to save inference output to ASRLive.csv dataset - -if PersistToDataset: - try: - hf_hub_download( - repo_id=DATASET_REPO_ID, - filename=DATA_FILENAME, - cache_dir=DATA_DIRNAME, - force_filename=DATA_FILENAME - ) - except: - print("file not found") - repo = Repository( - local_dir="data", clone_from=DATASET_REPO_URL, use_auth_token=HF_TOKEN - ) - -def store_message(name: str, message: str): - if name and message: - with open(DATA_FILE, "a") as csvfile: - writer = csv.DictWriter(csvfile, fieldnames=["name", "message", "time"]) - writer.writerow( - {"name": name.strip(), "message": message.strip(), "time": str(datetime.now())} - ) - # uncomment line below to begin saving - - commit_url = repo.push_to_hub() - ret = "" - with open(DATA_FILE, "r") as csvfile: - reader = csv.DictReader(csvfile) - - for row in reader: - ret += row - ret += "\r\n" - return ret - -# main ------------------------- -mname = "facebook/blenderbot-400M-distill" -model = BlenderbotForConditionalGeneration.from_pretrained(mname) -tokenizer = BlenderbotTokenizer.from_pretrained(mname) - -def take_last_tokens(inputs, note_history, history): - filterTokenCount = 128 # filter last 128 tokens - if inputs['input_ids'].shape[1] > filterTokenCount: - inputs['input_ids'] = torch.tensor([inputs['input_ids'][0][-filterTokenCount:].tolist()]) - inputs['attention_mask'] = torch.tensor([inputs['attention_mask'][0][-filterTokenCount:].tolist()]) - note_history = [' '.join(note_history[0].split(' ')[2:])] - history = history[1:] - return inputs, note_history, history - -def add_note_to_history(note, note_history): - note_history.append(note) - note_history = ' '.join(note_history) - return [note_history] - - - -SAMPLE_RATE = 16000 -model = nemo_asr.models.EncDecRNNTBPEModel.from_pretrained("nvidia/stt_en_conformer_transducer_xlarge") -model.change_decoding_strategy(None) -model.eval() - -def process_audio_file(file): - data, sr = librosa.load(file) - if sr != SAMPLE_RATE: - data = librosa.resample(data, orig_sr=sr, target_sr=SAMPLE_RATE) - data = librosa.to_mono(data) - return data - - -def transcribe(audio, state = ""): - if state is None: - state = "" - audio_data = 
process_audio_file(audio) - with tempfile.TemporaryDirectory() as tmpdir: - audio_path = os.path.join(tmpdir, f'audio_{uuid.uuid4()}.wav') - soundfile.write(audio_path, audio_data, SAMPLE_RATE) - transcriptions = model.transcribe([audio_path]) - if type(transcriptions) == tuple and len(transcriptions) == 2: - transcriptions = transcriptions[0] - transcriptions = transcriptions[0] - - if PersistToDataset: - ret = store_message(transcriptions, state) # Save to dataset - uncomment to store into a dataset - hint you will need your HF_TOKEN - state = state + transcriptions + " " + ret - else: - state = state + transcriptions - return state, state - -gr.Interface( - fn=transcribe, - inputs=[ - gr.Audio(source="microphone", type='filepath', streaming=True), - "state", - ], - outputs=[ - "textbox", - "state" - ], - layout="horizontal", - theme="huggingface", - title="🗣️ASR-Live🧠Memory💾", - description=f"Live Automatic Speech Recognition (ASR) with Memory💾 Dataset.", - allow_flagging='never', - live=True, - article=f"Result Output Saved to Memory💾 Dataset: [{DATASET_REPO_URL}]({DATASET_REPO_URL})" -).launch(debug=True) \ No newline at end of file diff --git a/spaces/awacke1/ContextQuestionAnswerNLP/app.py b/spaces/awacke1/ContextQuestionAnswerNLP/app.py deleted file mode 100644 index 255664e15b6c173d9fe8f9f33160a406d6f7f53f..0000000000000000000000000000000000000000 --- a/spaces/awacke1/ContextQuestionAnswerNLP/app.py +++ /dev/null @@ -1,42 +0,0 @@ -import gradio as gr -from transformers import pipeline - - -def SetContext(textfile): - context = "" - with open(textfile, 'r') as file: - context = file.read() - return context - -context = SetContext('WritingCarePlans.txt') # open text file fill textbox -question = "What should be documented in a care plan?" -title = "Transformers - Sentence to Paragraph - For Mindfulness" -examples = [ - ["Break the cycle of stress and anxiety"], - ["Feel calm in stressful situations"], - ["Deal with work pressure"], - ["Learn to reduce feelings of overwhelmed"] -] -model1 = gr.Interface.load("huggingface/deepset/roberta-base-squad2") - -def f1(inputs): - #query =[[inputs[0]],[inputs[1]] - examples=[['My name is Sarah and I live in London','Where do I live?']] - return model1(examples) - - -demo = gr.Blocks() -with demo: - inputs=[gr.inputs.Textbox(lines=40, default=context, label="Context paragraph"),gr.inputs.Textbox(lines=10, default=question, label="Question")] - out1=gr.Textbox() - b1 = gr.Button("roberta") - - #inp = gr.Textbox(placeholder="What is a care plan for?") - - inputs[1].change(fn=f1, - inputs=inputs, - outputs=out1) - - b1.click(fn=f1, inputs=inputs, outputs=out1) - -demo.launch() \ No newline at end of file diff --git a/spaces/awacke1/GPU-Memory-Detector-Aframe/style.css b/spaces/awacke1/GPU-Memory-Detector-Aframe/style.css deleted file mode 100644 index 114adf441e9032febb46bc056b2a8bb651075f0d..0000000000000000000000000000000000000000 --- a/spaces/awacke1/GPU-Memory-Detector-Aframe/style.css +++ /dev/null @@ -1,28 +0,0 @@ -body { - padding: 2rem; - font-family: -apple-system, BlinkMacSystemFont, "Arial", sans-serif; -} - -h1 { - font-size: 16px; - margin-top: 0; -} - -p { - color: rgb(107, 114, 128); - font-size: 15px; - margin-bottom: 10px; - margin-top: 5px; -} - -.card { - max-width: 620px; - margin: 0 auto; - padding: 16px; - border: 1px solid lightgray; - border-radius: 16px; -} - -.card p:last-child { - margin-bottom: 0; -} diff --git a/spaces/awacke1/GradioFlanT5BloomAndTaskSource/README.md 
b/spaces/awacke1/GradioFlanT5BloomAndTaskSource/README.md deleted file mode 100644 index def2d7969ffa89202783851be9f7dca68f20506e..0000000000000000000000000000000000000000 --- a/spaces/awacke1/GradioFlanT5BloomAndTaskSource/README.md +++ /dev/null @@ -1,33 +0,0 @@ ---- -title: Flan-T5-Large-Demo -emoji: 📊 -colorFrom: green -colorTo: gray -sdk: gradio -sdk_version: 3.18.0 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference - -The Flan-T5 model is a variant of the T5 (Text-to-Text Transfer Transformer) model, which is a large-scale neural network architecture designed for a wide range of natural language processing tasks, including language translation, question-answering, and summarization, among others. - -The Flan-T5 model was trained on a combination of several large-scale datasets, including: - -C4: The Colossal Clean Crawled Corpus - a large, multilingual dataset of web pages collected by crawling the internet, containing over 700GB of text in more than 100 languages. - -Wikipedia: A dataset of text extracted from Wikipedia articles in various languages. - -Common Crawl News: A dataset of news articles collected from various news sources. - -BooksCorpus: A large dataset of text passages extracted from over 11,000 books, containing over 800 million words. - -OpenWebText: A dataset of text scraped from web pages, similar to C4. - -WebText: A smaller dataset of web pages containing around 40GB of text. - -English-language books and articles from the JSTOR database. - -These datasets were preprocessed and used to train the Flan-T5 model on a range of natural language processing tasks, including text classification, question-answering, and summarization, among others. The Flan-T5 model has been fine-tuned on various downstream tasks, including sentiment analysis, summarization, and language translation. 
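As a minimal sketch (not part of the original Space), assuming the demo wraps the public google/flan-t5-large checkpoint through the transformers pipeline, a call might look like this:

```python
from transformers import pipeline

# Assumed checkpoint: the Space title suggests Flan-T5-Large; Bloom or another
# size could be swapped in the same way.
generator = pipeline("text2text-generation", model="google/flan-t5-large")

# The pipeline returns a list of dicts with a "generated_text" field.
output = generator("Translate to German: How old are you?", max_new_tokens=32)
print(output[0]["generated_text"])
```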
\ No newline at end of file diff --git a/spaces/badayvedat/AudioSep/models/CLAP/open_clip/pretrained.py b/spaces/badayvedat/AudioSep/models/CLAP/open_clip/pretrained.py deleted file mode 100644 index e211d8b5b59320a599e62605f1dee6199f317253..0000000000000000000000000000000000000000 --- a/spaces/badayvedat/AudioSep/models/CLAP/open_clip/pretrained.py +++ /dev/null @@ -1,167 +0,0 @@ -import hashlib -import os -import urllib -import warnings - -from tqdm import tqdm - -_RN50 = dict( - openai="https://openaipublic.azureedge.net/clip/models/afeb0e10f9e5a86da6080e35cf09123aca3b358a0c3e3b6c78a7b63bc04b6762/RN50.pt", - yfcc15m="https://github.com/mlfoundations/open_clip/releases/download/v0.2-weights/rn50-quickgelu-yfcc15m-455df137.pt", - cc12m="https://github.com/mlfoundations/open_clip/releases/download/v0.2-weights/rn50-quickgelu-cc12m-f000538c.pt", -) - -_RN50_quickgelu = dict( - openai="https://openaipublic.azureedge.net/clip/models/afeb0e10f9e5a86da6080e35cf09123aca3b358a0c3e3b6c78a7b63bc04b6762/RN50.pt", - yfcc15m="https://github.com/mlfoundations/open_clip/releases/download/v0.2-weights/rn50-quickgelu-yfcc15m-455df137.pt", - cc12m="https://github.com/mlfoundations/open_clip/releases/download/v0.2-weights/rn50-quickgelu-cc12m-f000538c.pt", -) - -_RN101 = dict( - openai="https://openaipublic.azureedge.net/clip/models/8fa8567bab74a42d41c5915025a8e4538c3bdbe8804a470a72f30b0d94fab599/RN101.pt", - yfcc15m="https://github.com/mlfoundations/open_clip/releases/download/v0.2-weights/rn101-quickgelu-yfcc15m-3e04b30e.pt", -) - -_RN101_quickgelu = dict( - openai="https://openaipublic.azureedge.net/clip/models/8fa8567bab74a42d41c5915025a8e4538c3bdbe8804a470a72f30b0d94fab599/RN101.pt", - yfcc15m="https://github.com/mlfoundations/open_clip/releases/download/v0.2-weights/rn101-quickgelu-yfcc15m-3e04b30e.pt", -) - -_RN50x4 = dict( - openai="https://openaipublic.azureedge.net/clip/models/7e526bd135e493cef0776de27d5f42653e6b4c8bf9e0f653bb11773263205fdd/RN50x4.pt", -) - -_RN50x16 = dict( - openai="https://openaipublic.azureedge.net/clip/models/52378b407f34354e150460fe41077663dd5b39c54cd0bfd2b27167a4a06ec9aa/RN50x16.pt", -) - -_RN50x64 = dict( - openai="https://openaipublic.azureedge.net/clip/models/be1cfb55d75a9666199fb2206c106743da0f6468c9d327f3e0d0a543a9919d9c/RN50x64.pt", -) - -_VITB32 = dict( - openai="https://openaipublic.azureedge.net/clip/models/40d365715913c9da98579312b702a82c18be219cc2a73407c4526f58eba950af/ViT-B-32.pt", - laion400m_e31="https://github.com/mlfoundations/open_clip/releases/download/v0.2-weights/vit_b_32-quickgelu-laion400m_e31-d867053b.pt", - laion400m_e32="https://github.com/mlfoundations/open_clip/releases/download/v0.2-weights/vit_b_32-quickgelu-laion400m_e32-46683a32.pt", - laion400m_avg="https://github.com/mlfoundations/open_clip/releases/download/v0.2-weights/vit_b_32-quickgelu-laion400m_avg-8a00ab3c.pt", -) - -_VITB32_quickgelu = dict( - openai="https://openaipublic.azureedge.net/clip/models/40d365715913c9da98579312b702a82c18be219cc2a73407c4526f58eba950af/ViT-B-32.pt", - laion400m_e31="https://github.com/mlfoundations/open_clip/releases/download/v0.2-weights/vit_b_32-quickgelu-laion400m_e31-d867053b.pt", - laion400m_e32="https://github.com/mlfoundations/open_clip/releases/download/v0.2-weights/vit_b_32-quickgelu-laion400m_e32-46683a32.pt", - laion400m_avg="https://github.com/mlfoundations/open_clip/releases/download/v0.2-weights/vit_b_32-quickgelu-laion400m_avg-8a00ab3c.pt", -) - -_VITB16 = dict( - 
openai="https://openaipublic.azureedge.net/clip/models/5806e77cd80f8b59890b7e101eabd078d9fb84e6937f9e85e4ecb61988df416f/ViT-B-16.pt", -) - -_VITL14 = dict( - openai="https://openaipublic.azureedge.net/clip/models/b8cca3fd41ae0c99ba7e8951adf17d267cdb84cd88be6f7c2e0eca1737a03836/ViT-L-14.pt", -) - -_PRETRAINED = { - "RN50": _RN50, - "RN50-quickgelu": _RN50_quickgelu, - "RN101": _RN101, - "RN101-quickgelu": _RN101_quickgelu, - "RN50x4": _RN50x4, - "RN50x16": _RN50x16, - "ViT-B-32": _VITB32, - "ViT-B-32-quickgelu": _VITB32_quickgelu, - "ViT-B-16": _VITB16, - "ViT-L-14": _VITL14, -} - - -def list_pretrained(as_str: bool = False): - """returns list of pretrained models - Returns a tuple (model_name, pretrain_tag) by default or 'name:tag' if as_str == True - """ - return [ - ":".join([k, t]) if as_str else (k, t) - for k in _PRETRAINED.keys() - for t in _PRETRAINED[k].keys() - ] - - -def list_pretrained_tag_models(tag: str): - """return all models having the specified pretrain tag""" - models = [] - for k in _PRETRAINED.keys(): - if tag in _PRETRAINED[k]: - models.append(k) - return models - - -def list_pretrained_model_tags(model: str): - """return all pretrain tags for the specified model architecture""" - tags = [] - if model in _PRETRAINED: - tags.extend(_PRETRAINED[model].keys()) - return tags - - -def get_pretrained_url(model: str, tag: str): - if model not in _PRETRAINED: - return "" - model_pretrained = _PRETRAINED[model] - if tag not in model_pretrained: - return "" - return model_pretrained[tag] - - -def download_pretrained(url: str, root: str = os.path.expanduser("~/.cache/clip")): - os.makedirs(root, exist_ok=True) - filename = os.path.basename(url) - - if "openaipublic" in url: - expected_sha256 = url.split("/")[-2] - else: - expected_sha256 = "" - - download_target = os.path.join(root, filename) - - if os.path.exists(download_target) and not os.path.isfile(download_target): - raise RuntimeError(f"{download_target} exists and is not a regular file") - - if os.path.isfile(download_target): - if expected_sha256: - if ( - hashlib.sha256(open(download_target, "rb").read()).hexdigest() - == expected_sha256 - ): - return download_target - else: - warnings.warn( - f"{download_target} exists, but the SHA256 checksum does not match; re-downloading the file" - ) - else: - return download_target - - with urllib.request.urlopen(url) as source, open(download_target, "wb") as output: - with tqdm( - total=int(source.info().get("Content-Length")), - ncols=80, - unit="iB", - unit_scale=True, - ) as loop: - while True: - buffer = source.read(8192) - if not buffer: - break - - output.write(buffer) - loop.update(len(buffer)) - - if ( - expected_sha256 - and hashlib.sha256(open(download_target, "rb").read()).hexdigest() - != expected_sha256 - ): - raise RuntimeError( - f"Model has been downloaded but the SHA256 checksum does not not match" - ) - - return download_target diff --git a/spaces/badongtakla/ithaca/ithaca/util/region_names.py b/spaces/badongtakla/ithaca/ithaca/util/region_names.py deleted file mode 100644 index a0fd36e9befccb32d2d2bf395e98324558fd6543..0000000000000000000000000000000000000000 --- a/spaces/badongtakla/ithaca/ithaca/util/region_names.py +++ /dev/null @@ -1,44 +0,0 @@ -# Copyright 2021 the Ithaca Authors -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. 
-# You may obtain a copy of the License at -# -# https://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -"""Subregion mapping used to train the model. - -The subregion IDs originate from the I.PHI generator and may be subject to -change in future versions of the PHI dataset. -""" - - -def load_region_maps(region_file): - """Extracts creates a map from PHI region id to a continuous region id.""" - region_ids = [] # Used mainly for eval - region_ids_inv = {} # Used in data loader - region_names_inv = {} # Used in eval - for l in region_file.read().strip().split('\n'): - tok_name_id, _ = l.strip().split(';') # second field is frequency, unused - region_name, region_id = tok_name_id.split('_') - region_name = region_name.strip() - region_id = int(region_id) - # Ignore unknown regions: - if ((region_name == 'Unknown Provenances' and region_id == 884) or - (region_name == 'unspecified subregion' and region_id == 885) or - (region_name == 'unspecified subregion' and region_id == 1439)): - continue - region_ids.append(region_id) - region_ids_inv[region_id] = len(region_ids_inv) - region_names_inv[len(region_names_inv)] = region_name - - return { - 'ids': region_ids, - 'ids_inv': region_ids_inv, - 'names_inv': region_names_inv - } diff --git a/spaces/banana-projects/web3d/node_modules/three/examples/js/shaders/FresnelShader.js b/spaces/banana-projects/web3d/node_modules/three/examples/js/shaders/FresnelShader.js deleted file mode 100644 index e7639a723936d05c6358a47c7db7fc679be7821c..0000000000000000000000000000000000000000 --- a/spaces/banana-projects/web3d/node_modules/three/examples/js/shaders/FresnelShader.js +++ /dev/null @@ -1,74 +0,0 @@ -/** - * @author alteredq / http://alteredqualia.com/ - * - * Based on Nvidia Cg tutorial - */ - -THREE.FresnelShader = { - - uniforms: { - - "mRefractionRatio": { value: 1.02 }, - "mFresnelBias": { value: 0.1 }, - "mFresnelPower": { value: 2.0 }, - "mFresnelScale": { value: 1.0 }, - "tCube": { value: null } - - }, - - vertexShader: [ - - "uniform float mRefractionRatio;", - "uniform float mFresnelBias;", - "uniform float mFresnelScale;", - "uniform float mFresnelPower;", - - "varying vec3 vReflect;", - "varying vec3 vRefract[3];", - "varying float vReflectionFactor;", - - "void main() {", - - "vec4 mvPosition = modelViewMatrix * vec4( position, 1.0 );", - "vec4 worldPosition = modelMatrix * vec4( position, 1.0 );", - - "vec3 worldNormal = normalize( mat3( modelMatrix[0].xyz, modelMatrix[1].xyz, modelMatrix[2].xyz ) * normal );", - - "vec3 I = worldPosition.xyz - cameraPosition;", - - "vReflect = reflect( I, worldNormal );", - "vRefract[0] = refract( normalize( I ), worldNormal, mRefractionRatio );", - "vRefract[1] = refract( normalize( I ), worldNormal, mRefractionRatio * 0.99 );", - "vRefract[2] = refract( normalize( I ), worldNormal, mRefractionRatio * 0.98 );", - "vReflectionFactor = mFresnelBias + mFresnelScale * pow( 1.0 + dot( normalize( I ), worldNormal ), mFresnelPower );", - - "gl_Position = projectionMatrix * mvPosition;", - - "}" - - ].join( "\n" ), - - fragmentShader: [ - - "uniform samplerCube tCube;", - - "varying vec3 vReflect;", - "varying vec3 vRefract[3];", - "varying float vReflectionFactor;", - - "void main() {", - - "vec4 
reflectedColor = textureCube( tCube, vec3( -vReflect.x, vReflect.yz ) );", - "vec4 refractedColor = vec4( 1.0 );", - - "refractedColor.r = textureCube( tCube, vec3( -vRefract[0].x, vRefract[0].yz ) ).r;", - "refractedColor.g = textureCube( tCube, vec3( -vRefract[1].x, vRefract[1].yz ) ).g;", - "refractedColor.b = textureCube( tCube, vec3( -vRefract[2].x, vRefract[2].yz ) ).b;", - - "gl_FragColor = mix( refractedColor, reflectedColor, clamp( vReflectionFactor, 0.0, 1.0 ) );", - - "}" - - ].join( "\n" ) - -}; diff --git a/spaces/barani/ControlNet/image_segmentor.py b/spaces/barani/ControlNet/image_segmentor.py deleted file mode 100644 index 46a972e2d29ad9f40902b484585b2d574de8c0c3..0000000000000000000000000000000000000000 --- a/spaces/barani/ControlNet/image_segmentor.py +++ /dev/null @@ -1,39 +0,0 @@ -import cv2 -import numpy as np -import PIL.Image -import torch -from controlnet_aux.util import HWC3, ade_palette -from transformers import AutoImageProcessor, UperNetForSemanticSegmentation - -from cv_utils import resize_image - - -class ImageSegmentor: - def __init__(self): - self.image_processor = AutoImageProcessor.from_pretrained( - 'openmmlab/upernet-convnext-small') - self.image_segmentor = UperNetForSemanticSegmentation.from_pretrained( - 'openmmlab/upernet-convnext-small') - - @torch.inference_mode() - def __call__(self, image: np.ndarray, **kwargs) -> PIL.Image.Image: - detect_resolution = kwargs.pop('detect_resolution', 512) - image_resolution = kwargs.pop('image_resolution', 512) - image = HWC3(image) - image = resize_image(image, resolution=detect_resolution) - image = PIL.Image.fromarray(image) - - pixel_values = self.image_processor(image, - return_tensors='pt').pixel_values - outputs = self.image_segmentor(pixel_values) - seg = self.image_processor.post_process_semantic_segmentation( - outputs, target_sizes=[image.size[::-1]])[0] - color_seg = np.zeros((seg.shape[0], seg.shape[1], 3), dtype=np.uint8) - for label, color in enumerate(ade_palette()): - color_seg[seg == label, :] = color - color_seg = color_seg.astype(np.uint8) - - color_seg = resize_image(color_seg, - resolution=image_resolution, - interpolation=cv2.INTER_NEAREST) - return PIL.Image.fromarray(color_seg) diff --git a/spaces/bioriAsaeru/text-to-voice/ETABS 2013 Crack Keygen Serial How to Download and Install the Ultimate Software for Structural Analysis and Design.md b/spaces/bioriAsaeru/text-to-voice/ETABS 2013 Crack Keygen Serial How to Download and Install the Ultimate Software for Structural Analysis and Design.md deleted file mode 100644 index 4526468f172bf8fb9e60a6895306413d413634b4..0000000000000000000000000000000000000000 --- a/spaces/bioriAsaeru/text-to-voice/ETABS 2013 Crack Keygen Serial How to Download and Install the Ultimate Software for Structural Analysis and Design.md +++ /dev/null @@ -1,6 +0,0 @@ -

        etabs 2013 crack keygen serial


        Download →→→ https://urloso.com/2uyPnC



        - - aaccfb2cb3
        -
        -
        -

        diff --git a/spaces/bioriAsaeru/text-to-voice/Elder Lich Saga Awakening Free [2021] 11 Sterne Reisemobil Vo.md b/spaces/bioriAsaeru/text-to-voice/Elder Lich Saga Awakening Free [2021] 11 Sterne Reisemobil Vo.md deleted file mode 100644 index 6f74d59df4cf552351bd8485dbd84c6f5a93e4e1..0000000000000000000000000000000000000000 --- a/spaces/bioriAsaeru/text-to-voice/Elder Lich Saga Awakening Free [2021] 11 Sterne Reisemobil Vo.md +++ /dev/null @@ -1,6 +0,0 @@ -

        Elder Lich Saga: Awakening Free 11 sterne reisemobil vo


        Download Zip 🗸 https://urloso.com/2uyRWH



        -
        - aaccfb2cb3
        -
        -
        -

        diff --git a/spaces/bioriAsaeru/text-to-voice/Flexistarter 10 Free Tips and Tricks to Boost Your Creativity.md b/spaces/bioriAsaeru/text-to-voice/Flexistarter 10 Free Tips and Tricks to Boost Your Creativity.md deleted file mode 100644 index 9af9beaff450f7838fc0f50bad4b4018be5e2df3..0000000000000000000000000000000000000000 --- a/spaces/bioriAsaeru/text-to-voice/Flexistarter 10 Free Tips and Tricks to Boost Your Creativity.md +++ /dev/null @@ -1,9 +0,0 @@ -
        -

        There was a download of FlexiSTARTER 10.0 on the developer's website when we last checked. We cannot confirm if there is a free download of this software available. This program is a product of Cutterpros. You can set up FlexiSTARTER on Windows XP/Vista/7 32-bit.

        -

        The program is included in Photo & Graphics Tools. The following versions: 10.0 and 1.0 are the most frequently downloaded ones by the program users. According to the results of the Google Safe Browsing check, the developer's site is safe. Despite this, we recommend checking the downloaded files with any free antivirus software. The program's installer file is commonly found as App.exe.

        -

        Flexistarter 10 Free


        Download Zip 🗸 https://urloso.com/2uyRet



        -

i really appreciate all your help. seems like you are a veteran. is flexi starter 10 good, bad, ok? all i have messed with is signblazer, which isn't terrible but it's free, and sure cuts a lot pro, which came with my cutter. which is the best in your opinion? a guy that i know is gonna give me corel draw or something like that if he can find the disk. give me your opinion on that also if you don't mind

        -

        FlexiSIGNPRO 8.1v1.exe and App.exe are the most used setup packages for this program. The download link was found to be safe by our antivirus system. Flexisign pro software free download; Flexisign pro 8.1 free download; Flexisign 7.5 free download software. FlexiSign Pro 8.1 includes graphic design, color tracing, and text serialization.

        -

If you are looking on the internet for a FlexiSign Pro 10.5 Full Version Free Download, you have come to the right place: this post shares FlexiSign Pro 10.5 free of charge for 32-bit and 64-bit Windows as a standalone offline download. FlexiSign Pro 10.5 is a robust tool to create logos and vector graphics. FlexiSign Pro is a versatile program that allows consumers to generate excellent vector graphics and logos. FlexiSign Pro 10.5 for Windows 10, 8, 8.1, 7 is a powerful application. This is a great tool for digital design fans.

        aaccfb2cb3
        -
        -
        \ No newline at end of file diff --git a/spaces/bioriAsaeru/text-to-voice/Freemake Video Downloader Crack Cocaine _HOT_.md b/spaces/bioriAsaeru/text-to-voice/Freemake Video Downloader Crack Cocaine _HOT_.md deleted file mode 100644 index e9a1a4b20a96c6c02821277e1bec4c33f714322c..0000000000000000000000000000000000000000 --- a/spaces/bioriAsaeru/text-to-voice/Freemake Video Downloader Crack Cocaine _HOT_.md +++ /dev/null @@ -1,6 +0,0 @@ -

        freemake video downloader crack cocaine


        DOWNLOAD →→→ https://urloso.com/2uyP1T



        - - aaccfb2cb3
        -
        -
        -

        diff --git a/spaces/bioriAsaeru/text-to-voice/Karvalo Kannada Book TOP Free Download.md b/spaces/bioriAsaeru/text-to-voice/Karvalo Kannada Book TOP Free Download.md deleted file mode 100644 index ff1fdd0961a0403c9376cacd7941b45e61110c22..0000000000000000000000000000000000000000 --- a/spaces/bioriAsaeru/text-to-voice/Karvalo Kannada Book TOP Free Download.md +++ /dev/null @@ -1,42 +0,0 @@ - -

        Karvalo Kannada Book Free Download: A Guide to Enjoy the Novel by Poornachandra Tejaswi

        - -

        Karvalo is a novel by Poornachandra Tejaswi, a renowned Kannada writer who wrote novels, short stories, non-fiction and poetry. The novel won the 'most creative novel of the year' award from the Sahitya Akademi in 1980. The novel is a blend of science, nature and philosophy, set in a rural village in Karnataka.

        - -

        The novel revolves around a farmer who meets Karvalo, a scientist in search of a rare flying lizard. The farmer also befriends Mandanna, a local cowboy who has a keen interest in nature and wildlife. Together, they embark on an adventure to find the elusive lizard in the dense forests of the Western Ghats.

        -

        karvalo kannada book free download


Download File https://urloso.com/2uySbv



        - -

        The novel is a masterpiece of Kannada literature that explores themes such as human curiosity, scientific discovery, environmental conservation and rural life. The novel also has a touch of humor and suspense that keeps the readers engaged.

        - -

        How to Download Karvalo Kannada Book PDF for Free

        - -

        If you want to download Karvalo Kannada book PDF for free, you have several options. You can either buy or borrow the book from online or offline stores. You can also read the book online from platforms like Goodreads or Archive.org. However, you may need to register or sign up to access the book online.

        - -

        Another option is to download the book from torrent sites or other sources. However, this may be illegal and risky, as you may face legal issues or malware attacks. Therefore, it is advisable to download the book legally and safely from authorized sources.

        - -

        Once you have downloaded the book, you need to open it on your device. You can use any device that supports PDF files, such as a computer, laptop, tablet or smartphone. You can also use any PDF reader application that allows you to read and adjust the settings of the book according to your preference.

        - -

        Why You Should Read Karvalo Kannada Book

        - -

        There are many reasons why you should read Karvalo Kannada book. Here are some of them:

        -

        - -
          -
        • You can enjoy the novel in its original language and appreciate the beauty and richness of Kannada literature.
        • -
        • You can learn more about the culture and history of Karnataka and its people.
        • -
        • You can gain knowledge and insight about science, nature and philosophy from the novel.
        • -
        • You can experience the thrill and excitement of an adventure story that takes you to the exotic and mysterious world of the Western Ghats.
        • -
        • You can appreciate the creativity and imagination of Poornachandra Tejaswi, who was one of the most influential and versatile writers of Kannada.
        • -
        - -

        So what are you waiting for? Download Karvalo Kannada book PDF for free today and enjoy reading this amazing novel!

        - -

        Conclusion

        - -

        Karvalo is a novel that deserves to be read by everyone who loves literature. The novel is a unique and innovative work that showcases the best of Kannada literature in terms of story, style, language and theme. The novel is also a timeless classic that remains relevant and entertaining even today.

        - -

        If you want to read Karvalo Kannada book PDF for free, you can follow the steps mentioned above and find the best sources for the book. You can also share your views and opinions about the book with other readers and appreciate the book together.

        - -

        So don't wait any longer and read Karvalo Kannada book PDF for free today!

        3cee63e6c2
        -
        -
        \ No newline at end of file diff --git a/spaces/bioriAsaeru/text-to-voice/Keygen [CRACKED] Autocad 2013 Mac Os X.md b/spaces/bioriAsaeru/text-to-voice/Keygen [CRACKED] Autocad 2013 Mac Os X.md deleted file mode 100644 index 208dc28e7620ee36806d83720dc8beb778b9c0e5..0000000000000000000000000000000000000000 --- a/spaces/bioriAsaeru/text-to-voice/Keygen [CRACKED] Autocad 2013 Mac Os X.md +++ /dev/null @@ -1,6 +0,0 @@ -

        Keygen Autocad 2013 Mac Os X


        DOWNLOADhttps://urloso.com/2uyOyI



        - -PASSWARD OF Autodesk 2013 products universal keygen for Home windows and Mac Osx ! begin XFORCE Keygen 32bits model or 64bits ... 1fdad05405
        -
        -
        -

        diff --git a/spaces/brainblow/AudioCreator_Music-Audio_Generation/audiocraft/modules/lstm.py b/spaces/brainblow/AudioCreator_Music-Audio_Generation/audiocraft/modules/lstm.py deleted file mode 100644 index c0866175950c1ca4f6cca98649525e6481853bba..0000000000000000000000000000000000000000 --- a/spaces/brainblow/AudioCreator_Music-Audio_Generation/audiocraft/modules/lstm.py +++ /dev/null @@ -1,25 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -from torch import nn - - -class StreamableLSTM(nn.Module): - """LSTM without worrying about the hidden state, nor the layout of the data. - Expects input as convolutional layout. - """ - def __init__(self, dimension: int, num_layers: int = 2, skip: bool = True): - super().__init__() - self.skip = skip - self.lstm = nn.LSTM(dimension, dimension, num_layers) - - def forward(self, x): - x = x.permute(2, 0, 1) - y, _ = self.lstm(x) - if self.skip: - y = y + x - y = y.permute(1, 2, 0) - return y diff --git a/spaces/brjathu/HMR2.0/vendor/detectron2/projects/TensorMask/setup.py b/spaces/brjathu/HMR2.0/vendor/detectron2/projects/TensorMask/setup.py deleted file mode 100644 index f6980e0dd2d2d239faed11e1474e1a8394c9b843..0000000000000000000000000000000000000000 --- a/spaces/brjathu/HMR2.0/vendor/detectron2/projects/TensorMask/setup.py +++ /dev/null @@ -1,69 +0,0 @@ -#!/usr/bin/env python -# Copyright (c) Facebook, Inc. and its affiliates. - -import glob -import os -from setuptools import find_packages, setup -import torch -from torch.utils.cpp_extension import CUDA_HOME, CppExtension, CUDAExtension - - -def get_extensions(): - this_dir = os.path.dirname(os.path.abspath(__file__)) - extensions_dir = os.path.join(this_dir, "tensormask", "layers", "csrc") - - main_source = os.path.join(extensions_dir, "vision.cpp") - sources = glob.glob(os.path.join(extensions_dir, "**", "*.cpp")) - source_cuda = glob.glob(os.path.join(extensions_dir, "**", "*.cu")) + glob.glob( - os.path.join(extensions_dir, "*.cu") - ) - - sources = [main_source] + sources - - extension = CppExtension - - extra_compile_args = {"cxx": []} - define_macros = [] - - if (torch.cuda.is_available() and CUDA_HOME is not None) or os.getenv("FORCE_CUDA", "0") == "1": - extension = CUDAExtension - sources += source_cuda - define_macros += [("WITH_CUDA", None)] - extra_compile_args["nvcc"] = [ - "-DCUDA_HAS_FP16=1", - "-D__CUDA_NO_HALF_OPERATORS__", - "-D__CUDA_NO_HALF_CONVERSIONS__", - "-D__CUDA_NO_HALF2_OPERATORS__", - ] - - # It's better if pytorch can do this by default .. 
- CC = os.environ.get("CC", None) - if CC is not None: - extra_compile_args["nvcc"].append("-ccbin={}".format(CC)) - - sources = [os.path.join(extensions_dir, s) for s in sources] - - include_dirs = [extensions_dir] - - ext_modules = [ - extension( - "tensormask._C", - sources, - include_dirs=include_dirs, - define_macros=define_macros, - extra_compile_args=extra_compile_args, - ) - ] - - return ext_modules - - -setup( - name="tensormask", - version="0.1", - author="FAIR", - packages=find_packages(exclude=("configs", "tests")), - python_requires=">=3.7", - ext_modules=get_extensions(), - cmdclass={"build_ext": torch.utils.cpp_extension.BuildExtension}, -) diff --git a/spaces/brjathu/HMR2.0/vendor/detectron2/tests/config/test_lazy_config.py b/spaces/brjathu/HMR2.0/vendor/detectron2/tests/config/test_lazy_config.py deleted file mode 100644 index ff68143dbe60742fe0a44ba874837ca65d07c386..0000000000000000000000000000000000000000 --- a/spaces/brjathu/HMR2.0/vendor/detectron2/tests/config/test_lazy_config.py +++ /dev/null @@ -1,98 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -import os -import unittest -import tempfile -from itertools import count - -from detectron2.config import LazyConfig, LazyCall as L -from omegaconf import DictConfig - - -class TestLazyPythonConfig(unittest.TestCase): - def setUp(self): - self.curr_dir = os.path.dirname(__file__) - self.root_filename = os.path.join(self.curr_dir, "root_cfg.py") - - def test_load(self): - cfg = LazyConfig.load(self.root_filename) - - self.assertEqual(cfg.dir1a_dict.a, "modified") - self.assertEqual(cfg.dir1b_dict.a, 1) - self.assertEqual(cfg.lazyobj.x, "base_a_1") - - cfg.lazyobj.x = "new_x" - # reload - cfg = LazyConfig.load(self.root_filename) - self.assertEqual(cfg.lazyobj.x, "base_a_1") - - def test_save_load(self): - cfg = LazyConfig.load(self.root_filename) - with tempfile.TemporaryDirectory(prefix="detectron2") as d: - fname = os.path.join(d, "test_config.yaml") - LazyConfig.save(cfg, fname) - cfg2 = LazyConfig.load(fname) - - self.assertEqual(cfg2.lazyobj._target_, "itertools.count") - self.assertEqual(cfg.lazyobj._target_, count) - cfg2.lazyobj.pop("_target_") - cfg.lazyobj.pop("_target_") - # the rest are equal - self.assertEqual(cfg, cfg2) - - def test_failed_save(self): - cfg = DictConfig({"x": lambda: 3}, flags={"allow_objects": True}) - with tempfile.TemporaryDirectory(prefix="detectron2") as d: - fname = os.path.join(d, "test_config.yaml") - LazyConfig.save(cfg, fname) - self.assertTrue(os.path.exists(fname)) - self.assertTrue(os.path.exists(fname + ".pkl")) - - def test_overrides(self): - cfg = LazyConfig.load(self.root_filename) - LazyConfig.apply_overrides(cfg, ["lazyobj.x=123", 'dir1b_dict.a="123"']) - self.assertEqual(cfg.dir1b_dict.a, "123") - self.assertEqual(cfg.lazyobj.x, 123) - - LazyConfig.apply_overrides(cfg, ["dir1b_dict.a=abc"]) - self.assertEqual(cfg.dir1b_dict.a, "abc") - - def test_invalid_overrides(self): - cfg = LazyConfig.load(self.root_filename) - with self.assertRaises(KeyError): - LazyConfig.apply_overrides(cfg, ["lazyobj.x.xxx=123"]) - - def test_to_py(self): - cfg = LazyConfig.load(self.root_filename) - cfg.lazyobj.x = {"a": 1, "b": 2, "c": L(count)(x={"r": "a", "s": 2.4, "t": [1, 2, 3, "z"]})} - cfg.list = ["a", 1, "b", 3.2] - py_str = LazyConfig.to_py(cfg) - expected = """cfg.dir1a_dict.a = "modified" -cfg.dir1a_dict.b = 2 -cfg.dir1b_dict.a = 1 -cfg.dir1b_dict.b = 2 -cfg.lazyobj = itertools.count( - x={ - "a": 1, - "b": 2, - "c": itertools.count(x={"r": "a", "s": 2.4, "t": [1, 2, 3, 
"z"]}), - }, - y="base_a_1_from_b", -) -cfg.list = ["a", 1, "b", 3.2] -""" - self.assertEqual(py_str, expected) - - def test_bad_import(self): - file = os.path.join(self.curr_dir, "dir1", "bad_import.py") - with self.assertRaisesRegex(ImportError, "relative import"): - LazyConfig.load(file) - - def test_bad_import2(self): - file = os.path.join(self.curr_dir, "dir1", "bad_import2.py") - with self.assertRaisesRegex(ImportError, "not exist"): - LazyConfig.load(file) - - def test_load_rel(self): - file = os.path.join(self.curr_dir, "dir1", "load_rel.py") - cfg = LazyConfig.load(file) - self.assertIn("x", cfg) diff --git a/spaces/carlosalonso/Detection-video/carpeta_deteccion/tests/layers/test_deformable.py b/spaces/carlosalonso/Detection-video/carpeta_deteccion/tests/layers/test_deformable.py deleted file mode 100644 index 4aa319fc7e614f6a7a8ece7a45c177211c03012d..0000000000000000000000000000000000000000 --- a/spaces/carlosalonso/Detection-video/carpeta_deteccion/tests/layers/test_deformable.py +++ /dev/null @@ -1,175 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -import numpy as np -import unittest -import torch - -from detectron2.layers import DeformConv, ModulatedDeformConv -from detectron2.utils.env import TORCH_VERSION - - -@unittest.skipIf( - TORCH_VERSION == (1, 8) and torch.cuda.is_available(), - "This test fails under cuda11 + torch1.8.", -) -class DeformableTest(unittest.TestCase): - @unittest.skipIf(not torch.cuda.is_available(), "Deformable not supported for cpu") - def test_forward_output(self): - device = torch.device("cuda") - N, C, H, W = shape = 1, 1, 5, 5 - kernel_size = 3 - padding = 1 - - inputs = torch.arange(np.prod(shape), dtype=torch.float32).reshape(*shape).to(device) - """ - 0 1 2 3 4 - 5 6 7 8 9 - 10 11 12 13 14 - 15 16 17 18 19 - 20 21 22 23 24 - """ - offset_channels = kernel_size * kernel_size * 2 - offset = torch.full((N, offset_channels, H, W), 0.5, dtype=torch.float32).to(device) - - # Test DCN v1 - deform = DeformConv(C, C, kernel_size=kernel_size, padding=padding).to(device) - deform.weight = torch.nn.Parameter(torch.ones_like(deform.weight)) - output = deform(inputs, offset) - output = output.detach().cpu().numpy() - deform_results = np.array( - [ - [30, 41.25, 48.75, 45, 28.75], - [62.25, 81, 90, 80.25, 50.25], - [99.75, 126, 135, 117.75, 72.75], - [105, 131.25, 138.75, 120, 73.75], - [71.75, 89.25, 93.75, 80.75, 49.5], - ] - ) - self.assertTrue(np.allclose(output.flatten(), deform_results.flatten())) - - # Test DCN v2 - mask_channels = kernel_size * kernel_size - mask = torch.full((N, mask_channels, H, W), 0.5, dtype=torch.float32).to(device) - modulate_deform = ModulatedDeformConv(C, C, kernel_size, padding=padding, bias=False).to( - device - ) - modulate_deform.weight = deform.weight - output = modulate_deform(inputs, offset, mask) - output = output.detach().cpu().numpy() - self.assertTrue(np.allclose(output.flatten(), deform_results.flatten() * 0.5)) - - def test_forward_output_on_cpu(self): - device = torch.device("cpu") - N, C, H, W = shape = 1, 1, 5, 5 - kernel_size = 3 - padding = 1 - - inputs = torch.arange(np.prod(shape), dtype=torch.float32).reshape(*shape).to(device) - - offset_channels = kernel_size * kernel_size * 2 - offset = torch.full((N, offset_channels, H, W), 0.5, dtype=torch.float32).to(device) - - # Test DCN v1 on cpu - deform = DeformConv(C, C, kernel_size=kernel_size, padding=padding).to(device) - deform.weight = torch.nn.Parameter(torch.ones_like(deform.weight)) - output = deform(inputs, offset) - output = 
output.detach().cpu().numpy() - deform_results = np.array( - [ - [30, 41.25, 48.75, 45, 28.75], - [62.25, 81, 90, 80.25, 50.25], - [99.75, 126, 135, 117.75, 72.75], - [105, 131.25, 138.75, 120, 73.75], - [71.75, 89.25, 93.75, 80.75, 49.5], - ] - ) - self.assertTrue(np.allclose(output.flatten(), deform_results.flatten())) - - @unittest.skipIf(not torch.cuda.is_available(), "This test requires gpu access") - def test_forward_output_on_cpu_equals_output_on_gpu(self): - N, C, H, W = shape = 2, 4, 10, 10 - kernel_size = 3 - padding = 1 - - for groups in [1, 2]: - inputs = torch.arange(np.prod(shape), dtype=torch.float32).reshape(*shape) - offset_channels = kernel_size * kernel_size * 2 - offset = torch.full((N, offset_channels, H, W), 0.5, dtype=torch.float32) - - deform_gpu = DeformConv( - C, C, kernel_size=kernel_size, padding=padding, groups=groups - ).to("cuda") - deform_gpu.weight = torch.nn.Parameter(torch.ones_like(deform_gpu.weight)) - output_gpu = deform_gpu(inputs.to("cuda"), offset.to("cuda")).detach().cpu().numpy() - - deform_cpu = DeformConv( - C, C, kernel_size=kernel_size, padding=padding, groups=groups - ).to("cpu") - deform_cpu.weight = torch.nn.Parameter(torch.ones_like(deform_cpu.weight)) - output_cpu = deform_cpu(inputs.to("cpu"), offset.to("cpu")).detach().numpy() - - self.assertTrue(np.allclose(output_gpu.flatten(), output_cpu.flatten())) - - @unittest.skipIf(not torch.cuda.is_available(), "Deformable not supported for cpu") - def test_small_input(self): - device = torch.device("cuda") - for kernel_size in [3, 5]: - padding = kernel_size // 2 - N, C, H, W = shape = (1, 1, kernel_size - 1, kernel_size - 1) - - inputs = torch.rand(shape).to(device) # input size is smaller than kernel size - - offset_channels = kernel_size * kernel_size * 2 - offset = torch.randn((N, offset_channels, H, W), dtype=torch.float32).to(device) - deform = DeformConv(C, C, kernel_size=kernel_size, padding=padding).to(device) - output = deform(inputs, offset) - self.assertTrue(output.shape == inputs.shape) - - mask_channels = kernel_size * kernel_size - mask = torch.ones((N, mask_channels, H, W), dtype=torch.float32).to(device) - modulate_deform = ModulatedDeformConv( - C, C, kernel_size, padding=padding, bias=False - ).to(device) - output = modulate_deform(inputs, offset, mask) - self.assertTrue(output.shape == inputs.shape) - - @unittest.skipIf(not torch.cuda.is_available(), "Deformable not supported for cpu") - def test_raise_exception(self): - device = torch.device("cuda") - N, C, H, W = shape = 1, 1, 3, 3 - kernel_size = 3 - padding = 1 - - inputs = torch.rand(shape, dtype=torch.float32).to(device) - offset_channels = kernel_size * kernel_size # This is wrong channels for offset - offset = torch.randn((N, offset_channels, H, W), dtype=torch.float32).to(device) - deform = DeformConv(C, C, kernel_size=kernel_size, padding=padding).to(device) - self.assertRaises(RuntimeError, deform, inputs, offset) - - offset_channels = kernel_size * kernel_size * 2 - offset = torch.randn((N, offset_channels, H, W), dtype=torch.float32).to(device) - mask_channels = kernel_size * kernel_size * 2 # This is wrong channels for mask - mask = torch.ones((N, mask_channels, H, W), dtype=torch.float32).to(device) - modulate_deform = ModulatedDeformConv(C, C, kernel_size, padding=padding, bias=False).to( - device - ) - self.assertRaises(RuntimeError, modulate_deform, inputs, offset, mask) - - def test_repr(self): - module = DeformConv(3, 10, kernel_size=3, padding=1, deformable_groups=2) - correct_string = ( - 
"DeformConv(in_channels=3, out_channels=10, kernel_size=(3, 3), " - "stride=(1, 1), padding=(1, 1), dilation=(1, 1), " - "groups=1, deformable_groups=2, bias=False)" - ) - self.assertEqual(repr(module), correct_string) - - module = ModulatedDeformConv(3, 10, kernel_size=3, padding=1, deformable_groups=2) - correct_string = ( - "ModulatedDeformConv(in_channels=3, out_channels=10, kernel_size=(3, 3), " - "stride=1, padding=1, dilation=1, groups=1, deformable_groups=2, bias=True)" - ) - self.assertEqual(repr(module), correct_string) - - -if __name__ == "__main__": - unittest.main() diff --git a/spaces/carlosalonso/Detection-video/carpeta_deteccion/tests/structures/__init__.py b/spaces/carlosalonso/Detection-video/carpeta_deteccion/tests/structures/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/catundchat/tts_cn/README.md b/spaces/catundchat/tts_cn/README.md deleted file mode 100644 index e9f36de87f09353fe4c665ad4147ecb3b53adf77..0000000000000000000000000000000000000000 --- a/spaces/catundchat/tts_cn/README.md +++ /dev/null @@ -1,14 +0,0 @@ ---- -title: Vits Chinese -emoji: 🌍 -colorFrom: purple -colorTo: blue -sdk: gradio -sdk_version: 3.19.1 -app_file: app.py -pinned: false -license: apache-2.0 -duplicated_from: maxmax20160403/vits_chinese ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/ccolas/TastyPiano/src/music2cocktailrep/__init__.py b/spaces/ccolas/TastyPiano/src/music2cocktailrep/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/changlisheng/shangChat/modules/config.py b/spaces/changlisheng/shangChat/modules/config.py deleted file mode 100644 index 3ccbd507cdbf9ff01763667606276069cc1f7eb4..0000000000000000000000000000000000000000 --- a/spaces/changlisheng/shangChat/modules/config.py +++ /dev/null @@ -1,145 +0,0 @@ -from collections import defaultdict -from contextlib import contextmanager -import os -import logging -import sys -import json - -from . 
import shared - - -__all__ = [ - "my_api_key", - "authflag", - "auth_list", - "dockerflag", - "retrieve_proxy", - "log_level", - "advance_docs", - "update_doc_config", - "multi_api_key", -] - -# 添加一个统一的config文件,避免文件过多造成的疑惑(优先级最低) -# 同时,也可以为后续支持自定义功能提供config的帮助 -if os.path.exists("config.json"): - with open("config.json", "r", encoding='utf-8') as f: - config = json.load(f) -else: - config = {} - -## 处理docker if we are running in Docker -dockerflag = config.get("dockerflag", False) -if os.environ.get("dockerrun") == "yes": - dockerflag = True - -## 处理 api-key 以及 允许的用户列表 -my_api_key = config.get("openai_api_key", "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") # 在这里输入你的 API 密钥 -my_api_key = os.environ.get("my_api_key", my_api_key) - -## 多账户机制 -multi_api_key = config.get("multi_api_key", False) # 是否开启多账户机制 -if multi_api_key: - api_key_list = config.get("api_key_list", []) - if len(api_key_list) == 0: - logging.error("多账号模式已开启,但api_key_list为空,请检查config.json") - sys.exit(1) - shared.state.set_api_key_queue(api_key_list) - -auth_list = config.get("users", []) # 实际上是使用者的列表 -authflag = len(auth_list) > 0 # 是否开启认证的状态值,改为判断auth_list长度 - -# 处理自定义的api_host,优先读环境变量的配置,如果存在则自动装配 -api_host = os.environ.get("api_host", config.get("api_host", "")) -if api_host: - shared.state.set_api_host(api_host) - -if dockerflag: - if my_api_key == "empty": - logging.error("Please give a api key!") - sys.exit(1) - # auth - username = os.environ.get("USERNAME") - password = os.environ.get("PASSWORD") - if not (isinstance(username, type(None)) or isinstance(password, type(None))): - auth_list.append((os.environ.get("USERNAME"), os.environ.get("PASSWORD"))) - authflag = True -else: - if ( - not my_api_key - and os.path.exists("api_key.txt") - and os.path.getsize("api_key.txt") - ): - with open("api_key.txt", "r") as f: - my_api_key = f.read().strip() - if os.path.exists("auth.json"): - authflag = True - with open("auth.json", "r", encoding='utf-8') as f: - auth = json.load(f) - for _ in auth: - if auth[_]["username"] and auth[_]["password"]: - auth_list.append((auth[_]["username"], auth[_]["password"])) - else: - logging.error("请检查auth.json文件中的用户名和密码!") - sys.exit(1) - -@contextmanager -def retrieve_openai_api(api_key = None): - old_api_key = os.environ.get("OPENAI_API_KEY", "") - if api_key is None: - os.environ["OPENAI_API_KEY"] = my_api_key - yield my_api_key - else: - os.environ["OPENAI_API_KEY"] = api_key - yield api_key - os.environ["OPENAI_API_KEY"] = old_api_key - -## 处理log -log_level = config.get("log_level", "INFO") -logging.basicConfig( - level=log_level, - format="%(asctime)s [%(levelname)s] [%(filename)s:%(lineno)d] %(message)s", -) - -## 处理代理: -http_proxy = config.get("http_proxy", "") -https_proxy = config.get("https_proxy", "") -http_proxy = os.environ.get("HTTP_PROXY", http_proxy) -https_proxy = os.environ.get("HTTPS_PROXY", https_proxy) - -# 重置系统变量,在不需要设置的时候不设置环境变量,以免引起全局代理报错 -os.environ["HTTP_PROXY"] = "" -os.environ["HTTPS_PROXY"] = "" - -@contextmanager -def retrieve_proxy(proxy=None): - """ - 1, 如果proxy = NONE,设置环境变量,并返回最新设置的代理 - 2,如果proxy != NONE,更新当前的代理配置,但是不更新环境变量 - """ - global http_proxy, https_proxy - if proxy is not None: - http_proxy = proxy - https_proxy = proxy - yield http_proxy, https_proxy - else: - old_var = os.environ["HTTP_PROXY"], os.environ["HTTPS_PROXY"] - os.environ["HTTP_PROXY"] = http_proxy - os.environ["HTTPS_PROXY"] = https_proxy - yield http_proxy, https_proxy # return new proxy - - # return old proxy - os.environ["HTTP_PROXY"], os.environ["HTTPS_PROXY"] = old_var - - 
-## 处理advance docs -advance_docs = defaultdict(lambda: defaultdict(dict)) -advance_docs.update(config.get("advance_docs", {})) -def update_doc_config(two_column_pdf): - global advance_docs - if two_column_pdf: - advance_docs["pdf"]["two_column"] = True - else: - advance_docs["pdf"]["two_column"] = False - - logging.info(f"更新后的文件参数为:{advance_docs}") \ No newline at end of file diff --git a/spaces/chatpdfdemo/chatpdfdemo/README.md b/spaces/chatpdfdemo/chatpdfdemo/README.md deleted file mode 100644 index cb2e03cc4f72f7b2ff8548a9cb27092f0d02bed2..0000000000000000000000000000000000000000 --- a/spaces/chatpdfdemo/chatpdfdemo/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Chatpdfdemo -emoji: 📊 -colorFrom: red -colorTo: green -sdk: streamlit -sdk_version: 1.27.2 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/chendl/compositional_test/multimodal/range_aro.sh b/spaces/chendl/compositional_test/multimodal/range_aro.sh deleted file mode 100644 index 30aedf5a0d3caab83c9911635037d14afee96d21..0000000000000000000000000000000000000000 --- a/spaces/chendl/compositional_test/multimodal/range_aro.sh +++ /dev/null @@ -1,20 +0,0 @@ -sbatch -J aro2 submit_eval.sh eval_aro.sh checkpoints/091701_pythiaS_previsual_fix/checkpoint_2000.pt -sbatch -J aro4 submit_eval.sh eval_aro.sh checkpoints/091701_pythiaS_previsual_fix/checkpoint_4000.pt -sbatch -J aro6 submit_eval.sh eval_aro.sh checkpoints/091701_pythiaS_previsual_fix/checkpoint_6000.pt -sbatch -J aro8 submit_eval.sh eval_aro.sh checkpoints/091701_pythiaS_previsual_fix/checkpoint_8000.pt -sbatch -J aro10 submit_eval.sh eval_aro.sh checkpoints/091701_pythiaS_previsual_fix/checkpoint_10000.pt -sbatch -J aro11 submit_eval.sh eval_aro.sh checkpoints/091701_pythiaS_previsual_fix/checkpoint_11000.pt -sbatch -J aro12 submit_eval.sh eval_aro.sh checkpoints/091701_pythiaS_previsual_fix/checkpoint_12000.pt -sbatch -J aro13 submit_eval.sh eval_aro.sh checkpoints/091701_pythiaS_previsual_fix/checkpoint_13000.pt -sbatch -J aro14 submit_eval.sh eval_aro.sh checkpoints/091701_pythiaS_previsual_fix/checkpoint_14000.pt -sbatch -J aro15 submit_eval.sh eval_aro.sh checkpoints/091701_pythiaS_previsual_fix/checkpoint_15000.pt - - -sbatch -J aro3B2 submit_eval.sh eval_aro_3b.sh checkpoints/091801_pythia3b_previsual_fix/checkpoint_2000.pt -sbatch -J aro3B4 submit_eval.sh eval_aro_3b.sh checkpoints/091801_pythia3b_previsual_fix/checkpoint_4000.pt -sbatch -J aro3B6 submit_eval.sh eval_aro_3b.sh checkpoints/091801_pythia3b_previsual_fix/checkpoint_6000.pt -sbatch -J aro3B8 submit_eval.sh eval_aro_3b.sh checkpoints/091801_pythia3b_previsual_fix/checkpoint_8000.pt -sbatch -J aro3B10 submit_eval.sh eval_aro_3b.sh checkpoints/091801_pythia3b_previsual_fix/checkpoint_10000.pt -sbatch -J aro3B12 submit_eval.sh eval_aro_3b.sh checkpoints/091801_pythia3b_previsual_fix/checkpoint_12000.pt -sbatch -J aro3B14 submit_eval.sh eval_aro_3b.sh checkpoints/091801_pythia3b_previsual_fix/checkpoint_14000.pt -sbatch -J aro3B16 submit_eval.sh eval_aro_3b.sh checkpoints/091801_pythia3b_previsual_fix/checkpoint_16000.pt diff --git a/spaces/chendl/compositional_test/transformers/src/transformers/models/auto/processing_auto.py b/spaces/chendl/compositional_test/transformers/src/transformers/models/auto/processing_auto.py deleted file mode 100644 index 9e6edc0ae16f79f8d7ed8df04fed9ea45506d00a..0000000000000000000000000000000000000000 --- 
a/spaces/chendl/compositional_test/transformers/src/transformers/models/auto/processing_auto.py +++ /dev/null @@ -1,316 +0,0 @@ -# coding=utf-8 -# Copyright 2021 The HuggingFace Inc. team. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -""" AutoProcessor class.""" -import importlib -import inspect -import json -from collections import OrderedDict - -# Build the list of all feature extractors -from ...configuration_utils import PretrainedConfig -from ...dynamic_module_utils import get_class_from_dynamic_module -from ...feature_extraction_utils import FeatureExtractionMixin -from ...image_processing_utils import ImageProcessingMixin -from ...tokenization_utils import TOKENIZER_CONFIG_FILE -from ...utils import FEATURE_EXTRACTOR_NAME, get_file_from_repo, logging -from .auto_factory import _LazyAutoMapping -from .configuration_auto import ( - CONFIG_MAPPING_NAMES, - AutoConfig, - model_type_to_module_name, - replace_list_option_in_docstrings, -) -from .feature_extraction_auto import AutoFeatureExtractor -from .image_processing_auto import AutoImageProcessor -from .tokenization_auto import AutoTokenizer - - -logger = logging.get_logger(__name__) - -PROCESSOR_MAPPING_NAMES = OrderedDict( - [ - ("align", "AlignProcessor"), - ("altclip", "AltCLIPProcessor"), - ("blip", "BlipProcessor"), - ("blip-2", "Blip2Processor"), - ("bridgetower", "BridgeTowerProcessor"), - ("chinese_clip", "ChineseCLIPProcessor"), - ("clap", "ClapProcessor"), - ("clip", "CLIPProcessor"), - ("clipseg", "CLIPSegProcessor"), - ("flava", "FlavaProcessor"), - ("git", "GitProcessor"), - ("groupvit", "CLIPProcessor"), - ("hubert", "Wav2Vec2Processor"), - ("layoutlmv2", "LayoutLMv2Processor"), - ("layoutlmv3", "LayoutLMv3Processor"), - ("markuplm", "MarkupLMProcessor"), - ("mgp-str", "MgpstrProcessor"), - ("oneformer", "OneFormerProcessor"), - ("owlvit", "OwlViTProcessor"), - ("pix2struct", "Pix2StructProcessor"), - ("sew", "Wav2Vec2Processor"), - ("sew-d", "Wav2Vec2Processor"), - ("speech_to_text", "Speech2TextProcessor"), - ("speech_to_text_2", "Speech2Text2Processor"), - ("speecht5", "SpeechT5Processor"), - ("trocr", "TrOCRProcessor"), - ("tvlt", "TvltProcessor"), - ("unispeech", "Wav2Vec2Processor"), - ("unispeech-sat", "Wav2Vec2Processor"), - ("vilt", "ViltProcessor"), - ("vision-text-dual-encoder", "VisionTextDualEncoderProcessor"), - ("wav2vec2", "Wav2Vec2Processor"), - ("wav2vec2-conformer", "Wav2Vec2Processor"), - ("wavlm", "Wav2Vec2Processor"), - ("whisper", "WhisperProcessor"), - ("xclip", "XCLIPProcessor"), - ] -) - -PROCESSOR_MAPPING = _LazyAutoMapping(CONFIG_MAPPING_NAMES, PROCESSOR_MAPPING_NAMES) - - -def processor_class_from_name(class_name: str): - for module_name, processors in PROCESSOR_MAPPING_NAMES.items(): - if class_name in processors: - module_name = model_type_to_module_name(module_name) - - module = importlib.import_module(f".{module_name}", "transformers.models") - try: - return getattr(module, class_name) - except AttributeError: - continue - - for processor in PROCESSOR_MAPPING._extra_content.values(): - if 
getattr(processor, "__name__", None) == class_name: - return processor - - # We did not fine the class, but maybe it's because a dep is missing. In that case, the class will be in the main - # init and we return the proper dummy to get an appropriate error message. - main_module = importlib.import_module("transformers") - if hasattr(main_module, class_name): - return getattr(main_module, class_name) - - return None - - -class AutoProcessor: - r""" - This is a generic processor class that will be instantiated as one of the processor classes of the library when - created with the [`AutoProcessor.from_pretrained`] class method. - - This class cannot be instantiated directly using `__init__()` (throws an error). - """ - - def __init__(self): - raise EnvironmentError( - "AutoProcessor is designed to be instantiated " - "using the `AutoProcessor.from_pretrained(pretrained_model_name_or_path)` method." - ) - - @classmethod - @replace_list_option_in_docstrings(PROCESSOR_MAPPING_NAMES) - def from_pretrained(cls, pretrained_model_name_or_path, **kwargs): - r""" - Instantiate one of the processor classes of the library from a pretrained model vocabulary. - - The processor class to instantiate is selected based on the `model_type` property of the config object (either - passed as an argument or loaded from `pretrained_model_name_or_path` if possible): - - List options - - Params: - pretrained_model_name_or_path (`str` or `os.PathLike`): - This can be either: - - - a string, the *model id* of a pretrained feature_extractor hosted inside a model repo on - huggingface.co. Valid model ids can be located at the root-level, like `bert-base-uncased`, or - namespaced under a user or organization name, like `dbmdz/bert-base-german-cased`. - - a path to a *directory* containing a processor files saved using the `save_pretrained()` method, - e.g., `./my_model_directory/`. - cache_dir (`str` or `os.PathLike`, *optional*): - Path to a directory in which a downloaded pretrained model feature extractor should be cached if the - standard cache should not be used. - force_download (`bool`, *optional*, defaults to `False`): - Whether or not to force to (re-)download the feature extractor files and override the cached versions - if they exist. - resume_download (`bool`, *optional*, defaults to `False`): - Whether or not to delete incompletely received file. Attempts to resume the download if such a file - exists. - proxies (`Dict[str, str]`, *optional*): - A dictionary of proxy servers to use by protocol or endpoint, e.g., `{'http': 'foo.bar:3128', - 'http://hostname': 'foo.bar:4012'}.` The proxies are used on each request. - use_auth_token (`str` or *bool*, *optional*): - The token to use as HTTP bearer authorization for remote files. If `True`, will use the token generated - when running `huggingface-cli login` (stored in `~/.huggingface`). - revision (`str`, *optional*, defaults to `"main"`): - The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a - git-based system for storing models and other artifacts on huggingface.co, so `revision` can be any - identifier allowed by git. - return_unused_kwargs (`bool`, *optional*, defaults to `False`): - If `False`, then this function returns just the final feature extractor object. 
If `True`, then this - functions returns a `Tuple(feature_extractor, unused_kwargs)` where *unused_kwargs* is a dictionary - consisting of the key/value pairs whose keys are not feature extractor attributes: i.e., the part of - `kwargs` which has not been used to update `feature_extractor` and is otherwise ignored. - trust_remote_code (`bool`, *optional*, defaults to `False`): - Whether or not to allow for custom models defined on the Hub in their own modeling files. This option - should only be set to `True` for repositories you trust and in which you have read the code, as it will - execute code present on the Hub on your local machine. - kwargs (`Dict[str, Any]`, *optional*): - The values in kwargs of any keys which are feature extractor attributes will be used to override the - loaded values. Behavior concerning key/value pairs whose keys are *not* feature extractor attributes is - controlled by the `return_unused_kwargs` keyword parameter. - - - - Passing `use_auth_token=True` is required when you want to use a private model. - - - - Examples: - - ```python - >>> from transformers import AutoProcessor - - >>> # Download processor from huggingface.co and cache. - >>> processor = AutoProcessor.from_pretrained("facebook/wav2vec2-base-960h") - - >>> # If processor files are in a directory (e.g. processor was saved using *save_pretrained('./test/saved_model/')*) - >>> # processor = AutoProcessor.from_pretrained("./test/saved_model/") - ```""" - config = kwargs.pop("config", None) - trust_remote_code = kwargs.pop("trust_remote_code", False) - kwargs["_from_auto"] = True - - processor_class = None - processor_auto_map = None - - # First, let's see if we have a preprocessor config. - # Filter the kwargs for `get_file_from_repo`. - get_file_from_repo_kwargs = { - key: kwargs[key] for key in inspect.signature(get_file_from_repo).parameters.keys() if key in kwargs - } - # Let's start by checking whether the processor class is saved in an image processor - preprocessor_config_file = get_file_from_repo( - pretrained_model_name_or_path, FEATURE_EXTRACTOR_NAME, **get_file_from_repo_kwargs - ) - if preprocessor_config_file is not None: - config_dict, _ = ImageProcessingMixin.get_image_processor_dict(pretrained_model_name_or_path, **kwargs) - processor_class = config_dict.get("processor_class", None) - if "AutoProcessor" in config_dict.get("auto_map", {}): - processor_auto_map = config_dict["auto_map"]["AutoProcessor"] - - # If not found, let's check whether the processor class is saved in a feature extractor config - if preprocessor_config_file is not None and processor_class is None: - config_dict, _ = FeatureExtractionMixin.get_feature_extractor_dict(pretrained_model_name_or_path, **kwargs) - processor_class = config_dict.get("processor_class", None) - if "AutoProcessor" in config_dict.get("auto_map", {}): - processor_auto_map = config_dict["auto_map"]["AutoProcessor"] - - if processor_class is None: - # Next, let's check whether the processor class is saved in a tokenizer - tokenizer_config_file = get_file_from_repo( - pretrained_model_name_or_path, TOKENIZER_CONFIG_FILE, **get_file_from_repo_kwargs - ) - if tokenizer_config_file is not None: - with open(tokenizer_config_file, encoding="utf-8") as reader: - config_dict = json.load(reader) - - processor_class = config_dict.get("processor_class", None) - if "AutoProcessor" in config_dict.get("auto_map", {}): - processor_auto_map = config_dict["auto_map"]["AutoProcessor"] - - if processor_class is None: - # Otherwise, load config, if it can be 
loaded. - if not isinstance(config, PretrainedConfig): - config = AutoConfig.from_pretrained( - pretrained_model_name_or_path, trust_remote_code=trust_remote_code, **kwargs - ) - - # And check if the config contains the processor class. - processor_class = getattr(config, "processor_class", None) - if hasattr(config, "auto_map") and "AutoProcessor" in config.auto_map: - processor_auto_map = config.auto_map["AutoProcessor"] - - if processor_class is not None: - # If we have custom code for a feature extractor, we get the proper class. - if processor_auto_map is not None: - if not trust_remote_code: - raise ValueError( - f"Loading {pretrained_model_name_or_path} requires you to execute the feature extractor file " - "in that repo on your local machine. Make sure you have read the code there to avoid " - "malicious use, then set the option `trust_remote_code=True` to remove this error." - ) - if kwargs.get("revision", None) is None: - logger.warning( - "Explicitly passing a `revision` is encouraged when loading a feature extractor with custom " - "code to ensure no malicious code has been contributed in a newer revision." - ) - - module_file, class_name = processor_auto_map.split(".") - processor_class = get_class_from_dynamic_module( - pretrained_model_name_or_path, module_file + ".py", class_name, **kwargs - ) - processor_class.register_for_auto_class() - else: - processor_class = processor_class_from_name(processor_class) - - return processor_class.from_pretrained( - pretrained_model_name_or_path, trust_remote_code=trust_remote_code, **kwargs - ) - - # Last try: we use the PROCESSOR_MAPPING. - if type(config) in PROCESSOR_MAPPING: - return PROCESSOR_MAPPING[type(config)].from_pretrained(pretrained_model_name_or_path, **kwargs) - - # At this stage, there doesn't seem to be a `Processor` class available for this model, so let's try a - # tokenizer. - try: - return AutoTokenizer.from_pretrained( - pretrained_model_name_or_path, trust_remote_code=trust_remote_code, **kwargs - ) - except Exception: - try: - return AutoImageProcessor.from_pretrained( - pretrained_model_name_or_path, trust_remote_code=trust_remote_code, **kwargs - ) - except Exception: - pass - - try: - return AutoFeatureExtractor.from_pretrained( - pretrained_model_name_or_path, trust_remote_code=trust_remote_code, **kwargs - ) - except Exception: - pass - - raise ValueError( - f"Unrecognized processing class in {pretrained_model_name_or_path}. Can't instantiate a processor, a " - "tokenizer, an image processor or a feature extractor for this model. Make sure the repository contains" - "the files of at least one of those processing classes." - ) - - @staticmethod - def register(config_class, processor_class): - """ - Register a new processor for this class. - - Args: - config_class ([`PretrainedConfig`]): - The configuration corresponding to the model to register. - processor_class ([`FeatureExtractorMixin`]): The processor to register. - """ - PROCESSOR_MAPPING.register(config_class, processor_class) diff --git a/spaces/chongjie/PoseDiffusion_MVP/util/__init__.py b/spaces/chongjie/PoseDiffusion_MVP/util/__init__.py deleted file mode 100644 index 5ac6521439f089fdd38b198884b1ef25b1df3d80..0000000000000000000000000000000000000000 --- a/spaces/chongjie/PoseDiffusion_MVP/util/__init__.py +++ /dev/null @@ -1,7 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. 
- - diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/anyio/abc/_subprocesses.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/anyio/abc/_subprocesses.py deleted file mode 100644 index 704b44a2dda9e21997acf52c268e414d01bd2eb5..0000000000000000000000000000000000000000 --- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/anyio/abc/_subprocesses.py +++ /dev/null @@ -1,79 +0,0 @@ -from __future__ import annotations - -from abc import abstractmethod -from signal import Signals - -from ._resources import AsyncResource -from ._streams import ByteReceiveStream, ByteSendStream - - -class Process(AsyncResource): - """An asynchronous version of :class:`subprocess.Popen`.""" - - @abstractmethod - async def wait(self) -> int: - """ - Wait until the process exits. - - :return: the exit code of the process - """ - - @abstractmethod - def terminate(self) -> None: - """ - Terminates the process, gracefully if possible. - - On Windows, this calls ``TerminateProcess()``. - On POSIX systems, this sends ``SIGTERM`` to the process. - - .. seealso:: :meth:`subprocess.Popen.terminate` - """ - - @abstractmethod - def kill(self) -> None: - """ - Kills the process. - - On Windows, this calls ``TerminateProcess()``. - On POSIX systems, this sends ``SIGKILL`` to the process. - - .. seealso:: :meth:`subprocess.Popen.kill` - """ - - @abstractmethod - def send_signal(self, signal: Signals) -> None: - """ - Send a signal to the subprocess. - - .. seealso:: :meth:`subprocess.Popen.send_signal` - - :param signal: the signal number (e.g. :data:`signal.SIGHUP`) - """ - - @property - @abstractmethod - def pid(self) -> int: - """The process ID of the process.""" - - @property - @abstractmethod - def returncode(self) -> int | None: - """ - The return code of the process. If the process has not yet terminated, this will be - ``None``. 
- """ - - @property - @abstractmethod - def stdin(self) -> ByteSendStream | None: - """The stream for the standard input of the process.""" - - @property - @abstractmethod - def stdout(self) -> ByteReceiveStream | None: - """The stream for the standard output of the process.""" - - @property - @abstractmethod - def stderr(self) -> ByteReceiveStream | None: - """The stream for the standard error output of the process.""" diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/gradio/templates/cdn/assets/index-93c91554.css b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/gradio/templates/cdn/assets/index-93c91554.css deleted file mode 100644 index beda351dfc765484ad744113e3d1734eb71cacd1..0000000000000000000000000000000000000000 --- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/gradio/templates/cdn/assets/index-93c91554.css +++ /dev/null @@ -1 +0,0 @@ -div.svelte-15lo0d8{display:flex;flex-wrap:wrap;gap:var(--layout-gap);width:var(--size-full)}.hide.svelte-15lo0d8{display:none}.compact.svelte-15lo0d8>*,.compact.svelte-15lo0d8 .box{border-radius:0}.compact.svelte-15lo0d8,.panel.svelte-15lo0d8{border-radius:var(--container-radius);background:var(--background-fill-secondary);padding:var(--size-2)}.unequal-height.svelte-15lo0d8{align-items:flex-start}.stretch.svelte-15lo0d8{align-items:stretch}div.svelte-15lo0d8>*,div.svelte-15lo0d8>.form>*{flex:1 1 0%;flex-wrap:wrap;min-width:min(160px,100%)} diff --git a/spaces/cihyFjudo/fairness-paper-search/First Eagles 2 Torrent What You Need to Know Before You Fly.md b/spaces/cihyFjudo/fairness-paper-search/First Eagles 2 Torrent What You Need to Know Before You Fly.md deleted file mode 100644 index 96c9f7cd1e6e12c75204841695da341b54d179e0..0000000000000000000000000000000000000000 --- a/spaces/cihyFjudo/fairness-paper-search/First Eagles 2 Torrent What You Need to Know Before You Fly.md +++ /dev/null @@ -1,6 +0,0 @@ -

        first eagles 2 torrent


        Download ☆☆☆☆☆ https://tinurli.com/2uwkrD



        -
        - aaccfb2cb3
        -
        -
        -

        diff --git a/spaces/cihyFjudo/fairness-paper-search/Otsav Dj Pro 1.90 Crackedl What You Need to Know Before You Buy or Upgrade.md b/spaces/cihyFjudo/fairness-paper-search/Otsav Dj Pro 1.90 Crackedl What You Need to Know Before You Buy or Upgrade.md deleted file mode 100644 index 1c8b6b56bc4c0ba201c1167b40a062dc2e2ff185..0000000000000000000000000000000000000000 --- a/spaces/cihyFjudo/fairness-paper-search/Otsav Dj Pro 1.90 Crackedl What You Need to Know Before You Buy or Upgrade.md +++ /dev/null @@ -1,6 +0,0 @@ -
        -

        (This will NOT work with 1.90 or earlier licenses unless you have an OtsAV DJ Pro-Classic+ license [with free upgrades to 2.0] - download 1.90 or other below instead or upgrade your license to OtsAV DJ version 1.94)

        -

        Otsav Dj Pro 1.90 Crackedl


        Download File ★★★ https://tinurli.com/2uwjTL



        -

        (This will NOT work with 1.85 licenses unless you have an OtsAV DJ Pro-Classic+ license [with free upgrades to 2.0] - download 1.85 below instead or upgrade your license to OtsAV DJ version 1.90)

        aaccfb2cb3
        -
        -
        \ No newline at end of file diff --git a/spaces/cihyFjudo/fairness-paper-search/Turbotax 2018 Download Mac Get the Maximum Refund Guaranteed.md b/spaces/cihyFjudo/fairness-paper-search/Turbotax 2018 Download Mac Get the Maximum Refund Guaranteed.md deleted file mode 100644 index a65524d1dbdf78c36eed76137054a0e04411e112..0000000000000000000000000000000000000000 --- a/spaces/cihyFjudo/fairness-paper-search/Turbotax 2018 Download Mac Get the Maximum Refund Guaranteed.md +++ /dev/null @@ -1,21 +0,0 @@ - -

        The Most Competitive Prices in the Online Software License Market.
        Easy to Order with Paypal and Immediate Delivery, No Wait!
        Keeping 95% of our Customers Happy over the Last 3 Years!
        SSL Secure Payment Data and Protecting Your Personal Information!
Email: sales@turbotax-shop.com

        -

I have used H&R Block for the past few years because it was less expensive and did a fine job. I have used Turbo Tax previously. But last year my taxes were more complicated and the H&R Block help was poor. I remember how good the Turbo Tax help was, so I bought Turbo Tax for my 2018 taxes. I have not done much of my taxes yet this year, but the little I have done shows me that the help should make my taxes easier to do. Turbo Tax is more expensive, but when I think of how much time I spent last year trying to find tax answers, the extra expense should more than pay for itself in time savings and less frustration.

        -

        Turbotax 2018 Download Mac


        DOWNLOAD ⇒⇒⇒ https://tinurli.com/2uwjLa



        -

        I paid for Turbo Tax 2018 online and completed some returns on it for 2018. Now I am using a PC with Windows 10. How do I download the version I bought (MAC) to my PC Windows 10 and transfer my *.tax2018 individual returns to my PC?

        -

Intuit TurboTax Home And Business 2018 Free Download includes all the necessary files to run perfectly on your system. The uploaded program contains all the latest and updated files; it is the full offline, standalone version of Intuit TurboTax Home And Business 2018 for compatible versions of Windows, with the download link at the end of the post.

        -

Below are some of the features you can experience after installing Intuit TurboTax Home And Business 2018 Free Download. Please keep in mind that features may vary and depend on whether your system supports them.

        -

Click on the button below to start the Intuit TurboTax Home And Business 2018 Free Download. This is a complete offline installer and standalone setup for Intuit TurboTax Home And Business 2018, compatible with supported versions of Windows.

        -

        In 2018, over a million taxpayers didn't file their federal return, leaving $1.5 billion in unclaimed refund money. It's not too late for people to file and get their refund, but the deadline is soon.

        -

To claim a refund for 2018, taxpayers must mail returns to the IRS center listed on the Form 1040 instructions (PDF). While they must mail in a 2018 return, taxpayers can still e-file for 2019, 2020 and 2021.

        -

A professional application for handling tax matters alongside personal finance and accounting, Intuit TurboTax Business 2018 delivers one of the best solutions. It provides a friendly environment with simple options and self-explanatory tools that streamline financial work. This powerful program supports different administration operations as well as an accounting solution for small businesses, covering installments, dates, taxes and many other financial matters. You can define different rules, generate reports and manage all the tasks relating to finance and business.

        It also provides a professional environment that helps the user manage inventory and generate a variety of reports, as well as a complete solution for performing numerous other financial operations.

        -

        -

        With the above information in hand, follow the guide "How to get old versions of macOS" and verify which version this computer qualifies to install. For best results, use Safari to start the download, as other browsers may not work.

        -

        Deleted TAX2018 files that cannot be found in the Windows Recycle Bin or Mac Trash can still be recovered using an effective data recovery program. Just make sure you stop using the drive they were in right away to avoid overwriting the deleted file. Alternatively, if you had File History enabled or created a System Restore point beforehand, you can use those methods.

        -

        Check the software list above to confirm that your software is certified for the year you need. If you are using uncertified software or an older version of the software, you may need to update or download a certified version from the developer.

        -

        Quicken products contain online services such as transaction download and online bill pay. A purchase of a Quicken product license includes the ability to use these online services until the discontinuation date (listed in the chart above).

        -

        The NETFILE program is now open for the electronic filing of your 2015, 2016, 2017, and 2018 T1 personal income tax and benefit returns. When tax laws change, TurboTax changes with them, so you can be sure your tax return includes the newest IRS and state tax forms. This tax season has been an eye-opener for filers. It marks the first time people are filing their returns under the Tax Cuts and Jobs Act, an overhaul of the tax code that went into effect in 2018.

        -

        The Deluxe version is by far the most popular. After reading the reviews at Amazon and Costco, I learned that long-time users of TurboTax, like myself, are jumping ship because TT Deluxe no longer supports small investors and businesses as it has in years past without upgrading. Without the Web I might never have known. TurboTax Home & Business can be downloaded or purchased on a CD for both PC and Mac computers. Select your choice when buying.

        -

        The beta code is available to download from the Apple Developer Center and via over-the-air updates to devices for developers enrolled in the testing program. Typically, public beta counterparts to the developer betas are released a few days later.

        -
        -
        \ No newline at end of file diff --git a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/PIL/FitsImagePlugin.py b/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/PIL/FitsImagePlugin.py deleted file mode 100644 index 1359aeb1282ee78e38f40fc25b4a50b621db4043..0000000000000000000000000000000000000000 --- a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/PIL/FitsImagePlugin.py +++ /dev/null @@ -1,73 +0,0 @@ -# -# The Python Imaging Library -# $Id$ -# -# FITS file handling -# -# Copyright (c) 1998-2003 by Fredrik Lundh -# -# See the README file for information on usage and redistribution. -# - -import math - -from . import Image, ImageFile - - -def _accept(prefix): - return prefix[:6] == b"SIMPLE" - - -class FitsImageFile(ImageFile.ImageFile): - format = "FITS" - format_description = "FITS" - - def _open(self): - headers = {} - while True: - header = self.fp.read(80) - if not header: - msg = "Truncated FITS file" - raise OSError(msg) - keyword = header[:8].strip() - if keyword == b"END": - break - value = header[8:].split(b"/")[0].strip() - if value.startswith(b"="): - value = value[1:].strip() - if not headers and (not _accept(keyword) or value != b"T"): - msg = "Not a FITS file" - raise SyntaxError(msg) - headers[keyword] = value - - naxis = int(headers[b"NAXIS"]) - if naxis == 0: - msg = "No image data" - raise ValueError(msg) - elif naxis == 1: - self._size = 1, int(headers[b"NAXIS1"]) - else: - self._size = int(headers[b"NAXIS1"]), int(headers[b"NAXIS2"]) - - number_of_bits = int(headers[b"BITPIX"]) - if number_of_bits == 8: - self.mode = "L" - elif number_of_bits == 16: - self.mode = "I" - # rawmode = "I;16S" - elif number_of_bits == 32: - self.mode = "I" - elif number_of_bits in (-32, -64): - self.mode = "F" - # rawmode = "F" if number_of_bits == -32 else "F;64F" - - offset = math.ceil(self.fp.tell() / 2880) * 2880 - self.tile = [("raw", (0, 0) + self.size, offset, (self.mode, 0, -1))] - - -# -------------------------------------------------------------------- -# Registry - -Image.register_open(FitsImageFile.format, FitsImageFile, _accept) - -Image.register_extensions(FitsImageFile.format, [".fit", ".fits"]) diff --git a/spaces/cmudrc/wecnet-api/README.md b/spaces/cmudrc/wecnet-api/README.md deleted file mode 100644 index 8cbc2311e72e05fb5f88b04aaf557ad77364a4f1..0000000000000000000000000000000000000000 --- a/spaces/cmudrc/wecnet-api/README.md +++ /dev/null @@ -1,14 +0,0 @@ ---- -title: wecnet-api -emoji: 🌊 ↔ 🖥️ -colorFrom: gray -colorTo: gray -sdk: gradio -sdk_version: 3.6 -app_file: app.py -pinned: false -license: mit -duplicated_from: cmudrc/wecnet ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/cncanon/freeturbo/greeting.md b/spaces/cncanon/freeturbo/greeting.md deleted file mode 100644 index 4b50cbfc43ce45a78691abb9824ecd8e1b67606b..0000000000000000000000000000000000000000 --- a/spaces/cncanon/freeturbo/greeting.md +++ /dev/null @@ -1 +0,0 @@ -Password: `peace_through_power` \ No newline at end of file diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/a64multienc.c b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/a64multienc.c deleted file mode 100644 index 26a9debc225279b26e0e4a4f22caa8bab5149258..0000000000000000000000000000000000000000 --- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/a64multienc.c +++ /dev/null @@ -1,424 +0,0 @@ -/* - * a64 video encoder - multicolor modes - * Copyright (c) 
2009 Tobias Bindhammer - * - * This file is part of FFmpeg. - * - * FFmpeg is free software; you can redistribute it and/or - * modify it under the terms of the GNU Lesser General Public - * License as published by the Free Software Foundation; either - * version 2.1 of the License, or (at your option) any later version. - * - * FFmpeg is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * Lesser General Public License for more details. - * - * You should have received a copy of the GNU Lesser General Public - * License along with FFmpeg; if not, write to the Free Software - * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA - */ - -/** - * @file - * a64 video encoder - multicolor modes - */ - -#include "config_components.h" - -#include "a64colors.h" -#include "a64tables.h" -#include "codec_internal.h" -#include "elbg.h" -#include "encode.h" -#include "libavutil/avassert.h" -#include "libavutil/common.h" -#include "libavutil/intreadwrite.h" - -#define DITHERSTEPS 8 -#define CHARSET_CHARS 256 -#define INTERLACED 1 -#define CROP_SCREENS 1 - -#define C64XRES 320 -#define C64YRES 200 - -typedef struct A64Context { - /* variables for multicolor modes */ - struct ELBGContext *elbg; - AVLFG randctx; - int mc_lifetime; - int mc_use_5col; - unsigned mc_frame_counter; - int *mc_meta_charset; - int *mc_charmap; - int *mc_best_cb; - int mc_luma_vals[5]; - uint8_t *mc_colram; - uint8_t *mc_palette; - int mc_pal_size; - - /* pts of the next packet that will be output */ - int64_t next_pts; -} A64Context; - -/* gray gradient */ -static const uint8_t mc_colors[5]={0x0,0xb,0xc,0xf,0x1}; - -/* other possible gradients - to be tested */ -//static const uint8_t mc_colors[5]={0x0,0x8,0xa,0xf,0x7}; -//static const uint8_t mc_colors[5]={0x0,0x9,0x8,0xa,0x3}; - -static void to_meta_with_crop(AVCodecContext *avctx, - const AVFrame *p, int *dest) -{ - int blockx, blocky, x, y; - int luma = 0; - int height = FFMIN(avctx->height, C64YRES); - int width = FFMIN(avctx->width , C64XRES); - const uint8_t *src = p->data[0]; - - for (blocky = 0; blocky < C64YRES; blocky += 8) { - for (blockx = 0; blockx < C64XRES; blockx += 8) { - for (y = blocky; y < blocky + 8 && y < C64YRES; y++) { - for (x = blockx; x < blockx + 8 && x < C64XRES; x += 2) { - if(x < width && y < height) { - if (x + 1 < width) { - /* build average over 2 pixels */ - luma = (src[(x + 0 + y * p->linesize[0])] + - src[(x + 1 + y * p->linesize[0])]) / 2; - } else { - luma = src[(x + y * p->linesize[0])]; - } - /* write blocks as linear data now so they are suitable for elbg */ - dest[0] = luma; - } - dest++; - } - } - } - } -} - -static void render_charset(AVCodecContext *avctx, uint8_t *charset, - uint8_t *colrammap) -{ - A64Context *c = avctx->priv_data; - uint8_t row1, row2; - int charpos, x, y; - int a, b; - uint8_t pix; - int lowdiff, highdiff; - int *best_cb = c->mc_best_cb; - uint8_t index1[256]; - uint8_t index2[256]; - uint8_t dither[256]; - int i; - int distance; - - /* Generate lookup-tables for dither and index before looping. - * This code relies on c->mc_luma_vals[c->mc_pal_size - 1] being - * the maximum of all the mc_luma_vals values and on the minimum - * being zero; this ensures that dither is properly initialized. 
*/ - i = 0; - for (a=0; a < 256; a++) { - if(i < c->mc_pal_size -1 && a == c->mc_luma_vals[i + 1]) { - distance = c->mc_luma_vals[i + 1] - c->mc_luma_vals[i]; - for(b = 0; b <= distance; b++) { - dither[c->mc_luma_vals[i] + b] = b * (DITHERSTEPS - 1) / distance; - } - i++; - } - if(i >= c->mc_pal_size - 1) dither[a] = 0; - index1[a] = i; - index2[a] = FFMIN(i + 1, c->mc_pal_size - 1); - } - - /* and render charset */ - for (charpos = 0; charpos < CHARSET_CHARS; charpos++) { - lowdiff = 0; - highdiff = 0; - for (y = 0; y < 8; y++) { - row1 = 0; row2 = 0; - for (x = 0; x < 4; x++) { - pix = best_cb[y * 4 + x]; - - /* accumulate error for brightest/darkest color */ - if (index1[pix] >= 3) - highdiff += pix - c->mc_luma_vals[3]; - if (index1[pix] < 1) - lowdiff += c->mc_luma_vals[1] - pix; - - row1 <<= 2; - - if (INTERLACED) { - row2 <<= 2; - if (interlaced_dither_patterns[dither[pix]][(y & 3) * 2 + 0][x & 3]) - row1 |= 3-(index2[pix] & 3); - else - row1 |= 3-(index1[pix] & 3); - - if (interlaced_dither_patterns[dither[pix]][(y & 3) * 2 + 1][x & 3]) - row2 |= 3-(index2[pix] & 3); - else - row2 |= 3-(index1[pix] & 3); - } - else { - if (multi_dither_patterns[dither[pix]][(y & 3)][x & 3]) - row1 |= 3-(index2[pix] & 3); - else - row1 |= 3-(index1[pix] & 3); - } - } - charset[y+0x000] = row1; - if (INTERLACED) charset[y+0x800] = row2; - } - /* do we need to adjust pixels? */ - if (highdiff > 0 && lowdiff > 0 && c->mc_use_5col) { - if (lowdiff > highdiff) { - for (x = 0; x < 32; x++) - best_cb[x] = FFMIN(c->mc_luma_vals[3], best_cb[x]); - } else { - for (x = 0; x < 32; x++) - best_cb[x] = FFMAX(c->mc_luma_vals[1], best_cb[x]); - } - charpos--; /* redo now adjusted char */ - /* no adjustment needed, all fine */ - } else { - /* advance pointers */ - best_cb += 32; - charset += 8; - - /* remember colorram value */ - colrammap[charpos] = (highdiff > 0); - } - } -} - -static av_cold int a64multi_close_encoder(AVCodecContext *avctx) -{ - A64Context *c = avctx->priv_data; - - avpriv_elbg_free(&c->elbg); - - av_freep(&c->mc_meta_charset); - av_freep(&c->mc_best_cb); - av_freep(&c->mc_charmap); - av_freep(&c->mc_colram); - return 0; -} - -static av_cold int a64multi_encode_init(AVCodecContext *avctx) -{ - A64Context *c = avctx->priv_data; - int a; - av_lfg_init(&c->randctx, 1); - - if (avctx->global_quality < 1) { - c->mc_lifetime = 4; - } else { - c->mc_lifetime = avctx->global_quality / FF_QP2LAMBDA; - } - - av_log(avctx, AV_LOG_INFO, "charset lifetime set to %d frame(s)\n", c->mc_lifetime); - - c->mc_frame_counter = 0; - c->mc_use_5col = avctx->codec->id == AV_CODEC_ID_A64_MULTI5; - c->mc_pal_size = 4 + c->mc_use_5col; - - /* precalc luma values for later use */ - for (a = 0; a < c->mc_pal_size; a++) { - c->mc_luma_vals[a]=a64_palette[mc_colors[a]][0] * 0.30 + - a64_palette[mc_colors[a]][1] * 0.59 + - a64_palette[mc_colors[a]][2] * 0.11; - } - - if (!(c->mc_meta_charset = av_calloc(c->mc_lifetime, 32000 * sizeof(int))) || - !(c->mc_best_cb = av_malloc(CHARSET_CHARS * 32 * sizeof(int))) || - !(c->mc_charmap = av_calloc(c->mc_lifetime, 1000 * sizeof(int))) || - !(c->mc_colram = av_mallocz(CHARSET_CHARS * sizeof(uint8_t)))) { - av_log(avctx, AV_LOG_ERROR, "Failed to allocate buffer memory.\n"); - return AVERROR(ENOMEM); - } - - /* set up extradata */ - if (!(avctx->extradata = av_mallocz(8 * 4 + AV_INPUT_BUFFER_PADDING_SIZE))) { - av_log(avctx, AV_LOG_ERROR, "Failed to allocate memory for extradata.\n"); - return AVERROR(ENOMEM); - } - avctx->extradata_size = 8 * 4; - AV_WB32(avctx->extradata, 
c->mc_lifetime); - AV_WB32(avctx->extradata + 16, INTERLACED); - - if (!avctx->codec_tag) - avctx->codec_tag = AV_RL32("a64m"); - - c->next_pts = AV_NOPTS_VALUE; - - return 0; -} - -static void a64_compress_colram(unsigned char *buf, int *charmap, uint8_t *colram) -{ - int a; - uint8_t temp; - /* only needs to be done in 5col mode */ - /* XXX could be squeezed to 0x80 bytes */ - for (a = 0; a < 256; a++) { - temp = colram[charmap[a + 0x000]] << 0; - temp |= colram[charmap[a + 0x100]] << 1; - temp |= colram[charmap[a + 0x200]] << 2; - if (a < 0xe8) temp |= colram[charmap[a + 0x300]] << 3; - buf[a] = temp << 2; - } -} - -static int a64multi_encode_frame(AVCodecContext *avctx, AVPacket *pkt, - const AVFrame *p, int *got_packet) -{ - A64Context *c = avctx->priv_data; - - int frame; - int x, y; - int b_height; - int b_width; - - int req_size, ret; - uint8_t *buf = NULL; - - int *charmap = c->mc_charmap; - uint8_t *colram = c->mc_colram; - int *meta = c->mc_meta_charset; - int *best_cb = c->mc_best_cb; - - int charset_size = 0x800 * (INTERLACED + 1); - int colram_size = 0x100 * c->mc_use_5col; - int screen_size; - - if(CROP_SCREENS) { - b_height = FFMIN(avctx->height,C64YRES) >> 3; - b_width = FFMIN(avctx->width ,C64XRES) >> 3; - screen_size = b_width * b_height; - } else { - b_height = C64YRES >> 3; - b_width = C64XRES >> 3; - screen_size = 0x400; - } - - /* no data, means end encoding asap */ - if (!p) { - /* all done, end encoding */ - if (!c->mc_lifetime) return 0; - /* no more frames in queue, prepare to flush remaining frames */ - if (!c->mc_frame_counter) { - c->mc_lifetime = 0; - } - /* still frames in queue so limit lifetime to remaining frames */ - else c->mc_lifetime = c->mc_frame_counter; - /* still new data available */ - } else { - /* fill up mc_meta_charset with data until lifetime exceeds */ - if (c->mc_frame_counter < c->mc_lifetime) { - to_meta_with_crop(avctx, p, meta + 32000 * c->mc_frame_counter); - c->mc_frame_counter++; - if (c->next_pts == AV_NOPTS_VALUE) - c->next_pts = p->pts; - /* lifetime is not reached so wait for next frame first */ - return 0; - } - } - - /* lifetime reached so now convert X frames at once */ - if (c->mc_frame_counter == c->mc_lifetime) { - req_size = 0; - /* any frames to encode? */ - if (c->mc_lifetime) { - int alloc_size = charset_size + c->mc_lifetime*(screen_size + colram_size); - if ((ret = ff_get_encode_buffer(avctx, pkt, alloc_size, 0)) < 0) - return ret; - buf = pkt->data; - - /* calc optimal new charset + charmaps */ - ret = avpriv_elbg_do(&c->elbg, meta, 32, 1000 * c->mc_lifetime, - best_cb, CHARSET_CHARS, 50, charmap, &c->randctx, 0); - if (ret < 0) - return ret; - - /* create colorram map and a c64 readable charset */ - render_charset(avctx, buf, colram); - - /* advance pointers */ - buf += charset_size; - req_size += charset_size; - } - - /* write x frames to buf */ - for (frame = 0; frame < c->mc_lifetime; frame++) { - /* copy charmap to buf. 
buf is uchar*, charmap is int*, so no memcpy here, sorry */ - for (y = 0; y < b_height; y++) { - for (x = 0; x < b_width; x++) { - buf[y * b_width + x] = charmap[y * b_width + x]; - } - } - /* advance pointers */ - buf += screen_size; - req_size += screen_size; - - /* compress and copy colram to buf */ - if (c->mc_use_5col) { - a64_compress_colram(buf, charmap, colram); - /* advance pointers */ - buf += colram_size; - req_size += colram_size; - } - - /* advance to next charmap */ - charmap += 1000; - } - - AV_WB32(avctx->extradata + 4, c->mc_frame_counter); - AV_WB32(avctx->extradata + 8, charset_size); - AV_WB32(avctx->extradata + 12, screen_size + colram_size); - - /* reset counter */ - c->mc_frame_counter = 0; - - pkt->pts = pkt->dts = c->next_pts; - c->next_pts = AV_NOPTS_VALUE; - - av_assert0(pkt->size == req_size); - *got_packet = !!req_size; - } - return 0; -} - -#if CONFIG_A64MULTI_ENCODER -const FFCodec ff_a64multi_encoder = { - .p.name = "a64multi", - CODEC_LONG_NAME("Multicolor charset for Commodore 64"), - .p.type = AVMEDIA_TYPE_VIDEO, - .p.id = AV_CODEC_ID_A64_MULTI, - .p.capabilities = AV_CODEC_CAP_DR1 | AV_CODEC_CAP_DELAY, - .priv_data_size = sizeof(A64Context), - .init = a64multi_encode_init, - FF_CODEC_ENCODE_CB(a64multi_encode_frame), - .close = a64multi_close_encoder, - .p.pix_fmts = (const enum AVPixelFormat[]) {AV_PIX_FMT_GRAY8, AV_PIX_FMT_NONE}, - .caps_internal = FF_CODEC_CAP_INIT_CLEANUP, -}; -#endif -#if CONFIG_A64MULTI5_ENCODER -const FFCodec ff_a64multi5_encoder = { - .p.name = "a64multi5", - CODEC_LONG_NAME("Multicolor charset for Commodore 64, extended with 5th color (colram)"), - .p.type = AVMEDIA_TYPE_VIDEO, - .p.id = AV_CODEC_ID_A64_MULTI5, - .p.capabilities = AV_CODEC_CAP_DR1 | AV_CODEC_CAP_DELAY, - .priv_data_size = sizeof(A64Context), - .init = a64multi_encode_init, - FF_CODEC_ENCODE_CB(a64multi_encode_frame), - .close = a64multi_close_encoder, - .p.pix_fmts = (const enum AVPixelFormat[]) {AV_PIX_FMT_GRAY8, AV_PIX_FMT_NONE}, - .caps_internal = FF_CODEC_CAP_INIT_CLEANUP, -}; -#endif diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/aacps_tablegen.c b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/aacps_tablegen.c deleted file mode 100644 index 26a6752faa97850b37497bd00258a6b68b0b06db..0000000000000000000000000000000000000000 --- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/aacps_tablegen.c +++ /dev/null @@ -1,24 +0,0 @@ -/* - * Generate a header file for hardcoded Parametric Stereo tables - * - * Copyright (c) 2010 Alex Converse - * - * This file is part of FFmpeg. - * - * FFmpeg is free software; you can redistribute it and/or - * modify it under the terms of the GNU Lesser General Public - * License as published by the Free Software Foundation; either - * version 2.1 of the License, or (at your option) any later version. - * - * FFmpeg is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * Lesser General Public License for more details. 
- * - * You should have received a copy of the GNU Lesser General Public - * License along with FFmpeg; if not, write to the Free Software - * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA - */ - -#define USE_FIXED 0 -#include "aacps_tablegen_template.c" diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/elbg.c b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/elbg.c deleted file mode 100644 index d97a7bc3f99302a5ce93097121bd6e7528d5e627..0000000000000000000000000000000000000000 --- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/elbg.c +++ /dev/null @@ -1,514 +0,0 @@ -/* - * Copyright (C) 2007 Vitor Sessak - * - * This file is part of FFmpeg. - * - * FFmpeg is free software; you can redistribute it and/or - * modify it under the terms of the GNU Lesser General Public - * License as published by the Free Software Foundation; either - * version 2.1 of the License, or (at your option) any later version. - * - * FFmpeg is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * Lesser General Public License for more details. - * - * You should have received a copy of the GNU Lesser General Public - * License along with FFmpeg; if not, write to the Free Software - * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA - */ - -/** - * @file - * Codebook Generator using the ELBG algorithm - */ - -#include - -#include "libavutil/avassert.h" -#include "libavutil/common.h" -#include "libavutil/lfg.h" -#include "elbg.h" - -#define DELTA_ERR_MAX 0.1 ///< Precision of the ELBG algorithm (as percentage error) - -/** - * In the ELBG jargon, a cell is the set of points that are closest to a - * codebook entry. Not to be confused with a RoQ Video cell. */ -typedef struct cell_s { - int index; - struct cell_s *next; -} cell; - -/** - * ELBG internal data - */ -typedef struct ELBGContext { - int64_t error; - int dim; - int num_cb; - int *codebook; - cell **cells; - int64_t *utility; - int64_t *utility_inc; - int *nearest_cb; - int *points; - int *temp_points; - int *size_part; - AVLFG *rand_state; - int *scratchbuf; - cell *cell_buffer; - - /* Sizes for the buffers above. Pointers without such a field - * are not allocated by us and only valid for the duration - * of a single call to avpriv_elbg_do(). 
*/ - unsigned utility_allocated; - unsigned utility_inc_allocated; - unsigned size_part_allocated; - unsigned cells_allocated; - unsigned scratchbuf_allocated; - unsigned cell_buffer_allocated; - unsigned temp_points_allocated; -} ELBGContext; - -static inline int distance_limited(int *a, int *b, int dim, int limit) -{ - int i, dist=0; - for (i=0; i limit) - return INT_MAX; - } - - return dist; -} - -static inline void vect_division(int *res, int *vect, int div, int dim) -{ - int i; - if (div > 1) - for (i=0; inext) - error += distance_limited(centroid, elbg->points + cells->index*elbg->dim, elbg->dim, INT_MAX); - - return error; -} - -static int get_closest_codebook(ELBGContext *elbg, int index) -{ - int pick = 0; - for (int i = 0, diff_min = INT_MAX; i < elbg->num_cb; i++) - if (i != index) { - int diff; - diff = distance_limited(elbg->codebook + i*elbg->dim, elbg->codebook + index*elbg->dim, elbg->dim, diff_min); - if (diff < diff_min) { - pick = i; - diff_min = diff; - } - } - return pick; -} - -static int get_high_utility_cell(ELBGContext *elbg) -{ - int i=0; - /* Using linear search, do binary if it ever turns to be speed critical */ - uint64_t r; - - if (elbg->utility_inc[elbg->num_cb - 1] < INT_MAX) { - r = av_lfg_get(elbg->rand_state) % (unsigned int)elbg->utility_inc[elbg->num_cb - 1] + 1; - } else { - r = av_lfg_get(elbg->rand_state); - r = (av_lfg_get(elbg->rand_state) + (r<<32)) % elbg->utility_inc[elbg->num_cb - 1] + 1; - } - - while (elbg->utility_inc[i] < r) { - i++; - } - - av_assert2(elbg->cells[i]); - - return i; -} - -/** - * Implementation of the simple LBG algorithm for just two codebooks - */ -static int simple_lbg(ELBGContext *elbg, - int dim, - int *centroid[3], - int newutility[3], - int *points, - cell *cells) -{ - int i, idx; - int numpoints[2] = {0,0}; - int *newcentroid[2] = { - elbg->scratchbuf + 3*dim, - elbg->scratchbuf + 4*dim - }; - cell *tempcell; - - memset(newcentroid[0], 0, 2 * dim * sizeof(*newcentroid[0])); - - newutility[0] = - newutility[1] = 0; - - for (tempcell = cells; tempcell; tempcell=tempcell->next) { - idx = distance_limited(centroid[0], points + tempcell->index*dim, dim, INT_MAX)>= - distance_limited(centroid[1], points + tempcell->index*dim, dim, INT_MAX); - numpoints[idx]++; - for (i=0; iindex*dim + i]; - } - - vect_division(centroid[0], newcentroid[0], numpoints[0], dim); - vect_division(centroid[1], newcentroid[1], numpoints[1], dim); - - for (tempcell = cells; tempcell; tempcell=tempcell->next) { - int dist[2] = {distance_limited(centroid[0], points + tempcell->index*dim, dim, INT_MAX), - distance_limited(centroid[1], points + tempcell->index*dim, dim, INT_MAX)}; - int idx = dist[0] > dist[1]; - newutility[idx] += dist[idx]; - } - - return newutility[0] + newutility[1]; -} - -static void get_new_centroids(ELBGContext *elbg, int huc, int *newcentroid_i, - int *newcentroid_p) -{ - cell *tempcell; - int *min = newcentroid_i; - int *max = newcentroid_p; - int i; - - for (i=0; i< elbg->dim; i++) { - min[i]=INT_MAX; - max[i]=0; - } - - for (tempcell = elbg->cells[huc]; tempcell; tempcell = tempcell->next) - for(i=0; idim; i++) { - min[i]=FFMIN(min[i], elbg->points[tempcell->index*elbg->dim + i]); - max[i]=FFMAX(max[i], elbg->points[tempcell->index*elbg->dim + i]); - } - - for (i=0; idim; i++) { - int ni = min[i] + (max[i] - min[i])/3; - int np = min[i] + (2*(max[i] - min[i]))/3; - newcentroid_i[i] = ni; - newcentroid_p[i] = np; - } -} - -/** - * Add the points in the low utility cell to its closest cell. 
Split the high - * utility cell, putting the separated points in the (now empty) low utility - * cell. - * - * @param elbg Internal elbg data - * @param indexes {luc, huc, cluc} - * @param newcentroid A vector with the position of the new centroids - */ -static void shift_codebook(ELBGContext *elbg, int *indexes, - int *newcentroid[3]) -{ - cell *tempdata; - cell **pp = &elbg->cells[indexes[2]]; - - while(*pp) - pp= &(*pp)->next; - - *pp = elbg->cells[indexes[0]]; - - elbg->cells[indexes[0]] = NULL; - tempdata = elbg->cells[indexes[1]]; - elbg->cells[indexes[1]] = NULL; - - while(tempdata) { - cell *tempcell2 = tempdata->next; - int idx = distance_limited(elbg->points + tempdata->index*elbg->dim, - newcentroid[0], elbg->dim, INT_MAX) > - distance_limited(elbg->points + tempdata->index*elbg->dim, - newcentroid[1], elbg->dim, INT_MAX); - - tempdata->next = elbg->cells[indexes[idx]]; - elbg->cells[indexes[idx]] = tempdata; - tempdata = tempcell2; - } -} - -static void evaluate_utility_inc(ELBGContext *elbg) -{ - int64_t inc=0; - - for (int i = 0; i < elbg->num_cb; i++) { - if (elbg->num_cb * elbg->utility[i] > elbg->error) - inc += elbg->utility[i]; - elbg->utility_inc[i] = inc; - } -} - - -static void update_utility_and_n_cb(ELBGContext *elbg, int idx, int newutility) -{ - cell *tempcell; - - elbg->utility[idx] = newutility; - for (tempcell=elbg->cells[idx]; tempcell; tempcell=tempcell->next) - elbg->nearest_cb[tempcell->index] = idx; -} - -/** - * Evaluate if a shift lower the error. If it does, call shift_codebooks - * and update elbg->error, elbg->utility and elbg->nearest_cb. - * - * @param elbg Internal elbg data - * @param idx {luc (low utility cell, huc (high utility cell), cluc (closest cell to low utility cell)} - */ -static void try_shift_candidate(ELBGContext *elbg, int idx[3]) -{ - int j, k, cont=0; - int64_t olderror=0, newerror; - int newutility[3]; - int *newcentroid[3] = { - elbg->scratchbuf, - elbg->scratchbuf + elbg->dim, - elbg->scratchbuf + 2*elbg->dim - }; - cell *tempcell; - - for (j=0; j<3; j++) - olderror += elbg->utility[idx[j]]; - - memset(newcentroid[2], 0, elbg->dim*sizeof(int)); - - for (k=0; k<2; k++) - for (tempcell=elbg->cells[idx[2*k]]; tempcell; tempcell=tempcell->next) { - cont++; - for (j=0; jdim; j++) - newcentroid[2][j] += elbg->points[tempcell->index*elbg->dim + j]; - } - - vect_division(newcentroid[2], newcentroid[2], cont, elbg->dim); - - get_new_centroids(elbg, idx[1], newcentroid[0], newcentroid[1]); - - newutility[2] = eval_error_cell(elbg, newcentroid[2], elbg->cells[idx[0]]); - newutility[2] += eval_error_cell(elbg, newcentroid[2], elbg->cells[idx[2]]); - - newerror = newutility[2]; - - newerror += simple_lbg(elbg, elbg->dim, newcentroid, newutility, elbg->points, - elbg->cells[idx[1]]); - - if (olderror > newerror) { - shift_codebook(elbg, idx, newcentroid); - - elbg->error += newerror - olderror; - - for (j=0; j<3; j++) - update_utility_and_n_cb(elbg, idx[j], newutility[j]); - - evaluate_utility_inc(elbg); - } - } - -/** - * Implementation of the ELBG block - */ -static void do_shiftings(ELBGContext *elbg) -{ - int idx[3]; - - evaluate_utility_inc(elbg); - - for (idx[0]=0; idx[0] < elbg->num_cb; idx[0]++) - if (elbg->num_cb * elbg->utility[idx[0]] < elbg->error) { - if (elbg->utility_inc[elbg->num_cb - 1] == 0) - return; - - idx[1] = get_high_utility_cell(elbg); - idx[2] = get_closest_codebook(elbg, idx[0]); - - if (idx[1] != idx[0] && idx[1] != idx[2]) - try_shift_candidate(elbg, idx); - } -} - -static void do_elbg(ELBGContext *av_restrict 
elbg, int *points, int numpoints, - int max_steps) -{ - int *const size_part = elbg->size_part; - int i, j, steps = 0; - int best_idx = 0; - int64_t last_error; - - elbg->error = INT64_MAX; - elbg->points = points; - - do { - cell *free_cells = elbg->cell_buffer; - last_error = elbg->error; - steps++; - memset(elbg->utility, 0, elbg->num_cb * sizeof(*elbg->utility)); - memset(elbg->cells, 0, elbg->num_cb * sizeof(*elbg->cells)); - - elbg->error = 0; - - /* This loop evaluate the actual Voronoi partition. It is the most - costly part of the algorithm. */ - for (i=0; i < numpoints; i++) { - int best_dist = distance_limited(elbg->points + i * elbg->dim, - elbg->codebook + best_idx * elbg->dim, - elbg->dim, INT_MAX); - for (int k = 0; k < elbg->num_cb; k++) { - int dist = distance_limited(elbg->points + i * elbg->dim, - elbg->codebook + k * elbg->dim, - elbg->dim, best_dist); - if (dist < best_dist) { - best_dist = dist; - best_idx = k; - } - } - elbg->nearest_cb[i] = best_idx; - elbg->error += best_dist; - elbg->utility[elbg->nearest_cb[i]] += best_dist; - free_cells->index = i; - free_cells->next = elbg->cells[elbg->nearest_cb[i]]; - elbg->cells[elbg->nearest_cb[i]] = free_cells; - free_cells++; - } - - do_shiftings(elbg); - - memset(size_part, 0, elbg->num_cb * sizeof(*size_part)); - - memset(elbg->codebook, 0, elbg->num_cb * elbg->dim * sizeof(*elbg->codebook)); - - for (i=0; i < numpoints; i++) { - size_part[elbg->nearest_cb[i]]++; - for (j=0; j < elbg->dim; j++) - elbg->codebook[elbg->nearest_cb[i]*elbg->dim + j] += - elbg->points[i*elbg->dim + j]; - } - - for (int i = 0; i < elbg->num_cb; i++) - vect_division(elbg->codebook + i*elbg->dim, - elbg->codebook + i*elbg->dim, size_part[i], elbg->dim); - - } while(((last_error - elbg->error) > DELTA_ERR_MAX*elbg->error) && - (steps < max_steps)); -} - -#define BIG_PRIME 433494437LL - -/** - * Initialize the codebook vector for the elbg algorithm. - * If numpoints <= 24 * num_cb this function fills codebook with random numbers. - * If not, it calls do_elbg for a (smaller) random sample of the points in - * points. - */ -static void init_elbg(ELBGContext *av_restrict elbg, int *points, int *temp_points, - int numpoints, int max_steps) -{ - int dim = elbg->dim; - - if (numpoints > 24LL * elbg->num_cb) { - /* ELBG is very costly for a big number of points. So if we have a lot - of them, get a good initial codebook to save on iterations */ - for (int i = 0; i < numpoints / 8; i++) { - int k = (i*BIG_PRIME) % numpoints; - memcpy(temp_points + i*dim, points + k*dim, dim * sizeof(*temp_points)); - } - - /* If anything is changed in the recursion parameters, - * the allocated size of temp_points will also need to be updated. */ - init_elbg(elbg, temp_points, temp_points + numpoints / 8 * dim, - numpoints / 8, 2 * max_steps); - do_elbg(elbg, temp_points, numpoints / 8, 2 * max_steps); - } else // If not, initialize the codebook with random positions - for (int i = 0; i < elbg->num_cb; i++) - memcpy(elbg->codebook + i * dim, points + ((i*BIG_PRIME)%numpoints)*dim, - dim * sizeof(*elbg->codebook)); -} - -int avpriv_elbg_do(ELBGContext **elbgp, int *points, int dim, int numpoints, - int *codebook, int num_cb, int max_steps, - int *closest_cb, AVLFG *rand_state, uintptr_t flags) -{ - ELBGContext *const av_restrict elbg = *elbgp ? 
*elbgp : av_mallocz(sizeof(*elbg)); - - if (!elbg) - return AVERROR(ENOMEM); - *elbgp = elbg; - - elbg->nearest_cb = closest_cb; - elbg->rand_state = rand_state; - elbg->codebook = codebook; - elbg->num_cb = num_cb; - elbg->dim = dim; - -#define ALLOCATE_IF_NECESSARY(field, new_elements, multiplicator) \ - if (elbg->field ## _allocated < new_elements) { \ - av_freep(&elbg->field); \ - elbg->field = av_malloc_array(new_elements, \ - multiplicator * sizeof(*elbg->field)); \ - if (!elbg->field) { \ - elbg->field ## _allocated = 0; \ - return AVERROR(ENOMEM); \ - } \ - elbg->field ## _allocated = new_elements; \ - } - /* Allocating the buffers for do_elbg() here once relies - * on their size being always the same even when do_elbg() - * is called from init_elbg(). It also relies on do_elbg() - * never calling itself recursively. */ - ALLOCATE_IF_NECESSARY(cells, num_cb, 1) - ALLOCATE_IF_NECESSARY(utility, num_cb, 1) - ALLOCATE_IF_NECESSARY(utility_inc, num_cb, 1) - ALLOCATE_IF_NECESSARY(size_part, num_cb, 1) - ALLOCATE_IF_NECESSARY(cell_buffer, numpoints, 1) - ALLOCATE_IF_NECESSARY(scratchbuf, dim, 5) - if (numpoints > 24LL * elbg->num_cb) { - /* The first step in the recursion in init_elbg() needs a buffer with - * (numpoints / 8) * dim elements; the next step needs numpoints / 8 / 8 - * * dim elements etc. The geometric series leads to an upper bound of - * numpoints / 8 * 8 / 7 * dim elements. */ - uint64_t prod = dim * (uint64_t)(numpoints / 7U); - if (prod > INT_MAX) - return AVERROR(ERANGE); - ALLOCATE_IF_NECESSARY(temp_points, prod, 1) - } - - init_elbg(elbg, points, elbg->temp_points, numpoints, max_steps); - do_elbg (elbg, points, numpoints, max_steps); - return 0; -} - -av_cold void avpriv_elbg_free(ELBGContext **elbgp) -{ - ELBGContext *elbg = *elbgp; - if (!elbg) - return; - - av_freep(&elbg->size_part); - av_freep(&elbg->utility); - av_freep(&elbg->cell_buffer); - av_freep(&elbg->cells); - av_freep(&elbg->utility_inc); - av_freep(&elbg->scratchbuf); - av_freep(&elbg->temp_points); - - av_freep(elbgp); -} diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/loongarch/hevc_idct_lsx.c b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/loongarch/hevc_idct_lsx.c deleted file mode 100644 index 2193b27546b50f0ea71a3002f7885e935feda5d8..0000000000000000000000000000000000000000 --- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/loongarch/hevc_idct_lsx.c +++ /dev/null @@ -1,842 +0,0 @@ -/* - * Copyright (c) 2022 Loongson Technology Corporation Limited - * Contributed by Shiyou Yin - * Hao Chen - * - * This file is part of FFmpeg. - * - * FFmpeg is free software; you can redistribute it and/or - * modify it under the terms of the GNU Lesser General Public - * License as published by the Free Software Foundation; either - * version 2.1 of the License, or (at your option) any later version. - * - * FFmpeg is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * Lesser General Public License for more details. 
- * - * You should have received a copy of the GNU Lesser General Public - * License along with FFmpeg; if not, write to the Free Software - * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA - */ - -#include "libavutil/loongarch/loongson_intrinsics.h" -#include "hevcdsp_lsx.h" - -static const int16_t gt8x8_cnst[16] __attribute__ ((aligned (64))) = { - 64, 64, 83, 36, 89, 50, 18, 75, 64, -64, 36, -83, 75, -89, -50, -18 -}; - -static const int16_t gt16x16_cnst[64] __attribute__ ((aligned (64))) = { - 64, 83, 64, 36, 89, 75, 50, 18, 90, 80, 57, 25, 70, 87, 9, 43, - 64, 36, -64, -83, 75, -18, -89, -50, 87, 9, -80, -70, -43, 57, -25, -90, - 64, -36, -64, 83, 50, -89, 18, 75, 80, -70, -25, 90, -87, 9, 43, 57, - 64, -83, 64, -36, 18, -50, 75, -89, 70, -87, 90, -80, 9, -43, -57, 25 -}; - -static const int16_t gt32x32_cnst0[256] __attribute__ ((aligned (64))) = { - 90, 90, 88, 85, 82, 78, 73, 67, 61, 54, 46, 38, 31, 22, 13, 4, - 90, 82, 67, 46, 22, -4, -31, -54, -73, -85, -90, -88, -78, -61, -38, -13, - 88, 67, 31, -13, -54, -82, -90, -78, -46, -4, 38, 73, 90, 85, 61, 22, - 85, 46, -13, -67, -90, -73, -22, 38, 82, 88, 54, -4, -61, -90, -78, -31, - 82, 22, -54, -90, -61, 13, 78, 85, 31, -46, -90, -67, 4, 73, 88, 38, - 78, -4, -82, -73, 13, 85, 67, -22, -88, -61, 31, 90, 54, -38, -90, -46, - 73, -31, -90, -22, 78, 67, -38, -90, -13, 82, 61, -46, -88, -4, 85, 54, - 67, -54, -78, 38, 85, -22, -90, 4, 90, 13, -88, -31, 82, 46, -73, -61, - 61, -73, -46, 82, 31, -88, -13, 90, -4, -90, 22, 85, -38, -78, 54, 67, - 54, -85, -4, 88, -46, -61, 82, 13, -90, 38, 67, -78, -22, 90, -31, -73, - 46, -90, 38, 54, -90, 31, 61, -88, 22, 67, -85, 13, 73, -82, 4, 78, - 38, -88, 73, -4, -67, 90, -46, -31, 85, -78, 13, 61, -90, 54, 22, -82, - 31, -78, 90, -61, 4, 54, -88, 82, -38, -22, 73, -90, 67, -13, -46, 85, - 22, -61, 85, -90, 73, -38, -4, 46, -78, 90, -82, 54, -13, -31, 67, -88, - 13, -38, 61, -78, 88, -90, 85, -73, 54, -31, 4, 22, -46, 67, -82, 90, - 4, -13, 22, -31, 38, -46, 54, -61, 67, -73, 78, -82, 85, -88, 90, -90 -}; - -static const int16_t gt32x32_cnst1[64] __attribute__ ((aligned (64))) = { - 90, 87, 80, 70, 57, 43, 25, 9, 87, 57, 9, -43, -80, -90, -70, -25, - 80, 9, -70, -87, -25, 57, 90, 43, 70, -43, -87, 9, 90, 25, -80, -57, - 57, -80, -25, 90, -9, -87, 43, 70, 43, -90, 57, 25, -87, 70, 9, -80, - 25, -70, 90, -80, 43, 9, -57, 87, 9, -25, 43, -57, 70, -80, 87, -90 -}; - -static const int16_t gt32x32_cnst2[16] __attribute__ ((aligned (64))) = { - 89, 75, 50, 18, 75, -18, -89, -50, 50, -89, 18, 75, 18, -50, 75, -89 -}; - -#define HEVC_IDCT4x4_COL(in_r0, in_l0, in_r1, in_l1, \ - sum0, sum1, sum2, sum3, shift) \ -{ \ - __m128i vec0, vec1, vec2, vec3, vec4, vec5; \ - __m128i cnst64 = __lsx_vldi(0x0840); \ - __m128i cnst83 = __lsx_vldi(0x0853); \ - __m128i cnst36 = __lsx_vldi(0x0824); \ - \ - vec0 = __lsx_vdp2_w_h(in_r0, cnst64); \ - vec1 = __lsx_vdp2_w_h(in_l0, cnst83); \ - vec2 = __lsx_vdp2_w_h(in_r1, cnst64); \ - vec3 = __lsx_vdp2_w_h(in_l1, cnst36); \ - vec4 = __lsx_vdp2_w_h(in_l0, cnst36); \ - vec5 = __lsx_vdp2_w_h(in_l1, cnst83); \ - \ - sum0 = __lsx_vadd_w(vec0, vec2); \ - sum1 = __lsx_vsub_w(vec0, vec2); \ - vec1 = __lsx_vadd_w(vec1, vec3); \ - vec4 = __lsx_vsub_w(vec4, vec5); \ - sum2 = __lsx_vsub_w(sum1, vec4); \ - sum3 = __lsx_vsub_w(sum0, vec1); \ - sum0 = __lsx_vadd_w(sum0, vec1); \ - sum1 = __lsx_vadd_w(sum1, vec4); \ - \ - sum0 = __lsx_vsrari_w(sum0, shift); \ - sum1 = __lsx_vsrari_w(sum1, shift); \ - sum2 = __lsx_vsrari_w(sum2, shift); \ - sum3 = __lsx_vsrari_w(sum3, 
shift); \ - sum0 = __lsx_vsat_w(sum0, 15); \ - sum1 = __lsx_vsat_w(sum1, 15); \ - sum2 = __lsx_vsat_w(sum2, 15); \ - sum3 = __lsx_vsat_w(sum3, 15); \ -} - -#define HEVC_IDCT8x8_COL(in0, in1, in2, in3, in4, in5, in6, in7, shift) \ -{ \ - __m128i src0_r, src1_r, src2_r, src3_r; \ - __m128i src0_l, src1_l, src2_l, src3_l; \ - __m128i filter0, filter1, filter2, filter3; \ - __m128i temp0_r, temp1_r, temp2_r, temp3_r, temp4_r, temp5_r; \ - __m128i temp0_l, temp1_l, temp2_l, temp3_l, temp4_l, temp5_l; \ - __m128i sum0_r, sum1_r, sum2_r, sum3_r; \ - __m128i sum0_l, sum1_l, sum2_l, sum3_l; \ - \ - DUP4_ARG2(__lsx_vilvl_h, in4, in0, in6, in2, in5, in1, in3, in7, \ - src0_r, src1_r, src2_r, src3_r); \ - DUP4_ARG2(__lsx_vilvh_h, in4, in0, in6, in2, in5, in1, in3, in7, \ - src0_l, src1_l, src2_l, src3_l); \ - \ - DUP4_ARG2(__lsx_vldrepl_w, filter, 0, filter, 4, filter, 8, \ - filter, 12, filter0, filter1, filter2, filter3); \ - DUP4_ARG2(__lsx_vdp2_w_h, src0_r, filter0, src0_l, filter0, \ - src1_r, filter1, src1_l, filter1, temp0_r, temp0_l, \ - temp1_r, temp1_l); \ - \ - LSX_BUTTERFLY_4_W(temp0_r, temp0_l, temp1_l, temp1_r, sum0_r, sum0_l,\ - sum1_l, sum1_r); \ - sum2_r = sum1_r; \ - sum2_l = sum1_l; \ - sum3_r = sum0_r; \ - sum3_l = sum0_l; \ - \ - DUP4_ARG2(__lsx_vdp2_w_h, src2_r, filter2, src2_l, filter2, \ - src3_r, filter3, src3_l, filter3, temp2_r, temp2_l, \ - temp3_r, temp3_l); \ - temp2_r = __lsx_vadd_w(temp2_r, temp3_r); \ - temp2_l = __lsx_vadd_w(temp2_l, temp3_l); \ - sum0_r = __lsx_vadd_w(sum0_r, temp2_r); \ - sum0_l = __lsx_vadd_w(sum0_l, temp2_l); \ - sum3_r = __lsx_vsub_w(sum3_r, temp2_r); \ - sum3_l = __lsx_vsub_w(sum3_l, temp2_l); \ - \ - in0 = __lsx_vssrarni_h_w(sum0_l, sum0_r, shift); \ - in7 = __lsx_vssrarni_h_w(sum3_l, sum3_r, shift); \ - \ - DUP4_ARG2(__lsx_vdp2_w_h, src2_r, filter3, src2_l, filter3, \ - src3_r, filter2, src3_l, filter2, temp4_r, temp4_l, \ - temp5_r, temp5_l); \ - temp4_r = __lsx_vsub_w(temp4_r, temp5_r); \ - temp4_l = __lsx_vsub_w(temp4_l, temp5_l); \ - sum1_r = __lsx_vadd_w(sum1_r, temp4_r); \ - sum1_l = __lsx_vadd_w(sum1_l, temp4_l); \ - sum2_r = __lsx_vsub_w(sum2_r, temp4_r); \ - sum2_l = __lsx_vsub_w(sum2_l, temp4_l); \ - \ - in3 = __lsx_vssrarni_h_w(sum1_l, sum1_r, shift); \ - in4 = __lsx_vssrarni_h_w(sum2_l, sum2_r, shift); \ - \ - DUP4_ARG2(__lsx_vldrepl_w, filter, 16, filter, 20, filter, 24, \ - filter, 28, filter0, filter1, filter2, filter3); \ - DUP4_ARG2(__lsx_vdp2_w_h, src0_r, filter0, src0_l, filter0, \ - src1_r, filter1, src1_l, filter1, temp0_r, temp0_l, \ - temp1_r, temp1_l); \ - \ - LSX_BUTTERFLY_4_W(temp0_r, temp0_l, temp1_l, temp1_r, sum0_r, sum0_l,\ - sum1_l, sum1_r); \ - sum2_r = sum1_r; \ - sum2_l = sum1_l; \ - sum3_r = sum0_r; \ - sum3_l = sum0_l; \ - \ - DUP4_ARG2(__lsx_vdp2_w_h, src2_r, filter2, src2_l, filter2, \ - src3_r, filter3, src3_l, filter3, temp2_r, temp2_l, \ - temp3_r, temp3_l); \ - temp2_r = __lsx_vadd_w(temp2_r, temp3_r); \ - temp2_l = __lsx_vadd_w(temp2_l, temp3_l); \ - sum0_r = __lsx_vadd_w(sum0_r, temp2_r); \ - sum0_l = __lsx_vadd_w(sum0_l, temp2_l); \ - sum3_r = __lsx_vsub_w(sum3_r, temp2_r); \ - sum3_l = __lsx_vsub_w(sum3_l, temp2_l); \ - \ - in1 = __lsx_vssrarni_h_w(sum0_l, sum0_r, shift); \ - in6 = __lsx_vssrarni_h_w(sum3_l, sum3_r, shift); \ - \ - DUP4_ARG2(__lsx_vdp2_w_h, src2_r, filter3, src2_l, filter3, \ - src3_r, filter2, src3_l, filter2, temp4_r, temp4_l, \ - temp5_r, temp5_l); \ - temp4_r = __lsx_vsub_w(temp4_r, temp5_r); \ - temp4_l = __lsx_vsub_w(temp4_l, temp5_l); \ - sum1_r = __lsx_vsub_w(sum1_r, 
temp4_r); \ - sum1_l = __lsx_vsub_w(sum1_l, temp4_l); \ - sum2_r = __lsx_vadd_w(sum2_r, temp4_r); \ - sum2_l = __lsx_vadd_w(sum2_l, temp4_l); \ - \ - in2 = __lsx_vssrarni_h_w(sum1_l, sum1_r, shift); \ - in5 = __lsx_vssrarni_h_w(sum2_l, sum2_r, shift); \ -} - -#define HEVC_IDCT16x16_COL(src0_r, src1_r, src2_r, src3_r, \ - src4_r, src5_r, src6_r, src7_r, \ - src0_l, src1_l, src2_l, src3_l, \ - src4_l, src5_l, src6_l, src7_l, shift) \ -{ \ - int16_t *ptr0, *ptr1; \ - __m128i dst0, dst1; \ - __m128i filter0, filter1, filter2, filter3; \ - __m128i temp0_r, temp1_r, temp0_l, temp1_l; \ - __m128i sum0_r, sum1_r, sum2_r, sum3_r, sum0_l, sum1_l, sum2_l; \ - __m128i sum3_l, res0_r, res1_r, res0_l, res1_l; \ - \ - ptr0 = (buf_ptr + 112); \ - ptr1 = (buf_ptr + 128); \ - k = -1; \ - \ - for (j = 0; j < 4; j++) \ - { \ - DUP4_ARG2(__lsx_vldrepl_w, filter, 0, filter, 4, filter, 16, \ - filter, 20, filter0, filter1, filter2, filter3); \ - DUP4_ARG2(__lsx_vdp2_w_h, src0_r, filter0, src0_l, filter0, \ - src4_r, filter2, src4_l, filter2, sum0_r, sum0_l, \ - sum2_r, sum2_l); \ - DUP2_ARG2(__lsx_vdp2_w_h, src7_r, filter2, src7_l, filter2, \ - sum3_r, sum3_l); \ - DUP4_ARG3(__lsx_vdp2add_w_h, sum0_r, src1_r, filter1, sum0_l, \ - src1_l, filter1, sum2_r, src5_r, filter3, sum2_l, \ - src5_l, filter3, sum0_r, sum0_l, sum2_r, sum2_l); \ - DUP2_ARG3(__lsx_vdp2add_w_h, sum3_r, src6_r, filter3, sum3_l, \ - src6_l, filter3, sum3_r, sum3_l); \ - \ - sum1_r = sum0_r; \ - sum1_l = sum0_l; \ - \ - DUP4_ARG2(__lsx_vldrepl_w, filter, 8, filter, 12, filter, 24, \ - filter, 28, filter0, filter1, filter2, filter3); \ - filter += 16; \ - DUP2_ARG2(__lsx_vdp2_w_h, src2_r, filter0, src2_l, filter0, \ - temp0_r, temp0_l); \ - DUP2_ARG3(__lsx_vdp2add_w_h, sum2_r, src6_r, filter2, sum2_l, \ - src6_l, filter2, sum2_r, sum2_l); \ - DUP2_ARG2(__lsx_vdp2_w_h, src5_r, filter2, src5_l, filter2, \ - temp1_r, temp1_l); \ - \ - sum0_r = __lsx_vadd_w(sum0_r, temp0_r); \ - sum0_l = __lsx_vadd_w(sum0_l, temp0_l); \ - sum1_r = __lsx_vsub_w(sum1_r, temp0_r); \ - sum1_l = __lsx_vsub_w(sum1_l, temp0_l); \ - sum3_r = __lsx_vsub_w(temp1_r, sum3_r); \ - sum3_l = __lsx_vsub_w(temp1_l, sum3_l); \ - \ - DUP2_ARG2(__lsx_vdp2_w_h, src3_r, filter1, src3_l, filter1, \ - temp0_r, temp0_l); \ - DUP4_ARG3(__lsx_vdp2add_w_h, sum2_r, src7_r, filter3, sum2_l, \ - src7_l, filter3, sum3_r, src4_r, filter3, sum3_l, \ - src4_l, filter3, sum2_r, sum2_l, sum3_r, sum3_l); \ - \ - sum0_r = __lsx_vadd_w(sum0_r, temp0_r); \ - sum0_l = __lsx_vadd_w(sum0_l, temp0_l); \ - sum1_r = __lsx_vsub_w(sum1_r, temp0_r); \ - sum1_l = __lsx_vsub_w(sum1_l, temp0_l); \ - \ - LSX_BUTTERFLY_4_W(sum0_r, sum0_l, sum2_l, sum2_r, res0_r, res0_l, \ - res1_l, res1_r); \ - dst0 = __lsx_vssrarni_h_w(res0_l, res0_r, shift); \ - dst1 = __lsx_vssrarni_h_w(res1_l, res1_r, shift); \ - __lsx_vst(dst0, buf_ptr, 0); \ - __lsx_vst(dst1, (buf_ptr + ((15 - (j * 2)) << 4)), 0); \ - \ - LSX_BUTTERFLY_4_W(sum1_r, sum1_l, sum3_l, sum3_r, res0_r, res0_l, \ - res1_l, res1_r); \ - \ - dst0 = __lsx_vssrarni_h_w(res0_l, res0_r, shift); \ - dst1 = __lsx_vssrarni_h_w(res1_l, res1_r, shift); \ - __lsx_vst(dst0, (ptr0 + ((((j + 1) >> 1) * 2 * k) << 4)), 0); \ - __lsx_vst(dst1, (ptr1 - ((((j + 1) >> 1) * 2 * k) << 4)), 0); \ - \ - k *= -1; \ - buf_ptr += 16; \ - } \ -} - -#define HEVC_EVEN16_CALC(input, sum0_r, sum0_l, load_idx, store_idx) \ -{ \ - tmp0_r = __lsx_vld(input + load_idx * 8, 0); \ - tmp0_l = __lsx_vld(input + load_idx * 8, 16); \ - tmp1_r = sum0_r; \ - tmp1_l = sum0_l; \ - sum0_r = __lsx_vadd_w(sum0_r, tmp0_r); 
\ - sum0_l = __lsx_vadd_w(sum0_l, tmp0_l); \ - __lsx_vst(sum0_r, (input + load_idx * 8), 0); \ - __lsx_vst(sum0_l, (input + load_idx * 8), 16); \ - tmp1_r = __lsx_vsub_w(tmp1_r, tmp0_r); \ - tmp1_l = __lsx_vsub_w(tmp1_l, tmp0_l); \ - __lsx_vst(tmp1_r, (input + store_idx * 8), 0); \ - __lsx_vst(tmp1_l, (input + store_idx * 8), 16); \ -} - -#define HEVC_IDCT_LUMA4x4_COL(in_r0, in_l0, in_r1, in_l1, \ - res0, res1, res2, res3, shift) \ -{ \ - __m128i vec0, vec1, vec2, vec3; \ - __m128i cnst74 = __lsx_vldi(0x84a); \ - __m128i cnst55 = __lsx_vldi(0x837); \ - __m128i cnst29 = __lsx_vldi(0x81d); \ - \ - vec0 = __lsx_vadd_w(in_r0, in_r1); \ - vec2 = __lsx_vsub_w(in_r0, in_l1); \ - res0 = __lsx_vmul_w(vec0, cnst29); \ - res1 = __lsx_vmul_w(vec2, cnst55); \ - res2 = __lsx_vsub_w(in_r0, in_r1); \ - vec1 = __lsx_vadd_w(in_r1, in_l1); \ - res2 = __lsx_vadd_w(res2, in_l1); \ - vec3 = __lsx_vmul_w(in_l0, cnst74); \ - res3 = __lsx_vmul_w(vec0, cnst55); \ - \ - res0 = __lsx_vadd_w(res0, __lsx_vmul_w(vec1, cnst55)); \ - res1 = __lsx_vsub_w(res1, __lsx_vmul_w(vec1, cnst29)); \ - res2 = __lsx_vmul_w(res2, cnst74); \ - res3 = __lsx_vadd_w(res3, __lsx_vmul_w(vec2, cnst29)); \ - \ - res0 = __lsx_vadd_w(res0, vec3); \ - res1 = __lsx_vadd_w(res1, vec3); \ - res3 = __lsx_vsub_w(res3, vec3); \ - \ - res0 = __lsx_vsrari_w(res0, shift); \ - res1 = __lsx_vsrari_w(res1, shift); \ - res2 = __lsx_vsrari_w(res2, shift); \ - res3 = __lsx_vsrari_w(res3, shift); \ - res0 = __lsx_vsat_w(res0, 15); \ - res1 = __lsx_vsat_w(res1, 15); \ - res2 = __lsx_vsat_w(res2, 15); \ - res3 = __lsx_vsat_w(res3, 15); \ -} - -void ff_hevc_idct_4x4_lsx(int16_t *coeffs, int col_limit) -{ - __m128i in0, in1; - __m128i in_r0, in_l0, in_r1, in_l1; - __m128i sum0, sum1, sum2, sum3; - __m128i zero = __lsx_vldi(0x00); - - in0 = __lsx_vld(coeffs, 0); - in1 = __lsx_vld(coeffs, 16); - in_r0 = __lsx_vilvl_h(zero, in0); - in_l0 = __lsx_vilvh_h(zero, in0); - in_r1 = __lsx_vilvl_h(zero, in1); - in_l1 = __lsx_vilvh_h(zero, in1); - - HEVC_IDCT4x4_COL(in_r0, in_l0, in_r1, in_l1, sum0, sum1, sum2, sum3, 7); - LSX_TRANSPOSE4x4_W(sum0, sum1, sum2, sum3, in_r0, in_l0, in_r1, in_l1); - HEVC_IDCT4x4_COL(in_r0, in_l0, in_r1, in_l1, sum0, sum1, sum2, sum3, 12); - - /* Pack and transpose */ - in0 = __lsx_vpickev_h(sum2, sum0); - in1 = __lsx_vpickev_h(sum3, sum1); - sum0 = __lsx_vilvl_h(in1, in0); - sum1 = __lsx_vilvh_h(in1, in0); - in0 = __lsx_vilvl_w(sum1, sum0); - in1 = __lsx_vilvh_w(sum1, sum0); - - __lsx_vst(in0, coeffs, 0); - __lsx_vst(in1, coeffs, 16); -} - -void ff_hevc_idct_8x8_lsx(int16_t *coeffs, int col_limit) -{ - const int16_t *filter = >8x8_cnst[0]; - __m128i in0, in1, in2, in3, in4, in5, in6, in7; - - DUP4_ARG2(__lsx_vld, coeffs, 0, coeffs, 16, coeffs, 32, - coeffs, 48, in0, in1, in2, in3); - DUP4_ARG2(__lsx_vld, coeffs, 64, coeffs, 80, coeffs, 96, - coeffs, 112, in4, in5, in6, in7); - HEVC_IDCT8x8_COL(in0, in1, in2, in3, in4, in5, in6, in7, 7); - LSX_TRANSPOSE8x8_H(in0, in1, in2, in3, in4, in5, in6, in7, - in0, in1, in2, in3, in4, in5, in6, in7); - HEVC_IDCT8x8_COL(in0, in1, in2, in3, in4, in5, in6, in7, 12); - LSX_TRANSPOSE8x8_H(in0, in1, in2, in3, in4, in5, in6, in7, - in0, in1, in2, in3, in4, in5, in6, in7); - - __lsx_vst(in0, coeffs, 0); - __lsx_vst(in1, coeffs, 16); - __lsx_vst(in2, coeffs, 32); - __lsx_vst(in3, coeffs, 48); - __lsx_vst(in4, coeffs, 64); - __lsx_vst(in5, coeffs, 80); - __lsx_vst(in6, coeffs, 96); - __lsx_vst(in7, coeffs, 112); -} - -void ff_hevc_idct_16x16_lsx(int16_t *coeffs, int col_limit) -{ - int16_t i, j, k; - int16_t buf[256]; - 
int16_t *buf_ptr = &buf[0]; - int16_t *src = coeffs; - const int16_t *filter = >16x16_cnst[0]; - __m128i in0, in1, in2, in3, in4, in5, in6, in7; - __m128i in8, in9, in10, in11, in12, in13, in14, in15; - __m128i vec0, vec1, vec2, vec3, vec4, vec5, vec6, vec7; - __m128i src0_r, src1_r, src2_r, src3_r, src4_r, src5_r, src6_r, src7_r; - __m128i src0_l, src1_l, src2_l, src3_l, src4_l, src5_l, src6_l, src7_l; - - for (i = 2; i--;) { - DUP4_ARG2(__lsx_vld, src, 0, src, 32, src, 64, src, 96, - in0, in1, in2, in3); - DUP4_ARG2(__lsx_vld, src, 128, src, 160, src, 192, src, 224, - in4, in5, in6, in7); - DUP4_ARG2(__lsx_vld, src, 256, src, 288, src, 320, src, 352, - in8, in9, in10, in11); - DUP4_ARG2(__lsx_vld, src, 384, src, 416, src, 448, src, 480, - in12, in13, in14, in15); - - DUP4_ARG2(__lsx_vilvl_h, in4, in0, in12, in8, in6, in2, in14, in10, - src0_r, src1_r, src2_r, src3_r); - DUP4_ARG2(__lsx_vilvl_h, in5, in1, in13, in9, in3, in7, in11, in15, - src4_r, src5_r, src6_r, src7_r); - DUP4_ARG2(__lsx_vilvh_h, in4, in0, in12, in8, in6, in2, in14, in10, - src0_l, src1_l, src2_l, src3_l); - DUP4_ARG2(__lsx_vilvh_h, in5, in1, in13, in9, in3, in7, in11, in15, - src4_l, src5_l, src6_l, src7_l); - - HEVC_IDCT16x16_COL(src0_r, src1_r, src2_r, src3_r, src4_r, src5_r, - src6_r, src7_r, src0_l, src1_l, src2_l, src3_l, - src4_l, src5_l, src6_l, src7_l, 7); - - src += 8; - buf_ptr = (&buf[0] + 8); - filter = >16x16_cnst[0]; - } - - src = &buf[0]; - buf_ptr = coeffs; - filter = >16x16_cnst[0]; - - for (i = 2; i--;) { - DUP4_ARG2(__lsx_vld, src, 0, src, 16, src, 32, src, 48, - in0, in8, in1, in9); - DUP4_ARG2(__lsx_vld, src, 64, src, 80, src, 96, src, 112, - in2, in10, in3, in11); - DUP4_ARG2(__lsx_vld, src, 128, src, 144, src, 160, src, 176, - in4, in12, in5, in13); - DUP4_ARG2(__lsx_vld, src, 192, src, 208, src, 224, src, 240, - in6, in14, in7, in15); - LSX_TRANSPOSE8x8_H(in0, in1, in2, in3, in4, in5, in6, in7, - in0, in1, in2, in3, in4, in5, in6, in7); - LSX_TRANSPOSE8x8_H(in8, in9, in10, in11, in12, in13, in14, in15, - in8, in9, in10, in11, in12, in13, in14, in15); - DUP4_ARG2(__lsx_vilvl_h, in4, in0, in12, in8, in6, in2, in14, in10, - src0_r, src1_r, src2_r, src3_r); - DUP4_ARG2(__lsx_vilvl_h, in5, in1, in13, in9, in3, in7, in11, in15, - src4_r, src5_r, src6_r, src7_r); - DUP4_ARG2(__lsx_vilvh_h, in4, in0, in12, in8, in6, in2, in14, in10, - src0_l, src1_l, src2_l, src3_l); - DUP4_ARG2(__lsx_vilvh_h, in5, in1, in13, in9, in3, in7, in11, in15, - src4_l, src5_l, src6_l, src7_l); - HEVC_IDCT16x16_COL(src0_r, src1_r, src2_r, src3_r, src4_r, src5_r, - src6_r, src7_r, src0_l, src1_l, src2_l, src3_l, - src4_l, src5_l, src6_l, src7_l, 12); - - src += 128; - buf_ptr = coeffs + 8; - filter = >16x16_cnst[0]; - } - - DUP4_ARG2(__lsx_vld, coeffs, 0, coeffs, 32, coeffs, 64, coeffs, 96, - in0, in1, in2, in3); - DUP4_ARG2(__lsx_vld, coeffs, 128, coeffs, 160, coeffs, 192, coeffs, 224, - in4, in5, in6, in7); - LSX_TRANSPOSE8x8_H(in0, in1, in2, in3, in4, in5, in6, in7, - vec0, vec1, vec2, vec3, vec4, vec5, vec6, vec7); - __lsx_vst(vec0, coeffs, 0); - __lsx_vst(vec1, coeffs, 32); - __lsx_vst(vec2, coeffs, 64); - __lsx_vst(vec3, coeffs, 96); - __lsx_vst(vec4, coeffs, 128); - __lsx_vst(vec5, coeffs, 160); - __lsx_vst(vec6, coeffs, 192); - __lsx_vst(vec7, coeffs, 224); - - src = coeffs + 8; - DUP4_ARG2(__lsx_vld, src, 0, src, 32, src, 64, src, 96, in0, in1, in2, in3); - DUP4_ARG2(__lsx_vld, src, 128, src, 160, src, 192, src, 224, - in4, in5, in6, in7); - LSX_TRANSPOSE8x8_H(in0, in1, in2, in3, in4, in5, in6, in7, - vec0, vec1, vec2, 
vec3, vec4, vec5, vec6, vec7); - src = coeffs + 128; - DUP4_ARG2(__lsx_vld, src, 0, src, 32, src, 64, src, 96, - in8, in9, in10, in11); - DUP4_ARG2(__lsx_vld, src, 128, src, 160, src, 192, src, 224, - in12, in13, in14, in15); - - __lsx_vst(vec0, src, 0); - __lsx_vst(vec1, src, 32); - __lsx_vst(vec2, src, 64); - __lsx_vst(vec3, src, 96); - __lsx_vst(vec4, src, 128); - __lsx_vst(vec5, src, 160); - __lsx_vst(vec6, src, 192); - __lsx_vst(vec7, src, 224); - LSX_TRANSPOSE8x8_H(in8, in9, in10, in11, in12, in13, in14, in15, - vec0, vec1, vec2, vec3, vec4, vec5, vec6, vec7); - src = coeffs + 8; - __lsx_vst(vec0, src, 0); - __lsx_vst(vec1, src, 32); - __lsx_vst(vec2, src, 64); - __lsx_vst(vec3, src, 96); - __lsx_vst(vec4, src, 128); - __lsx_vst(vec5, src, 160); - __lsx_vst(vec6, src, 192); - __lsx_vst(vec7, src, 224); - - src = coeffs + 136; - DUP4_ARG2(__lsx_vld, src, 0, src, 32, src, 64, src, 96, - in0, in1, in2, in3); - DUP4_ARG2(__lsx_vld, src, 128, src, 160, src, 192, src, 224, - in4, in5, in6, in7); - LSX_TRANSPOSE8x8_H(in0, in1, in2, in3, in4, in5, in6, in7, - vec0, vec1, vec2, vec3, vec4, vec5, vec6, vec7); - __lsx_vst(vec0, src, 0); - __lsx_vst(vec1, src, 32); - __lsx_vst(vec2, src, 64); - __lsx_vst(vec3, src, 96); - __lsx_vst(vec4, src, 128); - __lsx_vst(vec5, src, 160); - __lsx_vst(vec6, src, 192); - __lsx_vst(vec7, src, 224); -} - -static void hevc_idct_8x32_column_lsx(int16_t *coeffs, int32_t buf_pitch, - uint8_t round) -{ - uint8_t i; - int32_t buf_pitch_2 = buf_pitch << 1; - int32_t buf_pitch_4 = buf_pitch << 2; - int32_t buf_pitch_8 = buf_pitch << 3; - int32_t buf_pitch_16 = buf_pitch << 4; - - const int16_t *filter_ptr0 = >32x32_cnst0[0]; - const int16_t *filter_ptr1 = >32x32_cnst1[0]; - const int16_t *filter_ptr2 = >32x32_cnst2[0]; - const int16_t *filter_ptr3 = >8x8_cnst[0]; - int16_t *src0 = (coeffs + buf_pitch); - int16_t *src1 = (coeffs + buf_pitch_2); - int16_t *src2 = (coeffs + buf_pitch_4); - int16_t *src3 = (coeffs); - int32_t tmp_buf[8 * 32 + 15]; - int32_t *tmp_buf_ptr = tmp_buf + 15; - __m128i in0, in1, in2, in3, in4, in5, in6, in7; - __m128i src0_r, src1_r, src2_r, src3_r, src4_r, src5_r, src6_r, src7_r; - __m128i src0_l, src1_l, src2_l, src3_l, src4_l, src5_l, src6_l, src7_l; - __m128i filter0, filter1, filter2, filter3; - __m128i sum0_r, sum0_l, sum1_r, sum1_l, tmp0_r, tmp0_l, tmp1_r, tmp1_l; - - /* Align pointer to 64 byte boundary */ - tmp_buf_ptr = (int32_t *)(((uintptr_t) tmp_buf_ptr) & ~(uintptr_t) 63); - - /* process coeff 4, 12, 20, 28 */ - in0 = __lsx_vld(src2, 0); - in1 = __lsx_vld(src2 + buf_pitch_8, 0); - in2 = __lsx_vld(src2 + buf_pitch_16, 0); - in3 = __lsx_vld(src2 + buf_pitch_16 + buf_pitch_8, 0); - in4 = __lsx_vld(src3, 0); - in5 = __lsx_vld(src3 + buf_pitch_8, 0); - in6 = __lsx_vld(src3 + buf_pitch_16, 0); - in7 = __lsx_vld(src3 + buf_pitch_16 + buf_pitch_8, 0); - DUP4_ARG2(__lsx_vilvl_h, in1, in0, in3, in2, in6, in4, in7, in5, - src0_r, src1_r, src2_r, src3_r); - DUP4_ARG2(__lsx_vilvh_h, in1, in0, in3, in2, in6, in4, in7, in5, - src0_l, src1_l, src2_l, src3_l); - - filter0 = __lsx_vldrepl_w(filter_ptr2, 0); - filter1 = __lsx_vldrepl_w(filter_ptr2, 4); - sum0_r = __lsx_vdp2_w_h(src0_r, filter0); - sum0_l = __lsx_vdp2_w_h(src0_l, filter0); - sum0_r = __lsx_vdp2add_w_h(sum0_r, src1_r, filter1); - sum0_l = __lsx_vdp2add_w_h(sum0_l, src1_l, filter1); - __lsx_vst(sum0_r, tmp_buf_ptr, 0); - __lsx_vst(sum0_l, tmp_buf_ptr, 16); - - filter0 = __lsx_vldrepl_w(filter_ptr2, 8); - filter1 = __lsx_vldrepl_w(filter_ptr2, 12); - sum0_r = __lsx_vdp2_w_h(src0_r, 
filter0); - sum0_l = __lsx_vdp2_w_h(src0_l, filter0); - sum0_r = __lsx_vdp2add_w_h(sum0_r, src1_r, filter1); - sum0_l = __lsx_vdp2add_w_h(sum0_l, src1_l, filter1); - __lsx_vst(sum0_r, tmp_buf_ptr, 32); - __lsx_vst(sum0_l, tmp_buf_ptr, 48); - - filter0 = __lsx_vldrepl_w(filter_ptr2, 16); - filter1 = __lsx_vldrepl_w(filter_ptr2, 20); - sum0_r = __lsx_vdp2_w_h(src0_r, filter0); - sum0_l = __lsx_vdp2_w_h(src0_l, filter0); - sum0_r = __lsx_vdp2add_w_h(sum0_r, src1_r, filter1); - sum0_l = __lsx_vdp2add_w_h(sum0_l, src1_l, filter1); - __lsx_vst(sum0_r, tmp_buf_ptr, 64); - __lsx_vst(sum0_l, tmp_buf_ptr, 80); - - filter0 = __lsx_vldrepl_w(filter_ptr2, 24); - filter1 = __lsx_vldrepl_w(filter_ptr2, 28); - sum0_r = __lsx_vdp2_w_h(src0_r, filter0); - sum0_l = __lsx_vdp2_w_h(src0_l, filter0); - sum0_r = __lsx_vdp2add_w_h(sum0_r, src1_r, filter1); - sum0_l = __lsx_vdp2add_w_h(sum0_l, src1_l, filter1); - __lsx_vst(sum0_r, tmp_buf_ptr, 96); - __lsx_vst(sum0_l, tmp_buf_ptr, 112); - - /* process coeff 0, 8, 16, 24 */ - filter0 = __lsx_vldrepl_w(filter_ptr3, 0); - filter1 = __lsx_vldrepl_w(filter_ptr3, 4); - - DUP4_ARG2(__lsx_vdp2_w_h, src2_r, filter0, src2_l, filter0, - src3_r, filter1, src3_l, filter1, sum0_r, sum0_l, tmp1_r, tmp1_l); - sum1_r = __lsx_vsub_w(sum0_r, tmp1_r); - sum1_l = __lsx_vsub_w(sum0_l, tmp1_l); - sum0_r = __lsx_vadd_w(sum0_r, tmp1_r); - sum0_l = __lsx_vadd_w(sum0_l, tmp1_l); - - HEVC_EVEN16_CALC(tmp_buf_ptr, sum0_r, sum0_l, 0, 7); - HEVC_EVEN16_CALC(tmp_buf_ptr, sum1_r, sum1_l, 3, 4); - - filter0 = __lsx_vldrepl_w(filter_ptr3, 16); - filter1 = __lsx_vldrepl_w(filter_ptr3, 20); - - DUP4_ARG2(__lsx_vdp2_w_h, src2_r, filter0, src2_l, filter0, - src3_r, filter1, src3_l, filter1, sum0_r, sum0_l, tmp1_r, tmp1_l); - sum1_r = __lsx_vsub_w(sum0_r, tmp1_r); - sum1_l = __lsx_vsub_w(sum0_l, tmp1_l); - sum0_r = __lsx_vadd_w(sum0_r, tmp1_r); - sum0_l = __lsx_vadd_w(sum0_l, tmp1_l); - - HEVC_EVEN16_CALC(tmp_buf_ptr, sum0_r, sum0_l, 1, 6); - HEVC_EVEN16_CALC(tmp_buf_ptr, sum1_r, sum1_l, 2, 5); - - /* process coeff 2 6 10 14 18 22 26 30 */ - in0 = __lsx_vld(src1, 0); - in1 = __lsx_vld(src1 + buf_pitch_4, 0); - in2 = __lsx_vld(src1 + buf_pitch_8, 0); - in3 = __lsx_vld(src1 + buf_pitch_8 + buf_pitch_4, 0); - in4 = __lsx_vld(src1 + buf_pitch_16, 0); - in5 = __lsx_vld(src1 + buf_pitch_16 + buf_pitch_4, 0); - in6 = __lsx_vld(src1 + buf_pitch_16 + buf_pitch_8, 0); - in7 = __lsx_vld(src1 + buf_pitch_16 + buf_pitch_8 + buf_pitch_4, 0); - - DUP4_ARG2(__lsx_vilvl_h, in1, in0, in3, in2, in5, in4, in7, in6, - src0_r, src1_r, src2_r, src3_r); - DUP4_ARG2(__lsx_vilvh_h, in1, in0, in3, in2, in5, in4, in7, in6, - src0_l, src1_l, src2_l, src3_l); - - /* loop for all columns of constants */ - for (i = 0; i < 8; i++) { - /* processing single column of constants */ - filter0 = __lsx_vldrepl_w(filter_ptr1, 0); - filter1 = __lsx_vldrepl_w(filter_ptr1, 4); - filter2 = __lsx_vldrepl_w(filter_ptr1, 8); - filter3 = __lsx_vldrepl_w(filter_ptr1, 12); - sum0_r = __lsx_vdp2_w_h(src0_r, filter0); - sum0_l = __lsx_vdp2_w_h(src0_l, filter0); - sum0_r = __lsx_vdp2add_w_h(sum0_r, src1_r, filter1); - sum0_l = __lsx_vdp2add_w_h(sum0_l, src1_l, filter1); - sum0_r = __lsx_vdp2add_w_h(sum0_r, src2_r, filter2); - sum0_l = __lsx_vdp2add_w_h(sum0_l, src2_l, filter2); - sum0_r = __lsx_vdp2add_w_h(sum0_r, src3_r, filter3); - sum0_l = __lsx_vdp2add_w_h(sum0_l, src3_l, filter3); - - tmp0_r = __lsx_vld(tmp_buf_ptr + (i << 3), 0); - tmp0_l = __lsx_vld(tmp_buf_ptr + (i << 3), 16); - tmp1_r = tmp0_r; - tmp1_l = tmp0_l; - tmp0_r = __lsx_vadd_w(tmp0_r, 
sum0_r); - tmp0_l = __lsx_vadd_w(tmp0_l, sum0_l); - tmp1_r = __lsx_vsub_w(tmp1_r, sum0_r); - tmp1_l = __lsx_vsub_w(tmp1_l, sum0_l); - __lsx_vst(tmp0_r, tmp_buf_ptr + (i << 3), 0); - __lsx_vst(tmp0_l, tmp_buf_ptr + (i << 3), 16); - __lsx_vst(tmp1_r, tmp_buf_ptr + ((15 - i) * 8), 0); - __lsx_vst(tmp1_l, tmp_buf_ptr + ((15 - i) * 8), 16); - - filter_ptr1 += 8; - } - - /* process coeff 1 3 5 7 9 11 13 15 17 19 21 23 25 27 29 31 */ - in0 = __lsx_vld(src0, 0); - in1 = __lsx_vld(src0 + buf_pitch_2, 0); - in2 = __lsx_vld(src0 + buf_pitch_4, 0); - in3 = __lsx_vld(src0 + buf_pitch_4 + buf_pitch_2, 0); - in4 = __lsx_vld(src0 + buf_pitch_8, 0); - in5 = __lsx_vld(src0 + buf_pitch_8 + buf_pitch_2, 0); - in6 = __lsx_vld(src0 + buf_pitch_8 + buf_pitch_4, 0); - in7 = __lsx_vld(src0 + buf_pitch_8 + buf_pitch_4 + buf_pitch_2, 0); - - src0 += 16 * buf_pitch; - DUP4_ARG2(__lsx_vilvl_h, in1, in0, in3, in2, in5, in4, in7, in6, - src0_r, src1_r, src2_r, src3_r); - DUP4_ARG2(__lsx_vilvh_h, in1, in0, in3, in2, in5, in4, in7, in6, - src0_l, src1_l, src2_l, src3_l); - in0 = __lsx_vld(src0, 0); - in1 = __lsx_vld(src0 + buf_pitch_2, 0); - in2 = __lsx_vld(src0 + buf_pitch_4, 0); - in3 = __lsx_vld(src0 + buf_pitch_4 + buf_pitch_2, 0); - in4 = __lsx_vld(src0 + buf_pitch_8, 0); - in5 = __lsx_vld(src0 + buf_pitch_8 + buf_pitch_2, 0); - in6 = __lsx_vld(src0 + buf_pitch_8 + buf_pitch_4, 0); - in7 = __lsx_vld(src0 + buf_pitch_8 + buf_pitch_4 + buf_pitch_2, 0); - - DUP4_ARG2(__lsx_vilvl_h, in1, in0, in3, in2, in5, in4, in7, in6, - src4_r, src5_r, src6_r, src7_r); - DUP4_ARG2(__lsx_vilvh_h, in1, in0, in3, in2, in5, in4, in7, in6, - src4_l, src5_l, src6_l, src7_l); - - /* loop for all columns of filter constants */ - for (i = 0; i < 16; i++) { - /* processing single column of constants */ - filter0 = __lsx_vldrepl_w(filter_ptr0, 0); - filter1 = __lsx_vldrepl_w(filter_ptr0, 4); - filter2 = __lsx_vldrepl_w(filter_ptr0, 8); - filter3 = __lsx_vldrepl_w(filter_ptr0, 12); - sum0_r = __lsx_vdp2_w_h(src0_r, filter0); - sum0_l = __lsx_vdp2_w_h(src0_l, filter0); - sum0_r = __lsx_vdp2add_w_h(sum0_r, src1_r, filter1); - sum0_l = __lsx_vdp2add_w_h(sum0_l, src1_l, filter1); - sum0_r = __lsx_vdp2add_w_h(sum0_r, src2_r, filter2); - sum0_l = __lsx_vdp2add_w_h(sum0_l, src2_l, filter2); - sum0_r = __lsx_vdp2add_w_h(sum0_r, src3_r, filter3); - sum0_l = __lsx_vdp2add_w_h(sum0_l, src3_l, filter3); - tmp1_r = sum0_r; - tmp1_l = sum0_l; - - filter0 = __lsx_vldrepl_w(filter_ptr0, 16); - filter1 = __lsx_vldrepl_w(filter_ptr0, 20); - filter2 = __lsx_vldrepl_w(filter_ptr0, 24); - filter3 = __lsx_vldrepl_w(filter_ptr0, 28); - sum0_r = __lsx_vdp2_w_h(src4_r, filter0); - sum0_l = __lsx_vdp2_w_h(src4_l, filter0); - sum0_r = __lsx_vdp2add_w_h(sum0_r, src5_r, filter1); - sum0_l = __lsx_vdp2add_w_h(sum0_l, src5_l, filter1); - sum0_r = __lsx_vdp2add_w_h(sum0_r, src6_r, filter2); - sum0_l = __lsx_vdp2add_w_h(sum0_l, src6_l, filter2); - sum0_r = __lsx_vdp2add_w_h(sum0_r, src7_r, filter3); - sum0_l = __lsx_vdp2add_w_h(sum0_l, src7_l, filter3); - sum0_r = __lsx_vadd_w(sum0_r, tmp1_r); - sum0_l = __lsx_vadd_w(sum0_l, tmp1_l); - - tmp0_r = __lsx_vld(tmp_buf_ptr + i * 8, 0); - tmp0_l = __lsx_vld(tmp_buf_ptr + i * 8, 16); - tmp1_r = tmp0_r; - tmp1_l = tmp0_l; - tmp0_r = __lsx_vadd_w(tmp0_r, sum0_r); - tmp0_l = __lsx_vadd_w(tmp0_l, sum0_l); - sum1_r = __lsx_vreplgr2vr_w(round); - tmp0_r = __lsx_vssrarn_h_w(tmp0_r, sum1_r); - tmp0_l = __lsx_vssrarn_h_w(tmp0_l, sum1_r); - in0 = __lsx_vpackev_d(tmp0_l, tmp0_r); - __lsx_vst(in0, (coeffs + i * buf_pitch), 0); - tmp1_r = 
__lsx_vsub_w(tmp1_r, sum0_r); - tmp1_l = __lsx_vsub_w(tmp1_l, sum0_l); - tmp1_r = __lsx_vssrarn_h_w(tmp1_r, sum1_r); - tmp1_l = __lsx_vssrarn_h_w(tmp1_l, sum1_r); - in0 = __lsx_vpackev_d(tmp1_l, tmp1_r); - __lsx_vst(in0, (coeffs + (31 - i) * buf_pitch), 0); - - filter_ptr0 += 16; - } -} - -static void hevc_idct_transpose_32x8_to_8x32(int16_t *coeffs, int16_t *tmp_buf) -{ - uint8_t i; - __m128i in0, in1, in2, in3, in4, in5, in6, in7; - - for (i = 0; i < 4; i++) { - DUP4_ARG2(__lsx_vld, coeffs, 0, coeffs, 64, coeffs, 128, - coeffs, 192, in0, in1, in2, in3); - DUP4_ARG2(__lsx_vld, coeffs, 256, coeffs, 320, coeffs, 384, - coeffs, 448, in4, in5, in6, in7); - coeffs += 8; - LSX_TRANSPOSE8x8_H(in0, in1, in2, in3, in4, in5, in6, in7, - in0, in1, in2, in3, in4, in5, in6, in7); - __lsx_vst(in0, tmp_buf, 0); - __lsx_vst(in1, tmp_buf, 16); - __lsx_vst(in2, tmp_buf, 32); - __lsx_vst(in3, tmp_buf, 48); - __lsx_vst(in4, tmp_buf, 64); - __lsx_vst(in5, tmp_buf, 80); - __lsx_vst(in6, tmp_buf, 96); - __lsx_vst(in7, tmp_buf, 112); - tmp_buf += 64; - } -} - -static void hevc_idct_transpose_8x32_to_32x8(int16_t *tmp_buf, int16_t *coeffs) -{ - uint8_t i; - __m128i in0, in1, in2, in3, in4, in5, in6, in7; - - for (i = 0; i < 4; i++) { - DUP4_ARG2(__lsx_vld, tmp_buf, 0, tmp_buf, 16, tmp_buf, 32, - tmp_buf, 48, in0, in1, in2, in3); - DUP4_ARG2(__lsx_vld, tmp_buf, 64, tmp_buf, 80, tmp_buf, 96, - tmp_buf, 112, in4, in5, in6, in7); - tmp_buf += 64; - LSX_TRANSPOSE8x8_H(in0, in1, in2, in3, in4, in5, in6, in7, - in0, in1, in2, in3, in4, in5, in6, in7); - __lsx_vst(in0, coeffs, 0); - __lsx_vst(in1, coeffs, 64); - __lsx_vst(in2, coeffs, 128); - __lsx_vst(in3, coeffs, 192); - __lsx_vst(in4, coeffs, 256); - __lsx_vst(in5, coeffs, 320); - __lsx_vst(in6, coeffs, 384); - __lsx_vst(in7, coeffs, 448); - coeffs += 8; - } -} - -void ff_hevc_idct_32x32_lsx(int16_t *coeffs, int col_limit) -{ - uint8_t row_cnt, col_cnt; - int16_t *src = coeffs; - int16_t tmp_buf[8 * 32 + 31]; - int16_t *tmp_buf_ptr = tmp_buf + 31; - uint8_t round; - int32_t buf_pitch; - - /* Align pointer to 64 byte boundary */ - tmp_buf_ptr = (int16_t *)(((uintptr_t) tmp_buf_ptr) & ~(uintptr_t) 63); - - /* column transform */ - round = 7; - buf_pitch = 32; - for (col_cnt = 0; col_cnt < 4; col_cnt++) { - /* process 8x32 blocks */ - hevc_idct_8x32_column_lsx((coeffs + col_cnt * 8), buf_pitch, round); - } - - /* row transform */ - round = 12; - buf_pitch = 8; - for (row_cnt = 0; row_cnt < 4; row_cnt++) { - /* process 32x8 blocks */ - src = (coeffs + 32 * 8 * row_cnt); - - hevc_idct_transpose_32x8_to_8x32(src, tmp_buf_ptr); - hevc_idct_8x32_column_lsx(tmp_buf_ptr, buf_pitch, round); - hevc_idct_transpose_8x32_to_32x8(tmp_buf_ptr, src); - } -} diff --git a/spaces/congsaPfin/Manga-OCR/logs/Download Truth or Dare Pro APK and Spice Up Your Night.md b/spaces/congsaPfin/Manga-OCR/logs/Download Truth or Dare Pro APK and Spice Up Your Night.md deleted file mode 100644 index 8cf67dfd004931d49b65197962ad35a9a2f40424..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Download Truth or Dare Pro APK and Spice Up Your Night.md +++ /dev/null @@ -1,112 +0,0 @@ - -

        Truth or Dare Pro APK: The Ultimate Party Game for Android

        -

        Are you looking for a fun and exciting game to play with your friends at parties, sleepovers, or any occasion? Do you want to spice up your conversations and interactions with hilarious questions and dares? Do you want to have unlimited access to thousands of challenges that will make you laugh, blush, scream, and more? If you answered yes to any of these questions, then you need to try Truth or Dare Pro APK, the ultimate party game for Android devices.

        -

        truth or dare pro apk


        Downloadhttps://urlca.com/2uOdhW



        -

        What is Truth or Dare Pro APK?

        -

        Truth or Dare Pro APK is a modified version of the popular game Truth or Dare, where you have to answer a question truthfully or perform a dare that is given to you by your friends. The game is simple but very fun and addictive. You can play it with anyone, anywhere, anytime. All you need is your Android device and some friends who are ready to have a blast.

        -

        Features of Truth or Dare Pro APK

        -

        Truth or Dare Pro APK has many features that make it stand out from other similar games. Here are some of them:

        -

        - Thousands of questions and dares

        -

        The game has over 10,000 questions and dares that are divided into different categories, such as funny, dirty, extreme, couples, kids, teens, adults, etc. You can choose the category that suits your mood and preference. You can also add your own questions and dares to make the game more personalized.

        -

        - Customizable game modes

        -

        The game has four game modes that you can customize according to your liking. You can choose between classic mode, where you spin a bottle and select truth or dare; random mode, where you get a random question or dare; custom mode, where you create your own questions and dares; and online mode, where you play with other players around the world.

        -

        truth or dare pro apk download
        -truth or dare pro apk mod
        -truth or dare pro apk free
        -truth or dare pro apk latest version
        -truth or dare pro apk unlocked
        -truth or dare pro apk premium
        -truth or dare pro apk full
        -truth or dare pro apk cracked
        -truth or dare pro apk hack
        -truth or dare pro apk android
        -truth or dare pro apk for pc
        -truth or dare pro apk online
        -truth or dare pro apk no ads
        -truth or dare pro apk dirty
        -truth or dare pro apk 18+
        -truth or dare pro apk couples
        -truth or dare pro apk kids
        -truth or dare pro apk teens
        -truth or dare pro apk family
        -truth or dare pro apk friends
        -truth or dare pro apk party
        -truth or dare pro apk game
        -truth or dare pro apk fun
        -truth or dare pro apk questions
        -truth or dare pro apk challenges
        -truth or dare pro apk generator
        -truth or dare pro apk custom
        -truth or dare pro apk editor
        -truth or dare pro apk offline
        -truth or dare pro apk best
        -truth or dare pro apk review
        -truth or dare pro apk rating
        -truth or dare pro apk features
        -truth or dare pro apk update
        -truth or dare pro apk new
        -truth or dare pro apk 2023
        -truth or dare pro apk reddit
        -truth or dare pro apk quora
        -truth or dare pro apk youtube
        -truth or dare pro apk facebook
        -truth or dare pro apk instagram
        -truth or dare pro apk twitter
        -truth or dare pro apk tiktok
        -truth or dare pro apk pinterest
        -truth or dare pro apk blogspot
        -truth or dare pro apk wordpress
        -truth or dare pro apk medium
        -truth or dare pro apk tumblr
        -truth or dare pro apk app store

        -

        - Download the APK file from a trusted source

        -

        You can download the APK file from a trusted source that offers safe and secure downloads. You can use the link below to get the latest version of Truth or Dare Pro APK for free. The file size is about 25 MB and it does not require any root access or special permissions.

        -

        Download Truth or Dare Pro APK

        -

        - Enable unknown sources on your device

        -

        Before you can install the APK file, you need to enable unknown sources on your device. This will allow you to install apps that are not from the Google Play Store. To do this, go to your device settings, then security, then toggle on the unknown sources option. You may see a warning message, but you can ignore it and proceed.

        -

        - Install the APK file and launch the app

        -

        Once you have enabled unknown sources, you can install the APK file by tapping on it and following the instructions. The installation process will take a few seconds and then you will see the app icon on your home screen. Tap on it and launch the app. You are now ready to play Truth or Dare Pro APK with your friends.

        How to play Truth or Dare Pro APK?

        -

        Playing Truth or Dare Pro APK is very easy and fun. You just need to follow these simple steps:

        -

        - Choose your game mode and players

        -

        You can choose between four game modes: classic, random, custom, and online. You can also choose how many players you want to play with, from 2 to 20. You can enter your names and select your avatars. You can also choose the category of questions and dares, from funny to extreme.

        -

        - Spin the bottle and select truth or dare

        -

        Once you have chosen your game mode and players, you can start the game by spinning the bottle. The bottle will point to one of the players, who will have to choose between truth or dare. If they choose truth, they will have to answer a question honestly. If they choose dare, they will have to perform a dare that is given to them by their friends.

        -

        - Answer the question or perform the dare

        -

        The player who has chosen truth or dare will have to answer the question or perform the dare that is displayed on the screen. They will have a limited time to do so, otherwise they will lose a point. The other players can judge if they have completed the challenge successfully or not. If they fail, they will have to face a penalty that is decided by their friends.

        -

        - Have fun and laugh with your friends

        -

        The game will continue until one of the players reaches a certain number of points or until you decide to stop. You can view the scores and statistics of each player at any time. You can also pause, resume, or restart the game whenever you want. The most important thing is to have fun and laugh with your friends as you discover new things about each other and challenge yourselves.

        -

        Why choose Truth or Dare Pro APK?

        -

        Truth or Dare Pro APK is not just another game. It is a game that can bring you many benefits and advantages. Here are some of them:

        Benefits of Truth or Dare Pro APK

        -

        Some of the benefits of Truth or Dare Pro APK are:

        -

        - Spice up your parties and gatherings

        -

        Truth or Dare Pro APK is the perfect game to play at parties and gatherings. It can make your events more lively, fun, and memorable. You can play it with your friends, family, classmates, coworkers, or anyone you want. You can also use it as an icebreaker, a conversation starter, or a bonding activity.

        -

        - Break the ice and get to know each other better

        -

        Truth or Dare Pro APK can help you break the ice and get to know each other better. You can learn new things about your friends, such as their secrets, preferences, opinions, experiences, etc. You can also share your own stories and reveal your true self. You can discover new sides of your friends and yourself that you never knew before.

        -

        - Challenge yourself and your friends

        -

        Truth or Dare Pro APK can challenge you and your friends to step out of your comfort zones and try new things. You can test your limits and face your fears. You can also dare your friends to do something they normally wouldn't do. You can have fun and laugh at each other's reactions and outcomes.

        -

        - Enjoy unlimited fun and entertainment

        -

        Truth or Dare Pro APK can provide you with unlimited fun and entertainment. You can play it anytime, anywhere, with anyone. You can choose from thousands of questions and dares that will keep you entertained for hours. You can also create your own questions and dares to make the game more interesting and unique.

        -

        Conclusion

        -

        Truth or Dare Pro APK is the ultimate party game for Android devices. It is a modified version of the popular game Truth or Dare, where you have to answer a question truthfully or perform a dare that is given to you by your friends. The game has many features that make it stand out from other similar games, such as thousands of questions and dares, customizable game modes, offline and online play, and fun and easy interface. The game also has many benefits that make it worth playing, such as spicing up your parties and gatherings, breaking the ice and getting to know each other better, challenging yourself and your friends, and enjoying unlimited fun and entertainment. If you are looking for a game that can make your events more lively, fun, and memorable, then you should download and install Truth or Dare Pro APK on your Android device today.

        -

        Here are some FAQs that you may have about the game:

        -

        FAQs

        -
          -
1. Is Truth or Dare Pro APK safe to download and install?
-

          Yes, Truth or Dare Pro APK is safe to download and install. The APK file is free from viruses, malware, spyware, or any other harmful elements. However, you should always download the APK file from a trusted source that offers safe and secure downloads.

          -
2. Is Truth or Dare Pro APK legal to use?
-

          Yes, Truth or Dare Pro APK is legal to use. The game does not violate any laws or regulations. However, you should always respect the rules and guidelines of the game and play it responsibly. You should also respect the privacy and consent of your friends who are playing with you.

          -
3. How many players can play Truth or Dare Pro APK?
-

          You can play Truth or Dare Pro APK with as many players as you want, from 2 to 20. However, the ideal number of players is between 4 to 10. This will ensure that everyone gets a chance to participate and have fun.

          -
4. What are some tips to make the game more fun?
-

          Some tips to make the game more fun are:

          -
            -
• Be honest and creative with your answers and dares.
• -
• Be respectful and supportive of your friends who are answering or performing.
• -
• Be open-minded and adventurous with your choices.
• -
• Be careful and safe with your actions.
• -
• Have fun and laugh with your friends.
• -
          -
5. Where can I get more information about Truth or Dare Pro APK?
-

          You can get more information about Truth or Dare Pro APK by visiting the official website of the game. There you can find more details about the features, benefits, instructions, reviews, ratings, screenshots, videos, etc. of the game. You can also contact the developers of the game if you have any questions, feedback, suggestions, or issues regarding the game.

          401be4b1e0
          -
          -
          \ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/How to Download Case Simulator for Standoff 2 on Your PC or Mac.md b/spaces/congsaPfin/Manga-OCR/logs/How to Download Case Simulator for Standoff 2 on Your PC or Mac.md deleted file mode 100644 index 3e681afbb441b3af80da453313fce8a2bcffe7d1..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/How to Download Case Simulator for Standoff 2 on Your PC or Mac.md +++ /dev/null @@ -1,121 +0,0 @@ - -

          Download Case Simulator for Standoff 2: A Guide for Beginners

          -

If you are a fan of first-person shooter games, you might have heard of Standoff 2, a dynamic and realistic online multiplayer action FPS game that has over 200 million players worldwide. In this game, you can choose from more than 20 weapon models, customize them with skins, stickers, and charms, and join the standoff with your friends or other players in various maps and modes.

          -

          download case simulator for standoff 2


          Download Zip ———>>> https://urlca.com/2uO4QO



          -

But what if you want to experience the thrill of opening cases and collecting skins without spending real money or risking your ranking? That's where case simulator for Standoff 2 comes in handy. Case simulator for Standoff 2 is a simulation application that lets you open cases from the game and win expensive skins and knives. You can also play different game modes such as upgrade, jackpot, crash, quiz, tower, bomb defuse, and more. In this article, we will show you why you should download case simulator for Standoff 2, how to download it on different platforms, how to use it, and some tips and tricks for getting the most out of it.

          -

          Why Download Case Simulator for Standoff 2?

          -

          There are many benefits of using case simulator for Standoff 2. Here are some of them:

          -
            -
          • You can learn about the skins, collections, and weapons in the game. Case simulator for Standoff 2 has all the skin collections from the original game, including rare and legendary ones. You can see how they look on different weapons, how much they cost, and how to get them. You can also compare the stats and features of different weapons and find your favorite one.
          • -
          • You can test your luck and skills in different game modes. Case simulator for Standoff 2 has many game modes that challenge your luck and skills. You can try to upgrade your skins to higher tiers, bet on jackpot or crash games, answer quiz questions about the game or general knowledge, climb the tower by opening cases or defusing bombs, and more. You can also earn coins by completing missions or watching ads.
          • -
          • You can have fun and enjoy the graphics and sounds. Case simulator for Standoff 2 has realistic graphics and sounds that mimic the original game. You can feel the excitement of opening cases and seeing what you get. You can also apply stickers on your weapons and create your own design. The app is easy to use and has a user-friendly interface.
          • -
          -

          How to Download Case Simulator for Standoff 2?

          -

          Downloading case simulator for Standoff 2 is easy and fast. Here are the step-by-step instructions for downloading it on different platforms:

          -

          How to download case simulator for standoff 2 on PC
          -Case simulator for standoff 2 online free play
          -Case simulator for standoff 2 skins and knives
          -Case simulator for standoff 2 game modes and features
          -Case simulator for standoff 2 tips and tricks
          -Best case simulator for standoff 2 app
          -Case simulator for standoff 2 apk download
          -Case simulator for standoff 2 mod apk unlimited money
          -Case simulator for standoff 2 hack tool
          -Case simulator for standoff 2 cheats and codes
          -Case simulator for standoff 2 review and rating
          -Case simulator for standoff 2 gameplay and walkthrough
          -Case simulator for standoff 2 update and patch notes
          -Case simulator for standoff 2 support and feedback
          -Case simulator for standoff 2 community and forum
          -Case simulator for standoff 2 yandex games
          -Case simulator for standoff 2 bluestacks emulator
          -Case simulator for standoff 2 car simulator games developer
          -Case simulator for standoff 2 funspirit publisher
          -Case simulator for standoff 2 license agreement and terms of service
          -Download case simulator for standoff 2 android
          -Download case simulator for standoff 2 ios
          -Download case simulator for standoff 2 windows
          -Download case simulator for standoff 2 mac
          -Download case simulator for standoff 2 linux
          -Download case simulator for standoff 2 chromebook
          -Download case simulator for standoff 2 amazon fire tablet
          -Download case simulator for standoff 2 samsung galaxy phone
          -Download case simulator for standoff 2 iphone and ipad
          -Download case simulator for standoff 2 huawei device
          -Download case simulator for standoff 2 google play store
          -Download case simulator for standoff 2 apple app store
          -Download case simulator for standoff 2 microsoft store
          -Download case simulator for standoff 2 steam platform
          -Download case simulator for standoff 2 epic games launcher
          -Download case simulator for standoff 2 origin client
          -Download case simulator for standoff 2 uplay service
          -Download case simulator for standoff 2 gog galaxy software
          -Download case simulator for standoff 2 itch.io website
          -Download case simulator for standoff 2 gamejolt site
          -Download case simulator for standoff 2 kongregate portal
          -Download case simulator for standoff 2 newgrounds page
          -Download case simulator for standoff 2 armor games network
          -Download case simulator for standoff 2 miniclip channel
          -Download case simulator for standoff 2 crazy games collection
          -Download case simulator for standoff 2 poki selection
          -Download case simulator for standoff 2 friv series
          -Download case simulator for standoff 2 coolmath games category

          -

          Android

          -
            -
1. Go to Google Play Store on your Android device.
-
2. Search for "case simulator for standoff 2" or use this link: [6](https://play.google.com/store/apps/details?id=com.fallonight.case.simulator.standoff2&hl=en_US&gl=US)
-
3. Tap on the "Install" button and wait for the app to download and install on your device.
-
4. Open the app and enjoy opening cases and playing game modes.
-
          -

          iOS

          -
            -
1. Go to App Store on your iOS device.
-
2. Search for "case simulator for standoff 2" or use this link: [7](https://apps.apple.com/us/app/case-simulator-for-standoff-2/id1530120716)
-
3. Tap on the "Get" button and wait for the app to download and install on your device.
-
4. Open the app and enjoy opening cases and playing game modes.
-
          -

          PC

          -
            -
1. Go to your web browser on your PC.
-
2. Search for "case simulator for standoff 2 online" or use this link: [8](https://www.silvergames.com/en/case-simulator-for-standoff-2)
-
3. Click on the "Play" button and wait for the game to load in your browser.
-
4. Enjoy opening cases and playing game modes.
-
          -

          Mac

          -
            -
1. Go to your web browser on your Mac.
-
2. Search for "case simulator for standoff 2 online" or use this link: [8](https://www.silvergames.com/en/case-simulator-for-standoff-2)
-
3. Click on the "Play" button and wait for the game to load in your browser.
-
4. Enjoy opening cases and playing game modes.
-
          -

          How to Use Case Simulator for Standoff 2?

          -

          Using case simulator for Standoff 2 is simple and fun. Here are some of the basic features and functions of the app:

          -

          Opening Cases

          -

          To open cases, you need to have coins. You can earn coins by completing missions, watching ads, or playing game modes. You can also buy coins with real money if you want. Once you have enough coins, you can choose from different case collections, such as Origin, Gold, Elite, etc. Each case has a different price and a different chance of getting rare skins. Tap on the case you want to open and swipe to open it. You will see what skin you got and how much it is worth. You can also see the odds of getting each skin in each case. You can keep the skin or sell it for coins. You can also open multiple cases at once by tapping on the "Open 10" or "Open 100" buttons.

          -

          Upgrading Skins

          -

          To upgrade skins, you need to have skins. You can get skins by opening cases or buying them from the market. Once you have some skins, you can go to the upgrade section and choose a skin you want to upgrade. You will see a slider that shows the percentage of success and the amount of coins you need to pay for the upgrade. You can adjust the slider to increase or decrease the chance of success and the cost of the upgrade. Tap on the "Upgrade" button and see if you succeed or fail. If you succeed, you will get a higher tier skin. If you fail, you will lose your skin and your coins.
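
-

If you like to think in numbers before moving the slider, here is a minimal sketch (written in Python, with purely made-up odds, values, and costs, since the app shows its own figures for each skin) of how you could estimate whether an upgrade attempt is worth it on average:

```python
# Hypothetical numbers for illustration only -- the real app shows its own
# success percentage and coin cost on the upgrade slider.
def upgrade_expected_value(current_skin_value, upgraded_skin_value,
                           success_chance, upgrade_cost):
    """Average coins gained (or lost) per upgrade attempt."""
    expected_return = success_chance * upgraded_skin_value  # failure leaves you with nothing
    return expected_return - current_skin_value - upgrade_cost

# Example: risking a skin worth 1000 coins for one worth 5000 coins,
# at a 25% success chance, paying 200 coins for the attempt.
print(upgrade_expected_value(1000, 5000, 0.25, 200))  # -> 50.0
```

In this made-up example the attempt is barely positive on average; with a lower success chance or a cheaper target skin it quickly turns negative, which is why upgrading is best treated as a gamble rather than a sure profit.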

          -

          Playing Game Modes

          -

          To play game modes, you need to have coins or skins. You can earn coins or skins by opening cases, upgrading skins, or selling skins. Once you have some coins or skins, you can go to the game modes section and choose a game mode you want to play. There are many game modes available, such as jackpot, crash, quiz, tower, bomb defuse, etc. Each game mode has different rules and rewards. For example, in jackpot, you can bet your skins against other players and try to win their skins. In crash, you can bet your coins on a multiplier that goes up and down and try to cash out before it crashes. In quiz, you can answer questions about Standoff 2 or general knowledge and earn coins for correct answers. In tower, you can open cases or defuse bombs to climb up a tower and win prizes. In bomb defuse, you can try to defuse a bomb by cutting wires in a limited time and win coins.

          -

          Tips and Tricks for Using Case Simulator for Standoff 2

          -

          To get the most out of case simulator for Standoff 2, here are some tips and tricks that might help you:

          -
            -
• To save money, open cases that have a high chance of getting rare skins or knives. For example, the Elite case has a 10% chance of getting a knife, while the Origin case has only a 0.5% chance. You can also open cases that have a high value-to-price ratio, such as the Gold case, which costs 1000 coins and has skins worth up to 100000 coins.
          • -
          • To get rare skins, you can try to upgrade your skins to higher tiers. For example, you can upgrade a common skin to an uncommon skin with a 50% chance of success and a low cost. Then, you can upgrade the uncommon skin to a rare skin with a 25% chance of success and a higher cost. And so on, until you reach the legendary tier. You can also use stickers to increase the value of your skins and make them more unique.
          • -
          • To win game modes, you need to have some luck and some strategy. For example, in jackpot, you can increase your chance of winning by betting more skins or higher value skins. However, you also risk losing more if you lose. In crash, you can increase your profit by cashing out at a high multiplier. However, you also risk losing everything if the multiplier crashes before you cash out. In quiz, you can increase your score by answering quickly and correctly. However, you also risk losing points if you answer wrongly or run out of time. In tower, you can increase your reward by opening more cases or defusing more bombs. However, you also risk falling down if you open an empty case or cut the wrong wire. In bomb defuse, you can increase your speed by memorizing the wire colors and patterns. However, you also risk exploding if you cut the wrong wire or run out of time.
          • -
          -

          Conclusion

          -

          Case simulator for Standoff 2 is a great app for anyone who loves Standoff 2 and wants to open cases and collect skins without spending real money or risking their ranking. It is also a fun and educational app that teaches you about the skins, collections, and weapons in the game and tests your luck and skills in different game modes. You can download case simulator for Standoff 2 on Android, iOS, PC, or Mac and enjoy the realistic graphics and sounds of the app. You can also use some tips and tricks to save money, get rare skins, and win game modes. So what are you waiting for? Download case simulator for Standoff 2 today and enjoy the thrill of opening cases and collecting skins!

          -

          FAQs

          -

          Here are some frequently asked questions about case simulator for Standoff 2:

          -

          Q: Can I transfer my skins from case simulator for Standoff 2 to the original game?

          -

          A: No, you cannot transfer your skins from case simulator for Standoff 2 to the original game. Case simulator for Standoff 2 is a simulation app that is not affiliated with the original game or its developers. It is only for entertainment purposes and does not affect your account or inventory in the original game.

          -

          Q: Can I play case simulator for Standoff 2 offline?

          -

          A: Yes, you can play case simulator for Standoff 2 offline. However, some features and functions may not work properly or may require an internet connection to work. For example, you may not be able to watch ads, complete missions, or access the market without an internet connection.

          -

          Q: How can I get more coins in case simulator for Standoff 2?

          -

          A: There are several ways to get more coins in case simulator for Standoff 2. You can earn coins by opening cases, selling skins, completing missions, watching ads, or playing game modes. You can also buy coins with real money if you want.

          -

          Q: How can I get free stickers in case simulator for Standoff 2?

          -

          A: You can get free stickers in case simulator for Standoff 2 by opening sticker capsules or buying them from the market. You can also get free stickers by completing missions or watching ads.

          -

          Q: How can I contact the developer of case simulator for Standoff 2?

          -

          A: You can contact the developer of case simulator for Standoff 2 by sending an email to fallonightgames@gmail.com or by visiting their website at [9](https://fallonight.com/). You can also follow them on social media platforms such as Facebook, Twitter, Instagram, YouTube, etc.

          401be4b1e0
          -
          -
          \ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/How to Unlock All Cannons and Levels in Idle Cannon Tycoon Mod APK.md b/spaces/congsaPfin/Manga-OCR/logs/How to Unlock All Cannons and Levels in Idle Cannon Tycoon Mod APK.md deleted file mode 100644 index 9258a3d7567191eff1641a9f9f2273b83d8817d2..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/How to Unlock All Cannons and Levels in Idle Cannon Tycoon Mod APK.md +++ /dev/null @@ -1,76 +0,0 @@ - -

          Idle Cannon Tycoon Mod Apk: A Fun and Addictive Strategy Game

          -

          Do you love strategy games that challenge your mind and test your skills? Do you enjoy shooting bricks and upgrading your cannons? Do you want to have unlimited money and gems to unlock new features and modes? If you answered yes to any of these questions, then you should download Idle Cannon Tycoon Mod Apk, a fun and addictive strategy game that will keep you entertained for hours.

          -

          idle cannon tycoon mod apk


          Download ⚙⚙⚙ https://urlca.com/2uOaK6



          -

          What is Idle Cannon Tycoon?

          -

          Idle Cannon Tycoon is a game where you shoot bricks and upgrade your cannons. You start with a simple cannon that can fire one bullet at a time. Your goal is to destroy as many bricks as possible before they reach the bottom of the screen. The more bricks you destroy, the more money you earn. You can use the money to buy more cannons, upgrade your existing ones, or unlock new features and modes.

          -

          Idle Cannon Tycoon is also a game where you can go idle or active. You can choose to play the game manually, tapping the screen to fire your cannons, or you can let the game play itself, earning money even when you are offline. You can also switch between different modes, such as normal mode, boss mode, or challenge mode, to spice up your gameplay.

          -

          Idle Cannon Tycoon is also a game where you can unlock new features and modes. As you progress in the game, you will be able to unlock new cannons with different abilities, such as laser cannons, rocket launchers, or plasma guns. You will also be able to unlock new levels with different themes, such as desert, forest, or space. You will also be able to unlock special events and rewards that will make your gaming experience more exciting.

          -

          Why should you download Idle Cannon Tycoon Mod Apk?

          -

          If you are a fan of Idle Cannon Tycoon, you might be wondering why you should download the mod apk version of the game. Well, there are many reasons why you should do so, such as:

          -

          You can enjoy unlimited money and gems. With the mod apk version of the game, you will have access to unlimited money and gems that you can use to buy anything you want in the game. You can buy as many cannons as you want, upgrade them to the max level, or unlock all the features and modes without any restrictions.

          -

          You can access all the cannons and levels. With the mod apk version of the game, you will be able to access all the cannons and levels that are normally locked or require real money to unlock. You can try out all the different cannons with their unique abilities and see which one suits your style best. You can also explore all the different levels with their different themes and challenges.

          -

          You can remove ads and enjoy a smooth gameplay. With the mod apk version of the game, you will be able to remove all the annoying ads that pop up every now and then in the game. You will be able to enjoy a smooth gameplay without any interruptions or distractions.

          -

          idle cannon tycoon mod apk unlimited money
          -idle cannon tycoon mod apk download
          -idle cannon tycoon mod apk latest version
          -idle cannon tycoon mod apk android 1
          -idle cannon tycoon mod apk free shopping
          -idle cannon tycoon mod apk hack
          -idle cannon tycoon mod apk revdl
          -idle cannon tycoon mod apk offline
          -idle cannon tycoon mod apk no ads
          -idle cannon tycoon mod apk 1.0.0.19
          -idle cannon tycoon mod apk 2023
          -idle cannon tycoon mod apk rexdl
          -idle cannon tycoon mod apk happymod
          -idle cannon tycoon mod apk unlimited gems
          -idle cannon tycoon mod apk unlocked
          -idle cannon tycoon mod apk online
          -idle cannon tycoon mod apk ios
          -idle cannon tycoon mod apk 1.0.0.18
          -idle cannon tycoon mod apk 1.0.0.17
          -idle cannon tycoon mod apk 1.0.0.16
          -idle cannon tycoon mod apk 1.0.0.15
          -idle cannon tycoon mod apk 1.0.0.14
          -idle cannon tycoon mod apk 1.0.0.13
          -idle cannon tycoon mod apk 1.0.0.12
          -idle cannon tycoon mod apk 1.0.0.11
          -idle cannon tycoon mod apk 1.0.0.10
          -idle cannon tycoon mod apk 1.0.0.9
          -idle cannon tycoon mod apk 1.0.0.8
          -idle cannon tycoon mod apk 1.0.0.7
          -idle cannon tycoon mod apk 1.0.0.6
          -idle cannon tycoon mod apk 1.0.0.5
          -idle cannon tycoon mod apk 1.0.0.4
          -idle cannon tycoon mod apk 1.0.0.3
          -idle cannon tycoon mod apk 1.0.0.2
          -idle cannon tycoon mod apk 1.0.0.1
          -idle cannon shooting game mod apk
          -idle brick breaker game mod apk
          -idle tower defense game mod apk
          -idle strategy game with cannons mod apk
          -free download of idle cannon game mod apk

          -

          How to download and install Idle Cannon Tycoon Mod Apk?

          -

If you are interested in downloading and installing Idle Cannon Tycoon Mod Apk on your device, you can follow these simple steps:

-

Step 1: Download the mod apk file from a trusted source. You can find the link to the mod apk file at the end of this article. Make sure you download the latest version of the game that is compatible with your device.

          -

          Step 2: Enable unknown sources on your device settings. To install the mod apk file, you need to allow your device to install apps from unknown sources. To do this, go to your device settings, then security, then enable unknown sources. This will allow you to install the mod apk file without any problems.

          -

          Step 3: Install the mod apk file and launch the game. Once you have downloaded the mod apk file, locate it on your device storage and tap on it to install it. Follow the instructions on the screen and wait for the installation to complete. After that, launch the game and enjoy the mod features.

          -

          Step 4: Enjoy the game with unlimited resources and features. Now that you have installed Idle Cannon Tycoon Mod Apk, you can enjoy the game with unlimited money and gems, access to all the cannons and levels, and no ads. You can also switch between different modes and events and have fun shooting bricks and upgrading your cannons.

          -

          Conclusion

          -

          Idle Cannon Tycoon Mod Apk is a fun and addictive strategy game that you should try if you love shooting bricks and upgrading your cannons. You can download the mod apk version of the game and enjoy unlimited money and gems, access to all the cannons and levels, and no ads. You can also go idle or active, switch between different modes and events, and unlock new features and rewards. Idle Cannon Tycoon Mod Apk is a game that will keep you entertained for hours.

          -

          FAQs

          -

          Here are some frequently asked questions about Idle Cannon Tycoon Mod Apk:

          -
            -
          • Is Idle Cannon Tycoon Mod Apk safe to download and install?
          • -
          • Yes, Idle Cannon Tycoon Mod Apk is safe to download and install as long as you get it from a trusted source. You can find the link to the mod apk file at the end of this article.
          • -
          • Do I need to root my device to use Idle Cannon Tycoon Mod Apk?
          • -
          • No, you do not need to root your device to use Idle Cannon Tycoon Mod Apk. You can install it on any Android device without rooting it.
          • -
          • Will I get banned for using Idle Cannon Tycoon Mod Apk?
          • -
          • No, you will not get banned for using Idle Cannon Tycoon Mod Apk. The mod apk version of the game is undetectable by the game servers and does not affect your account in any way.
          • -
          • Can I update Idle Cannon Tycoon Mod Apk?
          • -
          • Yes, you can update Idle Cannon Tycoon Mod Apk whenever there is a new version available. However, you will need to download and install the new mod apk file manually from the same source as before.
          • -
          • Can I play Idle Cannon Tycoon Mod Apk online with other players?
          • -
          • Yes, you can play Idle Cannon Tycoon Mod Apk online with other players. The mod apk version of the game does not interfere with your online connectivity or multiplayer mode.
          • -

          197e85843d
          -
          -
          \ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/Play Free Slots Casino Games for Fun - No Registration Required.md b/spaces/congsaPfin/Manga-OCR/logs/Play Free Slots Casino Games for Fun - No Registration Required.md deleted file mode 100644 index 0033efc390628df0c4d88877e9605e9de01c5c5e..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Play Free Slots Casino Games for Fun - No Registration Required.md +++ /dev/null @@ -1,118 +0,0 @@ - -

          Slots Casino: How to Play and Win Online

          -

          Slots casino games are among the most popular and exciting forms of gambling. They offer a chance to win big prizes with a simple click of a button. But how do you play and win at slots casino games online? In this article, we will answer this question and give you some tips and tricks to improve your chances of winning.

          -

          What are slots casino games?

          -

          Slots casino games are games of chance that involve spinning reels with symbols on them. The goal is to match the symbols on the paylines to form winning combinations. Depending on the game, you can win different amounts of money or even hit a jackpot.

          -

          slots casino


          DOWNLOADhttps://urlca.com/2uOewN



          -

          The history of slots casino games

          -

          The first slot machine was invented by Charles Fey in 1895 in San Francisco. It was called the Liberty Bell and had three reels with five symbols: horseshoes, diamonds, spades, hearts, and a liberty bell. The machine paid out 50 cents for a three-bell combination. The popularity of the machine led to the development of more complex and diverse slot machines over the years.

          -

          The types of slots casino games

          -

          Today, there are many types of slots casino games available online. Some of the most common ones are:

          -

          Classic slots

          -

          These are the traditional slot machines that have three or five reels and one or more paylines. They usually have simple graphics and sounds and feature classic symbols like fruits, bars, sevens, and bells. They are easy to play and offer low to medium payouts.

          -

          Video slots

          -

          These are the modern slot machines that have five or more reels and multiple paylines. They have advanced graphics and animations and feature various themes and characters. They also have special features like wilds, scatters, bonus rounds, free spins, and multipliers. They are more entertaining and offer higher payouts.

          -

          Progressive slots

          -

          These are the slot machines that have a jackpot that increases every time someone plays them. A percentage of each bet goes into the jackpot pool until someone hits it. The jackpot can be triggered randomly or by landing a specific combination of symbols. Progressive slots can offer life-changing payouts, but they are also very risky.

          -

          free slots casino games
          -online slots casino real money
          -best slots casino app
          -no deposit bonus slots casino
          -vegas slots casino online
          -slots casino jackpot mania
          -slots casino party
          -slots casino near me
          -slots casino bonus codes
          -slots casino free spins
          -play slots casino games online
          -slots casino reviews
          -slots casino no download
          -slots casino cheats
          -slots casino hack
          -slots casino tournaments
          -slots casino tips and tricks
          -slots casino strategy
          -slots casino odds
          -slots casino payout percentage
          -new slots casino 2023
          -mobile slots casino
          -live slots casino
          -penny slots casino
          -high limit slots casino
          -progressive slots casino
          -video slots casino
          -classic slots casino
          -3d slots casino
          -fruit slots casino
          -mega win slots casino
          -hot shot slots casino
          -quick hit slots casino
          -double down slots casino
          -cashman slots casino
          -gold fish slots casino
          -house of fun slots casino
          -caesars slots casino
          -huuuge slots casino
          -billionaire slots casino
          -heart of vegas slots casino
          -scatter slots casino
          -pop slots casino
          -myvegas slots casino
          -tycoon slots casino
          -zynga slots casino
          -gsn slots casino
          -big fish slots casino
          -slotomania slots casino
          -jackpot party slots casino

          -

          How to play slots casino games online?

          -

          If you want to play slots casino games online, you need to follow these steps:

          -

          Choose a reputable online casino

          -

          The first step is to find a reliable and trustworthy online casino that offers a variety of slots casino games. You can use our website to compare and review different online casinos based on their reputation, security, bonuses, customer service, game selection, and more.

          -

          Register and claim your bonus

          -

          The next step is to create an account at the online casino of your choice and make your first deposit. Most online casinos offer generous welcome bonuses for new players that can boost your bankroll and give you more chances to play and win. Make sure to read the terms and conditions of the bonus before claiming it.

          -

          Select a slot game that suits your preferences

          -

The third step is to browse through the online casino's game lobby and choose a slot game that appeals to you. You can filter the games by type, theme, provider, features, or jackpot. You can also try out the games for free in demo mode before playing for real money.

Learn the rules and features of the game

-

          The fourth step is to familiarize yourself with the rules and features of the slot game you have chosen. You can do this by reading the game's paytable, which shows the symbols, payouts, paylines, and bonus features of the game. You can also check the game's RTP (return to player) and volatility, which indicate how often and how much the game pays out.

          -

          Place your bets and spin the reels

          -

          The final step is to place your bets and spin the reels. You can adjust your bet size by changing the coin value and the number of coins per payline. You can also choose to bet on all or some of the paylines. Then, you can either click on the spin button or use the autoplay feature to spin the reels automatically for a set number of times. If you land a winning combination, you will receive a payout according to the paytable. If you trigger a bonus feature, you will have a chance to win extra prizes or free spins.

          -

          How to win at slots casino games online?

          -

          While slots casino games are based on luck, there are some things you can do to increase your chances of winning. Here are some tips and tricks to help you win at slots casino games online:

          -

          Understand the RTP and volatility of the game

          -

          The RTP and volatility of a slot game are two important factors that affect your chances of winning. The RTP is the percentage of money that the game returns to the players over a long period of time. The higher the RTP, the more likely you are to win back some of your bets. The volatility is the level of risk and reward that the game offers. The higher the volatility, the more unpredictable the game is, but also the higher the potential payouts are.

          -

          You should choose a slot game that has an RTP of at least 96% and a volatility that matches your risk appetite. For example, if you have a small budget and want to play for longer, you should choose a low-volatility slot that pays out frequently but in smaller amounts. If you have a larger budget and want to chase big wins, you should choose a high-volatility slot that pays out rarely but in larger amounts.
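
-

As a rough illustration of what RTP means in practice, here is a tiny sketch (with made-up numbers, not figures from any particular slot) that estimates the average loss you can expect over a session if the advertised RTP holds:

```python
# Expected loss over a session, assuming a fixed bet size and that the
# advertised RTP holds on average (individual sessions vary widely).
def expected_loss(bet_per_spin, spins, rtp):
    total_wagered = bet_per_spin * spins
    return total_wagered * (1 - rtp)

# Example: 500 spins at $0.50 per spin on a 96% RTP slot.
print(round(expected_loss(0.50, 500, 0.96), 2))  # -> 10.0 dollars expected loss
```

Keep in mind that RTP is a long-run average: a single session can end far above or far below this figure, especially on high-volatility games.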

          -

          Use a betting strategy that fits your budget

          -

          Another tip is to use a betting strategy that fits your budget and goals. A betting strategy is a set of rules that tells you how much to bet on each spin, depending on your previous outcomes. There are many betting strategies that you can use, such as the Martingale, the Fibonacci, or the Paroli. However, you should be aware that no betting strategy can guarantee you a win or overcome the house edge.

          -

          You should use a betting strategy that suits your budget and goals. For example, if you want to minimize your losses and play for longer, you should use a flat betting strategy that involves betting the same amount on each spin. If you want to maximize your wins and take advantage of winning streaks, you should use a progressive betting strategy that involves increasing your bet size after each win.
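
-

To make the difference concrete, here is a small simulation sketch in Python. It uses a simplified even-money bet instead of a real slot paytable, and the bankroll, bet size, and win probability are arbitrary assumptions, so treat it as an illustration of the mechanics rather than a recommendation:

```python
import random

def play_session(strategy, bankroll=100.0, base_bet=1.0, spins=200, win_prob=0.48):
    """Simulate a simplified even-money game; returns the final bankroll."""
    bet = base_bet
    for _ in range(spins):
        if bet > bankroll:          # can no longer cover the next bet
            break
        if random.random() < win_prob:
            bankroll += bet
            bet = base_bet          # both strategies reset after a win
        else:
            bankroll -= bet
            if strategy == "martingale":
                bet *= 2            # double the bet after every loss
            # flat betting keeps the same bet size
    return bankroll

random.seed(1)
print("flat      :", round(play_session("flat"), 2))
print("martingale:", round(play_session("martingale"), 2))
```

Running it a few times shows the typical pattern: flat betting drifts down slowly, while the Martingale either ends slightly ahead or runs into a losing streak it can no longer cover.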

          Take advantage of free spins and other promotions

          -

          A third tip is to take advantage of free spins and other promotions that online casinos offer to their players. Free spins are spins that you can use on a slot game without risking your own money. They can be part of the welcome bonus, the loyalty program, or a special offer. Free spins can help you try out new games, extend your playtime, and increase your chances of winning.

          -

          Other promotions that online casinos offer include cashback, reload bonuses, tournaments, and giveaways. These promotions can also boost your bankroll and give you more opportunities to play and win. However, you should always read the terms and conditions of the promotions before claiming them, as they may have wagering requirements, expiration dates, or other restrictions.

          -

          Manage your bankroll and limit your losses

          -

          The last tip is to manage your bankroll and limit your losses. Your bankroll is the amount of money that you have allocated for gambling. You should never gamble with money that you cannot afford to lose or that you need for other purposes. You should also set a budget for each session and stick to it.

          -

          One way to manage your bankroll is to use the stop-loss technique. This means that you set a limit on how much you are willing to lose in a session and stop playing when you reach it. This way, you can avoid chasing your losses and losing more than you can afford. You should also set a win limit, which is the amount of money that you are happy to win in a session and stop playing when you reach it. This way, you can avoid losing your winnings and end on a high note.
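
-

Here is a minimal sketch of the stop-loss and win-limit idea, with hypothetical thresholds; the point is simply to decide the numbers before the session starts and then stick to them:

```python
def should_stop(starting_bankroll, current_bankroll,
                stop_loss=20.0, win_limit=50.0):
    """Return True once the session hits either limit."""
    if starting_bankroll - current_bankroll >= stop_loss:
        return True   # lost as much as planned for this session
    if current_bankroll - starting_bankroll >= win_limit:
        return True   # banked the target profit, quit while ahead
    return False

print(should_stop(100.0, 75.0))   # True  -- down 25, past the 20 stop-loss
print(should_stop(100.0, 130.0))  # False -- up 30, still below the 50 win limit
```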

          -

          Conclusion

          -

          Slots casino games are fun and exciting ways to gamble online. They offer a variety of themes, features, and payouts that can suit any player's preferences. However, they are also games of chance that require luck and skill to win. By following the tips and tricks we have shared in this article, you can improve your chances of winning at slots casino games online.

          -

          Remember to choose a reputable online casino, register and claim your bonus, select a slot game that suits your preferences, learn the rules and features of the game, place your bets and spin the reels, understand the RTP and volatility of the game, use a betting strategy that fits your budget, take advantage of free spins and other promotions, manage your bankroll and limit your losses.

          -

          We hope you enjoyed this article and learned something new. If you have any questions or feedback, please feel free to contact us. We would love to hear from you. Happy spinning!

          -

          FAQs

          -

          Here are some frequently asked questions about slots casino games online:

          -

          What is the best online casino for slots?

          -

          There is no definitive answer to this question, as different online casinos may offer different advantages and disadvantages for slots players. However, some of the factors that you should consider when choosing an online casino for slots are:

          -
            -
          • The reputation and security of the online casino
          • -
          • The variety and quality of the slot games available
          • -
          • The bonuses and promotions offered for slots players
          • -
          • The customer service and support provided by the online casino
          • -
          • The payment methods and withdrawal options available
          • -
          -

          You can use our website to compare and review different online casinos based on these factors and more.

          -

          How do I know if a slot game is fair?

          -

All slot games at reputable online casinos are fair and random. They use software called a random number generator (RNG) that ensures that every spin is independent and unpredictable. The RNG is tested and certified by independent third-party agencies that verify its accuracy and integrity.

          -

You can also look at a slot game's RTP (return to player) percentage. The RTP is the percentage of all wagered money that the game returns to players over a very long period of play; for example, a slot with a 96% RTP pays back, on average, $96 for every $100 wagered across millions of spins. A higher RTP means a smaller house edge, although it says nothing about what will happen in any single session.

          -

          How do I win a jackpot at slots?

          -

          A jackpot is a large prize that can be won at some slot games. There are two types of jackpots: fixed and progressive. A fixed jackpot is a set amount of money that does not change regardless of how many times it is won or how many people play the game. A progressive jackpot is an amount of money that increases every time someone plays the game until someone wins it.

          -

          To win a jackpot at slots, you need to play a slot game that offers one and meet the requirements to trigger it. For example, some slot games require you to bet the maximum amount or land a specific combination of symbols to win the jackpot. The chances of winning a jackpot are very low, but not impossible. You can increase your chances of winning a jackpot by playing slot games that have a high RTP, a low volatility, and a large number of paylines.

          -

          Can I play slots casino games for free?

          -

          Yes, you can play slots casino games for free at most online casinos. You can do this by using the demo mode or the free spins that the online casino offers. Playing slots casino games for free can help you learn the rules and features of the game, test different strategies, and have fun without risking your own money. However, you cannot win real money or jackpots when playing for free.

          -

          Can I play slots casino games on my mobile device?

          -

          Yes, you can play slots casino games on your mobile device at most online casinos. You can do this by using the mobile version of the online casino's website or by downloading the online casino's app. Playing slots casino games on your mobile device can give you more convenience, flexibility, and accessibility. You can play anytime and anywhere, as long as you have a stable internet connection and a compatible device.

          197e85843d
          -
          -
          \ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/Realistic Minecraft Mods The Ultimate List for 2023.md b/spaces/congsaPfin/Manga-OCR/logs/Realistic Minecraft Mods The Ultimate List for 2023.md deleted file mode 100644 index 18632458a97407a9537b17d1c7b3739f6b4d9ca9..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Realistic Minecraft Mods The Ultimate List for 2023.md +++ /dev/null @@ -1,147 +0,0 @@ - -

          Minecraft Realistic Mod: How to Make Your Game More Immersive

          -

          Minecraft is a sandbox game that allows players to create and explore infinite worlds made of blocks. However, some players might find the default graphics, physics, sounds, and animations of the game too simplistic or unrealistic for their taste. That's where realistic mods come in.

          -

          Realistic mods are modifications that aim to make Minecraft more immersive by enhancing or changing various aspects of the game, such as terrain generation, lighting, shadows, water, fire, weather, animals, plants, items, etc. By using realistic mods, players can experience Minecraft in a whole new way.

          -

          minecraft realistic mod


          Download Zip ::: https://urlca.com/2uOfqZ



          -

          In this article, we will explain what are the benefits and drawbacks of using realistic mods, how to install them, and what are some of the best realistic mods available for Minecraft.

          -

          What are the benefits of using realistic mods?

          -

          Using realistic mods can have several advantages for players who want to make their game more immersive. Some of these benefits are:

          -
            -
          • Improved graphics: Realistic mods can make Minecraft look more stunning by adding high-resolution textures, shaders, reflections, shadows, fog, clouds, etc.
          • -
• Improved physics: Realistic mods can make Minecraft behave more realistically by adding gravity, inertia, friction, buoyancy, fluid dynamics, etc.

          • -
          • Improved animations: Realistic mods can make Minecraft more lively by adding smooth and natural animations for the player, mobs, items, blocks, etc.
          • -
          • Improved sounds: Realistic mods can make Minecraft more immersive by adding realistic and ambient sounds for the environment, weather, animals, blocks, etc.
          • -
          • Improved gameplay: Realistic mods can make Minecraft more challenging and fun by adding new features, mechanics, items, mobs, biomes, structures, etc.
          • -
          -

          What are the drawbacks of using realistic mods?

          -

          Using realistic mods can also have some disadvantages for players who want to make their game more immersive. Some of these drawbacks are:

          -
            -
          • Performance issues: Realistic mods can make Minecraft run slower or lag by increasing the load on the CPU, GPU, RAM, etc.
          • -
          • Compatibility problems: Realistic mods can make Minecraft crash or glitch by conflicting with each other or with the vanilla game.
          • -
          • Bugs: Realistic mods can make Minecraft behave unpredictably or incorrectly by introducing errors or bugs in the code or logic.
          • -
          -

          How to install realistic mods?

          -

          Requirements

          -

          To install realistic mods, you will need a few things:

          -
            -
          • A computer that meets the minimum or recommended system requirements for running realistic mods. The exact requirements will vary depending on the mod and your settings, but generally speaking, you will need a decent CPU, GPU, RAM, and storage space. You can check the mod pages for more details on the requirements.
          • -
          • A copy of Minecraft that is compatible with the mod version. Most realistic mods are made for the Java Edition of Minecraft, which is available for Windows, Mac OS, and Linux. You can buy and download Minecraft from the official website. You will also need to update your game to the latest version or the version that matches the mod.
          • -
          • A backup of your Minecraft files and worlds. Installing realistic mods can sometimes cause problems or overwrite your files and worlds. To avoid losing your progress or data, you should always make a backup of your .minecraft folder and your saves folder before installing any mod. You can find these folders in your %appdata% directory on Windows or in your Library/Application Support directory on Mac OS.
          • -
          -

          Mod loaders

          -

          A mod loader is a program that allows you to install and manage multiple mods for Minecraft. There are different mod loaders available for Minecraft, but the most popular ones are Forge and Fabric. You will need to install one of these mod loaders before installing any realistic mod.

          -
            -
          • Forge is a mod loader that has been around for a long time and supports many mods. You can download Forge from the official website. To install Forge, you need to run the installer file and select the Install client option. Then you need to launch Minecraft once with the Forge profile to create a mods folder in your .minecraft directory. You can then place any Forge-compatible mod files in this folder.
          • -
          • Fabric is a newer mod loader that is faster and lighter than Forge. You can download Fabric from the official website. To install Fabric, you need to run the installer file and select the Install client option. Then you need to launch Minecraft once with the Fabric profile to create a mods folder in your .minecraft directory. You also need to download and place Fabric API in this folder. You can then place any Fabric-compatible mod files in this folder.
          • -
          -

          Mod sources

          -

          To find and download realistic mods, you will need to visit some websites that host and distribute mods for Minecraft. There are many websites that offer mods for Minecraft, but some of the most reliable and popular ones are CurseForge and Planet Minecraft. You can browse these websites by categories, tags, ratings, downloads, etc. to find the mods that suit your preferences.

          -
            -
          • CurseForge is a website that hosts thousands of mods for various games, including Minecraft. You can download CurseForge from the official website. To use CurseForge, you need to create an account and log in. Then you can browse or search for realistic mods by using filters such as game version, mod loader, category, etc. You can also use the CurseForge app to install and update mods automatically.
          • -
          • Planet Minecraft is a website that hosts millions of projects for Minecraft, including mods, maps, skins, servers, etc. You can visit Planet Minecraft from the official website. To use Planet Minecraft, you do not need to create an account, but you can do so to access more features. Then you can browse or search for realistic mods by using filters such as game version, category, popularity, etc. You can also use the Planet Minecraft launcher to install and update mods automatically.
          • -
          -

          Mod installation

          -

          To install realistic mods, you will need to follow these steps:

          -
            -
          1. Download the mod files from the mod sources. Make sure to download the mod files that match your game version and mod loader. You can usually find the download links on the mod pages or in the description. Some mods may also require additional files or dependencies, such as libraries or other mods. You will need to download and install them as well.
          2. -
          3. Place the mod files in the mods folder in your .minecraft directory. Depending on your mod loader, you will have a different mods folder. For Forge, it is simply called mods. For Fabric, it is also called mods, but you need to place Fabric API in it as well. You can simply drag and drop the mod files in the mods folder, or use a file manager to copy and paste them.
          4. -
          5. Launch Minecraft with the mod loader profile. To run Minecraft with realistic mods, you will need to select the mod loader profile in the launcher. For Forge, it is called Forge. For Fabric, it is called Fabric. You can also rename or customize these profiles if you want. Then you can click Play and wait for Minecraft to load with the mods.
          6. -
          7. Enjoy your realistic Minecraft experience. Once Minecraft is loaded, you can check if the mods are working by going to the Mods menu in the main menu. You should see a list of all the installed mods and their information. You can also change some of the mod settings or options if available. Then you can create or load a world and start playing with realistic mods.
          8. -
          -

          What are some of the best realistic mods?

          -

          There are many realistic mods available for Minecraft, but some of them stand out for their quality and popularity. Here are some of the best realistic mods that you can try:

          -

          minecraft realistic mod forge
          -minecraft realistic mod fabric
          -minecraft realistic mod 1.19.2
          -minecraft realistic mod 1.18.2
          -minecraft realistic mod 1.20
          -minecraft realistic mod download
          -minecraft realistic mod curseforge
          -minecraft realistic mod planet minecraft
          -minecraft realistic mod shaders
          -minecraft realistic mod texture pack
          -minecraft realistic mod physics
          -minecraft realistic mod explosions
          -minecraft realistic mod fire
          -minecraft realistic mod water
          -minecraft realistic mod weather
          -minecraft realistic mod biomes
          -minecraft realistic mod terrain
          -minecraft realistic mod animals
          -minecraft realistic mod mobs
          -minecraft realistic mod horses
          -minecraft realistic mod genetics
          -minecraft realistic mod food
          -minecraft realistic mod cooking
          -minecraft realistic mod sleep
          -minecraft realistic mod time
          -minecraft realistic mod torches
          -minecraft realistic mod lighting
          -minecraft realistic mod sound
          -minecraft realistic mod movement
          -minecraft realistic mod animation
          -minecraft realistic mod cars
          -minecraft realistic mod furniture
          -minecraft realistic mod architecture
          -minecraft realistic mod interior design
          -minecraft realistic mod exterior design
          -minecraft realistic mod landscape design
          -minecraft realistic mod sci-fi design
          -minecraft realistic mod modern design
          -minecraft realistic mod medieval design
          -minecraft realistic mod fantasy design
          -minecraft realistic mod military design
          -minecraft realistic mod weapons design
          -minecraft realistic mod missiles design

          -

          Realistic Terrain Generation

          -

          This mod changes the way Minecraft generates terrain by using real-world data and algorithms. It creates more realistic and diverse landscapes with mountains, valleys, rivers, lakes, islands, etc. It also adds new biomes and structures that fit the terrain. You can customize the terrain generation settings to suit your preferences.

          -

          Realistic Terrain Generation screenshot

          -

          You can download Realistic Terrain Generation from [here].

          -

          Realistic Horse Genetics

          -

          This mod changes the way Minecraft spawns and breeds horses, donkeys, and mules by using real-world genetics and colors. It makes these animals have realistic coat patterns, markings, eye colors, etc. It also adds new features such as foals, gender differences, fertility, aging, etc. You can learn more about horse genetics and breeding by using this mod.

          -

          Realistic Horse Genetics screenshot

          -

          You can download Realistic Horse Genetics from [here].

          -

          Realistic Fire Spread

          -

          This mod changes the way Minecraft spreads fire by making it more realistic and dangerous. It makes fire spread faster and farther depending on the flammability of the blocks and the wind direction. It also makes fire consume oxygen and create smoke that can suffocate players and mobs. You will need to be more careful when dealing with fire by using this mod.

          -

          Realistic Fire Spread screenshot

          -

          You can download Realistic Fire Spread from [here].

          -

          Realistic Torches

          -

          This mod changes the way Minecraft handles torches by making them burn out after a configurable amount of time. It makes torches require matches or flint and steel to light up, and makes them emit smoke particles when burning. It also adds new types of torches such as stone torches, glowstone torches, etc. You will need to manage your light sources more wisely by using this mod.

          -

          Realistic Torches screenshot

          -

          You can download Realistic Torches from [here].

          -

          Realistic Item Drops

          -

This mod changes the way Minecraft drops items by making them drop flat on the ground and disabling auto-pickup. It makes items behave more realistically by letting gravity and water affect where they come to rest, instead of floating and spinning in place.

          Realistic Item Drops screenshot

          -

You can download Realistic Item Drops from [here].

          -

          Realistic Bees

          -

This mod changes the way Minecraft spawns and breeds bees by using real-world genetics and colors. It can make bees tiny or big, let them spawn in bigger groups, and give them more hive space. It also adds new features such as honeycomb blocks, bee nests, bee armor, etc. You can learn more about bee biology and ecology by using this mod.

          -

          Realistic Bees screenshot

          -

You can download Realistic Bees from [here].

          -

          Realistic Sleep

          -

          This mod changes the way Minecraft handles sleeping by making it speed up time instead of skipping to day. It makes sleeping more realistic and immersive by showing the night sky and the moon phases. It also adds new features such as insomnia, nightmares, sleepwalking, etc. You can customize the sleep settings to suit your preferences.

          -

          Realistic Sleep screenshot

          -

You can download Realistic Sleep from [here].

          -

          Realistic Explosion Physics

          -

          This mod changes the way Minecraft handles explosions by making them more realistic and destructive. It makes explosions create shockwaves, debris, dust, smoke, fire, etc. It also makes explosions affect entities and blocks differently depending on their distance, mass, resistance, etc. You can enjoy more spectacular and dynamic explosions by using this mod.

          -

          Realistic Explosion Physics screenshot

          -

You can download Realistic Explosion Physics from [here].

          -

          Conclusion

          -

          Realistic mods are a great way to make your Minecraft game more immersive and fun. They can improve or change various aspects of the game, such as graphics, physics, sounds, animations, gameplay, etc. However, they can also have some drawbacks, such as performance issues, compatibility problems, bugs, etc. Therefore, you should always check the mod requirements, compatibility, and updates before installing them.

          -

          To install realistic mods, you will need a computer that meets the system requirements, a copy of Minecraft that is compatible with the mod version, a backup of your files and worlds, a mod loader such as Forge or Fabric, and a mod source such as CurseForge or Planet Minecraft. You will also need to download and place the mod files in the mods folder in your .minecraft directory and launch Minecraft with the mod loader profile.

          -

There are many realistic mods available for Minecraft, but some of them stand out for their quality and popularity. We have listed some of the best realistic mods that you can try, such as Realistic Terrain Generation, Realistic Horse Genetics, Realistic Fire Spread, Realistic Torches, Realistic Item Drops, Realistic Bees, Realistic Sleep, and Realistic Explosion Physics. You can download them from the links provided and enjoy your realistic Minecraft experience.

          -

          We hope you found this article helpful and informative. If you have any questions or feedback, please feel free to leave a comment below. Thank you for reading and happy gaming!

          -

          FAQs

          -

          Here are some of the frequently asked questions about realistic mods:

          -
            -
          • Q: Can I use realistic mods on multiplayer servers?
          • -
          • A: It depends on the server and the mod. Some servers may allow or require certain mods, while others may ban or disable them. You should always check the server rules and mod compatibility before joining a multiplayer server with realistic mods.
          • -
          • Q: Can I use realistic mods on other platforms such as Bedrock Edition or Console Edition?
          • -
          • A: No, realistic mods are only available for the Java Edition of Minecraft, which is compatible with Windows, Mac OS, and Linux. Other platforms such as Bedrock Edition or Console Edition do not support modding or have very limited options.
          • -
          • Q: Can I use realistic mods with other types of mods such as adventure, magic, technology, etc.?
          • -
          • A: Yes, you can use realistic mods with other types of mods as long as they are compatible with each other and with your game version and mod loader. However, you should be aware that using too many or conflicting mods can cause performance issues, compatibility problems, bugs, etc.
          • -
          • Q: Can I create my own realistic mod or suggest a feature for an existing mod?
          • -
          • A: Yes, you can create your own realistic mod if you have the skills and tools to do so. You can also suggest a feature for an existing mod by contacting the mod developer or leaving a comment on the mod page. However, you should respect the mod developer's vision and decision and not expect them to implement your suggestion.
          • -
          • Q: Can I share or redistribute a realistic mod that I downloaded or created?
          • -
          • A: It depends on the mod license and permissions. Some mods may allow you to share or redistribute them as long as you give credit to the original mod developer and link to the original mod page. Other mods may prohibit you from sharing or redistributing them without the mod developer's consent. You should always check the mod license and permissions before sharing or redistributing a realistic mod.
          • -

          197e85843d
          -
          -
          \ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/Tic-Tac-Toe in React A Beginners Guide.md b/spaces/congsaPfin/Manga-OCR/logs/Tic-Tac-Toe in React A Beginners Guide.md deleted file mode 100644 index b84abe11c7e27e200e1d78afabeb92201f488346..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Tic-Tac-Toe in React A Beginners Guide.md +++ /dev/null @@ -1,185 +0,0 @@ -
          -

          How to Build a Tic Tac Toe Game with React Hooks

          -

          Tic tac toe is a classic game that is fun and easy to play. But did you know that you can also build it with React Hooks, a new feature that lets you use state and other React features without writing a class component?

          -

          In this tutorial, you will learn how to build a tic tac toe game with React Hooks from scratch. You will also learn how to add a time travel feature that allows you to go back to any previous move in the game. By the end of this tutorial, you will have a fully functional tic tac toe game that you can play with your friends or online.

          -

          tic tac toe react


          Download File ○○○ https://urlca.com/2uO72X



          -

          To follow this tutorial, you will need some basic knowledge of HTML, CSS, JavaScript, and React. You will also need a code editor, a web browser, and Node.js installed on your computer.

          -

          Introduction

          -

          What is React Hooks and why use it for tic tac toe?

          -

          React Hooks are a new addition in React 16.8 that let you use state and other React features without writing a class component. They are functions that let you "hook into" React state and lifecycle features from function components.

          -

          Some of the benefits of using React Hooks are:

          -
            -
          • They make your code more readable and maintainable by avoiding the complexity of class components.
          • -
          • They let you reuse stateful logic across different components without introducing higher-order components or render props.
          • -
          • They let you use more of React's features, such as context, reducers, custom hooks, etc.
          • -
          -

          For tic tac toe, using React Hooks will make our code simpler and cleaner. We will be able to manage the state of the game, such as the board, the player, and the winner, with just a few lines of code. We will also be able to add some extra features, such as time travel, with ease.
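As a taste of what that looks like, here is a minimal sketch of the game state with the useState hook. The variable names simply mirror the ones used later in this tutorial; the Game component here is a throwaway example, not the App component we build below.

import { useState } from 'react';

function Game() {
  // One entry per square: null, 'X', or 'O'
  const [board, setBoard] = useState(Array(9).fill(null));
  // Whose turn it is
  const [player, setPlayer] = useState('X');

  // ...rendering and click handling come later in the tutorial
  return null;
}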

          -

          What you will learn in this tutorial

          -

          In this tutorial, you will learn how to:

          -
            -
          • Create a React app with create-react-app
          • -
          • Create functional components with JSX
          • -
          • Use useState hook to manage state
          • -
          • Use onClick handler to handle user actions
          • -
          • Use useEffect hook to store data in localStorage
          • -
          • Use map method to render lists
          • -
          • Use custom functions and logic to implement game rules
          • -
          -

          What you need to get started

          -

          To follow this tutorial, you will need the following tools and resources:

          -
            -
          • A code editor, such as Visual Studio Code, Atom, or Sublime Text
          • -
          • A web browser, such as Chrome, Firefox, or Safari
          • -
          • Node.js, a JavaScript runtime environment that lets you run JavaScript code outside of a browser. You can download it from https://nodejs.org/en/
          • -
          • create-react-app, a tool that lets you create a React app with no configuration. You can install it with the command npm install -g create-react-app
          • -
          • A GitHub account, if you want to save and share your code online. You can sign up for free at https://github.com/
          • -
          -

          The logic of tic tac toe

          -

          How to represent the board and the squares

          -

          The first step in building our tic tac toe game is to decide how to represent the board and the squares. The board is a 3x3 grid of squares, where each square can be either empty, marked with an X, or marked with an O. We can use an array of nine elements to store the state of the board, where each element corresponds to a square. For example, the initial state of the board can be represented as:

          -

          How to build a tic tac toe game with react hooks
          -React tutorial: tic tac toe with state and effects
          -Tic tac toe react app with custom hooks and reducer
          -Learn react by making a tic tac toe game with typescript
          -Tic tac toe game in react native with expo
          -React testing library: tic tac toe game example
          -Tic tac toe with react and firebase authentication
          -React tic tac toe game with AI using minimax algorithm
          -Tic tac toe game with react and socket.io for multiplayer
          -React tic tac toe game with next.js and tailwind css
          -Tic tac toe game with react and graphql using apollo client
          -React tic tac toe game with redux and redux toolkit
          -Tic tac toe game with react and styled components
          -React tic tac toe game with animations using framer motion
          -Tic tac toe game with react and web assembly
          -React tic tac toe game with drag and drop using react dnd
          -Tic tac toe game with react and material ui
          -React tic tac toe game with dark mode using context api
          -Tic tac toe game with react and svg
          -React tic tac toe game with voice control using speech recognition api
          -Tic tac toe game with react and web workers
          -React tic tac toe game with undo and redo functionality
          -Tic tac toe game with react and emotion css-in-js library
          -React tic tac toe game with internationalization using i18next
          -Tic tac toe game with react and service workers for offline support
          -React tic tac toe game with accessibility features using aria attributes
          -Tic tac toe game with react and d3.js for data visualization
          -React tic tac toe game with code splitting and lazy loading
          -Tic tac toe game with react and three.js for 3d graphics
          -React tic tac toe game with progressive web app features
          -Tic tac toe game with react and bootstrap 5
          -React tic tac toe game with server-side rendering using next.js or gatsby.js
          -Tic tac toe game with react and firebase firestore database
          -React tic tac toe game with authentication using auth0 or firebase auth
          -Tic tac toe game with react and aws amplify cloud services
          -React tic tac toe game with deployment using netlify or vercel
          -Tic tac toe game with react and github actions for continuous integration
          -React tic toc toe game with eslint and prettier for code quality
          -Tic toc toe game with react and jest for unit testing
          -React tic toc toe game with cypress or puppeteer for end-to-end testing

[null, null, null, null, null, null, null, null, null]

          We can use the index of the array to identify each square, starting from 0 to 8. For example, the top-left square has index 0, the top-middle square has index 1, and so on. We can also use a table element to display the board on the web page, where each table cell contains a button element that represents a square.
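If you ever need to translate between a square's index and its row and column, the mapping is simple arithmetic. The two helpers below are just a convenience sketch and are not part of the tutorial's files.

// Index layout of the board:
// 0 | 1 | 2
// 3 | 4 | 5
// 6 | 7 | 8
const toRowCol = (i) => [Math.floor(i / 3), i % 3]; // toRowCol(5) -> [1, 2]
const toIndex = (row, col) => row * 3 + col;        // toIndex(2, 0) -> 6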

          -

          How to check for a winner or a draw

          -

          The next step is to decide how to check for a winner or a draw. A player wins if they mark three squares in a row, either horizontally, vertically, or diagonally. A draw occurs if all nine squares are marked and no player wins. We can use a function that takes the board array as an argument and returns either 'X', 'O', or null depending on the outcome of the game. For example:

function calculateWinner(board) {
  // Define the winning combinations
  const lines = [
    [0, 1, 2], // Top row
    [3, 4, 5], // Middle row
    [6, 7, 8], // Bottom row
    [0, 3, 6], // Left column
    [1, 4, 7], // Middle column
    [2, 5, 8], // Right column
    [0, 4, 8], // Top-left to bottom-right diagonal
    [2, 4, 6], // Top-right to bottom-left diagonal
  ];
  // Loop through the lines and check for a winner
  for (let i = 0; i < lines.length; i++) {
    const [a, b, c] = lines[i]; // Destructure the line into three squares
    if (board[a] && board[a] === board[b] && board[a] === board[c]) {
      return board[a]; // Return the winner ('X' or 'O')
    }
  }
  // Check for a draw
  if (board.every(square => square !== null)) {
    return 'Draw'; // Return 'Draw' if all squares are marked
  }
  // Return null if no winner or draw
  return null;
}
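A quick sanity check with hand-written boards (this snippet is not part of logic.js; it is just something you could paste into the browser console after defining calculateWinner):

// 'X' has completed the top row (indices 0, 1 and 2)
const sample = ['X', 'X', 'X', 'O', 'O', null, null, null, null];
console.log(calculateWinner(sample)); // -> 'X'

// Nothing completed yet and empty squares remain
console.log(calculateWinner(Array(9).fill(null))); // -> null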

          How to switch between players

          -

          The final step in the logic of tic tac toe is to decide how to switch between players. We can use a variable to store the current player ('X' or 'O'), and toggle it after each move. We can also use another variable to store the history of moves, which will be useful for implementing the time travel feature later. For example:

// Initialize the state variables
let board = [null, null, null, null, null, null, null, null, null]; // The current board state
let player = 'X'; // The current player ('X' or 'O')
let history = []; // The history of moves

// Handle user actions
function handleClick(i) {
  // Ignore the click if the game is already won or the square is already marked
  if (calculateWinner(board) || board[i]) return;
  // Update the board state with the current player's mark
  board[i] = player;
  // Add the board state to the history array
  history.push(board.slice()); // Use slice to make a copy of the board
  // Switch the player
  player = player === 'X' ? 'O' : 'X';
}
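Tracing a couple of clicks makes the flow easier to follow; the expected output is shown as comments and assumes the variables defined above are in scope.

handleClick(4);              // 'X' takes the centre square
console.log(board[4]);       // -> 'X'
console.log(player);         // -> 'O' (the turn has switched)

handleClick(4);              // ignored: that square is already marked
console.log(history.length); // -> 1 (only the first move was recorded)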

          Installation

          -

          How to create a React app with create-react-app

          -

          Now that we have the logic of tic tac toe, we can start building our React app. The easiest way to create a React app is to use create-react-app, a tool that sets up everything for us, such as the development server, the bundler, the transpiler, etc.

          -

          To use create-react-app, open your terminal and run the following command:

          -npx create-react-app tic-tac-toe-react -

          This will create a new folder called tic-tac-toe-react in your current directory, and install all the dependencies and files needed for your React app. It may take a few minutes to complete.

          -

          Once it is done, you can navigate to the tic-tac-toe-react folder and run the following command to start the development server:

cd tic-tac-toe-react
npm start

          This will open your web browser and display a default React page at http://localhost:3000/. You can edit the files in the src folder and see the changes reflected in the browser automatically.

          -

          Scaffold the project

          -

          How to create the components and styles files

          -

          The next step is to scaffold our project by creating the components and styles files. We will use a simple file structure that consists of three components: Square, Board, and App. We will also use a separate file for the styles and another file for the game logic.

          -

          To create the components and styles files, open your code editor and navigate to the src folder. Then, create the following files:

          -
            -
          • Square.js: This file will contain the Square component, which will render a button element that represents a square on the board.
          • -
          • Board.js: This file will contain the Board component, which will render a table element that contains nine Square components.
          • -
          • App.js: This file will contain the App component, which will render the Board component and some other elements, such as the status message and the reset button.
          • -
          • styles.css: This file will contain the styles for our app, such as colors, fonts, margins, etc.
          • -
• logic.js: This file will contain the game logic functions, such as calculateWinner and jumpTo (a possible sketch of this file appears just after this list).
          • -
          -
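The tutorial imports calculateWinner and jumpTo from logic.js but never prints that file in full, so here is a hedged sketch of one possible layout. The jumpTo shown here is an assumption on our part — a helper that returns the board as it stood after a given move — not the author's exact code.

// logic.js — one possible layout; the tutorial never prints this file verbatim.

// Re-home the calculateWinner function from the
// "How to check for a winner or a draw" section here, with `export` in front:
// export function calculateWinner(board) { ... }

// jumpTo, as assumed in this article: given the history of boards and a move
// number, return the board as it stood after that move (move 0 = empty board).
export function jumpTo(history, move) {
  return move === 0 ? Array(9).fill(null) : history[move - 1];
}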

          How to import and export the components

          -

          After creating the files, we need to import and export the components so that we can use them in other files. To do this, we need to use the import and export statements at the top and bottom of each file.

          -

          For example, in Square.js, we need to import React from 'react' and export Square as a default export:

// Import React from 'react'
import React from 'react';

// Define the Square component
function Square(props) {
  // Return JSX for a button element
  return (
    <button onClick={props.onClick}>
      {props.value}
    </button>
  );
}

// Export Square as a default export
export default Square;

          Similarly, in Board.js, we need to import React from 'react', import Square from './Square', and export Board as a default export:

// Import React from 'react'
import React from 'react';
// Import Square from './Square'
import Square from './Square';

// Define the Board component
function Board(props) {
  // Return JSX for a table element: one row of three squares per table row
  return (
    <table>
      <tbody>
        <tr>
          <td><Square value={props.board[0]} onClick={() => props.onClick(0)} /></td>
          <td><Square value={props.board[1]} onClick={() => props.onClick(1)} /></td>
          <td><Square value={props.board[2]} onClick={() => props.onClick(2)} /></td>
        </tr>
        <tr>
          <td><Square value={props.board[3]} onClick={() => props.onClick(3)} /></td>
          <td><Square value={props.board[4]} onClick={() => props.onClick(4)} /></td>
          <td><Square value={props.board[5]} onClick={() => props.onClick(5)} /></td>
        </tr>
        <tr>
          <td><Square value={props.board[6]} onClick={() => props.onClick(6)} /></td>
          <td><Square value={props.board[7]} onClick={() => props.onClick(7)} /></td>
          <td><Square value={props.board[8]} onClick={() => props.onClick(8)} /></td>
        </tr>
      </tbody>
    </table>
  );
}

// Export Board as a default export
export default Board;
          -

          And in App.js, we need to import React from 'react', import useState and useEffect from 'react', import Board from './Board', import calculateWinner and jumpTo from './logic', and import './styles.css' for the styles:

// Import React from 'react'
import React from 'react';
// Import useState and useEffect from 'react'
import { useState, useEffect } from 'react';
// Import Board from './Board'
import Board from './Board';
// Import calculateWinner and jumpTo from './logic'
import { calculateWinner, jumpTo } from './logic';
// Import './styles.css' for the styles
import './styles.css';

// Define the App component
function App() {
  // The current board, the current player and the history of moves
  const [board, setBoard] = useState(Array(9).fill(null));
  const [player, setPlayer] = useState('X');
  const [history, setHistory] = useState([]);

  // Store the move history in localStorage whenever it changes
  // (the storage key name is arbitrary)
  useEffect(() => {
    localStorage.setItem('tic-tac-toe-history', JSON.stringify(history));
  }, [history]);

  // Handle a click on square i
  function handleClick(i) {
    if (calculateWinner(board) || board[i]) return;
    const next = board.slice();
    next[i] = player;
    setBoard(next);
    setHistory([...history, next]);
    setPlayer(player === 'X' ? 'O' : 'X');
  }

  // Jump back to a previous move (the time travel feature)
  function handleJump(move) {
    setBoard(jumpTo(history, move));
    setHistory(history.slice(0, move));
    setPlayer(move % 2 === 0 ? 'X' : 'O');
  }

  // Work out the status message
  const winner = calculateWinner(board);
  let status;
  if (winner === 'X' || winner === 'O') {
    status = `Winner: ${winner}`;
  } else if (winner === 'Draw') {
    status = `It's a draw!`;
  } else {
    status = `Next player: ${player}`;
  }

  // Return JSX for the app element
  return (
    <div>
      <h1>Tic Tac Toe with React Hooks</h1>
      <Board board={board} onClick={handleClick} />
      <div className="status">{status}</div>
      <button onClick={() => handleJump(0)}>Reset</button>
      <ol>
        {history.map((board, move) => (
          <li key={move}>
            <button onClick={() => handleJump(move + 1)}>Go to move #{move + 1}</button>
          </li>
        ))}
      </ol>
    </div>
  );
}

// Export App as a default export
export default App;

          Conclusion

          -

          Congratulations! You have successfully built a tic tac toe game with React Hooks. You have learned how to use useState and useEffect hooks to manage state and side effects, how to create functional components with JSX, how to handle user actions with onClick handler, how to implement game logic with custom functions, and how to add a time travel feature with localStorage and map method.

          -

          This is just a basic example of what you can do with React Hooks. There are many more features and possibilities that you can explore and experiment with. For example, you can:

          -
            -
          • Make the game responsive or mobile-friendly by using media queries or CSS frameworks
          • -
          • Add animations or sound effects to the game by using CSS transitions or libraries
          • -
          • Make the game more challenging or add different modes by changing the board size or the rules
          • -
          • Use other React hooks, such as useContext, useReducer, or custom hooks, to enhance your app
          • -
          -

          We hope you enjoyed this tutorial and learned something new. If you have any questions or feedback, feel free to leave a comment below. Happy coding!

          -

          FAQs

          -

          What are some benefits of using React Hooks?

          -

          Some of the benefits of using React Hooks are:

          -
            -
          • They make your code more readable and maintainable by avoiding the complexity of class components.
          • -
          • They let you reuse stateful logic across different components without introducing higher-order components or render props.
          • -
          • They let you use more of React's features, such as context, reducers, custom hooks, etc.
          • -
          -

          How can I make the game responsive or mobile-friendly?

          -

          You can make the game responsive or mobile-friendly by using media queries or CSS frameworks. Media queries are a feature of CSS that let you apply different styles based on the screen size or device orientation. For example, you can use media queries to adjust the font size, the margin, or the layout of your app depending on the width of the screen. CSS frameworks are libraries that provide ready-made components and styles for building responsive web pages. For example, you can use Bootstrap, Material UI, or Tailwind CSS to create a grid system, a navbar, a button, etc. for your app.

          -

          How can I add animations or sound effects to the game?

          -

          You can add animations or sound effects to the game by using CSS transitions or libraries. CSS transitions are a feature of CSS that let you create smooth changes from one state to another. For example, you can use CSS transitions to change the color, the opacity, or the transform of your elements when they are clicked or hovered over. Libraries are external resources that provide additional functionality for your app. For example, you can use React Transition Group, React Spring, or Anime.js to create complex animations for your elements. You can also use Howler.js, Tone.js, or Pizzicato.js to play sound effects for your app.

          -

          How can I make the game more challenging or add different modes?

          -

          You can make the game more challenging or add different modes by changing the board size or the rules. For example, you can increase the board size from 3x3 to 4x4 or 5x5 and require four or five marks in a row to win. You can also change the rules from tic tac toe to gomoku, connect four, or ultimate tic tac toe. These are variations of tic tac toe that have different board sizes, shapes, or layers.

          -

          Where can I learn more about React Hooks or tic tac toe?

          -

          You can learn more about React Hooks or tic tac toe from the following resources:

          -
            -
          • React Hooks Introduction: This is the official documentation of React Hooks that explains what they are and how to use them.
          • -
          • React Hooks Reference: This is the official reference of React Hooks that provides detailed information and examples for each hook.
          • -
          • React Hooks FAQ: This is a collection of frequently asked questions and answers about React Hooks.
          • -
          • Tic Tac Toe Wikipedia: This is an encyclopedia article that gives an overview of tic tac toe and its history, variants, and strategies.
          • -
          • Tic Tac Toe Variations: This is a website that lets you play different variations of tic tac toe with different board sizes, shapes, or layers.
          • -
          -

          I hope you found these resources helpful and interesting. If you have any other questions or suggestions, feel free to leave a comment below.

          401be4b1e0
          -
          -
          \ No newline at end of file diff --git a/spaces/contluForse/HuggingGPT/assets/Analisis E Interpretacion De Estados Financieros Abraham Perdomo Moreno PDF.md b/spaces/contluForse/HuggingGPT/assets/Analisis E Interpretacion De Estados Financieros Abraham Perdomo Moreno PDF.md deleted file mode 100644 index 512512199663c8dc492b83554f2dd7385e419d11..0000000000000000000000000000000000000000 --- a/spaces/contluForse/HuggingGPT/assets/Analisis E Interpretacion De Estados Financieros Abraham Perdomo Moreno PDF.md +++ /dev/null @@ -1,6 +0,0 @@ -

          Analisis E Interpretacion De Estados Financieros Abraham Perdomo Moreno PDF


          Download File ❤❤❤ https://ssurll.com/2uzvjl



          -
          -Analisis E Interpretacion De Estados Financieros Abraham Perdomo Moreno PDF Amanda Diaz Actions, LR Presets rar. Adobe Soundbooth CS3 (Full Version ... 1fdad05405
          -
          -
          -

          diff --git a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/oneformer/detectron2/modeling/__init__.py b/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/oneformer/detectron2/modeling/__init__.py deleted file mode 100644 index ce9ddac2f3006c7ee422aab7239060190a9d95d1..0000000000000000000000000000000000000000 --- a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/oneformer/detectron2/modeling/__init__.py +++ /dev/null @@ -1,64 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -from annotator.oneformer.detectron2.layers import ShapeSpec - -from .anchor_generator import build_anchor_generator, ANCHOR_GENERATOR_REGISTRY -from .backbone import ( - BACKBONE_REGISTRY, - FPN, - Backbone, - ResNet, - ResNetBlockBase, - build_backbone, - build_resnet_backbone, - make_stage, - ViT, - SimpleFeaturePyramid, - get_vit_lr_decay_rate, - MViT, - SwinTransformer, -) -from .meta_arch import ( - META_ARCH_REGISTRY, - SEM_SEG_HEADS_REGISTRY, - GeneralizedRCNN, - PanopticFPN, - ProposalNetwork, - RetinaNet, - SemanticSegmentor, - build_model, - build_sem_seg_head, - FCOS, -) -from .postprocessing import detector_postprocess -from .proposal_generator import ( - PROPOSAL_GENERATOR_REGISTRY, - build_proposal_generator, - RPN_HEAD_REGISTRY, - build_rpn_head, -) -from .roi_heads import ( - ROI_BOX_HEAD_REGISTRY, - ROI_HEADS_REGISTRY, - ROI_KEYPOINT_HEAD_REGISTRY, - ROI_MASK_HEAD_REGISTRY, - ROIHeads, - StandardROIHeads, - BaseMaskRCNNHead, - BaseKeypointRCNNHead, - FastRCNNOutputLayers, - build_box_head, - build_keypoint_head, - build_mask_head, - build_roi_heads, -) -from .test_time_augmentation import DatasetMapperTTA, GeneralizedRCNNWithTTA -from .mmdet_wrapper import MMDetBackbone, MMDetDetector - -_EXCLUDE = {"ShapeSpec"} -__all__ = [k for k in globals().keys() if k not in _EXCLUDE and not k.startswith("_")] - - -from annotator.oneformer.detectron2.utils.env import fixup_module_metadata - -fixup_module_metadata(__name__, globals(), __all__) -del fixup_module_metadata diff --git a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/uniformer/mmcv/ops/gather_points.py b/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/uniformer/mmcv/ops/gather_points.py deleted file mode 100644 index f52f1677d8ea0facafc56a3672d37adb44677ff3..0000000000000000000000000000000000000000 --- a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/uniformer/mmcv/ops/gather_points.py +++ /dev/null @@ -1,57 +0,0 @@ -import torch -from torch.autograd import Function - -from ..utils import ext_loader - -ext_module = ext_loader.load_ext( - '_ext', ['gather_points_forward', 'gather_points_backward']) - - -class GatherPoints(Function): - """Gather points with given index.""" - - @staticmethod - def forward(ctx, features: torch.Tensor, - indices: torch.Tensor) -> torch.Tensor: - """ - Args: - features (Tensor): (B, C, N) features to gather. - indices (Tensor): (B, M) where M is the number of points. - - Returns: - Tensor: (B, C, M) where M is the number of points. 
- """ - assert features.is_contiguous() - assert indices.is_contiguous() - - B, npoint = indices.size() - _, C, N = features.size() - output = torch.cuda.FloatTensor(B, C, npoint) - - ext_module.gather_points_forward( - features, indices, output, b=B, c=C, n=N, npoints=npoint) - - ctx.for_backwards = (indices, C, N) - if torch.__version__ != 'parrots': - ctx.mark_non_differentiable(indices) - return output - - @staticmethod - def backward(ctx, grad_out): - idx, C, N = ctx.for_backwards - B, npoint = idx.size() - - grad_features = torch.cuda.FloatTensor(B, C, N).zero_() - grad_out_data = grad_out.data.contiguous() - ext_module.gather_points_backward( - grad_out_data, - idx, - grad_features.data, - b=B, - c=C, - n=N, - npoints=npoint) - return grad_features, None - - -gather_points = GatherPoints.apply diff --git a/spaces/course-demos/Rick_and_Morty_QA/README.md b/spaces/course-demos/Rick_and_Morty_QA/README.md deleted file mode 100644 index 56bd69856d79a6365eee78f59aa568a1ebd435e7..0000000000000000000000000000000000000000 --- a/spaces/course-demos/Rick_and_Morty_QA/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Rick & Morty Bot -emoji: 🤹‍♂️ -colorFrom: red -colorTo: pink -sdk: gradio -sdk_version: 3.0 -app_file: app.py -pinned: false -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference diff --git a/spaces/cownclown/Image-and-3D-Model-Creator/PIFu/lib/renderer/mesh.py b/spaces/cownclown/Image-and-3D-Model-Creator/PIFu/lib/renderer/mesh.py deleted file mode 100644 index a76ec5838d08d109dc24f58ca8ef3aff2ade552b..0000000000000000000000000000000000000000 --- a/spaces/cownclown/Image-and-3D-Model-Creator/PIFu/lib/renderer/mesh.py +++ /dev/null @@ -1,345 +0,0 @@ -import numpy as np - - -def save_obj_mesh(mesh_path, verts, faces): - file = open(mesh_path, 'w') - for v in verts: - file.write('v %.4f %.4f %.4f\n' % (v[0], v[1], v[2])) - for f in faces: - f_plus = f + 1 - file.write('f %d %d %d\n' % (f_plus[0], f_plus[1], f_plus[2])) - file.close() - -# https://github.com/ratcave/wavefront_reader -def read_mtlfile(fname): - materials = {} - with open(fname) as f: - lines = f.read().splitlines() - - for line in lines: - if line: - split_line = line.strip().split(' ', 1) - if len(split_line) < 2: - continue - - prefix, data = split_line[0], split_line[1] - if 'newmtl' in prefix: - material = {} - materials[data] = material - elif materials: - if data: - split_data = data.strip().split(' ') - - # assume texture maps are in the same level - # WARNING: do not include space in your filename!! 
- if 'map' in prefix: - material[prefix] = split_data[-1].split('\\')[-1] - elif len(split_data) > 1: - material[prefix] = tuple(float(d) for d in split_data) - else: - try: - material[prefix] = int(data) - except ValueError: - material[prefix] = float(data) - - return materials - - -def load_obj_mesh_mtl(mesh_file): - vertex_data = [] - norm_data = [] - uv_data = [] - - face_data = [] - face_norm_data = [] - face_uv_data = [] - - # face per material - face_data_mat = {} - face_norm_data_mat = {} - face_uv_data_mat = {} - - # current material name - mtl_data = None - cur_mat = None - - if isinstance(mesh_file, str): - f = open(mesh_file, "r") - else: - f = mesh_file - for line in f: - if isinstance(line, bytes): - line = line.decode("utf-8") - if line.startswith('#'): - continue - values = line.split() - if not values: - continue - - if values[0] == 'v': - v = list(map(float, values[1:4])) - vertex_data.append(v) - elif values[0] == 'vn': - vn = list(map(float, values[1:4])) - norm_data.append(vn) - elif values[0] == 'vt': - vt = list(map(float, values[1:3])) - uv_data.append(vt) - elif values[0] == 'mtllib': - mtl_data = read_mtlfile(mesh_file.replace(mesh_file.split('/')[-1],values[1])) - elif values[0] == 'usemtl': - cur_mat = values[1] - elif values[0] == 'f': - # local triangle data - l_face_data = [] - l_face_uv_data = [] - l_face_norm_data = [] - - # quad mesh - if len(values) > 4: - f = list(map(lambda x: int(x.split('/')[0]) if int(x.split('/')[0]) < 0 else int(x.split('/')[0])-1, values[1:4])) - l_face_data.append(f) - f = list(map(lambda x: int(x.split('/')[0]) if int(x.split('/')[0]) < 0 else int(x.split('/')[0])-1, [values[3], values[4], values[1]])) - l_face_data.append(f) - # tri mesh - else: - f = list(map(lambda x: int(x.split('/')[0]) if int(x.split('/')[0]) < 0 else int(x.split('/')[0])-1, values[1:4])) - l_face_data.append(f) - # deal with texture - if len(values[1].split('/')) >= 2: - # quad mesh - if len(values) > 4: - f = list(map(lambda x: int(x.split('/')[1]) if int(x.split('/')[1]) < 0 else int(x.split('/')[1])-1, values[1:4])) - l_face_uv_data.append(f) - f = list(map(lambda x: int(x.split('/')[1]) if int(x.split('/')[1]) < 0 else int(x.split('/')[1])-1, [values[3], values[4], values[1]])) - l_face_uv_data.append(f) - # tri mesh - elif len(values[1].split('/')[1]) != 0: - f = list(map(lambda x: int(x.split('/')[1]) if int(x.split('/')[1]) < 0 else int(x.split('/')[1])-1, values[1:4])) - l_face_uv_data.append(f) - # deal with normal - if len(values[1].split('/')) == 3: - # quad mesh - if len(values) > 4: - f = list(map(lambda x: int(x.split('/')[2]) if int(x.split('/')[2]) < 0 else int(x.split('/')[2])-1, values[1:4])) - l_face_norm_data.append(f) - f = list(map(lambda x: int(x.split('/')[2]) if int(x.split('/')[2]) < 0 else int(x.split('/')[2])-1, [values[3], values[4], values[1]])) - l_face_norm_data.append(f) - # tri mesh - elif len(values[1].split('/')[2]) != 0: - f = list(map(lambda x: int(x.split('/')[2]) if int(x.split('/')[2]) < 0 else int(x.split('/')[2])-1, values[1:4])) - l_face_norm_data.append(f) - - face_data += l_face_data - face_uv_data += l_face_uv_data - face_norm_data += l_face_norm_data - - if cur_mat is not None: - if cur_mat not in face_data_mat.keys(): - face_data_mat[cur_mat] = [] - if cur_mat not in face_uv_data_mat.keys(): - face_uv_data_mat[cur_mat] = [] - if cur_mat not in face_norm_data_mat.keys(): - face_norm_data_mat[cur_mat] = [] - face_data_mat[cur_mat] += l_face_data - face_uv_data_mat[cur_mat] += l_face_uv_data - 
face_norm_data_mat[cur_mat] += l_face_norm_data - - vertices = np.array(vertex_data) - faces = np.array(face_data) - - norms = np.array(norm_data) - norms = normalize_v3(norms) - face_normals = np.array(face_norm_data) - - uvs = np.array(uv_data) - face_uvs = np.array(face_uv_data) - - out_tuple = (vertices, faces, norms, face_normals, uvs, face_uvs) - - if cur_mat is not None and mtl_data is not None: - for key in face_data_mat: - face_data_mat[key] = np.array(face_data_mat[key]) - face_uv_data_mat[key] = np.array(face_uv_data_mat[key]) - face_norm_data_mat[key] = np.array(face_norm_data_mat[key]) - - out_tuple += (face_data_mat, face_norm_data_mat, face_uv_data_mat, mtl_data) - - return out_tuple - - -def load_obj_mesh(mesh_file, with_normal=False, with_texture=False): - vertex_data = [] - norm_data = [] - uv_data = [] - - face_data = [] - face_norm_data = [] - face_uv_data = [] - - if isinstance(mesh_file, str): - f = open(mesh_file, "r") - else: - f = mesh_file - for line in f: - if isinstance(line, bytes): - line = line.decode("utf-8") - if line.startswith('#'): - continue - values = line.split() - if not values: - continue - - if values[0] == 'v': - v = list(map(float, values[1:4])) - vertex_data.append(v) - elif values[0] == 'vn': - vn = list(map(float, values[1:4])) - norm_data.append(vn) - elif values[0] == 'vt': - vt = list(map(float, values[1:3])) - uv_data.append(vt) - - elif values[0] == 'f': - # quad mesh - if len(values) > 4: - f = list(map(lambda x: int(x.split('/')[0]), values[1:4])) - face_data.append(f) - f = list(map(lambda x: int(x.split('/')[0]), [values[3], values[4], values[1]])) - face_data.append(f) - # tri mesh - else: - f = list(map(lambda x: int(x.split('/')[0]), values[1:4])) - face_data.append(f) - - # deal with texture - if len(values[1].split('/')) >= 2: - # quad mesh - if len(values) > 4: - f = list(map(lambda x: int(x.split('/')[1]), values[1:4])) - face_uv_data.append(f) - f = list(map(lambda x: int(x.split('/')[1]), [values[3], values[4], values[1]])) - face_uv_data.append(f) - # tri mesh - elif len(values[1].split('/')[1]) != 0: - f = list(map(lambda x: int(x.split('/')[1]), values[1:4])) - face_uv_data.append(f) - # deal with normal - if len(values[1].split('/')) == 3: - # quad mesh - if len(values) > 4: - f = list(map(lambda x: int(x.split('/')[2]), values[1:4])) - face_norm_data.append(f) - f = list(map(lambda x: int(x.split('/')[2]), [values[3], values[4], values[1]])) - face_norm_data.append(f) - # tri mesh - elif len(values[1].split('/')[2]) != 0: - f = list(map(lambda x: int(x.split('/')[2]), values[1:4])) - face_norm_data.append(f) - - vertices = np.array(vertex_data) - faces = np.array(face_data) - 1 - - if with_texture and with_normal: - uvs = np.array(uv_data) - face_uvs = np.array(face_uv_data) - 1 - norms = np.array(norm_data) - if norms.shape[0] == 0: - norms = compute_normal(vertices, faces) - face_normals = faces - else: - norms = normalize_v3(norms) - face_normals = np.array(face_norm_data) - 1 - return vertices, faces, norms, face_normals, uvs, face_uvs - - if with_texture: - uvs = np.array(uv_data) - face_uvs = np.array(face_uv_data) - 1 - return vertices, faces, uvs, face_uvs - - if with_normal: - norms = np.array(norm_data) - norms = normalize_v3(norms) - face_normals = np.array(face_norm_data) - 1 - return vertices, faces, norms, face_normals - - return vertices, faces - - -def normalize_v3(arr): - ''' Normalize a numpy array of 3 component vectors shape=(n,3) ''' - lens = np.sqrt(arr[:, 0] ** 2 + arr[:, 1] ** 2 + arr[:, 2] ** 2) 
- eps = 0.00000001 - lens[lens < eps] = eps - arr[:, 0] /= lens - arr[:, 1] /= lens - arr[:, 2] /= lens - return arr - - -def compute_normal(vertices, faces): - # Create a zeroed array with the same type and shape as our vertices i.e., per vertex normal - norm = np.zeros(vertices.shape, dtype=vertices.dtype) - # Create an indexed view into the vertex array using the array of three indices for triangles - tris = vertices[faces] - # Calculate the normal for all the triangles, by taking the cross product of the vectors v1-v0, and v2-v0 in each triangle - n = np.cross(tris[::, 1] - tris[::, 0], tris[::, 2] - tris[::, 0]) - # n is now an array of normals per triangle. The length of each normal is dependent the vertices, - # we need to normalize these, so that our next step weights each normal equally. - normalize_v3(n) - # now we have a normalized array of normals, one per triangle, i.e., per triangle normals. - # But instead of one per triangle (i.e., flat shading), we add to each vertex in that triangle, - # the triangles' normal. Multiple triangles would then contribute to every vertex, so we need to normalize again afterwards. - # The cool part, we can actually add the normals through an indexed view of our (zeroed) per vertex normal array - norm[faces[:, 0]] += n - norm[faces[:, 1]] += n - norm[faces[:, 2]] += n - normalize_v3(norm) - - return norm - -# compute tangent and bitangent -def compute_tangent(vertices, faces, normals, uvs, faceuvs): - # NOTE: this could be numerically unstable around [0,0,1] - # but other current solutions are pretty freaky somehow - c1 = np.cross(normals, np.array([0,1,0.0])) - tan = c1 - normalize_v3(tan) - btan = np.cross(normals, tan) - - # NOTE: traditional version is below - - # pts_tris = vertices[faces] - # uv_tris = uvs[faceuvs] - - # W = np.stack([pts_tris[::, 1] - pts_tris[::, 0], pts_tris[::, 2] - pts_tris[::, 0]],2) - # UV = np.stack([uv_tris[::, 1] - uv_tris[::, 0], uv_tris[::, 2] - uv_tris[::, 0]], 1) - - # for i in range(W.shape[0]): - # W[i,::] = W[i,::].dot(np.linalg.inv(UV[i,::])) - - # tan = np.zeros(vertices.shape, dtype=vertices.dtype) - # tan[faces[:,0]] += W[:,:,0] - # tan[faces[:,1]] += W[:,:,0] - # tan[faces[:,2]] += W[:,:,0] - - # btan = np.zeros(vertices.shape, dtype=vertices.dtype) - # btan[faces[:,0]] += W[:,:,1] - # btan[faces[:,1]] += W[:,:,1] - # btan[faces[:,2]] += W[:,:,1] - - # normalize_v3(tan) - - # ndott = np.sum(normals*tan, 1, keepdims=True) - # tan = tan - ndott * normals - - # normalize_v3(btan) - # normalize_v3(tan) - - # tan[np.sum(np.cross(normals, tan) * btan, 1) < 0,:] *= -1.0 - - return tan, btan - -if __name__ == '__main__': - pts, tri, nml, trin, uvs, triuv = load_obj_mesh('/home/ICT2000/ssaito/Documents/Body/tmp/Baseball_Pitching/0012.obj', True, True) - compute_tangent(pts, tri, uvs, triuv) \ No newline at end of file diff --git a/spaces/daddyjin/TalkingFaceGeneration/Demo_TFR_Pirenderer/src/face3d/util/nvdiffrast.py b/spaces/daddyjin/TalkingFaceGeneration/Demo_TFR_Pirenderer/src/face3d/util/nvdiffrast.py deleted file mode 100644 index f3245859c650afbfe841a66b74cddefaf28820d9..0000000000000000000000000000000000000000 --- a/spaces/daddyjin/TalkingFaceGeneration/Demo_TFR_Pirenderer/src/face3d/util/nvdiffrast.py +++ /dev/null @@ -1,126 +0,0 @@ -"""This script is the differentiable renderer for Deep3DFaceRecon_pytorch - Attention, antialiasing step is missing in current version. 
-""" -import pytorch3d.ops -import torch -import torch.nn.functional as F -import kornia -from kornia.geometry.camera import pixel2cam -import numpy as np -from typing import List -from scipy.io import loadmat -from torch import nn - -from pytorch3d.structures import Meshes -from pytorch3d.renderer import ( - look_at_view_transform, - FoVPerspectiveCameras, - DirectionalLights, - RasterizationSettings, - MeshRenderer, - MeshRasterizer, - SoftPhongShader, - TexturesUV, -) - -# def ndc_projection(x=0.1, n=1.0, f=50.0): -# return np.array([[n/x, 0, 0, 0], -# [ 0, n/-x, 0, 0], -# [ 0, 0, -(f+n)/(f-n), -(2*f*n)/(f-n)], -# [ 0, 0, -1, 0]]).astype(np.float32) - -class MeshRenderer(nn.Module): - def __init__(self, - rasterize_fov, - znear=0.1, - zfar=10, - rasterize_size=224): - super(MeshRenderer, self).__init__() - - # x = np.tan(np.deg2rad(rasterize_fov * 0.5)) * znear - # self.ndc_proj = torch.tensor(ndc_projection(x=x, n=znear, f=zfar)).matmul( - # torch.diag(torch.tensor([1., -1, -1, 1]))) - self.rasterize_size = rasterize_size - self.fov = rasterize_fov - self.znear = znear - self.zfar = zfar - - self.rasterizer = None - - def forward(self, vertex, tri, feat=None): - """ - Return: - mask -- torch.tensor, size (B, 1, H, W) - depth -- torch.tensor, size (B, 1, H, W) - features(optional) -- torch.tensor, size (B, C, H, W) if feat is not None - - Parameters: - vertex -- torch.tensor, size (B, N, 3) - tri -- torch.tensor, size (B, M, 3) or (M, 3), triangles - feat(optional) -- torch.tensor, size (B, N ,C), features - """ - device = vertex.device - rsize = int(self.rasterize_size) - # ndc_proj = self.ndc_proj.to(device) - # trans to homogeneous coordinates of 3d vertices, the direction of y is the same as v - if vertex.shape[-1] == 3: - vertex = torch.cat([vertex, torch.ones([*vertex.shape[:2], 1]).to(device)], dim=-1) - vertex[..., 0] = -vertex[..., 0] - - - # vertex_ndc = vertex @ ndc_proj.t() - if self.rasterizer is None: - self.rasterizer = MeshRasterizer() - print("create rasterizer on device cuda:%d"%device.index) - - # ranges = None - # if isinstance(tri, List) or len(tri.shape) == 3: - # vum = vertex_ndc.shape[1] - # fnum = torch.tensor([f.shape[0] for f in tri]).unsqueeze(1).to(device) - # fstartidx = torch.cumsum(fnum, dim=0) - fnum - # ranges = torch.cat([fstartidx, fnum], axis=1).type(torch.int32).cpu() - # for i in range(tri.shape[0]): - # tri[i] = tri[i] + i*vum - # vertex_ndc = torch.cat(vertex_ndc, dim=0) - # tri = torch.cat(tri, dim=0) - - # for range_mode vetex: [B*N, 4], tri: [B*M, 3], for instance_mode vetex: [B, N, 4], tri: [M, 3] - tri = tri.type(torch.int32).contiguous() - - # rasterize - cameras = FoVPerspectiveCameras( - device=device, - fov=self.fov, - znear=self.znear, - zfar=self.zfar, - ) - - raster_settings = RasterizationSettings( - image_size=rsize - ) - - # print(vertex.shape, tri.shape) - mesh = Meshes(vertex.contiguous()[...,:3], tri.unsqueeze(0).repeat((vertex.shape[0],1,1))) - - fragments = self.rasterizer(mesh, cameras = cameras, raster_settings = raster_settings) - rast_out = fragments.pix_to_face.squeeze(-1) - depth = fragments.zbuf - - # render depth - depth = depth.permute(0, 3, 1, 2) - mask = (rast_out > 0).float().unsqueeze(1) - depth = mask * depth - - - image = None - if feat is not None: - attributes = feat.reshape(-1,3)[mesh.faces_packed()] - image = pytorch3d.ops.interpolate_face_attributes(fragments.pix_to_face, - fragments.bary_coords, - attributes) - # print(image.shape) - image = image.squeeze(-2).permute(0, 3, 1, 2) - image = mask * image - - 
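- # mask is the rasterized silhouette, depth the silhouette-masked z-buffer, and image the masked per-vertex features (None when feat is None)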
return mask, depth, image - diff --git a/spaces/danterivers/music-generation-samples/audiocraft/utils/notebook.py b/spaces/danterivers/music-generation-samples/audiocraft/utils/notebook.py deleted file mode 100644 index 019b9d19e5bef976bedddf428fd25da42a8a9726..0000000000000000000000000000000000000000 --- a/spaces/danterivers/music-generation-samples/audiocraft/utils/notebook.py +++ /dev/null @@ -1,32 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -try: - import IPython.display as ipd # type: ignore -except ImportError: - # Note in a notebook... - pass - - -import torch - - -def display_audio(samples: torch.Tensor, sample_rate: int): - """Renders an audio player for the given audio samples. - - Args: - samples (torch.Tensor): a Tensor of decoded audio samples - with shapes [B, C, T] or [C, T] - sample_rate (int): sample rate audio should be displayed with. - """ - assert samples.dim() == 2 or samples.dim() == 3 - - samples = samples.detach().cpu() - if samples.dim() == 2: - samples = samples[None, ...] - - for audio in samples: - ipd.display(ipd.Audio(audio, rate=sample_rate)) diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/altair/__init__.py b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/altair/__init__.py deleted file mode 100644 index ad57eb9d3889bfbf5bbe868b0eaae7aa4d3049c1..0000000000000000000000000000000000000000 --- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/altair/__init__.py +++ /dev/null @@ -1,613 +0,0 @@ -# ruff: noqa -__version__ = "5.0.1" - -from typing import Any - -# Necessary as mypy would see expr as the module alt.expr although due to how -# the imports are set up it is expr in the alt.expr module -expr: Any - - -# The content of __all__ is automatically written by -# tools/update_init_file.py. Do not modify directly. 
-__all__ = [ - "Aggregate", - "AggregateOp", - "AggregateTransform", - "AggregatedFieldDef", - "Align", - "AllSortString", - "Angle", - "AngleDatum", - "AngleValue", - "AnyMark", - "AnyMarkConfig", - "AreaConfig", - "ArgmaxDef", - "ArgminDef", - "AutoSizeParams", - "AutosizeType", - "Axis", - "AxisConfig", - "AxisOrient", - "AxisResolveMap", - "BBox", - "BarConfig", - "BaseTitleNoValueRefs", - "Baseline", - "Bin", - "BinExtent", - "BinParams", - "BinTransform", - "BindCheckbox", - "BindDirect", - "BindInput", - "BindRadioSelect", - "BindRange", - "Binding", - "Blend", - "BoxPlot", - "BoxPlotConfig", - "BoxPlotDef", - "BrushConfig", - "CalculateTransform", - "Categorical", - "Chart", - "Color", - "ColorDatum", - "ColorDef", - "ColorName", - "ColorScheme", - "ColorValue", - "Column", - "CompositeMark", - "CompositeMarkDef", - "CompositionConfig", - "ConcatChart", - "ConcatSpecGenericSpec", - "ConditionalAxisColor", - "ConditionalAxisLabelAlign", - "ConditionalAxisLabelBaseline", - "ConditionalAxisLabelFontStyle", - "ConditionalAxisLabelFontWeight", - "ConditionalAxisNumber", - "ConditionalAxisNumberArray", - "ConditionalAxisPropertyAlignnull", - "ConditionalAxisPropertyColornull", - "ConditionalAxisPropertyFontStylenull", - "ConditionalAxisPropertyFontWeightnull", - "ConditionalAxisPropertyTextBaselinenull", - "ConditionalAxisPropertynumberArraynull", - "ConditionalAxisPropertynumbernull", - "ConditionalAxisPropertystringnull", - "ConditionalAxisString", - "ConditionalMarkPropFieldOrDatumDef", - "ConditionalMarkPropFieldOrDatumDefTypeForShape", - "ConditionalParameterMarkPropFieldOrDatumDef", - "ConditionalParameterMarkPropFieldOrDatumDefTypeForShape", - "ConditionalParameterStringFieldDef", - "ConditionalParameterValueDefGradientstringnullExprRef", - "ConditionalParameterValueDefTextExprRef", - "ConditionalParameterValueDefnumber", - "ConditionalParameterValueDefnumberArrayExprRef", - "ConditionalParameterValueDefnumberExprRef", - "ConditionalParameterValueDefstringExprRef", - "ConditionalParameterValueDefstringnullExprRef", - "ConditionalPredicateMarkPropFieldOrDatumDef", - "ConditionalPredicateMarkPropFieldOrDatumDefTypeForShape", - "ConditionalPredicateStringFieldDef", - "ConditionalPredicateValueDefAlignnullExprRef", - "ConditionalPredicateValueDefColornullExprRef", - "ConditionalPredicateValueDefFontStylenullExprRef", - "ConditionalPredicateValueDefFontWeightnullExprRef", - "ConditionalPredicateValueDefGradientstringnullExprRef", - "ConditionalPredicateValueDefTextBaselinenullExprRef", - "ConditionalPredicateValueDefTextExprRef", - "ConditionalPredicateValueDefnumber", - "ConditionalPredicateValueDefnumberArrayExprRef", - "ConditionalPredicateValueDefnumberArraynullExprRef", - "ConditionalPredicateValueDefnumberExprRef", - "ConditionalPredicateValueDefnumbernullExprRef", - "ConditionalPredicateValueDefstringExprRef", - "ConditionalPredicateValueDefstringnullExprRef", - "ConditionalStringFieldDef", - "ConditionalValueDefGradientstringnullExprRef", - "ConditionalValueDefTextExprRef", - "ConditionalValueDefnumber", - "ConditionalValueDefnumberArrayExprRef", - "ConditionalValueDefnumberExprRef", - "ConditionalValueDefstringExprRef", - "ConditionalValueDefstringnullExprRef", - "Config", - "CsvDataFormat", - "Cursor", - "Cyclical", - "Data", - "DataFormat", - "DataSource", - "Datasets", - "DateTime", - "DatumChannelMixin", - "DatumDef", - "Day", - "DensityTransform", - "DerivedStream", - "Description", - "DescriptionValue", - "Detail", - "Dict", - "DictInlineDataset", - "DictSelectionInit", 
- "DictSelectionInitInterval", - "Diverging", - "DomainUnionWith", - "DsvDataFormat", - "Element", - "Encoding", - "EncodingSortField", - "ErrorBand", - "ErrorBandConfig", - "ErrorBandDef", - "ErrorBar", - "ErrorBarConfig", - "ErrorBarDef", - "ErrorBarExtent", - "EventStream", - "EventType", - "Expr", - "ExprRef", - "Facet", - "FacetChart", - "FacetEncodingFieldDef", - "FacetFieldDef", - "FacetMapping", - "FacetSpec", - "FacetedEncoding", - "FacetedUnitSpec", - "Feature", - "FeatureCollection", - "FeatureGeometryGeoJsonProperties", - "Field", - "FieldChannelMixin", - "FieldDefWithoutScale", - "FieldEqualPredicate", - "FieldGTEPredicate", - "FieldGTPredicate", - "FieldLTEPredicate", - "FieldLTPredicate", - "FieldName", - "FieldOneOfPredicate", - "FieldOrDatumDefWithConditionDatumDefGradientstringnull", - "FieldOrDatumDefWithConditionDatumDefnumber", - "FieldOrDatumDefWithConditionDatumDefnumberArray", - "FieldOrDatumDefWithConditionDatumDefstringnull", - "FieldOrDatumDefWithConditionMarkPropFieldDefGradientstringnull", - "FieldOrDatumDefWithConditionMarkPropFieldDefTypeForShapestringnull", - "FieldOrDatumDefWithConditionMarkPropFieldDefnumber", - "FieldOrDatumDefWithConditionMarkPropFieldDefnumberArray", - "FieldOrDatumDefWithConditionStringDatumDefText", - "FieldOrDatumDefWithConditionStringFieldDefText", - "FieldOrDatumDefWithConditionStringFieldDefstring", - "FieldRange", - "FieldRangePredicate", - "FieldValidPredicate", - "Fill", - "FillDatum", - "FillOpacity", - "FillOpacityDatum", - "FillOpacityValue", - "FillValue", - "FilterTransform", - "Fit", - "FlattenTransform", - "FoldTransform", - "FontStyle", - "FontWeight", - "Generator", - "GenericUnitSpecEncodingAnyMark", - "GeoJsonFeature", - "GeoJsonFeatureCollection", - "GeoJsonProperties", - "Geometry", - "GeometryCollection", - "Gradient", - "GradientStop", - "GraticuleGenerator", - "GraticuleParams", - "HConcatChart", - "HConcatSpecGenericSpec", - "Header", - "HeaderConfig", - "HexColor", - "Href", - "HrefValue", - "Impute", - "ImputeMethod", - "ImputeParams", - "ImputeSequence", - "ImputeTransform", - "InlineData", - "InlineDataset", - "Interpolate", - "IntervalSelectionConfig", - "IntervalSelectionConfigWithoutType", - "JoinAggregateFieldDef", - "JoinAggregateTransform", - "JsonDataFormat", - "Key", - "LabelOverlap", - "LatLongDef", - "LatLongFieldDef", - "Latitude", - "Latitude2", - "Latitude2Datum", - "Latitude2Value", - "LatitudeDatum", - "LayerChart", - "LayerRepeatMapping", - "LayerRepeatSpec", - "LayerSpec", - "LayoutAlign", - "Legend", - "LegendBinding", - "LegendConfig", - "LegendOrient", - "LegendResolveMap", - "LegendStreamBinding", - "LineConfig", - "LineString", - "LinearGradient", - "LocalMultiTimeUnit", - "LocalSingleTimeUnit", - "Locale", - "LoessTransform", - "LogicalAndPredicate", - "LogicalNotPredicate", - "LogicalOrPredicate", - "Longitude", - "Longitude2", - "Longitude2Datum", - "Longitude2Value", - "LongitudeDatum", - "LookupData", - "LookupSelection", - "LookupTransform", - "Mark", - "MarkConfig", - "MarkDef", - "MarkPropDefGradientstringnull", - "MarkPropDefnumber", - "MarkPropDefnumberArray", - "MarkPropDefstringnullTypeForShape", - "MarkType", - "MaxRowsError", - "MergedStream", - "Month", - "MultiLineString", - "MultiPoint", - "MultiPolygon", - "MultiTimeUnit", - "NamedData", - "NonArgAggregateOp", - "NonLayerRepeatSpec", - "NonNormalizedSpec", - "NumberLocale", - "NumericArrayMarkPropDef", - "NumericMarkPropDef", - "OffsetDef", - "Opacity", - "OpacityDatum", - "OpacityValue", - "Order", - "OrderFieldDef", 
- "OrderValue", - "OrderValueDef", - "Orient", - "Orientation", - "OverlayMarkDef", - "Padding", - "Parameter", - "ParameterExpression", - "ParameterExtent", - "ParameterName", - "ParameterPredicate", - "Parse", - "ParseValue", - "PivotTransform", - "Point", - "PointSelectionConfig", - "PointSelectionConfigWithoutType", - "PolarDef", - "Polygon", - "Position", - "Position2Def", - "PositionDatumDef", - "PositionDatumDefBase", - "PositionDef", - "PositionFieldDef", - "PositionFieldDefBase", - "PositionValueDef", - "Predicate", - "PredicateComposition", - "PrimitiveValue", - "Projection", - "ProjectionConfig", - "ProjectionType", - "QuantileTransform", - "RadialGradient", - "Radius", - "Radius2", - "Radius2Datum", - "Radius2Value", - "RadiusDatum", - "RadiusValue", - "RangeConfig", - "RangeEnum", - "RangeRaw", - "RangeRawArray", - "RangeScheme", - "RectConfig", - "RegressionTransform", - "RelativeBandSize", - "RepeatChart", - "RepeatMapping", - "RepeatRef", - "RepeatSpec", - "Resolve", - "ResolveMode", - "Root", - "Row", - "RowColLayoutAlign", - "RowColboolean", - "RowColnumber", - "RowColumnEncodingFieldDef", - "SCHEMA_URL", - "SCHEMA_VERSION", - "SampleTransform", - "Scale", - "ScaleBinParams", - "ScaleBins", - "ScaleConfig", - "ScaleDatumDef", - "ScaleFieldDef", - "ScaleInterpolateEnum", - "ScaleInterpolateParams", - "ScaleResolveMap", - "ScaleType", - "SchemaBase", - "SchemeParams", - "SecondaryFieldDef", - "SelectionConfig", - "SelectionExpression", - "SelectionInit", - "SelectionInitInterval", - "SelectionInitIntervalMapping", - "SelectionInitMapping", - "SelectionParameter", - "SelectionPredicateComposition", - "SelectionResolution", - "SelectionType", - "SequenceGenerator", - "SequenceParams", - "SequentialMultiHue", - "SequentialSingleHue", - "Shape", - "ShapeDatum", - "ShapeDef", - "ShapeValue", - "SharedEncoding", - "SingleDefUnitChannel", - "SingleTimeUnit", - "Size", - "SizeDatum", - "SizeValue", - "Sort", - "SortArray", - "SortByChannel", - "SortByChannelDesc", - "SortByEncoding", - "SortField", - "SortOrder", - "Spec", - "SphereGenerator", - "StackOffset", - "StackTransform", - "StandardType", - "Step", - "StepFor", - "Stream", - "StringFieldDef", - "StringFieldDefWithCondition", - "StringValueDefWithCondition", - "Stroke", - "StrokeCap", - "StrokeDash", - "StrokeDashDatum", - "StrokeDashValue", - "StrokeDatum", - "StrokeJoin", - "StrokeOpacity", - "StrokeOpacityDatum", - "StrokeOpacityValue", - "StrokeValue", - "StrokeWidth", - "StrokeWidthDatum", - "StrokeWidthValue", - "StyleConfigIndex", - "SymbolShape", - "TOPLEVEL_ONLY_KEYS", - "Text", - "TextBaseline", - "TextDatum", - "TextDef", - "TextDirection", - "TextValue", - "Theta", - "Theta2", - "Theta2Datum", - "Theta2Value", - "ThetaDatum", - "ThetaValue", - "TickConfig", - "TickCount", - "TimeInterval", - "TimeIntervalStep", - "TimeLocale", - "TimeUnit", - "TimeUnitParams", - "TimeUnitTransform", - "Title", - "TitleAnchor", - "TitleConfig", - "TitleFrame", - "TitleOrient", - "TitleParams", - "Tooltip", - "TooltipContent", - "TooltipValue", - "TopLevelConcatSpec", - "TopLevelFacetSpec", - "TopLevelHConcatSpec", - "TopLevelLayerSpec", - "TopLevelMixin", - "TopLevelParameter", - "TopLevelRepeatSpec", - "TopLevelSelectionParameter", - "TopLevelSpec", - "TopLevelUnitSpec", - "TopLevelVConcatSpec", - "TopoDataFormat", - "Transform", - "Type", - "TypeForShape", - "TypedFieldDef", - "URI", - "Undefined", - "UnitSpec", - "UnitSpecWithFrame", - "Url", - "UrlData", - "UrlValue", - "UtcMultiTimeUnit", - "UtcSingleTimeUnit", - 
"VConcatChart", - "VConcatSpecGenericSpec", - "VEGAEMBED_VERSION", - "VEGALITE_VERSION", - "VEGA_VERSION", - "ValueChannelMixin", - "ValueDefWithConditionMarkPropFieldOrDatumDefGradientstringnull", - "ValueDefWithConditionMarkPropFieldOrDatumDefTypeForShapestringnull", - "ValueDefWithConditionMarkPropFieldOrDatumDefnumber", - "ValueDefWithConditionMarkPropFieldOrDatumDefnumberArray", - "ValueDefWithConditionMarkPropFieldOrDatumDefstringnull", - "ValueDefWithConditionStringFieldDefText", - "ValueDefnumber", - "ValueDefnumberwidthheightExprRef", - "VariableParameter", - "Vector10string", - "Vector12string", - "Vector2DateTime", - "Vector2Vector2number", - "Vector2boolean", - "Vector2number", - "Vector2string", - "Vector3number", - "Vector7string", - "VegaLite", - "VegaLiteSchema", - "ViewBackground", - "ViewConfig", - "WindowEventType", - "WindowFieldDef", - "WindowOnlyOp", - "WindowTransform", - "X", - "X2", - "X2Datum", - "X2Value", - "XDatum", - "XError", - "XError2", - "XError2Value", - "XErrorValue", - "XOffset", - "XOffsetDatum", - "XOffsetValue", - "XValue", - "Y", - "Y2", - "Y2Datum", - "Y2Value", - "YDatum", - "YError", - "YError2", - "YError2Value", - "YErrorValue", - "YOffset", - "YOffsetDatum", - "YOffsetValue", - "YValue", - "api", - "binding", - "binding_checkbox", - "binding_radio", - "binding_range", - "binding_select", - "channels", - "check_fields_and_encodings", - "concat", - "condition", - "core", - "curry", - "data", - "data_transformers", - "datum", - "default_data_transformer", - "display", - "expr", - "graticule", - "hconcat", - "layer", - "limit_rows", - "load_ipython_extension", - "load_schema", - "mixins", - "overload", - "param", - "parse_shorthand", - "pipe", - "renderers", - "repeat", - "sample", - "schema", - "selection_interval", - "selection_point", - "sequence", - "sphere", - "theme", - "themes", - "to_csv", - "to_json", - "to_values", - "topo_feature", - "utils", - "v5", - "value", - "vconcat", - "vegalite", - "with_property_setters", -] - - -def __dir__(): - return __all__ - - -from .vegalite import * - - -def load_ipython_extension(ipython): - from ._magics import vegalite - - ipython.register_magic_function(vegalite, "cell") diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/fontTools/pens/quartzPen.py b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/fontTools/pens/quartzPen.py deleted file mode 100644 index 6e1228d6f2b8bbc78cf52864ccaf3b249a654749..0000000000000000000000000000000000000000 --- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/fontTools/pens/quartzPen.py +++ /dev/null @@ -1,44 +0,0 @@ -from fontTools.pens.basePen import BasePen - -from Quartz.CoreGraphics import CGPathCreateMutable, CGPathMoveToPoint -from Quartz.CoreGraphics import CGPathAddLineToPoint, CGPathAddCurveToPoint -from Quartz.CoreGraphics import CGPathAddQuadCurveToPoint, CGPathCloseSubpath - - -__all__ = ["QuartzPen"] - - -class QuartzPen(BasePen): - - """A pen that creates a CGPath - - Parameters - - path: an optional CGPath to add to - - xform: an optional CGAffineTransform to apply to the path - """ - - def __init__(self, glyphSet, path=None, xform=None): - BasePen.__init__(self, glyphSet) - if path is None: - path = CGPathCreateMutable() - self.path = path - self.xform = xform - - def _moveTo(self, pt): - x, y = pt - CGPathMoveToPoint(self.path, self.xform, x, y) - - def _lineTo(self, pt): - x, y = pt - CGPathAddLineToPoint(self.path, self.xform, x, y) - - def _curveToOne(self, 
p1, p2, p3): - (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3 - CGPathAddCurveToPoint(self.path, self.xform, x1, y1, x2, y2, x3, y3) - - def _qCurveToOne(self, p1, p2): - (x1, y1), (x2, y2) = p1, p2 - CGPathAddQuadCurveToPoint(self.path, self.xform, x1, y1, x2, y2) - - def _closePath(self): - CGPathCloseSubpath(self.path) diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/gradio/templates/frontend/assets/index-846a9041.js b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/gradio/templates/frontend/assets/index-846a9041.js deleted file mode 100644 index 902726e1aad4ad79abc9abb97c56d85fee6dde10..0000000000000000000000000000000000000000 --- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/gradio/templates/frontend/assets/index-846a9041.js +++ /dev/null @@ -1,2 +0,0 @@ -import{S as X,e as y,s as p,f as ne,g as O,h as J,j as E,n as $,k as H,y as Ae,m as U,o as F,P as ae,p as P,w as C,r as ee,u as S,v as le,B as De,C as je,I as ie,K as Z,af as Be,ao as re,Z as Ce,t as _e,Y as R,x as ce,N as G,O as Q,F as q,G as K,ap as me,T as Y,H as V,am as Je,V as Se,ae as Ne,Q as Ee,R as Re,E as T}from"./index-39fce9e2.js";import{b as he,B as Te}from"./Button-79f6e3bf.js";import{B as He}from"./BlockTitle-fa702e63.js";import"./Info-7c1e7874.js";function Le(l){let e,n;return{c(){e=ne("svg"),n=ne("path"),O(n,"d","M5 8l4 4 4-4z"),O(e,"class","dropdown-arrow svelte-p5edak"),O(e,"xmlns","http://www.w3.org/2000/svg"),O(e,"width","18"),O(e,"height","18"),O(e,"viewBox","0 0 18 18")},m(t,o){J(t,e,o),E(e,n)},p:$,i:$,o:$,d(t){t&&H(e)}}}class Me extends X{constructor(e){super(),y(this,e,null,Le,p,{})}}function Ue(l){let e,n;return{c(){e=ne("svg"),n=ne("path"),O(n,"d","M19 6.41L17.59 5 12 10.59 6.41 5 5 6.41 10.59 12 5 17.59 6.41 19 12 13.41 17.59 19 19 17.59 13.41 12z"),O(e,"xmlns","http://www.w3.org/2000/svg"),O(e,"width","16"),O(e,"height","16"),O(e,"viewBox","0 0 24 24")},m(t,o){J(t,e,o),E(e,n)},p:$,i:$,o:$,d(t){t&&H(e)}}}class ze extends X{constructor(e){super(),y(this,e,null,Ue,p,{})}}function de(l,e,n){const t=l.slice();return t[24]=e[n],t}function we(l){let e,n,t,o,m,u=ie(l[0]),r=[];for(let i=0;i{t&&(n||(n=re(e,he,{duration:200,y:5},!0)),n.run(1))}),t=!0)},o(i){i&&(n||(n=re(e,he,{duration:200,y:5},!1)),n.run(0)),t=!1},d(i){i&&H(e),Ce(r,i),l[21](null),i&&n&&n.end(),o=!1,m()}}}function ge(l){let e,n,t,o=l[24]+"",m,u,r,i;return{c(){e=U("li"),n=U("span"),n.textContent="✓",t=F(),m=_e(o),u=F(),O(n,"class","inner-item svelte-1aonegi"),R(n,"hide",!l[11].includes(l[24])),O(e,"class","item svelte-1aonegi"),O(e,"role","button"),O(e,"data-value",r=l[24]),O(e,"aria-label",i=l[24]),R(e,"selected",l[11].includes(l[24])),R(e,"active",l[2]===l[24]),R(e,"bg-gray-100",l[2]===l[24]),R(e,"dark:bg-gray-600",l[2]===l[24])},m(s,f){J(s,e,f),E(e,n),E(e,t),E(e,m),E(e,u)},p(s,f){f&2049&&R(n,"hide",!s[11].includes(s[24])),f&1&&o!==(o=s[24]+"")&&ce(m,o),f&1&&r!==(r=s[24])&&O(e,"data-value",r),f&1&&i!==(i=s[24])&&O(e,"aria-label",i),f&2049&&R(e,"selected",s[11].includes(s[24])),f&5&&R(e,"active",s[2]===s[24]),f&5&&R(e,"bg-gray-100",s[2]===s[24]),f&5&&R(e,"dark:bg-gray-600",s[2]===s[24])},d(s){s&&H(e)}}}function qe(l){let e,n,t,o,m;Ae(l[18]);let 
u=l[1]&&!l[3]&&we(l);return{c(){e=U("div"),n=F(),u&&u.c(),t=ae(),O(e,"class","reference")},m(r,i){J(r,e,i),l[19](e),J(r,n,i),u&&u.m(r,i),J(r,t,i),o||(m=[P(window,"scroll",l[12]),P(window,"resize",l[18])],o=!0)},p(r,[i]){r[1]&&!r[3]?u?(u.p(r,i),i&10&&C(u,1)):(u=we(r),u.c(),C(u,1),u.m(t.parentNode,t)):u&&(ee(),S(u,1,1,()=>{u=null}),le())},i(r){C(u)},o(r){S(u)},d(r){r&&(H(e),H(n),H(t)),l[19](null),u&&u.d(r),o=!1,De(m)}}}function Ke(l,e,n){let t,{value:o=void 0}=e,{filtered:m}=e,{showOptions:u=!1}=e,{activeOption:r}=e,{disabled:i=!1}=e,s,f,g,c,h,j,w,A,b,v;function N(){const{top:k,bottom:W}=h.getBoundingClientRect();n(15,s=k),n(16,f=v-W)}let z=null;function B(){u&&(z!==null&&clearTimeout(z),z=setTimeout(()=>{N(),z=null},10))}const L=je();function I(){n(10,v=window.innerHeight)}function d(k){G[k?"unshift":"push"](()=>{h=k,n(4,h)})}const D=k=>L("change",k);function _(k){G[k?"unshift":"push"](()=>{j=k,n(5,j)})}return l.$$set=k=>{"value"in k&&n(14,o=k.value),"filtered"in k&&n(0,m=k.filtered),"showOptions"in k&&n(1,u=k.showOptions),"activeOption"in k&&n(2,r=k.activeOption),"disabled"in k&&n(3,i=k.disabled)},l.$$.update=()=>{if(l.$$.dirty&245810){if(u&&h){if(j&&typeof o=="string"){let W=j.querySelectorAll("li");for(const x of Array.from(W))if(x.getAttribute("data-value")===o){j.scrollTo(0,x.offsetTop);break}}N();const k=h.parentElement?.getBoundingClientRect();n(17,g=k?.height||0),n(6,c=k?.width||0)}f>s?(n(7,w=`${s}px`),n(9,b=f),n(8,A=null)):(n(8,A=`${f+g}px`),n(9,b=s-g),n(7,w=null))}l.$$.dirty&16384&&n(11,t=Array.isArray(o)?o:[o])},[m,u,r,i,h,j,c,w,A,b,v,t,B,L,o,s,f,g,I,d,D,_]}class Ve extends X{constructor(e){super(),y(this,e,Ke,qe,p,{value:14,filtered:0,showOptions:1,activeOption:2,disabled:3})}}function be(l,e,n){const t=l.slice();return t[31]=e[n],t}function Fe(l){let e;return{c(){e=_e(l[1])},m(n,t){J(n,e,t)},p(n,t){t[0]&2&&ce(e,n[1])},d(n){n&&H(e)}}}function ve(l){let e,n,t=ie(l[0]),o=[];for(let u=0;uS(o[u],1,1,()=>{o[u]=null});return{c(){for(let u=0;u{f=null}),le()):f?(f.p(l,h),h[0]&16&&C(f,1)):(f=ke(l),f.c(),C(f,1),f.m(e,u))},i(c){r||(C(f),r=!0)},o(c){S(f),r=!1},d(c){c&&H(e),f&&f.d(),i=!1,s()}}}function Ge(l){let e,n,t,o,m,u=l[3]&&Array.isArray(l[0]),r,i,s,f,g,c,h,j,w,A,b,v,N,z;n=new He({props:{show_label:l[5],info:l[2],$$slots:{default:[Fe]},$$scope:{ctx:l}}});let B=u&&ve(l);c=new ze({}),j=new Me({});function L(d){l[27](d)}let I={showOptions:l[11],filtered:l[10],activeOption:l[9],disabled:l[4]};return l[0]!==void 0&&(I.value=l[0]),A=new Ve({props:I}),G.push(()=>Q(A,"value",L)),A.$on("change",l[17]),{c(){e=U("label"),q(n.$$.fragment),t=F(),o=U("div"),m=U("div"),B&&B.c(),r=F(),i=U("div"),s=U("input"),f=F(),g=U("div"),q(c.$$.fragment),h=F(),q(j.$$.fragment),w=F(),q(A.$$.fragment),O(s,"class","border-none svelte-1xsj8nn"),s.disabled=l[4],O(s,"autocomplete","off"),R(s,"subdued",l[0]!==l[8]&&!l[7]),O(g,"class","token-remove remove-all svelte-1xsj8nn"),O(g,"title","Clear"),R(g,"hide",!l[3]||!l[0]?.length||l[4]),O(i,"class","secondary-wrap svelte-1xsj8nn"),O(m,"class","wrap-inner svelte-1xsj8nn"),R(m,"showOptions",l[11]),O(o,"class","wrap svelte-1xsj8nn"),O(e,"class","svelte-1xsj8nn"),R(e,"container",l[6])},m(d,D){J(d,e,D),K(n,e,null),E(e,t),E(e,o),E(o,m),B&&B.m(m,null),E(m,r),E(m,i),E(i,s),me(s,l[8]),l[25](s),E(i,f),E(i,g),K(c,g,null),E(i,h),K(j,i,null),E(o,w),K(A,o,null),v=!0,N||(z=[P(s,"input",l[24]),P(s,"keydown",l[18]),P(s,"keyup",l[26]),P(s,"blur",l[15]),P(s,"focus",l[16]),P(g,"click",l[14])],N=!0)},p(d,D){const 
_={};D[0]&32&&(_.show_label=d[5]),D[0]&4&&(_.info=d[2]),D[0]&2|D[1]&8&&(_.$$scope={dirty:D,ctx:d}),n.$set(_),D[0]&9&&(u=d[3]&&Array.isArray(d[0])),u?B?(B.p(d,D),D[0]&9&&C(B,1)):(B=ve(d),B.c(),C(B,1),B.m(m,r)):B&&(ee(),S(B,1,1,()=>{B=null}),le()),(!v||D[0]&16)&&(s.disabled=d[4]),D[0]&256&&s.value!==d[8]&&me(s,d[8]),(!v||D[0]&385)&&R(s,"subdued",d[0]!==d[8]&&!d[7]),(!v||D[0]&25)&&R(g,"hide",!d[3]||!d[0]?.length||d[4]),(!v||D[0]&2048)&&R(m,"showOptions",d[11]);const k={};D[0]&2048&&(k.showOptions=d[11]),D[0]&1024&&(k.filtered=d[10]),D[0]&512&&(k.activeOption=d[9]),D[0]&16&&(k.disabled=d[4]),!b&&D[0]&1&&(b=!0,k.value=d[0],Y(()=>b=!1)),A.$set(k),(!v||D[0]&64)&&R(e,"container",d[6])},i(d){v||(C(n.$$.fragment,d),C(B),C(c.$$.fragment,d),C(j.$$.fragment,d),C(A.$$.fragment,d),v=!0)},o(d){S(n.$$.fragment,d),S(B),S(c.$$.fragment,d),S(j.$$.fragment,d),S(A.$$.fragment,d),v=!1},d(d){d&&H(e),V(n),B&&B.d(),l[25](null),V(c),V(j),V(A),N=!1,De(z)}}}function Pe(l,e,n){let t,{label:o}=e,{info:m=void 0}=e,{value:u}=e,r=Array.isArray(u)?u.slice():u,{value_is_output:i=!1}=e,{multiselect:s=!1}=e,{max_choices:f}=e,{choices:g}=e,{disabled:c=!1}=e,{show_label:h}=e,{container:j=!0}=e,{allow_custom_value:w=!1}=e;const A=je();let b,v,N=!1,z;function B(){A("change",u),i||A("input")}Je(()=>{n(19,i=!1)});function L(a){n(0,u),(!f||u.lengthM!==a))),A("select",{index:g.indexOf(a),value:a,selected:!1})}function d(a){n(0,u=[]),n(8,b=""),a.preventDefault()}function D(a){s?n(8,b=""):w||u!==b&&(typeof u=="string"&&b==""?n(8,b=u):(n(0,u=void 0),n(8,b=""))),n(11,N=!1),A("blur")}function _(a){A("focus"),n(11,N=!0),n(10,t=g)}function k(a){const M=a.detail.target.dataset.value;w&&n(8,b=M),M!==void 0&&(s?(u?.includes(M)?I(M):L(M),n(8,b="")):(n(0,u=M),n(8,b=M),n(11,N=!1),A("select",{index:g.indexOf(M),value:M,selected:!0}),z.blur()))}function W(a){if(a.key==="Enter"&&v!=null)s?s&&Array.isArray(u)&&(u.includes(v)?I(v):L(v),n(8,b="")):(u!==v&&(n(0,u=v),A("select",{index:g.indexOf(u),value:u,selected:!0})),n(8,b=v),n(11,N=!1),z.blur());else if(n(11,N=!0),a.key==="ArrowUp"||a.key==="ArrowDown"){v===null&&n(9,v=t[0]);const M=a.key==="ArrowUp"?-1:1,fe=t.indexOf(v)+M;n(9,v=fe<0?t[t.length-1]:fe===t.length?t[0]:t[fe]),a.preventDefault()}else a.key==="Escape"?n(11,N=!1):a.key==="Backspace"?s&&(!b||b==="")&&Array.isArray(u)&&u.length>0&&(I(u[u.length-1]),n(8,b="")):n(11,N=!0)}const x=a=>I(a);function te(){b=this.value,n(8,b),n(0,u)}function ue(a){G[a?"unshift":"push"](()=>{z=a,n(12,z)})}const se=()=>{w&&n(0,u=b)};function oe(a){u=a,n(0,u)}return l.$$set=a=>{"label"in a&&n(1,o=a.label),"info"in a&&n(2,m=a.info),"value"in a&&n(0,u=a.value),"value_is_output"in a&&n(19,i=a.value_is_output),"multiselect"in a&&n(3,s=a.multiselect),"max_choices"in a&&n(20,f=a.max_choices),"choices"in a&&n(21,g=a.choices),"disabled"in a&&n(4,c=a.disabled),"show_label"in a&&n(5,h=a.show_label),"container"in a&&n(6,j=a.container),"allow_custom_value"in a&&n(7,w=a.allow_custom_value)},l.$$.update=()=>{l.$$.dirty[0]&1&&(typeof u=="string"||u===null)&&n(8,b=u),l.$$.dirty[0]&2097408&&n(10,t=g.filter(a=>b?a.toLowerCase().includes(b.toLowerCase()):a)),l.$$.dirty[0]&1536&&(!v||!t.includes(v))&&n(9,v=t.length?t[0]:null),l.$$.dirty[0]&4194305&&JSON.stringify(u)!=JSON.stringify(r)&&(n(22,r=Array.isArray(u)?u.slice():u),B()),l.$$.dirty[0]&4194305&&JSON.stringify(u)!=JSON.stringify(r)&&(A("change",u),n(22,r=Array.isArray(u)?u.slice():u))},[u,o,m,s,c,h,j,w,b,v,t,N,z,I,d,D,_,k,W,i,f,g,r,x,te,ue,se,oe]}let Ie=class extends 
X{constructor(e){super(),y(this,e,Pe,Ge,p,{label:1,info:2,value:0,value_is_output:19,multiselect:3,max_choices:20,choices:21,disabled:4,show_label:5,container:6,allow_custom_value:7},null,[-1,-1])}};function Qe(l){let e,n,t,o,m,u;const r=[l[14]];let i={};for(let c=0;cQ(t,"value",s)),G.push(()=>Q(t,"value_is_output",f)),t.$on("change",l[18]),t.$on("input",l[19]),t.$on("select",l[20]),t.$on("blur",l[21]),t.$on("focus",l[22]),{c(){q(e.$$.fragment),n=F(),q(t.$$.fragment)},m(c,h){K(e,c,h),J(c,n,h),K(t,c,h),u=!0},p(c,h){const j=h&16384?Ee(r,[Re(c[14])]):{};e.$set(j);const w={};h&512&&(w.choices=c[9]),h&128&&(w.multiselect=c[7]),h&256&&(w.max_choices=c[8]),h&4&&(w.label=c[2]),h&8&&(w.info=c[3]),h&1024&&(w.show_label=c[10]),h&32768&&(w.allow_custom_value=c[15]),h&2048&&(w.container=c[11]),!o&&h&1&&(o=!0,w.value=c[0],Y(()=>o=!1)),!m&&h&2&&(m=!0,w.value_is_output=c[1],Y(()=>m=!1)),t.$set(w)},i(c){u||(C(e.$$.fragment,c),C(t.$$.fragment,c),u=!0)},o(c){S(e.$$.fragment,c),S(t.$$.fragment,c),u=!1},d(c){c&&H(n),V(e,c),V(t,c)}}}function Ye(l){let e,n;return e=new Te({props:{visible:l[6],elem_id:l[4],elem_classes:l[5],padding:l[11],allow_overflow:!1,scale:l[12],min_width:l[13],$$slots:{default:[Qe]},$$scope:{ctx:l}}}),{c(){q(e.$$.fragment)},m(t,o){K(e,t,o),n=!0},p(t,[o]){const m={};o&64&&(m.visible=t[6]),o&16&&(m.elem_id=t[4]),o&32&&(m.elem_classes=t[5]),o&2048&&(m.padding=t[11]),o&4096&&(m.scale=t[12]),o&8192&&(m.min_width=t[13]),o&8441743&&(m.$$scope={dirty:o,ctx:t}),e.$set(m)},i(t){n||(C(e.$$.fragment,t),n=!0)},o(t){S(e.$$.fragment,t),n=!1},d(t){V(e,t)}}}function Ze(l,e,n){let{label:t="Dropdown"}=e,{info:o=void 0}=e,{elem_id:m=""}=e,{elem_classes:u=[]}=e,{visible:r=!0}=e,{value:i}=e,{value_is_output:s=!1}=e,{multiselect:f=!1}=e,{max_choices:g}=e,{choices:c}=e,{show_label:h}=e,{container:j=!0}=e,{scale:w=null}=e,{min_width:A=void 0}=e,{loading_status:b}=e,{allow_custom_value:v=!1}=e;f&&!i?i=[]:i||(i="");function N(_){i=_,n(0,i)}function z(_){s=_,n(1,s)}function B(_){T.call(this,l,_)}function L(_){T.call(this,l,_)}function I(_){T.call(this,l,_)}function d(_){T.call(this,l,_)}function D(_){T.call(this,l,_)}return l.$$set=_=>{"label"in _&&n(2,t=_.label),"info"in _&&n(3,o=_.info),"elem_id"in _&&n(4,m=_.elem_id),"elem_classes"in _&&n(5,u=_.elem_classes),"visible"in _&&n(6,r=_.visible),"value"in _&&n(0,i=_.value),"value_is_output"in _&&n(1,s=_.value_is_output),"multiselect"in _&&n(7,f=_.multiselect),"max_choices"in _&&n(8,g=_.max_choices),"choices"in _&&n(9,c=_.choices),"show_label"in _&&n(10,h=_.show_label),"container"in _&&n(11,j=_.container),"scale"in _&&n(12,w=_.scale),"min_width"in _&&n(13,A=_.min_width),"loading_status"in _&&n(14,b=_.loading_status),"allow_custom_value"in _&&n(15,v=_.allow_custom_value)},[i,s,t,o,m,u,r,f,g,c,h,j,w,A,b,v,N,z,B,L,I,d,D]}class We extends X{constructor(e){super(),y(this,e,Ze,Ye,p,{label:2,info:3,elem_id:4,elem_classes:5,visible:6,value:0,value_is_output:1,multiselect:7,max_choices:8,choices:9,show_label:10,container:11,scale:12,min_width:13,loading_status:14,allow_custom_value:15})}}function Xe(l){let e,n,t,o,m,u;const r=[l[14]];let i={};for(let c=0;cQ(t,"value",s)),G.push(()=>Q(t,"value_is_output",f)),t.$on("change",l[18]),t.$on("input",l[19]),t.$on("select",l[20]),t.$on("blur",l[21]),t.$on("focus",l[22]),{c(){q(e.$$.fragment),n=F(),q(t.$$.fragment)},m(c,h){K(e,c,h),J(c,n,h),K(t,c,h),u=!0},p(c,h){const j=h&16384?Ee(r,[Re(c[14])]):{};e.$set(j);const 
w={};h&512&&(w.choices=c[9]),h&128&&(w.multiselect=c[7]),h&256&&(w.max_choices=c[8]),h&4&&(w.label=c[2]),h&8&&(w.info=c[3]),h&1024&&(w.show_label=c[10]),h&32768&&(w.allow_custom_value=c[15]),h&2048&&(w.container=c[11]),!o&&h&1&&(o=!0,w.value=c[0],Y(()=>o=!1)),!m&&h&2&&(m=!0,w.value_is_output=c[1],Y(()=>m=!1)),t.$set(w)},i(c){u||(C(e.$$.fragment,c),C(t.$$.fragment,c),u=!0)},o(c){S(e.$$.fragment,c),S(t.$$.fragment,c),u=!1},d(c){c&&H(n),V(e,c),V(t,c)}}}function ye(l){let e,n;return e=new Te({props:{visible:l[6],elem_id:l[4],elem_classes:l[5],padding:l[11],allow_overflow:!1,scale:l[12],min_width:l[13],$$slots:{default:[Xe]},$$scope:{ctx:l}}}),{c(){q(e.$$.fragment)},m(t,o){K(e,t,o),n=!0},p(t,[o]){const m={};o&64&&(m.visible=t[6]),o&16&&(m.elem_id=t[4]),o&32&&(m.elem_classes=t[5]),o&2048&&(m.padding=t[11]),o&4096&&(m.scale=t[12]),o&8192&&(m.min_width=t[13]),o&8441743&&(m.$$scope={dirty:o,ctx:t}),e.$set(m)},i(t){n||(C(e.$$.fragment,t),n=!0)},o(t){S(e.$$.fragment,t),n=!1},d(t){V(e,t)}}}function pe(l,e,n){let{label:t="Dropdown"}=e,{info:o=void 0}=e,{elem_id:m=""}=e,{elem_classes:u=[]}=e,{visible:r=!0}=e,{value:i}=e,{value_is_output:s=!1}=e,{multiselect:f=!1}=e,{max_choices:g}=e,{choices:c}=e,{show_label:h}=e,{container:j=!0}=e,{scale:w=null}=e,{min_width:A=void 0}=e,{loading_status:b}=e,{allow_custom_value:v=!1}=e;f&&!i?i=[]:i||(i="");function N(_){i=_,n(0,i)}function z(_){s=_,n(1,s)}function B(_){T.call(this,l,_)}function L(_){T.call(this,l,_)}function I(_){T.call(this,l,_)}function d(_){T.call(this,l,_)}function D(_){T.call(this,l,_)}return l.$$set=_=>{"label"in _&&n(2,t=_.label),"info"in _&&n(3,o=_.info),"elem_id"in _&&n(4,m=_.elem_id),"elem_classes"in _&&n(5,u=_.elem_classes),"visible"in _&&n(6,r=_.visible),"value"in _&&n(0,i=_.value),"value_is_output"in _&&n(1,s=_.value_is_output),"multiselect"in _&&n(7,f=_.multiselect),"max_choices"in _&&n(8,g=_.max_choices),"choices"in _&&n(9,c=_.choices),"show_label"in _&&n(10,h=_.show_label),"container"in _&&n(11,j=_.container),"scale"in _&&n(12,w=_.scale),"min_width"in _&&n(13,A=_.min_width),"loading_status"in _&&n(14,b=_.loading_status),"allow_custom_value"in _&&n(15,v=_.allow_custom_value)},[i,s,t,o,m,u,r,f,g,c,h,j,w,A,b,v,N,z,B,L,I,d,D]}class xe extends X{constructor(e){super(),y(this,e,pe,ye,p,{label:2,info:3,elem_id:4,elem_classes:5,visible:6,value:0,value_is_output:1,multiselect:7,max_choices:8,choices:9,show_label:10,container:11,scale:12,min_width:13,loading_status:14,allow_custom_value:15})}}function $e(l){let e,n,t,o;function m(i){l[24](i)}function u(i){l[25](i)}let r={visible:l[6],elem_id:l[4],elem_classes:l[5],container:l[11],scale:l[12],min_width:l[13],choices:l[9],multiselect:l[7],max_choices:l[8],label:l[2],info:l[3],show_label:l[10],allow_custom_value:l[15],loading_status:l[14]};return l[0]!==void 0&&(r.value=l[0]),l[1]!==void 0&&(r.value_is_output=l[1]),e=new xe({props:r}),G.push(()=>Q(e,"value",m)),G.push(()=>Q(e,"value_is_output",u)),e.$on("change",l[26]),e.$on("input",l[27]),e.$on("select",l[28]),e.$on("blur",l[29]),e.$on("focus",l[30]),{c(){q(e.$$.fragment)},m(i,s){K(e,i,s),o=!0},p(i,s){const 
f={};s&64&&(f.visible=i[6]),s&16&&(f.elem_id=i[4]),s&32&&(f.elem_classes=i[5]),s&2048&&(f.container=i[11]),s&4096&&(f.scale=i[12]),s&8192&&(f.min_width=i[13]),s&512&&(f.choices=i[9]),s&128&&(f.multiselect=i[7]),s&256&&(f.max_choices=i[8]),s&4&&(f.label=i[2]),s&8&&(f.info=i[3]),s&1024&&(f.show_label=i[10]),s&32768&&(f.allow_custom_value=i[15]),s&16384&&(f.loading_status=i[14]),!n&&s&1&&(n=!0,f.value=i[0],Y(()=>n=!1)),!t&&s&2&&(t=!0,f.value_is_output=i[1],Y(()=>t=!1)),e.$set(f)},i(i){o||(C(e.$$.fragment,i),o=!0)},o(i){S(e.$$.fragment,i),o=!1},d(i){V(e,i)}}}function el(l){let e,n,t,o;function m(i){l[17](i)}function u(i){l[18](i)}let r={visible:l[6],elem_id:l[4],elem_classes:l[5],container:l[11],scale:l[12],min_width:l[13],choices:l[9],multiselect:l[7],max_choices:l[8],label:l[2],info:l[3],show_label:l[10],allow_custom_value:l[15],loading_status:l[14]};return l[0]!==void 0&&(r.value=l[0]),l[1]!==void 0&&(r.value_is_output=l[1]),e=new We({props:r}),G.push(()=>Q(e,"value",m)),G.push(()=>Q(e,"value_is_output",u)),e.$on("change",l[19]),e.$on("input",l[20]),e.$on("select",l[21]),e.$on("blur",l[22]),e.$on("focus",l[23]),{c(){q(e.$$.fragment)},m(i,s){K(e,i,s),o=!0},p(i,s){const f={};s&64&&(f.visible=i[6]),s&16&&(f.elem_id=i[4]),s&32&&(f.elem_classes=i[5]),s&2048&&(f.container=i[11]),s&4096&&(f.scale=i[12]),s&8192&&(f.min_width=i[13]),s&512&&(f.choices=i[9]),s&128&&(f.multiselect=i[7]),s&256&&(f.max_choices=i[8]),s&4&&(f.label=i[2]),s&8&&(f.info=i[3]),s&1024&&(f.show_label=i[10]),s&32768&&(f.allow_custom_value=i[15]),s&16384&&(f.loading_status=i[14]),!n&&s&1&&(n=!0,f.value=i[0],Y(()=>n=!1)),!t&&s&2&&(t=!0,f.value_is_output=i[1],Y(()=>t=!1)),e.$set(f)},i(i){o||(C(e.$$.fragment,i),o=!0)},o(i){S(e.$$.fragment,i),o=!1},d(i){V(e,i)}}}function ll(l){let e,n,t,o;const m=[el,$e],u=[];function r(i,s){return i[16]==="static"?0:1}return e=r(l),n=u[e]=m[e](l),{c(){n.c(),t=ae()},m(i,s){u[e].m(i,s),J(i,t,s),o=!0},p(i,[s]){let f=e;e=r(i),e===f?u[e].p(i,s):(ee(),S(u[f],1,1,()=>{u[f]=null}),le(),n=u[e],n?n.p(i,s):(n=u[e]=m[e](i),n.c()),C(n,1),n.m(t.parentNode,t))},i(i){o||(C(n),o=!0)},o(i){S(n),o=!1},d(i){i&&H(t),u[e].d(i)}}}function nl(l,e,n){let{label:t="Dropdown"}=e,{info:o=void 0}=e,{elem_id:m=""}=e,{elem_classes:u=[]}=e,{visible:r=!0}=e,{value:i}=e,{value_is_output:s=!1}=e,{multiselect:f=!1}=e,{max_choices:g}=e,{choices:c}=e,{show_label:h}=e,{container:j=!0}=e,{scale:w=null}=e,{min_width:A=void 0}=e,{loading_status:b}=e,{allow_custom_value:v=!1}=e,{mode:N}=e;f&&!i?i=[]:i||(i="");function z(a){i=a,n(0,i)}function B(a){s=a,n(1,s)}function L(a){T.call(this,l,a)}function I(a){T.call(this,l,a)}function d(a){T.call(this,l,a)}function D(a){T.call(this,l,a)}function _(a){T.call(this,l,a)}function k(a){i=a,n(0,i)}function W(a){s=a,n(1,s)}function x(a){T.call(this,l,a)}function te(a){T.call(this,l,a)}function ue(a){T.call(this,l,a)}function se(a){T.call(this,l,a)}function oe(a){T.call(this,l,a)}return l.$$set=a=>{"label"in a&&n(2,t=a.label),"info"in a&&n(3,o=a.info),"elem_id"in a&&n(4,m=a.elem_id),"elem_classes"in a&&n(5,u=a.elem_classes),"visible"in a&&n(6,r=a.visible),"value"in a&&n(0,i=a.value),"value_is_output"in a&&n(1,s=a.value_is_output),"multiselect"in a&&n(7,f=a.multiselect),"max_choices"in a&&n(8,g=a.max_choices),"choices"in a&&n(9,c=a.choices),"show_label"in a&&n(10,h=a.show_label),"container"in a&&n(11,j=a.container),"scale"in a&&n(12,w=a.scale),"min_width"in a&&n(13,A=a.min_width),"loading_status"in a&&n(14,b=a.loading_status),"allow_custom_value"in a&&n(15,v=a.allow_custom_value),"mode"in 
a&&n(16,N=a.mode)},[i,s,t,o,m,u,r,f,g,c,h,j,w,A,b,v,N,z,B,L,I,d,D,_,k,W,x,te,ue,se,oe]}class il extends X{constructor(e){super(),y(this,e,nl,ll,p,{label:2,info:3,elem_id:4,elem_classes:5,visible:6,value:0,value_is_output:1,multiselect:7,max_choices:8,choices:9,show_label:10,container:11,scale:12,min_width:13,loading_status:14,allow_custom_value:15,mode:16})}}const al=il,_l=["static","dynamic"];export{al as Component,_l as modes}; -//# sourceMappingURL=index-846a9041.js.map diff --git a/spaces/declare-lab/tango/diffusers/examples/community/README.md b/spaces/declare-lab/tango/diffusers/examples/community/README.md deleted file mode 100644 index 11da90764579c7e548fe46fcc5738e8af95797b2..0000000000000000000000000000000000000000 --- a/spaces/declare-lab/tango/diffusers/examples/community/README.md +++ /dev/null @@ -1,1132 +0,0 @@ -# Community Examples - -> **For more information about community pipelines, please have a look at [this issue](https://github.com/huggingface/diffusers/issues/841).** - -**Community** examples consist of both inference and training examples that have been added by the community. -Please have a look at the following table to get an overview of all community examples. Click on the **Code Example** to get a copy-and-paste ready code example that you can try out. -If a community doesn't work as expected, please open an issue and ping the author on it. - -| Example | Description | Code Example | Colab | Author | -|:---------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:------------------------------------------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-----------------------------------------------------------:| -| CLIP Guided Stable Diffusion | Doing CLIP guidance for text to image generation with Stable Diffusion | [CLIP Guided Stable Diffusion](#clip-guided-stable-diffusion) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/CLIP_Guided_Stable_diffusion_with_diffusers.ipynb) | [Suraj Patil](https://github.com/patil-suraj/) | -| One Step U-Net (Dummy) | Example showcasing of how to use Community Pipelines (see https://github.com/huggingface/diffusers/issues/841) | [One Step U-Net](#one-step-unet) | - | [Patrick von Platen](https://github.com/patrickvonplaten/) | -| Stable Diffusion Interpolation | Interpolate the latent space of Stable Diffusion between different prompts/seeds | [Stable Diffusion Interpolation](#stable-diffusion-interpolation) | - | [Nate Raw](https://github.com/nateraw/) | -| Stable Diffusion Mega | **One** Stable Diffusion Pipeline with all functionalities of [Text2Image](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion.py), 
[Image2Image](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_img2img.py) and [Inpainting](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_inpaint.py) | [Stable Diffusion Mega](#stable-diffusion-mega) | - | [Patrick von Platen](https://github.com/patrickvonplaten/) | -| Long Prompt Weighting Stable Diffusion | **One** Stable Diffusion Pipeline without tokens length limit, and support parsing weighting in prompt. | [Long Prompt Weighting Stable Diffusion](#long-prompt-weighting-stable-diffusion) | - | [SkyTNT](https://github.com/SkyTNT) | -| Speech to Image | Using automatic-speech-recognition to transcribe text and Stable Diffusion to generate images | [Speech to Image](#speech-to-image) | - | [Mikail Duzenli](https://github.com/MikailINTech) -| Wild Card Stable Diffusion | Stable Diffusion Pipeline that supports prompts that contain wildcard terms (indicated by surrounding double underscores), with values instantiated randomly from a corresponding txt file or a dictionary of possible values | [Wildcard Stable Diffusion](#wildcard-stable-diffusion) | - | [Shyam Sudhakaran](https://github.com/shyamsn97) | -| [Composable Stable Diffusion](https://energy-based-model.github.io/Compositional-Visual-Generation-with-Composable-Diffusion-Models/) | Stable Diffusion Pipeline that supports prompts that contain "|" in prompts (as an AND condition) and weights (separated by "|" as well) to positively / negatively weight prompts. | [Composable Stable Diffusion](#composable-stable-diffusion) | - | [Mark Rich](https://github.com/MarkRich) | -| Seed Resizing Stable Diffusion| Stable Diffusion Pipeline that supports resizing an image and retaining the concepts of the 512 by 512 generation. | [Seed Resizing](#seed-resizing) | - | [Mark Rich](https://github.com/MarkRich) | -| Imagic Stable Diffusion | Stable Diffusion Pipeline that enables writing a text prompt to edit an existing image| [Imagic Stable Diffusion](#imagic-stable-diffusion) | - | [Mark Rich](https://github.com/MarkRich) | -| Multilingual Stable Diffusion| Stable Diffusion Pipeline that supports prompts in 50 different languages. 
| [Multilingual Stable Diffusion](#multilingual-stable-diffusion-pipeline) | - | [Juan Carlos Piñeros](https://github.com/juancopi81) | -| Image to Image Inpainting Stable Diffusion | Stable Diffusion Pipeline that enables the overlaying of two images and subsequent inpainting| [Image to Image Inpainting Stable Diffusion](#image-to-image-inpainting-stable-diffusion) | - | [Alex McKinney](https://github.com/vvvm23) | -| Text Based Inpainting Stable Diffusion | Stable Diffusion Inpainting Pipeline that enables passing a text prompt to generate the mask for inpainting| [Text Based Inpainting Stable Diffusion](#image-to-image-inpainting-stable-diffusion) | - | [Dhruv Karan](https://github.com/unography) | -| Bit Diffusion | Diffusion on discrete data | [Bit Diffusion](#bit-diffusion) | - |[Stuti R.](https://github.com/kingstut) | -| K-Diffusion Stable Diffusion | Run Stable Diffusion with any of [K-Diffusion's samplers](https://github.com/crowsonkb/k-diffusion/blob/master/k_diffusion/sampling.py) | [Stable Diffusion with K Diffusion](#stable-diffusion-with-k-diffusion) | - | [Patrick von Platen](https://github.com/patrickvonplaten/) | -| Checkpoint Merger Pipeline | Diffusion Pipeline that enables merging of saved model checkpoints | [Checkpoint Merger Pipeline](#checkpoint-merger-pipeline) | - | [Naga Sai Abhinay Devarinti](https://github.com/Abhinay1997/) | -Stable Diffusion v1.1-1.4 Comparison | Run all 4 model checkpoints for Stable Diffusion and compare their results together | [Stable Diffusion Comparison](#stable-diffusion-comparisons) | - | [Suvaditya Mukherjee](https://github.com/suvadityamuk) | -MagicMix | Diffusion Pipeline for semantic mixing of an image and a text prompt | [MagicMix](#magic-mix) | - | [Partho Das](https://github.com/daspartho) | -| Stable UnCLIP | Diffusion Pipeline for combining prior model (generate clip image embedding from text, UnCLIPPipeline `"kakaobrain/karlo-v1-alpha"`) and decoder pipeline (decode clip image embedding to image, StableDiffusionImageVariationPipeline `"lambdalabs/sd-image-variations-diffusers"` ). 
| [Stable UnCLIP](#stable-unclip) | - |[Ray Wang](https://wrong.wang) | -| UnCLIP Text Interpolation Pipeline | Diffusion Pipeline that allows passing two prompts and produces images while interpolating between the text-embeddings of the two prompts | [UnCLIP Text Interpolation Pipeline](#unclip-text-interpolation-pipeline) | - | [Naga Sai Abhinay Devarinti](https://github.com/Abhinay1997/) | -| UnCLIP Image Interpolation Pipeline | Diffusion Pipeline that allows passing two images/image_embeddings and produces images while interpolating between their image-embeddings | [UnCLIP Image Interpolation Pipeline](#unclip-image-interpolation-pipeline) | - | [Naga Sai Abhinay Devarinti](https://github.com/Abhinay1997/) | -| DDIM Noise Comparative Analysis Pipeline | Investigating how the diffusion models learn visual concepts from each noise level (which is a contribution of [P2 weighting (CVPR 2022)](https://arxiv.org/abs/2204.00227)) | [DDIM Noise Comparative Analysis Pipeline](#ddim-noise-comparative-analysis-pipeline) | - |[Aengus (Duc-Anh)](https://github.com/aengusng8) | -| CLIP Guided Img2Img Stable Diffusion Pipeline | Doing CLIP guidance for image to image generation with Stable Diffusion | [CLIP Guided Img2Img Stable Diffusion](#clip-guided-img2img-stable-diffusion) | - | [Nipun Jindal](https://github.com/nipunjindal/) | - - - -To load a custom pipeline you just need to pass the `custom_pipeline` argument to `DiffusionPipeline`, as one of the files in `diffusers/examples/community`. Feel free to send a PR with your own pipelines, we will merge them quickly. -```py -pipe = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", custom_pipeline="filename_in_the_community_folder") -``` - -## Example usages - -### CLIP Guided Stable Diffusion - -CLIP guided stable diffusion can help to generate more realistic images -by guiding stable diffusion at every denoising step with an additional CLIP model. - -The following code requires roughly 12GB of GPU RAM. - -```python -from diffusers import DiffusionPipeline -from transformers import CLIPImageProcessor, CLIPModel -import torch - - -feature_extractor = CLIPImageProcessor.from_pretrained("laion/CLIP-ViT-B-32-laion2B-s34B-b79K") -clip_model = CLIPModel.from_pretrained("laion/CLIP-ViT-B-32-laion2B-s34B-b79K", torch_dtype=torch.float16) - - -guided_pipeline = DiffusionPipeline.from_pretrained( - "runwayml/stable-diffusion-v1-5", - custom_pipeline="clip_guided_stable_diffusion", - clip_model=clip_model, - feature_extractor=feature_extractor, - - torch_dtype=torch.float16, -) -guided_pipeline.enable_attention_slicing() -guided_pipeline = guided_pipeline.to("cuda") - -prompt = "fantasy book cover, full moon, fantasy forest landscape, golden vector elements, fantasy magic, dark light night, intricate, elegant, sharp focus, illustration, highly detailed, digital painting, concept art, matte, art by WLOP and Artgerm and Albert Bierstadt, masterpiece" - -generator = torch.Generator(device="cuda").manual_seed(0) -images = [] -for i in range(4): - image = guided_pipeline( - prompt, - num_inference_steps=50, - guidance_scale=7.5, - clip_guidance_scale=100, - num_cutouts=4, - use_cutouts=False, - generator=generator, - ).images[0] - images.append(image) - -# save images locally -for i, img in enumerate(images): - img.save(f"./clip_guided_sd/image_{i}.png") -``` - -The `images` list contains a list of PIL images that can be saved locally or displayed directly in a google colab. 
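-If you want to look at all four results at once, a small helper like the one below tiles the `images` list into a single grid. This is only a minimal sketch using Pillow; the `image_grid` helper is illustrative and the 2x2 layout simply matches the four images generated above: - -```python -from PIL import Image - -def image_grid(imgs, rows, cols): -    # Tile equally sized PIL images into a rows x cols grid. -    w, h = imgs[0].size -    grid = Image.new("RGB", (cols * w, rows * h)) -    for i, img in enumerate(imgs): -        grid.paste(img, ((i % cols) * w, (i // cols) * h)) -    return grid - -# `images` is the list produced by the guided pipeline above. -image_grid(images, rows=2, cols=2).save("./clip_guided_sd/grid.png") -```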
-Generated images tend to be of higher quality than when using stable diffusion natively. E.g. the above script generates the following images: - -![clip_guidance](https://huggingface.co/datasets/patrickvonplaten/images/resolve/main/clip_guidance/merged_clip_guidance.jpg). - -### One Step Unet - -The dummy "one-step-unet" can be run as follows: - -```python -from diffusers import DiffusionPipeline - -pipe = DiffusionPipeline.from_pretrained("google/ddpm-cifar10-32", custom_pipeline="one_step_unet") -pipe() -``` - -**Note**: This community pipeline is not useful as a feature, but rather just serves as an example of how community pipelines can be added (see https://github.com/huggingface/diffusers/issues/841). - -### Stable Diffusion Interpolation - -The following code can be run on a GPU of at least 8GB VRAM and should take approximately 5 minutes. - -```python -from diffusers import DiffusionPipeline -import torch - -pipe = DiffusionPipeline.from_pretrained( - "CompVis/stable-diffusion-v1-4", - revision='fp16', - torch_dtype=torch.float16, - safety_checker=None, # Very important for videos...lots of false positives while interpolating - custom_pipeline="interpolate_stable_diffusion", -).to('cuda') -pipe.enable_attention_slicing() - -frame_filepaths = pipe.walk( - prompts=['a dog', 'a cat', 'a horse'], - seeds=[42, 1337, 1234], - num_interpolation_steps=16, - output_dir='./dreams', - batch_size=4, - height=512, - width=512, - guidance_scale=8.5, - num_inference_steps=50, -) -``` - -The `walk(...)` function returns a list of images saved under the folder defined in `output_dir`. You can use these images to create videos with stable diffusion. - -> **Please have a look at https://github.com/nateraw/stable-diffusion-videos for more in-detail information on how to create videos using stable diffusion as well as more feature-complete functionality.** - -### Stable Diffusion Mega - -The Stable Diffusion Mega Pipeline lets you use the main use cases of the stable diffusion pipeline in a single class.
- -```python -#!/usr/bin/env python3 -from diffusers import DiffusionPipeline -import PIL -import requests -from io import BytesIO -import torch - - -def download_image(url): - response = requests.get(url) - return PIL.Image.open(BytesIO(response.content)).convert("RGB") - -pipe = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", custom_pipeline="stable_diffusion_mega", torch_dtype=torch.float16, revision="fp16") -pipe.to("cuda") -pipe.enable_attention_slicing() - - -### Text-to-Image - -images = pipe.text2img("An astronaut riding a horse").images - -### Image-to-Image - -init_image = download_image("https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg") - -prompt = "A fantasy landscape, trending on artstation" - -images = pipe.img2img(prompt=prompt, image=init_image, strength=0.75, guidance_scale=7.5).images - -### Inpainting - -img_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo.png" -mask_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo_mask.png" -init_image = download_image(img_url).resize((512, 512)) -mask_image = download_image(mask_url).resize((512, 512)) - -prompt = "a cat sitting on a bench" -images = pipe.inpaint(prompt=prompt, image=init_image, mask_image=mask_image, strength=0.75).images -``` - -As shown above, this one pipeline can run "text-to-image", "image-to-image", and "inpainting" all in a single class. - -### Long Prompt Weighting Stable Diffusion -Features of this custom pipeline: -- Input a prompt without the 77 token length limit. -- Includes text2img, img2img, and inpainting pipelines. -- Emphasize/weigh part of your prompt with parentheses like so: `a baby deer with (big eyes)` -- De-emphasize part of your prompt like so: `a [baby] deer with big eyes` -- Precisely weigh part of your prompt like so: `a baby deer with (big eyes:1.3)` - -Prompt weighting equivalents: -- `a baby deer with` == `(a baby deer with:1.0)` -- `(big eyes)` == `(big eyes:1.1)` -- `((big eyes))` == `(big eyes:1.21)` -- `[big eyes]` == `(big eyes:0.91)` - -You can run this custom pipeline like so: - -#### pytorch - -```python -from diffusers import DiffusionPipeline -import torch - -pipe = DiffusionPipeline.from_pretrained( - 'hakurei/waifu-diffusion', - custom_pipeline="lpw_stable_diffusion", - - torch_dtype=torch.float16 -) -pipe=pipe.to("cuda") - -prompt = "best_quality (1girl:1.3) bow bride brown_hair closed_mouth frilled_bow frilled_hair_tubes frills (full_body:1.3) fox_ear hair_bow hair_tubes happy hood japanese_clothes kimono long_sleeves red_bow smile solo tabi uchikake white_kimono wide_sleeves cherry_blossoms" -neg_prompt = "lowres, bad_anatomy, error_body, error_hair, error_arm, error_hands, bad_hands, error_fingers, bad_fingers, missing_fingers, error_legs, bad_legs, multiple_legs, missing_legs, error_lighting, error_shadow, error_reflection, text, error, extra_digit, fewer_digits, cropped, worst_quality, low_quality, normal_quality, jpeg_artifacts, signature, watermark, username, blurry" - -pipe.text2img(prompt, negative_prompt=neg_prompt, width=512,height=512,max_embeddings_multiples=3).images[0] - -``` - -#### onnxruntime - -```python -from diffusers import DiffusionPipeline -import torch - -pipe = DiffusionPipeline.from_pretrained( - 'CompVis/stable-diffusion-v1-4', - custom_pipeline="lpw_stable_diffusion_onnx", - revision="onnx",
provider="CUDAExecutionProvider" -) - -prompt = "a photo of an astronaut riding a horse on mars, best quality" -neg_prompt = "lowres, bad anatomy, error body, error hair, error arm, error hands, bad hands, error fingers, bad fingers, missing fingers, error legs, bad legs, multiple legs, missing legs, error lighting, error shadow, error reflection, text, error, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, signature, watermark, username, blurry" - -pipe.text2img(prompt,negative_prompt=neg_prompt, width=512, height=512, max_embeddings_multiples=3).images[0] - -``` - -if you see `Token indices sequence length is longer than the specified maximum sequence length for this model ( *** > 77 ) . Running this sequence through the model will result in indexing errors`. Do not worry, it is normal. - -### Speech to Image - -The following code can generate an image from an audio sample using pre-trained OpenAI whisper-small and Stable Diffusion. - -```Python -import torch - -import matplotlib.pyplot as plt -from datasets import load_dataset -from diffusers import DiffusionPipeline -from transformers import ( - WhisperForConditionalGeneration, - WhisperProcessor, -) - - -device = "cuda" if torch.cuda.is_available() else "cpu" - -ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation") - -audio_sample = ds[3] - -text = audio_sample["text"].lower() -speech_data = audio_sample["audio"]["array"] - -model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-small").to(device) -processor = WhisperProcessor.from_pretrained("openai/whisper-small") - -diffuser_pipeline = DiffusionPipeline.from_pretrained( - "CompVis/stable-diffusion-v1-4", - custom_pipeline="speech_to_image_diffusion", - speech_model=model, - speech_processor=processor, - - torch_dtype=torch.float16, -) - -diffuser_pipeline.enable_attention_slicing() -diffuser_pipeline = diffuser_pipeline.to(device) - -output = diffuser_pipeline(speech_data) -plt.imshow(output.images[0]) -``` -This example produces the following image: - -![image](https://user-images.githubusercontent.com/45072645/196901736-77d9c6fc-63ee-4072-90b0-dc8b903d63e3.png) - -### Wildcard Stable Diffusion -Following the great examples from https://github.com/jtkelm2/stable-diffusion-webui-1/blob/master/scripts/wildcards.py and https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Custom-Scripts#wildcards, here's a minimal implementation that allows for users to add "wildcards", denoted by `__wildcard__` to prompts that are used as placeholders for randomly sampled values given by either a dictionary or a `.txt` file. For example: - -Say we have a prompt: - -``` -prompt = "__animal__ sitting on a __object__ wearing a __clothing__" -``` - -We can then define possible values to be sampled for `animal`, `object`, and `clothing`. These can either be from a `.txt` with the same name as the category. - -The possible values can also be defined / combined by using a dictionary like: `{"animal":["dog", "cat", mouse"]}`. 
- -The actual pipeline works just like `StableDiffusionPipeline`, except the `__call__` method takes in: - -`wildcard_files`: list of file paths for wild card replacement -`wildcard_option_dict`: dict with key as `wildcard` and values as a list of possible replacements -`num_prompt_samples`: number of prompts to sample, uniformly sampling wildcards - -A full example: - -create `animal.txt`, with contents like: - -``` -dog -cat -mouse -``` - -create `object.txt`, with contents like: - -``` -chair -sofa -bench -``` - -```python -from diffusers import DiffusionPipeline -import torch - -pipe = DiffusionPipeline.from_pretrained( - "CompVis/stable-diffusion-v1-4", - custom_pipeline="wildcard_stable_diffusion", - - torch_dtype=torch.float16, -) -prompt = "__animal__ sitting on a __object__ wearing a __clothing__" -out = pipe( - prompt, - wildcard_option_dict={ - "clothing":["hat", "shirt", "scarf", "beret"] - }, - wildcard_files=["object.txt", "animal.txt"], - num_prompt_samples=1 -) -``` - -### Composable Stable diffusion - -[Composable Stable Diffusion](https://energy-based-model.github.io/Compositional-Visual-Generation-with-Composable-Diffusion-Models/) proposes conjunction and negation (negative prompts) operators for compositional generation with conditional diffusion models. - -```python -import torch as th -import numpy as np -import torchvision.utils as tvu - -from diffusers import DiffusionPipeline - -import argparse - -parser = argparse.ArgumentParser() -parser.add_argument("--prompt", type=str, default="mystical trees | A magical pond | dark", - help="use '|' as the delimiter to compose separate sentences.") -parser.add_argument("--steps", type=int, default=50) -parser.add_argument("--scale", type=float, default=7.5) -parser.add_argument("--weights", type=str, default="7.5 | 7.5 | -7.5") -parser.add_argument("--seed", type=int, default=2) -parser.add_argument("--model_path", type=str, default="CompVis/stable-diffusion-v1-4") -parser.add_argument("--num_images", type=int, default=1) -args = parser.parse_args() - -has_cuda = th.cuda.is_available() -device = th.device('cpu' if not has_cuda else 'cuda') - -prompt = args.prompt -scale = args.scale -steps = args.steps - -pipe = DiffusionPipeline.from_pretrained( - args.model_path, - custom_pipeline="composable_stable_diffusion", -).to(device) - -pipe.safety_checker = None - -images = [] -generator = th.Generator("cuda").manual_seed(args.seed) -for i in range(args.num_images): - image = pipe(prompt, guidance_scale=scale, num_inference_steps=steps, - weights=args.weights, generator=generator).images[0] - images.append(th.from_numpy(np.array(image)).permute(2, 0, 1) / 255.) -grid = tvu.make_grid(th.stack(images, dim=0), nrow=4, padding=0) -tvu.save_image(grid, f'{prompt}_{args.weights}' + '.png') - -``` - -### Imagic Stable Diffusion -Allows you to edit an image using stable diffusion. 
- -```python -import requests -from PIL import Image -from io import BytesIO -import torch -import os -from diffusers import DiffusionPipeline, DDIMScheduler -has_cuda = torch.cuda.is_available() -device = torch.device('cpu' if not has_cuda else 'cuda') -pipe = DiffusionPipeline.from_pretrained( - "CompVis/stable-diffusion-v1-4", - safety_checker=None, - use_auth_token=True, - custom_pipeline="imagic_stable_diffusion", - scheduler = DDIMScheduler(beta_start=0.00085, beta_end=0.012, beta_schedule="scaled_linear", clip_sample=False, set_alpha_to_one=False) -).to(device) -generator = torch.Generator("cuda").manual_seed(0) -seed = 0 -prompt = "A photo of Barack Obama smiling with a big grin" -url = 'https://www.dropbox.com/s/6tlwzr73jd1r9yk/obama.png?dl=1' -response = requests.get(url) -init_image = Image.open(BytesIO(response.content)).convert("RGB") -init_image = init_image.resize((512, 512)) -res = pipe.train( - prompt, - image=init_image, - generator=generator) -res = pipe(alpha=1, guidance_scale=7.5, num_inference_steps=50) -os.makedirs("imagic", exist_ok=True) -image = res.images[0] -image.save('./imagic/imagic_image_alpha_1.png') -res = pipe(alpha=1.5, guidance_scale=7.5, num_inference_steps=50) -image = res.images[0] -image.save('./imagic/imagic_image_alpha_1_5.png') -res = pipe(alpha=2, guidance_scale=7.5, num_inference_steps=50) -image = res.images[0] -image.save('./imagic/imagic_image_alpha_2.png') -``` - -### Seed Resizing -Test seed resizing. Originally generate an image in 512 by 512, then generate image with same seed at 512 by 592 using seed resizing. Finally, generate 512 by 592 using original stable diffusion pipeline. - -```python -import torch as th -import numpy as np -from diffusers import DiffusionPipeline - -has_cuda = th.cuda.is_available() -device = th.device('cpu' if not has_cuda else 'cuda') - -pipe = DiffusionPipeline.from_pretrained( - "CompVis/stable-diffusion-v1-4", - use_auth_token=True, - custom_pipeline="seed_resize_stable_diffusion" -).to(device) - -def dummy(images, **kwargs): - return images, False - -pipe.safety_checker = dummy - - -images = [] -th.manual_seed(0) -generator = th.Generator("cuda").manual_seed(0) - -seed = 0 -prompt = "A painting of a futuristic cop" - -width = 512 -height = 512 - -res = pipe( - prompt, - guidance_scale=7.5, - num_inference_steps=50, - height=height, - width=width, - generator=generator) -image = res.images[0] -image.save('./seed_resize/seed_resize_{w}_{h}_image.png'.format(w=width, h=height)) - - -th.manual_seed(0) -generator = th.Generator("cuda").manual_seed(0) - -pipe = DiffusionPipeline.from_pretrained( - "CompVis/stable-diffusion-v1-4", - use_auth_token=True, - custom_pipeline="/home/mark/open_source/diffusers/examples/community/" -).to(device) - -width = 512 -height = 592 - -res = pipe( - prompt, - guidance_scale=7.5, - num_inference_steps=50, - height=height, - width=width, - generator=generator) -image = res.images[0] -image.save('./seed_resize/seed_resize_{w}_{h}_image.png'.format(w=width, h=height)) - -pipe_compare = DiffusionPipeline.from_pretrained( - "CompVis/stable-diffusion-v1-4", - use_auth_token=True, - custom_pipeline="/home/mark/open_source/diffusers/examples/community/" -).to(device) - -res = pipe_compare( - prompt, - guidance_scale=7.5, - num_inference_steps=50, - height=height, - width=width, - generator=generator -) - -image = res.images[0] -image.save('./seed_resize/seed_resize_{w}_{h}_image_compare.png'.format(w=width, h=height)) -``` - -### Multilingual Stable Diffusion Pipeline - -The following 
code can generate an images from texts in different languages using the pre-trained [mBART-50 many-to-one multilingual machine translation model](https://huggingface.co/facebook/mbart-large-50-many-to-one-mmt) and Stable Diffusion. - -```python -from PIL import Image - -import torch - -from diffusers import DiffusionPipeline -from transformers import ( - pipeline, - MBart50TokenizerFast, - MBartForConditionalGeneration, -) -device = "cuda" if torch.cuda.is_available() else "cpu" -device_dict = {"cuda": 0, "cpu": -1} - -# helper function taken from: https://huggingface.co/blog/stable_diffusion -def image_grid(imgs, rows, cols): - assert len(imgs) == rows*cols - - w, h = imgs[0].size - grid = Image.new('RGB', size=(cols*w, rows*h)) - grid_w, grid_h = grid.size - - for i, img in enumerate(imgs): - grid.paste(img, box=(i%cols*w, i//cols*h)) - return grid - -# Add language detection pipeline -language_detection_model_ckpt = "papluca/xlm-roberta-base-language-detection" -language_detection_pipeline = pipeline("text-classification", - model=language_detection_model_ckpt, - device=device_dict[device]) - -# Add model for language translation -trans_tokenizer = MBart50TokenizerFast.from_pretrained("facebook/mbart-large-50-many-to-one-mmt") -trans_model = MBartForConditionalGeneration.from_pretrained("facebook/mbart-large-50-many-to-one-mmt").to(device) - -diffuser_pipeline = DiffusionPipeline.from_pretrained( - "CompVis/stable-diffusion-v1-4", - custom_pipeline="multilingual_stable_diffusion", - detection_pipeline=language_detection_pipeline, - translation_model=trans_model, - translation_tokenizer=trans_tokenizer, - - torch_dtype=torch.float16, -) - -diffuser_pipeline.enable_attention_slicing() -diffuser_pipeline = diffuser_pipeline.to(device) - -prompt = ["a photograph of an astronaut riding a horse", - "Una casa en la playa", - "Ein Hund, der Orange isst", - "Un restaurant parisien"] - -output = diffuser_pipeline(prompt) - -images = output.images - -grid = image_grid(images, rows=2, cols=2) -``` - -This example produces the following images: -![image](https://user-images.githubusercontent.com/4313860/198328706-295824a4-9856-4ce5-8e66-278ceb42fd29.png) - -### Image to Image Inpainting Stable Diffusion - -Similar to the standard stable diffusion inpainting example, except with the addition of an `inner_image` argument. - -`image`, `inner_image`, and `mask` should have the same dimensions. `inner_image` should have an alpha (transparency) channel. - -The aim is to overlay two images, then mask out the boundary between `image` and `inner_image` to allow stable diffusion to make the connection more seamless. -For example, this could be used to place a logo on a shirt and make it blend seamlessly. - -```python -import PIL -import torch - -from diffusers import DiffusionPipeline - -image_path = "./path-to-image.png" -inner_image_path = "./path-to-inner-image.png" -mask_path = "./path-to-mask.png" - -init_image = PIL.Image.open(image_path).convert("RGB").resize((512, 512)) -inner_image = PIL.Image.open(inner_image_path).convert("RGBA").resize((512, 512)) -mask_image = PIL.Image.open(mask_path).convert("RGB").resize((512, 512)) - -pipe = DiffusionPipeline.from_pretrained( - "runwayml/stable-diffusion-inpainting", - custom_pipeline="img2img_inpainting", - - torch_dtype=torch.float16 -) -pipe = pipe.to("cuda") - -prompt = "Your prompt here!" 
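-# `image` supplies the outer scene, `inner_image` (which keeps its alpha channel) is overlaid on top,
-# and `mask_image` marks the boundary region that Stable Diffusion re-synthesizes so the two blend seamlessly.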
-image = pipe(prompt=prompt, image=init_image, inner_image=inner_image, mask_image=mask_image).images[0] -``` - -![2 by 2 grid demonstrating image to image inpainting.](https://user-images.githubusercontent.com/44398246/203506577-ec303be4-887e-4ebd-a773-c83fcb3dd01a.png) - -### Text Based Inpainting Stable Diffusion - -Use a text prompt to generate the mask for the area to be inpainted. -Currently uses the CLIPSeg model for mask generation, then calls the standard Stable Diffusion Inpainting pipeline to perform the inpainting. - -```python -from transformers import CLIPSegProcessor, CLIPSegForImageSegmentation -from diffusers import DiffusionPipeline - -from PIL import Image -import requests - -processor = CLIPSegProcessor.from_pretrained("CIDAS/clipseg-rd64-refined") -model = CLIPSegForImageSegmentation.from_pretrained("CIDAS/clipseg-rd64-refined") - -pipe = DiffusionPipeline.from_pretrained( - "runwayml/stable-diffusion-inpainting", - custom_pipeline="text_inpainting", - segmentation_model=model, - segmentation_processor=processor -) -pipe = pipe.to("cuda") - - -url = "https://github.com/timojl/clipseg/blob/master/example_image.jpg?raw=true" -image = Image.open(requests.get(url, stream=True).raw).resize((512, 512)) -text = "a glass" # will mask out this text -prompt = "a cup" # the masked out region will be replaced with this - -image = pipe(image=image, text=text, prompt=prompt).images[0] -``` - -### Bit Diffusion -Based https://arxiv.org/abs/2208.04202, this is used for diffusion on discrete data - eg, discreate image data, DNA sequence data. An unconditional discreate image can be generated like this: - -```python -from diffusers import DiffusionPipeline -pipe = DiffusionPipeline.from_pretrained("google/ddpm-cifar10-32", custom_pipeline="bit_diffusion") -image = pipe().images[0] - -``` - -### Stable Diffusion with K Diffusion - -Make sure you have @crowsonkb's https://github.com/crowsonkb/k-diffusion installed: - -``` -pip install k-diffusion -``` - -You can use the community pipeline as follows: - -```python -from diffusers import DiffusionPipeline - -pipe = DiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4", custom_pipeline="sd_text2img_k_diffusion") -pipe = pipe.to("cuda") - -prompt = "an astronaut riding a horse on mars" -pipe.set_scheduler("sample_heun") -generator = torch.Generator(device="cuda").manual_seed(seed) -image = pipe(prompt, generator=generator, num_inference_steps=20).images[0] - -image.save("./astronaut_heun_k_diffusion.png") -``` - -To make sure that K Diffusion and `diffusers` yield the same results: - -**Diffusers**: -```python -from diffusers import DiffusionPipeline, EulerDiscreteScheduler - -seed = 33 - -pipe = DiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4") -pipe.scheduler = EulerDiscreteScheduler.from_config(pipe.scheduler.config) -pipe = pipe.to("cuda") - -generator = torch.Generator(device="cuda").manual_seed(seed) -image = pipe(prompt, generator=generator, num_inference_steps=50).images[0] -``` - -![diffusers_euler](https://huggingface.co/datasets/patrickvonplaten/images/resolve/main/k_diffusion/astronaut_euler.png) - -**K Diffusion**: -```python -from diffusers import DiffusionPipeline, EulerDiscreteScheduler - -seed = 33 - -pipe = DiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4", custom_pipeline="sd_text2img_k_diffusion") -pipe.scheduler = EulerDiscreteScheduler.from_config(pipe.scheduler.config) -pipe = pipe.to("cuda") - -pipe.set_scheduler("sample_euler") -generator = 
torch.Generator(device="cuda").manual_seed(seed) -image = pipe(prompt, generator=generator, num_inference_steps=50).images[0] -``` - -![diffusers_euler](https://huggingface.co/datasets/patrickvonplaten/images/resolve/main/k_diffusion/astronaut_euler_k_diffusion.png) - -### Checkpoint Merger Pipeline -Based on the AUTOMATIC1111/webui for checkpoint merging. This is a custom pipeline that merges upto 3 pretrained model checkpoints as long as they are in the HuggingFace model_index.json format. - -The checkpoint merging is currently memory intensive as it modifies the weights of a DiffusionPipeline object in place. Expect atleast 13GB RAM Usage on Kaggle GPU kernels and -on colab you might run out of the 12GB memory even while merging two checkpoints. - -Usage:- -```python -from diffusers import DiffusionPipeline - -#Return a CheckpointMergerPipeline class that allows you to merge checkpoints. -#The checkpoint passed here is ignored. But still pass one of the checkpoints you plan to -#merge for convenience -pipe = DiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4", custom_pipeline="checkpoint_merger") - -#There are multiple possible scenarios: -#The pipeline with the merged checkpoints is returned in all the scenarios - -#Compatible checkpoints a.k.a matched model_index.json files. Ignores the meta attributes in model_index.json during comparision.( attrs with _ as prefix ) -merged_pipe = pipe.merge(["CompVis/stable-diffusion-v1-4","CompVis/stable-diffusion-v1-2"], interp = "sigmoid", alpha = 0.4) - -#Incompatible checkpoints in model_index.json but merge might be possible. Use force = True to ignore model_index.json compatibility -merged_pipe_1 = pipe.merge(["CompVis/stable-diffusion-v1-4","hakurei/waifu-diffusion"], force = True, interp = "sigmoid", alpha = 0.4) - -#Three checkpoint merging. Only "add_difference" method actually works on all three checkpoints. Using any other options will ignore the 3rd checkpoint. -merged_pipe_2 = pipe.merge(["CompVis/stable-diffusion-v1-4","hakurei/waifu-diffusion","prompthero/openjourney"], force = True, interp = "add_difference", alpha = 0.4) - -prompt = "An astronaut riding a horse on Mars" - -image = merged_pipe(prompt).images[0] - -``` -Some examples along with the merge details: - -1. "CompVis/stable-diffusion-v1-4" + "hakurei/waifu-diffusion" ; Sigmoid interpolation; alpha = 0.8 - -![Stable plus Waifu Sigmoid 0.8](https://huggingface.co/datasets/NagaSaiAbhinay/CheckpointMergerSamples/resolve/main/stability_v1_4_waifu_sig_0.8.png) - -2. "hakurei/waifu-diffusion" + "prompthero/openjourney" ; Inverse Sigmoid interpolation; alpha = 0.8 - -![Stable plus Waifu Sigmoid 0.8](https://huggingface.co/datasets/NagaSaiAbhinay/CheckpointMergerSamples/resolve/main/waifu_openjourney_inv_sig_0.8.png) - - -3. "CompVis/stable-diffusion-v1-4" + "hakurei/waifu-diffusion" + "prompthero/openjourney"; Add Difference interpolation; alpha = 0.5 - -![Stable plus Waifu plus openjourney add_diff 0.5](https://huggingface.co/datasets/NagaSaiAbhinay/CheckpointMergerSamples/resolve/main/stable_waifu_openjourney_add_diff_0.5.png) - - -### Stable Diffusion Comparisons - -This Community Pipeline enables the comparison between the 4 checkpoints that exist for Stable Diffusion. They can be found through the following links: -1. [Stable Diffusion v1.1](https://huggingface.co/CompVis/stable-diffusion-v1-1) -2. [Stable Diffusion v1.2](https://huggingface.co/CompVis/stable-diffusion-v1-2) -3. [Stable Diffusion v1.3](https://huggingface.co/CompVis/stable-diffusion-v1-3) -4. 
[Stable Diffusion v1.4](https://huggingface.co/CompVis/stable-diffusion-v1-4) - -```python -from diffusers import DiffusionPipeline -import matplotlib.pyplot as plt - -pipe = DiffusionPipeline.from_pretrained('CompVis/stable-diffusion-v1-4', custom_pipeline='suvadityamuk/StableDiffusionComparison') -pipe.enable_attention_slicing() -pipe = pipe.to('cuda') -prompt = "an astronaut riding a horse on mars" -output = pipe(prompt) - -plt.subplots(2,2,1) -plt.imshow(output.images[0]) -plt.title('Stable Diffusion v1.1') -plt.axis('off') -plt.subplots(2,2,2) -plt.imshow(output.images[1]) -plt.title('Stable Diffusion v1.2') -plt.axis('off') -plt.subplots(2,2,3) -plt.imshow(output.images[2]) -plt.title('Stable Diffusion v1.3') -plt.axis('off') -plt.subplots(2,2,4) -plt.imshow(output.images[3]) -plt.title('Stable Diffusion v1.4') -plt.axis('off') - -plt.show() -``` - -As a result, you can look at a grid of all 4 generated images being shown together, that captures a difference the advancement of the training between the 4 checkpoints. - -### Magic Mix - -Implementation of the [MagicMix: Semantic Mixing with Diffusion Models](https://arxiv.org/abs/2210.16056) paper. This is a Diffusion Pipeline for semantic mixing of an image and a text prompt to create a new concept while preserving the spatial layout and geometry of the subject in the image. The pipeline takes an image that provides the layout semantics and a prompt that provides the content semantics for the mixing process. - -There are 3 parameters for the method- -- `mix_factor`: It is the interpolation constant used in the layout generation phase. The greater the value of `mix_factor`, the greater the influence of the prompt on the layout generation process. -- `kmax` and `kmin`: These determine the range for the layout and content generation process. A higher value of kmax results in loss of more information about the layout of the original image and a higher value of kmin results in more steps for content generation process. - -Here is an example usage- - -```python -from diffusers import DiffusionPipeline, DDIMScheduler -from PIL import Image - -pipe = DiffusionPipeline.from_pretrained( - "CompVis/stable-diffusion-v1-4", - custom_pipeline="magic_mix", - scheduler = DDIMScheduler.from_pretrained("CompVis/stable-diffusion-v1-4", subfolder="scheduler"), -).to('cuda') - -img = Image.open('phone.jpg') -mix_img = pipe( - img, - prompt = 'bed', - kmin = 0.3, - kmax = 0.5, - mix_factor = 0.5, - ) -mix_img.save('phone_bed_mix.jpg') -``` -The `mix_img` is a PIL image that can be saved locally or displayed directly in a google colab. Generated image is a mix of the layout semantics of the given image and the content semantics of the prompt. - -E.g. the above script generates the following image: - -`phone.jpg` - -![206903102-34e79b9f-9ed2-4fac-bb38-82871343c655](https://user-images.githubusercontent.com/59410571/209578593-141467c7-d831-4792-8b9a-b17dc5e47816.jpg) - -`phone_bed_mix.jpg` - -![206903104-913a671d-ef53-4ae4-919d-64c3059c8f67](https://user-images.githubusercontent.com/59410571/209578602-70f323fa-05b7-4dd6-b055-e40683e37914.jpg) - -For more example generations check out this [demo notebook](https://github.com/daspartho/MagicMix/blob/main/demo.ipynb). - - -### Stable UnCLIP - -UnCLIPPipeline("kakaobrain/karlo-v1-alpha") provide a prior model that can generate clip image embedding from text. -StableDiffusionImageVariationPipeline("lambdalabs/sd-image-variations-diffusers") provide a decoder model than can generate images from clip image embedding. 
- -```python -import torch -from diffusers import DiffusionPipeline - -device = torch.device("cpu" if not torch.cuda.is_available() else "cuda") - -pipeline = DiffusionPipeline.from_pretrained( - "kakaobrain/karlo-v1-alpha", - torch_dtype=torch.float16, - custom_pipeline="stable_unclip", - decoder_pipe_kwargs=dict( - image_encoder=None, - ), -) -pipeline.to(device) - -prompt = "a shiba inu wearing a beret and black turtleneck" -random_generator = torch.Generator(device=device).manual_seed(1000) -output = pipeline( - prompt=prompt, - width=512, - height=512, - generator=random_generator, - prior_guidance_scale=4, - prior_num_inference_steps=25, - decoder_guidance_scale=8, - decoder_num_inference_steps=50, -) - -image = output.images[0] -image.save("./shiba-inu.jpg") - -# debug - -# `pipeline.decoder_pipe` is a regular StableDiffusionImageVariationPipeline instance. -# It is used to convert clip image embedding to latents, then fed into VAE decoder. -print(pipeline.decoder_pipe.__class__) -# - -# this pipeline only use prior module in "kakaobrain/karlo-v1-alpha" -# It is used to convert clip text embedding to clip image embedding. -print(pipeline) -# StableUnCLIPPipeline { -# "_class_name": "StableUnCLIPPipeline", -# "_diffusers_version": "0.12.0.dev0", -# "prior": [ -# "diffusers", -# "PriorTransformer" -# ], -# "prior_scheduler": [ -# "diffusers", -# "UnCLIPScheduler" -# ], -# "text_encoder": [ -# "transformers", -# "CLIPTextModelWithProjection" -# ], -# "tokenizer": [ -# "transformers", -# "CLIPTokenizer" -# ] -# } - -# pipeline.prior_scheduler is the scheduler used for prior in UnCLIP. -print(pipeline.prior_scheduler) -# UnCLIPScheduler { -# "_class_name": "UnCLIPScheduler", -# "_diffusers_version": "0.12.0.dev0", -# "clip_sample": true, -# "clip_sample_range": 5.0, -# "num_train_timesteps": 1000, -# "prediction_type": "sample", -# "variance_type": "fixed_small_log" -# } -``` - - -`shiba-inu.jpg` - - -![shiba-inu](https://user-images.githubusercontent.com/16448529/209185639-6e5ec794-ce9d-4883-aa29-bd6852a2abad.jpg) - -### UnCLIP Text Interpolation Pipeline - -This Diffusion Pipeline takes two prompts and interpolates between the two input prompts using spherical interpolation ( slerp ). The input prompts are converted to text embeddings by the pipeline's text_encoder and the interpolation is done on the resulting text_embeddings over the number of steps specified. Defaults to 5 steps. - -```python -import torch -from diffusers import DiffusionPipeline - -device = torch.device("cpu" if not torch.cuda.is_available() else "cuda") - -pipe = DiffusionPipeline.from_pretrained( - "kakaobrain/karlo-v1-alpha", - torch_dtype=torch.float16, - custom_pipeline="unclip_text_interpolation" -) -pipe.to(device) - -start_prompt = "A photograph of an adult lion" -end_prompt = "A photograph of a lion cub" -#For best results keep the prompts close in length to each other. Of course, feel free to try out with differing lengths. 
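-# The two prompts are encoded by the text encoder and slerp-interpolated over `steps` embeddings before decoding.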
-generator = torch.Generator(device=device).manual_seed(42)
-
-output = pipe(start_prompt, end_prompt, steps=6, generator=generator, enable_sequential_cpu_offload=False)
-
-for i, image in enumerate(output.images):
-    image.save('result%s.jpg' % i)
-```
-
-The resulting images in order:-
-
-![result_0](https://huggingface.co/datasets/NagaSaiAbhinay/UnCLIPTextInterpolationSamples/resolve/main/lion_to_cub_0.png)
-![result_1](https://huggingface.co/datasets/NagaSaiAbhinay/UnCLIPTextInterpolationSamples/resolve/main/lion_to_cub_1.png)
-![result_2](https://huggingface.co/datasets/NagaSaiAbhinay/UnCLIPTextInterpolationSamples/resolve/main/lion_to_cub_2.png)
-![result_3](https://huggingface.co/datasets/NagaSaiAbhinay/UnCLIPTextInterpolationSamples/resolve/main/lion_to_cub_3.png)
-![result_4](https://huggingface.co/datasets/NagaSaiAbhinay/UnCLIPTextInterpolationSamples/resolve/main/lion_to_cub_4.png)
-![result_5](https://huggingface.co/datasets/NagaSaiAbhinay/UnCLIPTextInterpolationSamples/resolve/main/lion_to_cub_5.png)
-
-### UnCLIP Image Interpolation Pipeline
-
-This Diffusion Pipeline takes two images or an image-embeddings tensor of size 2 and interpolates between their embeddings using spherical interpolation (slerp). The input images/image embeddings are converted to image embeddings by the pipeline's image_encoder, and the interpolation is done on the resulting image embeddings over the number of steps specified (defaults to 5 steps).
-
-```python
-import torch
-from diffusers import DiffusionPipeline
-from PIL import Image
-
-device = torch.device("cpu" if not torch.cuda.is_available() else "cuda")
-dtype = torch.float16 if torch.cuda.is_available() else torch.bfloat16
-
-pipe = DiffusionPipeline.from_pretrained(
-    "kakaobrain/karlo-v1-alpha-image-variations",
-    torch_dtype=dtype,
-    custom_pipeline="unclip_image_interpolation"
-)
-pipe.to(device)
-
-images = [Image.open('./starry_night.jpg'), Image.open('./flowers.jpg')]
-# Pass exactly two images (or a tensor of two image embeddings) to interpolate between. 
-generator = torch.Generator(device=device).manual_seed(42)
-
-output = pipe(image=images, steps=6, generator=generator)
-
-for i, image in enumerate(output.images):
-    image.save('starry_to_flowers_%s.jpg' % i)
-```
-The original images:-
-
-![starry](https://huggingface.co/datasets/NagaSaiAbhinay/UnCLIPImageInterpolationSamples/resolve/main/starry_night.jpg)
-![flowers](https://huggingface.co/datasets/NagaSaiAbhinay/UnCLIPImageInterpolationSamples/resolve/main/flowers.jpg)
-
-The resulting images in order:-
-
-![result0](https://huggingface.co/datasets/NagaSaiAbhinay/UnCLIPImageInterpolationSamples/resolve/main/starry_to_flowers_0.png)
-![result1](https://huggingface.co/datasets/NagaSaiAbhinay/UnCLIPImageInterpolationSamples/resolve/main/starry_to_flowers_1.png)
-![result2](https://huggingface.co/datasets/NagaSaiAbhinay/UnCLIPImageInterpolationSamples/resolve/main/starry_to_flowers_2.png)
-![result3](https://huggingface.co/datasets/NagaSaiAbhinay/UnCLIPImageInterpolationSamples/resolve/main/starry_to_flowers_3.png)
-![result4](https://huggingface.co/datasets/NagaSaiAbhinay/UnCLIPImageInterpolationSamples/resolve/main/starry_to_flowers_4.png)
-![result5](https://huggingface.co/datasets/NagaSaiAbhinay/UnCLIPImageInterpolationSamples/resolve/main/starry_to_flowers_5.png)
-
-### DDIM Noise Comparative Analysis Pipeline
-#### **Research question: What visual concepts do the diffusion models learn from each noise level during training?**
-The [P2 weighting (CVPR 2022)](https://arxiv.org/abs/2204.00227) paper proposed an approach to answer the above question, which is their second contribution.
-The approach consists of the following steps:
-
-1. The input is an image x0.
-2. Perturb it to xt using a diffusion process q(xt|x0).
-    - `strength` is a value between 0.0 and 1.0 that controls the amount of noise added to the input image. Values approaching 1.0 allow for lots of variation but will also produce images that are not semantically consistent with the input.
-3. Reconstruct the image with the learned denoising process pθ(x̂0|xt).
-4. Compare x0 and x̂0 among various t to show how each step contributes to the sample.
-The authors used the [openai/guided-diffusion](https://github.com/openai/guided-diffusion) model to denoise images from the FFHQ dataset. This pipeline extends their second contribution by investigating DDIM on any input image.
-
-```python
-import torch
-import numpy as np
-from PIL import Image
-
-from diffusers import DiffusionPipeline
-
-image_path = "path/to/your/image"  # images from CelebA-HQ might be better
-image_pil = Image.open(image_path)
-image_name = image_path.split("/")[-1].split(".")[0]
-
-device = torch.device("cpu" if not torch.cuda.is_available() else "cuda")
-pipe = DiffusionPipeline.from_pretrained(
-    "google/ddpm-ema-celebahq-256",
-    custom_pipeline="ddim_noise_comparative_analysis",
-)
-pipe = pipe.to(device)
-
-for strength in np.linspace(0.1, 1, 25):
-    denoised_image, latent_timestep = pipe(
-        image_pil, strength=strength, return_dict=False
-    )
-    denoised_image = denoised_image[0]
-    denoised_image.save(
-        f"noise_comparative_analysis_{image_name}_{latent_timestep}.png"
-    )
-```
-
-Here is the result of this pipeline (using DDIM) on the CelebA-HQ dataset. 
- -![noise-comparative-analysis](https://user-images.githubusercontent.com/67547213/224677066-4474b2ed-56ab-4c27-87c6-de3c0255eb9c.jpeg) - -### CLIP Guided Img2Img Stable Diffusion - -CLIP guided Img2Img stable diffusion can help to generate more realistic images with an initial image -by guiding stable diffusion at every denoising step with an additional CLIP model. - -The following code requires roughly 12GB of GPU RAM. - -```python -from io import BytesIO -import requests -import torch -from diffusers import DiffusionPipeline -from PIL import Image -from transformers import CLIPFeatureExtractor, CLIPModel -feature_extractor = CLIPFeatureExtractor.from_pretrained( - "laion/CLIP-ViT-B-32-laion2B-s34B-b79K" -) -clip_model = CLIPModel.from_pretrained( - "laion/CLIP-ViT-B-32-laion2B-s34B-b79K", torch_dtype=torch.float16 -) -guided_pipeline = DiffusionPipeline.from_pretrained( - "CompVis/stable-diffusion-v1-4", - # custom_pipeline="clip_guided_stable_diffusion", - custom_pipeline="/home/njindal/diffusers/examples/community/clip_guided_stable_diffusion.py", - clip_model=clip_model, - feature_extractor=feature_extractor, - torch_dtype=torch.float16, -) -guided_pipeline.enable_attention_slicing() -guided_pipeline = guided_pipeline.to("cuda") -prompt = "fantasy book cover, full moon, fantasy forest landscape, golden vector elements, fantasy magic, dark light night, intricate, elegant, sharp focus, illustration, highly detailed, digital painting, concept art, matte, art by WLOP and Artgerm and Albert Bierstadt, masterpiece" -url = "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg" -response = requests.get(url) -init_image = Image.open(BytesIO(response.content)).convert("RGB") -image = guided_pipeline( - prompt=prompt, - num_inference_steps=30, - image=init_image, - strength=0.75, - guidance_scale=7.5, - clip_guidance_scale=100, - num_cutouts=4, - use_cutouts=False, -).images[0] -display(image) -``` - -Init Image - -![img2img_init_clip_guidance](https://huggingface.co/datasets/njindal/images/resolve/main/clip_guided_img2img_init.jpg) - -Output Image - -![img2img_clip_guidance](https://huggingface.co/datasets/njindal/images/resolve/main/clip_guided_img2img.jpg) diff --git a/spaces/declare-lab/tango/diffusers/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_model_editing.py b/spaces/declare-lab/tango/diffusers/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_model_editing.py deleted file mode 100644 index d841bd8a2d268232d02547c64c4b262dbf9d9d89..0000000000000000000000000000000000000000 --- a/spaces/declare-lab/tango/diffusers/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_model_editing.py +++ /dev/null @@ -1,796 +0,0 @@ -# Copyright 2023 TIME Authors and The HuggingFace Team. All rights reserved." -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
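-
-# This module implements `StableDiffusionModelEditingPipeline`, which applies TIME
-# ("Editing Implicit Assumptions in Text-to-Image Diffusion Models", https://arxiv.org/abs/2303.08084):
-# the cross-attention key/value projection matrices are updated with a closed-form solution so that
-# a source prompt (e.g. "A pack of roses") maps to a destination prompt (e.g. "A pack of blue roses").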
- -import copy -import inspect -from typing import Any, Callable, Dict, List, Optional, Union - -import torch -from transformers import CLIPFeatureExtractor, CLIPTextModel, CLIPTokenizer - -from ...loaders import TextualInversionLoaderMixin -from ...models import AutoencoderKL, UNet2DConditionModel -from ...schedulers import PNDMScheduler -from ...schedulers.scheduling_utils import SchedulerMixin -from ...utils import is_accelerate_available, is_accelerate_version, logging, randn_tensor -from ..pipeline_utils import DiffusionPipeline -from . import StableDiffusionPipelineOutput -from .safety_checker import StableDiffusionSafetyChecker - - -logger = logging.get_logger(__name__) # pylint: disable=invalid-name - -AUGS_CONST = ["A photo of ", "An image of ", "A picture of "] - -EXAMPLE_DOC_STRING = """ - Examples: - ```py - >>> import torch - >>> from diffusers import StableDiffusionModelEditingPipeline - - >>> model_ckpt = "CompVis/stable-diffusion-v1-4" - >>> pipe = StableDiffusionModelEditingPipeline.from_pretrained(model_ckpt) - - >>> pipe = pipe.to("cuda") - - >>> source_prompt = "A pack of roses" - >>> destination_prompt = "A pack of blue roses" - >>> pipe.edit_model(source_prompt, destination_prompt) - - >>> prompt = "A field of roses" - >>> image = pipe(prompt).images[0] - ``` -""" - - -class StableDiffusionModelEditingPipeline(DiffusionPipeline, TextualInversionLoaderMixin): - r""" - Pipeline for text-to-image model editing using "Editing Implicit Assumptions in Text-to-Image Diffusion Models". - - This model inherits from [`DiffusionPipeline`]. Check the superclass documentation for the generic methods the - library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.). - - Args: - vae ([`AutoencoderKL`]): - Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. - text_encoder ([`CLIPTextModel`]): - Frozen text-encoder. Stable Diffusion uses the text portion of - [CLIP](https://huggingface.co/docs/transformers/model_doc/clip#transformers.CLIPTextModel), specifically - the [clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14) variant. - tokenizer (`CLIPTokenizer`): - Tokenizer of class - [CLIPTokenizer](https://huggingface.co/docs/transformers/v4.21.0/en/model_doc/clip#transformers.CLIPTokenizer). - unet ([`UNet2DConditionModel`]): Conditional U-Net architecture to denoise the encoded image latents. - scheduler ([`SchedulerMixin`]): - A scheduler to be used in combination with `unet` to denoise the encoded image latents. - safety_checker ([`StableDiffusionSafetyChecker`]): - Classification module that estimates whether generated images could be considered offensive or harmful. - Please, refer to the [model card](https://huggingface.co/runwayml/stable-diffusion-v1-5) for details. - feature_extractor ([`CLIPFeatureExtractor`]): - Model that extracts features from generated images to be used as inputs for the `safety_checker`. - with_to_k ([`bool`]): - Whether to edit the key projection matrices along wiht the value projection matrices. - with_augs ([`list`]): - Textual augmentations to apply while editing the text-to-image model. Set to [] for no augmentations. 
- """ - _optional_components = ["safety_checker", "feature_extractor"] - - def __init__( - self, - vae: AutoencoderKL, - text_encoder: CLIPTextModel, - tokenizer: CLIPTokenizer, - unet: UNet2DConditionModel, - scheduler: SchedulerMixin, - safety_checker: StableDiffusionSafetyChecker, - feature_extractor: CLIPFeatureExtractor, - requires_safety_checker: bool = True, - with_to_k: bool = True, - with_augs: list = AUGS_CONST, - ): - super().__init__() - - if isinstance(scheduler, PNDMScheduler): - logger.error("PNDMScheduler for this pipeline is currently not supported.") - - if safety_checker is None and requires_safety_checker: - logger.warning( - f"You have disabled the safety checker for {self.__class__} by passing `safety_checker=None`. Ensure" - " that you abide to the conditions of the Stable Diffusion license and do not expose unfiltered" - " results in services or applications open to the public. Both the diffusers team and Hugging Face" - " strongly recommend to keep the safety filter enabled in all public facing circumstances, disabling" - " it only for use-cases that involve analyzing network behavior or auditing its results. For more" - " information, please have a look at https://github.com/huggingface/diffusers/pull/254 ." - ) - - if safety_checker is not None and feature_extractor is None: - raise ValueError( - "Make sure to define a feature extractor when loading {self.__class__} if you want to use the safety" - " checker. If you do not want to use the safety checker, you can pass `'safety_checker=None'` instead." - ) - - self.register_modules( - vae=vae, - text_encoder=text_encoder, - tokenizer=tokenizer, - unet=unet, - scheduler=scheduler, - safety_checker=safety_checker, - feature_extractor=feature_extractor, - ) - self.vae_scale_factor = 2 ** (len(self.vae.config.block_out_channels) - 1) - self.register_to_config(requires_safety_checker=requires_safety_checker) - - self.with_to_k = with_to_k - self.with_augs = with_augs - - # get cross-attention layers - ca_layers = [] - - def append_ca(net_): - if net_.__class__.__name__ == "CrossAttention": - ca_layers.append(net_) - elif hasattr(net_, "children"): - for net__ in net_.children(): - append_ca(net__) - - # recursively find all cross-attention layers in unet - for net in self.unet.named_children(): - if "down" in net[0]: - append_ca(net[1]) - elif "up" in net[0]: - append_ca(net[1]) - elif "mid" in net[0]: - append_ca(net[1]) - - # get projection matrices - self.ca_clip_layers = [l for l in ca_layers if l.to_v.in_features == 768] - self.projection_matrices = [l.to_v for l in self.ca_clip_layers] - self.og_matrices = [copy.deepcopy(l.to_v) for l in self.ca_clip_layers] - if self.with_to_k: - self.projection_matrices = self.projection_matrices + [l.to_k for l in self.ca_clip_layers] - self.og_matrices = self.og_matrices + [copy.deepcopy(l.to_k) for l in self.ca_clip_layers] - - # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.enable_vae_slicing - def enable_vae_slicing(self): - r""" - Enable sliced VAE decoding. - - When this option is enabled, the VAE will split the input tensor in slices to compute decoding in several - steps. This is useful to save some memory and allow larger batch sizes. - """ - self.vae.enable_slicing() - - # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.disable_vae_slicing - def disable_vae_slicing(self): - r""" - Disable sliced VAE decoding. 
If `enable_vae_slicing` was previously invoked, this method will go back to - computing decoding in one step. - """ - self.vae.disable_slicing() - - # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.enable_sequential_cpu_offload - def enable_sequential_cpu_offload(self, gpu_id=0): - r""" - Offloads all models to CPU using accelerate, significantly reducing memory usage. When called, unet, - text_encoder, vae and safety checker have their state dicts saved to CPU and then are moved to a - `torch.device('meta') and loaded to GPU only when their specific submodule has its `forward` method called. - Note that offloading happens on a submodule basis. Memory savings are higher than with - `enable_model_cpu_offload`, but performance is lower. - """ - if is_accelerate_available() and is_accelerate_version(">=", "0.14.0"): - from accelerate import cpu_offload - else: - raise ImportError("`enable_sequential_cpu_offload` requires `accelerate v0.14.0` or higher") - - device = torch.device(f"cuda:{gpu_id}") - - if self.device.type != "cpu": - self.to("cpu", silence_dtype_warnings=True) - torch.cuda.empty_cache() # otherwise we don't see the memory savings (but they probably exist) - - for cpu_offloaded_model in [self.unet, self.text_encoder, self.vae]: - cpu_offload(cpu_offloaded_model, device) - - if self.safety_checker is not None: - cpu_offload(self.safety_checker, execution_device=device, offload_buffers=True) - - @property - # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline._execution_device - def _execution_device(self): - r""" - Returns the device on which the pipeline's models will be executed. After calling - `pipeline.enable_sequential_cpu_offload()` the execution device can only be inferred from Accelerate's module - hooks. - """ - if not hasattr(self.unet, "_hf_hook"): - return self.device - for module in self.unet.modules(): - if ( - hasattr(module, "_hf_hook") - and hasattr(module._hf_hook, "execution_device") - and module._hf_hook.execution_device is not None - ): - return torch.device(module._hf_hook.execution_device) - return self.device - - # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline._encode_prompt - def _encode_prompt( - self, - prompt, - device, - num_images_per_prompt, - do_classifier_free_guidance, - negative_prompt=None, - prompt_embeds: Optional[torch.FloatTensor] = None, - negative_prompt_embeds: Optional[torch.FloatTensor] = None, - ): - r""" - Encodes the prompt into text encoder hidden states. - - Args: - prompt (`str` or `List[str]`, *optional*): - prompt to be encoded - device: (`torch.device`): - torch device - num_images_per_prompt (`int`): - number of images that should be generated per prompt - do_classifier_free_guidance (`bool`): - whether to use classifier free guidance or not - negative_prompt (`str` or `List[str]`, *optional*): - The prompt or prompts not to guide the image generation. If not defined, one has to pass - `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is - less than `1`). - prompt_embeds (`torch.FloatTensor`, *optional*): - Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not - provided, text embeddings will be generated from `prompt` input argument. - negative_prompt_embeds (`torch.FloatTensor`, *optional*): - Pre-generated negative text embeddings. 
Can be used to easily tweak text inputs, *e.g.* prompt - weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input - argument. - """ - if prompt is not None and isinstance(prompt, str): - batch_size = 1 - elif prompt is not None and isinstance(prompt, list): - batch_size = len(prompt) - else: - batch_size = prompt_embeds.shape[0] - - if prompt_embeds is None: - # textual inversion: procecss multi-vector tokens if necessary - if isinstance(self, TextualInversionLoaderMixin): - prompt = self.maybe_convert_prompt(prompt, self.tokenizer) - - text_inputs = self.tokenizer( - prompt, - padding="max_length", - max_length=self.tokenizer.model_max_length, - truncation=True, - return_tensors="pt", - ) - text_input_ids = text_inputs.input_ids - untruncated_ids = self.tokenizer(prompt, padding="longest", return_tensors="pt").input_ids - - if untruncated_ids.shape[-1] >= text_input_ids.shape[-1] and not torch.equal( - text_input_ids, untruncated_ids - ): - removed_text = self.tokenizer.batch_decode( - untruncated_ids[:, self.tokenizer.model_max_length - 1 : -1] - ) - logger.warning( - "The following part of your input was truncated because CLIP can only handle sequences up to" - f" {self.tokenizer.model_max_length} tokens: {removed_text}" - ) - - if hasattr(self.text_encoder.config, "use_attention_mask") and self.text_encoder.config.use_attention_mask: - attention_mask = text_inputs.attention_mask.to(device) - else: - attention_mask = None - - prompt_embeds = self.text_encoder( - text_input_ids.to(device), - attention_mask=attention_mask, - ) - prompt_embeds = prompt_embeds[0] - - prompt_embeds = prompt_embeds.to(dtype=self.text_encoder.dtype, device=device) - - bs_embed, seq_len, _ = prompt_embeds.shape - # duplicate text embeddings for each generation per prompt, using mps friendly method - prompt_embeds = prompt_embeds.repeat(1, num_images_per_prompt, 1) - prompt_embeds = prompt_embeds.view(bs_embed * num_images_per_prompt, seq_len, -1) - - # get unconditional embeddings for classifier free guidance - if do_classifier_free_guidance and negative_prompt_embeds is None: - uncond_tokens: List[str] - if negative_prompt is None: - uncond_tokens = [""] * batch_size - elif type(prompt) is not type(negative_prompt): - raise TypeError( - f"`negative_prompt` should be the same type to `prompt`, but got {type(negative_prompt)} !=" - f" {type(prompt)}." - ) - elif isinstance(negative_prompt, str): - uncond_tokens = [negative_prompt] - elif batch_size != len(negative_prompt): - raise ValueError( - f"`negative_prompt`: {negative_prompt} has batch size {len(negative_prompt)}, but `prompt`:" - f" {prompt} has batch size {batch_size}. Please make sure that passed `negative_prompt` matches" - " the batch size of `prompt`." 
- ) - else: - uncond_tokens = negative_prompt - - # textual inversion: procecss multi-vector tokens if necessary - if isinstance(self, TextualInversionLoaderMixin): - uncond_tokens = self.maybe_convert_prompt(uncond_tokens, self.tokenizer) - - max_length = prompt_embeds.shape[1] - uncond_input = self.tokenizer( - uncond_tokens, - padding="max_length", - max_length=max_length, - truncation=True, - return_tensors="pt", - ) - - if hasattr(self.text_encoder.config, "use_attention_mask") and self.text_encoder.config.use_attention_mask: - attention_mask = uncond_input.attention_mask.to(device) - else: - attention_mask = None - - negative_prompt_embeds = self.text_encoder( - uncond_input.input_ids.to(device), - attention_mask=attention_mask, - ) - negative_prompt_embeds = negative_prompt_embeds[0] - - if do_classifier_free_guidance: - # duplicate unconditional embeddings for each generation per prompt, using mps friendly method - seq_len = negative_prompt_embeds.shape[1] - - negative_prompt_embeds = negative_prompt_embeds.to(dtype=self.text_encoder.dtype, device=device) - - negative_prompt_embeds = negative_prompt_embeds.repeat(1, num_images_per_prompt, 1) - negative_prompt_embeds = negative_prompt_embeds.view(batch_size * num_images_per_prompt, seq_len, -1) - - # For classifier free guidance, we need to do two forward passes. - # Here we concatenate the unconditional and text embeddings into a single batch - # to avoid doing two forward passes - prompt_embeds = torch.cat([negative_prompt_embeds, prompt_embeds]) - - return prompt_embeds - - # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.run_safety_checker - def run_safety_checker(self, image, device, dtype): - if self.safety_checker is not None: - safety_checker_input = self.feature_extractor(self.numpy_to_pil(image), return_tensors="pt").to(device) - image, has_nsfw_concept = self.safety_checker( - images=image, clip_input=safety_checker_input.pixel_values.to(dtype) - ) - else: - has_nsfw_concept = None - return image, has_nsfw_concept - - # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.decode_latents - def decode_latents(self, latents): - latents = 1 / self.vae.config.scaling_factor * latents - image = self.vae.decode(latents).sample - image = (image / 2 + 0.5).clamp(0, 1) - # we always cast to float32 as this does not cause significant overhead and is compatible with bfloat16 - image = image.cpu().permute(0, 2, 3, 1).float().numpy() - return image - - # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.prepare_extra_step_kwargs - def prepare_extra_step_kwargs(self, generator, eta): - # prepare extra kwargs for the scheduler step, since not all schedulers have the same signature - # eta (η) is only used with the DDIMScheduler, it will be ignored for other schedulers. 
- # eta corresponds to η in DDIM paper: https://arxiv.org/abs/2010.02502 - # and should be between [0, 1] - - accepts_eta = "eta" in set(inspect.signature(self.scheduler.step).parameters.keys()) - extra_step_kwargs = {} - if accepts_eta: - extra_step_kwargs["eta"] = eta - - # check if the scheduler accepts generator - accepts_generator = "generator" in set(inspect.signature(self.scheduler.step).parameters.keys()) - if accepts_generator: - extra_step_kwargs["generator"] = generator - return extra_step_kwargs - - # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.check_inputs - def check_inputs( - self, - prompt, - height, - width, - callback_steps, - negative_prompt=None, - prompt_embeds=None, - negative_prompt_embeds=None, - ): - if height % 8 != 0 or width % 8 != 0: - raise ValueError(f"`height` and `width` have to be divisible by 8 but are {height} and {width}.") - - if (callback_steps is None) or ( - callback_steps is not None and (not isinstance(callback_steps, int) or callback_steps <= 0) - ): - raise ValueError( - f"`callback_steps` has to be a positive integer but is {callback_steps} of type" - f" {type(callback_steps)}." - ) - - if prompt is not None and prompt_embeds is not None: - raise ValueError( - f"Cannot forward both `prompt`: {prompt} and `prompt_embeds`: {prompt_embeds}. Please make sure to" - " only forward one of the two." - ) - elif prompt is None and prompt_embeds is None: - raise ValueError( - "Provide either `prompt` or `prompt_embeds`. Cannot leave both `prompt` and `prompt_embeds` undefined." - ) - elif prompt is not None and (not isinstance(prompt, str) and not isinstance(prompt, list)): - raise ValueError(f"`prompt` has to be of type `str` or `list` but is {type(prompt)}") - - if negative_prompt is not None and negative_prompt_embeds is not None: - raise ValueError( - f"Cannot forward both `negative_prompt`: {negative_prompt} and `negative_prompt_embeds`:" - f" {negative_prompt_embeds}. Please make sure to only forward one of the two." - ) - - if prompt_embeds is not None and negative_prompt_embeds is not None: - if prompt_embeds.shape != negative_prompt_embeds.shape: - raise ValueError( - "`prompt_embeds` and `negative_prompt_embeds` must have the same shape when passed directly, but" - f" got: `prompt_embeds` {prompt_embeds.shape} != `negative_prompt_embeds`" - f" {negative_prompt_embeds.shape}." - ) - - # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.prepare_latents - def prepare_latents(self, batch_size, num_channels_latents, height, width, dtype, device, generator, latents=None): - shape = (batch_size, num_channels_latents, height // self.vae_scale_factor, width // self.vae_scale_factor) - if isinstance(generator, list) and len(generator) != batch_size: - raise ValueError( - f"You have passed a list of generators of length {len(generator)}, but requested an effective batch" - f" size of {batch_size}. Make sure the batch size matches the length of the generators." 
- ) - - if latents is None: - latents = randn_tensor(shape, generator=generator, device=device, dtype=dtype) - else: - latents = latents.to(device) - - # scale the initial noise by the standard deviation required by the scheduler - latents = latents * self.scheduler.init_noise_sigma - return latents - - @torch.no_grad() - def edit_model( - self, - source_prompt: str, - destination_prompt: str, - lamb: float = 0.1, - restart_params: bool = True, - ): - r""" - Apply model editing via closed-form solution (see Eq. 5 in the TIME paper https://arxiv.org/abs/2303.08084) - - Args: - source_prompt (`str`): - The source prompt containing the concept to be edited. - destination_prompt (`str`): - The destination prompt. Must contain all words from source_prompt with additional ones to specify the - target edit. - lamb (`float`, *optional*, defaults to 0.1): - The lambda parameter specifying the regularization intesity. Smaller values increase the editing power. - restart_params (`bool`, *optional*, defaults to True): - Restart the model parameters to their pre-trained version before editing. This is done to avoid edit - compounding. When it is False, edits accumulate. - """ - - # restart LDM parameters - if restart_params: - num_ca_clip_layers = len(self.ca_clip_layers) - for idx_, l in enumerate(self.ca_clip_layers): - l.to_v = copy.deepcopy(self.og_matrices[idx_]) - self.projection_matrices[idx_] = l.to_v - if self.with_to_k: - l.to_k = copy.deepcopy(self.og_matrices[num_ca_clip_layers + idx_]) - self.projection_matrices[num_ca_clip_layers + idx_] = l.to_k - - # set up sentences - old_texts = [source_prompt] - new_texts = [destination_prompt] - # add augmentations - base = old_texts[0] if old_texts[0][0:1] != "A" else "a" + old_texts[0][1:] - for aug in self.with_augs: - old_texts.append(aug + base) - base = new_texts[0] if new_texts[0][0:1] != "A" else "a" + new_texts[0][1:] - for aug in self.with_augs: - new_texts.append(aug + base) - - # prepare input k* and v* - old_embs, new_embs = [], [] - for old_text, new_text in zip(old_texts, new_texts): - text_input = self.tokenizer( - [old_text, new_text], - padding="max_length", - max_length=self.tokenizer.model_max_length, - truncation=True, - return_tensors="pt", - ) - text_embeddings = self.text_encoder(text_input.input_ids.to(self.device))[0] - old_emb, new_emb = text_embeddings - old_embs.append(old_emb) - new_embs.append(new_emb) - - # identify corresponding destinations for each token in old_emb - idxs_replaces = [] - for old_text, new_text in zip(old_texts, new_texts): - tokens_a = self.tokenizer(old_text).input_ids - tokens_b = self.tokenizer(new_text).input_ids - tokens_a = [self.tokenizer.encode("a ")[1] if self.tokenizer.decode(t) == "an" else t for t in tokens_a] - tokens_b = [self.tokenizer.encode("a ")[1] if self.tokenizer.decode(t) == "an" else t for t in tokens_b] - num_orig_tokens = len(tokens_a) - idxs_replace = [] - j = 0 - for i in range(num_orig_tokens): - curr_token = tokens_a[i] - while tokens_b[j] != curr_token: - j += 1 - idxs_replace.append(j) - j += 1 - while j < 77: - idxs_replace.append(j) - j += 1 - while len(idxs_replace) < 77: - idxs_replace.append(76) - idxs_replaces.append(idxs_replace) - - # prepare batch: for each pair of setences, old context and new values - contexts, valuess = [], [] - for old_emb, new_emb, idxs_replace in zip(old_embs, new_embs, idxs_replaces): - context = old_emb.detach() - values = [] - with torch.no_grad(): - for layer in self.projection_matrices: - 
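-                    # Project the destination embedding through each (still frozen) projection matrix to get
-                    # the target values v* used by the closed-form update (Eq. 5 in the TIME paper).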
values.append(layer(new_emb[idxs_replace]).detach()) - contexts.append(context) - valuess.append(values) - - # edit the model - for layer_num in range(len(self.projection_matrices)): - # mat1 = \lambda W + \sum{v k^T} - mat1 = lamb * self.projection_matrices[layer_num].weight - - # mat2 = \lambda I + \sum{k k^T} - mat2 = lamb * torch.eye( - self.projection_matrices[layer_num].weight.shape[1], - device=self.projection_matrices[layer_num].weight.device, - ) - - # aggregate sums for mat1, mat2 - for context, values in zip(contexts, valuess): - context_vector = context.reshape(context.shape[0], context.shape[1], 1) - context_vector_T = context.reshape(context.shape[0], 1, context.shape[1]) - value_vector = values[layer_num].reshape(values[layer_num].shape[0], values[layer_num].shape[1], 1) - for_mat1 = (value_vector @ context_vector_T).sum(dim=0) - for_mat2 = (context_vector @ context_vector_T).sum(dim=0) - mat1 += for_mat1 - mat2 += for_mat2 - - # update projection matrix - self.projection_matrices[layer_num].weight = torch.nn.Parameter(mat1 @ torch.inverse(mat2)) - - @torch.no_grad() - def __call__( - self, - prompt: Union[str, List[str]] = None, - height: Optional[int] = None, - width: Optional[int] = None, - num_inference_steps: int = 50, - guidance_scale: float = 7.5, - negative_prompt: Optional[Union[str, List[str]]] = None, - num_images_per_prompt: Optional[int] = 1, - eta: float = 0.0, - generator: Optional[Union[torch.Generator, List[torch.Generator]]] = None, - latents: Optional[torch.FloatTensor] = None, - prompt_embeds: Optional[torch.FloatTensor] = None, - negative_prompt_embeds: Optional[torch.FloatTensor] = None, - output_type: Optional[str] = "pil", - return_dict: bool = True, - callback: Optional[Callable[[int, int, torch.FloatTensor], None]] = None, - callback_steps: int = 1, - cross_attention_kwargs: Optional[Dict[str, Any]] = None, - ): - r""" - Function invoked when calling the pipeline for generation. - - Args: - prompt (`str` or `List[str]`, *optional*): - The prompt or prompts to guide the image generation. If not defined, one has to pass `prompt_embeds`. - instead. - height (`int`, *optional*, defaults to self.unet.config.sample_size * self.vae_scale_factor): - The height in pixels of the generated image. - width (`int`, *optional*, defaults to self.unet.config.sample_size * self.vae_scale_factor): - The width in pixels of the generated image. - num_inference_steps (`int`, *optional*, defaults to 50): - The number of denoising steps. More denoising steps usually lead to a higher quality image at the - expense of slower inference. - guidance_scale (`float`, *optional*, defaults to 7.5): - Guidance scale as defined in [Classifier-Free Diffusion Guidance](https://arxiv.org/abs/2207.12598). - `guidance_scale` is defined as `w` of equation 2. of [Imagen - Paper](https://arxiv.org/pdf/2205.11487.pdf). Guidance scale is enabled by setting `guidance_scale > - 1`. Higher guidance scale encourages to generate images that are closely linked to the text `prompt`, - usually at the expense of lower image quality. - negative_prompt (`str` or `List[str]`, *optional*): - The prompt or prompts not to guide the image generation. If not defined, one has to pass - `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is - less than `1`). - num_images_per_prompt (`int`, *optional*, defaults to 1): - The number of images to generate per prompt. 
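In the notation of the TIME paper, the edit loop above assembles the closed-form update of Eq. 5: with $k_i$ the source-prompt context embeddings and $v_i$ the pre-edit projection applied to the aligned destination embeddings, each cross-attention `to_v` (and optionally `to_k`) weight is replaced by

$$W \leftarrow \Big(\lambda W + \sum_i v_i k_i^{\top}\Big)\Big(\lambda I + \sum_i k_i k_i^{\top}\Big)^{-1},$$

which is exactly `mat1 @ torch.inverse(mat2)` in the code: a larger `lamb` keeps the edited weights close to the pre-trained ones, while smaller values give the edit more power, matching the `edit_model` docstring.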
- eta (`float`, *optional*, defaults to 0.0): - Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to - [`schedulers.DDIMScheduler`], will be ignored for others. - generator (`torch.Generator` or `List[torch.Generator]`, *optional*): - One or a list of [torch generator(s)](https://pytorch.org/docs/stable/generated/torch.Generator.html) - to make generation deterministic. - latents (`torch.FloatTensor`, *optional*): - Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image - generation. Can be used to tweak the same generation with different prompts. If not provided, a latents - tensor will ge generated by sampling using the supplied random `generator`. - prompt_embeds (`torch.FloatTensor`, *optional*): - Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not - provided, text embeddings will be generated from `prompt` input argument. - negative_prompt_embeds (`torch.FloatTensor`, *optional*): - Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt - weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input - argument. - output_type (`str`, *optional*, defaults to `"pil"`): - The output format of the generate image. Choose between - [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`. - return_dict (`bool`, *optional*, defaults to `True`): - Whether or not to return a [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] instead of a - plain tuple. - callback (`Callable`, *optional*): - A function that will be called every `callback_steps` steps during inference. The function will be - called with the following arguments: `callback(step: int, timestep: int, latents: torch.FloatTensor)`. - callback_steps (`int`, *optional*, defaults to 1): - The frequency at which the `callback` function will be called. If not specified, the callback will be - called at every step. - cross_attention_kwargs (`dict`, *optional*): - A kwargs dictionary that if specified is passed along to the `AttentionProcessor` as defined under - `self.processor` in - [diffusers.cross_attention](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/cross_attention.py). - - Examples: - - Returns: - [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] or `tuple`: - [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] if `return_dict` is True, otherwise a `tuple. - When returning a tuple, the first element is a list with the generated images, and the second element is a - list of `bool`s denoting whether the corresponding generated image likely represents "not-safe-for-work" - (nsfw) content, according to the `safety_checker`. - """ - # 0. Default height and width to unet - height = height or self.unet.config.sample_size * self.vae_scale_factor - width = width or self.unet.config.sample_size * self.vae_scale_factor - - # 1. Check inputs. Raise error if not correct - self.check_inputs( - prompt, height, width, callback_steps, negative_prompt, prompt_embeds, negative_prompt_embeds - ) - - # 2. 
Define call parameters - if prompt is not None and isinstance(prompt, str): - batch_size = 1 - elif prompt is not None and isinstance(prompt, list): - batch_size = len(prompt) - else: - batch_size = prompt_embeds.shape[0] - - device = self._execution_device - # here `guidance_scale` is defined analog to the guidance weight `w` of equation (2) - # of the Imagen paper: https://arxiv.org/pdf/2205.11487.pdf . `guidance_scale = 1` - # corresponds to doing no classifier free guidance. - do_classifier_free_guidance = guidance_scale > 1.0 - - # 3. Encode input prompt - prompt_embeds = self._encode_prompt( - prompt, - device, - num_images_per_prompt, - do_classifier_free_guidance, - negative_prompt, - prompt_embeds=prompt_embeds, - negative_prompt_embeds=negative_prompt_embeds, - ) - - # 4. Prepare timesteps - self.scheduler.set_timesteps(num_inference_steps, device=device) - timesteps = self.scheduler.timesteps - - # 5. Prepare latent variables - num_channels_latents = self.unet.in_channels - latents = self.prepare_latents( - batch_size * num_images_per_prompt, - num_channels_latents, - height, - width, - prompt_embeds.dtype, - device, - generator, - latents, - ) - - # 6. Prepare extra step kwargs. TODO: Logic should ideally just be moved out of the pipeline - extra_step_kwargs = self.prepare_extra_step_kwargs(generator, eta) - - # 7. Denoising loop - num_warmup_steps = len(timesteps) - num_inference_steps * self.scheduler.order - with self.progress_bar(total=num_inference_steps) as progress_bar: - for i, t in enumerate(timesteps): - # expand the latents if we are doing classifier free guidance - latent_model_input = torch.cat([latents] * 2) if do_classifier_free_guidance else latents - latent_model_input = self.scheduler.scale_model_input(latent_model_input, t) - - # predict the noise residual - noise_pred = self.unet( - latent_model_input, - t, - encoder_hidden_states=prompt_embeds, - cross_attention_kwargs=cross_attention_kwargs, - ).sample - - # perform guidance - if do_classifier_free_guidance: - noise_pred_uncond, noise_pred_text = noise_pred.chunk(2) - noise_pred = noise_pred_uncond + guidance_scale * (noise_pred_text - noise_pred_uncond) - - # compute the previous noisy sample x_t -> x_t-1 - latents = self.scheduler.step(noise_pred, t, latents, **extra_step_kwargs).prev_sample - - # call the callback, if provided - if i == len(timesteps) - 1 or ((i + 1) > num_warmup_steps and (i + 1) % self.scheduler.order == 0): - progress_bar.update() - if callback is not None and i % callback_steps == 0: - callback(i, t, latents) - - if output_type == "latent": - image = latents - has_nsfw_concept = None - elif output_type == "pil": - # 8. Post-processing - image = self.decode_latents(latents) - - # 9. Run safety checker - image, has_nsfw_concept = self.run_safety_checker(image, device, prompt_embeds.dtype) - - # 10. Convert to PIL - image = self.numpy_to_pil(image) - else: - # 8. Post-processing - image = self.decode_latents(latents) - - # 9. 
Run safety checker - image, has_nsfw_concept = self.run_safety_checker(image, device, prompt_embeds.dtype) - - # Offload last model to CPU - if hasattr(self, "final_offload_hook") and self.final_offload_hook is not None: - self.final_offload_hook.offload() - - if not return_dict: - return (image, has_nsfw_concept) - - return StableDiffusionPipelineOutput(images=image, nsfw_content_detected=has_nsfw_concept) diff --git a/spaces/declare-lab/tango/diffusers/tests/pipelines/stable_diffusion/test_stable_diffusion.py b/spaces/declare-lab/tango/diffusers/tests/pipelines/stable_diffusion/test_stable_diffusion.py deleted file mode 100644 index 857122782d354cd5fcd5b69daf2f601be799c5d1..0000000000000000000000000000000000000000 --- a/spaces/declare-lab/tango/diffusers/tests/pipelines/stable_diffusion/test_stable_diffusion.py +++ /dev/null @@ -1,1025 +0,0 @@ -# coding=utf-8 -# Copyright 2023 HuggingFace Inc. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - - -import gc -import tempfile -import time -import unittest - -import numpy as np -import torch -from huggingface_hub import hf_hub_download -from transformers import CLIPTextConfig, CLIPTextModel, CLIPTokenizer - -from diffusers import ( - AutoencoderKL, - DDIMScheduler, - DPMSolverMultistepScheduler, - EulerAncestralDiscreteScheduler, - EulerDiscreteScheduler, - LMSDiscreteScheduler, - PNDMScheduler, - StableDiffusionPipeline, - UNet2DConditionModel, - logging, -) -from diffusers.utils import load_numpy, nightly, slow, torch_device -from diffusers.utils.testing_utils import CaptureLogger, require_torch_gpu - -from ...models.test_models_unet_2d_condition import create_lora_layers -from ...pipeline_params import TEXT_TO_IMAGE_BATCH_PARAMS, TEXT_TO_IMAGE_PARAMS -from ...test_pipelines_common import PipelineTesterMixin - - -torch.backends.cuda.matmul.allow_tf32 = False - - -class StableDiffusionPipelineFastTests(PipelineTesterMixin, unittest.TestCase): - pipeline_class = StableDiffusionPipeline - params = TEXT_TO_IMAGE_PARAMS - batch_params = TEXT_TO_IMAGE_BATCH_PARAMS - - def get_dummy_components(self): - torch.manual_seed(0) - unet = UNet2DConditionModel( - block_out_channels=(32, 64), - layers_per_block=2, - sample_size=32, - in_channels=4, - out_channels=4, - down_block_types=("DownBlock2D", "CrossAttnDownBlock2D"), - up_block_types=("CrossAttnUpBlock2D", "UpBlock2D"), - cross_attention_dim=32, - ) - scheduler = DDIMScheduler( - beta_start=0.00085, - beta_end=0.012, - beta_schedule="scaled_linear", - clip_sample=False, - set_alpha_to_one=False, - ) - torch.manual_seed(0) - vae = AutoencoderKL( - block_out_channels=[32, 64], - in_channels=3, - out_channels=3, - down_block_types=["DownEncoderBlock2D", "DownEncoderBlock2D"], - up_block_types=["UpDecoderBlock2D", "UpDecoderBlock2D"], - latent_channels=4, - ) - torch.manual_seed(0) - text_encoder_config = CLIPTextConfig( - bos_token_id=0, - eos_token_id=2, - hidden_size=32, - intermediate_size=37, - layer_norm_eps=1e-05, - num_attention_heads=4, - num_hidden_layers=5, - pad_token_id=1, - vocab_size=1000, - 
) - text_encoder = CLIPTextModel(text_encoder_config) - tokenizer = CLIPTokenizer.from_pretrained("hf-internal-testing/tiny-random-clip") - - components = { - "unet": unet, - "scheduler": scheduler, - "vae": vae, - "text_encoder": text_encoder, - "tokenizer": tokenizer, - "safety_checker": None, - "feature_extractor": None, - } - return components - - def get_dummy_inputs(self, device, seed=0): - if str(device).startswith("mps"): - generator = torch.manual_seed(seed) - else: - generator = torch.Generator(device=device).manual_seed(seed) - inputs = { - "prompt": "A painting of a squirrel eating a burger", - "generator": generator, - "num_inference_steps": 2, - "guidance_scale": 6.0, - "output_type": "numpy", - } - return inputs - - def test_stable_diffusion_ddim(self): - device = "cpu" # ensure determinism for the device-dependent torch.Generator - - components = self.get_dummy_components() - sd_pipe = StableDiffusionPipeline(**components) - sd_pipe = sd_pipe.to(torch_device) - sd_pipe.set_progress_bar_config(disable=None) - - inputs = self.get_dummy_inputs(device) - output = sd_pipe(**inputs) - image = output.images - - image_slice = image[0, -3:, -3:, -1] - - assert image.shape == (1, 64, 64, 3) - expected_slice = np.array([0.5643, 0.6017, 0.4799, 0.5267, 0.5584, 0.4641, 0.5159, 0.4963, 0.4791]) - - assert np.abs(image_slice.flatten() - expected_slice).max() < 1e-2 - - def test_stable_diffusion_lora(self): - device = "cpu" # ensure determinism for the device-dependent torch.Generator - - components = self.get_dummy_components() - sd_pipe = StableDiffusionPipeline(**components) - sd_pipe = sd_pipe.to(torch_device) - sd_pipe.set_progress_bar_config(disable=None) - - # forward 1 - inputs = self.get_dummy_inputs(device) - output = sd_pipe(**inputs) - image = output.images - image_slice = image[0, -3:, -3:, -1] - - # set lora layers - lora_attn_procs = create_lora_layers(sd_pipe.unet) - sd_pipe.unet.set_attn_processor(lora_attn_procs) - sd_pipe = sd_pipe.to(torch_device) - - # forward 2 - inputs = self.get_dummy_inputs(device) - output = sd_pipe(**inputs, cross_attention_kwargs={"scale": 0.0}) - image = output.images - image_slice_1 = image[0, -3:, -3:, -1] - - # forward 3 - inputs = self.get_dummy_inputs(device) - output = sd_pipe(**inputs, cross_attention_kwargs={"scale": 0.5}) - image = output.images - image_slice_2 = image[0, -3:, -3:, -1] - - assert np.abs(image_slice - image_slice_1).max() < 1e-2 - assert np.abs(image_slice - image_slice_2).max() > 1e-2 - - def test_stable_diffusion_prompt_embeds(self): - components = self.get_dummy_components() - sd_pipe = StableDiffusionPipeline(**components) - sd_pipe = sd_pipe.to(torch_device) - sd_pipe = sd_pipe.to(torch_device) - sd_pipe.set_progress_bar_config(disable=None) - - inputs = self.get_dummy_inputs(torch_device) - inputs["prompt"] = 3 * [inputs["prompt"]] - - # forward - output = sd_pipe(**inputs) - image_slice_1 = output.images[0, -3:, -3:, -1] - - inputs = self.get_dummy_inputs(torch_device) - prompt = 3 * [inputs.pop("prompt")] - - text_inputs = sd_pipe.tokenizer( - prompt, - padding="max_length", - max_length=sd_pipe.tokenizer.model_max_length, - truncation=True, - return_tensors="pt", - ) - text_inputs = text_inputs["input_ids"].to(torch_device) - - prompt_embeds = sd_pipe.text_encoder(text_inputs)[0] - - inputs["prompt_embeds"] = prompt_embeds - - # forward - output = sd_pipe(**inputs) - image_slice_2 = output.images[0, -3:, -3:, -1] - - assert np.abs(image_slice_1.flatten() - image_slice_2.flatten()).max() < 1e-4 - - def 
test_stable_diffusion_negative_prompt_embeds(self): - components = self.get_dummy_components() - sd_pipe = StableDiffusionPipeline(**components) - sd_pipe = sd_pipe.to(torch_device) - sd_pipe = sd_pipe.to(torch_device) - sd_pipe.set_progress_bar_config(disable=None) - - inputs = self.get_dummy_inputs(torch_device) - negative_prompt = 3 * ["this is a negative prompt"] - inputs["negative_prompt"] = negative_prompt - inputs["prompt"] = 3 * [inputs["prompt"]] - - # forward - output = sd_pipe(**inputs) - image_slice_1 = output.images[0, -3:, -3:, -1] - - inputs = self.get_dummy_inputs(torch_device) - prompt = 3 * [inputs.pop("prompt")] - - embeds = [] - for p in [prompt, negative_prompt]: - text_inputs = sd_pipe.tokenizer( - p, - padding="max_length", - max_length=sd_pipe.tokenizer.model_max_length, - truncation=True, - return_tensors="pt", - ) - text_inputs = text_inputs["input_ids"].to(torch_device) - - embeds.append(sd_pipe.text_encoder(text_inputs)[0]) - - inputs["prompt_embeds"], inputs["negative_prompt_embeds"] = embeds - - # forward - output = sd_pipe(**inputs) - image_slice_2 = output.images[0, -3:, -3:, -1] - - assert np.abs(image_slice_1.flatten() - image_slice_2.flatten()).max() < 1e-4 - - def test_stable_diffusion_ddim_factor_8(self): - device = "cpu" # ensure determinism for the device-dependent torch.Generator - - components = self.get_dummy_components() - sd_pipe = StableDiffusionPipeline(**components) - sd_pipe = sd_pipe.to(device) - sd_pipe.set_progress_bar_config(disable=None) - - inputs = self.get_dummy_inputs(device) - output = sd_pipe(**inputs, height=136, width=136) - image = output.images - - image_slice = image[0, -3:, -3:, -1] - - assert image.shape == (1, 136, 136, 3) - expected_slice = np.array([0.5524, 0.5626, 0.6069, 0.4727, 0.386, 0.3995, 0.4613, 0.4328, 0.4269]) - - assert np.abs(image_slice.flatten() - expected_slice).max() < 1e-2 - - def test_stable_diffusion_pndm(self): - device = "cpu" # ensure determinism for the device-dependent torch.Generator - components = self.get_dummy_components() - sd_pipe = StableDiffusionPipeline(**components) - sd_pipe.scheduler = PNDMScheduler(skip_prk_steps=True) - sd_pipe = sd_pipe.to(device) - sd_pipe.set_progress_bar_config(disable=None) - - inputs = self.get_dummy_inputs(device) - output = sd_pipe(**inputs) - image = output.images - image_slice = image[0, -3:, -3:, -1] - - assert image.shape == (1, 64, 64, 3) - expected_slice = np.array([0.5094, 0.5674, 0.4667, 0.5125, 0.5696, 0.4674, 0.5277, 0.4964, 0.4945]) - - assert np.abs(image_slice.flatten() - expected_slice).max() < 1e-2 - - def test_stable_diffusion_no_safety_checker(self): - pipe = StableDiffusionPipeline.from_pretrained( - "hf-internal-testing/tiny-stable-diffusion-lms-pipe", safety_checker=None - ) - assert isinstance(pipe, StableDiffusionPipeline) - assert isinstance(pipe.scheduler, LMSDiscreteScheduler) - assert pipe.safety_checker is None - - image = pipe("example prompt", num_inference_steps=2).images[0] - assert image is not None - - # check that there's no error when saving a pipeline with one of the models being None - with tempfile.TemporaryDirectory() as tmpdirname: - pipe.save_pretrained(tmpdirname) - pipe = StableDiffusionPipeline.from_pretrained(tmpdirname) - - # sanity check that the pipeline still works - assert pipe.safety_checker is None - image = pipe("example prompt", num_inference_steps=2).images[0] - assert image is not None - - def test_stable_diffusion_k_lms(self): - device = "cpu" # ensure determinism for the device-dependent torch.Generator 
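All of these fast tests rely on the same slice-regression idiom. A standalone illustration, with random data standing in for pipeline output and the reference values normally being hard-coded known-good numbers:

import numpy as np

image = np.random.RandomState(0).rand(1, 64, 64, 3)   # stand-in for output.images
image_slice = image[0, -3:, -3:, -1]                   # 3x3 corner of the last channel
expected_slice = image_slice.flatten().copy()          # real tests pin these to reference values
assert image.shape == (1, 64, 64, 3)
assert np.abs(image_slice.flatten() - expected_slice).max() < 1e-2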
- - components = self.get_dummy_components() - sd_pipe = StableDiffusionPipeline(**components) - sd_pipe.scheduler = LMSDiscreteScheduler.from_config(sd_pipe.scheduler.config) - sd_pipe = sd_pipe.to(device) - sd_pipe.set_progress_bar_config(disable=None) - - inputs = self.get_dummy_inputs(device) - output = sd_pipe(**inputs) - image = output.images - image_slice = image[0, -3:, -3:, -1] - - assert image.shape == (1, 64, 64, 3) - expected_slice = np.array( - [ - 0.47082293033599854, - 0.5371589064598083, - 0.4562119245529175, - 0.5220914483070374, - 0.5733777284622192, - 0.4795039892196655, - 0.5465868711471558, - 0.5074326395988464, - 0.5042197108268738, - ] - ) - - assert np.abs(image_slice.flatten() - expected_slice).max() < 1e-2 - - def test_stable_diffusion_k_euler_ancestral(self): - device = "cpu" # ensure determinism for the device-dependent torch.Generator - - components = self.get_dummy_components() - sd_pipe = StableDiffusionPipeline(**components) - sd_pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(sd_pipe.scheduler.config) - sd_pipe = sd_pipe.to(device) - sd_pipe.set_progress_bar_config(disable=None) - - inputs = self.get_dummy_inputs(device) - output = sd_pipe(**inputs) - image = output.images - image_slice = image[0, -3:, -3:, -1] - - assert image.shape == (1, 64, 64, 3) - expected_slice = np.array( - [ - 0.4707113206386566, - 0.5372191071510315, - 0.4563021957874298, - 0.5220003724098206, - 0.5734264850616455, - 0.4794946610927582, - 0.5463782548904419, - 0.5074145197868347, - 0.504422664642334, - ] - ) - - assert np.abs(image_slice.flatten() - expected_slice).max() < 1e-2 - - def test_stable_diffusion_k_euler(self): - device = "cpu" # ensure determinism for the device-dependent torch.Generator - - components = self.get_dummy_components() - sd_pipe = StableDiffusionPipeline(**components) - sd_pipe.scheduler = EulerDiscreteScheduler.from_config(sd_pipe.scheduler.config) - sd_pipe = sd_pipe.to(device) - sd_pipe.set_progress_bar_config(disable=None) - - inputs = self.get_dummy_inputs(device) - output = sd_pipe(**inputs) - image = output.images - image_slice = image[0, -3:, -3:, -1] - - assert image.shape == (1, 64, 64, 3) - expected_slice = np.array( - [ - 0.47082313895225525, - 0.5371587872505188, - 0.4562119245529175, - 0.5220913887023926, - 0.5733776688575745, - 0.47950395941734314, - 0.546586811542511, - 0.5074326992034912, - 0.5042197108268738, - ] - ) - - assert np.abs(image_slice.flatten() - expected_slice).max() < 1e-2 - - def test_stable_diffusion_vae_slicing(self): - device = "cpu" # ensure determinism for the device-dependent torch.Generator - components = self.get_dummy_components() - components["scheduler"] = LMSDiscreteScheduler.from_config(components["scheduler"].config) - sd_pipe = StableDiffusionPipeline(**components) - sd_pipe = sd_pipe.to(device) - sd_pipe.set_progress_bar_config(disable=None) - - image_count = 4 - - inputs = self.get_dummy_inputs(device) - inputs["prompt"] = [inputs["prompt"]] * image_count - output_1 = sd_pipe(**inputs) - - # make sure sliced vae decode yields the same result - sd_pipe.enable_vae_slicing() - inputs = self.get_dummy_inputs(device) - inputs["prompt"] = [inputs["prompt"]] * image_count - output_2 = sd_pipe(**inputs) - - # there is a small discrepancy at image borders vs. 
full batch decode - assert np.abs(output_2.images.flatten() - output_1.images.flatten()).max() < 3e-3 - - def test_stable_diffusion_vae_tiling(self): - device = "cpu" # ensure determinism for the device-dependent torch.Generator - components = self.get_dummy_components() - - # make sure here that pndm scheduler skips prk - components["safety_checker"] = None - sd_pipe = StableDiffusionPipeline(**components) - sd_pipe = sd_pipe.to(device) - sd_pipe.set_progress_bar_config(disable=None) - - prompt = "A painting of a squirrel eating a burger" - - # Test that tiled decode at 512x512 yields the same result as the non-tiled decode - generator = torch.Generator(device=device).manual_seed(0) - output_1 = sd_pipe([prompt], generator=generator, guidance_scale=6.0, num_inference_steps=2, output_type="np") - - # make sure tiled vae decode yields the same result - sd_pipe.enable_vae_tiling() - generator = torch.Generator(device=device).manual_seed(0) - output_2 = sd_pipe([prompt], generator=generator, guidance_scale=6.0, num_inference_steps=2, output_type="np") - - assert np.abs(output_2.images.flatten() - output_1.images.flatten()).max() < 5e-1 - - # test that tiled decode works with various shapes - shapes = [(1, 4, 73, 97), (1, 4, 97, 73), (1, 4, 49, 65), (1, 4, 65, 49)] - for shape in shapes: - zeros = torch.zeros(shape).to(device) - sd_pipe.vae.decode(zeros) - - def test_stable_diffusion_negative_prompt(self): - device = "cpu" # ensure determinism for the device-dependent torch.Generator - components = self.get_dummy_components() - components["scheduler"] = PNDMScheduler(skip_prk_steps=True) - sd_pipe = StableDiffusionPipeline(**components) - sd_pipe = sd_pipe.to(device) - sd_pipe.set_progress_bar_config(disable=None) - - inputs = self.get_dummy_inputs(device) - negative_prompt = "french fries" - output = sd_pipe(**inputs, negative_prompt=negative_prompt) - - image = output.images - image_slice = image[0, -3:, -3:, -1] - - assert image.shape == (1, 64, 64, 3) - expected_slice = np.array( - [ - 0.5108221173286438, - 0.5688379406929016, - 0.4685141146183014, - 0.5098261833190918, - 0.5657756328582764, - 0.4631010890007019, - 0.5226285457611084, - 0.49129390716552734, - 0.4899061322212219, - ] - ) - - assert np.abs(image_slice.flatten() - expected_slice).max() < 1e-2 - - def test_stable_diffusion_long_prompt(self): - components = self.get_dummy_components() - components["scheduler"] = LMSDiscreteScheduler.from_config(components["scheduler"].config) - sd_pipe = StableDiffusionPipeline(**components) - sd_pipe = sd_pipe.to(torch_device) - sd_pipe.set_progress_bar_config(disable=None) - - do_classifier_free_guidance = True - negative_prompt = None - num_images_per_prompt = 1 - logger = logging.get_logger("diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion") - - prompt = 25 * "@" - with CaptureLogger(logger) as cap_logger_3: - text_embeddings_3 = sd_pipe._encode_prompt( - prompt, torch_device, num_images_per_prompt, do_classifier_free_guidance, negative_prompt - ) - - prompt = 100 * "@" - with CaptureLogger(logger) as cap_logger: - text_embeddings = sd_pipe._encode_prompt( - prompt, torch_device, num_images_per_prompt, do_classifier_free_guidance, negative_prompt - ) - - negative_prompt = "Hello" - with CaptureLogger(logger) as cap_logger_2: - text_embeddings_2 = sd_pipe._encode_prompt( - prompt, torch_device, num_images_per_prompt, do_classifier_free_guidance, negative_prompt - ) - - assert text_embeddings_3.shape == text_embeddings_2.shape == text_embeddings.shape - assert 
text_embeddings.shape[1] == 77 - - assert cap_logger.out == cap_logger_2.out - # 100 - 77 + 1 (BOS token) + 1 (EOS token) = 25 - assert cap_logger.out.count("@") == 25 - assert cap_logger_3.out == "" - - def test_stable_diffusion_height_width_opt(self): - components = self.get_dummy_components() - components["scheduler"] = LMSDiscreteScheduler.from_config(components["scheduler"].config) - sd_pipe = StableDiffusionPipeline(**components) - sd_pipe = sd_pipe.to(torch_device) - sd_pipe.set_progress_bar_config(disable=None) - - prompt = "hey" - - output = sd_pipe(prompt, num_inference_steps=1, output_type="np") - image_shape = output.images[0].shape[:2] - assert image_shape == (64, 64) - - output = sd_pipe(prompt, num_inference_steps=1, height=96, width=96, output_type="np") - image_shape = output.images[0].shape[:2] - assert image_shape == (96, 96) - - config = dict(sd_pipe.unet.config) - config["sample_size"] = 96 - sd_pipe.unet = UNet2DConditionModel.from_config(config).to(torch_device) - output = sd_pipe(prompt, num_inference_steps=1, output_type="np") - image_shape = output.images[0].shape[:2] - assert image_shape == (192, 192) - - -@slow -@require_torch_gpu -class StableDiffusionPipelineSlowTests(unittest.TestCase): - def tearDown(self): - super().tearDown() - gc.collect() - torch.cuda.empty_cache() - - def get_inputs(self, device, generator_device="cpu", dtype=torch.float32, seed=0): - generator = torch.Generator(device=generator_device).manual_seed(seed) - latents = np.random.RandomState(seed).standard_normal((1, 4, 64, 64)) - latents = torch.from_numpy(latents).to(device=device, dtype=dtype) - inputs = { - "prompt": "a photograph of an astronaut riding a horse", - "latents": latents, - "generator": generator, - "num_inference_steps": 3, - "guidance_scale": 7.5, - "output_type": "numpy", - } - return inputs - - def test_stable_diffusion_1_1_pndm(self): - sd_pipe = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-1") - sd_pipe = sd_pipe.to(torch_device) - sd_pipe.set_progress_bar_config(disable=None) - - inputs = self.get_inputs(torch_device) - image = sd_pipe(**inputs).images - image_slice = image[0, -3:, -3:, -1].flatten() - - assert image.shape == (1, 512, 512, 3) - expected_slice = np.array([0.43625, 0.43554, 0.36670, 0.40660, 0.39703, 0.38658, 0.43936, 0.43557, 0.40592]) - assert np.abs(image_slice - expected_slice).max() < 1e-4 - - def test_stable_diffusion_1_4_pndm(self): - sd_pipe = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4") - sd_pipe = sd_pipe.to(torch_device) - sd_pipe.set_progress_bar_config(disable=None) - - inputs = self.get_inputs(torch_device) - image = sd_pipe(**inputs).images - image_slice = image[0, -3:, -3:, -1].flatten() - - assert image.shape == (1, 512, 512, 3) - expected_slice = np.array([0.57400, 0.47841, 0.31625, 0.63583, 0.58306, 0.55056, 0.50825, 0.56306, 0.55748]) - assert np.abs(image_slice - expected_slice).max() < 1e-4 - - def test_stable_diffusion_ddim(self): - sd_pipe = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4", safety_checker=None) - sd_pipe.scheduler = DDIMScheduler.from_config(sd_pipe.scheduler.config) - sd_pipe = sd_pipe.to(torch_device) - sd_pipe.set_progress_bar_config(disable=None) - - inputs = self.get_inputs(torch_device) - image = sd_pipe(**inputs).images - image_slice = image[0, -3:, -3:, -1].flatten() - - assert image.shape == (1, 512, 512, 3) - expected_slice = np.array([0.38019, 0.28647, 0.27321, 0.40377, 0.38290, 0.35446, 0.39218, 0.38165, 0.42239]) - assert 
np.abs(image_slice - expected_slice).max() < 1e-4 - - def test_stable_diffusion_lms(self): - sd_pipe = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4", safety_checker=None) - sd_pipe.scheduler = LMSDiscreteScheduler.from_config(sd_pipe.scheduler.config) - sd_pipe = sd_pipe.to(torch_device) - sd_pipe.set_progress_bar_config(disable=None) - - inputs = self.get_inputs(torch_device) - image = sd_pipe(**inputs).images - image_slice = image[0, -3:, -3:, -1].flatten() - - assert image.shape == (1, 512, 512, 3) - expected_slice = np.array([0.10542, 0.09620, 0.07332, 0.09015, 0.09382, 0.07597, 0.08496, 0.07806, 0.06455]) - assert np.abs(image_slice - expected_slice).max() < 1e-4 - - def test_stable_diffusion_dpm(self): - sd_pipe = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4", safety_checker=None) - sd_pipe.scheduler = DPMSolverMultistepScheduler.from_config(sd_pipe.scheduler.config) - sd_pipe = sd_pipe.to(torch_device) - sd_pipe.set_progress_bar_config(disable=None) - - inputs = self.get_inputs(torch_device) - image = sd_pipe(**inputs).images - image_slice = image[0, -3:, -3:, -1].flatten() - - assert image.shape == (1, 512, 512, 3) - expected_slice = np.array([0.03503, 0.03494, 0.01087, 0.03128, 0.02552, 0.00803, 0.00742, 0.00372, 0.00000]) - assert np.abs(image_slice - expected_slice).max() < 1e-4 - - def test_stable_diffusion_attention_slicing(self): - torch.cuda.reset_peak_memory_stats() - pipe = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16) - pipe = pipe.to(torch_device) - pipe.set_progress_bar_config(disable=None) - - # enable attention slicing - pipe.enable_attention_slicing() - inputs = self.get_inputs(torch_device, dtype=torch.float16) - image_sliced = pipe(**inputs).images - - mem_bytes = torch.cuda.max_memory_allocated() - torch.cuda.reset_peak_memory_stats() - # make sure that less than 3.75 GB is allocated - assert mem_bytes < 3.75 * 10**9 - - # disable slicing - pipe.disable_attention_slicing() - inputs = self.get_inputs(torch_device, dtype=torch.float16) - image = pipe(**inputs).images - - # make sure that more than 3.75 GB is allocated - mem_bytes = torch.cuda.max_memory_allocated() - assert mem_bytes > 3.75 * 10**9 - assert np.abs(image_sliced - image).max() < 1e-3 - - def test_stable_diffusion_vae_slicing(self): - torch.cuda.reset_peak_memory_stats() - pipe = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16) - pipe = pipe.to(torch_device) - pipe.set_progress_bar_config(disable=None) - pipe.enable_attention_slicing() - - # enable vae slicing - pipe.enable_vae_slicing() - inputs = self.get_inputs(torch_device, dtype=torch.float16) - inputs["prompt"] = [inputs["prompt"]] * 4 - inputs["latents"] = torch.cat([inputs["latents"]] * 4) - image_sliced = pipe(**inputs).images - - mem_bytes = torch.cuda.max_memory_allocated() - torch.cuda.reset_peak_memory_stats() - # make sure that less than 4 GB is allocated - assert mem_bytes < 4e9 - - # disable vae slicing - pipe.disable_vae_slicing() - inputs = self.get_inputs(torch_device, dtype=torch.float16) - inputs["prompt"] = [inputs["prompt"]] * 4 - inputs["latents"] = torch.cat([inputs["latents"]] * 4) - image = pipe(**inputs).images - - # make sure that more than 4 GB is allocated - mem_bytes = torch.cuda.max_memory_allocated() - assert mem_bytes > 4e9 - # There is a small discrepancy at the image borders vs. a fully batched version. 
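The slow tests stay reproducible by fixing both the initial latents and the RNG, as in `get_inputs` above. Reduced to its essentials (the (1, 4, 64, 64) shape assumes a 512x512 SD 1.x model):

import numpy as np
import torch

seed = 0
# Fixed starting noise built with NumPy on the CPU, so it is identical across devices.
latents = torch.from_numpy(
    np.random.RandomState(seed).standard_normal((1, 4, 64, 64))
).to(dtype=torch.float32)
# Seeded generator for any additional sampling the scheduler performs.
generator = torch.Generator(device="cpu").manual_seed(seed)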
- assert np.abs(image_sliced - image).max() < 1e-2 - - def test_stable_diffusion_vae_tiling(self): - torch.cuda.reset_peak_memory_stats() - model_id = "CompVis/stable-diffusion-v1-4" - pipe = StableDiffusionPipeline.from_pretrained(model_id, revision="fp16", torch_dtype=torch.float16) - pipe.set_progress_bar_config(disable=None) - pipe.enable_attention_slicing() - pipe.unet = pipe.unet.to(memory_format=torch.channels_last) - pipe.vae = pipe.vae.to(memory_format=torch.channels_last) - - prompt = "a photograph of an astronaut riding a horse" - - # enable vae tiling - pipe.enable_vae_tiling() - pipe.enable_model_cpu_offload() - generator = torch.Generator(device="cpu").manual_seed(0) - output_chunked = pipe( - [prompt], - width=1024, - height=1024, - generator=generator, - guidance_scale=7.5, - num_inference_steps=2, - output_type="numpy", - ) - image_chunked = output_chunked.images - - mem_bytes = torch.cuda.max_memory_allocated() - - # disable vae tiling - pipe.disable_vae_tiling() - generator = torch.Generator(device="cpu").manual_seed(0) - output = pipe( - [prompt], - width=1024, - height=1024, - generator=generator, - guidance_scale=7.5, - num_inference_steps=2, - output_type="numpy", - ) - image = output.images - - assert mem_bytes < 1e10 - assert np.abs(image_chunked.flatten() - image.flatten()).max() < 1e-2 - - def test_stable_diffusion_fp16_vs_autocast(self): - # this test makes sure that the original model with autocast - # and the new model with fp16 yield the same result - pipe = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16) - pipe = pipe.to(torch_device) - pipe.set_progress_bar_config(disable=None) - - inputs = self.get_inputs(torch_device, dtype=torch.float16) - image_fp16 = pipe(**inputs).images - - with torch.autocast(torch_device): - inputs = self.get_inputs(torch_device) - image_autocast = pipe(**inputs).images - - # Make sure results are close enough - diff = np.abs(image_fp16.flatten() - image_autocast.flatten()) - # They ARE different since ops are not run always at the same precision - # however, they should be extremely close. 
- assert diff.mean() < 2e-2 - - def test_stable_diffusion_intermediate_state(self): - number_of_steps = 0 - - def callback_fn(step: int, timestep: int, latents: torch.FloatTensor) -> None: - callback_fn.has_been_called = True - nonlocal number_of_steps - number_of_steps += 1 - if step == 1: - latents = latents.detach().cpu().numpy() - assert latents.shape == (1, 4, 64, 64) - latents_slice = latents[0, -3:, -3:, -1] - expected_slice = np.array( - [-0.5693, -0.3018, -0.9746, 0.0518, -0.8770, 0.7559, -1.7402, 0.1022, 1.1582] - ) - - assert np.abs(latents_slice.flatten() - expected_slice).max() < 5e-2 - elif step == 2: - latents = latents.detach().cpu().numpy() - assert latents.shape == (1, 4, 64, 64) - latents_slice = latents[0, -3:, -3:, -1] - expected_slice = np.array( - [-0.1958, -0.2993, -1.0166, -0.5005, -0.4810, 0.6162, -0.9492, 0.6621, 1.4492] - ) - - assert np.abs(latents_slice.flatten() - expected_slice).max() < 5e-2 - - callback_fn.has_been_called = False - - pipe = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16) - pipe = pipe.to(torch_device) - pipe.set_progress_bar_config(disable=None) - pipe.enable_attention_slicing() - - inputs = self.get_inputs(torch_device, dtype=torch.float16) - pipe(**inputs, callback=callback_fn, callback_steps=1) - assert callback_fn.has_been_called - assert number_of_steps == inputs["num_inference_steps"] - - def test_stable_diffusion_low_cpu_mem_usage(self): - pipeline_id = "CompVis/stable-diffusion-v1-4" - - start_time = time.time() - pipeline_low_cpu_mem_usage = StableDiffusionPipeline.from_pretrained(pipeline_id, torch_dtype=torch.float16) - pipeline_low_cpu_mem_usage.to(torch_device) - low_cpu_mem_usage_time = time.time() - start_time - - start_time = time.time() - _ = StableDiffusionPipeline.from_pretrained(pipeline_id, torch_dtype=torch.float16, low_cpu_mem_usage=False) - normal_load_time = time.time() - start_time - - assert 2 * low_cpu_mem_usage_time < normal_load_time - - def test_stable_diffusion_pipeline_with_sequential_cpu_offloading(self): - torch.cuda.empty_cache() - torch.cuda.reset_max_memory_allocated() - torch.cuda.reset_peak_memory_stats() - - pipe = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16) - pipe = pipe.to(torch_device) - pipe.set_progress_bar_config(disable=None) - pipe.enable_attention_slicing(1) - pipe.enable_sequential_cpu_offload() - - inputs = self.get_inputs(torch_device, dtype=torch.float16) - _ = pipe(**inputs) - - mem_bytes = torch.cuda.max_memory_allocated() - # make sure that less than 2.8 GB is allocated - assert mem_bytes < 2.8 * 10**9 - - def test_stable_diffusion_pipeline_with_model_offloading(self): - torch.cuda.empty_cache() - torch.cuda.reset_max_memory_allocated() - torch.cuda.reset_peak_memory_stats() - - inputs = self.get_inputs(torch_device, dtype=torch.float16) - - # Normal inference - - pipe = StableDiffusionPipeline.from_pretrained( - "CompVis/stable-diffusion-v1-4", - torch_dtype=torch.float16, - ) - pipe.unet.set_default_attn_processor() - pipe.to(torch_device) - pipe.set_progress_bar_config(disable=None) - outputs = pipe(**inputs) - mem_bytes = torch.cuda.max_memory_allocated() - - # With model offloading - - # Reload but don't move to cuda - pipe = StableDiffusionPipeline.from_pretrained( - "CompVis/stable-diffusion-v1-4", - torch_dtype=torch.float16, - ) - pipe.unet.set_default_attn_processor() - - torch.cuda.empty_cache() - torch.cuda.reset_max_memory_allocated() - 
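The memory assertions in these offloading tests all follow the same measure-peak-usage recipe. Schematically (requires a CUDA device; `run_pipeline` is a placeholder for the actual pipeline call):

import torch

def peak_gpu_bytes(run_pipeline):
    # Clear cached blocks and zero the peak counters before the measured run.
    torch.cuda.empty_cache()
    torch.cuda.reset_peak_memory_stats()
    run_pipeline()
    # Highest allocation observed since the reset, in bytes.
    return torch.cuda.max_memory_allocated()

# e.g. assert peak_gpu_bytes(lambda: pipe(**inputs)) < 3.5 * 10**9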
torch.cuda.reset_peak_memory_stats() - - pipe.enable_model_cpu_offload() - pipe.set_progress_bar_config(disable=None) - inputs = self.get_inputs(torch_device, dtype=torch.float16) - - outputs_offloaded = pipe(**inputs) - mem_bytes_offloaded = torch.cuda.max_memory_allocated() - - assert np.abs(outputs.images - outputs_offloaded.images).max() < 1e-3 - assert mem_bytes_offloaded < mem_bytes - assert mem_bytes_offloaded < 3.5 * 10**9 - for module in pipe.text_encoder, pipe.unet, pipe.vae, pipe.safety_checker: - assert module.device == torch.device("cpu") - - # With attention slicing - torch.cuda.empty_cache() - torch.cuda.reset_max_memory_allocated() - torch.cuda.reset_peak_memory_stats() - - pipe.enable_attention_slicing() - _ = pipe(**inputs) - mem_bytes_slicing = torch.cuda.max_memory_allocated() - - assert mem_bytes_slicing < mem_bytes_offloaded - assert mem_bytes_slicing < 3 * 10**9 - - def test_stable_diffusion_textual_inversion(self): - pipe = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4") - pipe.load_textual_inversion("sd-concepts-library/low-poly-hd-logos-icons") - - a111_file = hf_hub_download("hf-internal-testing/text_inv_embedding_a1111_format", "winter_style.pt") - a111_file_neg = hf_hub_download( - "hf-internal-testing/text_inv_embedding_a1111_format", "winter_style_negative.pt" - ) - pipe.load_textual_inversion(a111_file) - pipe.load_textual_inversion(a111_file_neg) - pipe.to("cuda") - - generator = torch.Generator(device="cpu").manual_seed(1) - - prompt = "An logo of a turtle in strong Style-Winter with " - neg_prompt = "Style-Winter-neg" - - image = pipe(prompt=prompt, negative_prompt=neg_prompt, generator=generator, output_type="np").images[0] - expected_image = load_numpy( - "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/text_inv/winter_logo_style.npy" - ) - - max_diff = np.abs(expected_image - image).max() - assert max_diff < 5e-2 - - -@nightly -@require_torch_gpu -class StableDiffusionPipelineNightlyTests(unittest.TestCase): - def tearDown(self): - super().tearDown() - gc.collect() - torch.cuda.empty_cache() - - def get_inputs(self, device, generator_device="cpu", dtype=torch.float32, seed=0): - generator = torch.Generator(device=generator_device).manual_seed(seed) - latents = np.random.RandomState(seed).standard_normal((1, 4, 64, 64)) - latents = torch.from_numpy(latents).to(device=device, dtype=dtype) - inputs = { - "prompt": "a photograph of an astronaut riding a horse", - "latents": latents, - "generator": generator, - "num_inference_steps": 50, - "guidance_scale": 7.5, - "output_type": "numpy", - } - return inputs - - def test_stable_diffusion_1_4_pndm(self): - sd_pipe = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4").to(torch_device) - sd_pipe.set_progress_bar_config(disable=None) - - inputs = self.get_inputs(torch_device) - image = sd_pipe(**inputs).images[0] - - expected_image = load_numpy( - "https://huggingface.co/datasets/diffusers/test-arrays/resolve/main" - "/stable_diffusion_text2img/stable_diffusion_1_4_pndm.npy" - ) - max_diff = np.abs(expected_image - image).max() - assert max_diff < 1e-3 - - def test_stable_diffusion_1_5_pndm(self): - sd_pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5").to(torch_device) - sd_pipe.set_progress_bar_config(disable=None) - - inputs = self.get_inputs(torch_device) - image = sd_pipe(**inputs).images[0] - - expected_image = load_numpy( - "https://huggingface.co/datasets/diffusers/test-arrays/resolve/main" - 
"/stable_diffusion_text2img/stable_diffusion_1_5_pndm.npy" - ) - max_diff = np.abs(expected_image - image).max() - assert max_diff < 1e-3 - - def test_stable_diffusion_ddim(self): - sd_pipe = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4").to(torch_device) - sd_pipe.scheduler = DDIMScheduler.from_config(sd_pipe.scheduler.config) - sd_pipe.set_progress_bar_config(disable=None) - - inputs = self.get_inputs(torch_device) - image = sd_pipe(**inputs).images[0] - - expected_image = load_numpy( - "https://huggingface.co/datasets/diffusers/test-arrays/resolve/main" - "/stable_diffusion_text2img/stable_diffusion_1_4_ddim.npy" - ) - max_diff = np.abs(expected_image - image).max() - assert max_diff < 1e-3 - - def test_stable_diffusion_lms(self): - sd_pipe = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4").to(torch_device) - sd_pipe.scheduler = LMSDiscreteScheduler.from_config(sd_pipe.scheduler.config) - sd_pipe.set_progress_bar_config(disable=None) - - inputs = self.get_inputs(torch_device) - image = sd_pipe(**inputs).images[0] - - expected_image = load_numpy( - "https://huggingface.co/datasets/diffusers/test-arrays/resolve/main" - "/stable_diffusion_text2img/stable_diffusion_1_4_lms.npy" - ) - max_diff = np.abs(expected_image - image).max() - assert max_diff < 1e-3 - - def test_stable_diffusion_euler(self): - sd_pipe = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4").to(torch_device) - sd_pipe.scheduler = EulerDiscreteScheduler.from_config(sd_pipe.scheduler.config) - sd_pipe.set_progress_bar_config(disable=None) - - inputs = self.get_inputs(torch_device) - image = sd_pipe(**inputs).images[0] - - expected_image = load_numpy( - "https://huggingface.co/datasets/diffusers/test-arrays/resolve/main" - "/stable_diffusion_text2img/stable_diffusion_1_4_euler.npy" - ) - max_diff = np.abs(expected_image - image).max() - assert max_diff < 1e-3 - - def test_stable_diffusion_dpm(self): - sd_pipe = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4").to(torch_device) - sd_pipe.scheduler = DPMSolverMultistepScheduler.from_config(sd_pipe.scheduler.config) - sd_pipe.set_progress_bar_config(disable=None) - - inputs = self.get_inputs(torch_device) - inputs["num_inference_steps"] = 25 - image = sd_pipe(**inputs).images[0] - - expected_image = load_numpy( - "https://huggingface.co/datasets/diffusers/test-arrays/resolve/main" - "/stable_diffusion_text2img/stable_diffusion_1_4_dpm_multi.npy" - ) - max_diff = np.abs(expected_image - image).max() - assert max_diff < 1e-3 diff --git a/spaces/declare-lab/tango/diffusers/utils/check_table.py b/spaces/declare-lab/tango/diffusers/utils/check_table.py deleted file mode 100644 index 8bd6d9eae9ce7994f6c5f6171c08ebf2928fa3be..0000000000000000000000000000000000000000 --- a/spaces/declare-lab/tango/diffusers/utils/check_table.py +++ /dev/null @@ -1,185 +0,0 @@ -# coding=utf-8 -# Copyright 2023 The HuggingFace Inc. team. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
- -import argparse -import collections -import importlib.util -import os -import re - - -# All paths are set with the intent you should run this script from the root of the repo with the command -# python utils/check_table.py -TRANSFORMERS_PATH = "src/diffusers" -PATH_TO_DOCS = "docs/source/en" -REPO_PATH = "." - - -def _find_text_in_file(filename, start_prompt, end_prompt): - """ - Find the text in `filename` between a line beginning with `start_prompt` and before `end_prompt`, removing empty - lines. - """ - with open(filename, "r", encoding="utf-8", newline="\n") as f: - lines = f.readlines() - # Find the start prompt. - start_index = 0 - while not lines[start_index].startswith(start_prompt): - start_index += 1 - start_index += 1 - - end_index = start_index - while not lines[end_index].startswith(end_prompt): - end_index += 1 - end_index -= 1 - - while len(lines[start_index]) <= 1: - start_index += 1 - while len(lines[end_index]) <= 1: - end_index -= 1 - end_index += 1 - return "".join(lines[start_index:end_index]), start_index, end_index, lines - - -# Add here suffixes that are used to identify models, separated by | -ALLOWED_MODEL_SUFFIXES = "Model|Encoder|Decoder|ForConditionalGeneration" -# Regexes that match TF/Flax/PT model names. -_re_tf_models = re.compile(r"TF(.*)(?:Model|Encoder|Decoder|ForConditionalGeneration)") -_re_flax_models = re.compile(r"Flax(.*)(?:Model|Encoder|Decoder|ForConditionalGeneration)") -# Will match any TF or Flax model too so need to be in an else branch afterthe two previous regexes. -_re_pt_models = re.compile(r"(.*)(?:Model|Encoder|Decoder|ForConditionalGeneration)") - - -# This is to make sure the diffusers module imported is the one in the repo. -spec = importlib.util.spec_from_file_location( - "diffusers", - os.path.join(TRANSFORMERS_PATH, "__init__.py"), - submodule_search_locations=[TRANSFORMERS_PATH], -) -diffusers_module = spec.loader.load_module() - - -# Thanks to https://stackoverflow.com/questions/29916065/how-to-do-camelcase-split-in-python -def camel_case_split(identifier): - "Split a camelcased `identifier` into words." - matches = re.finditer(".+?(?:(?<=[a-z])(?=[A-Z])|(?<=[A-Z])(?=[A-Z][a-z])|$)", identifier) - return [m.group(0) for m in matches] - - -def _center_text(text, width): - text_length = 2 if text == "✅" or text == "❌" else len(text) - left_indent = (width - text_length) // 2 - right_indent = width - text_length - left_indent - return " " * left_indent + text + " " * right_indent - - -def get_model_table_from_auto_modules(): - """Generates an up-to-date model table from the content of the auto modules.""" - # Dictionary model names to config. - config_mapping_names = diffusers_module.models.auto.configuration_auto.CONFIG_MAPPING_NAMES - model_name_to_config = { - name: config_mapping_names[code] - for code, name in diffusers_module.MODEL_NAMES_MAPPING.items() - if code in config_mapping_names - } - model_name_to_prefix = {name: config.replace("ConfigMixin", "") for name, config in model_name_to_config.items()} - - # Dictionaries flagging if each model prefix has a slow/fast tokenizer, backend in PT/TF/Flax. - slow_tokenizers = collections.defaultdict(bool) - fast_tokenizers = collections.defaultdict(bool) - pt_models = collections.defaultdict(bool) - tf_models = collections.defaultdict(bool) - flax_models = collections.defaultdict(bool) - - # Let's lookup through all diffusers object (once). 
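Two of the helpers above are easiest to understand from their outputs. Assuming the definitions above are in scope, the expected results (worked out from the regex and the width logic, not taken from the file) are:

print(camel_case_split("StableDiffusionPipeline"))         # ['Stable', 'Diffusion', 'Pipeline']
print(repr(_center_text("✅", 10)))                         # '    ✅    ' (check marks count as width 2)
print(_re_flax_models.match("FlaxUNetModel").groups()[0])  # 'UNet'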
- for attr_name in dir(diffusers_module): - lookup_dict = None - if attr_name.endswith("Tokenizer"): - lookup_dict = slow_tokenizers - attr_name = attr_name[:-9] - elif attr_name.endswith("TokenizerFast"): - lookup_dict = fast_tokenizers - attr_name = attr_name[:-13] - elif _re_tf_models.match(attr_name) is not None: - lookup_dict = tf_models - attr_name = _re_tf_models.match(attr_name).groups()[0] - elif _re_flax_models.match(attr_name) is not None: - lookup_dict = flax_models - attr_name = _re_flax_models.match(attr_name).groups()[0] - elif _re_pt_models.match(attr_name) is not None: - lookup_dict = pt_models - attr_name = _re_pt_models.match(attr_name).groups()[0] - - if lookup_dict is not None: - while len(attr_name) > 0: - if attr_name in model_name_to_prefix.values(): - lookup_dict[attr_name] = True - break - # Try again after removing the last word in the name - attr_name = "".join(camel_case_split(attr_name)[:-1]) - - # Let's build that table! - model_names = list(model_name_to_config.keys()) - model_names.sort(key=str.lower) - columns = ["Model", "Tokenizer slow", "Tokenizer fast", "PyTorch support", "TensorFlow support", "Flax Support"] - # We'll need widths to properly display everything in the center (+2 is to leave one extra space on each side). - widths = [len(c) + 2 for c in columns] - widths[0] = max([len(name) for name in model_names]) + 2 - - # Build the table per se - table = "|" + "|".join([_center_text(c, w) for c, w in zip(columns, widths)]) + "|\n" - # Use ":-----:" format to center-aligned table cell texts - table += "|" + "|".join([":" + "-" * (w - 2) + ":" for w in widths]) + "|\n" - - check = {True: "✅", False: "❌"} - for name in model_names: - prefix = model_name_to_prefix[name] - line = [ - name, - check[slow_tokenizers[prefix]], - check[fast_tokenizers[prefix]], - check[pt_models[prefix]], - check[tf_models[prefix]], - check[flax_models[prefix]], - ] - table += "|" + "|".join([_center_text(l, w) for l, w in zip(line, widths)]) + "|\n" - return table - - -def check_model_table(overwrite=False): - """Check the model table in the index.rst is consistent with the state of the lib and maybe `overwrite`.""" - current_table, start_index, end_index, lines = _find_text_in_file( - filename=os.path.join(PATH_TO_DOCS, "index.mdx"), - start_prompt="", - ) - new_table = get_model_table_from_auto_modules() - - if current_table != new_table: - if overwrite: - with open(os.path.join(PATH_TO_DOCS, "index.mdx"), "w", encoding="utf-8", newline="\n") as f: - f.writelines(lines[:start_index] + [new_table] + lines[end_index:]) - else: - raise ValueError( - "The model table in the `index.mdx` has not been updated. Run `make fix-copies` to fix this." 
- ) - - -if __name__ == "__main__": - parser = argparse.ArgumentParser() - parser.add_argument("--fix_and_overwrite", action="store_true", help="Whether to fix inconsistencies.") - args = parser.parse_args() - - check_model_table(args.fix_and_overwrite) diff --git a/spaces/deepskyreal/ai-mixer-hotchpotch/sad_talker/src/face3d/models/arcface_torch/utils/utils_logging.py b/spaces/deepskyreal/ai-mixer-hotchpotch/sad_talker/src/face3d/models/arcface_torch/utils/utils_logging.py deleted file mode 100644 index c787b6aae7cd037a4718df44d672b8ffa9e5c249..0000000000000000000000000000000000000000 --- a/spaces/deepskyreal/ai-mixer-hotchpotch/sad_talker/src/face3d/models/arcface_torch/utils/utils_logging.py +++ /dev/null @@ -1,41 +0,0 @@ -import logging -import os -import sys - - -class AverageMeter(object): - """Computes and stores the average and current value - """ - - def __init__(self): - self.val = None - self.avg = None - self.sum = None - self.count = None - self.reset() - - def reset(self): - self.val = 0 - self.avg = 0 - self.sum = 0 - self.count = 0 - - def update(self, val, n=1): - self.val = val - self.sum += val * n - self.count += n - self.avg = self.sum / self.count - - -def init_logging(rank, models_root): - if rank == 0: - log_root = logging.getLogger() - log_root.setLevel(logging.INFO) - formatter = logging.Formatter("Training: %(asctime)s-%(message)s") - handler_file = logging.FileHandler(os.path.join(models_root, "training.log")) - handler_stream = logging.StreamHandler(sys.stdout) - handler_file.setFormatter(formatter) - handler_stream.setFormatter(formatter) - log_root.addHandler(handler_file) - log_root.addHandler(handler_stream) - log_root.info('rank_id: %d' % rank) diff --git a/spaces/devashish07/food_vision_mini/app.py b/spaces/devashish07/food_vision_mini/app.py deleted file mode 100644 index 105a6eeaf6b68d00881ae71df5b7fb39fb6a78fc..0000000000000000000000000000000000000000 --- a/spaces/devashish07/food_vision_mini/app.py +++ /dev/null @@ -1,73 +0,0 @@ - -### 1. Imports and class names setup ### -import gradio as gr -import os -import torch -from model import create_effnetb2_model -from timeit import default_timer as timer - -# Setup class names -class_names = ["pizza", "steak", "sushi"] - - -### 2. Model and transforms preparation ### -effnetb2, effnetb2_transforms = create_effnetb2_model(num_classes = len(class_names)) - -# Load save weights -effnetb2.load_state_dict( - torch.load( - f = "17_pretrained_effnetb2_20_percent.pth", - map_location = torch.device("cpu") # load the model to the cpu because model was trained on gpu. - ) -) - - -### 3. Predict function (predict()) ### -def predict(img): - # Start a timer - start_time = timer() - - # Transform the input image for use with EffNetB2 - img = effnetb2_transforms(img).unsqueeze(dim = 0) # unsqueeze = add batch dimension on 0th index. - - # Put model into eval mode, make prediction - effnetb2.eval() - with torch.inference_mode(): - # Pass transformed image through the model and turn the prediction logits into probabilities. - pred_probs = torch.softmax(effnetb2(img), dim = 1) - - # Create a prediction label and prediction probability dictionary - pred_labels_and_probs = {class_names[i]: float(pred_probs[0][i]) for i in range(len(class_names))} - - # Calculate pred time - end_time = timer() - pred_time = round(end_time - start_time, 4) - - # Return pred dict and pred time - return pred_labels_and_probs, pred_time - - - -### 4. 
Gradio app - our Gradio interface + launch command ### -# Create title, description and article -title = "FoodVision Mini" -description = "An EfficientNetB2 feature extractor computer vision model to classify images as pizza, steak or sushi" -article = "Created at 17-Pytorch-Model-Deployment" - -# Create example_list -example_list = [[os.path.join("examples", example)] for example in os.listdir("examples")] - -# Create the Gradio demo -demo = gr.Interface(fn = predict, # maps inputs to outputs - inputs = gr.Image(type = "pil"), - outputs = [gr.Label(num_top_classes = 3, label = "Predictions"), - gr.Number(label = "Prediction time {s}")], - examples = example_list, - title = title, - description = description, - article = article - ) - -# Launch the demo. -demo.launch(debug = True, # Print erros locally - ) diff --git a/spaces/diacanFperku/AutoGPT/100obrasmaestrasdelamusicaclasicadescargartorrent.md b/spaces/diacanFperku/AutoGPT/100obrasmaestrasdelamusicaclasicadescargartorrent.md deleted file mode 100644 index bc2a6386b58ecd7fd47e9a4dde7034890b1ecc17..0000000000000000000000000000000000000000 --- a/spaces/diacanFperku/AutoGPT/100obrasmaestrasdelamusicaclasicadescargartorrent.md +++ /dev/null @@ -1,6 +0,0 @@ -
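The `predict` function in the FoodVision app above can also be sanity-checked without launching the Gradio UI. A rough sketch, assuming Pillow is installed and the Space's model weights are present so the module-level setup succeeds:

from PIL import Image

img = Image.new("RGB", (224, 224), color="white")  # dummy stand-in for a real food photo
pred_dict, pred_time = predict(img)
print(pred_dict)   # e.g. {'pizza': ..., 'steak': ..., 'sushi': ...} — softmax probabilities per class
print(pred_time)   # prediction time in seconds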

          100obrasmaestrasdelamusicaclasicadescargartorrent


          Download Zip · https://gohhs.com/2uFVCU



- -May 14, 2021 - 100obrasmaestrasdelamusicaclasicadescargartorrent · The textbook quantum mechanics Mathews and Venkatesan PDF download 8a78ff9644
          -
          -
          -

          diff --git a/spaces/diacanFperku/AutoGPT/Bandicam 2.1.1.731 Final Incl. Crack [ATOM] Download.md b/spaces/diacanFperku/AutoGPT/Bandicam 2.1.1.731 Final Incl. Crack [ATOM] Download.md deleted file mode 100644 index 48288019742005ef25d5d858b509b122937b52f3..0000000000000000000000000000000000000000 --- a/spaces/diacanFperku/AutoGPT/Bandicam 2.1.1.731 Final Incl. Crack [ATOM] Download.md +++ /dev/null @@ -1,16 +0,0 @@ -

          Bandicam 2.1.1.731 Final Incl. Crack [ATOM] Download


          DOWNLOAD ⚙⚙⚙ https://gohhs.com/2uFUy7



          -
-It is a brilliant solution that helps you create and share videos with ease. Bandicam 2.1.1.731 Final Incl Crack is a useful and reliable program that can record your screen while you are working: you can capture your computer screen or your webcam, and also record audio from your microphone. The software offers several useful features, such as high-definition video recording, customizable output resolutions, the ability to add text or voice notes, and built-in uploading of your creations to other websites. Additionally, it can record multiple streams in the language of your choice, includes a built-in player and browser for playing back your recordings, and supports microphone recording. Your recordings can be used on any of the supported platforms, such as Windows, Mac, Linux, Android, iOS, or even HTML5. - -How to Crack Bandicam 2.1.1.731 Final Incl Crack? - -Download the trial version of Bandicam. - -Install it. - -Now, run the setup to finish the installation. - -Use the 4fefd39f24
          -
          -
          -

          diff --git a/spaces/diacanFperku/AutoGPT/Disk Drill Pro Serial Number Mac Keygen Core __TOP__.md b/spaces/diacanFperku/AutoGPT/Disk Drill Pro Serial Number Mac Keygen Core __TOP__.md deleted file mode 100644 index 8b3c2207358c9e31aa5acacc5d6924df91aedb91..0000000000000000000000000000000000000000 --- a/spaces/diacanFperku/AutoGPT/Disk Drill Pro Serial Number Mac Keygen Core __TOP__.md +++ /dev/null @@ -1,10 +0,0 @@ - -

Mac users have limited access to macOS Recovery, which is a sort of diagnostic tool. Disk Drill offers a way around that with the capability to access disk partitions by a direct path. This lets Disk Drill recover files from disk partitions, even from those that are impossible to reach from the Mac. The tool can also recover files from Mac systems that don't have access to macOS Recovery.

          -

          disk drill pro serial number mac keygen core


Download Zip https://gohhs.com/2uFV4q



          -

Disk Drill for macOS is the first app to offer users a native recovery solution for Mac. Other mobile data recovery apps come with limited Mac OS X support, but Disk Drill works with all Apple Mac systems at full speed.

          -

Disk Drill for macOS lets users access their encrypted files and backup drive with a free tool, and also provides a facility to make good backups without installing any specialized software. It is a simple, straightforward tool: it does not ask for any obscure settings, and anyone can use it without issues. This makes Disk Drill an ideal choice for people who want to back up important data.

          -

Disk Drill is a lightweight data recovery tool for Windows and Mac. The goal of the tool is to recover selected files for free. If you need to recover data from a partition or system disk, Disk Drill can do that for you. Disk Drill is convenient, easy to use, and affordable. The free tool works only with files that are stored on a storage device.

          -

          -

Disk Drill for macOS gives users a powerful tool to access data that is stored on encrypted volumes and external drives. The utility can also recover files from locked disks and disks that have not been opened for some time. Apart from the features mentioned above, Disk Drill also lets users browse files and folders within Finder and make good backups while accessing data without any hassle.

          899543212b
          -
          -
          \ No newline at end of file diff --git a/spaces/diacanFperku/AutoGPT/Milagros Telenovela In Italiano Tutte Le Puntate Added By Users !!LINK!!.md b/spaces/diacanFperku/AutoGPT/Milagros Telenovela In Italiano Tutte Le Puntate Added By Users !!LINK!!.md deleted file mode 100644 index 4dacd0240714b6fbfc49ea96d36cc700f715ff15..0000000000000000000000000000000000000000 --- a/spaces/diacanFperku/AutoGPT/Milagros Telenovela In Italiano Tutte Le Puntate Added By Users !!LINK!!.md +++ /dev/null @@ -1,16 +0,0 @@ -

          Milagros Telenovela In Italiano Tutte Le Puntate | Added By Users


          Download Zip ❤❤❤ https://gohhs.com/2uFVwC



          -
-Sicilia (Milagros) : Italian : news : take into account any...Now Playing In: Milagros Telenovela In Italiano Tutte Le Puntate | Posted by Milagros users. - -Milagros Telenovela In Italiano Tutte Le Puntate - - -Videos Milagros Telenovela In Italiano Tutte Le Puntate -. The information on this page is submitted by users. - -Check out this great site! - -Milagros Telenovela In Italiano Tutte Le Puntate -. Visit us and also find music videos, television schedules and dates for shows that air on TV in your country. The information on this page is submitted by users. Severity of hyponatremia is an independent predictor of mortality in very elderly patients with severe sepsis. - -To determine whether the serum sodium level (Na+) in patients with severe sepsis can be considered as an independent prognostic factor for the outcome of the disease, and whether a hyponatremia of < or =130 mmol/L (60.6% vs 29.8%, p =.02; 70.8% vs 27.3%, p =.001, respectively). The 28-day and 60-day mortality rates of patients with a Na+ of < or =130 mmol/L (62.5% vs 29.4%, p =.003; 80.6% vs 27.3%, p =.0001). The logistic regression analysis showed that a Na+ of <130 mmol/L was an 4fefd39f24
          -
          -
          -

          diff --git a/spaces/diacanFperku/AutoGPT/Rapid Typing Tutor Crack __HOT__.md b/spaces/diacanFperku/AutoGPT/Rapid Typing Tutor Crack __HOT__.md deleted file mode 100644 index a12de99b1b9f7ef93aaeb208ee617362a12f554f..0000000000000000000000000000000000000000 --- a/spaces/diacanFperku/AutoGPT/Rapid Typing Tutor Crack __HOT__.md +++ /dev/null @@ -1,126 +0,0 @@ -
          -

          Rapid Typing Tutor Crack: How to Learn Touch Typing Fast and Easy

          - -

          Do you want to improve your typing speed and accuracy? Do you want to learn touch typing in a fun and effective way? If yes, then you may be interested in Rapid Typing Tutor Crack. This is a cracked version of Rapid Typing Tutor, a popular and powerful software that can help you learn touch typing through a series of lessons and exercises. In this article, we will tell you what Rapid Typing Tutor Crack is, how to download it, and how to use it.

          -

          Rapid Typing Tutor Crack


          Download Zip ★★★ https://gohhs.com/2uFUZS



          - -

          What is Rapid Typing Tutor Crack?

          - -

          Rapid Typing Tutor is a keyboard trainer that can help you improve your typing speed and reduce typos. It can teach you touch typing in a short time, by organizing its lessons around various keyboard groups. It can also help you learn individual letters, numbers, and symbols, as well as a series of text exercises.

          - -

          Rapid Typing Tutor has a simple and colorful interface, and supports colorful skins library. It also has a game plot, where you can train in a virtual picturesque underwater world, and see various underwater creatures as you progress. It also has a keyboard emulator that can help you learn blind typing quickly.

          - -

          Rapid Typing Tutor is suitable for children and adults, and supports multiple users with their personal settings. It also supports QWERTY, AZERTY, QWERTZ, and Dvorak keyboard layouts.

          - -

          Rapid Typing Tutor Crack is a cracked version of Rapid Typing Tutor, which means that it has been modified to bypass the activation process and use the full version of the software for free. However, using Rapid Typing Tutor Crack may have some risks and disadvantages, such as:

          -

          - -
            -
          • It may contain viruses, malware, or spyware that can harm your device or steal your data.
          • -
          • It may not work properly or have some errors or bugs that can affect your learning experience.
          • -
          • It may not be updated or compatible with the latest versions of Windows or other software.
          • -
          • It may violate the copyright laws and the terms of service of Rapid Typing Software.
          • -
          - -

          Therefore, we do not recommend using Rapid Typing Tutor Crack, and we advise you to download the official version of Rapid Typing Tutor from its website or other reputable sources.

          - -

          How to download Rapid Typing Tutor Crack?

          - -

          If you still want to download Rapid Typing Tutor Crack, you need to be careful and selective when choosing where to download it from. There are many websites that offer Rapid Typing Tutor Crack for free or for a small fee, but not all of them are reliable or trustworthy. Some may have corrupted files, outdated versions, or malicious links.

          - -

          Here are some tips to help you find a reputable website that offers Rapid Typing Tutor Crack:

          - -
            -
          • Check the reviews and ratings of the website. You can use online tools such as Trustpilot, Sitejabber, or Scamadviser to see what other users have said about the website. Look for positive feedback, high ratings, and verified testimonials.
          • -
          • Check the quality and accuracy of the crack file. You can use online tools such as VirusTotal, URLVoid, or PDF Checker to see if the crack file is valid, complete, and error-free. Look for high resolution, clear fonts, and consistent formatting.
          • -
          • Check the security and privacy of the website. You can use online tools such as SSL Checker, VirusTotal, or URLVoid to see if the website is secure, encrypted, and virus-free. Look for HTTPS protocol, SSL certificate, and green padlock icon.
          • -
          - -

          By following these tips, you can find a trustworthy website that offers Rapid Typing Tutor Crack safely and securely.

          - -

          How to use Rapid Typing Tutor Crack?

          - -

          Using Rapid Typing Tutor Crack is simple and straightforward. Here are some steps you can follow:

          - -
            -
          1. Download Rapid Typing Tutor Crack from a reputable website. Make sure you have a compatible device and software to open it.
          2. -
          3. Disable Windows Defender or any antivirus software on your device. This is to prevent them from deleting or blocking the crack file.
          4. -
          5. Extract the crack file using WinRAR or any other extraction tool. You will get a folder with the setup file and the crack file.
          6. -
          7. Run the setup file and follow the installation instructions. Do not launch the program after installation.
          8. -
          9. Copy the crack file and paste it into the installation folder of Rapid Typing Tutor. This is to replace the original file with the cracked one.
          10. -
          11. Run the program as administrator and enjoy the full version of Rapid Typing Tutor for free.
          12. -
          - -

          By following these steps, you can use Rapid Typing Tutor Crack to learn touch typing fast and easy.

          - -

          Conclusion

          - -

          In this article, we have discussed Rapid Typing Tutor Crack and how to download and use it. We have seen that it is a cracked version of Rapid Typing Tutor, a popular and powerful keyboard trainer that can help you improve your typing speed and accuracy. We have also seen that it has some risks and disadvantages, such as containing viruses, having errors, being outdated, or violating copyright laws.

          - -

          We hope this article has given you some useful information about Rapid Typing Tutor Crack and how to use it. However, we do not recommend using Rapid Typing Tutor Crack, and we advise you to download the official version of Rapid Typing Tutor from its website or other reputable sources.

          -

          What are the features of Rapid Typing Tutor?

          - -

          Rapid Typing Tutor has many features that can help you learn touch typing fast and easy. Some of these features are:

          - -
            -
          • Game plot. Rapid Typing Tutor has a game plot that makes learning fun and engaging. You can train in a virtual picturesque underwater world, and see various underwater creatures as you progress. The more you improve your typing level, the more nice underwater creatures will come to light.
          • -
          • Keyboard emulator. Rapid Typing Tutor has a keyboard emulator that can help you learn blind typing quickly. It shows you the keys and finger workplaces that are lighted on the screen, so you don't have to look at your keyboard. It also supports different keyboard layouts, such as QWERTY, AZERTY, QWERTZ, and Dvorak.
          • -
          • Simple and colorful interface. Rapid Typing Tutor has a simple and colorful interface that is easy to use and navigate. It also supports colorful skins library, so you can customize the appearance of the program according to your preference.
          • -
          • Customizable lessons and exercises. Rapid Typing Tutor has customizable lessons and exercises that are organized for various student levels. You can choose from beginner, intermediate, advanced, or expert levels, depending on your skill and goal. You can also create your own exercises or edit the included ones.
          • -
          • Multiple users and statistics. Rapid Typing Tutor supports multiple users with their personal settings. You can create different profiles for yourself or your family members or friends, and track your progress and performance individually. You can also view detailed statistics and charts that show your speed, accuracy, errors, and improvement.
          • -
          - -

          By using these features, you can make the most of Rapid Typing Tutor and learn touch typing fast and easy.

          - -

          What are the benefits of Rapid Typing Tutor?

          - -

          Rapid Typing Tutor has many benefits that can help you improve your typing speed and accuracy. Some of these benefits are:

          - -
            -
          • It is free and easy to download. You can download Rapid Typing Tutor from its official website or other reputable sources for free or for a small fee. You can also choose the format that suits your device best, such as setup or portable version.
          • -
          • It is portable and convenient. You can use Rapid Typing Tutor on your local PC or on any external device while on the go. You don't have to worry about carrying a heavy book or finding a library that has it.
          • -
          • It is comprehensive and accurate. Rapid Typing Tutor covers every letter, number, and symbol on the keyboard, as well as a series of text exercises that include different topics and genres. It also provides accurate feedback and correction for every mistake you make.
          • -
          • It is helpful and informative. Rapid Typing Tutor can help you enhance your typing skills, enrich your vocabulary, improve your spelling and grammar, and increase your productivity and efficiency. It can also help you prepare for typing tests, exams, or jobs that require typing skills.
          • -
          • It is fun and entertaining. Rapid Typing Tutor can make learning fun and entertaining with its game plot, colorful skins, underwater creatures, and sound effects. You can also compete with yourself or others by setting goals and challenges.
          • -
          - -

By taking advantage of these benefits, you can enjoy Rapid Typing Tutor and improve your typing speed and accuracy.

          -

          What are the drawbacks of Rapid Typing Tutor Crack?

          - -

          While Rapid Typing Tutor Crack may seem tempting and appealing, it also has some drawbacks that can outweigh its benefits. Some of these drawbacks are:

          - -
            -
          • It may harm your device or data. Rapid Typing Tutor Crack may contain viruses, malware, or spyware that can infect your device or steal your data. It may also damage your system files or registry, causing errors or crashes.
          • -
          • It may not work properly or have some errors or bugs. Rapid Typing Tutor Crack may not work properly or have some errors or bugs that can affect your learning experience. It may also not be updated or compatible with the latest versions of Windows or other software.
          • -
          • It may violate the law and the terms of service. Rapid Typing Tutor Crack may violate the copyright laws and the terms of service of Rapid Typing Software. It may also expose you to legal consequences or penalties if you are caught using it.
          • -
          • It may not be ethical or fair. Rapid Typing Tutor Crack may not be ethical or fair to the developers and creators of Rapid Typing Tutor, who have invested their time, money, and effort to create a quality product. It may also deprive them of their rightful income and recognition.
          • -
          - -

Given these drawbacks, it is better to avoid Rapid Typing Tutor Crack and use the official version of Rapid Typing Tutor instead.

          - -

          How to download and use the official version of Rapid Typing Tutor?

          - -

          If you want to download and use the official version of Rapid Typing Tutor, you can follow these steps:

          - -
            -
          1. Go to the official website of Rapid Typing Software at https://rapidtyping.com/.
          2. -
          3. Click on the "Download" button on the top menu bar.
          4. -
          5. Choose the version that suits your device best, such as setup or portable version.
          6. -
          7. Click on the "Download" button under the chosen version.
          8. -
          9. Save the file to your device and run it.
          10. -
          11. Follow the installation instructions and launch the program.
          12. -
          13. Create a profile for yourself or choose an existing one.
          14. -
          15. Select a lesson or exercise that matches your level and goal.
          16. -
          17. Start typing and enjoy learning touch typing with Rapid Typing Tutor.
          18. -
          - -

          By following these steps, you can download and use the official version of Rapid Typing Tutor and enjoy its features and benefits without any risks or drawbacks.

          -

          Conclusion

          - -

          In this article, we have discussed Rapid Typing Tutor Crack and how to download and use it. We have seen that it is a cracked version of Rapid Typing Tutor, a popular and powerful keyboard trainer that can help you improve your typing speed and accuracy. We have also seen that it has some features and benefits, such as being free and easy to download, portable and convenient, comprehensive and accurate, helpful and informative, and fun and entertaining.

          - -

          However, we have also seen that it has some risks and disadvantages, such as harming your device or data, not working properly or having errors or bugs, violating the law and the terms of service, and not being ethical or fair. Therefore, we do not recommend using Rapid Typing Tutor Crack, and we advise you to download and use the official version of Rapid Typing Tutor from its website or other reputable sources.

          - -

          We hope this article has given you some useful information and tips about Rapid Typing Tutor Crack and how to use it. However, we encourage you to try the official version of Rapid Typing Tutor and see how it can enhance your typing skills and enrich your learning experience.

          3cee63e6c2
          -
          -
          \ No newline at end of file diff --git a/spaces/digitalxingtong/Eileen-Bert-Vits2/losses.py b/spaces/digitalxingtong/Eileen-Bert-Vits2/losses.py deleted file mode 100644 index fb22a0e834dd87edaa37bb8190eee2c3c7abe0d5..0000000000000000000000000000000000000000 --- a/spaces/digitalxingtong/Eileen-Bert-Vits2/losses.py +++ /dev/null @@ -1,61 +0,0 @@ -import torch -from torch.nn import functional as F - -import commons - - -def feature_loss(fmap_r, fmap_g): - loss = 0 - for dr, dg in zip(fmap_r, fmap_g): - for rl, gl in zip(dr, dg): - rl = rl.float().detach() - gl = gl.float() - loss += torch.mean(torch.abs(rl - gl)) - - return loss * 2 - - -def discriminator_loss(disc_real_outputs, disc_generated_outputs): - loss = 0 - r_losses = [] - g_losses = [] - for dr, dg in zip(disc_real_outputs, disc_generated_outputs): - dr = dr.float() - dg = dg.float() - r_loss = torch.mean((1-dr)**2) - g_loss = torch.mean(dg**2) - loss += (r_loss + g_loss) - r_losses.append(r_loss.item()) - g_losses.append(g_loss.item()) - - return loss, r_losses, g_losses - - -def generator_loss(disc_outputs): - loss = 0 - gen_losses = [] - for dg in disc_outputs: - dg = dg.float() - l = torch.mean((1-dg)**2) - gen_losses.append(l) - loss += l - - return loss, gen_losses - - -def kl_loss(z_p, logs_q, m_p, logs_p, z_mask): - """ - z_p, logs_q: [b, h, t_t] - m_p, logs_p: [b, h, t_t] - """ - z_p = z_p.float() - logs_q = logs_q.float() - m_p = m_p.float() - logs_p = logs_p.float() - z_mask = z_mask.float() - - kl = logs_p - logs_q - 0.5 - kl += 0.5 * ((z_p - m_p)**2) * torch.exp(-2. * logs_p) - kl = torch.sum(kl * z_mask) - l = kl / torch.sum(z_mask) - return l diff --git a/spaces/dineshreddy/WALT/mmdet/models/utils/res_layer.py b/spaces/dineshreddy/WALT/mmdet/models/utils/res_layer.py deleted file mode 100644 index 4a4efd3dd30b30123ed5135eac080ad9f7f7b448..0000000000000000000000000000000000000000 --- a/spaces/dineshreddy/WALT/mmdet/models/utils/res_layer.py +++ /dev/null @@ -1,187 +0,0 @@ -from mmcv.cnn import build_conv_layer, build_norm_layer -from torch import nn as nn - - -class ResLayer(nn.Sequential): - """ResLayer to build ResNet style backbone. - - Args: - block (nn.Module): block used to build ResLayer. - inplanes (int): inplanes of block. - planes (int): planes of block. - num_blocks (int): number of blocks. - stride (int): stride of the first block. Default: 1 - avg_down (bool): Use AvgPool instead of stride conv when - downsampling in the bottleneck. Default: False - conv_cfg (dict): dictionary to construct and config conv layer. - Default: None - norm_cfg (dict): dictionary to construct and config norm layer. - Default: dict(type='BN') - downsample_first (bool): Downsample at the first block or last block. - False for Hourglass, True for ResNet. 
Default: True - """ - - def __init__(self, - block, - inplanes, - planes, - num_blocks, - stride=1, - avg_down=False, - conv_cfg=None, - norm_cfg=dict(type='BN'), - downsample_first=True, - **kwargs): - self.block = block - - downsample = None - if stride != 1 or inplanes != planes * block.expansion: - downsample = [] - conv_stride = stride - if avg_down: - conv_stride = 1 - downsample.append( - nn.AvgPool2d( - kernel_size=stride, - stride=stride, - ceil_mode=True, - count_include_pad=False)) - downsample.extend([ - build_conv_layer( - conv_cfg, - inplanes, - planes * block.expansion, - kernel_size=1, - stride=conv_stride, - bias=False), - build_norm_layer(norm_cfg, planes * block.expansion)[1] - ]) - downsample = nn.Sequential(*downsample) - - layers = [] - if downsample_first: - layers.append( - block( - inplanes=inplanes, - planes=planes, - stride=stride, - downsample=downsample, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - **kwargs)) - inplanes = planes * block.expansion - for _ in range(1, num_blocks): - layers.append( - block( - inplanes=inplanes, - planes=planes, - stride=1, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - **kwargs)) - - else: # downsample_first=False is for HourglassModule - for _ in range(num_blocks - 1): - layers.append( - block( - inplanes=inplanes, - planes=inplanes, - stride=1, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - **kwargs)) - layers.append( - block( - inplanes=inplanes, - planes=planes, - stride=stride, - downsample=downsample, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - **kwargs)) - super(ResLayer, self).__init__(*layers) - - -class SimplifiedBasicBlock(nn.Module): - """Simplified version of original basic residual block. This is used in - `SCNet `_. - - - Norm layer is now optional - - Last ReLU in forward function is removed - """ - expansion = 1 - - def __init__(self, - inplanes, - planes, - stride=1, - dilation=1, - downsample=None, - style='pytorch', - with_cp=False, - conv_cfg=None, - norm_cfg=dict(type='BN'), - dcn=None, - plugins=None): - super(SimplifiedBasicBlock, self).__init__() - assert dcn is None, 'Not implemented yet.' - assert plugins is None, 'Not implemented yet.' - assert not with_cp, 'Not implemented yet.' 
- self.with_norm = norm_cfg is not None - with_bias = True if norm_cfg is None else False - self.conv1 = build_conv_layer( - conv_cfg, - inplanes, - planes, - 3, - stride=stride, - padding=dilation, - dilation=dilation, - bias=with_bias) - if self.with_norm: - self.norm1_name, norm1 = build_norm_layer( - norm_cfg, planes, postfix=1) - self.add_module(self.norm1_name, norm1) - self.conv2 = build_conv_layer( - conv_cfg, planes, planes, 3, padding=1, bias=with_bias) - if self.with_norm: - self.norm2_name, norm2 = build_norm_layer( - norm_cfg, planes, postfix=2) - self.add_module(self.norm2_name, norm2) - - self.relu = nn.ReLU(inplace=True) - self.downsample = downsample - self.stride = stride - self.dilation = dilation - self.with_cp = with_cp - - @property - def norm1(self): - """nn.Module: normalization layer after the first convolution layer""" - return getattr(self, self.norm1_name) if self.with_norm else None - - @property - def norm2(self): - """nn.Module: normalization layer after the second convolution layer""" - return getattr(self, self.norm2_name) if self.with_norm else None - - def forward(self, x): - """Forward function.""" - - identity = x - - out = self.conv1(x) - if self.with_norm: - out = self.norm1(out) - out = self.relu(out) - - out = self.conv2(out) - if self.with_norm: - out = self.norm2(out) - - if self.downsample is not None: - identity = self.downsample(x) - - out += identity - - return out diff --git a/spaces/dinhminh20521597/OCR_DEMO/configs/_base_/det_models/ocr_mask_rcnn_r50_fpn_ohem.py b/spaces/dinhminh20521597/OCR_DEMO/configs/_base_/det_models/ocr_mask_rcnn_r50_fpn_ohem.py deleted file mode 100644 index 843fd36fc60682706503120f16866ba511cf7310..0000000000000000000000000000000000000000 --- a/spaces/dinhminh20521597/OCR_DEMO/configs/_base_/det_models/ocr_mask_rcnn_r50_fpn_ohem.py +++ /dev/null @@ -1,126 +0,0 @@ -# model settings -model = dict( - type='OCRMaskRCNN', - backbone=dict( - type='mmdet.ResNet', - depth=50, - num_stages=4, - out_indices=(0, 1, 2, 3), - frozen_stages=1, - norm_cfg=dict(type='BN', requires_grad=True), - init_cfg=dict(type='Pretrained', checkpoint='torchvision://resnet50'), - norm_eval=True, - style='pytorch'), - neck=dict( - type='mmdet.FPN', - in_channels=[256, 512, 1024, 2048], - out_channels=256, - num_outs=5), - rpn_head=dict( - type='RPNHead', - in_channels=256, - feat_channels=256, - anchor_generator=dict( - type='AnchorGenerator', - scales=[4], - ratios=[0.17, 0.44, 1.13, 2.90, 7.46], - strides=[4, 8, 16, 32, 64]), - bbox_coder=dict( - type='DeltaXYWHBBoxCoder', - target_means=[.0, .0, .0, .0], - target_stds=[1.0, 1.0, 1.0, 1.0]), - loss_cls=dict( - type='CrossEntropyLoss', use_sigmoid=True, loss_weight=1.0), - loss_bbox=dict(type='L1Loss', loss_weight=1.0)), - roi_head=dict( - type='StandardRoIHead', - bbox_roi_extractor=dict( - type='SingleRoIExtractor', - roi_layer=dict(type='RoIAlign', output_size=7, sampling_ratio=0), - out_channels=256, - featmap_strides=[4, 8, 16, 32]), - bbox_head=dict( - type='Shared2FCBBoxHead', - in_channels=256, - fc_out_channels=1024, - roi_feat_size=7, - num_classes=1, - bbox_coder=dict( - type='DeltaXYWHBBoxCoder', - target_means=[0., 0., 0., 0.], - target_stds=[0.1, 0.1, 0.2, 0.2]), - reg_class_agnostic=False, - loss_cls=dict( - type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0), - loss_bbox=dict(type='L1Loss', loss_weight=1.0)), - mask_roi_extractor=dict( - type='SingleRoIExtractor', - roi_layer=dict(type='RoIAlign', output_size=14, sampling_ratio=0), - out_channels=256, - 
featmap_strides=[4, 8, 16, 32]), - mask_head=dict( - type='FCNMaskHead', - num_convs=4, - in_channels=256, - conv_out_channels=256, - num_classes=1, - loss_mask=dict( - type='CrossEntropyLoss', use_mask=True, loss_weight=1.0))), - - # model training and testing settings - train_cfg=dict( - rpn=dict( - assigner=dict( - type='MaxIoUAssigner', - pos_iou_thr=0.7, - neg_iou_thr=0.3, - min_pos_iou=0.3, - match_low_quality=True, - ignore_iof_thr=-1, - gpu_assign_thr=50), - sampler=dict( - type='RandomSampler', - num=256, - pos_fraction=0.5, - neg_pos_ub=-1, - add_gt_as_proposals=False), - allowed_border=-1, - pos_weight=-1, - debug=False), - rpn_proposal=dict( - nms_across_levels=False, - nms_pre=2000, - nms_post=1000, - max_per_img=1000, - nms=dict(type='nms', iou_threshold=0.7), - min_bbox_size=0), - rcnn=dict( - assigner=dict( - type='MaxIoUAssigner', - pos_iou_thr=0.5, - neg_iou_thr=0.5, - min_pos_iou=0.5, - match_low_quality=True, - ignore_iof_thr=-1), - sampler=dict( - type='OHEMSampler', - num=512, - pos_fraction=0.25, - neg_pos_ub=-1, - add_gt_as_proposals=True), - mask_size=28, - pos_weight=-1, - debug=False)), - test_cfg=dict( - rpn=dict( - nms_across_levels=False, - nms_pre=1000, - nms_post=1000, - max_per_img=1000, - nms=dict(type='nms', iou_threshold=0.7), - min_bbox_size=0), - rcnn=dict( - score_thr=0.05, - nms=dict(type='nms', iou_threshold=0.5), - max_per_img=100, - mask_thr_binary=0.5))) diff --git a/spaces/docs-demos/dpr-question_encoder-bert-base-multilingual/app.py b/spaces/docs-demos/dpr-question_encoder-bert-base-multilingual/app.py deleted file mode 100644 index 78ad9deba759bc7df0a486f2fcdad692d741fa8b..0000000000000000000000000000000000000000 --- a/spaces/docs-demos/dpr-question_encoder-bert-base-multilingual/app.py +++ /dev/null @@ -1,32 +0,0 @@ -import gradio as gr - -title = "DPR" - -description = "Gradio Demo for DPR. To use it, simply add your text, or click one of the examples to load them. Read more at the links below." - -article = "

          Dense Passage Retrieval for Open-Domain Question Answering

          " - -examples = [ - ["Hello, is my dog cute ?","dpr-question_encoder-bert-base-multilingual"] -] - -io1 = gr.Interface.load("huggingface/voidful/dpr-question_encoder-bert-base-multilingual") - -io2 = gr.Interface.load("huggingface/sivasankalpp/dpr-multidoc2dial-structure-question-encoder") - -def inference(inputtext, model): - if model == "dpr-question_encoder-bert-base-multilingual": - outlabel = io1(inputtext) - else: - outlabel = io2(inputtext) - return outlabel - - -gr.Interface( - inference, - [gr.inputs.Textbox(label="Context",lines=10),gr.inputs.Dropdown(choices=["dpr-question_encoder-bert-base-multilingual","dpr-multidoc2dial-structure-question-encoder"], type="value", default="dpr-question_encoder-bert-base-multilingual", label="model")], - [gr.outputs.Dataframe(type="pandas",label="Output")], - examples=examples, - article=article, - title=title, - description=description).launch(enable_queue=True) \ No newline at end of file diff --git a/spaces/doevent/3D_Photo_Inpainting/utils.py b/spaces/doevent/3D_Photo_Inpainting/utils.py deleted file mode 100644 index 808e48b1979d16f32c050f43f1f6c0ca36d8d18b..0000000000000000000000000000000000000000 --- a/spaces/doevent/3D_Photo_Inpainting/utils.py +++ /dev/null @@ -1,1416 +0,0 @@ -import os -import glob -import cv2 -import scipy.misc as misc -from skimage.transform import resize -import numpy as np -from functools import reduce -from operator import mul -import torch -from torch import nn -import matplotlib.pyplot as plt -import re -try: - import cynetworkx as netx -except ImportError: - import networkx as netx -from scipy.ndimage import gaussian_filter -from skimage.feature import canny -import collections -import shutil -import imageio -import copy -from matplotlib import pyplot as plt -from mpl_toolkits.mplot3d import Axes3D -import time -from scipy.interpolate import interp1d -from collections import namedtuple - -def path_planning(num_frames, x, y, z, path_type=''): - if path_type == 'straight-line': - corner_points = np.array([[0, 0, 0], [(0 + x) * 0.5, (0 + y) * 0.5, (0 + z) * 0.5], [x, y, z]]) - corner_t = np.linspace(0, 1, len(corner_points)) - t = np.linspace(0, 1, num_frames) - cs = interp1d(corner_t, corner_points, axis=0, kind='quadratic') - spline = cs(t) - xs, ys, zs = [xx.squeeze() for xx in np.split(spline, 3, 1)] - elif path_type == 'double-straight-line': - corner_points = np.array([[-x, -y, -z], [0, 0, 0], [x, y, z]]) - corner_t = np.linspace(0, 1, len(corner_points)) - t = np.linspace(0, 1, num_frames) - cs = interp1d(corner_t, corner_points, axis=0, kind='quadratic') - spline = cs(t) - xs, ys, zs = [xx.squeeze() for xx in np.split(spline, 3, 1)] - elif path_type == 'circle': - xs, ys, zs = [], [], [] - for frame_id, bs_shift_val in enumerate(np.arange(-2.0, 2.0, (4./num_frames))): - xs += [np.cos(bs_shift_val * np.pi) * 1 * x] - ys += [np.sin(bs_shift_val * np.pi) * 1 * y] - zs += [np.cos(bs_shift_val * np.pi/2.) 
* 1 * z] - xs, ys, zs = np.array(xs), np.array(ys), np.array(zs) - - return xs, ys, zs - -def open_small_mask(mask, context, open_iteration, kernel): - np_mask = mask.cpu().data.numpy().squeeze().astype(np.uint8) - raw_mask = np_mask.copy() - np_context = context.cpu().data.numpy().squeeze().astype(np.uint8) - np_input = np_mask + np_context - for _ in range(open_iteration): - np_input = cv2.erode(cv2.dilate(np_input, np.ones((kernel, kernel)), iterations=1), np.ones((kernel,kernel)), iterations=1) - np_mask[(np_input - np_context) > 0] = 1 - out_mask = torch.FloatTensor(np_mask).to(mask)[None, None, ...] - - return out_mask - -def filter_irrelevant_edge_new(self_edge, comp_edge, other_edges, other_edges_with_id, current_edge_id, context, depth, mesh, context_cc, spdb=False): - other_edges = other_edges.squeeze().astype(np.uint8) - other_edges_with_id = other_edges_with_id.squeeze() - self_edge = self_edge.squeeze() - dilate_bevel_self_edge = cv2.dilate((self_edge + comp_edge).astype(np.uint8), np.array([[1,1,1],[1,1,1],[1,1,1]]), iterations=1) - dilate_cross_self_edge = cv2.dilate((self_edge + comp_edge).astype(np.uint8), np.array([[0,1,0],[1,1,1],[0,1,0]]).astype(np.uint8), iterations=1) - edge_ids = np.unique(other_edges_with_id * context + (-1) * (1 - context)).astype(np.int) - end_depth_maps = np.zeros_like(self_edge) - self_edge_ids = np.sort(np.unique(other_edges_with_id[self_edge > 0]).astype(np.int)) - self_edge_ids = self_edge_ids[1:] if self_edge_ids.shape[0] > 0 and self_edge_ids[0] == -1 else self_edge_ids - self_comp_ids = np.sort(np.unique(other_edges_with_id[comp_edge > 0]).astype(np.int)) - self_comp_ids = self_comp_ids[1:] if self_comp_ids.shape[0] > 0 and self_comp_ids[0] == -1 else self_comp_ids - edge_ids = edge_ids[1:] if edge_ids[0] == -1 else edge_ids - other_edges_info = [] - extend_other_edges = np.zeros_like(other_edges) - if spdb is True: - f, ((ax1, ax2, ax3)) = plt.subplots(1, 3, sharex=True, sharey=True); ax1.imshow(self_edge); ax2.imshow(context); ax3.imshow(other_edges_with_id * context + (-1) * (1 - context)); plt.show() - import pdb; pdb.set_trace() - filter_self_edge = np.zeros_like(self_edge) - for self_edge_id in self_edge_ids: - filter_self_edge[other_edges_with_id == self_edge_id] = 1 - dilate_self_comp_edge = cv2.dilate(comp_edge, kernel=np.ones((3, 3)), iterations=2) - valid_self_comp_edge = np.zeros_like(comp_edge) - for self_comp_id in self_comp_ids: - valid_self_comp_edge[self_comp_id == other_edges_with_id] = 1 - self_comp_edge = dilate_self_comp_edge * valid_self_comp_edge - filter_self_edge = (filter_self_edge + self_comp_edge).clip(0, 1) - for edge_id in edge_ids: - other_edge_locs = (other_edges_with_id == edge_id).astype(np.uint8) - condition = (other_edge_locs * other_edges * context.astype(np.uint8)) - end_cross_point = dilate_cross_self_edge * condition * (1 - filter_self_edge) - end_bevel_point = dilate_bevel_self_edge * condition * (1 - filter_self_edge) - if end_bevel_point.max() != 0: - end_depth_maps[end_bevel_point != 0] = depth[end_bevel_point != 0] - if end_cross_point.max() == 0: - nxs, nys = np.where(end_bevel_point != 0) - for nx, ny in zip(nxs, nys): - bevel_node = [xx for xx in context_cc if xx[0] == nx and xx[1] == ny][0] - for ne in mesh.neighbors(bevel_node): - if other_edges_with_id[ne[0], ne[1]] > -1 and dilate_cross_self_edge[ne[0], ne[1]] > 0: - extend_other_edges[ne[0], ne[1]] = 1 - break - else: - other_edges[other_edges_with_id == edge_id] = 0 - other_edges = (other_edges + extend_other_edges).clip(0, 1) * context 
- - return other_edges, end_depth_maps, other_edges_info - -def clean_far_edge_new(input_edge, end_depth_maps, mask, context, global_mesh, info_on_pix, self_edge, inpaint_id, config): - mesh = netx.Graph() - hxs, hys = np.where(input_edge * mask > 0) - valid_near_edge = (input_edge != 0).astype(np.uint8) * context - valid_map = mask + context - invalid_edge_ids = [] - for hx, hy in zip(hxs, hys): - node = (hx ,hy) - mesh.add_node((hx, hy)) - eight_nes = [ne for ne in [(hx + 1, hy), (hx - 1, hy), (hx, hy + 1), (hx, hy - 1), \ - (hx + 1, hy + 1), (hx - 1, hy - 1), (hx - 1, hy + 1), (hx + 1, hy - 1)]\ - if 0 <= ne[0] < input_edge.shape[0] and 0 <= ne[1] < input_edge.shape[1] and 0 < input_edge[ne[0], ne[1]]] # or end_depth_maps[ne[0], ne[1]] != 0] - for ne in eight_nes: - mesh.add_edge(node, ne, length=np.hypot(ne[0] - hx, ne[1] - hy)) - if end_depth_maps[ne[0], ne[1]] != 0: - mesh.nodes[ne[0], ne[1]]['cnt'] = True - if end_depth_maps[ne[0], ne[1]] == 0: - import pdb; pdb.set_trace() - mesh.nodes[ne[0], ne[1]]['depth'] = end_depth_maps[ne[0], ne[1]] - elif mask[ne[0], ne[1]] != 1: - four_nes = [nne for nne in [(ne[0] + 1, ne[1]), (ne[0] - 1, ne[1]), (ne[0], ne[1] + 1), (ne[0], ne[1] - 1)]\ - if nne[0] < end_depth_maps.shape[0] and nne[0] >= 0 and nne[1] < end_depth_maps.shape[1] and nne[1] >= 0] - for nne in four_nes: - if end_depth_maps[nne[0], nne[1]] != 0: - mesh.add_edge(nne, ne, length=np.hypot(nne[0] - ne[0], nne[1] - ne[1])) - mesh.nodes[nne[0], nne[1]]['cnt'] = True - mesh.nodes[nne[0], nne[1]]['depth'] = end_depth_maps[nne[0], nne[1]] - ccs = [*netx.connected_components(mesh)] - end_pts = [] - for cc in ccs: - end_pts.append(set()) - for node in cc: - if mesh.nodes[node].get('cnt') is not None: - end_pts[-1].add((node[0], node[1], mesh.nodes[node]['depth'])) - predef_npaths = [None for _ in range(len(ccs))] - fpath_map = np.zeros_like(input_edge) - 1 - npath_map = np.zeros_like(input_edge) - 1 - npaths, fpaths = dict(), dict() - break_flag = False - end_idx = 0 - while end_idx < len(end_pts): - end_pt, cc = [*zip(end_pts, ccs)][end_idx] - end_idx += 1 - sorted_end_pt = [] - fpath = [] - iter_fpath = [] - if len(end_pt) > 2 or len(end_pt) == 0: - if len(end_pt) > 2: - continue - continue - if len(end_pt) == 2: - ravel_end = [*end_pt] - tmp_sub_mesh = mesh.subgraph(list(cc)).copy() - tmp_npath = [*netx.shortest_path(tmp_sub_mesh, (ravel_end[0][0], ravel_end[0][1]), (ravel_end[1][0], ravel_end[1][1]), weight='length')] - fpath_map1, npath_map1, disp_diff1 = plan_path(mesh, info_on_pix, cc, ravel_end[0:1], global_mesh, input_edge, mask, valid_map, inpaint_id, npath_map=None, fpath_map=None, npath=tmp_npath) - fpath_map2, npath_map2, disp_diff2 = plan_path(mesh, info_on_pix, cc, ravel_end[1:2], global_mesh, input_edge, mask, valid_map, inpaint_id, npath_map=None, fpath_map=None, npath=tmp_npath) - tmp_disp_diff = [disp_diff1, disp_diff2] - self_end = [] - edge_len = [] - ds_edge = cv2.dilate(self_edge.astype(np.uint8), np.ones((3, 3)), iterations=1) - if ds_edge[ravel_end[0][0], ravel_end[0][1]] > 0: - self_end.append(1) - else: - self_end.append(0) - if ds_edge[ravel_end[1][0], ravel_end[1][1]] > 0: - self_end.append(1) - else: - self_end.append(0) - edge_len = [np.count_nonzero(npath_map1), np.count_nonzero(npath_map2)] - sorted_end_pts = [xx[0] for xx in sorted(zip(ravel_end, self_end, edge_len, [disp_diff1, disp_diff2]), key=lambda x: (x[1], x[2]), reverse=True)] - re_npath_map1, re_fpath_map1 = (npath_map1 != -1).astype(np.uint8), (fpath_map1 != -1).astype(np.uint8) - re_npath_map2, 
re_fpath_map2 = (npath_map2 != -1).astype(np.uint8), (fpath_map2 != -1).astype(np.uint8) - if np.count_nonzero(re_npath_map1 * re_npath_map2 * mask) / \ - (np.count_nonzero((re_npath_map1 + re_npath_map2) * mask) + 1e-6) > 0.5\ - and np.count_nonzero(re_fpath_map1 * re_fpath_map2 * mask) / \ - (np.count_nonzero((re_fpath_map1 + re_fpath_map2) * mask) + 1e-6) > 0.5\ - and tmp_disp_diff[0] != -1 and tmp_disp_diff[1] != -1: - my_fpath_map, my_npath_map, npath, fpath = \ - plan_path_e2e(mesh, cc, sorted_end_pts, global_mesh, input_edge, mask, valid_map, inpaint_id, npath_map=None, fpath_map=None) - npath_map[my_npath_map != -1] = my_npath_map[my_npath_map != -1] - fpath_map[my_fpath_map != -1] = my_fpath_map[my_fpath_map != -1] - if len(fpath) > 0: - edge_id = global_mesh.nodes[[*sorted_end_pts][0]]['edge_id'] - fpaths[edge_id] = fpath - npaths[edge_id] = npath - invalid_edge_ids.append(edge_id) - else: - if tmp_disp_diff[0] != -1: - ratio_a = tmp_disp_diff[0] / (np.sum(tmp_disp_diff) + 1e-8) - else: - ratio_a = 0 - if tmp_disp_diff[1] != -1: - ratio_b = tmp_disp_diff[1] / (np.sum(tmp_disp_diff) + 1e-8) - else: - ratio_b = 0 - npath_len = len(tmp_npath) - if npath_len > config['depth_edge_dilate_2'] * 2: - npath_len = npath_len - (config['depth_edge_dilate_2'] * 1) - tmp_npath_a = tmp_npath[:int(np.floor(npath_len * ratio_a))] - tmp_npath_b = tmp_npath[::-1][:int(np.floor(npath_len * ratio_b))] - tmp_merge = [] - if len(tmp_npath_a) > 0 and sorted_end_pts[0][0] == tmp_npath_a[0][0] and sorted_end_pts[0][1] == tmp_npath_a[0][1]: - if len(tmp_npath_a) > 0 and mask[tmp_npath_a[-1][0], tmp_npath_a[-1][1]] > 0: - tmp_merge.append([sorted_end_pts[:1], tmp_npath_a]) - if len(tmp_npath_b) > 0 and mask[tmp_npath_b[-1][0], tmp_npath_b[-1][1]] > 0: - tmp_merge.append([sorted_end_pts[1:2], tmp_npath_b]) - elif len(tmp_npath_b) > 0 and sorted_end_pts[0][0] == tmp_npath_b[0][0] and sorted_end_pts[0][1] == tmp_npath_b[0][1]: - if len(tmp_npath_b) > 0 and mask[tmp_npath_b[-1][0], tmp_npath_b[-1][1]] > 0: - tmp_merge.append([sorted_end_pts[:1], tmp_npath_b]) - if len(tmp_npath_a) > 0 and mask[tmp_npath_a[-1][0], tmp_npath_a[-1][1]] > 0: - tmp_merge.append([sorted_end_pts[1:2], tmp_npath_a]) - for tmp_idx in range(len(tmp_merge)): - if len(tmp_merge[tmp_idx][1]) == 0: - continue - end_pts.append(tmp_merge[tmp_idx][0]) - ccs.append(set(tmp_merge[tmp_idx][1])) - if len(end_pt) == 1: - sub_mesh = mesh.subgraph(list(cc)).copy() - pnodes = netx.periphery(sub_mesh) - if len(end_pt) == 1: - ends = [*end_pt] - elif len(sorted_end_pt) == 1: - ends = [*sorted_end_pt] - else: - import pdb; pdb.set_trace() - try: - edge_id = global_mesh.nodes[ends[0]]['edge_id'] - except: - import pdb; pdb.set_trace() - pnodes = sorted(pnodes, - key=lambda x: np.hypot((x[0] - ends[0][0]), (x[1] - ends[0][1])), - reverse=True)[0] - npath = [*netx.shortest_path(sub_mesh, (ends[0][0], ends[0][1]), pnodes, weight='length')] - for np_node in npath: - npath_map[np_node[0], np_node[1]] = edge_id - fpath = [] - if global_mesh.nodes[ends[0]].get('far') is None: - print("None far") - else: - fnodes = global_mesh.nodes[ends[0]].get('far') - dmask = mask + 0 - did = 0 - while True: - did += 1 - dmask = cv2.dilate(dmask, np.ones((3, 3)), iterations=1) - if did > 3: - break - ffnode = [fnode for fnode in fnodes if (dmask[fnode[0], fnode[1]] > 0 and mask[fnode[0], fnode[1]] == 0 and\ - global_mesh.nodes[fnode].get('inpaint_id') != inpaint_id + 1)] - if len(ffnode) > 0: - fnode = ffnode[0] - break - if len(ffnode) == 0: - continue - 
fpath.append((fnode[0], fnode[1])) - barrel_dir = np.array([[1, 0], [1, 1], [0, 1], [-1, 1], [-1, 0], [-1, -1], [0, -1], [1, -1]]) - n2f_dir = (int(fnode[0] - npath[0][0]), int(fnode[1] - npath[0][1])) - while True: - if barrel_dir[0, 0] == n2f_dir[0] and barrel_dir[0, 1] == n2f_dir[1]: - n2f_barrel = barrel_dir.copy() - break - barrel_dir = np.roll(barrel_dir, 1, axis=0) - for step in range(0, len(npath)): - if step == 0: - continue - elif step == 1: - next_dir = (npath[step][0] - npath[step - 1][0], npath[step][1] - npath[step - 1][1]) - while True: - if barrel_dir[0, 0] == next_dir[0] and barrel_dir[0, 1] == next_dir[1]: - next_barrel = barrel_dir.copy() - break - barrel_dir = np.roll(barrel_dir, 1, axis=0) - barrel_pair = np.stack((n2f_barrel, next_barrel), axis=0) - n2f_dir = (barrel_pair[0, 0, 0], barrel_pair[0, 0, 1]) - elif step > 1: - next_dir = (npath[step][0] - npath[step - 1][0], npath[step][1] - npath[step - 1][1]) - while True: - if barrel_pair[1, 0, 0] == next_dir[0] and barrel_pair[1, 0, 1] == next_dir[1]: - next_barrel = barrel_pair.copy() - break - barrel_pair = np.roll(barrel_pair, 1, axis=1) - n2f_dir = (barrel_pair[0, 0, 0], barrel_pair[0, 0, 1]) - new_locs = [] - if abs(n2f_dir[0]) == 1: - new_locs.append((npath[step][0] + n2f_dir[0], npath[step][1])) - if abs(n2f_dir[1]) == 1: - new_locs.append((npath[step][0], npath[step][1] + n2f_dir[1])) - if len(new_locs) > 1: - new_locs = sorted(new_locs, key=lambda xx: np.hypot((xx[0] - fpath[-1][0]), (xx[1] - fpath[-1][1]))) - break_flag = False - for new_loc in new_locs: - new_loc_nes = [xx for xx in [(new_loc[0] + 1, new_loc[1]), (new_loc[0] - 1, new_loc[1]), - (new_loc[0], new_loc[1] + 1), (new_loc[0], new_loc[1] - 1)]\ - if xx[0] >= 0 and xx[0] < fpath_map.shape[0] and xx[1] >= 0 and xx[1] < fpath_map.shape[1]] - if np.all([(fpath_map[nlne[0], nlne[1]] == -1) for nlne in new_loc_nes]) != True: - break - if npath_map[new_loc[0], new_loc[1]] != -1: - if npath_map[new_loc[0], new_loc[1]] != edge_id: - break_flag = True - break - else: - continue - if valid_map[new_loc[0], new_loc[1]] == 0: - break_flag = True - break - fpath.append(new_loc) - if break_flag is True: - break - if step != len(npath) - 1: - for xx in npath[step:]: - if npath_map[xx[0], xx[1]] == edge_id: - npath_map[xx[0], xx[1]] = -1 - npath = npath[:step] - if len(fpath) > 0: - for fp_node in fpath: - fpath_map[fp_node[0], fp_node[1]] = edge_id - fpaths[edge_id] = fpath - npaths[edge_id] = npath - fpath_map[valid_near_edge != 0] = -1 - if len(fpath) > 0: - iter_fpath = copy.deepcopy(fpaths[edge_id]) - for node in iter_fpath: - if valid_near_edge[node[0], node[1]] != 0: - fpaths[edge_id].remove(node) - - return fpath_map, npath_map, False, npaths, fpaths, invalid_edge_ids - -def plan_path_e2e(mesh, cc, end_pts, global_mesh, input_edge, mask, valid_map, inpaint_id, npath_map=None, fpath_map=None): - my_npath_map = np.zeros_like(input_edge) - 1 - my_fpath_map = np.zeros_like(input_edge) - 1 - sub_mesh = mesh.subgraph(list(cc)).copy() - ends_1, ends_2 = end_pts[0], end_pts[1] - edge_id = global_mesh.nodes[ends_1]['edge_id'] - npath = [*netx.shortest_path(sub_mesh, (ends_1[0], ends_1[1]), (ends_2[0], ends_2[1]), weight='length')] - for np_node in npath: - my_npath_map[np_node[0], np_node[1]] = edge_id - fpath = [] - if global_mesh.nodes[ends_1].get('far') is None: - print("None far") - else: - fnodes = global_mesh.nodes[ends_1].get('far') - dmask = mask + 0 - while True: - dmask = cv2.dilate(dmask, np.ones((3, 3)), iterations=1) - ffnode = [fnode for fnode in fnodes 
if (dmask[fnode[0], fnode[1]] > 0 and mask[fnode[0], fnode[1]] == 0 and\ - global_mesh.nodes[fnode].get('inpaint_id') != inpaint_id + 1)] - if len(ffnode) > 0: - fnode = ffnode[0] - break - e_fnodes = global_mesh.nodes[ends_2].get('far') - dmask = mask + 0 - while True: - dmask = cv2.dilate(dmask, np.ones((3, 3)), iterations=1) - e_ffnode = [e_fnode for e_fnode in e_fnodes if (dmask[e_fnode[0], e_fnode[1]] > 0 and mask[e_fnode[0], e_fnode[1]] == 0 and\ - global_mesh.nodes[e_fnode].get('inpaint_id') != inpaint_id + 1)] - if len(e_ffnode) > 0: - e_fnode = e_ffnode[0] - break - fpath.append((fnode[0], fnode[1])) - if len(e_ffnode) == 0 or len(ffnode) == 0: - return my_npath_map, my_fpath_map, [], [] - barrel_dir = np.array([[1, 0], [1, 1], [0, 1], [-1, 1], [-1, 0], [-1, -1], [0, -1], [1, -1]]) - n2f_dir = (int(fnode[0] - npath[0][0]), int(fnode[1] - npath[0][1])) - while True: - if barrel_dir[0, 0] == n2f_dir[0] and barrel_dir[0, 1] == n2f_dir[1]: - n2f_barrel = barrel_dir.copy() - break - barrel_dir = np.roll(barrel_dir, 1, axis=0) - for step in range(0, len(npath)): - if step == 0: - continue - elif step == 1: - next_dir = (npath[step][0] - npath[step - 1][0], npath[step][1] - npath[step - 1][1]) - while True: - if barrel_dir[0, 0] == next_dir[0] and barrel_dir[0, 1] == next_dir[1]: - next_barrel = barrel_dir.copy() - break - barrel_dir = np.roll(barrel_dir, 1, axis=0) - barrel_pair = np.stack((n2f_barrel, next_barrel), axis=0) - n2f_dir = (barrel_pair[0, 0, 0], barrel_pair[0, 0, 1]) - elif step > 1: - next_dir = (npath[step][0] - npath[step - 1][0], npath[step][1] - npath[step - 1][1]) - while True: - if barrel_pair[1, 0, 0] == next_dir[0] and barrel_pair[1, 0, 1] == next_dir[1]: - next_barrel = barrel_pair.copy() - break - barrel_pair = np.roll(barrel_pair, 1, axis=1) - n2f_dir = (barrel_pair[0, 0, 0], barrel_pair[0, 0, 1]) - new_locs = [] - if abs(n2f_dir[0]) == 1: - new_locs.append((npath[step][0] + n2f_dir[0], npath[step][1])) - if abs(n2f_dir[1]) == 1: - new_locs.append((npath[step][0], npath[step][1] + n2f_dir[1])) - if len(new_locs) > 1: - new_locs = sorted(new_locs, key=lambda xx: np.hypot((xx[0] - fpath[-1][0]), (xx[1] - fpath[-1][1]))) - break_flag = False - for new_loc in new_locs: - new_loc_nes = [xx for xx in [(new_loc[0] + 1, new_loc[1]), (new_loc[0] - 1, new_loc[1]), - (new_loc[0], new_loc[1] + 1), (new_loc[0], new_loc[1] - 1)]\ - if xx[0] >= 0 and xx[0] < my_fpath_map.shape[0] and xx[1] >= 0 and xx[1] < my_fpath_map.shape[1]] - if fpath_map is not None and np.sum([fpath_map[nlne[0], nlne[1]] for nlne in new_loc_nes]) != 0: - break_flag = True - break - if my_npath_map[new_loc[0], new_loc[1]] != -1: - continue - if npath_map is not None and npath_map[new_loc[0], new_loc[1]] != edge_id: - break_flag = True - break - fpath.append(new_loc) - if break_flag is True: - break - if (e_fnode[0], e_fnode[1]) not in fpath: - fpath.append((e_fnode[0], e_fnode[1])) - if step != len(npath) - 1: - for xx in npath[step:]: - if my_npath_map[xx[0], xx[1]] == edge_id: - my_npath_map[xx[0], xx[1]] = -1 - npath = npath[:step] - if len(fpath) > 0: - for fp_node in fpath: - my_fpath_map[fp_node[0], fp_node[1]] = edge_id - - return my_fpath_map, my_npath_map, npath, fpath - -def plan_path(mesh, info_on_pix, cc, end_pt, global_mesh, input_edge, mask, valid_map, inpaint_id, npath_map=None, fpath_map=None, npath=None): - my_npath_map = np.zeros_like(input_edge) - 1 - my_fpath_map = np.zeros_like(input_edge) - 1 - sub_mesh = mesh.subgraph(list(cc)).copy() - pnodes = netx.periphery(sub_mesh) - ends = 
[*end_pt] - edge_id = global_mesh.nodes[ends[0]]['edge_id'] - pnodes = sorted(pnodes, - key=lambda x: np.hypot((x[0] - ends[0][0]), (x[1] - ends[0][1])), - reverse=True)[0] - if npath is None: - npath = [*netx.shortest_path(sub_mesh, (ends[0][0], ends[0][1]), pnodes, weight='length')] - else: - if (ends[0][0], ends[0][1]) == npath[0]: - npath = npath - elif (ends[0][0], ends[0][1]) == npath[-1]: - npath = npath[::-1] - else: - import pdb; pdb.set_trace() - for np_node in npath: - my_npath_map[np_node[0], np_node[1]] = edge_id - fpath = [] - if global_mesh.nodes[ends[0]].get('far') is None: - print("None far") - else: - fnodes = global_mesh.nodes[ends[0]].get('far') - dmask = mask + 0 - did = 0 - while True: - did += 1 - if did > 3: - return my_fpath_map, my_npath_map, -1 - dmask = cv2.dilate(dmask, np.ones((3, 3)), iterations=1) - ffnode = [fnode for fnode in fnodes if (dmask[fnode[0], fnode[1]] > 0 and mask[fnode[0], fnode[1]] == 0 and\ - global_mesh.nodes[fnode].get('inpaint_id') != inpaint_id + 1)] - if len(ffnode) > 0: - fnode = ffnode[0] - break - - fpath.append((fnode[0], fnode[1])) - disp_diff = 0. - for n_loc in npath: - if mask[n_loc[0], n_loc[1]] != 0: - disp_diff = abs(abs(1. / info_on_pix[(n_loc[0], n_loc[1])][0]['depth']) - abs(1. / ends[0][2])) - break - barrel_dir = np.array([[1, 0], [1, 1], [0, 1], [-1, 1], [-1, 0], [-1, -1], [0, -1], [1, -1]]) - n2f_dir = (int(fnode[0] - npath[0][0]), int(fnode[1] - npath[0][1])) - while True: - if barrel_dir[0, 0] == n2f_dir[0] and barrel_dir[0, 1] == n2f_dir[1]: - n2f_barrel = barrel_dir.copy() - break - barrel_dir = np.roll(barrel_dir, 1, axis=0) - for step in range(0, len(npath)): - if step == 0: - continue - elif step == 1: - next_dir = (npath[step][0] - npath[step - 1][0], npath[step][1] - npath[step - 1][1]) - while True: - if barrel_dir[0, 0] == next_dir[0] and barrel_dir[0, 1] == next_dir[1]: - next_barrel = barrel_dir.copy() - break - barrel_dir = np.roll(barrel_dir, 1, axis=0) - barrel_pair = np.stack((n2f_barrel, next_barrel), axis=0) - n2f_dir = (barrel_pair[0, 0, 0], barrel_pair[0, 0, 1]) - elif step > 1: - next_dir = (npath[step][0] - npath[step - 1][0], npath[step][1] - npath[step - 1][1]) - while True: - if barrel_pair[1, 0, 0] == next_dir[0] and barrel_pair[1, 0, 1] == next_dir[1]: - next_barrel = barrel_pair.copy() - break - barrel_pair = np.roll(barrel_pair, 1, axis=1) - n2f_dir = (barrel_pair[0, 0, 0], barrel_pair[0, 0, 1]) - new_locs = [] - if abs(n2f_dir[0]) == 1: - new_locs.append((npath[step][0] + n2f_dir[0], npath[step][1])) - if abs(n2f_dir[1]) == 1: - new_locs.append((npath[step][0], npath[step][1] + n2f_dir[1])) - if len(new_locs) > 1: - new_locs = sorted(new_locs, key=lambda xx: np.hypot((xx[0] - fpath[-1][0]), (xx[1] - fpath[-1][1]))) - break_flag = False - for new_loc in new_locs: - new_loc_nes = [xx for xx in [(new_loc[0] + 1, new_loc[1]), (new_loc[0] - 1, new_loc[1]), - (new_loc[0], new_loc[1] + 1), (new_loc[0], new_loc[1] - 1)]\ - if xx[0] >= 0 and xx[0] < my_fpath_map.shape[0] and xx[1] >= 0 and xx[1] < my_fpath_map.shape[1]] - if fpath_map is not None and np.all([(fpath_map[nlne[0], nlne[1]] == -1) for nlne in new_loc_nes]) != True: - break_flag = True - break - if np.all([(my_fpath_map[nlne[0], nlne[1]] == -1) for nlne in new_loc_nes]) != True: - break_flag = True - break - if my_npath_map[new_loc[0], new_loc[1]] != -1: - continue - if npath_map is not None and npath_map[new_loc[0], new_loc[1]] != edge_id: - break_flag = True - break - if valid_map[new_loc[0], new_loc[1]] == 0: - break_flag = True - 
break - fpath.append(new_loc) - if break_flag is True: - break - if step != len(npath) - 1: - for xx in npath[step:]: - if my_npath_map[xx[0], xx[1]] == edge_id: - my_npath_map[xx[0], xx[1]] = -1 - npath = npath[:step] - if len(fpath) > 0: - for fp_node in fpath: - my_fpath_map[fp_node[0], fp_node[1]] = edge_id - - return my_fpath_map, my_npath_map, disp_diff - -def refresh_node(old_node, old_feat, new_node, new_feat, mesh, stime=False): - mesh.add_node(new_node) - mesh.nodes[new_node].update(new_feat) - mesh.nodes[new_node].update(old_feat) - for ne in mesh.neighbors(old_node): - mesh.add_edge(new_node, ne) - if mesh.nodes[new_node].get('far') is not None: - tmp_far_nodes = mesh.nodes[new_node]['far'] - for far_node in tmp_far_nodes: - if mesh.has_node(far_node) is False: - mesh.nodes[new_node]['far'].remove(far_node) - continue - if mesh.nodes[far_node].get('near') is not None: - for idx in range(len(mesh.nodes[far_node].get('near'))): - if mesh.nodes[far_node]['near'][idx][0] == new_node[0] and mesh.nodes[far_node]['near'][idx][1] == new_node[1]: - if len(mesh.nodes[far_node]['near'][idx]) == len(old_node): - mesh.nodes[far_node]['near'][idx] = new_node - if mesh.nodes[new_node].get('near') is not None: - tmp_near_nodes = mesh.nodes[new_node]['near'] - for near_node in tmp_near_nodes: - if mesh.has_node(near_node) is False: - mesh.nodes[new_node]['near'].remove(near_node) - continue - if mesh.nodes[near_node].get('far') is not None: - for idx in range(len(mesh.nodes[near_node].get('far'))): - if mesh.nodes[near_node]['far'][idx][0] == new_node[0] and mesh.nodes[near_node]['far'][idx][1] == new_node[1]: - if len(mesh.nodes[near_node]['far'][idx]) == len(old_node): - mesh.nodes[near_node]['far'][idx] = new_node - if new_node != old_node: - mesh.remove_node(old_node) - if stime is False: - return mesh - else: - return mesh, None, None - - -def create_placeholder(context, mask, depth, fpath_map, npath_map, mesh, inpaint_id, edge_ccs, extend_edge_cc, all_edge_maps, self_edge_id): - add_node_time = 0 - add_edge_time = 0 - add_far_near_time = 0 - valid_area = context + mask - H, W = mesh.graph['H'], mesh.graph['W'] - edge_cc = edge_ccs[self_edge_id] - num_com = len(edge_cc) + len(extend_edge_cc) - hxs, hys = np.where(mask > 0) - for hx, hy in zip(hxs, hys): - mesh.add_node((hx, hy), inpaint_id=inpaint_id + 1, num_context=num_com) - for hx, hy in zip(hxs, hys): - four_nes = [(x, y) for x, y in [(hx + 1, hy), (hx - 1, hy), (hx, hy + 1), (hx, hy - 1)] if\ - 0 <= x < mesh.graph['H'] and 0 <= y < mesh.graph['W'] and valid_area[x, y] != 0] - for ne in four_nes: - if mask[ne[0], ne[1]] != 0: - if not mesh.has_edge((hx, hy), ne): - mesh.add_edge((hx, hy), ne) - elif depth[ne[0], ne[1]] != 0: - if mesh.has_node((ne[0], ne[1], depth[ne[0], ne[1]])) and\ - not mesh.has_edge((hx, hy), (ne[0], ne[1], depth[ne[0], ne[1]])): - mesh.add_edge((hx, hy), (ne[0], ne[1], depth[ne[0], ne[1]])) - else: - print("Undefined context node.") - import pdb; pdb.set_trace() - near_ids = np.unique(npath_map) - if near_ids[0] == -1: near_ids = near_ids[1:] - for near_id in near_ids: - hxs, hys = np.where((fpath_map == near_id) & (mask > 0)) - if hxs.shape[0] > 0: - mesh.graph['max_edge_id'] = mesh.graph['max_edge_id'] + 1 - else: - break - for hx, hy in zip(hxs, hys): - mesh.nodes[(hx, hy)]['edge_id'] = int(round(mesh.graph['max_edge_id'])) - four_nes = [(x, y) for x, y in [(hx + 1, hy), (hx - 1, hy), (hx, hy + 1), (hx, hy - 1)] if\ - x < mesh.graph['H'] and x >= 0 and y < mesh.graph['W'] and y >= 0 and npath_map[x, y] == 
near_id] - for xx in four_nes: - xx_n = copy.deepcopy(xx) - if not mesh.has_node(xx_n): - if mesh.has_node((xx_n[0], xx_n[1], depth[xx_n[0], xx_n[1]])): - xx_n = (xx_n[0], xx_n[1], depth[xx_n[0], xx_n[1]]) - if mesh.has_edge((hx, hy), xx_n): - # pass - mesh.remove_edge((hx, hy), xx_n) - if mesh.nodes[(hx, hy)].get('near') is None: - mesh.nodes[(hx, hy)]['near'] = [] - mesh.nodes[(hx, hy)]['near'].append(xx_n) - connect_point_exception = set() - hxs, hys = np.where((npath_map == near_id) & (all_edge_maps > -1)) - for hx, hy in zip(hxs, hys): - unknown_id = int(round(all_edge_maps[hx, hy])) - if unknown_id != near_id and unknown_id != self_edge_id: - unknown_node = set([xx for xx in edge_ccs[unknown_id] if xx[0] == hx and xx[1] == hy]) - connect_point_exception |= unknown_node - hxs, hys = np.where((npath_map == near_id) & (mask > 0)) - if hxs.shape[0] > 0: - mesh.graph['max_edge_id'] = mesh.graph['max_edge_id'] + 1 - else: - break - for hx, hy in zip(hxs, hys): - mesh.nodes[(hx, hy)]['edge_id'] = int(round(mesh.graph['max_edge_id'])) - mesh.nodes[(hx, hy)]['connect_point_id'] = int(round(near_id)) - mesh.nodes[(hx, hy)]['connect_point_exception'] = connect_point_exception - four_nes = [(x, y) for x, y in [(hx + 1, hy), (hx - 1, hy), (hx, hy + 1), (hx, hy - 1)] if\ - x < mesh.graph['H'] and x >= 0 and y < mesh.graph['W'] and y >= 0 and fpath_map[x, y] == near_id] - for xx in four_nes: - xx_n = copy.deepcopy(xx) - if not mesh.has_node(xx_n): - if mesh.has_node((xx_n[0], xx_n[1], depth[xx_n[0], xx_n[1]])): - xx_n = (xx_n[0], xx_n[1], depth[xx_n[0], xx_n[1]]) - if mesh.has_edge((hx, hy), xx_n): - mesh.remove_edge((hx, hy), xx_n) - if mesh.nodes[(hx, hy)].get('far') is None: - mesh.nodes[(hx, hy)]['far'] = [] - mesh.nodes[(hx, hy)]['far'].append(xx_n) - - return mesh, add_node_time, add_edge_time, add_far_near_time - -def clean_far_edge(mask_edge, mask_edge_with_id, context_edge, mask, info_on_pix, global_mesh, anchor): - if isinstance(mask_edge, torch.Tensor): - if mask_edge.is_cuda: - mask_edge = mask_edge.cpu() - mask_edge = mask_edge.data - mask_edge = mask_edge.numpy() - if isinstance(context_edge, torch.Tensor): - if context_edge.is_cuda: - context_edge = context_edge.cpu() - context_edge = context_edge.data - context_edge = context_edge.numpy() - if isinstance(mask, torch.Tensor): - if mask.is_cuda: - mask = mask.cpu() - mask = mask.data - mask = mask.numpy() - mask = mask.squeeze() - mask_edge = mask_edge.squeeze() - context_edge = context_edge.squeeze() - valid_near_edge = np.zeros_like(mask_edge) - far_edge = np.zeros_like(mask_edge) - far_edge_with_id = np.ones_like(mask_edge) * -1 - near_edge_with_id = np.ones_like(mask_edge) * -1 - uncleaned_far_edge = np.zeros_like(mask_edge) - # Detect if there is any valid pixel mask_edge, if not ==> return default value - if mask_edge.sum() == 0: - return far_edge, uncleaned_far_edge, far_edge_with_id, near_edge_with_id - mask_edge_ids = dict(collections.Counter(mask_edge_with_id.flatten())).keys() - for edge_id in mask_edge_ids: - if edge_id < 0: - continue - specific_edge_map = (mask_edge_with_id == edge_id).astype(np.uint8) - _, sub_specific_edge_maps = cv2.connectedComponents(specific_edge_map.astype(np.uint8), connectivity=8) - for sub_edge_id in range(1, sub_specific_edge_maps.max() + 1): - specific_edge_map = (sub_specific_edge_maps == sub_edge_id).astype(np.uint8) - edge_pxs, edge_pys = np.where(specific_edge_map > 0) - edge_mesh = netx.Graph() - for edge_px, edge_py in zip(edge_pxs, edge_pys): - edge_mesh.add_node((edge_px, edge_py)) - 
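# Note: the nested loops below link 8-adjacent edge pixels into a small
# NetworkX graph; its periphery and diameter are then used to pick the two
# endpoints of this edge segment, and the endpoint whose 3x3 neighborhood
# touches a context edge becomes the start of the cleaned far-edge path.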
for ex in [edge_px-1, edge_px, edge_px+1]: - for ey in [edge_py-1, edge_py, edge_py+1]: - if edge_px == ex and edge_py == ey: - continue - if ex < 0 or ex >= specific_edge_map.shape[0] or ey < 0 or ey >= specific_edge_map.shape[1]: - continue - if specific_edge_map[ex, ey] == 1: - if edge_mesh.has_node((ex, ey)): - edge_mesh.add_edge((ex, ey), (edge_px, edge_py)) - periphery_nodes = netx.periphery(edge_mesh) - path_diameter = netx.diameter(edge_mesh) - start_near_node = None - for node_s in periphery_nodes: - for node_e in periphery_nodes: - if node_s != node_e: - if netx.shortest_path_length(edge_mesh, node_s, node_e) == path_diameter: - if np.any(context_edge[node_s[0]-1:node_s[0]+2, node_s[1]-1:node_s[1]+2].flatten()): - start_near_node = (node_s[0], node_s[1]) - end_near_node = (node_e[0], node_e[1]) - break - if np.any(context_edge[node_e[0]-1:node_e[0]+2, node_e[1]-1:node_e[1]+2].flatten()): - start_near_node = (node_e[0], node_e[1]) - end_near_node = (node_s[0], node_s[1]) - break - if start_near_node is not None: - break - if start_near_node is None: - continue - new_specific_edge_map = np.zeros_like(mask) - for path_node in netx.shortest_path(edge_mesh, start_near_node, end_near_node): - new_specific_edge_map[path_node[0], path_node[1]] = 1 - context_near_pxs, context_near_pys = np.where(context_edge[start_near_node[0]-1:start_near_node[0]+2, start_near_node[1]-1:start_near_node[1]+2] > 0) - distance = np.abs((context_near_pxs - 1)) + np.abs((context_near_pys - 1)) - if (np.where(distance == distance.min())[0].shape[0]) > 1: - closest_pxs = context_near_pxs[np.where(distance == distance.min())[0]] - closest_pys = context_near_pys[np.where(distance == distance.min())[0]] - closest_depths = [] - for closest_px, closest_py in zip(closest_pxs, closest_pys): - if info_on_pix.get((closest_px + start_near_node[0] - 1 + anchor[0], closest_py + start_near_node[1] - 1 + anchor[2])) is not None: - for info in info_on_pix.get((closest_px + start_near_node[0] - 1 + anchor[0], closest_py + start_near_node[1] - 1 + anchor[2])): - if info['synthesis'] is False: - closest_depths.append(abs(info['depth'])) - context_near_px, context_near_py = closest_pxs[np.array(closest_depths).argmax()], closest_pys[np.array(closest_depths).argmax()] - else: - context_near_px, context_near_py = context_near_pxs[distance.argmin()], context_near_pys[distance.argmin()] - context_near_node = (start_near_node[0]-1 + context_near_px, start_near_node[1]-1 + context_near_py) - far_node_list = [] - global_context_near_node = (context_near_node[0] + anchor[0], context_near_node[1] + anchor[2]) - if info_on_pix.get(global_context_near_node) is not None: - for info in info_on_pix[global_context_near_node]: - if info['synthesis'] is False: - context_near_node_3d = (global_context_near_node[0], global_context_near_node[1], info['depth']) - if global_mesh.nodes[context_near_node_3d].get('far') is not None: - for far_node in global_mesh.nodes[context_near_node_3d].get('far'): - far_node = (far_node[0] - anchor[0], far_node[1] - anchor[2], far_node[2]) - if mask[far_node[0], far_node[1]] == 0: - far_node_list.append([far_node[0], far_node[1]]) - if len(far_node_list) > 0: - far_nodes_dist = np.sum(np.abs(np.array(far_node_list) - np.array([[edge_px, edge_py]])), axis=1) - context_far_node = tuple(far_node_list[far_nodes_dist.argmin()]) - corresponding_far_edge = np.zeros_like(mask_edge) - corresponding_far_edge[context_far_node[0], context_far_node[1]] = 1 - surround_map = cv2.dilate(new_specific_edge_map.astype(np.uint8), - 
np.array([[1,1,1],[1,1,1],[1,1,1]]).astype(np.uint8), - iterations=1) - specific_edge_map_wo_end_pt = new_specific_edge_map.copy() - specific_edge_map_wo_end_pt[end_near_node[0], end_near_node[1]] = 0 - surround_map_wo_end_pt = cv2.dilate(specific_edge_map_wo_end_pt.astype(np.uint8), - np.array([[1,1,1],[1,1,1],[1,1,1]]).astype(np.uint8), - iterations=1) - surround_map_wo_end_pt[new_specific_edge_map > 0] = 0 - surround_map_wo_end_pt[context_near_node[0], context_near_node[1]] = 0 - surround_map = surround_map_wo_end_pt.copy() - _, far_edge_cc = cv2.connectedComponents(surround_map.astype(np.uint8), connectivity=4) - start_far_node = None - accompany_far_node = None - if surround_map[context_far_node[0], context_far_node[1]] == 1: - start_far_node = context_far_node - else: - four_nes = [(context_far_node[0] - 1, context_far_node[1]), - (context_far_node[0] + 1, context_far_node[1]), - (context_far_node[0], context_far_node[1] - 1), - (context_far_node[0], context_far_node[1] + 1)] - candidate_bevel = [] - for ne in four_nes: - if surround_map[ne[0], ne[1]] == 1: - start_far_node = (ne[0], ne[1]) - break - elif (ne[0] != context_near_node[0] or ne[1] != context_near_node[1]) and \ - (ne[0] != start_near_node[0] or ne[1] != start_near_node[1]): - candidate_bevel.append((ne[0], ne[1])) - if start_far_node is None: - for ne in candidate_bevel: - if ne[0] == context_far_node[0]: - bevel_xys = [[ne[0] + 1, ne[1]], [ne[0] - 1, ne[1]]] - if ne[1] == context_far_node[1]: - bevel_xys = [[ne[0], ne[1] + 1], [ne[0], ne[1] - 1]] - for bevel_x, bevel_y in bevel_xys: - if surround_map[bevel_x, bevel_y] == 1: - start_far_node = (bevel_x, bevel_y) - accompany_far_node = (ne[0], ne[1]) - break - if start_far_node is not None: - break - if start_far_node is not None: - for far_edge_id in range(1, far_edge_cc.max() + 1): - specific_far_edge = (far_edge_cc == far_edge_id).astype(np.uint8) - if specific_far_edge[start_far_node[0], start_far_node[1]] == 1: - if accompany_far_node is not None: - specific_far_edge[accompany_far_node] = 1 - far_edge[specific_far_edge > 0] = 1 - far_edge_with_id[specific_far_edge > 0] = edge_id - end_far_candidates = np.zeros_like(far_edge) - end_far_candidates[end_near_node[0], end_near_node[1]] = 1 - end_far_candidates = cv2.dilate(end_far_candidates.astype(np.uint8), - np.array([[0,1,0],[1,1,1],[0,1,0]]).astype(np.uint8), - iterations=1) - end_far_candidates[end_near_node[0], end_near_node[1]] = 0 - invalid_nodes = (((far_edge_cc != far_edge_id).astype(np.uint8) * \ - (far_edge_cc != 0).astype(np.uint8)).astype(np.uint8) + \ - (new_specific_edge_map).astype(np.uint8) + \ - (mask == 0).astype(np.uint8)).clip(0, 1) - end_far_candidates[invalid_nodes > 0] = 0 - far_edge[end_far_candidates > 0] = 1 - far_edge_with_id[end_far_candidates > 0] = edge_id - - far_edge[context_far_node[0], context_far_node[1]] = 1 - far_edge_with_id[context_far_node[0], context_far_node[1]] = edge_id - near_edge_with_id[(mask_edge_with_id == edge_id) > 0] = edge_id - uncleaned_far_edge = far_edge.copy() - far_edge[mask == 0] = 0 - - return far_edge, uncleaned_far_edge, far_edge_with_id, near_edge_with_id - -def get_MiDaS_samples(image_folder, depth_folder, config, specific=None, aft_certain=None): - lines = [os.path.splitext(os.path.basename(xx))[0] for xx in glob.glob(os.path.join(image_folder, '*' + config['img_format']))] - samples = [] - generic_pose = np.eye(4) - assert len(config['traj_types']) == len(config['x_shift_range']) ==\ - len(config['y_shift_range']) == len(config['z_shift_range']) == 
len(config['video_postfix']), \ - "The number of elements in 'traj_types', 'x_shift_range', 'y_shift_range', 'z_shift_range' and \ - 'video_postfix' should be equal." - tgt_pose = [[generic_pose * 1]] - tgts_poses = [] - for traj_idx in range(len(config['traj_types'])): - tgt_poses = [] - sx, sy, sz = path_planning(config['num_frames'], config['x_shift_range'][traj_idx], config['y_shift_range'][traj_idx], - config['z_shift_range'][traj_idx], path_type=config['traj_types'][traj_idx]) - for xx, yy, zz in zip(sx, sy, sz): - tgt_poses.append(generic_pose * 1.) - tgt_poses[-1][:3, -1] = np.array([xx, yy, zz]) - tgts_poses += [tgt_poses] - tgt_pose = generic_pose * 1 - - aft_flag = True - if aft_certain is not None and len(aft_certain) > 0: - aft_flag = False - for seq_dir in lines: - if specific is not None and len(specific) > 0: - if specific != seq_dir: - continue - if aft_certain is not None and len(aft_certain) > 0: - if aft_certain == seq_dir: - aft_flag = True - if aft_flag is False: - continue - samples.append({}) - sdict = samples[-1] - sdict['depth_fi'] = os.path.join(depth_folder, seq_dir + config['depth_format']) - sdict['ref_img_fi'] = os.path.join(image_folder, seq_dir + config['img_format']) - H, W = imageio.imread(sdict['ref_img_fi']).shape[:2] - sdict['int_mtx'] = np.array([[max(H, W), 0, W//2], [0, max(H, W), H//2], [0, 0, 1]]).astype(np.float32) - if sdict['int_mtx'].max() > 1: - sdict['int_mtx'][0, :] = sdict['int_mtx'][0, :] / float(W) - sdict['int_mtx'][1, :] = sdict['int_mtx'][1, :] / float(H) - sdict['ref_pose'] = np.eye(4) - sdict['tgt_pose'] = tgt_pose - sdict['tgts_poses'] = tgts_poses - sdict['video_postfix'] = config['video_postfix'] - sdict['tgt_name'] = [os.path.splitext(os.path.basename(sdict['depth_fi']))[0]] - sdict['src_pair_name'] = sdict['tgt_name'][0] - - return samples - -def get_valid_size(imap): - x_max = np.where(imap.sum(1).squeeze() > 0)[0].max() + 1 - x_min = np.where(imap.sum(1).squeeze() > 0)[0].min() - y_max = np.where(imap.sum(0).squeeze() > 0)[0].max() + 1 - y_min = np.where(imap.sum(0).squeeze() > 0)[0].min() - size_dict = {'x_max':x_max, 'y_max':y_max, 'x_min':x_min, 'y_min':y_min} - - return size_dict - -def dilate_valid_size(isize_dict, imap, dilate=[0, 0]): - osize_dict = copy.deepcopy(isize_dict) - osize_dict['x_min'] = max(0, osize_dict['x_min'] - dilate[0]) - osize_dict['x_max'] = min(imap.shape[0], osize_dict['x_max'] + dilate[0]) - osize_dict['y_min'] = max(0, osize_dict['y_min'] - dilate[0]) - osize_dict['y_max'] = min(imap.shape[1], osize_dict['y_max'] + dilate[1]) - - return osize_dict - -def crop_maps_by_size(size, *imaps): - omaps = [] - for imap in imaps: - omaps.append(imap[size['x_min']:size['x_max'], size['y_min']:size['y_max']].copy()) - - return omaps - -def smooth_cntsyn_gap(init_depth_map, mask_region, context_region, init_mask_region=None): - if init_mask_region is not None: - curr_mask_region = init_mask_region * 1 - else: - curr_mask_region = mask_region * 0 - depth_map = init_depth_map.copy() - for _ in range(2): - cm_mask = context_region + curr_mask_region - depth_s1 = np.roll(depth_map, 1, 0) - depth_s2 = np.roll(depth_map, -1, 0) - depth_s3 = np.roll(depth_map, 1, 1) - depth_s4 = np.roll(depth_map, -1, 1) - mask_s1 = np.roll(cm_mask, 1, 0) - mask_s2 = np.roll(cm_mask, -1, 0) - mask_s3 = np.roll(cm_mask, 1, 1) - mask_s4 = np.roll(cm_mask, -1, 1) - fluxin_depths = (depth_s1 * mask_s1 + depth_s2 * mask_s2 + depth_s3 * mask_s3 + depth_s4 * mask_s4) / \ - ((mask_s1 + mask_s2 + mask_s3 + mask_s4) + 1e-6) - fluxin_mask = 
(fluxin_depths != 0) * mask_region - init_mask = (fluxin_mask * (curr_mask_region >= 0).astype(np.float32) > 0).astype(np.uint8) - depth_map[init_mask > 0] = fluxin_depths[init_mask > 0] - if init_mask.shape[-1] > curr_mask_region.shape[-1]: - curr_mask_region[init_mask.sum(-1, keepdims=True) > 0] = 1 - else: - curr_mask_region[init_mask > 0] = 1 - depth_map[fluxin_mask > 0] = fluxin_depths[fluxin_mask > 0] - - return depth_map - -def read_MiDaS_depth(disp_fi, disp_rescale=10., h=None, w=None): - if 'npy' in os.path.splitext(disp_fi)[-1]: - disp = np.load(disp_fi) - else: - disp = imageio.imread(disp_fi).astype(np.float32) - disp = disp - disp.min() - disp = cv2.blur(disp / disp.max(), ksize=(3, 3)) * disp.max() - disp = (disp / disp.max()) * disp_rescale - if h is not None and w is not None: - disp = resize(disp / disp.max(), (h, w), order=1) * disp.max() - depth = 1. / np.maximum(disp, 0.05) - - return depth - -def follow_image_aspect_ratio(depth, image): - H, W = image.shape[:2] - image_aspect_ratio = H / W - dH, dW = depth.shape[:2] - depth_aspect_ratio = dH / dW - if depth_aspect_ratio > image_aspect_ratio: - resize_H = dH - resize_W = dH / image_aspect_ratio - else: - resize_W = dW - resize_H = dW * image_aspect_ratio - depth = resize(depth / depth.max(), - (int(resize_H), - int(resize_W)), - order=0) * depth.max() - - return depth - -def depth_resize(depth, origin_size, image_size): - if origin_size[0] is not 0: - max_depth = depth.max() - depth = depth / max_depth - depth = resize(depth, origin_size, order=1, mode='edge') - depth = depth * max_depth - else: - max_depth = depth.max() - depth = depth / max_depth - depth = resize(depth, image_size, order=1, mode='edge') - depth = depth * max_depth - - return depth - -def filter_irrelevant_edge(self_edge, other_edges, other_edges_with_id, current_edge_id, context, edge_ccs, mesh, anchor): - other_edges = other_edges.squeeze() - other_edges_with_id = other_edges_with_id.squeeze() - - self_edge = self_edge.squeeze() - dilate_self_edge = cv2.dilate(self_edge.astype(np.uint8), np.array([[1,1,1],[1,1,1],[1,1,1]]).astype(np.uint8), iterations=1) - edge_ids = collections.Counter(other_edges_with_id.flatten()).keys() - other_edges_info = [] - # import ipdb - # ipdb.set_trace() - for edge_id in edge_ids: - edge_id = int(edge_id) - if edge_id >= 0: - condition = ((other_edges_with_id == edge_id) * other_edges * context).astype(np.uint8) - if dilate_self_edge[condition > 0].sum() == 0: - other_edges[other_edges_with_id == edge_id] = 0 - else: - num_condition, condition_labels = cv2.connectedComponents(condition, connectivity=8) - for condition_id in range(1, num_condition): - isolate_condition = ((condition_labels == condition_id) > 0).astype(np.uint8) - num_end_group, end_group = cv2.connectedComponents(((dilate_self_edge * isolate_condition) > 0).astype(np.uint8), connectivity=8) - if num_end_group == 1: - continue - for end_id in range(1, num_end_group): - end_pxs, end_pys = np.where((end_group == end_id)) - end_px, end_py = end_pxs[0], end_pys[0] - other_edges_info.append({}) - other_edges_info[-1]['edge_id'] = edge_id - # other_edges_info[-1]['near_depth'] = None - other_edges_info[-1]['diff'] = None - other_edges_info[-1]['edge_map'] = np.zeros_like(self_edge) - other_edges_info[-1]['end_point_map'] = np.zeros_like(self_edge) - other_edges_info[-1]['end_point_map'][(end_group == end_id)] = 1 - other_edges_info[-1]['forbidden_point_map'] = np.zeros_like(self_edge) - other_edges_info[-1]['forbidden_point_map'][(end_group != end_id) * 
(end_group != 0)] = 1 - other_edges_info[-1]['forbidden_point_map'] = cv2.dilate(other_edges_info[-1]['forbidden_point_map'], kernel=np.array([[1,1,1],[1,1,1],[1,1,1]]), iterations=2) - for x in edge_ccs[edge_id]: - nx = x[0] - anchor[0] - ny = x[1] - anchor[1] - if nx == end_px and ny == end_py: - # other_edges_info[-1]['near_depth'] = abs(nx) - if mesh.nodes[x].get('far') is not None and len(mesh.nodes[x].get('far')) == 1: - other_edges_info[-1]['diff'] = abs(1./abs([*mesh.nodes[x].get('far')][0][2]) - 1./abs(x[2])) - else: - other_edges_info[-1]['diff'] = 0 - # if end_group[nx, ny] != end_id and end_group[nx, ny] > 0: - # continue - try: - if isolate_condition[nx, ny] == 1: - other_edges_info[-1]['edge_map'][nx, ny] = 1 - except: - pass - try: - other_edges_info = sorted(other_edges_info, key=lambda x : x['diff'], reverse=True) - except: - import pdb - pdb.set_trace() - # import pdb - # pdb.set_trace() - # other_edges = other_edges[..., None] - for other_edge in other_edges_info: - if other_edge['end_point_map'] is None: - import pdb - pdb.set_trace() - - other_edges = other_edges * context - - return other_edges, other_edges_info - -def require_depth_edge(context_edge, mask): - dilate_mask = cv2.dilate(mask, np.array([[1,1,1],[1,1,1],[1,1,1]]).astype(np.uint8), iterations=1) - if (dilate_mask * context_edge).max() == 0: - return False - else: - return True - -def refine_color_around_edge(mesh, info_on_pix, edge_ccs, config, spdb=False): - H, W = mesh.graph['H'], mesh.graph['W'] - tmp_edge_ccs = copy.deepcopy(edge_ccs) - for edge_id, edge_cc in enumerate(edge_ccs): - if len(edge_cc) == 0: - continue - near_maps = np.zeros((H, W)).astype(np.bool) - far_maps = np.zeros((H, W)).astype(np.bool) - tmp_far_nodes = set() - far_nodes = set() - near_nodes = set() - end_nodes = set() - for i in range(5): - if i == 0: - for edge_node in edge_cc: - if mesh.nodes[edge_node].get('depth_edge_dilate_2_color_flag') is not True: - break - if mesh.nodes[edge_node].get('inpaint_id') == 1: - near_nodes.add(edge_node) - tmp_node = mesh.nodes[edge_node].get('far') - tmp_node = set(tmp_node) if tmp_node is not None else set() - tmp_far_nodes |= tmp_node - rmv_tmp_far_nodes = set() - for far_node in tmp_far_nodes: - if not(mesh.has_node(far_node) and mesh.nodes[far_node].get('inpaint_id') == 1): - rmv_tmp_far_nodes.add(far_node) - if len(tmp_far_nodes - rmv_tmp_far_nodes) == 0: - break - else: - for near_node in near_nodes: - near_maps[near_node[0], near_node[1]] = True - mesh.nodes[near_node]['refine_rgbd'] = True - mesh.nodes[near_node]['backup_depth'] = near_node[2] \ - if mesh.nodes[near_node].get('real_depth') is None else mesh.nodes[near_node]['real_depth'] - mesh.nodes[near_node]['backup_color'] = mesh.nodes[near_node]['color'] - for far_node in tmp_far_nodes: - if mesh.has_node(far_node) and mesh.nodes[far_node].get('inpaint_id') == 1: - far_nodes.add(far_node) - far_maps[far_node[0], far_node[1]] = True - mesh.nodes[far_node]['refine_rgbd'] = True - mesh.nodes[far_node]['backup_depth'] = far_node[2] \ - if mesh.nodes[far_node].get('real_depth') is None else mesh.nodes[far_node]['real_depth'] - mesh.nodes[far_node]['backup_color'] = mesh.nodes[far_node]['color'] - tmp_far_nodes = far_nodes - tmp_near_nodes = near_nodes - else: - tmp_far_nodes = new_tmp_far_nodes - tmp_near_nodes = new_tmp_near_nodes - new_tmp_far_nodes = None - new_tmp_near_nodes = None - new_tmp_far_nodes = set() - new_tmp_near_nodes = set() - for node in tmp_near_nodes: - for ne_node in mesh.neighbors(node): - if far_maps[ne_node[0], 
ne_node[1]] == False and \ - near_maps[ne_node[0], ne_node[1]] == False: - if mesh.nodes[ne_node].get('inpaint_id') == 1: - new_tmp_near_nodes.add(ne_node) - near_maps[ne_node[0], ne_node[1]] = True - mesh.nodes[ne_node]['refine_rgbd'] = True - mesh.nodes[ne_node]['backup_depth'] = ne_node[2] \ - if mesh.nodes[ne_node].get('real_depth') is None else mesh.nodes[ne_node]['real_depth'] - mesh.nodes[ne_node]['backup_color'] = mesh.nodes[ne_node]['color'] - else: - mesh.nodes[ne_node]['backup_depth'] = ne_node[2] \ - if mesh.nodes[ne_node].get('real_depth') is None else mesh.nodes[ne_node]['real_depth'] - mesh.nodes[ne_node]['backup_color'] = mesh.nodes[ne_node]['color'] - end_nodes.add(node) - near_nodes.update(new_tmp_near_nodes) - for node in tmp_far_nodes: - for ne_node in mesh.neighbors(node): - if far_maps[ne_node[0], ne_node[1]] == False and \ - near_maps[ne_node[0], ne_node[1]] == False: - if mesh.nodes[ne_node].get('inpaint_id') == 1: - new_tmp_far_nodes.add(ne_node) - far_maps[ne_node[0], ne_node[1]] = True - mesh.nodes[ne_node]['refine_rgbd'] = True - mesh.nodes[ne_node]['backup_depth'] = ne_node[2] \ - if mesh.nodes[ne_node].get('real_depth') is None else mesh.nodes[ne_node]['real_depth'] - mesh.nodes[ne_node]['backup_color'] = mesh.nodes[ne_node]['color'] - else: - mesh.nodes[ne_node]['backup_depth'] = ne_node[2] \ - if mesh.nodes[ne_node].get('real_depth') is None else mesh.nodes[ne_node]['real_depth'] - mesh.nodes[ne_node]['backup_color'] = mesh.nodes[ne_node]['color'] - end_nodes.add(node) - far_nodes.update(new_tmp_far_nodes) - if len(far_nodes) == 0: - tmp_edge_ccs[edge_id] = set() - continue - for node in new_tmp_far_nodes | new_tmp_near_nodes: - for ne_node in mesh.neighbors(node): - if far_maps[ne_node[0], ne_node[1]] == False and near_maps[ne_node[0], ne_node[1]] == False: - end_nodes.add(node) - mesh.nodes[ne_node]['backup_depth'] = ne_node[2] \ - if mesh.nodes[ne_node].get('real_depth') is None else mesh.nodes[ne_node]['real_depth'] - mesh.nodes[ne_node]['backup_color'] = mesh.nodes[ne_node]['color'] - tmp_end_nodes = end_nodes - - refine_nodes = near_nodes | far_nodes - remain_refine_nodes = copy.deepcopy(refine_nodes) - accum_idx = 0 - while len(remain_refine_nodes) > 0: - accum_idx += 1 - if accum_idx > 100: - break - new_tmp_end_nodes = None - new_tmp_end_nodes = set() - survive_tmp_end_nodes = set() - for node in tmp_end_nodes: - re_depth, re_color, re_count = 0, np.array([0., 0., 0.]), 0 - for ne_node in mesh.neighbors(node): - if mesh.nodes[ne_node].get('refine_rgbd') is True: - if ne_node not in tmp_end_nodes: - new_tmp_end_nodes.add(ne_node) - else: - try: - re_depth += mesh.nodes[ne_node]['backup_depth'] - re_color += mesh.nodes[ne_node]['backup_color'].astype(np.float32) - re_count += 1. 
- except: - import pdb; pdb.set_trace() - if re_count > 0: - re_depth = re_depth / re_count - re_color = re_color / re_count - mesh.nodes[node]['backup_depth'] = re_depth - mesh.nodes[node]['backup_color'] = re_color - mesh.nodes[node]['refine_rgbd'] = False - else: - survive_tmp_end_nodes.add(node) - for node in tmp_end_nodes - survive_tmp_end_nodes: - if node in remain_refine_nodes: - remain_refine_nodes.remove(node) - tmp_end_nodes = new_tmp_end_nodes - if spdb == True: - bfrd_canvas = np.zeros((H, W)) - bfrc_canvas = np.zeros((H, W, 3)).astype(np.uint8) - aftd_canvas = np.zeros((H, W)) - aftc_canvas = np.zeros((H, W, 3)).astype(np.uint8) - for node in refine_nodes: - bfrd_canvas[node[0], node[1]] = abs(node[2]) - aftd_canvas[node[0], node[1]] = abs(mesh.nodes[node]['backup_depth']) - bfrc_canvas[node[0], node[1]] = mesh.nodes[node]['color'].astype(np.uint8) - aftc_canvas[node[0], node[1]] = mesh.nodes[node]['backup_color'].astype(np.uint8) - f, (ax1, ax2, ax3, ax4) = plt.subplots(1, 4, sharex=True, sharey=True); - ax1.imshow(bfrd_canvas); - ax2.imshow(aftd_canvas); - ax3.imshow(bfrc_canvas); - ax4.imshow(aftc_canvas); - plt.show() - import pdb; pdb.set_trace() - for node in refine_nodes: - if mesh.nodes[node].get('refine_rgbd') is not None: - mesh.nodes[node].pop('refine_rgbd') - mesh.nodes[node]['color'] = mesh.nodes[node]['backup_color'] - for info in info_on_pix[(node[0], node[1])]: - if info['depth'] == node[2]: - info['color'] = mesh.nodes[node]['backup_color'] - - return mesh, info_on_pix - -def refine_depth_around_edge(mask_depth, far_edge, uncleaned_far_edge, near_edge, mask, all_depth, config): - if isinstance(mask_depth, torch.Tensor): - if mask_depth.is_cuda: - mask_depth = mask_depth.cpu() - mask_depth = mask_depth.data - mask_depth = mask_depth.numpy() - if isinstance(far_edge, torch.Tensor): - if far_edge.is_cuda: - far_edge = far_edge.cpu() - far_edge = far_edge.data - far_edge = far_edge.numpy() - if isinstance(uncleaned_far_edge, torch.Tensor): - if uncleaned_far_edge.is_cuda: - uncleaned_far_edge = uncleaned_far_edge.cpu() - uncleaned_far_edge = uncleaned_far_edge.data - uncleaned_far_edge = uncleaned_far_edge.numpy() - if isinstance(near_edge, torch.Tensor): - if near_edge.is_cuda: - near_edge = near_edge.cpu() - near_edge = near_edge.data - near_edge = near_edge.numpy() - if isinstance(mask, torch.Tensor): - if mask.is_cuda: - mask = mask.cpu() - mask = mask.data - mask = mask.numpy() - mask = mask.squeeze() - uncleaned_far_edge = uncleaned_far_edge.squeeze() - far_edge = far_edge.squeeze() - near_edge = near_edge.squeeze() - mask_depth = mask_depth.squeeze() - dilate_far_edge = cv2.dilate(uncleaned_far_edge.astype(np.uint8), kernel=np.array([[0,1,0],[1,1,1],[0,1,0]]).astype(np.uint8), iterations=1) - near_edge[dilate_far_edge == 0] = 0 - dilate_near_edge = cv2.dilate(near_edge.astype(np.uint8), kernel=np.array([[0,1,0],[1,1,1],[0,1,0]]).astype(np.uint8), iterations=1) - far_edge[dilate_near_edge == 0] = 0 - init_far_edge = far_edge.copy() - init_near_edge = near_edge.copy() - for i in range(config['depth_edge_dilate_2']): - init_far_edge = cv2.dilate(init_far_edge, kernel=np.array([[0,1,0],[1,1,1],[0,1,0]]).astype(np.uint8), iterations=1) - init_far_edge[init_near_edge == 1] = 0 - init_near_edge = cv2.dilate(init_near_edge, kernel=np.array([[0,1,0],[1,1,1],[0,1,0]]).astype(np.uint8), iterations=1) - init_near_edge[init_far_edge == 1] = 0 - init_far_edge[mask == 0] = 0 - init_near_edge[mask == 0] = 0 - hole_far_edge = 1 - init_far_edge - hole_near_edge = 1 - 
init_near_edge - change = None - while True: - change = False - hole_far_edge[init_near_edge == 1] = 0 - hole_near_edge[init_far_edge == 1] = 0 - far_pxs, far_pys = np.where((hole_far_edge == 0) * (init_far_edge == 1) > 0) - current_hole_far_edge = hole_far_edge.copy() - for far_px, far_py in zip(far_pxs, far_pys): - min_px = max(far_px - 1, 0) - max_px = min(far_px + 2, mask.shape[0]-1) - min_py = max(far_py - 1, 0) - max_py = min(far_py + 2, mask.shape[1]-1) - hole_far = current_hole_far_edge[min_px: max_px, min_py: max_py] - tmp_mask = mask[min_px: max_px, min_py: max_py] - all_depth_patch = all_depth[min_px: max_px, min_py: max_py] * 0 - all_depth_mask = (all_depth_patch != 0).astype(np.uint8) - cross_element = np.array([[0,1,0],[1,1,1],[0,1,0]])[min_px - (far_px - 1): max_px - (far_px - 1), min_py - (far_py - 1): max_py - (far_py - 1)] - combine_mask = (tmp_mask + all_depth_mask).clip(0, 1) * hole_far * cross_element - tmp_patch = combine_mask * (mask_depth[min_px: max_px, min_py: max_py] + all_depth_patch) - number = np.count_nonzero(tmp_patch) - if number > 0: - mask_depth[far_px, far_py] = np.sum(tmp_patch).astype(np.float32) / max(number, 1e-6) - hole_far_edge[far_px, far_py] = 1 - change = True - near_pxs, near_pys = np.where((hole_near_edge == 0) * (init_near_edge == 1) > 0) - current_hole_near_edge = hole_near_edge.copy() - for near_px, near_py in zip(near_pxs, near_pys): - min_px = max(near_px - 1, 0) - max_px = min(near_px + 2, mask.shape[0]-1) - min_py = max(near_py - 1, 0) - max_py = min(near_py + 2, mask.shape[1]-1) - hole_near = current_hole_near_edge[min_px: max_px, min_py: max_py] - tmp_mask = mask[min_px: max_px, min_py: max_py] - all_depth_patch = all_depth[min_px: max_px, min_py: max_py] * 0 - all_depth_mask = (all_depth_patch != 0).astype(np.uint8) - cross_element = np.array([[0,1,0],[1,1,1],[0,1,0]])[min_px - near_px + 1:max_px - near_px + 1, min_py - near_py + 1:max_py - near_py + 1] - combine_mask = (tmp_mask + all_depth_mask).clip(0, 1) * hole_near * cross_element - tmp_patch = combine_mask * (mask_depth[min_px: max_px, min_py: max_py] + all_depth_patch) - number = np.count_nonzero(tmp_patch) - if number > 0: - mask_depth[near_px, near_py] = np.sum(tmp_patch) / max(number, 1e-6) - hole_near_edge[near_px, near_py] = 1 - change = True - if change is False: - break - - return mask_depth - - - -def vis_depth_edge_connectivity(depth, config): - disp = 1./depth - u_diff = (disp[1:, :] - disp[:-1, :])[:-1, 1:-1] - b_diff = (disp[:-1, :] - disp[1:, :])[1:, 1:-1] - l_diff = (disp[:, 1:] - disp[:, :-1])[1:-1, :-1] - r_diff = (disp[:, :-1] - disp[:, 1:])[1:-1, 1:] - u_over = (np.abs(u_diff) > config['depth_threshold']).astype(np.float32) - b_over = (np.abs(b_diff) > config['depth_threshold']).astype(np.float32) - l_over = (np.abs(l_diff) > config['depth_threshold']).astype(np.float32) - r_over = (np.abs(r_diff) > config['depth_threshold']).astype(np.float32) - concat_diff = np.stack([u_diff, b_diff, r_diff, l_diff], axis=-1) - concat_over = np.stack([u_over, b_over, r_over, l_over], axis=-1) - over_diff = concat_diff * concat_over - pos_over = (over_diff > 0).astype(np.float32).sum(-1).clip(0, 1) - neg_over = (over_diff < 0).astype(np.float32).sum(-1).clip(0, 1) - neg_over[(over_diff > 0).astype(np.float32).sum(-1) > 0] = 0 - _, edge_label = cv2.connectedComponents(pos_over.astype(np.uint8), connectivity=8) - T_junction_maps = np.zeros_like(pos_over) - for edge_id in range(1, edge_label.max() + 1): - edge_map = (edge_label == edge_id).astype(np.uint8) - edge_map = 
np.pad(edge_map, pad_width=((1,1),(1,1)), mode='constant') - four_direc = np.roll(edge_map, 1, 1) + np.roll(edge_map, -1, 1) + np.roll(edge_map, 1, 0) + np.roll(edge_map, -1, 0) - eight_direc = np.roll(np.roll(edge_map, 1, 1), 1, 0) + np.roll(np.roll(edge_map, 1, 1), -1, 0) + \ - np.roll(np.roll(edge_map, -1, 1), 1, 0) + np.roll(np.roll(edge_map, -1, 1), -1, 0) - eight_direc = (eight_direc + four_direc)[1:-1,1:-1] - pos_over[eight_direc > 2] = 0 - T_junction_maps[eight_direc > 2] = 1 - _, edge_label = cv2.connectedComponents(pos_over.astype(np.uint8), connectivity=8) - edge_label = np.pad(edge_label, 1, mode='constant') - - return edge_label - - - -def max_size(mat, value=0): - if not (mat and mat[0]): return (0, 0) - it = iter(mat) - prev = [(el==value) for el in next(it)] - max_size = max_rectangle_size(prev) - for row in it: - hist = [(1+h) if el == value else 0 for h, el in zip(prev, row)] - max_size = max(max_size, max_rectangle_size(hist), key=get_area) - prev = hist - return max_size - -def max_rectangle_size(histogram): - Info = namedtuple('Info', 'start height') - stack = [] - top = lambda: stack[-1] - max_size = (0, 0) # height, width of the largest rectangle - pos = 0 # current position in the histogram - for pos, height in enumerate(histogram): - start = pos # position where rectangle starts - while True: - if not stack or height > top().height: - stack.append(Info(start, height)) # push - if stack and height < top().height: - max_size = max(max_size, (top().height, (pos-top().start)), - key=get_area) - start, _ = stack.pop() - continue - break # height == top().height goes here - - pos += 1 - for start, height in stack: - max_size = max(max_size, (height, (pos-start)), - key=get_area) - - return max_size - -def get_area(size): - return reduce(mul, size) - -def find_anchors(matrix): - matrix = [[*x] for x in matrix] - mh, mw = max_size(matrix) - matrix = np.array(matrix) - # element = np.zeros((mh, mw)) - for i in range(matrix.shape[0] + 1 - mh): - for j in range(matrix.shape[1] + 1 - mw): - if matrix[i:i + mh, j:j + mw].max() == 0: - return i, i + mh, j, j + mw - -def find_largest_rect(dst_img, bg_color=(128, 128, 128)): - valid = np.any(dst_img[..., :3] != bg_color, axis=-1) - dst_h, dst_w = dst_img.shape[:2] - ret, labels = cv2.connectedComponents(np.uint8(valid == False)) - red_mat = np.zeros_like(labels) - # denoise - for i in range(1, np.max(labels)+1, 1): - x, y, w, h = cv2.boundingRect(np.uint8(labels==i)) - if x == 0 or (x+w) == dst_h or y == 0 or (y+h) == dst_w: - red_mat[labels==i] = 1 - # crop - t, b, l, r = find_anchors(red_mat) - - return t, b, l, r diff --git a/spaces/doevent/VintageStyle/README.md b/spaces/doevent/VintageStyle/README.md deleted file mode 100644 index 2cb5bab0ee3f0c2e65abdba9c3e4a35d26348381..0000000000000000000000000000000000000000 --- a/spaces/doevent/VintageStyle/README.md +++ /dev/null @@ -1,18 +0,0 @@ ---- -title: Vintage Style -emoji: 💁🏼‍♀️ -colorFrom: pink -colorTo: yellow -sdk: gradio -sdk_version: 3.1.4 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference - -# Configuration - -`title`: _string_ -Display title for the Space - diff --git a/spaces/dotku/fastapi-demo/main.py b/spaces/dotku/fastapi-demo/main.py deleted file mode 100644 index 722892d56d3c7d4042c35442792195ef890defd8..0000000000000000000000000000000000000000 --- a/spaces/dotku/fastapi-demo/main.py +++ /dev/null @@ -1,17 +0,0 @@ -from fastapi import FastAPI -from fastapi.middleware.cors import 
CORSMiddleware - -app = FastAPI() - -app.add_middleware( - CORSMiddleware, - allow_origins=['*'] -) - -@app.get("/") -def read_root(): - return {"message": "Hello World"} - -@app.get("/api/python") -def hello_python(): - return {"message": "Hello Python"} \ No newline at end of file diff --git a/spaces/dragonSwing/annotate-anything/GroundingDINO/groundingdino/util/get_tokenlizer.py b/spaces/dragonSwing/annotate-anything/GroundingDINO/groundingdino/util/get_tokenlizer.py deleted file mode 100644 index f7dcf7e95f03f95b20546b26442a94225924618b..0000000000000000000000000000000000000000 --- a/spaces/dragonSwing/annotate-anything/GroundingDINO/groundingdino/util/get_tokenlizer.py +++ /dev/null @@ -1,26 +0,0 @@ -from transformers import AutoTokenizer, BertModel, BertTokenizer, RobertaModel, RobertaTokenizerFast - - -def get_tokenlizer(text_encoder_type): - if not isinstance(text_encoder_type, str): - # print("text_encoder_type is not a str") - if hasattr(text_encoder_type, "text_encoder_type"): - text_encoder_type = text_encoder_type.text_encoder_type - elif text_encoder_type.get("text_encoder_type", False): - text_encoder_type = text_encoder_type.get("text_encoder_type") - else: - raise ValueError( - "Unknown type of text_encoder_type: {}".format(type(text_encoder_type)) - ) - print("final text_encoder_type: {}".format(text_encoder_type)) - - tokenizer = AutoTokenizer.from_pretrained(text_encoder_type) - return tokenizer - - -def get_pretrained_language_model(text_encoder_type): - if text_encoder_type == "bert-base-uncased": - return BertModel.from_pretrained(text_encoder_type) - if text_encoder_type == "roberta-base": - return RobertaModel.from_pretrained(text_encoder_type) - raise ValueError("Unknown text_encoder_type {}".format(text_encoder_type)) diff --git a/spaces/eIysia/VITS-Umamusume-voice-synthesizer/ONNXVITS_inference.py b/spaces/eIysia/VITS-Umamusume-voice-synthesizer/ONNXVITS_inference.py deleted file mode 100644 index 258b618cd338322365dfa25bec468a0a3f70ccd1..0000000000000000000000000000000000000000 --- a/spaces/eIysia/VITS-Umamusume-voice-synthesizer/ONNXVITS_inference.py +++ /dev/null @@ -1,36 +0,0 @@ -import logging -logging.getLogger('numba').setLevel(logging.WARNING) -import IPython.display as ipd -import torch -import commons -import utils -import ONNXVITS_infer -from text import text_to_sequence - -def get_text(text, hps): - text_norm = text_to_sequence(text, hps.symbols, hps.data.text_cleaners) - if hps.data.add_blank: - text_norm = commons.intersperse(text_norm, 0) - text_norm = torch.LongTensor(text_norm) - return text_norm - -hps = utils.get_hparams_from_file("../vits/pretrained_models/uma87.json") - -net_g = ONNXVITS_infer.SynthesizerTrn( - len(hps.symbols), - hps.data.filter_length // 2 + 1, - hps.train.segment_size // hps.data.hop_length, - n_speakers=hps.data.n_speakers, - **hps.model) -_ = net_g.eval() - -_ = utils.load_checkpoint("../vits/pretrained_models/uma_1153000.pth", net_g) - -text1 = get_text("おはようございます。", hps) -stn_tst = text1 -with torch.no_grad(): - x_tst = stn_tst.unsqueeze(0) - x_tst_lengths = torch.LongTensor([stn_tst.size(0)]) - sid = torch.LongTensor([0]) - audio = net_g.infer(x_tst, x_tst_lengths, sid=sid, noise_scale=.667, noise_scale_w=0.8, length_scale=1)[0][0,0].data.cpu().float().numpy() -print(audio) \ No newline at end of file diff --git a/spaces/ec7719/Excel/app.py b/spaces/ec7719/Excel/app.py deleted file mode 100644 index 2d94b9d6e508d3de04e1c7855cfae62482ab853c..0000000000000000000000000000000000000000 --- a/spaces/ec7719/Excel/app.py +++ 
/dev/null @@ -1,138 +0,0 @@ -import streamlit as st -import pandas as pd -import math -import matplotlib.pyplot as plt - -# Function to read different file types -def read_file(file): - file_extension = file.name.split(".")[-1] - if file_extension == "csv": - data = pd.read_csv(file) - elif file_extension == "xlsx": - data = pd.read_excel(file, engine="openpyxl") - else: - st.error("Unsupported file format. Please upload a CSV or Excel file.") - return None - return data - -# Function to display the uploaded data -def display_data(data): - st.write("### Data Preview") - st.dataframe(data.head()) - -# Function to perform mathematical calculations and store results as separate columns -def perform_calculations(data): - st.write("### Calculations") - - # Get column names - columns = data.columns - - # Iterate over each column and perform calculations - for column in columns: - st.write("Calculations for column:", column) - - # Example calculations: sum, mean, median - column_sum = data[column].sum() - column_mean = data[column].mean() - column_median = data[column].median() - - # Create new column names - sum_column_name = f"{column}_sum" - mean_column_name = f"{column}_mean" - median_column_name = f"{column}_median" - - # Add the calculated values as new columns - data[sum_column_name] = column_sum - data[mean_column_name] = column_mean - data[median_column_name] = column_median - - # Display the calculated values - st.write("Sum:", column_sum) - st.write("Mean:", column_mean) - st.write("Median:", column_median) - - # Display the updated data with calculated columns - st.write("### Updated Data") - st.dataframe(data) - -# Function to perform mathematical calculations -def perform_math(df, selected_columns, operation): - result = None - - if operation == "sqrt": - result = df[selected_columns].applymap(math.sqrt) - elif operation == "log": - result = df[selected_columns].applymap(math.log) - elif operation == "exp": - result = df[selected_columns].applymap(math.exp) - elif operation == "sin": - result = df[selected_columns].applymap(math.sin) - elif operation == "cos": - result = df[selected_columns].applymap(math.cos) - elif operation == "tan": - result = df[selected_columns].applymap(math.tan) - elif operation == "multiply": - result = df[selected_columns].prod(axis=1) - elif operation == "add": - result = df[selected_columns].sum(axis=1) - elif operation == "subtract": - result = df[selected_columns[0]] - df[selected_columns[1]] - - if result is not None: - df[f"{operation}_result"] = result - - return df - -def plot_graph(data, graph_type, x_variables, y_variables): - plt.figure(figsize=(8, 6)) - - for x_var in x_variables: - for y_var in y_variables: - if graph_type == "Scatter": - plt.scatter(data[x_var], data[y_var], label=f"{x_var} vs {y_var}") - elif graph_type == "Line": - plt.plot(data[x_var], data[y_var], label=f"{x_var} vs {y_var}") - st.pyplot() - elif graph_type == "Bar": - x = range(len(data)) - plt.bar(x, data[y_var], label=y_var) - - plt.xlabel("X Values") - plt.ylabel("Y Values") - plt.title(f"{graph_type} Plot") - plt.legend() - - st.pyplot() - -def main(): - st.title("Excel-like Data Visualization and Calculations") - st.write("Upload a CSV or Excel file and visualize the data") - - file = st.file_uploader("Upload file", type=["csv", "xlsx"]) - - if file is not None: - data = read_file(file) - if data is not None: - display_data(data) - perform_calculations(data) - - st.write("### Graph Visualizer") - st.write("Select variables for visualization:") - - graph_type = 
st.selectbox("Graph Type", options=["Scatter", "Line", "Bar"]) - x_variables = st.multiselect("X Variables", options=data.columns) - y_variables = st.multiselect("Y Variables", options=data.columns) - - selected_columns = st.multiselect("Select columns:", options=data.columns) - operation = st.selectbox("Select an operation:", ["sqrt", "log", "exp", "sin", "cos", "tan", "multiply", "add", "subtract"]) - - if st.button("Calculate"): - data = perform_math(data, selected_columns, operation) - st.write(data) - if st.button("Plot"): - plot_graph(data, graph_type, x_variables, y_variables) - - -if __name__ == "__main__": - st.set_page_config(page_title="My Analytics App") - main() diff --git a/spaces/elun15/image-regression/dataset.py b/spaces/elun15/image-regression/dataset.py deleted file mode 100644 index 36168e1c3ef907db14fa86945fd0aae4c039c46b..0000000000000000000000000000000000000000 --- a/spaces/elun15/image-regression/dataset.py +++ /dev/null @@ -1,75 +0,0 @@ -import os - -import pandas as pd -import torch -import torchvision -from hydra.utils import get_original_cwd -from PIL import Image - - -# Define ImageRegressionDataset class as a PyTorch Dataset -class ImageRegressionDataset(torch.utils.data.Dataset): - def __init__(self, cfg, set): - self.data_path = os.path.join(get_original_cwd(), cfg.data_path) - self.set = set - - # Load annotation.txt file from self.data path as a dataframe ignoring first row and \t as delimiter - self.data = pd.read_csv(os.path.join(self.data_path, "annotation.txt"), sep="\t", header=None, skiprows=1) - - # Divide the dataset into train and test set randomly with random seed 1 - self.train_df = self.data.sample(frac=cfg.train_percentages, random_state=1) - self.test_df = self.data.drop(self.train_df.index) - - # Define train transformations from PyTorch - self.train_transforms = torchvision.transforms.Compose( - [ - torchvision.transforms.Resize(224), - torchvision.transforms.RandomHorizontalFlip(), - torchvision.transforms.RandomVerticalFlip(), - torchvision.transforms.ToTensor(), - torchvision.transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]), - ] - ) - - # Define validation transformations from PyTorch - self.val_transforms = torchvision.transforms.Compose( - [ - torchvision.transforms.Resize(224), - torchvision.transforms.ToTensor(), - torchvision.transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]), - ] - ) - - def __len__(self): - # If set is train, return length of train_df, else return length of test_df - if self.set == "train": - return len(self.train_df) - else: - return len(self.test_df) - - def __getitem__(self, idx): - - # If set is train, use train_df, else use test_df - if self.set == "train": - df = self.train_df - else: - df = self.test_df - - # Get image path from df - image_path = os.path.join(self.data_path, "images", df.iloc[idx, 0]) - - # Read image from image_path - image = Image.open(image_path) - - # Get label from df - label = df.iloc[idx, 1] - # As labels are in range 0-100, we divide them by 100 to get them in range 0-1 - label /= 100 - - # If set is train, apply train_transforms, else apply val_transforms - if self.set == "train": - image = self.train_transforms(image) - else: - image = self.val_transforms(image) - - return image, label diff --git a/spaces/emc348/faces-through-time/torch_utils/ops/fma.py b/spaces/emc348/faces-through-time/torch_utils/ops/fma.py deleted file mode 100644 index 2eeac58a626c49231e04122b93e321ada954c5d3..0000000000000000000000000000000000000000 --- 
a/spaces/emc348/faces-through-time/torch_utils/ops/fma.py +++ /dev/null @@ -1,60 +0,0 @@ -# Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved. -# -# NVIDIA CORPORATION and its licensors retain all intellectual property -# and proprietary rights in and to this software, related documentation -# and any modifications thereto. Any use, reproduction, disclosure or -# distribution of this software and related documentation without an express -# license agreement from NVIDIA CORPORATION is strictly prohibited. - -"""Fused multiply-add, with slightly faster gradients than `torch.addcmul()`.""" - -import torch - -#---------------------------------------------------------------------------- - -def fma(a, b, c): # => a * b + c - return _FusedMultiplyAdd.apply(a, b, c) - -#---------------------------------------------------------------------------- - -class _FusedMultiplyAdd(torch.autograd.Function): # a * b + c - @staticmethod - def forward(ctx, a, b, c): # pylint: disable=arguments-differ - out = torch.addcmul(c, a, b) - ctx.save_for_backward(a, b) - ctx.c_shape = c.shape - return out - - @staticmethod - def backward(ctx, dout): # pylint: disable=arguments-differ - a, b = ctx.saved_tensors - c_shape = ctx.c_shape - da = None - db = None - dc = None - - if ctx.needs_input_grad[0]: - da = _unbroadcast(dout * b, a.shape) - - if ctx.needs_input_grad[1]: - db = _unbroadcast(dout * a, b.shape) - - if ctx.needs_input_grad[2]: - dc = _unbroadcast(dout, c_shape) - - return da, db, dc - -#---------------------------------------------------------------------------- - -def _unbroadcast(x, shape): - extra_dims = x.ndim - len(shape) - assert extra_dims >= 0 - dim = [i for i in range(x.ndim) if x.shape[i] > 1 and (i < extra_dims or shape[i - extra_dims] == 1)] - if len(dim): - x = x.sum(dim=dim, keepdim=True) - if extra_dims: - x = x.reshape(-1, *x.shape[extra_dims+1:]) - assert x.shape == shape - return x - -#---------------------------------------------------------------------------- diff --git a/spaces/eubinecto/idiomify/idiomify/pipeline.py b/spaces/eubinecto/idiomify/idiomify/pipeline.py deleted file mode 100644 index 36044df44e6bfc465b4b35139b07dcadfc824d5a..0000000000000000000000000000000000000000 --- a/spaces/eubinecto/idiomify/idiomify/pipeline.py +++ /dev/null @@ -1,30 +0,0 @@ -import re -import pandas as pd -from typing import List -from transformers import BartTokenizer -from idiomify.builders import SourcesBuilder -from idiomify.models import Idiomifier - - -class Pipeline: - - def __init__(self, model: Idiomifier, tokenizer: BartTokenizer, idioms: pd.DataFrame): - self.model = model - self.builder = SourcesBuilder(tokenizer) - self.idioms = idioms - - def __call__(self, sents: List[str], max_length=100) -> List[str]: - srcs = self.builder(literal2idiomatic=[(sent, "") for sent in sents]) - pred_ids = self.model.bart.generate( - inputs=srcs[:, 0], # (N, 2, L) -> (N, L) - attention_mask=srcs[:, 1], # (N, 2, L) -> (N, L) - decoder_start_token_id=self.model.hparams['bos_token_id'], - max_length=max_length, - ) # -> (N, L_t) - # we don't skip special tokens because we have to keep & for highlighting idioms. 
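# The decoded strings therefore still contain special tokens; the regex
# substitution below is meant to strip the unwanted ones while leaving the
# idiom-marking tokens in place for highlighting.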
- tgts = self.builder.tokenizer.batch_decode(pred_ids, skip_special_tokens=False) - tgts = [ - re.sub(r"||", "", tgt) - for tgt in tgts - ] - return tgts diff --git a/spaces/ewg88/ai-forever-ruGPT-3.5-13B/app.py b/spaces/ewg88/ai-forever-ruGPT-3.5-13B/app.py deleted file mode 100644 index c1954ff145890f532300730c2642f6c4c7f3d5c8..0000000000000000000000000000000000000000 --- a/spaces/ewg88/ai-forever-ruGPT-3.5-13B/app.py +++ /dev/null @@ -1,3 +0,0 @@ -import gradio as gr - -gr.Interface.load("models/ai-forever/ruGPT-3.5-13B").launch() \ No newline at end of file diff --git a/spaces/facebook/StyleNeRF/run_train.py b/spaces/facebook/StyleNeRF/run_train.py deleted file mode 100644 index 9d50f941854b2a54cdefdee9bf4504431da9faa6..0000000000000000000000000000000000000000 --- a/spaces/facebook/StyleNeRF/run_train.py +++ /dev/null @@ -1,398 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved - - -from math import dist -import sys -import os -import click -import re -import json -import glob -import tempfile -import torch -import dnnlib -import hydra - -from datetime import date -from training import training_loop -from metrics import metric_main -from torch_utils import training_stats, custom_ops, distributed_utils -from torch_utils.distributed_utils import get_init_file, get_shared_folder -from omegaconf import DictConfig, OmegaConf - -#---------------------------------------------------------------------------- - -class UserError(Exception): - pass - -#---------------------------------------------------------------------------- - -def setup_training_loop_kwargs(cfg): - args = OmegaConf.create({}) - - # ------------------------------------------ - # General options: gpus, snap, metrics, seed - # ------------------------------------------ - args.rank = 0 - args.gpu = 0 - args.num_gpus = torch.cuda.device_count() if cfg.gpus is None else cfg.gpus - args.nodes = cfg.nodes if cfg.nodes is not None else 1 - args.world_size = 1 - - args.dist_url = 'env://' - args.launcher = cfg.launcher - args.partition = cfg.partition - args.comment = cfg.comment - args.timeout = 4320 if cfg.timeout is None else cfg.timeout - args.job_dir = '' - - if cfg.snap is None: - cfg.snap = 50 - assert isinstance(cfg.snap, int) - if cfg.snap < 1: - raise UserError('snap must be at least 1') - args.image_snapshot_ticks = cfg.imgsnap - args.network_snapshot_ticks = cfg.snap - if hasattr(cfg, 'ucp'): - args.update_cam_prior_ticks = cfg.ucp - - if cfg.metrics is None: - cfg.metrics = ['fid50k_full'] - cfg.metrics = list(cfg.metrics) - if not all(metric_main.is_valid_metric(metric) for metric in cfg.metrics): - raise UserError('\n'.join(['metrics can only contain the following values:'] + metric_main.list_valid_metrics())) - args.metrics = cfg.metrics - - if cfg.seed is None: - cfg.seed = 0 - assert isinstance(cfg.seed, int) - args.random_seed = cfg.seed - - # ----------------------------------- - # Dataset: data, cond, subset, mirror - # ----------------------------------- - - assert cfg.data is not None - assert isinstance(cfg.data, str) - args.update({"training_set_kwargs": dict(class_name='training.dataset.ImageFolderDataset', path=cfg.data, resolution=cfg.resolution, use_labels=True, max_size=None, xflip=False)}) - args.update({"data_loader_kwargs": dict(pin_memory=True, num_workers=3, prefetch_factor=2)}) - args.generation_with_image = getattr(cfg, 'generate_with_image', False) - try: - training_set = dnnlib.util.construct_class_by_name(**args.training_set_kwargs) # subclass of 
training.dataset.Dataset - args.training_set_kwargs.resolution = training_set.resolution # be explicit about resolution - args.training_set_kwargs.use_labels = training_set.has_labels # be explicit about labels - args.training_set_kwargs.max_size = len(training_set) # be explicit about dataset size - desc = training_set.name - del training_set # conserve memory - except IOError as err: - raise UserError(f'data: {err}') - - if cfg.cond is None: - cfg.cond = False - assert isinstance(cfg.cond, bool) - if cfg.cond: - if not args.training_set_kwargs.use_labels: - raise UserError('cond=True requires labels specified in dataset.json') - desc += '-cond' - else: - args.training_set_kwargs.use_labels = False - - if cfg.subset is not None: - assert isinstance(cfg.subset, int) - if not 1 <= cfg.subset <= args.training_set_kwargs.max_size: - raise UserError(f'subset must be between 1 and {args.training_set_kwargs.max_size}') - desc += f'-subset{cfg.subset}' - if cfg.subset < args.training_set_kwargs.max_size: - args.training_set_kwargs.max_size = cfg.subset - args.training_set_kwargs.random_seed = args.random_seed - - if cfg.mirror is None: - cfg.mirror = False - assert isinstance(cfg.mirror, bool) - if cfg.mirror: - desc += '-mirror' - args.training_set_kwargs.xflip = True - - # ------------------------------------ - # Base config: cfg, model, gamma, kimg, batch - # ------------------------------------ - if cfg.auto: - cfg.spec.name = 'auto' - desc += f'-{cfg.spec.name}' - desc += f'-{cfg.model.name}' - if cfg.spec.name == 'auto': - res = args.training_set_kwargs.resolution - cfg.spec.fmaps = 1 if res >= 512 else 0.5 - cfg.spec.lrate = 0.002 if res >= 1024 else 0.0025 - cfg.spec.gamma = 0.0002 * (res ** 2) / cfg.spec.mb # heuristic formula - cfg.spec.ema = cfg.spec.mb * 10 / 32 - - if getattr(cfg.spec, 'lrate_disc', None) is None: - cfg.spec.lrate_disc = cfg.spec.lrate # use the same learning rate for discriminator - - # model (generator, discriminator) - args.update({"G_kwargs": dict(**cfg.model.G_kwargs)}) - args.update({"D_kwargs": dict(**cfg.model.D_kwargs)}) - args.update({"G_opt_kwargs": dict(class_name='torch.optim.Adam', lr=cfg.spec.lrate, betas=[0,0.99], eps=1e-8)}) - args.update({"D_opt_kwargs": dict(class_name='torch.optim.Adam', lr=cfg.spec.lrate_disc, betas=[0,0.99], eps=1e-8)}) - args.update({"loss_kwargs": dict(class_name='training.loss.StyleGAN2Loss', r1_gamma=cfg.spec.gamma, **cfg.model.loss_kwargs)}) - - if cfg.spec.name == 'cifar': - args.loss_kwargs.pl_weight = 0 # disable path length regularization - args.loss_kwargs.style_mixing_prob = 0 # disable style mixing - args.D_kwargs.architecture = 'orig' # disable residual skip connections - - # kimg data config - args.spec = cfg.spec # just keep the dict. 
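# Schedule taken from the spec: total_kimg is the training length in
# thousands of images, batch_size/batch_gpu split the minibatch across GPUs,
# and ema_kimg / ema_rampup control the exponential moving average of the
# generator weights (following the StyleGAN2-ADA training loop).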
- args.total_kimg = cfg.spec.kimg - args.batch_size = cfg.spec.mb - args.batch_gpu = cfg.spec.mbstd - args.ema_kimg = cfg.spec.ema - args.ema_rampup = cfg.spec.ramp - - # --------------------------------------------------- - # Discriminator augmentation: aug, p, target, augpipe - # --------------------------------------------------- - if cfg.aug is None: - cfg.aug = 'ada' - else: - assert isinstance(cfg.aug, str) - desc += f'-{cfg.aug}' - - if cfg.aug == 'ada': - args.ada_target = 0.6 - elif cfg.aug == 'noaug': - pass - elif cfg.aug == 'fixed': - if cfg.p is None: - raise UserError(f'--aug={cfg.aug} requires specifying --p') - else: - raise UserError(f'--aug={cfg.aug} not supported') - - if cfg.p is not None: - assert isinstance(cfg.p, float) - if cfg.aug != 'fixed': - raise UserError('--p can only be specified with --aug=fixed') - if not 0 <= cfg.p <= 1: - raise UserError('--p must be between 0 and 1') - desc += f'-p{cfg.p:g}' - args.augment_p = cfg.p - - if cfg.target is not None: - assert isinstance(cfg.target, float) - if cfg.aug != 'ada': - raise UserError('--target can only be specified with --aug=ada') - if not 0 <= cfg.target <= 1: - raise UserError('--target must be between 0 and 1') - desc += f'-target{cfg.target:g}' - args.ada_target = cfg.target - - assert cfg.augpipe is None or isinstance(cfg.augpipe, str) - if cfg.augpipe is None: - cfg.augpipe = 'bgc' - else: - if cfg.aug == 'noaug': - raise UserError('--augpipe cannot be specified with --aug=noaug') - desc += f'-{cfg.augpipe}' - - augpipe_specs = { - 'blit': dict(xflip=1, rotate90=1, xint=1), - 'geom': dict(scale=1, rotate=1, aniso=1, xfrac=1), - 'color': dict(brightness=1, contrast=1, lumaflip=1, hue=1, saturation=1), - 'filter': dict(imgfilter=1), - 'noise': dict(noise=1), - 'cutout': dict(cutout=1), - 'bgc0': dict(xint=1, scale=1, aniso=1, xfrac=1, brightness=1, contrast=1, lumaflip=1, hue=1, saturation=1), - 'bg': dict(xflip=1, rotate90=1, xint=1, scale=1, rotate=1, aniso=1, xfrac=1), - 'bgc': dict(xflip=1, rotate90=1, xint=1, scale=1, rotate=1, aniso=1, xfrac=1, brightness=1, contrast=1, lumaflip=1, hue=1, saturation=1), - 'bgcf': dict(xflip=1, rotate90=1, xint=1, scale=1, rotate=1, aniso=1, xfrac=1, brightness=1, contrast=1, lumaflip=1, hue=1, saturation=1, imgfilter=1), - 'bgcfn': dict(xflip=1, rotate90=1, xint=1, scale=1, rotate=1, aniso=1, xfrac=1, brightness=1, contrast=1, lumaflip=1, hue=1, saturation=1, imgfilter=1, noise=1), - 'bgcfnc': dict(xflip=1, rotate90=1, xint=1, scale=1, rotate=1, aniso=1, xfrac=1, brightness=1, contrast=1, lumaflip=1, hue=1, saturation=1, imgfilter=1, noise=1, cutout=1), - } - assert cfg.augpipe in augpipe_specs - if cfg.aug != 'noaug': - args.update({"augment_kwargs": dict(class_name='training.augment.AugmentPipe', **augpipe_specs[cfg.augpipe])}) - - # ---------------------------------- - # Transfer learning: resume, freezed - # ---------------------------------- - - resume_specs = { - 'ffhq256': 'https://nvlabs-fi-cdn.nvidia.com/stylegan2-ada-pytorch/pretrained/transfer-learning-source-nets/ffhq-res256-mirror-paper256-noaug.pkl', - 'ffhq512': 'https://nvlabs-fi-cdn.nvidia.com/stylegan2-ada-pytorch/pretrained/transfer-learning-source-nets/ffhq-res512-mirror-stylegan2-noaug.pkl', - 'ffhq1024': 'https://nvlabs-fi-cdn.nvidia.com/stylegan2-ada-pytorch/pretrained/transfer-learning-source-nets/ffhq-res1024-mirror-stylegan2-noaug.pkl', - 'celebahq256': 
'https://nvlabs-fi-cdn.nvidia.com/stylegan2-ada-pytorch/pretrained/transfer-learning-source-nets/celebahq-res256-mirror-paper256-kimg100000-ada-target0.5.pkl', - 'lsundog256': 'https://nvlabs-fi-cdn.nvidia.com/stylegan2-ada-pytorch/pretrained/transfer-learning-source-nets/lsundog-res256-paper256-kimg100000-noaug.pkl', - } - - assert cfg.resume is None or isinstance(cfg.resume, str) - if cfg.resume is None: - cfg.resume = 'noresume' - elif cfg.resume == 'noresume': - desc += '-noresume' - elif cfg.resume in resume_specs: - desc += f'-resume{cfg.resume}' - args.resume_pkl = resume_specs[cfg.resume] # predefined url - else: - desc += '-resumecustom' - args.resume_pkl = cfg.resume # custom path or url - - if cfg.resume != 'noresume': - args.ada_kimg = 100 # make ADA react faster at the beginning - args.ema_rampup = None # disable EMA rampup - - if cfg.freezed is not None: - assert isinstance(cfg.freezed, int) - if not cfg.freezed >= 0: - raise UserError('--freezed must be non-negative') - desc += f'-freezed{cfg.freezed:d}' - args.D_kwargs.block_kwargs.freeze_layers = cfg.freezed - - # ------------------------------------------------- - # Performance options: fp32, nhwc, nobench, workers - # ------------------------------------------------- - args.num_fp16_res = cfg.num_fp16_res - if cfg.fp32 is None: - cfg.fp32 = False - assert isinstance(cfg.fp32, bool) - if cfg.fp32: - args.G_kwargs.synthesis_kwargs.num_fp16_res = args.D_kwargs.num_fp16_res = 0 - args.G_kwargs.synthesis_kwargs.conv_clamp = args.D_kwargs.conv_clamp = None - - if cfg.nhwc is None: - cfg.nhwc = False - assert isinstance(cfg.nhwc, bool) - if cfg.nhwc: - args.G_kwargs.synthesis_kwargs.fp16_channels_last = args.D_kwargs.block_kwargs.fp16_channels_last = True - - if cfg.nobench is None: - cfg.nobench = False - assert isinstance(cfg.nobench, bool) - if cfg.nobench: - args.cudnn_benchmark = False - - if cfg.allow_tf32 is None: - cfg.allow_tf32 = False - assert isinstance(cfg.allow_tf32, bool) - args.allow_tf32 = cfg.allow_tf32 - - if cfg.workers is not None: - assert isinstance(cfg.workers, int) - if not cfg.workers >= 1: - raise UserError('--workers must be at least 1') - args.data_loader_kwargs.num_workers = cfg.workers - - args.debug = cfg.debug - if getattr(cfg, "prefix", None) is not None: - desc = cfg.prefix + '-' + desc - return desc, args - -#---------------------------------------------------------------------------- - -def subprocess_fn(rank, args): - if not args.debug: - dnnlib.util.Logger(file_name=os.path.join(args.run_dir, 'log.txt'), file_mode='a', should_flush=True) - - # Init torch.distributed. - distributed_utils.init_distributed_mode(rank, args) - if args.rank != 0: - custom_ops.verbosity = 'none' - - # Execute training loop. - training_loop.training_loop(**args) - -#---------------------------------------------------------------------------- - -class CommaSeparatedList(click.ParamType): - name = 'list' - - def convert(self, value, param, ctx): - _ = param, ctx - if value is None or value.lower() == 'none' or value == '': - return [] - return value.split(',') - - -@hydra.main(config_path="conf", config_name="config") -def main(cfg: DictConfig): - - outdir = cfg.outdir - - # Setup training options - run_desc, args = setup_training_loop_kwargs(cfg) - - # Pick output directory. 
- prev_run_dirs = [] - if os.path.isdir(outdir): - prev_run_dirs = [x for x in os.listdir(outdir) if os.path.isdir(os.path.join(outdir, x))] - - if cfg.resume_run is None: - prev_run_ids = [re.match(r'^\d+', x) for x in prev_run_dirs] - prev_run_ids = [int(x.group()) for x in prev_run_ids if x is not None] - cur_run_id = max(prev_run_ids, default=-1) + 1 - else: - cur_run_id = cfg.resume_run - - args.run_dir = os.path.join(outdir, f'{cur_run_id:05d}-{run_desc}') - print(outdir, args.run_dir) - - if cfg.resume_run is not None: - pkls = sorted(glob.glob(args.run_dir + '/network*.pkl')) - if len(pkls) > 0: - args.resume_pkl = pkls[-1] - args.resume_start = int(args.resume_pkl.split('-')[-1][:-4]) * 1000 - else: - args.resume_start = 0 - - # Print options. - print() - print('Training options:') - print(OmegaConf.to_yaml(args)) - print() - print(f'Output directory: {args.run_dir}') - print(f'Training data: {args.training_set_kwargs.path}') - print(f'Training duration: {args.total_kimg} kimg') - print(f'Number of images: {args.training_set_kwargs.max_size}') - print(f'Image resolution: {args.training_set_kwargs.resolution}') - print(f'Conditional model: {args.training_set_kwargs.use_labels}') - print(f'Dataset x-flips: {args.training_set_kwargs.xflip}') - print() - - # Dry run? - if cfg.dry_run: - print('Dry run; exiting.') - return - - # Create output directory. - print('Creating output directory...') - if not os.path.exists(args.run_dir): - os.makedirs(args.run_dir) - with open(os.path.join(args.run_dir, 'training_options.yaml'), 'wt') as fp: - OmegaConf.save(config=args, f=fp.name) - - # Launch processes. - print('Launching processes...') - if (args.launcher == 'spawn') and (args.num_gpus > 1): - args.dist_url = distributed_utils.get_init_file().as_uri() - torch.multiprocessing.set_start_method('spawn') - torch.multiprocessing.spawn(fn=subprocess_fn, args=(args,), nprocs=args.num_gpus) - else: - subprocess_fn(rank=0, args=args) - -#---------------------------------------------------------------------------- - -if __name__ == "__main__": - if os.getenv('SLURM_ARGS') is not None: - # deparcated launcher for slurm jobs. - slurm_arg = eval(os.getenv('SLURM_ARGS')) - all_args = sys.argv[1:] - print(slurm_arg) - print(all_args) - - from launcher import launch - launch(slurm_arg, all_args) - - else: - main() # pylint: disable=no-value-for-parameter - -#---------------------------------------------------------------------------- diff --git a/spaces/falterWliame/Face_Mask_Detection/Autodata 3 44 En Francais.md b/spaces/falterWliame/Face_Mask_Detection/Autodata 3 44 En Francais.md deleted file mode 100644 index 243df8c10060b8b99ebd4052b3de4748f61b78a7..0000000000000000000000000000000000000000 --- a/spaces/falterWliame/Face_Mask_Detection/Autodata 3 44 En Francais.md +++ /dev/null @@ -1,57 +0,0 @@ - -

Autodata 3 44 En Francais: The Technical Software for the Automobile

-

If you are a professional or amateur mechanic, you probably need software that helps you repair and maintain cars. Software that gives you accurate, up-to-date technical information on different vehicle models. Software that lets you analyze a car's parameters and sort out problems with air conditioning, belts, injection, engines, and so on. Software that saves you time and money. That software exists, and it is called Autodata 3 44 En Francais.

-

In this article, we will introduce Autodata 3 44 En Francais, its features, its advantages, and its compatibility with Windows operating systems.

-

Autodata 3 44 En Francais


Download File: https://urlca.com/2uDbVk



-

What is Autodata 3 44 En Francais?

-

Autodata 3 44 En Francais is a complete Windows program developed to analyze a car's parameters. It is a technical database for automotive repair that covers practically every make and model of car. It offers the only source of original manufacturer information, with data that keeps growing in quality and quantity.

-

Autodata 3 44 En Francais is written in a concise, clear, and consistent style that is easy for mechanics to understand and use. It has an intuitive interface that makes it easy to move between the program's sections and subsections. It also lets you print the information or export it in PDF format.

-

What are the features of Autodata 3 44 En Francais?

-

Autodata 3 44 En Francais offers many features that make it indispensable for automotive repair. Among them:

-
-
• Technical data: Autodata 3 44 En Francais provides detailed technical data on cars, such as engine specifications, tightening torques, wiring diagrams, fault codes, service intervals, and more.
• -
• Repair procedures: Autodata 3 44 En Francais explains step by step how to carry out repair operations on cars, such as replacing parts, adjusting valves, bleeding brakes, and more.
• -
• Diagnostic tools: Autodata 3 44 En Francais offers diagnostic tools for testing how the car's various systems are working, such as injection, ignition, ABS, ESP, and more.
• -
• Practical advice: Autodata 3 44 En Francais gives practical advice to streamline the mechanic's work, such as precautions to take, tips worth knowing, and mistakes to avoid.
• -
-

What are the advantages of Autodata 3 44 En Francais?

-

Autodata 3 44 En Francais offers many advantages to the mechanic who uses it. Among them:

-
-
• Accuracy: Autodata 3 44 En Francais guarantees the accuracy of the information it provides, because it is based on the car manufacturers' official data. It also updates its data regularly to keep up with the market.
• -
• Reliability: Autodata 3 44 En Francais ensures the reliability of the repair operations it explains, because it follows the manufacturers' standards and recommendations. It also respects safety and environmental rules.
• -
• Efficiency: Autodata 3 44 En Francais makes the mechanic's work more efficient, saving time and money. It removes the need to hunt for information across several scattered or outdated sources, and it makes diagnosing and solving problems easier.
• -
-

What is Autodata 3 44 En Francais compatible with?

-

Autodata 3 44 En Francais is compatible with Windows XP (32/64-bit), Vista (32-bit SP1), Windows 7 (32/64-bit), and Windows 8 (32/64-bit). It requires 2.2 GB of hard disk space and 1 GB of RAM, and runs on an Intel Dual Core processor or better.
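As a quick illustration only (this script is not part of Autodata; the 1 GB RAM and 2.2 GB free-disk figures are simply the numbers quoted above, psutil is a third-party package you would have to install, and drive C: is assumed as the install location), you could check those two requirements on a Windows machine with a few lines of Python:

```python
# Rough check of the requirements quoted above (assumed figures: 1 GB RAM, 2.2 GB free disk).
# Requires the third-party psutil package: pip install psutil
import shutil
import psutil

MIN_RAM_GB = 1.0    # RAM requirement quoted above
MIN_DISK_GB = 2.2   # free disk space requirement quoted above

ram_gb = psutil.virtual_memory().total / 1024**3     # installed RAM in GiB
disk_gb = shutil.disk_usage("C:\\").free / 1024**3   # free space on the assumed install drive

print(f"RAM: {ram_gb:.1f} GB (need >= {MIN_RAM_GB} GB) -> {'OK' if ram_gb >= MIN_RAM_GB else 'too low'}")
print(f"Free disk: {disk_gb:.1f} GB (need >= {MIN_DISK_GB} GB) -> {'OK' if disk_gb >= MIN_DISK_GB else 'too low'}")
```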

          -

Conclusion

-

Autodata 3 44 En Francais is the technical software for the automobile par excellence. It provides accurate, up-to-date technical information on different car models. It explains step by step how to carry out repair operations on cars. It offers diagnostic tools for testing how the car's various systems are working. It gives practical advice to streamline the mechanic's work. It guarantees the accuracy, reliability, and efficiency of the information it provides. It is compatible with Windows XP (32/64-bit), Vista (32-bit SP1), Windows 7 (32/64-bit), and Windows 8 (32/64-bit). If you are a professional or amateur mechanic, you cannot do without Autodata 3 44 En Francais.

          -

          3cee63e6c2
          -
          -
          \ No newline at end of file diff --git a/spaces/fatiXbelha/sd/Dapatkan Uang Tak Terbatas dan Gems di Hill Climb Racing MOD APK dengan Link Download Gratis.md b/spaces/fatiXbelha/sd/Dapatkan Uang Tak Terbatas dan Gems di Hill Climb Racing MOD APK dengan Link Download Gratis.md deleted file mode 100644 index 828882559d4d395bb9b81391dd37e3cd471acc10..0000000000000000000000000000000000000000 --- a/spaces/fatiXbelha/sd/Dapatkan Uang Tak Terbatas dan Gems di Hill Climb Racing MOD APK dengan Link Download Gratis.md +++ /dev/null @@ -1,54 +0,0 @@ - -

          Hill Climb Racing Mod APK: Unlimited Money and Fun

          -

          If you are looking for a fun and addictive racing game that will challenge your skills and keep you entertained for hours, then you should try Hill Climb Racing. This game is one of the most popular and downloaded racing games on Android, with over 500 million downloads on Google Play. But what if you want to enjoy the game without any limitations or interruptions? Well, that's where Hill Climb Racing Mod APK comes in. In this article, we will tell you what Hill Climb Racing is, what features it has, why you should download Hill Climb Racing Mod APK, and how to do it. So, buckle up and get ready for some hill climbing action!

          -

          What is Hill Climb Racing?

          -

          Hill Climb Racing is a 2D physics-based racing game developed by Fingersoft, a Finnish game studio. The game was released in 2012 and has since become one of the most successful indie games of all time. The game features a simple but addictive gameplay, where you have to drive various vehicles across different terrains, such as hills, mountains, deserts, forests, and even the moon. The goal is to go as far as possible without running out of gas or crashing your vehicle. Along the way, you can collect coins and fuel cans, which you can use to upgrade your vehicles and unlock new ones. You can also perform stunts and tricks to earn bonus points and coins.

          -

          link download hill climb racing mod apk uang tak terbatas


          Download ->>->>->> https://urllie.com/2uNvFd



          -

          Features of Hill Climb Racing

          -

          Hill Climb Racing has many features that make it an enjoyable and diverse game. Here are some of them:

          -

          Vehicles

          -

          The game offers a wide range of vehicles that you can choose from, each with its own characteristics and abilities. You can drive cars, bikes, trucks, buses, tanks, tractors, snowmobiles, and even a sleigh pulled by reindeer. Some vehicles are faster, some are more stable, some are more fuel-efficient, and some are more suitable for certain terrains. You can also customize your vehicles with different paints, wheels, engines, suspensions, roll cages, and more.

          -

          Tracks

          -

          The game also has a variety of tracks that you can explore, each with its own challenges and scenery. You can race on hills, mountains, deserts, forests, caves, beaches, cities, junkyards, volcanoes, and even the moon. Each track has its own obstacles, such as rocks, bridges, ramps, loops, tunnels, waterfalls, lava pools, and more. You have to be careful not to flip over or crash into them.

          -

          Upgrades

          -

          As you play the game, you can earn coins that you can use to upgrade your vehicles and make them faster, stronger, and more efficient. You can upgrade four aspects of your vehicle: engine, suspension, tires, and 4WD. Each upgrade has 10 levels that you can unlock with coins. Upgrading your vehicle will help you go further and overcome more difficult terrains.

          -

          Challenges

          -

The game also has a challenge mode where you can compete with other players online or offline. You can choose from different challenges such as the distance challenge, time challenge, flip challenge, and air time challenge. You can also create your own challenges and share them with your friends. Challenges are a great way to test your skills and compete with others for the best scores and rankings.

          -


          Why Download Hill Climb Racing Mod APK?

          -

          While Hill Climb Racing is a fun and free game, it also has some limitations and drawbacks that can affect your gaming experience. For example, you have to watch ads to get extra coins or fuel, you have to grind for a long time to unlock and upgrade all the vehicles and tracks, and you have to deal with the game's difficulty curve that can be frustrating at times. That's why many players prefer to download Hill Climb Racing Mod APK, which is a modified version of the game that gives you unlimited money and other benefits. Here are some of the reasons why you should download Hill Climb Racing Mod APK:

          -

          Benefits of Hill Climb Racing Mod APK

          -

          Hill Climb Racing Mod APK has many advantages over the original game. Here are some of them:

          -

          Unlimited Money

          -

          The most obvious benefit of Hill Climb Racing Mod APK is that it gives you unlimited money that you can use to buy and upgrade anything you want in the game. You don't have to worry about running out of coins or fuel, or watching ads to get more. You can enjoy the game without any restrictions or interruptions.

          -

          All Vehicles and Tracks Unlocked

          -

          Another benefit of Hill Climb Racing Mod APK is that it unlocks all the vehicles and tracks in the game from the start. You don't have to play for hours or days to unlock them one by one. You can choose any vehicle or track you like and explore them at your own pace. You can also try different combinations of vehicles and tracks to see which ones suit your style and preference.

          -

          No Ads

          -

          A third benefit of Hill Climb Racing Mod APK is that it removes all the ads from the game. You don't have to watch annoying and repetitive ads that pop up every few minutes or interrupt your gameplay. You can play the game smoothly and without any distractions.

          -

          How to Download and Install Hill Climb Racing Mod APK?

          -

          If you are convinced by the benefits of Hill Climb Racing Mod APK and want to download it, then you need to follow these simple steps:

          -

          Step 1: Download the APK file from a trusted source

          -

The first step is to download the APK file of Hill Climb Racing Mod APK from a reliable and safe source. You can find many websites that offer this file, but you need to be careful not to download a fake or malicious file that could harm your device or steal your data. We recommend using this link to download the latest version of Hill Climb Racing Mod APK, which is 100% safe and tested.

          -

          Step 2: Enable unknown sources on your device

          -

          The second step is to enable unknown sources on your device, which will allow you to install apps from sources other than Google Play. To do this, go to your device's settings, then security, then unknown sources, and turn it on. This will enable you to install the APK file that you downloaded in step 1.

          -

          Step 3: Install the APK file and launch the game

          -

          The final step is to install the APK file that you downloaded in step 1 by tapping on it and following the instructions on the screen. Once the installation is complete, you can launch the game and enjoy Hill Climb Racing Mod APK with unlimited money and fun.
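If you would rather sideload the file from a computer instead of tapping it on the phone, the same installation can be done over USB with adb. This is only an illustrative alternative: it assumes adb is installed and on your PATH, USB debugging is enabled on the device, and the file name below is a placeholder for whatever you actually downloaded.

```python
# Illustrative sideload via adb from a computer (assumes adb is on PATH, USB debugging
# is enabled on the phone, and the APK file name matches the one you downloaded).
import subprocess

apk_path = "hill-climb-racing-mod.apk"   # hypothetical file name -- use your actual download

# `adb install -r` installs the package, replacing any previously installed version.
result = subprocess.run(["adb", "install", "-r", apk_path], capture_output=True, text=True)
print(result.stdout or result.stderr)
```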

          -

          Conclusion

          -

          Hill Climb Racing is one of the best racing games on Android, with millions of fans around the world. It has a simple but addictive gameplay, a wide range of vehicles and tracks, and a challenge mode that will test your skills. However, if you want to enjoy the game without any limitations or interruptions, then you should download Hill Climb Racing Mod APK, which gives you unlimited money, all vehicles and tracks unlocked, and no ads. This way, you can have more fun and excitement while playing Hill Climb Racing.

          -

          If you liked this article, please share it with your friends who love racing games. Also, if you have any questions or feedback about Hill Climb Racing Mod APK, please leave them in the comments section below. We would love to hear from you!

          -

          Frequently Asked Questions

          -

          Here are some of the most common questions that people ask about Hill Climb Racing Mod APK:

          -
            -
          1. Is Hill Climb Racing Mod APK safe?
          2. -

            Yes, Hill Climb Racing Mod APK is safe to download and install, as long as you use a trusted source like the one we provided in this article. However, you should always be careful when downloading and installing any app from unknown sources, as they may contain viruses or malware that can harm your device or steal your data. You should also check the permissions that the app requests and only grant them if you trust the app.

            -
          3. Is Hill Climb Racing Mod APK legal?
          4. -

            Hill Climb Racing Mod APK is not legal, as it violates the terms and conditions of the original game. By using Hill Climb Racing Mod APK, you are essentially hacking the game and getting an unfair advantage over other players. This can also result in your account being banned or suspended by the game developers. Therefore, we do not encourage or endorse the use of Hill Climb Racing Mod APK, and we are not responsible for any consequences that may arise from using it.

            -
          5. Will Hill Climb Racing Mod APK work on my device?
          6. -

            Hill Climb Racing Mod APK should work on most Android devices that support the original game. However, some devices may not be compatible with Hill Climb Racing Mod APK, or may experience some issues or glitches while playing it. If you encounter any problems while using Hill Climb Racing Mod APK, you can try to uninstall and reinstall it, or contact the mod developer for support.

            -
          7. Can I play Hill Climb Racing Mod APK online?
          8. -

            Hill Climb Racing Mod APK can be played online, but only with other players who are also using the modded version of the game. You cannot play Hill Climb Racing Mod APK with players who are using the official version of the game, as they have different features and settings. You can also play Hill Climb Racing Mod APK offline, without an internet connection.

            -
          9. Can I update Hill Climb Racing Mod APK?
          10. -

            Hill Climb Racing Mod APK can be updated, but only by downloading and installing the latest version of the modded file from the same source that you used before. You cannot update Hill Climb Racing Mod APK from Google Play or from the original game's settings, as they will overwrite the modded file and remove all the benefits that you had. You should also backup your game data before updating Hill Climb Racing Mod APK, in case something goes wrong during the process.

            -

          401be4b1e0
          -
          -
          \ No newline at end of file diff --git a/spaces/fatiXbelha/sd/Descarga Roblox APK y nete a la comunidad de creadores ms grande del mundo.md b/spaces/fatiXbelha/sd/Descarga Roblox APK y nete a la comunidad de creadores ms grande del mundo.md deleted file mode 100644 index 89fd948498d74dd4039419b41b3290f1036cbc5c..0000000000000000000000000000000000000000 --- a/spaces/fatiXbelha/sd/Descarga Roblox APK y nete a la comunidad de creadores ms grande del mundo.md +++ /dev/null @@ -1,185 +0,0 @@ -
          -

Download Roblox APK: How to Play and Create Games on Your Android Device

-

Roblox is an online gaming platform that lets you create, share, and play experiences with millions of people around the world. It is a fun and creative way to express your imagination and learn programming, design, and collaboration skills. In this article, we will show you how to download Roblox APK for your Android device, how to install and launch it, how to create an account and access the games, how to create and share your own games using Roblox Studio, and how to fix some common problems with Roblox APK.

-

What is Roblox and why is it popular?

-

Roblox is an online gaming platform that lets you create, share, and play experiences with millions of people around the world. You can explore an endless variety of virtual worlds created by the community, from epic adventures and sports competitions to realistic simulations and educational games. You can also create your own games using Roblox Studio, a free and easy-to-use tool that lets you design and program your own experiences with blocks, scripts, and assets. You can publish your games on the platform for others to play, or play other users' games and give their creators feedback.

-

descargar roblox apk


Download Zip: https://urllie.com/2uNB7O



-

Roblox is popular because it offers a unique, personalized experience for every user. You can be whatever you want to be, from an astronaut to a superhero, a chef, or an artist. You can customize your avatar with thousands of items, clothes, faces, accessories, and more. You can chat with your friends online, use private messages and groups, and make new friends with similar interests. You can learn useful skills such as programming, graphic design, animation, sound, and music. And best of all, you can have fun and express yourself in a safe and friendly environment.

-

What are the benefits of playing Roblox?

-

Playing Roblox has many benefits for users of all ages. Some of them are:

-
-
• Stimulates creativity: Roblox lets you create your own games with blocks, scripts, and assets. You can design your own worlds, characters, objects, stories, and game mechanics. You can also explore other users' games and see how they built their experiences.
• -
• Teaches programming and coding skills: Roblox uses Lua, a programming language that is easy to learn and use. You can use Lua to write scripts that control how your game behaves, from physics to artificial intelligence. You can also use Lua to modify existing games and add new features to them.
• -
• Builds computer literacy: Roblox helps you develop computing skills such as typing speed, effective online communication, menu navigation, and confidence in using web-based software.
• -
• Teaches entrepreneurship: Roblox lets you earn real money from your games if you monetize them with Robux, the platform's virtual currency. You can sell your games, items, game passes, subscriptions, and ads to other users. You can also buy Robux with real money and use them to improve your gaming experience.
• -
• Encourages collaboration and socializing: Roblox connects you with millions of people around the world who share your passion for games. You can play with your friends, join groups, take part in events, compete on leaderboards, and communicate with other users. You can also collaborate with other creators to make games together, exchange ideas and tips, and get feedback.
• -
• Entertains and amuses: Roblox offers an endless variety of games for every taste and age. You can enjoy adventure, action, sports, simulation, education, horror, comedy, and much more. You can also create your own games and express your personality and style.
• -
-

What are the drawbacks of playing Roblox?

-

Although Roblox is a safe and fun platform, it also has some drawbacks you should keep in mind. Some of them are:

          -


          -
            -
• Requires an internet connection: Roblox is an online platform that needs an internet connection to work. If your connection is not stable or fast, you may run into loading problems, lag, or disconnections. Also, if you play on a mobile device, you may use up a lot of the data in your plan.
• -
• Can be addictive: Roblox is a very entertaining and addictive game that can keep you in front of the screen for hours. This can affect your physical and mental health, as well as your performance at school or work. It is important to set time limits and take frequent breaks to rest and do other activities.
• -
• May contain inappropriate content: Roblox is an open platform where anyone can create and share games. This means there may be games containing violence, obscene language, sexual themes, or references to drugs or alcohol. Although Roblox has a moderation system and filters to block this kind of content, it is not infallible and some cases can slip through. For that reason, it is important to supervise how minors use Roblox and to use the security and privacy options to protect your account.
          • -
          -

How to download Roblox APK for Android devices

-

Roblox APK is the installation file of the Roblox app for Android devices. You can download it from the official Roblox website or from other trusted sources such as APKPure or Uptodown. Below we explain, step by step, how to download Roblox APK for your Android device.

-

Step 1: Go to the official Roblox website

-

The first thing you need to do is open the official Roblox website in your web browser. You can do so from this link: https://www.roblox.com/

-

On the home page you will see a green button that says "Download now". Click it to go to the download page.

- Roblox home page -

Step 2: Choose the Android option

-

On the download page you will see several options for different platforms, such as Windows, Mac, iOS, Xbox One, and Android. Click the Android icon to download the Roblox APK file.

- Roblox download page -

Step 3: Confirm the download

-

When you click the Android icon, a pop-up window will ask whether you want to download the Roblox APK file. Click "Accept" to confirm the download.

- Pop-up window confirming the Roblox APK download -

Step 4: Find the APK file on your device

-

Once you have confirmed the download, the APK file will be saved in your device's downloads folder. You can use a file manager to find it and open it. The file name will be something like "Roblox-2.578.564.apk".

- File manager showing the Roblox APK file -

Step 5: Allow installation from unknown sources

-

Before installing the APK file, you must allow your device to install apps from unknown sources. This means you can install apps that do not come from the official Google Play store. To do so, follow these steps:

-
-
1. Go to your device's settings.
2. Look for the security or privacy option.
3. Turn on the unknown sources option.
4. Confirm the action if asked.
-

These steps can vary depending on your device's model and software version. If you cannot find the option, search for it in the settings search bar or check your device's manual.

- Security settings with the unknown sources option -

Step 6: Install and launch Roblox APK

-

Once you have allowed installation from unknown sources, you can install and launch Roblox APK. To do so, follow these steps:

-
-
1. Open the APK file from the file manager or from the notification bar.
2. Accept the permissions the app asks for.
3. Wait for the installation to finish.
4. Open the app from the icon created on your home screen or from the app menu.
-

That's it! You can now enjoy Roblox on your Android device.

- Roblox APK start screen -

How to create an account and access Roblox games

-

To play Roblox games, you need to create an account and sign in to it. This lets you save your progress, customize your avatar, chat with other users, buy and sell items, and much more. To create an account and access Roblox games, follow these steps:

-

Step 1: Open the Roblox app

-

Open the Roblox app you just installed on your Android device. You will see a start screen with two options: "Log in" and "Sign up". If you already have a Roblox account, you can log in with your username and password. If you do not have an account, you can sign up with your email address, date of birth, and username.

- Roblox login or sign-up screen -

Step 2: Fill in the sign-up form

-

If you choose to sign up, you will have to fill in a form with your personal details. You must enter your email address, your date of birth, and your username. You must also choose a secure password and accept Roblox's terms and conditions. Then click the "Sign up" button.

- Roblox sign-up form -

Step 3: Verify your email address

-

After signing up, you will receive an email from Roblox to verify your account. Open the email and click the link they send you. This confirms your registration and takes you to the Roblox home page.

- Roblox verification email -

Step 4: Explore Roblox games

-

Once you have verified your account, you can explore Roblox games from the app. You will see a list of categories such as "Featured", "Popular", "Recommended", "Top Earning", and more. You can swipe the screen to see more options or use the search bar to find a specific game. To play a game, just tap it and wait for it to load.

- List of Roblox games -

How to fix common problems with Roblox APK

-

Sometimes, when using Roblox APK on your Android device, you may run into problems that affect your gaming experience. These problems can involve connectivity, performance, compatibility, or updates. Below are some tips for fixing the most common problems with Roblox APK.

-

How to update Roblox APK to the latest version

-

To make sure Roblox APK works correctly and has the latest features and improvements, it is important to update it to the latest available version. To do so, follow these steps:

-
-
1. Open the Roblox app on your Android device.
2. In the top-left corner, tap the three horizontal lines icon to open the menu.
3. In the menu, tap the Settings option (the gear icon).
4. On the Settings screen, tap the About option (the question mark icon).
5. On the About screen you will see the current version number of Roblox APK. If a new version is available, you will see a button that says "Update". Tap it to start downloading and installing the new version.
-

If you do not see the update button, it means you already have the latest version installed. You can also check for updates on the official Roblox website or on other trusted sources such as APKPure or Uptodown.

- About screen with the update button -

How to fix compatibility and performance problems

-

Some Android devices may have trouble running Roblox APK because of their technical specifications or their operating system. This can make the game close unexpectedly, freeze, slow down, or fail to load properly. To fix these problems, you can try the following solutions:

-
-
• Make sure your device meets the minimum requirements: Roblox APK requires at least an ARMv7 processor, 1 GB of RAM, and Android 4.4 or higher. You can check your device's specifications in the settings or in the user manual.
• -
• Clear the app's cache and data: The cache and data are temporary files stored on your device to make the app run better. Sometimes, however, they build up and cause problems. To clear them, follow these steps:
  1. Go to your device's settings.
  2. Look for the apps or application manager option.
  3. Find and select the Roblox app.
  4. Tap the storage or memory option.
  5. Tap the clear cache and clear data buttons.
• -
• Close background apps: Apps running in the background can consume resources and hurt Roblox APK's performance. To close them, follow these steps:
  1. Press the home button or the square button to see your recent apps.
  2. Swipe away the apps you are not using to close them.
  3. Open the Roblox app again.
• -
• Restart your device: Sometimes a simple restart fixes many problems. To restart your device, follow these steps:
  1. Press and hold the power button until a menu appears.
  2. Tap the restart option, or turn the device off and on again.
  3. Wait for your device to restart and open the Roblox app again.
-

How to contact Roblox technical support

-

If none of the solutions above works for you, or if you have any other problem with Roblox APK, you can contact Roblox technical support for help. To do so, follow these steps:

-
-
1. Open the Roblox app on your Android device.
2. In the top-left corner, tap the three horizontal lines icon to open the menu.
3. In the menu, tap the Help option (the question mark icon).
4. On the Help screen you will see several options for resolving your questions or problems. You can check the FAQ, the tutorials, the forums, or the Roblox blog. You can also email the support team, call the phone number shown, or fill in the form displayed.
-

When contacting Roblox technical support, provide as much information as possible about your problem, such as your device's model and version, the version of Roblox APK you use, the name of the game you are trying to play, the error message you receive, and the steps you have already taken to fix it. This will help the support team identify and resolve your problem faster.

- Roblox Help screen -

Conclusion

-

Roblox is an online gaming platform that lets you create, share, and play experiences with millions of people around the world. It is a fun and creative way to express your imagination and learn programming, design, and collaboration skills. In this article, we have shown you how to download Roblox APK for your Android device, how to install and launch it, how to create an account and access the games, how to create and share your own games using Roblox Studio, and how to fix some common problems with Roblox APK.

-

We hope this article has been useful to you and that you enjoy Roblox on your Android device. If you have any questions or comments about Roblox APK, feel free to leave us a message below. We would love to hear from you!

-

Frequently asked questions

-

Here are some frequently asked questions about Roblox APK, with their answers.

-

Is Roblox APK safe?

-

Yes, Roblox APK is safe as long as you download it from the official Roblox website or from other trusted sources such as APKPure or Uptodown. These sources check that the APK file does not contain viruses or malware that could damage your device or steal your data. In addition, Roblox has security and privacy measures to protect your account and your personal data.

-

Is Roblox APK free?

-

Yes, Roblox APK is free, and there is no cost to download or install it. However, some games or features on the platform may require Robux, Roblox's virtual currency. You can buy Robux with real money or earn them with your games.

-

Does Roblox APK work without internet?

-

No, Roblox APK needs an internet connection to work properly. Without internet you cannot access the games or the platform's social features. You will also need internet to update the app and receive the latest news and improvements.

-

Is Roblox APK compatible with all Android devices?

-

No, Roblox APK is not compatible with all Android devices. Some devices may have trouble running the app because of their technical specifications or their operating system. To use Roblox APK on your Android device, you need at least an ARMv7 processor, 1 GB of RAM, and Android 4.4 or higher.
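For illustration only (this is not an official Roblox or Android tool), if you have the adb command-line tool installed and USB debugging enabled on the phone, a short Python sketch can read the three figures mentioned above straight from the device; it also assumes the device shell provides grep, which recent Android versions do:

```python
# Illustrative compatibility check against the requirements mentioned above
# (ARMv7-compatible CPU, at least 1 GB of RAM, Android 4.4+). Assumes adb is on PATH
# and the phone is connected with USB debugging enabled.
import subprocess

def adb_shell(cmd: str) -> str:
    return subprocess.run(["adb", "shell", cmd], capture_output=True, text=True).stdout.strip()

android_version = adb_shell("getprop ro.build.version.release")   # e.g. "13"
cpu_abi = adb_shell("getprop ro.product.cpu.abi")                 # e.g. "arm64-v8a" or "armeabi-v7a"
mem_line = adb_shell("grep MemTotal /proc/meminfo")               # e.g. "MemTotal: 3981234 kB"
ram_gb = int(mem_line.split()[1]) / 1024**2 if mem_line else 0.0

print(f"Android version: {android_version}")
print(f"CPU ABI:         {cpu_abi} (ARMv7 or newer ABIs such as arm64-v8a are fine)")
print(f"RAM:             {ram_gb:.1f} GB (needs at least 1 GB)")
```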

          -

How can I create my own games with Roblox Studio?

-

To create your own games with Roblox Studio, you need to download and install the Roblox Studio application on your computer. You can do so from this link: https://www.roblox.com/develop. Then you can use the tool to design and program your own games with blocks, scripts, and assets. You can also publish your games on the platform for others to play, or play other users' games and modify them. To learn more about how to use Roblox Studio, check the Roblox tutorials, forums, and blog.

    
          \ No newline at end of file diff --git a/spaces/fatiXbelha/sd/Download Data Genshin Impact Tips dan Trik untuk Menghemat Waktu dan Kuota.md b/spaces/fatiXbelha/sd/Download Data Genshin Impact Tips dan Trik untuk Menghemat Waktu dan Kuota.md deleted file mode 100644 index b4b5f7df3e16ea0790aeee6223deda701e60397e..0000000000000000000000000000000000000000 --- a/spaces/fatiXbelha/sd/Download Data Genshin Impact Tips dan Trik untuk Menghemat Waktu dan Kuota.md +++ /dev/null @@ -1,86 +0,0 @@ - -

    How Many Hours Does It Take to Download Genshin Impact's Data?

    Genshin Impact is one of the most popular RPGs (role-playing games) right now. The game offers a vast open world, gorgeous graphics, and appealing characters. However, to play it you first have to download a fairly large amount of data: around 30 GB on PC and 8 GB on Android and iOS. So how many hours does downloading Genshin Impact's data take, and is there any way to speed up the process?
    

          -

    



          -

    In this article, we will explain what Genshin Impact is, why its data takes so long to download, and how to speed the download up. Read on for the full breakdown below.
    

          -

    What is Genshin Impact?

    Genshin Impact is an RPG developed by Hoyoverse, a company from China. The game was released in September 2020 and has received many awards and plenty of praise from critics and players. It also has more than 100 million active users worldwide.
    

          -

    Genre and gameplay

    Genshin Impact is an action RPG with fantasy elements. In the game, you play as a Traveler searching for their lost sibling in a world called Teyvat. You can explore regions with different themes, meet other characters who can join your party, and fight enemies that wield elemental powers.

    The game also has a gacha system, a mechanic that lets you obtain new characters or weapons by spending in-game currency or real money. In addition, it has a multiplayer feature that lets you play together with your friends online.
    

          -

    Platforms and operating systems

    Genshin Impact can be played on several platforms, including PC, Android, iOS, PlayStation 4, PlayStation 5, and Nintendo Switch (in development). On PC, the game requires Windows 7 SP1 64-bit, Windows 8.1 64-bit, or Windows 10 64-bit. On Android, it requires Android 7.0 or higher. On iOS, it requires iOS 9.0 or higher.

    Here are the minimum and recommended PC specifications:

    | Specification | Minimum | Recommended |
    |---|---|---|
    | Processor | Intel Core i5 or equivalent | Intel Core i7 or equivalent |
    | RAM | 8 GB | 16 GB |
    | Graphics card | NVIDIA GeForce GT 1030 or higher | NVIDIA GeForce GTX 1060 6 GB or higher |
    | DirectX version | 11 | 11 |
    | Storage space | 30 GB | 30 GB |
    
          -

    Why does downloading Genshin Impact's data take so long?

    Downloading Genshin Impact's data can take a long time, depending on several factors, such as the file size, your internet connection, and the number of users. Here is a more detailed explanation.
    

          -

    Large file size

    Genshin Impact has a large file size, around 30 GB on PC and 8 GB on Android and iOS. This is because the game has high-quality graphics, immersive audio, and a wide variety of content. A file this large naturally takes a long time to download, especially if your internet connection is slow or unstable.
    
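    To give a rough sense of what that means in hours, here is a small back-of-the-envelope estimate. It is only a sketch: the connection speeds are assumed example values, and real downloads are usually slower because of server load and protocol overhead.

    ```python
    # Rough estimate: hours = (size in GB * 8000 megabits) / (speed in Mbps) / 3600 seconds.
    def download_hours(size_gb: float, speed_mbps: float) -> float:
        return size_gb * 8000 / speed_mbps / 3600

    # The ~30 GB PC download at a few example connection speeds.
    for speed_mbps in (10, 25, 50, 100):
        print(f"{speed_mbps:>3} Mbps -> about {download_hours(30, speed_mbps):.1f} hours")
    ```

    At 10 Mbps the 30 GB PC download works out to roughly 6 to 7 hours, while the 8 GB mobile download at the same speed takes under 2 hours.
    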

          -

    Slow internet connection

    A slow internet connection is one of the main reasons Genshin Impact's data takes so long to download. A slow connection can be caused by many things, such as the distance between your device and the game servers, problems with your network, or an exhausted data plan. A slow connection reduces the download speed, so the download takes even longer.
    

          -

    

          -

    Large number of users

    The number of users can also affect the download speed. If many users download the game data at the same time, the game servers come under heavy load and the download speed drops. This usually happens when a new update or event arrives in the game.
    

          -

    How can you speed up the Genshin Impact data download?

    Although downloading Genshin Impact's data can take a long time, there are a few things you can do to speed up the process. Here are some tips you can try.
    

          -

    Choose a nearby server

    Genshin Impact has several servers you can choose from, such as Asia, America, Europe, and TW/HK/MO. The server affects the download speed, because the closer your device is to the game server, the faster the download. Therefore, you should pick the server that matches your location.
    

          -

    Use a LAN cable or 5 GHz Wi-Fi

    If you play Genshin Impact on a PC, you can use a LAN cable to connect your device to your internet router. A wired connection is usually more stable and faster than Wi-Fi. If you do not have a LAN cable or you are using a mobile device, use 5 GHz Wi-Fi if it is available. 5 GHz Wi-Fi has a higher frequency and wider bandwidth than 2.4 GHz Wi-Fi, which can increase the download speed.
    

          -

    Close other apps that use bandwidth

    Other applications that use internet bandwidth can also slow down the download. Apps such as browsers, video streaming services, or download managers can eat into your bandwidth and reduce the download speed. You should therefore close any apps you do not need while downloading the game data.
    

          -

    Use a download manager

    A download manager is an application or browser extension that helps you download files faster and more conveniently. Download managers usually offer features such as pause and resume, multiple connections, and speed limits. Some examples you can use are IDM (Internet Download Manager), FDM (Free Download Manager), or XDM (Xtreme Download Manager).
    

          -

    Conclusion

    Genshin Impact is a hugely popular action RPG with fantasy elements. The game has a large file size, around 30 GB on PC and 8 GB on Android and iOS. Downloading its data can take a long time, depending on your internet connection, the game server, and the number of users. However, you can speed up the download in several ways, such as choosing a nearby server, using a LAN cable or 5 GHz Wi-Fi, closing other apps that use bandwidth, and using a download manager.

    We hope this article is helpful if you want to play Genshin Impact. Have fun, and don't forget to keep up with the latest updates and events in the game.
    

          -

    FAQ

    Here are some frequently asked questions about downloading Genshin Impact's data.
    

          -

    Is Genshin Impact free?

    Yes, Genshin Impact is free to play. You do not have to pay anything to download or play the game. However, you can buy in-game currency or certain items with real money if you want to.

    Can Genshin Impact be played offline?

    No, Genshin Impact requires an internet connection to play. You cannot play the game offline. You also have to download the latest updates whenever they are released.

    Does Genshin Impact support cross-play?

    Yes, Genshin Impact supports cross-play, meaning you can play together with players on other platforms. For example, you can play with friends who use PC, Android, iOS, PlayStation 4, or PlayStation 5.

    Is Genshin Impact safe for children?

    Genshin Impact is rated T (Teen) by the ESRB (Entertainment Software Rating Board), an organization that assigns age ratings to games. A T rating means the game is suitable for ages 13 and up. The game contains elements such as violence, blood, coarse language, and mature themes.

    Is Genshin Impact on Steam?

    No, Genshin Impact is not available on Steam, the digital distribution platform for PC games. You have to download the game from the official Hoyoverse website or from app stores such as the Google Play Store or the App Store.
    

    
          \ No newline at end of file diff --git a/spaces/fatiXbelha/sd/Download Sumdog for PC A Step-by-Step Tutorial for Windows and Mac Users.md b/spaces/fatiXbelha/sd/Download Sumdog for PC A Step-by-Step Tutorial for Windows and Mac Users.md deleted file mode 100644 index 3d9fdf2a79640bc7b0677cc0a38ec5d15f93578d..0000000000000000000000000000000000000000 --- a/spaces/fatiXbelha/sd/Download Sumdog for PC A Step-by-Step Tutorial for Windows and Mac Users.md +++ /dev/null @@ -1,110 +0,0 @@ - -

          How to Download Sumdog: A Fun and Effective Way to Learn Math and Spelling

          -

          If you are looking for a way to help your child improve their math and spelling skills, you might want to consider downloading Sumdog. Sumdog is a games-based adaptive-learning app that tailors curriculum-aligned questions to each child's unique level. Used at home and in schools across the UK, it can help you inspire and motivate even your most disengaged learners to help build their confidence and enjoy learning.

          -

          In this article, we will show you how to download Sumdog on Apple and Android devices, how to use it after downloading it, and what benefits it can offer for learning math and spelling. Let's get started!

          -

    



          -

          How to Download Sumdog on Apple Devices

          -

    If you have an iPad or an iPhone, you can download Sumdog from the App Store. You will need iOS 11.0 or later. Here are the steps you need to follow:
    

          -
            -
    1. Open the App Store on your device and search for the Sumdog app.
    2. Tap on the Sumdog app icon and then tap on Get.
    3. Enter your Apple ID password or use Touch ID or Face ID to confirm.
    4. Wait for the app to download and install on your device.
    
          -

          You can also click here if you are viewing this page using an iPad or an iPhone to download the Sumdog app directly.

          -

    

          -

          How to Download Sumdog on Android Devices

          -

    If you have an Android tablet or smartphone, you can download Sumdog from the Play Store. You will need Android 5.0 or later. Here are the steps you need to follow:
    

          -
            -
    1. Open the Play Store on your device and search for the Sumdog app.
    2. Tap on the Sumdog app icon and then tap on Install.
    3. Accept the permissions required by the app.
    4. Wait for the app to download and install on your device.
    
          -

          You can also click here if you are viewing this page using an Android tablet or smartphone to download the Sumdog app directly.

          -

          How to Use Sumdog After Downloading It

          -

          Once you have downloaded Sumdog on your device, you can start using it right away. You will need an internet connection to access the app. Here are the steps you need to follow:

          -
            -
    1. Launch the app and sign in with your username and password or create a new account. If you are a parent, you can create a family account and add up to six children. If you are a teacher, you can create a school account and add your students.
    2. Choose your subject, grade level, and skill to practice. You can choose from math or spelling, and select from over 10 grade levels and hundreds of skills. You can also let Sumdog choose the best skill for you based on your diagnostic test results.
    3. Play games and answer questions tailored to your level. You can choose from over 30 games, such as Jet Ski Rescue, Cake Monsters, or Pirate Treasure. You will see questions on the screen that match your skill and level. You will get instant feedback and hints if you need them.
    4. Earn coins, rewards, and badges as you progress. You can use your coins to buy items for your 3D avatar, such as clothes, accessories, or pets. You can also earn rewards, such as certificates, trophies, or stickers. You can also collect badges for completing challenges, such as answering a certain number of questions correctly or playing for a certain amount of time.
    
          -

          Benefits of Using Sumdog for Learning Math and Spelling

          -

          Sumdog is not just a fun and engaging app, but also a powerful and effective learning tool. Here are some of the benefits of using Sumdog for learning math and spelling:

          -
            -
    • Sumdog is aligned to the National Curriculum and covers thousands of standards-aligned questions. You can be sure that your child is practicing the skills that they need to master for their grade level.
    • Sumdog adapts to each child's unique level and provides personalized practice. Sumdog uses an adaptive-learning algorithm that adjusts the difficulty of the questions based on each child's performance. This way, your child will always get the right level of challenge and support.
    • Sumdog is engaging and motivating with over 30 games, virtual rewards, and a 3D avatar. Sumdog makes learning fun and rewarding by incorporating games, coins, rewards, and badges into the learning process. Your child will enjoy playing games while learning math and spelling skills. They will also love customizing their own 3D avatar with items they buy with their coins.
    • Sumdog is proven to accelerate progress with just 30 minutes of practice each week. Sumdog has been independently evaluated by several research studies that show that it can improve math and spelling outcomes for children of all abilities. According to one study by Glasgow University, children who use Sumdog regularly make three times more progress than those who do not.
    
          -

          Conclusion and FAQs

          -

          In conclusion, Sumdog is a games-based adaptive-learning app that can help your child learn math and spelling skills in a fun and effective way. You can download it on your Apple or Android device easily by following the steps we have outlined in this article. You can also use it after downloading it by signing in or creating an account, choosing your subject, grade level, and skill, playing games and answering questions, and earning coins, rewards, and badges.

          -

          If you want to give Sumdog a try, you can download it today from the App Store or the Play Store. You can also visit www.sumdog.com for more information about the app and its features.

          -

    

          If you have any questions about Sumdog, you might find the answers in the following FAQs. If not, you can always contact Sumdog support for more help.

          -

          FAQ 1: How much does Sumdog cost?

          -

          Sumdog is free to download and use for all children. However, if you want to access more features and games, you can upgrade to a premium subscription. A premium subscription costs £6 per month or £48 per year for one child, or £12 per month or £96 per year for a family of up to six children. You can also get a free trial of the premium subscription for 14 days.

          -

          FAQ 2: How can I monitor my child's progress on Sumdog?

          -

          If you are a parent, you can monitor your child's progress on Sumdog by logging into your parent dashboard. You can see how much time your child has spent on Sumdog, what skills they have practiced, how many questions they have answered, and what coins and rewards they have earned. You can also see their diagnostic test results and their accuracy and speed scores.

          -

          FAQ 3: How can I set work for my child on Sumdog?

          -

          If you are a parent, you can set work for your child on Sumdog by using the parent dashboard. You can choose the subject, grade level, and skill that you want your child to practice. You can also set a target number of questions or minutes that you want your child to complete each day or week. You can also assign specific games or challenges for your child to play.

          -

          FAQ 4: How can I contact Sumdog support if I have any issues?

          -

          If you have any issues with Sumdog, you can contact Sumdog support by emailing support@sumdog.com or calling 0131 226 1511. You can also visit www.sumdog.com/en/Help/ for more help and resources.

          -

          FAQ 5: What are some other features of Sumdog that I should know about?

          -

          Some other features of Sumdog that you should know about are:

          -
            -
    • You can play Sumdog online or offline. If you play offline, your progress will be synced when you go online again.
    • You can play Sumdog with your friends or other children from around the world. You can join multiplayer games, chat with other players, and send friend requests.
    • You can join Sumdog contests and competitions. You can compete with other children from your school, your region, or your country. You can win prizes and certificates for yourself and your school.
    • You can access Sumdog on any device. You can use the same account to log in on your tablet, smartphone, laptop, or desktop computer.
    • You can get feedback and support from Sumdog teachers. You can ask questions, request hints, or report problems using the in-app chat feature.
    

    
          \ No newline at end of file diff --git a/spaces/fatiXbelha/sd/Download and Play Euro Truck Simulator 2 Bus Indonesia Mod Apk on Your Android Phone or Tablet.md b/spaces/fatiXbelha/sd/Download and Play Euro Truck Simulator 2 Bus Indonesia Mod Apk on Your Android Phone or Tablet.md deleted file mode 100644 index 34e208a00cdf4b9add0ef7071a81d665f94ce0e8..0000000000000000000000000000000000000000 --- a/spaces/fatiXbelha/sd/Download and Play Euro Truck Simulator 2 Bus Indonesia Mod Apk on Your Android Phone or Tablet.md +++ /dev/null @@ -1,169 +0,0 @@ - - - -
          -

          How to Download and Install Euro Truck Simulator 2 Bus Indonesia Mod for Android

          -

          If you are a fan of simulation games, especially driving games, you might have heard of Euro Truck Simulator 2, a popular game that lets you drive trucks across Europe. But did you know that you can also drive buses across Indonesia with a mod? Yes, you read that right. There is a mod for Euro Truck Simulator 2 that adds buses and routes from Indonesia, giving you a whole new experience of driving in a different country.

          -

          In this article, we will show you how to download and install Euro Truck Simulator 2 Bus Indonesia Mod for Android devices. This mod will allow you to play the game on your smartphone or tablet, without needing a PC or a console. You will be able to enjoy the realistic graphics, physics, and sounds of driving buses in Indonesia, as well as explore various cities and landmarks along the way.

          -

    



          -

          But before we get into the details, let's first understand what this mod is all about.

          -

          What is Euro Truck Simulator 2 Bus Indonesia Mod?

          -

          Euro Truck Simulator 2 Bus Indonesia Mod is a modification for Euro Truck Simulator 2, a game developed by SCS Software and released in 2012. The game allows you to drive various trucks across Europe, delivering cargo and earning money. You can customize your truck, buy new ones, hire drivers, and expand your business.

          -

          The mod, however, changes the game completely. It replaces the trucks with buses, and the European map with an Indonesian map. You can choose from different types of buses, such as SR2 ECE, and drive them across Indonesia, picking up and dropping off passengers. You can also visit famous places in Indonesia, such as Jakarta, Bali, Surabaya, Bandung, and more.

          -

          The mod was created by Indonesian modders who wanted to bring their country's culture and scenery to the game. They also added realistic features, such as traffic jams, toll booths, police checkpoints, speed limits, weather effects, and more. The mod is constantly updated with new buses, routes, and improvements.

          -

          Features of Euro Truck Simulator 2 Bus Indonesia Mod

          -

          Some of the features that make this mod stand out are:

          -
            -
    • High-quality graphics: The mod uses high-resolution textures and models to create realistic buses and environments. You can see the details of the bus interiors, exteriors, dashboards, lights, mirrors, etc. You can also admire the beauty of the Indonesian landscapes, such as mountains, beaches, forests, villages, etc.
    • Realistic physics: The mod simulates the physics of driving a bus in real life. You can feel the weight, speed, acceleration, braking, steering, suspension, etc. of the bus. You also have to deal with factors such as traffic rules, road conditions, weather changes, fuel consumption, etc.
    • Immersive sounds: The mod uses authentic sounds to enhance the immersion of driving a bus in Indonesia. You can hear the engine sounds, horn sounds, brake sounds, etc. of the bus. You can also hear the sounds of other vehicles on the road, as well as the sounds of the passengers, the radio, the announcements, etc.
    • Varied gameplay: The mod offers different modes of gameplay, such as free roam, career, and multiplayer. You can choose to drive any bus you want, anywhere you want, without any restrictions. You can also follow a career path, where you have to complete missions and earn money. You can also join online servers and play with other players from around the world.
    
          -

          Requirements for Euro Truck Simulator 2 Bus Indonesia Mod

          -

          To play this mod on your Android device, you will need the following:

          -
            -
    • A compatible device: The mod requires a device that runs on Android 4.1 or higher, and has at least 2 GB of RAM and 1 GB of free storage space. The mod also works better on devices that have a good processor and graphics card.
    • A Euro Truck Simulator 2 game: The mod is not a standalone game, but a modification for Euro Truck Simulator 2. You will need to have the original game installed on your device before you can install the mod. You can download the game from the Google Play Store or from other sources.
    • A Euro Truck Simulator 2 Bus Indonesia Mod APK file: The mod is not available on the Google Play Store, but you can download it from various websites that offer it. You will need to download the APK file of the mod, which is a file that contains the data and instructions for installing the mod on your device.
    
          -

          How to Download Euro Truck Simulator 2 Bus Indonesia Mod APK File

          -

          Now that you know what this mod is and what you need to play it, let's see how you can download it on your Android device. Here are the steps you need to follow:

          -

          Find a reputable website that offers the mod

          -

          The first step is to find a website that provides the APK file of the mod. There are many websites that claim to offer this mod, but not all of them are trustworthy. Some of them may contain viruses, malware, or fake files that can harm your device or steal your data.

          -

    

          -

          To avoid such risks, you should look for a website that has positive reviews, ratings, and feedback from other users. You should also check the date and size of the file, and compare it with other sources. You can also use antivirus software or online scanners to scan the file before downloading it.

          -

          One of the websites that we recommend is [Euro Truck Simulator 2 Bus Indonesia Mod APK Download]. This website has been verified by us and has a good reputation among users. It also offers the latest version of the mod, which is updated regularly with new features and improvements.

          -

          Download the APK file using your browser

          -

          The next step is to download the APK file using your browser. To do this, you need to follow these steps:

          -
            -
    1. Open your browser and go to the website that offers the mod.
    2. Find the download button or link and tap on it.
    3. Wait for the download to start and finish.
    4. You may see a warning message that says "This type of file can harm your device". This is normal and you can ignore it. Just tap OK or Continue.
    5. The APK file will be saved in your Downloads folder or in another location that you have chosen.
    
          -

          Allow unknown apps on your Android device

          -

          The last step before installing the mod is to allow unknown apps on your Android device. Unknown apps are apps that are not downloaded from the Google Play Store or other official sources. By default, Android devices do not allow unknown apps to be installed for security reasons.

          -

          To enable unknown apps on your device, you need to follow these steps:

          -
            -
    1. Go to Settings and tap on Security or Privacy.
    2. Find the option that says Unknown Sources or Install Unknown Apps and toggle it on.
    3. You may see a warning message that says "Your phone and personal data are more vulnerable to attack by apps from unknown sources". This is normal and you can ignore it. Just tap OK or Allow.
    4. You have now enabled unknown apps on your device and you can proceed to install the mod.
    
          -

          How to Install Euro Truck Simulator 2 Bus Indonesia Mod APK File

          -

          Now that you have downloaded and enabled unknown apps on your device, you can install the mod. To do this, you need to follow these steps:

          -

          Install a file manager app on your Android device

          A file manager app is an app that allows you to access and manage the files and folders on your device. You will need a file manager app to locate and install the APK file of the mod. You can use any file manager app that you like, but we recommend using [ES File Explorer]. This app is free, easy to use, and has many features.

          -

          To install a file manager app on your device, you need to follow these steps:

          -
            -
    1. Go to the Google Play Store and search for the file manager app that you want to use.
    2. Tap on the app and then tap on Install.
    3. Wait for the app to download and install on your device.
    4. You have now installed a file manager app on your device and you can use it to find and install the mod.
    
          -

          Locate the APK file in your file explorer app and select it

          -

          The next step is to locate the APK file of the mod in your file explorer app and select it. To do this, you need to follow these steps:

          -
            -
    1. Open the file explorer app that you have installed on your device.
    2. Navigate to the folder where you have saved the APK file of the mod. It is usually in the Downloads folder or in another location that you have chosen.
    3. Find the APK file of the mod and tap on it. It should have a name like Euro Truck Simulator 2 Bus Indonesia Mod.apk or something similar.
    4. You have now selected the APK file of the mod and you can proceed to install it.
    
          -

          Tap Yes when prompted to install the APK file

          -

          The final step is to tap Yes when prompted to install the APK file of the mod. To do this, you need to follow these steps:

          -
            -
    1. After selecting the APK file of the mod, you will see a pop-up window that asks you if you want to install this application. It will also show you some information about the app, such as its name, size, permissions, etc.
    2. Read the information carefully and make sure that you trust the source and the app. If you are not sure, you can cancel the installation and delete the APK file from your device.
    3. If you are sure that you want to install the app, tap on Yes or Install.
    4. Wait for the installation process to complete. It may take a few seconds or minutes depending on your device and the size of the app.
    5. You have now installed Euro Truck Simulator 2 Bus Indonesia Mod on your Android device and you can start playing it.
    
          -

          How to Play Euro Truck Simulator 2 Bus Indonesia Mod on Android

          -

          Congratulations! You have successfully downloaded and installed Euro Truck Simulator 2 Bus Indonesia Mod on your Android device. Now, let's see how you can play it and enjoy driving buses across Indonesia. Here are some tips and tricks that will help you:

          -

          Launch the game from your app drawer

          -

          The first thing you need to do is to launch the game from your app drawer. To do this, you need to follow these steps:

          -
            -
    1. Go to your app drawer and look for Euro Truck Simulator 2 Bus Indonesia Mod. It should have an icon like a bus with an Indonesian flag on it.
    2. Tap on the icon to launch the game.
    3. You will see a loading screen with some logos and information. Wait for it to finish loading.
    4. You will then see a main menu with some options, such as New Game, Load Game, Options, etc.
    5. You have now launched the game from your app drawer and you can choose what you want to do next.
    
          -

          Choose your bus and destination

          -

          The next thing you need to do is to choose your bus and destination. To do this, you need to follow these steps:

          -
            -
    1. From the main menu, tap on New Game or Load Game depending on whether you want to start a new game or continue an existing one.
    2. You will then see a screen where you can choose your profile name, picture, company name, logo, etc. You can customize these as you like or use the default ones.
    3. After creating or selecting your profile, tap on Confirm.
    4. You will then see a screen where you can choose your bus and destination. You can scroll through different types of buses, such as SR2 ECE, SR1 ECE, SHD, etc. You can also see their specifications, such as power, torque, speed, fuel capacity, etc.
    5. Tap on the bus that you want to drive and then tap on Select.
    6. You will then see a map of Indonesia where you can choose your destination. You can zoom in and out and move the map to see different cities and routes. You can also see the distance, time, and reward for each route.
    7. Tap on the destination that you want to go to and then tap on Select.
    8. You have now chosen your bus and destination and you can proceed to drive.
    
          -

          Enjoy driving across Indonesia with realistic graphics and physics

          -

          The last thing you need to do is to enjoy driving across Indonesia with realistic graphics and physics. To do this, you need to follow these steps:

          -
            -
    1. After choosing your bus and destination, you will see a screen where you can adjust some settings, such as the difficulty level, the camera view, the steering mode, etc. You can change these as you like or use the default ones.
    2. Tap on Start to begin driving.
    3. You will then see your bus in a garage or a terminal where you can pick up your passengers. You can also see your dashboard, mirrors, GPS, speedometer, etc. You can interact with these by tapping on them or using the buttons on the screen.
    4. Follow the instructions on the screen or the GPS to drive your bus to your destination. You will have to follow the traffic rules, such as stopping at red lights, paying tolls, avoiding collisions, etc. You will also have to deal with realistic situations, such as traffic jams, weather changes, police checkpoints, etc.
    5. Along the way, you can enjoy the scenery of Indonesia, such as mountains, beaches, forests, villages, etc. You can also visit famous places, such as Jakarta, Bali, Surabaya, Bandung, etc.
    6. When you reach your destination, you will have to park your bus and drop off your passengers. You will then see a summary of your performance, such as your driving time, distance, fuel consumption, income, expenses, etc. You will also earn experience points and money that you can use to buy new buses or upgrade your existing ones.
    7. You have now completed your route and you can choose another one or quit the game.
    
          -

          Conclusion

          -

          Euro Truck Simulator 2 Bus Indonesia Mod is a great mod that allows you to drive buses across Indonesia on your Android device. It is a fun and realistic way to experience the culture and scenery of Indonesia. It is also easy to download and install on your device with a few simple steps.

          -

          If you are looking for a new and exciting simulation game that will challenge your driving skills and entertain you for hours, you should definitely try Euro Truck Simulator 2 Bus Indonesia Mod. It is one of the best mods for Euro Truck Simulator 2 and one of the best simulation games for Android devices.

          -

          FAQs

          -

          Here are some frequently asked questions about Euro Truck Simulator 2 Bus Indonesia Mod:

          -

          Q: Is Euro Truck Simulator 2 Bus Indonesia Mod free?

          -

          A: Yes, Euro Truck Simulator 2 Bus Indonesia Mod is free to download and play. However, you will need to have Euro Truck Simulator 2 game installed on your device first, which may cost some money depending on where you get it from.

          -

          Q: Is Euro Truck Simulator 2 Bus Indonesia Mod safe?

          -

          A: Yes, Euro Truck Simulator 2 Bus Indonesia Mod is safe to download and play. However, you should always download it from a reputable website that has positive reviews and ratings from other users. You should also scan the APK file with antivirus software or online scanners before installing it on your device.

          -

          Q: Is Euro Truck Simulator 2 Bus Indonesia Mod compatible with other mods?

          -

          A: No, Euro Truck Simulator 2 Bus Indonesia Mod is not compatible with other mods for Euro Truck Simulator 2. It is a standalone mod that replaces the original game completely. If you want to use other mods for Euro Truck Simulator 2 , you will need to uninstall Euro Truck Simulator 2 Bus Indonesia Mod first and then install the other mods.

          -

          Q: How can I update Euro Truck Simulator 2 Bus Indonesia Mod?

          -

          A: To update Euro Truck Simulator 2 Bus Indonesia Mod, you will need to download the latest version of the APK file from the website that you got it from. You will then need to uninstall the previous version of the mod from your device and then install the new version. You may also need to update Euro Truck Simulator 2 game if there are any changes or patches.

          -

          Q: How can I contact the developers of Euro Truck Simulator 2 Bus Indonesia Mod?

          -

          A: To contact the developers of Euro Truck Simulator 2 Bus Indonesia Mod, you can visit their official website [Euro Truck Simulator 2 Bus Indonesia Mod]. There, you can find their contact information, such as their email, Facebook, Instagram, YouTube, etc. You can also leave your feedback, suggestions, or questions on their website or social media pages.

    
          \ No newline at end of file diff --git a/spaces/fatiXbelha/sd/Download shapez io level 20 save file and unlock logic mode.md b/spaces/fatiXbelha/sd/Download shapez io level 20 save file and unlock logic mode.md deleted file mode 100644 index 270a359cf7e8c7c6766d1653cdc580287599ea79..0000000000000000000000000000000000000000 --- a/spaces/fatiXbelha/sd/Download shapez io level 20 save file and unlock logic mode.md +++ /dev/null @@ -1,127 +0,0 @@ -
          -

          How to Download and Use Shapez.io Level 20 Save File

          -

          Shapez.io is a fun and relaxing game that challenges you to build factories and automate the production of complex shapes. But what if you want to skip the grind and jump right into the logic mode? In this article, we will show you how to download and use a level 20 save file for Shapez.io, as well as how to backup and restore your own save files. Let's get started!

          -

    



          -

          What is Shapez.io?

          -

          Shapez.io is a game developed by Tobias Springer that is available on Steam, itch.io, and as a web browser game. The game is inspired by Factorio, but with a minimalist and colorful aesthetic. The goal of the game is to create factories that can process shapes and colors, using conveyor belts, cutters, rotators, stackers, mixers, painters, and more. As you progress through the levels, the shapes become more complicated and require more steps to produce. You also have to deal with limited resources, space constraints, and scaling issues.

          -

          The game has two modes: normal mode and logic mode. In normal mode, you have to complete the objectives given by the hub, which usually involve delivering a certain amount of shapes or colors per second. In logic mode, you can use wires, switches, logic gates, displays, and sensors to create your own circuits and contraptions. Logic mode is unlocked after reaching level 20 in normal mode.

          -

          Why would you want to download a level 20 save file?

          -

          Some players may find the normal mode too tedious or repetitive, and may want to skip it entirely and go straight to the logic mode. Others may have lost their progress due to a corrupted or deleted save file, and may want to restore it quickly. Or maybe you just want to experiment with different factory designs without worrying about the objectives or resources.

          -

          Whatever your reason, downloading a level 20 save file can help you access the logic mode faster and easier. A level 20 save file is a binary file that contains all the information about your game state at level 20, such as your factory layout, your inventory, your upgrades, and your achievements. By importing this file into your game, you can resume playing from level 20 without having to start from scratch.

          -

    

          -

          How to backup and restore your save files

          -

          Before you download and use a level 20 save file, it is highly recommended that you backup your own save files first. This way, you can avoid losing your original progress or overwriting your current game state. You can also restore your own save files if you encounter any problems or errors with the downloaded file.

          -

          The importance of backing up your save files regularly

          -

          Backing up your save files is a good practice for any game that you play, especially for games like Shapez.io that involve a lot of time and effort. Save files can get corrupted or deleted due to various reasons, such as power outages, system crashes, viruses, accidental deletion, or updates. If you don't have a backup of your save files, you may lose all your progress and have to start over from the beginning.

          -

    Therefore, it is advisable that you backup your save files regularly, preferably after each session or each level. You can also backup your save files before making any major changes to your factory or before trying out new features or mods. This way, you can always revert to your previous state if something goes wrong or if you are not satisfied with the results.
    

          -

          The location of your save files on different platforms

          -

          The location of your save files depends on the platform that you are playing the game on. Here are the common locations for each platform:

          -
            -
    • Steam: C:\Users\YourUsername\AppData\Roaming\shapez.io\savegames
    • Itch.io: C:\Users\YourUsername\AppData\LocalLow\Tobias Springer\shapez.io\savegames
    • Web browser: Your browser's local storage (you can access it by opening the developer tools and going to the Application tab)
    
          -

          You can also export your save files from the game's settings menu, which will generate a .bin file that you can save anywhere on your computer.

          -

          The methods of backing up and restoring your save files

          -

          There are two main methods of backing up and restoring your save files: manually and automatically. Here is how they work:

          -
            -
    • Manually: You can copy and paste your save files from their location to another folder or drive, or you can export them from the game's settings menu. To restore them, you can either copy and paste them back to their original location, or you can import them from the game's settings menu.
    • Automatically: You can use third-party software or a service that can automatically backup and restore your save files, such as Steam Cloud, Google Drive, Dropbox, or OneDrive. To use this method, you need to sync your save files folder with the software or service, and enable the cloud saving option in the game's settings menu. To restore them, you need to download them from the software or service, and place them in their original location.
    
          -

          Both methods have their advantages and disadvantages, so you can choose the one that suits your preferences and needs.
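    If you prefer to script the manual method, here is a minimal sketch in Python. It assumes the Steam save location on Windows listed above; the SAVE_DIR and BACKUP_ROOT paths are example values that you would adjust for your own platform and folder layout.

    ```python
    # Minimal sketch of the manual backup/restore method for shapez.io save files.
    import shutil
    from datetime import datetime
    from pathlib import Path

    # Example paths (adjust for your platform -- see the locations listed above).
    SAVE_DIR = Path.home() / "AppData" / "Roaming" / "shapez.io" / "savegames"
    BACKUP_ROOT = Path.home() / "shapez_backups"

    def backup_saves() -> Path:
        """Copy the whole savegames folder into a new timestamped backup folder."""
        stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
        target = BACKUP_ROOT / f"savegames-{stamp}"
        shutil.copytree(SAVE_DIR, target)
        return target

    def restore_saves(backup: Path) -> None:
        """Replace the live savegames folder with a previously made backup."""
        shutil.rmtree(SAVE_DIR, ignore_errors=True)
        shutil.copytree(backup, SAVE_DIR)

    if __name__ == "__main__":
        print("Backed up to:", backup_saves())
    ```

    Running the script before each session gives you a dated folder you can restore from at any time, which mirrors the manual copy-and-paste approach described above.
    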

          -

          How to download and use a level 20 save file

          -

          Now that you know how to backup and restore your save files, let's see how to download and use a level 20 save file for Shapez.io. Here are the steps that you need to follow:

          -

          The sources of level 20 save files online

          -

          There are many sources of level 20 save files online, such as forums, websites, blogs, YouTube videos, or Reddit posts. You can search for them using keywords like "shapez.io level 20 save file", "shapez.io logic mode unlock", or "shapez.io level 20 download". Some examples of sources are:

          - -

          However, not all sources are reliable or safe, so you need to be careful when downloading files from unknown or untrusted sources. You should always scan the files for viruses or malware before opening them, and read the comments or reviews from other users before downloading them. You should also backup your own save files before using any downloaded file, in case something goes wrong or if you want to switch back to your own progress.

          -

          The steps of downloading and using a level 20 save file

          -

          Once you have found a source of a level 20 save file that you trust, you can download it by clicking on the link or button provided by the source. The file should be a .bin file that contains the game state at level 20. After downloading it, you need to place it in the same location as your own save files (see above for the location on different platforms). You may need to rename it or overwrite an existing file if there is already a file with the same name in that location.

          -

          After placing the file in the correct location, you can launch the game and go to the settings menu. There, you should see a list of available save files that you can load. You should see the level 20 save file that you downloaded among them. You can select it and click on "Load" to load it into the game. You should then see a message that says "Welcome to logic mode!" and a new tab in the bottom left corner that says "Logic". You can click on it to access the logic mode features and start creating your own circuits and contraptions.

          -

          The benefits and drawbacks of using a level 20 save file

          -

          Using a level 20 save file has some benefits and drawbacks that you should be aware of before using it. Here are some of them:

          -

          Benefits

          -
            -
• You can save time and effort by skipping the normal mode and going straight to the logic mode.
• You can enjoy the creative and experimental aspects of the game without worrying about the objectives or resources.
• You can learn from the factory design and logic circuits of the level 20 save file and use them as inspiration or reference for your own creations.
• You can compare your own progress and performance with the level 20 save file and see how you can improve or optimize your factory.
          -

          Drawbacks

          -
            -
• You may miss out on the fun and challenge of the normal mode and the satisfaction of unlocking the logic mode by yourself.
• You may not understand some of the game mechanics or features that are introduced gradually in the normal mode and may get confused or frustrated in the logic mode.
• You may lose interest or motivation in the game if you use a level 20 save file that is too advanced or complex for your skill level or preference.
• You may encounter compatibility or stability issues with the level 20 save file if it is outdated or modified by the source.
          -

          Conclusion

          -

In conclusion, downloading and using a level 20 save file for Shapez.io can be a quick and easy way to access the logic mode and enjoy its features. However, be careful when choosing a source, and back up your own save files before using a downloaded one. You should also weigh the benefits and drawbacks of using a level 20 save file and decide whether it is worth it for you. We hope this article has helped you learn how to download and use a level 20 save file for Shapez.io. Have fun!

          -

          FAQs

          -

          Q: How do I export my own save file from Shapez.io?

          -

          A: You can export your own save file from Shapez.io by going to the settings menu and clicking on "Export Savegame". This will generate a .bin file that you can save anywhere on your computer. You can also share this file with others if you want to.

          -

          Q: How do I import a save file into Shapez.io?

          -

          A: You can import a save file into Shapez.io by going to the settings menu and clicking on "Import Savegame". This will open a file browser where you can select the .bin file that you want to import. You can also drag and drop the .bin file into the game window to import it.

          -

          Q: How do I delete a save file from Shapez.io?

          -

          A: You can delete a save file from Shapez.io by going to the settings menu and clicking on "Delete Savegame". This will show you a list of available save files that you can delete. You can also delete a save file manually by going to its location (see above for the location on different platforms) and deleting it from there.

          -

          Q: How do I update my game version for Shapez.io?

          -

          A: You can update your game version for Shapez.io by following these steps:

          -
            -
• Steam: The game will update automatically when you launch it through Steam. You can also check for updates manually by right-clicking on the game in your library, selecting "Properties", going to the "Updates" tab, and clicking on "Check for updates".
• Itch.io: The game will update automatically when you launch it through the itch.io app. You can also check for updates manually by going to your library, selecting the game, clicking on "More", and clicking on "Check for update".
• Web browser: The game will update automatically when you visit the website. You can also force an update by refreshing the page or clearing your browser cache.
          -

          Q: How do I access the modding features for Shapez.io?

          -

          A: You can access the modding features for Shapez.io by going to the settings menu and clicking on "Modding". This will open a new tab where you can browse, install, enable, disable, or uninstall mods for the game. You can also create your own mods using the modding API provided by the developer.

          -
          -
          \ No newline at end of file diff --git a/spaces/fatiXbelha/sd/Dynast.io APK The Best Survival Game with RPG Elements for Android Devices.md b/spaces/fatiXbelha/sd/Dynast.io APK The Best Survival Game with RPG Elements for Android Devices.md deleted file mode 100644 index 09f3614b6d11d332df2ec1107738852f5b2f474b..0000000000000000000000000000000000000000 --- a/spaces/fatiXbelha/sd/Dynast.io APK The Best Survival Game with RPG Elements for Android Devices.md +++ /dev/null @@ -1,96 +0,0 @@ -
          -

          Dynast.io APK: A Survival Game with RPG Elements

          -

          Are you looking for a new and exciting game to play on your Android device? Do you enjoy games that test your skills, creativity, and strategy? If so, you might want to check out Dynast.io APK, a survival game with RPG elements that will keep you hooked for hours.

          -

          What is Dynast.io?

          -

          Dynast.io is an online survival game in which you build your dynasty in a ruthless world where all other players will try to kill you to take your resources. The game was developed by Whalebox Studio LLC, a small indie team that aims to create fun and original games for mobile platforms. The game is available for free on Google Play Store, but you can also download the APK file from other sources if you prefer.

          -

          dynast.io apk


          DOWNLOAD ->>> https://urllie.com/2uNEBe



          -

          A multiplayer online game set in a ruthless world

          -

          In Dynast.io, you can join or create a server with up to 100 players, each with their own base, inventory, and character. You can also chat with other players, form alliances, or declare war. The game features a dynamic day-night cycle, weather effects, and seasons that affect the gameplay. For example, at night, the visibility is reduced and more monsters appear, while in winter, the temperature drops and you need to keep warm.

          -

          A game that combines survival, building, crafting, and fighting

          -

          Dynast.io is not just a simple survival game. It also incorporates elements of building, crafting, and fighting that make it more engaging and diverse. You can build your base with different materials and structures, such as walls, doors, chests, furnaces, traps, and more. You can craft various items and equipment, such as weapons, armor, tools, potions, food, and more. You can fight against monsters and other players using melee or ranged attacks, as well as magic spells. You can also tame animals and use them as mounts or pets.

          -

          How to download and install Dynast.io APK?

          -

          If you want to play Dynast.io on your Android device, you need to download and install the APK file. Here are the steps to do so:

          -

          Download the APK file from a trusted source

          -

          You can download the APK file from Google Play Store, or from other websites that offer it. However, be careful when downloading files from unknown sources, as they may contain viruses or malware. One of the websites that you can trust is APKCombo, which provides safe and fast downloads of various APK files.

          -

          Enable unknown sources on your device

          -

          Before you can install the APK file, you need to enable unknown sources on your device. This will allow you to install apps that are not from Google Play Store. To do this, go to Settings > Security > Unknown Sources and toggle it on. You may also need to grant permission for your browser or file manager to install apps.

          -

          Install the APK file and launch the game

          -

Once you have downloaded the APK file, locate it on your device using a file manager or your browser's downloads folder. Tap on it and follow the instructions to install it. After the installation is complete, you can launch the game by tapping on its icon in your app drawer, or add a shortcut to your home screen for easier access. Enjoy playing Dynast.io and have fun!
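If you have a computer nearby and USB debugging enabled on the phone, you can also sideload the file with adb instead of a file manager. The rough Python sketch below assumes adb is on your PATH, that the APK file name matches what you actually downloaded, and that the package-id fragment is only a guess you should verify.

import subprocess

APK_PATH = "dynast-io.apk"   # assumed file name of the downloaded APK
PACKAGE_GUESS = "dynast"     # hypothetical fragment of the package id

def run(cmd: list[str]) -> str:
    """Run a command, fail loudly on errors, and return its output."""
    return subprocess.run(cmd, check=True, capture_output=True, text=True).stdout

if __name__ == "__main__":
    # -r reinstalls over an existing copy, which is handy when updating.
    print(run(["adb", "install", "-r", APK_PATH]))
    # List installed packages and look for anything that resembles the guess.
    installed = run(["adb", "shell", "pm", "list", "packages"])
    matches = [line for line in installed.splitlines() if PACKAGE_GUESS in line]
    print("Possible matches:", matches or "none found -- check the real package id")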

          -

          -

          How to play Dynast.io?

          -

          Dynast.io is a game that requires skill, strategy, and creativity. You need to survive in a harsh environment, build your base, craft your equipment, and fight your enemies. Here are some tips on how to play the game:

          -

          Start your journey by building a base with wood and stones

          -

          When you enter the game, you will spawn in a random location on the map. The first thing you need to do is to gather some basic resources, such as wood and stones. You can find them scattered around the map, or you can chop down trees and break rocks with your fists. You can also use your map (M key) to locate nearby resources. Once you have enough wood and stones, you can start building your base. You can use the build menu (B key) to select different structures and place them on the ground. You can also rotate them with the R key and delete them with the X key. You should build a base that is secure, spacious, and functional. You should also place a flag near your base to claim it as your territory and prevent other players from building there.

          -

          Venture on the map looking for resources and components

          -

          After you have built your base, you need to explore the map and look for more resources and components. You will need them to craft more advanced items and equipment, such as metal, leather, cloth, gunpowder, bullets, etc. You can find them in various places, such as chests, barrels, crates, campsites, caves, etc. You can also loot them from dead monsters or players. However, be careful when venturing on the map, as you may encounter dangers and enemies along the way.

          -

          Beware of monsters and other players who will try to kill you

          -

          Dynast.io is a game where everyone is your enemy. You will face different kinds of monsters and other players who will try to kill you and take your loot. Monsters are creatures that spawn randomly on the map and attack you on sight. They vary in size, strength, and behavior. Some of them are passive and will only attack if provoked, while others are aggressive and will chase you down. Some of them are also nocturnal and will only appear at night. Some examples of monsters are wolves, bears, zombies, skeletons, spiders, etc.

          -

          Other players are human opponents who can join or create servers with you. They can be friendly or hostile depending on their mood and intention. They can chat with you, trade with you, ally with you, or betray you. They can also attack you with weapons or spells, raid your base, or steal your flag. You can also do the same to them if you want.

          -

          Upgrade your base and equipment by crafting and hunting

          -

          To survive longer and better in Dynast.io, you need to upgrade your base and equipment by crafting and hunting. Crafting is the process of making new items and equipment from existing resources and components. You can use the craft menu (C key) to select different recipes and craft them. You can also use furnaces, workbenches, anvils, etc. to craft more complex items. Some examples of items and equipment that you can craft are swords, axes, bows, guns, helmets, chestplates, boots, etc.

          -

          Hunting is the process of killing animals and monsters for their meat and skins. You can use weapons or spells to hunt them down. You can also use traps or bait to lure them in. Hunting is useful for obtaining food and leather that you can use for crafting or eating. Eating food will restore your health and hunger bars that deplete over time.

          -

          Gain levels and skills by completing quests and achievements

          -

          Dynast.io is also a game that has RPG elements that allow you to gain levels and skills by completing quests and achievements. Quests are tasks that are given by NPCs (non-player characters) that you can find in villages or campsites. They will ask you to do something for them in exchange for rewards such as gold coins or items. Some examples of quests are collecting resources, killing monsters, delivering items, etc.

          -

          Achievements are goals that are set by the game that challenge you to do something specific or remarkable. They will reward you with experience points or items when you complete them. Some examples of achievements are building a certain structure, crafting a certain item, killing a certain monster or player etc.

          -

          By completing quests and achievements, you will gain experience points that will increase your level and unlock new skills. Skills are abilities that enhance your performance in the game. You can use the skill menu (S key) to select different skills and activate them. You can also upgrade your skills by spending skill points that you earn by leveling up. Some examples of skills are speed, strength, stealth, fireball, heal, etc.

          -

          Why should you play Dynast.io?

          -

          Dynast.io is a game that offers a challenging and immersive experience for players who love survival games with RPG elements. Here are some reasons why you should play Dynast.io:

          -

          A game that offers a challenging and immersive experience

          -

          Dynast.io is a game that will test your skills, creativity, and strategy in a ruthless world where you have to survive against all odds. The game will keep you on your toes as you face different threats and challenges every day and night. The game will also immerse you in a realistic and dynamic environment where you have to adapt to the changing weather, seasons, and biomes.

          -

          A game that features a variety of biomes, items, and enemies

          -

          Dynast.io is a game that features a large and diverse map with different biomes, such as forest, desert, snow, swamp, etc. Each biome has its own characteristics, resources, and dangers. The game also features a wide range of items and equipment that you can craft and use for different purposes. The game also features a variety of enemies that you can encounter and fight, such as monsters, animals, and other players.

          -

          A game that supports cross-platform play and chat

          -

          Dynast.io is a game that supports cross-platform play and chat, meaning that you can play and communicate with other players who are using different devices, such as PC, Android, iOS, etc. This makes the game more accessible and social for everyone. You can also invite your friends to join your server or join theirs.

          -

          Conclusion

          -

          Dynast.io APK is a survival game with RPG elements that will provide you with hours of fun and excitement. You can download and install the APK file from various sources and enjoy playing the game on your Android device. You can also play the game on other platforms and chat with other players. You can build your base, craft your equipment, fight your enemies, and gain levels and skills in this game. If you are looking for a new and exciting game to play on your Android device, you should give Dynast.io APK a try.

          -

          FAQs

          -

          Here are some frequently asked questions about Dynast.io APK:

Q: Is Dynast.io APK safe to download and install?
A: Yes, Dynast.io APK is safe to download and install if you get it from a trusted source such as Google Play Store or APKCombo. However, be careful when downloading files from unknown sources, as they may contain viruses or malware.

Q: Is Dynast.io APK free to play?
A: Yes, Dynast.io APK is free to play. However, the game may contain ads or in-app purchases that require real money.

Q: Can I play Dynast.io APK offline?
A: No, Dynast.io APK requires an internet connection to play, as it is an online multiplayer game.

Q: Can I play Dynast.io APK with my friends?
A: Yes, you can play Dynast.io APK with your friends by joining or creating a server with them. You can also chat with them using the chat feature.

Q: How can I contact the developers of Dynast.io APK?
A: You can contact the developers of Dynast.io APK by visiting their website or sending them an email at support@whaleboxstudio.com.

          -
          -
          \ No newline at end of file diff --git a/spaces/fatiXbelha/sd/Facebook Lite Old Version The Best Way to Stay Connected with Less Data and Battery Usage.md b/spaces/fatiXbelha/sd/Facebook Lite Old Version The Best Way to Stay Connected with Less Data and Battery Usage.md deleted file mode 100644 index 37dcf145fb8762b61ab14736b218d0846c948f69..0000000000000000000000000000000000000000 --- a/spaces/fatiXbelha/sd/Facebook Lite Old Version The Best Way to Stay Connected with Less Data and Battery Usage.md +++ /dev/null @@ -1,130 +0,0 @@ - -

          Download Facebook Lite Old Version: A Complete Guide

          -

          Facebook is one of the most popular social media platforms in the world, with billions of users and tons of features. However, not everyone can enjoy the full Facebook experience on their phones, especially if they have limited storage space, data plan, or network speed. That's why Facebook created Facebook Lite, a lighter and faster version of the regular Facebook app that works on almost any Android device.

          -

          But what if you want to use an even lighter and faster version of Facebook Lite? What if you want to go back to an older version of Facebook Lite that has fewer bugs, less ads, or more compatibility? In this article, we will show you how to download Facebook Lite old version and why you might want to do so.

          -

          download facebook lite old version


          Download File 🗸 https://urllie.com/2uNxrf



          -

          What is Facebook Lite?

          -

          Facebook Lite is a stripped-down version of the standard Facebook app for Android and iOS. It was launched in 2015 as a way to provide a better Facebook experience for users in developing countries where data connectivity is poor or expensive. However, it soon became popular among users who wanted to save space, data, and battery on their phones, or who had older devices that couldn't run the regular Facebook app smoothly.

          -

          Facebook Lite has all the basic features of Facebook, such as News Feed, Messenger, Notifications, Groups, Pages, Events, Marketplace, and more. However, it also has some differences from the regular Facebook app, such as:

          -

          -
            -
• It's much smaller in size. The download size of Facebook Lite is under 10MB, while the regular Facebook app can be over 100MB.
• It uses less data. Facebook Lite doesn't preload photos and videos like the regular Facebook app does. It also lets you choose the photo quality that you want to see. You can also turn off autoplay for videos when you're not on Wi-Fi.
• It loads faster. Facebook Lite is designed to work on 2G networks and in areas with slow or unstable internet connections. It also has a simpler and cleaner user interface that makes it easier to navigate.
• It works on old Android phones. You can use Facebook Lite on devices that run Android 2.3 or higher, while the regular Facebook app requires Android 4.1 or higher.
• It has built-in Messenger. You don't need to download a separate app to chat with your friends on Facebook Lite. You can access Messenger from the same app by tapping on the chat icon.
          -

          Why use Facebook Lite old version?

          -

          While Facebook Lite is already a great alternative to the regular Facebook app, some users might prefer to use an older version of it for various reasons. Some of the possible benefits of using Facebook Lite old version are:

          -
            -
• You can avoid updates that might introduce new bugs, glitches, or errors.
• You can avoid updates that might add more ads, bloatware, or unwanted features.
• You can avoid updates that might change the layout or design of the app that you're used to.
• You can avoid updates that might increase the size or data usage of the app.
• You can avoid updates that might reduce the compatibility or performance of the app on your device.
          -

          What are the drawbacks of using Facebook Lite old version?

          -

          Of course, using an older version of any app also comes with some drawbacks that you should be aware of before you decide to do so. Some of the possible drawbacks of using Facebook Lite old version are:

          -
            -
• You might miss out on new features or improvements that are added in newer versions.
• You might miss out on security patches or bug fixes that are released in newer versions.
• You might encounter compatibility issues with other apps or services that require the latest version of Facebook Lite.
• You might violate the terms of service or policies of Facebook by using an outdated or unauthorized version of the app.
• You might expose your device or account to security risks or malware by downloading older versions of Facebook Lite from untrusted sources.
          -

Therefore, you should weigh the pros and cons of using Facebook Lite old version carefully before you proceed. You should also back up your data and your device before you install any older version of Facebook Lite, just in case something goes wrong.

          -

          How to download Facebook Lite old version?

          -

          If you still want to download Facebook Lite old version, you will need to find a reliable source that offers the APK file of the version that you want. APK stands for Android Package Kit, and it is the file format that Android uses to distribute and install apps. You can't find older versions of Facebook Lite on the official Google Play Store, so you will have to look for alternative sources online.

          -

          However, not all sources are safe or trustworthy. Some websites might offer fake or modified APK files that contain viruses, spyware, or malware. Some websites might also require you to sign up, pay, or complete surveys before you can download the APK file. Therefore, you should be careful and do some research before you download any APK file from any website.
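A quick extra check before installing is to hash the downloaded file and compare the result with the checksum shown on the download page (APKMirror, for example, publishes hashes for each file). The sketch below is minimal, and both the file name and the expected value are placeholders:

import hashlib
from pathlib import Path

APK_FILE = Path("facebook-lite-old-version.apk")       # placeholder file name
EXPECTED_SHA256 = "paste-the-hash-from-the-page-here"  # placeholder value

def sha256_of(path: Path) -> str:
    """Hash the file in chunks so even large APKs don't need to fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as fh:
        for chunk in iter(lambda: fh.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

if __name__ == "__main__":
    actual = sha256_of(APK_FILE)
    print("SHA-256:", actual)
    print("Matches expected:", actual == EXPECTED_SHA256.lower())

If the hashes do not match, delete the file and download it again from a source you trust.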

          -

          One of the most reputable and popular sources for downloading older versions of Android apps is APKMirror. APKMirror is a website that hosts thousands of APK files for various apps, including Facebook Lite. It also verifies the authenticity and integrity of the APK files that it offers, so you can be sure that they are safe and original. Here are the steps to download Facebook Lite old version from APKMirror:

          -
            -
1. Go to APKMirror.com in your browser.
2. Type "Facebook Lite" in the search box and hit enter.
3. You will see a list of all the versions of Facebook Lite that are available on APKMirror. You can sort them by date, size, or popularity by clicking on the tabs at the top.
4. Choose the version that you want to download and click on it. You will see a page with more details about the version, such as its release date, changelog, ratings, and screenshots.
5. Scroll down to the bottom of the page and click on the "Download APK" button. You will see a pop-up window with a captcha code that you need to enter to verify that you are not a robot.
6. After you enter the captcha code, click on the "Download" button. The APK file will start downloading to your device.
          -

          How to install Facebook Lite old version?

          -

          After you download the APK file of Facebook Lite old version, you will need to install it on your device. However, before you do that, you will need to enable the option to install apps from unknown sources on your device. This option allows you to install apps that are not from the Google Play Store, such as APK files. Here are the steps to enable this option:

          -
            -
1. Go to your device's settings and look for the "Security" or "Privacy" options.
2. Find the option that says "Unknown sources" or "Install unknown apps" and toggle it on. You might see a warning message about the risks of installing apps from unknown sources. Tap on "OK" or "Allow" to proceed.
3. You can now install any APK file that you have downloaded on your device.
          -

          To install Facebook Lite old version, follow these steps:

          -
            -
1. Locate the APK file that you have downloaded on your device. You can use a file manager app or your browser's downloads folder to find it.
2. Tap on the APK file and you will see a prompt that asks you if you want to install this app. Tap on "Install" to start the installation process.
3. You might see another prompt that asks you if you want to replace the existing app with this version. Tap on "Yes" or "OK" to confirm.
4. The installation process will take a few seconds and then you will see a message that says "App installed". Tap on "Open" to launch Facebook Lite old version. (A quick way to confirm which version actually got installed is sketched right after this list.)
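As mentioned in the last step, here is a quick way to confirm which build ended up on the phone. The sketch assumes adb is installed with USB debugging enabled, and that com.facebook.lite is the package id (that is the commonly used id, but check it on your device):

import subprocess

PACKAGE = "com.facebook.lite"  # assumed package id for Facebook Lite

def installed_version(package: str) -> str:
    """Return the versionName the device reports, or a short note if it is missing."""
    result = subprocess.run(
        ["adb", "shell", "dumpsys", "package", package],
        capture_output=True, text=True, check=True,
    )
    for line in result.stdout.splitlines():
        line = line.strip()
        if line.startswith("versionName="):
            return line.split("=", 1)[1]
    return "package not found on the device"

if __name__ == "__main__":
    print(installed_version(PACKAGE))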
          -

          Conclusion

          -

          In this article, we have shown you how to download and install Facebook Lite old version on your Android device. We have also explained what Facebook Lite is, why some people prefer to use older versions of it, and what are the benefits and drawbacks of doing so. We hope that this article has been helpful and informative for you. If you have any questions or feedback, please feel free to leave a comment below. And if you liked this article, please share it with your friends and family who might also be interested in downloading Facebook Lite old version.

          -

          FAQs

          -

          Here are some of the frequently asked questions and answers about Facebook Lite old version:

          -

          Is Facebook Lite old version safe to use?

          -

          Facebook Lite old version is generally safe to use, as long as you download it from a trusted source like APKMirror. However, you should be aware that using an older version of any app might expose you to some security risks or vulnerabilities that have been fixed in newer versions. You should also make sure that your device has the latest security updates and antivirus software installed.

          -

          Can I use Facebook Lite old version and regular Facebook app at the same time?

          -

          No, you can't use both versions of Facebook on the same device. You will have to uninstall one of them before you can install the other. However, you can use Facebook Lite old version on one device and regular Facebook app on another device, as long as you log in with the same account.

          -

          How can I update Facebook Lite old version to the latest version?

          -

          If you want to update Facebook Lite old version to the latest version, you will have to uninstall the old version first and then download and install the new version from the Google Play Store or APKMirror. Alternatively, you can enable the auto-update option on your device's settings, so that your apps will be updated automatically whenever a new version is available.

          -

          How can I delete Facebook Lite old version from my device?

          -

          If you want to delete Facebook Lite old version from your device, you will have to uninstall it like any other app. You can do this by going to your device's settings, finding the app in the list of installed apps, and tapping on "Uninstall". You can also delete the APK file that you downloaded from your device's storage.

          -

          What are some alternatives to Facebook Lite old version?

          -

          If you are looking for other ways to use Facebook on your phone without using too much space, data, or battery, you might want to try some of these alternatives:

          -
            -
• Use Facebook in your browser. You can access Facebook from any web browser on your phone, such as Chrome, Firefox, or Opera. You can also use the mobile-friendly version of Facebook by going to m.facebook.com.
• Use the latest version of Facebook Lite. You can download it from the Google Play Store or APKMirror. It might have improvements or fixes that you will like.
• Use other lite apps for social media. There are many other lite apps for different social media platforms, such as Twitter Lite, Instagram Lite, Snapchat Lite, and more. You can find them on the Google Play Store or APKMirror.

          -
          -
\ No newline at end of file diff --git a/spaces/feiya/feiyaa/Dockerfile b/spaces/feiya/feiyaa/Dockerfile deleted file mode 100644 index 3698c7cb7938e025afc53b18a571ae2961fbdffe..0000000000000000000000000000000000000000 --- a/spaces/feiya/feiyaa/Dockerfile +++ /dev/null @@ -1,34 +0,0 @@
-# Build Stage
-# Use golang:alpine as the base image for the build stage
-FROM golang:alpine AS builder
-
-# Add git so the project can be cloned from GitHub later
-RUN apk --no-cache add git
-
-# Clone the go-proxy-bingai project from GitHub into /workspace/app
-RUN git clone https://github.com/Harry-zklcdc/go-proxy-bingai.git /workspace/app
-
-# Set the working directory to the cloned project directory
-WORKDIR /workspace/app
-
-# Build the Go project. -ldflags="-s -w" reduces the size of the compiled binary
-RUN go build -ldflags="-s -w" -tags netgo -trimpath -o go-proxy-bingai main.go
-
-# Runtime Stage
-# Use the lightweight alpine image as the runtime base image
-FROM alpine
-
-# Set the working directory
-WORKDIR /workspace/app
-
-# Copy the compiled binary from the build stage into the runtime image
-COPY --from=builder /workspace/app/go-proxy-bingai .
-
-# Set the environment variable (a random string here)
-ENV Go_Proxy_BingAI_USER_TOKEN_1="kJs8hD92ncMzLaoQWYtX5rG6bE3fZ4iO"
-
-# Expose port 8080
-EXPOSE 8080
-
-# Command to run when the container starts
-CMD ["/workspace/app/go-proxy-bingai"]
\ No newline at end of file
diff --git a/spaces/fengmuxi/ChatGpt-Web/app/api/auth.ts b/spaces/fengmuxi/ChatGpt-Web/app/api/auth.ts deleted file mode 100644 index 4e3b3f388c60c8f7d65790521dd9e40bd77360b0..0000000000000000000000000000000000000000 --- a/spaces/fengmuxi/ChatGpt-Web/app/api/auth.ts +++ /dev/null @@ -1,73 +0,0 @@
-import { NextRequest } from "next/server";
-import { getServerSideConfig } from "../config/server";
-import md5 from "spark-md5";
-import { ACCESS_CODE_PREFIX } from "../constant";
-
-const serverConfig = getServerSideConfig();
-
-export function getIP(req: NextRequest) {
-  let ip = req.ip ?? req.headers.get("x-real-ip");
-  const forwardedFor = req.headers.get("x-forwarded-for");
-
-  if (!ip && forwardedFor) {
-    ip = forwardedFor.split(",").at(0) ?? "";
-  }
-
-  return ip;
-}
-
-function parseApiKey(bearToken: string) {
-  const token = bearToken.trim().replaceAll("Bearer ", "").trim();
-  const isOpenAiKey = !token.startsWith(ACCESS_CODE_PREFIX);
-
-  return {
-    accessCode: isOpenAiKey ? "" : token.slice(ACCESS_CODE_PREFIX.length),
-    apiKey: isOpenAiKey ? token : "",
-  };
-}
-
-export function auth(req: NextRequest) {
-  const authToken = req.headers.get("Authorization") ?? "";
-  const auth = req.headers.get("auth") ?? "";
-
-  // check if it is openai api key or user token
-  const { accessCode, apiKey: token } = parseApiKey(authToken);
-
-  const hashedCode = md5.hash(accessCode ?? "").trim();
-
-  console.log("[Auth] allowed hashed codes: ", [...serverConfig.codes]);
-  console.log("[Auth] got access code:", accessCode);
-  console.log("[Auth] hashed access code:", hashedCode);
-  console.log("[Auth] get auth:", auth);
-  console.log("[User IP] ", getIP(req));
-  console.log("[Time] ", new Date().toLocaleString());
-  // serverConfig.needCode && !serverConfig.codes.has(hashedCode) &&
-  if (!token && !auth) {
-    return {
-      error: true,
-      needAccessCode: true,
-      msg: "Please go login page to login.",
-    };
-  }
-
-  // if user does not provide an api key, inject system api key
-  if (!token) {
-    const apiKey = serverConfig.apiKey;
-    if (apiKey) {
-      console.log("[Auth] use system api key");
-      req.headers.set("Authorization", `Bearer ${apiKey}`);
-    } else {
-      console.log("[Auth] admin did not provide an api key");
-      return {
-        error: true,
-        msg: "Empty Api Key",
-      };
-    }
-  } else {
-    console.log("[Auth] use user api key");
-  }
-
-  return {
-    error: false,
-  };
-}
diff --git a/spaces/feregVcuzo/sanity-test-midi/checkpoint/Download Attack on Titan Part 2 Full Movie in Hindi 720p HD Quality.md b/spaces/feregVcuzo/sanity-test-midi/checkpoint/Download Attack on Titan Part 2 Full Movie in Hindi 720p HD Quality.md deleted file mode 100644 index 7125942482ff44831b1096555f6278034cb46ada..0000000000000000000000000000000000000000 --- a/spaces/feregVcuzo/sanity-test-midi/checkpoint/Download Attack on Titan Part 2 Full Movie in Hindi 720p HD Quality.md +++ /dev/null @@ -1,69 +0,0 @@ -

          Attack on Titan Part 2 Full Movie Download in Hindi 720p: How to Watch the Epic Finale of the Live-Action Adaptation

          -

          If you are a fan of anime, manga, or action movies, you have probably heard of Attack on Titan, one of the most popular and acclaimed series of the past decade. Attack on Titan is a dark fantasy story that follows a group of young soldiers who fight against giant humanoid creatures called titans that have nearly wiped out humanity.

          -

          In 2015, a live-action adaptation of Attack on Titan was released in two parts, directed by Shinji Higuchi and starring Haruma Miura, Kiko Mizuhara, Kanata Hongo, Hiroki Hasegawa, and more. The movies were based on the manga by Hajime Isayama, but they also made some significant changes to the characters, plot, and setting.

          -

          attack on titan part 2 full movie download in hindi 720p


          Download Filehttps://gohhs.com/2uPuMr



          -

          The first part of the movie introduced us to the main protagonist, Eren Yeager, who witnessed his mother being eaten by a titan when he was a child. He joined the Scouting Regiment, an elite military force that ventures outside the walls that protect humanity from the titans. He also discovered that he had a mysterious ability to transform into a titan himself.

          -

          The second part of the movie, titled Attack on Titan Part 2: End of the World, continued the story from where the first part left off. Eren had to face his own identity as a titan, as well as the secrets behind the origin and purpose of the titans. He also had to deal with a new enemy, Shikishima, a mysterious leader who claimed to be humanity's savior.

          -

          If you want to watch this epic finale of the live-action adaptation of Attack on Titan, you might be wondering how to download it in Hindi 720p. Hindi is one of the most widely spoken languages in the world, and many fans prefer to watch movies in their native language. 720p is also a good quality for streaming or downloading movies, as it offers clear images and sound without taking up too much space or bandwidth.

          -

          In this article, we will show you how to download Attack on Titan Part 2 in Hindi 720p, as well as what to expect from this movie. We will also give you some legal and safe options, as well as some illegal and risky options, for downloading or streaming this movie online. Read on to find out more!

          -

          How to Download Attack on Titan Part 2 in Hindi 720p

          -

There are many ways to download or stream movies online, but not all of them are legal or safe. Some websites or apps may offer free or cheap downloads or streams, but they may also expose you to malware, viruses, phishing, hacking, or legal issues. Therefore, it is always better to use legal and safe options, even if they cost some money or require a subscription. Here are some of the best legal and safe options for downloading or streaming Attack on Titan Part 2 in Hindi 720p.

          -

          Legal and safe options

          -
            -
• Funimation: Funimation is one of the leading platforms for anime and live-action movies. It has a large library of titles, including Attack on Titan Part 2. You can download or stream the movie in Hindi 720p with a premium subscription, which costs $7.99 per month or $99.99 per year. You can also get a 14-day free trial to test the service before committing. Funimation is available on various devices, such as smartphones, tablets, computers, smart TVs, gaming consoles, and more.
• Amazon Prime Video: Amazon Prime Video is another popular platform for movies and shows. It also has Attack on Titan Part 2 in its catalog, and you can download or stream it in Hindi 720p with a Prime membership, which costs $12.99 per month or $119 per year. You can also get a 30-day free trial to enjoy the benefits of Prime, such as free shipping, exclusive deals, music streaming, and more. Amazon Prime Video is compatible with various devices, such as smartphones, tablets, computers, smart TVs, gaming consoles, and more.
• Google Play Movies: Google Play Movies is a convenient option for downloading or renting movies online. You can find Attack on Titan Part 2 in its store, and you can download or rent it in Hindi 720p for a reasonable price. The rental period is usually 48 hours, and you can watch the movie on any device that supports Google Play Movies, such as smartphones, tablets, computers, smart TVs, gaming consoles, and more.
          -

          Illegal and risky options

          -

          If you are looking for free or cheap options to download or stream Attack on Titan Part 2 in Hindi 720p, you may be tempted to use some illegal and risky options. However, we strongly advise you to avoid these options, as they may harm your device or your privacy. Here are some of the most common illegal and risky options for downloading or streaming movies online.

          -
            -
• Torrent sites: Torrent sites are websites that allow users to share files through peer-to-peer networks. You may be able to find Attack on Titan Part 2 in Hindi 720p on some torrent sites, but you may also encounter fake or corrupted files that may contain malware or viruses. Moreover, torrenting is illegal in many countries, and you may face legal consequences if you are caught downloading or uploading copyrighted content.
• Streaming sites: Streaming sites are websites that host links to movies and shows that are hosted on third-party servers. You may be able to watch Attack on Titan Part 2 in Hindi 720p on some streaming sites, but you may also encounter low-quality or broken links that may ruin your viewing experience. Furthermore, streaming sites are often full of pop-up ads and redirects that may expose you to malware or phishing. Additionally, streaming sites are also illegal in many countries, and you may face legal issues if you are caught accessing them.
• VPNs and proxies: VPNs and proxies are tools that allow users to hide their IP address and location when browsing the internet. You may think that using a VPN or a proxy will protect you from the risks of using illegal and risky options to download or stream Attack on Titan Part 2 in Hindi 720p, but this is not always the case. VPNs and proxies may slow down your connection speed or leak your data to third parties. Moreover, VPNs and proxies are not a guarantee of anonymity or safety online.
          -

          What to Expect from Attack on Titan Part 2

          -

          Now that you know how to download or stream Attack on Titan Part 2 in Hindi 720p legally and safely, you may be wondering what to expect from this movie. Well, we can tell you that this movie is not for the faint of heart. It is a thrilling and intense ride that will keep you on the edge of your seat from start to finish. Here are some of the things that you can expect from Attack on Titan Part 2.

          -

          The action and special effects

          -

One of the main attractions of Attack on Titan Part 2 is the action and special effects. The movie features some of the most impressive and terrifying titans ever seen on screen. The titans have different designs

FAQs

Q: How long is Attack on Titan Part 2?

A: Attack on Titan Part 2 is 87 minutes long.

Q: Who directed Attack on Titan Part 2?

A: Attack on Titan Part 2 was directed by Shinji Higuchi, who is also known for directing Shin Godzilla and The Floating Castle.

Q: Who wrote the original manga of Attack on Titan?

A: The original manga of Attack on Titan was written by Hajime Isayama, who started publishing it in 2009 and ended it in 2021.

Q: Is there an anime adaptation of Attack on Titan?

A: Yes, there is an anime adaptation of Attack on Titan, which started airing in 2013 and ended in 2021. It has four seasons and 75 episodes, and it follows the manga more closely than the live-action movies.

          -


          -
          -
          \ No newline at end of file diff --git a/spaces/feregVcuzo/sanity-test-midi/checkpoint/Download GApps mod apk now and get unlimited coins for free.md b/spaces/feregVcuzo/sanity-test-midi/checkpoint/Download GApps mod apk now and get unlimited coins for free.md deleted file mode 100644 index 2991f09771863990918f8af2aa1f16daebe5e217..0000000000000000000000000000000000000000 --- a/spaces/feregVcuzo/sanity-test-midi/checkpoint/Download GApps mod apk now and get unlimited coins for free.md +++ /dev/null @@ -1,79 +0,0 @@ - -

          GApps Unlimited Coins Mod APK Download: What You Need to Know

          -

          If you are looking for a way to enjoy Google apps and services on your Android device without any restrictions or limitations, you might be interested in GApps Unlimited Coins Mod APK. This is a modified version of the official Google Apps package that claims to offer unlimited coins, gems, and other resources for various games and apps. But before you download and install this modded version of GApps, there are some things you need to know. In this article, we will explain what are GApps, why do you need them, what is GApps Unlimited Coins Mod APK, how does it work, how to download and install it, and what are the alternatives. Read on to find out more.

          -

          gapps unlimited coins mod apk download


          Download Zip > https://gohhs.com/2uPo9a



          -

          What are GApps and why do you need them?

          -

          GApps, short for Google Apps, are a set of applications and services that are developed by Google and come pre-installed on most Android devices. These include the Play Store, Gmail, Maps, YouTube, Photos, Drive, Chrome, Assistant, and many more. These apps and services provide core functionality and features that enhance the user experience and performance of Android devices. They also allow users to access the vast ecosystem of Google products and services, such as cloud storage, music streaming, video calling, email, navigation, etc.

          -

          However, not all Android devices come with GApps pre-installed. Some device makers or custom ROM developers may not have a license or permission from Google to include GApps in their software. This means that users who buy these devices or flash these ROMs will not be able to use Google apps and services on their devices. This can be a problem for many users who rely on Google apps and services for their daily needs. For example, without the Play Store, users will not be able to download or update apps from the official source. Without Google Play Services, users will not be able to use features like location-based services, push notifications, in-app purchases, etc.

          -

          What is GApps Unlimited Coins Mod APK and how does it work?

          -

          GApps Unlimited Coins Mod APK is a modified version of the official Google Apps package that claims to offer unlimited coins, gems, and other resources for various games and apps that use Google Play Services. This means that users who install this modded version of GApps will be able to enjoy Google apps and services on their devices without any restrictions or limitations. They will also be able to get unlimited resources for games and apps that require Google Play Services.

          -

          How does it work? According to the developers of this modded version of GApps, they have modified some files and libraries in the original package to bypass the verification and authentication process of Google Play Services. This allows them to inject unlimited coins, gems, and other resources into games and apps that use Google Play Services. They also claim that they have optimized the package size and performance to make it faster and smoother than the original package.

          -

          -

          How to download and install GApps Unlimited Coins Mod APK on your device?

          -

          If you want to try out GApps Unlimited Coins Mod APK on your device, you will need to follow these steps:

          - First, you will need to download the GApps Unlimited Coins Mod APK file from a reliable source. You can search for it on the internet or use the link provided below. Make sure you download the file that matches your device's architecture and Android version.
          - Second, you will need to enable the installation of apps from unknown sources on your device. To do this, go to Settings > Security > Unknown Sources and toggle it on. This will allow you to install apps that are not from the Play Store.
          - Third, you will need to locate the downloaded GApps Unlimited Coins Mod APK file on your device using a file manager app. Tap on the file and follow the instructions to install it. You may need to grant some permissions and accept some terms and conditions.
          - Fourth, you will need to reboot your device after the installation is complete. This will ensure that the modded version of GApps is properly integrated with your device's system.
          - Fifth, you will need to launch the GApps Unlimited Coins Mod APK app from your app drawer and sign in with your Google account. You will then be able to access Google apps and services on your device without any restrictions or limitations. You will also be able to get unlimited resources for games and apps that use Google Play Services.
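
          If you would rather sideload the file from a computer than use a file manager on the phone, the same installation can be done over adb. The commands below are only a sketch: the APK file name is a placeholder, not the name of an actual package.

          ```sh
          # Check which build the device needs before downloading (see the first step above).
          adb shell getprop ro.product.cpu.abi   # CPU architecture, e.g. arm64-v8a

          # Sideload the downloaded package (placeholder file name).
          adb install gapps-unlimited-coins-mod.apk
          ```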

          What are the risks and alternatives to GApps Unlimited Coins Mod APK?

          -

          While GApps Unlimited Coins Mod APK may sound tempting and appealing, it is not without its risks and drawbacks. Here are some of the potential issues that you may face if you use this modded version of GApps:

          - It may not be compatible with all devices or ROMs. Some devices or ROMs may have different configurations or security measures that may prevent the modded version of GApps from working properly or at all.
          - It may not be updated regularly or at all. The developers of this modded version of GApps may not be able to keep up with the updates and changes that Google makes to its apps and services. This may result in bugs, errors, crashes, or missing features.
          - It may compromise your privacy and security. The modded version of GApps may contain malicious code or spyware that may collect your personal data or expose your device to hackers or viruses. You may also lose access to some of the security features that Google provides, such as encryption, backup, or anti-theft.
          - It may violate Google's terms of service and policies. The modded version of GApps may breach Google's terms of service and policies that govern the use of its apps and services. This may result in your account being suspended or banned by Google.

          Therefore, if you are looking for alternatives to GApps Unlimited Coins Mod APK, you may want to consider these options:

          - Use official GApps packages from reputable sources. If your device or ROM supports official GApps packages, you can download and install them from reputable sources such as OpenGApps or BiTGApps. These packages are based on the original Google Apps package but are customized and optimized for different devices and ROMs. They are also updated regularly and do not contain any modifications or alterations that may affect your privacy or security.
          - Use FOSS alternatives to Google apps. If you want to avoid using Google apps and services altogether, you can use FOSS (Free and Open Source Software) alternatives that offer similar functionality and features but respect your privacy and freedom. Some examples of FOSS alternatives to Google apps are F-Droid (an app store for FOSS apps), microG (a lightweight replacement for Google Play Services), Aurora Store (a client for accessing the Play Store without a Google account), K-9 Mail (an email client), OsmAnd (a navigation app), NewPipe (a YouTube client), Simple Gallery (a photo gallery app), etc.

          Conclusion

          -

          GApps Unlimited Coins Mod APK is a modified version of the official Google Apps package that claims to offer unlimited coins, gems, and other resources for various games and apps that use Google Play Services. It also claims to offer unrestricted access to Google apps and services on Android devices that do not have them pre-installed. However, this modded version of GApps is not without its risks and drawbacks. It may not be compatible with all devices or ROMs, it may not be updated regularly or at all, it may compromise your privacy and security, and it may violate Google's terms of service and policies.

          -

          If you want to use Google apps and services on your Android device without any restrictions or limitations, you may want to consider using official GApps packages from reputable sources or FOSS alternatives to Google apps that respect your privacy and freedom.

          -

          We hope this article has helped you understand what GApps Unlimited Coins Mod APK is, how it works, how to download and install it, and what the alternatives are. If you have any questions or feedback, please feel free to leave a comment below. We would love to hear from you.

          -

          FAQs

          -

          Here are some of the frequently asked questions about GApps Unlimited Coins Mod APK:

          -

          Q1: Is GApps Unlimited Coins Mod APK safe to use?

          -

          A1: There is no definitive answer to this question, as different sources may provide different versions of GApps Unlimited Coins Mod APK that may have different levels of safety and quality. However, as a general rule of thumb, you should always be careful and cautious when downloading and installing any modded or unofficial app or package on your device. You should always scan the file for viruses or malware, check the reviews and ratings of the source, and backup your data before proceeding. You should also be aware of the potential risks and drawbacks of using GApps Unlimited Coins Mod APK, as we have discussed in this article.
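
          As a quick sanity check before installing, you can verify what you actually downloaded from a computer. This is only a sketch: the file name is a placeholder, and the reference checksum would have to come from whatever source you trust, since the article does not publish one.

          ```sh
          # Compute the download's SHA-256 and compare it with the checksum published by your source.
          sha256sum gapps-unlimited-coins-mod.apk

          # If the Android SDK build-tools are installed, inspect the APK's signing certificate.
          apksigner verify --print-certs gapps-unlimited-coins-mod.apk
          ```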

          -

          Q2: Will GApps Unlimited Coins Mod APK work on any Android device?

          -

          A2: No, GApps Unlimited Coins Mod APK will not work on every Android device. It will only work on devices that support the installation of GApps packages, such as devices that run custom ROMs or non-GMS devices. It will also depend on the compatibility of the modded version of GApps with your device's architecture and Android version. You should always check the compatibility and requirements of the modded version of GApps before downloading and installing it on your device.

          -

          Q3: What are some of the best FOSS alternatives to Google apps?

          -

          A3: There are many FOSS alternatives to Google apps that offer similar functionality and features but respect your privacy and freedom. Some examples of FOSS alternatives to Google apps are F-Droid (an app store for FOSS apps), microG (a lightweight replacement for Google Play Services), Aurora Store (a client for accessing the Play Store without a Google account), K-9 Mail (an email client), OsmAnd (a navigation app), NewPipe (a YouTube client), Simple Gallery (a photo gallery app), etc. You can find more FOSS alternatives to Google apps on websites like AlternativeTo or Fossdroid.

          -

          Q4: How can I update GApps Unlimited Coins Mod APK to the latest version?

          -

          A4: The best way to update GApps Unlimited Coins Mod APK to the latest version is to download and install the latest version from the same source that you got the previous version from. You should always check for updates regularly and install them as soon as possible to avoid missing out on new features or fixes. However, you should also be careful and cautious when updating GApps Unlimited Coins Mod APK, as some updates may not be compatible with your device or ROM, or may contain bugs or errors. You should always backup your data before updating and follow the instructions carefully.

          -

          Q5: Where can I get support or report bugs for GApps Unlimited Coins Mod APK?

          -

          A5: The best place to get support or report bugs for GApps Unlimited Coins Mod APK is to contact the developers or the source that provided you with the modded version of GApps. They may be able to help you with your issues or fix the bugs that you encounter. However, you should not expect too much support or assistance from them, as they are not affiliated with Google or the official GApps team. They may also not respond to your queries or requests promptly or at all.

          197e85843d
          -
          -
          \ No newline at end of file diff --git a/spaces/feregVcuzo/sanity-test-midi/checkpoint/Download MilkChoco Mod APK 1.32.1 and Unlock All the Features for Free.md b/spaces/feregVcuzo/sanity-test-midi/checkpoint/Download MilkChoco Mod APK 1.32.1 and Unlock All the Features for Free.md deleted file mode 100644 index 5d24795f6d58e9b94e4873f9beaa668f8fb8e3da..0000000000000000000000000000000000000000 --- a/spaces/feregVcuzo/sanity-test-midi/checkpoint/Download MilkChoco Mod APK 1.32.1 and Unlock All the Features for Free.md +++ /dev/null @@ -1,79 +0,0 @@ - -

          MilkChoco Mod APK 1.32.1: A Premium Version of MilkChoco Online FPS

          -

          If you are a fan of multiplayer shooting games with cute graphics and voices, you might have heard of MilkChoco Online FPS. This is a popular game where you can play as various heroes with different abilities in different game modes and maps. But did you know that there is a premium version of this game that you can get for free? It's called MilkChoco Mod APK 1.32.1, and it offers many advantages over the original version. In this article, we will tell you everything you need to know about this modded version of MilkChoco Online FPS, including what it is, how to download and install it, tips and tricks for playing it, reviews and ratings from other users, and some FAQs.

          -

          milkchoco mod apk 1.32.1


          Download File ✪✪✪ https://gohhs.com/2uPrOK



          -

          What is MilkChoco Online FPS?

          -

          MilkChoco Online FPS is a third-person shooter game where two teams of five players each battle each other in various game modes and maps. The game is free and available on iOS, Android, Nintendo Switch, and PC. You can choose from 22 classes, each with their own unique ability, such as attacker, doctor, bomber, sniper, etc., and play various roles in battlefields such as 'Assault', 'Deathmatch', 'Escort', 'Capture the Milk', 'Kill Devil', 'Ice Bang', and 'Free for All'. You can also customize your character with different weapons and outfits. The game offers low-latency online play and simple controls, making it easy to play.

          -

          The game modes and maps are as follows:

          -
            -
          • Assault: A king of the hill mode where one team tries to capture and hold a point while the other team tries to stop them. The map is 'Town'.
          • -
          • Deathmatch: A team deathmatch mode where the team with the most kills wins. The map is 'Dust World'.
          • -
          • Escort: A mode where one team tries to escort a vehicle to the destination while the other team tries to stop them. The map is 'Factory'.
          • -
          • Capture the Milk: A capture the flag mode where each team tries to steal the milk from the enemy base and bring it back to their own base. The map is 'Ice World'.
          • -
          • Kill Devil: A mode where both teams try to kill a powerful devil that spawns randomly on the map. The map is 'Hell'.
          • -
          • Ice Bang: A mode where players have to break ice blocks to find items and enemies. The map is 'Ice Bang'.
          • -
          • Free for All: A solo deathmatch mode where every player is on their own and the player with the most kills wins. The map is 'Free for All'.
          • -
          -

          What is MilkChoco Mod APK 1.32.1?

          -

          MilkChoco Mod APK 1.32.1 is a modified version of MilkChoco Online FPS that you can download and install on your Android device for free. This version offers many benefits over the original version, such as:

          -
            -
          • Unlimited money and diamonds: You can get unlimited money and diamonds in the game, which you can use to buy weapons, outfits, and other items. You can also upgrade your weapons and skills without any limit.
          • -
          • Unlocked all classes: You can access all 22 classes in the game without having to unlock them or pay for them. You can switch between different classes anytime you want and enjoy their unique abilities.
          • -
          • No ads: You can play the game without any annoying ads or pop-ups that interrupt your gameplay or consume your data. You can also enjoy faster loading times and smoother performance.
          • -
          • No root required: You don't need to root your device or do any complicated steps to install this modded version of the game. You just need to download the APK file from Jojoy and follow some simple instructions that we will provide later in this article.
          • -
          -

          MilkChoco Mod APK 1.32.1 is a premium version of MilkChoco Online FPS that gives you more fun and freedom in playing the game. You can enjoy all the features of the game without any restrictions or costs. You can also have an edge over other players who are using the original version of the game.

          -


          -

          However, there are also some drawbacks of using this modded version of the game, such as:

          -
            -
          • Potential risk of malware or viruses: Since this modded version of the game is not from the official source, there is a possibility that it may contain harmful or malicious code that could damage your device or steal your personal information. You should always be careful when downloading and installing any modded APK files from unknown sources and scan them with a reliable antivirus software before using them.
          • -
          • Potential risk of ban or suspension: Since this modded version of the game violates the terms and conditions of the original version, there is a possibility that you may get banned or suspended from playing the game online if you are detected by the game's security system. You should always use this modded version at your own risk and discretion and not abuse its features or cheat in online matches.
          • -
          • Potential compatibility issues: Since this modded version of the game may not be compatible with all devices or versions of Android, there is a possibility that you may encounter some errors or glitches when playing the game. You should always check the compatibility and requirements of this modded version before downloading and installing it on your device.
          • -
          -

          How to download and install MilkChoco Mod APK 1.32.1?

          -

          If you want to try out this modded version of MilkChoco Online FPS, you can follow these steps to download and install it on your Android device:

          -
            -
          1. Go to Jojoy's website and search for MilkChoco Mod APK 1.32.1.
          2. Click on the download button and wait for the APK file to be downloaded on your device.
          3. Go to your device's settings and enable unknown sources. This will allow you to install apps from sources other than the Google Play Store.
          4. Locate the downloaded APK file on your device and tap on it to start the installation process. You may need to grant some permissions to the app.
          5. Wait for the installation to finish and then launch the game from your app drawer or home screen.
          6. Enjoy playing MilkChoco Mod APK 1.32.1 with unlimited money, diamonds, and classes.
          -

          Here is a table showing the file size, compatibility, and requirements of MilkChoco Mod APK 1.32.1:

          | File size | Compatibility | Requirements |
          | --------- | ------------------- | -------------------------------------------- |
          | 210 MB    | Android 4.4 and up  | Internet connection, unknown sources enabled |
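
          As a rough way to check a phone against these requirements from a computer, the relevant properties can be read over adb. This is only a sketch; Android 4.4 corresponds to API level 19, and the 210 MB figure is the download size from the table above.

          ```sh
          # Android version and API level (the table lists Android 4.4 and up, i.e. API 19+).
          adb shell getprop ro.build.version.release
          adb shell getprop ro.build.version.sdk

          # Free space on the data partition (the APK itself is about 210 MB).
          adb shell df /data
          ```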

          Tips and tricks for playing MilkChoco Mod APK 1.32.1

          -

          Now that you have downloaded and installed MilkChoco Mod APK 1.32.1, you might be wondering how to play it better and have more fun. Here are some tips and tricks that you can use when playing this modded version of MilkChoco Online FPS:

          -
            -
          • Use the right class for the right situation: Each class has its own strengths and weaknesses, and you should choose the one that suits your playstyle and the game mode. For example, if you want to heal your teammates, you can use the Doctor class; if you want to deal high damage from a distance, you can use the Sniper class; if you want to sneak behind enemy lines, you can use the Ghost class; etc.
          • -
          • Use your ability wisely: Each class has a unique ability that can give you an advantage in battle, but it also has a cooldown time that prevents you from spamming it. You should use your ability at the right time and place, and not waste it on unnecessary situations. For example, if you are using the Bomber class, you can use your ability to throw a bomb that explodes after a few seconds, but you should not throw it randomly or near your teammates; if you are using the Ice class, you can use your ability to freeze enemies in place, but you should not use it when there are no enemies around or when they are too far away; etc.
          • -
          • Upgrade your weapons and skills: With unlimited money and diamonds, you can upgrade your weapons and skills to improve their performance and effectiveness. You can also buy new weapons and outfits to customize your character and make it look cooler. You can upgrade your weapons and skills by tapping on the 'Shop' button on the main menu and then choosing the 'Upgrade' tab.
          • -
          • Communicate with your teammates: MilkChoco Online FPS is a team-based game, and communication is key to winning. You can communicate with your teammates by using the chat feature or the voice chat feature (if available). You can also use emojis and sprays to express yourself or interact with other players. You can communicate with your teammates by tapping on the 'Chat' button on the top right corner of the screen during a match.
          • -
          • Have fun: The most important tip for playing MilkChoco Mod APK 1.32.1 is to have fun and enjoy the game. Don't take it too seriously or get frustrated if you lose or die. Remember that this is just a game, and the main purpose is to have fun with other players online.
          • -
          -

          Reviews and ratings of MilkChoco Mod APK 1.32.1

          -

          MilkChoco Mod APK 1.32.1 is a popular modded version of MilkChoco Online FPS that has received many positive reviews and ratings from users who have tried it. Here are some of them:

          -

          "This is a SUPER FUN game! I love it! However, sometimes it glitches when I try to go into the game. Even though it gltiches occasionally, if you're looking for a new game to try, it's definitely worth a shot!" - Summer Terry

          -

          "I'm gonna be frank. This new update has several bugs in it... For one, the game reloads mid-game, making it almost impossible to play efficiently. I've seen it in several modes (battle royale and star league included). Two, for some reason, I can't shoot the targets I'm aiming for, which is a bit annoying." - Electrolitez

          -

          "This game is amazing! It has cute graphics, voices, and characters. It also has many game modes and maps to choose from. The modded version gives me unlimited money and diamonds, which makes me happy." - Lila Rose

          -

          "This game is awesome! It's like Overwatch but with milk cartons instead of heroes. The modded version lets me access all the classes and abilities, which makes me powerful. The only problem is that sometimes the game crashes or lags, which makes me sad." - Ryan Lee

          -

          "This game is very fun and addictive. I like the graphics and the sounds. The modded version is very cool and easy to use. I can buy anything I want and play any class I want. The only thing I don't like is that some players are very rude and toxic. They insult me or cheat in the game. I wish there was a report or block feature." - Mia Smith

          -

          Based on these reviews and ratings, we can give MilkChoco Mod APK 1.32.1 a score of 4.5 out of 5 stars. This modded version of MilkChoco Online FPS is a great way to enjoy the game with more features and benefits, but it also has some drawbacks and risks that you should be aware of.

          -

          Conclusion

          -

          MilkChoco Mod APK 1.32.1 is a premium version of MilkChoco Online FPS that you can download and install on your Android device for free. This modded version of the game offers unlimited money, diamonds, classes, and no ads, which can enhance your gameplay and experience. However, this modded version of the game also has some disadvantages and dangers, such as potential malware, ban, or compatibility issues, which you should be careful of.

          -

          If you are looking for a fun and cute multiplayer shooting game with different game modes and maps, you should try out MilkChoco Online FPS. If you want to have more fun and freedom in playing the game, you should try out MilkChoco Mod APK 1.32.1. But remember to use it at your own risk and discretion, and don't abuse its features or cheat in online matches.

          -

          FAQs

          -

          Here are some frequently asked questions and answers about MilkChoco Mod APK 1.32.1:

          -
            -
          • Q: Is MilkChoco Mod APK 1.32.1 safe to use?
          • -
          • A: MilkChoco Mod APK 1.32.1 is not from the official source of MilkChoco Online FPS, so there is no guarantee that it is safe to use. There is a possibility that it may contain malware or viruses that could harm your device or steal your personal information. You should always scan the APK file with a reliable antivirus software before installing it on your device.
          • -
          • Q: Is MilkChoco Mod APK 1.32.1 legal to use?
          • -
          • A: MilkChoco Mod APK 1.32.1 violates the terms and conditions of MilkChoco Online FPS, so it is not legal to use. There is a possibility that you may get banned or suspended from playing the game online if you are detected by the game's security system. You should always use this modded version at your own risk and discretion, and not abuse its features or cheat in online matches.
          • -
          • Q: Is MilkChoco Mod APK 1.32.1 compatible with my device?
          • -
          • A: MilkChoco Mod APK 1.32.1 may not be compatible with all devices or versions of Android, so there is a possibility that you may encounter some errors or glitches when playing the game. You should always check the compatibility and requirements of this modded version before downloading and installing it on your device.
          • -
          • Q: How can I update MilkChoco Mod APK 1.32.1?
          • -
          • A: MilkChoco Mod APK 1.32.1 may not be updated regularly or automatically, so there is a possibility that it may become outdated or incompatible with the original version of MilkChoco Online FPS. You should always check Jojoy's website for the latest version of this mod and download and install it manually on your device.
          • -
          • Q: Where can I get more information about MilkChoco Mod APK 1.32.1?
          • -
          • A: You can get more information about MilkChoco Mod APK 1.32.1 from Jojoy's website or from other sources online, such as blogs, forums, videos, etc., but you should always be careful of fake or misleading information.
          • -

          401be4b1e0
          -
          -
          \ No newline at end of file diff --git a/spaces/fffiloni/audioldm-text-to-audio-generation-copy/app.py b/spaces/fffiloni/audioldm-text-to-audio-generation-copy/app.py deleted file mode 100644 index 629eab165af167483b3def8286581e7270b8e01c..0000000000000000000000000000000000000000 --- a/spaces/fffiloni/audioldm-text-to-audio-generation-copy/app.py +++ /dev/null @@ -1,309 +0,0 @@ -import gradio as gr -import numpy as np -from audioldm import text_to_audio, build_model -from share_btn import community_icon_html, loading_icon_html, share_js - -model_id="haoheliu/AudioLDM-S-Full" - -audioldm = None -current_model_name = None - -# def predict(input, history=[]): -# # tokenize the new input sentence -# new_user_input_ids = tokenizer.encode(input + tokenizer.eos_token, return_tensors='pt') - -# # append the new user input tokens to the chat history -# bot_input_ids = torch.cat([torch.LongTensor(history), new_user_input_ids], dim=-1) - -# # generate a response -# history = model.generate(bot_input_ids, max_length=1000, pad_token_id=tokenizer.eos_token_id).tolist() - -# # convert the tokens to text, and then split the responses into lines -# response = tokenizer.decode(history[0]).split("<|endoftext|>") -# response = [(response[i], response[i+1]) for i in range(0, len(response)-1, 2)] # convert to tuples of list -# return response, history - -def text2audio(text, duration, guidance_scale, random_seed, n_candidates, model_name="audioldm-m-text-ft"): - global audioldm, current_model_name - - if audioldm is None or model_name != current_model_name: - audioldm=build_model(model_name=model_name) - current_model_name = model_name - - # print(text, length, guidance_scale) - waveform = text_to_audio( - latent_diffusion=audioldm, - text=text, - seed=random_seed, - duration=duration, - guidance_scale=guidance_scale, - n_candidate_gen_per_text=int(n_candidates), - ) # [bs, 1, samples] - waveform = [ - gr.make_waveform((16000, wave[0]), bg_image="bg.png") for wave in waveform - ] - # waveform = [(16000, np.random.randn(16000)), (16000, np.random.randn(16000))] - if(len(waveform) == 1): - waveform = waveform[0] - return waveform - -# iface = gr.Interface(fn=text2audio, inputs=[ -# gr.Textbox(value="A man is speaking in a huge room", max_lines=1), -# gr.Slider(2.5, 10, value=5, step=2.5), -# gr.Slider(0, 5, value=2.5, step=0.5), -# gr.Number(value=42) -# ], outputs=[gr.Audio(label="Output", type="numpy"), gr.Audio(label="Output", type="numpy")], -# allow_flagging="never" -# ) -# iface.launch(share=True) - - -css = """ - a { - color: inherit; - text-decoration: underline; - } - .gradio-container { - font-family: 'IBM Plex Sans', sans-serif; - } - .gr-button { - color: white; - border-color: #000000; - background: #000000; - } - input[type='range'] { - accent-color: #000000; - } - .dark input[type='range'] { - accent-color: #dfdfdf; - } - .container { - max-width: 730px; - margin: auto; - padding-top: 1.5rem; - } - #gallery { - min-height: 22rem; - margin-bottom: 15px; - margin-left: auto; - margin-right: auto; - border-bottom-right-radius: .5rem !important; - border-bottom-left-radius: .5rem !important; - } - #gallery>div>.h-full { - min-height: 20rem; - } - .details:hover { - text-decoration: underline; - } - .gr-button { - white-space: nowrap; - } - .gr-button:focus { - border-color: rgb(147 197 253 / var(--tw-border-opacity)); - outline: none; - box-shadow: var(--tw-ring-offset-shadow), var(--tw-ring-shadow), var(--tw-shadow, 0 0 #0000); - --tw-border-opacity: 1; - --tw-ring-offset-shadow: 
var(--tw-ring-inset) 0 0 0 var(--tw-ring-offset-width) var(--tw-ring-offset-color); - --tw-ring-shadow: var(--tw-ring-inset) 0 0 0 calc(3px var(--tw-ring-offset-width)) var(--tw-ring-color); - --tw-ring-color: rgb(191 219 254 / var(--tw-ring-opacity)); - --tw-ring-opacity: .5; - } - #advanced-btn { - font-size: .7rem !important; - line-height: 19px; - margin-top: 12px; - margin-bottom: 12px; - padding: 2px 8px; - border-radius: 14px !important; - } - #advanced-options { - margin-bottom: 20px; - } - .footer { - margin-bottom: 45px; - margin-top: 35px; - text-align: center; - border-bottom: 1px solid #e5e5e5; - } - .footer>p { - font-size: .8rem; - display: inline-block; - padding: 0 10px; - transform: translateY(10px); - background: white; - } - .dark .footer { - border-color: #303030; - } - .dark .footer>p { - background: #0b0f19; - } - .acknowledgments h4{ - margin: 1.25em 0 .25em 0; - font-weight: bold; - font-size: 115%; - } - #container-advanced-btns{ - display: flex; - flex-wrap: wrap; - justify-content: space-between; - align-items: center; - } - .animate-spin { - animation: spin 1s linear infinite; - } - @keyframes spin { - from { - transform: rotate(0deg); - } - to { - transform: rotate(360deg); - } - } - #share-btn-container { - display: flex; padding-left: 0.5rem !important; padding-right: 0.5rem !important; background-color: #000000; justify-content: center; align-items: center; border-radius: 9999px !important; width: 13rem; - margin-top: 10px; - margin-left: auto; - } - #share-btn { - all: initial; color: #ffffff;font-weight: 600; cursor:pointer; font-family: 'IBM Plex Sans', sans-serif; margin-left: 0.5rem !important; padding-top: 0.25rem !important; padding-bottom: 0.25rem !important;right:0; - } - #share-btn * { - all: unset; - } - #share-btn-container div:nth-child(-n+2){ - width: auto !important; - min-height: 0px !important; - } - #share-btn-container .wrap { - display: none !important; - } - .gr-form{ - flex: 1 1 50%; border-top-right-radius: 0; border-bottom-right-radius: 0; - } - #prompt-container{ - gap: 0; - } - #generated_id{ - min-height: 700px - } - #setting_id{ - margin-bottom: 12px; - text-align: center; - font-weight: 900; - } -""" -iface = gr.Blocks(css=css) - -with iface: - gr.HTML( - """ -
          -
          -

          - AudioLDM: Text-to-Audio Generation with Latent Diffusion Models -

          -
          -

          - [Paper] [Project page] -

          -
          - """ - ) - gr.HTML(""" -

          - AudioLDM: Text-to-Audio Generation with Latent Diffusion Models -

          -

          For faster inference without waiting in queue, you may duplicate the space and upgrade to GPU in settings. -
          - - Duplicate Space -

          - """) - with gr.Group(): - with gr.Box(): - ############# Input - textbox = gr.Textbox(value="A hammer is hitting a wooden surface", max_lines=1, label="Input your text here. Your text is important for the audio quality. Please ensure it is descriptive by using more adjectives.", elem_id="prompt-in") - - with gr.Accordion("Click to modify detailed configurations", open=False): - seed = gr.Number(value=45, label="Change this value (any integer number) will lead to a different generation result.") - duration = gr.Slider(2.5, 10, value=5, step=2.5, label="Duration (seconds)") - guidance_scale = gr.Slider(0, 4, value=2.5, step=0.5, label="Guidance scale (Large => better quality and relavancy to text; Small => better diversity)") - n_candidates = gr.Slider(1, 3, value=3, step=1, label="Automatic quality control. This number control the number of candidates (e.g., generate three audios and choose the best to show you). A Larger value usually lead to better quality with heavier computation") - # model_name = gr.Dropdown( - # ["audioldm-m-text-ft", "audioldm-s-text-ft", "audioldm-m-full","audioldm-s-full-v2", "audioldm-s-full", "audioldm-l-full"], value="audioldm-m-full", label="Choose the model to use. audioldm-m-text-ft and audioldm-s-text-ft are recommanded. -s- means small, -m- means medium and -l- means large", - # ) - ############# Output - # outputs=gr.Audio(label="Output", type="numpy") - outputs=gr.Video(label="Output", elem_id="output-video") - - # with gr.Group(elem_id="container-advanced-btns"): - # # advanced_button = gr.Button("Advanced options", elem_id="advanced-btn") - # with gr.Group(elem_id="share-btn-container"): - # community_icon = gr.HTML(community_icon_html, visible=False) - # loading_icon = gr.HTML(loading_icon_html, visible=False) - # share_button = gr.Button("Share to community", elem_id="share-btn", visible=False) - # outputs=[gr.Audio(label="Output", type="numpy"), gr.Audio(label="Output", type="numpy")] - btn = gr.Button("Submit").style(full_width=True) - - with gr.Group(elem_id="share-btn-container", visible=False): - community_icon = gr.HTML(community_icon_html) - loading_icon = gr.HTML(loading_icon_html) - share_button = gr.Button("Share to community", elem_id="share-btn") - - # btn.click(text2audio, inputs=[ - # textbox, duration, guidance_scale, seed, n_candidates, model_name], outputs=[outputs]) - btn.click(text2audio, inputs=[ - textbox, duration, guidance_scale, seed, n_candidates], outputs=[outputs]) - - share_button.click(None, [], [], _js=share_js) - gr.HTML(''' -

          - ''') - gr.Examples([ - ["A hammer is hitting a wooden surface", 5, 2.5, 45, 3, "audioldm-m-full"], - ["Peaceful and calming ambient music with singing bowl and other instruments.", 5, 2.5, 45, 3, "audioldm-m-full"], - ["A man is speaking in a small room.", 5, 2.5, 45, 3, "audioldm-m-full"], - ["A female is speaking followed by footstep sound", 5, 2.5, 45, 3, "audioldm-m-full"], - ["Wooden table tapping sound followed by water pouring sound.", 5, 2.5, 45, 3, "audioldm-m-full"], - ], - fn=text2audio, - # inputs=[textbox, duration, guidance_scale, seed, n_candidates, model_name], - inputs=[textbox, duration, guidance_scale, seed, n_candidates], - outputs=[outputs], - cache_examples=True, - ) - gr.HTML(''' -
          -

          Essential Tricks for Enhancing the Quality of Your Generated Audio

          -

          1. Try to use more adjectives to describe your sound. For example: "A man is speaking clearly and slowly in a large room" is better than "A man is speaking". This can make sure AudioLDM understands what you want.

          -

          2. Try to use different random seeds, which can affect the generation quality significantly sometimes.

          -

          3. It's better to use general terms like 'man' or 'woman' instead of specific names for individuals or abstract objects that humans may not be familiar with, such as 'mummy'.

          -
          - ''') - with gr.Accordion("Additional information", open=False): - gr.HTML( - """ -
          -

          We build the model with data from AudioSet, Freesound and BBC Sound Effect library. We share this demo based on the UK copyright exception of data for academic research.

          -
          - """ - ) -#

          This demo is strictly for research demo purpose only. For commercial use please contact us.

          - -iface.queue(max_size=10).launch(debug=True) -# iface.launch(debug=True, share=True) diff --git a/spaces/fffiloni/controlnet-animation-doodle/node_modules/content-type/README.md b/spaces/fffiloni/controlnet-animation-doodle/node_modules/content-type/README.md deleted file mode 100644 index c1a922a9afba84293f449dc4b661124fbac2fd5d..0000000000000000000000000000000000000000 --- a/spaces/fffiloni/controlnet-animation-doodle/node_modules/content-type/README.md +++ /dev/null @@ -1,94 +0,0 @@ -# content-type - -[![NPM Version][npm-version-image]][npm-url] -[![NPM Downloads][npm-downloads-image]][npm-url] -[![Node.js Version][node-image]][node-url] -[![Build Status][ci-image]][ci-url] -[![Coverage Status][coveralls-image]][coveralls-url] - -Create and parse HTTP Content-Type header according to RFC 7231 - -## Installation - -```sh -$ npm install content-type -``` - -## API - -```js -var contentType = require('content-type') -``` - -### contentType.parse(string) - -```js -var obj = contentType.parse('image/svg+xml; charset=utf-8') -``` - -Parse a `Content-Type` header. This will return an object with the following -properties (examples are shown for the string `'image/svg+xml; charset=utf-8'`): - - - `type`: The media type (the type and subtype, always lower case). - Example: `'image/svg+xml'` - - - `parameters`: An object of the parameters in the media type (name of parameter - always lower case). Example: `{charset: 'utf-8'}` - -Throws a `TypeError` if the string is missing or invalid. - -### contentType.parse(req) - -```js -var obj = contentType.parse(req) -``` - -Parse the `Content-Type` header from the given `req`. Short-cut for -`contentType.parse(req.headers['content-type'])`. - -Throws a `TypeError` if the `Content-Type` header is missing or invalid. - -### contentType.parse(res) - -```js -var obj = contentType.parse(res) -``` - -Parse the `Content-Type` header set on the given `res`. Short-cut for -`contentType.parse(res.getHeader('content-type'))`. - -Throws a `TypeError` if the `Content-Type` header is missing or invalid. - -### contentType.format(obj) - -```js -var str = contentType.format({ - type: 'image/svg+xml', - parameters: { charset: 'utf-8' } -}) -``` - -Format an object into a `Content-Type` header. This will return a string of the -content type for the given object with the following properties (examples are -shown that produce the string `'image/svg+xml; charset=utf-8'`): - - - `type`: The media type (will be lower-cased). Example: `'image/svg+xml'` - - - `parameters`: An object of the parameters in the media type (name of the - parameter will be lower-cased). Example: `{charset: 'utf-8'}` - -Throws a `TypeError` if the object contains an invalid type or parameter names. 
- -## License - -[MIT](LICENSE) - -[ci-image]: https://badgen.net/github/checks/jshttp/content-type/master?label=ci -[ci-url]: https://github.com/jshttp/content-type/actions/workflows/ci.yml -[coveralls-image]: https://badgen.net/coveralls/c/github/jshttp/content-type/master -[coveralls-url]: https://coveralls.io/r/jshttp/content-type?branch=master -[node-image]: https://badgen.net/npm/node/content-type -[node-url]: https://nodejs.org/en/download -[npm-downloads-image]: https://badgen.net/npm/dm/content-type -[npm-url]: https://npmjs.org/package/content-type -[npm-version-image]: https://badgen.net/npm/v/content-type diff --git a/spaces/fffiloni/controlnet-animation-doodle/node_modules/engine.io-parser/build/cjs/encodePacket.browser.js b/spaces/fffiloni/controlnet-animation-doodle/node_modules/engine.io-parser/build/cjs/encodePacket.browser.js deleted file mode 100644 index ec2b08a33355d6d16994aff619a9cf921a6ab806..0000000000000000000000000000000000000000 --- a/spaces/fffiloni/controlnet-animation-doodle/node_modules/engine.io-parser/build/cjs/encodePacket.browser.js +++ /dev/null @@ -1,43 +0,0 @@ -"use strict"; -Object.defineProperty(exports, "__esModule", { value: true }); -const commons_js_1 = require("./commons.js"); -const withNativeBlob = typeof Blob === "function" || - (typeof Blob !== "undefined" && - Object.prototype.toString.call(Blob) === "[object BlobConstructor]"); -const withNativeArrayBuffer = typeof ArrayBuffer === "function"; -// ArrayBuffer.isView method is not defined in IE10 -const isView = obj => { - return typeof ArrayBuffer.isView === "function" - ? ArrayBuffer.isView(obj) - : obj && obj.buffer instanceof ArrayBuffer; -}; -const encodePacket = ({ type, data }, supportsBinary, callback) => { - if (withNativeBlob && data instanceof Blob) { - if (supportsBinary) { - return callback(data); - } - else { - return encodeBlobAsBase64(data, callback); - } - } - else if (withNativeArrayBuffer && - (data instanceof ArrayBuffer || isView(data))) { - if (supportsBinary) { - return callback(data); - } - else { - return encodeBlobAsBase64(new Blob([data]), callback); - } - } - // plain string - return callback(commons_js_1.PACKET_TYPES[type] + (data || "")); -}; -const encodeBlobAsBase64 = (data, callback) => { - const fileReader = new FileReader(); - fileReader.onload = function () { - const content = fileReader.result.split(",")[1]; - callback("b" + (content || "")); - }; - return fileReader.readAsDataURL(data); -}; -exports.default = encodePacket; diff --git a/spaces/flowers-team/SocialAISchool/autocrop.sh b/spaces/flowers-team/SocialAISchool/autocrop.sh deleted file mode 100644 index e2fe2b143702fbfcf578e957fac7cf898ef3f475..0000000000000000000000000000000000000000 --- a/spaces/flowers-team/SocialAISchool/autocrop.sh +++ /dev/null @@ -1,14 +0,0 @@ -#!/bin/bash - - -# Loop through all files in the specified directory -for file in "$@" -do - # Check if the file is an image - if [[ $file == *.jpg || $file == *.png ]] - then - # Crop the image using the `convert` command from the ImageMagick suite - echo "Cropping $file" - convert $file -trim +repage $file - fi -done diff --git a/spaces/gaviego/mnist/README.md b/spaces/gaviego/mnist/README.md deleted file mode 100644 index 67cc0de485d92ab60b0851b6dad8dca14f9b8702..0000000000000000000000000000000000000000 --- a/spaces/gaviego/mnist/README.md +++ /dev/null @@ -1,18 +0,0 @@ ---- -title: MNIST Training + Gradio -emoji: 📝 - 2️⃣ -colorFrom: red -colorTo: purple -sdk: gradio -sdk_version: 3.20.1 -app_file: app.py -pinned: false -license: 
openrail ---- - -# MNIST Comparsion - -This repo contains two trainings for MNIST, one with just an MLP (`train.py`) and second with convolution(`train_conv.py`). - -See the live demo here (https://huggingface.co/spaces/gaviego/mnist) - diff --git a/spaces/georgefen/Face-Landmark-ControlNet/annotator/uniformer/configs/_base_/models/dnl_r50-d8.py b/spaces/georgefen/Face-Landmark-ControlNet/annotator/uniformer/configs/_base_/models/dnl_r50-d8.py deleted file mode 100644 index edb4c174c51e34c103737ba39bfc48bf831e561d..0000000000000000000000000000000000000000 --- a/spaces/georgefen/Face-Landmark-ControlNet/annotator/uniformer/configs/_base_/models/dnl_r50-d8.py +++ /dev/null @@ -1,46 +0,0 @@ -# model settings -norm_cfg = dict(type='SyncBN', requires_grad=True) -model = dict( - type='EncoderDecoder', - pretrained='open-mmlab://resnet50_v1c', - backbone=dict( - type='ResNetV1c', - depth=50, - num_stages=4, - out_indices=(0, 1, 2, 3), - dilations=(1, 1, 2, 4), - strides=(1, 2, 1, 1), - norm_cfg=norm_cfg, - norm_eval=False, - style='pytorch', - contract_dilation=True), - decode_head=dict( - type='DNLHead', - in_channels=2048, - in_index=3, - channels=512, - dropout_ratio=0.1, - reduction=2, - use_scale=True, - mode='embedded_gaussian', - num_classes=19, - norm_cfg=norm_cfg, - align_corners=False, - loss_decode=dict( - type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0)), - auxiliary_head=dict( - type='FCNHead', - in_channels=1024, - in_index=2, - channels=256, - num_convs=1, - concat_input=False, - dropout_ratio=0.1, - num_classes=19, - norm_cfg=norm_cfg, - align_corners=False, - loss_decode=dict( - type='CrossEntropyLoss', use_sigmoid=False, loss_weight=0.4)), - # model training and testing settings - train_cfg=dict(), - test_cfg=dict(mode='whole')) diff --git a/spaces/gheng/belanjawan-2024-chatbot/search_embedding.py b/spaces/gheng/belanjawan-2024-chatbot/search_embedding.py deleted file mode 100644 index c5d4b6cb24a2cfe8fd1e0c65ff575c13ab6b0b72..0000000000000000000000000000000000000000 --- a/spaces/gheng/belanjawan-2024-chatbot/search_embedding.py +++ /dev/null @@ -1,86 +0,0 @@ -import numpy as np -import json -import openai -from sklearn.neighbors import NearestNeighbors -import os -from dataset_loader import load_dataset - -openai.api_key=os.environ['OPENAI_API_KEY'] - -class SearchEmbedding(): - def __init__(self,max_knowledge_intake=3,knowledge_score_threshold=0.8) -> None: - self.knowledge_base = None - self.knowledge_base_emb = None - self.load_knowledge_base() - - self.max_knowledge_intake = max_knowledge_intake - self.knowledge_score_threshold = knowledge_score_threshold - - def load_knowledge_base(self): - if not os.path.isdir('belanjawan-2024-speech'): - load_dataset() - - self.knowledge_base = json.load(open('belanjawan-2024-speech/knowledge.json','r')) - self.knowledge_base_emb = np.load('belanjawan-2024-speech/embedding.npy') - - - - - def search(self,question): - #question_emb = self.emb_model([question]) - question_emb = self.openai_create_embeddng(question) - - top_result = self.nn_search([question_emb]) - - context = [] - for result in top_result: - context.append(self.knowledge_base[str(result)]) - - return context - - def nn_search(self, question_emb, n_neighbors=3): - n_neighbors = min(n_neighbors, len(self.knowledge_base_emb)) - self.nn = NearestNeighbors(n_neighbors=n_neighbors) - self.nn.fit(self.knowledge_base_emb) - self.fitted = True - - neighbors = self.nn.kneighbors(question_emb, return_distance=True) - index = neighbors[1][0] - - return index - - - def 
knn_search(self,target): - target = np.array(target) - cosine_similarities = np.dot(self.knowledge_base_emb, target.T) / (np.linalg.norm(self.knowledge_base_emb, axis=1) * np.linalg.norm(target)) - top_result = [] - - - sorted_score = np.copy(cosine_similarities) - sorted_score[::-1].sort() - - score_intake = sorted_score[:self.max_knowledge_intake] - - - for score in score_intake: - if score >= self.knowledge_score_threshold: - - index = np.where(cosine_similarities == score) - - - top_result.append({ - 'index':str(index[0][0]), - 'score':score - }) - - return top_result - - - def openai_create_embeddng(self,question): - emb = openai.Embedding.create( - input=question, - model = 'text-embedding-ada-002', - ) - embedding = emb['data'][0]['embedding'] - - return embedding \ No newline at end of file diff --git a/spaces/gradio/HuBERT/fairseq/data/audio/raw_audio_dataset.py b/spaces/gradio/HuBERT/fairseq/data/audio/raw_audio_dataset.py deleted file mode 100644 index 9ce3f7e39d55860f38b3332fe79917c8d38724fe..0000000000000000000000000000000000000000 --- a/spaces/gradio/HuBERT/fairseq/data/audio/raw_audio_dataset.py +++ /dev/null @@ -1,386 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - - -import logging -import os -import sys -import io - -import numpy as np -import torch -import torch.nn.functional as F - -from .. import FairseqDataset -from ..data_utils import compute_mask_indices, get_buckets, get_bucketed_sizes -from fairseq.data.audio.audio_utils import ( - parse_path, - read_from_stored_zip, - is_sf_audio_data, -) - - -logger = logging.getLogger(__name__) - - -class RawAudioDataset(FairseqDataset): - def __init__( - self, - sample_rate, - max_sample_size=None, - min_sample_size=0, - shuffle=True, - pad=False, - normalize=False, - compute_mask_indices=False, - **mask_compute_kwargs, - ): - super().__init__() - - self.sample_rate = sample_rate - self.sizes = [] - self.max_sample_size = ( - max_sample_size if max_sample_size is not None else sys.maxsize - ) - self.min_sample_size = min_sample_size - self.pad = pad - self.shuffle = shuffle - self.normalize = normalize - self.compute_mask_indices = compute_mask_indices - if self.compute_mask_indices: - self.mask_compute_kwargs = mask_compute_kwargs - self._features_size_map = {} - self._C = mask_compute_kwargs["encoder_embed_dim"] - self._conv_feature_layers = eval(mask_compute_kwargs["conv_feature_layers"]) - - def __getitem__(self, index): - raise NotImplementedError() - - def __len__(self): - return len(self.sizes) - - def postprocess(self, feats, curr_sample_rate): - if feats.dim() == 2: - feats = feats.mean(-1) - - if curr_sample_rate != self.sample_rate: - raise Exception(f"sample rate: {curr_sample_rate}, need {self.sample_rate}") - - assert feats.dim() == 1, feats.dim() - - if self.normalize: - with torch.no_grad(): - feats = F.layer_norm(feats, feats.shape) - return feats - - def crop_to_max_size(self, wav, target_size): - size = len(wav) - diff = size - target_size - if diff <= 0: - return wav - - start = np.random.randint(0, diff + 1) - end = size - diff + start - return wav[start:end] - - def _compute_mask_indices(self, dims, padding_mask): - B, T, C = dims - mask_indices, mask_channel_indices = None, None - if self.mask_compute_kwargs["mask_prob"] > 0: - mask_indices = compute_mask_indices( - (B, T), - padding_mask, - self.mask_compute_kwargs["mask_prob"], - 
self.mask_compute_kwargs["mask_length"], - self.mask_compute_kwargs["mask_selection"], - self.mask_compute_kwargs["mask_other"], - min_masks=2, - no_overlap=self.mask_compute_kwargs["no_mask_overlap"], - min_space=self.mask_compute_kwargs["mask_min_space"], - ) - mask_indices = torch.from_numpy(mask_indices) - if self.mask_compute_kwargs["mask_channel_prob"] > 0: - mask_channel_indices = compute_mask_indices( - (B, C), - None, - self.mask_compute_kwargs["mask_channel_prob"], - self.mask_compute_kwargs["mask_channel_length"], - self.mask_compute_kwargs["mask_channel_selection"], - self.mask_compute_kwargs["mask_channel_other"], - no_overlap=self.mask_compute_kwargs["no_mask_channel_overlap"], - min_space=self.mask_compute_kwargs["mask_channel_min_space"], - ) - mask_channel_indices = ( - torch.from_numpy(mask_channel_indices).unsqueeze(1).expand(-1, T, -1) - ) - - return mask_indices, mask_channel_indices - - @staticmethod - def _bucket_tensor(tensor, num_pad, value): - return F.pad(tensor, (0, num_pad), value=value) - - def collater(self, samples): - samples = [s for s in samples if s["source"] is not None] - if len(samples) == 0: - return {} - - sources = [s["source"] for s in samples] - sizes = [len(s) for s in sources] - - if self.pad: - target_size = min(max(sizes), self.max_sample_size) - else: - target_size = min(min(sizes), self.max_sample_size) - - collated_sources = sources[0].new_zeros(len(sources), target_size) - padding_mask = ( - torch.BoolTensor(collated_sources.shape).fill_(False) if self.pad else None - ) - for i, (source, size) in enumerate(zip(sources, sizes)): - diff = size - target_size - if diff == 0: - collated_sources[i] = source - elif diff < 0: - assert self.pad - collated_sources[i] = torch.cat( - [source, source.new_full((-diff,), 0.0)] - ) - padding_mask[i, diff:] = True - else: - collated_sources[i] = self.crop_to_max_size(source, target_size) - - input = {"source": collated_sources} - out = {"id": torch.LongTensor([s["id"] for s in samples])} - if self.pad: - input["padding_mask"] = padding_mask - - if hasattr(self, "num_buckets") and self.num_buckets > 0: - assert self.pad, "Cannot bucket without padding first." 
- bucket = max(self._bucketed_sizes[s["id"]] for s in samples) - num_pad = bucket - collated_sources.size(-1) - if num_pad: - input["source"] = self._bucket_tensor(collated_sources, num_pad, 0) - input["padding_mask"] = self._bucket_tensor(padding_mask, num_pad, True) - - if self.compute_mask_indices: - B = input["source"].size(0) - T = self._get_mask_indices_dims(input["source"].size(-1)) - padding_mask_reshaped = input["padding_mask"].clone() - extra = padding_mask_reshaped.size(1) % T - if extra > 0: - padding_mask_reshaped = padding_mask_reshaped[:, :-extra] - padding_mask_reshaped = padding_mask_reshaped.view( - padding_mask_reshaped.size(0), T, -1 - ) - padding_mask_reshaped = padding_mask_reshaped.all(-1) - input["padding_count"] = padding_mask_reshaped.sum(-1).max().item() - mask_indices, mask_channel_indices = self._compute_mask_indices( - (B, T, self._C), - padding_mask_reshaped, - ) - input["mask_indices"] = mask_indices - input["mask_channel_indices"] = mask_channel_indices - out["sample_size"] = mask_indices.sum().item() - - out["net_input"] = input - return out - - def _get_mask_indices_dims(self, size, padding=0, dilation=1): - if size not in self._features_size_map: - L_in = size - for (_, kernel_size, stride) in self._conv_feature_layers: - L_out = L_in + 2 * padding - dilation * (kernel_size - 1) - 1 - L_out = 1 + L_out // stride - L_in = L_out - self._features_size_map[size] = L_out - return self._features_size_map[size] - - def num_tokens(self, index): - return self.size(index) - - def size(self, index): - """Return an example's size as a float or tuple. This value is used when - filtering a dataset with ``--max-positions``.""" - if self.pad: - return self.sizes[index] - return min(self.sizes[index], self.max_sample_size) - - def ordered_indices(self): - """Return an ordered list of indices. 
Batches will be constructed based - on this order.""" - - if self.shuffle: - order = [np.random.permutation(len(self))] - order.append( - np.minimum( - np.array(self.sizes), - self.max_sample_size, - ) - ) - return np.lexsort(order)[::-1] - else: - return np.arange(len(self)) - - def set_bucket_info(self, num_buckets): - self.num_buckets = num_buckets - if self.num_buckets > 0: - self._collated_sizes = np.minimum( - np.array(self.sizes), - self.max_sample_size, - ) - self.buckets = get_buckets( - self._collated_sizes, - self.num_buckets, - ) - self._bucketed_sizes = get_bucketed_sizes( - self._collated_sizes, self.buckets - ) - logger.info( - f"{len(self.buckets)} bucket(s) for the audio dataset: " - f"{self.buckets}" - ) - - -class FileAudioDataset(RawAudioDataset): - def __init__( - self, - manifest_path, - sample_rate, - max_sample_size=None, - min_sample_size=0, - shuffle=True, - pad=False, - normalize=False, - num_buckets=0, - compute_mask_indices=False, - **mask_compute_kwargs, - ): - super().__init__( - sample_rate=sample_rate, - max_sample_size=max_sample_size, - min_sample_size=min_sample_size, - shuffle=shuffle, - pad=pad, - normalize=normalize, - compute_mask_indices=compute_mask_indices, - **mask_compute_kwargs, - ) - - skipped = 0 - self.fnames = [] - sizes = [] - self.skipped_indices = set() - - with open(manifest_path, "r") as f: - self.root_dir = f.readline().strip() - for i, line in enumerate(f): - items = line.strip().split("\t") - assert len(items) == 2, line - sz = int(items[1]) - if min_sample_size is not None and sz < min_sample_size: - skipped += 1 - self.skipped_indices.add(i) - continue - self.fnames.append(items[0]) - sizes.append(sz) - logger.info(f"loaded {len(self.fnames)}, skipped {skipped} samples") - - self.sizes = np.array(sizes, dtype=np.int64) - - try: - import pyarrow - - self.fnames = pyarrow.array(self.fnames) - except: - logger.debug( - "Could not create a pyarrow array. 
Please install pyarrow for better performance" - ) - pass - - self.set_bucket_info(num_buckets) - - def __getitem__(self, index): - import soundfile as sf - - path_or_fp = os.path.join(self.root_dir, str(self.fnames[index])) - _path, slice_ptr = parse_path(path_or_fp) - if len(slice_ptr) == 2: - byte_data = read_from_stored_zip(_path, slice_ptr[0], slice_ptr[1]) - assert is_sf_audio_data(byte_data) - path_or_fp = io.BytesIO(byte_data) - - wav, curr_sample_rate = sf.read(path_or_fp, dtype="float32") - - feats = torch.from_numpy(wav).float() - feats = self.postprocess(feats, curr_sample_rate) - return {"id": index, "source": feats} - - -class BinarizedAudioDataset(RawAudioDataset): - def __init__( - self, - data_dir, - split, - sample_rate, - max_sample_size=None, - min_sample_size=0, - shuffle=True, - pad=False, - normalize=False, - num_buckets=0, - compute_mask_indices=False, - **mask_compute_kwargs, - ): - super().__init__( - sample_rate=sample_rate, - max_sample_size=max_sample_size, - min_sample_size=min_sample_size, - shuffle=shuffle, - pad=pad, - normalize=normalize, - compute_mask_indices=compute_mask_indices, - **mask_compute_kwargs, - ) - - from fairseq.data import data_utils, Dictionary - - self.fnames_dict = Dictionary.load(os.path.join(data_dir, "dict.txt")) - - root_path = os.path.join(data_dir, f"{split}.root") - if os.path.exists(root_path): - with open(root_path, "r") as f: - self.root_dir = next(f).strip() - else: - self.root_dir = None - - fnames_path = os.path.join(data_dir, split) - self.fnames = data_utils.load_indexed_dataset(fnames_path, self.fnames_dict) - lengths_path = os.path.join(data_dir, f"{split}.lengths") - - with open(lengths_path, "r") as f: - for line in f: - sz = int(line.rstrip()) - assert ( - sz >= min_sample_size - ), f"Min sample size is not supported for binarized dataset, but found a sample with size {sz}" - self.sizes.append(sz) - - self.sizes = np.array(self.sizes, dtype=np.int64) - - self.set_bucket_info(num_buckets) - logger.info(f"loaded {len(self.fnames)} samples") - - def __getitem__(self, index): - import soundfile as sf - - fname = self.fnames_dict.string(self.fnames[index], separator="") - if self.root_dir: - fname = os.path.join(self.root_dir, fname) - - wav, curr_sample_rate = sf.read(fname) - feats = torch.from_numpy(wav).float() - feats = self.postprocess(feats, curr_sample_rate) - return {"id": index, "source": feats} diff --git a/spaces/gsaivinay/open_llm_leaderboard/src/display_models/model_metadata_type.py b/spaces/gsaivinay/open_llm_leaderboard/src/display_models/model_metadata_type.py deleted file mode 100644 index 61dec0a30f758c9350f59d028f07b0a782a3f317..0000000000000000000000000000000000000000 --- a/spaces/gsaivinay/open_llm_leaderboard/src/display_models/model_metadata_type.py +++ /dev/null @@ -1,555 +0,0 @@ -from dataclasses import dataclass -from enum import Enum -from typing import Dict - - -@dataclass -class ModelInfo: - name: str - symbol: str # emoji - - -class ModelType(Enum): - PT = ModelInfo(name="pretrained", symbol="🟢") - FT = ModelInfo(name="fine-tuned", symbol="🔶") - IFT = ModelInfo(name="instruction-tuned", symbol="⭕") - RL = ModelInfo(name="RL-tuned", symbol="🟦") - Unknown = ModelInfo(name="Unknown", symbol="?") - - def to_str(self, separator=" "): - return f"{self.value.symbol}{separator}{self.value.name}" - - -MODEL_TYPE_METADATA: Dict[str, ModelType] = { - "tiiuae/falcon-180B": ModelType.PT, - "tiiuae/falcon-180B-chat": ModelType.RL, - "microsoft/phi-1_5": ModelType.PT, - "Qwen/Qwen-7B": ModelType.PT, - 
"Qwen/Qwen-7B-Chat": ModelType.RL, - "notstoic/PygmalionCoT-7b": ModelType.IFT, - "aisquared/dlite-v1-355m": ModelType.IFT, - "aisquared/dlite-v1-1_5b": ModelType.IFT, - "aisquared/dlite-v1-774m": ModelType.IFT, - "aisquared/dlite-v1-124m": ModelType.IFT, - "aisquared/chopt-2_7b": ModelType.IFT, - "aisquared/dlite-v2-124m": ModelType.IFT, - "aisquared/dlite-v2-774m": ModelType.IFT, - "aisquared/dlite-v2-1_5b": ModelType.IFT, - "aisquared/chopt-1_3b": ModelType.IFT, - "aisquared/dlite-v2-355m": ModelType.IFT, - "augtoma/qCammel-13": ModelType.IFT, - "Aspik101/Llama-2-7b-hf-instruct-pl-lora_unload": ModelType.IFT, - "Aspik101/vicuna-7b-v1.3-instruct-pl-lora_unload": ModelType.IFT, - "TheBloke/alpaca-lora-65B-HF": ModelType.FT, - "TheBloke/tulu-7B-fp16": ModelType.IFT, - "TheBloke/guanaco-7B-HF": ModelType.FT, - "TheBloke/koala-7B-HF": ModelType.FT, - "TheBloke/wizardLM-7B-HF": ModelType.IFT, - "TheBloke/airoboros-13B-HF": ModelType.IFT, - "TheBloke/koala-13B-HF": ModelType.FT, - "TheBloke/Wizard-Vicuna-7B-Uncensored-HF": ModelType.FT, - "TheBloke/dromedary-65b-lora-HF": ModelType.IFT, - "TheBloke/wizardLM-13B-1.0-fp16": ModelType.IFT, - "TheBloke/WizardLM-13B-V1-1-SuperHOT-8K-fp16": ModelType.FT, - "TheBloke/Wizard-Vicuna-30B-Uncensored-fp16": ModelType.FT, - "TheBloke/wizard-vicuna-13B-HF": ModelType.IFT, - "TheBloke/UltraLM-13B-fp16": ModelType.IFT, - "TheBloke/OpenAssistant-FT-7-Llama-30B-HF": ModelType.FT, - "TheBloke/vicuna-13B-1.1-HF": ModelType.IFT, - "TheBloke/guanaco-13B-HF": ModelType.FT, - "TheBloke/guanaco-65B-HF": ModelType.FT, - "TheBloke/airoboros-7b-gpt4-fp16": ModelType.IFT, - "TheBloke/llama-30b-supercot-SuperHOT-8K-fp16": ModelType.IFT, - "TheBloke/Llama-2-13B-fp16": ModelType.PT, - "TheBloke/llama-2-70b-Guanaco-QLoRA-fp16": ModelType.FT, - "TheBloke/landmark-attention-llama7b-fp16": ModelType.IFT, - "TheBloke/Planner-7B-fp16": ModelType.IFT, - "TheBloke/Wizard-Vicuna-13B-Uncensored-HF": ModelType.FT, - "TheBloke/gpt4-alpaca-lora-13B-HF": ModelType.IFT, - "TheBloke/gpt4-x-vicuna-13B-HF": ModelType.IFT, - "TheBloke/gpt4-alpaca-lora_mlp-65B-HF": ModelType.IFT, - "TheBloke/tulu-13B-fp16": ModelType.IFT, - "TheBloke/VicUnlocked-alpaca-65B-QLoRA-fp16": ModelType.IFT, - "TheBloke/Llama-2-70B-fp16": ModelType.IFT, - "TheBloke/WizardLM-30B-fp16": ModelType.IFT, - "TheBloke/robin-13B-v2-fp16": ModelType.FT, - "TheBloke/robin-33B-v2-fp16": ModelType.FT, - "TheBloke/Vicuna-13B-CoT-fp16": ModelType.IFT, - "TheBloke/Vicuna-33B-1-3-SuperHOT-8K-fp16": ModelType.IFT, - "TheBloke/Wizard-Vicuna-30B-Superhot-8K-fp16": ModelType.FT, - "TheBloke/Nous-Hermes-13B-SuperHOT-8K-fp16": ModelType.IFT, - "TheBloke/GPlatty-30B-SuperHOT-8K-fp16": ModelType.FT, - "TheBloke/CAMEL-33B-Combined-Data-SuperHOT-8K-fp16": ModelType.IFT, - "TheBloke/Chinese-Alpaca-33B-SuperHOT-8K-fp16": ModelType.IFT, - "jphme/orca_mini_v2_ger_7b": ModelType.IFT, - "Ejafa/vicuna_7B_vanilla_1.1": ModelType.FT, - "kevinpro/Vicuna-13B-CoT": ModelType.IFT, - "AlekseyKorshuk/pygmalion-6b-vicuna-chatml": ModelType.FT, - "AlekseyKorshuk/chatml-pyg-v1": ModelType.FT, - "concedo/Vicuzard-30B-Uncensored": ModelType.FT, - "concedo/OPT-19M-ChatSalad": ModelType.FT, - "concedo/Pythia-70M-ChatSalad": ModelType.FT, - "digitous/13B-HyperMantis": ModelType.IFT, - "digitous/Adventien-GPTJ": ModelType.FT, - "digitous/Alpacino13b": ModelType.IFT, - "digitous/GPT-R": ModelType.IFT, - "digitous/Javelin-R": ModelType.IFT, - "digitous/Javalion-GPTJ": ModelType.IFT, - "digitous/Javalion-R": ModelType.IFT, - "digitous/Skegma-GPTJ": ModelType.FT, - 
"digitous/Alpacino30b": ModelType.IFT, - "digitous/Janin-GPTJ": ModelType.FT, - "digitous/Janin-R": ModelType.FT, - "digitous/Javelin-GPTJ": ModelType.FT, - "SaylorTwift/gpt2_test": ModelType.PT, - "anton-l/gpt-j-tiny-random": ModelType.FT, - "Andron00e/YetAnother_Open-Llama-3B-LoRA-OpenOrca": ModelType.FT, - "Lazycuber/pyg-instruct-wizardlm": ModelType.FT, - "Lazycuber/Janemalion-6B": ModelType.FT, - "IDEA-CCNL/Ziya-LLaMA-13B-Pretrain-v1": ModelType.FT, - "IDEA-CCNL/Ziya-LLaMA-13B-v1": ModelType.IFT, - "dsvv-cair/alpaca-cleaned-llama-30b-bf16": ModelType.FT, - "gpt2-medium": ModelType.PT, - "camel-ai/CAMEL-13B-Combined-Data": ModelType.IFT, - "camel-ai/CAMEL-13B-Role-Playing-Data": ModelType.FT, - "camel-ai/CAMEL-33B-Combined-Data": ModelType.IFT, - "PygmalionAI/pygmalion-6b": ModelType.FT, - "PygmalionAI/metharme-1.3b": ModelType.IFT, - "PygmalionAI/pygmalion-1.3b": ModelType.FT, - "PygmalionAI/pygmalion-350m": ModelType.FT, - "PygmalionAI/pygmalion-2.7b": ModelType.FT, - "medalpaca/medalpaca-7b": ModelType.FT, - "lilloukas/Platypus-30B": ModelType.IFT, - "lilloukas/GPlatty-30B": ModelType.FT, - "mncai/chatdoctor": ModelType.FT, - "chaoyi-wu/MedLLaMA_13B": ModelType.FT, - "LoupGarou/WizardCoder-Guanaco-15B-V1.0": ModelType.IFT, - "LoupGarou/WizardCoder-Guanaco-15B-V1.1": ModelType.FT, - "hakurei/instruct-12b": ModelType.IFT, - "hakurei/lotus-12B": ModelType.FT, - "shibing624/chinese-llama-plus-13b-hf": ModelType.IFT, - "shibing624/chinese-alpaca-plus-7b-hf": ModelType.IFT, - "shibing624/chinese-alpaca-plus-13b-hf": ModelType.IFT, - "mosaicml/mpt-7b-instruct": ModelType.IFT, - "mosaicml/mpt-30b-chat": ModelType.IFT, - "mosaicml/mpt-7b-storywriter": ModelType.FT, - "mosaicml/mpt-30b-instruct": ModelType.IFT, - "mosaicml/mpt-7b-chat": ModelType.IFT, - "mosaicml/mpt-30b": ModelType.PT, - "Corianas/111m": ModelType.IFT, - "Corianas/Quokka_1.3b": ModelType.IFT, - "Corianas/256_5epoch": ModelType.FT, - "Corianas/Quokka_256m": ModelType.IFT, - "Corianas/Quokka_590m": ModelType.IFT, - "Corianas/gpt-j-6B-Dolly": ModelType.FT, - "Corianas/Quokka_2.7b": ModelType.IFT, - "cyberagent/open-calm-7b": ModelType.FT, - "Aspik101/Nous-Hermes-13b-pl-lora_unload": ModelType.IFT, - "THUDM/chatglm2-6b": ModelType.IFT, - "MetaIX/GPT4-X-Alpasta-30b": ModelType.IFT, - "NYTK/PULI-GPTrio": ModelType.PT, - "EleutherAI/pythia-1.3b": ModelType.PT, - "EleutherAI/pythia-2.8b-deduped": ModelType.PT, - "EleutherAI/gpt-neo-125m": ModelType.PT, - "EleutherAI/pythia-160m": ModelType.PT, - "EleutherAI/gpt-neo-2.7B": ModelType.PT, - "EleutherAI/pythia-1b-deduped": ModelType.PT, - "EleutherAI/pythia-6.7b": ModelType.PT, - "EleutherAI/pythia-70m-deduped": ModelType.PT, - "EleutherAI/gpt-neox-20b": ModelType.PT, - "EleutherAI/pythia-1.4b-deduped": ModelType.PT, - "EleutherAI/pythia-2.7b": ModelType.PT, - "EleutherAI/pythia-6.9b-deduped": ModelType.PT, - "EleutherAI/pythia-70m": ModelType.PT, - "EleutherAI/gpt-j-6b": ModelType.PT, - "EleutherAI/pythia-12b-deduped": ModelType.PT, - "EleutherAI/gpt-neo-1.3B": ModelType.PT, - "EleutherAI/pythia-410m-deduped": ModelType.PT, - "EleutherAI/pythia-160m-deduped": ModelType.PT, - "EleutherAI/polyglot-ko-12.8b": ModelType.PT, - "EleutherAI/pythia-12b": ModelType.PT, - "roneneldan/TinyStories-33M": ModelType.PT, - "roneneldan/TinyStories-28M": ModelType.PT, - "roneneldan/TinyStories-1M": ModelType.PT, - "roneneldan/TinyStories-8M": ModelType.PT, - "roneneldan/TinyStories-3M": ModelType.PT, - "jerryjalapeno/nart-100k-7b": ModelType.FT, - "lmsys/vicuna-13b-v1.3": ModelType.IFT, - 
"lmsys/vicuna-7b-v1.3": ModelType.IFT, - "lmsys/vicuna-13b-v1.1": ModelType.IFT, - "lmsys/vicuna-13b-delta-v1.1": ModelType.IFT, - "lmsys/vicuna-7b-delta-v1.1": ModelType.IFT, - "abhiramtirumala/DialoGPT-sarcastic-medium": ModelType.FT, - "haonan-li/bactrian-x-llama-13b-merged": ModelType.IFT, - "Gryphe/MythoLogic-13b": ModelType.IFT, - "Gryphe/MythoBoros-13b": ModelType.IFT, - "pillowtalks-ai/delta13b": ModelType.FT, - "wannaphong/openthaigpt-0.1.0-beta-full-model_for_open_llm_leaderboard": ModelType.FT, - "bigscience/bloom-7b1": ModelType.PT, - "bigcode/tiny_starcoder_py": ModelType.PT, - "bigcode/starcoderplus": ModelType.FT, - "bigcode/gpt_bigcode-santacoder": ModelType.PT, - "bigcode/starcoder": ModelType.PT, - "Open-Orca/OpenOrca-Preview1-13B": ModelType.IFT, - "microsoft/DialoGPT-large": ModelType.FT, - "microsoft/DialoGPT-small": ModelType.FT, - "microsoft/DialoGPT-medium": ModelType.FT, - "microsoft/CodeGPT-small-py": ModelType.FT, - "Tincando/fiction_story_generator": ModelType.FT, - "Pirr/pythia-13b-deduped-green_devil": ModelType.FT, - "Aeala/GPT4-x-AlpacaDente2-30b": ModelType.FT, - "Aeala/GPT4-x-AlpacaDente-30b": ModelType.FT, - "Aeala/GPT4-x-Alpasta-13b": ModelType.FT, - "Aeala/VicUnlocked-alpaca-30b": ModelType.IFT, - "Tap-M/Luna-AI-Llama2-Uncensored": ModelType.FT, - "illuin/test-custom-llama": ModelType.FT, - "dvruette/oasst-llama-13b-2-epochs": ModelType.FT, - "dvruette/oasst-gpt-neox-20b-1000-steps": ModelType.FT, - "dvruette/llama-13b-pretrained-dropout": ModelType.PT, - "dvruette/llama-13b-pretrained": ModelType.PT, - "dvruette/llama-13b-pretrained-sft-epoch-1": ModelType.FT, - "dvruette/llama-13b-pretrained-sft-do2": ModelType.FT, - "dvruette/oasst-gpt-neox-20b-3000-steps": ModelType.FT, - "dvruette/oasst-pythia-12b-pretrained-sft": ModelType.FT, - "dvruette/oasst-pythia-6.9b-4000-steps": ModelType.FT, - "dvruette/gpt-neox-20b-full-precision": ModelType.FT, - "dvruette/oasst-llama-13b-1000-steps": ModelType.FT, - "openlm-research/open_llama_7b_700bt_preview": ModelType.PT, - "openlm-research/open_llama_7b": ModelType.PT, - "openlm-research/open_llama_7b_v2": ModelType.PT, - "openlm-research/open_llama_3b": ModelType.PT, - "openlm-research/open_llama_13b": ModelType.PT, - "openlm-research/open_llama_3b_v2": ModelType.PT, - "PocketDoc/Dans-PileOfSets-Mk1-llama-13b-merged": ModelType.IFT, - "GeorgiaTechResearchInstitute/galpaca-30b": ModelType.IFT, - "GeorgiaTechResearchInstitute/starcoder-gpteacher-code-instruct": ModelType.IFT, - "databricks/dolly-v2-7b": ModelType.IFT, - "databricks/dolly-v2-3b": ModelType.IFT, - "databricks/dolly-v2-12b": ModelType.IFT, - "Rachneet/gpt2-xl-alpaca": ModelType.FT, - "Locutusque/gpt2-conversational-or-qa": ModelType.FT, - "psyche/kogpt": ModelType.FT, - "NbAiLab/nb-gpt-j-6B-alpaca": ModelType.IFT, - "Mikael110/llama-2-7b-guanaco-fp16": ModelType.FT, - "Mikael110/llama-2-13b-guanaco-fp16": ModelType.FT, - "Fredithefish/CrimsonPajama": ModelType.IFT, - "Fredithefish/RedPajama-INCITE-Chat-3B-ShareGPT-11K": ModelType.FT, - "Fredithefish/ScarletPajama-3B-HF": ModelType.FT, - "Fredithefish/RedPajama-INCITE-Chat-3B-Instruction-Tuning-with-GPT-4": ModelType.IFT, - "acrastt/RedPajama-INCITE-Chat-Instruct-3B-V1": ModelType.IFT, - "eachadea/vicuna-13b-1.1": ModelType.FT, - "eachadea/vicuna-7b-1.1": ModelType.FT, - "eachadea/vicuna-13b": ModelType.FT, - "openaccess-ai-collective/wizard-mega-13b": ModelType.IFT, - "openaccess-ai-collective/manticore-13b": ModelType.IFT, - "openaccess-ai-collective/manticore-30b-chat-pyg-alpha": ModelType.IFT, - 
"openaccess-ai-collective/minotaur-13b": ModelType.IFT, - "openaccess-ai-collective/minotaur-13b-fixed": ModelType.IFT, - "openaccess-ai-collective/hippogriff-30b-chat": ModelType.IFT, - "openaccess-ai-collective/manticore-13b-chat-pyg": ModelType.IFT, - "pythainlp/wangchanglm-7.5B-sft-enth": ModelType.IFT, - "pythainlp/wangchanglm-7.5B-sft-en-sharded": ModelType.IFT, - "euclaise/gpt-neox-122m-minipile-digits": ModelType.FT, - "stabilityai/StableBeluga1-Delta": ModelType.IFT, - "stabilityai/stablelm-tuned-alpha-7b": ModelType.IFT, - "stabilityai/StableBeluga2": ModelType.IFT, - "stabilityai/StableBeluga-13B": ModelType.IFT, - "stabilityai/StableBeluga-7B": ModelType.IFT, - "stabilityai/stablelm-base-alpha-7b": ModelType.PT, - "stabilityai/stablelm-base-alpha-3b": ModelType.PT, - "stabilityai/stablelm-tuned-alpha-3b": ModelType.IFT, - "alibidaran/medical_transcription_generator": ModelType.FT, - "CalderaAI/30B-Lazarus": ModelType.IFT, - "CalderaAI/13B-BlueMethod": ModelType.IFT, - "CalderaAI/13B-Ouroboros": ModelType.IFT, - "KoboldAI/OPT-13B-Erebus": ModelType.FT, - "KoboldAI/GPT-J-6B-Janeway": ModelType.FT, - "KoboldAI/GPT-J-6B-Shinen": ModelType.FT, - "KoboldAI/fairseq-dense-2.7B": ModelType.PT, - "KoboldAI/OPT-6B-nerys-v2": ModelType.FT, - "KoboldAI/GPT-NeoX-20B-Skein": ModelType.FT, - "KoboldAI/PPO_Pygway-6b-Mix": ModelType.FT, - "KoboldAI/fairseq-dense-6.7B": ModelType.PT, - "KoboldAI/fairseq-dense-125M": ModelType.PT, - "KoboldAI/OPT-13B-Nerybus-Mix": ModelType.FT, - "KoboldAI/OPT-2.7B-Erebus": ModelType.FT, - "KoboldAI/OPT-350M-Nerys-v2": ModelType.FT, - "KoboldAI/OPT-2.7B-Nerys-v2": ModelType.FT, - "KoboldAI/OPT-2.7B-Nerybus-Mix": ModelType.FT, - "KoboldAI/OPT-13B-Nerys-v2": ModelType.FT, - "KoboldAI/GPT-NeoX-20B-Erebus": ModelType.FT, - "KoboldAI/OPT-6.7B-Erebus": ModelType.FT, - "KoboldAI/fairseq-dense-355M": ModelType.PT, - "KoboldAI/OPT-6.7B-Nerybus-Mix": ModelType.FT, - "KoboldAI/GPT-J-6B-Adventure": ModelType.FT, - "KoboldAI/OPT-350M-Erebus": ModelType.FT, - "KoboldAI/GPT-J-6B-Skein": ModelType.FT, - "KoboldAI/OPT-30B-Erebus": ModelType.FT, - "klosax/pythia-160m-deduped-step92k-193bt": ModelType.PT, - "klosax/open_llama_3b_350bt_preview": ModelType.PT, - "klosax/openllama-3b-350bt": ModelType.PT, - "klosax/pythia-70m-deduped-step44k-92bt": ModelType.PT, - "klosax/open_llama_13b_600bt_preview": ModelType.PT, - "klosax/open_llama_7b_400bt_preview": ModelType.PT, - "kfkas/Llama-2-ko-7b-Chat": ModelType.IFT, - "WeOpenML/Alpaca-7B-v1": ModelType.IFT, - "WeOpenML/PandaLM-Alpaca-7B-v1": ModelType.IFT, - "TFLai/gpt2-turkish-uncased": ModelType.FT, - "ehartford/WizardLM-13B-Uncensored": ModelType.IFT, - "ehartford/dolphin-llama-13b": ModelType.IFT, - "ehartford/Wizard-Vicuna-30B-Uncensored": ModelType.FT, - "ehartford/WizardLM-30B-Uncensored": ModelType.IFT, - "ehartford/Wizard-Vicuna-13B-Uncensored": ModelType.FT, - "ehartford/WizardLM-7B-Uncensored": ModelType.IFT, - "ehartford/based-30b": ModelType.FT, - "ehartford/Wizard-Vicuna-7B-Uncensored": ModelType.FT, - "wahaha1987/llama_7b_sharegpt94k_fastchat": ModelType.FT, - "wahaha1987/llama_13b_sharegpt94k_fastchat": ModelType.FT, - "OpenAssistant/oasst-sft-1-pythia-12b": ModelType.FT, - "OpenAssistant/stablelm-7b-sft-v7-epoch-3": ModelType.IFT, - "OpenAssistant/oasst-sft-4-pythia-12b-epoch-3.5": ModelType.FT, - "OpenAssistant/pythia-12b-sft-v8-2.5k-steps": ModelType.IFT, - "OpenAssistant/pythia-12b-sft-v8-7k-steps": ModelType.IFT, - "OpenAssistant/pythia-12b-pre-v8-12.5k-steps": ModelType.IFT, - "OpenAssistant/llama2-13b-orca-8k-3319": 
ModelType.IFT, - "junelee/wizard-vicuna-13b": ModelType.FT, - "BreadAi/gpt-YA-1-1_160M": ModelType.PT, - "BreadAi/MuseCan": ModelType.PT, - "BreadAi/MusePy-1-2": ModelType.PT, - "BreadAi/DiscordPy": ModelType.PT, - "BreadAi/PM_modelV2": ModelType.PT, - "BreadAi/gpt-Youtube": ModelType.PT, - "BreadAi/StoryPy": ModelType.FT, - "julianweng/Llama-2-7b-chat-orcah": ModelType.FT, - "AGI-inc/lora_moe_7b_baseline": ModelType.FT, - "AGI-inc/lora_moe_7b": ModelType.FT, - "togethercomputer/GPT-NeoXT-Chat-Base-20B": ModelType.IFT, - "togethercomputer/RedPajama-INCITE-Chat-7B-v0.1": ModelType.IFT, - "togethercomputer/RedPajama-INCITE-Instruct-7B-v0.1": ModelType.IFT, - "togethercomputer/RedPajama-INCITE-7B-Base": ModelType.PT, - "togethercomputer/RedPajama-INCITE-7B-Instruct": ModelType.IFT, - "togethercomputer/RedPajama-INCITE-Base-3B-v1": ModelType.PT, - "togethercomputer/Pythia-Chat-Base-7B": ModelType.IFT, - "togethercomputer/RedPajama-INCITE-Base-7B-v0.1": ModelType.PT, - "togethercomputer/GPT-JT-6B-v1": ModelType.IFT, - "togethercomputer/GPT-JT-6B-v0": ModelType.IFT, - "togethercomputer/RedPajama-INCITE-Chat-3B-v1": ModelType.IFT, - "togethercomputer/RedPajama-INCITE-7B-Chat": ModelType.IFT, - "togethercomputer/RedPajama-INCITE-Instruct-3B-v1": ModelType.IFT, - "Writer/camel-5b-hf": ModelType.IFT, - "Writer/palmyra-base": ModelType.PT, - "MBZUAI/LaMini-GPT-1.5B": ModelType.IFT, - "MBZUAI/lamini-cerebras-111m": ModelType.IFT, - "MBZUAI/lamini-neo-1.3b": ModelType.IFT, - "MBZUAI/lamini-cerebras-1.3b": ModelType.IFT, - "MBZUAI/lamini-cerebras-256m": ModelType.IFT, - "MBZUAI/LaMini-GPT-124M": ModelType.IFT, - "MBZUAI/lamini-neo-125m": ModelType.IFT, - "TehVenom/DiffMerge-DollyGPT-Pygmalion": ModelType.FT, - "TehVenom/PPO_Shygmalion-6b": ModelType.FT, - "TehVenom/Dolly_Shygmalion-6b-Dev_V8P2": ModelType.FT, - "TehVenom/Pygmalion_AlpacaLora-7b": ModelType.FT, - "TehVenom/PPO_Pygway-V8p4_Dev-6b": ModelType.FT, - "TehVenom/Dolly_Malion-6b": ModelType.FT, - "TehVenom/PPO_Shygmalion-V8p4_Dev-6b": ModelType.FT, - "TehVenom/ChanMalion": ModelType.FT, - "TehVenom/GPT-J-Pyg_PPO-6B": ModelType.IFT, - "TehVenom/Pygmalion-13b-Merged": ModelType.FT, - "TehVenom/Metharme-13b-Merged": ModelType.IFT, - "TehVenom/Dolly_Shygmalion-6b": ModelType.FT, - "TehVenom/GPT-J-Pyg_PPO-6B-Dev-V8p4": ModelType.IFT, - "georgesung/llama2_7b_chat_uncensored": ModelType.FT, - "vicgalle/gpt2-alpaca": ModelType.IFT, - "vicgalle/alpaca-7b": ModelType.FT, - "vicgalle/gpt2-alpaca-gpt4": ModelType.IFT, - "facebook/opt-350m": ModelType.PT, - "facebook/opt-125m": ModelType.PT, - "facebook/xglm-4.5B": ModelType.PT, - "facebook/opt-2.7b": ModelType.PT, - "facebook/opt-6.7b": ModelType.PT, - "facebook/galactica-30b": ModelType.PT, - "facebook/opt-13b": ModelType.PT, - "facebook/opt-66b": ModelType.PT, - "facebook/xglm-7.5B": ModelType.PT, - "facebook/xglm-564M": ModelType.PT, - "facebook/opt-30b": ModelType.PT, - "golaxy/gogpt-7b": ModelType.FT, - "golaxy/gogpt2-7b": ModelType.FT, - "golaxy/gogpt-7b-bloom": ModelType.FT, - "golaxy/gogpt-3b-bloom": ModelType.FT, - "psmathur/orca_mini_v2_7b": ModelType.IFT, - "psmathur/orca_mini_7b": ModelType.IFT, - "psmathur/orca_mini_3b": ModelType.IFT, - "psmathur/orca_mini_v2_13b": ModelType.IFT, - "gpt2-xl": ModelType.PT, - "lxe/Cerebras-GPT-2.7B-Alpaca-SP": ModelType.FT, - "Monero/Manticore-13b-Chat-Pyg-Guanaco": ModelType.FT, - "Monero/WizardLM-Uncensored-SuperCOT-StoryTelling-30b": ModelType.IFT, - "Monero/WizardLM-13b-OpenAssistant-Uncensored": ModelType.IFT, - 
"Monero/WizardLM-30B-Uncensored-Guanaco-SuperCOT-30b": ModelType.IFT, - "jzjiao/opt-1.3b-rlhf": ModelType.FT, - "HuggingFaceH4/starchat-beta": ModelType.IFT, - "KnutJaegersberg/gpt-2-xl-EvolInstruct": ModelType.IFT, - "KnutJaegersberg/megatron-GPT-2-345m-EvolInstruct": ModelType.IFT, - "KnutJaegersberg/galactica-orca-wizardlm-1.3b": ModelType.IFT, - "openchat/openchat_8192": ModelType.IFT, - "openchat/openchat_v2": ModelType.IFT, - "openchat/openchat_v2_w": ModelType.IFT, - "ausboss/llama-13b-supercot": ModelType.IFT, - "ausboss/llama-30b-supercot": ModelType.IFT, - "Neko-Institute-of-Science/metharme-7b": ModelType.IFT, - "Neko-Institute-of-Science/pygmalion-7b": ModelType.FT, - "SebastianSchramm/Cerebras-GPT-111M-instruction": ModelType.IFT, - "victor123/WizardLM-13B-1.0": ModelType.IFT, - "OpenBuddy/openbuddy-openllama-13b-v7-fp16": ModelType.FT, - "OpenBuddy/openbuddy-llama2-13b-v8.1-fp16": ModelType.FT, - "OpenBuddyEA/openbuddy-llama-30b-v7.1-bf16": ModelType.FT, - "baichuan-inc/Baichuan-7B": ModelType.PT, - "tiiuae/falcon-40b-instruct": ModelType.IFT, - "tiiuae/falcon-40b": ModelType.PT, - "tiiuae/falcon-7b": ModelType.PT, - "YeungNLP/firefly-llama-13b": ModelType.FT, - "YeungNLP/firefly-llama-13b-v1.2": ModelType.FT, - "YeungNLP/firefly-llama2-13b": ModelType.FT, - "YeungNLP/firefly-ziya-13b": ModelType.FT, - "shaohang/Sparse0.5_OPT-1.3": ModelType.FT, - "xzuyn/Alpacino-SuperCOT-13B": ModelType.IFT, - "xzuyn/MedicWizard-7B": ModelType.FT, - "xDAN-AI/xDAN_13b_l2_lora": ModelType.FT, - "beomi/KoAlpaca-Polyglot-5.8B": ModelType.FT, - "beomi/llama-2-ko-7b": ModelType.IFT, - "Salesforce/codegen-6B-multi": ModelType.PT, - "Salesforce/codegen-16B-nl": ModelType.PT, - "Salesforce/codegen-6B-nl": ModelType.PT, - "ai-forever/rugpt3large_based_on_gpt2": ModelType.FT, - "gpt2-large": ModelType.PT, - "frank098/orca_mini_3b_juniper": ModelType.FT, - "frank098/WizardLM_13B_juniper": ModelType.FT, - "FPHam/Free_Sydney_13b_HF": ModelType.FT, - "huggingface/llama-13b": ModelType.PT, - "huggingface/llama-7b": ModelType.PT, - "huggingface/llama-65b": ModelType.PT, - "huggingface/llama-30b": ModelType.PT, - "Henk717/chronoboros-33B": ModelType.IFT, - "jondurbin/airoboros-13b-gpt4-1.4": ModelType.IFT, - "jondurbin/airoboros-7b": ModelType.IFT, - "jondurbin/airoboros-7b-gpt4": ModelType.IFT, - "jondurbin/airoboros-7b-gpt4-1.1": ModelType.IFT, - "jondurbin/airoboros-7b-gpt4-1.2": ModelType.IFT, - "jondurbin/airoboros-7b-gpt4-1.3": ModelType.IFT, - "jondurbin/airoboros-7b-gpt4-1.4": ModelType.IFT, - "jondurbin/airoboros-l2-7b-gpt4-1.4.1": ModelType.IFT, - "jondurbin/airoboros-l2-13b-gpt4-1.4.1": ModelType.IFT, - "jondurbin/airoboros-l2-70b-gpt4-1.4.1": ModelType.IFT, - "jondurbin/airoboros-13b": ModelType.IFT, - "jondurbin/airoboros-33b-gpt4-1.4": ModelType.IFT, - "jondurbin/airoboros-33b-gpt4-1.2": ModelType.IFT, - "jondurbin/airoboros-65b-gpt4-1.2": ModelType.IFT, - "ariellee/SuperPlatty-30B": ModelType.IFT, - "danielhanchen/open_llama_3b_600bt_preview": ModelType.FT, - "cerebras/Cerebras-GPT-256M": ModelType.PT, - "cerebras/Cerebras-GPT-1.3B": ModelType.PT, - "cerebras/Cerebras-GPT-13B": ModelType.PT, - "cerebras/Cerebras-GPT-2.7B": ModelType.PT, - "cerebras/Cerebras-GPT-111M": ModelType.PT, - "cerebras/Cerebras-GPT-6.7B": ModelType.PT, - "Yhyu13/oasst-rlhf-2-llama-30b-7k-steps-hf": ModelType.RL, - "Yhyu13/llama-30B-hf-openassitant": ModelType.FT, - "NousResearch/Nous-Hermes-Llama2-13b": ModelType.IFT, - "NousResearch/Nous-Hermes-llama-2-7b": ModelType.IFT, - "NousResearch/Redmond-Puffin-13B": 
ModelType.IFT, - "NousResearch/Nous-Hermes-13b": ModelType.IFT, - "project-baize/baize-v2-7b": ModelType.IFT, - "project-baize/baize-v2-13b": ModelType.IFT, - "LLMs/WizardLM-13B-V1.0": ModelType.FT, - "LLMs/AlpacaGPT4-7B-elina": ModelType.FT, - "wenge-research/yayi-7b": ModelType.FT, - "wenge-research/yayi-7b-llama2": ModelType.FT, - "wenge-research/yayi-13b-llama2": ModelType.FT, - "yhyhy3/open_llama_7b_v2_med_instruct": ModelType.IFT, - "llama-anon/instruct-13b": ModelType.IFT, - "huggingtweets/jerma985": ModelType.FT, - "huggingtweets/gladosystem": ModelType.FT, - "huggingtweets/bladeecity-jerma985": ModelType.FT, - "huggyllama/llama-13b": ModelType.PT, - "huggyllama/llama-65b": ModelType.PT, - "FabbriSimo01/Facebook_opt_1.3b_Quantized": ModelType.PT, - "upstage/Llama-2-70b-instruct": ModelType.IFT, - "upstage/Llama-2-70b-instruct-1024": ModelType.IFT, - "upstage/llama-65b-instruct": ModelType.IFT, - "upstage/llama-30b-instruct-2048": ModelType.IFT, - "upstage/llama-30b-instruct": ModelType.IFT, - "WizardLM/WizardLM-13B-1.0": ModelType.IFT, - "WizardLM/WizardLM-13B-V1.1": ModelType.IFT, - "WizardLM/WizardLM-13B-V1.2": ModelType.IFT, - "WizardLM/WizardLM-30B-V1.0": ModelType.IFT, - "WizardLM/WizardCoder-15B-V1.0": ModelType.IFT, - "gpt2": ModelType.PT, - "keyfan/vicuna-chinese-replication-v1.1": ModelType.IFT, - "nthngdy/pythia-owt2-70m-100k": ModelType.FT, - "nthngdy/pythia-owt2-70m-50k": ModelType.FT, - "quantumaikr/KoreanLM-hf": ModelType.FT, - "quantumaikr/open_llama_7b_hf": ModelType.FT, - "quantumaikr/QuantumLM-70B-hf": ModelType.IFT, - "MayaPH/FinOPT-Lincoln": ModelType.FT, - "MayaPH/FinOPT-Franklin": ModelType.FT, - "MayaPH/GodziLLa-30B": ModelType.IFT, - "MayaPH/GodziLLa-30B-plus": ModelType.IFT, - "MayaPH/FinOPT-Washington": ModelType.FT, - "ogimgio/gpt-neo-125m-neurallinguisticpioneers": ModelType.FT, - "layoric/llama-2-13b-code-alpaca": ModelType.FT, - "CobraMamba/mamba-gpt-3b": ModelType.FT, - "CobraMamba/mamba-gpt-3b-v2": ModelType.FT, - "CobraMamba/mamba-gpt-3b-v3": ModelType.FT, - "timdettmers/guanaco-33b-merged": ModelType.FT, - "elinas/chronos-33b": ModelType.IFT, - "heegyu/RedTulu-Uncensored-3B-0719": ModelType.IFT, - "heegyu/WizardVicuna-Uncensored-3B-0719": ModelType.IFT, - "heegyu/WizardVicuna-3B-0719": ModelType.IFT, - "meta-llama/Llama-2-7b-chat-hf": ModelType.RL, - "meta-llama/Llama-2-7b-hf": ModelType.PT, - "meta-llama/Llama-2-13b-chat-hf": ModelType.RL, - "meta-llama/Llama-2-13b-hf": ModelType.PT, - "meta-llama/Llama-2-70b-chat-hf": ModelType.RL, - "meta-llama/Llama-2-70b-hf": ModelType.PT, - "xhyi/PT_GPTNEO350_ATG": ModelType.FT, - "h2oai/h2ogpt-gm-oasst1-en-1024-20b": ModelType.FT, - "h2oai/h2ogpt-gm-oasst1-en-1024-open-llama-7b-preview-400bt": ModelType.FT, - "h2oai/h2ogpt-oig-oasst1-512-6_9b": ModelType.IFT, - "h2oai/h2ogpt-oasst1-512-12b": ModelType.IFT, - "h2oai/h2ogpt-oig-oasst1-256-6_9b": ModelType.IFT, - "h2oai/h2ogpt-gm-oasst1-en-2048-open-llama-7b-preview-300bt": ModelType.FT, - "h2oai/h2ogpt-oasst1-512-20b": ModelType.IFT, - "h2oai/h2ogpt-gm-oasst1-en-2048-open-llama-7b-preview-300bt-v2": ModelType.FT, - "h2oai/h2ogpt-gm-oasst1-en-1024-12b": ModelType.FT, - "h2oai/h2ogpt-gm-oasst1-multilang-1024-20b": ModelType.FT, - "bofenghuang/vigogne-13b-instruct": ModelType.IFT, - "bofenghuang/vigogne-13b-chat": ModelType.FT, - "bofenghuang/vigogne-2-7b-instruct": ModelType.IFT, - "bofenghuang/vigogne-7b-instruct": ModelType.IFT, - "bofenghuang/vigogne-7b-chat": ModelType.FT, - "Vmware/open-llama-7b-v2-open-instruct": ModelType.IFT, - 
"VMware/open-llama-0.7T-7B-open-instruct-v1.1": ModelType.IFT, - "ewof/koishi-instruct-3b": ModelType.IFT, - "gywy/llama2-13b-chinese-v1": ModelType.FT, - "GOAT-AI/GOAT-7B-Community": ModelType.FT, - "psyche/kollama2-7b": ModelType.FT, - "TheTravellingEngineer/llama2-7b-hf-guanaco": ModelType.FT, - "beaugogh/pythia-1.4b-deduped-sharegpt": ModelType.FT, - "augtoma/qCammel-70-x": ModelType.IFT, - "Lajonbot/Llama-2-7b-chat-hf-instruct-pl-lora_unload": ModelType.IFT, - "anhnv125/pygmalion-6b-roleplay": ModelType.FT, - "64bits/LexPodLM-13B": ModelType.FT, -} - - -def model_type_from_str(type): - if "fine-tuned" in type or "🔶" in type: - return ModelType.FT - if "pretrained" in type or "🟢" in type: - return ModelType.PT - if "RL-tuned" in type or "🟦" in type: - return ModelType.RL - if "instruction-tuned" in type or "⭕" in type: - return ModelType.IFT - return ModelType.Unknown diff --git a/spaces/guymorlan/Arabic2Taatik/README.md b/spaces/guymorlan/Arabic2Taatik/README.md deleted file mode 100644 index 367054309a3d7173ed712f938d975b01ded5c535..0000000000000000000000000000000000000000 --- a/spaces/guymorlan/Arabic2Taatik/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Arabic2Taatik -emoji: 📈 -colorFrom: green -colorTo: red -sdk: gradio -sdk_version: 3.35.2 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/h2oai/wave-tour/examples/file_upload_compact.py b/spaces/h2oai/wave-tour/examples/file_upload_compact.py deleted file mode 100644 index 7090652ecb29e20447036146457d6c097f85bccf..0000000000000000000000000000000000000000 --- a/spaces/h2oai/wave-tour/examples/file_upload_compact.py +++ /dev/null @@ -1,24 +0,0 @@ -# Form / File Upload / Compact -# Use a compact file #upload component to take less form space. -# #form #file_upload #compact -# --- -from h2o_wave import main, app, Q, ui - - -@app('/demo') -async def serve(q: Q): - if 'file_upload' in q.args: - q.page['example'] = ui.form_card(box='1 1 4 10', items=[ - ui.text(f'file_upload={q.args.file_upload}'), - ui.button(name='show_upload', label='Back', primary=True), - ]) - else: - q.page['example'] = ui.form_card( - box='1 1 4 7', - items=[ - ui.file_upload(name='file_upload', label='Select one or more files to upload', compact=True, - multiple=True, file_extensions=['jpg', 'png'], max_file_size=1, max_size=15), - ui.button(name='submit', label='Submit', primary=True) - ] - ) - await q.page.save() diff --git a/spaces/h2oai/wave-tour/examples/plot_line.py b/spaces/h2oai/wave-tour/examples/plot_line.py deleted file mode 100644 index 68c5a701e726213796ea6ebf5c0c624995fa2615..0000000000000000000000000000000000000000 --- a/spaces/h2oai/wave-tour/examples/plot_line.py +++ /dev/null @@ -1,25 +0,0 @@ -# Plot / Line -# Make a line #plot. 
-# --- -from h2o_wave import site, data, ui - -page = site['/demo'] - -page.add('example', ui.plot_card( - box='1 1 4 5', - title='Line', - data=data('year value', 8, rows=[ - ('1991', 3), - ('1992', 4), - ('1993', 3.5), - ('1994', 5), - ('1995', 4.9), - ('1996', 6), - ('1997', 7), - ('1998', 9), - ('1999', 13), - ]), - plot=ui.plot([ui.mark(type='line', x_scale='time', x='=year', y='=value', y_min=0)]) -)) - -page.save() diff --git a/spaces/haakohu/deep_privacy2_face/sg3_torch_utils/ops/conv2d_gradfix.py b/spaces/haakohu/deep_privacy2_face/sg3_torch_utils/ops/conv2d_gradfix.py deleted file mode 100644 index e66591f19fad68760d3df7c9737a14574b70ee83..0000000000000000000000000000000000000000 --- a/spaces/haakohu/deep_privacy2_face/sg3_torch_utils/ops/conv2d_gradfix.py +++ /dev/null @@ -1,175 +0,0 @@ -# Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved. -# -# NVIDIA CORPORATION and its licensors retain all intellectual property -# and proprietary rights in and to this software, related documentation -# and any modifications thereto. Any use, reproduction, disclosure or -# distribution of this software and related documentation without an express -# license agreement from NVIDIA CORPORATION is strictly prohibited. - -"""Custom replacement for `torch.nn.functional.conv2d` that supports -arbitrarily high order gradients with zero performance penalty.""" - -import warnings -import contextlib -import torch -from torch.cuda.amp import custom_bwd, custom_fwd - -# pylint: disable=redefined-builtin -# pylint: disable=arguments-differ -# pylint: disable=protected-access - -#---------------------------------------------------------------------------- - -enabled = False # Enable the custom op by setting this to true. -weight_gradients_disabled = False # Forcefully disable computation of gradients with respect to the weights. 
- -@contextlib.contextmanager -def no_weight_gradients(): - global weight_gradients_disabled - old = weight_gradients_disabled - weight_gradients_disabled = True - yield - weight_gradients_disabled = old - -#---------------------------------------------------------------------------- - -def conv2d(input, weight, bias=None, stride=1, padding=0, dilation=1, groups=1): - if _should_use_custom_op(input): - return _conv2d_gradfix(transpose=False, weight_shape=weight.shape, stride=stride, padding=padding, output_padding=0, dilation=dilation, groups=groups).apply(input, weight, bias) - return torch.nn.functional.conv2d(input=input, weight=weight, bias=bias, stride=stride, padding=padding, dilation=dilation, groups=groups) - -def conv_transpose2d(input, weight, bias=None, stride=1, padding=0, output_padding=0, groups=1, dilation=1): - if _should_use_custom_op(input): - return _conv2d_gradfix(transpose=True, weight_shape=weight.shape, stride=stride, padding=padding, output_padding=output_padding, groups=groups, dilation=dilation).apply(input, weight, bias) - return torch.nn.functional.conv_transpose2d(input=input, weight=weight, bias=bias, stride=stride, padding=padding, output_padding=output_padding, groups=groups, dilation=dilation) - -#---------------------------------------------------------------------------- - -def _should_use_custom_op(input): - assert isinstance(input, torch.Tensor) - if (not enabled) or (not torch.backends.cudnn.enabled): - return False - if input.device.type != 'cuda': - return False - if any(torch.__version__.startswith(x) for x in ['1.7.', '1.8.', '1.9', '1.10']): - return True - warnings.warn(f'conv2d_gradfix not supported on PyTorch {torch.__version__}. Falling back to torch.nn.functional.conv2d().') - return False - -def _tuple_of_ints(xs, ndim): - xs = tuple(xs) if isinstance(xs, (tuple, list)) else (xs,) * ndim - assert len(xs) == ndim - assert all(isinstance(x, int) for x in xs) - return xs - -#---------------------------------------------------------------------------- - -_conv2d_gradfix_cache = dict() - -def _conv2d_gradfix(transpose, weight_shape, stride, padding, output_padding, dilation, groups): - # Parse arguments. - ndim = 2 - weight_shape = tuple(weight_shape) - stride = _tuple_of_ints(stride, ndim) - padding = _tuple_of_ints(padding, ndim) - output_padding = _tuple_of_ints(output_padding, ndim) - dilation = _tuple_of_ints(dilation, ndim) - - # Lookup from cache. - key = (transpose, weight_shape, stride, padding, output_padding, dilation, groups) - if key in _conv2d_gradfix_cache: - return _conv2d_gradfix_cache[key] - - # Validate arguments. - assert groups >= 1 - assert len(weight_shape) == ndim + 2 - assert all(stride[i] >= 1 for i in range(ndim)) - assert all(padding[i] >= 0 for i in range(ndim)) - assert all(dilation[i] >= 0 for i in range(ndim)) - if not transpose: - assert all(output_padding[i] == 0 for i in range(ndim)) - else: # transpose - assert all(0 <= output_padding[i] < max(stride[i], dilation[i]) for i in range(ndim)) - - # Helpers. - common_kwargs = dict(stride=stride, padding=padding, dilation=dilation, groups=groups) - def calc_output_padding(input_shape, output_shape): - if transpose: - return [0, 0] - return [ - input_shape[i + 2] - - (output_shape[i + 2] - 1) * stride[i] - - (1 - 2 * padding[i]) - - dilation[i] * (weight_shape[i + 2] - 1) - for i in range(ndim) - ] - - # Forward & backward. 
- class Conv2d(torch.autograd.Function): - @staticmethod - @custom_fwd(cast_inputs=torch.float16) - def forward(ctx, input, weight, bias): - assert weight.shape == weight_shape - if not transpose: - output = torch.nn.functional.conv2d(input=input, weight=weight, bias=bias, **common_kwargs) - else: # transpose - output = torch.nn.functional.conv_transpose2d(input=input, weight=weight, bias=bias, output_padding=output_padding, **common_kwargs) - ctx.save_for_backward(input, weight) - return output - - @staticmethod - @custom_bwd - def backward(ctx, grad_output): - input, weight = ctx.saved_tensors - grad_input = None - grad_weight = None - grad_bias = None - - if ctx.needs_input_grad[0]: - p = calc_output_padding(input_shape=input.shape, output_shape=grad_output.shape) - grad_input = _conv2d_gradfix(transpose=(not transpose), weight_shape=weight_shape, output_padding=p, **common_kwargs).apply(grad_output.float(), weight.float(), None) - assert grad_input.shape == input.shape - - if ctx.needs_input_grad[1] and not weight_gradients_disabled: - grad_weight = Conv2dGradWeight.apply(grad_output.float(), input.float()) - assert grad_weight.shape == weight_shape - - if ctx.needs_input_grad[2]: - grad_bias = grad_output.float().sum([0, 2, 3]) - - return grad_input, grad_weight, grad_bias - - # Gradient with respect to the weights. - class Conv2dGradWeight(torch.autograd.Function): - @staticmethod - @custom_fwd(cast_inputs=torch.float16) - def forward(ctx, grad_output, input): - op = torch._C._jit_get_operation('aten::cudnn_convolution_backward_weight' if not transpose else 'aten::cudnn_convolution_transpose_backward_weight') - flags = [torch.backends.cudnn.benchmark, torch.backends.cudnn.deterministic, torch.backends.cudnn.allow_tf32] - grad_weight = op(weight_shape, grad_output, input, padding, stride, dilation, groups, *flags) - assert grad_weight.shape == weight_shape - ctx.save_for_backward(grad_output, input) - return grad_weight - - @staticmethod - @custom_bwd - def backward(ctx, grad2_grad_weight): - grad_output, input = ctx.saved_tensors - grad2_grad_output = None - grad2_input = None - - if ctx.needs_input_grad[0]: - grad2_grad_output = Conv2d.apply(input, grad2_grad_weight, None) - assert grad2_grad_output.shape == grad_output.shape - - if ctx.needs_input_grad[1]: - p = calc_output_padding(input_shape=input.shape, output_shape=grad_output.shape) - grad2_input = _conv2d_gradfix(transpose=(not transpose), weight_shape=weight_shape, output_padding=p, **common_kwargs).apply(grad_output, grad2_grad_weight, None) - assert grad2_input.shape == input.shape - - return grad2_grad_output, grad2_input - - _conv2d_gradfix_cache[key] = Conv2d - return Conv2d - -#---------------------------------------------------------------------------- diff --git a/spaces/hamelcubsfan/AutoGPT/autogpt/memory/__init__.py b/spaces/hamelcubsfan/AutoGPT/autogpt/memory/__init__.py deleted file mode 100644 index 3d18704c70dfc287642b1923e6f2e1f72a5f2a62..0000000000000000000000000000000000000000 --- a/spaces/hamelcubsfan/AutoGPT/autogpt/memory/__init__.py +++ /dev/null @@ -1,99 +0,0 @@ -from autogpt.memory.local import LocalCache -from autogpt.memory.no_memory import NoMemory - -# List of supported memory backends -# Add a backend to this list if the import attempt is successful -supported_memory = ["local", "no_memory"] - -try: - from autogpt.memory.redismem import RedisMemory - - supported_memory.append("redis") -except ImportError: - # print("Redis not installed. 
Skipping import.") - RedisMemory = None - -try: - from autogpt.memory.pinecone import PineconeMemory - - supported_memory.append("pinecone") -except ImportError: - # print("Pinecone not installed. Skipping import.") - PineconeMemory = None - -try: - from autogpt.memory.weaviate import WeaviateMemory - - supported_memory.append("weaviate") -except ImportError: - # print("Weaviate not installed. Skipping import.") - WeaviateMemory = None - -try: - from autogpt.memory.milvus import MilvusMemory - - supported_memory.append("milvus") -except ImportError: - # print("pymilvus not installed. Skipping import.") - MilvusMemory = None - - -def get_memory(cfg, init=False): - memory = None - if cfg.memory_backend == "pinecone": - if not PineconeMemory: - print( - "Error: Pinecone is not installed. Please install pinecone" - " to use Pinecone as a memory backend." - ) - else: - memory = PineconeMemory(cfg) - if init: - memory.clear() - elif cfg.memory_backend == "redis": - if not RedisMemory: - print( - "Error: Redis is not installed. Please install redis-py to" - " use Redis as a memory backend." - ) - else: - memory = RedisMemory(cfg) - elif cfg.memory_backend == "weaviate": - if not WeaviateMemory: - print( - "Error: Weaviate is not installed. Please install weaviate-client to" - " use Weaviate as a memory backend." - ) - else: - memory = WeaviateMemory(cfg) - elif cfg.memory_backend == "milvus": - if not MilvusMemory: - print( - "Error: Milvus sdk is not installed." - "Please install pymilvus to use Milvus as memory backend." - ) - else: - memory = MilvusMemory(cfg) - elif cfg.memory_backend == "no_memory": - memory = NoMemory(cfg) - - if memory is None: - memory = LocalCache(cfg) - if init: - memory.clear() - return memory - - -def get_supported_memory_backends(): - return supported_memory - - -__all__ = [ - "get_memory", - "LocalCache", - "RedisMemory", - "PineconeMemory", - "NoMemory", - "MilvusMemory", - "WeaviateMemory", -] diff --git a/spaces/hanzportgas/rvc-models/vc_infer_pipeline.py b/spaces/hanzportgas/rvc-models/vc_infer_pipeline.py deleted file mode 100644 index c26d45068f9b6bf2b194b13c3c89f8a06347c124..0000000000000000000000000000000000000000 --- a/spaces/hanzportgas/rvc-models/vc_infer_pipeline.py +++ /dev/null @@ -1,306 +0,0 @@ -import numpy as np, parselmouth, torch, pdb -from time import time as ttime -import torch.nn.functional as F -from config import x_pad, x_query, x_center, x_max -import scipy.signal as signal -import pyworld, os, traceback, faiss -from scipy import signal - -bh, ah = signal.butter(N=5, Wn=48, btype="high", fs=16000) - - -class VC(object): - def __init__(self, tgt_sr, device, is_half): - self.sr = 16000 # hubert输入采样率 - self.window = 160 # 每帧点数 - self.t_pad = self.sr * x_pad # 每条前后pad时间 - self.t_pad_tgt = tgt_sr * x_pad - self.t_pad2 = self.t_pad * 2 - self.t_query = self.sr * x_query # 查询切点前后查询时间 - self.t_center = self.sr * x_center # 查询切点位置 - self.t_max = self.sr * x_max # 免查询时长阈值 - self.device = device - self.is_half = is_half - - def get_f0(self, x, p_len, f0_up_key, f0_method, inp_f0=None): - time_step = self.window / self.sr * 1000 - f0_min = 50 - f0_max = 1100 - f0_mel_min = 1127 * np.log(1 + f0_min / 700) - f0_mel_max = 1127 * np.log(1 + f0_max / 700) - if f0_method == "pm": - f0 = ( - parselmouth.Sound(x, self.sr) - .to_pitch_ac( - time_step=time_step / 1000, - voicing_threshold=0.6, - pitch_floor=f0_min, - pitch_ceiling=f0_max, - ) - .selected_array["frequency"] - ) - pad_size = (p_len - len(f0) + 1) // 2 - if pad_size > 0 or p_len - len(f0) - pad_size 
> 0: - f0 = np.pad( - f0, [[pad_size, p_len - len(f0) - pad_size]], mode="constant" - ) - elif f0_method == "harvest": - f0, t = pyworld.harvest( - x.astype(np.double), - fs=self.sr, - f0_ceil=f0_max, - f0_floor=f0_min, - frame_period=10, - ) - f0 = pyworld.stonemask(x.astype(np.double), f0, t, self.sr) - f0 = signal.medfilt(f0, 3) - f0 *= pow(2, f0_up_key / 12) - # with open("test.txt","w")as f:f.write("\n".join([str(i)for i in f0.tolist()])) - tf0 = self.sr // self.window # 每秒f0点数 - if inp_f0 is not None: - delta_t = np.round( - (inp_f0[:, 0].max() - inp_f0[:, 0].min()) * tf0 + 1 - ).astype("int16") - replace_f0 = np.interp( - list(range(delta_t)), inp_f0[:, 0] * 100, inp_f0[:, 1] - ) - shape = f0[x_pad * tf0 : x_pad * tf0 + len(replace_f0)].shape[0] - f0[x_pad * tf0 : x_pad * tf0 + len(replace_f0)] = replace_f0[:shape] - # with open("test_opt.txt","w")as f:f.write("\n".join([str(i)for i in f0.tolist()])) - f0bak = f0.copy() - f0_mel = 1127 * np.log(1 + f0 / 700) - f0_mel[f0_mel > 0] = (f0_mel[f0_mel > 0] - f0_mel_min) * 254 / ( - f0_mel_max - f0_mel_min - ) + 1 - f0_mel[f0_mel <= 1] = 1 - f0_mel[f0_mel > 255] = 255 - f0_coarse = np.rint(f0_mel).astype(np.int) - return f0_coarse, f0bak # 1-0 - - def vc( - self, - model, - net_g, - sid, - audio0, - pitch, - pitchf, - times, - index, - big_npy, - index_rate, - ): # ,file_index,file_big_npy - feats = torch.from_numpy(audio0) - if self.is_half: - feats = feats.half() - else: - feats = feats.float() - if feats.dim() == 2: # double channels - feats = feats.mean(-1) - assert feats.dim() == 1, feats.dim() - feats = feats.view(1, -1) - padding_mask = torch.BoolTensor(feats.shape).to(self.device).fill_(False) - - inputs = { - "source": feats.to(self.device), - "padding_mask": padding_mask, - "output_layer": 9, # layer 9 - } - t0 = ttime() - with torch.no_grad(): - logits = model.extract_features(**inputs) - feats = model.final_proj(logits[0]) - - if ( - isinstance(index, type(None)) == False - and isinstance(big_npy, type(None)) == False - and index_rate != 0 - ): - npy = feats[0].cpu().numpy() - if self.is_half: - npy = npy.astype("float32") - _, I = index.search(npy, 1) - npy = big_npy[I.squeeze()] - if self.is_half: - npy = npy.astype("float16") - feats = ( - torch.from_numpy(npy).unsqueeze(0).to(self.device) * index_rate - + (1 - index_rate) * feats - ) - - feats = F.interpolate(feats.permute(0, 2, 1), scale_factor=2).permute(0, 2, 1) - t1 = ttime() - p_len = audio0.shape[0] // self.window - if feats.shape[1] < p_len: - p_len = feats.shape[1] - if pitch != None and pitchf != None: - pitch = pitch[:, :p_len] - pitchf = pitchf[:, :p_len] - p_len = torch.tensor([p_len], device=self.device).long() - with torch.no_grad(): - if pitch != None and pitchf != None: - audio1 = ( - (net_g.infer(feats, p_len, pitch, pitchf, sid)[0][0, 0] * 32768) - .data.cpu() - .float() - .numpy() - .astype(np.int16) - ) - else: - audio1 = ( - (net_g.infer(feats, p_len, sid)[0][0, 0] * 32768) - .data.cpu() - .float() - .numpy() - .astype(np.int16) - ) - del feats, p_len, padding_mask - if torch.cuda.is_available(): - torch.cuda.empty_cache() - t2 = ttime() - times[0] += t1 - t0 - times[2] += t2 - t1 - return audio1 - - def pipeline( - self, - model, - net_g, - sid, - audio, - times, - f0_up_key, - f0_method, - file_index, - file_big_npy, - index_rate, - if_f0, - f0_file=None, - ): - if ( - file_big_npy != "" - and file_index != "" - and os.path.exists(file_big_npy) == True - and os.path.exists(file_index) == True - and index_rate != 0 - ): - try: - index = 
faiss.read_index(file_index) - big_npy = np.load(file_big_npy) - except: - traceback.print_exc() - index = big_npy = None - else: - index = big_npy = None - print("Feature retrieval library doesn't exist or ratio is 0") - audio = signal.filtfilt(bh, ah, audio) - audio_pad = np.pad(audio, (self.window // 2, self.window // 2), mode="reflect") - opt_ts = [] - if audio_pad.shape[0] > self.t_max: - audio_sum = np.zeros_like(audio) - for i in range(self.window): - audio_sum += audio_pad[i : i - self.window] - for t in range(self.t_center, audio.shape[0], self.t_center): - opt_ts.append( - t - - self.t_query - + np.where( - np.abs(audio_sum[t - self.t_query : t + self.t_query]) - == np.abs(audio_sum[t - self.t_query : t + self.t_query]).min() - )[0][0] - ) - s = 0 - audio_opt = [] - t = None - t1 = ttime() - audio_pad = np.pad(audio, (self.t_pad, self.t_pad), mode="reflect") - p_len = audio_pad.shape[0] // self.window - inp_f0 = None - if hasattr(f0_file, "name") == True: - try: - with open(f0_file.name, "r") as f: - lines = f.read().strip("\n").split("\n") - inp_f0 = [] - for line in lines: - inp_f0.append([float(i) for i in line.split(",")]) - inp_f0 = np.array(inp_f0, dtype="float32") - except: - traceback.print_exc() - sid = torch.tensor(sid, device=self.device).unsqueeze(0).long() - pitch, pitchf = None, None - if if_f0 == 1: - pitch, pitchf = self.get_f0(audio_pad, p_len, f0_up_key, f0_method, inp_f0) - pitch = pitch[:p_len] - pitchf = pitchf[:p_len] - pitch = torch.tensor(pitch, device=self.device).unsqueeze(0).long() - pitchf = torch.tensor(pitchf, device=self.device).unsqueeze(0).float() - t2 = ttime() - times[1] += t2 - t1 - for t in opt_ts: - t = t // self.window * self.window - if if_f0 == 1: - audio_opt.append( - self.vc( - model, - net_g, - sid, - audio_pad[s : t + self.t_pad2 + self.window], - pitch[:, s // self.window : (t + self.t_pad2) // self.window], - pitchf[:, s // self.window : (t + self.t_pad2) // self.window], - times, - index, - big_npy, - index_rate, - )[self.t_pad_tgt : -self.t_pad_tgt] - ) - else: - audio_opt.append( - self.vc( - model, - net_g, - sid, - audio_pad[s : t + self.t_pad2 + self.window], - None, - None, - times, - index, - big_npy, - index_rate, - )[self.t_pad_tgt : -self.t_pad_tgt] - ) - s = t - if if_f0 == 1: - audio_opt.append( - self.vc( - model, - net_g, - sid, - audio_pad[t:], - pitch[:, t // self.window :] if t is not None else pitch, - pitchf[:, t // self.window :] if t is not None else pitchf, - times, - index, - big_npy, - index_rate, - )[self.t_pad_tgt : -self.t_pad_tgt] - ) - else: - audio_opt.append( - self.vc( - model, - net_g, - sid, - audio_pad[t:], - None, - None, - times, - index, - big_npy, - index_rate, - )[self.t_pad_tgt : -self.t_pad_tgt] - ) - audio_opt = np.concatenate(audio_opt) - del pitch, pitchf, sid - if torch.cuda.is_available(): - torch.cuda.empty_cache() - return audio_opt diff --git a/spaces/haryoaw/id-recigen/app.py b/spaces/haryoaw/id-recigen/app.py deleted file mode 100644 index e92aa7fa1b570dc16fdc9b76059498a94cc7bfeb..0000000000000000000000000000000000000000 --- a/spaces/haryoaw/id-recigen/app.py +++ /dev/null @@ -1,78 +0,0 @@ -""" -Main App -""" - -import streamlit as st -from transformers import AutoModelForSeq2SeqLM - -from src.tokenizers import IndoNLGTokenizer - - -@st.cache(allow_output_mutation=True) -def fetch_tokenizer_model(): - """ - Fetch tokenizer and model - """ - tokenizer = IndoNLGTokenizer.from_pretrained("indobenchmark/indobart-v2") - model = 
AutoModelForSeq2SeqLM.from_pretrained("haryoaw/id-recigen-bart") - return tokenizer, model - - -tokenizer, model = fetch_tokenizer_model() - - -def predict_recipe(food: str) -> str: - """ - Predict Ingredients Here! - - Parameters - ---------- - food: str - The food that will be used - - Returns - ------- - str - Return the model here - """ - inp = tokenizer(food.lower(), return_tensors="pt")["input_ids"] - generated = model.generate( - inp, max_length=500, do_sample=False, num_beams=10, num_beam_groups=2 - ) - returned_input: str = tokenizer.decode(generated[0], skip_special_tokens=True) - returned_input = "\n".join([x.strip() for x in returned_input.split("||")]) - return returned_input - - -def create_frontend() -> None: - """ - Create front end streamlit here - """ - st.markdown("# Food Ingredients Generator Indonesia Showcase!") - st.write("🥑Generate your ingredients here!") - - with st.form("my_form"): - food_name = st.text_input( - "Food", value="Nasi Goreng Ayam", help="Input your food here!" - ) - submitted = st.form_submit_button("Submit") - if submitted: - predicted = predict_recipe(food_name) - st.markdown(f"## Bahan ( Ingredients ) `{food_name}`:") - st.text(predicted) - st.markdown("## Additional Note") - st.write( - "❗Please note that the model is trained with the food that use:" - ) - for i, ingr in enumerate(("ayam", "tempe", "ikan", "kambing", "telur", "tahu", "sapi")): - st.write(f"{i+1}. {ingr}") - - st.markdown("## Models") - st.markdown( - "🤗 Huggingface Model: [Link](https://huggingface.co/haryoaw/id-recigen-bart)" - ) - st.write("Thank you 😊") - - -if __name__ == "__main__": - create_frontend() diff --git a/spaces/haseeb-heaven/AutoBard-Coder/CHANGELOGS.md b/spaces/haseeb-heaven/AutoBard-Coder/CHANGELOGS.md deleted file mode 100644 index f558dcdab061c136fc481dd009d1bd1e82eaeb9b..0000000000000000000000000000000000000000 --- a/spaces/haseeb-heaven/AutoBard-Coder/CHANGELOGS.md +++ /dev/null @@ -1,27 +0,0 @@ -# Changelog 📝 - -All notable changes to this project will be documented in this file. -## [1.3] - 2023-05-2 -### Added -- **Updated** with totaly new _UI_ and _UX_. 🎨 -- Updated security for code checking and prompt checking. 🔒 -- Added new Help section. 🆘 -- Fix API Key bugs. 🛠 - -## [1.2] - 2023-05-28 -### Added -- **Advanced security** for code checking and prompt checking. 🔒 -- Support for graphs, charts, and tables. 📊 -- More libraries for data science. 🧬 - -## [1.1] - 2023-05-22 -### Added -- Upload files option. 📤 -- API key settings. 🔑 -### Fixed -- Error handling from server. 🛠 - -## [1.0] - 2023-05-21 -### Added -- Auto barcode generator. 🏷 -- Auto barcode interpreter. 
🔎 diff --git a/spaces/hasibzunair/fifa-tryon-demo/rembg/session_factory.py b/spaces/hasibzunair/fifa-tryon-demo/rembg/session_factory.py deleted file mode 100644 index 0f96a0580e6877f449a58bb0e7b19e64c28c609b..0000000000000000000000000000000000000000 --- a/spaces/hasibzunair/fifa-tryon-demo/rembg/session_factory.py +++ /dev/null @@ -1,63 +0,0 @@ -import hashlib -import os -import sys -from contextlib import redirect_stdout -from pathlib import Path -from typing import Type - -import gdown -import onnxruntime as ort - -from .session_base import BaseSession -from .session_cloth import ClothSession -from .session_simple import SimpleSession - - -def new_session(model_name: str) -> BaseSession: - session_class: Type[BaseSession] - - if model_name == "u2netp": - md5 = "8e83ca70e441ab06c318d82300c84806" - url = "https://drive.google.com/uc?id=1tNuFmLv0TSNDjYIkjEdeH1IWKQdUA4HR" - session_class = SimpleSession - elif model_name == "u2net": - md5 = "60024c5c889badc19c04ad937298a77b" - url = "https://drive.google.com/uc?id=1tCU5MM1LhRgGou5OpmpjBQbSrYIUoYab" - session_class = SimpleSession - elif model_name == "u2net_human_seg": - md5 = "c09ddc2e0104f800e3e1bb4652583d1f" - url = "https://drive.google.com/uc?id=1ZfqwVxu-1XWC1xU1GHIP-FM_Knd_AX5j" - session_class = SimpleSession - elif model_name == "u2net_cloth_seg": - md5 = "2434d1f3cb744e0e49386c906e5a08bb" - url = "https://drive.google.com/uc?id=15rKbQSXQzrKCQurUjZFg8HqzZad8bcyz" - session_class = ClothSession - else: - assert AssertionError( - "Choose between u2net, u2netp, u2net_human_seg or u2net_cloth_seg" - ) - - home = os.getenv("U2NET_HOME", os.path.join("~", ".u2net")) - path = Path(home).expanduser() / f"{model_name}.onnx" - path.parents[0].mkdir(parents=True, exist_ok=True) - - if not path.exists(): - with redirect_stdout(sys.stderr): - gdown.download(url, str(path), use_cookies=False) - else: - hashing = hashlib.new("md5", path.read_bytes(), usedforsecurity=False) - if hashing.hexdigest() != md5: - with redirect_stdout(sys.stderr): - gdown.download(url, str(path), use_cookies=False) - - sess_opts = ort.SessionOptions() - - if "OMP_NUM_THREADS" in os.environ: - sess_opts.inter_op_num_threads = int(os.environ["OMP_NUM_THREADS"]) - - return session_class( - model_name, - ort.InferenceSession( - str(path), providers=ort.get_available_providers(), sess_options=sess_opts - ), - ) diff --git a/spaces/hca97/Mosquito-Detection/my_models/yolov5_clip_model.py b/spaces/hca97/Mosquito-Detection/my_models/yolov5_clip_model.py deleted file mode 100644 index 22b62bc50261667f0bfc2a3a5f3b0893b607792b..0000000000000000000000000000000000000000 --- a/spaces/hca97/Mosquito-Detection/my_models/yolov5_clip_model.py +++ /dev/null @@ -1,101 +0,0 @@ -import logging -import time - -import torch -import numpy as np - -torch.set_num_threads(2) - -from my_models.clip_model.data_loader import pre_process_foo -from my_models.clip_model.classification import MosquitoClassifier - -IMG_SIZE = (224, 224) -USE_CHANNEL_LAST = False -DATASET = "laion" -DEVICE = "cpu" -PRESERVE_ASPECT_RATIO = False -SHIFT = 0 - - -@torch.no_grad() -def detect_image(model, image: np.ndarray) -> dict: - image_information = {} - result = model(image) - result_df = result.pandas().xyxy[0] - if result_df.empty: - print("No results from yolov5 model!") - else: - image_information = result_df.to_dict() - return image_information - - -@torch.no_grad() -def classify_image(model: MosquitoClassifier, image: np.ndarray, bbox: list) -> tuple: - labels = [ - "albopictus", - "culex", - "japonicus-koreicus", - 
"culiseta", - "anopheles", - "aegypti", - ] - - image_cropped = image[bbox[1] : bbox[3], bbox[0] : bbox[2], :] - x = torch.unsqueeze(pre_process_foo(IMG_SIZE, DATASET)(image_cropped), 0) - x = x.to(device=DEVICE) - p: torch.Tensor = model(x) - ind = torch.argmax(p).item() - label = labels[ind] - return label, p.max().item() - - -def extract_predicted_mosquito_bbox(extractedInformation): - bbox = [] - if extractedInformation is not None: - xmin = int(extractedInformation.get("xmin").get(0)) - ymin = int(extractedInformation.get("ymin").get(0)) - xmax = int(extractedInformation.get("xmax").get(0)) - ymax = int(extractedInformation.get("ymax").get(0)) - bbox = [xmin, ymin, xmax, ymax] - return bbox - - -class YOLOV5CLIPModel: - def __init__(self): - trained_model_path = "my_models/yolo_weights/mosquitoalert-yolov5-baseline.pt" - repo_path = "my_models/torch_hub_cache/yolov5" - self.det = torch.hub.load( - repo_path, - "custom", - path=trained_model_path, - force_reload=True, - source="local", - ) - - clip_model_path = f"my_models/clip_weights/best_clf.ckpt" - self.cls = MosquitoClassifier.load_from_checkpoint( - clip_model_path, head_version=7, map_location=torch.device(DEVICE) - ).eval() - - def predict(self, image: np.ndarray): - s = time.time() - predictedInformation = detect_image(self.det, image) - mosquito_class_name_predicted = "albopictus" - mosquito_class_confidence = 0.0 - mosquito_class_bbox = [0, 0, image.shape[0], image.shape[1]] - - if predictedInformation: - mosquito_class_bbox = extract_predicted_mosquito_bbox(predictedInformation) - - mosquito_class_name_predicted, mosquito_class_confidence = classify_image( - self.cls, image, mosquito_class_bbox - ) - - e = time.time() - - logging.info(f"[PREDICTION] Total time passed {e - s}ms") - return ( - mosquito_class_name_predicted, - mosquito_class_confidence, - mosquito_class_bbox, - ) diff --git a/spaces/hhhhardman/VITS-Umamusume-voice-synthesizer/text/shanghainese.py b/spaces/hhhhardman/VITS-Umamusume-voice-synthesizer/text/shanghainese.py deleted file mode 100644 index cb29c24a08d2e406e8399cf7bc9fe5cb43cb9c61..0000000000000000000000000000000000000000 --- a/spaces/hhhhardman/VITS-Umamusume-voice-synthesizer/text/shanghainese.py +++ /dev/null @@ -1,64 +0,0 @@ -import re -import cn2an -import opencc - - -converter = opencc.OpenCC('zaonhe') - -# List of (Latin alphabet, ipa) pairs: -_latin_to_ipa = [(re.compile('%s' % x[0]), x[1]) for x in [ - ('A', 'ᴇ'), - ('B', 'bi'), - ('C', 'si'), - ('D', 'di'), - ('E', 'i'), - ('F', 'ᴇf'), - ('G', 'dʑi'), - ('H', 'ᴇtɕʰ'), - ('I', 'ᴀi'), - ('J', 'dʑᴇ'), - ('K', 'kʰᴇ'), - ('L', 'ᴇl'), - ('M', 'ᴇm'), - ('N', 'ᴇn'), - ('O', 'o'), - ('P', 'pʰi'), - ('Q', 'kʰiu'), - ('R', 'ᴀl'), - ('S', 'ᴇs'), - ('T', 'tʰi'), - ('U', 'ɦiu'), - ('V', 'vi'), - ('W', 'dᴀbɤliu'), - ('X', 'ᴇks'), - ('Y', 'uᴀi'), - ('Z', 'zᴇ') -]] - - -def _number_to_shanghainese(num): - num = cn2an.an2cn(num).replace('一十','十').replace('二十', '廿').replace('二', '两') - return re.sub(r'((?:^|[^三四五六七八九])十|廿)两', r'\1二', num) - - -def number_to_shanghainese(text): - return re.sub(r'\d+(?:\.?\d+)?', lambda x: _number_to_shanghainese(x.group()), text) - - -def latin_to_ipa(text): - for regex, replacement in _latin_to_ipa: - text = re.sub(regex, replacement, text) - return text - - -def shanghainese_to_ipa(text): - text = number_to_shanghainese(text.upper()) - text = converter.convert(text).replace('-','').replace('$',' ') - text = re.sub(r'[A-Z]', lambda x: latin_to_ipa(x.group())+' ', text) - text = re.sub(r'[、;:]', ',', text) - text = 
re.sub(r'\s*,\s*', ', ', text) - text = re.sub(r'\s*。\s*', '. ', text) - text = re.sub(r'\s*?\s*', '? ', text) - text = re.sub(r'\s*!\s*', '! ', text) - text = re.sub(r'\s*$', '', text) - return text diff --git a/spaces/ho11laqe/nnUNet_calvingfront_detection/nnunet/training/network_training/nnUNet_variants/architectural_variants/nnUNetTrainerV2_lReLU_convlReLUIN.py b/spaces/ho11laqe/nnUNet_calvingfront_detection/nnunet/training/network_training/nnUNet_variants/architectural_variants/nnUNetTrainerV2_lReLU_convlReLUIN.py deleted file mode 100644 index 198f9ef43eedeaa0db5d673e77407d65738fcb9c..0000000000000000000000000000000000000000 --- a/spaces/ho11laqe/nnUNet_calvingfront_detection/nnunet/training/network_training/nnUNet_variants/architectural_variants/nnUNetTrainerV2_lReLU_convlReLUIN.py +++ /dev/null @@ -1,46 +0,0 @@ -# Copyright 2020 Division of Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -import torch -from nnunet.network_architecture.generic_UNet import Generic_UNet, ConvDropoutNonlinNorm -from nnunet.network_architecture.initialization import InitWeights_He -from nnunet.training.network_training.nnUNetTrainerV2 import nnUNetTrainerV2 -from nnunet.utilities.nd_softmax import softmax_helper -from torch import nn - - -class nnUNetTrainerV2_lReLU_convReLUIN(nnUNetTrainerV2): - def initialize_network(self): - if self.threeD: - conv_op = nn.Conv3d - dropout_op = nn.Dropout3d - norm_op = nn.InstanceNorm3d - - else: - conv_op = nn.Conv2d - dropout_op = nn.Dropout2d - norm_op = nn.InstanceNorm2d - - norm_op_kwargs = {'eps': 1e-5, 'affine': True} - dropout_op_kwargs = {'p': 0, 'inplace': True} - net_nonlin = nn.LeakyReLU - net_nonlin_kwargs = {'inplace': True, 'negative_slope': 1e-2} - self.network = Generic_UNet(self.num_input_channels, self.base_num_features, self.num_classes, - len(self.net_num_pool_op_kernel_sizes), - self.conv_per_stage, 2, conv_op, norm_op, norm_op_kwargs, dropout_op, dropout_op_kwargs, - net_nonlin, net_nonlin_kwargs, True, False, lambda x: x, InitWeights_He(1e-2), - self.net_num_pool_op_kernel_sizes, self.net_conv_kernel_sizes, False, True, True, - basic_block=ConvDropoutNonlinNorm) - if torch.cuda.is_available(): - self.network.cuda() - self.network.inference_apply_nonlin = softmax_helper diff --git a/spaces/ho11laqe/nnUNet_calvingfront_detection/nnunet/training/network_training/nnUNet_variants/miscellaneous/__init__.py b/spaces/ho11laqe/nnUNet_calvingfront_detection/nnunet/training/network_training/nnUNet_variants/miscellaneous/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/hysts/ControlNet/app_hough.py b/spaces/hysts/ControlNet/app_hough.py deleted file mode 100644 index ef87a73ca6c757eea4352aeafbd45fdad0189599..0000000000000000000000000000000000000000 --- a/spaces/hysts/ControlNet/app_hough.py +++ /dev/null @@ -1,97 +0,0 @@ -# This file is adapted from 
https://github.com/lllyasviel/ControlNet/blob/f4748e3630d8141d7765e2bd9b1e348f47847707/gradio_hough2image.py -# The original license file is LICENSE.ControlNet in this repo. -import gradio as gr - - -def create_demo(process, max_images=12, default_num_images=3): - with gr.Blocks() as demo: - with gr.Row(): - gr.Markdown('## Control Stable Diffusion with Hough Line Maps') - with gr.Row(): - with gr.Column(): - input_image = gr.Image(source='upload', type='numpy') - prompt = gr.Textbox(label='Prompt') - run_button = gr.Button(label='Run') - with gr.Accordion('Advanced options', open=False): - num_samples = gr.Slider(label='Images', - minimum=1, - maximum=max_images, - value=default_num_images, - step=1) - image_resolution = gr.Slider(label='Image Resolution', - minimum=256, - maximum=512, - value=512, - step=256) - detect_resolution = gr.Slider(label='Hough Resolution', - minimum=128, - maximum=512, - value=512, - step=1) - mlsd_value_threshold = gr.Slider( - label='Hough value threshold (MLSD)', - minimum=0.01, - maximum=2.0, - value=0.1, - step=0.01) - mlsd_distance_threshold = gr.Slider( - label='Hough distance threshold (MLSD)', - minimum=0.01, - maximum=20.0, - value=0.1, - step=0.01) - num_steps = gr.Slider(label='Steps', - minimum=1, - maximum=100, - value=20, - step=1) - guidance_scale = gr.Slider(label='Guidance Scale', - minimum=0.1, - maximum=30.0, - value=9.0, - step=0.1) - seed = gr.Slider(label='Seed', - minimum=-1, - maximum=2147483647, - step=1, - randomize=True) - a_prompt = gr.Textbox( - label='Added Prompt', - value='best quality, extremely detailed') - n_prompt = gr.Textbox( - label='Negative Prompt', - value= - 'longbody, lowres, bad anatomy, bad hands, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality' - ) - with gr.Column(): - result = gr.Gallery(label='Output', - show_label=False, - elem_id='gallery').style(grid=2, - height='auto') - inputs = [ - input_image, - prompt, - a_prompt, - n_prompt, - num_samples, - image_resolution, - detect_resolution, - num_steps, - guidance_scale, - seed, - mlsd_value_threshold, - mlsd_distance_threshold, - ] - prompt.submit(fn=process, inputs=inputs, outputs=result) - run_button.click(fn=process, - inputs=inputs, - outputs=result, - api_name='hough') - return demo - - -if __name__ == '__main__': - from model import Model - model = Model() - demo = create_demo(model.process_hough) - demo.queue().launch() diff --git a/spaces/innovatorved/whisper.api/app/utils/constant.py b/spaces/innovatorved/whisper.api/app/utils/constant.py deleted file mode 100644 index 89ef6ce5ae9093e9609b202c35751d87f69546c8..0000000000000000000000000000000000000000 --- a/spaces/innovatorved/whisper.api/app/utils/constant.py +++ /dev/null @@ -1,11 +0,0 @@ -model_names = { - "tiny.en": "ggml-tiny.en.bin", - "tiny.en.q5": "ggml-model-whisper-tiny.en-q5_1.bin", - "base.en.q5": "ggml-model-whisper-base.en-q5_1.bin", -} - -model_urls = { - "tiny.en": "https://firebasestorage.googleapis.com/v0/b/model-innovatorved.appspot.com/o/ggml-model-whisper-base.en-q5_1.bin?alt=media", - "tiny.en.q5": "https://firebasestorage.googleapis.com/v0/b/model-innovatorved.appspot.com/o/ggml-model-whisper-tiny.en-q5_1.bin?alt=media", - "base.en.q5": "https://firebasestorage.googleapis.com/v0/b/model-innovatorved.appspot.com/o/ggml-model-whisper-base.en-q5_1.bin?alt=media", -} diff --git a/spaces/inplisQlawa/anything-midjourney-v4-1/Dhoom 2 Full Movie Download Filmywap Bollywood UPDATED.md b/spaces/inplisQlawa/anything-midjourney-v4-1/Dhoom 2 Full Movie Download 
Filmywap Bollywood UPDATED.md deleted file mode 100644 index 5c933d575ea325bc1eb2083efc54ad90e99f768f..0000000000000000000000000000000000000000 --- a/spaces/inplisQlawa/anything-midjourney-v4-1/Dhoom 2 Full Movie Download Filmywap Bollywood UPDATED.md +++ /dev/null @@ -1,6 +0,0 @@ -

          dhoom 2 full movie download filmywap bollywood


          DOWNLOAD ✑ ✑ ✑ https://urlin.us/2uEwKK



          -
          -Ltd. Genre: Action; Released: 2006; Run Time: 2 hr 31 min; Rated: PG. © 2006 Yash Raj Films Pvt. Ltd ... 4d29de3e1b
          -
          -
          -

          diff --git a/spaces/inplisQlawa/anything-midjourney-v4-1/FULL Galcott Super Text Search V3.0 Serial TOP.md b/spaces/inplisQlawa/anything-midjourney-v4-1/FULL Galcott Super Text Search V3.0 Serial TOP.md deleted file mode 100644 index 91b8d558c79cfd707f22bf429ce2696a4521df31..0000000000000000000000000000000000000000 --- a/spaces/inplisQlawa/anything-midjourney-v4-1/FULL Galcott Super Text Search V3.0 Serial TOP.md +++ /dev/null @@ -1,12 +0,0 @@ -
          -

137 records. x1 search serial numbers are presented here. no registration. galcott super text search v3.12. suncross power file search v2.1.0. the trial versions are fully functional for 15 days after installation. super text search: search files for any text, including powerful wildcards.

          -

          FULL Galcott Super Text Search V3.0 Serial


Download Zip: https://urlin.us/2uExKj



          -

          serialnews - serials - g1. galcott directory printer v4.0-bean :: 43 b :: 20.04.09. galcott super text search v3.0-bean :: 44 b :: 14.05.10 . ://xn--80aagyardii6h.xn--p1ai/full-galcott-super-text-search-v3-0-serial/

          -

          1477 records. find all steam key stores and prices to download jagged alliance 2 wildfire and play at the. full galcott super text search v3.0-bean :: 44 b :: 14.05.10 . ://xn--80aagyardii6h.xn--p1ai/full-galcott-super-text-search-v3-0-serial/

          -

          00 eldiego90 applications > windowsgalcott super text search v3.0 + serial 2010-08-26 2.24 mib galcott super text search v3.0 :: 44 b :: 20.06.10. ://xn--80aagyardii6h.xn--p1ai/full-galcott-super-text-search-v3-0-serial/

          -

          full galcott super text search v3.12. suncross power file search v2.1.0. the trial versions are fully functional for 15 days after installation,. super text search, search files for any text, including powerful wildcard.

          -

          find all steam key stores and prices to download jagged alliance 2 wildfire and play at the. full galcott super text search v3.0 serial -serial/. cs-pack-icons-crack-with-license-code-x64-updated-2022

          -

          -

          galcott directory printer v4.1-bean :: 43 b :: 10.02.11. galcott pdf converter v3.0-bean :: 43 b :: 19.04.09. galcott super text search v3.0-bean :: 44 b. ze converter with license key pc/windows [updated-2022]. ://xn--80aagyardii6h.xn--p1ai/full-galcott-super-text-search-v3-0-serial/

          899543212b
          -
          -
          \ No newline at end of file diff --git a/spaces/inplisQlawa/anything-midjourney-v4-1/Fable 3 Guild Seals Cheat Engine Table.md b/spaces/inplisQlawa/anything-midjourney-v4-1/Fable 3 Guild Seals Cheat Engine Table.md deleted file mode 100644 index aac44d741db1aa3e1458b8df647d5c094ed8c638..0000000000000000000000000000000000000000 --- a/spaces/inplisQlawa/anything-midjourney-v4-1/Fable 3 Guild Seals Cheat Engine Table.md +++ /dev/null @@ -1,6 +0,0 @@ -

          Fable 3 Guild Seals Cheat Engine Table


          DOWNLOAD ===> https://urlin.us/2uEw6F



          -
          -... charr charred charring chars chart chart's charted charter charter's chartered ... cheapo cheapskate cheapskate's cheapskates cheat cheat's cheated cheater ... coders codes codeword codewords codex codex's codfish codfish's codfishes ... f fa fa's fab fabaceous fable fable's fabled fables fabliau fabliaux fabric fabric's ... 1fdad05405
          -
          -
          -

          diff --git a/spaces/isabel/anime-project/reader.py b/spaces/isabel/anime-project/reader.py deleted file mode 100644 index 2089f121665bf06f1c4d8a54d78df7b435b01ae9..0000000000000000000000000000000000000000 --- a/spaces/isabel/anime-project/reader.py +++ /dev/null @@ -1,161 +0,0 @@ -import os -from yattag import Doc -## --------------------------------- ### -### reading: info.txt ### -### -------------------------------- ### -# placeholders in case info.txt does not exist -def get_article(acc, most_imp_feat): - filename = "info.txt" - placeholder = "please create an info.txt to customize this text" - note = "**Note that model accuracy is based on the uploaded data.csv and reflects how well the AI model can give correct recommendations for that dataset. An accuracy of 50% means that half of the model's predictions for that dataset were accurate. Model accuracy and most important feature can be helpful for understanding how the model works, but should not be considered absolute facts about the real world." - - title = bkgd = data_collection = priv_cons = bias_cons = img_src = membs = description = placeholder - # check if info.txt is present - if os.path.isfile(filename): - # open info.txt in read mode - info = open(filename, "r") - - # read each line to a string - description = "An AI project created by " + info.readline() - title = info.readline() - bkgd = info.readline() - data_collection = info.readline() - priv_cons = info.readline() - bias_cons = info.readline() - img_src = info.readline() - membs = info.readline() - - # close file - info.close() - - # use yattag library to generate html - doc, tag, text, line = Doc().ttl() - # create html based on info.txt - with tag('div'): - with tag('div', klass='box model-container'): - with tag('div', klass='spacer'): - with tag('div', klass='box model-div'): - line('h2', "Model Accuracy", klass='acc') - line('p', acc) - with tag('div', klass='box model-div'): - line('h2', "Most Important Feature", klass='feat') - line('p', most_imp_feat) - with tag('div', klass='spacer'): - line('p', note) - with tag('div', klass='box'): - line('h2', 'Problem Statement and Research Summary', klass='prj') - line('p', bkgd) - with tag('div', klass='box'): - line('h2', 'Data Collection Plan', klass='data') - line('p', data_collection) - with tag('div', klass='box'): - line('h2', 'Ethical Considerations (Data Privacy and Bias)', klass='ethics') - with tag('ul'): - line('li', priv_cons) - line('li', bias_cons) - with tag('div', klass='box'): - line('h2', 'Our Team', klass='team') - line('p', membs) - doc.stag('img', src=img_src) - - css = ''' - .box { - border: 2px solid black; - text-align: center; - margin: 10px; - padding: 5%; - } - ul { - display: inline-block; - text-align: left; - } - img { - display: block; - margin: auto; - } - .description { - text-align: center; - } - .panel_button { - display: block !important; - width: 100% !important; - background-color: #00EACD !important; - color: #000; - transition: all .2s ease-out 0s !important; - box-shadow: 0 10px #00AEAB !important; - border-radius: 10px !important; - } - .panel_button:hover { - box-shadow: 0 5px #00AEAB; - transform: translateY(5px); - } - .submit { - color: black !important; - } - .selected { - background-color: #656bd6 !important; - } - .radio_item { - border-radius: 10px; - padding-left: 10px !important; - padding-right: 10px !important; - } - .radio_item:hover { - color: #656bd6 !important; - } - .title { - background-image: 
url(https://media.giphy.com/media/26BROrSHlmyzzHf3i/giphy.gif); - background-size: cover; - color: transparent; - -moz-background-clip: text; - -webkit-background-clip: text; - text-transform: uppercase; - font-size: 60px; - line-height: .75; - margin: 10px 0; - } - .panel_header { - color: black !important; - } - input { - background-color: #efeffa !important; - } - .acc, .feat { - background-color: #FF3399 !important - } - .prj { - background-color: #FFCE3B !important; - } - .data { - background-color: #ED6800 !important; - } - .ethics { - background-color: #3EE6F9 !important; - } - .team { - background-color: #9581EF !important; - } - .model-container { - display: flex; - flex-direction: column; - justify-content: center; - } - .spacer { - display: flex; - justify-content: center; - } - .model-div { - width: 45%; - } - @media screen and (max-width: 700px) { - .model-container { - flex-wrap: wrap; - } - } - ''' - return { - 'article': doc.getvalue(), - 'css': css, - 'title': title, - 'description': description, - } \ No newline at end of file diff --git a/spaces/jacinthes/PubMed-fact-checker/app.py b/spaces/jacinthes/PubMed-fact-checker/app.py deleted file mode 100644 index 8a7eca30e871c78f6e7fd563dbe62edb8fc5bd00..0000000000000000000000000000000000000000 --- a/spaces/jacinthes/PubMed-fact-checker/app.py +++ /dev/null @@ -1,216 +0,0 @@ -import streamlit as st -import GPTHelper -from sentence_transformers import CrossEncoder -from pymed import PubMed -import pandas as pd -import plotly.express as px -import logging -from langdetect import detect -from typing import Dict, List - - -if 'valid_inputs_received' not in st.session_state: - st.session_state['valid_inputs_received'] = False - - -def get_articles(query, fetcher) -> Dict[List[str], List[str]]: - # Fetches articles using pymed. Increasing max_results results in longer loading times. - results = fetcher.query(query, max_results=50) - conclusions = [] - titles = [] - links = [] - for article in results: - article_id = 0 # If PubMed search fails to return anything - try: - article_id = article.pubmed_id[:8] # Sometimes pymed wrongly returns a long list of ids. Use only the first - # [] can cause the cross-encoder to misinterpret string as a list - title = article.title.replace('[', '(').replace(']', ')') - conclusion = article.conclusions - abstract = article.abstract - article_url = f'https://pubmed.ncbi.nlm.nih.gov/{article_id}/' - article_link = f'PubMed ID: {article_id}' # Injects a link to plotly - if conclusion: - # Not all articles come with the provided conclusions. Abstract is used alternatively. - conclusion = conclusion.replace('[', '(').replace(']', ')') - conclusions.append(title+'\n'+conclusion) - titles.append(title) # Title is added to the conclusion to improve relevance ranking. - links.append(article_link) - elif abstract: - abstract = abstract.replace('[', '(').replace(']', ')') - conclusions.append(title + '\n' + abstract) - titles.append(title) - links.append(article_link) - except Exception as e: - logging.warning(f'Error reading article: {article_id}: ', exc_info=e) - - return { - 'Conclusions': conclusions, - 'Links': links - } - - -@st.cache_resource -def load_cross_encoder(): - # The pretrained cross-encoder model used for reranking. Can be substituted with a different one. 
- cross_encoder = CrossEncoder('cross-encoder/ms-marco-MiniLM-L-6-v2') - return cross_encoder - - -@st.cache_resource -def load_pubmed_fetcher(): - pubmed = PubMed(tool='PubmedFactChecker', email='stihec.jan@gmail.com') - return pubmed - - -def run_ui(): - # This function controls the whole app flow. - st.set_page_config(page_title='PUBMED FACT-CHECKER', page_icon='📖') - - sidebar = st.sidebar - sidebar.title('ABOUT') - sidebar.write(''' - The PubMed fact-checker app enables users to verify biomedical claims by comparing them against - research papers available on PubMed. \n - As the number of self-proclaimed experts continues to rise, - so does the risk of harmful misinformation. This app showcases the potential of Large Language Models - to provide accurate and valuable information to people. - ''') - sidebar.title('EXAMPLES') - sidebar.write('Try one of the below examples to see PubMed fact-checker in action.') - - st.title('PubMed FACT CHECKER') - with st.form(key='fact_form'): - fact = st.text_input('Fact:', placeholder='Enter your fact', key='form_input') - submitted = st.form_submit_button('Fact-Check') - - if sidebar.button('Mediterranean diet helps with weight loss.', use_container_width=250): - submitted = True - fact = 'Mediterranean diet helps with weight loss.' - - if sidebar.button('Low Carb High Fat diet is healthy in long term.', use_container_width=250): - submitted = True - fact = 'Low Carb High Fat diet is healthy in long term.' - - if sidebar.button('Vaccines are a cause of autism.', use_container_width=250): - submitted = True - fact = 'Vaccines are a cause of autism.' - - sidebar.title('HOW IT WORKS') - sidebar.write('Source code and an in-depth app description available at:') - sidebar.info('**GitHub: [@jacinthes](https://github.com/jacinthes/PubMed-fact-checker)**', icon="💻") - sidebar.title('DISCLAIMER') - sidebar.write('This project is meant for educational and research purposes. \n' - 'PubMed fact-checker may provide inaccurate information.') - - if not submitted and not st.session_state.valid_inputs_received: - st.stop() - - elif submitted and not fact: - st.warning('Please enter your fact before fact-checking.') - st.session_state.valid_inputs_received = False - st.stop() - - elif submitted and not detect(fact) == 'en': - st.warning('Please enter valid text in English. For short inputs, language detection is sometimes inaccurate.' - ' Try making the fact more verbose.') - st.session_state.valid_inputs_received = False - st.stop() - - elif submitted and not len(fact) < 75: - st.warning('To ensure accurate searching, please keep your fact under 75 characters.') - st.session_state.valid_inputs_received = False - st.stop() - - elif submitted and '?' in fact: - st.warning('Please state a fact. PubMed Fact Checker is good at verifying facts, ' - 'it is not meant to answer questions.') - st.session_state.valid_inputs_received = False - st.stop() - - elif submitted or st.session_state.valid_inputs_received: - pubmed_query = GPTHelper.gpt35_rephrase(fact) # Call gpt3.5 to rephrase the fact as a PubMed query. - pubmed = load_pubmed_fetcher() - - with st.spinner('Fetching articles...'): - articles = get_articles(pubmed_query, pubmed) - - article_conclusions = articles['Conclusions'] - article_links = articles['Links'] - if len(article_conclusions) == 0: - # If nothing is returned by pymed, inform user. 
- st.info( - "Unfortunately, I couldn't find anything for your search.\n" - "Don't let that discourage you, I have over 35 million citations in my database.\n" - "I am sure your next search will be more successful." - ) - st.stop() - - cross_inp = [[fact, conclusions] for conclusions in article_conclusions] - - with st.spinner('Assessing article relevancy...'): - cross_encoder = load_cross_encoder() - cross_scores = cross_encoder.predict(cross_inp) # Calculate relevancy using the defined cross-encoder. - - df = pd.DataFrame({ - 'Link': article_links, - 'Conclusion': article_conclusions, - 'Score': cross_scores - }) - df.sort_values(by=['Score'], ascending=False, inplace=True) - df = df[df['Score'] > 0] # Only keep articles with relevancy score above 0. - if df.shape[0] == 0: # If no relevant article is found, inform the user. - st.info( - "Unfortunately, I couldn't find anything for your search.\n" - "Don't let that discourage you, I have over 35 million citations in my database.\n" - "I am sure your next search will be more successful." - ) - st.stop() - - df = df.head(10) # Keep only 10 most relevant articles. This is done to control OpenAI costs and load time. - progress_text = 'Assessing the validity of the fact based on relevant research papers.' - fact_checking_bar = st.progress(0, text=progress_text) - step = 100/df.shape[0] - percent_complete = 0 - predictions = [] - for index, row in df.iterrows(): - prediction = GPTHelper.gpt35_check_fact(row['Conclusion'], fact) # Prompt to GPT3.5 to fact-check - # For output purposes I use True, False and Undetermined as labels. - if prediction == 'Entails': - predictions.append('True') - elif prediction == 'Contradicts': - predictions.append('False') - elif prediction == 'Undetermined': - predictions.append(prediction) - else: - # If GPT3.5 returns an invalid response. Has not happened during testing. - predictions.append('Invalid') - logging.warning(f'Unexpected prediction: {prediction}') - - percent_complete += step/100 - fact_checking_bar.progress(round(percent_complete, 2), text=progress_text) - fact_checking_bar.empty() - df['Prediction'] = predictions - df = df[df.Prediction != 'Invalid'] # Drop rows with invalid prediction. - # Prepare DataFrame for plotly sunburst chart. 
- totals = df.groupby('Prediction').size().to_dict() - df['Total'] = df['Prediction'].map(totals) - - fig = px.sunburst(df, path=['Prediction', 'Link'], values='Total', height=600, width=600, color='Prediction', - color_discrete_map={ - 'False': "#FF8384", - 'True': "#A5D46A", - 'Undetermined': "#FFDF80" - } - ) - fig.update_layout( - margin=dict(l=20, r=20, t=20, b=20), - font_size=32, - font_color='#000000' - ) - st.write(f'According to PubMed "{fact}" is:') - st.plotly_chart(fig, use_container_width=True) - - -if __name__ == "__main__": - run_ui() diff --git a/spaces/jason9693/KoreanHateSpeechClassifier/app.py b/spaces/jason9693/KoreanHateSpeechClassifier/app.py deleted file mode 100644 index 7728349ce03a91836c91a4e93584d0928257514d..0000000000000000000000000000000000000000 --- a/spaces/jason9693/KoreanHateSpeechClassifier/app.py +++ /dev/null @@ -1,116 +0,0 @@ -from transformers import AutoTokenizer, AutoModelForSequenceClassification, AutoConfig -import gradio as gr -from torch.nn import functional as F -import seaborn - -import matplotlib -import platform - -from transformers.file_utils import ModelOutput - -if platform.system() == "Darwin": - print("MacOS") - matplotlib.use('Agg') -import matplotlib.pyplot as plt -import io -from PIL import Image - -import matplotlib.font_manager as fm -import util - - -# global var -MODEL_NAME = 'jason9693/SoongsilBERT-base-beep' -tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME) -model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME) -config = AutoConfig.from_pretrained(MODEL_NAME) - -MODEL_BUF = { - "name": MODEL_NAME, - "tokenizer": tokenizer, - "model": model, - "config": config -} - - -font_dir = ['./'] -for font in fm.findSystemFonts(font_dir): - print(font) - fm.fontManager.addfont(font) -plt.rcParams["font.family"] = 'NanumGothicCoding' - - -def visualize_attention(sent, attention_matrix, n_words=10): - def draw(data, x, y, ax): - - seaborn.heatmap(data, - xticklabels=x, square=True, yticklabels=y, vmin=0.0, vmax=1.0, - cbar=False, ax=ax) - - # make plt figure with 1x6 subplots - fig = plt.figure(figsize=(16, 8)) - # fig.subplots_adjust(hspace=0.7, wspace=0.2) - for i, layer in enumerate(range(1, 12, 2)): - ax = fig.add_subplot(2, 3, i+1) - ax.set_title("Layer {}".format(layer)) - draw(attention_matrix[layer], sent if layer > 6 else [], sent if layer in [1,7] else [], ax=ax) - - fig.tight_layout() - plt.close() - return fig - - -def change_model_name(name): - MODEL_BUF["name"] = name - MODEL_BUF["tokenizer"] = AutoTokenizer.from_pretrained(name) - MODEL_BUF["model"] = AutoModelForSequenceClassification.from_pretrained(name) - MODEL_BUF["config"] = AutoConfig.from_pretrained(name) - - -def predict(model_name, text): - if model_name != MODEL_BUF["name"]: - change_model_name(model_name) - - tokenizer = MODEL_BUF["tokenizer"] - model = MODEL_BUF["model"] - config = MODEL_BUF["config"] - - tokenized_text = tokenizer([text], return_tensors='pt') - - input_tokens = tokenizer.convert_ids_to_tokens(tokenized_text.input_ids[0]) - try: - input_tokens = util.bytetokens_to_unicdode(input_tokens) if config.model_type in ['roberta', 'gpt', 'gpt2'] else input_tokens - except KeyError: - input_tokens = input_tokens - - model.eval() - output, attention = model(**tokenized_text, output_attentions=True, return_dict=False) - output = F.softmax(output, dim=-1) - result = {} - - for idx, label in enumerate(output[0].detach().numpy()): - result[config.id2label[idx]] = float(label) - - fig = visualize_attention(input_tokens, 
attention[0][0].detach().numpy()) - return result, fig#.logits.detach()#.numpy()#, output.attentions.detach().numpy() - - -if __name__ == '__main__': - text = '읿딴걸 홍볿글 읿랉곭 쌑젩낄고 앉앟있냩' - - model_name_list = [ - 'jason9693/SoongsilBERT-base-beep', - "beomi/beep-klue-roberta-base-hate", - "beomi/beep-koelectra-base-v3-discriminator-hate", - "beomi/beep-KcELECTRA-base-hate" - ] - - #Create a gradio app with a button that calls predict() - app = gr.Interface( - fn=predict, - inputs=[gr.inputs.Dropdown(model_name_list, label="Model Name"), 'text'], outputs=['label', 'plot'], - examples = [[MODEL_BUF["name"], text], [MODEL_BUF["name"], "4=🦀 4≠🦀"]], - title="한국어 혐오성 발화 분류기 (Korean Hate Speech Classifier)", - description="Korean Hate Speech Classifier with Several Pretrained LM\nCurrent Supported Model:\n1. SoongsilBERT\n2. KcBERT(+KLUE)\n3. KcELECTRA\n4.KoELECTRA." - ) - app.launch(inline=False) diff --git a/spaces/jbilcke-hf/LifeSim/scripts/test.js b/spaces/jbilcke-hf/LifeSim/scripts/test.js deleted file mode 100644 index 67a07d8e5fdac589d574c227500bf8a08b23c92b..0000000000000000000000000000000000000000 --- a/spaces/jbilcke-hf/LifeSim/scripts/test.js +++ /dev/null @@ -1,23 +0,0 @@ -const { promises: fs } = require("node:fs") - -const main = async () => { - console.log('generating shot..') - const response = await fetch("http://localhost:3000/api/shot", { - method: "POST", - headers: { - "Accept": "application/json", - "Content-Type": "application/json" - }, - body: JSON.stringify({ - token: process.env.VC_SECRET_ACCESS_TOKEN, - shotPrompt: "video of a dancing cat" - }) - }); - - console.log('response:', response) - const buffer = await response.buffer() - - fs.writeFile(`./test-juju.mp4`, buffer) -} - -main() \ No newline at end of file diff --git a/spaces/jbilcke-hf/ai-comic-factory/src/lib/computePercentage.ts b/spaces/jbilcke-hf/ai-comic-factory/src/lib/computePercentage.ts deleted file mode 100644 index eaf8c1645451d44bf97a417d04e098e51ec167bb..0000000000000000000000000000000000000000 --- a/spaces/jbilcke-hf/ai-comic-factory/src/lib/computePercentage.ts +++ /dev/null @@ -1,4 +0,0 @@ -export function computePercentage(input: string | number) { - // TODO something - return 0 -} \ No newline at end of file diff --git a/spaces/jbilcke-hf/template-node-wizardcoder-express/public/css/tailwind-typography@0.1.2.css b/spaces/jbilcke-hf/template-node-wizardcoder-express/public/css/tailwind-typography@0.1.2.css deleted file mode 100644 index 6824ef97438023939b62642ce3a28a69cc9e1176..0000000000000000000000000000000000000000 --- a/spaces/jbilcke-hf/template-node-wizardcoder-express/public/css/tailwind-typography@0.1.2.css +++ /dev/null @@ -1 +0,0 @@ -.prose{color:#4a5568;max-width:65ch;font-size:1rem;line-height:1.75}.prose .lead{color:#4a5568;font-size:1.25em;line-height:1.6;margin-top:1.2em;margin-bottom:1.2em}.prose a{color:#1a202c;text-decoration:underline}.prose strong{color:#1a202c;font-weight:600}.prose ol{counter-reset:list-counter;margin-top:1.25em;margin-bottom:1.25em}.prose ol>li{position:relative;counter-increment:list-counter;padding-left:1.75em}.prose ol>li::before{content:counter(list-counter) ".";position:absolute;font-weight:400;color:#718096}.prose ul>li{position:relative;padding-left:1.75em}.prose ul>li::before{content:"";position:absolute;background-color:#cbd5e0;border-radius:50%;width:.375em;height:.375em;top:calc(.875em - .1875em);left:.25em}.prose hr{border-color:#e2e8f0;border-top-width:1px;margin-top:3em;margin-bottom:3em}.prose 
blockquote{font-weight:500;font-style:italic;color:#1a202c;border-left-width:.25rem;border-left-color:#e2e8f0;quotes:"\201C""\201D""\2018""\2019";margin-top:1.6em;margin-bottom:1.6em;padding-left:1em}.prose blockquote p:first-of-type::before{content:open-quote}.prose blockquote p:last-of-type::after{content:close-quote}.prose h1{color:#1a202c;font-weight:800;font-size:2.25em;margin-top:0;margin-bottom:.8888889em;line-height:1.1111111}.prose h2{color:#1a202c;font-weight:700;font-size:1.5em;margin-top:2em;margin-bottom:1em;line-height:1.3333333}.prose h3{color:#1a202c;font-weight:600;font-size:1.25em;margin-top:1.6em;margin-bottom:.6em;line-height:1.6}.prose h4{color:#1a202c;font-weight:600;margin-top:1.5em;margin-bottom:.5em;line-height:1.5}.prose figure figcaption{color:#718096;font-size:.875em;line-height:1.4285714;margin-top:.8571429em}.prose code{font-family:Menlo,Monaco,Consolas,"Liberation Mono","Courier New",monospace;color:#1a202c;font-weight:600;font-size:.875em}.prose code::before{content:"`"}.prose code::after{content:"`"}.prose pre{color:#e2e8f0;font-family:Menlo,Monaco,Consolas,"Liberation Mono","Courier New",monospace;background-color:#2d3748;overflow-x:auto;font-size:.875em;line-height:1.7142857;margin-top:1.7142857em;margin-bottom:1.7142857em;border-radius:.375rem;padding-top:.8571429em;padding-right:1.1428571em;padding-bottom:.8571429em;padding-left:1.1428571em}.prose pre code{background-color:transparent;border-width:0;border-radius:0;padding:0;font-weight:400;color:inherit;font-size:inherit;font-family:inherit;line-height:inherit}.prose pre code::before{content:""}.prose pre code::after{content:""}.prose table{width:100%;table-layout:auto;text-align:left;margin-top:2em;margin-bottom:2em;font-size:.875em;line-height:1.7142857}.prose thead{color:#1a202c;font-weight:600;border-bottom-width:1px;border-bottom-color:#cbd5e0}.prose thead th{vertical-align:bottom;padding-right:.5714286em;padding-bottom:.5714286em;padding-left:.5714286em}.prose tbody tr{border-bottom-width:1px;border-bottom-color:#e2e8f0}.prose tbody tr:last-child{border-bottom-width:0}.prose tbody td{vertical-align:top;padding-top:.5714286em;padding-right:.5714286em;padding-bottom:.5714286em;padding-left:.5714286em}.prose p{margin-top:1.25em;margin-bottom:1.25em}.prose img{margin-top:2em;margin-bottom:2em}.prose video{margin-top:2em;margin-bottom:2em}.prose figure{margin-top:2em;margin-bottom:2em}.prose figure>*{margin-top:0;margin-bottom:0}.prose h2 code{font-size:.875em}.prose h3 code{font-size:.9em}.prose ul{margin-top:1.25em;margin-bottom:1.25em}.prose li{margin-top:.5em;margin-bottom:.5em}.prose ol>li:before{left:0}.prose>ul>li p{margin-top:.75em;margin-bottom:.75em}.prose>ul>li>:first-child{margin-top:1.25em}.prose>ul>li>:last-child{margin-bottom:1.25em}.prose>ol>li>:first-child{margin-top:1.25em}.prose>ol>li>:last-child{margin-bottom:1.25em}.prose ol ol,.prose ol ul,.prose ul ol,.prose ul ul{margin-top:.75em;margin-bottom:.75em}.prose hr+*{margin-top:0}.prose h2+*{margin-top:0}.prose h3+*{margin-top:0}.prose h4+*{margin-top:0}.prose thead th:first-child{padding-left:0}.prose thead th:last-child{padding-right:0}.prose tbody td:first-child{padding-left:0}.prose tbody td:last-child{padding-right:0}.prose>:first-child{margin-top:0}.prose>:last-child{margin-bottom:0}.prose-sm{font-size:.875rem;line-height:1.7142857}.prose-sm p{margin-top:1.1428571em;margin-bottom:1.1428571em}.prose-sm .lead{font-size:1.2857143em;line-height:1.5555556;margin-top:.8888889em;margin-bottom:.8888889em}.prose-sm 
blockquote{margin-top:1.3333333em;margin-bottom:1.3333333em;padding-left:1.1111111em}.prose-sm h1{font-size:2.1428571em;margin-top:0;margin-bottom:.8em;line-height:1.2}.prose-sm h2{font-size:1.4285714em;margin-top:1.6em;margin-bottom:.8em;line-height:1.4}.prose-sm h3{font-size:1.2857143em;margin-top:1.5555556em;margin-bottom:.4444444em;line-height:1.5555556}.prose-sm h4{margin-top:1.4285714em;margin-bottom:.5714286em;line-height:1.4285714}.prose-sm img{margin-top:1.7142857em;margin-bottom:1.7142857em}.prose-sm video{margin-top:1.7142857em;margin-bottom:1.7142857em}.prose-sm figure{margin-top:1.7142857em;margin-bottom:1.7142857em}.prose-sm figure>*{margin-top:0;margin-bottom:0}.prose-sm figure figcaption{font-size:.8571429em;line-height:1.3333333;margin-top:.6666667em}.prose-sm code{font-size:.8571429em}.prose-sm h2 code{font-size:.9em}.prose-sm h3 code{font-size:.8888889em}.prose-sm pre{font-size:.8571429em;line-height:1.6666667;margin-top:1.6666667em;margin-bottom:1.6666667em;border-radius:.25rem;padding-top:.6666667em;padding-right:1em;padding-bottom:.6666667em;padding-left:1em}.prose-sm ol{margin-top:1.1428571em;margin-bottom:1.1428571em}.prose-sm ul{margin-top:1.1428571em;margin-bottom:1.1428571em}.prose-sm li{margin-top:.2857143em;margin-bottom:.2857143em}.prose-sm ol>li{padding-left:1.5714286em}.prose-sm ol>li:before{left:0}.prose-sm ul>li{padding-left:1.5714286em}.prose-sm ul>li::before{height:.3571429em;width:.3571429em;top:calc(.8571429em - .1785714em);left:.2142857em}.prose-sm>ul>li p{margin-top:.5714286em;margin-bottom:.5714286em}.prose-sm>ul>li>:first-child{margin-top:1.1428571em}.prose-sm>ul>li>:last-child{margin-bottom:1.1428571em}.prose-sm>ol>li>:first-child{margin-top:1.1428571em}.prose-sm>ol>li>:last-child{margin-bottom:1.1428571em}.prose-sm ol ol,.prose-sm ol ul,.prose-sm ul ol,.prose-sm ul ul{margin-top:.5714286em;margin-bottom:.5714286em}.prose-sm hr{margin-top:2.8571429em;margin-bottom:2.8571429em}.prose-sm hr+*{margin-top:0}.prose-sm h2+*{margin-top:0}.prose-sm h3+*{margin-top:0}.prose-sm h4+*{margin-top:0}.prose-sm table{font-size:.8571429em;line-height:1.5}.prose-sm thead th{padding-right:1em;padding-bottom:.6666667em;padding-left:1em}.prose-sm thead th:first-child{padding-left:0}.prose-sm thead th:last-child{padding-right:0}.prose-sm tbody td{padding-top:.6666667em;padding-right:1em;padding-bottom:.6666667em;padding-left:1em}.prose-sm tbody td:first-child{padding-left:0}.prose-sm tbody td:last-child{padding-right:0}.prose-sm>:first-child{margin-top:0}.prose-sm>:last-child{margin-bottom:0}.prose-lg{font-size:1.125rem;line-height:1.7777778}.prose-lg p{margin-top:1.3333333em;margin-bottom:1.3333333em}.prose-lg .lead{font-size:1.2222222em;line-height:1.4545455;margin-top:1.0909091em;margin-bottom:1.0909091em}.prose-lg blockquote{margin-top:1.6666667em;margin-bottom:1.6666667em;padding-left:1em}.prose-lg h1{font-size:2.6666667em;margin-top:0;margin-bottom:.8333333em;line-height:1}.prose-lg h2{font-size:1.6666667em;margin-top:1.8666667em;margin-bottom:1.0666667em;line-height:1.3333333}.prose-lg h3{font-size:1.3333333em;margin-top:1.6666667em;margin-bottom:.6666667em;line-height:1.5}.prose-lg h4{margin-top:1.7777778em;margin-bottom:.4444444em;line-height:1.5555556}.prose-lg img{margin-top:1.7777778em;margin-bottom:1.7777778em}.prose-lg video{margin-top:1.7777778em;margin-bottom:1.7777778em}.prose-lg figure{margin-top:1.7777778em;margin-bottom:1.7777778em}.prose-lg figure>*{margin-top:0;margin-bottom:0}.prose-lg figure 
figcaption{font-size:.8888889em;line-height:1.5;margin-top:1em}.prose-lg code{font-size:.8888889em}.prose-lg h2 code{font-size:.8666667em}.prose-lg h3 code{font-size:.875em}.prose-lg pre{font-size:.8888889em;line-height:1.75;margin-top:2em;margin-bottom:2em;border-radius:.375rem;padding-top:1em;padding-right:1.5em;padding-bottom:1em;padding-left:1.5em}.prose-lg ol{margin-top:1.3333333em;margin-bottom:1.3333333em}.prose-lg ul{margin-top:1.3333333em;margin-bottom:1.3333333em}.prose-lg li{margin-top:.6666667em;margin-bottom:.6666667em}.prose-lg ol>li{padding-left:1.6666667em}.prose-lg ol>li:before{left:0}.prose-lg ul>li{padding-left:1.6666667em}.prose-lg ul>li::before{width:.3333333em;height:.3333333em;top:calc(.8888889em - .1666667em);left:.2222222em}.prose-lg>ul>li p{margin-top:.8888889em;margin-bottom:.8888889em}.prose-lg>ul>li>:first-child{margin-top:1.3333333em}.prose-lg>ul>li>:last-child{margin-bottom:1.3333333em}.prose-lg>ol>li>:first-child{margin-top:1.3333333em}.prose-lg>ol>li>:last-child{margin-bottom:1.3333333em}.prose-lg ol ol,.prose-lg ol ul,.prose-lg ul ol,.prose-lg ul ul{margin-top:.8888889em;margin-bottom:.8888889em}.prose-lg hr{margin-top:3.1111111em;margin-bottom:3.1111111em}.prose-lg hr+*{margin-top:0}.prose-lg h2+*{margin-top:0}.prose-lg h3+*{margin-top:0}.prose-lg h4+*{margin-top:0}.prose-lg table{font-size:.8888889em;line-height:1.5}.prose-lg thead th{padding-right:.75em;padding-bottom:.75em;padding-left:.75em}.prose-lg thead th:first-child{padding-left:0}.prose-lg thead th:last-child{padding-right:0}.prose-lg tbody td{padding-top:.75em;padding-right:.75em;padding-bottom:.75em;padding-left:.75em}.prose-lg tbody td:first-child{padding-left:0}.prose-lg tbody td:last-child{padding-right:0}.prose-lg>:first-child{margin-top:0}.prose-lg>:last-child{margin-bottom:0}.prose-xl{font-size:1.25rem;line-height:1.8}.prose-xl p{margin-top:1.2em;margin-bottom:1.2em}.prose-xl .lead{font-size:1.2em;line-height:1.5;margin-top:1em;margin-bottom:1em}.prose-xl blockquote{margin-top:1.6em;margin-bottom:1.6em;padding-left:1.0666667em}.prose-xl h1{font-size:2.8em;margin-top:0;margin-bottom:.8571429em;line-height:1}.prose-xl h2{font-size:1.8em;margin-top:1.5555556em;margin-bottom:.8888889em;line-height:1.1111111}.prose-xl h3{font-size:1.5em;margin-top:1.6em;margin-bottom:.6666667em;line-height:1.3333333}.prose-xl h4{margin-top:1.8em;margin-bottom:.6em;line-height:1.6}.prose-xl img{margin-top:2em;margin-bottom:2em}.prose-xl video{margin-top:2em;margin-bottom:2em}.prose-xl figure{margin-top:2em;margin-bottom:2em}.prose-xl figure>*{margin-top:0;margin-bottom:0}.prose-xl figure figcaption{font-size:.9em;line-height:1.5555556;margin-top:1em}.prose-xl code{font-size:.9em}.prose-xl h2 code{font-size:.8611111em}.prose-xl h3 code{font-size:.9em}.prose-xl pre{font-size:.9em;line-height:1.7777778;margin-top:2em;margin-bottom:2em;border-radius:.5rem;padding-top:1.1111111em;padding-right:1.3333333em;padding-bottom:1.1111111em;padding-left:1.3333333em}.prose-xl ol{margin-top:1.2em;margin-bottom:1.2em}.prose-xl ul{margin-top:1.2em;margin-bottom:1.2em}.prose-xl li{margin-top:.6em;margin-bottom:.6em}.prose-xl ol>li{padding-left:1.8em}.prose-xl ol>li:before{left:0}.prose-xl ul>li{padding-left:1.8em}.prose-xl ul>li::before{width:.35em;height:.35em;top:calc(.9em - .175em);left:.25em}.prose-xl>ul>li 
p{margin-top:.8em;margin-bottom:.8em}.prose-xl>ul>li>:first-child{margin-top:1.2em}.prose-xl>ul>li>:last-child{margin-bottom:1.2em}.prose-xl>ol>li>:first-child{margin-top:1.2em}.prose-xl>ol>li>:last-child{margin-bottom:1.2em}.prose-xl ol ol,.prose-xl ol ul,.prose-xl ul ol,.prose-xl ul ul{margin-top:.8em;margin-bottom:.8em}.prose-xl hr{margin-top:2.8em;margin-bottom:2.8em}.prose-xl hr+*{margin-top:0}.prose-xl h2+*{margin-top:0}.prose-xl h3+*{margin-top:0}.prose-xl h4+*{margin-top:0}.prose-xl table{font-size:.9em;line-height:1.5555556}.prose-xl thead th{padding-right:.6666667em;padding-bottom:.8888889em;padding-left:.6666667em}.prose-xl thead th:first-child{padding-left:0}.prose-xl thead th:last-child{padding-right:0}.prose-xl tbody td{padding-top:.8888889em;padding-right:.6666667em;padding-bottom:.8888889em;padding-left:.6666667em}.prose-xl tbody td:first-child{padding-left:0}.prose-xl tbody td:last-child{padding-right:0}.prose-xl>:first-child{margin-top:0}.prose-xl>:last-child{margin-bottom:0}.prose-2xl{font-size:1.5rem;line-height:1.6666667}.prose-2xl p{margin-top:1.3333333em;margin-bottom:1.3333333em}.prose-2xl .lead{font-size:1.25em;line-height:1.4666667;margin-top:1.0666667em;margin-bottom:1.0666667em}.prose-2xl blockquote{margin-top:1.7777778em;margin-bottom:1.7777778em;padding-left:1.1111111em}.prose-2xl h1{font-size:2.6666667em;margin-top:0;margin-bottom:.875em;line-height:1}.prose-2xl h2{font-size:2em;margin-top:1.5em;margin-bottom:.8333333em;line-height:1.0833333}.prose-2xl h3{font-size:1.5em;margin-top:1.5555556em;margin-bottom:.6666667em;line-height:1.2222222}.prose-2xl h4{margin-top:1.6666667em;margin-bottom:.6666667em;line-height:1.5}.prose-2xl img{margin-top:2em;margin-bottom:2em}.prose-2xl video{margin-top:2em;margin-bottom:2em}.prose-2xl figure{margin-top:2em;margin-bottom:2em}.prose-2xl figure>*{margin-top:0;margin-bottom:0}.prose-2xl figure figcaption{font-size:.8333333em;line-height:1.6;margin-top:1em}.prose-2xl code{font-size:.8333333em}.prose-2xl h2 code{font-size:.875em}.prose-2xl h3 code{font-size:.8888889em}.prose-2xl pre{font-size:.8333333em;line-height:1.8;margin-top:2em;margin-bottom:2em;border-radius:.5rem;padding-top:1.2em;padding-right:1.6em;padding-bottom:1.2em;padding-left:1.6em}.prose-2xl ol{margin-top:1.3333333em;margin-bottom:1.3333333em}.prose-2xl ul{margin-top:1.3333333em;margin-bottom:1.3333333em}.prose-2xl li{margin-top:.5em;margin-bottom:.5em}.prose-2xl ol>li{padding-left:1.6666667em}.prose-2xl ol>li:before{left:0}.prose-2xl ul>li{padding-left:1.6666667em}.prose-2xl ul>li::before{width:.3333333em;height:.3333333em;top:calc(.8333333em - .1666667em);left:.25em}.prose-2xl>ul>li p{margin-top:.8333333em;margin-bottom:.8333333em}.prose-2xl>ul>li>:first-child{margin-top:1.3333333em}.prose-2xl>ul>li>:last-child{margin-bottom:1.3333333em}.prose-2xl>ol>li>:first-child{margin-top:1.3333333em}.prose-2xl>ol>li>:last-child{margin-bottom:1.3333333em}.prose-2xl ol ol,.prose-2xl ol ul,.prose-2xl ul ol,.prose-2xl ul ul{margin-top:.6666667em;margin-bottom:.6666667em}.prose-2xl hr{margin-top:3em;margin-bottom:3em}.prose-2xl hr+*{margin-top:0}.prose-2xl h2+*{margin-top:0}.prose-2xl h3+*{margin-top:0}.prose-2xl h4+*{margin-top:0}.prose-2xl table{font-size:.8333333em;line-height:1.4}.prose-2xl thead th{padding-right:.6em;padding-bottom:.8em;padding-left:.6em}.prose-2xl thead th:first-child{padding-left:0}.prose-2xl thead th:last-child{padding-right:0}.prose-2xl tbody td{padding-top:.8em;padding-right:.6em;padding-bottom:.8em;padding-left:.6em}.prose-2xl tbody 
td:first-child{padding-left:0}.prose-2xl tbody td:last-child{padding-right:0}.prose-2xl>:first-child{margin-top:0}.prose-2xl>:last-child{margin-bottom:0}@media (min-width:640px){.sm\:prose{color:#4a5568;max-width:65ch;font-size:1rem;line-height:1.75}.prose .sm\:lead{color:#4a5568;font-size:1.25em;line-height:1.6;margin-top:1.2em;margin-bottom:1.2em}.sm\:prose a{color:#1a202c;text-decoration:underline}.sm\:prose strong{color:#1a202c;font-weight:600}.sm\:prose ol{counter-reset:list-counter;margin-top:1.25em;margin-bottom:1.25em}.sm\:prose ol>li{position:relative;counter-increment:list-counter;padding-left:1.75em}.sm\:prose ol>li::before{content:counter(list-counter) ".";position:absolute;font-weight:400;color:#718096}.sm\:prose ul>li{position:relative;padding-left:1.75em}.sm\:prose ul>li::before{content:"";position:absolute;background-color:#cbd5e0;border-radius:50%;width:.375em;height:.375em;top:calc(.875em - .1875em);left:.25em}.sm\:prose hr{border-color:#e2e8f0;border-top-width:1px;margin-top:3em;margin-bottom:3em}.sm\:prose blockquote{font-weight:500;font-style:italic;color:#1a202c;border-left-width:.25rem;border-left-color:#e2e8f0;quotes:"\201C""\201D""\2018""\2019";margin-top:1.6em;margin-bottom:1.6em;padding-left:1em}.sm\:prose blockquote p:first-of-type::before{content:open-quote}.sm\:prose blockquote p:last-of-type::after{content:close-quote}.sm\:prose h1{color:#1a202c;font-weight:800;font-size:2.25em;margin-top:0;margin-bottom:.8888889em;line-height:1.1111111}.sm\:prose h2{color:#1a202c;font-weight:700;font-size:1.5em;margin-top:2em;margin-bottom:1em;line-height:1.3333333}.sm\:prose h3{color:#1a202c;font-weight:600;font-size:1.25em;margin-top:1.6em;margin-bottom:.6em;line-height:1.6}.sm\:prose h4{color:#1a202c;font-weight:600;margin-top:1.5em;margin-bottom:.5em;line-height:1.5}.sm\:prose figure figcaption{color:#718096;font-size:.875em;line-height:1.4285714;margin-top:.8571429em}.sm\:prose code{font-family:Menlo,Monaco,Consolas,"Liberation Mono","Courier New",monospace;color:#1a202c;font-weight:600;font-size:.875em}.sm\:prose code::before{content:"`"}.sm\:prose code::after{content:"`"}.sm\:prose pre{color:#e2e8f0;font-family:Menlo,Monaco,Consolas,"Liberation Mono","Courier New",monospace;background-color:#2d3748;overflow-x:auto;font-size:.875em;line-height:1.7142857;margin-top:1.7142857em;margin-bottom:1.7142857em;border-radius:.375rem;padding-top:.8571429em;padding-right:1.1428571em;padding-bottom:.8571429em;padding-left:1.1428571em}.sm\:prose pre code{background-color:transparent;border-width:0;border-radius:0;padding:0;font-weight:400;color:inherit;font-size:inherit;font-family:inherit;line-height:inherit}.sm\:prose pre code::before{content:""}.sm\:prose pre code::after{content:""}.sm\:prose table{width:100%;table-layout:auto;text-align:left;margin-top:2em;margin-bottom:2em;font-size:.875em;line-height:1.7142857}.sm\:prose thead{color:#1a202c;font-weight:600;border-bottom-width:1px;border-bottom-color:#cbd5e0}.sm\:prose thead th{vertical-align:bottom;padding-right:.5714286em;padding-bottom:.5714286em;padding-left:.5714286em}.sm\:prose tbody tr{border-bottom-width:1px;border-bottom-color:#e2e8f0}.sm\:prose tbody tr:last-child{border-bottom-width:0}.sm\:prose tbody td{vertical-align:top;padding-top:.5714286em;padding-right:.5714286em;padding-bottom:.5714286em;padding-left:.5714286em}.sm\:prose p{margin-top:1.25em;margin-bottom:1.25em}.sm\:prose img{margin-top:2em;margin-bottom:2em}.sm\:prose video{margin-top:2em;margin-bottom:2em}.sm\:prose 
figure{margin-top:2em;margin-bottom:2em}.sm\:prose figure>*{margin-top:0;margin-bottom:0}.sm\:prose h2 code{font-size:.875em}.sm\:prose h3 code{font-size:.9em}.sm\:prose ul{margin-top:1.25em;margin-bottom:1.25em}.sm\:prose li{margin-top:.5em;margin-bottom:.5em}.sm\:prose ol>li:before{left:0}.sm\:prose>ul>li p{margin-top:.75em;margin-bottom:.75em}.sm\:prose>ul>li>:first-child{margin-top:1.25em}.sm\:prose>ul>li>:last-child{margin-bottom:1.25em}.sm\:prose>ol>li>:first-child{margin-top:1.25em}.sm\:prose>ol>li>:last-child{margin-bottom:1.25em}.sm\:prose ol ol,.sm\:prose ol ul,.sm\:prose ul ol,.sm\:prose ul ul{margin-top:.75em;margin-bottom:.75em}.sm\:prose hr+*{margin-top:0}.sm\:prose h2+*{margin-top:0}.sm\:prose h3+*{margin-top:0}.sm\:prose h4+*{margin-top:0}.sm\:prose thead th:first-child{padding-left:0}.sm\:prose thead th:last-child{padding-right:0}.sm\:prose tbody td:first-child{padding-left:0}.sm\:prose tbody td:last-child{padding-right:0}.sm\:prose>:first-child{margin-top:0}.sm\:prose>:last-child{margin-bottom:0}.sm\:prose-sm{font-size:.875rem;line-height:1.7142857}.sm\:prose-sm p{margin-top:1.1428571em;margin-bottom:1.1428571em}.prose-sm .sm\:lead{font-size:1.2857143em;line-height:1.5555556;margin-top:.8888889em;margin-bottom:.8888889em}.sm\:prose-sm blockquote{margin-top:1.3333333em;margin-bottom:1.3333333em;padding-left:1.1111111em}.sm\:prose-sm h1{font-size:2.1428571em;margin-top:0;margin-bottom:.8em;line-height:1.2}.sm\:prose-sm h2{font-size:1.4285714em;margin-top:1.6em;margin-bottom:.8em;line-height:1.4}.sm\:prose-sm h3{font-size:1.2857143em;margin-top:1.5555556em;margin-bottom:.4444444em;line-height:1.5555556}.sm\:prose-sm h4{margin-top:1.4285714em;margin-bottom:.5714286em;line-height:1.4285714}.sm\:prose-sm img{margin-top:1.7142857em;margin-bottom:1.7142857em}.sm\:prose-sm video{margin-top:1.7142857em;margin-bottom:1.7142857em}.sm\:prose-sm figure{margin-top:1.7142857em;margin-bottom:1.7142857em}.sm\:prose-sm figure>*{margin-top:0;margin-bottom:0}.sm\:prose-sm figure figcaption{font-size:.8571429em;line-height:1.3333333;margin-top:.6666667em}.sm\:prose-sm code{font-size:.8571429em}.sm\:prose-sm h2 code{font-size:.9em}.sm\:prose-sm h3 code{font-size:.8888889em}.sm\:prose-sm pre{font-size:.8571429em;line-height:1.6666667;margin-top:1.6666667em;margin-bottom:1.6666667em;border-radius:.25rem;padding-top:.6666667em;padding-right:1em;padding-bottom:.6666667em;padding-left:1em}.sm\:prose-sm ol{margin-top:1.1428571em;margin-bottom:1.1428571em}.sm\:prose-sm ul{margin-top:1.1428571em;margin-bottom:1.1428571em}.sm\:prose-sm li{margin-top:.2857143em;margin-bottom:.2857143em}.sm\:prose-sm ol>li{padding-left:1.5714286em}.sm\:prose-sm ol>li:before{left:0}.sm\:prose-sm ul>li{padding-left:1.5714286em}.sm\:prose-sm ul>li::before{height:.3571429em;width:.3571429em;top:calc(.8571429em - .1785714em);left:.2142857em}.sm\:prose-sm>ul>li p{margin-top:.5714286em;margin-bottom:.5714286em}.sm\:prose-sm>ul>li>:first-child{margin-top:1.1428571em}.sm\:prose-sm>ul>li>:last-child{margin-bottom:1.1428571em}.sm\:prose-sm>ol>li>:first-child{margin-top:1.1428571em}.sm\:prose-sm>ol>li>:last-child{margin-bottom:1.1428571em}.sm\:prose-sm ol ol,.sm\:prose-sm ol ul,.sm\:prose-sm ul ol,.sm\:prose-sm ul ul{margin-top:.5714286em;margin-bottom:.5714286em}.sm\:prose-sm hr{margin-top:2.8571429em;margin-bottom:2.8571429em}.sm\:prose-sm hr+*{margin-top:0}.sm\:prose-sm h2+*{margin-top:0}.sm\:prose-sm h3+*{margin-top:0}.sm\:prose-sm h4+*{margin-top:0}.sm\:prose-sm table{font-size:.8571429em;line-height:1.5}.sm\:prose-sm thead 
th{padding-right:1em;padding-bottom:.6666667em;padding-left:1em}.sm\:prose-sm thead th:first-child{padding-left:0}.sm\:prose-sm thead th:last-child{padding-right:0}.sm\:prose-sm tbody td{padding-top:.6666667em;padding-right:1em;padding-bottom:.6666667em;padding-left:1em}.sm\:prose-sm tbody td:first-child{padding-left:0}.sm\:prose-sm tbody td:last-child{padding-right:0}.sm\:prose-sm>:first-child{margin-top:0}.sm\:prose-sm>:last-child{margin-bottom:0}.sm\:prose-lg{font-size:1.125rem;line-height:1.7777778}.sm\:prose-lg p{margin-top:1.3333333em;margin-bottom:1.3333333em}.prose-lg .sm\:lead{font-size:1.2222222em;line-height:1.4545455;margin-top:1.0909091em;margin-bottom:1.0909091em}.sm\:prose-lg blockquote{margin-top:1.6666667em;margin-bottom:1.6666667em;padding-left:1em}.sm\:prose-lg h1{font-size:2.6666667em;margin-top:0;margin-bottom:.8333333em;line-height:1}.sm\:prose-lg h2{font-size:1.6666667em;margin-top:1.8666667em;margin-bottom:1.0666667em;line-height:1.3333333}.sm\:prose-lg h3{font-size:1.3333333em;margin-top:1.6666667em;margin-bottom:.6666667em;line-height:1.5}.sm\:prose-lg h4{margin-top:1.7777778em;margin-bottom:.4444444em;line-height:1.5555556}.sm\:prose-lg img{margin-top:1.7777778em;margin-bottom:1.7777778em}.sm\:prose-lg video{margin-top:1.7777778em;margin-bottom:1.7777778em}.sm\:prose-lg figure{margin-top:1.7777778em;margin-bottom:1.7777778em}.sm\:prose-lg figure>*{margin-top:0;margin-bottom:0}.sm\:prose-lg figure figcaption{font-size:.8888889em;line-height:1.5;margin-top:1em}.sm\:prose-lg code{font-size:.8888889em}.sm\:prose-lg h2 code{font-size:.8666667em}.sm\:prose-lg h3 code{font-size:.875em}.sm\:prose-lg pre{font-size:.8888889em;line-height:1.75;margin-top:2em;margin-bottom:2em;border-radius:.375rem;padding-top:1em;padding-right:1.5em;padding-bottom:1em;padding-left:1.5em}.sm\:prose-lg ol{margin-top:1.3333333em;margin-bottom:1.3333333em}.sm\:prose-lg ul{margin-top:1.3333333em;margin-bottom:1.3333333em}.sm\:prose-lg li{margin-top:.6666667em;margin-bottom:.6666667em}.sm\:prose-lg ol>li{padding-left:1.6666667em}.sm\:prose-lg ol>li:before{left:0}.sm\:prose-lg ul>li{padding-left:1.6666667em}.sm\:prose-lg ul>li::before{width:.3333333em;height:.3333333em;top:calc(.8888889em - .1666667em);left:.2222222em}.sm\:prose-lg>ul>li p{margin-top:.8888889em;margin-bottom:.8888889em}.sm\:prose-lg>ul>li>:first-child{margin-top:1.3333333em}.sm\:prose-lg>ul>li>:last-child{margin-bottom:1.3333333em}.sm\:prose-lg>ol>li>:first-child{margin-top:1.3333333em}.sm\:prose-lg>ol>li>:last-child{margin-bottom:1.3333333em}.sm\:prose-lg ol ol,.sm\:prose-lg ol ul,.sm\:prose-lg ul ol,.sm\:prose-lg ul ul{margin-top:.8888889em;margin-bottom:.8888889em}.sm\:prose-lg hr{margin-top:3.1111111em;margin-bottom:3.1111111em}.sm\:prose-lg hr+*{margin-top:0}.sm\:prose-lg h2+*{margin-top:0}.sm\:prose-lg h3+*{margin-top:0}.sm\:prose-lg h4+*{margin-top:0}.sm\:prose-lg table{font-size:.8888889em;line-height:1.5}.sm\:prose-lg thead th{padding-right:.75em;padding-bottom:.75em;padding-left:.75em}.sm\:prose-lg thead th:first-child{padding-left:0}.sm\:prose-lg thead th:last-child{padding-right:0}.sm\:prose-lg tbody td{padding-top:.75em;padding-right:.75em;padding-bottom:.75em;padding-left:.75em}.sm\:prose-lg tbody td:first-child{padding-left:0}.sm\:prose-lg tbody td:last-child{padding-right:0}.sm\:prose-lg>:first-child{margin-top:0}.sm\:prose-lg>:last-child{margin-bottom:0}.sm\:prose-xl{font-size:1.25rem;line-height:1.8}.sm\:prose-xl p{margin-top:1.2em;margin-bottom:1.2em}.prose-xl 
.sm\:lead{font-size:1.2em;line-height:1.5;margin-top:1em;margin-bottom:1em}.sm\:prose-xl blockquote{margin-top:1.6em;margin-bottom:1.6em;padding-left:1.0666667em}.sm\:prose-xl h1{font-size:2.8em;margin-top:0;margin-bottom:.8571429em;line-height:1}.sm\:prose-xl h2{font-size:1.8em;margin-top:1.5555556em;margin-bottom:.8888889em;line-height:1.1111111}.sm\:prose-xl h3{font-size:1.5em;margin-top:1.6em;margin-bottom:.6666667em;line-height:1.3333333}.sm\:prose-xl h4{margin-top:1.8em;margin-bottom:.6em;line-height:1.6}.sm\:prose-xl img{margin-top:2em;margin-bottom:2em}.sm\:prose-xl video{margin-top:2em;margin-bottom:2em}.sm\:prose-xl figure{margin-top:2em;margin-bottom:2em}.sm\:prose-xl figure>*{margin-top:0;margin-bottom:0}.sm\:prose-xl figure figcaption{font-size:.9em;line-height:1.5555556;margin-top:1em}.sm\:prose-xl code{font-size:.9em}.sm\:prose-xl h2 code{font-size:.8611111em}.sm\:prose-xl h3 code{font-size:.9em}.sm\:prose-xl pre{font-size:.9em;line-height:1.7777778;margin-top:2em;margin-bottom:2em;border-radius:.5rem;padding-top:1.1111111em;padding-right:1.3333333em;padding-bottom:1.1111111em;padding-left:1.3333333em}.sm\:prose-xl ol{margin-top:1.2em;margin-bottom:1.2em}.sm\:prose-xl ul{margin-top:1.2em;margin-bottom:1.2em}.sm\:prose-xl li{margin-top:.6em;margin-bottom:.6em}.sm\:prose-xl ol>li{padding-left:1.8em}.sm\:prose-xl ol>li:before{left:0}.sm\:prose-xl ul>li{padding-left:1.8em}.sm\:prose-xl ul>li::before{width:.35em;height:.35em;top:calc(.9em - .175em);left:.25em}.sm\:prose-xl>ul>li p{margin-top:.8em;margin-bottom:.8em}.sm\:prose-xl>ul>li>:first-child{margin-top:1.2em}.sm\:prose-xl>ul>li>:last-child{margin-bottom:1.2em}.sm\:prose-xl>ol>li>:first-child{margin-top:1.2em}.sm\:prose-xl>ol>li>:last-child{margin-bottom:1.2em}.sm\:prose-xl ol ol,.sm\:prose-xl ol ul,.sm\:prose-xl ul ol,.sm\:prose-xl ul ul{margin-top:.8em;margin-bottom:.8em}.sm\:prose-xl hr{margin-top:2.8em;margin-bottom:2.8em}.sm\:prose-xl hr+*{margin-top:0}.sm\:prose-xl h2+*{margin-top:0}.sm\:prose-xl h3+*{margin-top:0}.sm\:prose-xl h4+*{margin-top:0}.sm\:prose-xl table{font-size:.9em;line-height:1.5555556}.sm\:prose-xl thead th{padding-right:.6666667em;padding-bottom:.8888889em;padding-left:.6666667em}.sm\:prose-xl thead th:first-child{padding-left:0}.sm\:prose-xl thead th:last-child{padding-right:0}.sm\:prose-xl tbody td{padding-top:.8888889em;padding-right:.6666667em;padding-bottom:.8888889em;padding-left:.6666667em}.sm\:prose-xl tbody td:first-child{padding-left:0}.sm\:prose-xl tbody td:last-child{padding-right:0}.sm\:prose-xl>:first-child{margin-top:0}.sm\:prose-xl>:last-child{margin-bottom:0}.sm\:prose-2xl{font-size:1.5rem;line-height:1.6666667}.sm\:prose-2xl p{margin-top:1.3333333em;margin-bottom:1.3333333em}.prose-2xl .sm\:lead{font-size:1.25em;line-height:1.4666667;margin-top:1.0666667em;margin-bottom:1.0666667em}.sm\:prose-2xl blockquote{margin-top:1.7777778em;margin-bottom:1.7777778em;padding-left:1.1111111em}.sm\:prose-2xl h1{font-size:2.6666667em;margin-top:0;margin-bottom:.875em;line-height:1}.sm\:prose-2xl h2{font-size:2em;margin-top:1.5em;margin-bottom:.8333333em;line-height:1.0833333}.sm\:prose-2xl h3{font-size:1.5em;margin-top:1.5555556em;margin-bottom:.6666667em;line-height:1.2222222}.sm\:prose-2xl h4{margin-top:1.6666667em;margin-bottom:.6666667em;line-height:1.5}.sm\:prose-2xl img{margin-top:2em;margin-bottom:2em}.sm\:prose-2xl video{margin-top:2em;margin-bottom:2em}.sm\:prose-2xl figure{margin-top:2em;margin-bottom:2em}.sm\:prose-2xl figure>*{margin-top:0;margin-bottom:0}.sm\:prose-2xl figure 
figcaption{font-size:.8333333em;line-height:1.6;margin-top:1em}.sm\:prose-2xl code{font-size:.8333333em}.sm\:prose-2xl h2 code{font-size:.875em}.sm\:prose-2xl h3 code{font-size:.8888889em}.sm\:prose-2xl pre{font-size:.8333333em;line-height:1.8;margin-top:2em;margin-bottom:2em;border-radius:.5rem;padding-top:1.2em;padding-right:1.6em;padding-bottom:1.2em;padding-left:1.6em}.sm\:prose-2xl ol{margin-top:1.3333333em;margin-bottom:1.3333333em}.sm\:prose-2xl ul{margin-top:1.3333333em;margin-bottom:1.3333333em}.sm\:prose-2xl li{margin-top:.5em;margin-bottom:.5em}.sm\:prose-2xl ol>li{padding-left:1.6666667em}.sm\:prose-2xl ol>li:before{left:0}.sm\:prose-2xl ul>li{padding-left:1.6666667em}.sm\:prose-2xl ul>li::before{width:.3333333em;height:.3333333em;top:calc(.8333333em - .1666667em);left:.25em}.sm\:prose-2xl>ul>li p{margin-top:.8333333em;margin-bottom:.8333333em}.sm\:prose-2xl>ul>li>:first-child{margin-top:1.3333333em}.sm\:prose-2xl>ul>li>:last-child{margin-bottom:1.3333333em}.sm\:prose-2xl>ol>li>:first-child{margin-top:1.3333333em}.sm\:prose-2xl>ol>li>:last-child{margin-bottom:1.3333333em}.sm\:prose-2xl ol ol,.sm\:prose-2xl ol ul,.sm\:prose-2xl ul ol,.sm\:prose-2xl ul ul{margin-top:.6666667em;margin-bottom:.6666667em}.sm\:prose-2xl hr{margin-top:3em;margin-bottom:3em}.sm\:prose-2xl hr+*{margin-top:0}.sm\:prose-2xl h2+*{margin-top:0}.sm\:prose-2xl h3+*{margin-top:0}.sm\:prose-2xl h4+*{margin-top:0}.sm\:prose-2xl table{font-size:.8333333em;line-height:1.4}.sm\:prose-2xl thead th{padding-right:.6em;padding-bottom:.8em;padding-left:.6em}.sm\:prose-2xl thead th:first-child{padding-left:0}.sm\:prose-2xl thead th:last-child{padding-right:0}.sm\:prose-2xl tbody td{padding-top:.8em;padding-right:.6em;padding-bottom:.8em;padding-left:.6em}.sm\:prose-2xl tbody td:first-child{padding-left:0}.sm\:prose-2xl tbody td:last-child{padding-right:0}.sm\:prose-2xl>:first-child{margin-top:0}.sm\:prose-2xl>:last-child{margin-bottom:0}}@media (min-width:768px){.md\:prose{color:#4a5568;max-width:65ch;font-size:1rem;line-height:1.75}.prose .md\:lead{color:#4a5568;font-size:1.25em;line-height:1.6;margin-top:1.2em;margin-bottom:1.2em}.md\:prose a{color:#1a202c;text-decoration:underline}.md\:prose strong{color:#1a202c;font-weight:600}.md\:prose ol{counter-reset:list-counter;margin-top:1.25em;margin-bottom:1.25em}.md\:prose ol>li{position:relative;counter-increment:list-counter;padding-left:1.75em}.md\:prose ol>li::before{content:counter(list-counter) ".";position:absolute;font-weight:400;color:#718096}.md\:prose ul>li{position:relative;padding-left:1.75em}.md\:prose ul>li::before{content:"";position:absolute;background-color:#cbd5e0;border-radius:50%;width:.375em;height:.375em;top:calc(.875em - .1875em);left:.25em}.md\:prose hr{border-color:#e2e8f0;border-top-width:1px;margin-top:3em;margin-bottom:3em}.md\:prose blockquote{font-weight:500;font-style:italic;color:#1a202c;border-left-width:.25rem;border-left-color:#e2e8f0;quotes:"\201C""\201D""\2018""\2019";margin-top:1.6em;margin-bottom:1.6em;padding-left:1em}.md\:prose blockquote p:first-of-type::before{content:open-quote}.md\:prose blockquote p:last-of-type::after{content:close-quote}.md\:prose h1{color:#1a202c;font-weight:800;font-size:2.25em;margin-top:0;margin-bottom:.8888889em;line-height:1.1111111}.md\:prose h2{color:#1a202c;font-weight:700;font-size:1.5em;margin-top:2em;margin-bottom:1em;line-height:1.3333333}.md\:prose h3{color:#1a202c;font-weight:600;font-size:1.25em;margin-top:1.6em;margin-bottom:.6em;line-height:1.6}.md\:prose 
h4{color:#1a202c;font-weight:600;margin-top:1.5em;margin-bottom:.5em;line-height:1.5}.md\:prose figure figcaption{color:#718096;font-size:.875em;line-height:1.4285714;margin-top:.8571429em}.md\:prose code{font-family:Menlo,Monaco,Consolas,"Liberation Mono","Courier New",monospace;color:#1a202c;font-weight:600;font-size:.875em}.md\:prose code::before{content:"`"}.md\:prose code::after{content:"`"}.md\:prose pre{color:#e2e8f0;font-family:Menlo,Monaco,Consolas,"Liberation Mono","Courier New",monospace;background-color:#2d3748;overflow-x:auto;font-size:.875em;line-height:1.7142857;margin-top:1.7142857em;margin-bottom:1.7142857em;border-radius:.375rem;padding-top:.8571429em;padding-right:1.1428571em;padding-bottom:.8571429em;padding-left:1.1428571em}.md\:prose pre code{background-color:transparent;border-width:0;border-radius:0;padding:0;font-weight:400;color:inherit;font-size:inherit;font-family:inherit;line-height:inherit}.md\:prose pre code::before{content:""}.md\:prose pre code::after{content:""}.md\:prose table{width:100%;table-layout:auto;text-align:left;margin-top:2em;margin-bottom:2em;font-size:.875em;line-height:1.7142857}.md\:prose thead{color:#1a202c;font-weight:600;border-bottom-width:1px;border-bottom-color:#cbd5e0}.md\:prose thead th{vertical-align:bottom;padding-right:.5714286em;padding-bottom:.5714286em;padding-left:.5714286em}.md\:prose tbody tr{border-bottom-width:1px;border-bottom-color:#e2e8f0}.md\:prose tbody tr:last-child{border-bottom-width:0}.md\:prose tbody td{vertical-align:top;padding-top:.5714286em;padding-right:.5714286em;padding-bottom:.5714286em;padding-left:.5714286em}.md\:prose p{margin-top:1.25em;margin-bottom:1.25em}.md\:prose img{margin-top:2em;margin-bottom:2em}.md\:prose video{margin-top:2em;margin-bottom:2em}.md\:prose figure{margin-top:2em;margin-bottom:2em}.md\:prose figure>*{margin-top:0;margin-bottom:0}.md\:prose h2 code{font-size:.875em}.md\:prose h3 code{font-size:.9em}.md\:prose ul{margin-top:1.25em;margin-bottom:1.25em}.md\:prose li{margin-top:.5em;margin-bottom:.5em}.md\:prose ol>li:before{left:0}.md\:prose>ul>li p{margin-top:.75em;margin-bottom:.75em}.md\:prose>ul>li>:first-child{margin-top:1.25em}.md\:prose>ul>li>:last-child{margin-bottom:1.25em}.md\:prose>ol>li>:first-child{margin-top:1.25em}.md\:prose>ol>li>:last-child{margin-bottom:1.25em}.md\:prose ol ol,.md\:prose ol ul,.md\:prose ul ol,.md\:prose ul ul{margin-top:.75em;margin-bottom:.75em}.md\:prose hr+*{margin-top:0}.md\:prose h2+*{margin-top:0}.md\:prose h3+*{margin-top:0}.md\:prose h4+*{margin-top:0}.md\:prose thead th:first-child{padding-left:0}.md\:prose thead th:last-child{padding-right:0}.md\:prose tbody td:first-child{padding-left:0}.md\:prose tbody td:last-child{padding-right:0}.md\:prose>:first-child{margin-top:0}.md\:prose>:last-child{margin-bottom:0}.md\:prose-sm{font-size:.875rem;line-height:1.7142857}.md\:prose-sm p{margin-top:1.1428571em;margin-bottom:1.1428571em}.prose-sm .md\:lead{font-size:1.2857143em;line-height:1.5555556;margin-top:.8888889em;margin-bottom:.8888889em}.md\:prose-sm blockquote{margin-top:1.3333333em;margin-bottom:1.3333333em;padding-left:1.1111111em}.md\:prose-sm h1{font-size:2.1428571em;margin-top:0;margin-bottom:.8em;line-height:1.2}.md\:prose-sm h2{font-size:1.4285714em;margin-top:1.6em;margin-bottom:.8em;line-height:1.4}.md\:prose-sm h3{font-size:1.2857143em;margin-top:1.5555556em;margin-bottom:.4444444em;line-height:1.5555556}.md\:prose-sm h4{margin-top:1.4285714em;margin-bottom:.5714286em;line-height:1.4285714}.md\:prose-sm 
img{margin-top:1.7142857em;margin-bottom:1.7142857em}.md\:prose-sm video{margin-top:1.7142857em;margin-bottom:1.7142857em}.md\:prose-sm figure{margin-top:1.7142857em;margin-bottom:1.7142857em}.md\:prose-sm figure>*{margin-top:0;margin-bottom:0}.md\:prose-sm figure figcaption{font-size:.8571429em;line-height:1.3333333;margin-top:.6666667em}.md\:prose-sm code{font-size:.8571429em}.md\:prose-sm h2 code{font-size:.9em}.md\:prose-sm h3 code{font-size:.8888889em}.md\:prose-sm pre{font-size:.8571429em;line-height:1.6666667;margin-top:1.6666667em;margin-bottom:1.6666667em;border-radius:.25rem;padding-top:.6666667em;padding-right:1em;padding-bottom:.6666667em;padding-left:1em}.md\:prose-sm ol{margin-top:1.1428571em;margin-bottom:1.1428571em}.md\:prose-sm ul{margin-top:1.1428571em;margin-bottom:1.1428571em}.md\:prose-sm li{margin-top:.2857143em;margin-bottom:.2857143em}.md\:prose-sm ol>li{padding-left:1.5714286em}.md\:prose-sm ol>li:before{left:0}.md\:prose-sm ul>li{padding-left:1.5714286em}.md\:prose-sm ul>li::before{height:.3571429em;width:.3571429em;top:calc(.8571429em - .1785714em);left:.2142857em}.md\:prose-sm>ul>li p{margin-top:.5714286em;margin-bottom:.5714286em}.md\:prose-sm>ul>li>:first-child{margin-top:1.1428571em}.md\:prose-sm>ul>li>:last-child{margin-bottom:1.1428571em}.md\:prose-sm>ol>li>:first-child{margin-top:1.1428571em}.md\:prose-sm>ol>li>:last-child{margin-bottom:1.1428571em}.md\:prose-sm ol ol,.md\:prose-sm ol ul,.md\:prose-sm ul ol,.md\:prose-sm ul ul{margin-top:.5714286em;margin-bottom:.5714286em}.md\:prose-sm hr{margin-top:2.8571429em;margin-bottom:2.8571429em}.md\:prose-sm hr+*{margin-top:0}.md\:prose-sm h2+*{margin-top:0}.md\:prose-sm h3+*{margin-top:0}.md\:prose-sm h4+*{margin-top:0}.md\:prose-sm table{font-size:.8571429em;line-height:1.5}.md\:prose-sm thead th{padding-right:1em;padding-bottom:.6666667em;padding-left:1em}.md\:prose-sm thead th:first-child{padding-left:0}.md\:prose-sm thead th:last-child{padding-right:0}.md\:prose-sm tbody td{padding-top:.6666667em;padding-right:1em;padding-bottom:.6666667em;padding-left:1em}.md\:prose-sm tbody td:first-child{padding-left:0}.md\:prose-sm tbody td:last-child{padding-right:0}.md\:prose-sm>:first-child{margin-top:0}.md\:prose-sm>:last-child{margin-bottom:0}.md\:prose-lg{font-size:1.125rem;line-height:1.7777778}.md\:prose-lg p{margin-top:1.3333333em;margin-bottom:1.3333333em}.prose-lg .md\:lead{font-size:1.2222222em;line-height:1.4545455;margin-top:1.0909091em;margin-bottom:1.0909091em}.md\:prose-lg blockquote{margin-top:1.6666667em;margin-bottom:1.6666667em;padding-left:1em}.md\:prose-lg h1{font-size:2.6666667em;margin-top:0;margin-bottom:.8333333em;line-height:1}.md\:prose-lg h2{font-size:1.6666667em;margin-top:1.8666667em;margin-bottom:1.0666667em;line-height:1.3333333}.md\:prose-lg h3{font-size:1.3333333em;margin-top:1.6666667em;margin-bottom:.6666667em;line-height:1.5}.md\:prose-lg h4{margin-top:1.7777778em;margin-bottom:.4444444em;line-height:1.5555556}.md\:prose-lg img{margin-top:1.7777778em;margin-bottom:1.7777778em}.md\:prose-lg video{margin-top:1.7777778em;margin-bottom:1.7777778em}.md\:prose-lg figure{margin-top:1.7777778em;margin-bottom:1.7777778em}.md\:prose-lg figure>*{margin-top:0;margin-bottom:0}.md\:prose-lg figure figcaption{font-size:.8888889em;line-height:1.5;margin-top:1em}.md\:prose-lg code{font-size:.8888889em}.md\:prose-lg h2 code{font-size:.8666667em}.md\:prose-lg h3 code{font-size:.875em}.md\:prose-lg 
pre{font-size:.8888889em;line-height:1.75;margin-top:2em;margin-bottom:2em;border-radius:.375rem;padding-top:1em;padding-right:1.5em;padding-bottom:1em;padding-left:1.5em}.md\:prose-lg ol{margin-top:1.3333333em;margin-bottom:1.3333333em}.md\:prose-lg ul{margin-top:1.3333333em;margin-bottom:1.3333333em}.md\:prose-lg li{margin-top:.6666667em;margin-bottom:.6666667em}.md\:prose-lg ol>li{padding-left:1.6666667em}.md\:prose-lg ol>li:before{left:0}.md\:prose-lg ul>li{padding-left:1.6666667em}.md\:prose-lg ul>li::before{width:.3333333em;height:.3333333em;top:calc(.8888889em - .1666667em);left:.2222222em}.md\:prose-lg>ul>li p{margin-top:.8888889em;margin-bottom:.8888889em}.md\:prose-lg>ul>li>:first-child{margin-top:1.3333333em}.md\:prose-lg>ul>li>:last-child{margin-bottom:1.3333333em}.md\:prose-lg>ol>li>:first-child{margin-top:1.3333333em}.md\:prose-lg>ol>li>:last-child{margin-bottom:1.3333333em}.md\:prose-lg ol ol,.md\:prose-lg ol ul,.md\:prose-lg ul ol,.md\:prose-lg ul ul{margin-top:.8888889em;margin-bottom:.8888889em}.md\:prose-lg hr{margin-top:3.1111111em;margin-bottom:3.1111111em}.md\:prose-lg hr+*{margin-top:0}.md\:prose-lg h2+*{margin-top:0}.md\:prose-lg h3+*{margin-top:0}.md\:prose-lg h4+*{margin-top:0}.md\:prose-lg table{font-size:.8888889em;line-height:1.5}.md\:prose-lg thead th{padding-right:.75em;padding-bottom:.75em;padding-left:.75em}.md\:prose-lg thead th:first-child{padding-left:0}.md\:prose-lg thead th:last-child{padding-right:0}.md\:prose-lg tbody td{padding-top:.75em;padding-right:.75em;padding-bottom:.75em;padding-left:.75em}.md\:prose-lg tbody td:first-child{padding-left:0}.md\:prose-lg tbody td:last-child{padding-right:0}.md\:prose-lg>:first-child{margin-top:0}.md\:prose-lg>:last-child{margin-bottom:0}.md\:prose-xl{font-size:1.25rem;line-height:1.8}.md\:prose-xl p{margin-top:1.2em;margin-bottom:1.2em}.prose-xl .md\:lead{font-size:1.2em;line-height:1.5;margin-top:1em;margin-bottom:1em}.md\:prose-xl blockquote{margin-top:1.6em;margin-bottom:1.6em;padding-left:1.0666667em}.md\:prose-xl h1{font-size:2.8em;margin-top:0;margin-bottom:.8571429em;line-height:1}.md\:prose-xl h2{font-size:1.8em;margin-top:1.5555556em;margin-bottom:.8888889em;line-height:1.1111111}.md\:prose-xl h3{font-size:1.5em;margin-top:1.6em;margin-bottom:.6666667em;line-height:1.3333333}.md\:prose-xl h4{margin-top:1.8em;margin-bottom:.6em;line-height:1.6}.md\:prose-xl img{margin-top:2em;margin-bottom:2em}.md\:prose-xl video{margin-top:2em;margin-bottom:2em}.md\:prose-xl figure{margin-top:2em;margin-bottom:2em}.md\:prose-xl figure>*{margin-top:0;margin-bottom:0}.md\:prose-xl figure figcaption{font-size:.9em;line-height:1.5555556;margin-top:1em}.md\:prose-xl code{font-size:.9em}.md\:prose-xl h2 code{font-size:.8611111em}.md\:prose-xl h3 code{font-size:.9em}.md\:prose-xl pre{font-size:.9em;line-height:1.7777778;margin-top:2em;margin-bottom:2em;border-radius:.5rem;padding-top:1.1111111em;padding-right:1.3333333em;padding-bottom:1.1111111em;padding-left:1.3333333em}.md\:prose-xl ol{margin-top:1.2em;margin-bottom:1.2em}.md\:prose-xl ul{margin-top:1.2em;margin-bottom:1.2em}.md\:prose-xl li{margin-top:.6em;margin-bottom:.6em}.md\:prose-xl ol>li{padding-left:1.8em}.md\:prose-xl ol>li:before{left:0}.md\:prose-xl ul>li{padding-left:1.8em}.md\:prose-xl ul>li::before{width:.35em;height:.35em;top:calc(.9em - .175em);left:.25em}.md\:prose-xl>ul>li 
p{margin-top:.8em;margin-bottom:.8em}.md\:prose-xl>ul>li>:first-child{margin-top:1.2em}.md\:prose-xl>ul>li>:last-child{margin-bottom:1.2em}.md\:prose-xl>ol>li>:first-child{margin-top:1.2em}.md\:prose-xl>ol>li>:last-child{margin-bottom:1.2em}.md\:prose-xl ol ol,.md\:prose-xl ol ul,.md\:prose-xl ul ol,.md\:prose-xl ul ul{margin-top:.8em;margin-bottom:.8em}.md\:prose-xl hr{margin-top:2.8em;margin-bottom:2.8em}.md\:prose-xl hr+*{margin-top:0}.md\:prose-xl h2+*{margin-top:0}.md\:prose-xl h3+*{margin-top:0}.md\:prose-xl h4+*{margin-top:0}.md\:prose-xl table{font-size:.9em;line-height:1.5555556}.md\:prose-xl thead th{padding-right:.6666667em;padding-bottom:.8888889em;padding-left:.6666667em}.md\:prose-xl thead th:first-child{padding-left:0}.md\:prose-xl thead th:last-child{padding-right:0}.md\:prose-xl tbody td{padding-top:.8888889em;padding-right:.6666667em;padding-bottom:.8888889em;padding-left:.6666667em}.md\:prose-xl tbody td:first-child{padding-left:0}.md\:prose-xl tbody td:last-child{padding-right:0}.md\:prose-xl>:first-child{margin-top:0}.md\:prose-xl>:last-child{margin-bottom:0}.md\:prose-2xl{font-size:1.5rem;line-height:1.6666667}.md\:prose-2xl p{margin-top:1.3333333em;margin-bottom:1.3333333em}.prose-2xl .md\:lead{font-size:1.25em;line-height:1.4666667;margin-top:1.0666667em;margin-bottom:1.0666667em}.md\:prose-2xl blockquote{margin-top:1.7777778em;margin-bottom:1.7777778em;padding-left:1.1111111em}.md\:prose-2xl h1{font-size:2.6666667em;margin-top:0;margin-bottom:.875em;line-height:1}.md\:prose-2xl h2{font-size:2em;margin-top:1.5em;margin-bottom:.8333333em;line-height:1.0833333}.md\:prose-2xl h3{font-size:1.5em;margin-top:1.5555556em;margin-bottom:.6666667em;line-height:1.2222222}.md\:prose-2xl h4{margin-top:1.6666667em;margin-bottom:.6666667em;line-height:1.5}.md\:prose-2xl img{margin-top:2em;margin-bottom:2em}.md\:prose-2xl video{margin-top:2em;margin-bottom:2em}.md\:prose-2xl figure{margin-top:2em;margin-bottom:2em}.md\:prose-2xl figure>*{margin-top:0;margin-bottom:0}.md\:prose-2xl figure figcaption{font-size:.8333333em;line-height:1.6;margin-top:1em}.md\:prose-2xl code{font-size:.8333333em}.md\:prose-2xl h2 code{font-size:.875em}.md\:prose-2xl h3 code{font-size:.8888889em}.md\:prose-2xl pre{font-size:.8333333em;line-height:1.8;margin-top:2em;margin-bottom:2em;border-radius:.5rem;padding-top:1.2em;padding-right:1.6em;padding-bottom:1.2em;padding-left:1.6em}.md\:prose-2xl ol{margin-top:1.3333333em;margin-bottom:1.3333333em}.md\:prose-2xl ul{margin-top:1.3333333em;margin-bottom:1.3333333em}.md\:prose-2xl li{margin-top:.5em;margin-bottom:.5em}.md\:prose-2xl ol>li{padding-left:1.6666667em}.md\:prose-2xl ol>li:before{left:0}.md\:prose-2xl ul>li{padding-left:1.6666667em}.md\:prose-2xl ul>li::before{width:.3333333em;height:.3333333em;top:calc(.8333333em - .1666667em);left:.25em}.md\:prose-2xl>ul>li p{margin-top:.8333333em;margin-bottom:.8333333em}.md\:prose-2xl>ul>li>:first-child{margin-top:1.3333333em}.md\:prose-2xl>ul>li>:last-child{margin-bottom:1.3333333em}.md\:prose-2xl>ol>li>:first-child{margin-top:1.3333333em}.md\:prose-2xl>ol>li>:last-child{margin-bottom:1.3333333em}.md\:prose-2xl ol ol,.md\:prose-2xl ol ul,.md\:prose-2xl ul ol,.md\:prose-2xl ul ul{margin-top:.6666667em;margin-bottom:.6666667em}.md\:prose-2xl hr{margin-top:3em;margin-bottom:3em}.md\:prose-2xl hr+*{margin-top:0}.md\:prose-2xl h2+*{margin-top:0}.md\:prose-2xl h3+*{margin-top:0}.md\:prose-2xl h4+*{margin-top:0}.md\:prose-2xl table{font-size:.8333333em;line-height:1.4}.md\:prose-2xl thead 
th{padding-right:.6em;padding-bottom:.8em;padding-left:.6em}.md\:prose-2xl thead th:first-child{padding-left:0}.md\:prose-2xl thead th:last-child{padding-right:0}.md\:prose-2xl tbody td{padding-top:.8em;padding-right:.6em;padding-bottom:.8em;padding-left:.6em}.md\:prose-2xl tbody td:first-child{padding-left:0}.md\:prose-2xl tbody td:last-child{padding-right:0}.md\:prose-2xl>:first-child{margin-top:0}.md\:prose-2xl>:last-child{margin-bottom:0}}@media (min-width:1024px){.lg\:prose{color:#4a5568;max-width:65ch;font-size:1rem;line-height:1.75}.prose .lg\:lead{color:#4a5568;font-size:1.25em;line-height:1.6;margin-top:1.2em;margin-bottom:1.2em}.lg\:prose a{color:#1a202c;text-decoration:underline}.lg\:prose strong{color:#1a202c;font-weight:600}.lg\:prose ol{counter-reset:list-counter;margin-top:1.25em;margin-bottom:1.25em}.lg\:prose ol>li{position:relative;counter-increment:list-counter;padding-left:1.75em}.lg\:prose ol>li::before{content:counter(list-counter) ".";position:absolute;font-weight:400;color:#718096}.lg\:prose ul>li{position:relative;padding-left:1.75em}.lg\:prose ul>li::before{content:"";position:absolute;background-color:#cbd5e0;border-radius:50%;width:.375em;height:.375em;top:calc(.875em - .1875em);left:.25em}.lg\:prose hr{border-color:#e2e8f0;border-top-width:1px;margin-top:3em;margin-bottom:3em}.lg\:prose blockquote{font-weight:500;font-style:italic;color:#1a202c;border-left-width:.25rem;border-left-color:#e2e8f0;quotes:"\201C""\201D""\2018""\2019";margin-top:1.6em;margin-bottom:1.6em;padding-left:1em}.lg\:prose blockquote p:first-of-type::before{content:open-quote}.lg\:prose blockquote p:last-of-type::after{content:close-quote}.lg\:prose h1{color:#1a202c;font-weight:800;font-size:2.25em;margin-top:0;margin-bottom:.8888889em;line-height:1.1111111}.lg\:prose h2{color:#1a202c;font-weight:700;font-size:1.5em;margin-top:2em;margin-bottom:1em;line-height:1.3333333}.lg\:prose h3{color:#1a202c;font-weight:600;font-size:1.25em;margin-top:1.6em;margin-bottom:.6em;line-height:1.6}.lg\:prose h4{color:#1a202c;font-weight:600;margin-top:1.5em;margin-bottom:.5em;line-height:1.5}.lg\:prose figure figcaption{color:#718096;font-size:.875em;line-height:1.4285714;margin-top:.8571429em}.lg\:prose code{font-family:Menlo,Monaco,Consolas,"Liberation Mono","Courier New",monospace;color:#1a202c;font-weight:600;font-size:.875em}.lg\:prose code::before{content:"`"}.lg\:prose code::after{content:"`"}.lg\:prose pre{color:#e2e8f0;font-family:Menlo,Monaco,Consolas,"Liberation Mono","Courier New",monospace;background-color:#2d3748;overflow-x:auto;font-size:.875em;line-height:1.7142857;margin-top:1.7142857em;margin-bottom:1.7142857em;border-radius:.375rem;padding-top:.8571429em;padding-right:1.1428571em;padding-bottom:.8571429em;padding-left:1.1428571em}.lg\:prose pre code{background-color:transparent;border-width:0;border-radius:0;padding:0;font-weight:400;color:inherit;font-size:inherit;font-family:inherit;line-height:inherit}.lg\:prose pre code::before{content:""}.lg\:prose pre code::after{content:""}.lg\:prose table{width:100%;table-layout:auto;text-align:left;margin-top:2em;margin-bottom:2em;font-size:.875em;line-height:1.7142857}.lg\:prose thead{color:#1a202c;font-weight:600;border-bottom-width:1px;border-bottom-color:#cbd5e0}.lg\:prose thead th{vertical-align:bottom;padding-right:.5714286em;padding-bottom:.5714286em;padding-left:.5714286em}.lg\:prose tbody tr{border-bottom-width:1px;border-bottom-color:#e2e8f0}.lg\:prose tbody tr:last-child{border-bottom-width:0}.lg\:prose tbody 
td{vertical-align:top;padding-top:.5714286em;padding-right:.5714286em;padding-bottom:.5714286em;padding-left:.5714286em}.lg\:prose p{margin-top:1.25em;margin-bottom:1.25em}.lg\:prose img{margin-top:2em;margin-bottom:2em}.lg\:prose video{margin-top:2em;margin-bottom:2em}.lg\:prose figure{margin-top:2em;margin-bottom:2em}.lg\:prose figure>*{margin-top:0;margin-bottom:0}.lg\:prose h2 code{font-size:.875em}.lg\:prose h3 code{font-size:.9em}.lg\:prose ul{margin-top:1.25em;margin-bottom:1.25em}.lg\:prose li{margin-top:.5em;margin-bottom:.5em}.lg\:prose ol>li:before{left:0}.lg\:prose>ul>li p{margin-top:.75em;margin-bottom:.75em}.lg\:prose>ul>li>:first-child{margin-top:1.25em}.lg\:prose>ul>li>:last-child{margin-bottom:1.25em}.lg\:prose>ol>li>:first-child{margin-top:1.25em}.lg\:prose>ol>li>:last-child{margin-bottom:1.25em}.lg\:prose ol ol,.lg\:prose ol ul,.lg\:prose ul ol,.lg\:prose ul ul{margin-top:.75em;margin-bottom:.75em}.lg\:prose hr+*{margin-top:0}.lg\:prose h2+*{margin-top:0}.lg\:prose h3+*{margin-top:0}.lg\:prose h4+*{margin-top:0}.lg\:prose thead th:first-child{padding-left:0}.lg\:prose thead th:last-child{padding-right:0}.lg\:prose tbody td:first-child{padding-left:0}.lg\:prose tbody td:last-child{padding-right:0}.lg\:prose>:first-child{margin-top:0}.lg\:prose>:last-child{margin-bottom:0}.lg\:prose-sm{font-size:.875rem;line-height:1.7142857}.lg\:prose-sm p{margin-top:1.1428571em;margin-bottom:1.1428571em}.prose-sm .lg\:lead{font-size:1.2857143em;line-height:1.5555556;margin-top:.8888889em;margin-bottom:.8888889em}.lg\:prose-sm blockquote{margin-top:1.3333333em;margin-bottom:1.3333333em;padding-left:1.1111111em}.lg\:prose-sm h1{font-size:2.1428571em;margin-top:0;margin-bottom:.8em;line-height:1.2}.lg\:prose-sm h2{font-size:1.4285714em;margin-top:1.6em;margin-bottom:.8em;line-height:1.4}.lg\:prose-sm h3{font-size:1.2857143em;margin-top:1.5555556em;margin-bottom:.4444444em;line-height:1.5555556}.lg\:prose-sm h4{margin-top:1.4285714em;margin-bottom:.5714286em;line-height:1.4285714}.lg\:prose-sm img{margin-top:1.7142857em;margin-bottom:1.7142857em}.lg\:prose-sm video{margin-top:1.7142857em;margin-bottom:1.7142857em}.lg\:prose-sm figure{margin-top:1.7142857em;margin-bottom:1.7142857em}.lg\:prose-sm figure>*{margin-top:0;margin-bottom:0}.lg\:prose-sm figure figcaption{font-size:.8571429em;line-height:1.3333333;margin-top:.6666667em}.lg\:prose-sm code{font-size:.8571429em}.lg\:prose-sm h2 code{font-size:.9em}.lg\:prose-sm h3 code{font-size:.8888889em}.lg\:prose-sm pre{font-size:.8571429em;line-height:1.6666667;margin-top:1.6666667em;margin-bottom:1.6666667em;border-radius:.25rem;padding-top:.6666667em;padding-right:1em;padding-bottom:.6666667em;padding-left:1em}.lg\:prose-sm ol{margin-top:1.1428571em;margin-bottom:1.1428571em}.lg\:prose-sm ul{margin-top:1.1428571em;margin-bottom:1.1428571em}.lg\:prose-sm li{margin-top:.2857143em;margin-bottom:.2857143em}.lg\:prose-sm ol>li{padding-left:1.5714286em}.lg\:prose-sm ol>li:before{left:0}.lg\:prose-sm ul>li{padding-left:1.5714286em}.lg\:prose-sm ul>li::before{height:.3571429em;width:.3571429em;top:calc(.8571429em - .1785714em);left:.2142857em}.lg\:prose-sm>ul>li p{margin-top:.5714286em;margin-bottom:.5714286em}.lg\:prose-sm>ul>li>:first-child{margin-top:1.1428571em}.lg\:prose-sm>ul>li>:last-child{margin-bottom:1.1428571em}.lg\:prose-sm>ol>li>:first-child{margin-top:1.1428571em}.lg\:prose-sm>ol>li>:last-child{margin-bottom:1.1428571em}.lg\:prose-sm ol ol,.lg\:prose-sm ol ul,.lg\:prose-sm ul ol,.lg\:prose-sm ul 
ul{margin-top:.5714286em;margin-bottom:.5714286em}.lg\:prose-sm hr{margin-top:2.8571429em;margin-bottom:2.8571429em}.lg\:prose-sm hr+*{margin-top:0}.lg\:prose-sm h2+*{margin-top:0}.lg\:prose-sm h3+*{margin-top:0}.lg\:prose-sm h4+*{margin-top:0}.lg\:prose-sm table{font-size:.8571429em;line-height:1.5}.lg\:prose-sm thead th{padding-right:1em;padding-bottom:.6666667em;padding-left:1em}.lg\:prose-sm thead th:first-child{padding-left:0}.lg\:prose-sm thead th:last-child{padding-right:0}.lg\:prose-sm tbody td{padding-top:.6666667em;padding-right:1em;padding-bottom:.6666667em;padding-left:1em}.lg\:prose-sm tbody td:first-child{padding-left:0}.lg\:prose-sm tbody td:last-child{padding-right:0}.lg\:prose-sm>:first-child{margin-top:0}.lg\:prose-sm>:last-child{margin-bottom:0}.lg\:prose-lg{font-size:1.125rem;line-height:1.7777778}.lg\:prose-lg p{margin-top:1.3333333em;margin-bottom:1.3333333em}.prose-lg .lg\:lead{font-size:1.2222222em;line-height:1.4545455;margin-top:1.0909091em;margin-bottom:1.0909091em}.lg\:prose-lg blockquote{margin-top:1.6666667em;margin-bottom:1.6666667em;padding-left:1em}.lg\:prose-lg h1{font-size:2.6666667em;margin-top:0;margin-bottom:.8333333em;line-height:1}.lg\:prose-lg h2{font-size:1.6666667em;margin-top:1.8666667em;margin-bottom:1.0666667em;line-height:1.3333333}.lg\:prose-lg h3{font-size:1.3333333em;margin-top:1.6666667em;margin-bottom:.6666667em;line-height:1.5}.lg\:prose-lg h4{margin-top:1.7777778em;margin-bottom:.4444444em;line-height:1.5555556}.lg\:prose-lg img{margin-top:1.7777778em;margin-bottom:1.7777778em}.lg\:prose-lg video{margin-top:1.7777778em;margin-bottom:1.7777778em}.lg\:prose-lg figure{margin-top:1.7777778em;margin-bottom:1.7777778em}.lg\:prose-lg figure>*{margin-top:0;margin-bottom:0}.lg\:prose-lg figure figcaption{font-size:.8888889em;line-height:1.5;margin-top:1em}.lg\:prose-lg code{font-size:.8888889em}.lg\:prose-lg h2 code{font-size:.8666667em}.lg\:prose-lg h3 code{font-size:.875em}.lg\:prose-lg pre{font-size:.8888889em;line-height:1.75;margin-top:2em;margin-bottom:2em;border-radius:.375rem;padding-top:1em;padding-right:1.5em;padding-bottom:1em;padding-left:1.5em}.lg\:prose-lg ol{margin-top:1.3333333em;margin-bottom:1.3333333em}.lg\:prose-lg ul{margin-top:1.3333333em;margin-bottom:1.3333333em}.lg\:prose-lg li{margin-top:.6666667em;margin-bottom:.6666667em}.lg\:prose-lg ol>li{padding-left:1.6666667em}.lg\:prose-lg ol>li:before{left:0}.lg\:prose-lg ul>li{padding-left:1.6666667em}.lg\:prose-lg ul>li::before{width:.3333333em;height:.3333333em;top:calc(.8888889em - .1666667em);left:.2222222em}.lg\:prose-lg>ul>li p{margin-top:.8888889em;margin-bottom:.8888889em}.lg\:prose-lg>ul>li>:first-child{margin-top:1.3333333em}.lg\:prose-lg>ul>li>:last-child{margin-bottom:1.3333333em}.lg\:prose-lg>ol>li>:first-child{margin-top:1.3333333em}.lg\:prose-lg>ol>li>:last-child{margin-bottom:1.3333333em}.lg\:prose-lg ol ol,.lg\:prose-lg ol ul,.lg\:prose-lg ul ol,.lg\:prose-lg ul ul{margin-top:.8888889em;margin-bottom:.8888889em}.lg\:prose-lg hr{margin-top:3.1111111em;margin-bottom:3.1111111em}.lg\:prose-lg hr+*{margin-top:0}.lg\:prose-lg h2+*{margin-top:0}.lg\:prose-lg h3+*{margin-top:0}.lg\:prose-lg h4+*{margin-top:0}.lg\:prose-lg table{font-size:.8888889em;line-height:1.5}.lg\:prose-lg thead th{padding-right:.75em;padding-bottom:.75em;padding-left:.75em}.lg\:prose-lg thead th:first-child{padding-left:0}.lg\:prose-lg thead th:last-child{padding-right:0}.lg\:prose-lg tbody td{padding-top:.75em;padding-right:.75em;padding-bottom:.75em;padding-left:.75em}.lg\:prose-lg tbody 
td:first-child{padding-left:0}.lg\:prose-lg tbody td:last-child{padding-right:0}.lg\:prose-lg>:first-child{margin-top:0}.lg\:prose-lg>:last-child{margin-bottom:0}.lg\:prose-xl{font-size:1.25rem;line-height:1.8}.lg\:prose-xl p{margin-top:1.2em;margin-bottom:1.2em}.prose-xl .lg\:lead{font-size:1.2em;line-height:1.5;margin-top:1em;margin-bottom:1em}.lg\:prose-xl blockquote{margin-top:1.6em;margin-bottom:1.6em;padding-left:1.0666667em}.lg\:prose-xl h1{font-size:2.8em;margin-top:0;margin-bottom:.8571429em;line-height:1}.lg\:prose-xl h2{font-size:1.8em;margin-top:1.5555556em;margin-bottom:.8888889em;line-height:1.1111111}.lg\:prose-xl h3{font-size:1.5em;margin-top:1.6em;margin-bottom:.6666667em;line-height:1.3333333}.lg\:prose-xl h4{margin-top:1.8em;margin-bottom:.6em;line-height:1.6}.lg\:prose-xl img{margin-top:2em;margin-bottom:2em}.lg\:prose-xl video{margin-top:2em;margin-bottom:2em}.lg\:prose-xl figure{margin-top:2em;margin-bottom:2em}.lg\:prose-xl figure>*{margin-top:0;margin-bottom:0}.lg\:prose-xl figure figcaption{font-size:.9em;line-height:1.5555556;margin-top:1em}.lg\:prose-xl code{font-size:.9em}.lg\:prose-xl h2 code{font-size:.8611111em}.lg\:prose-xl h3 code{font-size:.9em}.lg\:prose-xl pre{font-size:.9em;line-height:1.7777778;margin-top:2em;margin-bottom:2em;border-radius:.5rem;padding-top:1.1111111em;padding-right:1.3333333em;padding-bottom:1.1111111em;padding-left:1.3333333em}.lg\:prose-xl ol{margin-top:1.2em;margin-bottom:1.2em}.lg\:prose-xl ul{margin-top:1.2em;margin-bottom:1.2em}.lg\:prose-xl li{margin-top:.6em;margin-bottom:.6em}.lg\:prose-xl ol>li{padding-left:1.8em}.lg\:prose-xl ol>li:before{left:0}.lg\:prose-xl ul>li{padding-left:1.8em}.lg\:prose-xl ul>li::before{width:.35em;height:.35em;top:calc(.9em - .175em);left:.25em}.lg\:prose-xl>ul>li p{margin-top:.8em;margin-bottom:.8em}.lg\:prose-xl>ul>li>:first-child{margin-top:1.2em}.lg\:prose-xl>ul>li>:last-child{margin-bottom:1.2em}.lg\:prose-xl>ol>li>:first-child{margin-top:1.2em}.lg\:prose-xl>ol>li>:last-child{margin-bottom:1.2em}.lg\:prose-xl ol ol,.lg\:prose-xl ol ul,.lg\:prose-xl ul ol,.lg\:prose-xl ul ul{margin-top:.8em;margin-bottom:.8em}.lg\:prose-xl hr{margin-top:2.8em;margin-bottom:2.8em}.lg\:prose-xl hr+*{margin-top:0}.lg\:prose-xl h2+*{margin-top:0}.lg\:prose-xl h3+*{margin-top:0}.lg\:prose-xl h4+*{margin-top:0}.lg\:prose-xl table{font-size:.9em;line-height:1.5555556}.lg\:prose-xl thead th{padding-right:.6666667em;padding-bottom:.8888889em;padding-left:.6666667em}.lg\:prose-xl thead th:first-child{padding-left:0}.lg\:prose-xl thead th:last-child{padding-right:0}.lg\:prose-xl tbody td{padding-top:.8888889em;padding-right:.6666667em;padding-bottom:.8888889em;padding-left:.6666667em}.lg\:prose-xl tbody td:first-child{padding-left:0}.lg\:prose-xl tbody td:last-child{padding-right:0}.lg\:prose-xl>:first-child{margin-top:0}.lg\:prose-xl>:last-child{margin-bottom:0}.lg\:prose-2xl{font-size:1.5rem;line-height:1.6666667}.lg\:prose-2xl p{margin-top:1.3333333em;margin-bottom:1.3333333em}.prose-2xl .lg\:lead{font-size:1.25em;line-height:1.4666667;margin-top:1.0666667em;margin-bottom:1.0666667em}.lg\:prose-2xl blockquote{margin-top:1.7777778em;margin-bottom:1.7777778em;padding-left:1.1111111em}.lg\:prose-2xl h1{font-size:2.6666667em;margin-top:0;margin-bottom:.875em;line-height:1}.lg\:prose-2xl h2{font-size:2em;margin-top:1.5em;margin-bottom:.8333333em;line-height:1.0833333}.lg\:prose-2xl h3{font-size:1.5em;margin-top:1.5555556em;margin-bottom:.6666667em;line-height:1.2222222}.lg\:prose-2xl 
h4{margin-top:1.6666667em;margin-bottom:.6666667em;line-height:1.5}.lg\:prose-2xl img{margin-top:2em;margin-bottom:2em}.lg\:prose-2xl video{margin-top:2em;margin-bottom:2em}.lg\:prose-2xl figure{margin-top:2em;margin-bottom:2em}.lg\:prose-2xl figure>*{margin-top:0;margin-bottom:0}.lg\:prose-2xl figure figcaption{font-size:.8333333em;line-height:1.6;margin-top:1em}.lg\:prose-2xl code{font-size:.8333333em}.lg\:prose-2xl h2 code{font-size:.875em}.lg\:prose-2xl h3 code{font-size:.8888889em}.lg\:prose-2xl pre{font-size:.8333333em;line-height:1.8;margin-top:2em;margin-bottom:2em;border-radius:.5rem;padding-top:1.2em;padding-right:1.6em;padding-bottom:1.2em;padding-left:1.6em}.lg\:prose-2xl ol{margin-top:1.3333333em;margin-bottom:1.3333333em}.lg\:prose-2xl ul{margin-top:1.3333333em;margin-bottom:1.3333333em}.lg\:prose-2xl li{margin-top:.5em;margin-bottom:.5em}.lg\:prose-2xl ol>li{padding-left:1.6666667em}.lg\:prose-2xl ol>li:before{left:0}.lg\:prose-2xl ul>li{padding-left:1.6666667em}.lg\:prose-2xl ul>li::before{width:.3333333em;height:.3333333em;top:calc(.8333333em - .1666667em);left:.25em}.lg\:prose-2xl>ul>li p{margin-top:.8333333em;margin-bottom:.8333333em}.lg\:prose-2xl>ul>li>:first-child{margin-top:1.3333333em}.lg\:prose-2xl>ul>li>:last-child{margin-bottom:1.3333333em}.lg\:prose-2xl>ol>li>:first-child{margin-top:1.3333333em}.lg\:prose-2xl>ol>li>:last-child{margin-bottom:1.3333333em}.lg\:prose-2xl ol ol,.lg\:prose-2xl ol ul,.lg\:prose-2xl ul ol,.lg\:prose-2xl ul ul{margin-top:.6666667em;margin-bottom:.6666667em}.lg\:prose-2xl hr{margin-top:3em;margin-bottom:3em}.lg\:prose-2xl hr+*{margin-top:0}.lg\:prose-2xl h2+*{margin-top:0}.lg\:prose-2xl h3+*{margin-top:0}.lg\:prose-2xl h4+*{margin-top:0}.lg\:prose-2xl table{font-size:.8333333em;line-height:1.4}.lg\:prose-2xl thead th{padding-right:.6em;padding-bottom:.8em;padding-left:.6em}.lg\:prose-2xl thead th:first-child{padding-left:0}.lg\:prose-2xl thead th:last-child{padding-right:0}.lg\:prose-2xl tbody td{padding-top:.8em;padding-right:.6em;padding-bottom:.8em;padding-left:.6em}.lg\:prose-2xl tbody td:first-child{padding-left:0}.lg\:prose-2xl tbody td:last-child{padding-right:0}.lg\:prose-2xl>:first-child{margin-top:0}.lg\:prose-2xl>:last-child{margin-bottom:0}}@media (min-width:1280px){.xl\:prose{color:#4a5568;max-width:65ch;font-size:1rem;line-height:1.75}.prose .xl\:lead{color:#4a5568;font-size:1.25em;line-height:1.6;margin-top:1.2em;margin-bottom:1.2em}.xl\:prose a{color:#1a202c;text-decoration:underline}.xl\:prose strong{color:#1a202c;font-weight:600}.xl\:prose ol{counter-reset:list-counter;margin-top:1.25em;margin-bottom:1.25em}.xl\:prose ol>li{position:relative;counter-increment:list-counter;padding-left:1.75em}.xl\:prose ol>li::before{content:counter(list-counter) ".";position:absolute;font-weight:400;color:#718096}.xl\:prose ul>li{position:relative;padding-left:1.75em}.xl\:prose ul>li::before{content:"";position:absolute;background-color:#cbd5e0;border-radius:50%;width:.375em;height:.375em;top:calc(.875em - .1875em);left:.25em}.xl\:prose hr{border-color:#e2e8f0;border-top-width:1px;margin-top:3em;margin-bottom:3em}.xl\:prose blockquote{font-weight:500;font-style:italic;color:#1a202c;border-left-width:.25rem;border-left-color:#e2e8f0;quotes:"\201C""\201D""\2018""\2019";margin-top:1.6em;margin-bottom:1.6em;padding-left:1em}.xl\:prose blockquote p:first-of-type::before{content:open-quote}.xl\:prose blockquote p:last-of-type::after{content:close-quote}.xl\:prose 
h1{color:#1a202c;font-weight:800;font-size:2.25em;margin-top:0;margin-bottom:.8888889em;line-height:1.1111111}.xl\:prose h2{color:#1a202c;font-weight:700;font-size:1.5em;margin-top:2em;margin-bottom:1em;line-height:1.3333333}.xl\:prose h3{color:#1a202c;font-weight:600;font-size:1.25em;margin-top:1.6em;margin-bottom:.6em;line-height:1.6}.xl\:prose h4{color:#1a202c;font-weight:600;margin-top:1.5em;margin-bottom:.5em;line-height:1.5}.xl\:prose figure figcaption{color:#718096;font-size:.875em;line-height:1.4285714;margin-top:.8571429em}.xl\:prose code{font-family:Menlo,Monaco,Consolas,"Liberation Mono","Courier New",monospace;color:#1a202c;font-weight:600;font-size:.875em}.xl\:prose code::before{content:"`"}.xl\:prose code::after{content:"`"}.xl\:prose pre{color:#e2e8f0;font-family:Menlo,Monaco,Consolas,"Liberation Mono","Courier New",monospace;background-color:#2d3748;overflow-x:auto;font-size:.875em;line-height:1.7142857;margin-top:1.7142857em;margin-bottom:1.7142857em;border-radius:.375rem;padding-top:.8571429em;padding-right:1.1428571em;padding-bottom:.8571429em;padding-left:1.1428571em}.xl\:prose pre code{background-color:transparent;border-width:0;border-radius:0;padding:0;font-weight:400;color:inherit;font-size:inherit;font-family:inherit;line-height:inherit}.xl\:prose pre code::before{content:""}.xl\:prose pre code::after{content:""}.xl\:prose table{width:100%;table-layout:auto;text-align:left;margin-top:2em;margin-bottom:2em;font-size:.875em;line-height:1.7142857}.xl\:prose thead{color:#1a202c;font-weight:600;border-bottom-width:1px;border-bottom-color:#cbd5e0}.xl\:prose thead th{vertical-align:bottom;padding-right:.5714286em;padding-bottom:.5714286em;padding-left:.5714286em}.xl\:prose tbody tr{border-bottom-width:1px;border-bottom-color:#e2e8f0}.xl\:prose tbody tr:last-child{border-bottom-width:0}.xl\:prose tbody td{vertical-align:top;padding-top:.5714286em;padding-right:.5714286em;padding-bottom:.5714286em;padding-left:.5714286em}.xl\:prose p{margin-top:1.25em;margin-bottom:1.25em}.xl\:prose img{margin-top:2em;margin-bottom:2em}.xl\:prose video{margin-top:2em;margin-bottom:2em}.xl\:prose figure{margin-top:2em;margin-bottom:2em}.xl\:prose figure>*{margin-top:0;margin-bottom:0}.xl\:prose h2 code{font-size:.875em}.xl\:prose h3 code{font-size:.9em}.xl\:prose ul{margin-top:1.25em;margin-bottom:1.25em}.xl\:prose li{margin-top:.5em;margin-bottom:.5em}.xl\:prose ol>li:before{left:0}.xl\:prose>ul>li p{margin-top:.75em;margin-bottom:.75em}.xl\:prose>ul>li>:first-child{margin-top:1.25em}.xl\:prose>ul>li>:last-child{margin-bottom:1.25em}.xl\:prose>ol>li>:first-child{margin-top:1.25em}.xl\:prose>ol>li>:last-child{margin-bottom:1.25em}.xl\:prose ol ol,.xl\:prose ol ul,.xl\:prose ul ol,.xl\:prose ul ul{margin-top:.75em;margin-bottom:.75em}.xl\:prose hr+*{margin-top:0}.xl\:prose h2+*{margin-top:0}.xl\:prose h3+*{margin-top:0}.xl\:prose h4+*{margin-top:0}.xl\:prose thead th:first-child{padding-left:0}.xl\:prose thead th:last-child{padding-right:0}.xl\:prose tbody td:first-child{padding-left:0}.xl\:prose tbody td:last-child{padding-right:0}.xl\:prose>:first-child{margin-top:0}.xl\:prose>:last-child{margin-bottom:0}.xl\:prose-sm{font-size:.875rem;line-height:1.7142857}.xl\:prose-sm p{margin-top:1.1428571em;margin-bottom:1.1428571em}.prose-sm .xl\:lead{font-size:1.2857143em;line-height:1.5555556;margin-top:.8888889em;margin-bottom:.8888889em}.xl\:prose-sm blockquote{margin-top:1.3333333em;margin-bottom:1.3333333em;padding-left:1.1111111em}.xl\:prose-sm 
h1{font-size:2.1428571em;margin-top:0;margin-bottom:.8em;line-height:1.2}.xl\:prose-sm h2{font-size:1.4285714em;margin-top:1.6em;margin-bottom:.8em;line-height:1.4}.xl\:prose-sm h3{font-size:1.2857143em;margin-top:1.5555556em;margin-bottom:.4444444em;line-height:1.5555556}.xl\:prose-sm h4{margin-top:1.4285714em;margin-bottom:.5714286em;line-height:1.4285714}.xl\:prose-sm img{margin-top:1.7142857em;margin-bottom:1.7142857em}.xl\:prose-sm video{margin-top:1.7142857em;margin-bottom:1.7142857em}.xl\:prose-sm figure{margin-top:1.7142857em;margin-bottom:1.7142857em}.xl\:prose-sm figure>*{margin-top:0;margin-bottom:0}.xl\:prose-sm figure figcaption{font-size:.8571429em;line-height:1.3333333;margin-top:.6666667em}.xl\:prose-sm code{font-size:.8571429em}.xl\:prose-sm h2 code{font-size:.9em}.xl\:prose-sm h3 code{font-size:.8888889em}.xl\:prose-sm pre{font-size:.8571429em;line-height:1.6666667;margin-top:1.6666667em;margin-bottom:1.6666667em;border-radius:.25rem;padding-top:.6666667em;padding-right:1em;padding-bottom:.6666667em;padding-left:1em}.xl\:prose-sm ol{margin-top:1.1428571em;margin-bottom:1.1428571em}.xl\:prose-sm ul{margin-top:1.1428571em;margin-bottom:1.1428571em}.xl\:prose-sm li{margin-top:.2857143em;margin-bottom:.2857143em}.xl\:prose-sm ol>li{padding-left:1.5714286em}.xl\:prose-sm ol>li:before{left:0}.xl\:prose-sm ul>li{padding-left:1.5714286em}.xl\:prose-sm ul>li::before{height:.3571429em;width:.3571429em;top:calc(.8571429em - .1785714em);left:.2142857em}.xl\:prose-sm>ul>li p{margin-top:.5714286em;margin-bottom:.5714286em}.xl\:prose-sm>ul>li>:first-child{margin-top:1.1428571em}.xl\:prose-sm>ul>li>:last-child{margin-bottom:1.1428571em}.xl\:prose-sm>ol>li>:first-child{margin-top:1.1428571em}.xl\:prose-sm>ol>li>:last-child{margin-bottom:1.1428571em}.xl\:prose-sm ol ol,.xl\:prose-sm ol ul,.xl\:prose-sm ul ol,.xl\:prose-sm ul ul{margin-top:.5714286em;margin-bottom:.5714286em}.xl\:prose-sm hr{margin-top:2.8571429em;margin-bottom:2.8571429em}.xl\:prose-sm hr+*{margin-top:0}.xl\:prose-sm h2+*{margin-top:0}.xl\:prose-sm h3+*{margin-top:0}.xl\:prose-sm h4+*{margin-top:0}.xl\:prose-sm table{font-size:.8571429em;line-height:1.5}.xl\:prose-sm thead th{padding-right:1em;padding-bottom:.6666667em;padding-left:1em}.xl\:prose-sm thead th:first-child{padding-left:0}.xl\:prose-sm thead th:last-child{padding-right:0}.xl\:prose-sm tbody td{padding-top:.6666667em;padding-right:1em;padding-bottom:.6666667em;padding-left:1em}.xl\:prose-sm tbody td:first-child{padding-left:0}.xl\:prose-sm tbody td:last-child{padding-right:0}.xl\:prose-sm>:first-child{margin-top:0}.xl\:prose-sm>:last-child{margin-bottom:0}.xl\:prose-lg{font-size:1.125rem;line-height:1.7777778}.xl\:prose-lg p{margin-top:1.3333333em;margin-bottom:1.3333333em}.prose-lg .xl\:lead{font-size:1.2222222em;line-height:1.4545455;margin-top:1.0909091em;margin-bottom:1.0909091em}.xl\:prose-lg blockquote{margin-top:1.6666667em;margin-bottom:1.6666667em;padding-left:1em}.xl\:prose-lg h1{font-size:2.6666667em;margin-top:0;margin-bottom:.8333333em;line-height:1}.xl\:prose-lg h2{font-size:1.6666667em;margin-top:1.8666667em;margin-bottom:1.0666667em;line-height:1.3333333}.xl\:prose-lg h3{font-size:1.3333333em;margin-top:1.6666667em;margin-bottom:.6666667em;line-height:1.5}.xl\:prose-lg h4{margin-top:1.7777778em;margin-bottom:.4444444em;line-height:1.5555556}.xl\:prose-lg img{margin-top:1.7777778em;margin-bottom:1.7777778em}.xl\:prose-lg video{margin-top:1.7777778em;margin-bottom:1.7777778em}.xl\:prose-lg 
figure{margin-top:1.7777778em;margin-bottom:1.7777778em}.xl\:prose-lg figure>*{margin-top:0;margin-bottom:0}.xl\:prose-lg figure figcaption{font-size:.8888889em;line-height:1.5;margin-top:1em}.xl\:prose-lg code{font-size:.8888889em}.xl\:prose-lg h2 code{font-size:.8666667em}.xl\:prose-lg h3 code{font-size:.875em}.xl\:prose-lg pre{font-size:.8888889em;line-height:1.75;margin-top:2em;margin-bottom:2em;border-radius:.375rem;padding-top:1em;padding-right:1.5em;padding-bottom:1em;padding-left:1.5em}.xl\:prose-lg ol{margin-top:1.3333333em;margin-bottom:1.3333333em}.xl\:prose-lg ul{margin-top:1.3333333em;margin-bottom:1.3333333em}.xl\:prose-lg li{margin-top:.6666667em;margin-bottom:.6666667em}.xl\:prose-lg ol>li{padding-left:1.6666667em}.xl\:prose-lg ol>li:before{left:0}.xl\:prose-lg ul>li{padding-left:1.6666667em}.xl\:prose-lg ul>li::before{width:.3333333em;height:.3333333em;top:calc(.8888889em - .1666667em);left:.2222222em}.xl\:prose-lg>ul>li p{margin-top:.8888889em;margin-bottom:.8888889em}.xl\:prose-lg>ul>li>:first-child{margin-top:1.3333333em}.xl\:prose-lg>ul>li>:last-child{margin-bottom:1.3333333em}.xl\:prose-lg>ol>li>:first-child{margin-top:1.3333333em}.xl\:prose-lg>ol>li>:last-child{margin-bottom:1.3333333em}.xl\:prose-lg ol ol,.xl\:prose-lg ol ul,.xl\:prose-lg ul ol,.xl\:prose-lg ul ul{margin-top:.8888889em;margin-bottom:.8888889em}.xl\:prose-lg hr{margin-top:3.1111111em;margin-bottom:3.1111111em}.xl\:prose-lg hr+*{margin-top:0}.xl\:prose-lg h2+*{margin-top:0}.xl\:prose-lg h3+*{margin-top:0}.xl\:prose-lg h4+*{margin-top:0}.xl\:prose-lg table{font-size:.8888889em;line-height:1.5}.xl\:prose-lg thead th{padding-right:.75em;padding-bottom:.75em;padding-left:.75em}.xl\:prose-lg thead th:first-child{padding-left:0}.xl\:prose-lg thead th:last-child{padding-right:0}.xl\:prose-lg tbody td{padding-top:.75em;padding-right:.75em;padding-bottom:.75em;padding-left:.75em}.xl\:prose-lg tbody td:first-child{padding-left:0}.xl\:prose-lg tbody td:last-child{padding-right:0}.xl\:prose-lg>:first-child{margin-top:0}.xl\:prose-lg>:last-child{margin-bottom:0}.xl\:prose-xl{font-size:1.25rem;line-height:1.8}.xl\:prose-xl p{margin-top:1.2em;margin-bottom:1.2em}.prose-xl .xl\:lead{font-size:1.2em;line-height:1.5;margin-top:1em;margin-bottom:1em}.xl\:prose-xl blockquote{margin-top:1.6em;margin-bottom:1.6em;padding-left:1.0666667em}.xl\:prose-xl h1{font-size:2.8em;margin-top:0;margin-bottom:.8571429em;line-height:1}.xl\:prose-xl h2{font-size:1.8em;margin-top:1.5555556em;margin-bottom:.8888889em;line-height:1.1111111}.xl\:prose-xl h3{font-size:1.5em;margin-top:1.6em;margin-bottom:.6666667em;line-height:1.3333333}.xl\:prose-xl h4{margin-top:1.8em;margin-bottom:.6em;line-height:1.6}.xl\:prose-xl img{margin-top:2em;margin-bottom:2em}.xl\:prose-xl video{margin-top:2em;margin-bottom:2em}.xl\:prose-xl figure{margin-top:2em;margin-bottom:2em}.xl\:prose-xl figure>*{margin-top:0;margin-bottom:0}.xl\:prose-xl figure figcaption{font-size:.9em;line-height:1.5555556;margin-top:1em}.xl\:prose-xl code{font-size:.9em}.xl\:prose-xl h2 code{font-size:.8611111em}.xl\:prose-xl h3 code{font-size:.9em}.xl\:prose-xl pre{font-size:.9em;line-height:1.7777778;margin-top:2em;margin-bottom:2em;border-radius:.5rem;padding-top:1.1111111em;padding-right:1.3333333em;padding-bottom:1.1111111em;padding-left:1.3333333em}.xl\:prose-xl ol{margin-top:1.2em;margin-bottom:1.2em}.xl\:prose-xl ul{margin-top:1.2em;margin-bottom:1.2em}.xl\:prose-xl li{margin-top:.6em;margin-bottom:.6em}.xl\:prose-xl ol>li{padding-left:1.8em}.xl\:prose-xl 
ol>li:before{left:0}.xl\:prose-xl ul>li{padding-left:1.8em}.xl\:prose-xl ul>li::before{width:.35em;height:.35em;top:calc(.9em - .175em);left:.25em}.xl\:prose-xl>ul>li p{margin-top:.8em;margin-bottom:.8em}.xl\:prose-xl>ul>li>:first-child{margin-top:1.2em}.xl\:prose-xl>ul>li>:last-child{margin-bottom:1.2em}.xl\:prose-xl>ol>li>:first-child{margin-top:1.2em}.xl\:prose-xl>ol>li>:last-child{margin-bottom:1.2em}.xl\:prose-xl ol ol,.xl\:prose-xl ol ul,.xl\:prose-xl ul ol,.xl\:prose-xl ul ul{margin-top:.8em;margin-bottom:.8em}.xl\:prose-xl hr{margin-top:2.8em;margin-bottom:2.8em}.xl\:prose-xl hr+*{margin-top:0}.xl\:prose-xl h2+*{margin-top:0}.xl\:prose-xl h3+*{margin-top:0}.xl\:prose-xl h4+*{margin-top:0}.xl\:prose-xl table{font-size:.9em;line-height:1.5555556}.xl\:prose-xl thead th{padding-right:.6666667em;padding-bottom:.8888889em;padding-left:.6666667em}.xl\:prose-xl thead th:first-child{padding-left:0}.xl\:prose-xl thead th:last-child{padding-right:0}.xl\:prose-xl tbody td{padding-top:.8888889em;padding-right:.6666667em;padding-bottom:.8888889em;padding-left:.6666667em}.xl\:prose-xl tbody td:first-child{padding-left:0}.xl\:prose-xl tbody td:last-child{padding-right:0}.xl\:prose-xl>:first-child{margin-top:0}.xl\:prose-xl>:last-child{margin-bottom:0}.xl\:prose-2xl{font-size:1.5rem;line-height:1.6666667}.xl\:prose-2xl p{margin-top:1.3333333em;margin-bottom:1.3333333em}.prose-2xl .xl\:lead{font-size:1.25em;line-height:1.4666667;margin-top:1.0666667em;margin-bottom:1.0666667em}.xl\:prose-2xl blockquote{margin-top:1.7777778em;margin-bottom:1.7777778em;padding-left:1.1111111em}.xl\:prose-2xl h1{font-size:2.6666667em;margin-top:0;margin-bottom:.875em;line-height:1}.xl\:prose-2xl h2{font-size:2em;margin-top:1.5em;margin-bottom:.8333333em;line-height:1.0833333}.xl\:prose-2xl h3{font-size:1.5em;margin-top:1.5555556em;margin-bottom:.6666667em;line-height:1.2222222}.xl\:prose-2xl h4{margin-top:1.6666667em;margin-bottom:.6666667em;line-height:1.5}.xl\:prose-2xl img{margin-top:2em;margin-bottom:2em}.xl\:prose-2xl video{margin-top:2em;margin-bottom:2em}.xl\:prose-2xl figure{margin-top:2em;margin-bottom:2em}.xl\:prose-2xl figure>*{margin-top:0;margin-bottom:0}.xl\:prose-2xl figure figcaption{font-size:.8333333em;line-height:1.6;margin-top:1em}.xl\:prose-2xl code{font-size:.8333333em}.xl\:prose-2xl h2 code{font-size:.875em}.xl\:prose-2xl h3 code{font-size:.8888889em}.xl\:prose-2xl pre{font-size:.8333333em;line-height:1.8;margin-top:2em;margin-bottom:2em;border-radius:.5rem;padding-top:1.2em;padding-right:1.6em;padding-bottom:1.2em;padding-left:1.6em}.xl\:prose-2xl ol{margin-top:1.3333333em;margin-bottom:1.3333333em}.xl\:prose-2xl ul{margin-top:1.3333333em;margin-bottom:1.3333333em}.xl\:prose-2xl li{margin-top:.5em;margin-bottom:.5em}.xl\:prose-2xl ol>li{padding-left:1.6666667em}.xl\:prose-2xl ol>li:before{left:0}.xl\:prose-2xl ul>li{padding-left:1.6666667em}.xl\:prose-2xl ul>li::before{width:.3333333em;height:.3333333em;top:calc(.8333333em - .1666667em);left:.25em}.xl\:prose-2xl>ul>li p{margin-top:.8333333em;margin-bottom:.8333333em}.xl\:prose-2xl>ul>li>:first-child{margin-top:1.3333333em}.xl\:prose-2xl>ul>li>:last-child{margin-bottom:1.3333333em}.xl\:prose-2xl>ol>li>:first-child{margin-top:1.3333333em}.xl\:prose-2xl>ol>li>:last-child{margin-bottom:1.3333333em}.xl\:prose-2xl ol ol,.xl\:prose-2xl ol ul,.xl\:prose-2xl ul ol,.xl\:prose-2xl ul ul{margin-top:.6666667em;margin-bottom:.6666667em}.xl\:prose-2xl hr{margin-top:3em;margin-bottom:3em}.xl\:prose-2xl hr+*{margin-top:0}.xl\:prose-2xl 
h2+*{margin-top:0}.xl\:prose-2xl h3+*{margin-top:0}.xl\:prose-2xl h4+*{margin-top:0}.xl\:prose-2xl table{font-size:.8333333em;line-height:1.4}.xl\:prose-2xl thead th{padding-right:.6em;padding-bottom:.8em;padding-left:.6em}.xl\:prose-2xl thead th:first-child{padding-left:0}.xl\:prose-2xl thead th:last-child{padding-right:0}.xl\:prose-2xl tbody td{padding-top:.8em;padding-right:.6em;padding-bottom:.8em;padding-left:.6em}.xl\:prose-2xl tbody td:first-child{padding-left:0}.xl\:prose-2xl tbody td:last-child{padding-right:0}.xl\:prose-2xl>:first-child{margin-top:0}.xl\:prose-2xl>:last-child{margin-bottom:0}} \ No newline at end of file diff --git a/spaces/jennysun/jwsun-multisubject-render-model/gligen/ldm/__init__.py b/spaces/jennysun/jwsun-multisubject-render-model/gligen/ldm/__init__.py deleted file mode 100644 index d71fc3230d513f470c68e46752b355cd70824ecf..0000000000000000000000000000000000000000 --- a/spaces/jennysun/jwsun-multisubject-render-model/gligen/ldm/__init__.py +++ /dev/null @@ -1,3 +0,0 @@ -import gligen.evaluator as evaluator -import gligen.trainer as trainer -import gligen.ldm as ldm \ No newline at end of file diff --git a/spaces/jie1/succ1/DLKcat/DeeplearningApproach/Code/analysis/Fig3d_Decreased&Like.py b/spaces/jie1/succ1/DLKcat/DeeplearningApproach/Code/analysis/Fig3d_Decreased&Like.py deleted file mode 100644 index 25cb5a9cc9979abe3579245538a5ebfd03f25759..0000000000000000000000000000000000000000 --- a/spaces/jie1/succ1/DLKcat/DeeplearningApproach/Code/analysis/Fig3d_Decreased&Like.py +++ /dev/null @@ -1,487 +0,0 @@ -#!/usr/bin/python -# coding: utf-8 - -import os -import math -import model -import torch -import json -import pickle -import numpy as np -from rdkit import Chem -from Bio import SeqIO -from collections import Counter -from collections import defaultdict -import matplotlib.pyplot as plt -from matplotlib import rc -from scipy import stats -import seaborn as sns -import pandas as pd -from scipy.stats import ranksums -from sklearn.metrics import mean_squared_error,r2_score - - -fingerprint_dict = model.load_pickle('../../Data/input/fingerprint_dict.pickle') -atom_dict = model.load_pickle('../../Data/input/atom_dict.pickle') -bond_dict = model.load_pickle('../../Data/input/bond_dict.pickle') -edge_dict = model.load_pickle('../../Data/input/edge_dict.pickle') -word_dict = model.load_pickle('../../Data/input/sequence_dict.pickle') - -def split_sequence(sequence, ngram): - sequence = '-' + sequence + '=' - # print(sequence) - # words = [word_dict[sequence[i:i+ngram]] for i in range(len(sequence)-ngram+1)] - - words = list() - for i in range(len(sequence)-ngram+1) : - try : - words.append(word_dict[sequence[i:i+ngram]]) - except : - word_dict[sequence[i:i+ngram]] = 0 - words.append(word_dict[sequence[i:i+ngram]]) - - return np.array(words) - # return word_dict - -def create_atoms(mol): - """Create a list of atom (e.g., hydrogen and oxygen) IDs - considering the aromaticity.""" - # atom_dict = defaultdict(lambda: len(atom_dict)) - atoms = [a.GetSymbol() for a in mol.GetAtoms()] - # print(atoms) - for a in mol.GetAromaticAtoms(): - i = a.GetIdx() - atoms[i] = (atoms[i], 'aromatic') - atoms = [atom_dict[a] for a in atoms] - # atoms = list() - # for a in atoms : - # try: - # atoms.append(atom_dict[a]) - # except : - # atom_dict[a] = 0 - # atoms.append(atom_dict[a]) - - return np.array(atoms) - -def create_ijbonddict(mol): - """Create a dictionary, which each key is a node ID - and each value is the tuples of its neighboring node - and bond (e.g., single and double) 
IDs.""" - # bond_dict = defaultdict(lambda: len(bond_dict)) - i_jbond_dict = defaultdict(lambda: []) - for b in mol.GetBonds(): - i, j = b.GetBeginAtomIdx(), b.GetEndAtomIdx() - bond = bond_dict[str(b.GetBondType())] - i_jbond_dict[i].append((j, bond)) - i_jbond_dict[j].append((i, bond)) - return i_jbond_dict - -def extract_fingerprints(atoms, i_jbond_dict, radius): - """Extract the r-radius subgraphs (i.e., fingerprints) - from a molecular graph using Weisfeiler-Lehman algorithm.""" - - # fingerprint_dict = defaultdict(lambda: len(fingerprint_dict)) - # edge_dict = defaultdict(lambda: len(edge_dict)) - - if (len(atoms) == 1) or (radius == 0): - fingerprints = [fingerprint_dict[a] for a in atoms] - - else: - nodes = atoms - i_jedge_dict = i_jbond_dict - - for _ in range(radius): - - """Update each node ID considering its neighboring nodes and edges - (i.e., r-radius subgraphs or fingerprints).""" - fingerprints = [] - for i, j_edge in i_jedge_dict.items(): - neighbors = [(nodes[j], edge) for j, edge in j_edge] - fingerprint = (nodes[i], tuple(sorted(neighbors))) - # fingerprints.append(fingerprint_dict[fingerprint]) - # fingerprints.append(fingerprint_dict.get(fingerprint)) - try : - fingerprints.append(fingerprint_dict[fingerprint]) - except : - fingerprint_dict[fingerprint] = 0 - fingerprints.append(fingerprint_dict[fingerprint]) - - nodes = fingerprints - - """Also update each edge ID considering two nodes - on its both sides.""" - _i_jedge_dict = defaultdict(lambda: []) - for i, j_edge in i_jedge_dict.items(): - for j, edge in j_edge: - both_side = tuple(sorted((nodes[i], nodes[j]))) - # edge = edge_dict[(both_side, edge)] - # edge = edge_dict.get((both_side, edge)) - try : - edge = edge_dict[(both_side, edge)] - except : - edge_dict[(both_side, edge)] = 0 - edge = edge_dict[(both_side, edge)] - - _i_jedge_dict[i].append((j, edge)) - i_jedge_dict = _i_jedge_dict - - return np.array(fingerprints) - -def create_adjacency(mol): - adjacency = Chem.GetAdjacencyMatrix(mol) - return np.array(adjacency) - -def dump_dictionary(dictionary, filename): - with open(filename, 'wb') as file: - pickle.dump(dict(dictionary), file) - -def load_tensor(file_name, dtype): - return [dtype(d).to(device) for d in np.load(file_name + '.npy', allow_pickle=True)] - -class Predictor(object): - def __init__(self, model): - self.model = model - - def predict(self, data): - predicted_value = self.model.forward(data) - - return predicted_value - -def extract_wildtype_mutant() : - with open('../../Data/database/Kcat_combination_0918_wildtype_mutant.json', 'r') as infile : - Kcat_data = json.load(infile) - - entry_keys = list() - for data in Kcat_data : - # print(data['ECNumber']) - # print(data['Substrate']) - # print(data['Organism']) - - substrate = data['Substrate'] - organism = data['Organism'] - EC = data['ECNumber'] - entry_key = substrate + '&' + organism + '&' + EC - # print(entry_key.lower()) - entry_keys.append(entry_key) - - entry_dict = dict(Counter(entry_keys)) - # print(entry_dict) - - duplicated_keys = [key for key, value in entry_dict.items() if value > 1] - # print(duplicated_keys) - - duplicated_dict = {key:value for key, value in entry_dict.items() if value > 1} - # print(duplicated_dict) - # https://stackoverflow.com/questions/613183/how-do-i-sort-a-dictionary-by-value - # print(sorted(duplicated_dict.items(), key=lambda x: x[1], reverse=True)[:30]) - duplicated_list = sorted(duplicated_dict.items(), key=lambda x: x[1], reverse=True)[:30] - - for duplicated in duplicated_list[:1] : - # print('The 
subtrate name:', duplicated[0]) - for data in Kcat_data : - # duplicated_one_entry = duplicated_list[0].split('&') - substrate = data['Substrate'] - organism = data['Organism'] - EC = data['ECNumber'] - one_entry = substrate + '&' + organism + '&' + EC - if one_entry == duplicated[0] : - enzyme_type = data['Type'] - Kcat_value = data['Value'] - # print('Substrate:', substrate) - # print('%s enzyme: %s' %(enzyme_type, Kcat_value)) - # print('----'*15+'\n') - - return duplicated_list - -def extract_wildtype_kcat(entry) : - with open('../../Data/database/Kcat_combination_0918_wildtype_mutant.json', 'r') as infile : - Kcat_data = json.load(infile) - - for data in Kcat_data : - substrate = data['Substrate'] - organism = data['Organism'] - EC = data['ECNumber'] - one_entry = substrate + '&' + organism + '&' + EC - if one_entry == entry : - enzyme_type = data['Type'] - if enzyme_type == 'wildtype' : - wildtype_kcat = float(data['Value']) - - if wildtype_kcat : - return wildtype_kcat - else : - return None - -def compare_prediction_wildtype_mutant() : - with open('../../Data/database/Kcat_combination_0918_wildtype_mutant.json', 'r') as infile : - Kcat_data = json.load(infile) - - wildtype_mutant_entries = extract_wildtype_mutant() - - fingerprint_dict = model.load_pickle('../../Data/input/fingerprint_dict.pickle') - atom_dict = model.load_pickle('../../Data/input/atom_dict.pickle') - bond_dict = model.load_pickle('../../Data/input/bond_dict.pickle') - word_dict = model.load_pickle('../../Data/input/sequence_dict.pickle') - n_fingerprint = len(fingerprint_dict) - n_word = len(word_dict) - # print(n_fingerprint) # 3958 - # print(n_word) # 8542 - - radius=2 - ngram=3 - # n_fingerprint = 3958 - # n_word = 8542 - - dim=10 - layer_gnn=3 - side=5 - window=11 - layer_cnn=3 - layer_output=3 - lr=1e-3 - lr_decay=0.5 - decay_interval=10 - weight_decay=1e-6 - iteration=100 - - if torch.cuda.is_available(): - device = torch.device('cuda') - else: - device = torch.device('cpu') - - # torch.manual_seed(1234) - Kcat_model = model.KcatPrediction(device, n_fingerprint, n_word, 2*dim, layer_gnn, window, layer_cnn, layer_output).to(device) - Kcat_model.load_state_dict(torch.load('../../Results/output/all--radius2--ngram3--dim20--layer_gnn3--window11--layer_cnn3--layer_output3--lr1e-3--lr_decay0.5--decay_interval10--weight_decay1e-6--iteration50', map_location=device)) - # print(state_dict.keys()) - # model.eval() - predictor = Predictor(Kcat_model) - - print('It\'s time to start the prediction!') - print('-----------------------------------') - - # prediction = predictor.predict(inputs) - - i = 0 - alldata = dict() - alldata['type'] = list() - alldata['entry'] = list() - alldata['kcat_value'] = list() - - - for wildtype_mutant_entry in wildtype_mutant_entries : - entry_names = wildtype_mutant_entry[0].split('&') - # print('This entry is:', entry_names) - # print('The total amount of wildtype and variant enzymes in the entry is:', wildtype_mutant_entry[1]) - - experimental_values = list() - predicted_values = list() - wildtype_like = list() - wildtype_decreased = list() - - if entry_names[0] in ['7,8-Dihydrofolate', 'Glycerate 3-phosphate', 'L-Aspartate', 'Penicillin G', 'Inosine', 'Isopentenyl diphosphate'] : - print('This entry is:', entry_names) - for data in Kcat_data : - # print(data) - # print(data['Substrate']) - substrate = data['Substrate'] - organism = data['Organism'] - EC = data['ECNumber'] - entry = substrate + '&' + organism + '&' + EC - - if entry == wildtype_mutant_entry[0] : - wildtype_kcat = 
extract_wildtype_kcat(entry) - # print('wildtype kcat:', wildtype_kcat) - # print(data) - # if wildtype_kcat : - i += 1 - # print('This is', i, '---------------------------------------') - smiles = data['Smiles'] - sequence = data['Sequence'] - enzyme_type = data['Type'] - Kcat = data['Value'] - if "." not in smiles and float(Kcat) > 0: - # i += 1 - # print('This is',i) - - mol = Chem.AddHs(Chem.MolFromSmiles(smiles)) - atoms = create_atoms(mol) - # print(atoms) - i_jbond_dict = create_ijbonddict(mol) - # print(i_jbond_dict) - - fingerprints = extract_fingerprints(atoms, i_jbond_dict, radius) - # print(fingerprints) - # compounds.append(fingerprints) - - adjacency = create_adjacency(mol) - # print(adjacency) - # adjacencies.append(adjacency) - - words = split_sequence(sequence,ngram) - # print(words) - # proteins.append(words) - - fingerprints = torch.LongTensor(fingerprints) - adjacency = torch.FloatTensor(adjacency) - words = torch.LongTensor(words) - - inputs = [fingerprints, adjacency, words] - - value = float(data['Value']) - # print('Current kcat value:', value) - normalized_value = value/wildtype_kcat - # print('%.2f' % normalized_value) - # print(type(value)) - # print(type(normalized_value)) - experimental_values.append(math.log10(value)) - - prediction = predictor.predict(inputs) - Kcat_log_value = prediction.item() - Kcat_value = math.pow(2,Kcat_log_value) - # print(Kcat_value) - # print('%.2f' % normalized_value) - # print(type(Kcat_value)) - predicted_values.append(math.log10(Kcat_value)) - - # entry_names = wildtype_mutant_entry[0].split('&') - # entry_name = entry_names[0] + '&' + entry_names[2] - entry_name = entry_names[0] - if normalized_value >= 0.5 and normalized_value < 2.0 : - wildtype_like.append(math.log10(Kcat_value)) - alldata['type'].append('Wildtype_like') - alldata['entry'].append(entry_name) - alldata['kcat_value'].append(math.log10(Kcat_value)) - if normalized_value < 0.5 : - wildtype_decreased.append(math.log10(Kcat_value)) - alldata['type'].append('Wildtype_decreased') - alldata['entry'].append(entry_name) - alldata['kcat_value'].append(math.log10(Kcat_value)) - - if wildtype_like and wildtype_decreased : - p_value = ranksums(wildtype_like, wildtype_decreased)[1] - print('The amount of wildtype_like:', len(wildtype_like)) - print('The amount of wildtype_decreased:', len(wildtype_decreased)) - print('P value is:', p_value) - print('\n') - - correlation1, p_value1 = stats.pearsonr(experimental_values, predicted_values) - - # https://blog.csdn.net/u012735708/article/details/84337262?utm_medium=distribute.pc_relevant.none- - # task-blog-BlogCommendFromMachineLearnPai2-1.pc_relevant_is_cache&depth_1-utm_source= - # distribute.pc_relevant.none-task-blog-BlogCommendFromMachineLearnPai2-1.pc_relevant_is_cache - r2 = r2_score(experimental_values,predicted_values) - rmse = np.sqrt(mean_squared_error(experimental_values,predicted_values)) - # print("---------------------") - # print('\n\n') - # print(correlation) - print('r is %.4f' % correlation1) - print('P value is', p_value1) - # print('R2 is %.4f' % r2) - # print('RMSE is %.4f' % rmse) - # print('-----'*10 + '\n') - - - # Plot the boxplot figures between the wildtype_like and wildtype_decreased - allData = pd.DataFrame(alldata) - # print(type(allData)) - - plt.figure(figsize=(1.5, 1.5)) - # To solve the 'Helvetica' font cannot be used in PDF file - # https://stackoverflow.com/questions/59845568/the-pdf-backend-does-not-currently-support-the-selected-font - rc('font',**{'family':'serif','serif':['Helvetica']}) - 
plt.rcParams['pdf.fonttype'] = 42 - - plt.axes([0.12,0.12,0.83,0.83]) - - plt.tick_params(direction='in') - plt.tick_params(which='major',length=1.5) - plt.tick_params(which='major',width=0.4) - plt.tick_params(which='major',width=0.4) - - # rectangular box plot - palette = {"Wildtype_like": '#2166ac', "Wildtype_decreased": '#b2182b'} - - # for ind in allData.index: - # allData.loc[ind,'entry'] = '${0}$'.format(allData.loc[ind,'entry']) - - ax = sns.boxplot(data=alldata, x="entry", y="kcat_value", hue="type", - palette=palette, showfliers=False, linewidth=0.5) # boxprops=dict(alpha=1.0) - - ax = sns.stripplot(data=alldata, x="entry", y="kcat_value", hue="type", jitter=0.3, - palette=palette, size=1.3, dodge=True) - - # https://stackoverflow.com/questions/58476654/how-to-remove-or-hide-x-axis-label-from-seaborn-boxplot - # plt.xlabel(None) will remove the Label, but not the ticks. - ax.set(xlabel=None) - - for patch in ax.artists: - r, g, b, a = patch.get_facecolor() - patch.set_facecolor((r, g, b, 0.3)) - - # print(ax.artists) - # print(ax.lines) - # print(len(ax.lines)) - # https://cduvallet.github.io/posts/2018/03/boxplots-in-python - for i, artist in enumerate(ax.artists): - # print(i) - - if i % 2 == 0: - col = '#2166ac' - else: - col = '#b2182b' - - # This sets the color for the main box - artist.set_edgecolor(col) - - # Each box has 5 associated Line2D objects (to make the whiskers, fliers, etc.) - # Loop over them here, and use the same colour as above - for j in range(i*5,i*5+5): - # print(j) - line = ax.lines[j] - line.set_color(col) - line.set_mfc(col) - line.set_mec(col) - handles = [ax.artists[0], ax.artists[1]] - - # for tick in ax.get_xticklabels() : - # tick.set_rotation(30) - - plt.rcParams['font.family'] = 'Helvetica' - - plt.text(-0.2, 1.5, '***', fontweight ="normal", fontsize=6) - plt.text(1, 1.0, '*', fontweight ="normal", fontsize=6) - plt.text(1.9, 1.0, '**', fontweight ="normal", fontsize=6) - plt.text(2.9, -0.7, '**', fontweight ="normal", fontsize=6) - plt.text(3.9, 1.4, '**', fontweight ="normal", fontsize=6) - plt.text(5, -0.3, '*', fontweight ="normal", fontsize=6) - - plt.ylabel("$k$$_\mathregular{cat}$ value", fontname='Helvetica', fontsize=7) - - plt.xticks(rotation=30,ha='right') - plt.ylim(-4,4) - plt.yticks([-4,-2,0,2,4]) - - plt.xticks(fontsize=7) - plt.yticks(fontsize=6) - - ax.spines['bottom'].set_linewidth(0.5) - ax.spines['left'].set_linewidth(0.5) - ax.spines['top'].set_linewidth(0.5) - ax.spines['right'].set_linewidth(0.5) - - ax = plt.gca() - # handles,labels = ax.get_legend_handles_labels() - labels = ax.get_legend_handles_labels()[1] - # print(handles) - # print(labels) - # specify just one legend - lgd = plt.legend(handles[0:2], labels[0:2], loc=1, frameon=False, prop={'size': 6}) - - # plt.rcParams['font.family'] = 'Helvetica' - - plt.savefig("../../Results/figures/Fig3d.pdf", dpi=400, bbox_inches = 'tight') - - -if __name__ == '__main__' : - # extract_wildtype_mutant() - compare_prediction_wildtype_mutant() diff --git a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/Crypto/Hash/MD5.py b/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/Crypto/Hash/MD5.py deleted file mode 100644 index 554b77720fa10ab12ec18d4657b9b9c087676d48..0000000000000000000000000000000000000000 --- a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/Crypto/Hash/MD5.py +++ /dev/null @@ -1,184 +0,0 @@ -# -*- coding: utf-8 -*- -# -# =================================================================== -# The contents of 
this file are dedicated to the public domain. To -# the extent that dedication to the public domain is not available, -# everyone is granted a worldwide, perpetual, royalty-free, -# non-exclusive license to exercise all rights associated with the -# contents of this file for any purpose whatsoever. -# No rights are reserved. -# -# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, -# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF -# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND -# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS -# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN -# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN -# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE -# SOFTWARE. -# =================================================================== - -from Crypto.Util.py3compat import * - -from Crypto.Util._raw_api import (load_pycryptodome_raw_lib, - VoidPointer, SmartPointer, - create_string_buffer, - get_raw_buffer, c_size_t, - c_uint8_ptr) - -_raw_md5_lib = load_pycryptodome_raw_lib("Crypto.Hash._MD5", - """ - #define MD5_DIGEST_SIZE 16 - - int MD5_init(void **shaState); - int MD5_destroy(void *shaState); - int MD5_update(void *hs, - const uint8_t *buf, - size_t len); - int MD5_digest(const void *shaState, - uint8_t digest[MD5_DIGEST_SIZE]); - int MD5_copy(const void *src, void *dst); - - int MD5_pbkdf2_hmac_assist(const void *inner, - const void *outer, - const uint8_t first_digest[MD5_DIGEST_SIZE], - uint8_t final_digest[MD5_DIGEST_SIZE], - size_t iterations); - """) - -class MD5Hash(object): - """A MD5 hash object. - Do not instantiate directly. - Use the :func:`new` function. - - :ivar oid: ASN.1 Object ID - :vartype oid: string - - :ivar block_size: the size in bytes of the internal message block, - input to the compression function - :vartype block_size: integer - - :ivar digest_size: the size in bytes of the resulting hash - :vartype digest_size: integer - """ - - # The size of the resulting hash in bytes. - digest_size = 16 - # The internal block size of the hash algorithm in bytes. - block_size = 64 - # ASN.1 Object ID - oid = "1.2.840.113549.2.5" - - def __init__(self, data=None): - state = VoidPointer() - result = _raw_md5_lib.MD5_init(state.address_of()) - if result: - raise ValueError("Error %d while instantiating MD5" - % result) - self._state = SmartPointer(state.get(), - _raw_md5_lib.MD5_destroy) - if data: - self.update(data) - - def update(self, data): - """Continue hashing of a message by consuming the next chunk of data. - - Args: - data (byte string/byte array/memoryview): The next chunk of the message being hashed. - """ - - result = _raw_md5_lib.MD5_update(self._state.get(), - c_uint8_ptr(data), - c_size_t(len(data))) - if result: - raise ValueError("Error %d while instantiating MD5" - % result) - - def digest(self): - """Return the **binary** (non-printable) digest of the message that has been hashed so far. - - :return: The hash digest, computed over the data processed so far. - Binary form. - :rtype: byte string - """ - - bfr = create_string_buffer(self.digest_size) - result = _raw_md5_lib.MD5_digest(self._state.get(), - bfr) - if result: - raise ValueError("Error %d while instantiating MD5" - % result) - - return get_raw_buffer(bfr) - - def hexdigest(self): - """Return the **printable** digest of the message that has been hashed so far. - - :return: The hash digest, computed over the data processed so far. - Hexadecimal encoded. 
- :rtype: string - """ - - return "".join(["%02x" % bord(x) for x in self.digest()]) - - def copy(self): - """Return a copy ("clone") of the hash object. - - The copy will have the same internal state as the original hash - object. - This can be used to efficiently compute the digests of strings that - share a common initial substring. - - :return: A hash object of the same type - """ - - clone = MD5Hash() - result = _raw_md5_lib.MD5_copy(self._state.get(), - clone._state.get()) - if result: - raise ValueError("Error %d while copying MD5" % result) - return clone - - def new(self, data=None): - """Create a fresh SHA-1 hash object.""" - - return MD5Hash(data) - - -def new(data=None): - """Create a new hash object. - - :parameter data: - Optional. The very first chunk of the message to hash. - It is equivalent to an early call to :meth:`MD5Hash.update`. - :type data: byte string/byte array/memoryview - - :Return: A :class:`MD5Hash` hash object - """ - return MD5Hash().new(data) - -# The size of the resulting hash in bytes. -digest_size = 16 - -# The internal block size of the hash algorithm in bytes. -block_size = 64 - - -def _pbkdf2_hmac_assist(inner, outer, first_digest, iterations): - """Compute the expensive inner loop in PBKDF-HMAC.""" - - assert len(first_digest) == digest_size - assert iterations > 0 - - bfr = create_string_buffer(digest_size); - result = _raw_md5_lib.MD5_pbkdf2_hmac_assist( - inner._state.get(), - outer._state.get(), - first_digest, - bfr, - c_size_t(iterations)) - - if result: - raise ValueError("Error %d with PBKDF2-HMAC assis for MD5" % result) - - return get_raw_buffer(bfr) diff --git a/spaces/johnslegers/bilingual_stable_diffusion/share_btn.py b/spaces/johnslegers/bilingual_stable_diffusion/share_btn.py deleted file mode 100644 index 4bf271fe915e78e6df33a9df53b47ad68e620e2e..0000000000000000000000000000000000000000 --- a/spaces/johnslegers/bilingual_stable_diffusion/share_btn.py +++ /dev/null @@ -1,60 +0,0 @@ -community_icon_html = """""" - -loading_icon_html = """""" - -share_js = """async () => { - async function uploadFile(file){ - const UPLOAD_URL = 'https://huggingface.co/uploads'; - const response = await fetch(UPLOAD_URL, { - method: 'POST', - headers: { - 'Content-Type': file.type, - 'X-Requested-With': 'XMLHttpRequest', - }, - body: file, /// <- File inherits from Blob - }); - const url = await response.text(); - return url; - } - const gradioEl = document.querySelector('body > gradio-app'); - const imgEls = gradioEl.querySelectorAll('#gallery img'); - const promptTxt = gradioEl.querySelector('#prompt-text-input input').value; - const shareBtnEl = gradioEl.querySelector('#share-btn'); - const shareIconEl = gradioEl.querySelector('#share-btn-share-icon'); - const loadingIconEl = gradioEl.querySelector('#share-btn-loading-icon'); - if(!imgEls.length){ - return; - }; - shareBtnEl.style.pointerEvents = 'none'; - shareIconEl.style.display = 'none'; - loadingIconEl.style.removeProperty('display'); - const files = await Promise.all( - [...imgEls].map(async (imgEl) => { - const res = await fetch(imgEl.src); - const blob = await res.blob(); - const imgId = Date.now() % 200; - const fileName = `diffuse-the-rest-${{imgId}}.png`; - return new File([blob], fileName, { type: 'image/png' }); - }) - ); - const urls = await Promise.all(files.map((f) => uploadFile(f))); - const htmlImgs = urls.map(url => ``); - const descriptionMd = `
          -${htmlImgs.join(`\n`)} -
          `; - const params = new URLSearchParams({ - title: promptTxt, - description: descriptionMd, - }); - const paramsStr = params.toString(); - window.open(`https://huggingface.co/spaces/stabilityai/stable-diffusion/discussions/new?${paramsStr}`, '_blank'); - shareBtnEl.style.removeProperty('pointer-events'); - shareIconEl.style.removeProperty('display'); - loadingIconEl.style.display = 'none'; -}""" \ No newline at end of file diff --git a/spaces/jone/GFPGAN/tests/test_stylegan2_clean_arch.py b/spaces/jone/GFPGAN/tests/test_stylegan2_clean_arch.py deleted file mode 100644 index 78bb920e73ce28cfec9ea89a4339cc5b87981b47..0000000000000000000000000000000000000000 --- a/spaces/jone/GFPGAN/tests/test_stylegan2_clean_arch.py +++ /dev/null @@ -1,52 +0,0 @@ -import torch - -from gfpgan.archs.stylegan2_clean_arch import StyleGAN2GeneratorClean - - -def test_stylegan2generatorclean(): - """Test arch: StyleGAN2GeneratorClean.""" - - # model init and forward (gpu) - if torch.cuda.is_available(): - net = StyleGAN2GeneratorClean( - out_size=32, num_style_feat=512, num_mlp=8, channel_multiplier=1, narrow=0.5).cuda().eval() - style = torch.rand((1, 512), dtype=torch.float32).cuda() - output = net([style], input_is_latent=False) - assert output[0].shape == (1, 3, 32, 32) - assert output[1] is None - - # -------------------- with return_latents ----------------------- # - output = net([style], input_is_latent=True, return_latents=True) - assert output[0].shape == (1, 3, 32, 32) - assert len(output[1]) == 1 - # check latent - assert output[1][0].shape == (8, 512) - - # -------------------- with randomize_noise = False ----------------------- # - output = net([style], randomize_noise=False) - assert output[0].shape == (1, 3, 32, 32) - assert output[1] is None - - # -------------------- with truncation = 0.5 and mixing----------------------- # - output = net([style, style], truncation=0.5, truncation_latent=style) - assert output[0].shape == (1, 3, 32, 32) - assert output[1] is None - - # ------------------ test make_noise ----------------------- # - out = net.make_noise() - assert len(out) == 7 - assert out[0].shape == (1, 1, 4, 4) - assert out[1].shape == (1, 1, 8, 8) - assert out[2].shape == (1, 1, 8, 8) - assert out[3].shape == (1, 1, 16, 16) - assert out[4].shape == (1, 1, 16, 16) - assert out[5].shape == (1, 1, 32, 32) - assert out[6].shape == (1, 1, 32, 32) - - # ------------------ test get_latent ----------------------- # - out = net.get_latent(style) - assert out.shape == (1, 512) - - # ------------------ test mean_latent ----------------------- # - out = net.mean_latent(2) - assert out.shape == (1, 512) diff --git a/spaces/jordonpeter01/MusicGen2/setup.py b/spaces/jordonpeter01/MusicGen2/setup.py deleted file mode 100644 index 78a172b7c90003b689bde40b49cc8fe1fb8107d4..0000000000000000000000000000000000000000 --- a/spaces/jordonpeter01/MusicGen2/setup.py +++ /dev/null @@ -1,65 +0,0 @@ -""" - Copyright (c) Meta Platforms, Inc. and affiliates. - All rights reserved. - - This source code is licensed under the license found in the - LICENSE file in the root directory of this source tree. 
- -""" - -from pathlib import Path - -from setuptools import setup, find_packages - - -NAME = 'audiocraft' -DESCRIPTION = 'Audio research library for PyTorch' - -URL = 'https://github.com/fairinternal/audiocraft' -AUTHOR = 'FAIR Speech & Audio' -EMAIL = 'defossez@meta.com' -REQUIRES_PYTHON = '>=3.8.0' - -for line in open('audiocraft/__init__.py'): - line = line.strip() - if '__version__' in line: - context = {} - exec(line, context) - VERSION = context['__version__'] - -HERE = Path(__file__).parent - -try: - with open(HERE / "README.md", encoding='utf-8') as f: - long_description = '\n' + f.read() -except FileNotFoundError: - long_description = DESCRIPTION - -REQUIRED = [i.strip() for i in open(HERE / 'requirements.txt') if not i.startswith('#')] - -setup( - name=NAME, - version=VERSION, - description=DESCRIPTION, - author_email=EMAIL, - long_description=long_description, - long_description_content_type='text/markdown', - author=AUTHOR, - url=URL, - python_requires=REQUIRES_PYTHON, - install_requires=REQUIRED, - extras_require={ - 'dev': ['coverage', 'flake8', 'mypy', 'pdoc3', 'pytest'], - }, - packages=find_packages(), - package_data={'audiocraft': ['py.typed']}, - include_package_data=True, - license='MIT License', - classifiers=[ - # Trove classifiers - # Full list: https://pypi.python.org/pypi?%3Aaction=list_classifiers - 'License :: OSI Approved :: MIT License', - 'Topic :: Multimedia :: Sound/Audio', - 'Topic :: Scientific/Engineering :: Artificial Intelligence', - ], -) diff --git a/spaces/jordonpeter01/ai-comic-factory/Dockerfile b/spaces/jordonpeter01/ai-comic-factory/Dockerfile deleted file mode 100644 index 91319be9b3dd35d916d18fba5260f51125c46b50..0000000000000000000000000000000000000000 --- a/spaces/jordonpeter01/ai-comic-factory/Dockerfile +++ /dev/null @@ -1,65 +0,0 @@ -FROM node:18-alpine AS base - -# Install dependencies only when needed -FROM base AS deps -# Check https://github.com/nodejs/docker-node/tree/b4117f9333da4138b03a546ec926ef50a31506c3#nodealpine to understand why libc6-compat might be needed. -RUN apk add --no-cache libc6-compat -WORKDIR /app - -# Install dependencies based on the preferred package manager -COPY package.json yarn.lock* package-lock.json* pnpm-lock.yaml* ./ -RUN \ - if [ -f yarn.lock ]; then yarn --frozen-lockfile; \ - elif [ -f package-lock.json ]; then npm ci; \ - elif [ -f pnpm-lock.yaml ]; then yarn global add pnpm && pnpm i --frozen-lockfile; \ - else echo "Lockfile not found." && exit 1; \ - fi - -# Uncomment the following lines if you want to use a secret at buildtime, -# for example to access your private npm packages -# RUN --mount=type=secret,id=HF_EXAMPLE_SECRET,mode=0444,required=true \ -# $(cat /run/secrets/HF_EXAMPLE_SECRET) - -# Rebuild the source code only when needed -FROM base AS builder -WORKDIR /app -COPY --from=deps /app/node_modules ./node_modules -COPY . . - -# Next.js collects completely anonymous telemetry data about general usage. -# Learn more here: https://nextjs.org/telemetry -# Uncomment the following line in case you want to disable telemetry during the build. -# ENV NEXT_TELEMETRY_DISABLED 1 - -# RUN yarn build - -# If you use yarn, comment out this line and use the line above -RUN npm run build - -# Production image, copy all the files and run next -FROM base AS runner -WORKDIR /app - -ENV NODE_ENV production -# Uncomment the following line in case you want to disable telemetry during runtime. 
-# ENV NEXT_TELEMETRY_DISABLED 1 - -RUN addgroup --system --gid 1001 nodejs -RUN adduser --system --uid 1001 nextjs - -COPY --from=builder /app/public ./public - -# Automatically leverage output traces to reduce image size -# https://nextjs.org/docs/advanced-features/output-file-tracing -COPY --from=builder --chown=nextjs:nodejs /app/.next/standalone ./ -COPY --from=builder --chown=nextjs:nodejs /app/.next/static ./.next/static -COPY --from=builder --chown=nextjs:nodejs /app/.next/cache ./.next/cache -# COPY --from=builder --chown=nextjs:nodejs /app/.next/cache/fetch-cache ./.next/cache/fetch-cache - -USER nextjs - -EXPOSE 3000 - -ENV PORT 3000 - -CMD ["node", "server.js"] \ No newline at end of file diff --git a/spaces/josedolot/HybridNet_Demo2/utils/sync_batchnorm/batchnorm.py b/spaces/josedolot/HybridNet_Demo2/utils/sync_batchnorm/batchnorm.py deleted file mode 100644 index e1bf74feb5fa27f659110be91e97f99d862ad492..0000000000000000000000000000000000000000 --- a/spaces/josedolot/HybridNet_Demo2/utils/sync_batchnorm/batchnorm.py +++ /dev/null @@ -1,394 +0,0 @@ -# -*- coding: utf-8 -*- -# File : batchnorm.py -# Author : Jiayuan Mao -# Email : maojiayuan@gmail.com -# Date : 27/01/2018 -# -# This file is part of Synchronized-BatchNorm-PyTorch. -# https://github.com/vacancy/Synchronized-BatchNorm-PyTorch -# Distributed under MIT License. - -import collections -import contextlib - -import torch -import torch.nn.functional as F - -from torch.nn.modules.batchnorm import _BatchNorm - -try: - from torch.nn.parallel._functions import ReduceAddCoalesced, Broadcast -except ImportError: - ReduceAddCoalesced = Broadcast = None - -try: - from jactorch.parallel.comm import SyncMaster - from jactorch.parallel.data_parallel import JacDataParallel as DataParallelWithCallback -except ImportError: - from .comm import SyncMaster - from .replicate import DataParallelWithCallback - -__all__ = [ - 'SynchronizedBatchNorm1d', 'SynchronizedBatchNorm2d', 'SynchronizedBatchNorm3d', - 'patch_sync_batchnorm', 'convert_model' -] - - -def _sum_ft(tensor): - """sum over the first and last dimention""" - return tensor.sum(dim=0).sum(dim=-1) - - -def _unsqueeze_ft(tensor): - """add new dimensions at the front and the tail""" - return tensor.unsqueeze(0).unsqueeze(-1) - - -_ChildMessage = collections.namedtuple('_ChildMessage', ['sum', 'ssum', 'sum_size']) -_MasterMessage = collections.namedtuple('_MasterMessage', ['sum', 'inv_std']) - - -class _SynchronizedBatchNorm(_BatchNorm): - def __init__(self, num_features, eps=1e-5, momentum=0.1, affine=True): - assert ReduceAddCoalesced is not None, 'Can not use Synchronized Batch Normalization without CUDA support.' - - super(_SynchronizedBatchNorm, self).__init__(num_features, eps=eps, momentum=momentum, affine=affine) - - self._sync_master = SyncMaster(self._data_parallel_master) - - self._is_parallel = False - self._parallel_id = None - self._slave_pipe = None - - def forward(self, input): - # If it is not parallel computation or is in evaluation mode, use PyTorch's implementation. - if not (self._is_parallel and self.training): - return F.batch_norm( - input, self.running_mean, self.running_var, self.weight, self.bias, - self.training, self.momentum, self.eps) - - # Resize the input to (B, C, -1). - input_shape = input.size() - input = input.view(input.size(0), self.num_features, -1) - - # Compute the sum and square-sum. - sum_size = input.size(0) * input.size(2) - input_sum = _sum_ft(input) - input_ssum = _sum_ft(input ** 2) - - # Reduce-and-broadcast the statistics. 
- if self._parallel_id == 0: - mean, inv_std = self._sync_master.run_master(_ChildMessage(input_sum, input_ssum, sum_size)) - else: - mean, inv_std = self._slave_pipe.run_slave(_ChildMessage(input_sum, input_ssum, sum_size)) - - # Compute the output. - if self.affine: - # MJY:: Fuse the multiplication for speed. - output = (input - _unsqueeze_ft(mean)) * _unsqueeze_ft(inv_std * self.weight) + _unsqueeze_ft(self.bias) - else: - output = (input - _unsqueeze_ft(mean)) * _unsqueeze_ft(inv_std) - - # Reshape it. - return output.view(input_shape) - - def __data_parallel_replicate__(self, ctx, copy_id): - self._is_parallel = True - self._parallel_id = copy_id - - # parallel_id == 0 means master device. - if self._parallel_id == 0: - ctx.sync_master = self._sync_master - else: - self._slave_pipe = ctx.sync_master.register_slave(copy_id) - - def _data_parallel_master(self, intermediates): - """Reduce the sum and square-sum, compute the statistics, and broadcast it.""" - - # Always using same "device order" makes the ReduceAdd operation faster. - # Thanks to:: Tete Xiao (http://tetexiao.com/) - intermediates = sorted(intermediates, key=lambda i: i[1].sum.get_device()) - - to_reduce = [i[1][:2] for i in intermediates] - to_reduce = [j for i in to_reduce for j in i] # flatten - target_gpus = [i[1].sum.get_device() for i in intermediates] - - sum_size = sum([i[1].sum_size for i in intermediates]) - sum_, ssum = ReduceAddCoalesced.apply(target_gpus[0], 2, *to_reduce) - mean, inv_std = self._compute_mean_std(sum_, ssum, sum_size) - - broadcasted = Broadcast.apply(target_gpus, mean, inv_std) - - outputs = [] - for i, rec in enumerate(intermediates): - outputs.append((rec[0], _MasterMessage(*broadcasted[i*2:i*2+2]))) - - return outputs - - def _compute_mean_std(self, sum_, ssum, size): - """Compute the mean and standard-deviation with sum and square-sum. This method - also maintains the moving average on the master device.""" - assert size > 1, 'BatchNorm computes unbiased standard-deviation, which requires size > 1.' - mean = sum_ / size - sumvar = ssum - sum_ * mean - unbias_var = sumvar / (size - 1) - bias_var = sumvar / size - - if hasattr(torch, 'no_grad'): - with torch.no_grad(): - self.running_mean = (1 - self.momentum) * self.running_mean + self.momentum * mean.data - self.running_var = (1 - self.momentum) * self.running_var + self.momentum * unbias_var.data - else: - self.running_mean = (1 - self.momentum) * self.running_mean + self.momentum * mean.data - self.running_var = (1 - self.momentum) * self.running_var + self.momentum * unbias_var.data - - return mean, bias_var.clamp(self.eps) ** -0.5 - - -class SynchronizedBatchNorm1d(_SynchronizedBatchNorm): - r"""Applies Synchronized Batch Normalization over a 2d or 3d input that is seen as a - mini-batch. - - .. math:: - - y = \frac{x - mean[x]}{ \sqrt{Var[x] + \epsilon}} * gamma + beta - - This module differs from the built-in PyTorch BatchNorm1d as the mean and - standard-deviation are reduced across all devices during training. - - For example, when one uses `nn.DataParallel` to wrap the network during - training, PyTorch's implementation normalize the tensor on each device using - the statistics only on that device, which accelerated the computation and - is also easy to implement, but the statistics might be inaccurate. - Instead, in this synchronized version, the statistics will be computed - over all training samples distributed on multiple devices. 
- - Note that, for one-GPU or CPU-only case, this module behaves exactly same - as the built-in PyTorch implementation. - - The mean and standard-deviation are calculated per-dimension over - the mini-batches and gamma and beta are learnable parameter vectors - of size C (where C is the input size). - - During training, this layer keeps a running estimate of its computed mean - and variance. The running sum is kept with a default momentum of 0.1. - - During evaluation, this running mean/variance is used for normalization. - - Because the BatchNorm is done over the `C` dimension, computing statistics - on `(N, L)` slices, it's common terminology to call this Temporal BatchNorm - - Args: - num_features: num_features from an expected input of size - `batch_size x num_features [x width]` - eps: a value added to the denominator for numerical stability. - Default: 1e-5 - momentum: the value used for the running_mean and running_var - computation. Default: 0.1 - affine: a boolean value that when set to ``True``, gives the layer learnable - affine parameters. Default: ``True`` - - Shape:: - - Input: :math:`(N, C)` or :math:`(N, C, L)` - - Output: :math:`(N, C)` or :math:`(N, C, L)` (same shape as input) - - Examples: - >>> # With Learnable Parameters - >>> m = SynchronizedBatchNorm1d(100) - >>> # Without Learnable Parameters - >>> m = SynchronizedBatchNorm1d(100, affine=False) - >>> input = torch.autograd.Variable(torch.randn(20, 100)) - >>> output = m(input) - """ - - def _check_input_dim(self, input): - if input.dim() != 2 and input.dim() != 3: - raise ValueError('expected 2D or 3D input (got {}D input)' - .format(input.dim())) - super(SynchronizedBatchNorm1d, self)._check_input_dim(input) - - -class SynchronizedBatchNorm2d(_SynchronizedBatchNorm): - r"""Applies Batch Normalization over a 4d input that is seen as a mini-batch - of 3d inputs - - .. math:: - - y = \frac{x - mean[x]}{ \sqrt{Var[x] + \epsilon}} * gamma + beta - - This module differs from the built-in PyTorch BatchNorm2d as the mean and - standard-deviation are reduced across all devices during training. - - For example, when one uses `nn.DataParallel` to wrap the network during - training, PyTorch's implementation normalize the tensor on each device using - the statistics only on that device, which accelerated the computation and - is also easy to implement, but the statistics might be inaccurate. - Instead, in this synchronized version, the statistics will be computed - over all training samples distributed on multiple devices. - - Note that, for one-GPU or CPU-only case, this module behaves exactly same - as the built-in PyTorch implementation. - - The mean and standard-deviation are calculated per-dimension over - the mini-batches and gamma and beta are learnable parameter vectors - of size C (where C is the input size). - - During training, this layer keeps a running estimate of its computed mean - and variance. The running sum is kept with a default momentum of 0.1. - - During evaluation, this running mean/variance is used for normalization. - - Because the BatchNorm is done over the `C` dimension, computing statistics - on `(N, H, W)` slices, it's common terminology to call this Spatial BatchNorm - - Args: - num_features: num_features from an expected input of - size batch_size x num_features x height x width - eps: a value added to the denominator for numerical stability. - Default: 1e-5 - momentum: the value used for the running_mean and running_var - computation. 
Default: 0.1 - affine: a boolean value that when set to ``True``, gives the layer learnable - affine parameters. Default: ``True`` - - Shape:: - - Input: :math:`(N, C, H, W)` - - Output: :math:`(N, C, H, W)` (same shape as input) - - Examples: - >>> # With Learnable Parameters - >>> m = SynchronizedBatchNorm2d(100) - >>> # Without Learnable Parameters - >>> m = SynchronizedBatchNorm2d(100, affine=False) - >>> input = torch.autograd.Variable(torch.randn(20, 100, 35, 45)) - >>> output = m(input) - """ - - def _check_input_dim(self, input): - if input.dim() != 4: - raise ValueError('expected 4D input (got {}D input)' - .format(input.dim())) - super(SynchronizedBatchNorm2d, self)._check_input_dim(input) - - -class SynchronizedBatchNorm3d(_SynchronizedBatchNorm): - r"""Applies Batch Normalization over a 5d input that is seen as a mini-batch - of 4d inputs - - .. math:: - - y = \frac{x - mean[x]}{ \sqrt{Var[x] + \epsilon}} * gamma + beta - - This module differs from the built-in PyTorch BatchNorm3d as the mean and - standard-deviation are reduced across all devices during training. - - For example, when one uses `nn.DataParallel` to wrap the network during - training, PyTorch's implementation normalize the tensor on each device using - the statistics only on that device, which accelerated the computation and - is also easy to implement, but the statistics might be inaccurate. - Instead, in this synchronized version, the statistics will be computed - over all training samples distributed on multiple devices. - - Note that, for one-GPU or CPU-only case, this module behaves exactly same - as the built-in PyTorch implementation. - - The mean and standard-deviation are calculated per-dimension over - the mini-batches and gamma and beta are learnable parameter vectors - of size C (where C is the input size). - - During training, this layer keeps a running estimate of its computed mean - and variance. The running sum is kept with a default momentum of 0.1. - - During evaluation, this running mean/variance is used for normalization. - - Because the BatchNorm is done over the `C` dimension, computing statistics - on `(N, D, H, W)` slices, it's common terminology to call this Volumetric BatchNorm - or Spatio-temporal BatchNorm - - Args: - num_features: num_features from an expected input of - size batch_size x num_features x depth x height x width - eps: a value added to the denominator for numerical stability. - Default: 1e-5 - momentum: the value used for the running_mean and running_var - computation. Default: 0.1 - affine: a boolean value that when set to ``True``, gives the layer learnable - affine parameters. 
Default: ``True`` - - Shape:: - - Input: :math:`(N, C, D, H, W)` - - Output: :math:`(N, C, D, H, W)` (same shape as input) - - Examples: - >>> # With Learnable Parameters - >>> m = SynchronizedBatchNorm3d(100) - >>> # Without Learnable Parameters - >>> m = SynchronizedBatchNorm3d(100, affine=False) - >>> input = torch.autograd.Variable(torch.randn(20, 100, 35, 45, 10)) - >>> output = m(input) - """ - - def _check_input_dim(self, input): - if input.dim() != 5: - raise ValueError('expected 5D input (got {}D input)' - .format(input.dim())) - super(SynchronizedBatchNorm3d, self)._check_input_dim(input) - - -@contextlib.contextmanager -def patch_sync_batchnorm(): - import torch.nn as nn - - backup = nn.BatchNorm1d, nn.BatchNorm2d, nn.BatchNorm3d - - nn.BatchNorm1d = SynchronizedBatchNorm1d - nn.BatchNorm2d = SynchronizedBatchNorm2d - nn.BatchNorm3d = SynchronizedBatchNorm3d - - yield - - nn.BatchNorm1d, nn.BatchNorm2d, nn.BatchNorm3d = backup - - -def convert_model(module): - """Traverse the input module and its child recursively - and replace all instance of torch.nn.modules.batchnorm.BatchNorm*N*d - to SynchronizedBatchNorm*N*d - - Args: - module: the input module needs to be convert to SyncBN model - - Examples: - >>> import torch.nn as nn - >>> import torchvision - >>> # m is a standard pytorch model - >>> m = torchvision.models.resnet18(True) - >>> m = nn.DataParallel(m) - >>> # after convert, m is using SyncBN - >>> m = convert_model(m) - """ - if isinstance(module, torch.nn.DataParallel): - mod = module.module - mod = convert_model(mod) - mod = DataParallelWithCallback(mod, device_ids=module.device_ids) - return mod - - mod = module - for pth_module, sync_module in zip([torch.nn.modules.batchnorm.BatchNorm1d, - torch.nn.modules.batchnorm.BatchNorm2d, - torch.nn.modules.batchnorm.BatchNorm3d], - [SynchronizedBatchNorm1d, - SynchronizedBatchNorm2d, - SynchronizedBatchNorm3d]): - if isinstance(module, pth_module): - mod = sync_module(module.num_features, module.eps, module.momentum, module.affine) - mod.running_mean = module.running_mean - mod.running_var = module.running_var - if module.affine: - mod.weight.data = module.weight.data.clone().detach() - mod.bias.data = module.bias.data.clone().detach() - - for name, child in module.named_children(): - mod.add_module(name, convert_model(child)) - - return mod diff --git a/spaces/jtpotato/firetrace/firetrace/interface_text.py b/spaces/jtpotato/firetrace/firetrace/interface_text.py deleted file mode 100644 index 8845902ce51186c66bb1faca6c49a09120c3f15f..0000000000000000000000000000000000000000 --- a/spaces/jtpotato/firetrace/firetrace/interface_text.py +++ /dev/null @@ -1,57 +0,0 @@ -from decimal import * - -q_and_a = """ - ### **Q: What's the deal with Firetrace? 🔍**\n - A: Picture this—it's like having your very own bushfire fortune teller! - 🔮 Firetrace is an AI-powered web interface that predicts the severity of bushfire events all across Australia. - 🌍 It's armed with projected weather data, BOM weather observatories and NASA's MODIS satellite intel. - It even considers time information to get climate change trends. 🦸‍♂️🔥 - \n - - ### **Q: What is the inspiration behind Firetrace?** 🤔 \n - A: Those bushfires! They cause habitat loss, put many animal species at risk, and drastically impact the economy 😢📉 - The series of bushfires that took place in Australia is a clear demonstration of this. - ☝️ So, we put our heads together to conjure up Firetrace—a cutting-edge tool that helps us predict the severity of these fiery events. 
- This will be helpful in making smarter decisions and being prepared to take on the bushfire challenges head-on! 💪🌳 - \n - - ### **Q: How do I use Firetrace?** 🎉 \n - A: It's easy-peasy! 🔥 - Simply enter in values for the input boxes below, and you'll unlock access to the hottest predictions, - and uncover the secrets of bushfire severity. - The prediction you receive will be the number of square kilometres of land the fire covers that day. - (Keep in mind that it's not completely right, we're not wizards here 🧙 - but it does have reasonable accuracy) - \n - - Let's work together to face these fiery challenges to create a safer future! 🚀🌿 - """ - -privacy = "*By using this app you consent to the collection of: your user agent string (informing us of what type of device you might be using this on, to improve your experience), the time you have accessed this website, the time zone name (giving us an approximate geographic location to understand the demographics of our users, informing future decisions around localisation) as well as any input you have provided. No other information is collected, we swear.*" - -def additional_context(scan_area): - def get_percentage(scan_area, area): - result = (scan_area / area) * 100 - return Decimal(str(result)).quantize(Decimal("0.01")) # Decimal is required because Python doesn't handle floating points very well by default. - - def get_times(scan_area, area): - result = scan_area / area - return Decimal(str(result)).quantize(Decimal("0.01")) - - rounded_fire_area = Decimal(str(scan_area)).quantize(Decimal("0.01")) - LARGEST_EVENT = 5854.7 - MELBOURNE_AREA = 9993 - PORT_JACKSON_BAY_AREA = 55 - MURRAY_DARLING_BASIN_AREA = 1059000 - ACT_AREA = 2400 - - context_string = f""" - In this hypothetical scenario, `{rounded_fire_area}` square kilometres of fire would be burning simultaneously across the entire country. 🤯 This is `{get_percentage(scan_area, LARGEST_EVENT)}%` of the largest fire event 🔥 in our database, at {LARGEST_EVENT} square kilometres, recorded on the 19th of September 2011. - - ### Other things this fire compares to: - - `{get_percentage(scan_area, MELBOURNE_AREA)}%` of Greater Melbourne. 😧 - - `{get_times(scan_area, PORT_JACKSON_BAY_AREA)}` Port Jackson Bays 🌊 - - `{get_percentage(scan_area, MURRAY_DARLING_BASIN_AREA)}%` of the Murray-Darling Basin. 🌾 - - `{get_percentage(scan_area, ACT_AREA)}%` of the ACT. 🏙️ - """ - - return context_string \ No newline at end of file diff --git a/spaces/julien-c/sveltekit-demo/build/_app/pages/about.svelte-310947ca.js b/spaces/julien-c/sveltekit-demo/build/_app/pages/about.svelte-310947ca.js deleted file mode 100644 index 08c6fe6e2f1a2c19568179ebdac087bc98da23e2..0000000000000000000000000000000000000000 --- a/spaces/julien-c/sveltekit-demo/build/_app/pages/about.svelte-310947ca.js +++ /dev/null @@ -1,9 +0,0 @@ -import{S as z,i as F,s as G,j as _,e as i,t as o,U as N,d as a,l as m,c as r,a as d,g as s,b as B,f as U,I as t,J as I}from"../chunks/vendor-92f01141.js";const Q=!0,W=!1;function X(V){let h,e,f,b,E,p,S,u,k,x,A,g,J,K,y,O,P,c,D,v,H,j;return{c(){h=_(),e=i("div"),f=i("h1"),b=o("About this app"),E=_(),p=i("p"),S=o("This is a "),u=i("a"),k=o("SvelteKit"),x=o(` app. You can make your own by typing the - following into your command line and following the prompts:`),A=_(),g=i("pre"),J=o("npm init svelte@next"),K=_(),y=i("p"),O=o(`The page you're looking at is purely static HTML, with no client-side interactivity needed. - Because of that, we don't need to load any JavaScript. 
Try viewing the page's source, or opening - the devtools network panel and reloading.`),P=_(),c=i("p"),D=o("The "),v=i("a"),H=o("TODOs"),j=o(` page illustrates SvelteKit's data loading and form handling. Try using - it with JavaScript disabled!`),this.h()},l(l){N('[data-svelte="svelte-1ine71f"]',document.head).forEach(a),h=m(l),e=r(l,"DIV",{class:!0});var n=d(e);f=r(n,"H1",{});var L=d(f);b=s(L,"About this app"),L.forEach(a),E=m(n),p=r(n,"P",{});var w=d(p);S=s(w,"This is a "),u=r(w,"A",{href:!0});var M=d(u);k=s(M,"SvelteKit"),M.forEach(a),x=s(w,` app. You can make your own by typing the - following into your command line and following the prompts:`),w.forEach(a),A=m(n),g=r(n,"PRE",{});var Y=d(g);J=s(Y,"npm init svelte@next"),Y.forEach(a),K=m(n),y=r(n,"P",{});var C=d(y);O=s(C,`The page you're looking at is purely static HTML, with no client-side interactivity needed. - Because of that, we don't need to load any JavaScript. Try viewing the page's source, or opening - the devtools network panel and reloading.`),C.forEach(a),P=m(n),c=r(n,"P",{});var T=d(c);D=s(T,"The "),v=r(T,"A",{href:!0});var R=d(v);H=s(R,"TODOs"),R.forEach(a),j=s(T,` page illustrates SvelteKit's data loading and form handling. Try using - it with JavaScript disabled!`),T.forEach(a),n.forEach(a),this.h()},h(){document.title="About",B(u,"href","https://kit.svelte.dev"),B(v,"href","/todos"),B(e,"class","content svelte-cf77e8")},m(l,q){U(l,h,q),U(l,e,q),t(e,f),t(f,b),t(e,E),t(e,p),t(p,S),t(p,u),t(u,k),t(p,x),t(e,A),t(e,g),t(g,J),t(e,K),t(e,y),t(y,O),t(e,P),t(e,c),t(c,D),t(c,v),t(v,H),t(c,j)},p:I,i:I,o:I,d(l){l&&a(h),l&&a(e)}}}const $=W,tt=Q,et=!0;class at extends z{constructor(h){super();F(this,h,null,X,G,{})}}export{at as default,$ as hydrate,et as prerender,tt as router}; diff --git a/spaces/juuxn/SimpleRVC/README.md b/spaces/juuxn/SimpleRVC/README.md deleted file mode 100644 index d5a7b00c94a5b2777ac92563cae5b2af70e0bd28..0000000000000000000000000000000000000000 --- a/spaces/juuxn/SimpleRVC/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: SimpleRVC -emoji: 🐨 -colorFrom: gray -colorTo: blue -sdk: gradio -sdk_version: 3.50.2 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/ka1kuk/fastapi/g4f/models.py b/spaces/ka1kuk/fastapi/g4f/models.py deleted file mode 100644 index e1d5b46d2a4cc4fe69812115dbe0281c00df9355..0000000000000000000000000000000000000000 --- a/spaces/ka1kuk/fastapi/g4f/models.py +++ /dev/null @@ -1,64 +0,0 @@ -from g4f import Provider - -class Model: - class model: - name: str - base_provider: str - best_provider: str - - class gpt_35_turbo: - name: str = 'gpt-3.5-turbo' - base_provider: str = 'openai' - best_provider: Provider.Provider = Provider.DeepAi - - class gpt_35_turbo_16k: - name: str = 'gpt-3.5-turbo-16k' - base_provider: str = 'openai' - best_provider: Provider.Provider = Provider.Liaobots - - class gpt_4_dev: - name: str = 'gpt-4-for-dev' - base_provider: str = 'openai' - best_provider: Provider.Provider = Provider.Phind - - class gpt_4: - name: str = 'gpt-4' - base_provider: str = 'openai' - best_provider: Provider.Provider = Provider.Liaobots - - class gpt_4_assistant: - name: str = 'gpt-4' - base_provider: str = 'openai' - best_provider: Provider.Provider = Provider.ChatgptAi - - """ 'falcon-40b': Model.falcon_40b, - 'falcon-7b': Model.falcon_7b, - 'llama-13b': Model.llama_13b,""" - - class falcon_40b: - name: str = 'falcon-40b' - base_provider: str = 'huggingface' - 
best_provider: Provider.Provider = Provider.H2o - - class falcon_7b: - name: str = 'falcon-7b' - base_provider: str = 'huggingface' - best_provider: Provider.Provider = Provider.H2o - - class llama_13b: - name: str = 'llama-13b' - base_provider: str = 'huggingface' - best_provider: Provider.Provider = Provider.H2o - -class ModelUtils: - convert: dict = { - 'gpt-3.5-turbo': Model.gpt_35_turbo, - 'gpt-3.5-turbo-16k': Model.gpt_35_turbo_16k, - 'gpt-4': Model.gpt_4, - 'gpt-4-for-dev': Model.gpt_4_dev, - 'gpt-4-for-assitant': Model.gpt_4_assistant, - - 'falcon-40b': Model.falcon_40b, - 'falcon-7b': Model.falcon_7b, - 'llama-13b': Model.llama_13b, -} \ No newline at end of file diff --git a/spaces/kaicheng/ChatGPT_ad/run_macOS.command b/spaces/kaicheng/ChatGPT_ad/run_macOS.command deleted file mode 100644 index 2d26597ae47519f42336ccffc16646713a192ae1..0000000000000000000000000000000000000000 --- a/spaces/kaicheng/ChatGPT_ad/run_macOS.command +++ /dev/null @@ -1,31 +0,0 @@ -#!/bin/bash - -# Get the directory where the script is located -script_dir=$(dirname "$(readlink -f "$0")") - -# Change the working directory to the script's directory -cd "$script_dir" || exit - -# Check whether the Git repository has updates -git remote update -pwd - -if ! git status -uno | grep 'up to date' > /dev/null; then - # If there are updates, stop the currently running server - pkill -f ChuanhuChatbot.py - - # Pull the latest changes - git pull - - # Install dependencies - pip3 install -r requirements.txt - - # Restart the server - nohup python3 ChuanhuChatbot.py & -fi - -# Check whether ChuanhuChatbot.py is running -if ! pgrep -f ChuanhuChatbot.py > /dev/null; then - # If it is not running, start the server - nohup python3 ChuanhuChatbot.py & -fi diff --git a/spaces/kangvcar/RealChar/realtime_ai_character/websocket_routes.py b/spaces/kangvcar/RealChar/realtime_ai_character/websocket_routes.py deleted file mode 100644 index 2af51a0f896d3fad84feaf808a2e3d30b16e8dcb..0000000000000000000000000000000000000000 --- a/spaces/kangvcar/RealChar/realtime_ai_character/websocket_routes.py +++ /dev/null @@ -1,282 +0,0 @@ -import asyncio -import os -import uuid - -from fastapi import APIRouter, Depends, HTTPException, Path, WebSocket, WebSocketDisconnect, Query -from firebase_admin import auth -from firebase_admin.exceptions import FirebaseError - -from requests import Session - -from realtime_ai_character.audio.speech_to_text import (SpeechToText, - get_speech_to_text) -from realtime_ai_character.audio.text_to_speech import (TextToSpeech, - get_text_to_speech) -from realtime_ai_character.character_catalog.catalog_manager import ( - CatalogManager, get_catalog_manager) -from realtime_ai_character.database.connection import get_db -from realtime_ai_character.llm import (AsyncCallbackAudioHandler, - AsyncCallbackTextHandler, get_llm, LLM) -from realtime_ai_character.logger import get_logger -from realtime_ai_character.models.interaction import Interaction -from realtime_ai_character.utils import (ConversationHistory, build_history, - get_connection_manager) - -logger = get_logger(__name__) - -router = APIRouter() - -manager = get_connection_manager() - -GREETING_TXT = 'Hi, my friend, what brings you here today?'
- - -async def get_current_user(token: str): - """Heler function for auth with Firebase.""" - if not token: - return "" - try: - decoded_token = auth.verify_id_token(token) - except FirebaseError as e: - logger.info(f'Receveid invalid token: {token} with error {e}') - raise HTTPException(status_code=401, - detail="Invalid authentication credentials") - - return decoded_token['uid'] - - -@router.websocket("/ws/{client_id}") -async def websocket_endpoint(websocket: WebSocket, - client_id: int = Path(...), - api_key: str = Query(None), - llm_model: str = Query(default=os.getenv( - 'LLM_MODEL_USE', 'gpt-3.5-turbo-16k')), - token: str = Query(None), - db: Session = Depends(get_db), - catalog_manager=Depends(get_catalog_manager), - speech_to_text=Depends(get_speech_to_text), - text_to_speech=Depends(get_text_to_speech)): - # Default user_id to client_id. If auth is enabled and token is provided, use - # the user_id from the token. - user_id = str(client_id) - if os.getenv('USE_AUTH', ''): - # Do not allow anonymous users to use non-GPT3.5 model. - if not token and llm_model != 'gpt-3.5-turbo-16k': - await websocket.close(code=1008, reason="Unauthorized") - return - try: - user_id = await get_current_user(token) - except HTTPException: - await websocket.close(code=1008, reason="Unauthorized") - return - llm = get_llm(model=llm_model) - await manager.connect(websocket) - try: - main_task = asyncio.create_task( - handle_receive(websocket, client_id, db, llm, catalog_manager, - speech_to_text, text_to_speech)) - - await asyncio.gather(main_task) - - except WebSocketDisconnect: - await manager.disconnect(websocket) - await manager.broadcast_message(f"User #{user_id} left the chat") - - -async def handle_receive(websocket: WebSocket, client_id: int, db: Session, - llm: LLM, catalog_manager: CatalogManager, - speech_to_text: SpeechToText, - text_to_speech: TextToSpeech): - try: - conversation_history = ConversationHistory() - # TODO: clean up client_id once migration is done. - user_id = str(client_id) - session_id = str(uuid.uuid4().hex) - - # 0. Receive client platform info (web, mobile, terminal) - data = await websocket.receive() - if data['type'] != 'websocket.receive': - raise WebSocketDisconnect('disconnected') - platform = data['text'] - logger.info(f"User #{user_id}:{platform} connected to server with " - f"session_id {session_id}") - - # 1. User selected a character - character = None - character_list = list(catalog_manager.characters.keys()) - user_input_template = 'Context:{context}\n User:{query}' - while not character: - character_message = "\n".join([ - f"{i+1} - {character}" - for i, character in enumerate(character_list) - ]) - await manager.send_message( - message= - f"Select your character by entering the corresponding number:\n" - f"{character_message}\n", - websocket=websocket) - data = await websocket.receive() - - if data['type'] != 'websocket.receive': - raise WebSocketDisconnect('disconnected') - - if not character and 'text' in data: - selection = int(data['text']) - if selection > len(character_list) or selection < 1: - await manager.send_message( - message= - f"Invalid selection. 
Select your character [" - f"{', '.join(catalog_manager.characters.keys())}]\n", - websocket=websocket) - continue - character = catalog_manager.get_character( - character_list[selection - 1]) - conversation_history.system_prompt = character.llm_system_prompt - user_input_template = character.llm_user_prompt - logger.info( - f"User #{user_id} selected character: {character.name}") - - tts_event = asyncio.Event() - tts_task = None - previous_transcript = None - token_buffer = [] - - # Greet the user - await manager.send_message(message=GREETING_TXT, websocket=websocket) - tts_task = asyncio.create_task( - text_to_speech.stream( - text=GREETING_TXT, - websocket=websocket, - tts_event=tts_event, - characater_name=character.name, - first_sentence=True, - )) - # Send end of the greeting so the client knows when to start listening - await manager.send_message(message='[end]\n', websocket=websocket) - - async def on_new_token(token): - return await manager.send_message(message=token, - websocket=websocket) - - async def stop_audio(): - if tts_task and not tts_task.done(): - tts_event.set() - tts_task.cancel() - if previous_transcript: - conversation_history.user.append(previous_transcript) - conversation_history.ai.append(' '.join(token_buffer)) - token_buffer.clear() - try: - await tts_task - except asyncio.CancelledError: - pass - tts_event.clear() - - while True: - data = await websocket.receive() - if data['type'] != 'websocket.receive': - raise WebSocketDisconnect('disconnected') - - # handle text message - if 'text' in data: - msg_data = data['text'] - # 0. itermidiate transcript starts with [&] - if msg_data.startswith('[&]'): - logger.info(f'intermediate transcript: {msg_data}') - if not os.getenv('EXPERIMENT_CONVERSATION_UTTERANCE', ''): - continue - asyncio.create_task(stop_audio()) - asyncio.create_task( - llm.achat_utterances( - history=build_history(conversation_history), - user_input=msg_data, - callback=AsyncCallbackTextHandler( - on_new_token, []), - audioCallback=AsyncCallbackAudioHandler( - text_to_speech, websocket, tts_event, - character.name))) - continue - # 1. Send message to LLM - print('response = await llm.achat, user_input', msg_data) - response = await llm.achat( - history=build_history(conversation_history), - user_input=msg_data, - user_input_template=user_input_template, - callback=AsyncCallbackTextHandler(on_new_token, - token_buffer), - audioCallback=AsyncCallbackAudioHandler( - text_to_speech, websocket, tts_event, character.name), - character=character) - - # 2. Send response to client - await manager.send_message(message='[end]\n', - websocket=websocket) - - # 3. Update conversation history - conversation_history.user.append(msg_data) - conversation_history.ai.append(response) - token_buffer.clear() - # 4. Persist interaction in the database - Interaction(client_id=client_id, - user_id=user_id, - session_id=session_id, - client_message_unicode=msg_data, - server_message_unicode=response, - platform=platform, - action_type='text').save(db) - - # handle binary message(audio) - elif 'bytes' in data: - binary_data = data['bytes'] - # 1. Transcribe audio - transcript: str = speech_to_text.transcribe( - binary_data, platform=platform, - prompt=character.name).strip() - - # ignore audio that picks up background noise - if (not transcript or len(transcript) < 2): - continue - - # 2. Send transcript to client - await manager.send_message( - message=f'[+]You said: {transcript}', websocket=websocket) - - # 3. 
stop the previous audio stream, if new transcript is received - await stop_audio() - - previous_transcript = transcript - - async def tts_task_done_call_back(response): - # Send response to client, [=] indicates the response is done - await manager.send_message(message='[=]', - websocket=websocket) - # Update conversation history - conversation_history.user.append(transcript) - conversation_history.ai.append(response) - token_buffer.clear() - # Persist interaction in the database - Interaction(client_id=client_id, - user_id=user_id, - session_id=session_id, - client_message_unicode=transcript, - server_message_unicode=response, - platform=platform, - action_type='audio').save(db) - - # 4. Send message to LLM - tts_task = asyncio.create_task( - llm.achat(history=build_history(conversation_history), - user_input=transcript, - user_input_template=user_input_template, - callback=AsyncCallbackTextHandler( - on_new_token, token_buffer, - tts_task_done_call_back), - audioCallback=AsyncCallbackAudioHandler( - text_to_speech, websocket, tts_event, - character.name), - character=character)) - - except WebSocketDisconnect: - logger.info(f"User #{user_id} closed the connection") - await manager.disconnect(websocket) - return diff --git a/spaces/kdrkdrkdr/ZhongliTTS/mel_processing.py b/spaces/kdrkdrkdr/ZhongliTTS/mel_processing.py deleted file mode 100644 index 3e252e76320522a8a4195a60665168f22769aec2..0000000000000000000000000000000000000000 --- a/spaces/kdrkdrkdr/ZhongliTTS/mel_processing.py +++ /dev/null @@ -1,101 +0,0 @@ -import torch -import torch.utils.data -from librosa.filters import mel as librosa_mel_fn - -MAX_WAV_VALUE = 32768.0 - - -def dynamic_range_compression_torch(x, C=1, clip_val=1e-5): - """ - PARAMS - ------ - C: compression factor - """ - return torch.log(torch.clamp(x, min=clip_val) * C) - - -def dynamic_range_decompression_torch(x, C=1): - """ - PARAMS - ------ - C: compression factor used to compress - """ - return torch.exp(x) / C - - -def spectral_normalize_torch(magnitudes): - output = dynamic_range_compression_torch(magnitudes) - return output - - -def spectral_de_normalize_torch(magnitudes): - output = dynamic_range_decompression_torch(magnitudes) - return output - - -mel_basis = {} -hann_window = {} - - -def spectrogram_torch(y, n_fft, sampling_rate, hop_size, win_size, center=False): - if torch.min(y) < -1.: - print('min value is ', torch.min(y)) - if torch.max(y) > 1.: - print('max value is ', torch.max(y)) - - global hann_window - dtype_device = str(y.dtype) + '_' + str(y.device) - wnsize_dtype_device = str(win_size) + '_' + dtype_device - if wnsize_dtype_device not in hann_window: - hann_window[wnsize_dtype_device] = torch.hann_window(win_size).to(dtype=y.dtype, device=y.device) - - y = torch.nn.functional.pad(y.unsqueeze(1), (int((n_fft-hop_size)/2), int((n_fft-hop_size)/2)), mode='reflect') - y = y.squeeze(1) - - spec = torch.stft(y, n_fft, hop_length=hop_size, win_length=win_size, window=hann_window[wnsize_dtype_device], - center=center, pad_mode='reflect', normalized=False, onesided=True, return_complex=False) - - spec = torch.sqrt(spec.pow(2).sum(-1) + 1e-6) - return spec - - -def spec_to_mel_torch(spec, n_fft, num_mels, sampling_rate, fmin, fmax): - global mel_basis - dtype_device = str(spec.dtype) + '_' + str(spec.device) - fmax_dtype_device = str(fmax) + '_' + dtype_device - if fmax_dtype_device not in mel_basis: - mel = librosa_mel_fn(sampling_rate, n_fft, num_mels, fmin, fmax) - mel_basis[fmax_dtype_device] = torch.from_numpy(mel).to(dtype=spec.dtype, 
device=spec.device) - spec = torch.matmul(mel_basis[fmax_dtype_device], spec) - spec = spectral_normalize_torch(spec) - return spec - - -def mel_spectrogram_torch(y, n_fft, num_mels, sampling_rate, hop_size, win_size, fmin, fmax, center=False): - if torch.min(y) < -1.: - print('min value is ', torch.min(y)) - if torch.max(y) > 1.: - print('max value is ', torch.max(y)) - - global mel_basis, hann_window - dtype_device = str(y.dtype) + '_' + str(y.device) - fmax_dtype_device = str(fmax) + '_' + dtype_device - wnsize_dtype_device = str(win_size) + '_' + dtype_device - if fmax_dtype_device not in mel_basis: - mel = librosa_mel_fn(sampling_rate, n_fft, num_mels, fmin, fmax) - mel_basis[fmax_dtype_device] = torch.from_numpy(mel).to(dtype=y.dtype, device=y.device) - if wnsize_dtype_device not in hann_window: - hann_window[wnsize_dtype_device] = torch.hann_window(win_size).to(dtype=y.dtype, device=y.device) - - y = torch.nn.functional.pad(y.unsqueeze(1), (int((n_fft-hop_size)/2), int((n_fft-hop_size)/2)), mode='reflect') - y = y.squeeze(1) - - spec = torch.stft(y, n_fft, hop_length=hop_size, win_length=win_size, window=hann_window[wnsize_dtype_device], - center=center, pad_mode='reflect', normalized=False, onesided=True) - - spec = torch.sqrt(spec.pow(2).sum(-1) + 1e-6) - - spec = torch.matmul(mel_basis[fmax_dtype_device], spec) - spec = spectral_normalize_torch(spec) - - return spec diff --git a/spaces/keisuke-tada/gpt-playground/README.md b/spaces/keisuke-tada/gpt-playground/README.md deleted file mode 100644 index 5bb442a841b40e289a232075cc67ff5acf99f432..0000000000000000000000000000000000000000 --- a/spaces/keisuke-tada/gpt-playground/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Gpt Playground -emoji: 🔥 -colorFrom: yellow -colorTo: purple -sdk: streamlit -sdk_version: 1.19.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/keithhon/Real-Time-Voice-Cloning/vocoder_preprocess.py b/spaces/keithhon/Real-Time-Voice-Cloning/vocoder_preprocess.py deleted file mode 100644 index 7ede3dfb95972e2de575de35b9d4a9c6d642885e..0000000000000000000000000000000000000000 --- a/spaces/keithhon/Real-Time-Voice-Cloning/vocoder_preprocess.py +++ /dev/null @@ -1,59 +0,0 @@ -from synthesizer.synthesize import run_synthesis -from synthesizer.hparams import hparams -from utils.argutils import print_args -import argparse -import os - - -if __name__ == "__main__": - class MyFormatter(argparse.ArgumentDefaultsHelpFormatter, argparse.RawDescriptionHelpFormatter): - pass - - parser = argparse.ArgumentParser( - description="Creates ground-truth aligned (GTA) spectrograms from the vocoder.", - formatter_class=MyFormatter - ) - parser.add_argument("datasets_root", type=str, help=\ - "Path to the directory containing your SV2TTS directory. If you specify both --in_dir and " - "--out_dir, this argument won't be used.") - parser.add_argument("--model_dir", type=str, - default="synthesizer/saved_models/pretrained/", help=\ - "Path to the pretrained model directory.") - parser.add_argument("-i", "--in_dir", type=str, default=argparse.SUPPRESS, help= \ - "Path to the synthesizer directory that contains the mel spectrograms, the wavs and the " - "embeds. Defaults to /SV2TTS/synthesizer/.") - parser.add_argument("-o", "--out_dir", type=str, default=argparse.SUPPRESS, help= \ - "Path to the output vocoder directory that will contain the ground truth aligned mel " - "spectrograms. 
Defaults to /SV2TTS/vocoder/.") - parser.add_argument("--hparams", default="", - help="Hyperparameter overrides as a comma-separated list of name=value " - "pairs") - parser.add_argument("--no_trim", action="store_true", help=\ - "Preprocess audio without trimming silences (not recommended).") - parser.add_argument("--cpu", action="store_true", help=\ - "If True, processing is done on CPU, even when a GPU is available.") - args = parser.parse_args() - print_args(args, parser) - modified_hp = hparams.parse(args.hparams) - - if not hasattr(args, "in_dir"): - args.in_dir = os.path.join(args.datasets_root, "SV2TTS", "synthesizer") - if not hasattr(args, "out_dir"): - args.out_dir = os.path.join(args.datasets_root, "SV2TTS", "vocoder") - - if args.cpu: - # Hide GPUs from Pytorch to force CPU processing - os.environ["CUDA_VISIBLE_DEVICES"] = "-1" - - # Verify webrtcvad is available - if not args.no_trim: - try: - import webrtcvad - except: - raise ModuleNotFoundError("Package 'webrtcvad' not found. This package enables " - "noise removal and is recommended. Please install and try again. If installation fails, " - "use --no_trim to disable this error message.") - del args.no_trim - - run_synthesis(args.in_dir, args.out_dir, args.model_dir, modified_hp) - diff --git a/spaces/kenhugs/dsed/predictor.py b/spaces/kenhugs/dsed/predictor.py deleted file mode 100644 index 8cfa32ea7ded712407698cdfa900dd2d32598f8d..0000000000000000000000000000000000000000 --- a/spaces/kenhugs/dsed/predictor.py +++ /dev/null @@ -1,184 +0,0 @@ -import os - -import numpy as np -import librosa -import pickle - -import matplotlib.pyplot as plt - -from keras.models import load_model - -class_correspondance = {"Alarm_bell_ringing": 0, "Speech": 1, "Dog": 2, "Cat": 3, "Vacuum_cleaner": 4, - "Dishes": 5, "Frying": 6, "Electric_shaver_toothbrush": 7, "Blender": 8, "Running_water": 9} - - -def load_sound_model(): - # Load the model here - # Example: - model = load_model(os.path.join(os.path.dirname(__file__), 'model.hdf5')) - return model - - -def load_parameters(): - parameters_path = os.path.join(os.path.dirname(__file__), 'best_parameters.pkl') - best_AT_path = os.path.join(os.path.dirname(__file__), 'best_AT.pkl') - - with open(parameters_path, 'rb') as parameters_file: - best_parameters = pickle.load(parameters_file) - - with open(best_AT_path, 'rb') as best_AT_file: - best_AT_thresholds_dict = pickle.load(best_AT_file) - - best_AT_thresholds = list(best_AT_thresholds_dict.values()) - - return best_AT_thresholds, best_parameters - - -def perform_prediction(wav_file_path, model, best_AT_thresholds): - # Load and process the audio file - signal, sr = librosa.load(wav_file_path, res_type='kaiser_fast') - hop_length = 512 - - power = librosa.feature.melspectrogram(y=signal, sr=sr, n_fft=2048, n_mels=64, fmin=0.0, fmax=sr / 2.0, - htk=False, hop_length=hop_length, power=2.0, norm=1) - power = librosa.core.power_to_db(power, ref=np.max) - endpoint_time = np.min([power.shape[1], 431]) - x_test = power[:, :endpoint_time] - x_test = x_test[np.newaxis, :, :, np.newaxis] - - # Perform prediction - loc_probs, at_probs = model.predict(x_test) - X_eval_pred_1_bin = at_probs.copy() - - X_eval_pred_1_bin[X_eval_pred_1_bin > best_AT_thresholds] = 1 - X_eval_pred_1_bin[X_eval_pred_1_bin <= best_AT_thresholds] = 0 - - return X_eval_pred_1_bin, loc_probs, at_probs - - -def generate_event_segments(predictions, encoder, wav_file_path, best_parameters): - # Generate event segments using the encoder - segments = encoder.encode(predictions, 
method='hysteresis', **best_parameters) - to_evaluate = encoder.parse(segments, [wav_file_path]) - - return to_evaluate - - -def rescale(array_1d, border_index=0): - """rescale between 0 and 1 a 1d-array""" - # border_index = 3 - if border_index > 0: - return (array_1d - np.min(array_1d[border_index:-border_index])) / ( - np.max(array_1d[border_index:-border_index]) - np.min(array_1d[border_index:-border_index])) - else: - return (array_1d - np.min(array_1d)) / (np.max(array_1d) - np.min(array_1d)) - - -def convert_string_to_list(string): - # split the string by newline characters - lines = string.split("\n") - # initialize an empty list to store the output - output = [] - # loop through each line - for line in lines: - # skip empty lines - if line == "": - continue - # split the line by tab characters - parts = line.split("\t") - # check if parts has enough elements - if len(parts) >= 4: - # extract the start time, end time, and label from the parts - start_time = float(parts[1]) - end_time = float(parts[2]) - label = parts[3] - # append a tuple of (start_time, end_time, label) to the output list - output.append((start_time, end_time, label)) - # return the output list - return output - - - -def get_prob_curves_for_predicted_classes(audio_tag_probs, strong_probs, featTestList, int2className): - audio_tag_preds = 1 * (audio_tag_probs > 0.5) - - dico_prob_curves_for_predicted_classes = {} - for i, fileid in enumerate(featTestList): - dico_prob_curves_for_predicted_classes[fileid] = {} - current_classes = np.nonzero(audio_tag_preds[i])[0] - for j in range(current_classes.shape[0]): - class_name = int2className[current_classes[j]] - dico_prob_curves_for_predicted_classes[fileid][class_name] = rescale(strong_probs[i, :, current_classes[j]], - border_index=0) - # dico_prob_curves_for_predicted_classes[fileid_short][class_name] = strong_probs[i,:,current_classes[j]] - - return dico_prob_curves_for_predicted_classes - - -def from_strong_probs_dict_2_strong_probs_3d_array(dico_prob_curves_for_predicted_classes, wav_lst_validation): - nbFrame = 431 - out_dim = 10 - - # rearange list of results - results = np.zeros((len(wav_lst_validation), nbFrame, out_dim)) - - for i, f in enumerate(wav_lst_validation): - dico_strong_probs = dico_prob_curves_for_predicted_classes[f] - curves = np.zeros((nbFrame, out_dim)) - - for event, prob in dico_strong_probs.items(): - curves[:, class_correspondance[event]] = prob - - results[i] = curves - return results - - -def load_test_strong_ground_truth(fpath): - # test ground truth - with open(fpath, "r") as f: - str_strong_y_true = f.read().splitlines()[1:] - - to_be_removed = [] - for el in str_strong_y_true: - info = el.split("\t") - if info[1] == '': - to_be_removed.append(el) - return [el for el in str_strong_y_true if el not in to_be_removed] - - -def create_bar_chart(class_list, data): - # create an empty dictionary to store the colors - colors = {} - # iterate over the classes and their indices - for i, c in enumerate(class_list): - # assign a color code based on the index - colors[c] = f"C{i}" - - # create the horizontal bar chart - fig, ax = plt.subplots(figsize=(12, 6)) - # iterate over the data - for i, (onset, offset, label) in enumerate(data): - # calculate the duration - duration = offset - onset - # use the colors dictionary to assign the same color for each event label - ax.barh(label, duration, left=onset, height=0.5, color=colors[label], alpha=0.7) - - # set the axis labels and title - ax.set_xlabel('Time (seconds)') - ax.set_ylabel('Event label') - 
ax.set_title('Sound Event Detection') - - # Set the x-axis limits to ensure the entire 10 seconds are visible - ax.set_xlim(0, 10) - - # Set the directory to save the plot - save_dir = os.path.join(os.path.dirname(__file__), 'static', 'images') - os.makedirs(save_dir, exist_ok=True) - - # Save the plot as an image file in the specified directory - save_path = os.path.join(save_dir, 'myplot.png') - plt.savefig(save_path) - - plt.close(fig) -# display the plot in a window - # plt.show() \ No newline at end of file diff --git a/spaces/kepl/gpt/g4f/Provider/Providers/DeepAi.py b/spaces/kepl/gpt/g4f/Provider/Providers/DeepAi.py deleted file mode 100644 index 02b08120ec8ef50c91c9237047a4f36c822a7bfc..0000000000000000000000000000000000000000 --- a/spaces/kepl/gpt/g4f/Provider/Providers/DeepAi.py +++ /dev/null @@ -1,46 +0,0 @@ -import os -import json -import random -import hashlib -import requests - -from ...typing import sha256, Dict, get_type_hints - -url = 'https://deepai.org' -model = ['gpt-3.5-turbo'] -supports_stream = True -needs_auth = False - -def _create_completion(model: str, messages: list, stream: bool, **kwargs): - def md5(text: str) -> str: - return hashlib.md5(text.encode()).hexdigest()[::-1] - - - def get_api_key(user_agent: str) -> str: - part1 = str(random.randint(0, 10**11)) - part2 = md5(user_agent + md5(user_agent + md5(user_agent + part1 + "x"))) - - return f"tryit-{part1}-{part2}" - - user_agent = 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/114.0.0.0 Safari/537.36' - - headers = { - "api-key": get_api_key(user_agent), - "user-agent": user_agent - } - - files = { - "chat_style": (None, "chat"), - "chatHistory": (None, json.dumps(messages)) - } - - r = requests.post("https://api.deepai.org/chat_response", headers=headers, files=files, stream=True) - - for chunk in r.iter_content(chunk_size=None): - r.raise_for_status() - yield chunk.decode() - - -params = f'g4f.Providers.{os.path.basename(__file__)[:-3]} supports: ' + \ - '(%s)' % ', '.join( - [f"{name}: {get_type_hints(_create_completion)[name].__name__}" for name in _create_completion.__code__.co_varnames[:_create_completion.__code__.co_argcount]]) diff --git a/spaces/kevinwang676/ChatGLM2-SadTalker/speaker_encoder/data_objects/speaker.py b/spaces/kevinwang676/ChatGLM2-SadTalker/speaker_encoder/data_objects/speaker.py deleted file mode 100644 index 07379847a854d85623db02ce5e5409c1566eb80c..0000000000000000000000000000000000000000 --- a/spaces/kevinwang676/ChatGLM2-SadTalker/speaker_encoder/data_objects/speaker.py +++ /dev/null @@ -1,40 +0,0 @@ -from speaker_encoder.data_objects.random_cycler import RandomCycler -from speaker_encoder.data_objects.utterance import Utterance -from pathlib import Path - -# Contains the set of utterances of a single speaker -class Speaker: - def __init__(self, root: Path): - self.root = root - self.name = root.name - self.utterances = None - self.utterance_cycler = None - - def _load_utterances(self): - with self.root.joinpath("_sources.txt").open("r") as sources_file: - sources = [l.split(",") for l in sources_file] - sources = {frames_fname: wave_fpath for frames_fname, wave_fpath in sources} - self.utterances = [Utterance(self.root.joinpath(f), w) for f, w in sources.items()] - self.utterance_cycler = RandomCycler(self.utterances) - - def random_partial(self, count, n_frames): - """ - Samples a batch of unique partial utterances from the disk in a way that all - utterances come up at least once every two cycles and in a random order every time. 
- - :param count: The number of partial utterances to sample from the set of utterances from - that speaker. Utterances are guaranteed not to be repeated if is not larger than - the number of utterances available. - :param n_frames: The number of frames in the partial utterance. - :return: A list of tuples (utterance, frames, range) where utterance is an Utterance, - frames are the frames of the partial utterances and range is the range of the partial - utterance with regard to the complete utterance. - """ - if self.utterances is None: - self._load_utterances() - - utterances = self.utterance_cycler.sample(count) - - a = [(u,) + u.random_partial(n_frames) for u in utterances] - - return a diff --git a/spaces/kevinwang676/SadTalker/src/utils/audio.py b/spaces/kevinwang676/SadTalker/src/utils/audio.py deleted file mode 100644 index 89433eb4c681112804fbed72b157700f553739a8..0000000000000000000000000000000000000000 --- a/spaces/kevinwang676/SadTalker/src/utils/audio.py +++ /dev/null @@ -1,136 +0,0 @@ -import librosa -import librosa.filters -import numpy as np -# import tensorflow as tf -from scipy import signal -from scipy.io import wavfile -from src.utils.hparams import hparams as hp - -def load_wav(path, sr): - return librosa.core.load(path, sr=sr)[0] - -def save_wav(wav, path, sr): - wav *= 32767 / max(0.01, np.max(np.abs(wav))) - #proposed by @dsmiller - wavfile.write(path, sr, wav.astype(np.int16)) - -def save_wavenet_wav(wav, path, sr): - librosa.output.write_wav(path, wav, sr=sr) - -def preemphasis(wav, k, preemphasize=True): - if preemphasize: - return signal.lfilter([1, -k], [1], wav) - return wav - -def inv_preemphasis(wav, k, inv_preemphasize=True): - if inv_preemphasize: - return signal.lfilter([1], [1, -k], wav) - return wav - -def get_hop_size(): - hop_size = hp.hop_size - if hop_size is None: - assert hp.frame_shift_ms is not None - hop_size = int(hp.frame_shift_ms / 1000 * hp.sample_rate) - return hop_size - -def linearspectrogram(wav): - D = _stft(preemphasis(wav, hp.preemphasis, hp.preemphasize)) - S = _amp_to_db(np.abs(D)) - hp.ref_level_db - - if hp.signal_normalization: - return _normalize(S) - return S - -def melspectrogram(wav): - D = _stft(preemphasis(wav, hp.preemphasis, hp.preemphasize)) - S = _amp_to_db(_linear_to_mel(np.abs(D))) - hp.ref_level_db - - if hp.signal_normalization: - return _normalize(S) - return S - -def _lws_processor(): - import lws - return lws.lws(hp.n_fft, get_hop_size(), fftsize=hp.win_size, mode="speech") - -def _stft(y): - if hp.use_lws: - return _lws_processor(hp).stft(y).T - else: - return librosa.stft(y=y, n_fft=hp.n_fft, hop_length=get_hop_size(), win_length=hp.win_size) - -########################################################## -#Those are only correct when using lws!!! (This was messing with Wavenet quality for a long time!) 
-def num_frames(length, fsize, fshift): - """Compute number of time frames of spectrogram - """ - pad = (fsize - fshift) - if length % fshift == 0: - M = (length + pad * 2 - fsize) // fshift + 1 - else: - M = (length + pad * 2 - fsize) // fshift + 2 - return M - - -def pad_lr(x, fsize, fshift): - """Compute left and right padding - """ - M = num_frames(len(x), fsize, fshift) - pad = (fsize - fshift) - T = len(x) + 2 * pad - r = (M - 1) * fshift + fsize - T - return pad, pad + r -########################################################## -#Librosa correct padding -def librosa_pad_lr(x, fsize, fshift): - return 0, (x.shape[0] // fshift + 1) * fshift - x.shape[0] - -# Conversions -_mel_basis = None - -def _linear_to_mel(spectogram): - global _mel_basis - if _mel_basis is None: - _mel_basis = _build_mel_basis() - return np.dot(_mel_basis, spectogram) - -def _build_mel_basis(): - assert hp.fmax <= hp.sample_rate // 2 - return librosa.filters.mel(sr=hp.sample_rate, n_fft=hp.n_fft, n_mels=hp.num_mels, - fmin=hp.fmin, fmax=hp.fmax) - -def _amp_to_db(x): - min_level = np.exp(hp.min_level_db / 20 * np.log(10)) - return 20 * np.log10(np.maximum(min_level, x)) - -def _db_to_amp(x): - return np.power(10.0, (x) * 0.05) - -def _normalize(S): - if hp.allow_clipping_in_normalization: - if hp.symmetric_mels: - return np.clip((2 * hp.max_abs_value) * ((S - hp.min_level_db) / (-hp.min_level_db)) - hp.max_abs_value, - -hp.max_abs_value, hp.max_abs_value) - else: - return np.clip(hp.max_abs_value * ((S - hp.min_level_db) / (-hp.min_level_db)), 0, hp.max_abs_value) - - assert S.max() <= 0 and S.min() - hp.min_level_db >= 0 - if hp.symmetric_mels: - return (2 * hp.max_abs_value) * ((S - hp.min_level_db) / (-hp.min_level_db)) - hp.max_abs_value - else: - return hp.max_abs_value * ((S - hp.min_level_db) / (-hp.min_level_db)) - -def _denormalize(D): - if hp.allow_clipping_in_normalization: - if hp.symmetric_mels: - return (((np.clip(D, -hp.max_abs_value, - hp.max_abs_value) + hp.max_abs_value) * -hp.min_level_db / (2 * hp.max_abs_value)) - + hp.min_level_db) - else: - return ((np.clip(D, 0, hp.max_abs_value) * -hp.min_level_db / hp.max_abs_value) + hp.min_level_db) - - if hp.symmetric_mels: - return (((D + hp.max_abs_value) * -hp.min_level_db / (2 * hp.max_abs_value)) + hp.min_level_db) - else: - return ((D * -hp.min_level_db / hp.max_abs_value) + hp.min_level_db) diff --git a/spaces/kevinwang676/VoiceChangers/search.py b/spaces/kevinwang676/VoiceChangers/search.py deleted file mode 100644 index 25b47bccf04efc2ad8a4661c2c086887349ad688..0000000000000000000000000000000000000000 --- a/spaces/kevinwang676/VoiceChangers/search.py +++ /dev/null @@ -1,163 +0,0 @@ -import os.path -import random - -from musicdl import musicdl -from musicdl.modules import Downloader -from pydub import AudioSegment -from yt_dlp import YoutubeDL -import yt_dlp -from yt_dlp.utils import download_range_func -import json - - -def is_integer(string): - if string.isdigit(): - return int(string) - else: - return 0 - - -def is_numeric(string): - if string.isdigit(): - return True - if string.count('.') == 1: - integer_part, decimal_part = string.split('.') - if integer_part.isdigit() and decimal_part.isdigit(): - return True - return False - - -def time_to_seconds(time_string): - hours, minutes, seconds = map(lambda x: is_integer(x), time_string.split(':')) - total_seconds = hours * 3600 + minutes * 60 + seconds - return total_seconds - - -def size_to_int(size_string): - prefix_size_str = size_string[:-2] # 去除最后的单位部分,转换为浮点数 - if not 
is_numeric(prefix_size_str): - return 5.1 * 1024 * 1024 - unit = size_string[-2:] # 获取单位部分 - size = float(prefix_size_str) - if unit == 'KB': - size *= 1024 # 转换为字节 - elif unit == 'MB': - size *= 1024 * 1024 - elif unit == 'GB': - size *= 1024 * 1024 * 1024 - elif unit == 'TB': - size *= 1024 * 1024 * 1024 * 1024 - - return int(size) # 转换为整数 - - -def search_youtube(keywords): - YDL_OPTIONS = { - 'format': 'bestaudio', - # 'noplaylist': 'True', - # 'proxy': 'http://127.0.0.1:8889', - } - with YoutubeDL(YDL_OPTIONS) as ydl: - video = ydl.extract_info(f"ytsearch:{keywords}", download=False)['entries'][0:5] - # video = ydl.extract_info(keywords, download=False) - if len(video) > 0: - ret = random.choice(video) - return ydl.sanitize_info(ret) - else: - return None - - -def download_youtube(info, save_path): - url = info['original_url'] - duration = info['duration'] - - - start_second = 0 - end_second = duration - - ydl_opts = { - 'format': 'm4a/bestaudio/best', - 'downloader': 'ffmpeg', - 'download_ranges': download_range_func(None, [(start_second, end_second)]), - # ℹ️ See help(yt_dlp.postprocessor) for a list of available Postprocessors and their arguments - 'postprocessors': [{ # Extract audio using ffmpeg - 'key': 'FFmpegExtractAudio', - 'preferredcodec': 'mp3', - }], - 'outtmpl': save_path, - # 'proxy': 'http://127.0.0.1:8889', - } - with yt_dlp.YoutubeDL(ydl_opts) as ydl: - info = ydl.extract_info(url, download=True) - # ℹ️ ydl.sanitize_info makes the info json-serializable - ret_info = ydl.sanitize_info(info) - ret_info['save_path'] = save_path - return ret_info - - -def get_youtube(keywords, save_path): - info = search_youtube(keywords) - if info is None: - return - else: - download_youtube(info, save_path) - - -def get_albums(keywords, config): - target_srcs = [ - 'kugou', 'kuwo', 'qqmusic', 'qianqian', 'fivesing', - 'netease', 'migu', 'joox', 'yiting', - ] - client = musicdl.musicdl(config=config) - results = client.search(keywords, target_srcs) - albums_set = set() - valid_albums = [] - for albums in results.values(): - if len(albums) == 0: - continue - for album in albums: - if album['songname'] in albums_set: - continue - if album['ext'] != 'mp3': - continue - if size_to_int(album['filesize']) > 5 * 1024 * 1024: - continue - if time_to_seconds(album['duration']) > 300: - continue - else: - albums_set.add(album['songname']) - valid_albums.append(album) - return valid_albums - - -def get_random_spit(songinfo, save_path): - d = Downloader(songinfo) - d.start() - song = AudioSegment.from_mp3(save_path) - # pydub does things in milliseconds - length = len(song) - left_idx = length / 2 - 15 * 1000 - right_idx = length / 2 + 15 * 1000 - if left_idx < 0: - left_idx = 0 - if right_idx > length: - right_idx = length - middle_30s = song[left_idx:right_idx] - middle_30s.export(save_path, format="wav") - return save_path - - -def download_random(keywords, config, save_path): - albums = get_albums(keywords, config) - if len(albums) == 0: - return None - album = random.choice(albums) - get_random_spit(album, save_path=save_path) - - -if __name__ == '__main__': - # config = {'logfilepath': 'musicdl.log', 'downloaded': 'downloaded', 'search_size_per_source': 5, 'proxies': {}} - # infos = get_albums('李荣浩', config) - # print(infos) - info = search_youtube('李荣浩 模特') - download_youtube(info, "downloaded/模特") diff --git a/spaces/kidcoconut/spcdkr_omdenasaudi_liverhccxai/lib/models/mdl_logR.py b/spaces/kidcoconut/spcdkr_omdenasaudi_liverhccxai/lib/models/mdl_logR.py deleted file mode 100644 index 
d3546a2b881db9b387a9d1bedca3aefda1a8860d..0000000000000000000000000000000000000000 --- a/spaces/kidcoconut/spcdkr_omdenasaudi_liverhccxai/lib/models/mdl_logR.py +++ /dev/null @@ -1,41 +0,0 @@ -from sklearn.linear_model import LogisticRegressionCV -import lib.utils as libPaths -import pickle - - -m_kstrFile = __file__ -m_kstrDataPath = libPaths.pth_data -m_kstrBinModelPath = libPaths.pth_binModels -m_kstrModelPath = m_kstrBinModelPath + 'lgr_model_colab.pkl' - - -#--- Supervised: Logistic Regession -def load_fromPkl(): - with open(m_kstrModelPath, 'rb') as filPkl: - mdlAnoms = pickle.load(filPkl) - return mdlAnoms - - - -def save_toPkl(mdlAnoms): - with open(m_kstrModelPath, 'wb') as filPkl: - pickle.dump(mdlAnoms, filPkl) - return mdlAnoms - - - -def predict(npaData): - #--- input: numpy.ndarray of feature eng, and scaled data - mdlAnoms = load_fromPkl() - npaPredict = mdlAnoms.predict(npaData) - - print("INFO (npaPredict.shape): ", npaPredict.shape) - return npaPredict - - - -def train(pdfTrainData): - mdlAnoms = LogisticRegressionCV() - mdlAnoms.fit(pdfTrainData.values) - save_toPkl(mdlAnoms) - return mdlAnoms \ No newline at end of file diff --git a/spaces/koajoel/PolyFormer/fairseq/examples/simultaneous_translation/docs/ende-mma.md b/spaces/koajoel/PolyFormer/fairseq/examples/simultaneous_translation/docs/ende-mma.md deleted file mode 100644 index 241d604a3b31a37755da68aad6ff47d46891d3fc..0000000000000000000000000000000000000000 --- a/spaces/koajoel/PolyFormer/fairseq/examples/simultaneous_translation/docs/ende-mma.md +++ /dev/null @@ -1,74 +0,0 @@ -# Simultaneous Machine Translation - -This directory contains the code for the paper [Monotonic Multihead Attention](https://openreview.net/forum?id=Hyg96gBKPS) - -## Prepare Data - -[Please follow the instructions to download and preprocess the WMT'15 En-De dataset.](https://github.com/pytorch/fairseq/tree/simulastsharedtask/examples/translation#prepare-wmt14en2desh) - -Another example of training an English to Japanese model can be found [here](docs/enja.md) - -## Training - -- MMA-IL - -```shell -fairseq-train \ - data-bin/wmt15_en_de_32k \ - --simul-type infinite_lookback \ - --user-dir $FAIRSEQ/example/simultaneous_translation \ - --mass-preservation \ - --criterion latency_augmented_label_smoothed_cross_entropy \ - --latency-weight-avg 0.1 \ - --max-update 50000 \ - --arch transformer_monotonic_iwslt_de_en save_dir_key=lambda \ - --optimizer adam --adam-betas '(0.9, 0.98)' \ - --lr-scheduler 'inverse_sqrt' \ - --warmup-init-lr 1e-7 --warmup-updates 4000 \ - --lr 5e-4 --stop-min-lr 1e-9 --clip-norm 0.0 --weight-decay 0.0001\ - --dropout 0.3 \ - --label-smoothing 0.1\ - --max-tokens 3584 -``` - -- MMA-H - -```shell -fairseq-train \ - data-bin/wmt15_en_de_32k \ - --simul-type hard_aligned \ - --user-dir $FAIRSEQ/example/simultaneous_translation \ - --mass-preservation \ - --criterion latency_augmented_label_smoothed_cross_entropy \ - --latency-weight-var 0.1 \ - --max-update 50000 \ - --arch transformer_monotonic_iwslt_de_en save_dir_key=lambda \ - --optimizer adam --adam-betas '(0.9, 0.98)' \ - --lr-scheduler 'inverse_sqrt' \ - --warmup-init-lr 1e-7 --warmup-updates 4000 \ - --lr 5e-4 --stop-min-lr 1e-9 --clip-norm 0.0 --weight-decay 0.0001\ - --dropout 0.3 \ - --label-smoothing 0.1\ - --max-tokens 3584 -``` - -- wait-k - -```shell -fairseq-train \ - data-bin/wmt15_en_de_32k \ - --simul-type wait-k \ - --waitk-lagging 3 \ - --user-dir $FAIRSEQ/example/simultaneous_translation \ - --mass-preservation \ - --criterion 
latency_augmented_label_smoothed_cross_entropy \ - --max-update 50000 \ - --arch transformer_monotonic_iwslt_de_en save_dir_key=lambda \ - --optimizer adam --adam-betas '(0.9, 0.98)' \ - --lr-scheduler 'inverse_sqrt' \ - --warmup-init-lr 1e-7 --warmup-updates 4000 \ - --lr 5e-4 --stop-min-lr 1e-9 --clip-norm 0.0 --weight-decay 0.0001\ - --dropout 0.3 \ - --label-smoothing 0.1\ - --max-tokens 3584 -``` diff --git a/spaces/kohrisatou-infinity/KIP_01_beta/commons.py b/spaces/kohrisatou-infinity/KIP_01_beta/commons.py deleted file mode 100644 index 074888006392e956ce204d8368362dbb2cd4e304..0000000000000000000000000000000000000000 --- a/spaces/kohrisatou-infinity/KIP_01_beta/commons.py +++ /dev/null @@ -1,188 +0,0 @@ -import math -import numpy as np -import torch -from torch import nn -from torch.nn import functional as F - -def slice_pitch_segments(x, ids_str, segment_size=4): - ret = torch.zeros_like(x[:, :segment_size]) - for i in range(x.size(0)): - idx_str = ids_str[i] - idx_end = idx_str + segment_size - ret[i] = x[i, idx_str:idx_end] - return ret - -def rand_slice_segments_with_pitch(x, pitch, x_lengths=None, segment_size=4): - b, d, t = x.size() - if x_lengths is None: - x_lengths = t - ids_str_max = x_lengths - segment_size + 1 - ids_str = (torch.rand([b]).to(device=x.device) * ids_str_max).to(dtype=torch.long) - ret = slice_segments(x, ids_str, segment_size) - ret_pitch = slice_pitch_segments(pitch, ids_str, segment_size) - return ret, ret_pitch, ids_str - -def init_weights(m, mean=0.0, std=0.01): - classname = m.__class__.__name__ - if classname.find("Conv") != -1: - m.weight.data.normal_(mean, std) - - -def get_padding(kernel_size, dilation=1): - return int((kernel_size*dilation - dilation)/2) - - -def convert_pad_shape(pad_shape): - l = pad_shape[::-1] - pad_shape = [item for sublist in l for item in sublist] - return pad_shape - - -def intersperse(lst, item): - result = [item] * (len(lst) * 2 + 1) - result[1::2] = lst - return result - - -def kl_divergence(m_p, logs_p, m_q, logs_q): - """KL(P||Q)""" - kl = (logs_q - logs_p) - 0.5 - kl += 0.5 * (torch.exp(2. * logs_p) + ((m_p - m_q)**2)) * torch.exp(-2. 
* logs_q) - return kl - - -def rand_gumbel(shape): - """Sample from the Gumbel distribution, protect from overflows.""" - uniform_samples = torch.rand(shape) * 0.99998 + 0.00001 - return -torch.log(-torch.log(uniform_samples)) - - -def rand_gumbel_like(x): - g = rand_gumbel(x.size()).to(dtype=x.dtype, device=x.device) - return g - - -def slice_segments(x, ids_str, segment_size=4): - ret = torch.zeros_like(x[:, :, :segment_size]) - for i in range(x.size(0)): - idx_str = ids_str[i] - idx_end = idx_str + segment_size - ret[i] = x[i, :, idx_str:idx_end] - return ret - - -def rand_slice_segments(x, x_lengths=None, segment_size=4): - b, d, t = x.size() - if x_lengths is None: - x_lengths = t - ids_str_max = x_lengths - segment_size + 1 - ids_str = (torch.rand([b]).to(device=x.device) * ids_str_max).to(dtype=torch.long) - ret = slice_segments(x, ids_str, segment_size) - return ret, ids_str - - -def rand_spec_segments(x, x_lengths=None, segment_size=4): - b, d, t = x.size() - if x_lengths is None: - x_lengths = t - ids_str_max = x_lengths - segment_size - ids_str = (torch.rand([b]).to(device=x.device) * ids_str_max).to(dtype=torch.long) - ret = slice_segments(x, ids_str, segment_size) - return ret, ids_str - - -def get_timing_signal_1d( - length, channels, min_timescale=1.0, max_timescale=1.0e4): - position = torch.arange(length, dtype=torch.float) - num_timescales = channels // 2 - log_timescale_increment = ( - math.log(float(max_timescale) / float(min_timescale)) / - (num_timescales - 1)) - inv_timescales = min_timescale * torch.exp( - torch.arange(num_timescales, dtype=torch.float) * -log_timescale_increment) - scaled_time = position.unsqueeze(0) * inv_timescales.unsqueeze(1) - signal = torch.cat([torch.sin(scaled_time), torch.cos(scaled_time)], 0) - signal = F.pad(signal, [0, 0, 0, channels % 2]) - signal = signal.view(1, channels, length) - return signal - - -def add_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4): - b, channels, length = x.size() - signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale) - return x + signal.to(dtype=x.dtype, device=x.device) - - -def cat_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4, axis=1): - b, channels, length = x.size() - signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale) - return torch.cat([x, signal.to(dtype=x.dtype, device=x.device)], axis) - - -def subsequent_mask(length): - mask = torch.tril(torch.ones(length, length)).unsqueeze(0).unsqueeze(0) - return mask - - -@torch.jit.script -def fused_add_tanh_sigmoid_multiply(input_a, input_b, n_channels): - n_channels_int = n_channels[0] - in_act = input_a + input_b - t_act = torch.tanh(in_act[:, :n_channels_int, :]) - s_act = torch.sigmoid(in_act[:, n_channels_int:, :]) - acts = t_act * s_act - return acts - - -def convert_pad_shape(pad_shape): - l = pad_shape[::-1] - pad_shape = [item for sublist in l for item in sublist] - return pad_shape - - -def shift_1d(x): - x = F.pad(x, convert_pad_shape([[0, 0], [0, 0], [1, 0]]))[:, :, :-1] - return x - - -def sequence_mask(length, max_length=None): - if max_length is None: - max_length = length.max() - x = torch.arange(max_length, dtype=length.dtype, device=length.device) - return x.unsqueeze(0) < length.unsqueeze(1) - - -def generate_path(duration, mask): - """ - duration: [b, 1, t_x] - mask: [b, 1, t_y, t_x] - """ - device = duration.device - - b, _, t_y, t_x = mask.shape - cum_duration = torch.cumsum(duration, -1) - - cum_duration_flat = cum_duration.view(b * t_x) - path = 
sequence_mask(cum_duration_flat, t_y).to(mask.dtype) - path = path.view(b, t_x, t_y) - path = path - F.pad(path, convert_pad_shape([[0, 0], [1, 0], [0, 0]]))[:, :-1] - path = path.unsqueeze(1).transpose(2,3) * mask - return path - - -def clip_grad_value_(parameters, clip_value, norm_type=2): - if isinstance(parameters, torch.Tensor): - parameters = [parameters] - parameters = list(filter(lambda p: p.grad is not None, parameters)) - norm_type = float(norm_type) - if clip_value is not None: - clip_value = float(clip_value) - - total_norm = 0 - for p in parameters: - param_norm = p.grad.data.norm(norm_type) - total_norm += param_norm.item() ** norm_type - if clip_value is not None: - p.grad.data.clamp_(min=-clip_value, max=clip_value) - total_norm = total_norm ** (1. / norm_type) - return total_norm diff --git a/spaces/kohrisatou-infinity/KIP_01_beta/preprocess_flist_config.py b/spaces/kohrisatou-infinity/KIP_01_beta/preprocess_flist_config.py deleted file mode 100644 index 927dea890c0057063080b48edc6dd8c2588c6e27..0000000000000000000000000000000000000000 --- a/spaces/kohrisatou-infinity/KIP_01_beta/preprocess_flist_config.py +++ /dev/null @@ -1,117 +0,0 @@ -import os -import argparse -from tqdm import tqdm -from random import shuffle -import json -config_template = { - "train": { - "log_interval": 200, - "eval_interval": 1000, - "seed": 1234, - "epochs": 10000, - "learning_rate": 2e-4, - "betas": [0.8, 0.99], - "eps": 1e-9, - "batch_size": 12, - "fp16_run": False, - "lr_decay": 0.999875, - "segment_size": 17920, - "init_lr_ratio": 1, - "warmup_epochs": 0, - "c_mel": 45, - "c_kl": 1.0, - "use_sr": True, - "max_speclen": 384, - "port": "8001" - }, - "data": { - "training_files":"filelists/train.txt", - "validation_files":"filelists/val.txt", - "max_wav_value": 32768.0, - "sampling_rate": 32000, - "filter_length": 1280, - "hop_length": 320, - "win_length": 1280, - "n_mel_channels": 80, - "mel_fmin": 0.0, - "mel_fmax": None - }, - "model": { - "inter_channels": 192, - "hidden_channels": 192, - "filter_channels": 768, - "n_heads": 2, - "n_layers": 6, - "kernel_size": 3, - "p_dropout": 0.1, - "resblock": "1", - "resblock_kernel_sizes": [3,7,11], - "resblock_dilation_sizes": [[1,3,5], [1,3,5], [1,3,5]], - "upsample_rates": [10,8,2,2], - "upsample_initial_channel": 512, - "upsample_kernel_sizes": [16,16,4,4], - "n_layers_q": 3, - "use_spectral_norm": False, - "gin_channels": 256, - "ssl_dim": 256, - "n_speakers": 0, - }, - "spk":{ - "nen": 0, - "paimon": 1, - "yunhao": 2 - } -} - - -if __name__ == "__main__": - parser = argparse.ArgumentParser() - parser.add_argument("--train_list", type=str, default="./filelists/train.txt", help="path to train list") - parser.add_argument("--val_list", type=str, default="./filelists/val.txt", help="path to val list") - parser.add_argument("--test_list", type=str, default="./filelists/test.txt", help="path to test list") - parser.add_argument("--source_dir", type=str, default="./dataset/32k", help="path to source dir") - args = parser.parse_args() - - train = [] - val = [] - test = [] - idx = 0 - spk_dict = {} - spk_id = 0 - for speaker in tqdm(os.listdir(args.source_dir)): - spk_dict[speaker] = spk_id - spk_id += 1 - wavs = [os.path.join(args.source_dir, speaker, i)for i in os.listdir(os.path.join(args.source_dir, speaker))] - wavs = [i for i in wavs if i.endswith("wav")] - shuffle(wavs) - train += wavs[2:-10] - val += wavs[:2] - test += wavs[-10:] - n_speakers = len(spk_dict.keys())*2 - shuffle(train) - shuffle(val) - shuffle(test) - - print("Writing", 
args.train_list) - with open(args.train_list, "w") as f: - for fname in tqdm(train): - wavpath = fname - f.write(wavpath + "\n") - - print("Writing", args.val_list) - with open(args.val_list, "w") as f: - for fname in tqdm(val): - wavpath = fname - f.write(wavpath + "\n") - - print("Writing", args.test_list) - with open(args.test_list, "w") as f: - for fname in tqdm(test): - wavpath = fname - f.write(wavpath + "\n") - - config_template["model"]["n_speakers"] = n_speakers - config_template["spk"] = spk_dict - print("Writing configs/config.json") - with open("configs/config.json", "w") as f: - json.dump(config_template, f, indent=2) diff --git a/spaces/kquote03/lama-video-watermark-remover/bin/gen_outpainting_dataset.py b/spaces/kquote03/lama-video-watermark-remover/bin/gen_outpainting_dataset.py deleted file mode 100644 index 72f6fc16c372fbc0aec9643c7be1c44ce5efeba4..0000000000000000000000000000000000000000 --- a/spaces/kquote03/lama-video-watermark-remover/bin/gen_outpainting_dataset.py +++ /dev/null @@ -1,88 +0,0 @@ -#!/usr/bin/env python3 -import glob -import logging -import os -import shutil -import sys -import traceback - -from saicinpainting.evaluation.data import load_image -from saicinpainting.evaluation.utils import move_to_device - -os.environ['OMP_NUM_THREADS'] = '1' -os.environ['OPENBLAS_NUM_THREADS'] = '1' -os.environ['MKL_NUM_THREADS'] = '1' -os.environ['VECLIB_MAXIMUM_THREADS'] = '1' -os.environ['NUMEXPR_NUM_THREADS'] = '1' - -import cv2 -import hydra -import numpy as np -import torch -import tqdm -import yaml -from omegaconf import OmegaConf -from torch.utils.data._utils.collate import default_collate - -from saicinpainting.training.data.datasets import make_default_val_dataset -from saicinpainting.training.trainers import load_checkpoint -from saicinpainting.utils import register_debug_signal_handlers - -LOGGER = logging.getLogger(__name__) - - -def main(args): - try: - if not args.indir.endswith('/'): - args.indir += '/' - - for in_img in glob.glob(os.path.join(args.indir, '**', '*' + args.img_suffix), recursive=True): - if 'mask' in os.path.basename(in_img): - continue - - out_img_path = os.path.join(args.outdir, os.path.splitext(in_img[len(args.indir):])[0] + '.png') - out_mask_path = f'{os.path.splitext(out_img_path)[0]}_mask.png' - - os.makedirs(os.path.dirname(out_img_path), exist_ok=True) - - img = load_image(in_img) - height, width = img.shape[1:] - pad_h, pad_w = int(height * args.coef / 2), int(width * args.coef / 2) - - mask = np.zeros((height, width), dtype='uint8') - - if args.expand: - img = np.pad(img, ((0, 0), (pad_h, pad_h), (pad_w, pad_w))) - mask = np.pad(mask, ((pad_h, pad_h), (pad_w, pad_w)), mode='constant', constant_values=255) - else: - mask[:pad_h] = 255 - mask[-pad_h:] = 255 - mask[:, :pad_w] = 255 - mask[:, -pad_w:] = 255 - - # img = np.pad(img, ((0, 0), (pad_h * 2, pad_h * 2), (pad_w * 2, pad_w * 2)), mode='symmetric') - # mask = np.pad(mask, ((pad_h * 2, pad_h * 2), (pad_w * 2, pad_w * 2)), mode = 'symmetric') - - img = np.clip(np.transpose(img, (1, 2, 0)) * 255, 0, 255).astype('uint8') - img = cv2.cvtColor(img, cv2.COLOR_RGB2BGR) - cv2.imwrite(out_img_path, img) - - cv2.imwrite(out_mask_path, mask) - except KeyboardInterrupt: - LOGGER.warning('Interrupted by user') - except Exception as ex: - LOGGER.critical(f'Prediction failed due to {ex}:\n{traceback.format_exc()}') - sys.exit(1) - - -if __name__ == '__main__': - import argparse - - aparser = argparse.ArgumentParser() - aparser.add_argument('indir', type=str, help='Root directory with 
images') - aparser.add_argument('outdir', type=str, help='Where to store results') - aparser.add_argument('--img-suffix', type=str, default='.png', help='Input image extension') - aparser.add_argument('--expand', action='store_true', help='Generate mask by padding (true) or by cropping (false)') - aparser.add_argument('--coef', type=float, default=0.2, help='How much to crop/expand in order to get masks') - - main(aparser.parse_args()) diff --git a/spaces/kxqt/Expedit-SAM/segment_anything/modeling/hourglass_image_encoder.py b/spaces/kxqt/Expedit-SAM/segment_anything/modeling/hourglass_image_encoder.py deleted file mode 100644 index fcbcfd9e219437e88cb41cbe0995458d16c0e6d7..0000000000000000000000000000000000000000 --- a/spaces/kxqt/Expedit-SAM/segment_anything/modeling/hourglass_image_encoder.py +++ /dev/null @@ -1,446 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. - -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -import math -import torch -import torch.nn as nn -import torch.nn.functional as F -from torch.autograd import Variable - -from typing import Optional, Tuple, Type - -from .common import LayerNorm2d, MLPBlock - -from .image_encoder import ( - window_partition, - window_unpartition, - add_decomposed_rel_pos, - ImageEncoderViT, - Block, - Attention, -) - - -class TokenClusteringBlock(nn.Module): - def __init__(self, num_spixels=None, n_iters=5, temperture=0.01, window_size=5): - super().__init__() - if isinstance(num_spixels, tuple): - assert len(num_spixels) == 2 - elif num_spixels is not None: - x = int(math.sqrt(num_spixels)) - assert x * x == num_spixels - num_spixels = (x, x) - self.num_spixels = num_spixels - self.n_iters = n_iters - self.temperture = temperture - assert window_size % 2 == 1 - self.r = window_size // 2 - - def calc_init_centroid(self, images, num_spixels_width, num_spixels_height): - """ - calculate initial superpixels - - Args: - images: torch.Tensor - A Tensor of shape (B, C, H, W) - spixels_width: int - initial superpixel width - spixels_height: int - initial superpixel height - - Return: - centroids: torch.Tensor - A Tensor of shape (B, C, H * W) - init_label_map: torch.Tensor - A Tensor of shape (B, H * W) - num_spixels_width: int - A number of superpixels in each column - num_spixels_height: int - A number of superpixels int each raw - """ - batchsize, channels, height, width = images.shape - device = images.device - - centroids = torch.nn.functional.adaptive_avg_pool2d( - images, (num_spixels_height, num_spixels_width) - ) - - with torch.no_grad(): - num_spixels = num_spixels_width * num_spixels_height - labels = ( - torch.arange(num_spixels, device=device) - .reshape(1, 1, *centroids.shape[-2:]) - .type_as(centroids) - ) - init_label_map = torch.nn.functional.interpolate( - labels, size=(height, width), mode="nearest" - ).type_as(centroids) - init_label_map = init_label_map.repeat(batchsize, 1, 1, 1) - - init_label_map = init_label_map.reshape(batchsize, -1) - centroids = centroids.reshape(batchsize, channels, -1) - - return centroids, init_label_map - - def forward(self, pixel_features, num_spixels=None): - if num_spixels is None: - num_spixels = self.num_spixels - assert num_spixels is not None - else: - if isinstance(num_spixels, tuple): - assert len(num_spixels) == 2 - else: - x = int(math.sqrt(num_spixels)) - assert x * x == num_spixels - num_spixels = (x, x) - pixel_features = pixel_features.permute(0, 3, 1, 2) - num_spixels_height, 
num_spixels_width = num_spixels - num_spixels = num_spixels_width * num_spixels_height - spixel_features, init_label_map = self.calc_init_centroid( - pixel_features, num_spixels_width, num_spixels_height - ) - - device = init_label_map.device - spixels_number = torch.arange(num_spixels, device=device)[None, :, None] - relative_labels_widths = init_label_map[:, None] % num_spixels_width - spixels_number % num_spixels_width - relative_labels_heights = torch.div(init_label_map[:, None], num_spixels_width, rounding_mode='trunc') - torch.div(spixels_number, num_spixels_width, rounding_mode='trunc') - mask = torch.logical_and(torch.abs(relative_labels_widths) <= self.r, torch.abs(relative_labels_heights) <= self.r) - mask_dist = (~mask) * 1e16 - - pixel_features = pixel_features.reshape(*pixel_features.shape[:2], -1) # (B, C, L) - permuted_pixel_features = pixel_features.permute(0, 2, 1) # (B, L, C) - - for _ in range(self.n_iters): - dist_matrix = self.pairwise_dist(pixel_features, spixel_features) # (B, L', L) - dist_matrix += mask_dist - affinity_matrix = (-dist_matrix * self.temperture).softmax(1) - spixel_features = torch.bmm(affinity_matrix.detach(), permuted_pixel_features) - spixel_features = spixel_features / affinity_matrix.detach().sum(2, keepdim=True).clamp_(min=1e-16) - spixel_features = spixel_features.permute(0, 2, 1) - - dist_matrix = self.pairwise_dist(pixel_features, spixel_features) - hard_labels = torch.argmin(dist_matrix, dim=1) - - B, C, _ = spixel_features.shape - spixel_features = spixel_features.permute(0, 2, 1).reshape(B, num_spixels_height, num_spixels_width, C) - return spixel_features, hard_labels - - def pairwise_dist(self, f1, f2): - return ((f1 * f1).sum(dim=1).unsqueeze(1) - + (f2 * f2).sum(dim=1).unsqueeze(2) - - 2 * torch.einsum("bcm, bcn -> bmn", f2, f1)) - - def extra_repr(self): - return f"num_spixels={self.num_spixels}, n_iters={self.n_iters}" - - -def naive_unpool(f_regions, region_indices): - _, _, C = f_regions.shape - N, L = region_indices.shape - index = region_indices.view(N, L, 1).expand(N, L, C) - result = f_regions.gather(1, index) - return result - - -class State: - def __init__(self, unpooling): - self.unpooling = unpooling - self.__updated = False - - @property - def updated(self): - return self.__updated - - def get(self, name, default=None): - return getattr(self, name, default) - - def update_state(self, **states: dict): - self.__updated = True - for k, v in states.items(): - setattr(self, k, v) - - def call(self, input: torch.Tensor): - return self.unpooling(input, self) - - -class UnpoolingBase(nn.Module): - def forward(self, x, state: State): - if not state.updated: - return x, False - return self._forward(x, state) - - def derive_unpooler(self): - return State(self) - - -class NaiveUnpooling(UnpoolingBase): - def _forward(self, x, state: State): - return naive_unpool(x, state.hard_labels), False - - -class TokenReconstructionBlock(UnpoolingBase): - def __init__(self, k=20, temperture=0.01): - super().__init__() - - self.k = k - self.temperture = temperture - - def _forward(self, x, state: State): - feat = state.feat_before_pooling - sfeat = state.feat_after_pooling - ds = ( - (feat * feat).sum(dim=2).unsqueeze(2) - + (sfeat * sfeat).sum(dim=2).unsqueeze(1) - - 2 * torch.einsum("bnc, bmc -> bnm", feat, sfeat) - ) # distance between features and super-features - ds[ds < 0] = 0 - weight = torch.exp(-self.temperture * ds) - if self.k >= 0: - topk, indices = torch.topk(weight, k=self.k, dim=2) - mink = torch.min(topk, dim=-1).values - mink = 
mink.unsqueeze(-1).repeat(1, 1, weight.shape[-1]) - mask = torch.ge(weight, mink) - zero = Variable(torch.zeros_like(weight)).to(weight.device) - attention = torch.where(mask, weight, zero) - attention = F.normalize(attention, dim=2) - ret = torch.einsum("bnm, bmc -> bnc", attention, x) - - return ret, False - - - -class HourglassImageEncoderViT(ImageEncoderViT): - def __init__( - self, - img_size: int = 1024, - patch_size: int = 16, - in_chans: int = 3, - embed_dim: int = 768, - depth: int = 12, - num_heads: int = 12, - mlp_ratio: float = 4.0, - out_chans: int = 256, - qkv_bias: bool = True, - norm_layer: Type[nn.Module] = nn.LayerNorm, - act_layer: Type[nn.Module] = nn.GELU, - use_abs_pos: bool = True, - use_rel_pos: bool = False, - rel_pos_zero_init: bool = True, - window_size: int = 0, - global_attn_indexes: Tuple[int, ...] = (), - hourglass_clustering_location: int = -1, - hourglass_num_cluster: int = 100, - hourglass_cluster_iters: int = 5, - hourglass_temperture: float = 0.01, - hourglass_cluster_window_size: int = 5, - hourglass_reconstruction_k: int = 20, - ) -> None: - """ - Args: - img_size (int): Input image size. - patch_size (int): Patch size. - in_chans (int): Number of input image channels. - embed_dim (int): Patch embedding dimension. - depth (int): Depth of ViT. - num_heads (int): Number of attention heads in each ViT block. - mlp_ratio (float): Ratio of mlp hidden dim to embedding dim. - qkv_bias (bool): If True, add a learnable bias to query, key, value. - norm_layer (nn.Module): Normalization layer. - act_layer (nn.Module): Activation layer. - use_abs_pos (bool): If True, use absolute positional embeddings. - use_rel_pos (bool): If True, add relative positional embeddings to the attention map. - rel_pos_zero_init (bool): If True, zero initialize relative positional parameters. - window_size (int): Window size for window attention blocks. - global_attn_indexes (list): Indexes for blocks using global attention. 
- """ - super().__init__( - img_size=img_size, - patch_size=patch_size, - in_chans=in_chans, - embed_dim=embed_dim, - depth=depth, - num_heads=num_heads, - mlp_ratio=mlp_ratio, - out_chans=out_chans, - qkv_bias=qkv_bias, - norm_layer=norm_layer, - act_layer=act_layer, - use_abs_pos=use_abs_pos, - use_rel_pos=use_rel_pos, - rel_pos_zero_init=rel_pos_zero_init, - window_size=window_size, - global_attn_indexes=global_attn_indexes, - ) - - hourglass_clustering_location = hourglass_clustering_location if hourglass_clustering_location >= 0 else depth + 1 - - self.window_size = window_size - self.ws_new = int(math.sqrt(hourglass_num_cluster)) - - self.blocks = nn.ModuleList() - for i in range(depth): - block = HourglassBlock( - dim=embed_dim, - num_heads=num_heads, - mlp_ratio=mlp_ratio, - qkv_bias=qkv_bias, - norm_layer=norm_layer, - act_layer=act_layer, - use_rel_pos=use_rel_pos, - rel_pos_zero_init=rel_pos_zero_init, - window_size=(window_size if i < hourglass_clustering_location else self.ws_new) if i not in global_attn_indexes else 0, - window_size_ckpt=window_size, - input_size=(img_size // patch_size, img_size // patch_size), - ) - self.blocks.append(block) - - self.clustering_location = hourglass_clustering_location - self.token_clustering_block = TokenClusteringBlock( - num_spixels=hourglass_num_cluster, - n_iters=hourglass_cluster_iters, - temperture=hourglass_temperture, - window_size=hourglass_cluster_window_size, - ) - self.token_reconstruction_block = TokenReconstructionBlock( - k=hourglass_reconstruction_k, - temperture=hourglass_temperture, - ) - - def cluster(self, x, reconstructer): - # x: B, H, W, C - H, W = x.shape[1:3] - x, pad_hw = window_partition(x, self.window_size) # B*Nw, WH, WW, C - Bnw, _, _, C = x.shape - - reconstructer.update_state( - feat_before_pooling=x.view(-1, self.window_size * self.window_size, C) - ) - x, hard_labels = self.token_clustering_block(x) # B*H*W, Wh, Ww, C - reconstructer.update_state(hard_labels=hard_labels) - reconstructer.update_state(feat_after_pooling=x.view(Bnw, -1, C)) - - # merge window - # Reverse window partition - h = pad_hw[0] // self.window_size * x.shape[1] - w = pad_hw[1] // self.window_size * x.shape[2] - x = window_unpartition(x, self.ws_new, (h, w), (h, w)) - # out: B, h, w, C - return x, pad_hw - - def reconstruct(self, x, H, W, recontructer, pad_hw): - # x: B, h, w, C - x, _ = window_partition(x, self.ws_new) # B*Nw, Wh, Ww, C - Bnw, _, _, C = x.shape - x = x.view(Bnw, -1, C) - - x, _ = recontructer.call(x) # B*Nw, WH*WW, C - - # merge windows - x = x.view(-1, self.window_size, self.window_size, C) - x = window_unpartition(x, self.window_size, pad_hw, (H, W)) # B, H, W, C - return x - - - def forward(self, x: torch.Tensor) -> torch.Tensor: - x = self.patch_embed(x) - if self.pos_embed is not None: - x = x + self.pos_embed - - H, W = x.shape[1], x.shape[2] - reconstructer = self.token_reconstruction_block.derive_unpooler() - reconstructer.update_state(hw_shape=(H, W)) - - for i, blk in enumerate(self.blocks): - if i == self.clustering_location: - x, pad_hw = self.cluster(x, reconstructer) - x = blk(x) - - if x.shape[1] != H or x.shape[2] != W: - x = self.reconstruct(x, H, W, reconstructer, pad_hw) - - x = self.neck(x.permute(0, 3, 1, 2)) - - return x - - def load_hourglass_args(self, **hourglass_args): - hourglass_clustering_location = hourglass_args.get('hourglass_clustering_location', self.clustering_location) - hourglass_num_cluster = hourglass_args.get('hourglass_num_cluster', self.token_clustering_block.num_spixels[0] * 
self.token_clustering_block.num_spixels[1]) - hourglass_cluster_iters = hourglass_args.get('hourglass_cluster_iters', self.token_clustering_block.n_iters) - hourglass_temperture = hourglass_args.get('hourglass_temperture', self.token_clustering_block.temperture) - hourglass_cluster_window_size = hourglass_args.get('hourglass_cluster_window_size', self.token_clustering_block.r * 2 + 1) - hourglass_reconstruction_k = hourglass_args.get('hourglass_reconstruction_k', self.token_reconstruction_block.k) - - self.clustering_location = hourglass_clustering_location if hourglass_clustering_location >= 0 else len(self.blocks) + 1 - - self.ws_new = int(math.sqrt(hourglass_num_cluster)) - for i, blk in enumerate(self.blocks): - blk.window_size = (self.window_size if i < self.clustering_location else self.ws_new) if blk.window_size != 0 else 0 - - self.token_clustering_block = TokenClusteringBlock( - num_spixels=hourglass_num_cluster, - n_iters=hourglass_cluster_iters, - temperture=hourglass_temperture, - window_size=hourglass_cluster_window_size, - ) - self.token_reconstruction_block = TokenReconstructionBlock( - k=hourglass_reconstruction_k, - temperture=hourglass_temperture, - ) - - -class HourglassBlock(Block): - """Transformer blocks with support of window attention and residual propagation blocks""" - - def __init__( - self, - dim: int, - num_heads: int, - mlp_ratio: float = 4.0, - qkv_bias: bool = True, - norm_layer: Type[nn.Module] = nn.LayerNorm, - act_layer: Type[nn.Module] = nn.GELU, - use_rel_pos: bool = False, - rel_pos_zero_init: bool = True, - window_size: int = 0, - input_size: Optional[Tuple[int, int]] = None, - window_size_ckpt: int = 0, - ) -> None: - """ - Args: - dim (int): Number of input channels. - num_heads (int): Number of attention heads in each ViT block. - mlp_ratio (float): Ratio of mlp hidden dim to embedding dim. - qkv_bias (bool): If True, add a learnable bias to query, key, value. - norm_layer (nn.Module): Normalization layer. - act_layer (nn.Module): Activation layer. - use_rel_pos (bool): If True, add relative positional embeddings to the attention map. - rel_pos_zero_init (bool): If True, zero initialize relative positional parameters. - window_size (int): Window size for window attention blocks. If it equals 0, then - use global attention. - input_size (int or None): Input resolution for calculating the relative positional - parameter size. 
- """ - super(HourglassBlock, self).__init__( - dim=dim, - num_heads=num_heads, - mlp_ratio=mlp_ratio, - qkv_bias=qkv_bias, - norm_layer=norm_layer, - act_layer=act_layer, - use_rel_pos=use_rel_pos, - rel_pos_zero_init=rel_pos_zero_init, - window_size=window_size, - input_size=input_size, - ) - - self.attn = Attention( - dim, - num_heads=num_heads, - qkv_bias=qkv_bias, - use_rel_pos=use_rel_pos, - rel_pos_zero_init=rel_pos_zero_init, - input_size=input_size if window_size == 0 else (window_size_ckpt, window_size_ckpt), - ) diff --git a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/gradio/templates/frontend/assets/index-ed471d18.css b/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/gradio/templates/frontend/assets/index-ed471d18.css deleted file mode 100644 index ea1ca6e8707c04b1e1a4517219c87d1bdab91f99..0000000000000000000000000000000000000000 --- a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/gradio/templates/frontend/assets/index-ed471d18.css +++ /dev/null @@ -1 +0,0 @@ -span.svelte-1vnmhm4{text-shadow:0 0 8px rgba(0,0,0,.5)}progress.svelte-1vnmhm4{margin-right:var(--size-3);border-radius:var(--radius-sm);width:var(--size-full);height:var(--size-2)}progress.svelte-1vnmhm4::-webkit-progress-bar{border-radius:2px;background-color:#fff3;overflow:hidden}progress.svelte-1vnmhm4::-webkit-progress-value{background-color:#ffffffe6}video.svelte-1vnmhm4{background-color:#000;width:var(--size-full);height:var(--size-full);object-fit:contain}.mirror.svelte-1vnmhm4{transform:scaleX(-1)}.controls.svelte-1vnmhm4{position:absolute;bottom:0;transition:.5s;margin:var(--size-2);border-radius:var(--radius-md);background:var(--color-grey-800);padding:var(--size-2) var(--size-1);width:calc(100% - .75rem);width:calc(100% - var(--size-2) * 2)}.inner.svelte-1vnmhm4{display:flex;justify-content:space-between;align-items:center;padding-right:var(--size-2);padding-left:var(--size-2);width:var(--size-full);height:var(--size-full)}.icon.svelte-1vnmhm4{display:flex;justify-content:center;cursor:pointer;width:var(--size-6);color:#fff}.time.svelte-1vnmhm4{flex-shrink:0;margin-right:var(--size-3);margin-left:var(--size-3);color:#fff;font-size:var(--text-sm);font-family:var(--font-mono)}.wrap.svelte-1vnmhm4{background-color:var(--background-fill-secondary)}.file-name.svelte-a6ruol{padding:var(--size-6);font-size:var(--text-xxl);word-break:break-all}.file-size.svelte-a6ruol{padding:var(--size-2);font-size:var(--text-xl)}.download.svelte-90pr3x{position:absolute;top:6px;right:6px} diff --git a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/gradio/themes/glass.py b/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/gradio/themes/glass.py deleted file mode 100644 index f3a93e09b7f2d25ff8b2595761274867fd5da47a..0000000000000000000000000000000000000000 --- a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/gradio/themes/glass.py +++ /dev/null @@ -1,99 +0,0 @@ -from __future__ import annotations - -from typing import Iterable - -from gradio.themes.base import Base -from gradio.themes.utils import colors, fonts, sizes - - -class Glass(Base): - def __init__( - self, - *, - primary_hue: colors.Color | str = colors.stone, - secondary_hue: colors.Color | str = colors.stone, - neutral_hue: colors.Color | str = colors.stone, - spacing_size: sizes.Size | str = sizes.spacing_sm, - radius_size: sizes.Size | str = sizes.radius_sm, - text_size: sizes.Size | str = sizes.text_sm, - font: fonts.Font - | str - | 
Iterable[fonts.Font | str] = ( - "Optima", - "Candara", - "Noto Sans", - "source-sans-pro", - "sans-serif", - ), - font_mono: fonts.Font - | str - | Iterable[fonts.Font | str] = ( - fonts.GoogleFont("IBM Plex Mono"), - "ui-monospace", - "Consolas", - "monospace", - ), - ): - super().__init__( - primary_hue=primary_hue, - secondary_hue=secondary_hue, - neutral_hue=neutral_hue, - spacing_size=spacing_size, - radius_size=radius_size, - text_size=text_size, - font=font, - font_mono=font_mono, - ) - self.name = "glass" - super().set( - body_background_fill_dark="*primary_800", - background_fill_secondary_dark="*primary_800", - block_background_fill_dark="*primary_800", - button_primary_background_fill="linear-gradient(180deg, *primary_50 0%, *primary_200 50%, *primary_300 50%, *primary_200 100%)", - button_primary_background_fill_hover="linear-gradient(180deg, *primary_100 0%, *primary_200 50%, *primary_300 50%, *primary_200 100%)", - button_primary_background_fill_dark="linear-gradient(180deg, *primary_400 0%, *primary_500 50%, *primary_600 50%, *primary_500 100%)", - button_primary_background_fill_hover_dark="linear-gradient(180deg, *primary_400 0%, *primary_500 50%, *primary_600 50%, *primary_500 100%)", - button_secondary_background_fill="*button_primary_background_fill", - button_secondary_background_fill_hover="*button_primary_background_fill_hover", - button_secondary_background_fill_dark="*button_primary_background_fill", - button_secondary_background_fill_hover_dark="*button_primary_background_fill_hover", - button_cancel_background_fill="*button_primary_background_fill", - button_cancel_background_fill_hover="*button_primary_background_fill_hover", - button_cancel_background_fill_dark="*button_primary_background_fill", - button_cancel_background_fill_hover_dark="*button_primary_background_fill_hover", - button_cancel_border_color="*button_secondary_border_color", - button_cancel_border_color_dark="*button_secondary_border_color", - button_cancel_text_color="*button_secondary_text_color", - checkbox_border_width="0px", - checkbox_label_background_fill="*button_secondary_background_fill", - checkbox_label_background_fill_dark="*button_secondary_background_fill", - checkbox_label_background_fill_hover="*button_secondary_background_fill_hover", - checkbox_label_background_fill_hover_dark="*button_secondary_background_fill_hover", - checkbox_label_border_width="1px", - checkbox_background_color_dark="*primary_600", - button_border_width="1px", - button_shadow_active="*shadow_inset", - input_background_fill="linear-gradient(0deg, *secondary_50 0%, white 100%)", - input_background_fill_dark="*secondary_600", - input_border_color_focus_dark="*primary_400", - input_border_width="1px", - slider_color="*primary_400", - block_label_text_color="*primary_500", - block_title_text_color="*primary_500", - block_label_text_weight="600", - block_title_text_weight="600", - block_label_text_size="*text_md", - block_title_text_size="*text_md", - block_label_background_fill="*primary_200", - block_label_background_fill_dark="*primary_700", - block_border_width="0px", - block_border_width_dark="1px", - panel_border_width="1px", - border_color_primary_dark="*primary_500", - background_fill_primary_dark="*neutral_700", - background_fill_secondary="*primary_100", - block_background_fill="*primary_50", - block_shadow="*primary_400 0px 0px 3px 0px", - table_even_background_fill_dark="*neutral_700", - table_odd_background_fill_dark="*neutral_700", - ) diff --git 
a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/huggingface_hub/utils/_telemetry.py b/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/huggingface_hub/utils/_telemetry.py deleted file mode 100644 index 5de988e2795188324f69232d1beb68191591715d..0000000000000000000000000000000000000000 --- a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/huggingface_hub/utils/_telemetry.py +++ /dev/null @@ -1,118 +0,0 @@ -from queue import Queue -from threading import Lock, Thread -from typing import Dict, Optional, Union -from urllib.parse import quote - -from .. import constants, logging -from . import build_hf_headers, get_session, hf_raise_for_status - - -logger = logging.get_logger(__name__) - -# Telemetry is sent by a separate thread to avoid blocking the main thread. -# A daemon thread is started once and consume tasks from the _TELEMETRY_QUEUE. -# If the thread stops for some reason -shouldn't happen-, we restart a new one. -_TELEMETRY_THREAD: Optional[Thread] = None -_TELEMETRY_THREAD_LOCK = Lock() # Lock to avoid starting multiple threads in parallel -_TELEMETRY_QUEUE: Queue = Queue() - - -def send_telemetry( - topic: str, - *, - library_name: Optional[str] = None, - library_version: Optional[str] = None, - user_agent: Union[Dict, str, None] = None, -) -> None: - """ - Sends telemetry that helps tracking usage of different HF libraries. - - This usage data helps us debug issues and prioritize new features. However, we understand that not everyone wants - to share additional information, and we respect your privacy. You can disable telemetry collection by setting the - `HF_HUB_DISABLE_TELEMETRY=1` as environment variable. Telemetry is also disabled in offline mode (i.e. when setting - `HF_HUB_OFFLINE=1`). - - Telemetry collection is run in a separate thread to minimize impact for the user. - - Args: - topic (`str`): - Name of the topic that is monitored. The topic is directly used to build the URL. If you want to monitor - subtopics, just use "/" separation. Examples: "gradio", "transformers/examples",... - library_name (`str`, *optional*): - The name of the library that is making the HTTP request. Will be added to the user-agent header. - library_version (`str`, *optional*): - The version of the library that is making the HTTP request. Will be added to the user-agent header. - user_agent (`str`, `dict`, *optional*): - The user agent info in the form of a dictionary or a single string. It will be completed with information about the installed packages. - - Example: - ```py - >>> from huggingface_hub.utils import send_telemetry - - # Send telemetry without library information - >>> send_telemetry("ping") - - # Send telemetry to subtopic with library information - >>> send_telemetry("gradio/local_link", library_name="gradio", library_version="3.22.1") - - # Send telemetry with additional data - >>> send_telemetry( - ... topic="examples", - ... library_name="transformers", - ... library_version="4.26.0", - ... user_agent={"pipeline": "text_classification", "framework": "flax"}, - ... ) - ``` - """ - if constants.HF_HUB_OFFLINE or constants.HF_HUB_DISABLE_TELEMETRY: - return - - _start_telemetry_thread() # starts thread only if doesn't exist yet - _TELEMETRY_QUEUE.put( - {"topic": topic, "library_name": library_name, "library_version": library_version, "user_agent": user_agent} - ) - - -def _start_telemetry_thread(): - """Start a daemon thread to consume tasks from the telemetry queue. - - If the thread is interrupted, start a new one. 
- """ - with _TELEMETRY_THREAD_LOCK: # avoid to start multiple threads if called concurrently - global _TELEMETRY_THREAD - if _TELEMETRY_THREAD is None or not _TELEMETRY_THREAD.is_alive(): - _TELEMETRY_THREAD = Thread(target=_telemetry_worker, daemon=True) - _TELEMETRY_THREAD.start() - - -def _telemetry_worker(): - """Wait for a task and consume it.""" - while True: - kwargs = _TELEMETRY_QUEUE.get() - _send_telemetry_in_thread(**kwargs) - _TELEMETRY_QUEUE.task_done() - - -def _send_telemetry_in_thread( - topic: str, - *, - library_name: Optional[str] = None, - library_version: Optional[str] = None, - user_agent: Union[Dict, str, None] = None, -) -> None: - """Contains the actual data sending data to the Hub.""" - path = "/".join(quote(part) for part in topic.split("/") if len(part) > 0) - try: - r = get_session().head( - f"{constants.ENDPOINT}/api/telemetry/{path}", - headers=build_hf_headers( - token=False, # no need to send a token for telemetry - library_name=library_name, - library_version=library_version, - user_agent=user_agent, - ), - ) - hf_raise_for_status(r) - except Exception as e: - # We don't want to error in case of connection errors of any kind. - logger.debug(f"Error while sending telemetry: {e}") diff --git a/spaces/lafi23333/aikomori/inference_main.py b/spaces/lafi23333/aikomori/inference_main.py deleted file mode 100644 index db6f9634bb276097eae82cac1776a76150003660..0000000000000000000000000000000000000000 --- a/spaces/lafi23333/aikomori/inference_main.py +++ /dev/null @@ -1,56 +0,0 @@ -import io -import logging -import time -from pathlib import Path - -import librosa -import numpy as np -import soundfile - -from inference import infer_tool -from inference import slicer -from inference.infer_tool import Svc - -logging.getLogger('numba').setLevel(logging.WARNING) -chunks_dict = infer_tool.read_temp("inference/chunks_temp.json") - -model_path = "logs/32k/G_174000-Copy1.pth" -config_path = "configs/config.json" -svc_model = Svc(model_path, config_path) -infer_tool.mkdir(["raw", "results"]) - -# 支持多个wav文件,放在raw文件夹下 -clean_names = ["君の知らない物語-src"] -trans = [-5] # 音高调整,支持正负(半音) -spk_list = ['yunhao'] # 每次同时合成多语者音色 -slice_db = -40 # 默认-40,嘈杂的音频可以-30,干声保留呼吸可以-50 -wav_format = 'flac' # 音频输出格式 - -infer_tool.fill_a_to_b(trans, clean_names) -for clean_name, tran in zip(clean_names, trans): - raw_audio_path = f"raw/{clean_name}" - if "." 
not in raw_audio_path: - raw_audio_path += ".wav" - infer_tool.format_wav(raw_audio_path) - wav_path = Path(raw_audio_path).with_suffix('.wav') - chunks = slicer.cut(wav_path, db_thresh=slice_db) - audio_data, audio_sr = slicer.chunks2audio(wav_path, chunks) - - for spk in spk_list: - audio = [] - for (slice_tag, data) in audio_data: - print(f'#=====segment start, {round(len(data) / audio_sr, 3)}s======') - length = int(np.ceil(len(data) / audio_sr * svc_model.target_sample)) - raw_path = io.BytesIO() - soundfile.write(raw_path, data, audio_sr, format="wav") - raw_path.seek(0) - if slice_tag: - print('jump empty segment') - _audio = np.zeros(length) - else: - out_audio, out_sr = svc_model.infer(spk, tran, raw_path) - _audio = out_audio.cpu().numpy() - audio.extend(list(_audio)) - - res_path = f'./results/{clean_name}_{tran}key_{spk}.{wav_format}' - soundfile.write(res_path, audio, svc_model.target_sample, format=wav_format) diff --git a/spaces/legacy107/flan-t5-large-ia3-cpgqa/README.md b/spaces/legacy107/flan-t5-large-ia3-cpgqa/README.md deleted file mode 100644 index 8419997642f29d6143a9b14096c8dfc818e55918..0000000000000000000000000000000000000000 --- a/spaces/legacy107/flan-t5-large-ia3-cpgqa/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Flan T5 Large Ia3 CpgQA -emoji: 👀 -colorFrom: green -colorTo: purple -sdk: gradio -sdk_version: 3.50.2 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/lincquiQcaudo/Top-20-Diffusion/Empress And The Warriors 720p Video.md b/spaces/lincquiQcaudo/Top-20-Diffusion/Empress And The Warriors 720p Video.md deleted file mode 100644 index 1f86f35f3cf1dc7ab59357f21a54f77fb4bb09d7..0000000000000000000000000000000000000000 --- a/spaces/lincquiQcaudo/Top-20-Diffusion/Empress And The Warriors 720p Video.md +++ /dev/null @@ -1,6 +0,0 @@ -

          Empress And The Warriors 720p Video


          Download Zip » https://bytlly.com/2uGw9U



- -January 23, 2020 - Download Link for The Empress and the Warriors (2008) BluRay 720p 750MB via Google Drive | Through Acefile. 8a78ff9644
          -
          -
          -

          diff --git a/spaces/liujch1998/vera/app.py b/spaces/liujch1998/vera/app.py deleted file mode 100644 index 8155b57fc8a6a1d719fe770e78d240c5538bf578..0000000000000000000000000000000000000000 --- a/spaces/liujch1998/vera/app.py +++ /dev/null @@ -1,356 +0,0 @@ -import gradio as gr -import os -import torch -import transformers -import huggingface_hub -import datetime -import json -import shutil -import threading - -device = torch.device('cuda') if torch.cuda.is_available() else torch.device('cpu') - -# To suppress the following warning: -# huggingface/tokenizers: The current process just got forked, after parallelism has already been used. Disabling parallelism to avoid deadlocks... -os.environ["TOKENIZERS_PARALLELISM"] = "false" - -HF_TOKEN_DOWNLOAD = os.environ['HF_TOKEN_DOWNLOAD'] -HF_TOKEN_UPLOAD = os.environ['HF_TOKEN_UPLOAD'] -MODE = os.environ['MODE'] # 'debug' or 'prod' - -MODEL_NAME = 'liujch1998/vera' -DATASET_REPO_URL = "https://huggingface.co/datasets/liujch1998/cd-pi-dataset" -DATA_DIR = 'data' -DATA_FILENAME = 'data.jsonl' if MODE != 'debug' else 'data_debug.jsonl' -DATA_PATH = os.path.join(DATA_DIR, DATA_FILENAME) - -class Interactive: - def __init__(self): - self.tokenizer = transformers.AutoTokenizer.from_pretrained(MODEL_NAME, use_auth_token=HF_TOKEN_DOWNLOAD) - if MODE == 'debug': - return - self.model = transformers.T5EncoderModel.from_pretrained(MODEL_NAME, use_auth_token=HF_TOKEN_DOWNLOAD, low_cpu_mem_usage=True, device_map='auto', torch_dtype='auto', offload_folder='offload') - self.model.D = self.model.shared.embedding_dim - self.linear = torch.nn.Linear(self.model.D, 1, dtype=self.model.dtype).to(device) - self.linear.weight = torch.nn.Parameter(self.model.shared.weight[32099, :].unsqueeze(0)) # (1, D) - self.linear.bias = torch.nn.Parameter(self.model.shared.weight[32098, 0].unsqueeze(0)) # (1) - self.model.eval() - self.t = self.model.shared.weight[32097, 0].item() - - def run(self, statement): - if MODE == 'debug': - return { - 'timestamp': datetime.datetime.now().strftime('%Y%m%d-%H%M%S'), - 'statement': statement, - 'logit': 0.0, - 'logit_calibrated': 0.0, - 'score': 0.5, - 'score_calibrated': 0.5, - } - input_ids = self.tokenizer.batch_encode_plus([statement], return_tensors='pt', padding='longest', truncation='longest_first', max_length=128).input_ids.to(device) - with torch.no_grad(): - output = self.model(input_ids) - last_hidden_state = output.last_hidden_state.to(device) # (B=1, L, D) - hidden = last_hidden_state[0, -1, :] # (D) - logit = self.linear(hidden).squeeze(-1) # () - logit_calibrated = logit / self.t - score = logit.sigmoid() - score_calibrated = logit_calibrated.sigmoid() - return { - 'timestamp': datetime.datetime.now().strftime('%Y%m%d-%H%M%S'), - 'statement': statement, - 'logit': logit.item(), - 'logit_calibrated': logit_calibrated.item(), - 'score': score.item(), - 'score_calibrated': score_calibrated.item(), - } - - def runs(self, statements): - if MODE == 'debug': - return [{ - 'timestamp': datetime.datetime.now().strftime('%Y%m%d-%H%M%S'), - 'statement': statement, - 'logit': 0.0, - 'logit_calibrated': 0.0, - 'score': 0.5, - 'score_calibrated': 0.5, - } for statement in statements] - tok = self.tokenizer.batch_encode_plus(statements, return_tensors='pt', padding='longest') - input_ids = tok.input_ids.to(device) - attention_mask = tok.attention_mask.to(device) - with torch.no_grad(): - output = self.model(input_ids=input_ids, attention_mask=attention_mask) - last_indices = attention_mask.sum(dim=1, keepdim=True) - 1 # (B, 1) - 
last_indices = last_indices.unsqueeze(-1).expand(-1, -1, self.model.D) # (B, 1, D) - last_hidden_state = output.last_hidden_state.to(device) # (B, L, D) - hidden = last_hidden_state.gather(dim=1, index=last_indices).squeeze(1) # (B, D) - logits = self.linear(hidden).squeeze(-1) # (B) - logits_calibrated = logits / self.t - scores = logits.sigmoid() - scores_calibrated = logits_calibrated.sigmoid() - return [{ - 'timestamp': datetime.datetime.now().strftime('%Y%m%d-%H%M%S'), - 'statement': statement, - 'logit': logit.item(), - 'logit_calibrated': logit_calibrated.item(), - 'score': score.item(), - 'score_calibrated': score_calibrated.item(), - } for statement, logit, logit_calibrated, score, score_calibrated in zip(statements, logits, logits_calibrated, scores, scores_calibrated)] - -interactive = Interactive() - -try: - shutil.rmtree(DATA_DIR) -except: - pass -global repo, lock -repo = huggingface_hub.Repository( - local_dir=DATA_DIR, - clone_from=DATASET_REPO_URL, - token=HF_TOKEN_UPLOAD, - repo_type='dataset', -) -repo.git_pull() -lock = threading.Lock() - -# def predict(statement, do_save=True): -# output_raw = interactive.run(statement) -# output = { -# 'True': output_raw['score_calibrated'], -# 'False': 1 - output_raw['score_calibrated'], -# } -# if do_save: -# with open(DATA_PATH, 'a') as f: -# json.dump(output_raw, f, ensure_ascii=False) -# f.write('\n') -# commit_url = repo.push_to_hub() -# print('Logged statement to dataset:') -# print('Commit URL:', commit_url) -# print(output_raw) -# print() -# return output, output_raw, gr.update(visible=False), gr.update(visible=True), gr.update(visible=True), gr.update(value='Please provide your feedback before trying out another statement.') - -# def record_feedback(output_raw, feedback, do_save=True): -# if do_save: -# output_raw.update({ 'feedback': feedback }) -# with open(DATA_PATH, 'a') as f: -# json.dump(output_raw, f, ensure_ascii=False) -# f.write('\n') -# commit_url = repo.push_to_hub() -# print('Logged feedback to dataset:') -# print('Commit URL:', commit_url) -# print(output_raw) -# print() -# return gr.update(visible=True), gr.update(visible=False), gr.update(visible=False), gr.update(value='Thanks for your feedback! 
Now you can enter another statement.') -# def record_feedback_agree(output_raw, do_save=True): -# return record_feedback(output_raw, 'agree', do_save) -# def record_feedback_disagree(output_raw, do_save=True): -# return record_feedback(output_raw, 'disagree', do_save) - -def predict(statements, do_saves): - global lock, interactive - output_raws = interactive.runs(list(statements)) # statements is a tuple, but tokenizer takes a list - outputs = [{ - 'True': output_raw['score_calibrated'], - 'False': 1 - output_raw['score_calibrated'], - } for output_raw in output_raws] - print(f'Logging statements to {DATA_FILENAME}:') - lock.acquire() - for output_raw, do_save in zip(output_raws, do_saves): - if do_save: - print(output_raw) - with open(DATA_PATH, 'a') as f: - json.dump(output_raw, f, ensure_ascii=False) - f.write('\n') - print() - lock.release() - return outputs, output_raws, \ - [gr.update(visible=False) for _ in statements], \ - [gr.update(visible=True) for _ in statements], \ - [gr.update(visible=True) for _ in statements], \ - [gr.update(visible=True) for _ in statements], \ - [gr.update(visible=True) for _ in statements], \ - [gr.update(value='Please share your feedback before trying out another statement.') for _ in statements] - -def record_feedback(output_raws, feedback, do_saves): - global lock - print(f'Logging feedbacks to {DATA_FILENAME}:') - lock.acquire() - for output_raw, do_save in zip(output_raws, do_saves): - if do_save: - output_raw.update({ 'feedback': feedback }) - print(output_raw) - with open(DATA_PATH, 'a') as f: - json.dump(output_raw, f, ensure_ascii=False) - f.write('\n') - print() - lock.release() - return [gr.update(visible=True) for _ in output_raws], \ - [gr.update(visible=False) for _ in output_raws], \ - [gr.update(visible=False) for _ in output_raws], \ - [gr.update(visible=False) for _ in output_raws], \ - [gr.update(visible=False) for _ in output_raws], \ - [gr.update(value='Thanks for sharing your feedback! 
You can now enter another statement.') for _ in output_raws] -def record_feedback_agree(output_raws, do_saves): - return record_feedback(output_raws, 'agree', do_saves) -def record_feedback_disagree(output_raws, do_saves): - return record_feedback(output_raws, 'disagree', do_saves) -def record_feedback_uncertain(output_raws, do_saves): - return record_feedback(output_raws, 'uncertain', do_saves) -def record_feedback_outofscope(output_raws, do_saves): - return record_feedback(output_raws, 'outofscope', do_saves) - -def push(): - global repo, lock - lock.acquire() - if repo.is_repo_clean(): - # print('No new data recorded, skipping git push ...') - # print() - pass - else: - try: - commit_url = repo.push_to_hub() - except Exception as e: - print('Failed to push to git:', e) - shutil.rmtree(DATA_DIR) - repo = huggingface_hub.Repository( - local_dir=DATA_DIR, - clone_from=DATASET_REPO_URL, - token=HF_TOKEN_UPLOAD, - repo_type='dataset', - ) - repo.git_pull() - lock.release() - -examples = [ - # # openbookqa - # 'If a person walks in the opposite direction of a compass arrow they are walking south.', - # 'If a person walks in the opposite direction of a compass arrow they are walking north.', - # arc_easy - 'A pond is different from a lake because ponds are smaller and shallower.', - 'A pond is different from a lake because ponds have moving water.', - # arc_hard - 'Hunting strategies are more likely to be learned rather than inherited.', - 'A spotted coat is more likely to be learned rather than inherited.', - # ai2_science_elementary - 'Photosynthesis uses carbon from the air to make food for plants.', - 'Respiration uses carbon from the air to make food for plants.', - # ai2_science_middle - 'The barometer measures atmospheric pressure.', - 'The thermometer measures atmospheric pressure.', - # commonsenseqa - 'People aim to complete a job at work.', - 'People aim to kill animals at work.', - # qasc - 'Climate is generally described in terms of local weather conditions.', - 'Climate is generally described in terms of forests.', - # physical_iqa - 'ice box will turn into a cooler if you add water to it.', - 'ice box will turn into a cooler if you add soda to it.', - # social_iqa - 'Kendall opened their mouth to speak and what came out shocked everyone. Kendall is a very aggressive and talkative person.', - 'Kendall opened their mouth to speak and what came out shocked everyone. Kendall is a very quiet person.', - # winogrande_xl - 'Sarah was a much better surgeon than Maria so Maria always got the easier cases.', - 'Sarah was a much better surgeon than Maria so Sarah always got the easier cases.', - # com2sense_paired - 'If you want a quick snack, getting one banana would be a good choice generally.', - 'If you want a snack, getting twenty bananas would be a good choice generally.', - # sciq - 'Each specific polypeptide has a unique linear sequence of amino acids.', - 'Each specific polypeptide has a unique linear sequence of fatty acids.', - # quarel - 'Tommy glided across the marble floor with ease, but slipped and fell on the wet floor because wet floor has more resistance.', - 'Tommy glided across the marble floor with ease, but slipped and fell on the wet floor because marble floor has more resistance.', - # quartz - 'If less waters falls on an area of land it will cause less plants to grow in that area.', - 'If less waters falls on an area of land it will cause more plants to grow in that area.', - # cycic_mc - 'In U.S. spring, Rob visits the financial district every day. In U.S. 
winter, Rob visits the park every day. Rob will go to the park on January 20.', - 'In U.S. spring, Rob visits the financial district every day. In U.S. winter, Rob visits the park every day. Rob will go to the financial district on January 20.', - # comve_a - 'Summer in North America is great for swimming, boating, and fishing.', - 'Summer in North America is great for skiing, snowshoeing, and making a snowman.', - # csqa2 - 'Gas is always capable of turning into liquid under high pressure.', - 'Cotton candy is sometimes made out of cotton.', - # symkd_anno - 'James visits a famous landmark. As a result, James learns about the world.', - 'Cliff and Andrew enter the castle. But before, Cliff needed to have been a student at the school.', - # gengen_anno - 'Generally, bar patrons are capable of taking care of their own drinks.', - 'Generally, ocean currents have little influence over storm intensity.', - - # 'If A sits next to B and B sits next to C, then A must sit next to C.', - # 'If A sits next to B and B sits next to C, then A might not sit next to C.', -] - -# input_statement = gr.Dropdown(choices=examples, label='Statement:') -# input_model = gr.Textbox(label='Commonsense statement verification model:', value=MODEL_NAME, interactive=False) -# output = gr.outputs.Label(num_top_classes=2) - -# description = '''This is a demo for Vera, a commonsense statement verification model. Under development. -# ⚠️ Data Collection: by default, we are collecting the inputs entered in this app to further improve and evaluate the model. Do not share any personal or sensitive information while using the app!''' - -# gr.Interface( -# fn=predict, -# inputs=[input_statement, input_model], -# outputs=output, -# title="Vera", -# description=description, -# ).launch() - -with gr.Blocks() as demo: - with gr.Column(): - gr.Markdown( - '''# Vera - - Vera is a commonsense statement verification model. See our paper at: [https://arxiv.org/abs/2305.03695](https://arxiv.org/abs/2305.03695). - - Type a commonsense statement in the box below and click the submit button to see Vera's prediction on its correctness. You can try both correct and incorrect statements. If you are looking for inspiration, try the examples at the bottom of the page! - - We'd love your feedback! Please indicate whether you agree or disagree with Vera's prediction (and don't mind the percentage numbers). If you're unsure or the statement doesn't have a certain correctness label, please select "Uncertain". If your input is actually not a statement about commonsense, please select "I don't think this is a statement about commonsense". - - **Intended Use**: Vera is a research prototype and may make mistakes. Do not use for making critical decisions. It is intended to predict the correctness of commonsense statements, and may be unreliable when taking input out of this scope. **DO NOT input encyclopedic facts.** Vera is trained on **English** data only, please do not input statements in other languages. - - **Data Collection**: By default, we are collecting the inputs entered in this app to further improve and evaluate the model. Do not share any personal or sensitive information while using the app! 
You can opt out of this data collection by removing the checkbox below: - ''' - ) - with gr.Row(): - with gr.Column(scale=3): - do_save = gr.Checkbox( - value=True, - label="Store data", - info="You agree to the storage of your input for research and development purposes:") - statement = gr.Textbox(placeholder='Enter a commonsense statement here, or select an example from below', label='Statement', interactive=True) - submit = gr.Button(value='Submit', variant='primary', visible=True) - with gr.Column(scale=2): - output = gr.Label(num_top_classes=2, interactive=False) - output_raw = gr.JSON(visible=False) - with gr.Row(): - feedback_agree = gr.Button(value='👍 Agree', variant='secondary', visible=False) - feedback_uncertain = gr.Button(value='🤔 Uncertain', variant='secondary', visible=False) - feedback_disagree = gr.Button(value='👎 Disagree', variant='secondary', visible=False) - feedback_outofscope = gr.Button(value='🚫 I don\'t think this a statement about commonsense', variant='secondary', visible=False) - feedback_ack = gr.Markdown(value='', visible=True, interactive=False) - gr.Markdown('\n---\n') - with gr.Row(): - gr.Examples( - examples=examples, - fn=predict, - inputs=[statement], - outputs=[output, output_raw, statement, submit, feedback_agree, feedback_disagree, feedback_ack], - examples_per_page=100, - cache_examples=False, - run_on_click=False, # If we want this to be True, I suspect we need to enable the statement.submit() - ) - submit.click(predict, inputs=[statement, do_save], outputs=[output, output_raw, submit, feedback_agree, feedback_uncertain, feedback_disagree, feedback_outofscope, feedback_ack], batch=True, max_batch_size=16) - # statement.submit(predict, inputs=[statement], outputs=[output, output_raw]) - feedback_agree.click(record_feedback_agree, inputs=[output_raw, do_save], outputs=[submit, feedback_agree, feedback_uncertain, feedback_disagree, feedback_outofscope, feedback_ack], batch=True, max_batch_size=16) - feedback_uncertain.click(record_feedback_uncertain, inputs=[output_raw, do_save], outputs=[submit, feedback_agree, feedback_uncertain, feedback_disagree, feedback_outofscope, feedback_ack], batch=True, max_batch_size=16) - feedback_disagree.click(record_feedback_disagree, inputs=[output_raw, do_save], outputs=[submit, feedback_agree, feedback_uncertain, feedback_disagree, feedback_outofscope, feedback_ack], batch=True, max_batch_size=16) - feedback_outofscope.click(record_feedback_outofscope, inputs=[output_raw, do_save], outputs=[submit, feedback_agree, feedback_uncertain, feedback_disagree, feedback_outofscope, feedback_ack], batch=True, max_batch_size=16) - - demo.load(push, inputs=None, outputs=None, every=60) # Push to git every 60 seconds - -demo.queue(concurrency_count=1).launch(debug=True) diff --git a/spaces/lnyan/stablediffusion-infinity/js/upload.js b/spaces/lnyan/stablediffusion-infinity/js/upload.js deleted file mode 100644 index 4842960af4985847ff24c93c7f730e8e64974690..0000000000000000000000000000000000000000 --- a/spaces/lnyan/stablediffusion-infinity/js/upload.js +++ /dev/null @@ -1,19 +0,0 @@ -function(a,b){ - if(!window.my_observe_upload) - { - console.log("setup upload here"); - window.my_observe_upload = new MutationObserver(function (event) { - console.log(event); - var frame=document.querySelector("gradio-app").shadowRoot.querySelector("#sdinfframe").contentWindow.document; - frame.querySelector("#upload").click(); - }); - window.my_observe_upload_target = document.querySelector("gradio-app").shadowRoot.querySelector("#upload span"); 
- window.my_observe_upload.observe(window.my_observe_upload_target, { - attributes: false, - subtree: true, - childList: true, - characterData: true - }); - } - return [a,b]; -} \ No newline at end of file diff --git a/spaces/lojban/text-to-speech/vits/mel_processing.py b/spaces/lojban/text-to-speech/vits/mel_processing.py deleted file mode 100644 index 817f03756f64caf8cc54329a9325024c8fb9e0c3..0000000000000000000000000000000000000000 --- a/spaces/lojban/text-to-speech/vits/mel_processing.py +++ /dev/null @@ -1,112 +0,0 @@ -import math -import os -import random -import torch -from torch import nn -import torch.nn.functional as F -import torch.utils.data -import numpy as np -import librosa -import librosa.util as librosa_util -from librosa.util import normalize, pad_center, tiny -from scipy.signal import get_window -from scipy.io.wavfile import read -from librosa.filters import mel as librosa_mel_fn - -MAX_WAV_VALUE = 32768.0 - - -def dynamic_range_compression_torch(x, C=1, clip_val=1e-5): - """ - PARAMS - ------ - C: compression factor - """ - return torch.log(torch.clamp(x, min=clip_val) * C) - - -def dynamic_range_decompression_torch(x, C=1): - """ - PARAMS - ------ - C: compression factor used to compress - """ - return torch.exp(x) / C - - -def spectral_normalize_torch(magnitudes): - output = dynamic_range_compression_torch(magnitudes) - return output - - -def spectral_de_normalize_torch(magnitudes): - output = dynamic_range_decompression_torch(magnitudes) - return output - - -mel_basis = {} -hann_window = {} - - -def spectrogram_torch(y, n_fft, sampling_rate, hop_size, win_size, center=False): - if torch.min(y) < -1.: - print('min value is ', torch.min(y)) - if torch.max(y) > 1.: - print('max value is ', torch.max(y)) - - global hann_window - dtype_device = str(y.dtype) + '_' + str(y.device) - wnsize_dtype_device = str(win_size) + '_' + dtype_device - if wnsize_dtype_device not in hann_window: - hann_window[wnsize_dtype_device] = torch.hann_window(win_size).to(dtype=y.dtype, device=y.device) - - y = torch.nn.functional.pad(y.unsqueeze(1), (int((n_fft-hop_size)/2), int((n_fft-hop_size)/2)), mode='reflect') - y = y.squeeze(1) - - spec = torch.stft(y, n_fft, hop_length=hop_size, win_length=win_size, window=hann_window[wnsize_dtype_device], - center=center, pad_mode='reflect', normalized=False, onesided=True) - - spec = torch.sqrt(spec.pow(2).sum(-1) + 1e-6) - return spec - - -def spec_to_mel_torch(spec, n_fft, num_mels, sampling_rate, fmin, fmax): - global mel_basis - dtype_device = str(spec.dtype) + '_' + str(spec.device) - fmax_dtype_device = str(fmax) + '_' + dtype_device - if fmax_dtype_device not in mel_basis: - mel = librosa_mel_fn(sampling_rate, n_fft, num_mels, fmin, fmax) - mel_basis[fmax_dtype_device] = torch.from_numpy(mel).to(dtype=spec.dtype, device=spec.device) - spec = torch.matmul(mel_basis[fmax_dtype_device], spec) - spec = spectral_normalize_torch(spec) - return spec - - -def mel_spectrogram_torch(y, n_fft, num_mels, sampling_rate, hop_size, win_size, fmin, fmax, center=False): - if torch.min(y) < -1.: - print('min value is ', torch.min(y)) - if torch.max(y) > 1.: - print('max value is ', torch.max(y)) - - global mel_basis, hann_window - dtype_device = str(y.dtype) + '_' + str(y.device) - fmax_dtype_device = str(fmax) + '_' + dtype_device - wnsize_dtype_device = str(win_size) + '_' + dtype_device - if fmax_dtype_device not in mel_basis: - mel = librosa_mel_fn(sampling_rate, n_fft, num_mels, fmin, fmax) - mel_basis[fmax_dtype_device] = 
torch.from_numpy(mel).to(dtype=y.dtype, device=y.device) - if wnsize_dtype_device not in hann_window: - hann_window[wnsize_dtype_device] = torch.hann_window(win_size).to(dtype=y.dtype, device=y.device) - - y = torch.nn.functional.pad(y.unsqueeze(1), (int((n_fft-hop_size)/2), int((n_fft-hop_size)/2)), mode='reflect') - y = y.squeeze(1) - - spec = torch.stft(y, n_fft, hop_length=hop_size, win_length=win_size, window=hann_window[wnsize_dtype_device], - center=center, pad_mode='reflect', normalized=False, onesided=True) - - spec = torch.sqrt(spec.pow(2).sum(-1) + 1e-6) - - spec = torch.matmul(mel_basis[fmax_dtype_device], spec) - spec = spectral_normalize_torch(spec) - - return spec diff --git a/spaces/luxuedong/lxd/src/lib/hooks/use-enter-submit.tsx b/spaces/luxuedong/lxd/src/lib/hooks/use-enter-submit.tsx deleted file mode 100644 index d66b2d3253baff164235d4ca791aae6d84721835..0000000000000000000000000000000000000000 --- a/spaces/luxuedong/lxd/src/lib/hooks/use-enter-submit.tsx +++ /dev/null @@ -1,23 +0,0 @@ -import { useRef, type RefObject } from 'react' - -export function useEnterSubmit(): { - formRef: RefObject - onKeyDown: (event: React.KeyboardEvent) => void -} { - const formRef = useRef(null) - - const handleKeyDown = ( - event: React.KeyboardEvent - ): void => { - if ( - event.key === 'Enter' && - !event.shiftKey && - !event.nativeEvent.isComposing - ) { - formRef.current?.requestSubmit() - event.preventDefault() - } - } - - return { formRef, onKeyDown: handleKeyDown } -} diff --git a/spaces/ma-xu/LIVE/pybind11/tests/test_embed/test_interpreter.py b/spaces/ma-xu/LIVE/pybind11/tests/test_embed/test_interpreter.py deleted file mode 100644 index 6174ede446f0356fbdf61aee4136535a78a32479..0000000000000000000000000000000000000000 --- a/spaces/ma-xu/LIVE/pybind11/tests/test_embed/test_interpreter.py +++ /dev/null @@ -1,10 +0,0 @@ -# -*- coding: utf-8 -*- -from widget_module import Widget - - -class DerivedWidget(Widget): - def __init__(self, message): - super(DerivedWidget, self).__init__(message) - - def the_answer(self): - return 42 diff --git a/spaces/ma-xu/LIVE/thrust/internal/reverse_rename_cub_namespace.sh b/spaces/ma-xu/LIVE/thrust/internal/reverse_rename_cub_namespace.sh deleted file mode 100644 index bc4858449577af60ac3acbc2ea745f5444da96c5..0000000000000000000000000000000000000000 --- a/spaces/ma-xu/LIVE/thrust/internal/reverse_rename_cub_namespace.sh +++ /dev/null @@ -1,7 +0,0 @@ -#! /bin/bash - -# Run this in //sw/gpgpu/thrust/thrust/system/cuda/detail/cub to undo the -# renaming of CUB's namespace macro. - -sed -i -e 's|THRUST_CUB_NS_P|CUB_NS_P|g' `find . -type f` - diff --git a/spaces/ma-xu/LIVE/thrust/thrust/async/transform.h b/spaces/ma-xu/LIVE/thrust/thrust/async/transform.h deleted file mode 100644 index 89687e93ad38ed03df4638b0b98f15b78c8826d7..0000000000000000000000000000000000000000 --- a/spaces/ma-xu/LIVE/thrust/thrust/async/transform.h +++ /dev/null @@ -1,134 +0,0 @@ -/* - * Copyright 2008-2018 NVIDIA Corporation - * - * Licensed under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. - * You may obtain a transform of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
- * See the License for the specific language governing permissions and - * limitations under the License. - */ - -/*! \file async/transform.h - * \brief Functions for asynchronously transforming a range. - */ - -#pragma once - -#include -#include - -#if THRUST_CPP_DIALECT >= 2014 - -#include -#include -#include -#include - -#include - -namespace thrust -{ - -namespace async -{ - -namespace unimplemented -{ - -template < - typename DerivedPolicy -, typename ForwardIt, typename Sentinel, typename OutputIt -, typename UnaryOperation -> -__host__ -event -async_transform( - thrust::execution_policy& exec -, ForwardIt first, Sentinel last, OutputIt output, UnaryOperation op -) -{ - THRUST_STATIC_ASSERT_MSG( - (thrust::detail::depend_on_instantiation::value) - , "this algorithm is not implemented for the specified system" - ); - return {}; -} - -} // namespace unimplemented - -namespace transform_detail -{ - -using thrust::async::unimplemented::async_transform; - -struct transform_fn final -{ - template < - typename DerivedPolicy - , typename ForwardIt, typename Sentinel, typename OutputIt - , typename UnaryOperation - > - __host__ - static auto - call( - thrust::detail::execution_policy_base const& exec - , ForwardIt&& first, Sentinel&& last - , OutputIt&& output - , UnaryOperation&& op - ) - // ADL dispatch. - THRUST_RETURNS( - async_transform( - thrust::detail::derived_cast(thrust::detail::strip_const(exec)) - , THRUST_FWD(first), THRUST_FWD(last) - , THRUST_FWD(output) - , THRUST_FWD(op) - ) - ) - - template < - typename ForwardIt, typename Sentinel, typename OutputIt - , typename UnaryOperation - > - __host__ - static auto call( - ForwardIt&& first, Sentinel&& last - , OutputIt&& output - , UnaryOperation&& op - ) - THRUST_RETURNS( - transform_fn::call( - thrust::detail::select_system( - typename iterator_system>::type{} - , typename iterator_system>::type{} - ) - , THRUST_FWD(first), THRUST_FWD(last) - , THRUST_FWD(output) - , THRUST_FWD(op) - ) - ) - - template - THRUST_NODISCARD __host__ - auto operator()(Args&&... args) const - THRUST_RETURNS( - call(THRUST_FWD(args)...) - ) -}; - -} // namespace tranform_detail - -THRUST_INLINE_CONSTANT transform_detail::transform_fn transform{}; - -} // namespace async - -} // end namespace thrust - -#endif - diff --git a/spaces/ma-xu/LIVE/thrust/thrust/system/cuda/detail/guarded_driver_types.h b/spaces/ma-xu/LIVE/thrust/thrust/system/cuda/detail/guarded_driver_types.h deleted file mode 100644 index 076964071cf78458de27fe54de3caf932ce93b40..0000000000000000000000000000000000000000 --- a/spaces/ma-xu/LIVE/thrust/thrust/system/cuda/detail/guarded_driver_types.h +++ /dev/null @@ -1,63 +0,0 @@ -/* - * Copyright 2008-2013 NVIDIA Corporation - * - * Licensed under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -#pragma once - -#include - -// the purpose of this header is to #include without causing -// warnings from redefinitions of __host__ and __device__. 
-// carefully save their definitions and restore them -// can't tell exactly when push_macro & pop_macro were introduced to gcc; assume 4.5.0 - - -#if !defined(__GNUC__) || ((10000 * __GNUC__ + 100 * __GNUC_MINOR__ + __GNUC_PATCHLEVEL__) >= 40500) -# ifdef __host__ -# pragma push_macro("__host__") -# undef __host__ -# define THRUST_HOST_NEEDS_RESTORATION -# endif -# ifdef __device__ -# pragma push_macro("__device__") -# undef __device__ -# define THRUST_DEVICE_NEEDS_RESTORATION -# endif -#else // GNUC pre 4.5.0 -# if !defined(__DRIVER_TYPES_H__) -# ifdef __host__ -# undef __host__ -# endif -# ifdef __device__ -# undef __device__ -# endif -# endif // __DRIVER_TYPES_H__ -#endif // __GNUC__ - - -#include - - -#if !defined(__GNUC__) || ((10000 * __GNUC__ + 100 * __GNUC_MINOR__ + __GNUC_PATCHLEVEL__) >= 40500) -# ifdef THRUST_HOST_NEEDS_RESTORATION -# pragma pop_macro("__host__") -# undef THRUST_HOST_NEEDS_RESTORATION -# endif -# ifdef THRUST_DEVICE_NEEDS_RESTORATION -# pragma pop_macro("__device__") -# undef THRUST_DEVICE_NEEDS_RESTORATION -# endif -#endif // __GNUC__ - diff --git a/spaces/magicr/BuboGPT/bubogpt/datasets/builders/multimodal_base_dataset_builder.py b/spaces/magicr/BuboGPT/bubogpt/datasets/builders/multimodal_base_dataset_builder.py deleted file mode 100644 index ca61ab87470d563b2c1eff4bdcf0c3f1efb55c0f..0000000000000000000000000000000000000000 --- a/spaces/magicr/BuboGPT/bubogpt/datasets/builders/multimodal_base_dataset_builder.py +++ /dev/null @@ -1,74 +0,0 @@ -import logging - -import torch.distributed as dist - -import bubogpt.common.utils as utils -from bubogpt.common.dist_utils import is_dist_avail_and_initialized, is_main_process -from bubogpt.common.registry import registry -from bubogpt.datasets.builders import load_dataset_config -from bubogpt.processors.base_processor import BaseProcessor - - -class MultimodalBaseDatasetBuilder(): - train_dataset_cls, eval_dataset_cls = None, None - - def __init__(self, cfg=None): - super().__init__() - - if cfg is None: - # help to create datasets from default config. - self.config = load_dataset_config(self.default_config_path()) - elif isinstance(cfg, str): - self.config = load_dataset_config(cfg) - else: - # when called from task.build_dataset() - self.config = cfg - - self.data_type = self.config.data_type.split("_") - # It will be a list like ["audio", "image"], etc. - - # Add "text" manually here. - - self.processors = {modal: {"train": BaseProcessor(), "eval": BaseProcessor()} - for modal in [*self.data_type, "text"]} - - def build_datasets(self): - # download, split, etc... - # only called on 1 GPU/TPU in distributed - - if is_main_process(): - self._download_data() - - if is_dist_avail_and_initialized(): - dist.barrier() - - # at this point, all the annotations and image/videos should be all downloaded to the specified locations. 
- logging.info("Building datasets...") - datasets = self.build() # dataset['train'/'val'/'test'] - - return datasets - - def build_processors(self): - for modal in [*self.data_type, "text"]: - proc_cfg = self.config.get("{}_processor".format(modal)) - if proc_cfg is not None: - train_cfg = proc_cfg.get("train") - eval_cfg = proc_cfg.get("eval") - self.processors[modal]["train"] = self._build_proc_from_cfg(train_cfg) - self.processors[modal]["eval"] = self._build_proc_from_cfg(eval_cfg) - - - @staticmethod - def _build_proc_from_cfg(cfg): - return ( - registry.get_processor_class(cfg.name).from_config(cfg) - if cfg is not None - else None - ) - - @classmethod - def default_config_path(cls, type="default"): - return utils.get_abs_path(cls.DATASET_CONFIG_DICT[type]) - - def _download_data(self): - pass diff --git a/spaces/mascIT/AgeGuesser/yolov5/models/yolo.py b/spaces/mascIT/AgeGuesser/yolov5/models/yolo.py deleted file mode 100644 index 66b965ce2f9e8aebbdf3f4831154045d05f472f7..0000000000000000000000000000000000000000 --- a/spaces/mascIT/AgeGuesser/yolov5/models/yolo.py +++ /dev/null @@ -1,324 +0,0 @@ -# YOLOv5 🚀 by Ultralytics, GPL-3.0 license -""" -YOLO-specific modules - -Usage: - $ python path/to/models/yolo.py --cfg yolov5s.yaml -""" - -import argparse -import sys -from copy import deepcopy -from pathlib import Path - -FILE = Path(__file__).resolve() -ROOT = FILE.parents[1] # YOLOv5 root directory -if str(ROOT) not in sys.path: - sys.path.append(str(ROOT)) # add ROOT to PATH -# ROOT = ROOT.relative_to(Path.cwd()) # relative - -from .common import * -from .experimental import * -from utils.autoanchor import check_anchor_order -from utils.general import LOGGER, make_divisible, print_args -from utils.torch_utils import fuse_conv_and_bn, initialize_weights, model_info, scale_img, select_device, time_sync - -try: - import thop # for FLOPs computation -except ImportError: - thop = None - - -class Detect(nn.Module): - stride = None # strides computed during build - onnx_dynamic = False # ONNX export parameter - - def __init__(self, nc=80, anchors=(), ch=(), inplace=True): # detection layer - super().__init__() - self.nc = nc # number of classes - self.no = nc + 5 # number of outputs per anchor - self.nl = len(anchors) # number of detection layers - self.na = len(anchors[0]) // 2 # number of anchors - self.grid = [torch.zeros(1)] * self.nl # init grid - self.anchor_grid = [torch.zeros(1)] * self.nl # init anchor grid - self.register_buffer('anchors', torch.tensor(anchors).float().view(self.nl, -1, 2)) # shape(nl,na,2) - self.m = nn.ModuleList(nn.Conv2d(x, self.no * self.na, 1) for x in ch) # output conv - self.inplace = inplace # use in-place ops (e.g. 
slice assignment) - - def forward(self, x): - z = [] # inference output - for i in range(self.nl): - x[i] = self.m[i](x[i]) # conv - bs, _, ny, nx = x[i].shape # x(bs,255,20,20) to x(bs,3,20,20,85) - x[i] = x[i].view(bs, self.na, self.no, ny, nx).permute(0, 1, 3, 4, 2).contiguous() - - if not self.training: # inference - if self.onnx_dynamic or self.grid[i].shape[2:4] != x[i].shape[2:4]: - self.grid[i], self.anchor_grid[i] = self._make_grid(nx, ny, i) - - y = x[i].sigmoid() - if self.inplace: - y[..., 0:2] = (y[..., 0:2] * 2 - 0.5 + self.grid[i]) * self.stride[i] # xy - y[..., 2:4] = (y[..., 2:4] * 2) ** 2 * self.anchor_grid[i] # wh - else: # for YOLOv5 on AWS Inferentia https://github.com/ultralytics/yolov5/pull/2953 - xy = (y[..., 0:2] * 2 - 0.5 + self.grid[i]) * self.stride[i] # xy - wh = (y[..., 2:4] * 2) ** 2 * self.anchor_grid[i] # wh - y = torch.cat((xy, wh, y[..., 4:]), -1) - z.append(y.view(bs, -1, self.no)) - - return x if self.training else (torch.cat(z, 1), x) - - def _make_grid(self, nx=20, ny=20, i=0): - d = self.anchors[i].device - yv, xv = torch.meshgrid([torch.arange(ny, device=d), torch.arange(nx, device=d)], indexing='ij') - - grid = torch.stack((xv, yv), 2).expand((1, self.na, ny, nx, 2)).float() - anchor_grid = (self.anchors[i].clone() * self.stride[i]) \ - .view((1, self.na, 1, 1, 2)).expand((1, self.na, ny, nx, 2)).float() - return grid, anchor_grid - - -class Model(nn.Module): - def __init__(self, cfg='yolov5s.yaml', ch=3, nc=None, anchors=None): # model, input channels, number of classes - super().__init__() - if isinstance(cfg, dict): - self.yaml = cfg # model dict - else: # is *.yaml - import yaml # for torch hub - self.yaml_file = Path(cfg).name - with open(cfg, encoding='ascii', errors='ignore') as f: - self.yaml = yaml.safe_load(f) # model dict - - # Define model - ch = self.yaml['ch'] = self.yaml.get('ch', ch) # input channels - if nc and nc != self.yaml['nc']: - LOGGER.info(f"Overriding model.yaml nc={self.yaml['nc']} with nc={nc}") - self.yaml['nc'] = nc # override yaml value - if anchors: - LOGGER.info(f'Overriding model.yaml anchors with anchors={anchors}') - self.yaml['anchors'] = round(anchors) # override yaml value - self.model, self.save = parse_model(deepcopy(self.yaml), ch=[ch]) # model, savelist - self.names = [str(i) for i in range(self.yaml['nc'])] # default names - self.inplace = self.yaml.get('inplace', True) - - # Build strides, anchors - m = self.model[-1] # Detect() - if isinstance(m, Detect): - s = 256 # 2x min stride - m.inplace = self.inplace - m.stride = torch.tensor([s / x.shape[-2] for x in self.forward(torch.zeros(1, ch, s, s))]) # forward - m.anchors /= m.stride.view(-1, 1, 1) - check_anchor_order(m) - self.stride = m.stride - self._initialize_biases() # only run once - - # Init weights, biases - initialize_weights(self) - self.info() - LOGGER.info('') - - def forward(self, x, augment=False, profile=False, visualize=False): - if augment: - return self._forward_augment(x) # augmented inference, None - return self._forward_once(x, profile, visualize) # single-scale inference, train - - def _forward_augment(self, x): - img_size = x.shape[-2:] # height, width - s = [1, 0.83, 0.67] # scales - f = [None, 3, None] # flips (2-ud, 3-lr) - y = [] # outputs - for si, fi in zip(s, f): - xi = scale_img(x.flip(fi) if fi else x, si, gs=int(self.stride.max())) - yi = self._forward_once(xi)[0] # forward - # cv2.imwrite(f'img_{si}.jpg', 255 * xi[0].cpu().numpy().transpose((1, 2, 0))[:, :, ::-1]) # save - yi = self._descale_pred(yi, fi, si, img_size) - 
y.append(yi) - y = self._clip_augmented(y) # clip augmented tails - return torch.cat(y, 1), None # augmented inference, train - - def _forward_once(self, x, profile=False, visualize=False): - y, dt = [], [] # outputs - for m in self.model: - if m.f != -1: # if not from previous layer - x = y[m.f] if isinstance(m.f, int) else [x if j == -1 else y[j] for j in m.f] # from earlier layers - if profile: - self._profile_one_layer(m, x, dt) - x = m(x) # run - y.append(x if m.i in self.save else None) # save output - return x - - def _descale_pred(self, p, flips, scale, img_size): - # de-scale predictions following augmented inference (inverse operation) - if self.inplace: - p[..., :4] /= scale # de-scale - if flips == 2: - p[..., 1] = img_size[0] - p[..., 1] # de-flip ud - elif flips == 3: - p[..., 0] = img_size[1] - p[..., 0] # de-flip lr - else: - x, y, wh = p[..., 0:1] / scale, p[..., 1:2] / scale, p[..., 2:4] / scale # de-scale - if flips == 2: - y = img_size[0] - y # de-flip ud - elif flips == 3: - x = img_size[1] - x # de-flip lr - p = torch.cat((x, y, wh, p[..., 4:]), -1) - return p - - def _clip_augmented(self, y): - # Clip YOLOv5 augmented inference tails - nl = self.model[-1].nl # number of detection layers (P3-P5) - g = sum(4 ** x for x in range(nl)) # grid points - e = 1 # exclude layer count - i = (y[0].shape[1] // g) * sum(4 ** x for x in range(e)) # indices - y[0] = y[0][:, :-i] # large - i = (y[-1].shape[1] // g) * sum(4 ** (nl - 1 - x) for x in range(e)) # indices - y[-1] = y[-1][:, i:] # small - return y - - def _profile_one_layer(self, m, x, dt): - c = isinstance(m, Detect) # is final layer, copy input as inplace fix - o = thop.profile(m, inputs=(x.copy() if c else x,), verbose=False)[0] / 1E9 * 2 if thop else 0 # FLOPs - t = time_sync() - for _ in range(10): - m(x.copy() if c else x) - dt.append((time_sync() - t) * 100) - if m == self.model[0]: - LOGGER.info(f"{'time (ms)':>10s} {'GFLOPs':>10s} {'params':>10s} {'module'}") - LOGGER.info(f'{dt[-1]:10.2f} {o:10.2f} {m.np:10.0f} {m.type}') - if c: - LOGGER.info(f"{sum(dt):10.2f} {'-':>10s} {'-':>10s} Total") - - def _initialize_biases(self, cf=None): # initialize biases into Detect(), cf is class frequency - # https://arxiv.org/abs/1708.02002 section 3.3 - # cf = torch.bincount(torch.tensor(np.concatenate(dataset.labels, 0)[:, 0]).long(), minlength=nc) + 1. - m = self.model[-1] # Detect() module - for mi, s in zip(m.m, m.stride): # from - b = mi.bias.view(m.na, -1) # conv.bias(255) to (3,85) - b.data[:, 4] += math.log(8 / (640 / s) ** 2) # obj (8 objects per 640 image) - b.data[:, 5:] += math.log(0.6 / (m.nc - 0.999999)) if cf is None else torch.log(cf / cf.sum()) # cls - mi.bias = torch.nn.Parameter(b.view(-1), requires_grad=True) - - def _print_biases(self): - m = self.model[-1] # Detect() module - for mi in m.m: # from - b = mi.bias.detach().view(m.na, -1).T # conv.bias(255) to (3,85) - LOGGER.info( - ('%6g Conv2d.bias:' + '%10.3g' * 6) % (mi.weight.shape[1], *b[:5].mean(1).tolist(), b[5:].mean())) - - # def _print_weights(self): - # for m in self.model.modules(): - # if type(m) is Bottleneck: - # LOGGER.info('%10.3g' % (m.w.detach().sigmoid() * 2)) # shortcut weights - - def fuse(self): # fuse model Conv2d() + BatchNorm2d() layers - LOGGER.info('Fusing layers... 
') - for m in self.model.modules(): - if isinstance(m, (Conv, DWConv)) and hasattr(m, 'bn'): - m.conv = fuse_conv_and_bn(m.conv, m.bn) # update conv - delattr(m, 'bn') # remove batchnorm - m.forward = m.forward_fuse # update forward - self.info() - return self - - def info(self, verbose=False, img_size=640): # print model information - model_info(self, verbose, img_size) - - def _apply(self, fn): - # Apply to(), cpu(), cuda(), half() to model tensors that are not parameters or registered buffers - self = super()._apply(fn) - m = self.model[-1] # Detect() - if isinstance(m, Detect): - m.stride = fn(m.stride) - m.grid = list(map(fn, m.grid)) - if isinstance(m.anchor_grid, list): - m.anchor_grid = list(map(fn, m.anchor_grid)) - return self - - -def parse_model(d, ch): # model_dict, input_channels(3) - LOGGER.info(f"\n{'':>3}{'from':>18}{'n':>3}{'params':>10} {'module':<40}{'arguments':<30}") - anchors, nc, gd, gw = d['anchors'], d['nc'], d['depth_multiple'], d['width_multiple'] - na = (len(anchors[0]) // 2) if isinstance(anchors, list) else anchors # number of anchors - no = na * (nc + 5) # number of outputs = anchors * (classes + 5) - - layers, save, c2 = [], [], ch[-1] # layers, savelist, ch out - for i, (f, n, m, args) in enumerate(d['backbone'] + d['head']): # from, number, module, args - m = eval(m) if isinstance(m, str) else m # eval strings - for j, a in enumerate(args): - try: - args[j] = eval(a) if isinstance(a, str) else a # eval strings - except NameError: - pass - - n = n_ = max(round(n * gd), 1) if n > 1 else n # depth gain - if m in [Conv, GhostConv, Bottleneck, GhostBottleneck, SPP, SPPF, DWConv, MixConv2d, Focus, CrossConv, - BottleneckCSP, C3, C3TR, C3SPP, C3Ghost]: - c1, c2 = ch[f], args[0] - if c2 != no: # if not output - c2 = make_divisible(c2 * gw, 8) - - args = [c1, c2, *args[1:]] - if m in [BottleneckCSP, C3, C3TR, C3Ghost]: - args.insert(2, n) # number of repeats - n = 1 - elif m is nn.BatchNorm2d: - args = [ch[f]] - elif m is Concat: - c2 = sum(ch[x] for x in f) - elif m is Detect: - args.append([ch[x] for x in f]) - if isinstance(args[1], int): # number of anchors - args[1] = [list(range(args[1] * 2))] * len(f) - elif m is Contract: - c2 = ch[f] * args[0] ** 2 - elif m is Expand: - c2 = ch[f] // args[0] ** 2 - else: - c2 = ch[f] - - m_ = nn.Sequential(*(m(*args) for _ in range(n))) if n > 1 else m(*args) # module - t = str(m)[8:-2].replace('__main__.', '') # module type - np = sum(x.numel() for x in m_.parameters()) # number params - m_.i, m_.f, m_.type, m_.np = i, f, t, np # attach index, 'from' index, type, number params - LOGGER.info(f'{i:>3}{str(f):>18}{n_:>3}{np:10.0f} {t:<40}{str(args):<30}') # print - save.extend(x % i for x in ([f] if isinstance(f, int) else f) if x != -1) # append to savelist - layers.append(m_) - if i == 0: - ch = [] - ch.append(c2) - return nn.Sequential(*layers), sorted(save) - - -if __name__ == '__main__': - parser = argparse.ArgumentParser() - parser.add_argument('--cfg', type=str, default='yolov5s.yaml', help='model.yaml') - parser.add_argument('--device', default='', help='cuda device, i.e. 
0 or 0,1,2,3 or cpu') - parser.add_argument('--profile', action='store_true', help='profile model speed') - parser.add_argument('--test', action='store_true', help='test all yolo*.yaml') - opt = parser.parse_args() - opt.cfg = 'yolov5s.yaml' # check YAML - print_args(FILE.stem, opt) - device = select_device(opt.device) - - # Create model - model = Model(opt.cfg).to(device) - model.train() - - # Profile - if opt.profile: - img = torch.rand(8 if torch.cuda.is_available() else 1, 3, 640, 640).to(device) - y = model(img, profile=True) - - # Test all models - if opt.test: - for cfg in Path(ROOT / 'models').rglob('yolo*.yaml'): - try: - _ = Model(cfg) - except Exception as e: - print(f'Error in {cfg}: {e}') - - # Tensorboard (not working https://github.com/ultralytics/yolov5/issues/2898) - # from torch.utils.tensorboard import SummaryWriter - # tb_writer = SummaryWriter('.') - # LOGGER.info("Run 'tensorboard --logdir=models' to view tensorboard at http://localhost:6006/") - # tb_writer.add_graph(torch.jit.trace(model, img, strict=False), []) # add model graph diff --git a/spaces/matthoffner/AudioCraft_Plus/audiocraft/grids/audiogen/audiogen_pretrained_16khz_eval.py b/spaces/matthoffner/AudioCraft_Plus/audiocraft/grids/audiogen/audiogen_pretrained_16khz_eval.py deleted file mode 100644 index 12f6d402a3c4a113d4c37be062790fa435b72104..0000000000000000000000000000000000000000 --- a/spaces/matthoffner/AudioCraft_Plus/audiocraft/grids/audiogen/audiogen_pretrained_16khz_eval.py +++ /dev/null @@ -1,68 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -""" -Evaluation with objective metrics for the pretrained AudioGen models. -This grid takes signature from the training grid and runs evaluation-only stage. - -When running the grid for the first time, please use: -REGEN=1 dora grid audiogen.audiogen_pretrained_16khz_eval -and re-use the REGEN=1 option when the grid is changed to force regenerating it. - -Note that you need the proper metrics external libraries setup to use all -the objective metrics activated in this grid. Refer to the README for more information. -""" - -import os - -from ..musicgen._explorers import GenerationEvalExplorer -from ...environment import AudioCraftEnvironment -from ... 
import train - - -def eval(launcher, batch_size: int = 32): - opts = { - 'dset': 'audio/audiocaps_16khz', - 'solver/audiogen/evaluation': 'objective_eval', - 'execute_only': 'evaluate', - '+dataset.evaluate.batch_size': batch_size, - '+metrics.fad.tf.batch_size': 32, - } - # binary for FAD computation: replace this path with your own path - metrics_opts = { - 'metrics.fad.tf.bin': '/data/home/jadecopet/local/usr/opt/google-research' - } - opt1 = {'generate.lm.use_sampling': True, 'generate.lm.top_k': 250, 'generate.lm.top_p': 0.} - opt2 = {'transformer_lm.two_step_cfg': True} - - sub = launcher.bind(opts) - sub.bind_(metrics_opts) - - # base objective metrics - sub(opt1, opt2) - - -@GenerationEvalExplorer -def explorer(launcher): - partitions = AudioCraftEnvironment.get_slurm_partitions(['team', 'global']) - launcher.slurm_(gpus=4, partition=partitions) - - if 'REGEN' not in os.environ: - folder = train.main.dora.dir / 'grids' / __name__.split('.', 2)[-1] - with launcher.job_array(): - for sig in folder.iterdir(): - if not sig.is_symlink(): - continue - xp = train.main.get_xp_from_sig(sig.name) - launcher(xp.argv) - return - - audiogen_base = launcher.bind(solver="audiogen/audiogen_base_16khz") - audiogen_base.bind_({'autocast': False, 'fsdp.use': True}) - - audiogen_base_medium = audiogen_base.bind({'continue_from': '//pretrained/facebook/audiogen-medium'}) - audiogen_base_medium.bind_({'model/lm/model_scale': 'medium'}) - eval(audiogen_base_medium, batch_size=128) diff --git a/spaces/matthoffner/chatbot-mini/styles/globals.css b/spaces/matthoffner/chatbot-mini/styles/globals.css deleted file mode 100644 index c631cf9c6144a9bc6a70d6b031fe62930a5f1c30..0000000000000000000000000000000000000000 --- a/spaces/matthoffner/chatbot-mini/styles/globals.css +++ /dev/null @@ -1,43 +0,0 @@ -@tailwind base; -@tailwind components; -@tailwind utilities; - -::-webkit-scrollbar-track { - background-color: transparent; -} - -::-webkit-scrollbar-thumb { - background-color: #ccc; - border-radius: 10px; -} - -::-webkit-scrollbar-thumb:hover { - background-color: #aaa; -} - -::-webkit-scrollbar-track:hover { - background-color: #f2f2f2; -} - -::-webkit-scrollbar-corner { - background-color: transparent; -} - -::-webkit-scrollbar { - width: 6px; - height: 6px; -} - -html { - background: #202123; -} - -@media (max-width: 720px) { - pre { - width: calc(100vw - 110px); - } -} - -pre:has(div.codeblock) { - padding: 0; -} diff --git a/spaces/merle/PROTEIN_GENERATOR/utils/model/se3_transformer/model/layers/__init__.py b/spaces/merle/PROTEIN_GENERATOR/utils/model/se3_transformer/model/layers/__init__.py deleted file mode 100644 index 9eb9e3ced5ef94e9a5c5be5883cc14ebaabe31f1..0000000000000000000000000000000000000000 --- a/spaces/merle/PROTEIN_GENERATOR/utils/model/se3_transformer/model/layers/__init__.py +++ /dev/null @@ -1,5 +0,0 @@ -from .linear import LinearSE3 -from .norm import NormSE3 -from .pooling import GPooling -from .convolution import ConvSE3 -from .attention import AttentionBlockSE3 \ No newline at end of file diff --git a/spaces/merve/fill-in-the-blank/source/_posts/2019-11-04-data-leak.md b/spaces/merve/fill-in-the-blank/source/_posts/2019-11-04-data-leak.md deleted file mode 100644 index 51d319aa89abc8783bed834081df6553af17a08d..0000000000000000000000000000000000000000 --- a/spaces/merve/fill-in-the-blank/source/_posts/2019-11-04-data-leak.md +++ /dev/null @@ -1,102 +0,0 @@ ---- -template: post.html -title: Why Some Models Leak Data -shorttitle: Why Some Models Leak Data -summary: Machine learning 
models use large amounts of data, some of which can be sensitive. If they're not trained correctly, sometimes that data is inadvertently revealed. -socialsummary: Machine learning models use large amounts of data, some of which can be sensitive. If they're not trained correctly, sometimes that data is inadvertently revealed. -permalink: /data-leak/ -shareimg: https://pair.withgoogle.com/explorables/images/model-inversion.png -date: 2020-12-01 ---- - - - - - -Let's take a look at a game of soccer. - - -
          - -

          - -Using the position of each player as training data, we can teach a model to predict which team would get to a loose ball first at each spot on the field, indicated by the color of the pixel. - -
          - -It updates in real-time—drag the players around to see the model change. - -

          - -This model reveals quite a lot about the data used to train it. Even without the actual positions of the players, it is simple to see where players might be. - -
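
The explorable's interactive model isn't reproduced here, so as a rough stand-in (every player position below is made up), a one-nearest-neighbor classifier over 2D positions shows the same effect: each grid cell simply takes the team of the closest player, so the colored surface is essentially a map of the training points.

```python
# A rough stand-in for the soccer demo: a 1-nearest-neighbor "who gets there
# first" model over made-up player positions. Each grid cell is assigned the
# team of the closest player, so the decision surface is a Voronoi diagram of
# the training points -- reading the colors back out reveals roughly where
# each player stands.
import numpy as np

rng = np.random.default_rng(0)
red_players = rng.uniform(0, 100, size=(11, 2))      # hypothetical (x, y) positions
yellow_players = rng.uniform(0, 100, size=(11, 2))
positions = np.vstack([red_players, yellow_players])
teams = np.array([0] * 11 + [1] * 11)                # 0 = red, 1 = yellow

def predict(points):
    # 1-NN: each query point takes the team of the nearest player.
    dists = np.linalg.norm(points[:, None, :] - positions[None, :, :], axis=-1)
    return teams[dists.argmin(axis=1)]

# Color a coarse grid of the pitch, as in the explorable.
xs, ys = np.meshgrid(np.linspace(0, 100, 50), np.linspace(0, 100, 50))
grid = np.column_stack([xs.ravel(), ys.ravel()])
surface = predict(grid).reshape(50, 50)

# Each isolated "island" of one color betrays a single training point.
print(surface)
```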
          - -Click this button to move the players - -Take a guess at where the yellow team's goalie is now, then check their actual position. How close were you? - -

          Sensitive Salary Data

          - -In this specific soccer example, being able to make educated guesses about the data a model was trained on doesn't matter too much. But what if our data points represent something more sensitive? - -
          - -We’ve fed the same numbers into the model, but now they represent salary data instead of soccer data. Building models like this is a common technique to [detect discrimination](https://www.eeoc.gov/laws/guidance/section-10-compensation-discrimination#c.%20Using%20More%20Sophisticated%20Statistical%20Techniques%20to%20Evaluate). A union might test if a company is paying men and women fairly by building a salary model that takes into account years of experience. They can then [publish](https://postguild.org/2019-pay-study/) the results to bring pressure for change or show improvement. - -In this hypothetical salary study, even though no individual salaries have been published, it is easy to infer the salary of the newest male hire. And carefully cross referencing public start dates on LinkedIn with the model could almost perfectly reveal everyone's salary. - -Because the model here is so flexible (there are hundreds of square patches with independently calculated predictions) and we have so few data points (just 22 people), it is able to "memorize" individual data points. If we're looking to share information about patterns in salaries, a simpler and more constrained model like a linear regression might be more appropriate. - -
          - -By boiling down the 22 data points to two lines we're able to see broad trends without being able to guess anyone's salary. - -
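
A minimal sketch of that contrast, using invented numbers rather than the 22 points plotted above: a nearest-neighbor "model" hands back individual training salaries verbatim, while a least-squares line only keeps a slope and an intercept.

```python
# Flexible vs. constrained, with made-up salary data. The "flexible" model is
# a nearest-neighbor lookup, which memorizes training salaries exactly; the
# linear fit compresses the same data into two coefficients.
import numpy as np

rng = np.random.default_rng(1)
tenure = rng.uniform(0, 20, size=22)                       # years of experience
salary = 50_000 + 2_500 * tenure + rng.normal(0, 4_000, 22)

def flexible_model(x):
    # Nearest-neighbor "model": returns the salary of the closest employee.
    return salary[np.abs(tenure - x).argmin()]

slope, intercept = np.polyfit(tenure, salary, deg=1)       # constrained model

probe = tenure[0]                                          # query at a training point
print("memorized salary :", round(flexible_model(probe)))  # exact individual value
print("actual salary    :", round(salary[0]))
print("linear estimate  :", round(slope * probe + intercept))  # only the broad trend
```

Querying the flexible model at any employee's tenure leaks their exact salary; the line only answers with the overall trend.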

          Subtle Leaks

          - -Removing complexity isn't a complete solution though. Depending on how the data is distributed, even a simple line can inadvertently reveal information. - -
          - -In this company, almost all the men started several years ago, so the slope of the line is especially sensitive to the salary of the new hire. - -Is their salary higher or lower than average? Based on the line, we can make a pretty good guess. - -Notice that changing the salary of someone with a more common tenure barely moves the line. In general, more typical data points are less susceptible to being leaked. This sets up a tricky trade-off: we want models to learn about edge cases while being sure they haven't memorized individual data points. - -
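
A quick numerical check of that sensitivity, again with hypothetical data (ten long-tenured employees plus one brand-new hire): nudging the new hire's salary moves the fitted slope far more than nudging someone with a typical tenure, which is exactly the signal an observer can read back out of a published line.

```python
# Perturb one salary at a time and watch the fitted slope. The new hire sits
# far from the average tenure, so their salary has much more leverage on the
# line than a typical employee's does.
import numpy as np

rng = np.random.default_rng(2)
tenure = np.concatenate([rng.uniform(8, 15, size=10), [0.2]])   # last entry: new hire
salary = 55_000 + 2_000 * tenure + rng.normal(0, 3_000, 11)

def slope(sal):
    return np.polyfit(tenure, sal, deg=1)[0]

base = slope(salary)

typical = np.abs(tenure[:-1] - tenure.mean()).argmin()          # tenure near the average
bumped_hire = salary.copy(); bumped_hire[-1] += 10_000
bumped_typical = salary.copy(); bumped_typical[typical] += 10_000

print("slope shift from new hire     :", round(slope(bumped_hire) - base, 1))
print("slope shift from typical staff:", round(slope(bumped_typical) - base, 1))
```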

          Real World Data

          - -Models of real world data are often quite complex—this can improve accuracy, but makes them [more susceptible](https://blog.tensorflow.org/2020/06/introducing-new-privacy-testing-library.html) to unexpectedly leaking information. Medical models have inadvertently revealed [patients' genetic markers](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4827719/). Language models have memorized [credit card numbers](https://bair.berkeley.edu/blog/2019/08/13/memorization/). Faces can even be [reconstructed](https://rist.tech.cornell.edu/papers/mi-ccs.pdf) from image models: - -
          - -[Fredrikson et al](https://rist.tech.cornell.edu/papers/mi-ccs.pdf) were able to extract the image on the left by repeatedly querying a facial recognition API. It isn't an exact match with the individual's actual face (on the right), but this attack only required access to the model's predictions, not its internal state. - -
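
A heavily simplified sketch of the mechanics behind such an attack (the real one worked black-box, from an API's confidence scores, against a trained face recognizer; the toy classifier below is untrained, so the "reconstruction" is meaningless): treat the input image as the optimization variable and climb the model's confidence for one identity.

```python
# White-box model-inversion sketch on a throwaway classifier: optimize the
# pixels of a blank image so the model becomes confident it is seeing class 3.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(32 * 32, 64), nn.ReLU(), nn.Linear(64, 10))
model.eval()
target_class = 3                                   # the identity being reconstructed

x = torch.zeros(1, 1, 32, 32, requires_grad=True)  # start from a blank "face"
opt = torch.optim.Adam([x], lr=0.05)

for step in range(200):
    opt.zero_grad()
    logits = model(x)
    # Maximize target confidence; the small L2 term keeps pixel values tame.
    loss = -torch.log_softmax(logits, dim=1)[0, target_class] + 1e-3 * x.pow(2).sum()
    loss.backward()
    opt.step()
    with torch.no_grad():
        x.clamp_(0.0, 1.0)                         # keep pixels in a valid range

print("target confidence:", torch.softmax(model(x), dim=1)[0, target_class].item())
```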

          Protecting Private Data

          - -Training models with [differential privacy](http://www.cleverhans.io/privacy/2018/04/29/privacy-and-machine-learning.html) stops the training data from leaking by limiting how much the model can learn from any one data point. Differentially private models are still at the cutting edge of research, but they're being packaged into [machine learning frameworks](https://blog.tensorflow.org/2019/03/introducing-tensorflow-privacy-learning.html), making them much easier to use. When it isn't possible to train differentially private models, there are also tools that can [measure](https://github.com/tensorflow/privacy/tree/master/tensorflow_privacy/privacy/membership_inference_attack) how much data the model is memorizing. Also, standard techniques such as aggregation and limiting how much data a single source can contribute are still useful and usually improve the privacy of the model. - -As we saw in the [Collecting Sensitive Information Explorable](https://pair.withgoogle.com/explorables/anonymization/), adding enough random noise with differential privacy to protect outliers like the new hire can increase the amount of data required to reach a good level of accuracy. Depending on the application, the constraints of differential privacy could even improve the model—for instance, not learning too much from one data point can help prevent [overfitting](https://openreview.net/forum?id=r1xyx3R9tQ). - -Given the increasing utility of machine learning models for many real-world tasks, it’s clear that more and more systems, devices and apps will be powered, to some extent, by machine learning in the future. While [standard privacy best practices](https://owasp.org/www-project-top-ten/) developed for non-machine learning systems still apply to those with machine learning, adopting machine learning introduces new challenges, including the ability of the model to memorize some specific training data points and thus be vulnerable to privacy attacks that seek to extract this data from the model. Fortunately, techniques such as differential privacy exist that can be helpful in overcoming this specific challenge. Just as with other areas of [Responsible AI](https://ai.google/responsibilities/responsible-ai-practices/), it’s important to be aware of these new challenges that come along with machine learning and what steps can be taken to mitigate them. - - -
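
For intuition, here is a bare-bones sketch of the step at the heart of DP-SGD, the algorithm behind those frameworks: clip each example's gradient so no single person can pull the model too far, then add Gaussian noise before averaging. This is illustration only, on a throwaway linear model and random data; real training should go through a maintained library such as TensorFlow Privacy or Opacus, which also track the resulting privacy budget.

```python
# Minimal DP-SGD-style update: per-example gradient clipping + Gaussian noise.
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Linear(5, 1)
loss_fn = nn.MSELoss()
clip_norm, noise_multiplier, lr = 1.0, 1.1, 0.1

X, y = torch.randn(32, 5), torch.randn(32, 1)        # stand-in private dataset

for step in range(10):
    summed = [torch.zeros_like(p) for p in model.parameters()]
    for xi, yi in zip(X, y):                          # per-example gradients
        model.zero_grad()
        loss_fn(model(xi.unsqueeze(0)), yi.unsqueeze(0)).backward()
        grads = [p.grad.detach().clone() for p in model.parameters()]
        norm = torch.sqrt(sum(g.pow(2).sum() for g in grads))
        scale = (clip_norm / (norm + 1e-12)).clamp(max=1.0)  # bound one person's influence
        for acc, g in zip(summed, grads):
            acc += scale * g
    with torch.no_grad():
        for p, acc in zip(model.parameters(), summed):
            noise = noise_multiplier * clip_norm * torch.randn_like(acc)
            p -= lr * (acc + noise) / len(X)          # step on the noisy average gradient
```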

          Credits

          - -Adam Pearce and Ellen Jiang // December 2020 - -Thanks to Andreas Terzis, Ben Wedin, Carey Radebaugh, David Weinberger, Emily Reif, Fernanda Viégas, Hal Abelson, Kristen Olson, Martin Wattenberg, Michael Terry, Miguel Guevara, Thomas Steinke, Yannick Assogba, Zan Armstrong and our other colleagues at Google for their help with this piece. - - -

          More Explorables

          - -

          - - - - - - - - - \ No newline at end of file diff --git a/spaces/merve/hidden-bias/style.css b/spaces/merve/hidden-bias/style.css deleted file mode 100644 index 114adf441e9032febb46bc056b2a8bb651075f0d..0000000000000000000000000000000000000000 --- a/spaces/merve/hidden-bias/style.css +++ /dev/null @@ -1,28 +0,0 @@ -body { - padding: 2rem; - font-family: -apple-system, BlinkMacSystemFont, "Arial", sans-serif; -} - -h1 { - font-size: 16px; - margin-top: 0; -} - -p { - color: rgb(107, 114, 128); - font-size: 15px; - margin-bottom: 10px; - margin-top: 5px; -} - -.card { - max-width: 620px; - margin: 0 auto; - padding: 16px; - border: 1px solid lightgray; - border-radius: 16px; -} - -.card p:last-child { - margin-bottom: 0; -} diff --git a/spaces/merve/uncertainty-calibration/public/third_party/mobilenet@1.0.0.js b/spaces/merve/uncertainty-calibration/public/third_party/mobilenet@1.0.0.js deleted file mode 100644 index d50ffe68663e1aabfc07faec02e8a3cb41b5dfe5..0000000000000000000000000000000000000000 --- a/spaces/merve/uncertainty-calibration/public/third_party/mobilenet@1.0.0.js +++ /dev/null @@ -1,2 +0,0 @@ -// @tensorflow/tfjs-models Copyright 2019 Google -!function(e,a){"object"==typeof exports&&"undefined"!=typeof module?a(exports,require("@tensorflow/tfjs")):"function"==typeof define&&define.amd?define(["exports","@tensorflow/tfjs"],a):a((e=e||self).mobilenet={},e.tf)}(this,function(e,a){"use strict";function r(e,a,r,o){return new(r||(r=Promise))(function(i,t){function n(e){try{l(o.next(e))}catch(e){t(e)}}function s(e){try{l(o.throw(e))}catch(e){t(e)}}function l(e){e.done?i(e.value):new r(function(a){a(e.value)}).then(n,s)}l((o=o.apply(e,a||[])).next())})}function o(e,a){var r,o,i,t,n={label:0,sent:function(){if(1&i[0])throw i[1];return i[1]},trys:[],ops:[]};return t={next:s(0),throw:s(1),return:s(2)},"function"==typeof Symbol&&(t[Symbol.iterator]=function(){return this}),t;function s(t){return function(s){return function(t){if(r)throw new TypeError("Generator is already executing.");for(;n;)try{if(r=1,o&&(i=2&t[0]?o.return:t[0]?o.throw||((i=o.return)&&i.call(o),0):o.next)&&!(i=i.call(o,t[1])).done)return i;switch(o=0,i&&(t=[2&t[0],i.value]),t[0]){case 0:case 1:i=t;break;case 4:return n.label++,{value:t[1],done:!1};case 5:n.label++,o=t[1],t=[0];continue;case 7:t=n.ops.pop(),n.trys.pop();continue;default:if(!(i=(i=n.trys).length>0&&i[i.length-1])&&(6===t[0]||2===t[0])){n=0;continue}if(3===t[0]&&(!i||t[1]>i[0]&&t[1] tag, please also include @tensorflow/tfjs on the page before using this model.");if(r=e.toFixed(2),t=i.toFixed(2),!(r in n))throw new Error("Invalid version of MobileNet. Valid versions are: "+Object.keys(n));if(!(t in n[r]))throw new Error("MobileNet constructed with invalid alpha "+i+". 
Valid multipliers for this version are: "+Object.keys(n[r])+".");return[4,(l=new s(r,t)).load()];case 1:return o.sent(),[2,l]}})})},e.MobileNet=s,Object.defineProperty(e,"__esModule",{value:!0})}); \ No newline at end of file diff --git a/spaces/mnauf/detect-bees/utils/loggers/__init__.py b/spaces/mnauf/detect-bees/utils/loggers/__init__.py deleted file mode 100644 index bc8dd7621579f6372ce60e317c9e031e313e1c37..0000000000000000000000000000000000000000 --- a/spaces/mnauf/detect-bees/utils/loggers/__init__.py +++ /dev/null @@ -1,404 +0,0 @@ -# YOLOv5 🚀 by Ultralytics, GPL-3.0 license -""" -Logging utils -""" - -import os -import warnings -from pathlib import Path - -import pkg_resources as pkg -import torch -from torch.utils.tensorboard import SummaryWriter - -from utils.general import LOGGER, colorstr, cv2 -from utils.loggers.clearml.clearml_utils import ClearmlLogger -from utils.loggers.wandb.wandb_utils import WandbLogger -from utils.plots import plot_images, plot_labels, plot_results -from utils.torch_utils import de_parallel - -LOGGERS = ('csv', 'tb', 'wandb', 'clearml', 'comet') # *.csv, TensorBoard, Weights & Biases, ClearML -RANK = int(os.getenv('RANK', -1)) - -try: - import wandb - - assert hasattr(wandb, '__version__') # verify package import not local dir - if pkg.parse_version(wandb.__version__) >= pkg.parse_version('0.12.2') and RANK in {0, -1}: - try: - wandb_login_success = wandb.login(timeout=30) - except wandb.errors.UsageError: # known non-TTY terminal issue - wandb_login_success = False - if not wandb_login_success: - wandb = None -except (ImportError, AssertionError): - wandb = None - -try: - import clearml - - assert hasattr(clearml, '__version__') # verify package import not local dir -except (ImportError, AssertionError): - clearml = None - -try: - if RANK not in [0, -1]: - comet_ml = None - else: - import comet_ml - - assert hasattr(comet_ml, '__version__') # verify package import not local dir - from utils.loggers.comet import CometLogger - -except (ModuleNotFoundError, ImportError, AssertionError): - comet_ml = None - - -class Loggers(): - # YOLOv5 Loggers class - def __init__(self, save_dir=None, weights=None, opt=None, hyp=None, logger=None, include=LOGGERS): - self.save_dir = save_dir - self.weights = weights - self.opt = opt - self.hyp = hyp - self.plots = not opt.noplots # plot results - self.logger = logger # for printing results to console - self.include = include - self.keys = [ - 'train/box_loss', - 'train/obj_loss', - 'train/cls_loss', # train loss - 'metrics/precision', - 'metrics/recall', - 'metrics/mAP_0.5', - 'metrics/mAP_0.5:0.95', # metrics - 'val/box_loss', - 'val/obj_loss', - 'val/cls_loss', # val loss - 'x/lr0', - 'x/lr1', - 'x/lr2'] # params - self.best_keys = ['best/epoch', 'best/precision', 'best/recall', 'best/mAP_0.5', 'best/mAP_0.5:0.95'] - for k in LOGGERS: - setattr(self, k, None) # init empty logger dictionary - self.csv = True # always log to csv - - # Messages - # if not wandb: - # prefix = colorstr('Weights & Biases: ') - # s = f"{prefix}run 'pip install wandb' to automatically track and visualize YOLOv5 🚀 runs in Weights & Biases" - # self.logger.info(s) - if not clearml: - prefix = colorstr('ClearML: ') - s = f"{prefix}run 'pip install clearml' to automatically track, visualize and remotely train YOLOv5 🚀 in ClearML" - self.logger.info(s) - if not comet_ml: - prefix = colorstr('Comet: ') - s = f"{prefix}run 'pip install comet_ml' to automatically track and visualize YOLOv5 🚀 runs in Comet" - self.logger.info(s) - # TensorBoard - s = 
self.save_dir - if 'tb' in self.include and not self.opt.evolve: - prefix = colorstr('TensorBoard: ') - self.logger.info(f"{prefix}Start with 'tensorboard --logdir {s.parent}', view at http://localhost:6006/") - self.tb = SummaryWriter(str(s)) - - # W&B - if wandb and 'wandb' in self.include: - wandb_artifact_resume = isinstance(self.opt.resume, str) and self.opt.resume.startswith('wandb-artifact://') - run_id = torch.load(self.weights).get('wandb_id') if self.opt.resume and not wandb_artifact_resume else None - self.opt.hyp = self.hyp # add hyperparameters - self.wandb = WandbLogger(self.opt, run_id) - # temp warn. because nested artifacts not supported after 0.12.10 - # if pkg.parse_version(wandb.__version__) >= pkg.parse_version('0.12.11'): - # s = "YOLOv5 temporarily requires wandb version 0.12.10 or below. Some features may not work as expected." - # self.logger.warning(s) - else: - self.wandb = None - - # ClearML - if clearml and 'clearml' in self.include: - self.clearml = ClearmlLogger(self.opt, self.hyp) - else: - self.clearml = None - - # Comet - if comet_ml and 'comet' in self.include: - if isinstance(self.opt.resume, str) and self.opt.resume.startswith("comet://"): - run_id = self.opt.resume.split("/")[-1] - self.comet_logger = CometLogger(self.opt, self.hyp, run_id=run_id) - - else: - self.comet_logger = CometLogger(self.opt, self.hyp) - - else: - self.comet_logger = None - - @property - def remote_dataset(self): - # Get data_dict if custom dataset artifact link is provided - data_dict = None - if self.clearml: - data_dict = self.clearml.data_dict - if self.wandb: - data_dict = self.wandb.data_dict - if self.comet_logger: - data_dict = self.comet_logger.data_dict - - return data_dict - - def on_train_start(self): - if self.comet_logger: - self.comet_logger.on_train_start() - - def on_pretrain_routine_start(self): - if self.comet_logger: - self.comet_logger.on_pretrain_routine_start() - - def on_pretrain_routine_end(self, labels, names): - # Callback runs on pre-train routine end - if self.plots: - plot_labels(labels, names, self.save_dir) - paths = self.save_dir.glob('*labels*.jpg') # training labels - if self.wandb: - self.wandb.log({"Labels": [wandb.Image(str(x), caption=x.name) for x in paths]}) - # if self.clearml: - # pass # ClearML saves these images automatically using hooks - if self.comet_logger: - self.comet_logger.on_pretrain_routine_end(paths) - - def on_train_batch_end(self, model, ni, imgs, targets, paths, vals): - log_dict = dict(zip(self.keys[0:3], vals)) - # Callback runs on train batch end - # ni: number integrated batches (since train start) - if self.plots: - if ni < 3: - f = self.save_dir / f'train_batch{ni}.jpg' # filename - plot_images(imgs, targets, paths, f) - if ni == 0 and self.tb and not self.opt.sync_bn: - log_tensorboard_graph(self.tb, model, imgsz=(self.opt.imgsz, self.opt.imgsz)) - if ni == 10 and (self.wandb or self.clearml): - files = sorted(self.save_dir.glob('train*.jpg')) - if self.wandb: - self.wandb.log({'Mosaics': [wandb.Image(str(f), caption=f.name) for f in files if f.exists()]}) - if self.clearml: - self.clearml.log_debug_samples(files, title='Mosaics') - - if self.comet_logger: - self.comet_logger.on_train_batch_end(log_dict, step=ni) - - def on_train_epoch_end(self, epoch): - # Callback runs on train epoch end - if self.wandb: - self.wandb.current_epoch = epoch + 1 - - if self.comet_logger: - self.comet_logger.on_train_epoch_end(epoch) - - def on_val_start(self): - if self.comet_logger: - self.comet_logger.on_val_start() - - def 
on_val_image_end(self, pred, predn, path, names, im): - # Callback runs on val image end - if self.wandb: - self.wandb.val_one_image(pred, predn, path, names, im) - if self.clearml: - self.clearml.log_image_with_boxes(path, pred, names, im) - - def on_val_batch_end(self, batch_i, im, targets, paths, shapes, out): - if self.comet_logger: - self.comet_logger.on_val_batch_end(batch_i, im, targets, paths, shapes, out) - - def on_val_end(self, nt, tp, fp, p, r, f1, ap, ap50, ap_class, confusion_matrix): - # Callback runs on val end - if self.wandb or self.clearml: - files = sorted(self.save_dir.glob('val*.jpg')) - if self.wandb: - self.wandb.log({"Validation": [wandb.Image(str(f), caption=f.name) for f in files]}) - if self.clearml: - self.clearml.log_debug_samples(files, title='Validation') - - if self.comet_logger: - self.comet_logger.on_val_end(nt, tp, fp, p, r, f1, ap, ap50, ap_class, confusion_matrix) - - def on_fit_epoch_end(self, vals, epoch, best_fitness, fi): - # Callback runs at the end of each fit (train+val) epoch - x = dict(zip(self.keys, vals)) - if self.csv: - file = self.save_dir / 'results.csv' - n = len(x) + 1 # number of cols - s = '' if file.exists() else (('%20s,' * n % tuple(['epoch'] + self.keys)).rstrip(',') + '\n') # add header - with open(file, 'a') as f: - f.write(s + ('%20.5g,' * n % tuple([epoch] + vals)).rstrip(',') + '\n') - - if self.tb: - for k, v in x.items(): - self.tb.add_scalar(k, v, epoch) - elif self.clearml: # log to ClearML if TensorBoard not used - for k, v in x.items(): - title, series = k.split('/') - self.clearml.task.get_logger().report_scalar(title, series, v, epoch) - - if self.wandb: - if best_fitness == fi: - best_results = [epoch] + vals[3:7] - for i, name in enumerate(self.best_keys): - self.wandb.wandb_run.summary[name] = best_results[i] # log best results in the summary - self.wandb.log(x) - self.wandb.end_epoch(best_result=best_fitness == fi) - - if self.clearml: - self.clearml.current_epoch_logged_images = set() # reset epoch image limit - self.clearml.current_epoch += 1 - - if self.comet_logger: - self.comet_logger.on_fit_epoch_end(x, epoch=epoch) - - def on_model_save(self, last, epoch, final_epoch, best_fitness, fi): - # Callback runs on model save event - if (epoch + 1) % self.opt.save_period == 0 and not final_epoch and self.opt.save_period != -1: - if self.wandb: - self.wandb.log_model(last.parent, self.opt, epoch, fi, best_model=best_fitness == fi) - if self.clearml: - self.clearml.task.update_output_model(model_path=str(last), - model_name='Latest Model', - auto_delete_file=False) - - if self.comet_logger: - self.comet_logger.on_model_save(last, epoch, final_epoch, best_fitness, fi) - - def on_train_end(self, last, best, epoch, results): - # Callback runs on training end, i.e. saving best model - if self.plots: - plot_results(file=self.save_dir / 'results.csv') # save results.png - files = ['results.png', 'confusion_matrix.png', *(f'{x}_curve.png' for x in ('F1', 'PR', 'P', 'R'))] - files = [(self.save_dir / f) for f in files if (self.save_dir / f).exists()] # filter - self.logger.info(f"Results saved to {colorstr('bold', self.save_dir)}") - - if self.tb and not self.clearml: # These images are already captured by ClearML by now, we don't want doubles - for f in files: - self.tb.add_image(f.stem, cv2.imread(str(f))[..., ::-1], epoch, dataformats='HWC') - - if self.wandb: - self.wandb.log(dict(zip(self.keys[3:10], results))) - self.wandb.log({"Results": [wandb.Image(str(f), caption=f.name) for f in files]}) - # Calling wandb.log. 
TODO: Refactor this into WandbLogger.log_model - if not self.opt.evolve: - wandb.log_artifact(str(best if best.exists() else last), - type='model', - name=f'run_{self.wandb.wandb_run.id}_model', - aliases=['latest', 'best', 'stripped']) - self.wandb.finish_run() - - if self.clearml and not self.opt.evolve: - self.clearml.task.update_output_model(model_path=str(best if best.exists() else last), - name='Best Model', - auto_delete_file=False) - - if self.comet_logger: - final_results = dict(zip(self.keys[3:10], results)) - self.comet_logger.on_train_end(files, self.save_dir, last, best, epoch, final_results) - - def on_params_update(self, params: dict): - # Update hyperparams or configs of the experiment - if self.wandb: - self.wandb.wandb_run.config.update(params, allow_val_change=True) - if self.comet_logger: - self.comet_logger.on_params_update(params) - - -class GenericLogger: - """ - YOLOv5 General purpose logger for non-task specific logging - Usage: from utils.loggers import GenericLogger; logger = GenericLogger(...) - Arguments - opt: Run arguments - console_logger: Console logger - include: loggers to include - """ - - def __init__(self, opt, console_logger, include=('tb', 'wandb')): - # init default loggers - self.save_dir = Path(opt.save_dir) - self.include = include - self.console_logger = console_logger - self.csv = self.save_dir / 'results.csv' # CSV logger - if 'tb' in self.include: - prefix = colorstr('TensorBoard: ') - self.console_logger.info( - f"{prefix}Start with 'tensorboard --logdir {self.save_dir.parent}', view at http://localhost:6006/") - self.tb = SummaryWriter(str(self.save_dir)) - - if wandb and 'wandb' in self.include: - self.wandb = wandb.init(project=web_project_name(str(opt.project)), - name=None if opt.name == "exp" else opt.name, - config=opt) - else: - self.wandb = None - - def log_metrics(self, metrics, epoch): - # Log metrics dictionary to all loggers - if self.csv: - keys, vals = list(metrics.keys()), list(metrics.values()) - n = len(metrics) + 1 # number of cols - s = '' if self.csv.exists() else (('%23s,' * n % tuple(['epoch'] + keys)).rstrip(',') + '\n') # header - with open(self.csv, 'a') as f: - f.write(s + ('%23.5g,' * n % tuple([epoch] + vals)).rstrip(',') + '\n') - - if self.tb: - for k, v in metrics.items(): - self.tb.add_scalar(k, v, epoch) - - if self.wandb: - self.wandb.log(metrics, step=epoch) - - def log_images(self, files, name='Images', epoch=0): - # Log images to all loggers - files = [Path(f) for f in (files if isinstance(files, (tuple, list)) else [files])] # to Path - files = [f for f in files if f.exists()] # filter by exists - - if self.tb: - for f in files: - self.tb.add_image(f.stem, cv2.imread(str(f))[..., ::-1], epoch, dataformats='HWC') - - if self.wandb: - self.wandb.log({name: [wandb.Image(str(f), caption=f.name) for f in files]}, step=epoch) - - def log_graph(self, model, imgsz=(640, 640)): - # Log model graph to all loggers - if self.tb: - log_tensorboard_graph(self.tb, model, imgsz) - - def log_model(self, model_path, epoch=0, metadata={}): - # Log model to all loggers - if self.wandb: - art = wandb.Artifact(name=f"run_{wandb.run.id}_model", type="model", metadata=metadata) - art.add_file(str(model_path)) - wandb.log_artifact(art) - - def update_params(self, params): - # Update the paramters logged - if self.wandb: - wandb.run.config.update(params, allow_val_change=True) - - -def log_tensorboard_graph(tb, model, imgsz=(640, 640)): - # Log model graph to TensorBoard - try: - p = next(model.parameters()) # for device, type - 
imgsz = (imgsz, imgsz) if isinstance(imgsz, int) else imgsz # expand - im = torch.zeros((1, 3, *imgsz)).to(p.device).type_as(p) # input image (WARNING: must be zeros, not empty) - with warnings.catch_warnings(): - warnings.simplefilter('ignore') # suppress jit trace warning - tb.add_graph(torch.jit.trace(de_parallel(model), im, strict=False), []) - except Exception as e: - LOGGER.warning(f'WARNING ⚠️ TensorBoard graph visualization failure {e}') - - -def web_project_name(project): - # Convert local project name to web project name - if not project.startswith('runs/train'): - return project - suffix = '-Classify' if project.endswith('-cls') else '-Segment' if project.endswith('-seg') else '' - return f'YOLOv5{suffix}' diff --git a/spaces/mrm8488/hf-diffusers/ui.py b/spaces/mrm8488/hf-diffusers/ui.py deleted file mode 100644 index aa6a4127ebcf8c32160bfe73f83021cbe298764f..0000000000000000000000000000000000000000 --- a/spaces/mrm8488/hf-diffusers/ui.py +++ /dev/null @@ -1,13 +0,0 @@ -title = "DDPM Diffusion models trained on Q-Blocks Demo" -description = """ -

          -

          -Demo of DDPM Diffusion models trained using HF/Diffusers library on -Q-Blocks hardware. -
          -

          -""" - -examples = [] diff --git a/spaces/mshkdm/VToonify/vtoonify/model/raft/train_mixed.sh b/spaces/mshkdm/VToonify/vtoonify/model/raft/train_mixed.sh deleted file mode 100644 index d9b979f143902a17a0ba7b0a8f960598b7096e0b..0000000000000000000000000000000000000000 --- a/spaces/mshkdm/VToonify/vtoonify/model/raft/train_mixed.sh +++ /dev/null @@ -1,6 +0,0 @@ -#!/bin/bash -mkdir -p checkpoints -python -u train.py --name raft-chairs --stage chairs --validation chairs --gpus 0 --num_steps 120000 --batch_size 8 --lr 0.00025 --image_size 368 496 --wdecay 0.0001 --mixed_precision -python -u train.py --name raft-things --stage things --validation sintel --restore_ckpt checkpoints/raft-chairs.pth --gpus 0 --num_steps 120000 --batch_size 5 --lr 0.0001 --image_size 400 720 --wdecay 0.0001 --mixed_precision -python -u train.py --name raft-sintel --stage sintel --validation sintel --restore_ckpt checkpoints/raft-things.pth --gpus 0 --num_steps 120000 --batch_size 5 --lr 0.0001 --image_size 368 768 --wdecay 0.00001 --gamma=0.85 --mixed_precision -python -u train.py --name raft-kitti --stage kitti --validation kitti --restore_ckpt checkpoints/raft-sintel.pth --gpus 0 --num_steps 50000 --batch_size 5 --lr 0.0001 --image_size 288 960 --wdecay 0.00001 --gamma=0.85 --mixed_precision diff --git a/spaces/msulemannkhan/sentiment-classification-gradio/README.md b/spaces/msulemannkhan/sentiment-classification-gradio/README.md deleted file mode 100644 index 3f7b305dee2c037e427318ea82d3634786e07074..0000000000000000000000000000000000000000 --- a/spaces/msulemannkhan/sentiment-classification-gradio/README.md +++ /dev/null @@ -1,37 +0,0 @@ ---- -title: Sentiment Classification Gradio -emoji: 🚀 -colorFrom: green -colorTo: red -sdk: gradio -app_file: app.py -pinned: false ---- - -# Configuration - -`title`: _string_ -Display title for the Space - -`emoji`: _string_ -Space emoji (emoji-only character allowed) - -`colorFrom`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`colorTo`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`sdk`: _string_ -Can be either `gradio` or `streamlit` - -`sdk_version` : _string_ -Only applicable for `streamlit` SDK. -See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions. - -`app_file`: _string_ -Path to your main application file (which contains either `gradio` or `streamlit` Python code). -Path is relative to the root of the repository. - -`pinned`: _boolean_ -Whether the Space stays on top of your list. 
diff --git a/spaces/mueller-franzes/medfusion-app/medical_diffusion/models/__init__.py b/spaces/mueller-franzes/medfusion-app/medical_diffusion/models/__init__.py deleted file mode 100644 index ae49315a0fea3449a5fcf5d194426778c95bc364..0000000000000000000000000000000000000000 --- a/spaces/mueller-franzes/medfusion-app/medical_diffusion/models/__init__.py +++ /dev/null @@ -1 +0,0 @@ -from .model_base import BasicModel diff --git a/spaces/mufssdr/kkhuy/README.md b/spaces/mufssdr/kkhuy/README.md deleted file mode 100644 index 7eb44c0adfe17c55e857e467e33ae47fd5179b9b..0000000000000000000000000000000000000000 --- a/spaces/mufssdr/kkhuy/README.md +++ /dev/null @@ -1,10 +0,0 @@ ---- -title: Kkhuy -emoji: 🌖 -colorFrom: gray -colorTo: gray -sdk: docker -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/mun-ahmd/HairType/README.md b/spaces/mun-ahmd/HairType/README.md deleted file mode 100644 index 95d0ab55c0413cbc44eb1008dc9761954fe25b41..0000000000000000000000000000000000000000 --- a/spaces/mun-ahmd/HairType/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: HairType -emoji: 📉 -colorFrom: purple -colorTo: yellow -sdk: gradio -sdk_version: 3.3.1 -app_file: app.py -pinned: false -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/nakamura196/yolov5-kunshujo/ultralytics/yolov5/data/scripts/get_coco128.sh b/spaces/nakamura196/yolov5-kunshujo/ultralytics/yolov5/data/scripts/get_coco128.sh deleted file mode 100644 index ee05a867e5644be8cc7549b89cad89d5e84573d0..0000000000000000000000000000000000000000 --- a/spaces/nakamura196/yolov5-kunshujo/ultralytics/yolov5/data/scripts/get_coco128.sh +++ /dev/null @@ -1,17 +0,0 @@ -#!/bin/bash -# YOLOv5 🚀 by Ultralytics, GPL-3.0 license -# Download COCO128 dataset https://www.kaggle.com/ultralytics/coco128 (first 128 images from COCO train2017) -# Example usage: bash data/scripts/get_coco128.sh -# parent -# ├── yolov5 -# └── datasets -# └── coco128 ← downloads here - -# Download/unzip images and labels -d='../datasets' # unzip directory -url=https://github.com/ultralytics/yolov5/releases/download/v1.0/ -f='coco128.zip' # or 'coco128-segments.zip', 68 MB -echo 'Downloading' $url$f ' ...' 
-curl -L $url$f -o $f && unzip -q $f -d $d && rm $f & - -wait # finish background tasks diff --git a/spaces/nasttam/Image-and-3D-Model-Creator/PIFu/apps/render_data.py b/spaces/nasttam/Image-and-3D-Model-Creator/PIFu/apps/render_data.py deleted file mode 100644 index 563c03fba6e304eced73ca283152a968a65c3b8e..0000000000000000000000000000000000000000 --- a/spaces/nasttam/Image-and-3D-Model-Creator/PIFu/apps/render_data.py +++ /dev/null @@ -1,290 +0,0 @@ -#from data.config import raw_dataset, render_dataset, archive_dataset, model_list, zip_path - -from lib.renderer.camera import Camera -import numpy as np -from lib.renderer.mesh import load_obj_mesh, compute_tangent, compute_normal, load_obj_mesh_mtl -from lib.renderer.camera import Camera -import os -import cv2 -import time -import math -import random -import pyexr -import argparse -from tqdm import tqdm - - -def make_rotate(rx, ry, rz): - sinX = np.sin(rx) - sinY = np.sin(ry) - sinZ = np.sin(rz) - - cosX = np.cos(rx) - cosY = np.cos(ry) - cosZ = np.cos(rz) - - Rx = np.zeros((3,3)) - Rx[0, 0] = 1.0 - Rx[1, 1] = cosX - Rx[1, 2] = -sinX - Rx[2, 1] = sinX - Rx[2, 2] = cosX - - Ry = np.zeros((3,3)) - Ry[0, 0] = cosY - Ry[0, 2] = sinY - Ry[1, 1] = 1.0 - Ry[2, 0] = -sinY - Ry[2, 2] = cosY - - Rz = np.zeros((3,3)) - Rz[0, 0] = cosZ - Rz[0, 1] = -sinZ - Rz[1, 0] = sinZ - Rz[1, 1] = cosZ - Rz[2, 2] = 1.0 - - R = np.matmul(np.matmul(Rz,Ry),Rx) - return R - -def rotateSH(SH, R): - SHn = SH - - # 1st order - SHn[1] = R[1,1]*SH[1] - R[1,2]*SH[2] + R[1,0]*SH[3] - SHn[2] = -R[2,1]*SH[1] + R[2,2]*SH[2] - R[2,0]*SH[3] - SHn[3] = R[0,1]*SH[1] - R[0,2]*SH[2] + R[0,0]*SH[3] - - # 2nd order - SHn[4:,0] = rotateBand2(SH[4:,0],R) - SHn[4:,1] = rotateBand2(SH[4:,1],R) - SHn[4:,2] = rotateBand2(SH[4:,2],R) - - return SHn - -def rotateBand2(x, R): - s_c3 = 0.94617469575 - s_c4 = -0.31539156525 - s_c5 = 0.54627421529 - - s_c_scale = 1.0/0.91529123286551084 - s_c_scale_inv = 0.91529123286551084 - - s_rc2 = 1.5853309190550713*s_c_scale - s_c4_div_c3 = s_c4/s_c3 - s_c4_div_c3_x2 = (s_c4/s_c3)*2.0 - - s_scale_dst2 = s_c3 * s_c_scale_inv - s_scale_dst4 = s_c5 * s_c_scale_inv - - sh0 = x[3] + x[4] + x[4] - x[1] - sh1 = x[0] + s_rc2*x[2] + x[3] + x[4] - sh2 = x[0] - sh3 = -x[3] - sh4 = -x[1] - - r2x = R[0][0] + R[0][1] - r2y = R[1][0] + R[1][1] - r2z = R[2][0] + R[2][1] - - r3x = R[0][0] + R[0][2] - r3y = R[1][0] + R[1][2] - r3z = R[2][0] + R[2][2] - - r4x = R[0][1] + R[0][2] - r4y = R[1][1] + R[1][2] - r4z = R[2][1] + R[2][2] - - sh0_x = sh0 * R[0][0] - sh0_y = sh0 * R[1][0] - d0 = sh0_x * R[1][0] - d1 = sh0_y * R[2][0] - d2 = sh0 * (R[2][0] * R[2][0] + s_c4_div_c3) - d3 = sh0_x * R[2][0] - d4 = sh0_x * R[0][0] - sh0_y * R[1][0] - - sh1_x = sh1 * R[0][2] - sh1_y = sh1 * R[1][2] - d0 += sh1_x * R[1][2] - d1 += sh1_y * R[2][2] - d2 += sh1 * (R[2][2] * R[2][2] + s_c4_div_c3) - d3 += sh1_x * R[2][2] - d4 += sh1_x * R[0][2] - sh1_y * R[1][2] - - sh2_x = sh2 * r2x - sh2_y = sh2 * r2y - d0 += sh2_x * r2y - d1 += sh2_y * r2z - d2 += sh2 * (r2z * r2z + s_c4_div_c3_x2) - d3 += sh2_x * r2z - d4 += sh2_x * r2x - sh2_y * r2y - - sh3_x = sh3 * r3x - sh3_y = sh3 * r3y - d0 += sh3_x * r3y - d1 += sh3_y * r3z - d2 += sh3 * (r3z * r3z + s_c4_div_c3_x2) - d3 += sh3_x * r3z - d4 += sh3_x * r3x - sh3_y * r3y - - sh4_x = sh4 * r4x - sh4_y = sh4 * r4y - d0 += sh4_x * r4y - d1 += sh4_y * r4z - d2 += sh4 * (r4z * r4z + s_c4_div_c3_x2) - d3 += sh4_x * r4z - d4 += sh4_x * r4x - sh4_y * r4y - - dst = x - dst[0] = d0 - dst[1] = -d1 - dst[2] = d2 * s_scale_dst2 - dst[3] = -d3 - dst[4] = d4 * 
s_scale_dst4 - - return dst - -def render_prt_ortho(out_path, folder_name, subject_name, shs, rndr, rndr_uv, im_size, angl_step=4, n_light=1, pitch=[0]): - cam = Camera(width=im_size, height=im_size) - cam.ortho_ratio = 0.4 * (512 / im_size) - cam.near = -100 - cam.far = 100 - cam.sanity_check() - - # set path for obj, prt - mesh_file = os.path.join(folder_name, subject_name + '_100k.obj') - if not os.path.exists(mesh_file): - print('ERROR: obj file does not exist!!', mesh_file) - return - prt_file = os.path.join(folder_name, 'bounce', 'bounce0.txt') - if not os.path.exists(prt_file): - print('ERROR: prt file does not exist!!!', prt_file) - return - face_prt_file = os.path.join(folder_name, 'bounce', 'face.npy') - if not os.path.exists(face_prt_file): - print('ERROR: face prt file does not exist!!!', prt_file) - return - text_file = os.path.join(folder_name, 'tex', subject_name + '_dif_2k.jpg') - if not os.path.exists(text_file): - print('ERROR: dif file does not exist!!', text_file) - return - - texture_image = cv2.imread(text_file) - texture_image = cv2.cvtColor(texture_image, cv2.COLOR_BGR2RGB) - - vertices, faces, normals, faces_normals, textures, face_textures = load_obj_mesh(mesh_file, with_normal=True, with_texture=True) - vmin = vertices.min(0) - vmax = vertices.max(0) - up_axis = 1 if (vmax-vmin).argmax() == 1 else 2 - - vmed = np.median(vertices, 0) - vmed[up_axis] = 0.5*(vmax[up_axis]+vmin[up_axis]) - y_scale = 180/(vmax[up_axis] - vmin[up_axis]) - - rndr.set_norm_mat(y_scale, vmed) - rndr_uv.set_norm_mat(y_scale, vmed) - - tan, bitan = compute_tangent(vertices, faces, normals, textures, face_textures) - prt = np.loadtxt(prt_file) - face_prt = np.load(face_prt_file) - rndr.set_mesh(vertices, faces, normals, faces_normals, textures, face_textures, prt, face_prt, tan, bitan) - rndr.set_albedo(texture_image) - - rndr_uv.set_mesh(vertices, faces, normals, faces_normals, textures, face_textures, prt, face_prt, tan, bitan) - rndr_uv.set_albedo(texture_image) - - os.makedirs(os.path.join(out_path, 'GEO', 'OBJ', subject_name),exist_ok=True) - os.makedirs(os.path.join(out_path, 'PARAM', subject_name),exist_ok=True) - os.makedirs(os.path.join(out_path, 'RENDER', subject_name),exist_ok=True) - os.makedirs(os.path.join(out_path, 'MASK', subject_name),exist_ok=True) - os.makedirs(os.path.join(out_path, 'UV_RENDER', subject_name),exist_ok=True) - os.makedirs(os.path.join(out_path, 'UV_MASK', subject_name),exist_ok=True) - os.makedirs(os.path.join(out_path, 'UV_POS', subject_name),exist_ok=True) - os.makedirs(os.path.join(out_path, 'UV_NORMAL', subject_name),exist_ok=True) - - if not os.path.exists(os.path.join(out_path, 'val.txt')): - f = open(os.path.join(out_path, 'val.txt'), 'w') - f.close() - - # copy obj file - cmd = 'cp %s %s' % (mesh_file, os.path.join(out_path, 'GEO', 'OBJ', subject_name)) - print(cmd) - os.system(cmd) - - for p in pitch: - for y in tqdm(range(0, 360, angl_step)): - R = np.matmul(make_rotate(math.radians(p), 0, 0), make_rotate(0, math.radians(y), 0)) - if up_axis == 2: - R = np.matmul(R, make_rotate(math.radians(90),0,0)) - - rndr.rot_matrix = R - rndr_uv.rot_matrix = R - rndr.set_camera(cam) - rndr_uv.set_camera(cam) - - for j in range(n_light): - sh_id = random.randint(0,shs.shape[0]-1) - sh = shs[sh_id] - sh_angle = 0.2*np.pi*(random.random()-0.5) - sh = rotateSH(sh, make_rotate(0, sh_angle, 0).T) - - dic = {'sh': sh, 'ortho_ratio': cam.ortho_ratio, 'scale': y_scale, 'center': vmed, 'R': R} - - rndr.set_sh(sh) - rndr.analytic = False - rndr.use_inverse_depth = 
False - rndr.display() - - out_all_f = rndr.get_color(0) - out_mask = out_all_f[:,:,3] - out_all_f = cv2.cvtColor(out_all_f, cv2.COLOR_RGBA2BGR) - - np.save(os.path.join(out_path, 'PARAM', subject_name, '%d_%d_%02d.npy'%(y,p,j)),dic) - cv2.imwrite(os.path.join(out_path, 'RENDER', subject_name, '%d_%d_%02d.jpg'%(y,p,j)),255.0*out_all_f) - cv2.imwrite(os.path.join(out_path, 'MASK', subject_name, '%d_%d_%02d.png'%(y,p,j)),255.0*out_mask) - - rndr_uv.set_sh(sh) - rndr_uv.analytic = False - rndr_uv.use_inverse_depth = False - rndr_uv.display() - - uv_color = rndr_uv.get_color(0) - uv_color = cv2.cvtColor(uv_color, cv2.COLOR_RGBA2BGR) - cv2.imwrite(os.path.join(out_path, 'UV_RENDER', subject_name, '%d_%d_%02d.jpg'%(y,p,j)),255.0*uv_color) - - if y == 0 and j == 0 and p == pitch[0]: - uv_pos = rndr_uv.get_color(1) - uv_mask = uv_pos[:,:,3] - cv2.imwrite(os.path.join(out_path, 'UV_MASK', subject_name, '00.png'),255.0*uv_mask) - - data = {'default': uv_pos[:,:,:3]} # default is a reserved name - pyexr.write(os.path.join(out_path, 'UV_POS', subject_name, '00.exr'), data) - - uv_nml = rndr_uv.get_color(2) - uv_nml = cv2.cvtColor(uv_nml, cv2.COLOR_RGBA2BGR) - cv2.imwrite(os.path.join(out_path, 'UV_NORMAL', subject_name, '00.png'),255.0*uv_nml) - - -if __name__ == '__main__': - shs = np.load('./env_sh.npy') - - parser = argparse.ArgumentParser() - parser.add_argument('-i', '--input', type=str, default='/home/shunsuke/Downloads/rp_dennis_posed_004_OBJ') - parser.add_argument('-o', '--out_dir', type=str, default='/home/shunsuke/Documents/hf_human') - parser.add_argument('-m', '--ms_rate', type=int, default=1, help='higher ms rate results in less aliased output. MESA renderer only supports ms_rate=1.') - parser.add_argument('-e', '--egl', action='store_true', help='egl rendering option. use this when rendering with headless server with NVIDIA GPU') - parser.add_argument('-s', '--size', type=int, default=512, help='rendering image size') - args = parser.parse_args() - - # NOTE: GL context has to be created before any other OpenGL function loads. - from lib.renderer.gl.init_gl import initialize_GL_context - initialize_GL_context(width=args.size, height=args.size, egl=args.egl) - - from lib.renderer.gl.prt_render import PRTRender - rndr = PRTRender(width=args.size, height=args.size, ms_rate=args.ms_rate, egl=args.egl) - rndr_uv = PRTRender(width=args.size, height=args.size, uv_mode=True, egl=args.egl) - - if args.input[-1] == '/': - args.input = args.input[:-1] - subject_name = args.input.split('/')[-1][:-4] - render_prt_ortho(args.out_dir, args.input, subject_name, shs, rndr, rndr_uv, args.size, 1, 1, pitch=[0]) \ No newline at end of file diff --git a/spaces/nasttam/Image-and-3D-Model-Creator/PIFu/lib/net_util.py b/spaces/nasttam/Image-and-3D-Model-Creator/PIFu/lib/net_util.py deleted file mode 100644 index 3345c10335a0216c5ca3b3c02300911600771b52..0000000000000000000000000000000000000000 --- a/spaces/nasttam/Image-and-3D-Model-Creator/PIFu/lib/net_util.py +++ /dev/null @@ -1,396 +0,0 @@ -import torch -from torch.nn import init -import torch.nn as nn -import torch.nn.functional as F -import functools - -import numpy as np -from .mesh_util import * -from .sample_util import * -from .geometry import index -import cv2 -from PIL import Image -from tqdm import tqdm - - -def reshape_multiview_tensors(image_tensor, calib_tensor): - # Careful here! 
Because we put single view and multiview together, - # the returned tensor.shape is 5-dim: [B, num_views, C, W, H] - # So we need to convert it back to 4-dim [B*num_views, C, W, H] - # Don't worry classifier will handle multi-view cases - image_tensor = image_tensor.view( - image_tensor.shape[0] * image_tensor.shape[1], - image_tensor.shape[2], - image_tensor.shape[3], - image_tensor.shape[4] - ) - calib_tensor = calib_tensor.view( - calib_tensor.shape[0] * calib_tensor.shape[1], - calib_tensor.shape[2], - calib_tensor.shape[3] - ) - - return image_tensor, calib_tensor - - -def reshape_sample_tensor(sample_tensor, num_views): - if num_views == 1: - return sample_tensor - # Need to repeat sample_tensor along the batch dim num_views times - sample_tensor = sample_tensor.unsqueeze(dim=1) - sample_tensor = sample_tensor.repeat(1, num_views, 1, 1) - sample_tensor = sample_tensor.view( - sample_tensor.shape[0] * sample_tensor.shape[1], - sample_tensor.shape[2], - sample_tensor.shape[3] - ) - return sample_tensor - - -def gen_mesh(opt, net, cuda, data, save_path, use_octree=True): - image_tensor = data['img'].to(device=cuda) - calib_tensor = data['calib'].to(device=cuda) - - net.filter(image_tensor) - - b_min = data['b_min'] - b_max = data['b_max'] - try: - save_img_path = save_path[:-4] + '.png' - save_img_list = [] - for v in range(image_tensor.shape[0]): - save_img = (np.transpose(image_tensor[v].detach().cpu().numpy(), (1, 2, 0)) * 0.5 + 0.5)[:, :, ::-1] * 255.0 - save_img_list.append(save_img) - save_img = np.concatenate(save_img_list, axis=1) - Image.fromarray(np.uint8(save_img[:,:,::-1])).save(save_img_path) - - verts, faces, _, _ = reconstruction( - net, cuda, calib_tensor, opt.resolution, b_min, b_max, use_octree=use_octree) - verts_tensor = torch.from_numpy(verts.T).unsqueeze(0).to(device=cuda).float() - xyz_tensor = net.projection(verts_tensor, calib_tensor[:1]) - uv = xyz_tensor[:, :2, :] - color = index(image_tensor[:1], uv).detach().cpu().numpy()[0].T - color = color * 0.5 + 0.5 - save_obj_mesh_with_color(save_path, verts, faces, color) - except Exception as e: - print(e) - print('Can not create marching cubes at this time.') - -def gen_mesh_color(opt, netG, netC, cuda, data, save_path, use_octree=True): - image_tensor = data['img'].to(device=cuda) - calib_tensor = data['calib'].to(device=cuda) - - netG.filter(image_tensor) - netC.filter(image_tensor) - netC.attach(netG.get_im_feat()) - - b_min = data['b_min'] - b_max = data['b_max'] - try: - save_img_path = save_path[:-4] + '.png' - save_img_list = [] - for v in range(image_tensor.shape[0]): - save_img = (np.transpose(image_tensor[v].detach().cpu().numpy(), (1, 2, 0)) * 0.5 + 0.5)[:, :, ::-1] * 255.0 - save_img_list.append(save_img) - save_img = np.concatenate(save_img_list, axis=1) - Image.fromarray(np.uint8(save_img[:,:,::-1])).save(save_img_path) - - verts, faces, _, _ = reconstruction( - netG, cuda, calib_tensor, opt.resolution, b_min, b_max, use_octree=use_octree) - - # Now Getting colors - verts_tensor = torch.from_numpy(verts.T).unsqueeze(0).to(device=cuda).float() - verts_tensor = reshape_sample_tensor(verts_tensor, opt.num_views) - - color = np.zeros(verts.shape) - interval = opt.num_sample_color - for i in range(len(color) // interval): - left = i * interval - right = i * interval + interval - if i == len(color) // interval - 1: - right = -1 - netC.query(verts_tensor[:, :, left:right], calib_tensor) - rgb = netC.get_preds()[0].detach().cpu().numpy() * 0.5 + 0.5 - color[left:right] = rgb.T - - 
save_obj_mesh_with_color(save_path, verts, faces, color) - except Exception as e: - print(e) - print('Can not create marching cubes at this time.') - -def adjust_learning_rate(optimizer, epoch, lr, schedule, gamma): - """Sets the learning rate to the initial LR decayed by schedule""" - if epoch in schedule: - lr *= gamma - for param_group in optimizer.param_groups: - param_group['lr'] = lr - return lr - - -def compute_acc(pred, gt, thresh=0.5): - ''' - return: - IOU, precision, and recall - ''' - with torch.no_grad(): - vol_pred = pred > thresh - vol_gt = gt > thresh - - union = vol_pred | vol_gt - inter = vol_pred & vol_gt - - true_pos = inter.sum().float() - - union = union.sum().float() - if union == 0: - union = 1 - vol_pred = vol_pred.sum().float() - if vol_pred == 0: - vol_pred = 1 - vol_gt = vol_gt.sum().float() - if vol_gt == 0: - vol_gt = 1 - return true_pos / union, true_pos / vol_pred, true_pos / vol_gt - - -def calc_error(opt, net, cuda, dataset, num_tests): - if num_tests > len(dataset): - num_tests = len(dataset) - with torch.no_grad(): - erorr_arr, IOU_arr, prec_arr, recall_arr = [], [], [], [] - for idx in tqdm(range(num_tests)): - data = dataset[idx * len(dataset) // num_tests] - # retrieve the data - image_tensor = data['img'].to(device=cuda) - calib_tensor = data['calib'].to(device=cuda) - sample_tensor = data['samples'].to(device=cuda).unsqueeze(0) - if opt.num_views > 1: - sample_tensor = reshape_sample_tensor(sample_tensor, opt.num_views) - label_tensor = data['labels'].to(device=cuda).unsqueeze(0) - - res, error = net.forward(image_tensor, sample_tensor, calib_tensor, labels=label_tensor) - - IOU, prec, recall = compute_acc(res, label_tensor) - - # print( - # '{0}/{1} | Error: {2:06f} IOU: {3:06f} prec: {4:06f} recall: {5:06f}' - # .format(idx, num_tests, error.item(), IOU.item(), prec.item(), recall.item())) - erorr_arr.append(error.item()) - IOU_arr.append(IOU.item()) - prec_arr.append(prec.item()) - recall_arr.append(recall.item()) - - return np.average(erorr_arr), np.average(IOU_arr), np.average(prec_arr), np.average(recall_arr) - -def calc_error_color(opt, netG, netC, cuda, dataset, num_tests): - if num_tests > len(dataset): - num_tests = len(dataset) - with torch.no_grad(): - error_color_arr = [] - - for idx in tqdm(range(num_tests)): - data = dataset[idx * len(dataset) // num_tests] - # retrieve the data - image_tensor = data['img'].to(device=cuda) - calib_tensor = data['calib'].to(device=cuda) - color_sample_tensor = data['color_samples'].to(device=cuda).unsqueeze(0) - - if opt.num_views > 1: - color_sample_tensor = reshape_sample_tensor(color_sample_tensor, opt.num_views) - - rgb_tensor = data['rgbs'].to(device=cuda).unsqueeze(0) - - netG.filter(image_tensor) - _, errorC = netC.forward(image_tensor, netG.get_im_feat(), color_sample_tensor, calib_tensor, labels=rgb_tensor) - - # print('{0}/{1} | Error inout: {2:06f} | Error color: {3:06f}' - # .format(idx, num_tests, errorG.item(), errorC.item())) - error_color_arr.append(errorC.item()) - - return np.average(error_color_arr) - - -def conv3x3(in_planes, out_planes, strd=1, padding=1, bias=False): - "3x3 convolution with padding" - return nn.Conv2d(in_planes, out_planes, kernel_size=3, - stride=strd, padding=padding, bias=bias) - -def init_weights(net, init_type='normal', init_gain=0.02): - """Initialize network weights. 
- - Parameters: - net (network) -- network to be initialized - init_type (str) -- the name of an initialization method: normal | xavier | kaiming | orthogonal - init_gain (float) -- scaling factor for normal, xavier and orthogonal. - - We use 'normal' in the original pix2pix and CycleGAN paper. But xavier and kaiming might - work better for some applications. Feel free to try yourself. - """ - - def init_func(m): # define the initialization function - classname = m.__class__.__name__ - if hasattr(m, 'weight') and (classname.find('Conv') != -1 or classname.find('Linear') != -1): - if init_type == 'normal': - init.normal_(m.weight.data, 0.0, init_gain) - elif init_type == 'xavier': - init.xavier_normal_(m.weight.data, gain=init_gain) - elif init_type == 'kaiming': - init.kaiming_normal_(m.weight.data, a=0, mode='fan_in') - elif init_type == 'orthogonal': - init.orthogonal_(m.weight.data, gain=init_gain) - else: - raise NotImplementedError('initialization method [%s] is not implemented' % init_type) - if hasattr(m, 'bias') and m.bias is not None: - init.constant_(m.bias.data, 0.0) - elif classname.find( - 'BatchNorm2d') != -1: # BatchNorm Layer's weight is not a matrix; only normal distribution applies. - init.normal_(m.weight.data, 1.0, init_gain) - init.constant_(m.bias.data, 0.0) - - print('initialize network with %s' % init_type) - net.apply(init_func) # apply the initialization function - - -def init_net(net, init_type='normal', init_gain=0.02, gpu_ids=[]): - """Initialize a network: 1. register CPU/GPU device (with multi-GPU support); 2. initialize the network weights - Parameters: - net (network) -- the network to be initialized - init_type (str) -- the name of an initialization method: normal | xavier | kaiming | orthogonal - gain (float) -- scaling factor for normal, xavier and orthogonal. - gpu_ids (int list) -- which GPUs the network runs on: e.g., 0,1,2 - - Return an initialized network. - """ - if len(gpu_ids) > 0: - assert (torch.cuda.is_available()) - net.to(gpu_ids[0]) - net = torch.nn.DataParallel(net, gpu_ids) # multi-GPUs - init_weights(net, init_type, init_gain=init_gain) - return net - - -def imageSpaceRotation(xy, rot): - ''' - args: - xy: (B, 2, N) input - rot: (B, 2) x,y axis rotation angles - - rotation center will be always image center (other rotation center can be represented by additional z translation) - ''' - disp = rot.unsqueeze(2).sin().expand_as(xy) - return (disp * xy).sum(dim=1) - - -def cal_gradient_penalty(netD, real_data, fake_data, device, type='mixed', constant=1.0, lambda_gp=10.0): - """Calculate the gradient penalty loss, used in WGAN-GP paper https://arxiv.org/abs/1704.00028 - - Arguments: - netD (network) -- discriminator network - real_data (tensor array) -- real images - fake_data (tensor array) -- generated images from the generator - device (str) -- GPU / CPU: from torch.device('cuda:{}'.format(self.gpu_ids[0])) if self.gpu_ids else torch.device('cpu') - type (str) -- if we mix real and fake data or not [real | fake | mixed]. - constant (float) -- the constant used in formula ( | |gradient||_2 - constant)^2 - lambda_gp (float) -- weight for this loss - - Returns the gradient penalty loss - """ - if lambda_gp > 0.0: - if type == 'real': # either use real images, fake images, or a linear interpolation of two. 
- interpolatesv = real_data - elif type == 'fake': - interpolatesv = fake_data - elif type == 'mixed': - alpha = torch.rand(real_data.shape[0], 1) - alpha = alpha.expand(real_data.shape[0], real_data.nelement() // real_data.shape[0]).contiguous().view( - *real_data.shape) - alpha = alpha.to(device) - interpolatesv = alpha * real_data + ((1 - alpha) * fake_data) - else: - raise NotImplementedError('{} not implemented'.format(type)) - interpolatesv.requires_grad_(True) - disc_interpolates = netD(interpolatesv) - gradients = torch.autograd.grad(outputs=disc_interpolates, inputs=interpolatesv, - grad_outputs=torch.ones(disc_interpolates.size()).to(device), - create_graph=True, retain_graph=True, only_inputs=True) - gradients = gradients[0].view(real_data.size(0), -1) # flat the data - gradient_penalty = (((gradients + 1e-16).norm(2, dim=1) - constant) ** 2).mean() * lambda_gp # added eps - return gradient_penalty, gradients - else: - return 0.0, None - -def get_norm_layer(norm_type='instance'): - """Return a normalization layer - Parameters: - norm_type (str) -- the name of the normalization layer: batch | instance | none - For BatchNorm, we use learnable affine parameters and track running statistics (mean/stddev). - For InstanceNorm, we do not use learnable affine parameters. We do not track running statistics. - """ - if norm_type == 'batch': - norm_layer = functools.partial(nn.BatchNorm2d, affine=True, track_running_stats=True) - elif norm_type == 'instance': - norm_layer = functools.partial(nn.InstanceNorm2d, affine=False, track_running_stats=False) - elif norm_type == 'group': - norm_layer = functools.partial(nn.GroupNorm, 32) - elif norm_type == 'none': - norm_layer = None - else: - raise NotImplementedError('normalization layer [%s] is not found' % norm_type) - return norm_layer - -class Flatten(nn.Module): - def forward(self, input): - return input.view(input.size(0), -1) - -class ConvBlock(nn.Module): - def __init__(self, in_planes, out_planes, norm='batch'): - super(ConvBlock, self).__init__() - self.conv1 = conv3x3(in_planes, int(out_planes / 2)) - self.conv2 = conv3x3(int(out_planes / 2), int(out_planes / 4)) - self.conv3 = conv3x3(int(out_planes / 4), int(out_planes / 4)) - - if norm == 'batch': - self.bn1 = nn.BatchNorm2d(in_planes) - self.bn2 = nn.BatchNorm2d(int(out_planes / 2)) - self.bn3 = nn.BatchNorm2d(int(out_planes / 4)) - self.bn4 = nn.BatchNorm2d(in_planes) - elif norm == 'group': - self.bn1 = nn.GroupNorm(32, in_planes) - self.bn2 = nn.GroupNorm(32, int(out_planes / 2)) - self.bn3 = nn.GroupNorm(32, int(out_planes / 4)) - self.bn4 = nn.GroupNorm(32, in_planes) - - if in_planes != out_planes: - self.downsample = nn.Sequential( - self.bn4, - nn.ReLU(True), - nn.Conv2d(in_planes, out_planes, - kernel_size=1, stride=1, bias=False), - ) - else: - self.downsample = None - - def forward(self, x): - residual = x - - out1 = self.bn1(x) - out1 = F.relu(out1, True) - out1 = self.conv1(out1) - - out2 = self.bn2(out1) - out2 = F.relu(out2, True) - out2 = self.conv2(out2) - - out3 = self.bn3(out2) - out3 = F.relu(out3, True) - out3 = self.conv3(out3) - - out3 = torch.cat((out1, out2, out3), 1) - - if self.downsample is not None: - residual = self.downsample(residual) - - out3 += residual - - return out3 - \ No newline at end of file diff --git a/spaces/nateraw/yolov6/yolov6/data/vis_dataset.py b/spaces/nateraw/yolov6/yolov6/data/vis_dataset.py deleted file mode 100644 index 89403b62e6b1b1c6bc2f80a0d093f526667874a3..0000000000000000000000000000000000000000 --- 
a/spaces/nateraw/yolov6/yolov6/data/vis_dataset.py +++ /dev/null @@ -1,57 +0,0 @@ -# coding=utf-8 -# Description: visualize yolo label image. - -import argparse -import os -import cv2 -import numpy as np - -IMG_FORMATS = ["bmp", "jpg", "jpeg", "png", "tif", "tiff", "dng", "webp", "mpo"] - -def main(args): - img_dir, label_dir, class_names = args.img_dir, args.label_dir, args.class_names - - label_map = dict() - for class_id, classname in enumerate(class_names): - label_map[class_id] = classname - - for file in os.listdir(img_dir): - if file.split('.')[-1] not in IMG_FORMATS: - print(f'[Warning]: Non-image file {file}') - continue - img_path = os.path.join(img_dir, file) - label_path = os.path.join(label_dir, file[: file.rindex('.')] + '.txt') - - try: - img_data = cv2.imread(img_path) - height, width, _ = img_data.shape - color = [tuple(np.random.choice(range(256), size=3)) for i in class_names] - thickness = 2 - - with open(label_path, 'r') as f: - for bbox in f: - cls, x_c, y_c, w, h = [float(v) if i > 0 else int(v) for i, v in enumerate(bbox.split('\n')[0].split(' '))] - - x_tl = int((x_c - w / 2) * width) - y_tl = int((y_c - h / 2) * height) - cv2.rectangle(img_data, (x_tl, y_tl), (x_tl + int(w * width), y_tl + int(h * height)), tuple([int(x) for x in color[cls]]), thickness) - cv2.putText(img_data, label_map[cls], (x_tl, y_tl - 10), cv2.FONT_HERSHEY_COMPLEX, 1, tuple([int(x) for x in color[cls]]), thickness) - - cv2.imshow('image', img_data) - cv2.waitKey(0) - except Exception as e: - print(f'[Error]: {e} {img_path}') - print('======All Done!======') - - -if __name__ == '__main__': - parser = argparse.ArgumentParser() - parser.add_argument('--img_dir', default='VOCdevkit/voc_07_12/images') - parser.add_argument('--label_dir', default='VOCdevkit/voc_07_12/labels') - parser.add_argument('--class_names', default=['aeroplane', 'bicycle', 'bird', 'boat', 'bottle', 'bus', 'car', 'cat', 'chair', 'cow', 'diningtable', 'dog', - 'horse', 'motorbike', 'person', 'pottedplant', 'sheep', 'sofa', 'train', 'tvmonitor']) - - args = parser.parse_args() - print(args) - - main(args) \ No newline at end of file diff --git a/spaces/naver/PUMP/datasets/image_set.py b/spaces/naver/PUMP/datasets/image_set.py deleted file mode 100644 index fca885493960a5c7eb20680a8f925a83f37bf59e..0000000000000000000000000000000000000000 --- a/spaces/naver/PUMP/datasets/image_set.py +++ /dev/null @@ -1,91 +0,0 @@ -# Copyright 2022-present NAVER Corp. -# CC BY-NC-SA 4.0 -# Available only for non-commercial use - -from pdb import set_trace as bb -import os -from os.path import * -from PIL import Image - - -class ImageSet(object): - """ Base class for an image dataset. - """ - def __init__(self, root, imgs): - self.root = root - self.imgs = imgs - assert imgs, f'Empty image set in {root}' - - def init_from_folder(self, *args, **kw): - imset = ImageSet.from_folder(*args, **kw) - ImageSet.__init__(self, imset.root, imset.imgs) - - def __len__(self): - return len(self.imgs) - - def get_image_path(self, idx): - return os.path.join(self.root, self.imgs[idx]) - - def get_image(self, idx): - fname = self.get_image_path(idx) - try: - return Image.open(fname).convert('RGB') - except Exception as e: - raise IOError("Could not load image %s (reason: %s)" % (fname, str(e))) - - __getitem__ = get_image - - @staticmethod - def from_folder(root, exts=('.jpg','.jpeg','.png','.ppm'), recursive=False, listing=False, check_imgs=False): - """ - recursive: bool or func. If a function, it must evaluate True to the directory name. 
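The `vis_dataset.py` script above reads YOLO-format label files, where each row is `class x_center y_center width height` with coordinates normalized to `[0, 1]`, and converts them to pixel corner coordinates before drawing with OpenCV. The conversion step in isolation (the function name here is chosen only for illustration):

```python
# Stand-alone sketch of the label conversion performed inside vis_dataset.py above.
def yolo_row_to_corners(row, img_width, img_height):
    cls, x_c, y_c, w, h = row.split()
    cls = int(cls)
    x_c, y_c, w, h = map(float, (x_c, y_c, w, h))
    x_tl = int((x_c - w / 2) * img_width)    # top-left corner in pixels
    y_tl = int((y_c - h / 2) * img_height)
    x_br = x_tl + int(w * img_width)         # bottom-right corner in pixels
    y_br = y_tl + int(h * img_height)
    return cls, (x_tl, y_tl), (x_br, y_br)

print(yolo_row_to_corners("0 0.5 0.5 0.2 0.4", 640, 480))
# -> (0, (256, 144), (384, 336))
```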
- """ - if listing: - if listing is True: listing = f"list_imgs{'_recursive' if recursive else ''}.txt" - flist = join(root, listing) - try: return ImageSet.from_listing(root,flist) - except IOError: print(f'>> ImageSet.from_folder(listing=True): entering {root}...') - - if check_imgs is True: # default verif function - check_imgs = verify_img - - for _, dirnames, dirfiles in os.walk(root): - imgs = sorted([f for f in dirfiles if f.lower().endswith(exts)]) - if check_imgs: imgs = [img for img in imgs if check_imgs(join(root,img))] - - if recursive: - for dirname in sorted(dirnames): - if callable(recursive) and not recursive(join(root,dirname)): continue - imset = ImageSet.from_folder(join(root,dirname), exts=exts, recursive=recursive, listing=listing, check_imgs=check_imgs) - imgs += [join(dirname,f) for f in imset.imgs] - break # recursion is handled internally - - if listing: - try: open(flist,'w').write('\n'.join(imgs)) - except IOError: pass # write permission denied - return ImageSet(root, imgs) - - @staticmethod - def from_listing(root, list_path): - return ImageSet(root, open(list_path).read().splitlines()) - - def circular_pad(self, min_size): - assert self.imgs, 'cannot pad an empty image set' - while len(self.imgs) < min_size: - self.imgs += self.imgs # artifically augment size - self.imgs = self.imgs[:min_size or None] - return self - - def __repr__(self): - prefix = os.path.commonprefix((self.get_image_path(0),self.get_image_path(len(self)-1))) - return f'{self.__class__.__name__}({len(self)} images from {prefix}...)' - - - -def verify_img(path, exts=None): - if exts and not path.lower().endswith(exts): return False - try: - Image.open(path).convert('RGB') # try to open it - return True - except: - return False diff --git a/spaces/neuralmagic/question-answering/README.md b/spaces/neuralmagic/question-answering/README.md deleted file mode 100644 index bb4426fd06e56b1610d7243cda1ff8f646dbf75e..0000000000000000000000000000000000000000 --- a/spaces/neuralmagic/question-answering/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: DeepSparse Question Answering -emoji: 🏃 -colorFrom: blue -colorTo: yellow -sdk: gradio -sdk_version: 3.13.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference \ No newline at end of file diff --git a/spaces/nightfury/img2audio_video_prompt_tags/utils.py b/spaces/nightfury/img2audio_video_prompt_tags/utils.py deleted file mode 100644 index d302528fd6fc9be8d782f78b6c44f4d894147d07..0000000000000000000000000000000000000000 --- a/spaces/nightfury/img2audio_video_prompt_tags/utils.py +++ /dev/null @@ -1,50 +0,0 @@ -import json -import numpy as np -import httpx - -from constants import MUBERT_TAGS, MUBERT_LICENSE, MUBERT_MODE, MUBERT_TOKEN - - -def get_mubert_tags_embeddings(w2v_model): - return w2v_model.encode(MUBERT_TAGS) - - -def get_pat(email: str): - r = httpx.post('https://api-b2b.mubert.com/v2/GetServiceAccess', - json={ - "method": "GetServiceAccess", - "params": { - "email": email, - "license": MUBERT_LICENSE, - "token": MUBERT_TOKEN, - "mode": MUBERT_MODE, - } - }) - - rdata = json.loads(r.text) - assert rdata['status'] == 1, "probably incorrect e-mail" - pat = rdata['data']['pat'] - return pat - - -def find_similar(em, embeddings, method='cosine'): - scores = [] - for ref in embeddings: - if method == 'cosine': - scores.append(1 - np.dot(ref, em) / (np.linalg.norm(ref) * np.linalg.norm(em))) - if method == 'norm': - scores.append(np.linalg.norm(ref - em)) - 
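The `find_similar` helper above ranks the precomputed Mubert tag embeddings by cosine distance to a prompt embedding, and `get_tags_for_prompts` then keeps the closest few tags. The ranking step on its own, with random stand-in embeddings (in the app they come from a sentence-transformer model), looks like this:

```python
# Self-contained sketch of the cosine-distance ranking used by find_similar above.
import numpy as np

def rank_by_cosine(query, references):
    # cosine distance = 1 - cosine similarity; lower means closer
    scores = np.array([
        1 - np.dot(ref, query) / (np.linalg.norm(ref) * np.linalg.norm(query))
        for ref in references
    ])
    return scores, np.argsort(scores)

rng = np.random.default_rng(0)
tag_embeddings = rng.normal(size=(5, 384))   # pretend 5 tags, 384-dim embeddings
prompt_embedding = rng.normal(size=384)
scores, order = rank_by_cosine(prompt_embedding, tag_embeddings)
print("closest tag indices:", order[:3])
```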
return np.array(scores), np.argsort(scores) - - -def get_tags_for_prompts(w2v_model, mubert_tags_embeddings, prompts, top_n=3, debug=False): - prompts_embeddings = w2v_model.encode(prompts) - ret = [] - for i, pe in enumerate(prompts_embeddings): - scores, idxs = find_similar(pe, mubert_tags_embeddings) - top_tags = MUBERT_TAGS[idxs[:top_n]] - top_prob = 1 - scores[idxs[:top_n]] - if debug: - print(f"Prompt: {prompts[i]}\nTags: {', '.join(top_tags)}\nScores: {top_prob}\n\n\n") - ret.append((prompts[i], list(top_tags))) - return ret diff --git a/spaces/nikitaPDL2023/assignment4/detectron2/configs/Misc/mmdet_mask_rcnn_R_50_FPN_1x.py b/spaces/nikitaPDL2023/assignment4/detectron2/configs/Misc/mmdet_mask_rcnn_R_50_FPN_1x.py deleted file mode 100644 index bdd49a4566d1d0c79d0613c34a8cffd616f74fd2..0000000000000000000000000000000000000000 --- a/spaces/nikitaPDL2023/assignment4/detectron2/configs/Misc/mmdet_mask_rcnn_R_50_FPN_1x.py +++ /dev/null @@ -1,152 +0,0 @@ -# An example config to train a mmdetection model using detectron2. - -from ..common.data.coco import dataloader -from ..common.coco_schedule import lr_multiplier_1x as lr_multiplier -from ..common.optim import SGD as optimizer -from ..common.train import train -from ..common.data.constants import constants - -from detectron2.modeling.mmdet_wrapper import MMDetDetector -from detectron2.config import LazyCall as L - -model = L(MMDetDetector)( - detector=dict( - type="MaskRCNN", - pretrained="torchvision://resnet50", - backbone=dict( - type="ResNet", - depth=50, - num_stages=4, - out_indices=(0, 1, 2, 3), - frozen_stages=1, - norm_cfg=dict(type="BN", requires_grad=True), - norm_eval=True, - style="pytorch", - ), - neck=dict(type="FPN", in_channels=[256, 512, 1024, 2048], out_channels=256, num_outs=5), - rpn_head=dict( - type="RPNHead", - in_channels=256, - feat_channels=256, - anchor_generator=dict( - type="AnchorGenerator", - scales=[8], - ratios=[0.5, 1.0, 2.0], - strides=[4, 8, 16, 32, 64], - ), - bbox_coder=dict( - type="DeltaXYWHBBoxCoder", - target_means=[0.0, 0.0, 0.0, 0.0], - target_stds=[1.0, 1.0, 1.0, 1.0], - ), - loss_cls=dict(type="CrossEntropyLoss", use_sigmoid=True, loss_weight=1.0), - loss_bbox=dict(type="L1Loss", loss_weight=1.0), - ), - roi_head=dict( - type="StandardRoIHead", - bbox_roi_extractor=dict( - type="SingleRoIExtractor", - roi_layer=dict(type="RoIAlign", output_size=7, sampling_ratio=0), - out_channels=256, - featmap_strides=[4, 8, 16, 32], - ), - bbox_head=dict( - type="Shared2FCBBoxHead", - in_channels=256, - fc_out_channels=1024, - roi_feat_size=7, - num_classes=80, - bbox_coder=dict( - type="DeltaXYWHBBoxCoder", - target_means=[0.0, 0.0, 0.0, 0.0], - target_stds=[0.1, 0.1, 0.2, 0.2], - ), - reg_class_agnostic=False, - loss_cls=dict(type="CrossEntropyLoss", use_sigmoid=False, loss_weight=1.0), - loss_bbox=dict(type="L1Loss", loss_weight=1.0), - ), - mask_roi_extractor=dict( - type="SingleRoIExtractor", - roi_layer=dict(type="RoIAlign", output_size=14, sampling_ratio=0), - out_channels=256, - featmap_strides=[4, 8, 16, 32], - ), - mask_head=dict( - type="FCNMaskHead", - num_convs=4, - in_channels=256, - conv_out_channels=256, - num_classes=80, - loss_mask=dict(type="CrossEntropyLoss", use_mask=True, loss_weight=1.0), - ), - ), - # model training and testing settings - train_cfg=dict( - rpn=dict( - assigner=dict( - type="MaxIoUAssigner", - pos_iou_thr=0.7, - neg_iou_thr=0.3, - min_pos_iou=0.3, - match_low_quality=True, - ignore_iof_thr=-1, - ), - sampler=dict( - type="RandomSampler", - num=256, - 
pos_fraction=0.5, - neg_pos_ub=-1, - add_gt_as_proposals=False, - ), - allowed_border=-1, - pos_weight=-1, - debug=False, - ), - rpn_proposal=dict( - nms_pre=2000, - max_per_img=1000, - nms=dict(type="nms", iou_threshold=0.7), - min_bbox_size=0, - ), - rcnn=dict( - assigner=dict( - type="MaxIoUAssigner", - pos_iou_thr=0.5, - neg_iou_thr=0.5, - min_pos_iou=0.5, - match_low_quality=True, - ignore_iof_thr=-1, - ), - sampler=dict( - type="RandomSampler", - num=512, - pos_fraction=0.25, - neg_pos_ub=-1, - add_gt_as_proposals=True, - ), - mask_size=28, - pos_weight=-1, - debug=False, - ), - ), - test_cfg=dict( - rpn=dict( - nms_pre=1000, - max_per_img=1000, - nms=dict(type="nms", iou_threshold=0.7), - min_bbox_size=0, - ), - rcnn=dict( - score_thr=0.05, - nms=dict(type="nms", iou_threshold=0.5), - max_per_img=100, - mask_thr_binary=0.5, - ), - ), - ), - pixel_mean=constants.imagenet_rgb256_mean, - pixel_std=constants.imagenet_rgb256_std, -) - -dataloader.train.mapper.image_format = "RGB" # torchvision pretrained model -train.init_checkpoint = None # pretrained model is loaded inside backbone diff --git a/spaces/nikitaPDL2023/assignment4/detectron2/projects/DensePose/densepose/vis/extractor.py b/spaces/nikitaPDL2023/assignment4/detectron2/projects/DensePose/densepose/vis/extractor.py deleted file mode 100644 index bfb2bdf693254a954e54a74b8766e5f574f6cf3a..0000000000000000000000000000000000000000 --- a/spaces/nikitaPDL2023/assignment4/detectron2/projects/DensePose/densepose/vis/extractor.py +++ /dev/null @@ -1,199 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -import logging -from typing import List, Optional, Sequence, Tuple -import torch - -from detectron2.layers.nms import batched_nms -from detectron2.structures.instances import Instances - -from densepose.converters import ToChartResultConverterWithConfidences -from densepose.structures import ( - DensePoseChartResultWithConfidences, - DensePoseEmbeddingPredictorOutput, -) -from densepose.vis.bounding_box import BoundingBoxVisualizer, ScoredBoundingBoxVisualizer -from densepose.vis.densepose_outputs_vertex import DensePoseOutputsVertexVisualizer -from densepose.vis.densepose_results import DensePoseResultsVisualizer - -from .base import CompoundVisualizer - -Scores = Sequence[float] -DensePoseChartResultsWithConfidences = List[DensePoseChartResultWithConfidences] - - -def extract_scores_from_instances(instances: Instances, select=None): - if instances.has("scores"): - return instances.scores if select is None else instances.scores[select] - return None - - -def extract_boxes_xywh_from_instances(instances: Instances, select=None): - if instances.has("pred_boxes"): - boxes_xywh = instances.pred_boxes.tensor.clone() - boxes_xywh[:, 2] -= boxes_xywh[:, 0] - boxes_xywh[:, 3] -= boxes_xywh[:, 1] - return boxes_xywh if select is None else boxes_xywh[select] - return None - - -def create_extractor(visualizer: object): - """ - Create an extractor for the provided visualizer - """ - if isinstance(visualizer, CompoundVisualizer): - extractors = [create_extractor(v) for v in visualizer.visualizers] - return CompoundExtractor(extractors) - elif isinstance(visualizer, DensePoseResultsVisualizer): - return DensePoseResultExtractor() - elif isinstance(visualizer, ScoredBoundingBoxVisualizer): - return CompoundExtractor([extract_boxes_xywh_from_instances, extract_scores_from_instances]) - elif isinstance(visualizer, BoundingBoxVisualizer): - return extract_boxes_xywh_from_instances - elif isinstance(visualizer, DensePoseOutputsVertexVisualizer): 
- return DensePoseOutputsExtractor() - else: - logger = logging.getLogger(__name__) - logger.error(f"Could not create extractor for {visualizer}") - return None - - -class BoundingBoxExtractor(object): - """ - Extracts bounding boxes from instances - """ - - def __call__(self, instances: Instances): - boxes_xywh = extract_boxes_xywh_from_instances(instances) - return boxes_xywh - - -class ScoredBoundingBoxExtractor(object): - """ - Extracts bounding boxes from instances - """ - - def __call__(self, instances: Instances, select=None): - scores = extract_scores_from_instances(instances) - boxes_xywh = extract_boxes_xywh_from_instances(instances) - if (scores is None) or (boxes_xywh is None): - return (boxes_xywh, scores) - if select is not None: - scores = scores[select] - boxes_xywh = boxes_xywh[select] - return (boxes_xywh, scores) - - -class DensePoseResultExtractor(object): - """ - Extracts DensePose chart result with confidences from instances - """ - - def __call__( - self, instances: Instances, select=None - ) -> Tuple[Optional[DensePoseChartResultsWithConfidences], Optional[torch.Tensor]]: - if instances.has("pred_densepose") and instances.has("pred_boxes"): - dpout = instances.pred_densepose - boxes_xyxy = instances.pred_boxes - boxes_xywh = extract_boxes_xywh_from_instances(instances) - if select is not None: - dpout = dpout[select] - boxes_xyxy = boxes_xyxy[select] - converter = ToChartResultConverterWithConfidences() - results = [converter.convert(dpout[i], boxes_xyxy[[i]]) for i in range(len(dpout))] - return results, boxes_xywh - else: - return None, None - - -class DensePoseOutputsExtractor(object): - """ - Extracts DensePose result from instances - """ - - def __call__( - self, - instances: Instances, - select=None, - ) -> Tuple[ - Optional[DensePoseEmbeddingPredictorOutput], Optional[torch.Tensor], Optional[List[int]] - ]: - if not (instances.has("pred_densepose") and instances.has("pred_boxes")): - return None, None, None - - dpout = instances.pred_densepose - boxes_xyxy = instances.pred_boxes - boxes_xywh = extract_boxes_xywh_from_instances(instances) - - if instances.has("pred_classes"): - classes = instances.pred_classes.tolist() - else: - classes = None - - if select is not None: - dpout = dpout[select] - boxes_xyxy = boxes_xyxy[select] - if classes is not None: - classes = classes[select] - - return dpout, boxes_xywh, classes - - -class CompoundExtractor(object): - """ - Extracts data for CompoundVisualizer - """ - - def __init__(self, extractors): - self.extractors = extractors - - def __call__(self, instances: Instances, select=None): - datas = [] - for extractor in self.extractors: - data = extractor(instances, select) - datas.append(data) - return datas - - -class NmsFilteredExtractor(object): - """ - Extracts data in the format accepted by NmsFilteredVisualizer - """ - - def __init__(self, extractor, iou_threshold): - self.extractor = extractor - self.iou_threshold = iou_threshold - - def __call__(self, instances: Instances, select=None): - scores = extract_scores_from_instances(instances) - boxes_xywh = extract_boxes_xywh_from_instances(instances) - if boxes_xywh is None: - return None - select_local_idx = batched_nms( - boxes_xywh, - scores, - torch.zeros(len(scores), dtype=torch.int32), - iou_threshold=self.iou_threshold, - ).squeeze() - select_local = torch.zeros(len(boxes_xywh), dtype=torch.bool, device=boxes_xywh.device) - select_local[select_local_idx] = True - select = select_local if select is None else (select & select_local) - return 
self.extractor(instances, select=select) - - -class ScoreThresholdedExtractor(object): - """ - Extracts data in the format accepted by ScoreThresholdedVisualizer - """ - - def __init__(self, extractor, min_score): - self.extractor = extractor - self.min_score = min_score - - def __call__(self, instances: Instances, select=None): - scores = extract_scores_from_instances(instances) - if scores is None: - return None - select_local = scores > self.min_score - select = select_local if select is None else (select & select_local) - data = self.extractor(instances, select=select) - return data diff --git a/spaces/noes14155/img_All_models/index.html b/spaces/noes14155/img_All_models/index.html deleted file mode 100644 index 40b11abfac0f6f7c145d1d349a978f07587cf433..0000000000000000000000000000000000000000 --- a/spaces/noes14155/img_All_models/index.html +++ /dev/null @@ -1,305 +0,0 @@ -import gradio as gr -import os -import sys -from pathlib import Path - -models = [ - {"name": "Deliberate", "url": "Masagin/Deliberate"}, - {"name": "Dreamlike Anime", "url": "dreamlike-art/dreamlike-anime-1.0"}, - {"name": "Dreamlike Diffusion", "url": "dreamlike-art/dreamlike-diffusion-1.0"}, - {"name": "Dreamlike Photoreal", "url": "dreamlike-art/dreamlike-photoreal-2.0"}, - {"name": "Dreamshaper", "url": "Lykon/DreamShaper"}, - {"name": "Lyriel 1.3", "url": "sakistriker/Lyriel_V1.3"}, - {"name": "Never Ending Dream 2", "url": "luongphamit/NeverEnding-Dream2"}, - {"name": "Protogen X 5.8", "url": "darkstorm2150/Protogen_x5.8_Official_Release"}, - {"name": "❤ ART MODELS ==========", "url": "dreamlike-art/dreamlike-diffusion-1.0"}, - {"name": "Alice in Diffusion Land", "url": "Guizmus/SDArt_AliceInDiffusionLand"}, - {"name": "Alt Clip", "url": "BAAI/AltCLIP"}, - {"name": "Anything Midjourney 4.1", "url": "Joeythemonster/anything-midjourney-v-4-1"}, - {"name": "Chaos and Order", "url": "Guizmus/SDArt_ChaosAndOrder768"}, - {"name": "Chilloutclara", "url": "Fred99774/chilloutvlara"}, - {"name": "Comic Diffusion", "url": "ogkalu/Comic-Diffusion"}, - {"name": "Cosmic Horros 768", "url": "Guizmus/SDArt_cosmichorrors768"}, - {"name": "Cosmic Horros", "url": "Guizmus/SDArt_cosmichorrors"}, - {"name": "DGSpitzer", "url": "DGSpitzer/DGSpitzer-Art-Diffusion"}, - {"name": "Dungeons and Diffusion", "url": "0xJustin/Dungeons-and-Diffusion"}, - {"name": "Elden Ring", "url": "nitrosocke/elden-ring-diffusion"}, - {"name": "Epic Diffusion 1.1", "url": "johnslegers/epic-diffusion-v1.1"}, - {"name": "Epic Diffusion", "url": "johnslegers/epic-diffusion"}, - {"name": "EpicMix Realism", "url": "Duskfallcrew/EpicMix_Realism"}, - {"name": "Fantasy Mix", "url": "theintuitiveye/FantasyMix"}, - {"name": "Girl New 1", "url": "Fred99774/girlnew1"}, - {"name": "Lit 6B", "url": "hakurei/lit-6B"}, - {"name": "Luna Diffusion", "url": "proximasanfinetuning/luna-diffusion"}, - {"name": "Midjourney 4.0", "url": "flax/midjourney-v4-diffusion"}, - {"name": "Midjourney 4.1", "url": "Joeythemonster/anything-midjourney-v-4-1"}, - {"name": "Mo-Di Diffusion", "url": "nitrosocke/mo-di-diffusion"}, - {"name": "Nitro Diffusion", "url": "nitrosocke/Nitro-Diffusion"}, - {"name": "Openjourney V2", "url": "prompthero/openjourney-v2"}, - {"name": "Openjourney", "url": "prompthero/openjourney"}, - {"name": "Seek Art Mega", "url": "coreco/seek.art_MEGA"}, - {"name": "Something", "url": "Guizmus/SDArt_something"}, - {"name": "Spider Verse diffusion", "url": "nitrosocke/spider-verse-diffusion"}, - {"name": "Vintedois 1.0", "url": "22h/vintedois-diffusion-v0-1"}, - 
{"name": "Vintedois 2.0", "url": "22h/vintedois-diffusion-v0-2"}, - {"name": "❤ ART STYLES ==========", "url": "joachimsallstrom/Double-Exposure-Diffusion"}, - {"name": "Balloon Art", "url": "Fictiverse/Stable_Diffusion_BalloonArt_Model"}, - {"name": "Double Exposure Diffusion", "url": "joachimsallstrom/Double-Exposure-Diffusion"}, - {"name": "Fluid Art", "url": "Fictiverse/Stable_Diffusion_FluidArt_Model"}, - {"name": "GTA5 Artwork Diffusion", "url": "ItsJayQz/GTA5_Artwork_Diffusion"}, - {"name": "Marvel WhatIf Diffusion", "url": "ItsJayQz/Marvel_WhatIf_Diffusion"}, - {"name": "Naruto Diffuser", "url": "lambdalabs/sd-naruto-diffusers"}, - {"name": "Papercut", "url": "Fictiverse/Stable_Diffusion_PaperCut_Model"}, - {"name": "Pokemon Diffuser", "url": "lambdalabs/sd-pokemon-diffusers"}, - {"name": "Synthwave Punk 2", "url": "ItsJayQz/SynthwavePunk-v2"}, - {"name": "Valorant Diffusion", "url": "ItsJayQz/Valorant_Diffusion"}, - {"name": "Van Gogh Diffusion", "url": "dallinmackay/Van-Gogh-diffusion"}, - {"name": "Vectorartz Diffusion", "url": "coder119/Vectorartz_Diffusion"}, - {"name": "VoxelArt", "url": "Fictiverse/Stable_Diffusion_VoxelArt_Model"}, - {"name": "❤ ANIME MODELS ==========", "url": "dreamlike-art/dreamlike-anime-1.0"}, - {"name": "7 Pa", "url": "AIARTCHAN/7pa"}, - {"name": "A Certain Model", "url": "JosephusCheung/ACertainModel"}, - {"name": "A Certain Thing", "url": "JosephusCheung/ACertainThing"}, - {"name": "A Certainity", "url": "JosephusCheung/ACertainty"}, - {"name": "Abyss Hell Hero", "url": "AIARTCHAN/AbyssHellHero"}, - {"name": "Abyss Maple 3", "url": "AIARTCHAN/AbyssMapleVer3"}, - {"name": "Abyss Orange Mix 2", "url": "WarriorMama777/AbyssOrangeMix2"}, - {"name": "Abyss Orange Mix 4", "url": "sakistriker/AbyssOrangeMix3"}, - {"name": "Abyss Orange Mix", "url": "WarriorMama777/AbyssOrangeMix"}, - {"name": "AbyssHell 3", "url": "AIARTCHAN/AbyssHellVer3"}, - {"name": "All 526 Animated", "url": "stablediffusionapi/all-526-animated"}, - {"name": "Anidosmix 3", "url": "AIARTCHAN/anidosmixV2"}, - {"name": "Anime Kawai Diffusion", "url": "Ojimi/anime-kawai-diffusion"}, - {"name": "Anireal 3D V2", "url": "circulus/sd-anireal-3d-v2"}, - {"name": "AnyLORA", "url": "kubanemil/AnyLORA"}, - {"name": "Anything 2.1", "url": "swl-models/anything-v2.1"}, - {"name": "Anything 3.0 Light", "url": "mm00/anything-v3.0-light"}, - {"name": "Anything 3.0", "url": "Linaqruf/anything-v3.0"}, - {"name": "Anything 3.1", "url": "cag/anything-v3-1"}, - {"name": "Anything 3X", "url": "iZELX1/Anything-V3-X"}, - {"name": "Anything 4.0", "url": "andite/anything-v4.0"}, - {"name": "Anything 5", "url": "sakistriker/Anything_V5_PrtRE"}, - {"name": "Anything 5.0", "url": "stablediffusionapi/anything-v5"}, - {"name": "Anything Else 4", "url": "stablediffusionapi/anythingelse-v4"}, - {"name": "Anything Else 5", "url": "stablediffusionapi/anything-v5"}, - {"name": "Arcane Diffusion", "url": "nitrosocke/Arcane-Diffusion"}, - {"name": "Archer Diffusion", "url": "nitrosocke/archer-diffusion"}, - {"name": "Asian Mix", "url": "D1b4l4p/AsianMix"}, - {"name": "Blood Orange Mix", "url": "WarriorMama777/BloodOrangeMix"}, - {"name": "CamelliaMix 2.5D","url": "stablediffusionapi/camelliamix25d"}, - {"name": "CamelliaMix Line","url": "stablediffusionapi/camelliamixline"}, - {"name": "CamelliaMix","url": "Powidl43/CamelliaMix"}, - {"name": "Cetusmix", "url": "stablediffusionapi/cetusmix"}, - {"name": "Chik Mix", "url": "stablediffusionapi/chikmix"}, - {"name": "Chikmix", "url": "stablediffusionapi/chikmix"}, - {"name": 
"Chillout App Factory","url": "stablediffusionapi/chillout-app-factory"}, - {"name": "Classic Anime", "url": "nitrosocke/classic-anim-diffusion"}, - {"name": "Cool Japan Diffusion 2.1.2", "url": "aipicasso/cool-japan-diffusion-2-1-2"}, - {"name": "Cosmic Babes", "url": "stablediffusionapi/cosmic-babes"}, - {"name": "Counterfeit 1.0", "url": "gsdf/counterfeit-v1.0"}, - {"name": "Counterfeit 2", "url": "gsdf/Counterfeit-V2.0"}, - {"name": "Counterfeit 2.0", "url": "gsdf/Counterfeit-V2.0"}, - {"name": "Counterfeit 3.0", "url": "stablediffusionapi/counterfeit-v30"}, - {"name": "CuteSexyRobutts", "url": "andite/cutesexyrobutts-diffusion"}, - {"name": "CyberPunk Anime", "url": "DGSpitzer/Cyberpunk-Anime-Diffusion"}, - {"name": "Dark Sushi Mix", "url": "stablediffusionapi/dark-sushi-mix"}, - {"name": "Dash Sushi 25d", "url": "stablediffusionapi/dark-sushi-25d"}, - {"name": "DucHaiten Anime", "url": "DucHaiten/DucHaitenAnime"}, - {"name": "Eerie Orange Mix", "url": "WarriorMama777/EerieOrangeMix"}, - {"name": "Eimis Anime Diffusion", "url": "eimiss/EimisAnimeDiffusion_1.0v"}, - {"name": "Ghibli Diffusion", "url": "nitrosocke/Ghibli-Diffusion"}, - {"name": "GrapeFruit", "url": "iZELX1/Grapefruit"}, - {"name": "GuoFeng 3", "url": "xiaolxl/GuoFeng3"}, - {"name": "Guweiz Diffusion", "url": "andite/guweiz-diffusion"}, - {"name": "Hiten Diffusion", "url": "andite/hiten-diffusion"}, - {"name": "Icomix 2", "url": "stablediffusionapi/icomix-2"}, - {"name": "InkPunk Diffusion", "url": "Envvi/Inkpunk-Diffusion"}, - {"name": "Mama Orange Mixs", "url": "WarriorMama777/OrangeMixs"}, - {"name": "Mashuu Diffusion", "url": "andite/mashuu-diffusion"}, - {"name": "Meainamis 8", "url": "sakistriker/MeinaMix_V8"}, - {"name": "Meina Alter", "url": "stablediffusionapi/meinaalter"}, - {"name": "Meina Pastel", "url": "stablediffusionapi/meinapastel"}, - {"name": "MeinaMix 7", "url": "Nacholmo/meinamixv7-diffusers"}, - {"name": "Mignon Diffusion", "url": "andite/mignon-diffusion"}, - {"name": "MikaPikazo Diffusion", "url": "andite/mikapikazo-diffusion"}, - {"name": "Mikapikazo", "url": "andite/mikapikazo-diffusion"}, - {"name": "Mix Pro V4", "url": "AIARTCHAN/MIX-Pro-V4"}, - {"name": "NeverEnding-Dream", "url": "Lykon/NeverEnding-Dream"}, - {"name": "Niji V5 Style 1", "url": "sakistriker/NijiV5style_V1"}, - {"name": "Openjourney 4", "url": "prompthero/openjourney-v4"}, - {"name": "OpenNiji", "url": "Korakoe/OpenNiji"}, - {"name": "Pastel Mix", "url": "andite/pastel-mix"}, - {"name": "Picasso Diffusion 1.1", "url": "aipicasso/picasso-diffusion-1-1"}, - {"name": "Piromizu Diffusion", "url": "andite/piromizu-diffusion"}, - {"name": "Protogen 2.2", "url": "darkstorm2150/Protogen_v2.2_Official_Release"}, - {"name": "Protogen Infinity", "url": "darkstorm2150/Protogen_Infinity_Official_Release"}, - {"name": "Protogen X 3.4", "url": "darkstorm2150/Protogen_x3.4_Official_Release"}, - {"name": "Rev Anim", "url": "stablediffusionapi/rev-anim"}, - {"name": "Rev Animated", "url": "coreml/coreml-ReV-Animated"}, - {"name": "Rev Animated", "url": "LottePeisch/RevAnimated-Diffusers"}, - {"name": "Something V 2.2","url": "NoCrypt/SomethingV2_2"}, - {"name": "Something V2","url": "NoCrypt/SomethingV2"}, - {"name": "Three Delicacy", "url": "stablediffusionapi/three-delicacy"}, - {"name": "Three Delicacy wonto", "url": "stablediffusionapi/three-delicacy-wonto"}, - {"name": "TMND mix", "url": "stablediffusionapi/tmnd-mix"}, - {"name": "Waifu Diffusion", "url": "hakurei/waifu-diffusion"}, - {"name": "❤ REALISTIC PHOTO MODELS ==========", "url": 
"dreamlike-art/dreamlike-photoreal-2.0"}, - {"name": "AmiIReal", "url": "stablediffusionapi/amireal"}, - {"name": "Analog Diffusion", "url": "wavymulder/Analog-Diffusion"}, - {"name": "Circulus 2.8", "url": "circulus/sd-photoreal-v2.8"}, - {"name": "Circulus Photoreal V2", "url": "circulus/sd-photoreal-real-v2"}, - {"name": "Claudfuen 1", "url": "claudfuen/photorealistic-fuen-v1"}, - {"name": "Collage Diffusion", "url": "wavymulder/collage-diffusion"}, - {"name": "Cyberrealistic", "url": "stablediffusionapi/cyberrealistic"}, - {"name": "Dreamful 2", "url": "Hius/DreamFul-V2"}, - {"name": "GakkiMix768", "url": "Sa1i/gakki-mix-768"}, - {"name": "Grimoeresigils", "url": "ECarbenia/grimoiresigils"}, - {"name": "HARDBlend", "url": "theintuitiveye/HARDblend"}, - {"name": "HassanBlend 1.4", "url": "hassanblend/hassanblend1.4"}, - {"name": "HassanBlend 1.5.1.2", "url": "hassanblend/HassanBlend1.5.1.2"}, - {"name": "Lomo Diffusion", "url": "wavymulder/lomo-diffusion"}, - {"name": "Model Shoot", "url": "wavymulder/modelshoot"}, - {"name": "Portrait Plus", "url": "wavymulder/portraitplus"}, - {"name": "QuinceMix", "url": "Hemlok/QuinceMix"}, - {"name": "Realistic Vision 1.4", "url": "SG161222/Realistic_Vision_V1.4"}, - {"name": "The Ally", "url": "stablediffusionapi/the-ally"}, - {"name": "Timeless Diffusion", "url": "wavymulder/timeless-diffusion"}, - {"name": "UltraSkin", "url": "VegaKH/Ultraskin"}, - {"name": "Wavyfusion", "url": "wavymulder/wavyfusion"}, - {"name": "❤ SEMI-REALISTIC MODELS ==========", "url": "stablediffusionapi/all-526"}, - {"name": "All 526", "url": "stablediffusionapi/all-526"}, - {"name": "All 526 animated", "url": "stablediffusionapi/all-526-animated"}, - {"name": "Circulus Semi Real 2", "url": "circulus/sd-photoreal-semi-v2"}, - {"name": "Semi Real Mix", "url": "robotjung/SemiRealMix"}, - {"name": "SpyBG", "url": "stablediffusionapi/spybg"}, - {"name": "❤ STABLE DIFFUSION MODELS ==========", "url": "stabilityai/stable-diffusion-2-1"}, - {"name": "Stable Diffusion 1.4","url": "CompVis/stable-diffusion-v1-4"}, - {"name": "Stable Diffusion 1.5","url": "runwayml/stable-diffusion-v1-5"}, - {"name": "Stable Diffusion 2.1","url": "stabilityai/stable-diffusion-2-1"}, - {"name": "Stable Diffusion 2.1 Base","url": "stabilityai/stable-diffusion-2-1-base"}, - {"name": "Stable Diffusion 2.1 Unclip","url": "stabilityai/stable-diffusion-2-1-unclip"}, - {"name": "❤ SCI FI MODELS ==========", "url": "nitrosocke/Future-Diffusion"}, - {"name": "Future Diffusion", "url": "nitrosocke/Future-Diffusion"}, - {"name": "JWST Deep Space Diffusion", "url": "dallinmackay/JWST-Deep-Space-diffusion"}, - {"name": "Robo Diffusion 3 Base", "url": "nousr/robo-diffusion-2-base"}, - {"name": "Robo Diffusion", "url": "nousr/robo-diffusion"}, - {"name": "Tron Legacy Diffusion", "url": "dallinmackay/Tron-Legacy-diffusion"}, - {"name": "❤ 3D ART MODELS ==========", "url": "DucHaiten/DucHaitenAIart"}, - {"name": "DucHaiten Art", "url": "DucHaiten/DucHaitenAIart"}, - {"name": "DucHaiten ClassicAnime", "url": "DucHaiten/DH_ClassicAnime"}, - {"name": "DucHaiten DreamWorld", "url": "DucHaiten/DucHaitenDreamWorld"}, - {"name": "DucHaiten Journey", "url": "DucHaiten/DucHaitenJourney"}, - {"name": "DucHaiten StyleLikeMe", "url": "DucHaiten/DucHaiten-StyleLikeMe"}, - {"name": "DucHaiten SuperCute", "url": "DucHaiten/DucHaitenSuperCute"}, - {"name": "Redshift Diffusion 768", "url": "nitrosocke/redshift-diffusion-768"}, - {"name": "Redshift Diffusion", "url": "nitrosocke/redshift-diffusion"}, -] - -current_model = models[0] 
- -text_gen = gr.Interface.load("spaces/Omnibus/MagicPrompt-Stable-Diffusion_link") - -models2 = [] -for model in models: - model_url = f"models/{model['url']}" - loaded_model = gr.Interface.load(model_url, live=True, preprocess=True) - models2.append(loaded_model) - - -def text_it(inputs, text_gen=text_gen): - return text_gen(inputs) - - -def set_model(current_model_index): - global current_model - current_model = models[current_model_index] - return gr.update(label=f"{current_model['name']}") - - -def send_it(inputs, model_choice): - proc = models2[model_choice] - return proc(inputs) - - -css = """""" - -with gr.Blocks(css=css) as myface: - gr.HTML( - """ - - - - - - - - - - - - - - - -""" - ) - - with gr.Row(): - with gr.Row(): - input_text = gr.Textbox(label="Prompt idea", lines=1) - # Model selection dropdown - model_name1 = gr.Dropdown( - label="Choose Model", - choices=[m["name"] for m in models], - type="index", - value=current_model["name"], - interactive=True, - ) - with gr.Row(): - see_prompts = gr.Button("Generate Prompts") - run = gr.Button("Generate Images", variant="primary") - with gr.Tab("Main"): - with gr.Row(): - output1 = gr.Image(label=f"{current_model['name']}") - output2 = gr.Image(label=f"{current_model['name']}") - output3 = gr.Image(label=f"{current_model['name']}") - output4 = gr.Image(label=f"{current_model['name']}") - with gr.Row(): - magic1 = gr.Textbox(lines=4) - magic2 = gr.Textbox(lines=4) - magic3 = gr.Textbox(lines=4) - magic4 = gr.Textbox(lines=4) - - with gr.Row(): - output5 = gr.Image(label=f"{current_model['name']}") - output6 = gr.Image(label=f"{current_model['name']}") - output7 = gr.Image(label=f"{current_model['name']}") - output8 = gr.Image(label=f"{current_model['name']}") - with gr.Row(): - magic5 = gr.Textbox(lines=4) - magic6 = gr.Textbox(lines=4) - magic7 = gr.Textbox(lines=4) - magic8 = gr.Textbox(lines=4) - - model_name1.change(set_model, inputs=model_name1, outputs=[output1, output2, output3, output4, output5, output6, output7, output8]) - - run.click(send_it, inputs=[magic1, model_name1], outputs=[output1]) - run.click(send_it, inputs=[magic2, model_name1], outputs=[output2]) - run.click(send_it, inputs=[magic3, model_name1], outputs=[output3]) - run.click(send_it, inputs=[magic4, model_name1], outputs=[output4]) - run.click(send_it, inputs=[magic5, model_name1], outputs=[output5]) - run.click(send_it, inputs=[magic6, model_name1], outputs=[output6]) - run.click(send_it, inputs=[magic7, model_name1], outputs=[output7]) - run.click(send_it, inputs=[magic8, model_name1], outputs=[output8]) - - see_prompts.click(text_it, inputs=[input_text], outputs=[magic1]) - see_prompts.click(text_it, inputs=[input_text], outputs=[magic2]) - see_prompts.click(text_it, inputs=[input_text], outputs=[magic3]) - see_prompts.click(text_it, inputs=[input_text], outputs=[magic4]) - see_prompts.click(text_it, inputs=[input_text], outputs=[magic5]) - see_prompts.click(text_it, inputs=[input_text], outputs=[magic6]) - see_prompts.click(text_it, inputs=[input_text], outputs=[magic7]) - see_prompts.click(text_it, inputs=[input_text], outputs=[magic8]) - -myface.queue(concurrency_count=200) -myface.launch(inline=True, show_api=False, max_threads=400) \ No newline at end of file diff --git a/spaces/nomic-ai/fnlp_moss-002-sft-data/index.html b/spaces/nomic-ai/fnlp_moss-002-sft-data/index.html deleted file mode 100644 index 03ff7b5d4fe127eefd125b441f02f64f86d7945d..0000000000000000000000000000000000000000 --- a/spaces/nomic-ai/fnlp_moss-002-sft-data/index.html +++ 
/dev/null @@ -1,42 +0,0 @@ - - - - fnlp/moss-002-sft-data - - - - -
          - -
          - - - \ No newline at end of file diff --git a/spaces/ntcwai/prompt-engine/app.py b/spaces/ntcwai/prompt-engine/app.py deleted file mode 100644 index cc3a3197450ab48fbb3c0f742a693d5053396d9e..0000000000000000000000000000000000000000 --- a/spaces/ntcwai/prompt-engine/app.py +++ /dev/null @@ -1,34 +0,0 @@ -import gradio as grad -import torch -from transformers import AutoModelForCausalLM, AutoTokenizer - -def load_prompter(): - prompter_model = AutoModelForCausalLM.from_pretrained("microsoft/Promptist") - tokenizer = AutoTokenizer.from_pretrained("gpt2") - tokenizer.pad_token = tokenizer.eos_token - tokenizer.padding_side = "left" - return prompter_model, tokenizer - -prompter_model, prompter_tokenizer = load_prompter() - -def generate(plain_text): - input_ids = prompter_tokenizer(plain_text.strip()+" Rephrase:", return_tensors="pt").input_ids - eos_id = prompter_tokenizer.eos_token_id - outputs = prompter_model.generate(input_ids, do_sample=False, max_new_tokens=75, num_beams=8, num_return_sequences=8, eos_token_id=eos_id, pad_token_id=eos_id, length_penalty=-1.0) - output_texts = prompter_tokenizer.batch_decode(outputs, skip_special_tokens=True) - res = output_texts[0].replace(plain_text+" Rephrase:", "").strip() - return res - -txt = grad.Textbox(lines=1, label="Initial Text", placeholder="Input Prompt") -out = grad.Textbox(lines=1, label="Optimized Prompt") -examples = ["A rabbit is wearing a space suit", "Several railroad tracks with one train passing by", "The roof is wet from the rain", "Cats dancing in a space club"] - -grad.Interface(fn=generate, - inputs=txt, - outputs=out, - title="Promptist Demo", - description="Promptist is a prompt interface for Stable Diffusion v1-4 (https://huggingface.co/CompVis/stable-diffusion-v1-4) that optimizes user input into model-preferred prompts. The online demo at Hugging Face Spaces is using CPU, so slow generation speed would be expected. Please load the model locally with GPUs for faster generation.", - examples=examples, - allow_flagging='never', - cache_examples=False, - theme="default").launch(enable_queue=True, debug=True) \ No newline at end of file diff --git a/spaces/ntt123/vietnam-male-voice-wavegru-tts/sparse_matmul/compute/gru_gates.h b/spaces/ntt123/vietnam-male-voice-wavegru-tts/sparse_matmul/compute/gru_gates.h deleted file mode 100644 index 7b8cd489f5c6ef42de262d54727f99c5f9020b82..0000000000000000000000000000000000000000 --- a/spaces/ntt123/vietnam-male-voice-wavegru-tts/sparse_matmul/compute/gru_gates.h +++ /dev/null @@ -1,214 +0,0 @@ -/* - * Copyright 2021 Google LLC - * - * Licensed under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. 
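The Promptist demo above wraps `generate()` in a Gradio UI; the helper appends `" Rephrase:"` to the user text, decodes an 8-beam continuation of up to 75 new tokens, and strips the prefix so only the optimized prompt remains. Called directly, outside the UI, a minimal usage sketch (the exact rephrasing depends on the model) is:

```python
# Minimal usage sketch for the generate() helper defined in the Promptist app above.
if __name__ == "__main__":
    plain = "A rabbit is wearing a space suit"   # one of the demo's example prompts
    optimized = generate(plain)                  # beam search: 8 beams, up to 75 new tokens
    print("input:    ", plain)
    print("optimized:", optimized)
```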
- */ - -#ifndef LYRA_CODEC_SPARSE_MATMUL_COMPUTE_GRU_GATES_H_ -#define LYRA_CODEC_SPARSE_MATMUL_COMPUTE_GRU_GATES_H_ - -#include -#include - -// IWYU pragma: begin_exports -#include "sparse_matmul/compute/ar_inputs.h" -#include "sparse_matmul/compute/gru_gates_arm.h" -#include "sparse_matmul/compute/gru_gates_avx_fixed.h" -#include "sparse_matmul/compute/gru_gates_generic.h" -#include "sparse_matmul/compute/matmul.h" -#include "sparse_matmul/numerics/fixed_types.h" -#include "sparse_matmul/numerics/type_utils.h" -#include "sparse_matmul/vector/cache_aligned_vector.h" -// IWYU pragma: end_exports - -namespace csrblocksparse { - -// The master template is really a catch-all for the unimplemented cases to -// run the generics. -template -class GruGates : public MatmulBase { - public: - using SampleWeightType = float; - static constexpr int kSIMDWidth = kGenericSIMDWidth; - - // Generic GRU function covers all uses for WaveRNN-like architectures and - // conditioning. - // Controlled by template parameters thus: - // - |kInputsMode| == |k0ARInputs|: There are no autoregressive inputs so - // |ar_sample0|, |ar_sample1|, |ar_sample2|, |ar_01_weights|, - // |ar_2_weights| are ignored. - // - |kInputsMode| == |k2ARInputs|: |ar_sample0|, |ar_sample1| are multiplied - // by |ar_01_weights| and added to the (conditioning) input. - // - |kInputsMode| == |k3ARInputs|: |ar_sample2| is multiplied by - // |ar_2_weights| and added to the other two |ar_inputs| (and added to the - // conditioning input). - // - If |kSplitGates| is true: The |*gru_recurrent_other_ptr| is secondary - // recurrent input that must be added to |*gru_recurrent_ptr|. - // - |num_replicas| determines the number of duplicates of the output to be - // written, separated by |replica_stride|. - // - |start|, |end| are |rows| in [0, |state_size|] to be processed by this - // thread. - // - // Previous state is read from |*gru_state_ptr| and the new state is written - // to *(|gru_state_ptr| + i * |replica_stride| for i in [0, |num_replicas|)). - template - void GruWithARInput(int start, int end, int state_size, - const InputType* gru_recurrent_ptr, - const InputType* input_ptr, GRUStateType* gru_state_ptr, - const SampleType* ar_sample0 = nullptr, - const SampleType* ar_sample1 = nullptr, - const SampleWeightType* ar_01_weights = nullptr, - int num_replicas = 1, int replica_stride = 0, - const SampleType* ar_sample2 = nullptr, - const SampleWeightType* ar_2_weights = nullptr, - const InputType* gru_recurrent_other_ptr = nullptr) { - CHECK_EQ(num_replicas, 1) << "Generic code should always have 1 replica"; - GoThroughGates( - start, end, ar_01_weights, gru_recurrent_ptr, gru_recurrent_other_ptr, - input_ptr, gru_state_ptr, ar_2_weights, state_size, ar_sample0, - ar_sample1, ar_sample2); - } - - // No AR inputs, no split gates, no batching, no replicated outputs. - // TODO(b/188702959): Redirect conditioning GRU here, removing code from - // gru_layer.h. - // Copy to specializations. - void PlainGru(int start, int end, int state_size, - const InputType* gru_recurrent_ptr, const InputType* input_ptr, - GRUStateType* gru_state_ptr) { - GruWithARInput( - start, end, state_size, gru_recurrent_ptr, input_ptr, gru_state_ptr); - } -}; - -#if defined __ARM_NEON || defined __aarch64__ -// Partial specialization for float. -template <> -class GruGates : public MatmulBase { - public: - static constexpr int kSIMDWidth = kNeonSIMDWidth; - - // Generic GRU function covers all uses for WaveRNN-like architectures and - // conditioning. 
- template - void GruWithARInput(int start, int end, int state_size, - const float* gru_recurrent_data, const float* input_data, - float* gru_state_data, const float* ar_sample0 = nullptr, - const float* ar_sample1 = nullptr, - const float* ar_01_weights = nullptr, - int num_replicas = 1, int replica_stride = 0, - const float* ar_sample2 = nullptr, - const float* ar_2_weights = nullptr, - const float* gru_recurrent_other_data = nullptr) { - DCHECK_EQ(num_replicas, 1) << "ARM code should always have 1 replica"; - GoThroughGatesFloat( - start, end, ar_01_weights, gru_recurrent_data, gru_recurrent_other_data, - input_data, gru_state_data, ar_2_weights, state_size, ar_sample0, - ar_sample1, ar_sample2); - } -}; -#endif // defined __ARM_NEON || defined __aarch64__ - -// Partial specialization for fixed types. The sample weights are always float -// whatever the fixed type of the other weights. -template -class GruGates, fixed32, - fixed16> : public MatmulBase { - public: -#if defined __ARM_NEON || defined __aarch64__ - static constexpr int kSIMDWidth = kNeonSIMDWidth; -#elif defined __AVX2__ - static constexpr int kSIMDWidth = kAVX2SIMDWidth * 2; -#else // Generic case. - static constexpr int kSIMDWidth = kGenericSIMDWidth; -#endif // __ARM_NEON || defined __aarch64__ / __AVX2__ - - using GRUStateType = fixed16; - using InputType = fixed32; - using SampleType = fixed16; - using SampleWeightType = float; - static constexpr int kInputMantissaBits = InputType::kMantissaBits; - static constexpr int kSampleMantissaBits = SampleType::kMantissaBits; - static constexpr int kStateMantissaBits = GRUStateType::kMantissaBits; - // Generic GRU function covers all uses for WaveRNN-like architectures and - // conditioning. - template - void GruWithARInput(int start, int end, int state_size, - const InputType* gru_recurrent_data, - const InputType* input_data, GRUStateType* gru_state_data, - const SampleType* ar_sample0 = nullptr, - const SampleType* ar_sample1 = nullptr, - const SampleWeightType* ar_01_weights = nullptr, - int num_replicas = 1, int replica_stride = 0, - const SampleType* ar_sample2 = nullptr, - const SampleWeightType* ar_2_weights = nullptr, - const InputType* gru_recurrent_other_data = nullptr) { -#if defined __ARM_NEON || defined __aarch64__ || defined __AVX2__ - const int32_t* gru_recurrent_ptr = - reinterpret_cast(gru_recurrent_data); - const int32_t* gru_recurrent_other_ptr = - reinterpret_cast(gru_recurrent_other_data); - const int32_t* input_ptr = reinterpret_cast(input_data); - int16_t* gru_state_ptr = reinterpret_cast(gru_state_data); -#if defined __AVX2__ - // The samples are fixed16, but we scale them up here and convert to float - // so that the product with the QR weights is always on the same scale as - // InputType, so we don't have to do any more scaling inside. - const float sample_factor = static_cast(1 << kInputMantissaBits); -#else - const float sample_factor = 1.0f; -#endif - // AR sample 0 and 1 are packed into a pair because the QR weights are - // formatted with the weights interleaved for sample 0 and 1. 
- std::pair ar_sample01; - float ar_sample2_float = 0.0f; - if (kInputsMode == ARInputsMode::k2ARInputs || - kInputsMode == ARInputsMode::k3ARInputs) { - ar_sample01 = {static_cast(*ar_sample0) * sample_factor, - static_cast(*ar_sample1) * sample_factor}; - if (kInputsMode == ARInputsMode::k3ARInputs) { - ar_sample2_float = static_cast(*ar_sample2) * sample_factor; - } - } -#if defined __AVX2__ - CHECK(using_avx2_) << "Compiled for AVX2, but cpu flag not set!"; - GruGatesAVXFixed( - start, end, state_size, gru_recurrent_ptr, input_ptr, &ar_sample01, - ar_01_weights, num_replicas, replica_stride, &ar_sample2_float, - ar_2_weights, gru_recurrent_other_ptr, gru_state_ptr); -#else // ARM. - DCHECK_EQ(num_replicas, 1) << "ARM code should always have 1 replica"; - GoThroughGatesFixed( - start, end, ar_01_weights, gru_recurrent_ptr, gru_recurrent_other_ptr, - input_ptr, gru_state_ptr, ar_2_weights, state_size, &ar_sample01, - &ar_sample2_float); -#endif // __AVX2__ / ARM. -#else // Generic case. - CHECK_EQ(num_replicas, 1) << "Generic code should always have 1 replica"; - GoThroughGates( - start, end, ar_01_weights, gru_recurrent_data, gru_recurrent_other_data, - input_data, gru_state_data, ar_2_weights, state_size, ar_sample0, - ar_sample1, ar_sample2); -#endif // __ARM_NEON || defined __aarch64__ / __AVX2__ - } -}; - -} // namespace csrblocksparse - -#endif // LYRA_CODEC_SPARSE_MATMUL_COMPUTE_GRU_GATES_H_ diff --git a/spaces/oconnoob/audio-intelligence-dashboard/app/css_components/file.css b/spaces/oconnoob/audio-intelligence-dashboard/app/css_components/file.css deleted file mode 100644 index 67382f3f6189d6d2db36ed765de4131feee25ece..0000000000000000000000000000000000000000 --- a/spaces/oconnoob/audio-intelligence-dashboard/app/css_components/file.css +++ /dev/null @@ -1,81 +0,0 @@ -body { - font-family: -apple-system, BlinkMacSystemFont, "Segoe UI", Roboto, Helvetica, - Arial, sans-serif, "Apple Color Emoji", "Segoe UI Emoji", "Segoe UI Symbol"; -} - -.logo { - width: 180px; -} - -.title { - font-weight: 600; - text-align: left; - color: black; - font-size: 18px; -} - -.alert, -#component-2, -#component-3 { - padding: 24px; - color: black; - background-color: #f4f8fb; - border: 1px solid #d6dce7; - border-radius: 8px; - box-shadow: 0px 6px 15px rgb(0 0 0 / 2%), 0px 2px 5px rgb(0 0 0 / 4%); -} - -ol { - list-style: disc; -} - -.alert__info { - background-color: #f4f8fb; - color: #323552; -} - -.alert__warning { - background-color: #fffae5; - color: #917115; - border: 1px solid #e4cf2b; -} - -#pw { - -webkit-text-security: disc; -} - -/* unvisited link */ -a:link { - color: #6b2bd6; -} - -/* visited link */ -a:visited { - color: #6b2bd6; -} - -/* mouse over link */ -a:hover { - color: #6b2bd6; -} - -/* selected link */ -a:active { - color: #6b2bd6; -} - -li { - margin-left: 1em; -} - -.apikey { -} - -.entity-list { - color: #6b2bd6; - font-size: 16px -} - -.entity-elt { - color: black -} \ No newline at end of file diff --git a/spaces/odettecantswim/vits-models-genshin/text/ngu_dialect.py b/spaces/odettecantswim/vits-models-genshin/text/ngu_dialect.py deleted file mode 100644 index ce3e12bbf0469426872eed5f681985d3e1be9b26..0000000000000000000000000000000000000000 --- a/spaces/odettecantswim/vits-models-genshin/text/ngu_dialect.py +++ /dev/null @@ -1,30 +0,0 @@ -import re -import opencc - - -dialects = {'SZ': 'suzhou', 'WX': 'wuxi', 'CZ': 'changzhou', 'HZ': 'hangzhou', - 'SX': 'shaoxing', 'NB': 'ningbo', 'JJ': 'jingjiang', 'YX': 'yixing', - 'JD': 'jiading', 'ZR': 'zhenru', 'PH': 'pinghu', 'TX': 
'tongxiang', - 'JS': 'jiashan', 'HN': 'xiashi', 'LP': 'linping', 'XS': 'xiaoshan', - 'FY': 'fuyang', 'RA': 'ruao', 'CX': 'cixi', 'SM': 'sanmen', - 'TT': 'tiantai', 'WZ': 'wenzhou', 'SC': 'suichang', 'YB': 'youbu'} - -converters = {} - -for dialect in dialects.values(): - try: - converters[dialect] = opencc.OpenCC(dialect) - except: - pass - - -def ngu_dialect_to_ipa(text, dialect): - dialect = dialects[dialect] - text = converters[dialect].convert(text).replace('-','').replace('$',' ') - text = re.sub(r'[、;:]', ',', text) - text = re.sub(r'\s*,\s*', ', ', text) - text = re.sub(r'\s*。\s*', '. ', text) - text = re.sub(r'\s*?\s*', '? ', text) - text = re.sub(r'\s*!\s*', '! ', text) - text = re.sub(r'\s*$', '', text) - return text diff --git a/spaces/oguzakif/video-object-remover/SiamMask/experiments/siamrpn_resnet/resnet.py b/spaces/oguzakif/video-object-remover/SiamMask/experiments/siamrpn_resnet/resnet.py deleted file mode 100644 index 4ca45fc4b5593460ab2943c6eff71dba1c07fb17..0000000000000000000000000000000000000000 --- a/spaces/oguzakif/video-object-remover/SiamMask/experiments/siamrpn_resnet/resnet.py +++ /dev/null @@ -1,359 +0,0 @@ -import torch.nn as nn -import torch -from torch.autograd import Variable -import math -import torch.utils.model_zoo as model_zoo -from models.features import Features -from utils.log_helper import log_once - - -__all__ = ['ResNet', 'resnet18', 'resnet34', 'resnet50', 'resnet101', - 'resnet152'] - - -model_urls = { - 'resnet18': 'https://download.pytorch.org/models/resnet18-5c106cde.pth', - 'resnet34': 'https://download.pytorch.org/models/resnet34-333f7ec4.pth', - 'resnet50': 'https://download.pytorch.org/models/resnet50-19c8e357.pth', - 'resnet101': 'https://download.pytorch.org/models/resnet101-5d3b4d8f.pth', - 'resnet152': 'https://download.pytorch.org/models/resnet152-b121ed2d.pth', -} - - -def conv3x3(in_planes, out_planes, stride=1): - "3x3 convolution with padding" - return nn.Conv2d(in_planes, out_planes, kernel_size=3, stride=stride, - padding=1, bias=False) - - -class BasicBlock(nn.Module): - expansion = 1 - - def __init__(self, inplanes, planes, stride=1, downsample=None): - super(BasicBlock, self).__init__() - self.conv1 = conv3x3(inplanes, planes, stride) - self.bn1 = nn.BatchNorm2d(planes) - self.relu = nn.ReLU(inplace=True) - self.conv2 = conv3x3(planes, planes) - self.bn2 = nn.BatchNorm2d(planes) - self.downsample = downsample - self.stride = stride - - def forward(self, x): - residual = x - - out = self.conv1(x) - out = self.bn1(out) - out = self.relu(out) - - out = self.conv2(out) - out = self.bn2(out) - - if self.downsample is not None: - residual = self.downsample(x) - - out += residual - out = self.relu(out) - - return out - - -class Bottleneck(Features): - expansion = 4 - - def __init__(self, inplanes, planes, stride=1, downsample=None, dilation=1): - super(Bottleneck, self).__init__() - self.conv1 = nn.Conv2d(inplanes, planes, kernel_size=1, bias=False) - self.bn1 = nn.BatchNorm2d(planes) - # padding = (2 - stride) + (dilation // 2 - 1) - padding = 2 - stride - assert stride==1 or dilation==1, "stride and dilation must have one equals to zero at least" - if dilation > 1: - padding = dilation - self.conv2 = nn.Conv2d(planes, planes, kernel_size=3, stride=stride, - padding=padding, bias=False, dilation=dilation) - self.bn2 = nn.BatchNorm2d(planes) - self.conv3 = nn.Conv2d(planes, planes * 4, kernel_size=1, bias=False) - self.bn3 = nn.BatchNorm2d(planes * 4) - self.relu = nn.ReLU(inplace=True) - self.downsample = downsample - self.stride = 
stride - - def forward(self, x): - residual = x - - out = self.conv1(x) - out = self.bn1(out) - out = self.relu(out) - - out = self.conv2(out) - out = self.bn2(out) - out = self.relu(out) - - out = self.conv3(out) - out = self.bn3(out) - - if self.downsample is not None: - residual = self.downsample(x) - - if out.size() != residual.size(): - print(out.size(), residual.size()) - out += residual - - out = self.relu(out) - - return out - - -class Bottleneck_nop(nn.Module): - expansion = 4 - - def __init__(self, inplanes, planes, stride=1, downsample=None): - super(Bottleneck_nop, self).__init__() - self.conv1 = nn.Conv2d(inplanes, planes, kernel_size=1, bias=False) - self.bn1 = nn.BatchNorm2d(planes) - self.conv2 = nn.Conv2d(planes, planes, kernel_size=3, stride=stride, - padding=0, bias=False) - self.bn2 = nn.BatchNorm2d(planes) - self.conv3 = nn.Conv2d(planes, planes * 4, kernel_size=1, bias=False) - self.bn3 = nn.BatchNorm2d(planes * 4) - self.relu = nn.ReLU(inplace=True) - self.downsample = downsample - self.stride = stride - - def forward(self, x): - residual = x - - out = self.conv1(x) - out = self.bn1(out) - out = self.relu(out) - - out = self.conv2(out) - out = self.bn2(out) - out = self.relu(out) - - out = self.conv3(out) - out = self.bn3(out) - - if self.downsample is not None: - residual = self.downsample(x) - - s = residual.size(3) - residual = residual[:, :, 1:s-1, 1:s-1] - - out += residual - out = self.relu(out) - - return out - - -class ResNet(nn.Module): - - def __init__(self, block, layers, layer4=False, layer3=False): - self.inplanes = 64 - super(ResNet, self).__init__() - self.conv1 = nn.Conv2d(3, 64, kernel_size=7, stride=2, padding=0, # 3 - bias=False) - self.bn1 = nn.BatchNorm2d(64) - self.relu = nn.ReLU(inplace=True) - self.maxpool = nn.MaxPool2d(kernel_size=3, stride=2, padding=1) - self.layer1 = self._make_layer(block, 64, layers[0]) - self.layer2 = self._make_layer(block, 128, layers[1], stride=2) # 31x31, 15x15 - - self.feature_size = 128 * block.expansion - - if layer3: - self.layer3 = self._make_layer(block, 256, layers[2], stride=1, dilation=2) # 15x15, 7x7 - self.feature_size = (256 + 128) * block.expansion - else: - self.layer3 = lambda x:x # identity - - if layer4: - self.layer4 = self._make_layer(block, 512, layers[3], stride=1, dilation=4) # 7x7, 3x3 - self.feature_size = 512 * block.expansion - else: - self.layer4 = lambda x:x # identity - - for m in self.modules(): - if isinstance(m, nn.Conv2d): - n = m.kernel_size[0] * m.kernel_size[1] * m.out_channels - m.weight.data.normal_(0, math.sqrt(2. 
/ n)) - elif isinstance(m, nn.BatchNorm2d): - m.weight.data.fill_(1) - m.bias.data.zero_() - - def _make_layer(self, block, planes, blocks, stride=1, dilation=1): - downsample = None - dd = dilation - if stride != 1 or self.inplanes != planes * block.expansion: - if stride == 1 and dilation == 1: - downsample = nn.Sequential( - nn.Conv2d(self.inplanes, planes * block.expansion, - kernel_size=1, stride=stride, bias=False), - nn.BatchNorm2d(planes * block.expansion), - ) - else: - if dilation > 1: - dd = dilation // 2 - padding = dd - else: - dd = 1 - padding = 0 - downsample = nn.Sequential( - nn.Conv2d(self.inplanes, planes * block.expansion, - kernel_size=3, stride=stride, bias=False, - padding=padding, dilation=dd), - nn.BatchNorm2d(planes * block.expansion), - ) - - layers = [] - # layers.append(block(self.inplanes, planes, stride, downsample, dilation=dilation)) - layers.append(block(self.inplanes, planes, stride, downsample, dilation=dd)) - self.inplanes = planes * block.expansion - for i in range(1, blocks): - layers.append(block(self.inplanes, planes, dilation=dilation)) - - return nn.Sequential(*layers) - - def forward(self, x): - x = self.conv1(x) - x = self.bn1(x) - x = self.relu(x) - # print x.size() - x = self.maxpool(x) - # print x.size() - - p1 = self.layer1(x) - p2 = self.layer2(p1) - p3 = self.layer3(p2) - # p3 = torch.cat([p2, p3], 1) - - log_once("p3 {}".format(p3.size())) - p4 = self.layer4(p3) - - return p2, p3, p4 - - -class ResAdjust(nn.Module): - def __init__(self, - block=Bottleneck, - out_channels=256, - adjust_number=1, - fuse_layers=[2,3,4]): - super(ResAdjust, self).__init__() - self.fuse_layers = set(fuse_layers) - - if 2 in self.fuse_layers: - self.layer2 = self._make_layer(block, 128, 1, out_channels, adjust_number) - if 3 in self.fuse_layers: - self.layer3 = self._make_layer(block, 256, 2, out_channels, adjust_number) - if 4 in self.fuse_layers: - self.layer4 = self._make_layer(block, 512, 4, out_channels, adjust_number) - - self.feature_size = out_channels * len(self.fuse_layers) - - def _make_layer(self, block, plances, dilation, out, number=1): - - layers = [] - - for _ in range(number): - layer = block(plances * block.expansion, plances, dilation=dilation) - layers.append(layer) - - downsample = nn.Sequential( - nn.Conv2d(plances * block.expansion, out, kernel_size=3, padding=1, bias=False), - nn.BatchNorm2d(out) - ) - layers.append(downsample) - - return nn.Sequential(*layers) - - def forward(self, p2, p3, p4): - - outputs = [] - - if 2 in self.fuse_layers: - outputs.append(self.layer2(p2)) - if 3 in self.fuse_layers: - outputs.append(self.layer3(p3)) - if 4 in self.fuse_layers: - outputs.append(self.layer4(p4)) - # return torch.cat(outputs, 1) - return outputs - - -def resnet18(pretrained=False, **kwargs): - """Constructs a ResNet-18 model. - - Args: - pretrained (bool): If True, returns a model pre-trained on ImageNet - """ - model = ResNet(BasicBlock, [2, 2, 2, 2], **kwargs) - if pretrained: - model.load_state_dict(model_zoo.load_url(model_urls['resnet18'])) - return model - - -def resnet34(pretrained=False, **kwargs): - """Constructs a ResNet-34 model. - - Args: - pretrained (bool): If True, returns a model pre-trained on ImageNet - """ - model = ResNet(BasicBlock, [3, 4, 6, 3], **kwargs) - if pretrained: - model.load_state_dict(model_zoo.load_url(model_urls['resnet34'])) - return model - - -def resnet50(pretrained=False, **kwargs): - """Constructs a ResNet-50 model. 
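This modified ResNet backbone keeps stride 1 (compensated with dilation) in the optional stages 3 and 4 so the feature map stays large enough for SiamRPN-style tracking, and `forward` returns the multi-level features `(p2, p3, p4)`. A small sketch of how it might be used with the 127×127 exemplar patch from the `__main__` check further below, assuming `layer3=True` to enable the dilated third stage:

```python
# Sketch: inspecting the multi-level features of the backbone defined above.
import torch

backbone = resnet50(pretrained=False, layer3=True)   # layer4 stays an identity here
backbone.eval()

exemplar = torch.randn(1, 3, 127, 127)               # template patch
with torch.no_grad():
    p2, p3, p4 = backbone(exemplar)
print(p2.shape, p3.shape)   # stage 3 keeps the 15x15 resolution thanks to dilation; p4 is p3
```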
- - Args: - pretrained (bool): If True, returns a model pre-trained on ImageNet - """ - model = ResNet(Bottleneck, [3, 4, 6, 3], **kwargs) - if pretrained: - model.load_state_dict(model_zoo.load_url(model_urls['resnet50'])) - return model - - -def resnet101(pretrained=False, **kwargs): - """Constructs a ResNet-101 model. - - Args: - pretrained (bool): If True, returns a model pre-trained on ImageNet - """ - model = ResNet(Bottleneck, [3, 4, 23, 3], **kwargs) - if pretrained: - model.load_state_dict(model_zoo.load_url(model_urls['resnet101'])) - return model - - -def resnet152(pretrained=False, **kwargs): - """Constructs a ResNet-152 model. - - Args: - pretrained (bool): If True, returns a model pre-trained on ImageNet - """ - model = ResNet(Bottleneck, [3, 8, 36, 3], **kwargs) - if pretrained: - model.load_state_dict(model_zoo.load_url(model_urls['resnet152'])) - return model - - -if __name__ == '__main__': - net = resnet50() - print(net) - net = net.cuda() - - var = torch.FloatTensor(1,3,127,127).cuda() - var = Variable(var) - template = net(var) - print('Examplar Size: {}'.format(template.shape)) - - var = torch.FloatTensor(1,3,255,255).cuda() - var = Variable(var) - - net(var) - diff --git a/spaces/osiria/bert-italian-cased-ner/app.py b/spaces/osiria/bert-italian-cased-ner/app.py deleted file mode 100644 index cc6db94fd2e10eb23be7ee1221b94d6380133fef..0000000000000000000000000000000000000000 --- a/spaces/osiria/bert-italian-cased-ner/app.py +++ /dev/null @@ -1,197 +0,0 @@ -import os -import gradio as gr -import subprocess -import sys - -def install(package): - subprocess.check_call([sys.executable, "-m", "pip", "install", package]) - -install("numpy") -install("torch") -install("transformers") -install("unidecode") - -import numpy as np -import torch -from transformers import AutoTokenizer -from transformers import BertForTokenClassification -from collections import Counter -from unidecode import unidecode -import string -import re - -tokenizer = AutoTokenizer.from_pretrained("osiria/bert-italian-cased-ner") -model = BertForTokenClassification.from_pretrained("osiria/bert-italian-cased-ner", num_labels = 5) -device = torch.device("cpu") -model = model.to(device) -model.eval() - -from transformers import pipeline -ner = pipeline('ner', model=model, tokenizer=tokenizer, device=-1) - - -header = '''-------------------------------------------------------------------------------------------------- - -
          - - - D -    E -    M - O - - -
          -
          -''' - - -maps = {"O": "NONE", "PER": "PER", "LOC": "LOC", "ORG": "ORG", "MISC": "MISC", "DATE": "DATE"} -reg_month = "(?:gennaio|febbraio|marzo|aprile|maggio|giugno|luglio|agosto|settembre|ottobre|novembre|dicembre|january|february|march|april|may|june|july|august|september|october|november|december)" -reg_date = "(?:\d{1,2}\°{0,1}|primo|\d{1,2}\º{0,1})" + " " + reg_month + " " + "\d{4}|" -reg_date = reg_date + reg_month + " " + "\d{4}|" -reg_date = reg_date + "\d{1,2}" + " " + reg_month -reg_date = reg_date + "\d{1,2}" + "(?:\/|\.)\d{1,2}(?:\/|\.)" + "\d{4}|" -reg_date = reg_date + "(?<=dal )\d{4}|(?<=al )\d{4}|(?<=nel )\d{4}|(?<=anno )\d{4}|(?<=del )\d{4}|" -reg_date = reg_date + "\d{1,5} a\.c\.|\d{1,5} d\.c\." -map_punct = {"’": "'", "«": '"', "»": '"', "”": '"', "“": '"', "–": "-", "$": ""} -unk_tok = 9005 - -merge_th_1 = 0.8 -merge_th_2 = 0.4 -min_th = 0.5 - -def extract(text): - - text = text.strip() - for mp in map_punct: - text = text.replace(mp, map_punct[mp]) - text = re.sub("\[\d+\]", "", text) - - warn_flag = False - - res_total = [] - out_text = "" - - for p_text in text.split("\n"): - - if p_text: - - toks = tokenizer.encode(p_text) - if unk_tok in toks: - warn_flag = True - - res_orig = ner(p_text, aggregation_strategy = "first") - res_orig = [el for r, el in enumerate(res_orig) if len(el["word"].strip()) > 1] - res = [] - - for r, ent in enumerate(res_orig): - if r > 0 and ent["score"] < merge_th_1 and ent["start"] <= res[-1]["end"] + 1 and ent["score"] <= res[-1]["score"]: - res[-1]["word"] = res[-1]["word"] + " " + ent["word"] - res[-1]["score"] = merge_th_1*(res[-1]["score"] > merge_th_2) - res[-1]["end"] = ent["end"] - elif r < len(res_orig) - 1 and ent["score"] < merge_th_1 and res_orig[r+1]["start"] <= ent["end"] + 1 and res_orig[r+1]["score"] > ent["score"]: - res_orig[r+1]["word"] = ent["word"] + " " + res_orig[r+1]["word"] - res_orig[r+1]["score"] = merge_th_1*(res_orig[r+1]["score"] > merge_th_2) - res_orig[r+1]["start"] = ent["start"] - else: - res.append(ent) - - res = [el for r, el in enumerate(res) if el["score"] >= min_th] - - dates = [{"entity_group": "DATE", "score": 1.0, "word": p_text[el.span()[0]:el.span()[1]], "start": el.span()[0], "end": el.span()[1]} for el in re.finditer(reg_date, p_text, flags = re.IGNORECASE)] - res.extend(dates) - res = sorted(res, key = lambda t: t["start"]) - res_total.extend(res) - - chunks = [("", "", 0, "NONE")] - - for el in res: - if maps[el["entity_group"]] != "NONE": - tag = maps[el["entity_group"]] - chunks.append((p_text[el["start"]: el["end"]], p_text[chunks[-1][2]:el["end"]], el["end"], tag)) - - if chunks[-1][2] < len(p_text): - chunks.append(("END", p_text[chunks[-1][2]:], -1, "NONE")) - chunks = chunks[1:] - - n_text = [] - - for i, chunk in enumerate(chunks): - - rep = chunk[0] - - if chunk[3] == "PER": - rep = 'ᴘᴇʀ ' + chunk[0] + '' - elif chunk[3] == "LOC": - rep = 'ʟᴏᴄ ' + chunk[0] + '' - elif chunk[3] == "ORG": - rep = 'ᴏʀɢ ' + chunk[0] + '' - elif chunk[3] == "MISC": - rep = 'ᴍɪsᴄ ' + chunk[0] + '' - elif chunk[3] == "DATE": - rep = 'ᴅᴀᴛᴇ ' + chunk[0] + '' - - n_text.append(chunk[1].replace(chunk[0], rep)) - - n_text = "".join(n_text) - if out_text: - out_text = out_text + "
          " + n_text - else: - out_text = n_text - - - tags = [el["word"] for el in res_total if el["entity_group"] not in ['DATE', None]] - cnt = Counter(tags) - tags = sorted(list(set([el for el in tags if cnt[el] > 1])), key = lambda t: cnt[t]*np.exp(-tags.index(t)))[::-1] - tags = [" ".join(re.sub("[^A-Za-z0-9\s]", "", unidecode(tag)).split()) for tag in tags] - tags = ['ᴛᴀɢ ' + el + '' for el in tags] - tags = " ".join(tags) - - if tags: - out_text = out_text + "

          Tags: " + tags - - if warn_flag: - out_text = out_text + "

          Warning ⚠️: Unknown tokens detected in text. The model might behave erratically" - - return out_text - - - -init_text = '''L'Agenzia spaziale europea, nota internazionalmente con l'acronimo ESA dalla denominazione inglese European Space Agency, è un'agenzia internazionale fondata nel 1975 incaricata di coordinare i progetti spaziali di 22 Paesi europei. Il suo quartier generale si trova a Parigi in Francia, con uffici a Mosca, Bruxelles, Washington e Houston. Il personale dell'ESA del 2016 ammontava a 2 200 persone (esclusi sub-appaltatori e le agenzie nazionali) e il budget del 2022 è di 7,15 miliardi di euro. Attualmente il direttore generale dell'agenzia è l'austriaco Josef Aschbacher, il quale ha sostituito il tedesco Johann-Dietrich Wörner il primo marzo 2021. -Lo spazioporto dell'ESA è il Centre Spatial Guyanais a Kourou, nella Guyana francese, un sito scelto, come tutte le basi di lancio, per via della sua vicinanza con l'equatore. Durante gli ultimi anni il lanciatore Ariane 5 ha consentito all'ESA di raggiungere una posizione di primo piano nei lanci commerciali e l'ESA è il principale concorrente della NASA nell'esplorazione spaziale. -Le missioni scientifiche dell'ESA hanno le loro basi al Centro europeo per la ricerca e la tecnologia spaziale (ESTEC) di Noordwijk, nei Paesi Bassi. Il Centro europeo per le operazioni spaziali (ESOC), di Darmstadt in Germania, è responsabile del controllo dei satelliti ESA in orbita. Le responsabilità del Centro europeo per l'osservazione della Terra (ESRIN) di Frascati, in Italia, includono la raccolta, l'archiviazione e la distribuzione di dati satellitari ai partner dell'ESA; oltre a ciò, la struttura agisce come centro di informazione tecnologica per l'intera agenzia. [...] -L'Agenzia Spaziale Italiana (ASI) venne fondata nel 1988 per promuovere, coordinare e condurre le attività spaziali in Italia. Opera in collaborazione con il Ministero dell'università e della ricerca scientifica e coopera in numerosi progetti con entità attive nella ricerca scientifica e nelle attività commerciali legate allo spazio. Internazionalmente l'ASI fornisce la delegazione italiana per l'Agenzia Spaziale Europea e le sue sussidiarie.''' - -init_output = extract(init_text) - - - - -with gr.Blocks(css="footer {visibility: hidden}", theme=gr.themes.Default(text_size="lg", spacing_size="lg")) as interface: - - with gr.Row(): - gr.Markdown(header) - with gr.Row(): - text = gr.Text(label="Extract entities", lines = 10, value = init_text) - with gr.Row(): - with gr.Column(): - button = gr.Button("Extract").style(full_width=False) - with gr.Row(): - with gr.Column(): - entities = gr.Markdown(init_output) - - with gr.Row(): - with gr.Column(): - gr.Markdown("
          The input examples in this demo are extracted from https://it.wikipedia.org
          ") - - button.click(extract, inputs=[text], outputs = [entities]) - - -interface.launch() \ No newline at end of file diff --git a/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/src/diffusers/__init__.py b/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/src/diffusers/__init__.py deleted file mode 100644 index 25db36bb1c7a905f877e5a7e3b872bd5d0096600..0000000000000000000000000000000000000000 --- a/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/src/diffusers/__init__.py +++ /dev/null @@ -1,717 +0,0 @@ -__version__ = "0.22.0.dev0" - -from typing import TYPE_CHECKING - -from .utils import ( - OptionalDependencyNotAvailable, - _LazyModule, - is_flax_available, - is_k_diffusion_available, - is_librosa_available, - is_note_seq_available, - is_onnx_available, - is_scipy_available, - is_torch_available, - is_torchsde_available, - is_transformers_available, -) - - -# Lazy Import based on -# https://github.com/huggingface/transformers/blob/main/src/transformers/__init__.py - -# When adding a new object to this init, please add it to `_import_structure`. The `_import_structure` is a dictionary submodule to list of object names, -# and is used to defer the actual importing for when the objects are requested. -# This way `import diffusers` provides the names in the namespace without actually importing anything (and especially none of the backends). - -_import_structure = { - "configuration_utils": ["ConfigMixin"], - "models": [], - "pipelines": [], - "schedulers": [], - "utils": [ - "OptionalDependencyNotAvailable", - "is_flax_available", - "is_inflect_available", - "is_invisible_watermark_available", - "is_k_diffusion_available", - "is_k_diffusion_version", - "is_librosa_available", - "is_note_seq_available", - "is_onnx_available", - "is_scipy_available", - "is_torch_available", - "is_torchsde_available", - "is_transformers_available", - "is_transformers_version", - "is_unidecode_available", - "logging", - ], -} - -try: - if not is_onnx_available(): - raise OptionalDependencyNotAvailable() -except OptionalDependencyNotAvailable: - from .utils import dummy_onnx_objects # noqa F403 - - _import_structure["utils.dummy_onnx_objects"] = [ - name for name in dir(dummy_onnx_objects) if not name.startswith("_") - ] - -else: - _import_structure["pipelines"].extend(["OnnxRuntimeModel"]) - -try: - if not is_torch_available(): - raise OptionalDependencyNotAvailable() -except OptionalDependencyNotAvailable: - from .utils import dummy_pt_objects # noqa F403 - - _import_structure["utils.dummy_pt_objects"] = [name for name in dir(dummy_pt_objects) if not name.startswith("_")] - -else: - _import_structure["models"].extend( - [ - "AsymmetricAutoencoderKL", - "AutoencoderKL", - "AutoencoderTiny", - "ControlNetModel", - "ModelMixin", - "MultiAdapter", - "PriorTransformer", - "T2IAdapter", - "T5FilmDecoder", - "Transformer2DModel", - "UNet1DModel", - "UNet2DConditionModel", - "UNet2DModel", - "UNet3DConditionModel", - "VQModel", - ] - ) - _import_structure["optimization"] = [ - "get_constant_schedule", - "get_constant_schedule_with_warmup", - "get_cosine_schedule_with_warmup", - "get_cosine_with_hard_restarts_schedule_with_warmup", - "get_linear_schedule_with_warmup", - "get_polynomial_decay_schedule_with_warmup", - "get_scheduler", - ] - - _import_structure["pipelines"].extend( - [ - "AudioPipelineOutput", - "AutoPipelineForImage2Image", - "AutoPipelineForInpainting", - "AutoPipelineForText2Image", - "ConsistencyModelPipeline", - "DanceDiffusionPipeline", - "DDIMPipeline", - "DDPMPipeline", - 
"DiffusionPipeline", - "DiTPipeline", - "ImagePipelineOutput", - "KarrasVePipeline", - "LDMPipeline", - "LDMSuperResolutionPipeline", - "PNDMPipeline", - "RePaintPipeline", - "ScoreSdeVePipeline", - ] - ) - _import_structure["schedulers"].extend( - [ - "CMStochasticIterativeScheduler", - "DDIMInverseScheduler", - "DDIMParallelScheduler", - "DDIMScheduler", - "DDPMParallelScheduler", - "DDPMScheduler", - "DDPMWuerstchenScheduler", - "DEISMultistepScheduler", - "DPMSolverMultistepInverseScheduler", - "DPMSolverMultistepScheduler", - "DPMSolverSinglestepScheduler", - "EulerAncestralDiscreteScheduler", - "EulerDiscreteScheduler", - "HeunDiscreteScheduler", - "IPNDMScheduler", - "KarrasVeScheduler", - "KDPM2AncestralDiscreteScheduler", - "KDPM2DiscreteScheduler", - "PNDMScheduler", - "RePaintScheduler", - "SchedulerMixin", - "ScoreSdeVeScheduler", - "UnCLIPScheduler", - "UniPCMultistepScheduler", - "VQDiffusionScheduler", - ] - ) - _import_structure["training_utils"] = ["EMAModel"] - -try: - if not (is_torch_available() and is_scipy_available()): - raise OptionalDependencyNotAvailable() -except OptionalDependencyNotAvailable: - from .utils import dummy_torch_and_scipy_objects # noqa F403 - - _import_structure["utils.dummy_torch_and_scipy_objects"] = [ - name for name in dir(dummy_torch_and_scipy_objects) if not name.startswith("_") - ] - -else: - _import_structure["schedulers"].extend(["LMSDiscreteScheduler"]) - -try: - if not (is_torch_available() and is_torchsde_available()): - raise OptionalDependencyNotAvailable() -except OptionalDependencyNotAvailable: - from .utils import dummy_torch_and_torchsde_objects # noqa F403 - - _import_structure["utils.dummy_torch_and_torchsde_objects"] = [ - name for name in dir(dummy_torch_and_torchsde_objects) if not name.startswith("_") - ] - -else: - _import_structure["schedulers"].extend(["DPMSolverSDEScheduler"]) - -try: - if not (is_torch_available() and is_transformers_available()): - raise OptionalDependencyNotAvailable() -except OptionalDependencyNotAvailable: - from .utils import dummy_torch_and_transformers_objects # noqa F403 - - _import_structure["utils.dummy_torch_and_transformers_objects"] = [ - name for name in dir(dummy_torch_and_transformers_objects) if not name.startswith("_") - ] - -else: - _import_structure["pipelines"].extend( - [ - "AltDiffusionImg2ImgPipeline", - "AltDiffusionPipeline", - "AudioLDM2Pipeline", - "AudioLDM2ProjectionModel", - "AudioLDM2UNet2DConditionModel", - "AudioLDMPipeline", - "BlipDiffusionControlNetPipeline", - "BlipDiffusionPipeline", - "CLIPImageProjection", - "CycleDiffusionPipeline", - "IFImg2ImgPipeline", - "IFImg2ImgSuperResolutionPipeline", - "IFInpaintingPipeline", - "IFInpaintingSuperResolutionPipeline", - "IFPipeline", - "IFSuperResolutionPipeline", - "ImageTextPipelineOutput", - "KandinskyCombinedPipeline", - "KandinskyImg2ImgCombinedPipeline", - "KandinskyImg2ImgPipeline", - "KandinskyInpaintCombinedPipeline", - "KandinskyInpaintPipeline", - "KandinskyPipeline", - "KandinskyPriorPipeline", - "KandinskyV22CombinedPipeline", - "KandinskyV22ControlnetImg2ImgPipeline", - "KandinskyV22ControlnetPipeline", - "KandinskyV22Img2ImgCombinedPipeline", - "KandinskyV22Img2ImgPipeline", - "KandinskyV22InpaintCombinedPipeline", - "KandinskyV22InpaintPipeline", - "KandinskyV22Pipeline", - "KandinskyV22PriorEmb2EmbPipeline", - "KandinskyV22PriorPipeline", - "LDMTextToImagePipeline", - "MusicLDMPipeline", - "PaintByExamplePipeline", - "SemanticStableDiffusionPipeline", - "ShapEImg2ImgPipeline", - "ShapEPipeline", - 
"StableDiffusionAdapterPipeline", - "StableDiffusionAttendAndExcitePipeline", - "StableDiffusionControlNetImg2ImgPipeline", - "StableDiffusionControlNetInpaintPipeline", - "StableDiffusionControlNetPipeline", - "StableDiffusionDepth2ImgPipeline", - "StableDiffusionDiffEditPipeline", - "StableDiffusionGLIGENPipeline", - "StableDiffusionGLIGENTextImagePipeline", - "StableDiffusionImageVariationPipeline", - "StableDiffusionImg2ImgPipeline", - "StableDiffusionInpaintPipeline", - "StableDiffusionInpaintPipelineLegacy", - "StableDiffusionInstructPix2PixPipeline", - "StableDiffusionLatentUpscalePipeline", - "StableDiffusionLDM3DPipeline", - "StableDiffusionModelEditingPipeline", - "StableDiffusionPanoramaPipeline", - "StableDiffusionParadigmsPipeline", - "StableDiffusionPipeline", - "StableDiffusionPipelineSafe", - "StableDiffusionPix2PixZeroPipeline", - "StableDiffusionSAGPipeline", - "StableDiffusionUpscalePipeline", - "StableDiffusionXLAdapterPipeline", - "StableDiffusionXLControlNetImg2ImgPipeline", - "StableDiffusionXLControlNetInpaintPipeline", - "StableDiffusionXLControlNetPipeline", - "StableDiffusionXLImg2ImgPipeline", - "StableDiffusionXLInpaintPipeline", - "StableDiffusionXLInstructPix2PixPipeline", - "StableDiffusionXLPipeline", - "StableUnCLIPImg2ImgPipeline", - "StableUnCLIPPipeline", - "TextToVideoSDPipeline", - "TextToVideoZeroPipeline", - "UnCLIPImageVariationPipeline", - "UnCLIPPipeline", - "UniDiffuserModel", - "UniDiffuserPipeline", - "UniDiffuserTextDecoder", - "VersatileDiffusionDualGuidedPipeline", - "VersatileDiffusionImageVariationPipeline", - "VersatileDiffusionPipeline", - "VersatileDiffusionTextToImagePipeline", - "VideoToVideoSDPipeline", - "VQDiffusionPipeline", - "WuerstchenCombinedPipeline", - "WuerstchenDecoderPipeline", - "WuerstchenPriorPipeline", - ] - ) - -try: - if not (is_torch_available() and is_transformers_available() and is_k_diffusion_available()): - raise OptionalDependencyNotAvailable() -except OptionalDependencyNotAvailable: - from .utils import dummy_torch_and_transformers_and_k_diffusion_objects # noqa F403 - - _import_structure["utils.dummy_torch_and_transformers_and_k_diffusion_objects"] = [ - name for name in dir(dummy_torch_and_transformers_and_k_diffusion_objects) if not name.startswith("_") - ] - -else: - _import_structure["pipelines"].extend(["StableDiffusionKDiffusionPipeline"]) - -try: - if not (is_torch_available() and is_transformers_available() and is_onnx_available()): - raise OptionalDependencyNotAvailable() -except OptionalDependencyNotAvailable: - from .utils import dummy_torch_and_transformers_and_onnx_objects # noqa F403 - - _import_structure["utils.dummy_torch_and_transformers_and_onnx_objects"] = [ - name for name in dir(dummy_torch_and_transformers_and_onnx_objects) if not name.startswith("_") - ] - -else: - _import_structure["pipelines"].extend( - [ - "OnnxStableDiffusionImg2ImgPipeline", - "OnnxStableDiffusionInpaintPipeline", - "OnnxStableDiffusionInpaintPipelineLegacy", - "OnnxStableDiffusionPipeline", - "OnnxStableDiffusionUpscalePipeline", - "StableDiffusionOnnxPipeline", - ] - ) - -try: - if not (is_torch_available() and is_librosa_available()): - raise OptionalDependencyNotAvailable() -except OptionalDependencyNotAvailable: - from .utils import dummy_torch_and_librosa_objects # noqa F403 - - _import_structure["utils.dummy_torch_and_librosa_objects"] = [ - name for name in dir(dummy_torch_and_librosa_objects) if not name.startswith("_") - ] - -else: - _import_structure["pipelines"].extend(["AudioDiffusionPipeline", 
"Mel"]) - -try: - if not (is_transformers_available() and is_torch_available() and is_note_seq_available()): - raise OptionalDependencyNotAvailable() -except OptionalDependencyNotAvailable: - from .utils import dummy_transformers_and_torch_and_note_seq_objects # noqa F403 - - _import_structure["utils.dummy_transformers_and_torch_and_note_seq_objects"] = [ - name for name in dir(dummy_transformers_and_torch_and_note_seq_objects) if not name.startswith("_") - ] - - -else: - _import_structure["pipelines"].extend(["SpectrogramDiffusionPipeline"]) - -try: - if not is_flax_available(): - raise OptionalDependencyNotAvailable() -except OptionalDependencyNotAvailable: - from .utils import dummy_flax_objects # noqa F403 - - _import_structure["utils.dummy_flax_objects"] = [ - name for name in dir(dummy_flax_objects) if not name.startswith("_") - ] - - -else: - _import_structure["models.controlnet_flax"] = ["FlaxControlNetModel"] - _import_structure["models.modeling_flax_utils"] = ["FlaxModelMixin"] - _import_structure["models.unet_2d_condition_flax"] = ["FlaxUNet2DConditionModel"] - _import_structure["models.vae_flax"] = ["FlaxAutoencoderKL"] - _import_structure["pipelines"].extend(["FlaxDiffusionPipeline"]) - _import_structure["schedulers"].extend( - [ - "FlaxDDIMScheduler", - "FlaxDDPMScheduler", - "FlaxDPMSolverMultistepScheduler", - "FlaxEulerDiscreteScheduler", - "FlaxKarrasVeScheduler", - "FlaxLMSDiscreteScheduler", - "FlaxPNDMScheduler", - "FlaxSchedulerMixin", - "FlaxScoreSdeVeScheduler", - ] - ) - - -try: - if not (is_flax_available() and is_transformers_available()): - raise OptionalDependencyNotAvailable() -except OptionalDependencyNotAvailable: - from .utils import dummy_flax_and_transformers_objects # noqa F403 - - _import_structure["utils.dummy_flax_and_transformers_objects"] = [ - name for name in dir(dummy_flax_and_transformers_objects) if not name.startswith("_") - ] - - -else: - _import_structure["pipelines"].extend( - [ - "FlaxStableDiffusionControlNetPipeline", - "FlaxStableDiffusionImg2ImgPipeline", - "FlaxStableDiffusionInpaintPipeline", - "FlaxStableDiffusionPipeline", - "FlaxStableDiffusionXLPipeline", - ] - ) - -try: - if not (is_note_seq_available()): - raise OptionalDependencyNotAvailable() -except OptionalDependencyNotAvailable: - from .utils import dummy_note_seq_objects # noqa F403 - - _import_structure["utils.dummy_note_seq_objects"] = [ - name for name in dir(dummy_note_seq_objects) if not name.startswith("_") - ] - - -else: - _import_structure["pipelines"].extend(["MidiProcessor"]) - -if TYPE_CHECKING: - from .configuration_utils import ConfigMixin - - try: - if not is_onnx_available(): - raise OptionalDependencyNotAvailable() - except OptionalDependencyNotAvailable: - from .utils.dummy_onnx_objects import * # noqa F403 - else: - from .pipelines import OnnxRuntimeModel - - try: - if not is_torch_available(): - raise OptionalDependencyNotAvailable() - except OptionalDependencyNotAvailable: - from .utils.dummy_pt_objects import * # noqa F403 - else: - from .models import ( - AsymmetricAutoencoderKL, - AutoencoderKL, - AutoencoderTiny, - ControlNetModel, - ModelMixin, - MultiAdapter, - PriorTransformer, - T2IAdapter, - T5FilmDecoder, - Transformer2DModel, - UNet1DModel, - UNet2DConditionModel, - UNet2DModel, - UNet3DConditionModel, - VQModel, - ) - from .optimization import ( - get_constant_schedule, - get_constant_schedule_with_warmup, - get_cosine_schedule_with_warmup, - get_cosine_with_hard_restarts_schedule_with_warmup, - get_linear_schedule_with_warmup, - 
get_polynomial_decay_schedule_with_warmup, - get_scheduler, - ) - from .pipelines import ( - AudioPipelineOutput, - AutoPipelineForImage2Image, - AutoPipelineForInpainting, - AutoPipelineForText2Image, - BlipDiffusionControlNetPipeline, - BlipDiffusionPipeline, - CLIPImageProjection, - ConsistencyModelPipeline, - DanceDiffusionPipeline, - DDIMPipeline, - DDPMPipeline, - DiffusionPipeline, - DiTPipeline, - ImagePipelineOutput, - KarrasVePipeline, - LDMPipeline, - LDMSuperResolutionPipeline, - PNDMPipeline, - RePaintPipeline, - ScoreSdeVePipeline, - ) - from .schedulers import ( - CMStochasticIterativeScheduler, - DDIMInverseScheduler, - DDIMParallelScheduler, - DDIMScheduler, - DDPMParallelScheduler, - DDPMScheduler, - DDPMWuerstchenScheduler, - DEISMultistepScheduler, - DPMSolverMultistepInverseScheduler, - DPMSolverMultistepScheduler, - DPMSolverSinglestepScheduler, - EulerAncestralDiscreteScheduler, - EulerDiscreteScheduler, - HeunDiscreteScheduler, - IPNDMScheduler, - KarrasVeScheduler, - KDPM2AncestralDiscreteScheduler, - KDPM2DiscreteScheduler, - PNDMScheduler, - RePaintScheduler, - SchedulerMixin, - ScoreSdeVeScheduler, - UnCLIPScheduler, - UniPCMultistepScheduler, - VQDiffusionScheduler, - ) - from .training_utils import EMAModel - - try: - if not (is_torch_available() and is_scipy_available()): - raise OptionalDependencyNotAvailable() - except OptionalDependencyNotAvailable: - from .utils.dummy_torch_and_scipy_objects import * # noqa F403 - else: - from .schedulers import LMSDiscreteScheduler - - try: - if not (is_torch_available() and is_torchsde_available()): - raise OptionalDependencyNotAvailable() - except OptionalDependencyNotAvailable: - from .utils.dummy_torch_and_torchsde_objects import * # noqa F403 - else: - from .schedulers import DPMSolverSDEScheduler - - try: - if not (is_torch_available() and is_transformers_available()): - raise OptionalDependencyNotAvailable() - except OptionalDependencyNotAvailable: - from .utils.dummy_torch_and_transformers_objects import * # noqa F403 - else: - from .pipelines import ( - AltDiffusionImg2ImgPipeline, - AltDiffusionPipeline, - AudioLDM2Pipeline, - AudioLDM2ProjectionModel, - AudioLDM2UNet2DConditionModel, - AudioLDMPipeline, - CLIPImageProjection, - CycleDiffusionPipeline, - IFImg2ImgPipeline, - IFImg2ImgSuperResolutionPipeline, - IFInpaintingPipeline, - IFInpaintingSuperResolutionPipeline, - IFPipeline, - IFSuperResolutionPipeline, - ImageTextPipelineOutput, - KandinskyCombinedPipeline, - KandinskyImg2ImgCombinedPipeline, - KandinskyImg2ImgPipeline, - KandinskyInpaintCombinedPipeline, - KandinskyInpaintPipeline, - KandinskyPipeline, - KandinskyPriorPipeline, - KandinskyV22CombinedPipeline, - KandinskyV22ControlnetImg2ImgPipeline, - KandinskyV22ControlnetPipeline, - KandinskyV22Img2ImgCombinedPipeline, - KandinskyV22Img2ImgPipeline, - KandinskyV22InpaintCombinedPipeline, - KandinskyV22InpaintPipeline, - KandinskyV22Pipeline, - KandinskyV22PriorEmb2EmbPipeline, - KandinskyV22PriorPipeline, - LDMTextToImagePipeline, - MusicLDMPipeline, - PaintByExamplePipeline, - SemanticStableDiffusionPipeline, - ShapEImg2ImgPipeline, - ShapEPipeline, - StableDiffusionAdapterPipeline, - StableDiffusionAttendAndExcitePipeline, - StableDiffusionControlNetImg2ImgPipeline, - StableDiffusionControlNetInpaintPipeline, - StableDiffusionControlNetPipeline, - StableDiffusionDepth2ImgPipeline, - StableDiffusionDiffEditPipeline, - StableDiffusionGLIGENPipeline, - StableDiffusionGLIGENTextImagePipeline, - StableDiffusionImageVariationPipeline, - 
StableDiffusionImg2ImgPipeline, - StableDiffusionInpaintPipeline, - StableDiffusionInpaintPipelineLegacy, - StableDiffusionInstructPix2PixPipeline, - StableDiffusionLatentUpscalePipeline, - StableDiffusionLDM3DPipeline, - StableDiffusionModelEditingPipeline, - StableDiffusionPanoramaPipeline, - StableDiffusionParadigmsPipeline, - StableDiffusionPipeline, - StableDiffusionPipelineSafe, - StableDiffusionPix2PixZeroPipeline, - StableDiffusionSAGPipeline, - StableDiffusionUpscalePipeline, - StableDiffusionXLAdapterPipeline, - StableDiffusionXLControlNetImg2ImgPipeline, - StableDiffusionXLControlNetInpaintPipeline, - StableDiffusionXLControlNetPipeline, - StableDiffusionXLImg2ImgPipeline, - StableDiffusionXLInpaintPipeline, - StableDiffusionXLInstructPix2PixPipeline, - StableDiffusionXLPipeline, - StableUnCLIPImg2ImgPipeline, - StableUnCLIPPipeline, - TextToVideoSDPipeline, - TextToVideoZeroPipeline, - UnCLIPImageVariationPipeline, - UnCLIPPipeline, - UniDiffuserModel, - UniDiffuserPipeline, - UniDiffuserTextDecoder, - VersatileDiffusionDualGuidedPipeline, - VersatileDiffusionImageVariationPipeline, - VersatileDiffusionPipeline, - VersatileDiffusionTextToImagePipeline, - VideoToVideoSDPipeline, - VQDiffusionPipeline, - WuerstchenCombinedPipeline, - WuerstchenDecoderPipeline, - WuerstchenPriorPipeline, - ) - - try: - if not (is_torch_available() and is_transformers_available() and is_k_diffusion_available()): - raise OptionalDependencyNotAvailable() - except OptionalDependencyNotAvailable: - from .utils.dummy_torch_and_transformers_and_k_diffusion_objects import * # noqa F403 - else: - from .pipelines import StableDiffusionKDiffusionPipeline - - try: - if not (is_torch_available() and is_transformers_available() and is_onnx_available()): - raise OptionalDependencyNotAvailable() - except OptionalDependencyNotAvailable: - from .utils.dummy_torch_and_transformers_and_onnx_objects import * # noqa F403 - else: - from .pipelines import ( - OnnxStableDiffusionImg2ImgPipeline, - OnnxStableDiffusionInpaintPipeline, - OnnxStableDiffusionInpaintPipelineLegacy, - OnnxStableDiffusionPipeline, - OnnxStableDiffusionUpscalePipeline, - StableDiffusionOnnxPipeline, - ) - - try: - if not (is_torch_available() and is_librosa_available()): - raise OptionalDependencyNotAvailable() - except OptionalDependencyNotAvailable: - from .utils.dummy_torch_and_librosa_objects import * # noqa F403 - else: - from .pipelines import AudioDiffusionPipeline, Mel - - try: - if not (is_transformers_available() and is_torch_available() and is_note_seq_available()): - raise OptionalDependencyNotAvailable() - except OptionalDependencyNotAvailable: - from .utils.dummy_transformers_and_torch_and_note_seq_objects import * # noqa F403 - else: - from .pipelines import SpectrogramDiffusionPipeline - - try: - if not is_flax_available(): - raise OptionalDependencyNotAvailable() - except OptionalDependencyNotAvailable: - from .utils.dummy_flax_objects import * # noqa F403 - else: - from .models.controlnet_flax import FlaxControlNetModel - from .models.modeling_flax_utils import FlaxModelMixin - from .models.unet_2d_condition_flax import FlaxUNet2DConditionModel - from .models.vae_flax import FlaxAutoencoderKL - from .pipelines import FlaxDiffusionPipeline - from .schedulers import ( - FlaxDDIMScheduler, - FlaxDDPMScheduler, - FlaxDPMSolverMultistepScheduler, - FlaxEulerDiscreteScheduler, - FlaxKarrasVeScheduler, - FlaxLMSDiscreteScheduler, - FlaxPNDMScheduler, - FlaxSchedulerMixin, - FlaxScoreSdeVeScheduler, - ) - - try: - if not 
(is_flax_available() and is_transformers_available()): - raise OptionalDependencyNotAvailable() - except OptionalDependencyNotAvailable: - from .utils.dummy_flax_and_transformers_objects import * # noqa F403 - else: - from .pipelines import ( - FlaxStableDiffusionControlNetPipeline, - FlaxStableDiffusionImg2ImgPipeline, - FlaxStableDiffusionInpaintPipeline, - FlaxStableDiffusionPipeline, - FlaxStableDiffusionXLPipeline, - ) - - try: - if not (is_note_seq_available()): - raise OptionalDependencyNotAvailable() - except OptionalDependencyNotAvailable: - from .utils.dummy_note_seq_objects import * # noqa F403 - else: - from .pipelines import MidiProcessor - -else: - import sys - - sys.modules[__name__] = _LazyModule( - __name__, - globals()["__file__"], - _import_structure, - module_spec=__spec__, - extra_objects={"__version__": __version__}, - ) diff --git a/spaces/paulbauriegel/voice-coe-data/README.md b/spaces/paulbauriegel/voice-coe-data/README.md deleted file mode 100644 index 66aac818547c4842d202975acfacf20e45dd23e6..0000000000000000000000000000000000000000 --- a/spaces/paulbauriegel/voice-coe-data/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Voice Coe Data -emoji: 🐠 -colorFrom: indigo -colorTo: blue -sdk: gradio -sdk_version: 3.29.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/perezcatriel/data_world_jobs/page/new.py b/spaces/perezcatriel/data_world_jobs/page/new.py deleted file mode 100644 index ad03864b618cf3c9bfa7624e139c3d39c8780da5..0000000000000000000000000000000000000000 --- a/spaces/perezcatriel/data_world_jobs/page/new.py +++ /dev/null @@ -1,167 +0,0 @@ -import time - -import pandas as pd -import streamlit as st -from sklearn.ensemble import RandomForestRegressor -from sklearn.feature_extraction.text import CountVectorizer -from sklearn.naive_bayes import MultinomialNB -from sklearn.preprocessing import LabelEncoder - - -def New(): - progress_text = "Operación en progreso... :sunglasses: Por favor " \ - "espere... " \ - ":bomb:" - my_bar = st.progress(0, text=progress_text) - - for percent_complete in range(100): - time.sleep(0.05) - my_bar.progress(percent_complete + 1, text=progress_text) - - st.markdown(""" -

          Discover the most innovative projects in their Beta phase and be the first to experiment with them!

          -
          - - !Te presentamos un adelanto exclusivo de lo que viene - próximamente en nuestros proyectos más innovadores! - - """, unsafe_allow_html=True) - - # Leer los datos y seleccionar las columnas necesarias - df = pd.read_csv('./ML/ds_salaries.csv') - df = df[['company_location', 'salary_in_usd']] - - # Codificar las ubicaciones de las empresas - le = LabelEncoder() - df['company_location'] = le.fit_transform(df['company_location']) - - # Decodificar las ubicaciones de las empresas - decoded_locations = le.inverse_transform(df['company_location'].unique()) - - # Separar los datos de entrada y salida - X = df.iloc[:, :-1].values - y = df.iloc[:, -1].values - - # Entrenar el modelo - model = RandomForestRegressor(n_estimators=100, random_state=42) - model.fit(X, y) - - # Obtener las ubicaciones de las empresas y sus salarios predichos - locations = df['company_location'].unique() - predicted_salaries = model.predict(locations.reshape(-1, 1)) - results_df = pd.DataFrame( - {'company_location': locations, 'predicted_salary': predicted_salaries}) - - # Decodificar las ubicaciones de las empresas - results_df['company_location'] = le.inverse_transform( - results_df['company_location']) - - # Ordenar los resultados por salario predicho - results_df = results_df.sort_values('predicted_salary', - ascending=False).reset_index(drop=True) - - # Mostrar el título y el top 5 de países mejor pagados - st.markdown(""" -

          Attract the best talent with our accurate list of the highest-paying countries.

          -
          - """, unsafe_allow_html=True) - - # Descripción - st.markdown(""" -

          As a recruiter, you know that compensation is a key factor in attracting top talent. With our RandomForest algorithm we obtain an accurate average of the highest-paying countries around the world, which lets you attract the most talented candidates. Our list is less skewed by outliers thanks to the random sampling of companies in each country. Join our community and make informed decisions for a prosperous future. Attract the best talent and take your company to the next level with our accurate list of the highest-paying countries!""", unsafe_allow_html=True) -  -    for i in range(5): -        location = results_df.loc[i, 'company_location'] -        salary = results_df.loc[i, 'predicted_salary'] -        st.markdown(f'### **{location}**: ${salary:,.2f}', -                    unsafe_allow_html=True) -  -    # Show the dropdown menu to select a country -    st.markdown(""" -

          Select a country

          -
          - """, unsafe_allow_html=True) - selected_location = st.selectbox('Ubicación de la empresa', - decoded_locations) - - # Mostrar el salario predicho para el país seleccionado - predicted_salary = results_df.loc[results_df[ - 'company_location'] == selected_location, 'predicted_salary'].iloc[ - 0] - st.markdown(f'### **{selected_location}**: ${predicted_salary:,.2f}', - unsafe_allow_html=True) - - ##### - - # Cargar los datos - df = pd.read_csv('./assets/dataset_modelo_1.csv') - - # Crear una lista con todas las skills disponibles - all_skills = set() - for skills in df.skills: - all_skills.update(skills.split(", ")) - - # Crear un diccionario que relaciona cada skill con su índice en el vector - skill_indices = {skill: i for i, skill in enumerate(all_skills)} - - # Crear una matriz de características con la frecuencia de cada skill en cada fila - vectorizer = CountVectorizer(vocabulary=skill_indices.keys(), - lowercase=False) - X = vectorizer.fit_transform(df.skills) - - # Entrenar el modelo - clf = MultinomialNB() - clf.fit(X, df.Aptitude) - - # Crear la interfaz de usuario con Streamlit - st.markdown(""" -

          Find the perfect talent with our Naive Bayes model for identifying the most important skills.

          -
          - """, unsafe_allow_html=True) - st.markdown( - """ -

          As a recruiter, you know that finding the perfect talent is a constant challenge. With our Naive Bayes model, we can identify the most important skills for any particular job. By entering the job title, our algorithm generates an accurate list of the skills needed to succeed in that specific position. This lets you find and attract the perfect candidate for the job. Join our community and start boosting your career today!

          """, unsafe_allow_html=True) - - title = st.text_input("Título del trabajo") - - # Crear una función que encuentra las habilidades más importantes para un título dado - def get_top_skills(title, limit): - # Filtrar el dataframe por el título dado - filtered_df = df[df.job_title.str.contains(title, case=False)] - - # Crear una matriz de características con la frecuencia de cada skill en el dataframe filtrado - X_filtered = vectorizer.transform(filtered_df.skills) - - # Calcular la frecuencia de cada habilidad en el dataframe filtrado - skill_frequencies = X_filtered.sum(axis=0).A1 - - # Obtener los nombres de las habilidades - skill_names = vectorizer.vocabulary_.keys() - - # Crear un diccionario que relaciona cada habilidad con su frecuencia - skill_freq_dict = dict(zip(skill_names, skill_frequencies)) - - # Ordenar las habilidades por frecuencia descendente y devolver las más importantes (según el límite dado) - top_skills = sorted(skill_freq_dict, key=skill_freq_dict.get, - reverse=True)[:limit] - return top_skills - - if title: - limit = st.number_input("Cantidad de habilidades a mostrar", value=5, - min_value=1, max_value=len(all_skills)) - top_skills = get_top_skills(title, limit) - st.write( - f"Las {limit} habilidades más importantes para el trabajo de '{title}' son:") - for skill in top_skills: - st.write(f"- {skill}") - - ##### diff --git a/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/chardet/langturkishmodel.py b/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/chardet/langturkishmodel.py deleted file mode 100644 index 291857c25c83f91a151c1d7760e8e5e09c1ee238..0000000000000000000000000000000000000000 --- a/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/chardet/langturkishmodel.py +++ /dev/null @@ -1,4380 +0,0 @@ -from pip._vendor.chardet.sbcharsetprober import SingleByteCharSetModel - -# 3: Positive -# 2: Likely -# 1: Unlikely -# 0: Negative - -TURKISH_LANG_MODEL = { - 23: { # 'A' - 23: 0, # 'A' - 37: 0, # 'B' - 47: 0, # 'C' - 39: 0, # 'D' - 29: 0, # 'E' - 52: 0, # 'F' - 36: 0, # 'G' - 45: 0, # 'H' - 53: 0, # 'I' - 60: 0, # 'J' - 16: 0, # 'K' - 49: 0, # 'L' - 20: 0, # 'M' - 46: 0, # 'N' - 42: 0, # 'O' - 48: 0, # 'P' - 44: 0, # 'R' - 35: 0, # 'S' - 31: 0, # 'T' - 51: 0, # 'U' - 38: 0, # 'V' - 62: 0, # 'W' - 43: 0, # 'Y' - 56: 0, # 'Z' - 1: 3, # 'a' - 21: 0, # 'b' - 28: 0, # 'c' - 12: 2, # 'd' - 2: 3, # 'e' - 18: 0, # 'f' - 27: 1, # 'g' - 25: 1, # 'h' - 3: 1, # 'i' - 24: 0, # 'j' - 10: 2, # 'k' - 5: 1, # 'l' - 13: 1, # 'm' - 4: 1, # 'n' - 15: 0, # 'o' - 26: 0, # 'p' - 7: 1, # 'r' - 8: 1, # 's' - 9: 1, # 't' - 14: 1, # 'u' - 32: 0, # 'v' - 57: 0, # 'w' - 58: 0, # 'x' - 11: 3, # 'y' - 22: 0, # 'z' - 63: 0, # '·' - 54: 0, # 'Ç' - 50: 0, # 'Ö' - 55: 0, # 'Ü' - 59: 0, # 'â' - 33: 1, # 'ç' - 61: 0, # 'î' - 34: 0, # 'ö' - 17: 0, # 'ü' - 30: 0, # 'ğ' - 41: 0, # 'İ' - 6: 0, # 'ı' - 40: 0, # 'Ş' - 19: 0, # 'ş' - }, - 37: { # 'B' - 23: 0, # 'A' - 37: 0, # 'B' - 47: 2, # 'C' - 39: 0, # 'D' - 29: 0, # 'E' - 52: 2, # 'F' - 36: 0, # 'G' - 45: 0, # 'H' - 53: 0, # 'I' - 60: 0, # 'J' - 16: 1, # 'K' - 49: 0, # 'L' - 20: 0, # 'M' - 46: 0, # 'N' - 42: 0, # 'O' - 48: 1, # 'P' - 44: 0, # 'R' - 35: 1, # 'S' - 31: 0, # 'T' - 51: 0, # 'U' - 38: 1, # 'V' - 62: 0, # 'W' - 43: 1, # 'Y' - 56: 0, # 'Z' - 1: 2, # 'a' - 21: 0, # 'b' - 28: 2, # 'c' - 12: 0, # 'd' - 2: 3, # 'e' - 18: 0, # 'f' - 27: 0, # 'g' - 25: 0, # 'h' - 3: 0, # 'i' - 24: 0, # 'j' - 10: 0, # 'k' - 5: 0, # 'l' - 13: 1, # 'm' - 4: 1, # 'n' - 15: 0, # 'o' - 
26: 0, # 'p' - 7: 0, # 'r' - 8: 0, # 's' - 9: 0, # 't' - 14: 2, # 'u' - 32: 0, # 'v' - 57: 0, # 'w' - 58: 0, # 'x' - 11: 0, # 'y' - 22: 1, # 'z' - 63: 0, # '·' - 54: 0, # 'Ç' - 50: 1, # 'Ö' - 55: 0, # 'Ü' - 59: 0, # 'â' - 33: 0, # 'ç' - 61: 0, # 'î' - 34: 1, # 'ö' - 17: 0, # 'ü' - 30: 0, # 'ğ' - 41: 0, # 'İ' - 6: 0, # 'ı' - 40: 1, # 'Ş' - 19: 1, # 'ş' - }, - 47: { # 'C' - 23: 0, # 'A' - 37: 0, # 'B' - 47: 0, # 'C' - 39: 0, # 'D' - 29: 0, # 'E' - 52: 1, # 'F' - 36: 0, # 'G' - 45: 0, # 'H' - 53: 0, # 'I' - 60: 0, # 'J' - 16: 0, # 'K' - 49: 1, # 'L' - 20: 0, # 'M' - 46: 1, # 'N' - 42: 0, # 'O' - 48: 1, # 'P' - 44: 1, # 'R' - 35: 0, # 'S' - 31: 0, # 'T' - 51: 0, # 'U' - 38: 1, # 'V' - 62: 0, # 'W' - 43: 1, # 'Y' - 56: 0, # 'Z' - 1: 3, # 'a' - 21: 0, # 'b' - 28: 2, # 'c' - 12: 0, # 'd' - 2: 3, # 'e' - 18: 0, # 'f' - 27: 0, # 'g' - 25: 0, # 'h' - 3: 0, # 'i' - 24: 2, # 'j' - 10: 1, # 'k' - 5: 2, # 'l' - 13: 2, # 'm' - 4: 2, # 'n' - 15: 1, # 'o' - 26: 0, # 'p' - 7: 2, # 'r' - 8: 0, # 's' - 9: 0, # 't' - 14: 3, # 'u' - 32: 0, # 'v' - 57: 0, # 'w' - 58: 0, # 'x' - 11: 0, # 'y' - 22: 2, # 'z' - 63: 0, # '·' - 54: 0, # 'Ç' - 50: 1, # 'Ö' - 55: 0, # 'Ü' - 59: 0, # 'â' - 33: 1, # 'ç' - 61: 0, # 'î' - 34: 1, # 'ö' - 17: 0, # 'ü' - 30: 0, # 'ğ' - 41: 1, # 'İ' - 6: 3, # 'ı' - 40: 0, # 'Ş' - 19: 0, # 'ş' - }, - 39: { # 'D' - 23: 0, # 'A' - 37: 0, # 'B' - 47: 0, # 'C' - 39: 0, # 'D' - 29: 0, # 'E' - 52: 1, # 'F' - 36: 0, # 'G' - 45: 0, # 'H' - 53: 0, # 'I' - 60: 0, # 'J' - 16: 1, # 'K' - 49: 0, # 'L' - 20: 0, # 'M' - 46: 0, # 'N' - 42: 0, # 'O' - 48: 1, # 'P' - 44: 0, # 'R' - 35: 0, # 'S' - 31: 0, # 'T' - 51: 0, # 'U' - 38: 0, # 'V' - 62: 0, # 'W' - 43: 0, # 'Y' - 56: 0, # 'Z' - 1: 2, # 'a' - 21: 0, # 'b' - 28: 2, # 'c' - 12: 0, # 'd' - 2: 2, # 'e' - 18: 0, # 'f' - 27: 0, # 'g' - 25: 0, # 'h' - 3: 0, # 'i' - 24: 0, # 'j' - 10: 0, # 'k' - 5: 1, # 'l' - 13: 3, # 'm' - 4: 0, # 'n' - 15: 1, # 'o' - 26: 0, # 'p' - 7: 0, # 'r' - 8: 0, # 's' - 9: 0, # 't' - 14: 1, # 'u' - 32: 0, # 'v' - 57: 0, # 'w' - 58: 0, # 'x' - 11: 0, # 'y' - 22: 1, # 'z' - 63: 0, # '·' - 54: 1, # 'Ç' - 50: 0, # 'Ö' - 55: 0, # 'Ü' - 59: 0, # 'â' - 33: 1, # 'ç' - 61: 0, # 'î' - 34: 0, # 'ö' - 17: 0, # 'ü' - 30: 1, # 'ğ' - 41: 0, # 'İ' - 6: 1, # 'ı' - 40: 1, # 'Ş' - 19: 0, # 'ş' - }, - 29: { # 'E' - 23: 0, # 'A' - 37: 0, # 'B' - 47: 0, # 'C' - 39: 0, # 'D' - 29: 1, # 'E' - 52: 0, # 'F' - 36: 0, # 'G' - 45: 0, # 'H' - 53: 0, # 'I' - 60: 0, # 'J' - 16: 3, # 'K' - 49: 0, # 'L' - 20: 1, # 'M' - 46: 0, # 'N' - 42: 0, # 'O' - 48: 0, # 'P' - 44: 0, # 'R' - 35: 0, # 'S' - 31: 0, # 'T' - 51: 0, # 'U' - 38: 0, # 'V' - 62: 0, # 'W' - 43: 0, # 'Y' - 56: 0, # 'Z' - 1: 3, # 'a' - 21: 0, # 'b' - 28: 0, # 'c' - 12: 2, # 'd' - 2: 3, # 'e' - 18: 0, # 'f' - 27: 1, # 'g' - 25: 0, # 'h' - 3: 1, # 'i' - 24: 1, # 'j' - 10: 0, # 'k' - 5: 3, # 'l' - 13: 3, # 'm' - 4: 3, # 'n' - 15: 0, # 'o' - 26: 0, # 'p' - 7: 0, # 'r' - 8: 1, # 's' - 9: 1, # 't' - 14: 1, # 'u' - 32: 1, # 'v' - 57: 0, # 'w' - 58: 0, # 'x' - 11: 2, # 'y' - 22: 0, # 'z' - 63: 0, # '·' - 54: 0, # 'Ç' - 50: 0, # 'Ö' - 55: 0, # 'Ü' - 59: 0, # 'â' - 33: 0, # 'ç' - 61: 0, # 'î' - 34: 0, # 'ö' - 17: 0, # 'ü' - 30: 0, # 'ğ' - 41: 0, # 'İ' - 6: 3, # 'ı' - 40: 0, # 'Ş' - 19: 0, # 'ş' - }, - 52: { # 'F' - 23: 0, # 'A' - 37: 1, # 'B' - 47: 1, # 'C' - 39: 1, # 'D' - 29: 1, # 'E' - 52: 2, # 'F' - 36: 0, # 'G' - 45: 2, # 'H' - 53: 1, # 'I' - 60: 0, # 'J' - 16: 0, # 'K' - 49: 0, # 'L' - 20: 1, # 'M' - 46: 1, # 'N' - 42: 1, # 'O' - 48: 2, # 'P' - 44: 1, # 'R' - 35: 1, # 'S' - 31: 1, # 'T' - 51: 1, # 'U' - 38: 1, # 'V' - 
62: 0, # 'W' - 43: 2, # 'Y' - 56: 0, # 'Z' - 1: 0, # 'a' - 21: 1, # 'b' - 28: 1, # 'c' - 12: 1, # 'd' - 2: 0, # 'e' - 18: 1, # 'f' - 27: 0, # 'g' - 25: 0, # 'h' - 3: 2, # 'i' - 24: 1, # 'j' - 10: 0, # 'k' - 5: 0, # 'l' - 13: 1, # 'm' - 4: 2, # 'n' - 15: 1, # 'o' - 26: 0, # 'p' - 7: 2, # 'r' - 8: 1, # 's' - 9: 1, # 't' - 14: 1, # 'u' - 32: 0, # 'v' - 57: 0, # 'w' - 58: 0, # 'x' - 11: 1, # 'y' - 22: 1, # 'z' - 63: 0, # '·' - 54: 0, # 'Ç' - 50: 1, # 'Ö' - 55: 2, # 'Ü' - 59: 0, # 'â' - 33: 0, # 'ç' - 61: 0, # 'î' - 34: 2, # 'ö' - 17: 0, # 'ü' - 30: 1, # 'ğ' - 41: 1, # 'İ' - 6: 2, # 'ı' - 40: 0, # 'Ş' - 19: 2, # 'ş' - }, - 36: { # 'G' - 23: 1, # 'A' - 37: 0, # 'B' - 47: 1, # 'C' - 39: 0, # 'D' - 29: 0, # 'E' - 52: 1, # 'F' - 36: 2, # 'G' - 45: 0, # 'H' - 53: 0, # 'I' - 60: 0, # 'J' - 16: 2, # 'K' - 49: 0, # 'L' - 20: 0, # 'M' - 46: 2, # 'N' - 42: 1, # 'O' - 48: 1, # 'P' - 44: 1, # 'R' - 35: 1, # 'S' - 31: 0, # 'T' - 51: 1, # 'U' - 38: 2, # 'V' - 62: 0, # 'W' - 43: 0, # 'Y' - 56: 0, # 'Z' - 1: 3, # 'a' - 21: 0, # 'b' - 28: 1, # 'c' - 12: 0, # 'd' - 2: 3, # 'e' - 18: 0, # 'f' - 27: 0, # 'g' - 25: 0, # 'h' - 3: 0, # 'i' - 24: 1, # 'j' - 10: 1, # 'k' - 5: 0, # 'l' - 13: 3, # 'm' - 4: 2, # 'n' - 15: 0, # 'o' - 26: 1, # 'p' - 7: 0, # 'r' - 8: 1, # 's' - 9: 1, # 't' - 14: 3, # 'u' - 32: 0, # 'v' - 57: 0, # 'w' - 58: 1, # 'x' - 11: 0, # 'y' - 22: 2, # 'z' - 63: 0, # '·' - 54: 1, # 'Ç' - 50: 2, # 'Ö' - 55: 0, # 'Ü' - 59: 1, # 'â' - 33: 2, # 'ç' - 61: 0, # 'î' - 34: 0, # 'ö' - 17: 0, # 'ü' - 30: 1, # 'ğ' - 41: 1, # 'İ' - 6: 2, # 'ı' - 40: 2, # 'Ş' - 19: 1, # 'ş' - }, - 45: { # 'H' - 23: 0, # 'A' - 37: 1, # 'B' - 47: 0, # 'C' - 39: 0, # 'D' - 29: 0, # 'E' - 52: 2, # 'F' - 36: 2, # 'G' - 45: 1, # 'H' - 53: 1, # 'I' - 60: 0, # 'J' - 16: 2, # 'K' - 49: 1, # 'L' - 20: 0, # 'M' - 46: 1, # 'N' - 42: 1, # 'O' - 48: 1, # 'P' - 44: 0, # 'R' - 35: 2, # 'S' - 31: 0, # 'T' - 51: 1, # 'U' - 38: 2, # 'V' - 62: 0, # 'W' - 43: 0, # 'Y' - 56: 0, # 'Z' - 1: 3, # 'a' - 21: 0, # 'b' - 28: 2, # 'c' - 12: 0, # 'd' - 2: 3, # 'e' - 18: 0, # 'f' - 27: 0, # 'g' - 25: 0, # 'h' - 3: 2, # 'i' - 24: 0, # 'j' - 10: 1, # 'k' - 5: 0, # 'l' - 13: 2, # 'm' - 4: 0, # 'n' - 15: 1, # 'o' - 26: 1, # 'p' - 7: 1, # 'r' - 8: 0, # 's' - 9: 0, # 't' - 14: 3, # 'u' - 32: 0, # 'v' - 57: 0, # 'w' - 58: 0, # 'x' - 11: 0, # 'y' - 22: 2, # 'z' - 63: 0, # '·' - 54: 1, # 'Ç' - 50: 1, # 'Ö' - 55: 0, # 'Ü' - 59: 0, # 'â' - 33: 1, # 'ç' - 61: 0, # 'î' - 34: 1, # 'ö' - 17: 0, # 'ü' - 30: 2, # 'ğ' - 41: 1, # 'İ' - 6: 0, # 'ı' - 40: 2, # 'Ş' - 19: 1, # 'ş' - }, - 53: { # 'I' - 23: 0, # 'A' - 37: 0, # 'B' - 47: 0, # 'C' - 39: 0, # 'D' - 29: 0, # 'E' - 52: 1, # 'F' - 36: 0, # 'G' - 45: 0, # 'H' - 53: 0, # 'I' - 60: 0, # 'J' - 16: 2, # 'K' - 49: 0, # 'L' - 20: 0, # 'M' - 46: 0, # 'N' - 42: 0, # 'O' - 48: 1, # 'P' - 44: 0, # 'R' - 35: 0, # 'S' - 31: 0, # 'T' - 51: 0, # 'U' - 38: 0, # 'V' - 62: 0, # 'W' - 43: 0, # 'Y' - 56: 0, # 'Z' - 1: 2, # 'a' - 21: 0, # 'b' - 28: 2, # 'c' - 12: 0, # 'd' - 2: 2, # 'e' - 18: 0, # 'f' - 27: 0, # 'g' - 25: 0, # 'h' - 3: 0, # 'i' - 24: 0, # 'j' - 10: 0, # 'k' - 5: 2, # 'l' - 13: 2, # 'm' - 4: 0, # 'n' - 15: 0, # 'o' - 26: 0, # 'p' - 7: 0, # 'r' - 8: 0, # 's' - 9: 0, # 't' - 14: 2, # 'u' - 32: 0, # 'v' - 57: 0, # 'w' - 58: 0, # 'x' - 11: 0, # 'y' - 22: 2, # 'z' - 63: 0, # '·' - 54: 1, # 'Ç' - 50: 0, # 'Ö' - 55: 0, # 'Ü' - 59: 0, # 'â' - 33: 2, # 'ç' - 61: 0, # 'î' - 34: 1, # 'ö' - 17: 0, # 'ü' - 30: 0, # 'ğ' - 41: 0, # 'İ' - 6: 0, # 'ı' - 40: 1, # 'Ş' - 19: 1, # 'ş' - }, - 60: { # 'J' - 23: 0, # 'A' - 37: 0, # 'B' - 47: 0, # 'C' - 39: 
0, # 'D' - 29: 0, # 'E' - 52: 0, # 'F' - 36: 0, # 'G' - 45: 0, # 'H' - 53: 0, # 'I' - 60: 0, # 'J' - 16: 0, # 'K' - 49: 0, # 'L' - 20: 1, # 'M' - 46: 0, # 'N' - 42: 0, # 'O' - 48: 0, # 'P' - 44: 0, # 'R' - 35: 0, # 'S' - 31: 0, # 'T' - 51: 0, # 'U' - 38: 0, # 'V' - 62: 0, # 'W' - 43: 0, # 'Y' - 56: 0, # 'Z' - 1: 0, # 'a' - 21: 1, # 'b' - 28: 0, # 'c' - 12: 1, # 'd' - 2: 0, # 'e' - 18: 0, # 'f' - 27: 0, # 'g' - 25: 0, # 'h' - 3: 1, # 'i' - 24: 0, # 'j' - 10: 0, # 'k' - 5: 0, # 'l' - 13: 0, # 'm' - 4: 1, # 'n' - 15: 0, # 'o' - 26: 0, # 'p' - 7: 0, # 'r' - 8: 1, # 's' - 9: 0, # 't' - 14: 0, # 'u' - 32: 0, # 'v' - 57: 0, # 'w' - 58: 0, # 'x' - 11: 0, # 'y' - 22: 0, # 'z' - 63: 0, # '·' - 54: 0, # 'Ç' - 50: 0, # 'Ö' - 55: 0, # 'Ü' - 59: 0, # 'â' - 33: 0, # 'ç' - 61: 0, # 'î' - 34: 0, # 'ö' - 17: 0, # 'ü' - 30: 0, # 'ğ' - 41: 0, # 'İ' - 6: 0, # 'ı' - 40: 0, # 'Ş' - 19: 0, # 'ş' - }, - 16: { # 'K' - 23: 0, # 'A' - 37: 0, # 'B' - 47: 0, # 'C' - 39: 0, # 'D' - 29: 3, # 'E' - 52: 0, # 'F' - 36: 0, # 'G' - 45: 0, # 'H' - 53: 0, # 'I' - 60: 0, # 'J' - 16: 0, # 'K' - 49: 0, # 'L' - 20: 2, # 'M' - 46: 0, # 'N' - 42: 0, # 'O' - 48: 0, # 'P' - 44: 0, # 'R' - 35: 0, # 'S' - 31: 2, # 'T' - 51: 0, # 'U' - 38: 0, # 'V' - 62: 0, # 'W' - 43: 0, # 'Y' - 56: 0, # 'Z' - 1: 2, # 'a' - 21: 3, # 'b' - 28: 0, # 'c' - 12: 3, # 'd' - 2: 1, # 'e' - 18: 3, # 'f' - 27: 3, # 'g' - 25: 3, # 'h' - 3: 3, # 'i' - 24: 2, # 'j' - 10: 3, # 'k' - 5: 0, # 'l' - 13: 0, # 'm' - 4: 3, # 'n' - 15: 0, # 'o' - 26: 1, # 'p' - 7: 3, # 'r' - 8: 3, # 's' - 9: 3, # 't' - 14: 0, # 'u' - 32: 3, # 'v' - 57: 0, # 'w' - 58: 0, # 'x' - 11: 2, # 'y' - 22: 1, # 'z' - 63: 0, # '·' - 54: 0, # 'Ç' - 50: 0, # 'Ö' - 55: 0, # 'Ü' - 59: 0, # 'â' - 33: 0, # 'ç' - 61: 0, # 'î' - 34: 0, # 'ö' - 17: 2, # 'ü' - 30: 0, # 'ğ' - 41: 1, # 'İ' - 6: 3, # 'ı' - 40: 0, # 'Ş' - 19: 0, # 'ş' - }, - 49: { # 'L' - 23: 0, # 'A' - 37: 0, # 'B' - 47: 0, # 'C' - 39: 0, # 'D' - 29: 2, # 'E' - 52: 0, # 'F' - 36: 1, # 'G' - 45: 1, # 'H' - 53: 0, # 'I' - 60: 0, # 'J' - 16: 0, # 'K' - 49: 0, # 'L' - 20: 1, # 'M' - 46: 0, # 'N' - 42: 2, # 'O' - 48: 0, # 'P' - 44: 0, # 'R' - 35: 0, # 'S' - 31: 0, # 'T' - 51: 0, # 'U' - 38: 0, # 'V' - 62: 0, # 'W' - 43: 1, # 'Y' - 56: 0, # 'Z' - 1: 0, # 'a' - 21: 3, # 'b' - 28: 0, # 'c' - 12: 2, # 'd' - 2: 0, # 'e' - 18: 0, # 'f' - 27: 0, # 'g' - 25: 0, # 'h' - 3: 2, # 'i' - 24: 0, # 'j' - 10: 1, # 'k' - 5: 0, # 'l' - 13: 0, # 'm' - 4: 2, # 'n' - 15: 1, # 'o' - 26: 1, # 'p' - 7: 1, # 'r' - 8: 1, # 's' - 9: 1, # 't' - 14: 0, # 'u' - 32: 0, # 'v' - 57: 0, # 'w' - 58: 0, # 'x' - 11: 2, # 'y' - 22: 0, # 'z' - 63: 0, # '·' - 54: 0, # 'Ç' - 50: 0, # 'Ö' - 55: 2, # 'Ü' - 59: 0, # 'â' - 33: 0, # 'ç' - 61: 0, # 'î' - 34: 1, # 'ö' - 17: 1, # 'ü' - 30: 1, # 'ğ' - 41: 0, # 'İ' - 6: 2, # 'ı' - 40: 0, # 'Ş' - 19: 0, # 'ş' - }, - 20: { # 'M' - 23: 1, # 'A' - 37: 0, # 'B' - 47: 0, # 'C' - 39: 0, # 'D' - 29: 0, # 'E' - 52: 0, # 'F' - 36: 0, # 'G' - 45: 0, # 'H' - 53: 0, # 'I' - 60: 1, # 'J' - 16: 3, # 'K' - 49: 0, # 'L' - 20: 2, # 'M' - 46: 0, # 'N' - 42: 0, # 'O' - 48: 0, # 'P' - 44: 0, # 'R' - 35: 0, # 'S' - 31: 1, # 'T' - 51: 0, # 'U' - 38: 0, # 'V' - 62: 0, # 'W' - 43: 0, # 'Y' - 56: 0, # 'Z' - 1: 3, # 'a' - 21: 2, # 'b' - 28: 0, # 'c' - 12: 3, # 'd' - 2: 3, # 'e' - 18: 0, # 'f' - 27: 1, # 'g' - 25: 1, # 'h' - 3: 2, # 'i' - 24: 2, # 'j' - 10: 2, # 'k' - 5: 2, # 'l' - 13: 3, # 'm' - 4: 3, # 'n' - 15: 0, # 'o' - 26: 1, # 'p' - 7: 3, # 'r' - 8: 0, # 's' - 9: 2, # 't' - 14: 3, # 'u' - 32: 0, # 'v' - 57: 0, # 'w' - 58: 0, # 'x' - 11: 2, # 'y' - 22: 0, # 'z' - 63: 0, # 
'·' - 54: 0, # 'Ç' - 50: 0, # 'Ö' - 55: 0, # 'Ü' - 59: 0, # 'â' - 33: 3, # 'ç' - 61: 0, # 'î' - 34: 0, # 'ö' - 17: 0, # 'ü' - 30: 0, # 'ğ' - 41: 0, # 'İ' - 6: 3, # 'ı' - 40: 0, # 'Ş' - 19: 0, # 'ş' - }, - 46: { # 'N' - 23: 0, # 'A' - 37: 1, # 'B' - 47: 0, # 'C' - 39: 0, # 'D' - 29: 0, # 'E' - 52: 1, # 'F' - 36: 1, # 'G' - 45: 1, # 'H' - 53: 0, # 'I' - 60: 0, # 'J' - 16: 2, # 'K' - 49: 0, # 'L' - 20: 0, # 'M' - 46: 1, # 'N' - 42: 0, # 'O' - 48: 0, # 'P' - 44: 1, # 'R' - 35: 1, # 'S' - 31: 0, # 'T' - 51: 1, # 'U' - 38: 2, # 'V' - 62: 0, # 'W' - 43: 1, # 'Y' - 56: 0, # 'Z' - 1: 3, # 'a' - 21: 0, # 'b' - 28: 2, # 'c' - 12: 0, # 'd' - 2: 3, # 'e' - 18: 0, # 'f' - 27: 1, # 'g' - 25: 0, # 'h' - 3: 0, # 'i' - 24: 2, # 'j' - 10: 1, # 'k' - 5: 1, # 'l' - 13: 3, # 'm' - 4: 2, # 'n' - 15: 1, # 'o' - 26: 1, # 'p' - 7: 1, # 'r' - 8: 0, # 's' - 9: 0, # 't' - 14: 3, # 'u' - 32: 0, # 'v' - 57: 0, # 'w' - 58: 1, # 'x' - 11: 1, # 'y' - 22: 2, # 'z' - 63: 0, # '·' - 54: 1, # 'Ç' - 50: 1, # 'Ö' - 55: 0, # 'Ü' - 59: 0, # 'â' - 33: 0, # 'ç' - 61: 0, # 'î' - 34: 1, # 'ö' - 17: 0, # 'ü' - 30: 0, # 'ğ' - 41: 1, # 'İ' - 6: 2, # 'ı' - 40: 1, # 'Ş' - 19: 1, # 'ş' - }, - 42: { # 'O' - 23: 0, # 'A' - 37: 0, # 'B' - 47: 0, # 'C' - 39: 0, # 'D' - 29: 0, # 'E' - 52: 1, # 'F' - 36: 0, # 'G' - 45: 1, # 'H' - 53: 0, # 'I' - 60: 0, # 'J' - 16: 2, # 'K' - 49: 1, # 'L' - 20: 0, # 'M' - 46: 0, # 'N' - 42: 0, # 'O' - 48: 2, # 'P' - 44: 1, # 'R' - 35: 1, # 'S' - 31: 0, # 'T' - 51: 1, # 'U' - 38: 1, # 'V' - 62: 0, # 'W' - 43: 0, # 'Y' - 56: 0, # 'Z' - 1: 3, # 'a' - 21: 0, # 'b' - 28: 2, # 'c' - 12: 0, # 'd' - 2: 2, # 'e' - 18: 0, # 'f' - 27: 0, # 'g' - 25: 0, # 'h' - 3: 0, # 'i' - 24: 0, # 'j' - 10: 0, # 'k' - 5: 3, # 'l' - 13: 3, # 'm' - 4: 0, # 'n' - 15: 1, # 'o' - 26: 0, # 'p' - 7: 0, # 'r' - 8: 0, # 's' - 9: 0, # 't' - 14: 2, # 'u' - 32: 0, # 'v' - 57: 0, # 'w' - 58: 0, # 'x' - 11: 0, # 'y' - 22: 2, # 'z' - 63: 0, # '·' - 54: 2, # 'Ç' - 50: 1, # 'Ö' - 55: 0, # 'Ü' - 59: 0, # 'â' - 33: 2, # 'ç' - 61: 0, # 'î' - 34: 1, # 'ö' - 17: 0, # 'ü' - 30: 1, # 'ğ' - 41: 2, # 'İ' - 6: 1, # 'ı' - 40: 1, # 'Ş' - 19: 1, # 'ş' - }, - 48: { # 'P' - 23: 0, # 'A' - 37: 0, # 'B' - 47: 2, # 'C' - 39: 0, # 'D' - 29: 0, # 'E' - 52: 2, # 'F' - 36: 1, # 'G' - 45: 1, # 'H' - 53: 0, # 'I' - 60: 0, # 'J' - 16: 2, # 'K' - 49: 0, # 'L' - 20: 0, # 'M' - 46: 1, # 'N' - 42: 1, # 'O' - 48: 1, # 'P' - 44: 0, # 'R' - 35: 1, # 'S' - 31: 0, # 'T' - 51: 0, # 'U' - 38: 1, # 'V' - 62: 0, # 'W' - 43: 0, # 'Y' - 56: 0, # 'Z' - 1: 2, # 'a' - 21: 0, # 'b' - 28: 2, # 'c' - 12: 0, # 'd' - 2: 3, # 'e' - 18: 0, # 'f' - 27: 0, # 'g' - 25: 0, # 'h' - 3: 0, # 'i' - 24: 0, # 'j' - 10: 1, # 'k' - 5: 0, # 'l' - 13: 2, # 'm' - 4: 0, # 'n' - 15: 2, # 'o' - 26: 0, # 'p' - 7: 0, # 'r' - 8: 0, # 's' - 9: 0, # 't' - 14: 2, # 'u' - 32: 0, # 'v' - 57: 0, # 'w' - 58: 2, # 'x' - 11: 0, # 'y' - 22: 2, # 'z' - 63: 0, # '·' - 54: 1, # 'Ç' - 50: 2, # 'Ö' - 55: 0, # 'Ü' - 59: 0, # 'â' - 33: 0, # 'ç' - 61: 0, # 'î' - 34: 2, # 'ö' - 17: 0, # 'ü' - 30: 1, # 'ğ' - 41: 1, # 'İ' - 6: 0, # 'ı' - 40: 2, # 'Ş' - 19: 1, # 'ş' - }, - 44: { # 'R' - 23: 0, # 'A' - 37: 0, # 'B' - 47: 1, # 'C' - 39: 0, # 'D' - 29: 0, # 'E' - 52: 1, # 'F' - 36: 0, # 'G' - 45: 0, # 'H' - 53: 0, # 'I' - 60: 0, # 'J' - 16: 3, # 'K' - 49: 0, # 'L' - 20: 0, # 'M' - 46: 0, # 'N' - 42: 0, # 'O' - 48: 1, # 'P' - 44: 0, # 'R' - 35: 0, # 'S' - 31: 0, # 'T' - 51: 0, # 'U' - 38: 0, # 'V' - 62: 0, # 'W' - 43: 1, # 'Y' - 56: 0, # 'Z' - 1: 3, # 'a' - 21: 1, # 'b' - 28: 1, # 'c' - 12: 0, # 'd' - 2: 2, # 'e' - 18: 0, # 'f' - 27: 0, # 'g' - 25: 0, 
# 'h' - 3: 0, # 'i' - 24: 0, # 'j' - 10: 1, # 'k' - 5: 2, # 'l' - 13: 2, # 'm' - 4: 0, # 'n' - 15: 1, # 'o' - 26: 0, # 'p' - 7: 0, # 'r' - 8: 0, # 's' - 9: 0, # 't' - 14: 2, # 'u' - 32: 0, # 'v' - 57: 0, # 'w' - 58: 0, # 'x' - 11: 1, # 'y' - 22: 2, # 'z' - 63: 0, # '·' - 54: 0, # 'Ç' - 50: 1, # 'Ö' - 55: 0, # 'Ü' - 59: 0, # 'â' - 33: 1, # 'ç' - 61: 0, # 'î' - 34: 1, # 'ö' - 17: 1, # 'ü' - 30: 1, # 'ğ' - 41: 0, # 'İ' - 6: 2, # 'ı' - 40: 1, # 'Ş' - 19: 1, # 'ş' - }, - 35: { # 'S' - 23: 0, # 'A' - 37: 0, # 'B' - 47: 1, # 'C' - 39: 0, # 'D' - 29: 0, # 'E' - 52: 1, # 'F' - 36: 1, # 'G' - 45: 1, # 'H' - 53: 0, # 'I' - 60: 0, # 'J' - 16: 3, # 'K' - 49: 1, # 'L' - 20: 1, # 'M' - 46: 0, # 'N' - 42: 0, # 'O' - 48: 1, # 'P' - 44: 0, # 'R' - 35: 0, # 'S' - 31: 0, # 'T' - 51: 1, # 'U' - 38: 1, # 'V' - 62: 0, # 'W' - 43: 1, # 'Y' - 56: 0, # 'Z' - 1: 3, # 'a' - 21: 0, # 'b' - 28: 2, # 'c' - 12: 0, # 'd' - 2: 3, # 'e' - 18: 0, # 'f' - 27: 0, # 'g' - 25: 0, # 'h' - 3: 0, # 'i' - 24: 0, # 'j' - 10: 1, # 'k' - 5: 1, # 'l' - 13: 2, # 'm' - 4: 1, # 'n' - 15: 0, # 'o' - 26: 0, # 'p' - 7: 0, # 'r' - 8: 0, # 's' - 9: 1, # 't' - 14: 2, # 'u' - 32: 0, # 'v' - 57: 0, # 'w' - 58: 0, # 'x' - 11: 0, # 'y' - 22: 1, # 'z' - 63: 0, # '·' - 54: 2, # 'Ç' - 50: 2, # 'Ö' - 55: 0, # 'Ü' - 59: 0, # 'â' - 33: 3, # 'ç' - 61: 0, # 'î' - 34: 1, # 'ö' - 17: 0, # 'ü' - 30: 0, # 'ğ' - 41: 0, # 'İ' - 6: 3, # 'ı' - 40: 2, # 'Ş' - 19: 1, # 'ş' - }, - 31: { # 'T' - 23: 0, # 'A' - 37: 0, # 'B' - 47: 0, # 'C' - 39: 0, # 'D' - 29: 0, # 'E' - 52: 0, # 'F' - 36: 0, # 'G' - 45: 0, # 'H' - 53: 0, # 'I' - 60: 1, # 'J' - 16: 2, # 'K' - 49: 0, # 'L' - 20: 1, # 'M' - 46: 0, # 'N' - 42: 0, # 'O' - 48: 0, # 'P' - 44: 0, # 'R' - 35: 0, # 'S' - 31: 2, # 'T' - 51: 0, # 'U' - 38: 0, # 'V' - 62: 0, # 'W' - 43: 0, # 'Y' - 56: 0, # 'Z' - 1: 3, # 'a' - 21: 2, # 'b' - 28: 0, # 'c' - 12: 1, # 'd' - 2: 3, # 'e' - 18: 2, # 'f' - 27: 2, # 'g' - 25: 0, # 'h' - 3: 1, # 'i' - 24: 1, # 'j' - 10: 2, # 'k' - 5: 2, # 'l' - 13: 3, # 'm' - 4: 3, # 'n' - 15: 0, # 'o' - 26: 2, # 'p' - 7: 2, # 'r' - 8: 0, # 's' - 9: 2, # 't' - 14: 2, # 'u' - 32: 1, # 'v' - 57: 1, # 'w' - 58: 1, # 'x' - 11: 2, # 'y' - 22: 0, # 'z' - 63: 0, # '·' - 54: 0, # 'Ç' - 50: 0, # 'Ö' - 55: 0, # 'Ü' - 59: 0, # 'â' - 33: 0, # 'ç' - 61: 0, # 'î' - 34: 0, # 'ö' - 17: 1, # 'ü' - 30: 0, # 'ğ' - 41: 0, # 'İ' - 6: 3, # 'ı' - 40: 0, # 'Ş' - 19: 0, # 'ş' - }, - 51: { # 'U' - 23: 0, # 'A' - 37: 0, # 'B' - 47: 0, # 'C' - 39: 0, # 'D' - 29: 0, # 'E' - 52: 1, # 'F' - 36: 1, # 'G' - 45: 0, # 'H' - 53: 0, # 'I' - 60: 0, # 'J' - 16: 1, # 'K' - 49: 0, # 'L' - 20: 0, # 'M' - 46: 1, # 'N' - 42: 0, # 'O' - 48: 1, # 'P' - 44: 0, # 'R' - 35: 0, # 'S' - 31: 0, # 'T' - 51: 1, # 'U' - 38: 1, # 'V' - 62: 0, # 'W' - 43: 0, # 'Y' - 56: 0, # 'Z' - 1: 3, # 'a' - 21: 0, # 'b' - 28: 1, # 'c' - 12: 0, # 'd' - 2: 3, # 'e' - 18: 0, # 'f' - 27: 2, # 'g' - 25: 0, # 'h' - 3: 0, # 'i' - 24: 0, # 'j' - 10: 1, # 'k' - 5: 1, # 'l' - 13: 3, # 'm' - 4: 2, # 'n' - 15: 0, # 'o' - 26: 1, # 'p' - 7: 0, # 'r' - 8: 0, # 's' - 9: 0, # 't' - 14: 2, # 'u' - 32: 0, # 'v' - 57: 0, # 'w' - 58: 0, # 'x' - 11: 0, # 'y' - 22: 2, # 'z' - 63: 0, # '·' - 54: 1, # 'Ç' - 50: 1, # 'Ö' - 55: 0, # 'Ü' - 59: 0, # 'â' - 33: 0, # 'ç' - 61: 0, # 'î' - 34: 0, # 'ö' - 17: 0, # 'ü' - 30: 1, # 'ğ' - 41: 1, # 'İ' - 6: 2, # 'ı' - 40: 0, # 'Ş' - 19: 1, # 'ş' - }, - 38: { # 'V' - 23: 1, # 'A' - 37: 1, # 'B' - 47: 1, # 'C' - 39: 0, # 'D' - 29: 0, # 'E' - 52: 2, # 'F' - 36: 0, # 'G' - 45: 0, # 'H' - 53: 0, # 'I' - 60: 0, # 'J' - 16: 3, # 'K' - 49: 0, # 'L' - 20: 3, # 'M' - 46: 0, # 
'N' - 42: 0, # 'O' - 48: 1, # 'P' - 44: 1, # 'R' - 35: 0, # 'S' - 31: 0, # 'T' - 51: 1, # 'U' - 38: 1, # 'V' - 62: 0, # 'W' - 43: 0, # 'Y' - 56: 0, # 'Z' - 1: 3, # 'a' - 21: 0, # 'b' - 28: 2, # 'c' - 12: 0, # 'd' - 2: 3, # 'e' - 18: 0, # 'f' - 27: 0, # 'g' - 25: 0, # 'h' - 3: 0, # 'i' - 24: 0, # 'j' - 10: 0, # 'k' - 5: 2, # 'l' - 13: 2, # 'm' - 4: 0, # 'n' - 15: 2, # 'o' - 26: 0, # 'p' - 7: 0, # 'r' - 8: 0, # 's' - 9: 1, # 't' - 14: 3, # 'u' - 32: 0, # 'v' - 57: 0, # 'w' - 58: 0, # 'x' - 11: 1, # 'y' - 22: 2, # 'z' - 63: 0, # '·' - 54: 1, # 'Ç' - 50: 1, # 'Ö' - 55: 0, # 'Ü' - 59: 1, # 'â' - 33: 2, # 'ç' - 61: 0, # 'î' - 34: 1, # 'ö' - 17: 0, # 'ü' - 30: 1, # 'ğ' - 41: 1, # 'İ' - 6: 3, # 'ı' - 40: 2, # 'Ş' - 19: 1, # 'ş' - }, - 62: { # 'W' - 23: 0, # 'A' - 37: 0, # 'B' - 47: 0, # 'C' - 39: 0, # 'D' - 29: 0, # 'E' - 52: 0, # 'F' - 36: 0, # 'G' - 45: 0, # 'H' - 53: 0, # 'I' - 60: 0, # 'J' - 16: 0, # 'K' - 49: 0, # 'L' - 20: 0, # 'M' - 46: 0, # 'N' - 42: 0, # 'O' - 48: 0, # 'P' - 44: 0, # 'R' - 35: 0, # 'S' - 31: 0, # 'T' - 51: 0, # 'U' - 38: 0, # 'V' - 62: 0, # 'W' - 43: 0, # 'Y' - 56: 0, # 'Z' - 1: 0, # 'a' - 21: 0, # 'b' - 28: 0, # 'c' - 12: 0, # 'd' - 2: 0, # 'e' - 18: 0, # 'f' - 27: 0, # 'g' - 25: 0, # 'h' - 3: 0, # 'i' - 24: 0, # 'j' - 10: 0, # 'k' - 5: 0, # 'l' - 13: 0, # 'm' - 4: 0, # 'n' - 15: 0, # 'o' - 26: 0, # 'p' - 7: 0, # 'r' - 8: 0, # 's' - 9: 0, # 't' - 14: 0, # 'u' - 32: 0, # 'v' - 57: 0, # 'w' - 58: 0, # 'x' - 11: 0, # 'y' - 22: 0, # 'z' - 63: 0, # '·' - 54: 0, # 'Ç' - 50: 0, # 'Ö' - 55: 0, # 'Ü' - 59: 0, # 'â' - 33: 0, # 'ç' - 61: 0, # 'î' - 34: 0, # 'ö' - 17: 0, # 'ü' - 30: 0, # 'ğ' - 41: 0, # 'İ' - 6: 0, # 'ı' - 40: 0, # 'Ş' - 19: 0, # 'ş' - }, - 43: { # 'Y' - 23: 0, # 'A' - 37: 0, # 'B' - 47: 1, # 'C' - 39: 0, # 'D' - 29: 0, # 'E' - 52: 2, # 'F' - 36: 0, # 'G' - 45: 1, # 'H' - 53: 1, # 'I' - 60: 0, # 'J' - 16: 2, # 'K' - 49: 0, # 'L' - 20: 0, # 'M' - 46: 2, # 'N' - 42: 0, # 'O' - 48: 2, # 'P' - 44: 1, # 'R' - 35: 1, # 'S' - 31: 0, # 'T' - 51: 1, # 'U' - 38: 2, # 'V' - 62: 0, # 'W' - 43: 0, # 'Y' - 56: 0, # 'Z' - 1: 3, # 'a' - 21: 0, # 'b' - 28: 2, # 'c' - 12: 0, # 'd' - 2: 2, # 'e' - 18: 0, # 'f' - 27: 0, # 'g' - 25: 0, # 'h' - 3: 0, # 'i' - 24: 1, # 'j' - 10: 1, # 'k' - 5: 1, # 'l' - 13: 3, # 'm' - 4: 0, # 'n' - 15: 2, # 'o' - 26: 0, # 'p' - 7: 0, # 'r' - 8: 0, # 's' - 9: 0, # 't' - 14: 3, # 'u' - 32: 0, # 'v' - 57: 0, # 'w' - 58: 1, # 'x' - 11: 0, # 'y' - 22: 2, # 'z' - 63: 0, # '·' - 54: 1, # 'Ç' - 50: 2, # 'Ö' - 55: 1, # 'Ü' - 59: 1, # 'â' - 33: 0, # 'ç' - 61: 0, # 'î' - 34: 1, # 'ö' - 17: 0, # 'ü' - 30: 1, # 'ğ' - 41: 1, # 'İ' - 6: 0, # 'ı' - 40: 2, # 'Ş' - 19: 1, # 'ş' - }, - 56: { # 'Z' - 23: 0, # 'A' - 37: 0, # 'B' - 47: 0, # 'C' - 39: 0, # 'D' - 29: 0, # 'E' - 52: 0, # 'F' - 36: 0, # 'G' - 45: 0, # 'H' - 53: 0, # 'I' - 60: 0, # 'J' - 16: 0, # 'K' - 49: 0, # 'L' - 20: 0, # 'M' - 46: 0, # 'N' - 42: 0, # 'O' - 48: 0, # 'P' - 44: 0, # 'R' - 35: 0, # 'S' - 31: 0, # 'T' - 51: 0, # 'U' - 38: 0, # 'V' - 62: 0, # 'W' - 43: 0, # 'Y' - 56: 2, # 'Z' - 1: 2, # 'a' - 21: 1, # 'b' - 28: 0, # 'c' - 12: 0, # 'd' - 2: 2, # 'e' - 18: 0, # 'f' - 27: 0, # 'g' - 25: 0, # 'h' - 3: 2, # 'i' - 24: 1, # 'j' - 10: 0, # 'k' - 5: 0, # 'l' - 13: 1, # 'm' - 4: 1, # 'n' - 15: 0, # 'o' - 26: 0, # 'p' - 7: 1, # 'r' - 8: 1, # 's' - 9: 0, # 't' - 14: 2, # 'u' - 32: 0, # 'v' - 57: 0, # 'w' - 58: 0, # 'x' - 11: 0, # 'y' - 22: 0, # 'z' - 63: 0, # '·' - 54: 0, # 'Ç' - 50: 0, # 'Ö' - 55: 0, # 'Ü' - 59: 0, # 'â' - 33: 0, # 'ç' - 61: 0, # 'î' - 34: 0, # 'ö' - 17: 1, # 'ü' - 30: 0, # 'ğ' - 41: 0, # 'İ' - 
6: 1, # 'ı' - 40: 0, # 'Ş' - 19: 0, # 'ş' - }, - 1: { # 'a' - 23: 3, # 'A' - 37: 0, # 'B' - 47: 1, # 'C' - 39: 0, # 'D' - 29: 3, # 'E' - 52: 0, # 'F' - 36: 1, # 'G' - 45: 1, # 'H' - 53: 0, # 'I' - 60: 0, # 'J' - 16: 0, # 'K' - 49: 0, # 'L' - 20: 3, # 'M' - 46: 1, # 'N' - 42: 0, # 'O' - 48: 1, # 'P' - 44: 0, # 'R' - 35: 0, # 'S' - 31: 3, # 'T' - 51: 0, # 'U' - 38: 1, # 'V' - 62: 0, # 'W' - 43: 0, # 'Y' - 56: 2, # 'Z' - 1: 2, # 'a' - 21: 3, # 'b' - 28: 0, # 'c' - 12: 3, # 'd' - 2: 2, # 'e' - 18: 3, # 'f' - 27: 3, # 'g' - 25: 3, # 'h' - 3: 3, # 'i' - 24: 3, # 'j' - 10: 3, # 'k' - 5: 0, # 'l' - 13: 2, # 'm' - 4: 3, # 'n' - 15: 1, # 'o' - 26: 3, # 'p' - 7: 3, # 'r' - 8: 3, # 's' - 9: 3, # 't' - 14: 3, # 'u' - 32: 3, # 'v' - 57: 2, # 'w' - 58: 0, # 'x' - 11: 3, # 'y' - 22: 0, # 'z' - 63: 1, # '·' - 54: 0, # 'Ç' - 50: 0, # 'Ö' - 55: 0, # 'Ü' - 59: 0, # 'â' - 33: 1, # 'ç' - 61: 1, # 'î' - 34: 1, # 'ö' - 17: 3, # 'ü' - 30: 0, # 'ğ' - 41: 0, # 'İ' - 6: 3, # 'ı' - 40: 0, # 'Ş' - 19: 1, # 'ş' - }, - 21: { # 'b' - 23: 0, # 'A' - 37: 0, # 'B' - 47: 0, # 'C' - 39: 0, # 'D' - 29: 0, # 'E' - 52: 0, # 'F' - 36: 1, # 'G' - 45: 0, # 'H' - 53: 0, # 'I' - 60: 1, # 'J' - 16: 2, # 'K' - 49: 0, # 'L' - 20: 2, # 'M' - 46: 0, # 'N' - 42: 0, # 'O' - 48: 0, # 'P' - 44: 0, # 'R' - 35: 0, # 'S' - 31: 1, # 'T' - 51: 0, # 'U' - 38: 0, # 'V' - 62: 0, # 'W' - 43: 1, # 'Y' - 56: 0, # 'Z' - 1: 3, # 'a' - 21: 2, # 'b' - 28: 0, # 'c' - 12: 3, # 'd' - 2: 3, # 'e' - 18: 0, # 'f' - 27: 3, # 'g' - 25: 1, # 'h' - 3: 3, # 'i' - 24: 2, # 'j' - 10: 3, # 'k' - 5: 3, # 'l' - 13: 3, # 'm' - 4: 3, # 'n' - 15: 0, # 'o' - 26: 3, # 'p' - 7: 1, # 'r' - 8: 2, # 's' - 9: 2, # 't' - 14: 2, # 'u' - 32: 1, # 'v' - 57: 0, # 'w' - 58: 1, # 'x' - 11: 3, # 'y' - 22: 0, # 'z' - 63: 0, # '·' - 54: 0, # 'Ç' - 50: 0, # 'Ö' - 55: 0, # 'Ü' - 59: 0, # 'â' - 33: 1, # 'ç' - 61: 0, # 'î' - 34: 0, # 'ö' - 17: 0, # 'ü' - 30: 1, # 'ğ' - 41: 0, # 'İ' - 6: 2, # 'ı' - 40: 0, # 'Ş' - 19: 0, # 'ş' - }, - 28: { # 'c' - 23: 0, # 'A' - 37: 1, # 'B' - 47: 1, # 'C' - 39: 1, # 'D' - 29: 2, # 'E' - 52: 0, # 'F' - 36: 2, # 'G' - 45: 2, # 'H' - 53: 1, # 'I' - 60: 0, # 'J' - 16: 0, # 'K' - 49: 0, # 'L' - 20: 2, # 'M' - 46: 1, # 'N' - 42: 1, # 'O' - 48: 2, # 'P' - 44: 1, # 'R' - 35: 1, # 'S' - 31: 2, # 'T' - 51: 2, # 'U' - 38: 2, # 'V' - 62: 0, # 'W' - 43: 3, # 'Y' - 56: 0, # 'Z' - 1: 1, # 'a' - 21: 1, # 'b' - 28: 2, # 'c' - 12: 2, # 'd' - 2: 1, # 'e' - 18: 1, # 'f' - 27: 2, # 'g' - 25: 2, # 'h' - 3: 3, # 'i' - 24: 1, # 'j' - 10: 3, # 'k' - 5: 0, # 'l' - 13: 2, # 'm' - 4: 3, # 'n' - 15: 2, # 'o' - 26: 2, # 'p' - 7: 3, # 'r' - 8: 3, # 's' - 9: 3, # 't' - 14: 1, # 'u' - 32: 0, # 'v' - 57: 1, # 'w' - 58: 0, # 'x' - 11: 2, # 'y' - 22: 1, # 'z' - 63: 1, # '·' - 54: 0, # 'Ç' - 50: 0, # 'Ö' - 55: 1, # 'Ü' - 59: 0, # 'â' - 33: 0, # 'ç' - 61: 1, # 'î' - 34: 2, # 'ö' - 17: 2, # 'ü' - 30: 2, # 'ğ' - 41: 1, # 'İ' - 6: 3, # 'ı' - 40: 0, # 'Ş' - 19: 2, # 'ş' - }, - 12: { # 'd' - 23: 1, # 'A' - 37: 0, # 'B' - 47: 0, # 'C' - 39: 0, # 'D' - 29: 0, # 'E' - 52: 0, # 'F' - 36: 0, # 'G' - 45: 0, # 'H' - 53: 0, # 'I' - 60: 2, # 'J' - 16: 3, # 'K' - 49: 0, # 'L' - 20: 3, # 'M' - 46: 0, # 'N' - 42: 0, # 'O' - 48: 0, # 'P' - 44: 0, # 'R' - 35: 1, # 'S' - 31: 1, # 'T' - 51: 0, # 'U' - 38: 0, # 'V' - 62: 0, # 'W' - 43: 0, # 'Y' - 56: 0, # 'Z' - 1: 3, # 'a' - 21: 2, # 'b' - 28: 1, # 'c' - 12: 3, # 'd' - 2: 3, # 'e' - 18: 1, # 'f' - 27: 3, # 'g' - 25: 3, # 'h' - 3: 2, # 'i' - 24: 3, # 'j' - 10: 2, # 'k' - 5: 3, # 'l' - 13: 3, # 'm' - 4: 3, # 'n' - 15: 1, # 'o' - 26: 2, # 'p' - 7: 3, # 'r' - 8: 2, # 's' - 9: 
2, # 't' - 14: 3, # 'u' - 32: 1, # 'v' - 57: 0, # 'w' - 58: 1, # 'x' - 11: 3, # 'y' - 22: 1, # 'z' - 63: 1, # '·' - 54: 0, # 'Ç' - 50: 0, # 'Ö' - 55: 0, # 'Ü' - 59: 0, # 'â' - 33: 0, # 'ç' - 61: 0, # 'î' - 34: 0, # 'ö' - 17: 1, # 'ü' - 30: 0, # 'ğ' - 41: 0, # 'İ' - 6: 2, # 'ı' - 40: 0, # 'Ş' - 19: 0, # 'ş' - }, - 2: { # 'e' - 23: 2, # 'A' - 37: 0, # 'B' - 47: 2, # 'C' - 39: 0, # 'D' - 29: 3, # 'E' - 52: 1, # 'F' - 36: 0, # 'G' - 45: 0, # 'H' - 53: 0, # 'I' - 60: 0, # 'J' - 16: 1, # 'K' - 49: 0, # 'L' - 20: 3, # 'M' - 46: 1, # 'N' - 42: 0, # 'O' - 48: 1, # 'P' - 44: 1, # 'R' - 35: 0, # 'S' - 31: 3, # 'T' - 51: 0, # 'U' - 38: 1, # 'V' - 62: 0, # 'W' - 43: 1, # 'Y' - 56: 0, # 'Z' - 1: 3, # 'a' - 21: 3, # 'b' - 28: 0, # 'c' - 12: 3, # 'd' - 2: 2, # 'e' - 18: 3, # 'f' - 27: 3, # 'g' - 25: 3, # 'h' - 3: 3, # 'i' - 24: 3, # 'j' - 10: 3, # 'k' - 5: 0, # 'l' - 13: 2, # 'm' - 4: 3, # 'n' - 15: 1, # 'o' - 26: 3, # 'p' - 7: 3, # 'r' - 8: 3, # 's' - 9: 3, # 't' - 14: 3, # 'u' - 32: 3, # 'v' - 57: 2, # 'w' - 58: 0, # 'x' - 11: 3, # 'y' - 22: 1, # 'z' - 63: 1, # '·' - 54: 0, # 'Ç' - 50: 0, # 'Ö' - 55: 0, # 'Ü' - 59: 0, # 'â' - 33: 1, # 'ç' - 61: 0, # 'î' - 34: 1, # 'ö' - 17: 3, # 'ü' - 30: 0, # 'ğ' - 41: 0, # 'İ' - 6: 3, # 'ı' - 40: 0, # 'Ş' - 19: 0, # 'ş' - }, - 18: { # 'f' - 23: 0, # 'A' - 37: 0, # 'B' - 47: 0, # 'C' - 39: 0, # 'D' - 29: 0, # 'E' - 52: 0, # 'F' - 36: 0, # 'G' - 45: 0, # 'H' - 53: 0, # 'I' - 60: 0, # 'J' - 16: 2, # 'K' - 49: 0, # 'L' - 20: 2, # 'M' - 46: 0, # 'N' - 42: 0, # 'O' - 48: 0, # 'P' - 44: 0, # 'R' - 35: 0, # 'S' - 31: 2, # 'T' - 51: 0, # 'U' - 38: 0, # 'V' - 62: 0, # 'W' - 43: 0, # 'Y' - 56: 0, # 'Z' - 1: 3, # 'a' - 21: 1, # 'b' - 28: 0, # 'c' - 12: 3, # 'd' - 2: 3, # 'e' - 18: 2, # 'f' - 27: 1, # 'g' - 25: 1, # 'h' - 3: 1, # 'i' - 24: 1, # 'j' - 10: 1, # 'k' - 5: 3, # 'l' - 13: 3, # 'm' - 4: 3, # 'n' - 15: 0, # 'o' - 26: 2, # 'p' - 7: 1, # 'r' - 8: 3, # 's' - 9: 3, # 't' - 14: 1, # 'u' - 32: 2, # 'v' - 57: 0, # 'w' - 58: 0, # 'x' - 11: 1, # 'y' - 22: 0, # 'z' - 63: 0, # '·' - 54: 0, # 'Ç' - 50: 0, # 'Ö' - 55: 0, # 'Ü' - 59: 0, # 'â' - 33: 1, # 'ç' - 61: 0, # 'î' - 34: 0, # 'ö' - 17: 1, # 'ü' - 30: 0, # 'ğ' - 41: 0, # 'İ' - 6: 1, # 'ı' - 40: 0, # 'Ş' - 19: 0, # 'ş' - }, - 27: { # 'g' - 23: 0, # 'A' - 37: 0, # 'B' - 47: 0, # 'C' - 39: 0, # 'D' - 29: 0, # 'E' - 52: 0, # 'F' - 36: 0, # 'G' - 45: 0, # 'H' - 53: 0, # 'I' - 60: 0, # 'J' - 16: 3, # 'K' - 49: 0, # 'L' - 20: 0, # 'M' - 46: 0, # 'N' - 42: 0, # 'O' - 48: 0, # 'P' - 44: 0, # 'R' - 35: 1, # 'S' - 31: 1, # 'T' - 51: 0, # 'U' - 38: 2, # 'V' - 62: 0, # 'W' - 43: 0, # 'Y' - 56: 0, # 'Z' - 1: 3, # 'a' - 21: 1, # 'b' - 28: 0, # 'c' - 12: 1, # 'd' - 2: 3, # 'e' - 18: 0, # 'f' - 27: 2, # 'g' - 25: 1, # 'h' - 3: 2, # 'i' - 24: 3, # 'j' - 10: 2, # 'k' - 5: 3, # 'l' - 13: 3, # 'm' - 4: 2, # 'n' - 15: 0, # 'o' - 26: 1, # 'p' - 7: 2, # 'r' - 8: 2, # 's' - 9: 3, # 't' - 14: 3, # 'u' - 32: 1, # 'v' - 57: 0, # 'w' - 58: 0, # 'x' - 11: 1, # 'y' - 22: 0, # 'z' - 63: 1, # '·' - 54: 0, # 'Ç' - 50: 0, # 'Ö' - 55: 0, # 'Ü' - 59: 0, # 'â' - 33: 0, # 'ç' - 61: 0, # 'î' - 34: 0, # 'ö' - 17: 0, # 'ü' - 30: 0, # 'ğ' - 41: 0, # 'İ' - 6: 2, # 'ı' - 40: 0, # 'Ş' - 19: 0, # 'ş' - }, - 25: { # 'h' - 23: 0, # 'A' - 37: 0, # 'B' - 47: 0, # 'C' - 39: 0, # 'D' - 29: 0, # 'E' - 52: 0, # 'F' - 36: 0, # 'G' - 45: 0, # 'H' - 53: 0, # 'I' - 60: 0, # 'J' - 16: 2, # 'K' - 49: 0, # 'L' - 20: 0, # 'M' - 46: 0, # 'N' - 42: 0, # 'O' - 48: 0, # 'P' - 44: 0, # 'R' - 35: 0, # 'S' - 31: 0, # 'T' - 51: 0, # 'U' - 38: 0, # 'V' - 62: 0, # 'W' - 43: 0, # 'Y' - 56: 0, # 'Z' - 
1: 3, # 'a' - 21: 0, # 'b' - 28: 0, # 'c' - 12: 2, # 'd' - 2: 3, # 'e' - 18: 0, # 'f' - 27: 1, # 'g' - 25: 2, # 'h' - 3: 2, # 'i' - 24: 3, # 'j' - 10: 3, # 'k' - 5: 3, # 'l' - 13: 3, # 'm' - 4: 3, # 'n' - 15: 1, # 'o' - 26: 1, # 'p' - 7: 3, # 'r' - 8: 3, # 's' - 9: 2, # 't' - 14: 3, # 'u' - 32: 2, # 'v' - 57: 1, # 'w' - 58: 0, # 'x' - 11: 1, # 'y' - 22: 0, # 'z' - 63: 0, # '·' - 54: 0, # 'Ç' - 50: 0, # 'Ö' - 55: 0, # 'Ü' - 59: 0, # 'â' - 33: 0, # 'ç' - 61: 0, # 'î' - 34: 0, # 'ö' - 17: 0, # 'ü' - 30: 0, # 'ğ' - 41: 0, # 'İ' - 6: 3, # 'ı' - 40: 0, # 'Ş' - 19: 0, # 'ş' - }, - 3: { # 'i' - 23: 2, # 'A' - 37: 0, # 'B' - 47: 0, # 'C' - 39: 0, # 'D' - 29: 0, # 'E' - 52: 0, # 'F' - 36: 0, # 'G' - 45: 0, # 'H' - 53: 0, # 'I' - 60: 1, # 'J' - 16: 3, # 'K' - 49: 0, # 'L' - 20: 3, # 'M' - 46: 0, # 'N' - 42: 1, # 'O' - 48: 0, # 'P' - 44: 0, # 'R' - 35: 1, # 'S' - 31: 2, # 'T' - 51: 0, # 'U' - 38: 1, # 'V' - 62: 0, # 'W' - 43: 0, # 'Y' - 56: 0, # 'Z' - 1: 3, # 'a' - 21: 2, # 'b' - 28: 0, # 'c' - 12: 3, # 'd' - 2: 3, # 'e' - 18: 2, # 'f' - 27: 3, # 'g' - 25: 1, # 'h' - 3: 3, # 'i' - 24: 2, # 'j' - 10: 3, # 'k' - 5: 3, # 'l' - 13: 3, # 'm' - 4: 3, # 'n' - 15: 1, # 'o' - 26: 3, # 'p' - 7: 3, # 'r' - 8: 3, # 's' - 9: 3, # 't' - 14: 3, # 'u' - 32: 2, # 'v' - 57: 1, # 'w' - 58: 1, # 'x' - 11: 3, # 'y' - 22: 1, # 'z' - 63: 1, # '·' - 54: 0, # 'Ç' - 50: 0, # 'Ö' - 55: 1, # 'Ü' - 59: 0, # 'â' - 33: 2, # 'ç' - 61: 0, # 'î' - 34: 0, # 'ö' - 17: 3, # 'ü' - 30: 0, # 'ğ' - 41: 1, # 'İ' - 6: 2, # 'ı' - 40: 0, # 'Ş' - 19: 0, # 'ş' - }, - 24: { # 'j' - 23: 0, # 'A' - 37: 0, # 'B' - 47: 0, # 'C' - 39: 0, # 'D' - 29: 0, # 'E' - 52: 0, # 'F' - 36: 0, # 'G' - 45: 0, # 'H' - 53: 0, # 'I' - 60: 1, # 'J' - 16: 2, # 'K' - 49: 0, # 'L' - 20: 2, # 'M' - 46: 0, # 'N' - 42: 0, # 'O' - 48: 1, # 'P' - 44: 0, # 'R' - 35: 0, # 'S' - 31: 1, # 'T' - 51: 0, # 'U' - 38: 0, # 'V' - 62: 0, # 'W' - 43: 0, # 'Y' - 56: 1, # 'Z' - 1: 3, # 'a' - 21: 1, # 'b' - 28: 1, # 'c' - 12: 3, # 'd' - 2: 3, # 'e' - 18: 2, # 'f' - 27: 1, # 'g' - 25: 1, # 'h' - 3: 2, # 'i' - 24: 1, # 'j' - 10: 2, # 'k' - 5: 2, # 'l' - 13: 3, # 'm' - 4: 2, # 'n' - 15: 0, # 'o' - 26: 1, # 'p' - 7: 2, # 'r' - 8: 3, # 's' - 9: 2, # 't' - 14: 3, # 'u' - 32: 2, # 'v' - 57: 0, # 'w' - 58: 2, # 'x' - 11: 1, # 'y' - 22: 0, # 'z' - 63: 0, # '·' - 54: 0, # 'Ç' - 50: 0, # 'Ö' - 55: 0, # 'Ü' - 59: 0, # 'â' - 33: 1, # 'ç' - 61: 0, # 'î' - 34: 0, # 'ö' - 17: 1, # 'ü' - 30: 0, # 'ğ' - 41: 0, # 'İ' - 6: 3, # 'ı' - 40: 0, # 'Ş' - 19: 0, # 'ş' - }, - 10: { # 'k' - 23: 0, # 'A' - 37: 0, # 'B' - 47: 0, # 'C' - 39: 0, # 'D' - 29: 0, # 'E' - 52: 0, # 'F' - 36: 0, # 'G' - 45: 0, # 'H' - 53: 0, # 'I' - 60: 0, # 'J' - 16: 3, # 'K' - 49: 0, # 'L' - 20: 2, # 'M' - 46: 0, # 'N' - 42: 0, # 'O' - 48: 0, # 'P' - 44: 0, # 'R' - 35: 0, # 'S' - 31: 3, # 'T' - 51: 0, # 'U' - 38: 1, # 'V' - 62: 0, # 'W' - 43: 0, # 'Y' - 56: 1, # 'Z' - 1: 3, # 'a' - 21: 2, # 'b' - 28: 0, # 'c' - 12: 2, # 'd' - 2: 3, # 'e' - 18: 1, # 'f' - 27: 2, # 'g' - 25: 2, # 'h' - 3: 3, # 'i' - 24: 2, # 'j' - 10: 2, # 'k' - 5: 3, # 'l' - 13: 3, # 'm' - 4: 3, # 'n' - 15: 0, # 'o' - 26: 3, # 'p' - 7: 2, # 'r' - 8: 2, # 's' - 9: 2, # 't' - 14: 3, # 'u' - 32: 0, # 'v' - 57: 0, # 'w' - 58: 1, # 'x' - 11: 3, # 'y' - 22: 0, # 'z' - 63: 1, # '·' - 54: 0, # 'Ç' - 50: 0, # 'Ö' - 55: 0, # 'Ü' - 59: 0, # 'â' - 33: 3, # 'ç' - 61: 0, # 'î' - 34: 1, # 'ö' - 17: 3, # 'ü' - 30: 1, # 'ğ' - 41: 0, # 'İ' - 6: 3, # 'ı' - 40: 0, # 'Ş' - 19: 1, # 'ş' - }, - 5: { # 'l' - 23: 0, # 'A' - 37: 0, # 'B' - 47: 0, # 'C' - 39: 0, # 'D' - 29: 3, # 'E' - 52: 0, # 'F' - 36: 0, 
# 'G' - 45: 0, # 'H' - 53: 0, # 'I' - 60: 0, # 'J' - 16: 0, # 'K' - 49: 0, # 'L' - 20: 2, # 'M' - 46: 0, # 'N' - 42: 0, # 'O' - 48: 0, # 'P' - 44: 0, # 'R' - 35: 0, # 'S' - 31: 1, # 'T' - 51: 0, # 'U' - 38: 0, # 'V' - 62: 0, # 'W' - 43: 0, # 'Y' - 56: 0, # 'Z' - 1: 0, # 'a' - 21: 3, # 'b' - 28: 0, # 'c' - 12: 3, # 'd' - 2: 1, # 'e' - 18: 3, # 'f' - 27: 3, # 'g' - 25: 2, # 'h' - 3: 3, # 'i' - 24: 2, # 'j' - 10: 3, # 'k' - 5: 1, # 'l' - 13: 1, # 'm' - 4: 3, # 'n' - 15: 0, # 'o' - 26: 2, # 'p' - 7: 3, # 'r' - 8: 3, # 's' - 9: 3, # 't' - 14: 2, # 'u' - 32: 2, # 'v' - 57: 0, # 'w' - 58: 0, # 'x' - 11: 3, # 'y' - 22: 0, # 'z' - 63: 0, # '·' - 54: 0, # 'Ç' - 50: 0, # 'Ö' - 55: 0, # 'Ü' - 59: 0, # 'â' - 33: 1, # 'ç' - 61: 0, # 'î' - 34: 0, # 'ö' - 17: 2, # 'ü' - 30: 0, # 'ğ' - 41: 0, # 'İ' - 6: 3, # 'ı' - 40: 0, # 'Ş' - 19: 0, # 'ş' - }, - 13: { # 'm' - 23: 1, # 'A' - 37: 0, # 'B' - 47: 0, # 'C' - 39: 0, # 'D' - 29: 3, # 'E' - 52: 0, # 'F' - 36: 0, # 'G' - 45: 0, # 'H' - 53: 0, # 'I' - 60: 0, # 'J' - 16: 0, # 'K' - 49: 0, # 'L' - 20: 3, # 'M' - 46: 0, # 'N' - 42: 0, # 'O' - 48: 0, # 'P' - 44: 0, # 'R' - 35: 0, # 'S' - 31: 3, # 'T' - 51: 0, # 'U' - 38: 0, # 'V' - 62: 0, # 'W' - 43: 1, # 'Y' - 56: 0, # 'Z' - 1: 2, # 'a' - 21: 3, # 'b' - 28: 0, # 'c' - 12: 3, # 'd' - 2: 2, # 'e' - 18: 3, # 'f' - 27: 3, # 'g' - 25: 3, # 'h' - 3: 3, # 'i' - 24: 3, # 'j' - 10: 3, # 'k' - 5: 0, # 'l' - 13: 2, # 'm' - 4: 3, # 'n' - 15: 1, # 'o' - 26: 2, # 'p' - 7: 3, # 'r' - 8: 3, # 's' - 9: 3, # 't' - 14: 2, # 'u' - 32: 2, # 'v' - 57: 1, # 'w' - 58: 0, # 'x' - 11: 3, # 'y' - 22: 0, # 'z' - 63: 0, # '·' - 54: 0, # 'Ç' - 50: 0, # 'Ö' - 55: 0, # 'Ü' - 59: 0, # 'â' - 33: 0, # 'ç' - 61: 0, # 'î' - 34: 0, # 'ö' - 17: 3, # 'ü' - 30: 0, # 'ğ' - 41: 0, # 'İ' - 6: 3, # 'ı' - 40: 0, # 'Ş' - 19: 1, # 'ş' - }, - 4: { # 'n' - 23: 1, # 'A' - 37: 0, # 'B' - 47: 0, # 'C' - 39: 0, # 'D' - 29: 0, # 'E' - 52: 0, # 'F' - 36: 0, # 'G' - 45: 1, # 'H' - 53: 0, # 'I' - 60: 2, # 'J' - 16: 3, # 'K' - 49: 0, # 'L' - 20: 3, # 'M' - 46: 0, # 'N' - 42: 0, # 'O' - 48: 0, # 'P' - 44: 0, # 'R' - 35: 0, # 'S' - 31: 2, # 'T' - 51: 0, # 'U' - 38: 0, # 'V' - 62: 0, # 'W' - 43: 0, # 'Y' - 56: 0, # 'Z' - 1: 3, # 'a' - 21: 2, # 'b' - 28: 1, # 'c' - 12: 3, # 'd' - 2: 3, # 'e' - 18: 1, # 'f' - 27: 2, # 'g' - 25: 3, # 'h' - 3: 2, # 'i' - 24: 2, # 'j' - 10: 3, # 'k' - 5: 3, # 'l' - 13: 3, # 'm' - 4: 3, # 'n' - 15: 1, # 'o' - 26: 3, # 'p' - 7: 2, # 'r' - 8: 3, # 's' - 9: 3, # 't' - 14: 3, # 'u' - 32: 2, # 'v' - 57: 0, # 'w' - 58: 2, # 'x' - 11: 3, # 'y' - 22: 0, # 'z' - 63: 0, # '·' - 54: 0, # 'Ç' - 50: 0, # 'Ö' - 55: 0, # 'Ü' - 59: 0, # 'â' - 33: 1, # 'ç' - 61: 0, # 'î' - 34: 0, # 'ö' - 17: 2, # 'ü' - 30: 0, # 'ğ' - 41: 0, # 'İ' - 6: 1, # 'ı' - 40: 0, # 'Ş' - 19: 0, # 'ş' - }, - 15: { # 'o' - 23: 0, # 'A' - 37: 0, # 'B' - 47: 1, # 'C' - 39: 0, # 'D' - 29: 0, # 'E' - 52: 2, # 'F' - 36: 1, # 'G' - 45: 1, # 'H' - 53: 1, # 'I' - 60: 0, # 'J' - 16: 3, # 'K' - 49: 2, # 'L' - 20: 0, # 'M' - 46: 2, # 'N' - 42: 1, # 'O' - 48: 2, # 'P' - 44: 1, # 'R' - 35: 0, # 'S' - 31: 0, # 'T' - 51: 0, # 'U' - 38: 0, # 'V' - 62: 0, # 'W' - 43: 0, # 'Y' - 56: 0, # 'Z' - 1: 3, # 'a' - 21: 0, # 'b' - 28: 2, # 'c' - 12: 0, # 'd' - 2: 3, # 'e' - 18: 0, # 'f' - 27: 0, # 'g' - 25: 0, # 'h' - 3: 1, # 'i' - 24: 2, # 'j' - 10: 1, # 'k' - 5: 3, # 'l' - 13: 3, # 'm' - 4: 2, # 'n' - 15: 2, # 'o' - 26: 0, # 'p' - 7: 1, # 'r' - 8: 0, # 's' - 9: 0, # 't' - 14: 3, # 'u' - 32: 0, # 'v' - 57: 0, # 'w' - 58: 2, # 'x' - 11: 0, # 'y' - 22: 2, # 'z' - 63: 0, # '·' - 54: 1, # 'Ç' - 50: 2, # 'Ö' - 55: 0, # 'Ü' - 
59: 0, # 'â' - 33: 3, # 'ç' - 61: 0, # 'î' - 34: 1, # 'ö' - 17: 0, # 'ü' - 30: 2, # 'ğ' - 41: 2, # 'İ' - 6: 3, # 'ı' - 40: 2, # 'Ş' - 19: 2, # 'ş' - }, - 26: { # 'p' - 23: 0, # 'A' - 37: 0, # 'B' - 47: 0, # 'C' - 39: 0, # 'D' - 29: 0, # 'E' - 52: 0, # 'F' - 36: 0, # 'G' - 45: 0, # 'H' - 53: 0, # 'I' - 60: 0, # 'J' - 16: 3, # 'K' - 49: 0, # 'L' - 20: 1, # 'M' - 46: 0, # 'N' - 42: 0, # 'O' - 48: 0, # 'P' - 44: 0, # 'R' - 35: 0, # 'S' - 31: 0, # 'T' - 51: 0, # 'U' - 38: 0, # 'V' - 62: 0, # 'W' - 43: 0, # 'Y' - 56: 0, # 'Z' - 1: 3, # 'a' - 21: 1, # 'b' - 28: 0, # 'c' - 12: 1, # 'd' - 2: 3, # 'e' - 18: 0, # 'f' - 27: 1, # 'g' - 25: 1, # 'h' - 3: 2, # 'i' - 24: 3, # 'j' - 10: 1, # 'k' - 5: 3, # 'l' - 13: 3, # 'm' - 4: 2, # 'n' - 15: 0, # 'o' - 26: 2, # 'p' - 7: 2, # 'r' - 8: 1, # 's' - 9: 1, # 't' - 14: 3, # 'u' - 32: 0, # 'v' - 57: 0, # 'w' - 58: 1, # 'x' - 11: 1, # 'y' - 22: 0, # 'z' - 63: 0, # '·' - 54: 0, # 'Ç' - 50: 0, # 'Ö' - 55: 0, # 'Ü' - 59: 0, # 'â' - 33: 3, # 'ç' - 61: 0, # 'î' - 34: 0, # 'ö' - 17: 1, # 'ü' - 30: 0, # 'ğ' - 41: 0, # 'İ' - 6: 3, # 'ı' - 40: 0, # 'Ş' - 19: 0, # 'ş' - }, - 7: { # 'r' - 23: 0, # 'A' - 37: 0, # 'B' - 47: 0, # 'C' - 39: 0, # 'D' - 29: 0, # 'E' - 52: 1, # 'F' - 36: 0, # 'G' - 45: 0, # 'H' - 53: 0, # 'I' - 60: 2, # 'J' - 16: 3, # 'K' - 49: 0, # 'L' - 20: 2, # 'M' - 46: 0, # 'N' - 42: 0, # 'O' - 48: 0, # 'P' - 44: 0, # 'R' - 35: 0, # 'S' - 31: 2, # 'T' - 51: 1, # 'U' - 38: 0, # 'V' - 62: 0, # 'W' - 43: 0, # 'Y' - 56: 1, # 'Z' - 1: 3, # 'a' - 21: 1, # 'b' - 28: 0, # 'c' - 12: 3, # 'd' - 2: 3, # 'e' - 18: 0, # 'f' - 27: 2, # 'g' - 25: 3, # 'h' - 3: 2, # 'i' - 24: 2, # 'j' - 10: 3, # 'k' - 5: 3, # 'l' - 13: 3, # 'm' - 4: 3, # 'n' - 15: 0, # 'o' - 26: 2, # 'p' - 7: 3, # 'r' - 8: 3, # 's' - 9: 3, # 't' - 14: 3, # 'u' - 32: 2, # 'v' - 57: 0, # 'w' - 58: 1, # 'x' - 11: 2, # 'y' - 22: 0, # 'z' - 63: 1, # '·' - 54: 0, # 'Ç' - 50: 0, # 'Ö' - 55: 0, # 'Ü' - 59: 0, # 'â' - 33: 2, # 'ç' - 61: 0, # 'î' - 34: 0, # 'ö' - 17: 3, # 'ü' - 30: 0, # 'ğ' - 41: 0, # 'İ' - 6: 2, # 'ı' - 40: 0, # 'Ş' - 19: 0, # 'ş' - }, - 8: { # 's' - 23: 1, # 'A' - 37: 0, # 'B' - 47: 0, # 'C' - 39: 0, # 'D' - 29: 0, # 'E' - 52: 0, # 'F' - 36: 1, # 'G' - 45: 0, # 'H' - 53: 0, # 'I' - 60: 0, # 'J' - 16: 3, # 'K' - 49: 0, # 'L' - 20: 3, # 'M' - 46: 0, # 'N' - 42: 0, # 'O' - 48: 0, # 'P' - 44: 0, # 'R' - 35: 0, # 'S' - 31: 2, # 'T' - 51: 0, # 'U' - 38: 0, # 'V' - 62: 0, # 'W' - 43: 0, # 'Y' - 56: 1, # 'Z' - 1: 3, # 'a' - 21: 2, # 'b' - 28: 1, # 'c' - 12: 3, # 'd' - 2: 3, # 'e' - 18: 0, # 'f' - 27: 2, # 'g' - 25: 2, # 'h' - 3: 2, # 'i' - 24: 3, # 'j' - 10: 3, # 'k' - 5: 3, # 'l' - 13: 3, # 'm' - 4: 3, # 'n' - 15: 0, # 'o' - 26: 3, # 'p' - 7: 3, # 'r' - 8: 3, # 's' - 9: 3, # 't' - 14: 3, # 'u' - 32: 2, # 'v' - 57: 0, # 'w' - 58: 1, # 'x' - 11: 2, # 'y' - 22: 1, # 'z' - 63: 0, # '·' - 54: 0, # 'Ç' - 50: 0, # 'Ö' - 55: 0, # 'Ü' - 59: 0, # 'â' - 33: 2, # 'ç' - 61: 0, # 'î' - 34: 0, # 'ö' - 17: 2, # 'ü' - 30: 0, # 'ğ' - 41: 0, # 'İ' - 6: 3, # 'ı' - 40: 0, # 'Ş' - 19: 1, # 'ş' - }, - 9: { # 't' - 23: 0, # 'A' - 37: 0, # 'B' - 47: 0, # 'C' - 39: 0, # 'D' - 29: 0, # 'E' - 52: 0, # 'F' - 36: 0, # 'G' - 45: 0, # 'H' - 53: 0, # 'I' - 60: 1, # 'J' - 16: 3, # 'K' - 49: 0, # 'L' - 20: 2, # 'M' - 46: 0, # 'N' - 42: 0, # 'O' - 48: 0, # 'P' - 44: 0, # 'R' - 35: 0, # 'S' - 31: 2, # 'T' - 51: 0, # 'U' - 38: 0, # 'V' - 62: 0, # 'W' - 43: 0, # 'Y' - 56: 1, # 'Z' - 1: 3, # 'a' - 21: 3, # 'b' - 28: 0, # 'c' - 12: 3, # 'd' - 2: 3, # 'e' - 18: 2, # 'f' - 27: 2, # 'g' - 25: 2, # 'h' - 3: 2, # 'i' - 24: 2, # 'j' - 10: 3, # 'k' - 5: 
3, # 'l' - 13: 3, # 'm' - 4: 3, # 'n' - 15: 0, # 'o' - 26: 2, # 'p' - 7: 3, # 'r' - 8: 3, # 's' - 9: 3, # 't' - 14: 3, # 'u' - 32: 3, # 'v' - 57: 0, # 'w' - 58: 2, # 'x' - 11: 2, # 'y' - 22: 0, # 'z' - 63: 0, # '·' - 54: 0, # 'Ç' - 50: 0, # 'Ö' - 55: 0, # 'Ü' - 59: 0, # 'â' - 33: 3, # 'ç' - 61: 0, # 'î' - 34: 0, # 'ö' - 17: 2, # 'ü' - 30: 0, # 'ğ' - 41: 0, # 'İ' - 6: 3, # 'ı' - 40: 0, # 'Ş' - 19: 0, # 'ş' - }, - 14: { # 'u' - 23: 3, # 'A' - 37: 0, # 'B' - 47: 0, # 'C' - 39: 0, # 'D' - 29: 3, # 'E' - 52: 0, # 'F' - 36: 0, # 'G' - 45: 1, # 'H' - 53: 0, # 'I' - 60: 1, # 'J' - 16: 0, # 'K' - 49: 0, # 'L' - 20: 3, # 'M' - 46: 2, # 'N' - 42: 0, # 'O' - 48: 1, # 'P' - 44: 0, # 'R' - 35: 0, # 'S' - 31: 3, # 'T' - 51: 0, # 'U' - 38: 0, # 'V' - 62: 0, # 'W' - 43: 1, # 'Y' - 56: 2, # 'Z' - 1: 2, # 'a' - 21: 3, # 'b' - 28: 0, # 'c' - 12: 3, # 'd' - 2: 2, # 'e' - 18: 2, # 'f' - 27: 3, # 'g' - 25: 3, # 'h' - 3: 3, # 'i' - 24: 2, # 'j' - 10: 3, # 'k' - 5: 0, # 'l' - 13: 3, # 'm' - 4: 3, # 'n' - 15: 0, # 'o' - 26: 3, # 'p' - 7: 3, # 'r' - 8: 3, # 's' - 9: 3, # 't' - 14: 3, # 'u' - 32: 2, # 'v' - 57: 2, # 'w' - 58: 0, # 'x' - 11: 3, # 'y' - 22: 0, # 'z' - 63: 1, # '·' - 54: 0, # 'Ç' - 50: 0, # 'Ö' - 55: 0, # 'Ü' - 59: 0, # 'â' - 33: 0, # 'ç' - 61: 0, # 'î' - 34: 0, # 'ö' - 17: 3, # 'ü' - 30: 1, # 'ğ' - 41: 0, # 'İ' - 6: 3, # 'ı' - 40: 0, # 'Ş' - 19: 0, # 'ş' - }, - 32: { # 'v' - 23: 0, # 'A' - 37: 0, # 'B' - 47: 0, # 'C' - 39: 0, # 'D' - 29: 0, # 'E' - 52: 0, # 'F' - 36: 0, # 'G' - 45: 0, # 'H' - 53: 0, # 'I' - 60: 0, # 'J' - 16: 3, # 'K' - 49: 0, # 'L' - 20: 1, # 'M' - 46: 0, # 'N' - 42: 0, # 'O' - 48: 0, # 'P' - 44: 0, # 'R' - 35: 0, # 'S' - 31: 0, # 'T' - 51: 0, # 'U' - 38: 0, # 'V' - 62: 0, # 'W' - 43: 0, # 'Y' - 56: 0, # 'Z' - 1: 3, # 'a' - 21: 0, # 'b' - 28: 0, # 'c' - 12: 3, # 'd' - 2: 3, # 'e' - 18: 0, # 'f' - 27: 0, # 'g' - 25: 0, # 'h' - 3: 0, # 'i' - 24: 1, # 'j' - 10: 1, # 'k' - 5: 3, # 'l' - 13: 2, # 'm' - 4: 3, # 'n' - 15: 0, # 'o' - 26: 1, # 'p' - 7: 1, # 'r' - 8: 2, # 's' - 9: 3, # 't' - 14: 3, # 'u' - 32: 1, # 'v' - 57: 0, # 'w' - 58: 0, # 'x' - 11: 0, # 'y' - 22: 0, # 'z' - 63: 0, # '·' - 54: 0, # 'Ç' - 50: 0, # 'Ö' - 55: 0, # 'Ü' - 59: 0, # 'â' - 33: 2, # 'ç' - 61: 0, # 'î' - 34: 0, # 'ö' - 17: 0, # 'ü' - 30: 0, # 'ğ' - 41: 0, # 'İ' - 6: 1, # 'ı' - 40: 0, # 'Ş' - 19: 0, # 'ş' - }, - 57: { # 'w' - 23: 0, # 'A' - 37: 0, # 'B' - 47: 0, # 'C' - 39: 0, # 'D' - 29: 0, # 'E' - 52: 0, # 'F' - 36: 0, # 'G' - 45: 0, # 'H' - 53: 0, # 'I' - 60: 0, # 'J' - 16: 0, # 'K' - 49: 0, # 'L' - 20: 0, # 'M' - 46: 0, # 'N' - 42: 0, # 'O' - 48: 0, # 'P' - 44: 0, # 'R' - 35: 0, # 'S' - 31: 0, # 'T' - 51: 1, # 'U' - 38: 0, # 'V' - 62: 0, # 'W' - 43: 0, # 'Y' - 56: 0, # 'Z' - 1: 1, # 'a' - 21: 0, # 'b' - 28: 0, # 'c' - 12: 0, # 'd' - 2: 2, # 'e' - 18: 0, # 'f' - 27: 0, # 'g' - 25: 1, # 'h' - 3: 0, # 'i' - 24: 0, # 'j' - 10: 1, # 'k' - 5: 0, # 'l' - 13: 0, # 'm' - 4: 1, # 'n' - 15: 0, # 'o' - 26: 0, # 'p' - 7: 0, # 'r' - 8: 1, # 's' - 9: 0, # 't' - 14: 1, # 'u' - 32: 0, # 'v' - 57: 2, # 'w' - 58: 0, # 'x' - 11: 0, # 'y' - 22: 0, # 'z' - 63: 1, # '·' - 54: 0, # 'Ç' - 50: 0, # 'Ö' - 55: 0, # 'Ü' - 59: 0, # 'â' - 33: 0, # 'ç' - 61: 0, # 'î' - 34: 0, # 'ö' - 17: 1, # 'ü' - 30: 0, # 'ğ' - 41: 0, # 'İ' - 6: 0, # 'ı' - 40: 0, # 'Ş' - 19: 0, # 'ş' - }, - 58: { # 'x' - 23: 0, # 'A' - 37: 0, # 'B' - 47: 0, # 'C' - 39: 0, # 'D' - 29: 1, # 'E' - 52: 0, # 'F' - 36: 0, # 'G' - 45: 0, # 'H' - 53: 0, # 'I' - 60: 1, # 'J' - 16: 0, # 'K' - 49: 0, # 'L' - 20: 1, # 'M' - 46: 0, # 'N' - 42: 0, # 'O' - 48: 0, # 'P' - 44: 0, # 'R' - 35: 
0, # 'S' - 31: 0, # 'T' - 51: 0, # 'U' - 38: 0, # 'V' - 62: 0, # 'W' - 43: 0, # 'Y' - 56: 0, # 'Z' - 1: 0, # 'a' - 21: 1, # 'b' - 28: 0, # 'c' - 12: 2, # 'd' - 2: 1, # 'e' - 18: 0, # 'f' - 27: 0, # 'g' - 25: 0, # 'h' - 3: 2, # 'i' - 24: 2, # 'j' - 10: 1, # 'k' - 5: 0, # 'l' - 13: 0, # 'm' - 4: 2, # 'n' - 15: 0, # 'o' - 26: 0, # 'p' - 7: 1, # 'r' - 8: 2, # 's' - 9: 1, # 't' - 14: 0, # 'u' - 32: 0, # 'v' - 57: 0, # 'w' - 58: 0, # 'x' - 11: 2, # 'y' - 22: 0, # 'z' - 63: 0, # '·' - 54: 0, # 'Ç' - 50: 0, # 'Ö' - 55: 0, # 'Ü' - 59: 0, # 'â' - 33: 0, # 'ç' - 61: 0, # 'î' - 34: 0, # 'ö' - 17: 1, # 'ü' - 30: 0, # 'ğ' - 41: 0, # 'İ' - 6: 2, # 'ı' - 40: 0, # 'Ş' - 19: 0, # 'ş' - }, - 11: { # 'y' - 23: 1, # 'A' - 37: 0, # 'B' - 47: 0, # 'C' - 39: 0, # 'D' - 29: 0, # 'E' - 52: 0, # 'F' - 36: 0, # 'G' - 45: 0, # 'H' - 53: 0, # 'I' - 60: 1, # 'J' - 16: 3, # 'K' - 49: 0, # 'L' - 20: 1, # 'M' - 46: 0, # 'N' - 42: 0, # 'O' - 48: 0, # 'P' - 44: 0, # 'R' - 35: 0, # 'S' - 31: 1, # 'T' - 51: 0, # 'U' - 38: 0, # 'V' - 62: 0, # 'W' - 43: 1, # 'Y' - 56: 1, # 'Z' - 1: 3, # 'a' - 21: 1, # 'b' - 28: 0, # 'c' - 12: 2, # 'd' - 2: 3, # 'e' - 18: 0, # 'f' - 27: 2, # 'g' - 25: 2, # 'h' - 3: 2, # 'i' - 24: 1, # 'j' - 10: 2, # 'k' - 5: 3, # 'l' - 13: 3, # 'm' - 4: 3, # 'n' - 15: 0, # 'o' - 26: 1, # 'p' - 7: 2, # 'r' - 8: 1, # 's' - 9: 2, # 't' - 14: 3, # 'u' - 32: 0, # 'v' - 57: 0, # 'w' - 58: 1, # 'x' - 11: 3, # 'y' - 22: 0, # 'z' - 63: 0, # '·' - 54: 0, # 'Ç' - 50: 0, # 'Ö' - 55: 0, # 'Ü' - 59: 0, # 'â' - 33: 3, # 'ç' - 61: 0, # 'î' - 34: 0, # 'ö' - 17: 2, # 'ü' - 30: 0, # 'ğ' - 41: 0, # 'İ' - 6: 3, # 'ı' - 40: 0, # 'Ş' - 19: 0, # 'ş' - }, - 22: { # 'z' - 23: 2, # 'A' - 37: 2, # 'B' - 47: 1, # 'C' - 39: 2, # 'D' - 29: 3, # 'E' - 52: 1, # 'F' - 36: 2, # 'G' - 45: 2, # 'H' - 53: 1, # 'I' - 60: 0, # 'J' - 16: 0, # 'K' - 49: 0, # 'L' - 20: 3, # 'M' - 46: 2, # 'N' - 42: 2, # 'O' - 48: 2, # 'P' - 44: 1, # 'R' - 35: 1, # 'S' - 31: 3, # 'T' - 51: 2, # 'U' - 38: 2, # 'V' - 62: 0, # 'W' - 43: 2, # 'Y' - 56: 1, # 'Z' - 1: 1, # 'a' - 21: 2, # 'b' - 28: 1, # 'c' - 12: 2, # 'd' - 2: 2, # 'e' - 18: 3, # 'f' - 27: 2, # 'g' - 25: 2, # 'h' - 3: 3, # 'i' - 24: 2, # 'j' - 10: 3, # 'k' - 5: 0, # 'l' - 13: 2, # 'm' - 4: 3, # 'n' - 15: 2, # 'o' - 26: 2, # 'p' - 7: 3, # 'r' - 8: 3, # 's' - 9: 3, # 't' - 14: 0, # 'u' - 32: 2, # 'v' - 57: 0, # 'w' - 58: 0, # 'x' - 11: 3, # 'y' - 22: 2, # 'z' - 63: 1, # '·' - 54: 0, # 'Ç' - 50: 0, # 'Ö' - 55: 2, # 'Ü' - 59: 1, # 'â' - 33: 0, # 'ç' - 61: 0, # 'î' - 34: 2, # 'ö' - 17: 2, # 'ü' - 30: 2, # 'ğ' - 41: 1, # 'İ' - 6: 3, # 'ı' - 40: 1, # 'Ş' - 19: 2, # 'ş' - }, - 63: { # '·' - 23: 0, # 'A' - 37: 0, # 'B' - 47: 0, # 'C' - 39: 0, # 'D' - 29: 0, # 'E' - 52: 0, # 'F' - 36: 0, # 'G' - 45: 0, # 'H' - 53: 0, # 'I' - 60: 0, # 'J' - 16: 0, # 'K' - 49: 0, # 'L' - 20: 0, # 'M' - 46: 0, # 'N' - 42: 0, # 'O' - 48: 0, # 'P' - 44: 0, # 'R' - 35: 0, # 'S' - 31: 0, # 'T' - 51: 0, # 'U' - 38: 0, # 'V' - 62: 0, # 'W' - 43: 0, # 'Y' - 56: 0, # 'Z' - 1: 0, # 'a' - 21: 0, # 'b' - 28: 0, # 'c' - 12: 0, # 'd' - 2: 1, # 'e' - 18: 0, # 'f' - 27: 0, # 'g' - 25: 0, # 'h' - 3: 0, # 'i' - 24: 0, # 'j' - 10: 0, # 'k' - 5: 0, # 'l' - 13: 2, # 'm' - 4: 0, # 'n' - 15: 0, # 'o' - 26: 0, # 'p' - 7: 0, # 'r' - 8: 0, # 's' - 9: 0, # 't' - 14: 2, # 'u' - 32: 0, # 'v' - 57: 0, # 'w' - 58: 0, # 'x' - 11: 0, # 'y' - 22: 0, # 'z' - 63: 0, # '·' - 54: 0, # 'Ç' - 50: 0, # 'Ö' - 55: 0, # 'Ü' - 59: 0, # 'â' - 33: 0, # 'ç' - 61: 0, # 'î' - 34: 0, # 'ö' - 17: 0, # 'ü' - 30: 0, # 'ğ' - 41: 0, # 'İ' - 6: 0, # 'ı' - 40: 0, # 'Ş' - 19: 0, # 'ş' - }, - 54: { # 
'Ç' - 23: 0, # 'A' - 37: 0, # 'B' - 47: 1, # 'C' - 39: 1, # 'D' - 29: 0, # 'E' - 52: 0, # 'F' - 36: 1, # 'G' - 45: 1, # 'H' - 53: 1, # 'I' - 60: 0, # 'J' - 16: 0, # 'K' - 49: 0, # 'L' - 20: 0, # 'M' - 46: 0, # 'N' - 42: 1, # 'O' - 48: 1, # 'P' - 44: 0, # 'R' - 35: 0, # 'S' - 31: 0, # 'T' - 51: 1, # 'U' - 38: 1, # 'V' - 62: 0, # 'W' - 43: 2, # 'Y' - 56: 0, # 'Z' - 1: 0, # 'a' - 21: 1, # 'b' - 28: 0, # 'c' - 12: 1, # 'd' - 2: 0, # 'e' - 18: 0, # 'f' - 27: 1, # 'g' - 25: 0, # 'h' - 3: 3, # 'i' - 24: 0, # 'j' - 10: 1, # 'k' - 5: 0, # 'l' - 13: 0, # 'm' - 4: 2, # 'n' - 15: 1, # 'o' - 26: 0, # 'p' - 7: 2, # 'r' - 8: 0, # 's' - 9: 1, # 't' - 14: 0, # 'u' - 32: 2, # 'v' - 57: 0, # 'w' - 58: 0, # 'x' - 11: 0, # 'y' - 22: 0, # 'z' - 63: 0, # '·' - 54: 0, # 'Ç' - 50: 0, # 'Ö' - 55: 2, # 'Ü' - 59: 0, # 'â' - 33: 0, # 'ç' - 61: 0, # 'î' - 34: 1, # 'ö' - 17: 0, # 'ü' - 30: 0, # 'ğ' - 41: 0, # 'İ' - 6: 2, # 'ı' - 40: 0, # 'Ş' - 19: 1, # 'ş' - }, - 50: { # 'Ö' - 23: 0, # 'A' - 37: 0, # 'B' - 47: 1, # 'C' - 39: 1, # 'D' - 29: 2, # 'E' - 52: 0, # 'F' - 36: 1, # 'G' - 45: 2, # 'H' - 53: 0, # 'I' - 60: 0, # 'J' - 16: 0, # 'K' - 49: 0, # 'L' - 20: 1, # 'M' - 46: 1, # 'N' - 42: 2, # 'O' - 48: 2, # 'P' - 44: 1, # 'R' - 35: 0, # 'S' - 31: 0, # 'T' - 51: 1, # 'U' - 38: 1, # 'V' - 62: 0, # 'W' - 43: 2, # 'Y' - 56: 0, # 'Z' - 1: 0, # 'a' - 21: 2, # 'b' - 28: 1, # 'c' - 12: 2, # 'd' - 2: 0, # 'e' - 18: 1, # 'f' - 27: 1, # 'g' - 25: 1, # 'h' - 3: 2, # 'i' - 24: 0, # 'j' - 10: 2, # 'k' - 5: 0, # 'l' - 13: 0, # 'm' - 4: 3, # 'n' - 15: 2, # 'o' - 26: 2, # 'p' - 7: 3, # 'r' - 8: 1, # 's' - 9: 2, # 't' - 14: 0, # 'u' - 32: 1, # 'v' - 57: 0, # 'w' - 58: 0, # 'x' - 11: 0, # 'y' - 22: 1, # 'z' - 63: 0, # '·' - 54: 0, # 'Ç' - 50: 0, # 'Ö' - 55: 0, # 'Ü' - 59: 0, # 'â' - 33: 0, # 'ç' - 61: 0, # 'î' - 34: 2, # 'ö' - 17: 2, # 'ü' - 30: 1, # 'ğ' - 41: 0, # 'İ' - 6: 2, # 'ı' - 40: 0, # 'Ş' - 19: 1, # 'ş' - }, - 55: { # 'Ü' - 23: 0, # 'A' - 37: 0, # 'B' - 47: 0, # 'C' - 39: 0, # 'D' - 29: 0, # 'E' - 52: 2, # 'F' - 36: 0, # 'G' - 45: 0, # 'H' - 53: 0, # 'I' - 60: 0, # 'J' - 16: 1, # 'K' - 49: 0, # 'L' - 20: 0, # 'M' - 46: 0, # 'N' - 42: 0, # 'O' - 48: 1, # 'P' - 44: 0, # 'R' - 35: 0, # 'S' - 31: 0, # 'T' - 51: 0, # 'U' - 38: 1, # 'V' - 62: 0, # 'W' - 43: 0, # 'Y' - 56: 0, # 'Z' - 1: 2, # 'a' - 21: 0, # 'b' - 28: 2, # 'c' - 12: 0, # 'd' - 2: 2, # 'e' - 18: 0, # 'f' - 27: 1, # 'g' - 25: 0, # 'h' - 3: 0, # 'i' - 24: 0, # 'j' - 10: 0, # 'k' - 5: 1, # 'l' - 13: 1, # 'm' - 4: 1, # 'n' - 15: 0, # 'o' - 26: 0, # 'p' - 7: 0, # 'r' - 8: 0, # 's' - 9: 1, # 't' - 14: 2, # 'u' - 32: 0, # 'v' - 57: 0, # 'w' - 58: 0, # 'x' - 11: 0, # 'y' - 22: 1, # 'z' - 63: 0, # '·' - 54: 0, # 'Ç' - 50: 1, # 'Ö' - 55: 0, # 'Ü' - 59: 0, # 'â' - 33: 0, # 'ç' - 61: 0, # 'î' - 34: 1, # 'ö' - 17: 0, # 'ü' - 30: 1, # 'ğ' - 41: 1, # 'İ' - 6: 0, # 'ı' - 40: 0, # 'Ş' - 19: 1, # 'ş' - }, - 59: { # 'â' - 23: 0, # 'A' - 37: 0, # 'B' - 47: 0, # 'C' - 39: 0, # 'D' - 29: 0, # 'E' - 52: 0, # 'F' - 36: 1, # 'G' - 45: 0, # 'H' - 53: 0, # 'I' - 60: 0, # 'J' - 16: 1, # 'K' - 49: 0, # 'L' - 20: 0, # 'M' - 46: 0, # 'N' - 42: 0, # 'O' - 48: 0, # 'P' - 44: 0, # 'R' - 35: 0, # 'S' - 31: 0, # 'T' - 51: 0, # 'U' - 38: 0, # 'V' - 62: 0, # 'W' - 43: 0, # 'Y' - 56: 0, # 'Z' - 1: 2, # 'a' - 21: 0, # 'b' - 28: 0, # 'c' - 12: 0, # 'd' - 2: 2, # 'e' - 18: 0, # 'f' - 27: 0, # 'g' - 25: 0, # 'h' - 3: 0, # 'i' - 24: 0, # 'j' - 10: 0, # 'k' - 5: 0, # 'l' - 13: 2, # 'm' - 4: 0, # 'n' - 15: 1, # 'o' - 26: 0, # 'p' - 7: 0, # 'r' - 8: 0, # 's' - 9: 0, # 't' - 14: 2, # 'u' - 32: 0, # 'v' - 57: 0, # 'w' - 
58: 0, # 'x' - 11: 0, # 'y' - 22: 1, # 'z' - 63: 0, # '·' - 54: 0, # 'Ç' - 50: 0, # 'Ö' - 55: 0, # 'Ü' - 59: 0, # 'â' - 33: 0, # 'ç' - 61: 0, # 'î' - 34: 0, # 'ö' - 17: 0, # 'ü' - 30: 0, # 'ğ' - 41: 0, # 'İ' - 6: 1, # 'ı' - 40: 1, # 'Ş' - 19: 0, # 'ş' - }, - 33: { # 'ç' - 23: 0, # 'A' - 37: 0, # 'B' - 47: 0, # 'C' - 39: 0, # 'D' - 29: 3, # 'E' - 52: 0, # 'F' - 36: 0, # 'G' - 45: 0, # 'H' - 53: 0, # 'I' - 60: 0, # 'J' - 16: 0, # 'K' - 49: 0, # 'L' - 20: 1, # 'M' - 46: 0, # 'N' - 42: 0, # 'O' - 48: 0, # 'P' - 44: 0, # 'R' - 35: 0, # 'S' - 31: 2, # 'T' - 51: 0, # 'U' - 38: 1, # 'V' - 62: 0, # 'W' - 43: 0, # 'Y' - 56: 0, # 'Z' - 1: 0, # 'a' - 21: 3, # 'b' - 28: 0, # 'c' - 12: 2, # 'd' - 2: 0, # 'e' - 18: 2, # 'f' - 27: 1, # 'g' - 25: 3, # 'h' - 3: 3, # 'i' - 24: 0, # 'j' - 10: 3, # 'k' - 5: 0, # 'l' - 13: 0, # 'm' - 4: 3, # 'n' - 15: 0, # 'o' - 26: 1, # 'p' - 7: 3, # 'r' - 8: 2, # 's' - 9: 3, # 't' - 14: 0, # 'u' - 32: 2, # 'v' - 57: 0, # 'w' - 58: 0, # 'x' - 11: 2, # 'y' - 22: 0, # 'z' - 63: 0, # '·' - 54: 0, # 'Ç' - 50: 0, # 'Ö' - 55: 0, # 'Ü' - 59: 0, # 'â' - 33: 0, # 'ç' - 61: 0, # 'î' - 34: 0, # 'ö' - 17: 1, # 'ü' - 30: 0, # 'ğ' - 41: 0, # 'İ' - 6: 3, # 'ı' - 40: 0, # 'Ş' - 19: 0, # 'ş' - }, - 61: { # 'î' - 23: 0, # 'A' - 37: 0, # 'B' - 47: 0, # 'C' - 39: 0, # 'D' - 29: 0, # 'E' - 52: 0, # 'F' - 36: 0, # 'G' - 45: 0, # 'H' - 53: 0, # 'I' - 60: 0, # 'J' - 16: 0, # 'K' - 49: 0, # 'L' - 20: 0, # 'M' - 46: 0, # 'N' - 42: 0, # 'O' - 48: 0, # 'P' - 44: 0, # 'R' - 35: 0, # 'S' - 31: 0, # 'T' - 51: 0, # 'U' - 38: 0, # 'V' - 62: 0, # 'W' - 43: 0, # 'Y' - 56: 1, # 'Z' - 1: 2, # 'a' - 21: 0, # 'b' - 28: 0, # 'c' - 12: 0, # 'd' - 2: 2, # 'e' - 18: 0, # 'f' - 27: 0, # 'g' - 25: 0, # 'h' - 3: 0, # 'i' - 24: 1, # 'j' - 10: 0, # 'k' - 5: 0, # 'l' - 13: 1, # 'm' - 4: 1, # 'n' - 15: 0, # 'o' - 26: 0, # 'p' - 7: 0, # 'r' - 8: 0, # 's' - 9: 0, # 't' - 14: 1, # 'u' - 32: 0, # 'v' - 57: 0, # 'w' - 58: 0, # 'x' - 11: 0, # 'y' - 22: 1, # 'z' - 63: 0, # '·' - 54: 0, # 'Ç' - 50: 0, # 'Ö' - 55: 0, # 'Ü' - 59: 0, # 'â' - 33: 0, # 'ç' - 61: 1, # 'î' - 34: 0, # 'ö' - 17: 0, # 'ü' - 30: 0, # 'ğ' - 41: 0, # 'İ' - 6: 1, # 'ı' - 40: 0, # 'Ş' - 19: 0, # 'ş' - }, - 34: { # 'ö' - 23: 0, # 'A' - 37: 1, # 'B' - 47: 1, # 'C' - 39: 0, # 'D' - 29: 0, # 'E' - 52: 2, # 'F' - 36: 1, # 'G' - 45: 1, # 'H' - 53: 0, # 'I' - 60: 0, # 'J' - 16: 3, # 'K' - 49: 1, # 'L' - 20: 0, # 'M' - 46: 1, # 'N' - 42: 1, # 'O' - 48: 2, # 'P' - 44: 1, # 'R' - 35: 1, # 'S' - 31: 1, # 'T' - 51: 1, # 'U' - 38: 1, # 'V' - 62: 0, # 'W' - 43: 0, # 'Y' - 56: 1, # 'Z' - 1: 3, # 'a' - 21: 1, # 'b' - 28: 2, # 'c' - 12: 1, # 'd' - 2: 3, # 'e' - 18: 0, # 'f' - 27: 2, # 'g' - 25: 2, # 'h' - 3: 1, # 'i' - 24: 2, # 'j' - 10: 1, # 'k' - 5: 2, # 'l' - 13: 3, # 'm' - 4: 2, # 'n' - 15: 2, # 'o' - 26: 0, # 'p' - 7: 0, # 'r' - 8: 3, # 's' - 9: 1, # 't' - 14: 3, # 'u' - 32: 0, # 'v' - 57: 0, # 'w' - 58: 0, # 'x' - 11: 1, # 'y' - 22: 2, # 'z' - 63: 0, # '·' - 54: 1, # 'Ç' - 50: 2, # 'Ö' - 55: 0, # 'Ü' - 59: 0, # 'â' - 33: 2, # 'ç' - 61: 0, # 'î' - 34: 2, # 'ö' - 17: 0, # 'ü' - 30: 2, # 'ğ' - 41: 1, # 'İ' - 6: 1, # 'ı' - 40: 2, # 'Ş' - 19: 1, # 'ş' - }, - 17: { # 'ü' - 23: 0, # 'A' - 37: 0, # 'B' - 47: 1, # 'C' - 39: 0, # 'D' - 29: 0, # 'E' - 52: 0, # 'F' - 36: 0, # 'G' - 45: 0, # 'H' - 53: 0, # 'I' - 60: 1, # 'J' - 16: 1, # 'K' - 49: 0, # 'L' - 20: 1, # 'M' - 46: 0, # 'N' - 42: 0, # 'O' - 48: 0, # 'P' - 44: 0, # 'R' - 35: 0, # 'S' - 31: 1, # 'T' - 51: 0, # 'U' - 38: 0, # 'V' - 62: 0, # 'W' - 43: 0, # 'Y' - 56: 1, # 'Z' - 1: 3, # 'a' - 21: 0, # 'b' - 28: 0, # 'c' - 12: 1, # 'd' 
- 2: 3, # 'e' - 18: 1, # 'f' - 27: 2, # 'g' - 25: 0, # 'h' - 3: 1, # 'i' - 24: 1, # 'j' - 10: 2, # 'k' - 5: 3, # 'l' - 13: 2, # 'm' - 4: 3, # 'n' - 15: 0, # 'o' - 26: 2, # 'p' - 7: 2, # 'r' - 8: 3, # 's' - 9: 2, # 't' - 14: 3, # 'u' - 32: 1, # 'v' - 57: 1, # 'w' - 58: 0, # 'x' - 11: 0, # 'y' - 22: 0, # 'z' - 63: 0, # '·' - 54: 0, # 'Ç' - 50: 0, # 'Ö' - 55: 0, # 'Ü' - 59: 0, # 'â' - 33: 1, # 'ç' - 61: 0, # 'î' - 34: 0, # 'ö' - 17: 2, # 'ü' - 30: 0, # 'ğ' - 41: 0, # 'İ' - 6: 2, # 'ı' - 40: 0, # 'Ş' - 19: 0, # 'ş' - }, - 30: { # 'ğ' - 23: 0, # 'A' - 37: 2, # 'B' - 47: 1, # 'C' - 39: 0, # 'D' - 29: 0, # 'E' - 52: 2, # 'F' - 36: 1, # 'G' - 45: 0, # 'H' - 53: 1, # 'I' - 60: 0, # 'J' - 16: 3, # 'K' - 49: 0, # 'L' - 20: 1, # 'M' - 46: 2, # 'N' - 42: 2, # 'O' - 48: 1, # 'P' - 44: 1, # 'R' - 35: 0, # 'S' - 31: 1, # 'T' - 51: 0, # 'U' - 38: 2, # 'V' - 62: 0, # 'W' - 43: 2, # 'Y' - 56: 0, # 'Z' - 1: 3, # 'a' - 21: 0, # 'b' - 28: 2, # 'c' - 12: 0, # 'd' - 2: 2, # 'e' - 18: 0, # 'f' - 27: 0, # 'g' - 25: 0, # 'h' - 3: 0, # 'i' - 24: 3, # 'j' - 10: 1, # 'k' - 5: 2, # 'l' - 13: 3, # 'm' - 4: 0, # 'n' - 15: 1, # 'o' - 26: 0, # 'p' - 7: 1, # 'r' - 8: 0, # 's' - 9: 0, # 't' - 14: 3, # 'u' - 32: 0, # 'v' - 57: 0, # 'w' - 58: 0, # 'x' - 11: 0, # 'y' - 22: 2, # 'z' - 63: 0, # '·' - 54: 2, # 'Ç' - 50: 2, # 'Ö' - 55: 0, # 'Ü' - 59: 0, # 'â' - 33: 1, # 'ç' - 61: 0, # 'î' - 34: 2, # 'ö' - 17: 0, # 'ü' - 30: 1, # 'ğ' - 41: 2, # 'İ' - 6: 2, # 'ı' - 40: 2, # 'Ş' - 19: 1, # 'ş' - }, - 41: { # 'İ' - 23: 0, # 'A' - 37: 0, # 'B' - 47: 1, # 'C' - 39: 1, # 'D' - 29: 1, # 'E' - 52: 0, # 'F' - 36: 2, # 'G' - 45: 2, # 'H' - 53: 0, # 'I' - 60: 0, # 'J' - 16: 0, # 'K' - 49: 0, # 'L' - 20: 2, # 'M' - 46: 1, # 'N' - 42: 1, # 'O' - 48: 2, # 'P' - 44: 0, # 'R' - 35: 1, # 'S' - 31: 1, # 'T' - 51: 1, # 'U' - 38: 1, # 'V' - 62: 0, # 'W' - 43: 2, # 'Y' - 56: 0, # 'Z' - 1: 1, # 'a' - 21: 2, # 'b' - 28: 1, # 'c' - 12: 2, # 'd' - 2: 1, # 'e' - 18: 0, # 'f' - 27: 3, # 'g' - 25: 2, # 'h' - 3: 2, # 'i' - 24: 2, # 'j' - 10: 2, # 'k' - 5: 0, # 'l' - 13: 1, # 'm' - 4: 3, # 'n' - 15: 1, # 'o' - 26: 1, # 'p' - 7: 3, # 'r' - 8: 3, # 's' - 9: 2, # 't' - 14: 0, # 'u' - 32: 0, # 'v' - 57: 1, # 'w' - 58: 0, # 'x' - 11: 2, # 'y' - 22: 0, # 'z' - 63: 0, # '·' - 54: 0, # 'Ç' - 50: 0, # 'Ö' - 55: 1, # 'Ü' - 59: 1, # 'â' - 33: 0, # 'ç' - 61: 0, # 'î' - 34: 1, # 'ö' - 17: 1, # 'ü' - 30: 2, # 'ğ' - 41: 0, # 'İ' - 6: 3, # 'ı' - 40: 0, # 'Ş' - 19: 1, # 'ş' - }, - 6: { # 'ı' - 23: 2, # 'A' - 37: 0, # 'B' - 47: 0, # 'C' - 39: 0, # 'D' - 29: 0, # 'E' - 52: 0, # 'F' - 36: 1, # 'G' - 45: 0, # 'H' - 53: 0, # 'I' - 60: 2, # 'J' - 16: 3, # 'K' - 49: 0, # 'L' - 20: 3, # 'M' - 46: 1, # 'N' - 42: 0, # 'O' - 48: 0, # 'P' - 44: 0, # 'R' - 35: 0, # 'S' - 31: 2, # 'T' - 51: 0, # 'U' - 38: 0, # 'V' - 62: 0, # 'W' - 43: 2, # 'Y' - 56: 1, # 'Z' - 1: 3, # 'a' - 21: 2, # 'b' - 28: 1, # 'c' - 12: 3, # 'd' - 2: 3, # 'e' - 18: 3, # 'f' - 27: 3, # 'g' - 25: 2, # 'h' - 3: 3, # 'i' - 24: 3, # 'j' - 10: 3, # 'k' - 5: 3, # 'l' - 13: 3, # 'm' - 4: 3, # 'n' - 15: 0, # 'o' - 26: 3, # 'p' - 7: 3, # 'r' - 8: 3, # 's' - 9: 3, # 't' - 14: 3, # 'u' - 32: 3, # 'v' - 57: 1, # 'w' - 58: 1, # 'x' - 11: 3, # 'y' - 22: 0, # 'z' - 63: 1, # '·' - 54: 0, # 'Ç' - 50: 0, # 'Ö' - 55: 0, # 'Ü' - 59: 0, # 'â' - 33: 2, # 'ç' - 61: 0, # 'î' - 34: 0, # 'ö' - 17: 3, # 'ü' - 30: 0, # 'ğ' - 41: 0, # 'İ' - 6: 3, # 'ı' - 40: 0, # 'Ş' - 19: 0, # 'ş' - }, - 40: { # 'Ş' - 23: 0, # 'A' - 37: 0, # 'B' - 47: 1, # 'C' - 39: 1, # 'D' - 29: 1, # 'E' - 52: 0, # 'F' - 36: 1, # 'G' - 45: 2, # 'H' - 53: 1, # 'I' - 60: 0, # 'J' - 
16: 0, # 'K' - 49: 0, # 'L' - 20: 2, # 'M' - 46: 1, # 'N' - 42: 1, # 'O' - 48: 2, # 'P' - 44: 2, # 'R' - 35: 1, # 'S' - 31: 1, # 'T' - 51: 0, # 'U' - 38: 1, # 'V' - 62: 0, # 'W' - 43: 2, # 'Y' - 56: 1, # 'Z' - 1: 0, # 'a' - 21: 2, # 'b' - 28: 0, # 'c' - 12: 2, # 'd' - 2: 0, # 'e' - 18: 3, # 'f' - 27: 0, # 'g' - 25: 2, # 'h' - 3: 3, # 'i' - 24: 2, # 'j' - 10: 1, # 'k' - 5: 0, # 'l' - 13: 1, # 'm' - 4: 3, # 'n' - 15: 2, # 'o' - 26: 0, # 'p' - 7: 3, # 'r' - 8: 2, # 's' - 9: 2, # 't' - 14: 1, # 'u' - 32: 3, # 'v' - 57: 0, # 'w' - 58: 0, # 'x' - 11: 2, # 'y' - 22: 0, # 'z' - 63: 0, # '·' - 54: 0, # 'Ç' - 50: 0, # 'Ö' - 55: 1, # 'Ü' - 59: 0, # 'â' - 33: 0, # 'ç' - 61: 0, # 'î' - 34: 2, # 'ö' - 17: 1, # 'ü' - 30: 2, # 'ğ' - 41: 0, # 'İ' - 6: 2, # 'ı' - 40: 1, # 'Ş' - 19: 2, # 'ş' - }, - 19: { # 'ş' - 23: 0, # 'A' - 37: 0, # 'B' - 47: 1, # 'C' - 39: 0, # 'D' - 29: 0, # 'E' - 52: 2, # 'F' - 36: 1, # 'G' - 45: 0, # 'H' - 53: 0, # 'I' - 60: 0, # 'J' - 16: 3, # 'K' - 49: 2, # 'L' - 20: 0, # 'M' - 46: 1, # 'N' - 42: 1, # 'O' - 48: 1, # 'P' - 44: 1, # 'R' - 35: 1, # 'S' - 31: 0, # 'T' - 51: 1, # 'U' - 38: 1, # 'V' - 62: 0, # 'W' - 43: 1, # 'Y' - 56: 0, # 'Z' - 1: 3, # 'a' - 21: 1, # 'b' - 28: 2, # 'c' - 12: 0, # 'd' - 2: 3, # 'e' - 18: 0, # 'f' - 27: 2, # 'g' - 25: 1, # 'h' - 3: 1, # 'i' - 24: 0, # 'j' - 10: 2, # 'k' - 5: 2, # 'l' - 13: 3, # 'm' - 4: 0, # 'n' - 15: 0, # 'o' - 26: 1, # 'p' - 7: 3, # 'r' - 8: 0, # 's' - 9: 0, # 't' - 14: 3, # 'u' - 32: 0, # 'v' - 57: 0, # 'w' - 58: 0, # 'x' - 11: 0, # 'y' - 22: 2, # 'z' - 63: 0, # '·' - 54: 1, # 'Ç' - 50: 2, # 'Ö' - 55: 0, # 'Ü' - 59: 0, # 'â' - 33: 1, # 'ç' - 61: 1, # 'î' - 34: 2, # 'ö' - 17: 0, # 'ü' - 30: 1, # 'ğ' - 41: 1, # 'İ' - 6: 1, # 'ı' - 40: 1, # 'Ş' - 19: 1, # 'ş' - }, -} - -# 255: Undefined characters that did not exist in training text -# 254: Carriage/Return -# 253: symbol (punctuation) that does not belong to word -# 252: 0 - 9 -# 251: Control characters - -# Character Mapping Table(s): -ISO_8859_9_TURKISH_CHAR_TO_ORDER = { - 0: 255, # '\x00' - 1: 255, # '\x01' - 2: 255, # '\x02' - 3: 255, # '\x03' - 4: 255, # '\x04' - 5: 255, # '\x05' - 6: 255, # '\x06' - 7: 255, # '\x07' - 8: 255, # '\x08' - 9: 255, # '\t' - 10: 255, # '\n' - 11: 255, # '\x0b' - 12: 255, # '\x0c' - 13: 255, # '\r' - 14: 255, # '\x0e' - 15: 255, # '\x0f' - 16: 255, # '\x10' - 17: 255, # '\x11' - 18: 255, # '\x12' - 19: 255, # '\x13' - 20: 255, # '\x14' - 21: 255, # '\x15' - 22: 255, # '\x16' - 23: 255, # '\x17' - 24: 255, # '\x18' - 25: 255, # '\x19' - 26: 255, # '\x1a' - 27: 255, # '\x1b' - 28: 255, # '\x1c' - 29: 255, # '\x1d' - 30: 255, # '\x1e' - 31: 255, # '\x1f' - 32: 255, # ' ' - 33: 255, # '!' - 34: 255, # '"' - 35: 255, # '#' - 36: 255, # '$' - 37: 255, # '%' - 38: 255, # '&' - 39: 255, # "'" - 40: 255, # '(' - 41: 255, # ')' - 42: 255, # '*' - 43: 255, # '+' - 44: 255, # ',' - 45: 255, # '-' - 46: 255, # '.' - 47: 255, # '/' - 48: 255, # '0' - 49: 255, # '1' - 50: 255, # '2' - 51: 255, # '3' - 52: 255, # '4' - 53: 255, # '5' - 54: 255, # '6' - 55: 255, # '7' - 56: 255, # '8' - 57: 255, # '9' - 58: 255, # ':' - 59: 255, # ';' - 60: 255, # '<' - 61: 255, # '=' - 62: 255, # '>' - 63: 255, # '?' 
- 64: 255, # '@' - 65: 23, # 'A' - 66: 37, # 'B' - 67: 47, # 'C' - 68: 39, # 'D' - 69: 29, # 'E' - 70: 52, # 'F' - 71: 36, # 'G' - 72: 45, # 'H' - 73: 53, # 'I' - 74: 60, # 'J' - 75: 16, # 'K' - 76: 49, # 'L' - 77: 20, # 'M' - 78: 46, # 'N' - 79: 42, # 'O' - 80: 48, # 'P' - 81: 69, # 'Q' - 82: 44, # 'R' - 83: 35, # 'S' - 84: 31, # 'T' - 85: 51, # 'U' - 86: 38, # 'V' - 87: 62, # 'W' - 88: 65, # 'X' - 89: 43, # 'Y' - 90: 56, # 'Z' - 91: 255, # '[' - 92: 255, # '\\' - 93: 255, # ']' - 94: 255, # '^' - 95: 255, # '_' - 96: 255, # '`' - 97: 1, # 'a' - 98: 21, # 'b' - 99: 28, # 'c' - 100: 12, # 'd' - 101: 2, # 'e' - 102: 18, # 'f' - 103: 27, # 'g' - 104: 25, # 'h' - 105: 3, # 'i' - 106: 24, # 'j' - 107: 10, # 'k' - 108: 5, # 'l' - 109: 13, # 'm' - 110: 4, # 'n' - 111: 15, # 'o' - 112: 26, # 'p' - 113: 64, # 'q' - 114: 7, # 'r' - 115: 8, # 's' - 116: 9, # 't' - 117: 14, # 'u' - 118: 32, # 'v' - 119: 57, # 'w' - 120: 58, # 'x' - 121: 11, # 'y' - 122: 22, # 'z' - 123: 255, # '{' - 124: 255, # '|' - 125: 255, # '}' - 126: 255, # '~' - 127: 255, # '\x7f' - 128: 180, # '\x80' - 129: 179, # '\x81' - 130: 178, # '\x82' - 131: 177, # '\x83' - 132: 176, # '\x84' - 133: 175, # '\x85' - 134: 174, # '\x86' - 135: 173, # '\x87' - 136: 172, # '\x88' - 137: 171, # '\x89' - 138: 170, # '\x8a' - 139: 169, # '\x8b' - 140: 168, # '\x8c' - 141: 167, # '\x8d' - 142: 166, # '\x8e' - 143: 165, # '\x8f' - 144: 164, # '\x90' - 145: 163, # '\x91' - 146: 162, # '\x92' - 147: 161, # '\x93' - 148: 160, # '\x94' - 149: 159, # '\x95' - 150: 101, # '\x96' - 151: 158, # '\x97' - 152: 157, # '\x98' - 153: 156, # '\x99' - 154: 155, # '\x9a' - 155: 154, # '\x9b' - 156: 153, # '\x9c' - 157: 152, # '\x9d' - 158: 151, # '\x9e' - 159: 106, # '\x9f' - 160: 150, # '\xa0' - 161: 149, # '¡' - 162: 148, # '¢' - 163: 147, # '£' - 164: 146, # '¤' - 165: 145, # '¥' - 166: 144, # '¦' - 167: 100, # '§' - 168: 143, # '¨' - 169: 142, # '©' - 170: 141, # 'ª' - 171: 140, # '«' - 172: 139, # '¬' - 173: 138, # '\xad' - 174: 137, # '®' - 175: 136, # '¯' - 176: 94, # '°' - 177: 80, # '±' - 178: 93, # '²' - 179: 135, # '³' - 180: 105, # '´' - 181: 134, # 'µ' - 182: 133, # '¶' - 183: 63, # '·' - 184: 132, # '¸' - 185: 131, # '¹' - 186: 130, # 'º' - 187: 129, # '»' - 188: 128, # '¼' - 189: 127, # '½' - 190: 126, # '¾' - 191: 125, # '¿' - 192: 124, # 'À' - 193: 104, # 'Á' - 194: 73, # 'Â' - 195: 99, # 'Ã' - 196: 79, # 'Ä' - 197: 85, # 'Å' - 198: 123, # 'Æ' - 199: 54, # 'Ç' - 200: 122, # 'È' - 201: 98, # 'É' - 202: 92, # 'Ê' - 203: 121, # 'Ë' - 204: 120, # 'Ì' - 205: 91, # 'Í' - 206: 103, # 'Î' - 207: 119, # 'Ï' - 208: 68, # 'Ğ' - 209: 118, # 'Ñ' - 210: 117, # 'Ò' - 211: 97, # 'Ó' - 212: 116, # 'Ô' - 213: 115, # 'Õ' - 214: 50, # 'Ö' - 215: 90, # '×' - 216: 114, # 'Ø' - 217: 113, # 'Ù' - 218: 112, # 'Ú' - 219: 111, # 'Û' - 220: 55, # 'Ü' - 221: 41, # 'İ' - 222: 40, # 'Ş' - 223: 86, # 'ß' - 224: 89, # 'à' - 225: 70, # 'á' - 226: 59, # 'â' - 227: 78, # 'ã' - 228: 71, # 'ä' - 229: 82, # 'å' - 230: 88, # 'æ' - 231: 33, # 'ç' - 232: 77, # 'è' - 233: 66, # 'é' - 234: 84, # 'ê' - 235: 83, # 'ë' - 236: 110, # 'ì' - 237: 75, # 'í' - 238: 61, # 'î' - 239: 96, # 'ï' - 240: 30, # 'ğ' - 241: 67, # 'ñ' - 242: 109, # 'ò' - 243: 74, # 'ó' - 244: 87, # 'ô' - 245: 102, # 'õ' - 246: 34, # 'ö' - 247: 95, # '÷' - 248: 81, # 'ø' - 249: 108, # 'ù' - 250: 76, # 'ú' - 251: 72, # 'û' - 252: 17, # 'ü' - 253: 6, # 'ı' - 254: 19, # 'ş' - 255: 107, # 'ÿ' -} - -ISO_8859_9_TURKISH_MODEL = SingleByteCharSetModel( - charset_name="ISO-8859-9", - language="Turkish", - 
char_to_order_map=ISO_8859_9_TURKISH_CHAR_TO_ORDER, - language_model=TURKISH_LANG_MODEL, - typical_positive_ratio=0.97029, - keep_ascii_letters=True, - alphabet="ABCDEFGHIJKLMNOPRSTUVYZabcdefghijklmnoprstuvyzÂÇÎÖÛÜâçîöûüĞğİıŞş", -) diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/PIL/WmfImagePlugin.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/PIL/WmfImagePlugin.py deleted file mode 100644 index 3e5fb01512df01c39092783028434463a132e5c6..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/PIL/WmfImagePlugin.py +++ /dev/null @@ -1,178 +0,0 @@ -# -# The Python Imaging Library -# $Id$ -# -# WMF stub codec -# -# history: -# 1996-12-14 fl Created -# 2004-02-22 fl Turned into a stub driver -# 2004-02-23 fl Added EMF support -# -# Copyright (c) Secret Labs AB 1997-2004. All rights reserved. -# Copyright (c) Fredrik Lundh 1996. -# -# See the README file for information on usage and redistribution. -# -# WMF/EMF reference documentation: -# https://winprotocoldoc.blob.core.windows.net/productionwindowsarchives/MS-WMF/[MS-WMF].pdf -# http://wvware.sourceforge.net/caolan/index.html -# http://wvware.sourceforge.net/caolan/ora-wmf.html - -from . import Image, ImageFile -from ._binary import i16le as word -from ._binary import si16le as short -from ._binary import si32le as _long - -_handler = None - - -def register_handler(handler): - """ - Install application-specific WMF image handler. - - :param handler: Handler object. - """ - global _handler - _handler = handler - - -if hasattr(Image.core, "drawwmf"): - # install default handler (windows only) - - class WmfHandler: - def open(self, im): - im._mode = "RGB" - self.bbox = im.info["wmf_bbox"] - - def load(self, im): - im.fp.seek(0) # rewind - return Image.frombytes( - "RGB", - im.size, - Image.core.drawwmf(im.fp.read(), im.size, self.bbox), - "raw", - "BGR", - (im.size[0] * 3 + 3) & -4, - -1, - ) - - register_handler(WmfHandler()) - -# -# -------------------------------------------------------------------- -# Read WMF file - - -def _accept(prefix): - return ( - prefix[:6] == b"\xd7\xcd\xc6\x9a\x00\x00" or prefix[:4] == b"\x01\x00\x00\x00" - ) - - -## -# Image plugin for Windows metafiles. 
- - -class WmfStubImageFile(ImageFile.StubImageFile): - format = "WMF" - format_description = "Windows Metafile" - - def _open(self): - self._inch = None - - # check placable header - s = self.fp.read(80) - - if s[:6] == b"\xd7\xcd\xc6\x9a\x00\x00": - # placeable windows metafile - - # get units per inch - self._inch = word(s, 14) - - # get bounding box - x0 = short(s, 6) - y0 = short(s, 8) - x1 = short(s, 10) - y1 = short(s, 12) - - # normalize size to 72 dots per inch - self.info["dpi"] = 72 - size = ( - (x1 - x0) * self.info["dpi"] // self._inch, - (y1 - y0) * self.info["dpi"] // self._inch, - ) - - self.info["wmf_bbox"] = x0, y0, x1, y1 - - # sanity check (standard metafile header) - if s[22:26] != b"\x01\x00\t\x00": - msg = "Unsupported WMF file format" - raise SyntaxError(msg) - - elif s[:4] == b"\x01\x00\x00\x00" and s[40:44] == b" EMF": - # enhanced metafile - - # get bounding box - x0 = _long(s, 8) - y0 = _long(s, 12) - x1 = _long(s, 16) - y1 = _long(s, 20) - - # get frame (in 0.01 millimeter units) - frame = _long(s, 24), _long(s, 28), _long(s, 32), _long(s, 36) - - size = x1 - x0, y1 - y0 - - # calculate dots per inch from bbox and frame - xdpi = 2540.0 * (x1 - y0) / (frame[2] - frame[0]) - ydpi = 2540.0 * (y1 - y0) / (frame[3] - frame[1]) - - self.info["wmf_bbox"] = x0, y0, x1, y1 - - if xdpi == ydpi: - self.info["dpi"] = xdpi - else: - self.info["dpi"] = xdpi, ydpi - - else: - msg = "Unsupported file format" - raise SyntaxError(msg) - - self._mode = "RGB" - self._size = size - - loader = self._load() - if loader: - loader.open(self) - - def _load(self): - return _handler - - def load(self, dpi=None): - if dpi is not None and self._inch is not None: - self.info["dpi"] = dpi - x0, y0, x1, y1 = self.info["wmf_bbox"] - self._size = ( - (x1 - x0) * self.info["dpi"] // self._inch, - (y1 - y0) * self.info["dpi"] // self._inch, - ) - return super().load() - - -def _save(im, fp, filename): - if _handler is None or not hasattr(_handler, "save"): - msg = "WMF save handler not installed" - raise OSError(msg) - _handler.save(im, fp, filename) - - -# -# -------------------------------------------------------------------- -# Registry stuff - - -Image.register_open(WmfStubImageFile.format, WmfStubImageFile, _accept) -Image.register_save(WmfStubImageFile.format, _save) - -Image.register_extensions(WmfStubImageFile.format, [".wmf", ".emf"]) diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/colorama/ansitowin32.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/colorama/ansitowin32.py deleted file mode 100644 index abf209e60c7c4a9b1ae57452e36b383969848c2e..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/colorama/ansitowin32.py +++ /dev/null @@ -1,277 +0,0 @@ -# Copyright Jonathan Hartley 2013. BSD 3-Clause license, see LICENSE file. -import re -import sys -import os - -from .ansi import AnsiFore, AnsiBack, AnsiStyle, Style, BEL -from .winterm import enable_vt_processing, WinTerm, WinColor, WinStyle -from .win32 import windll, winapi_test - - -winterm = None -if windll is not None: - winterm = WinTerm() - - -class StreamWrapper(object): - ''' - Wraps a stream (such as stdout), acting as a transparent proxy for all - attribute access apart from method 'write()', which is delegated to our - Converter instance. - ''' - def __init__(self, wrapped, converter): - # double-underscore everything to prevent clashes with names of - # attributes on the wrapped stream object. 
- self.__wrapped = wrapped - self.__convertor = converter - - def __getattr__(self, name): - return getattr(self.__wrapped, name) - - def __enter__(self, *args, **kwargs): - # special method lookup bypasses __getattr__/__getattribute__, see - # https://stackoverflow.com/questions/12632894/why-doesnt-getattr-work-with-exit - # thus, contextlib magic methods are not proxied via __getattr__ - return self.__wrapped.__enter__(*args, **kwargs) - - def __exit__(self, *args, **kwargs): - return self.__wrapped.__exit__(*args, **kwargs) - - def __setstate__(self, state): - self.__dict__ = state - - def __getstate__(self): - return self.__dict__ - - def write(self, text): - self.__convertor.write(text) - - def isatty(self): - stream = self.__wrapped - if 'PYCHARM_HOSTED' in os.environ: - if stream is not None and (stream is sys.__stdout__ or stream is sys.__stderr__): - return True - try: - stream_isatty = stream.isatty - except AttributeError: - return False - else: - return stream_isatty() - - @property - def closed(self): - stream = self.__wrapped - try: - return stream.closed - # AttributeError in the case that the stream doesn't support being closed - # ValueError for the case that the stream has already been detached when atexit runs - except (AttributeError, ValueError): - return True - - -class AnsiToWin32(object): - ''' - Implements a 'write()' method which, on Windows, will strip ANSI character - sequences from the text, and if outputting to a tty, will convert them into - win32 function calls. - ''' - ANSI_CSI_RE = re.compile('\001?\033\\[((?:\\d|;)*)([a-zA-Z])\002?') # Control Sequence Introducer - ANSI_OSC_RE = re.compile('\001?\033\\]([^\a]*)(\a)\002?') # Operating System Command - - def __init__(self, wrapped, convert=None, strip=None, autoreset=False): - # The wrapped stream (normally sys.stdout or sys.stderr) - self.wrapped = wrapped - - # should we reset colors to defaults after every .write() - self.autoreset = autoreset - - # create the proxy wrapping our output stream - self.stream = StreamWrapper(wrapped, self) - - on_windows = os.name == 'nt' - # We test if the WinAPI works, because even if we are on Windows - # we may be using a terminal that doesn't support the WinAPI - # (e.g. Cygwin Terminal). In this case it's up to the terminal - # to support the ANSI codes. - conversion_supported = on_windows and winapi_test() - try: - fd = wrapped.fileno() - except Exception: - fd = -1 - system_has_native_ansi = not on_windows or enable_vt_processing(fd) - have_tty = not self.stream.closed and self.stream.isatty() - need_conversion = conversion_supported and not system_has_native_ansi - - # should we strip ANSI sequences from our output? - if strip is None: - strip = need_conversion or not have_tty - self.strip = strip - - # should we should convert ANSI sequences into win32 calls? - if convert is None: - convert = need_conversion and have_tty - self.convert = convert - - # dict of ansi codes to win32 functions and parameters - self.win32_calls = self.get_win32_calls() - - # are we wrapping stderr? - self.on_stderr = self.wrapped is sys.stderr - - def should_wrap(self): - ''' - True if this class is actually needed. If false, then the output - stream will not be affected, nor will win32 calls be issued, so - wrapping stdout is not actually required. 
This will generally be - False on non-Windows platforms, unless optional functionality like - autoreset has been requested using kwargs to init() - ''' - return self.convert or self.strip or self.autoreset - - def get_win32_calls(self): - if self.convert and winterm: - return { - AnsiStyle.RESET_ALL: (winterm.reset_all, ), - AnsiStyle.BRIGHT: (winterm.style, WinStyle.BRIGHT), - AnsiStyle.DIM: (winterm.style, WinStyle.NORMAL), - AnsiStyle.NORMAL: (winterm.style, WinStyle.NORMAL), - AnsiFore.BLACK: (winterm.fore, WinColor.BLACK), - AnsiFore.RED: (winterm.fore, WinColor.RED), - AnsiFore.GREEN: (winterm.fore, WinColor.GREEN), - AnsiFore.YELLOW: (winterm.fore, WinColor.YELLOW), - AnsiFore.BLUE: (winterm.fore, WinColor.BLUE), - AnsiFore.MAGENTA: (winterm.fore, WinColor.MAGENTA), - AnsiFore.CYAN: (winterm.fore, WinColor.CYAN), - AnsiFore.WHITE: (winterm.fore, WinColor.GREY), - AnsiFore.RESET: (winterm.fore, ), - AnsiFore.LIGHTBLACK_EX: (winterm.fore, WinColor.BLACK, True), - AnsiFore.LIGHTRED_EX: (winterm.fore, WinColor.RED, True), - AnsiFore.LIGHTGREEN_EX: (winterm.fore, WinColor.GREEN, True), - AnsiFore.LIGHTYELLOW_EX: (winterm.fore, WinColor.YELLOW, True), - AnsiFore.LIGHTBLUE_EX: (winterm.fore, WinColor.BLUE, True), - AnsiFore.LIGHTMAGENTA_EX: (winterm.fore, WinColor.MAGENTA, True), - AnsiFore.LIGHTCYAN_EX: (winterm.fore, WinColor.CYAN, True), - AnsiFore.LIGHTWHITE_EX: (winterm.fore, WinColor.GREY, True), - AnsiBack.BLACK: (winterm.back, WinColor.BLACK), - AnsiBack.RED: (winterm.back, WinColor.RED), - AnsiBack.GREEN: (winterm.back, WinColor.GREEN), - AnsiBack.YELLOW: (winterm.back, WinColor.YELLOW), - AnsiBack.BLUE: (winterm.back, WinColor.BLUE), - AnsiBack.MAGENTA: (winterm.back, WinColor.MAGENTA), - AnsiBack.CYAN: (winterm.back, WinColor.CYAN), - AnsiBack.WHITE: (winterm.back, WinColor.GREY), - AnsiBack.RESET: (winterm.back, ), - AnsiBack.LIGHTBLACK_EX: (winterm.back, WinColor.BLACK, True), - AnsiBack.LIGHTRED_EX: (winterm.back, WinColor.RED, True), - AnsiBack.LIGHTGREEN_EX: (winterm.back, WinColor.GREEN, True), - AnsiBack.LIGHTYELLOW_EX: (winterm.back, WinColor.YELLOW, True), - AnsiBack.LIGHTBLUE_EX: (winterm.back, WinColor.BLUE, True), - AnsiBack.LIGHTMAGENTA_EX: (winterm.back, WinColor.MAGENTA, True), - AnsiBack.LIGHTCYAN_EX: (winterm.back, WinColor.CYAN, True), - AnsiBack.LIGHTWHITE_EX: (winterm.back, WinColor.GREY, True), - } - return dict() - - def write(self, text): - if self.strip or self.convert: - self.write_and_convert(text) - else: - self.wrapped.write(text) - self.wrapped.flush() - if self.autoreset: - self.reset_all() - - - def reset_all(self): - if self.convert: - self.call_win32('m', (0,)) - elif not self.strip and not self.stream.closed: - self.wrapped.write(Style.RESET_ALL) - - - def write_and_convert(self, text): - ''' - Write the given text to our wrapped stream, stripping any ANSI - sequences from the text, and optionally converting them into win32 - calls. 
- ''' - cursor = 0 - text = self.convert_osc(text) - for match in self.ANSI_CSI_RE.finditer(text): - start, end = match.span() - self.write_plain_text(text, cursor, start) - self.convert_ansi(*match.groups()) - cursor = end - self.write_plain_text(text, cursor, len(text)) - - - def write_plain_text(self, text, start, end): - if start < end: - self.wrapped.write(text[start:end]) - self.wrapped.flush() - - - def convert_ansi(self, paramstring, command): - if self.convert: - params = self.extract_params(command, paramstring) - self.call_win32(command, params) - - - def extract_params(self, command, paramstring): - if command in 'Hf': - params = tuple(int(p) if len(p) != 0 else 1 for p in paramstring.split(';')) - while len(params) < 2: - # defaults: - params = params + (1,) - else: - params = tuple(int(p) for p in paramstring.split(';') if len(p) != 0) - if len(params) == 0: - # defaults: - if command in 'JKm': - params = (0,) - elif command in 'ABCD': - params = (1,) - - return params - - - def call_win32(self, command, params): - if command == 'm': - for param in params: - if param in self.win32_calls: - func_args = self.win32_calls[param] - func = func_args[0] - args = func_args[1:] - kwargs = dict(on_stderr=self.on_stderr) - func(*args, **kwargs) - elif command in 'J': - winterm.erase_screen(params[0], on_stderr=self.on_stderr) - elif command in 'K': - winterm.erase_line(params[0], on_stderr=self.on_stderr) - elif command in 'Hf': # cursor position - absolute - winterm.set_cursor_position(params, on_stderr=self.on_stderr) - elif command in 'ABCD': # cursor position - relative - n = params[0] - # A - up, B - down, C - forward, D - back - x, y = {'A': (0, -n), 'B': (0, n), 'C': (n, 0), 'D': (-n, 0)}[command] - winterm.cursor_adjust(x, y, on_stderr=self.on_stderr) - - - def convert_osc(self, text): - for match in self.ANSI_OSC_RE.finditer(text): - start, end = match.span() - text = text[:start] + text[end:] - paramstring, command = match.groups() - if command == BEL: - if paramstring.count(";") == 1: - params = paramstring.split(";") - # 0 - change title and icon (we will only change title) - # 1 - change icon (we don't support this) - # 2 - change title - if params[0] in '02': - winterm.set_title(params[1]) - return text - - - def flush(self): - self.wrapped.flush() diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/fastapi/staticfiles.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/fastapi/staticfiles.py deleted file mode 100644 index 299015d4fef268cde91273790251f35192e1c8a6..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/fastapi/staticfiles.py +++ /dev/null @@ -1 +0,0 @@ -from starlette.staticfiles import StaticFiles as StaticFiles # noqa diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/fontTools/pens/ttGlyphPen.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/fontTools/pens/ttGlyphPen.py deleted file mode 100644 index de2ccaeeb45c18c80caae049f3bd26b4ff22e99e..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/fontTools/pens/ttGlyphPen.py +++ /dev/null @@ -1,335 +0,0 @@ -from array import array -from typing import Any, Callable, Dict, Optional, Tuple -from fontTools.misc.fixedTools import MAX_F2DOT14, floatToFixedToFloat -from fontTools.misc.loggingTools import LogMixin -from fontTools.pens.pointPen import AbstractPointPen -from fontTools.misc.roundTools import otRound -from 
fontTools.pens.basePen import LoggingPen, PenError -from fontTools.pens.transformPen import TransformPen, TransformPointPen -from fontTools.ttLib.tables import ttProgram -from fontTools.ttLib.tables._g_l_y_f import flagOnCurve, flagCubic -from fontTools.ttLib.tables._g_l_y_f import Glyph -from fontTools.ttLib.tables._g_l_y_f import GlyphComponent -from fontTools.ttLib.tables._g_l_y_f import GlyphCoordinates -from fontTools.ttLib.tables._g_l_y_f import dropImpliedOnCurvePoints -import math - - -__all__ = ["TTGlyphPen", "TTGlyphPointPen"] - - -class _TTGlyphBasePen: - def __init__( - self, - glyphSet: Optional[Dict[str, Any]], - handleOverflowingTransforms: bool = True, - ) -> None: - """ - Construct a new pen. - - Args: - glyphSet (Dict[str, Any]): A glyphset object, used to resolve components. - handleOverflowingTransforms (bool): See below. - - If ``handleOverflowingTransforms`` is True, the components' transform values - are checked that they don't overflow the limits of a F2Dot14 number: - -2.0 <= v < +2.0. If any transform value exceeds these, the composite - glyph is decomposed. - - An exception to this rule is done for values that are very close to +2.0 - (both for consistency with the -2.0 case, and for the relative frequency - these occur in real fonts). When almost +2.0 values occur (and all other - values are within the range -2.0 <= x <= +2.0), they are clamped to the - maximum positive value that can still be encoded as an F2Dot14: i.e. - 1.99993896484375. - - If False, no check is done and all components are translated unmodified - into the glyf table, followed by an inevitable ``struct.error`` once an - attempt is made to compile them. - - If both contours and components are present in a glyph, the components - are decomposed. - """ - self.glyphSet = glyphSet - self.handleOverflowingTransforms = handleOverflowingTransforms - self.init() - - def _decompose( - self, - glyphName: str, - transformation: Tuple[float, float, float, float, float, float], - ): - tpen = self.transformPen(self, transformation) - getattr(self.glyphSet[glyphName], self.drawMethod)(tpen) - - def _isClosed(self): - """ - Check if the current path is closed. - """ - raise NotImplementedError - - def init(self) -> None: - self.points = [] - self.endPts = [] - self.types = [] - self.components = [] - - def addComponent( - self, - baseGlyphName: str, - transformation: Tuple[float, float, float, float, float, float], - identifier: Optional[str] = None, - **kwargs: Any, - ) -> None: - """ - Add a sub glyph. 
- """ - self.components.append((baseGlyphName, transformation)) - - def _buildComponents(self, componentFlags): - if self.handleOverflowingTransforms: - # we can't encode transform values > 2 or < -2 in F2Dot14, - # so we must decompose the glyph if any transform exceeds these - overflowing = any( - s > 2 or s < -2 - for (glyphName, transformation) in self.components - for s in transformation[:4] - ) - components = [] - for glyphName, transformation in self.components: - if glyphName not in self.glyphSet: - self.log.warning(f"skipped non-existing component '{glyphName}'") - continue - if self.points or (self.handleOverflowingTransforms and overflowing): - # can't have both coordinates and components, so decompose - self._decompose(glyphName, transformation) - continue - - component = GlyphComponent() - component.glyphName = glyphName - component.x, component.y = (otRound(v) for v in transformation[4:]) - # quantize floats to F2Dot14 so we get same values as when decompiled - # from a binary glyf table - transformation = tuple( - floatToFixedToFloat(v, 14) for v in transformation[:4] - ) - if transformation != (1, 0, 0, 1): - if self.handleOverflowingTransforms and any( - MAX_F2DOT14 < s <= 2 for s in transformation - ): - # clamp values ~= +2.0 so we can keep the component - transformation = tuple( - MAX_F2DOT14 if MAX_F2DOT14 < s <= 2 else s - for s in transformation - ) - component.transform = (transformation[:2], transformation[2:]) - component.flags = componentFlags - components.append(component) - return components - - def glyph( - self, - componentFlags: int = 0x04, - dropImpliedOnCurves: bool = False, - *, - round: Callable[[float], int] = otRound, - ) -> Glyph: - """ - Returns a :py:class:`~._g_l_y_f.Glyph` object representing the glyph. - - Args: - componentFlags: Flags to use for component glyphs. (default: 0x04) - - dropImpliedOnCurves: Whether to remove implied-oncurve points. (default: False) - """ - if not self._isClosed(): - raise PenError("Didn't close last contour.") - components = self._buildComponents(componentFlags) - - glyph = Glyph() - glyph.coordinates = GlyphCoordinates(self.points) - glyph.endPtsOfContours = self.endPts - glyph.flags = array("B", self.types) - self.init() - - if components: - # If both components and contours were present, they have by now - # been decomposed by _buildComponents. - glyph.components = components - glyph.numberOfContours = -1 - else: - glyph.numberOfContours = len(glyph.endPtsOfContours) - glyph.program = ttProgram.Program() - glyph.program.fromBytecode(b"") - if dropImpliedOnCurves: - dropImpliedOnCurvePoints(glyph) - glyph.coordinates.toInt(round=round) - - return glyph - - -class TTGlyphPen(_TTGlyphBasePen, LoggingPen): - """ - Pen used for drawing to a TrueType glyph. - - This pen can be used to construct or modify glyphs in a TrueType format - font. After using the pen to draw, use the ``.glyph()`` method to retrieve - a :py:class:`~._g_l_y_f.Glyph` object representing the glyph. 
- """ - - drawMethod = "draw" - transformPen = TransformPen - - def __init__( - self, - glyphSet: Optional[Dict[str, Any]] = None, - handleOverflowingTransforms: bool = True, - outputImpliedClosingLine: bool = False, - ) -> None: - super().__init__(glyphSet, handleOverflowingTransforms) - self.outputImpliedClosingLine = outputImpliedClosingLine - - def _addPoint(self, pt: Tuple[float, float], tp: int) -> None: - self.points.append(pt) - self.types.append(tp) - - def _popPoint(self) -> None: - self.points.pop() - self.types.pop() - - def _isClosed(self) -> bool: - return (not self.points) or ( - self.endPts and self.endPts[-1] == len(self.points) - 1 - ) - - def lineTo(self, pt: Tuple[float, float]) -> None: - self._addPoint(pt, flagOnCurve) - - def moveTo(self, pt: Tuple[float, float]) -> None: - if not self._isClosed(): - raise PenError('"move"-type point must begin a new contour.') - self._addPoint(pt, flagOnCurve) - - def curveTo(self, *points) -> None: - assert len(points) % 2 == 1 - for pt in points[:-1]: - self._addPoint(pt, flagCubic) - - # last point is None if there are no on-curve points - if points[-1] is not None: - self._addPoint(points[-1], 1) - - def qCurveTo(self, *points) -> None: - assert len(points) >= 1 - for pt in points[:-1]: - self._addPoint(pt, 0) - - # last point is None if there are no on-curve points - if points[-1] is not None: - self._addPoint(points[-1], 1) - - def closePath(self) -> None: - endPt = len(self.points) - 1 - - # ignore anchors (one-point paths) - if endPt == 0 or (self.endPts and endPt == self.endPts[-1] + 1): - self._popPoint() - return - - if not self.outputImpliedClosingLine: - # if first and last point on this path are the same, remove last - startPt = 0 - if self.endPts: - startPt = self.endPts[-1] + 1 - if self.points[startPt] == self.points[endPt]: - self._popPoint() - endPt -= 1 - - self.endPts.append(endPt) - - def endPath(self) -> None: - # TrueType contours are always "closed" - self.closePath() - - -class TTGlyphPointPen(_TTGlyphBasePen, LogMixin, AbstractPointPen): - """ - Point pen used for drawing to a TrueType glyph. - - This pen can be used to construct or modify glyphs in a TrueType format - font. After using the pen to draw, use the ``.glyph()`` method to retrieve - a :py:class:`~._g_l_y_f.Glyph` object representing the glyph. - """ - - drawMethod = "drawPoints" - transformPen = TransformPointPen - - def init(self) -> None: - super().init() - self._currentContourStartIndex = None - - def _isClosed(self) -> bool: - return self._currentContourStartIndex is None - - def beginPath(self, identifier: Optional[str] = None, **kwargs: Any) -> None: - """ - Start a new sub path. - """ - if not self._isClosed(): - raise PenError("Didn't close previous contour.") - self._currentContourStartIndex = len(self.points) - - def endPath(self) -> None: - """ - End the current sub path. 
- """ - # TrueType contours are always "closed" - if self._isClosed(): - raise PenError("Contour is already closed.") - if self._currentContourStartIndex == len(self.points): - # ignore empty contours - self._currentContourStartIndex = None - return - - contourStart = self.endPts[-1] + 1 if self.endPts else 0 - self.endPts.append(len(self.points) - 1) - self._currentContourStartIndex = None - - # Resolve types for any cubic segments - flags = self.types - for i in range(contourStart, len(flags)): - if flags[i] == "curve": - j = i - 1 - if j < contourStart: - j = len(flags) - 1 - while flags[j] == 0: - flags[j] = flagCubic - j -= 1 - flags[i] = flagOnCurve - - def addPoint( - self, - pt: Tuple[float, float], - segmentType: Optional[str] = None, - smooth: bool = False, - name: Optional[str] = None, - identifier: Optional[str] = None, - **kwargs: Any, - ) -> None: - """ - Add a point to the current sub path. - """ - if self._isClosed(): - raise PenError("Can't add a point to a closed contour.") - if segmentType is None: - self.types.append(0) - elif segmentType in ("line", "move"): - self.types.append(flagOnCurve) - elif segmentType == "qcurve": - self.types.append(flagOnCurve) - elif segmentType == "curve": - self.types.append("curve") - else: - raise AssertionError(segmentType) - - self.points.append(pt) diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/templates/frontend/assets/Index-9a36a7ca.css b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/templates/frontend/assets/Index-9a36a7ca.css deleted file mode 100644 index cca598778f233cad73ec7066b69bb4e609c35cb2..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/templates/frontend/assets/Index-9a36a7ca.css +++ /dev/null @@ -1 +0,0 @@ -input.svelte-q8uklq{position:absolute;top:var(--size-2);right:var(--size-2);bottom:var(--size-2);left:var(--size-2);flex:1 1 0%;transform:translate(-.1px);outline:none;border:none;background:transparent}span.svelte-q8uklq{flex:1 1 0%;outline:none;padding:var(--size-2)}.header.svelte-q8uklq{transform:translate(0);font:var(--weight-bold)}.edit.svelte-q8uklq{opacity:0;pointer-events:none}table.svelte-1jok1de.svelte-1jok1de{position:relative;overflow-y:scroll;overflow-x:scroll;-webkit-overflow-scrolling:touch;max-height:100vh;box-sizing:border-box;display:block;padding:0;margin:0;color:var(--body-text-color);font-size:var(--input-text-size);line-height:var(--line-md);font-family:var(--font-mono);border-spacing:0;width:100%;scroll-snap-type:x proximity}table.svelte-1jok1de .svelte-1jok1de:is(thead,tfoot,tbody){display:table;table-layout:fixed;width:100%;box-sizing:border-box}tbody.svelte-1jok1de.svelte-1jok1de{overflow-x:scroll;overflow-y:hidden}table.svelte-1jok1de tbody.svelte-1jok1de{padding-top:var(--bw-svt-p-top);padding-bottom:var(--bw-svt-p-bottom)}tbody.svelte-1jok1de.svelte-1jok1de{position:relative;box-sizing:border-box;border:0px solid currentColor}tbody.svelte-1jok1de>tr:last-child{border:none}table.svelte-1jok1de td{scroll-snap-align:start}tbody.svelte-1jok1de>tr:nth-child(2n){background:var(--table-even-background-fill)}thead.svelte-1jok1de.svelte-1jok1de{position:sticky;top:0;left:0;z-index:var(--layer-1);box-shadow:var(--shadow-drop)}.button-wrap.svelte-1bvc1p0:hover svg.svelte-1bvc1p0.svelte-1bvc1p0{color:var(--color-accent)}.button-wrap.svelte-1bvc1p0 svg.svelte-1bvc1p0.svelte-1bvc1p0{margin-right:var(--size-1);margin-left:-5px}.label.svelte-1bvc1p0 
p.svelte-1bvc1p0.svelte-1bvc1p0{position:relative;z-index:var(--layer-4);margin-bottom:var(--size-2);color:var(--block-label-text-color);font-size:var(--block-label-text-size)}.table-wrap.svelte-1bvc1p0.svelte-1bvc1p0.svelte-1bvc1p0{position:relative;transition:.15s;border:1px solid var(--border-color-primary);border-radius:var(--table-radius);overflow:hidden}.table-wrap.svelte-1bvc1p0.svelte-1bvc1p0.svelte-1bvc1p0:focus-within{outline:none;background-color:none}.dragging.svelte-1bvc1p0.svelte-1bvc1p0.svelte-1bvc1p0{border-color:var(--color-accent)}.no-wrap.svelte-1bvc1p0.svelte-1bvc1p0.svelte-1bvc1p0{white-space:nowrap}table.svelte-1bvc1p0.svelte-1bvc1p0.svelte-1bvc1p0{position:absolute;opacity:0;transition:.15s;width:var(--size-full);table-layout:auto;color:var(--body-text-color);font-size:var(--input-text-size);line-height:var(--line-md);font-family:var(--font-mono);border-spacing:0}div.svelte-1bvc1p0:not(.no-wrap) td.svelte-1bvc1p0.svelte-1bvc1p0{overflow-wrap:anywhere}div.no-wrap.svelte-1bvc1p0 td.svelte-1bvc1p0.svelte-1bvc1p0{overflow-x:hidden}table.fixed-layout.svelte-1bvc1p0.svelte-1bvc1p0.svelte-1bvc1p0{table-layout:fixed}thead.svelte-1bvc1p0.svelte-1bvc1p0.svelte-1bvc1p0{position:sticky;top:0;left:0;z-index:var(--layer-1);box-shadow:var(--shadow-drop)}tr.svelte-1bvc1p0.svelte-1bvc1p0.svelte-1bvc1p0{border-bottom:1px solid var(--border-color-primary);text-align:left}tr.svelte-1bvc1p0>.svelte-1bvc1p0+.svelte-1bvc1p0{border-right-width:0px;border-left-width:1px;border-style:solid;border-color:var(--border-color-primary)}th.svelte-1bvc1p0.svelte-1bvc1p0.svelte-1bvc1p0,td.svelte-1bvc1p0.svelte-1bvc1p0.svelte-1bvc1p0{--ring-color:transparent;position:relative;outline:none;box-shadow:inset 0 0 0 1px var(--ring-color);padding:0}th.svelte-1bvc1p0.svelte-1bvc1p0.svelte-1bvc1p0:first-child{border-top-left-radius:var(--table-radius)}th.svelte-1bvc1p0.svelte-1bvc1p0.svelte-1bvc1p0:last-child{border-top-right-radius:var(--table-radius)}th.focus.svelte-1bvc1p0.svelte-1bvc1p0.svelte-1bvc1p0,td.focus.svelte-1bvc1p0.svelte-1bvc1p0.svelte-1bvc1p0{--ring-color:var(--color-accent)}tr.svelte-1bvc1p0:last-child td.svelte-1bvc1p0.svelte-1bvc1p0:first-child{border-bottom-left-radius:var(--table-radius)}tr.svelte-1bvc1p0:last-child td.svelte-1bvc1p0.svelte-1bvc1p0:last-child{border-bottom-right-radius:var(--table-radius)}tr.svelte-1bvc1p0 th.svelte-1bvc1p0.svelte-1bvc1p0{background:var(--table-even-background-fill)}th.svelte-1bvc1p0 
svg.svelte-1bvc1p0.svelte-1bvc1p0{fill:currentColor;font-size:10px}.sort-button.svelte-1bvc1p0.svelte-1bvc1p0.svelte-1bvc1p0{display:flex;flex:none;justify-content:center;align-items:center;transition:.15s;cursor:pointer;padding:var(--size-2);color:var(--body-text-color-subdued);line-height:var(--text-sm)}.sort-button.svelte-1bvc1p0.svelte-1bvc1p0.svelte-1bvc1p0:hover{color:var(--body-text-color)}.des.svelte-1bvc1p0.svelte-1bvc1p0.svelte-1bvc1p0{transform:scaleY(-1)}.sort-button.sorted.svelte-1bvc1p0.svelte-1bvc1p0.svelte-1bvc1p0{color:var(--color-accent)}.editing.svelte-1bvc1p0.svelte-1bvc1p0.svelte-1bvc1p0{background:var(--table-editing)}.cell-wrap.svelte-1bvc1p0.svelte-1bvc1p0.svelte-1bvc1p0{display:flex;align-items:center;outline:none;height:var(--size-full);min-height:var(--size-9)}.controls-wrap.svelte-1bvc1p0.svelte-1bvc1p0.svelte-1bvc1p0{display:flex;justify-content:flex-end;padding-top:var(--size-2)}.controls-wrap.svelte-1bvc1p0>.svelte-1bvc1p0+.svelte-1bvc1p0{margin-left:var(--size-1)}.row_odd.svelte-1bvc1p0.svelte-1bvc1p0.svelte-1bvc1p0{background:var(--table-odd-background-fill)}.row_odd.focus.svelte-1bvc1p0.svelte-1bvc1p0.svelte-1bvc1p0{background:var(--background-fill-primary)} diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/httpx/_transports/default.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/httpx/_transports/default.py deleted file mode 100644 index 7dba5b8208a5129c930e74e30178f8b5b5e5aa6f..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/httpx/_transports/default.py +++ /dev/null @@ -1,378 +0,0 @@ -""" -Custom transports, with nicely configured defaults. - -The following additional keyword arguments are currently supported by httpcore... - -* uds: str -* local_address: str -* retries: int - -Example usages... - -# Disable HTTP/2 on a single specific domain. -mounts = { - "all://": httpx.HTTPTransport(http2=True), - "all://*example.org": httpx.HTTPTransport() -} - -# Using advanced httpcore configuration, with connection retries. -transport = httpx.HTTPTransport(retries=1) -client = httpx.Client(transport=transport) - -# Using advanced httpcore configuration, with unix domain sockets. -transport = httpx.HTTPTransport(uds="socket.uds") -client = httpx.Client(transport=transport) -""" -import contextlib -import typing -from types import TracebackType - -import httpcore - -from .._config import DEFAULT_LIMITS, Limits, Proxy, create_ssl_context -from .._exceptions import ( - ConnectError, - ConnectTimeout, - LocalProtocolError, - NetworkError, - PoolTimeout, - ProtocolError, - ProxyError, - ReadError, - ReadTimeout, - RemoteProtocolError, - TimeoutException, - UnsupportedProtocol, - WriteError, - WriteTimeout, -) -from .._models import Request, Response -from .._types import AsyncByteStream, CertTypes, SyncByteStream, VerifyTypes -from .base import AsyncBaseTransport, BaseTransport - -T = typing.TypeVar("T", bound="HTTPTransport") -A = typing.TypeVar("A", bound="AsyncHTTPTransport") - -SOCKET_OPTION = typing.Union[ - typing.Tuple[int, int, int], - typing.Tuple[int, int, typing.Union[bytes, bytearray]], - typing.Tuple[int, int, None, int], -] - - -@contextlib.contextmanager -def map_httpcore_exceptions() -> typing.Iterator[None]: - try: - yield - except Exception as exc: # noqa: PIE-786 - mapped_exc = None - - for from_exc, to_exc in HTTPCORE_EXC_MAP.items(): - if not isinstance(exc, from_exc): - continue - # We want to map to the most specific exception we can find. 
- # Eg if `exc` is an `httpcore.ReadTimeout`, we want to map to - # `httpx.ReadTimeout`, not just `httpx.TimeoutException`. - if mapped_exc is None or issubclass(to_exc, mapped_exc): - mapped_exc = to_exc - - if mapped_exc is None: # pragma: no cover - raise - - message = str(exc) - raise mapped_exc(message) from exc - - -HTTPCORE_EXC_MAP = { - httpcore.TimeoutException: TimeoutException, - httpcore.ConnectTimeout: ConnectTimeout, - httpcore.ReadTimeout: ReadTimeout, - httpcore.WriteTimeout: WriteTimeout, - httpcore.PoolTimeout: PoolTimeout, - httpcore.NetworkError: NetworkError, - httpcore.ConnectError: ConnectError, - httpcore.ReadError: ReadError, - httpcore.WriteError: WriteError, - httpcore.ProxyError: ProxyError, - httpcore.UnsupportedProtocol: UnsupportedProtocol, - httpcore.ProtocolError: ProtocolError, - httpcore.LocalProtocolError: LocalProtocolError, - httpcore.RemoteProtocolError: RemoteProtocolError, -} - - -class ResponseStream(SyncByteStream): - def __init__(self, httpcore_stream: typing.Iterable[bytes]): - self._httpcore_stream = httpcore_stream - - def __iter__(self) -> typing.Iterator[bytes]: - with map_httpcore_exceptions(): - for part in self._httpcore_stream: - yield part - - def close(self) -> None: - if hasattr(self._httpcore_stream, "close"): - self._httpcore_stream.close() - - -class HTTPTransport(BaseTransport): - def __init__( - self, - verify: VerifyTypes = True, - cert: typing.Optional[CertTypes] = None, - http1: bool = True, - http2: bool = False, - limits: Limits = DEFAULT_LIMITS, - trust_env: bool = True, - proxy: typing.Optional[Proxy] = None, - uds: typing.Optional[str] = None, - local_address: typing.Optional[str] = None, - retries: int = 0, - socket_options: typing.Optional[typing.Iterable[SOCKET_OPTION]] = None, - ) -> None: - ssl_context = create_ssl_context(verify=verify, cert=cert, trust_env=trust_env) - - if proxy is None: - self._pool = httpcore.ConnectionPool( - ssl_context=ssl_context, - max_connections=limits.max_connections, - max_keepalive_connections=limits.max_keepalive_connections, - keepalive_expiry=limits.keepalive_expiry, - http1=http1, - http2=http2, - uds=uds, - local_address=local_address, - retries=retries, - socket_options=socket_options, - ) - elif proxy.url.scheme in ("http", "https"): - self._pool = httpcore.HTTPProxy( - proxy_url=httpcore.URL( - scheme=proxy.url.raw_scheme, - host=proxy.url.raw_host, - port=proxy.url.port, - target=proxy.url.raw_path, - ), - proxy_auth=proxy.raw_auth, - proxy_headers=proxy.headers.raw, - ssl_context=ssl_context, - proxy_ssl_context=proxy.ssl_context, - max_connections=limits.max_connections, - max_keepalive_connections=limits.max_keepalive_connections, - keepalive_expiry=limits.keepalive_expiry, - http1=http1, - http2=http2, - socket_options=socket_options, - ) - elif proxy.url.scheme == "socks5": - try: - import socksio # noqa - except ImportError: # pragma: no cover - raise ImportError( - "Using SOCKS proxy, but the 'socksio' package is not installed. " - "Make sure to install httpx using `pip install httpx[socks]`." 
- ) from None - - self._pool = httpcore.SOCKSProxy( - proxy_url=httpcore.URL( - scheme=proxy.url.raw_scheme, - host=proxy.url.raw_host, - port=proxy.url.port, - target=proxy.url.raw_path, - ), - proxy_auth=proxy.raw_auth, - ssl_context=ssl_context, - max_connections=limits.max_connections, - max_keepalive_connections=limits.max_keepalive_connections, - keepalive_expiry=limits.keepalive_expiry, - http1=http1, - http2=http2, - ) - else: # pragma: no cover - raise ValueError( - f"Proxy protocol must be either 'http', 'https', or 'socks5', but got {proxy.url.scheme!r}." - ) - - def __enter__(self: T) -> T: # Use generics for subclass support. - self._pool.__enter__() - return self - - def __exit__( - self, - exc_type: typing.Optional[typing.Type[BaseException]] = None, - exc_value: typing.Optional[BaseException] = None, - traceback: typing.Optional[TracebackType] = None, - ) -> None: - with map_httpcore_exceptions(): - self._pool.__exit__(exc_type, exc_value, traceback) - - def handle_request( - self, - request: Request, - ) -> Response: - assert isinstance(request.stream, SyncByteStream) - - req = httpcore.Request( - method=request.method, - url=httpcore.URL( - scheme=request.url.raw_scheme, - host=request.url.raw_host, - port=request.url.port, - target=request.url.raw_path, - ), - headers=request.headers.raw, - content=request.stream, - extensions=request.extensions, - ) - with map_httpcore_exceptions(): - resp = self._pool.handle_request(req) - - assert isinstance(resp.stream, typing.Iterable) - - return Response( - status_code=resp.status, - headers=resp.headers, - stream=ResponseStream(resp.stream), - extensions=resp.extensions, - ) - - def close(self) -> None: - self._pool.close() - - -class AsyncResponseStream(AsyncByteStream): - def __init__(self, httpcore_stream: typing.AsyncIterable[bytes]): - self._httpcore_stream = httpcore_stream - - async def __aiter__(self) -> typing.AsyncIterator[bytes]: - with map_httpcore_exceptions(): - async for part in self._httpcore_stream: - yield part - - async def aclose(self) -> None: - if hasattr(self._httpcore_stream, "aclose"): - await self._httpcore_stream.aclose() - - -class AsyncHTTPTransport(AsyncBaseTransport): - def __init__( - self, - verify: VerifyTypes = True, - cert: typing.Optional[CertTypes] = None, - http1: bool = True, - http2: bool = False, - limits: Limits = DEFAULT_LIMITS, - trust_env: bool = True, - proxy: typing.Optional[Proxy] = None, - uds: typing.Optional[str] = None, - local_address: typing.Optional[str] = None, - retries: int = 0, - socket_options: typing.Optional[typing.Iterable[SOCKET_OPTION]] = None, - ) -> None: - ssl_context = create_ssl_context(verify=verify, cert=cert, trust_env=trust_env) - - if proxy is None: - self._pool = httpcore.AsyncConnectionPool( - ssl_context=ssl_context, - max_connections=limits.max_connections, - max_keepalive_connections=limits.max_keepalive_connections, - keepalive_expiry=limits.keepalive_expiry, - http1=http1, - http2=http2, - uds=uds, - local_address=local_address, - retries=retries, - socket_options=socket_options, - ) - elif proxy.url.scheme in ("http", "https"): - self._pool = httpcore.AsyncHTTPProxy( - proxy_url=httpcore.URL( - scheme=proxy.url.raw_scheme, - host=proxy.url.raw_host, - port=proxy.url.port, - target=proxy.url.raw_path, - ), - proxy_auth=proxy.raw_auth, - proxy_headers=proxy.headers.raw, - ssl_context=ssl_context, - max_connections=limits.max_connections, - max_keepalive_connections=limits.max_keepalive_connections, - keepalive_expiry=limits.keepalive_expiry, - 
http1=http1, - http2=http2, - socket_options=socket_options, - ) - elif proxy.url.scheme == "socks5": - try: - import socksio # noqa - except ImportError: # pragma: no cover - raise ImportError( - "Using SOCKS proxy, but the 'socksio' package is not installed. " - "Make sure to install httpx using `pip install httpx[socks]`." - ) from None - - self._pool = httpcore.AsyncSOCKSProxy( - proxy_url=httpcore.URL( - scheme=proxy.url.raw_scheme, - host=proxy.url.raw_host, - port=proxy.url.port, - target=proxy.url.raw_path, - ), - proxy_auth=proxy.raw_auth, - ssl_context=ssl_context, - max_connections=limits.max_connections, - max_keepalive_connections=limits.max_keepalive_connections, - keepalive_expiry=limits.keepalive_expiry, - http1=http1, - http2=http2, - ) - else: # pragma: no cover - raise ValueError( - f"Proxy protocol must be either 'http', 'https', or 'socks5', but got {proxy.url.scheme!r}." - ) - - async def __aenter__(self: A) -> A: # Use generics for subclass support. - await self._pool.__aenter__() - return self - - async def __aexit__( - self, - exc_type: typing.Optional[typing.Type[BaseException]] = None, - exc_value: typing.Optional[BaseException] = None, - traceback: typing.Optional[TracebackType] = None, - ) -> None: - with map_httpcore_exceptions(): - await self._pool.__aexit__(exc_type, exc_value, traceback) - - async def handle_async_request( - self, - request: Request, - ) -> Response: - assert isinstance(request.stream, AsyncByteStream) - - req = httpcore.Request( - method=request.method, - url=httpcore.URL( - scheme=request.url.raw_scheme, - host=request.url.raw_host, - port=request.url.port, - target=request.url.raw_path, - ), - headers=request.headers.raw, - content=request.stream, - extensions=request.extensions, - ) - with map_httpcore_exceptions(): - resp = await self._pool.handle_async_request(req) - - assert isinstance(resp.stream, typing.AsyncIterable) - - return Response( - status_code=resp.status, - headers=resp.headers, - stream=AsyncResponseStream(resp.stream), - extensions=resp.extensions, - ) - - async def aclose(self) -> None: - await self._pool.aclose() diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/importlib_resources/tests/test_files.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/importlib_resources/tests/test_files.py deleted file mode 100644 index 197a063ba43ffa7884b6789247468c4190fea457..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/importlib_resources/tests/test_files.py +++ /dev/null @@ -1,112 +0,0 @@ -import typing -import textwrap -import unittest -import warnings -import importlib -import contextlib - -import importlib_resources as resources -from ..abc import Traversable -from . import data01 -from . import util -from . 
import _path -from ._compat import os_helper, import_helper - - -@contextlib.contextmanager -def suppress_known_deprecation(): - with warnings.catch_warnings(record=True) as ctx: - warnings.simplefilter('default', category=DeprecationWarning) - yield ctx - - -class FilesTests: - def test_read_bytes(self): - files = resources.files(self.data) - actual = files.joinpath('utf-8.file').read_bytes() - assert actual == b'Hello, UTF-8 world!\n' - - def test_read_text(self): - files = resources.files(self.data) - actual = files.joinpath('utf-8.file').read_text(encoding='utf-8') - assert actual == 'Hello, UTF-8 world!\n' - - @unittest.skipUnless( - hasattr(typing, 'runtime_checkable'), - "Only suitable when typing supports runtime_checkable", - ) - def test_traversable(self): - assert isinstance(resources.files(self.data), Traversable) - - def test_old_parameter(self): - """ - Files used to take a 'package' parameter. Make sure anyone - passing by name is still supported. - """ - with suppress_known_deprecation(): - resources.files(package=self.data) - - -class OpenDiskTests(FilesTests, unittest.TestCase): - def setUp(self): - self.data = data01 - - -class OpenZipTests(FilesTests, util.ZipSetup, unittest.TestCase): - pass - - -class OpenNamespaceTests(FilesTests, unittest.TestCase): - def setUp(self): - from . import namespacedata01 - - self.data = namespacedata01 - - -class SiteDir: - def setUp(self): - self.fixtures = contextlib.ExitStack() - self.addCleanup(self.fixtures.close) - self.site_dir = self.fixtures.enter_context(os_helper.temp_dir()) - self.fixtures.enter_context(import_helper.DirsOnSysPath(self.site_dir)) - self.fixtures.enter_context(import_helper.CleanImport()) - - -class ModulesFilesTests(SiteDir, unittest.TestCase): - def test_module_resources(self): - """ - A module can have resources found adjacent to the module. - """ - spec = { - 'mod.py': '', - 'res.txt': 'resources are the best', - } - _path.build(spec, self.site_dir) - import mod - - actual = resources.files(mod).joinpath('res.txt').read_text(encoding='utf-8') - assert actual == spec['res.txt'] - - -class ImplicitContextFilesTests(SiteDir, unittest.TestCase): - def test_implicit_files(self): - """ - Without any parameter, files() will infer the location as the caller. 
- """ - spec = { - 'somepkg': { - '__init__.py': textwrap.dedent( - """ - import importlib_resources as res - val = res.files().joinpath('res.txt').read_text(encoding='utf-8') - """ - ), - 'res.txt': 'resources are the best', - }, - } - _path.build(spec, self.site_dir) - assert importlib.import_module('somepkg').val == 'resources are the best' - - -if __name__ == '__main__': - unittest.main() diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/mpl_toolkits/axisartist/tests/test_axislines.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/mpl_toolkits/axisartist/tests/test_axislines.py deleted file mode 100644 index b722316a5c0c01fff91be6c67dc7223f307ece2a..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/mpl_toolkits/axisartist/tests/test_axislines.py +++ /dev/null @@ -1,147 +0,0 @@ -import numpy as np -import matplotlib.pyplot as plt -from matplotlib.testing.decorators import image_comparison -from matplotlib.transforms import IdentityTransform - -from mpl_toolkits.axisartist.axislines import AxesZero, SubplotZero, Subplot -from mpl_toolkits.axisartist import Axes, SubplotHost - - -@image_comparison(['SubplotZero.png'], style='default') -def test_SubplotZero(): - # Remove this line when this test image is regenerated. - plt.rcParams['text.kerning_factor'] = 6 - - fig = plt.figure() - - ax = SubplotZero(fig, 1, 1, 1) - fig.add_subplot(ax) - - ax.axis["xzero"].set_visible(True) - ax.axis["xzero"].label.set_text("Axis Zero") - - for n in ["top", "right"]: - ax.axis[n].set_visible(False) - - xx = np.arange(0, 2 * np.pi, 0.01) - ax.plot(xx, np.sin(xx)) - ax.set_ylabel("Test") - - -@image_comparison(['Subplot.png'], style='default') -def test_Subplot(): - # Remove this line when this test image is regenerated. 
- plt.rcParams['text.kerning_factor'] = 6 - - fig = plt.figure() - - ax = Subplot(fig, 1, 1, 1) - fig.add_subplot(ax) - - xx = np.arange(0, 2 * np.pi, 0.01) - ax.plot(xx, np.sin(xx)) - ax.set_ylabel("Test") - - ax.axis["top"].major_ticks.set_tick_out(True) - ax.axis["bottom"].major_ticks.set_tick_out(True) - - ax.axis["bottom"].set_label("Tk0") - - -def test_Axes(): - fig = plt.figure() - ax = Axes(fig, [0.15, 0.1, 0.65, 0.8]) - fig.add_axes(ax) - ax.plot([1, 2, 3], [0, 1, 2]) - ax.set_xscale('log') - fig.canvas.draw() - - -@image_comparison(['ParasiteAxesAuxTrans_meshplot.png'], - remove_text=True, style='default', tol=0.075) -def test_ParasiteAxesAuxTrans(): - data = np.ones((6, 6)) - data[2, 2] = 2 - data[0, :] = 0 - data[-2, :] = 0 - data[:, 0] = 0 - data[:, -2] = 0 - x = np.arange(6) - y = np.arange(6) - xx, yy = np.meshgrid(x, y) - - funcnames = ['pcolor', 'pcolormesh', 'contourf'] - - fig = plt.figure() - for i, name in enumerate(funcnames): - - ax1 = SubplotHost(fig, 1, 3, i+1) - fig.add_subplot(ax1) - - ax2 = ax1.get_aux_axes(IdentityTransform(), viewlim_mode=None) - if name.startswith('pcolor'): - getattr(ax2, name)(xx, yy, data[:-1, :-1]) - else: - getattr(ax2, name)(xx, yy, data) - ax1.set_xlim((0, 5)) - ax1.set_ylim((0, 5)) - - ax2.contour(xx, yy, data, colors='k') - - -@image_comparison(['axisline_style.png'], remove_text=True, style='mpl20') -def test_axisline_style(): - fig = plt.figure(figsize=(2, 2)) - ax = fig.add_subplot(axes_class=AxesZero) - ax.axis["xzero"].set_axisline_style("-|>") - ax.axis["xzero"].set_visible(True) - ax.axis["yzero"].set_axisline_style("->") - ax.axis["yzero"].set_visible(True) - - for direction in ("left", "right", "bottom", "top"): - ax.axis[direction].set_visible(False) - - -@image_comparison(['axisline_style_size_color.png'], remove_text=True, - style='mpl20') -def test_axisline_style_size_color(): - fig = plt.figure(figsize=(2, 2)) - ax = fig.add_subplot(axes_class=AxesZero) - ax.axis["xzero"].set_axisline_style("-|>", size=2.0, facecolor='r') - ax.axis["xzero"].set_visible(True) - ax.axis["yzero"].set_axisline_style("->, size=1.5") - ax.axis["yzero"].set_visible(True) - - for direction in ("left", "right", "bottom", "top"): - ax.axis[direction].set_visible(False) - - -@image_comparison(['axisline_style_tight.png'], remove_text=True, - style='mpl20') -def test_axisline_style_tight(): - fig = plt.figure(figsize=(2, 2)) - ax = fig.add_subplot(axes_class=AxesZero) - ax.axis["xzero"].set_axisline_style("-|>", size=5, facecolor='g') - ax.axis["xzero"].set_visible(True) - ax.axis["yzero"].set_axisline_style("->, size=8") - ax.axis["yzero"].set_visible(True) - - for direction in ("left", "right", "bottom", "top"): - ax.axis[direction].set_visible(False) - - fig.tight_layout() - - -@image_comparison(['subplotzero_ylabel.png'], style='mpl20') -def test_subplotzero_ylabel(): - fig = plt.figure() - ax = fig.add_subplot(111, axes_class=SubplotZero) - - ax.set(xlim=(-3, 7), ylim=(-3, 7), xlabel="x", ylabel="y") - - zero_axis = ax.axis["xzero", "yzero"] - zero_axis.set_visible(True) # they are hidden by default - - ax.axis["left", "right", "bottom", "top"].set_visible(False) - - zero_axis.set_axisline_style("->") diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/frame/methods/test_compare.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/frame/methods/test_compare.py deleted file mode 100644 index a4d0a7068a3a650beb11529065d0b62ab702143b..0000000000000000000000000000000000000000 --- 
a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/frame/methods/test_compare.py +++ /dev/null @@ -1,305 +0,0 @@ -import numpy as np -import pytest - -from pandas.compat.numpy import np_version_gte1p25 - -import pandas as pd -import pandas._testing as tm - - -@pytest.mark.parametrize("align_axis", [0, 1, "index", "columns"]) -def test_compare_axis(align_axis): - # GH#30429 - df = pd.DataFrame( - {"col1": ["a", "b", "c"], "col2": [1.0, 2.0, np.nan], "col3": [1.0, 2.0, 3.0]}, - columns=["col1", "col2", "col3"], - ) - df2 = df.copy() - df2.loc[0, "col1"] = "c" - df2.loc[2, "col3"] = 4.0 - - result = df.compare(df2, align_axis=align_axis) - - if align_axis in (1, "columns"): - indices = pd.Index([0, 2]) - columns = pd.MultiIndex.from_product([["col1", "col3"], ["self", "other"]]) - expected = pd.DataFrame( - [["a", "c", np.nan, np.nan], [np.nan, np.nan, 3.0, 4.0]], - index=indices, - columns=columns, - ) - else: - indices = pd.MultiIndex.from_product([[0, 2], ["self", "other"]]) - columns = pd.Index(["col1", "col3"]) - expected = pd.DataFrame( - [["a", np.nan], ["c", np.nan], [np.nan, 3.0], [np.nan, 4.0]], - index=indices, - columns=columns, - ) - tm.assert_frame_equal(result, expected) - - -@pytest.mark.parametrize( - "keep_shape, keep_equal", - [ - (True, False), - (False, True), - (True, True), - # False, False case is already covered in test_compare_axis - ], -) -def test_compare_various_formats(keep_shape, keep_equal): - df = pd.DataFrame( - {"col1": ["a", "b", "c"], "col2": [1.0, 2.0, np.nan], "col3": [1.0, 2.0, 3.0]}, - columns=["col1", "col2", "col3"], - ) - df2 = df.copy() - df2.loc[0, "col1"] = "c" - df2.loc[2, "col3"] = 4.0 - - result = df.compare(df2, keep_shape=keep_shape, keep_equal=keep_equal) - - if keep_shape: - indices = pd.Index([0, 1, 2]) - columns = pd.MultiIndex.from_product( - [["col1", "col2", "col3"], ["self", "other"]] - ) - if keep_equal: - expected = pd.DataFrame( - [ - ["a", "c", 1.0, 1.0, 1.0, 1.0], - ["b", "b", 2.0, 2.0, 2.0, 2.0], - ["c", "c", np.nan, np.nan, 3.0, 4.0], - ], - index=indices, - columns=columns, - ) - else: - expected = pd.DataFrame( - [ - ["a", "c", np.nan, np.nan, np.nan, np.nan], - [np.nan, np.nan, np.nan, np.nan, np.nan, np.nan], - [np.nan, np.nan, np.nan, np.nan, 3.0, 4.0], - ], - index=indices, - columns=columns, - ) - else: - indices = pd.Index([0, 2]) - columns = pd.MultiIndex.from_product([["col1", "col3"], ["self", "other"]]) - expected = pd.DataFrame( - [["a", "c", 1.0, 1.0], ["c", "c", 3.0, 4.0]], index=indices, columns=columns - ) - tm.assert_frame_equal(result, expected) - - -def test_compare_with_equal_nulls(): - # We want to make sure two NaNs are considered the same - # and dropped where applicable - df = pd.DataFrame( - {"col1": ["a", "b", "c"], "col2": [1.0, 2.0, np.nan], "col3": [1.0, 2.0, 3.0]}, - columns=["col1", "col2", "col3"], - ) - df2 = df.copy() - df2.loc[0, "col1"] = "c" - - result = df.compare(df2) - indices = pd.Index([0]) - columns = pd.MultiIndex.from_product([["col1"], ["self", "other"]]) - expected = pd.DataFrame([["a", "c"]], index=indices, columns=columns) - tm.assert_frame_equal(result, expected) - - -def test_compare_with_non_equal_nulls(): - # We want to make sure the relevant NaNs do not get dropped - # even if the entire row or column are NaNs - df = pd.DataFrame( - {"col1": ["a", "b", "c"], "col2": [1.0, 2.0, np.nan], "col3": [1.0, 2.0, 3.0]}, - columns=["col1", "col2", "col3"], - ) - df2 = df.copy() - df2.loc[0, "col1"] = "c" - df2.loc[2, "col3"] = np.nan - - result = 
df.compare(df2) - - indices = pd.Index([0, 2]) - columns = pd.MultiIndex.from_product([["col1", "col3"], ["self", "other"]]) - expected = pd.DataFrame( - [["a", "c", np.nan, np.nan], [np.nan, np.nan, 3.0, np.nan]], - index=indices, - columns=columns, - ) - tm.assert_frame_equal(result, expected) - - -@pytest.mark.parametrize("align_axis", [0, 1]) -def test_compare_multi_index(align_axis): - df = pd.DataFrame( - {"col1": ["a", "b", "c"], "col2": [1.0, 2.0, np.nan], "col3": [1.0, 2.0, 3.0]} - ) - df.columns = pd.MultiIndex.from_arrays([["a", "a", "b"], ["col1", "col2", "col3"]]) - df.index = pd.MultiIndex.from_arrays([["x", "x", "y"], [0, 1, 2]]) - - df2 = df.copy() - df2.iloc[0, 0] = "c" - df2.iloc[2, 2] = 4.0 - - result = df.compare(df2, align_axis=align_axis) - - if align_axis == 0: - indices = pd.MultiIndex.from_arrays( - [["x", "x", "y", "y"], [0, 0, 2, 2], ["self", "other", "self", "other"]] - ) - columns = pd.MultiIndex.from_arrays([["a", "b"], ["col1", "col3"]]) - data = [["a", np.nan], ["c", np.nan], [np.nan, 3.0], [np.nan, 4.0]] - else: - indices = pd.MultiIndex.from_arrays([["x", "y"], [0, 2]]) - columns = pd.MultiIndex.from_arrays( - [ - ["a", "a", "b", "b"], - ["col1", "col1", "col3", "col3"], - ["self", "other", "self", "other"], - ] - ) - data = [["a", "c", np.nan, np.nan], [np.nan, np.nan, 3.0, 4.0]] - - expected = pd.DataFrame(data=data, index=indices, columns=columns) - tm.assert_frame_equal(result, expected) - - -def test_compare_unaligned_objects(): - # test DataFrames with different indices - msg = ( - r"Can only compare identically-labeled \(both index and columns\) DataFrame " - "objects" - ) - with pytest.raises(ValueError, match=msg): - df1 = pd.DataFrame([1, 2, 3], index=["a", "b", "c"]) - df2 = pd.DataFrame([1, 2, 3], index=["a", "b", "d"]) - df1.compare(df2) - - # test DataFrames with different shapes - msg = ( - r"Can only compare identically-labeled \(both index and columns\) DataFrame " - "objects" - ) - with pytest.raises(ValueError, match=msg): - df1 = pd.DataFrame(np.ones((3, 3))) - df2 = pd.DataFrame(np.zeros((2, 1))) - df1.compare(df2) - - -def test_compare_result_names(): - # GH 44354 - df1 = pd.DataFrame( - {"col1": ["a", "b", "c"], "col2": [1.0, 2.0, np.nan], "col3": [1.0, 2.0, 3.0]}, - ) - df2 = pd.DataFrame( - { - "col1": ["c", "b", "c"], - "col2": [1.0, 2.0, np.nan], - "col3": [1.0, 2.0, np.nan], - }, - ) - result = df1.compare(df2, result_names=("left", "right")) - expected = pd.DataFrame( - { - ("col1", "left"): {0: "a", 2: np.nan}, - ("col1", "right"): {0: "c", 2: np.nan}, - ("col3", "left"): {0: np.nan, 2: 3.0}, - ("col3", "right"): {0: np.nan, 2: np.nan}, - } - ) - tm.assert_frame_equal(result, expected) - - -@pytest.mark.parametrize( - "result_names", - [ - [1, 2], - "HK", - {"2": 2, "3": 3}, - 3, - 3.0, - ], -) -def test_invalid_input_result_names(result_names): - # GH 44354 - df1 = pd.DataFrame( - {"col1": ["a", "b", "c"], "col2": [1.0, 2.0, np.nan], "col3": [1.0, 2.0, 3.0]}, - ) - df2 = pd.DataFrame( - { - "col1": ["c", "b", "c"], - "col2": [1.0, 2.0, np.nan], - "col3": [1.0, 2.0, np.nan], - }, - ) - with pytest.raises( - TypeError, - match=( - f"Passing 'result_names' as a {type(result_names)} is not " - "supported. Provide 'result_names' as a tuple instead." 
- ), - ): - df1.compare(df2, result_names=result_names) - - -@pytest.mark.parametrize( - "val1,val2", - [(4, pd.NA), (pd.NA, pd.NA), (pd.NA, 4)], -) -def test_compare_ea_and_np_dtype(val1, val2): - # GH 48966 - arr = [4.0, val1] - ser = pd.Series([1, val2], dtype="Int64") - - df1 = pd.DataFrame({"a": arr, "b": [1.0, 2]}) - df2 = pd.DataFrame({"a": ser, "b": [1.0, 2]}) - expected = pd.DataFrame( - { - ("a", "self"): arr, - ("a", "other"): ser, - ("b", "self"): np.nan, - ("b", "other"): np.nan, - } - ) - if val1 is pd.NA and val2 is pd.NA: - # GH#18463 TODO: is this really the desired behavior? - expected.loc[1, ("a", "self")] = np.nan - - if val1 is pd.NA and np_version_gte1p25: - # can't compare with numpy array if it contains pd.NA - with pytest.raises(TypeError, match="boolean value of NA is ambiguous"): - result = df1.compare(df2, keep_shape=True) - else: - result = df1.compare(df2, keep_shape=True) - tm.assert_frame_equal(result, expected) - - -@pytest.mark.parametrize( - "df1_val,df2_val,diff_self,diff_other", - [ - (4, 3, 4, 3), - (4, 4, pd.NA, pd.NA), - (4, pd.NA, 4, pd.NA), - (pd.NA, pd.NA, pd.NA, pd.NA), - ], -) -def test_compare_nullable_int64_dtype(df1_val, df2_val, diff_self, diff_other): - # GH 48966 - df1 = pd.DataFrame({"a": pd.Series([df1_val, pd.NA], dtype="Int64"), "b": [1.0, 2]}) - df2 = df1.copy() - df2.loc[0, "a"] = df2_val - - expected = pd.DataFrame( - { - ("a", "self"): pd.Series([diff_self, pd.NA], dtype="Int64"), - ("a", "other"): pd.Series([diff_other, pd.NA], dtype="Int64"), - ("b", "self"): np.nan, - ("b", "other"): np.nan, - } - ) - result = df1.compare(df2, keep_shape=True) - tm.assert_frame_equal(result, expected) diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pip/_vendor/distlib/index.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pip/_vendor/distlib/index.py deleted file mode 100644 index b1fbbf8e8d2a47d0a7d2fe0b4568fd11f8be4c36..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pip/_vendor/distlib/index.py +++ /dev/null @@ -1,509 +0,0 @@ -# -*- coding: utf-8 -*- -# -# Copyright (C) 2013 Vinay Sajip. -# Licensed to the Python Software Foundation under a contributor agreement. -# See LICENSE.txt and CONTRIBUTORS.txt. -# -import hashlib -import logging -import os -import shutil -import subprocess -import tempfile -try: - from threading import Thread -except ImportError: - from dummy_threading import Thread - -from . import DistlibException -from .compat import (HTTPBasicAuthHandler, Request, HTTPPasswordMgr, - urlparse, build_opener, string_types) -from .util import zip_dir, ServerProxy - -logger = logging.getLogger(__name__) - -DEFAULT_INDEX = 'https://pypi.org/pypi' -DEFAULT_REALM = 'pypi' - -class PackageIndex(object): - """ - This class represents a package index compatible with PyPI, the Python - Package Index. - """ - - boundary = b'----------ThIs_Is_tHe_distlib_index_bouNdaRY_$' - - def __init__(self, url=None): - """ - Initialise an instance. - - :param url: The URL of the index. If not specified, the URL for PyPI is - used. 
- """ - self.url = url or DEFAULT_INDEX - self.read_configuration() - scheme, netloc, path, params, query, frag = urlparse(self.url) - if params or query or frag or scheme not in ('http', 'https'): - raise DistlibException('invalid repository: %s' % self.url) - self.password_handler = None - self.ssl_verifier = None - self.gpg = None - self.gpg_home = None - with open(os.devnull, 'w') as sink: - # Use gpg by default rather than gpg2, as gpg2 insists on - # prompting for passwords - for s in ('gpg', 'gpg2'): - try: - rc = subprocess.check_call([s, '--version'], stdout=sink, - stderr=sink) - if rc == 0: - self.gpg = s - break - except OSError: - pass - - def _get_pypirc_command(self): - """ - Get the distutils command for interacting with PyPI configurations. - :return: the command. - """ - from .util import _get_pypirc_command as cmd - return cmd() - - def read_configuration(self): - """ - Read the PyPI access configuration as supported by distutils. This populates - ``username``, ``password``, ``realm`` and ``url`` attributes from the - configuration. - """ - from .util import _load_pypirc - cfg = _load_pypirc(self) - self.username = cfg.get('username') - self.password = cfg.get('password') - self.realm = cfg.get('realm', 'pypi') - self.url = cfg.get('repository', self.url) - - def save_configuration(self): - """ - Save the PyPI access configuration. You must have set ``username`` and - ``password`` attributes before calling this method. - """ - self.check_credentials() - from .util import _store_pypirc - _store_pypirc(self) - - def check_credentials(self): - """ - Check that ``username`` and ``password`` have been set, and raise an - exception if not. - """ - if self.username is None or self.password is None: - raise DistlibException('username and password must be set') - pm = HTTPPasswordMgr() - _, netloc, _, _, _, _ = urlparse(self.url) - pm.add_password(self.realm, netloc, self.username, self.password) - self.password_handler = HTTPBasicAuthHandler(pm) - - def register(self, metadata): - """ - Register a distribution on PyPI, using the provided metadata. - - :param metadata: A :class:`Metadata` instance defining at least a name - and version number for the distribution to be - registered. - :return: The HTTP response received from PyPI upon submission of the - request. - """ - self.check_credentials() - metadata.validate() - d = metadata.todict() - d[':action'] = 'verify' - request = self.encode_request(d.items(), []) - response = self.send_request(request) - d[':action'] = 'submit' - request = self.encode_request(d.items(), []) - return self.send_request(request) - - def _reader(self, name, stream, outbuf): - """ - Thread runner for reading lines of from a subprocess into a buffer. - - :param name: The logical name of the stream (used for logging only). - :param stream: The stream to read from. This will typically a pipe - connected to the output stream of a subprocess. - :param outbuf: The list to append the read lines to. - """ - while True: - s = stream.readline() - if not s: - break - s = s.decode('utf-8').rstrip() - outbuf.append(s) - logger.debug('%s: %s' % (name, s)) - stream.close() - - def get_sign_command(self, filename, signer, sign_password, - keystore=None): - """ - Return a suitable command for signing a file. - - :param filename: The pathname to the file to be signed. - :param signer: The identifier of the signer of the file. - :param sign_password: The passphrase for the signer's - private key used for signing. 
- :param keystore: The path to a directory which contains the keys - used in verification. If not specified, the - instance's ``gpg_home`` attribute is used instead. - :return: The signing command as a list suitable to be - passed to :class:`subprocess.Popen`. - """ - cmd = [self.gpg, '--status-fd', '2', '--no-tty'] - if keystore is None: - keystore = self.gpg_home - if keystore: - cmd.extend(['--homedir', keystore]) - if sign_password is not None: - cmd.extend(['--batch', '--passphrase-fd', '0']) - td = tempfile.mkdtemp() - sf = os.path.join(td, os.path.basename(filename) + '.asc') - cmd.extend(['--detach-sign', '--armor', '--local-user', - signer, '--output', sf, filename]) - logger.debug('invoking: %s', ' '.join(cmd)) - return cmd, sf - - def run_command(self, cmd, input_data=None): - """ - Run a command in a child process , passing it any input data specified. - - :param cmd: The command to run. - :param input_data: If specified, this must be a byte string containing - data to be sent to the child process. - :return: A tuple consisting of the subprocess' exit code, a list of - lines read from the subprocess' ``stdout``, and a list of - lines read from the subprocess' ``stderr``. - """ - kwargs = { - 'stdout': subprocess.PIPE, - 'stderr': subprocess.PIPE, - } - if input_data is not None: - kwargs['stdin'] = subprocess.PIPE - stdout = [] - stderr = [] - p = subprocess.Popen(cmd, **kwargs) - # We don't use communicate() here because we may need to - # get clever with interacting with the command - t1 = Thread(target=self._reader, args=('stdout', p.stdout, stdout)) - t1.start() - t2 = Thread(target=self._reader, args=('stderr', p.stderr, stderr)) - t2.start() - if input_data is not None: - p.stdin.write(input_data) - p.stdin.close() - - p.wait() - t1.join() - t2.join() - return p.returncode, stdout, stderr - - def sign_file(self, filename, signer, sign_password, keystore=None): - """ - Sign a file. - - :param filename: The pathname to the file to be signed. - :param signer: The identifier of the signer of the file. - :param sign_password: The passphrase for the signer's - private key used for signing. - :param keystore: The path to a directory which contains the keys - used in signing. If not specified, the instance's - ``gpg_home`` attribute is used instead. - :return: The absolute pathname of the file where the signature is - stored. - """ - cmd, sig_file = self.get_sign_command(filename, signer, sign_password, - keystore) - rc, stdout, stderr = self.run_command(cmd, - sign_password.encode('utf-8')) - if rc != 0: - raise DistlibException('sign command failed with error ' - 'code %s' % rc) - return sig_file - - def upload_file(self, metadata, filename, signer=None, sign_password=None, - filetype='sdist', pyversion='source', keystore=None): - """ - Upload a release file to the index. - - :param metadata: A :class:`Metadata` instance defining at least a name - and version number for the file to be uploaded. - :param filename: The pathname of the file to be uploaded. - :param signer: The identifier of the signer of the file. - :param sign_password: The passphrase for the signer's - private key used for signing. - :param filetype: The type of the file being uploaded. This is the - distutils command which produced that file, e.g. - ``sdist`` or ``bdist_wheel``. - :param pyversion: The version of Python which the release relates - to. For code compatible with any Python, this would - be ``source``, otherwise it would be e.g. ``3.2``. 
- :param keystore: The path to a directory which contains the keys - used in signing. If not specified, the instance's - ``gpg_home`` attribute is used instead. - :return: The HTTP response received from PyPI upon submission of the - request. - """ - self.check_credentials() - if not os.path.exists(filename): - raise DistlibException('not found: %s' % filename) - metadata.validate() - d = metadata.todict() - sig_file = None - if signer: - if not self.gpg: - logger.warning('no signing program available - not signed') - else: - sig_file = self.sign_file(filename, signer, sign_password, - keystore) - with open(filename, 'rb') as f: - file_data = f.read() - md5_digest = hashlib.md5(file_data).hexdigest() - sha256_digest = hashlib.sha256(file_data).hexdigest() - d.update({ - ':action': 'file_upload', - 'protocol_version': '1', - 'filetype': filetype, - 'pyversion': pyversion, - 'md5_digest': md5_digest, - 'sha256_digest': sha256_digest, - }) - files = [('content', os.path.basename(filename), file_data)] - if sig_file: - with open(sig_file, 'rb') as f: - sig_data = f.read() - files.append(('gpg_signature', os.path.basename(sig_file), - sig_data)) - shutil.rmtree(os.path.dirname(sig_file)) - request = self.encode_request(d.items(), files) - return self.send_request(request) - - def upload_documentation(self, metadata, doc_dir): - """ - Upload documentation to the index. - - :param metadata: A :class:`Metadata` instance defining at least a name - and version number for the documentation to be - uploaded. - :param doc_dir: The pathname of the directory which contains the - documentation. This should be the directory that - contains the ``index.html`` for the documentation. - :return: The HTTP response received from PyPI upon submission of the - request. - """ - self.check_credentials() - if not os.path.isdir(doc_dir): - raise DistlibException('not a directory: %r' % doc_dir) - fn = os.path.join(doc_dir, 'index.html') - if not os.path.exists(fn): - raise DistlibException('not found: %r' % fn) - metadata.validate() - name, version = metadata.name, metadata.version - zip_data = zip_dir(doc_dir).getvalue() - fields = [(':action', 'doc_upload'), - ('name', name), ('version', version)] - files = [('content', name, zip_data)] - request = self.encode_request(fields, files) - return self.send_request(request) - - def get_verify_command(self, signature_filename, data_filename, - keystore=None): - """ - Return a suitable command for verifying a file. - - :param signature_filename: The pathname to the file containing the - signature. - :param data_filename: The pathname to the file containing the - signed data. - :param keystore: The path to a directory which contains the keys - used in verification. If not specified, the - instance's ``gpg_home`` attribute is used instead. - :return: The verifying command as a list suitable to be - passed to :class:`subprocess.Popen`. - """ - cmd = [self.gpg, '--status-fd', '2', '--no-tty'] - if keystore is None: - keystore = self.gpg_home - if keystore: - cmd.extend(['--homedir', keystore]) - cmd.extend(['--verify', signature_filename, data_filename]) - logger.debug('invoking: %s', ' '.join(cmd)) - return cmd - - def verify_signature(self, signature_filename, data_filename, - keystore=None): - """ - Verify a signature for a file. - - :param signature_filename: The pathname to the file containing the - signature. - :param data_filename: The pathname to the file containing the - signed data. 
- :param keystore: The path to a directory which contains the keys - used in verification. If not specified, the - instance's ``gpg_home`` attribute is used instead. - :return: True if the signature was verified, else False. - """ - if not self.gpg: - raise DistlibException('verification unavailable because gpg ' - 'unavailable') - cmd = self.get_verify_command(signature_filename, data_filename, - keystore) - rc, stdout, stderr = self.run_command(cmd) - if rc not in (0, 1): - raise DistlibException('verify command failed with error ' - 'code %s' % rc) - return rc == 0 - - def download_file(self, url, destfile, digest=None, reporthook=None): - """ - This is a convenience method for downloading a file from an URL. - Normally, this will be a file from the index, though currently - no check is made for this (i.e. a file can be downloaded from - anywhere). - - The method is just like the :func:`urlretrieve` function in the - standard library, except that it allows digest computation to be - done during download and checking that the downloaded data - matched any expected value. - - :param url: The URL of the file to be downloaded (assumed to be - available via an HTTP GET request). - :param destfile: The pathname where the downloaded file is to be - saved. - :param digest: If specified, this must be a (hasher, value) - tuple, where hasher is the algorithm used (e.g. - ``'md5'``) and ``value`` is the expected value. - :param reporthook: The same as for :func:`urlretrieve` in the - standard library. - """ - if digest is None: - digester = None - logger.debug('No digest specified') - else: - if isinstance(digest, (list, tuple)): - hasher, digest = digest - else: - hasher = 'md5' - digester = getattr(hashlib, hasher)() - logger.debug('Digest specified: %s' % digest) - # The following code is equivalent to urlretrieve. - # We need to do it this way so that we can compute the - # digest of the file as we go. - with open(destfile, 'wb') as dfp: - # addinfourl is not a context manager on 2.x - # so we have to use try/finally - sfp = self.send_request(Request(url)) - try: - headers = sfp.info() - blocksize = 8192 - size = -1 - read = 0 - blocknum = 0 - if "content-length" in headers: - size = int(headers["Content-Length"]) - if reporthook: - reporthook(blocknum, blocksize, size) - while True: - block = sfp.read(blocksize) - if not block: - break - read += len(block) - dfp.write(block) - if digester: - digester.update(block) - blocknum += 1 - if reporthook: - reporthook(blocknum, blocksize, size) - finally: - sfp.close() - - # check that we got the whole file, if we can - if size >= 0 and read < size: - raise DistlibException( - 'retrieval incomplete: got only %d out of %d bytes' - % (read, size)) - # if we have a digest, it must match. - if digester: - actual = digester.hexdigest() - if digest != actual: - raise DistlibException('%s digest mismatch for %s: expected ' - '%s, got %s' % (hasher, destfile, - digest, actual)) - logger.debug('Digest verified: %s', digest) - - def send_request(self, req): - """ - Send a standard library :class:`Request` to PyPI and return its - response. - - :param req: The request to send. - :return: The HTTP response from PyPI (a standard library HTTPResponse). 
- """ - handlers = [] - if self.password_handler: - handlers.append(self.password_handler) - if self.ssl_verifier: - handlers.append(self.ssl_verifier) - opener = build_opener(*handlers) - return opener.open(req) - - def encode_request(self, fields, files): - """ - Encode fields and files for posting to an HTTP server. - - :param fields: The fields to send as a list of (fieldname, value) - tuples. - :param files: The files to send as a list of (fieldname, filename, - file_bytes) tuple. - """ - # Adapted from packaging, which in turn was adapted from - # http://code.activestate.com/recipes/146306 - - parts = [] - boundary = self.boundary - for k, values in fields: - if not isinstance(values, (list, tuple)): - values = [values] - - for v in values: - parts.extend(( - b'--' + boundary, - ('Content-Disposition: form-data; name="%s"' % - k).encode('utf-8'), - b'', - v.encode('utf-8'))) - for key, filename, value in files: - parts.extend(( - b'--' + boundary, - ('Content-Disposition: form-data; name="%s"; filename="%s"' % - (key, filename)).encode('utf-8'), - b'', - value)) - - parts.extend((b'--' + boundary + b'--', b'')) - - body = b'\r\n'.join(parts) - ct = b'multipart/form-data; boundary=' + boundary - headers = { - 'Content-type': ct, - 'Content-length': str(len(body)) - } - return Request(self.url, body, headers) - - def search(self, terms, operator=None): - if isinstance(terms, string_types): - terms = {'name': terms} - rpc_proxy = ServerProxy(self.url, timeout=3.0) - try: - return rpc_proxy.search(terms, operator or 'and') - finally: - rpc_proxy('close')() diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pygments/lexers/_cl_builtins.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pygments/lexers/_cl_builtins.py deleted file mode 100644 index beb7b4d6f10254cbbf8c84585a79e92f6fe03fa1..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pygments/lexers/_cl_builtins.py +++ /dev/null @@ -1,231 +0,0 @@ -""" - pygments.lexers._cl_builtins - ~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - - ANSI Common Lisp builtins. - - :copyright: Copyright 2006-2023 by the Pygments team, see AUTHORS. - :license: BSD, see LICENSE for details. 
-""" - -BUILTIN_FUNCTIONS = { # 638 functions - '<', '<=', '=', '>', '>=', '-', '/', '/=', '*', '+', '1-', '1+', - 'abort', 'abs', 'acons', 'acos', 'acosh', 'add-method', 'adjoin', - 'adjustable-array-p', 'adjust-array', 'allocate-instance', - 'alpha-char-p', 'alphanumericp', 'append', 'apply', 'apropos', - 'apropos-list', 'aref', 'arithmetic-error-operands', - 'arithmetic-error-operation', 'array-dimension', 'array-dimensions', - 'array-displacement', 'array-element-type', 'array-has-fill-pointer-p', - 'array-in-bounds-p', 'arrayp', 'array-rank', 'array-row-major-index', - 'array-total-size', 'ash', 'asin', 'asinh', 'assoc', 'assoc-if', - 'assoc-if-not', 'atan', 'atanh', 'atom', 'bit', 'bit-and', 'bit-andc1', - 'bit-andc2', 'bit-eqv', 'bit-ior', 'bit-nand', 'bit-nor', 'bit-not', - 'bit-orc1', 'bit-orc2', 'bit-vector-p', 'bit-xor', 'boole', - 'both-case-p', 'boundp', 'break', 'broadcast-stream-streams', - 'butlast', 'byte', 'byte-position', 'byte-size', 'caaaar', 'caaadr', - 'caaar', 'caadar', 'caaddr', 'caadr', 'caar', 'cadaar', 'cadadr', - 'cadar', 'caddar', 'cadddr', 'caddr', 'cadr', 'call-next-method', 'car', - 'cdaaar', 'cdaadr', 'cdaar', 'cdadar', 'cdaddr', 'cdadr', 'cdar', - 'cddaar', 'cddadr', 'cddar', 'cdddar', 'cddddr', 'cdddr', 'cddr', 'cdr', - 'ceiling', 'cell-error-name', 'cerror', 'change-class', 'char', 'char<', - 'char<=', 'char=', 'char>', 'char>=', 'char/=', 'character', - 'characterp', 'char-code', 'char-downcase', 'char-equal', - 'char-greaterp', 'char-int', 'char-lessp', 'char-name', - 'char-not-equal', 'char-not-greaterp', 'char-not-lessp', 'char-upcase', - 'cis', 'class-name', 'class-of', 'clear-input', 'clear-output', - 'close', 'clrhash', 'code-char', 'coerce', 'compile', - 'compiled-function-p', 'compile-file', 'compile-file-pathname', - 'compiler-macro-function', 'complement', 'complex', 'complexp', - 'compute-applicable-methods', 'compute-restarts', 'concatenate', - 'concatenated-stream-streams', 'conjugate', 'cons', 'consp', - 'constantly', 'constantp', 'continue', 'copy-alist', 'copy-list', - 'copy-pprint-dispatch', 'copy-readtable', 'copy-seq', 'copy-structure', - 'copy-symbol', 'copy-tree', 'cos', 'cosh', 'count', 'count-if', - 'count-if-not', 'decode-float', 'decode-universal-time', 'delete', - 'delete-duplicates', 'delete-file', 'delete-if', 'delete-if-not', - 'delete-package', 'denominator', 'deposit-field', 'describe', - 'describe-object', 'digit-char', 'digit-char-p', 'directory', - 'directory-namestring', 'disassemble', 'documentation', 'dpb', - 'dribble', 'echo-stream-input-stream', 'echo-stream-output-stream', - 'ed', 'eighth', 'elt', 'encode-universal-time', 'endp', - 'enough-namestring', 'ensure-directories-exist', - 'ensure-generic-function', 'eq', 'eql', 'equal', 'equalp', 'error', - 'eval', 'evenp', 'every', 'exp', 'export', 'expt', 'fboundp', - 'fceiling', 'fdefinition', 'ffloor', 'fifth', 'file-author', - 'file-error-pathname', 'file-length', 'file-namestring', - 'file-position', 'file-string-length', 'file-write-date', - 'fill', 'fill-pointer', 'find', 'find-all-symbols', 'find-class', - 'find-if', 'find-if-not', 'find-method', 'find-package', 'find-restart', - 'find-symbol', 'finish-output', 'first', 'float', 'float-digits', - 'floatp', 'float-precision', 'float-radix', 'float-sign', 'floor', - 'fmakunbound', 'force-output', 'format', 'fourth', 'fresh-line', - 'fround', 'ftruncate', 'funcall', 'function-keywords', - 'function-lambda-expression', 'functionp', 'gcd', 'gensym', 'gentemp', - 'get', 'get-decoded-time', 
'get-dispatch-macro-character', 'getf', - 'gethash', 'get-internal-real-time', 'get-internal-run-time', - 'get-macro-character', 'get-output-stream-string', 'get-properties', - 'get-setf-expansion', 'get-universal-time', 'graphic-char-p', - 'hash-table-count', 'hash-table-p', 'hash-table-rehash-size', - 'hash-table-rehash-threshold', 'hash-table-size', 'hash-table-test', - 'host-namestring', 'identity', 'imagpart', 'import', - 'initialize-instance', 'input-stream-p', 'inspect', - 'integer-decode-float', 'integer-length', 'integerp', - 'interactive-stream-p', 'intern', 'intersection', - 'invalid-method-error', 'invoke-debugger', 'invoke-restart', - 'invoke-restart-interactively', 'isqrt', 'keywordp', 'last', 'lcm', - 'ldb', 'ldb-test', 'ldiff', 'length', 'lisp-implementation-type', - 'lisp-implementation-version', 'list', 'list*', 'list-all-packages', - 'listen', 'list-length', 'listp', 'load', - 'load-logical-pathname-translations', 'log', 'logand', 'logandc1', - 'logandc2', 'logbitp', 'logcount', 'logeqv', 'logical-pathname', - 'logical-pathname-translations', 'logior', 'lognand', 'lognor', - 'lognot', 'logorc1', 'logorc2', 'logtest', 'logxor', 'long-site-name', - 'lower-case-p', 'machine-instance', 'machine-type', 'machine-version', - 'macroexpand', 'macroexpand-1', 'macro-function', 'make-array', - 'make-broadcast-stream', 'make-concatenated-stream', 'make-condition', - 'make-dispatch-macro-character', 'make-echo-stream', 'make-hash-table', - 'make-instance', 'make-instances-obsolete', 'make-list', - 'make-load-form', 'make-load-form-saving-slots', 'make-package', - 'make-pathname', 'make-random-state', 'make-sequence', 'make-string', - 'make-string-input-stream', 'make-string-output-stream', 'make-symbol', - 'make-synonym-stream', 'make-two-way-stream', 'makunbound', 'map', - 'mapc', 'mapcan', 'mapcar', 'mapcon', 'maphash', 'map-into', 'mapl', - 'maplist', 'mask-field', 'max', 'member', 'member-if', 'member-if-not', - 'merge', 'merge-pathnames', 'method-combination-error', - 'method-qualifiers', 'min', 'minusp', 'mismatch', 'mod', - 'muffle-warning', 'name-char', 'namestring', 'nbutlast', 'nconc', - 'next-method-p', 'nintersection', 'ninth', 'no-applicable-method', - 'no-next-method', 'not', 'notany', 'notevery', 'nreconc', 'nreverse', - 'nset-difference', 'nset-exclusive-or', 'nstring-capitalize', - 'nstring-downcase', 'nstring-upcase', 'nsublis', 'nsubst', 'nsubst-if', - 'nsubst-if-not', 'nsubstitute', 'nsubstitute-if', 'nsubstitute-if-not', - 'nth', 'nthcdr', 'null', 'numberp', 'numerator', 'nunion', 'oddp', - 'open', 'open-stream-p', 'output-stream-p', 'package-error-package', - 'package-name', 'package-nicknames', 'packagep', - 'package-shadowing-symbols', 'package-used-by-list', 'package-use-list', - 'pairlis', 'parse-integer', 'parse-namestring', 'pathname', - 'pathname-device', 'pathname-directory', 'pathname-host', - 'pathname-match-p', 'pathname-name', 'pathnamep', 'pathname-type', - 'pathname-version', 'peek-char', 'phase', 'plusp', 'position', - 'position-if', 'position-if-not', 'pprint', 'pprint-dispatch', - 'pprint-fill', 'pprint-indent', 'pprint-linear', 'pprint-newline', - 'pprint-tab', 'pprint-tabular', 'prin1', 'prin1-to-string', 'princ', - 'princ-to-string', 'print', 'print-object', 'probe-file', 'proclaim', - 'provide', 'random', 'random-state-p', 'rassoc', 'rassoc-if', - 'rassoc-if-not', 'rational', 'rationalize', 'rationalp', 'read', - 'read-byte', 'read-char', 'read-char-no-hang', 'read-delimited-list', - 'read-from-string', 'read-line', 
'read-preserving-whitespace', - 'read-sequence', 'readtable-case', 'readtablep', 'realp', 'realpart', - 'reduce', 'reinitialize-instance', 'rem', 'remhash', 'remove', - 'remove-duplicates', 'remove-if', 'remove-if-not', 'remove-method', - 'remprop', 'rename-file', 'rename-package', 'replace', 'require', - 'rest', 'restart-name', 'revappend', 'reverse', 'room', 'round', - 'row-major-aref', 'rplaca', 'rplacd', 'sbit', 'scale-float', 'schar', - 'search', 'second', 'set', 'set-difference', - 'set-dispatch-macro-character', 'set-exclusive-or', - 'set-macro-character', 'set-pprint-dispatch', 'set-syntax-from-char', - 'seventh', 'shadow', 'shadowing-import', 'shared-initialize', - 'short-site-name', 'signal', 'signum', 'simple-bit-vector-p', - 'simple-condition-format-arguments', 'simple-condition-format-control', - 'simple-string-p', 'simple-vector-p', 'sin', 'sinh', 'sixth', 'sleep', - 'slot-boundp', 'slot-exists-p', 'slot-makunbound', 'slot-missing', - 'slot-unbound', 'slot-value', 'software-type', 'software-version', - 'some', 'sort', 'special-operator-p', 'sqrt', 'stable-sort', - 'standard-char-p', 'store-value', 'stream-element-type', - 'stream-error-stream', 'stream-external-format', 'streamp', 'string', - 'string<', 'string<=', 'string=', 'string>', 'string>=', 'string/=', - 'string-capitalize', 'string-downcase', 'string-equal', - 'string-greaterp', 'string-left-trim', 'string-lessp', - 'string-not-equal', 'string-not-greaterp', 'string-not-lessp', - 'stringp', 'string-right-trim', 'string-trim', 'string-upcase', - 'sublis', 'subseq', 'subsetp', 'subst', 'subst-if', 'subst-if-not', - 'substitute', 'substitute-if', 'substitute-if-not', 'subtypep','svref', - 'sxhash', 'symbol-function', 'symbol-name', 'symbolp', 'symbol-package', - 'symbol-plist', 'symbol-value', 'synonym-stream-symbol', 'syntax:', - 'tailp', 'tan', 'tanh', 'tenth', 'terpri', 'third', - 'translate-logical-pathname', 'translate-pathname', 'tree-equal', - 'truename', 'truncate', 'two-way-stream-input-stream', - 'two-way-stream-output-stream', 'type-error-datum', - 'type-error-expected-type', 'type-of', 'typep', 'unbound-slot-instance', - 'unexport', 'unintern', 'union', 'unread-char', 'unuse-package', - 'update-instance-for-different-class', - 'update-instance-for-redefined-class', 'upgraded-array-element-type', - 'upgraded-complex-part-type', 'upper-case-p', 'use-package', - 'user-homedir-pathname', 'use-value', 'values', 'values-list', 'vector', - 'vectorp', 'vector-pop', 'vector-push', 'vector-push-extend', 'warn', - 'wild-pathname-p', 'write', 'write-byte', 'write-char', 'write-line', - 'write-sequence', 'write-string', 'write-to-string', 'yes-or-no-p', - 'y-or-n-p', 'zerop', -} - -SPECIAL_FORMS = { - 'block', 'catch', 'declare', 'eval-when', 'flet', 'function', 'go', 'if', - 'labels', 'lambda', 'let', 'let*', 'load-time-value', 'locally', 'macrolet', - 'multiple-value-call', 'multiple-value-prog1', 'progn', 'progv', 'quote', - 'return-from', 'setq', 'symbol-macrolet', 'tagbody', 'the', 'throw', - 'unwind-protect', -} - -MACROS = { - 'and', 'assert', 'call-method', 'case', 'ccase', 'check-type', 'cond', - 'ctypecase', 'decf', 'declaim', 'defclass', 'defconstant', 'defgeneric', - 'define-compiler-macro', 'define-condition', 'define-method-combination', - 'define-modify-macro', 'define-setf-expander', 'define-symbol-macro', - 'defmacro', 'defmethod', 'defpackage', 'defparameter', 'defsetf', - 'defstruct', 'deftype', 'defun', 'defvar', 'destructuring-bind', 'do', - 'do*', 'do-all-symbols', 'do-external-symbols', 'dolist', 
'do-symbols', - 'dotimes', 'ecase', 'etypecase', 'formatter', 'handler-bind', - 'handler-case', 'ignore-errors', 'incf', 'in-package', 'lambda', 'loop', - 'loop-finish', 'make-method', 'multiple-value-bind', 'multiple-value-list', - 'multiple-value-setq', 'nth-value', 'or', 'pop', - 'pprint-exit-if-list-exhausted', 'pprint-logical-block', 'pprint-pop', - 'print-unreadable-object', 'prog', 'prog*', 'prog1', 'prog2', 'psetf', - 'psetq', 'push', 'pushnew', 'remf', 'restart-bind', 'restart-case', - 'return', 'rotatef', 'setf', 'shiftf', 'step', 'time', 'trace', 'typecase', - 'unless', 'untrace', 'when', 'with-accessors', 'with-compilation-unit', - 'with-condition-restarts', 'with-hash-table-iterator', - 'with-input-from-string', 'with-open-file', 'with-open-stream', - 'with-output-to-string', 'with-package-iterator', 'with-simple-restart', - 'with-slots', 'with-standard-io-syntax', -} - -LAMBDA_LIST_KEYWORDS = { - '&allow-other-keys', '&aux', '&body', '&environment', '&key', '&optional', - '&rest', '&whole', -} - -DECLARATIONS = { - 'dynamic-extent', 'ignore', 'optimize', 'ftype', 'inline', 'special', - 'ignorable', 'notinline', 'type', -} - -BUILTIN_TYPES = { - 'atom', 'boolean', 'base-char', 'base-string', 'bignum', 'bit', - 'compiled-function', 'extended-char', 'fixnum', 'keyword', 'nil', - 'signed-byte', 'short-float', 'single-float', 'double-float', 'long-float', - 'simple-array', 'simple-base-string', 'simple-bit-vector', 'simple-string', - 'simple-vector', 'standard-char', 'unsigned-byte', - - # Condition Types - 'arithmetic-error', 'cell-error', 'condition', 'control-error', - 'division-by-zero', 'end-of-file', 'error', 'file-error', - 'floating-point-inexact', 'floating-point-overflow', - 'floating-point-underflow', 'floating-point-invalid-operation', - 'parse-error', 'package-error', 'print-not-readable', 'program-error', - 'reader-error', 'serious-condition', 'simple-condition', 'simple-error', - 'simple-type-error', 'simple-warning', 'stream-error', 'storage-condition', - 'style-warning', 'type-error', 'unbound-variable', 'unbound-slot', - 'undefined-function', 'warning', -} - -BUILTIN_CLASSES = { - 'array', 'broadcast-stream', 'bit-vector', 'built-in-class', 'character', - 'class', 'complex', 'concatenated-stream', 'cons', 'echo-stream', - 'file-stream', 'float', 'function', 'generic-function', 'hash-table', - 'integer', 'list', 'logical-pathname', 'method-combination', 'method', - 'null', 'number', 'package', 'pathname', 'ratio', 'rational', 'readtable', - 'real', 'random-state', 'restart', 'sequence', 'standard-class', - 'standard-generic-function', 'standard-method', 'standard-object', - 'string-stream', 'stream', 'string', 'structure-class', 'structure-object', - 'symbol', 'synonym-stream', 't', 'two-way-stream', 'vector', -} diff --git a/spaces/quidiaMuxgu/Expedit-SAM/ACDSystem All Products Multi Keygen V3.6 ? CORE.md b/spaces/quidiaMuxgu/Expedit-SAM/ACDSystem All Products Multi Keygen V3.6 ? CORE.md deleted file mode 100644 index 0ce2df7aae0e635d10a371a7ab2f073dc1fe5de5..0000000000000000000000000000000000000000 --- a/spaces/quidiaMuxgu/Expedit-SAM/ACDSystem All Products Multi Keygen V3.6 ? CORE.md +++ /dev/null @@ -1,6 +0,0 @@ -

          ACDSystem All Products Multi Keygen v3.6 – CORE


          Download Zip »»» https://geags.com/2uCqCU



          -
          -
          -

          diff --git a/spaces/r3gm/Aesthetic_RVC_Inference_HF/lib/infer/infer_libs/uvr5_pack/infer_uvr5.py b/spaces/r3gm/Aesthetic_RVC_Inference_HF/lib/infer/infer_libs/uvr5_pack/infer_uvr5.py deleted file mode 100644 index 9b58f05ef69d1ea96ccf5d3d018b27acbb1c3b32..0000000000000000000000000000000000000000 --- a/spaces/r3gm/Aesthetic_RVC_Inference_HF/lib/infer/infer_libs/uvr5_pack/infer_uvr5.py +++ /dev/null @@ -1,355 +0,0 @@ -import os, sys, torch, warnings - -now_dir = os.getcwd() -sys.path.append(now_dir) - -warnings.filterwarnings("ignore") -import librosa -import numpy as np -from lib.uvr5_pack.lib_v5 import spec_utils -from lib.uvr5_pack.utils import inference -from lib.uvr5_pack.lib_v5.model_param_init import ModelParameters -import soundfile as sf -from lib.uvr5_pack.lib_v5.nets_new import CascadedNet -from lib.uvr5_pack.lib_v5 import nets_61968KB as nets - - -class _audio_pre_: - def __init__(self, agg, model_path, device, is_half): - self.model_path = model_path - self.device = device - self.data = { - # Processing Options - "postprocess": False, - "tta": False, - # Constants - "window_size": 512, - "agg": agg, - "high_end_process": "mirroring", - } - mp = ModelParameters("lib/uvr5_pack/lib_v5/modelparams/4band_v2.json") - model = nets.CascadedASPPNet(mp.param["bins"] * 2) - cpk = torch.load(model_path, map_location="cpu") - model.load_state_dict(cpk) - model.eval() - if is_half: - model = model.half().to(device) - else: - model = model.to(device) - - self.mp = mp - self.model = model - - def _path_audio_(self, music_file, ins_root=None, vocal_root=None, format="flac"): - if ins_root is None and vocal_root is None: - return "No save root." - name = os.path.basename(music_file) - if ins_root is not None: - os.makedirs(ins_root, exist_ok=True) - if vocal_root is not None: - os.makedirs(vocal_root, exist_ok=True) - X_wave, y_wave, X_spec_s, y_spec_s = {}, {}, {}, {} - bands_n = len(self.mp.param["band"]) - # print(bands_n) - for d in range(bands_n, 0, -1): - bp = self.mp.param["band"][d] - if d == bands_n: # high-end band - ( - X_wave[d], - _, - ) = librosa.core.load( - music_file, - bp["sr"], - False, - dtype=np.float32, - res_type=bp["res_type"], - ) - if X_wave[d].ndim == 1: - X_wave[d] = np.asfortranarray([X_wave[d], X_wave[d]]) - else: # lower bands - X_wave[d] = librosa.core.resample( - X_wave[d + 1], - self.mp.param["band"][d + 1]["sr"], - bp["sr"], - res_type=bp["res_type"], - ) - # Stft of wave source - X_spec_s[d] = spec_utils.wave_to_spectrogram_mt( - X_wave[d], - bp["hl"], - bp["n_fft"], - self.mp.param["mid_side"], - self.mp.param["mid_side_b2"], - self.mp.param["reverse"], - ) - # pdb.set_trace() - if d == bands_n and self.data["high_end_process"] != "none": - input_high_end_h = (bp["n_fft"] // 2 - bp["crop_stop"]) + ( - self.mp.param["pre_filter_stop"] - self.mp.param["pre_filter_start"] - ) - input_high_end = X_spec_s[d][ - :, bp["n_fft"] // 2 - input_high_end_h : bp["n_fft"] // 2, : - ] - - X_spec_m = spec_utils.combine_spectrograms(X_spec_s, self.mp) - aggresive_set = float(self.data["agg"] / 100) - aggressiveness = { - "value": aggresive_set, - "split_bin": self.mp.param["band"][1]["crop_stop"], - } - with torch.no_grad(): - pred, X_mag, X_phase = inference( - X_spec_m, self.device, self.model, aggressiveness, self.data - ) - # Postprocess - if self.data["postprocess"]: - pred_inv = np.clip(X_mag - pred, 0, np.inf) - pred = spec_utils.mask_silence(pred, pred_inv) - y_spec_m = pred * X_phase - v_spec_m = X_spec_m - y_spec_m - - if ins_root is not None: - if 
self.data["high_end_process"].startswith("mirroring"): - input_high_end_ = spec_utils.mirroring( - self.data["high_end_process"], y_spec_m, input_high_end, self.mp - ) - wav_instrument = spec_utils.cmb_spectrogram_to_wave( - y_spec_m, self.mp, input_high_end_h, input_high_end_ - ) - else: - wav_instrument = spec_utils.cmb_spectrogram_to_wave(y_spec_m, self.mp) - print("%s instruments done" % name) - if format in ["wav", "flac"]: - sf.write( - os.path.join( - ins_root, - "instrument_{}_{}.{}".format(name, self.data["agg"], format), - ), - (np.array(wav_instrument) * 32768).astype("int16"), - self.mp.param["sr"], - ) # - else: - path = os.path.join( - ins_root, "instrument_{}_{}.wav".format(name, self.data["agg"]) - ) - sf.write( - path, - (np.array(wav_instrument) * 32768).astype("int16"), - self.mp.param["sr"], - ) - if os.path.exists(path): - os.system( - "ffmpeg -i %s -vn %s -q:a 2 -y" - % (path, path[:-4] + ".%s" % format) - ) - if vocal_root is not None: - if self.data["high_end_process"].startswith("mirroring"): - input_high_end_ = spec_utils.mirroring( - self.data["high_end_process"], v_spec_m, input_high_end, self.mp - ) - wav_vocals = spec_utils.cmb_spectrogram_to_wave( - v_spec_m, self.mp, input_high_end_h, input_high_end_ - ) - else: - wav_vocals = spec_utils.cmb_spectrogram_to_wave(v_spec_m, self.mp) - print("%s vocals done" % name) - if format in ["wav", "flac"]: - sf.write( - os.path.join( - vocal_root, - "vocal_{}_{}.{}".format(name, self.data["agg"], format), - ), - (np.array(wav_vocals) * 32768).astype("int16"), - self.mp.param["sr"], - ) - else: - path = os.path.join( - vocal_root, "vocal_{}_{}.wav".format(name, self.data["agg"]) - ) - sf.write( - path, - (np.array(wav_vocals) * 32768).astype("int16"), - self.mp.param["sr"], - ) - if os.path.exists(path): - os.system( - "ffmpeg -i %s -vn %s -q:a 2 -y" - % (path, path[:-4] + ".%s" % format) - ) - - -class _audio_pre_new: - def __init__(self, agg, model_path, device, is_half): - self.model_path = model_path - self.device = device - self.data = { - # Processing Options - "postprocess": False, - "tta": False, - # Constants - "window_size": 512, - "agg": agg, - "high_end_process": "mirroring", - } - mp = ModelParameters("lib/uvr5_pack/lib_v5/modelparams/4band_v3.json") - nout = 64 if "DeReverb" in model_path else 48 - model = CascadedNet(mp.param["bins"] * 2, nout) - cpk = torch.load(model_path, map_location="cpu") - model.load_state_dict(cpk) - model.eval() - if is_half: - model = model.half().to(device) - else: - model = model.to(device) - - self.mp = mp - self.model = model - - def _path_audio_( - self, music_file, vocal_root=None, ins_root=None, format="flac" - ): # 3个VR模型vocal和ins是反的 - if ins_root is None and vocal_root is None: - return "No save root." 
- name = os.path.basename(music_file) - if ins_root is not None: - os.makedirs(ins_root, exist_ok=True) - if vocal_root is not None: - os.makedirs(vocal_root, exist_ok=True) - X_wave, y_wave, X_spec_s, y_spec_s = {}, {}, {}, {} - bands_n = len(self.mp.param["band"]) - # print(bands_n) - for d in range(bands_n, 0, -1): - bp = self.mp.param["band"][d] - if d == bands_n: # high-end band - ( - X_wave[d], - _, - ) = librosa.core.load( - music_file, - bp["sr"], - False, - dtype=np.float32, - res_type=bp["res_type"], - ) - if X_wave[d].ndim == 1: - X_wave[d] = np.asfortranarray([X_wave[d], X_wave[d]]) - else: # lower bands - X_wave[d] = librosa.core.resample( - X_wave[d + 1], - self.mp.param["band"][d + 1]["sr"], - bp["sr"], - res_type=bp["res_type"], - ) - # Stft of wave source - X_spec_s[d] = spec_utils.wave_to_spectrogram_mt( - X_wave[d], - bp["hl"], - bp["n_fft"], - self.mp.param["mid_side"], - self.mp.param["mid_side_b2"], - self.mp.param["reverse"], - ) - # pdb.set_trace() - if d == bands_n and self.data["high_end_process"] != "none": - input_high_end_h = (bp["n_fft"] // 2 - bp["crop_stop"]) + ( - self.mp.param["pre_filter_stop"] - self.mp.param["pre_filter_start"] - ) - input_high_end = X_spec_s[d][ - :, bp["n_fft"] // 2 - input_high_end_h : bp["n_fft"] // 2, : - ] - - X_spec_m = spec_utils.combine_spectrograms(X_spec_s, self.mp) - aggresive_set = float(self.data["agg"] / 100) - aggressiveness = { - "value": aggresive_set, - "split_bin": self.mp.param["band"][1]["crop_stop"], - } - with torch.no_grad(): - pred, X_mag, X_phase = inference( - X_spec_m, self.device, self.model, aggressiveness, self.data - ) - # Postprocess - if self.data["postprocess"]: - pred_inv = np.clip(X_mag - pred, 0, np.inf) - pred = spec_utils.mask_silence(pred, pred_inv) - y_spec_m = pred * X_phase - v_spec_m = X_spec_m - y_spec_m - - if ins_root is not None: - if self.data["high_end_process"].startswith("mirroring"): - input_high_end_ = spec_utils.mirroring( - self.data["high_end_process"], y_spec_m, input_high_end, self.mp - ) - wav_instrument = spec_utils.cmb_spectrogram_to_wave( - y_spec_m, self.mp, input_high_end_h, input_high_end_ - ) - else: - wav_instrument = spec_utils.cmb_spectrogram_to_wave(y_spec_m, self.mp) - print("%s instruments done" % name) - if format in ["wav", "flac"]: - sf.write( - os.path.join( - ins_root, - "instrument_{}_{}.{}".format(name, self.data["agg"], format), - ), - (np.array(wav_instrument) * 32768).astype("int16"), - self.mp.param["sr"], - ) # - else: - path = os.path.join( - ins_root, "instrument_{}_{}.wav".format(name, self.data["agg"]) - ) - sf.write( - path, - (np.array(wav_instrument) * 32768).astype("int16"), - self.mp.param["sr"], - ) - if os.path.exists(path): - os.system( - "ffmpeg -i %s -vn %s -q:a 2 -y" - % (path, path[:-4] + ".%s" % format) - ) - if vocal_root is not None: - if self.data["high_end_process"].startswith("mirroring"): - input_high_end_ = spec_utils.mirroring( - self.data["high_end_process"], v_spec_m, input_high_end, self.mp - ) - wav_vocals = spec_utils.cmb_spectrogram_to_wave( - v_spec_m, self.mp, input_high_end_h, input_high_end_ - ) - else: - wav_vocals = spec_utils.cmb_spectrogram_to_wave(v_spec_m, self.mp) - print("%s vocals done" % name) - if format in ["wav", "flac"]: - sf.write( - os.path.join( - vocal_root, - "vocal_{}_{}.{}".format(name, self.data["agg"], format), - ), - (np.array(wav_vocals) * 32768).astype("int16"), - self.mp.param["sr"], - ) - else: - path = os.path.join( - vocal_root, "vocal_{}_{}.wav".format(name, self.data["agg"]) - ) - 
sf.write( - path, - (np.array(wav_vocals) * 32768).astype("int16"), - self.mp.param["sr"], - ) - if os.path.exists(path): - os.system( - "ffmpeg -i %s -vn %s -q:a 2 -y" - % (path, path[:-4] + ".%s" % format) - ) - - -if __name__ == "__main__": - device = "cuda" - is_half = True - model_path = "assets/uvr5_weights/DeEchoNormal.pth" - pre_fun = _audio_pre_new(model_path=model_path, device=device, is_half=True, agg=10) - audio_path = "雪雪伴奏对消HP5.wav" - save_path = "opt" - pre_fun._path_audio_(audio_path, save_path, save_path) diff --git a/spaces/radames/Candle-Phi-1.5-Wasm/build/m_bg.wasm.d.ts b/spaces/radames/Candle-Phi-1.5-Wasm/build/m_bg.wasm.d.ts deleted file mode 100644 index 4b90d93e2860eeed2a740629b9d6bea23de57d9c..0000000000000000000000000000000000000000 --- a/spaces/radames/Candle-Phi-1.5-Wasm/build/m_bg.wasm.d.ts +++ /dev/null @@ -1,14 +0,0 @@ -/* tslint:disable */ -/* eslint-disable */ -export const memory: WebAssembly.Memory; -export function __wbg_model_free(a: number): void; -export function model_load(a: number, b: number, c: number, d: number, e: number, f: number, g: number, h: number): void; -export function model_init_with_prompt(a: number, b: number, c: number, d: number, e: number, f: number, g: number, h: number, i: number): void; -export function model_next_token(a: number, b: number): void; -export function main(a: number, b: number): number; -export function __wbindgen_add_to_stack_pointer(a: number): number; -export function __wbindgen_malloc(a: number, b: number): number; -export function __wbindgen_realloc(a: number, b: number, c: number, d: number): number; -export function __wbindgen_free(a: number, b: number, c: number): void; -export function __wbindgen_exn_store(a: number): void; -export function __wbindgen_start(): void; diff --git a/spaces/radames/PIFu-Clothed-Human-Digitization/PIFu/scripts/test.sh b/spaces/radames/PIFu-Clothed-Human-Digitization/PIFu/scripts/test.sh deleted file mode 100644 index a7a3d7ec6d2a3572bbb699f935aefd8c575e768e..0000000000000000000000000000000000000000 --- a/spaces/radames/PIFu-Clothed-Human-Digitization/PIFu/scripts/test.sh +++ /dev/null @@ -1,39 +0,0 @@ -#!/usr/bin/env bash -set -ex - -# Training -GPU_ID=0 -DISPLAY_ID=$((GPU_ID*10+10)) -NAME='spaces_demo' - -# Network configuration - -BATCH_SIZE=1 -MLP_DIM='257 1024 512 256 128 1' -MLP_DIM_COLOR='513 1024 512 256 128 3' - -# Reconstruction resolution -# NOTE: one can change here to reconstruct mesh in a different resolution. 
-# VOL_RES=256 - -# CHECKPOINTS_NETG_PATH='./checkpoints/net_G' -# CHECKPOINTS_NETC_PATH='./checkpoints/net_C' - -# TEST_FOLDER_PATH='./sample_images' - -# command -CUDA_VISIBLE_DEVICES=${GPU_ID} python ./apps/eval_spaces.py \ - --name ${NAME} \ - --batch_size ${BATCH_SIZE} \ - --mlp_dim ${MLP_DIM} \ - --mlp_dim_color ${MLP_DIM_COLOR} \ - --num_stack 4 \ - --num_hourglass 2 \ - --resolution ${VOL_RES} \ - --hg_down 'ave_pool' \ - --norm 'group' \ - --norm_color 'group' \ - --load_netG_checkpoint_path ${CHECKPOINTS_NETG_PATH} \ - --load_netC_checkpoint_path ${CHECKPOINTS_NETC_PATH} \ - --results_path ${RESULTS_PATH} \ - --img_path ${INPUT_IMAGE_PATH} \ No newline at end of file diff --git a/spaces/radames/Real-Time-Latent-Consistency-Model/app-img2img.py b/spaces/radames/Real-Time-Latent-Consistency-Model/app-img2img.py deleted file mode 100644 index b9021816b95429a81545c7ccb6b26f9d056b72d5..0000000000000000000000000000000000000000 --- a/spaces/radames/Real-Time-Latent-Consistency-Model/app-img2img.py +++ /dev/null @@ -1,262 +0,0 @@ -import asyncio -import json -import logging -import traceback -from pydantic import BaseModel - -from fastapi import FastAPI, WebSocket, HTTPException, WebSocketDisconnect -from fastapi.middleware.cors import CORSMiddleware -from fastapi.responses import StreamingResponse, JSONResponse -from fastapi.staticfiles import StaticFiles - -from diffusers import AutoPipelineForImage2Image, AutoencoderTiny -from compel import Compel -import torch - -try: - import intel_extension_for_pytorch as ipex -except: - pass -from PIL import Image -import numpy as np -import gradio as gr -import io -import uuid -import os -import time -import psutil - -MAX_QUEUE_SIZE = int(os.environ.get("MAX_QUEUE_SIZE", 0)) -TIMEOUT = float(os.environ.get("TIMEOUT", 0)) -SAFETY_CHECKER = os.environ.get("SAFETY_CHECKER", None) -TORCH_COMPILE = os.environ.get("TORCH_COMPILE", None) - -WIDTH = 512 -HEIGHT = 512 -# disable tiny autoencoder for better quality speed tradeoff -USE_TINY_AUTOENCODER = True - -# check if MPS is available OSX only M1/M2/M3 chips -mps_available = hasattr(torch.backends, "mps") and torch.backends.mps.is_available() -xpu_available = hasattr(torch, "xpu") and torch.xpu.is_available() -device = torch.device( - "cuda" if torch.cuda.is_available() else "xpu" if xpu_available else "cpu" -) -torch_device = device - -# change to torch.float16 to save GPU memory -torch_dtype = torch.float32 - -print(f"TIMEOUT: {TIMEOUT}") -print(f"SAFETY_CHECKER: {SAFETY_CHECKER}") -print(f"MAX_QUEUE_SIZE: {MAX_QUEUE_SIZE}") -print(f"device: {device}") - -if mps_available: - device = torch.device("mps") - torch_device = "cpu" - torch_dtype = torch.float32 - -if SAFETY_CHECKER == "True": - pipe = AutoPipelineForImage2Image.from_pretrained( - "SimianLuo/LCM_Dreamshaper_v7", - ) -else: - pipe = AutoPipelineForImage2Image.from_pretrained( - "SimianLuo/LCM_Dreamshaper_v7", - safety_checker=None, - ) - -if USE_TINY_AUTOENCODER: - pipe.vae = AutoencoderTiny.from_pretrained( - "madebyollin/taesd", torch_dtype=torch_dtype, use_safetensors=True - ) -pipe.set_progress_bar_config(disable=True) -pipe.to(device=torch_device, dtype=torch_dtype).to(device) -pipe.unet.to(memory_format=torch.channels_last) - -if psutil.virtual_memory().total < 64 * 1024**3: - pipe.enable_attention_slicing() - -if TORCH_COMPILE: - pipe.unet = torch.compile(pipe.unet, mode="reduce-overhead", fullgraph=True) - pipe.vae = torch.compile(pipe.vae, mode="reduce-overhead", fullgraph=True) - - pipe(prompt="warmup", image=[Image.new("RGB", 
(512, 512))]) - -compel_proc = Compel( - tokenizer=pipe.tokenizer, - text_encoder=pipe.text_encoder, - truncate_long_prompts=False, -) -user_queue_map = {} - - -class InputParams(BaseModel): - seed: int = 2159232 - prompt: str - guidance_scale: float = 8.0 - strength: float = 0.5 - steps: int = 4 - lcm_steps: int = 50 - width: int = WIDTH - height: int = HEIGHT - -def predict(input_image: Image.Image, params: InputParams, prompt_embeds: torch.Tensor = None): - generator = torch.manual_seed(params.seed) - results = pipe( - prompt_embeds=prompt_embeds, - generator=generator, - image=input_image, - strength=params.strength, - num_inference_steps=params.steps, - guidance_scale=params.guidance_scale, - width=params.width, - height=params.height, - original_inference_steps=params.lcm_steps, - output_type="pil", - ) - nsfw_content_detected = ( - results.nsfw_content_detected[0] - if "nsfw_content_detected" in results - else False - ) - if nsfw_content_detected: - return None - return results.images[0] - - -app = FastAPI() -app.add_middleware( - CORSMiddleware, - allow_origins=["*"], - allow_credentials=True, - allow_methods=["*"], - allow_headers=["*"], -) - - -@app.websocket("/ws") -async def websocket_endpoint(websocket: WebSocket): - await websocket.accept() - if MAX_QUEUE_SIZE > 0 and len(user_queue_map) >= MAX_QUEUE_SIZE: - print("Server is full") - await websocket.send_json({"status": "error", "message": "Server is full"}) - await websocket.close() - return - - try: - uid = str(uuid.uuid4()) - print(f"New user connected: {uid}") - await websocket.send_json( - {"status": "success", "message": "Connected", "userId": uid} - ) - user_queue_map[uid] = {"queue": asyncio.Queue()} - await websocket.send_json( - {"status": "start", "message": "Start Streaming", "userId": uid} - ) - await handle_websocket_data(websocket, uid) - except WebSocketDisconnect as e: - logging.error(f"WebSocket Error: {e}, {uid}") - traceback.print_exc() - finally: - print(f"User disconnected: {uid}") - queue_value = user_queue_map.pop(uid, None) - queue = queue_value.get("queue", None) - if queue: - while not queue.empty(): - try: - queue.get_nowait() - except asyncio.QueueEmpty: - continue - - -@app.get("/queue_size") -async def get_queue_size(): - queue_size = len(user_queue_map) - return JSONResponse({"queue_size": queue_size}) - - -@app.get("/stream/{user_id}") -async def stream(user_id: uuid.UUID): - uid = str(user_id) - try: - user_queue = user_queue_map[uid] - queue = user_queue["queue"] - - async def generate(): - last_prompt: str = None - prompt_embeds: torch.Tensor = None - while True: - data = await queue.get() - input_image = data["image"] - params = data["params"] - if input_image is None: - continue - # avoid recalculate prompt embeds - if last_prompt != params.prompt: - print("new prompt") - prompt_embeds = compel_proc(params.prompt) - last_prompt = params.prompt - - image = predict( - input_image, - params, - prompt_embeds, - ) - if image is None: - continue - frame_data = io.BytesIO() - image.save(frame_data, format="JPEG") - frame_data = frame_data.getvalue() - if frame_data is not None and len(frame_data) > 0: - yield b"--frame\r\nContent-Type: image/jpeg\r\n\r\n" + frame_data + b"\r\n" - - await asyncio.sleep(1.0 / 120.0) - - return StreamingResponse( - generate(), media_type="multipart/x-mixed-replace;boundary=frame" - ) - except Exception as e: - logging.error(f"Streaming Error: {e}, {user_queue_map}") - traceback.print_exc() - return HTTPException(status_code=404, detail="User not found") - - -async 
def handle_websocket_data(websocket: WebSocket, user_id: uuid.UUID): - uid = str(user_id) - user_queue = user_queue_map[uid] - queue = user_queue["queue"] - if not queue: - return HTTPException(status_code=404, detail="User not found") - last_time = time.time() - try: - while True: - data = await websocket.receive_bytes() - params = await websocket.receive_json() - params = InputParams(**params) - pil_image = Image.open(io.BytesIO(data)) - - while not queue.empty(): - try: - queue.get_nowait() - except asyncio.QueueEmpty: - continue - await queue.put({"image": pil_image, "params": params}) - if TIMEOUT > 0 and time.time() - last_time > TIMEOUT: - await websocket.send_json( - { - "status": "timeout", - "message": "Your session has ended", - "userId": uid, - } - ) - await websocket.close() - return - - except Exception as e: - logging.error(f"Error: {e}") - traceback.print_exc() - - -app.mount("/", StaticFiles(directory="img2img", html=True), name="public") diff --git a/spaces/raedeXanto/academic-chatgpt-beta/Bioshock Infinite Jpn Ps3 Torrent.md b/spaces/raedeXanto/academic-chatgpt-beta/Bioshock Infinite Jpn Ps3 Torrent.md deleted file mode 100644 index 18891f5976d0fea75254caf9572a6029e53b55df..0000000000000000000000000000000000000000 --- a/spaces/raedeXanto/academic-chatgpt-beta/Bioshock Infinite Jpn Ps3 Torrent.md +++ /dev/null @@ -1,28 +0,0 @@ - -

          How to Download Bioshock Infinite Jpn Ps3 Torrent

          -

          Bioshock Infinite is a first-person shooter game set in a floating city of Columbia in 1912. You play as Booker DeWitt, a former soldier who is hired to rescue a mysterious girl named Elizabeth from her captors. Along the way, you will encounter various enemies, weapons, and powers, as well as uncover the secrets of the city and its history.

          -

          If you want to play Bioshock Infinite on your PS3 console, you might be interested in downloading the Jpn version of the game, which has Japanese voiceovers and subtitles. This version is not available on the official PSN store, but you can find it online as a torrent file.

          -

          Bioshock Infinite Jpn Ps3 Torrent


          Download Filehttps://tinourl.com/2uL5DN



          -

          A torrent file is a small file that contains information about the files you want to download, such as their names, sizes, and locations on other computers. To download a torrent file, you need a special program called a torrent client, such as uTorrent or BitTorrent. These programs will connect you to other users who have the same files and download them to your computer.

          -

          However, downloading torrent files can be risky, as they may contain viruses, malware, or illegal content. You should always scan the files with an antivirus program before opening them, and use a VPN service to protect your privacy and avoid legal issues.

          -

          Here are some steps to download Bioshock Infinite Jpn Ps3 Torrent:

          -
            -
          1. Download and install a torrent client on your computer.
          2. -
          3. Go to one of the websites that offer Bioshock Infinite Jpn Ps3 Torrent, such as Archive.org, Skidrowreloaded, or Github. These websites are based on the search results I found for your keyword[^1^] [^2^] [^3^]. You can also use other websites that you trust.
          4. -
          5. Find the torrent file for Bioshock Infinite Jpn Ps3 and click on it. It will open in your torrent client and start downloading.
          6. -
          7. Wait until the download is complete. It may take some time depending on your internet speed and the number of seeders (users who have the complete file and are sharing it).
          8. -
          9. Scan the downloaded files with an antivirus program before opening them.
          10. -
          11. Copy the files to a USB drive or an external hard drive formatted in FAT32.
          12. -
          13. Plug the USB drive or the external hard drive into your PS3 console.
          14. -
          15. Go to Game > Install Package Files and select the Bioshock Infinite Jpn Ps3 file. It will install the game on your PS3 console.
          16. -
          17. Enjoy playing Bioshock Infinite Jpn Ps3!
          18. -
          -

          I hope this article was helpful for you. If you have any questions or feedback, please let me know.

          - -

          Bioshock Infinite Jpn Ps3 is a great game to play if you are a fan of the Bioshock series, or if you enjoy immersive and atmospheric games with a strong story and gameplay. The game has received critical acclaim for its graphics, sound, music, voice acting, and narrative. It has also won several awards, such as the BAFTA Game of the Year, the Golden Joystick Award for Best Storytelling, and the Spike Video Game Award for Best Shooter.

          -

          The game is set in an alternate history where the United States is a world power and has built a floating city called Columbia as a symbol of its greatness. However, the city secedes from the nation and becomes a rogue state with its own ideology and agenda. The game explores themes such as American exceptionalism, racism, religion, nationalism, and free will.

          -

          The game features a dynamic combat system that allows you to use various weapons, powers, and gadgets to fight your enemies. You can also use the Sky-Line, a rail system that connects different parts of the city, to move around and gain tactical advantages. You are not alone in your journey, as you are accompanied by Elizabeth, a mysterious girl who has the ability to manipulate tears in reality. She can help you in combat by providing you with ammo, health, or other items. She can also open tears to bring in allies, weapons, or cover.

          -

          Bioshock Infinite Jpn Ps3 is a game that will keep you hooked from start to finish with its stunning visuals, captivating story, and thrilling gameplay. If you want to experience this game in Japanese language, you can download the torrent file from one of the websites mentioned above and follow the instructions to install it on your PS3 console. You will not regret it!

          -

          -
          -
          \ No newline at end of file diff --git a/spaces/raedeXanto/academic-chatgpt-beta/Cossacks 2 Crack HOT! Windows 7.md b/spaces/raedeXanto/academic-chatgpt-beta/Cossacks 2 Crack HOT! Windows 7.md deleted file mode 100644 index c3b3d537879d6eef8cc8577cd150eb5bd02cd730..0000000000000000000000000000000000000000 --- a/spaces/raedeXanto/academic-chatgpt-beta/Cossacks 2 Crack HOT! Windows 7.md +++ /dev/null @@ -1,35 +0,0 @@ -
          -

          How to Play Cossacks 2 on Windows 7

          -

          Cossacks 2 is a real-time strategy game set in the Napoleonic era. It was released in 2005 and is compatible with Windows XP. However, many players have reported problems with running the game on Windows 7, such as low performance, lags, crashes, or black screens. In this article, we will show you some possible solutions to fix these issues and enjoy the game on your Windows 7 PC.

          -

          cossacks 2 crack windows 7


          Download Ziphttps://tinourl.com/2uL2YB



          -

          Method 1: Use a No-CD Crack

          -

          One of the common causes of Cossacks 2 problems on Windows 7 is the copy protection system that checks for the presence of the game disc in the drive. This system may not work well with newer versions of Windows and may prevent the game from launching or cause errors. To bypass this problem, you can use a no-CD crack that allows you to run the game without the disc. Here are the steps to do so:

          -
            -
          1. Download the no-CD crack for Cossacks 2: Battle for Europe v1.3 from this link. This crack works for both Cossacks 2: Napoleonic Wars and Cossacks 2: Battle for Europe.
          2. -
          3. Extract the contents of the zip file to a folder on your desktop.
          4. -
          5. Copy the file named "Cossacks2.exe" from the folder and paste it into your game installation folder (default: C:\GOG Games\Cossacks II - Napoleonic Wars or C:\GOG Games\Cossacks II - Battle for Europe).
          6. -
          7. Replace the original file when prompted.
          8. -
          9. Run the game as administrator and enjoy.
          10. -
          -

          Method 2: Make log.txt Read-Only

          -

          Another possible solution to fix Cossacks 2 performance issues on Windows 7 is to make the log.txt file in your game installation folder read-only. This file records all the events that happen in the game, such as errors, warnings, or messages. However, it may also cause lags or slowdowns if it becomes too large or corrupted. To prevent this, you can make the file read-only so that it does not change or grow in size. Here are the steps to do so:

          -
            -
          1. Navigate to your game installation folder (default: C:\GOG Games\Cossacks II - Napoleonic Wars or C:\GOG Games\Cossacks II - Battle for Europe).
          2. -
          3. Right-click on log.txt and select Properties.
          4. -
          5. Under the General tab, check the box next to Read-only and click OK.
          6. -
          7. Run the game as administrator and check if it runs better.
          8. -
          -

          Method 3: Run the Game in Compatibility Mode

          -

          If none of the above methods work for you, you can try to run the game in compatibility mode for Windows XP. Compatibility mode is a feature that allows you to run older programs on newer versions of Windows by simulating the environment and settings of an older operating system. This may help resolve some compatibility issues that prevent Cossacks 2 from running properly on Windows 7. Here are the steps to do so:

          -

          -
            -
          1. Navigate to your game installation folder (default: C:\GOG Games\Cossacks II - Napoleonic Wars or C:\GOG Games\Cossacks II - Battle for Europe).
          2. -
          3. Right-click on Cossacks2.exe and select Properties.
          4. -
          5. Under the Compatibility tab, check the box next to Run this program in compatibility mode for and select Windows XP (Service Pack 3) from the drop-down menu.
          6. -
          7. Also check the box next to Run this program as an administrator and click OK.
          8. -
          9. Run the game and see if it works.
          10. -
          -

          Conclusion

          -

          We hope that this article has helped you fix Cossacks 2 problems on Windows 7 and enjoy this classic strategy game on your PC. If you have any questions or suggestions, feel free to leave a comment below.

          -
          -
          \ No newline at end of file diff --git a/spaces/raedeXanto/academic-chatgpt-beta/D16 Group Drumazon VSTi V1 4 0 Incl Keygen AiR.rar - How to Create Amazing Beats with This Software.md b/spaces/raedeXanto/academic-chatgpt-beta/D16 Group Drumazon VSTi V1 4 0 Incl Keygen AiR.rar - How to Create Amazing Beats with This Software.md deleted file mode 100644 index 9a7a4dbd100802d5c82e4c0c7b8c819e4a612530..0000000000000000000000000000000000000000 --- a/spaces/raedeXanto/academic-chatgpt-beta/D16 Group Drumazon VSTi V1 4 0 Incl Keygen AiR.rar - How to Create Amazing Beats with This Software.md +++ /dev/null @@ -1,94 +0,0 @@ -
          -

          D16 Group Drumazon VSTi V1 4 0 Incl Keygen AiR.rar - What is it and why you need it

          -

          Introduction

          -

          If you are a music producer, a beat maker, or a fan of electronic music, you have probably heard of the legendary Roland TR-909 drum machine. This iconic device was released in 1984 and has been used by countless artists across genres such as techno, house, hip hop, and more. The 909 is known for its distinctive sounds, such as the punchy kick, the snappy snare, the metallic hi-hats, and the booming toms.

          -

          However, finding an original 909 nowadays is not easy or cheap. That's why many software developers have created virtual instruments that emulate the sound and functionality of the 909. One of them is D16 Group, a Polish company that specializes in high-quality audio plugins. Their product, Drumazon VSTi, is one of the most faithful and realistic recreations of the 909 ever made.

          -

          D16 Group Drumazon VSTi V1 4 0 Incl Keygen AiR.rar -


          DOWNLOAD === https://tinourl.com/2uL54P



          -

          But what is Drumazon VSTi exactly? And what is a keygen? And who are AiR? In this article, we will answer these questions and more. We will also show you how to install and use Drumazon VSTi, as well as its features and benefits. By the end of this article, you will have a clear idea of what Drumazon VSTi can do for you and your music.

          -

          What is D16 Group Drumazon VSTi

          -

          D16 Group Drumazon VSTi is a software plugin that simulates the sound and behavior of the Roland TR-909 drum machine. It runs as a virtual instrument (VSTi) inside your digital audio workstation (DAW), such as Ableton Live, FL Studio, Cubase, or Logic Pro. You can use it to create drum patterns, loops, beats, or entire songs with the authentic 909 sound.

          -

          What is a keygen and why is it included

          -

          A keygen is a small program that generates a serial number or a license file for a software product. It is usually used to bypass the copy protection or activation process of the software. A keygen is often included with cracked or pirated software that has been illegally distributed on the internet.

          -

          D16 Group Drumazon VSTi V1 4 0 Incl Keygen AiR download
          -D16 Group Drumazon VSTi V1 4 0 Incl Keygen AiR free
          -D16 Group Drumazon VSTi V1 4 0 Incl Keygen AiR crack
          -D16 Group Drumazon VSTi V1 4 0 Incl Keygen AiR torrent
          -D16 Group Drumazon VSTi V1 4 0 Incl Keygen AiR review
          -D16 Group Drumazon VSTi V1 4 0 Incl Keygen AiR serial
          -D16 Group Drumazon VSTi V1 4 0 Incl Keygen AiR license
          -D16 Group Drumazon VSTi V1 4 0 Incl Keygen AiR activation
          -D16 Group Drumazon VSTi V1 4 0 Incl Keygen AiR full version
          -D16 Group Drumazon VSTi V1 4 0 Incl Keygen AiR mac
          -D16 Group Drumazon VSTi V1 4 0 Incl Keygen AiR windows
          -D16 Group Drumazon VSTi V1 4 0 Incl Keygen AiR rar password
          -D16 Group Drumazon VSTi V1 4 0 Incl Keygen AiR manual
          -D16 Group Drumazon VSTi V1 4 0 Incl Keygen AiR presets
          -D16 Group Drumazon VSTi V1 4 0 Incl Keygen AiR skins
          -D16 Group Drumazon VSTi V1 4 0 Incl Keygen AiR alternative
          -D16 Group Drumazon VSTi V1 4 0 Incl Keygen AiR update
          -D16 Group Drumazon VSTi V1 4 0 Incl Keygen AiR online
          -D16 Group Drumazon VSTi V1 4 0 Incl Keygen AiR demo
          -D16 Group Drumazon VSTi V1 4 0 Incl Keygen AiR reddit
          -D16 Group Drumazon VSTi V1 4 0 Incl Keygen AiR youtube
          -D16 Group Drumazon VSTi V1 4 0 Incl Keygen AiR soundcloud
          -D16 Group Drumazon VSTi V1 4 0 Incl Keygen AiR beatport
          -D16 Group Drumazon VSTi V1 4 0 Incl Keygen AiR plugin boutique
          -D16 Group Drumazon VSTi V1 4 0 Incl Keygen AiR kvr audio
          -D16 Group Drumazon VSTi V1 4 0 Incl Keygen AiR gearslutz
          -D16 Group Drumazon VSTi V1 4 0 Incl Keygen AiR splice
          -D16 Group Drumazon VSTi V1 4 0 Incl Keygen AiR loopmasters
          -D16 Group Drumazon VSTi V1 4 0 Incl Keygen AiR ableton live
          -D16 Group Drumazon VSTi V1 4 0 Incl Keygen AiR fl studio
          -D16 Group Drumazon VSTi V1 4 0 Incl Keygen AiR logic pro x
          -D16 Group Drumazon

          -

          The file name D16 Group Drumazon VSTi V1 4 0 Incl Keygen AiR.rar indicates that this is a compressed archive file that contains both the plugin installer and the keygen program. The keygen program can be used to generate a license file for Drumazon VSTi, which will allow you to use it without paying for it.

          -

          What is AiR and why is it important

          -

          AiR is a group of hackers or crackers who specialize in cracking audio software. They have released hundreds of cracked plugins, effects, instruments, and applications for music production. They are known for their high-quality releases, their detailed instructions, and their signature sound that plays when you run their keygens.

          -

          AiR is important because they are one of the most prolific and respected groups in the audio software cracking scene. They have cracked many popular products from companies such as Native Instruments, Spectrasonics, Arturia, Steinberg, Propellerhead, iZotope, and more. Many music producers rely on their releases to access expensive software that they cannot afford otherwise.

          -

          Features and benefits of D16 Group Drumazon VSTi

          -

          D16 Group Drumazon VSTi is not just a simple copy of the Roland TR-909 drum machine. It is also a powerful and versatile plugin that offers many features and benefits that enhance your music production workflow. Here are some of them:

          -

          Synthesis emulation of the classic 909 drum machine

          -

          D16 Group Drumazon VSTi uses advanced synthesis algorithms to emulate all the sounds of the original 909. All the nuances and details of the instruments are captured perfectly. You can adjust parameters such as tune, decay, attack, tone, accent, volume, pan, mute, solo, etc. You can also switch between two modes: original mode (which reproduces the exact sound of the hardware) and enhanced mode (which adds some extra features such as velocity sensitivity).

          -

          Multiple outputs and MIDI control

          -

          D16 Group Drumazon VSTi allows you to route each instrument to a separate output channel in your DAW. This gives you more flexibility and control over your mix. You can apply different effects, EQs, compressors, etc., to each instrument individually. You can also control each instrument with an external MIDI controller or keyboard. You can assign different MIDI notes or CC messages to each parameter.

          -

          Internal sequencer and pattern manager

          -

          D16 Group Drumazon VSTi has an internal sequencer that lets you create drum patterns with up to 32 steps per instrument. You can edit each step with parameters such as velocity, accent, flam, shuffle, etc. You can also chain up to 12 patterns together to form a song. The pattern manager allows you to store up to 128 patterns in memory. You can load them from presets or import them from external files.
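As a purely hypothetical illustration of what such a pattern might look like as data (this is not Drumazon's internal format), the following Python sketch models a 32-step pattern with per-step velocity, accent and flam, and chains patterns into a song:

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Step:
    on: bool = False
    velocity: int = 100   # MIDI-style 0-127
    accent: bool = False
    flam: bool = False

@dataclass
class Pattern:
    # instrument name -> list of 32 steps
    steps: Dict[str, List[Step]] = field(default_factory=dict)

    @classmethod
    def empty(cls, instruments, length=32):
        return cls({name: [Step() for _ in range(length)] for name in instruments})

pattern_a = Pattern.empty(["kick", "snare", "closed_hat"])
pattern_a.steps["kick"][0] = Step(on=True, accent=True)   # accented kick on the downbeat
pattern_a.steps["kick"][8] = Step(on=True)
song = [pattern_a, pattern_a]   # a song is simply a chain of patterns (Drumazon allows up to 12)
```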

          -

          Presets and customization options

          -

          D16 Group Drumazon VSTi comes with over 400 presets that cover various genres and styles of music. You can use them as they are or modify them to suit your needs. You can also create your own presets and save them for later use. You can also customize the appearance of the plugin by changing its skin or color scheme.

          -

          How to install and use D16 Group Drumazon VSTi

          -

          Now that you know what D16 Group Drumazon VSTi is and what it can do for you, let's see how to install and use it on your computer. Here are the steps:

          -

          Downloading and extracting the file

          -

          The first step is to download the file D16 Group Drumazon VSTi V1 4 0 Incl Keygen AiR.rar from a reliable source on the internet. You will need a torrent client such as qBittorrent or uTorrent to download it. Once you have downloaded it, you will need a program such as WinRAR or 7-Zip to extract it.

          -

          To extract it, right-click on the file and select "Extract here" or "Extract to" from the menu. You will see two folders inside: one called "D16.Group.Drumazon.VSTi.v1.4.0.Incl.Keygen-AiR" which contains the plugin installer; and another called "AiR" which contains the keygen program.

          -

          Running the keygen and generating a license

          -

          The next step is to run the keygen program inside the "AiR" folder. To do this, double-click on "keygen.exe". You will hear a short melody playing in your speakers or headphones; this is AiR's signature sound.

          -

          You will see a window with several buttons and fields. Click on "Generate" to create a license file for Drumazon VSTi. You will see a message saying "License file generated successfully!". Click on "Save" to save the license file to your computer. You can name it whatever you want, but make sure it has the extension ".lic".

          -

          Installing the plugin and activating it

          -

          The third step is to install the plugin on your computer. To do this, go to the folder "D16.Group.Drumazon.VSTi.v1.4.0.Incl.Keygen-AiR" and double-click on "setup.exe". You will see a window with the D16 Group logo and some options. Click on "Next" to start the installation process.

          -

          You will see a window with the license agreement. Read it carefully and click on "I accept the agreement" if you agree. Then click on "Next". You will see a window with the destination folder. Choose where you want to install the plugin and click on "Next". You will see a window with the VST plugin folder. Choose where you want to install the VST plugin and click on "Next". You will see a window with the summary of your choices. Click on "Install" to begin the installation.

          -

          Wait for the installation to finish. You will see a window with a message saying "Installation completed successfully!". Click on "Finish" to exit the installer.

          -

          Now you need to activate the plugin with the license file you generated earlier. To do this, open your DAW and load Drumazon VSTi as a plugin. You will see a window with a message saying "Please select license file". Click on "Browse" and locate the license file you saved before. Click on "Open" and then on "OK". You will see a window with a message saying "License accepted!". Click on "OK" again. Congratulations! You have successfully installed and activated Drumazon VSTi!

          -

          Loading the plugin in your DAW and creating beats

          -

          The final step is to use Drumazon VSTi in your DAW and create some awesome beats with it. To do this, create a new project or open an existing one in your DAW. Add Drumazon VSTi as an instrument track or as an effect on an audio track. You will see the plugin interface, which looks like this:

          - ![Drumazon interface](https://d16.pl/images/products/drumazon/drumazon_gui.png)

          You can use the mouse or a MIDI controller to play and edit the sounds of each instrument. You can also use the internal sequencer to create patterns and songs. You can access different functions and menus by clicking on the buttons at the top of the interface.

          -

          For more details and tutorials on how to use Drumazon VSTi, you can check out the user manual or watch some videos online.

          -

          Conclusion

          -

          D16 Group Drumazon VSTi is an amazing plugin that emulates the sound and functionality of the Roland TR-909 drum machine. It is easy to install and use, and it offers many features and benefits that make it a great tool for music production. Whether you want to create classic techno beats, modern hip hop grooves, or anything in between, Drumazon VSTi can help you achieve your goals.

          -

          If you are interested in getting Drumazon VSTi, you can download it from D16 Group's website for 99 EUR (about 113 USD). However, if you want to save some money and get it for free, you can download it from a torrent site along with a keygen program that will generate a license file for you.

          -

          However, we do not recommend or endorse this option, as it is illegal and unethical. It also exposes you to potential risks such as viruses, malware, or legal actions. The best way to support D16 Group and their amazing products is to buy them legally and enjoy them safely.

          -

          We hope this article has been helpful and informative for you. If you have any questions or comments, feel free to leave them below. Thank you for reading!

          -

          FAQs

          -

          Is D16 Group Drumazon VSTi legal and safe to use?

          -

          D16 Group Drumazon VSTi is legal and safe to use if you buy it from D16 Group's website or from an authorized reseller. However, if you download it from a torrent site along with a keygen program, it is illegal and unsafe to use. You are violating D16 Group's intellectual property rights and exposing yourself to potential risks such as viruses, malware, or legal actions.

          -

          What are the system requirements for D16 Group Drumazon VSTi?

          -

          The system requirements for D16 Group Drumazon VSTi are as follows:

| Requirement | Minimum |
| --- | --- |
| Operating system | Windows 7 or later / Mac OS X 10.7 or later |
| Processor | 2 GHz Intel Core 2 Duo / AMD Athlon 64 X2 or equivalent |
| RAM | 4 GB |
| Hard disk space | 100 MB |
| Audio interface | ASIO / CoreAudio compatible |
| Plugin format | VST / AU |

          How can I update D16 Group Drumazon VSTi to the latest version?

          -

          You can update D16 Group Drumazon VSTi to the latest version by downloading it from D16 Group's website or from your user account if you have bought it legally. You can also check for updates from within the plugin by clicking on the "About" button at the top of the interface.

          -

          Where can I find more tutorials and tips for D16 Group Drumazon VSTi?

          -

          You can find more tutorials and tips for D16 Group Drumazon VSTi by reading the user manual or watching some videos online. You can also join D16 Group's forum or social media pages where you can interact with other users and get support from D16 Group's team.

          -

          What are some alternatives to D16 Group Drumazon VSTi?

          -

          Some alternatives to D16 Group Drumazon VSTi are:

- Roland Cloud TR-909: The official software version of the Roland TR-909 drum machine by Roland Corporation.
- AudioRealism ADM: A software drum machine that emulates various vintage drum machines such as the 808, 909, 606, etc.
- Wave Alchemy Revolution: A software drum machine that combines synthesis and sampling of various classic drum machines such as the 909, 808, 707, etc.

          0a6ba089eb
          -
          -
          \ No newline at end of file diff --git a/spaces/raedeXanto/academic-chatgpt-beta/Delphi XE Slip Crack The Best Way to Backup and Restore Your Delphi License Info.md b/spaces/raedeXanto/academic-chatgpt-beta/Delphi XE Slip Crack The Best Way to Backup and Restore Your Delphi License Info.md deleted file mode 100644 index 27862989063c746806ac1ac707c9939553c776d5..0000000000000000000000000000000000000000 --- a/spaces/raedeXanto/academic-chatgpt-beta/Delphi XE Slip Crack The Best Way to Backup and Restore Your Delphi License Info.md +++ /dev/null @@ -1,121 +0,0 @@ - -
Article outline:
- What is Delphi XE and why do you need it? (benefits of using Delphi XE for software development)
- What is a slip file and how does it work? (explanation of slip files and their role in Delphi XE activation; how to obtain and use a slip file legally)
- What is a slip crack and why do people use it? (definition of a slip crack and its purpose; risks and drawbacks of using a slip crack)
- How to reinstall Delphi XE without losing your license? (steps to back up and restore your slip file; tips to avoid re-registering Delphi XE)
- How to get help and support for Delphi XE? (resources and links for Delphi XE users; contact information for Embarcadero support)
- Conclusion (summary of the main points; call to action)

          What is Delphi XE and why do you need it?

          -

          If you are a software developer, you probably have heard of Delphi XE, the latest version of the popular Delphi programming language and IDE. Delphi XE is a powerful tool that allows you to create fast, native, cross-platform applications for Windows, Mac, Linux, iOS, Android, and more. With Delphi XE, you can:

          -

          delphi xe slip crack


          Download >>>>> https://tinourl.com/2uL2Nm



          -
            -
          • Write code once and compile it for multiple platforms
          • -
          • Use modern features such as generics, anonymous methods, attributes, and RTTI
          • -
          • Leverage a rich set of components and libraries for UI, database, web, cloud, IoT, and AI
          • -
          • Debug and test your code with integrated tools such as Code Insight, LiveBindings, Code Coverage, and Test Insight
          • -
          • Deploy your applications easily with a single installer or app store
          • -
          -

          Delphi XE is a great choice for developers who want to create high-performance, scalable, and secure applications with less code and more productivity. Whether you are developing desktop, mobile, web, or cloud applications, Delphi XE can help you achieve your goals faster and easier.

          -

          What is a slip file and how does it work?

          -

          A slip file is a file that contains the license information for Delphi XE. When you purchase Delphi XE from Embarcadero or an authorized reseller, you will receive a slip file that activates your product. The slip file has a .slip extension and is usually located in the C:\ProgramData\CodeGear or C:\ProgramData\Embarcadero folder on your computer.

          -

          The slip file works by verifying your product serial number and machine name with the Embarcadero license server. The license server then grants you access to the product features according to your edition (Starter, Professional, Enterprise, or Architect). The slip file also keeps track of how many installations you have made with your serial number. You can install Delphi XE on up to five machines with the same serial number.

          -

          To use a slip file legally, you need to follow these steps:

          -

          delphi xe activation file crack
          -delphi xe serial number generator
          -delphi xe license key download
          -delphi xe registration code crack
          -delphi xe slip file generator
          -delphi xe keygen free download
          -delphi xe crack patch
          -delphi xe activation code crack
          -delphi xe license manager crack
          -delphi xe slip file download
          -delphi xe serial key crack
          -delphi xe license file crack
          -delphi xe registration key generator
          -delphi xe slip file editor
          -delphi xe crack download
          -delphi xe activation slip crack
          -delphi xe serial number crack
          -delphi xe license key generator
          -delphi xe registration code generator
          -delphi xe slip file creator
          -delphi xe keygen download
          -delphi xe crack serial
          -delphi xe activation code generator
          -delphi xe license manager patch
          -delphi xe slip file crack
          -delphi xe serial key generator
          -delphi xe license file download
          -delphi xe registration key crack
          -delphi xe slip file maker
          -delphi xe crack free download
          -delphi xe activation key crack
          -delphi xe serial number download
          -delphi xe license key crack
          -delphi xe registration code download
          -delphi xe slip file generator online
          -delphi xe keygen online
          -delphi xe crack keygen
          -delphi xe activation code download
          -delphi xe license manager download
          -delphi xe slip file online
          -delphi xe serial key download
          -delphi xe license file generator
          -delphi xe registration key download
          -delphi xe slip file converter
          -delphi xe crack online
          -delphi xe activation key generator
          -delphi xe serial number online
          -delphi xe license key online
          -delphi xe registration code online

          -
            -
1. Purchase Delphi XE from Embarcadero or an authorized reseller
2. Download the product installer from the Embarcadero website or use the DVD
3. Run the installer and enter your serial number when prompted
4. The installer will connect to the Embarcadero license server and download your slip file
5. The installer will move the slip file to the license folder on your computer
6. The installer will complete the installation process
7. You can now launch Delphi XE and enjoy its features
          -

          What is a slip crack and why do people use it?

          -

          A slip crack is a file that mimics a slip file but bypasses the Embarcadero license server. A slip crack is usually created by hackers who modify the original slip file or generate a fake one. A slip crack has a .slip extension but may have a different name or location than the genuine one.

          -

          The purpose of a slip crack is to activate Delphi XE without paying for it. A slip crack may also allow you to use features that are not available in your edition or install Delphi XE on more than five machines. Some people use a slip crack because they want to save money, try out different features, or avoid re-registering Delphi XE when they reinstall Windows.

          -

          However, using a slip crack is illegal and risky. Here are some of the dangers of using a slip crack:

          -
            -
          • You may violate the Embarcadero end-user license agreement (EULA) and face legal consequences
          • -
          • You may expose your computer to viruses, malware, or spyware that are hidden in the slip crack
          • -
          • You may lose access to product updates, patches, bug fixes, and technical support from Embarcadero
          • -
          • You may experience performance issues, errors, crashes, or data loss due to incompatible or corrupted files
          • -
          • You may damage your reputation as a professional developer or harm your clients' trust
          • -
          -

          How to reinstall Delphi XE without losing your license?

          -

          If you need to reinstall Windows and Delphi XE on your computer, you may wonder how to avoid re-registering Delphi XE and wasting one of your installations. Fortunately, there is a way to backup and restore your slip file so that you can reinstall Delphi XE without losing your license.

          -

To back up and restore your slip file, follow the steps below (a small Python sketch that automates the copy appears after the lists):

          -

          Before reinstalling Windows:

          -
            -
1. Locate your slip file on your computer (usually in C:\ProgramData\CodeGear or C:\ProgramData\Embarcadero)
2. Copy the entire folder that contains the slip file (including all subfolders) to an external drive or cloud storage
3. Note down your machine name (you can find it by right-clicking on My Computer > Properties > Computer Name)
4. Note down your serial number (you can find it in your email confirmation from Embarcadero or on the DVD case)
          -

          After reinstalling Windows:

          -
            -
1. Name your computer exactly as before (case-sensitive)
2. Copy the folder that contains the slip file from your external drive or cloud storage back to C:\ProgramData\CodeGear or C:\ProgramData\Embarcadero on your computer
3. Download the product installer from the Embarcadero website or use the DVD
4. Run the installer and enter your serial number when prompted
5. The installer will detect your existing slip file and skip connecting to the Embarcadero license server
6. The installer will complete the installation process
7. You can now launch Delphi XE without re-registering it
          -
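If you would rather script the copy steps than do them by hand, here is a minimal Python sketch. The license folder and backup drive paths are assumptions for illustration only; adjust them to your machine, and run the script with administrator rights because ProgramData is a protected location.

```python
import shutil
from pathlib import Path

# Assumed locations - adjust to match your machine and backup drive.
LICENSE_DIR = Path(r"C:\ProgramData\Embarcadero")
BACKUP_DIR = Path(r"E:\delphi_license_backup\Embarcadero")

def backup_license():
    """Copy the whole license folder (including the .slip file) to the backup drive."""
    # dirs_exist_ok requires Python 3.8 or newer
    shutil.copytree(LICENSE_DIR, BACKUP_DIR, dirs_exist_ok=True)
    print(f"Backed up {LICENSE_DIR} -> {BACKUP_DIR}")

def restore_license():
    """Copy the backed-up folder back after reinstalling Windows."""
    shutil.copytree(BACKUP_DIR, LICENSE_DIR, dirs_exist_ok=True)
    print(f"Restored {BACKUP_DIR} -> {LICENSE_DIR}")

if __name__ == "__main__":
    backup_license()   # or restore_license() after the reinstall
```

After restoring, remember that the machine name still has to match the one you recorded before the reinstall.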

          Tips to avoid re-registering Delphi XE:

          -
            -
          • If possible, use an image backup software such as Acronis True Image or Macrium Reflect to create an exact copy of your system drive before reinstalling Windows. This way, you can restore your system drive with all your programs and settings intact.
          • -
          • If you change your computer name or hardware configuration significantly after reinstalling Windows (such as adding more RAM or replacing the motherboard), you may need to re-register Delphi XE anyway because it may detect a different machine signature.
          • -
          • If you run out of installations with your serial number due to reinstallation or hardware changes, you can contact Embarcadero support via email or phone and request them to increase your installation limit. They will usually do so within 24 hours after verifying your purchase details.
          • -

            How to get help and support for Delphi XE?

            -

            If you encounter any problems or have any questions about Delphi XE, there are many resources and channels that can help you. Here are some of them:

            -
              -
            • The Delphi XE documentation, which contains user guides, reference manuals, tutorials, samples, and videos on how to use Delphi XE and its features.
            • -
• The Embarcadero community portal, which hosts forums, blogs, articles, tutorials, samples, and webinars that are available on the Embarcadero website or YouTube channel. You can also join podcasts and events hosted by the community portal or other Delphi-related websites, enroll in courses, books, e-books, or magazines offered by Embarcadero or other Delphi experts, and practice your skills by creating your own projects or participating in challenges or contests.
            • -
            • How can I get help and support for Delphi XE?
              -If you need help or support for Delphi XE, you can contact Embarcadero via email or phone. You can also report bugs, request features, vote on issues, and track the status of your cases on the Embarcadero Quality Portal. You can also ask and answer questions on Stack Overflow or other Q&A sites. You can also seek advice or feedback from other Delphi users on the Embarcadero community portal or other Delphi-related forums.
            • - -

              0a6ba089eb
              -
              -
              \ No newline at end of file diff --git a/spaces/rayan-saleh/whisper2notion/server/node_modules/@types/node/ts4.8/timers/promises.d.ts b/spaces/rayan-saleh/whisper2notion/server/node_modules/@types/node/ts4.8/timers/promises.d.ts deleted file mode 100644 index c1450684d60a323526a9ae750669adb21ba75c17..0000000000000000000000000000000000000000 --- a/spaces/rayan-saleh/whisper2notion/server/node_modules/@types/node/ts4.8/timers/promises.d.ts +++ /dev/null @@ -1,93 +0,0 @@ -/** - * The `timers/promises` API provides an alternative set of timer functions - * that return `Promise` objects. The API is accessible via`require('timers/promises')`. - * - * ```js - * import { - * setTimeout, - * setImmediate, - * setInterval, - * } from 'timers/promises'; - * ``` - * @since v15.0.0 - */ -declare module 'timers/promises' { - import { TimerOptions } from 'node:timers'; - /** - * ```js - * import { - * setTimeout, - * } from 'timers/promises'; - * - * const res = await setTimeout(100, 'result'); - * - * console.log(res); // Prints 'result' - * ``` - * @since v15.0.0 - * @param [delay=1] The number of milliseconds to wait before fulfilling the promise. - * @param value A value with which the promise is fulfilled. - */ - function setTimeout(delay?: number, value?: T, options?: TimerOptions): Promise; - /** - * ```js - * import { - * setImmediate, - * } from 'timers/promises'; - * - * const res = await setImmediate('result'); - * - * console.log(res); // Prints 'result' - * ``` - * @since v15.0.0 - * @param value A value with which the promise is fulfilled. - */ - function setImmediate(value?: T, options?: TimerOptions): Promise; - /** - * Returns an async iterator that generates values in an interval of `delay` ms. - * - * ```js - * import { - * setInterval, - * } from 'timers/promises'; - * - * const interval = 100; - * for await (const startTime of setInterval(interval, Date.now())) { - * const now = Date.now(); - * console.log(now); - * if ((now - startTime) > 1000) - * break; - * } - * console.log(Date.now()); - * ``` - * @since v15.9.0 - */ - function setInterval(delay?: number, value?: T, options?: TimerOptions): AsyncIterable; - - interface Scheduler { - /** - * ```js - * import { scheduler } from 'node:timers/promises'; - * - * await scheduler.wait(1000); // Wait one second before continuing - * ``` - * An experimental API defined by the Scheduling APIs draft specification being developed as a standard Web Platform API. - * Calling timersPromises.scheduler.wait(delay, options) is roughly equivalent to calling timersPromises.setTimeout(delay, undefined, options) except that the ref option is not supported. - * @since v16.14.0 - * @experimental - * @param [delay=1] The number of milliseconds to wait before fulfilling the promise. - */ - wait: (delay?: number, options?: TimerOptions) => Promise; - /** - * An experimental API defined by the Scheduling APIs draft specification being developed as a standard Web Platform API. - * Calling timersPromises.scheduler.yield() is equivalent to calling timersPromises.setImmediate() with no arguments. 
- * @since v16.14.0 - * @experimental - */ - yield: () => Promise; - } - - const scheduler: Scheduler; -} -declare module 'node:timers/promises' { - export * from 'timers/promises'; -} diff --git a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/DB-Ozone-X-BdItttf-VERIFIED.md b/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/DB-Ozone-X-BdItttf-VERIFIED.md deleted file mode 100644 index af9852a6d689f1beba2d6499b92477c238ef6dd9..0000000000000000000000000000000000000000 --- a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/DB-Ozone-X-BdItttf-VERIFIED.md +++ /dev/null @@ -1,68 +0,0 @@ -## DB Ozone X BdIt.ttf - - - - - - - - - -**Click Here ✯ [https://lodystiri.blogspot.com/?file=2txtmn](https://lodystiri.blogspot.com/?file=2txtmn)** - - - - - - - - - - - - - -# DB Ozone X BdIt.ttf: A Bold Italic Font for Creative Projects - - - -DB Ozone X BdIt.ttf is a font file that belongs to the DB Ozone X font family, designed by Prinya Rojarayanont and released by DB Fonts. DB Ozone X is a sans serif font that has a futuristic and geometric look. It supports basic Latin, Latin-1 supplement, Thai, and some symbols and shapes. DB Ozone X BdIt.ttf is the bold italic version of the font, which adds more emphasis and style to the text. - - - -DB Ozone X BdIt.ttf can be used for various creative projects, such as graphic design, web design, logo design, poster design, flyer design, and more. It can also be used for personal or non-commercial purposes, as long as the license agreement is followed. The font file can be downloaded from some online sources, such as Fontke.com[^4^] [^5^], but it may require a membership or a payment to access it. Alternatively, it can be found in some online platforms that use or showcase the font, such as Sway.office.com[^1^], SoundCloud.com[^2^], or Condommessage.com[^3^]. However, these platforms may not provide a direct download link or a license agreement for the font file. - - - -DB Ozone X BdIt.ttf is a unique and eye-catching font that can make any text stand out. It is suitable for projects that need a modern and futuristic touch. However, users should be careful about the source and the license of the font file before downloading and using it. - - - -## How to Use DB Ozone X BdIt.ttf Font - - - -DB Ozone X BdIt.ttf font is a file that can be installed and used on various devices and applications that support TrueType fonts (TTF). To use this font, users need to follow some steps depending on their operating system and software. Here are some general guidelines for using DB Ozone X BdIt.ttf font: - - - -- Download the font file from a reliable source, such as Fontke.com[^4^] , and save it to a folder on your device. Make sure you have the permission and the license to use the font for your intended purpose. - -- Extract the font file from the compressed folder if necessary. You should see a file with the extension .ttf. - -- Install the font file on your device. For Windows users, you can right-click on the font file and select Install. For Mac users, you can double-click on the font file and click Install Font. For Linux users, you can copy the font file to the /usr/share/fonts or ~/.fonts directory. - -- Open the application that you want to use the font with, such as Microsoft Word, Photoshop, Illustrator, etc. You should be able to see DB Ozone X BdIt.ttf font in the font menu or list. Select it and apply it to your text. - -- Adjust the font size, color, alignment, spacing, and other settings as you wish. 
You can also combine DB Ozone X BdIt.ttf font with other fonts from the same family or different families for more variety and contrast. - - - -DB Ozone X BdIt.ttf font is a versatile and stylish font that can enhance any text with its bold italic style. It can be used for headlines, titles, logos, banners, posters, flyers, and more. However, users should be careful about the readability and legibility of the font, especially when using it for small or long texts. Users should also respect the license and the rights of the font designer and foundry when using DB Ozone X BdIt.ttf font. - - 1b8d091108 - - - - - diff --git a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Autodata 3 38 Srpski Download 75 2021.md b/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Autodata 3 38 Srpski Download 75 2021.md deleted file mode 100644 index 85365047f4dbc0a5f4efe7b87de0223c9234202a..0000000000000000000000000000000000000000 --- a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Autodata 3 38 Srpski Download 75 2021.md +++ /dev/null @@ -1,17 +0,0 @@ -
              -

              Autodata 3.38 Srpski: A Useful Program for Car Services

              -

              Autodata 3.38 Srpski is a version of Autodata, a popular program for car services, that contains information in Serbian language. Autodata provides data about injection systems for gasoline and some diesel engines (PINDATA), as well as parameters for adjusting the toe-in, installing timing belts and chains, repairing air conditioners, airbags, ABS and other systems of European cars[^3^].

              -

              Autodata 3.38 Srpski is a useful tool for car mechanics, technicians and enthusiasts who want to diagnose and repair various problems in their vehicles. It can also help them to learn more about the technical specifications and features of different car models. Autodata 3.38 Srpski is easy to use and has a user-friendly interface that allows users to access the information they need quickly and efficiently.

              -

              autodata 3 38 srpski download 75


              DOWNLOAD ››››› https://urlgoal.com/2uCN2u



              -

              Autodata 3.38 Srpski is available for download from various online sources, such as Tealfeed[^1^] and AAC Itta[^2^]. However, users should be careful when downloading files from unknown or untrusted websites, as they may contain viruses or malware that can harm their computers or devices. Users should also check the compatibility of the program with their operating system and hardware before installing it.

              To use Autodata 3.38 Srpski, you need to install it on your computer first. You can do this by following these steps:

              -
                -
1. Download the ISO image of the program from a reliable source.
2. Open the ISO image and copy the file ADCDA2 to your C drive.
3. Run the file Instal.cmd and wait for a few minutes until the installation is complete.
4. Run the program from the shortcut on your desktop or start menu.
              -

              If you encounter any errors while running the program, such as RUNTIME ERROR 217, you may need to change the compatibility mode of the program. You can do this by right-clicking on the program icon, selecting Properties, then Compatibility, then choosing a different version of Windows (such as Windows 8) and checking the option Run this program as an administrator[^1^].

              -

              Autodata 3.38 Srpski has a simple and intuitive interface that allows you to access various information about different car models and systems. You can select a car model from the list on the left side of the screen, then choose a system or component from the tabs on the top of the screen. You can also use the search function to find specific information by entering keywords or codes. You can view diagrams, tables, charts, photos and instructions that will help you diagnose and repair your car problems.

              In conclusion, Autodata 3.38 Srpski is a useful program for car services that provides comprehensive and reliable data about various car models and systems. It can help users to diagnose and repair car problems, as well as to learn more about the technical aspects of their vehicles. Autodata 3.38 Srpski is easy to install and use, but it may require some adjustments in the compatibility mode to run smoothly on different versions of Windows. Autodata 3.38 Srpski is a valuable tool for car mechanics, technicians and enthusiasts who want to improve their skills and knowledge in the field of car maintenance and repair.

              -

              d5da3c52bf
              -
              -
              \ No newline at end of file diff --git a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Computer Organization Carl Hamacher Pdf Free Download.md b/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Computer Organization Carl Hamacher Pdf Free Download.md deleted file mode 100644 index eec84722777da48e3c106301bc98f6d481a1cf61..0000000000000000000000000000000000000000 --- a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Computer Organization Carl Hamacher Pdf Free Download.md +++ /dev/null @@ -1,6 +0,0 @@ -

              computer organization carl hamacher pdf free download


              Download Zip –––––>>> https://urlgoal.com/2uCK0C



-COMPUTER ORGANIZATION AND EMBEDDED SYSTEMS. SIXTH EDITION. Carl Hamacher. Queen's University. Zvonko Vranesic. University of Toronto. Safwat Zaky. "Computer modeling of physical processes associated with life in weightlessness" "Application of computer technology to the problem of preserving human life and health". "Application of computers for the development of the theory of behavior of living systems" "Computer simulation of the evolution of living systems". "Computer simulation of life on Earth". INTRODUCTION Since we learned to speak, we have been asking one thing: What is the meaning of life? All other questions are ridiculous when death is behind you. 8a78ff9644
              -
              -
              -

              diff --git a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Download ((LINK)) October Movie Torrent 1080p.md b/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Download ((LINK)) October Movie Torrent 1080p.md deleted file mode 100644 index 24d2b6979fcfe655a23c3a211a66b44d85f5c2dc..0000000000000000000000000000000000000000 --- a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Download ((LINK)) October Movie Torrent 1080p.md +++ /dev/null @@ -1,10 +0,0 @@ - -

              Once you have an adequate torrent client, you can add torrents that you find to your client by browsing a torrent search engine. Most of the top torrent sites keep a list of active torrents on the front page, for instance, and that list will be presented as a rank, and with the option to save torrents. Hence, once you have a ranking, you can get a list of torrents, as well as the progress of the downloading.

              -

              Finally, save a torrent that you find on a popular torrent site, and your client will automatically download the torrent file to your computer. It will do the same with the metadata in the torrent file, which will tell your client where it can find the files to download. So, for example, if you download a torrent for a movie, your torrent client will know that you can find the movie files and the DVD rip on the free files section of The Pirate Bay.

              -

              Download October Movie Torrent 1080p


              DOWNLOAD ✺✺✺ https://urlgoal.com/2uCLVl



              -

Once that is done, the torrent client will take the torrent file from your browser, use the BitTorrent protocol to fetch the files it references, and initiate the download for you; how quickly it completes depends on the speed of your internet connection.

              -

              Second, never download torrents for a faster connection than your own bandwidth. Sure, in the short term it might give you a huge boost, but in the long term, you are robbing your own download speed. In fact, you are doing a disservice to the entire torrent sharing community.

              -

              Finally, sort your torrents. If youre browsing through a torrent site, youll undoubtedly download many files from torrents. However, keep in mind that you have to download each one individually. Sorting the torrents to a different folder and only searching for files youre actually interested in might save you a lot of time.

              -

              899543212b
              -
              -
              \ No newline at end of file diff --git a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/FS2004 - Zinertek - Ultimate Night Environment Professional.epub !FREE!.md b/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/FS2004 - Zinertek - Ultimate Night Environment Professional.epub !FREE!.md deleted file mode 100644 index 63bd80eae70b28cca02a431fcab86132c8511fb4..0000000000000000000000000000000000000000 --- a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/FS2004 - Zinertek - Ultimate Night Environment Professional.epub !FREE!.md +++ /dev/null @@ -1,6 +0,0 @@ -

              FS2004 - Zinertek - Ultimate Night Environment Professional.epub


              DOWNLOAD >>>>> https://urlgoal.com/2uCLEc



              -
-FS2004 - Zinertek - Ultimate Night Environment Professional.epub free download 4fefd39f24
              -
              -
              -

              diff --git a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/GM Techline ESI Download [NEW] Pc.md b/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/GM Techline ESI Download [NEW] Pc.md deleted file mode 100644 index ef156c04af504150ee3a107a2cabd3ccaf3e653f..0000000000000000000000000000000000000000 --- a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/GM Techline ESI Download [NEW] Pc.md +++ /dev/null @@ -1,20 +0,0 @@ - -

              How to Install GM Techline eSI Service Manual on Your PC

              -

              GM Techline eSI is a software that provides service information, diagnostics and programming for GM vehicles. It is an internet-based subscription service that requires a GM MDI scan tool and a Techline Connect account. In this article, we will show you how to download and install GM Techline eSI on your PC.

              -

              Step 1: Download GM Techline eSI

              -

              You can download GM Techline eSI as a torrent file from this link: https://thepiratebay.org/torrent/3591... (link is constantly changing, just Google: "thepiratebay GM_Techline_eS" for most current link). If you are not familiar with torrents, you will need a bit-torrent client, such as: https://deluge-torrent.org/[^2^]. Alternatively, you can purchase a DVD version of GM Techline eSI from GM Parts[^1^].

              -

              GM Techline eSI download pc


              Download Zip » https://urlgoal.com/2uCJcE



              -

              Step 2: Mount or Burn the ISO File

              -

              After downloading the torrent file, you will get an ISO file that contains the GM Techline eSI software. You can either mount the ISO file using a virtual drive software, such as Daemon Tools, or burn it to a DVD using a burning software, such as Nero. If you mount the ISO file, you can access it from your computer as a virtual drive. If you burn it to a DVD, you can insert it into your DVD drive.

              -

              Step 3: Run the Setup File

              -

              Once you have access to the ISO file or the DVD, you can run the setup file to install GM Techline eSI on your PC. The setup file is called "setup.exe" and it is located in the root folder of the ISO file or the DVD. Follow the instructions on the screen to complete the installation process. You may need to restart your PC after the installation.

              -

              Step 4: Activate GM Techline eSI

              -

              After installing GM Techline eSI on your PC, you will need to activate it using a license key. The license key is included in the torrent file or the DVD. You can find it in a text file called "license.txt" in the root folder of the ISO file or the DVD. Copy and paste the license key into the activation window of GM Techline eSI and click "OK". You should see a message that says "Activation successful".

              -

              Step 5: Connect Your GM MDI Scan Tool and Your Techline Connect Account

              -

              To use GM Techline eSI on your PC, you will need a GM MDI scan tool and a Techline Connect account. The GM MDI scan tool is a device that connects your PC to your vehicle's diagnostic port. You can purchase a GM MDI scan tool from GM Parts[^1^] or other online sources. The Techline Connect account is an online service that provides access to GM vehicle calibrations, Global Diagnostic System software and scan tool hardware updates. You can subscribe to Techline Connect from this website: https://www.gmparts.com/technical-resources/diagnostic-support-resources[^1^]. You will need to enter your username and password to log in to Techline Connect.

              -

              Step 6: Enjoy GM Techline eSI on Your PC

              -

              Once you have connected your GM MDI scan tool and your Techline Connect account, you can start using GM Techline eSI on your PC. You can access various features of GM Techline eSI, such as service information, diagnostics, programming, security access, snapshot, tech2 view, techline print, global diagnostic system and RPO code display. You can also update your GM MDI scan tool and your Techline Connect software using GM Techline eSI.

              -

              We hope this article has helped you install GM Techline eSI on your PC. If you have any questions or problems, please contact GM Parts[^1^] or watch this video guide[^2^] for more details.

              -

              d5da3c52bf
              -
              -
              \ No newline at end of file diff --git a/spaces/reinformator/LL/app.py b/spaces/reinformator/LL/app.py deleted file mode 100644 index 62bb1abc515ed8ec92d9d7f81ae46107e52d36b3..0000000000000000000000000000000000000000 --- a/spaces/reinformator/LL/app.py +++ /dev/null @@ -1,33 +0,0 @@ -import numpy as np -import gradio as gr -from PIL import Image -import keras -from huggingface_hub import from_pretrained_keras - - -model = from_pretrained_keras("keras-io/lowlight-enhance-mirnet", compile=False) -examples = ['cells.png'] - - -def infer(original_image): - image = keras.utils.img_to_array(original_image) - image = image.astype("float32") / 255.0 - image = np.expand_dims(image, axis=0) - output = model.predict(image) - output_image = output[0] * 255.0 - output_image = output_image.clip(0, 255) - output_image = output_image.reshape( - (np.shape(output_image)[0], np.shape(output_image)[1], 3) - ) - output_image = np.uint32(output_image) - return output_image - -iface = gr.Interface( - fn=infer, - title="Low Light Image Enhancement 🔬", - description = "Reinformator's implementation for microscopy 🔬", - inputs=[gr.inputs.Image(label="image", type="pil", shape=(960, 640))], - outputs="image", - examples=examples, - article = "Author: Vu Minh Chien. Based on the keras example from Soumik Rakshit. Modified by Reinformator", - ).launch(enable_queue=True) \ No newline at end of file diff --git a/spaces/renatotn7/teste2/app2.py b/spaces/renatotn7/teste2/app2.py deleted file mode 100644 index 641159d268fe03d23bbdd9eb7cb47d673be09b49..0000000000000000000000000000000000000000 --- a/spaces/renatotn7/teste2/app2.py +++ /dev/null @@ -1,118 +0,0 @@ -import streamlit as st -import os.path - -os.system("mkdir _input") -os.system("mkdir _output") -os.system("mkdir _outputf") -os.system("ls") -if not os.path.isfile("./_input/imagem-0001.png"): - os.system("ffmpeg -i vivi.mp4 -compression_level 10 -pred mixed -pix_fmt rgb24 -sws_flags +accurate_rnd+full_chroma_int -s 1080x1920 -r 0.12 ./_input/imagem-%4d.png") - -os.system("ls ./_input") -if 'myVar' not in globals(): - myVar="" - os.system("pip install git+https://github.com/TencentARC/GFPGAN.git") - -import cv2 -import glob -import numpy as np -from basicsr.utils import imwrite -from gfpgan import GFPGANer -#os.system("pip freeze") -#os.system("wget https://github.com/TencentARC/GFPGAN/releases/download/v0.2.0/GFPGANCleanv1-NoCE-C2.pth -P .") -import random - -from PIL import Image -import torch -# torch.hub.download_url_to_file('https://upload.wikimedia.org/wikipedia/commons/thumb/a/ab/Abraham_Lincoln_O-77_matte_collodion_print.jpg/1024px-Abraham_Lincoln_O-77_matte_collodion_print.jpg', 'lincoln.jpg') -# torch.hub.download_url_to_file('https://upload.wikimedia.org/wikipedia/commons/5/50/Albert_Einstein_%28Nobel%29.png', 'einstein.png') -# torch.hub.download_url_to_file('https://upload.wikimedia.org/wikipedia/commons/thumb/9/9d/Thomas_Edison2.jpg/1024px-Thomas_Edison2.jpg', 'edison.jpg') -# torch.hub.download_url_to_file('https://upload.wikimedia.org/wikipedia/commons/thumb/a/a9/Henry_Ford_1888.jpg/1024px-Henry_Ford_1888.jpg', 'Henry.jpg') -# torch.hub.download_url_to_file('https://upload.wikimedia.org/wikipedia/commons/thumb/0/06/Frida_Kahlo%2C_by_Guillermo_Kahlo.jpg/800px-Frida_Kahlo%2C_by_Guillermo_Kahlo.jpg', 'Frida.jpg') - -# set up GFPGAN restorer -bg_upsampler = None -print(f"Is CUDA available: {torch.cuda.is_available()}") -if 'restorer' not in globals(): - restorer = GFPGANer( - model_path='GFPGANv1.3.pth', - upscale=2, - arch='clean', - 
channel_multiplier=2, - bg_upsampler=bg_upsampler) - - -img_list = sorted(glob.glob(os.path.join("./_input", '*'))) - -for img_path in img_list: - # read image - img_name = os.path.basename(img_path) - print(f'Processing {img_name} ...') - basename, ext = os.path.splitext(img_name) - input_img = cv2.imread(img_path, cv2.IMREAD_COLOR) - - # restore faces and background if necessary - cropped_faces, restored_faces, restored_img = restorer.enhance( - input_img, - has_aligned='store_true', - only_center_face='store_true', - paste_back=True, - weight=0.5) - - # save faces - for idx, (cropped_face, restored_face) in enumerate(zip(cropped_faces, restored_faces)): - # save cropped face - save_crop_path = os.path.join("_output", 'cropped_faces', f'{basename}_{idx:02d}.png') - imwrite(cropped_face, save_crop_path) - # save restored face - if None is not None: - save_face_name = f'{basename}_{idx:04d}_{args.suffix}.png' - else: - save_face_name = f'{basename}_{idx:04d}.png' - save_restore_path = os.path.join("_output", 'restored_faces', save_face_name) - imwrite(restored_face, save_restore_path) - # save comparison image - cmp_img = np.concatenate((cropped_face, restored_face), axis=1) - imwrite(cmp_img, os.path.join("_output", 'cmp', f'{basename}_{idx:04d}.png')) - - # save restored img - if restored_img is not None: - print('encontrou**************') - if args.ext == 'auto': - extension = ext[1:] - else: - extension = args.ext - - if None is not None: - save_restore_path = os.path.join("_output", 'restored_imgs', f'{basename}_{args.suffix}.{extension}') - else: - save_restore_path = os.path.join("_output", 'restored_imgs', f'{basename}.{extension}') - imwrite(restored_img, save_restore_path) -os.system("ls ./_output") -os.system("echo ----") -os.system("ls ./_output/cmp") -os.system("echo ----") -os.system("ls ./_output/restored_imgs") -os.system("echo ----") - - - - -def inference(): - random.randint(0, 9) - input_img = cv2.imread("./_output/cmp/imagem-000"+str(random.randint(1, 4))+"_0000.png" , cv2.IMREAD_COLOR) - input_img= cv2.cvtColor(input_img,cv2.COLOR_BGR2RGB) - st.image(input_img) - - #return Image.fromarray(restored_faces[0][:,:,::-1]) - - -title = "Melhoria de imagens" - -os.system("ls") -description = "Sistema para automação。" - -article = "

              clone from akhaliq@huggingface with little change | GFPGAN Github Repo

              visitor badge
              " -st.button('Comparacao',on_click=inference) - - \ No newline at end of file diff --git a/spaces/renumics/cifar10-embeddings/prepare.py b/spaces/renumics/cifar10-embeddings/prepare.py deleted file mode 100644 index f8883ba4b0057203207ad64e8d502a8af624022c..0000000000000000000000000000000000000000 --- a/spaces/renumics/cifar10-embeddings/prepare.py +++ /dev/null @@ -1,37 +0,0 @@ -import pickle -import datasets -import os -import umap - - -if __name__ == "__main__": - cache_file = "dataset_cache.pkl" - if os.path.exists(cache_file): - # Load dataset from cache - with open(cache_file, "rb") as file: - dataset = pickle.load(file) - print("Dataset loaded from cache.") - else: - # Load dataset using datasets.load_dataset() - ds = datasets.load_dataset("renumics/cifar10-outlier", split="train") - print("Dataset loaded using datasets.load_dataset().") - - df = ds.rename_columns({"img": "image", "label": "labels"}).to_pandas() - df["label_str"] = df["labels"].apply(lambda x: ds.features["label"].int2str(x)) - - # df = df[:1000] - - - - df["embedding_foundation_precalc"] = umap.UMAP( - n_neighbors=70, min_dist=0.5, random_state=42 - ).fit_transform(df["embedding_foundation"].tolist()).tolist() - - print("Umap for base done") - - # Save dataset to cache - with open(cache_file, "wb") as file: - pickle.dump(df, file) - - print("Dataset saved to cache.") - diff --git a/spaces/rockeycoss/Prompt-Segment-Anything-Demo/mmdet/models/dense_heads/solo_head.py b/spaces/rockeycoss/Prompt-Segment-Anything-Demo/mmdet/models/dense_heads/solo_head.py deleted file mode 100644 index e89aacb420af4f5df11183e656e04c87f3dc8fe4..0000000000000000000000000000000000000000 --- a/spaces/rockeycoss/Prompt-Segment-Anything-Demo/mmdet/models/dense_heads/solo_head.py +++ /dev/null @@ -1,1197 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import mmcv -import numpy as np -import torch -import torch.nn as nn -import torch.nn.functional as F -from mmcv.cnn import ConvModule - -from mmdet.core import InstanceData, mask_matrix_nms, multi_apply -from mmdet.core.utils import center_of_mass, generate_coordinate -from mmdet.models.builder import HEADS, build_loss -from mmdet.utils.misc import floordiv -from .base_mask_head import BaseMaskHead - - -@HEADS.register_module() -class SOLOHead(BaseMaskHead): - """SOLO mask head used in `SOLO: Segmenting Objects by Locations. - - `_ - - Args: - num_classes (int): Number of categories excluding the background - category. - in_channels (int): Number of channels in the input feature map. - feat_channels (int): Number of hidden channels. Used in child classes. - Default: 256. - stacked_convs (int): Number of stacking convs of the head. - Default: 4. - strides (tuple): Downsample factor of each feature map. - scale_ranges (tuple[tuple[int, int]]): Area range of multiple - level masks, in the format [(min1, max1), (min2, max2), ...]. - A range of (16, 64) means the area range between (16, 64). - pos_scale (float): Constant scale factor to control the center region. - num_grids (list[int]): Divided image into a uniform grids, each - feature map has a different grid value. The number of output - channels is grid ** 2. Default: [40, 36, 24, 16, 12]. - cls_down_index (int): The index of downsample operation in - classification branch. Default: 0. - loss_mask (dict): Config of mask loss. - loss_cls (dict): Config of classification loss. - norm_cfg (dict): dictionary to construct and config norm layer. - Default: norm_cfg=dict(type='GN', num_groups=32, - requires_grad=True). 
- train_cfg (dict): Training config of head. - test_cfg (dict): Testing config of head. - init_cfg (dict or list[dict], optional): Initialization config dict. - """ - - def __init__( - self, - num_classes, - in_channels, - feat_channels=256, - stacked_convs=4, - strides=(4, 8, 16, 32, 64), - scale_ranges=((8, 32), (16, 64), (32, 128), (64, 256), (128, 512)), - pos_scale=0.2, - num_grids=[40, 36, 24, 16, 12], - cls_down_index=0, - loss_mask=None, - loss_cls=None, - norm_cfg=dict(type='GN', num_groups=32, requires_grad=True), - train_cfg=None, - test_cfg=None, - init_cfg=[ - dict(type='Normal', layer='Conv2d', std=0.01), - dict( - type='Normal', - std=0.01, - bias_prob=0.01, - override=dict(name='conv_mask_list')), - dict( - type='Normal', - std=0.01, - bias_prob=0.01, - override=dict(name='conv_cls')) - ], - ): - super(SOLOHead, self).__init__(init_cfg) - self.num_classes = num_classes - self.cls_out_channels = self.num_classes - self.in_channels = in_channels - self.feat_channels = feat_channels - self.stacked_convs = stacked_convs - self.strides = strides - self.num_grids = num_grids - # number of FPN feats - self.num_levels = len(strides) - assert self.num_levels == len(scale_ranges) == len(num_grids) - self.scale_ranges = scale_ranges - self.pos_scale = pos_scale - - self.cls_down_index = cls_down_index - self.loss_cls = build_loss(loss_cls) - self.loss_mask = build_loss(loss_mask) - self.norm_cfg = norm_cfg - self.init_cfg = init_cfg - self.train_cfg = train_cfg - self.test_cfg = test_cfg - self._init_layers() - - def _init_layers(self): - self.mask_convs = nn.ModuleList() - self.cls_convs = nn.ModuleList() - for i in range(self.stacked_convs): - chn = self.in_channels + 2 if i == 0 else self.feat_channels - self.mask_convs.append( - ConvModule( - chn, - self.feat_channels, - 3, - stride=1, - padding=1, - norm_cfg=self.norm_cfg)) - chn = self.in_channels if i == 0 else self.feat_channels - self.cls_convs.append( - ConvModule( - chn, - self.feat_channels, - 3, - stride=1, - padding=1, - norm_cfg=self.norm_cfg)) - self.conv_mask_list = nn.ModuleList() - for num_grid in self.num_grids: - self.conv_mask_list.append( - nn.Conv2d(self.feat_channels, num_grid**2, 1)) - - self.conv_cls = nn.Conv2d( - self.feat_channels, self.cls_out_channels, 3, padding=1) - - def resize_feats(self, feats): - """Downsample the first feat and upsample last feat in feats.""" - out = [] - for i in range(len(feats)): - if i == 0: - out.append( - F.interpolate( - feats[0], - size=feats[i + 1].shape[-2:], - mode='bilinear', - align_corners=False)) - elif i == len(feats) - 1: - out.append( - F.interpolate( - feats[i], - size=feats[i - 1].shape[-2:], - mode='bilinear', - align_corners=False)) - else: - out.append(feats[i]) - return out - - def forward(self, feats): - assert len(feats) == self.num_levels - feats = self.resize_feats(feats) - mlvl_mask_preds = [] - mlvl_cls_preds = [] - for i in range(self.num_levels): - x = feats[i] - mask_feat = x - cls_feat = x - # generate and concat the coordinate - coord_feat = generate_coordinate(mask_feat.size(), - mask_feat.device) - mask_feat = torch.cat([mask_feat, coord_feat], 1) - - for mask_layer in (self.mask_convs): - mask_feat = mask_layer(mask_feat) - - mask_feat = F.interpolate( - mask_feat, scale_factor=2, mode='bilinear') - mask_pred = self.conv_mask_list[i](mask_feat) - - # cls branch - for j, cls_layer in enumerate(self.cls_convs): - if j == self.cls_down_index: - num_grid = self.num_grids[i] - cls_feat = F.interpolate( - cls_feat, size=num_grid, mode='bilinear') - 
cls_feat = cls_layer(cls_feat) - - cls_pred = self.conv_cls(cls_feat) - - if not self.training: - feat_wh = feats[0].size()[-2:] - upsampled_size = (feat_wh[0] * 2, feat_wh[1] * 2) - mask_pred = F.interpolate( - mask_pred.sigmoid(), size=upsampled_size, mode='bilinear') - cls_pred = cls_pred.sigmoid() - # get local maximum - local_max = F.max_pool2d(cls_pred, 2, stride=1, padding=1) - keep_mask = local_max[:, :, :-1, :-1] == cls_pred - cls_pred = cls_pred * keep_mask - - mlvl_mask_preds.append(mask_pred) - mlvl_cls_preds.append(cls_pred) - return mlvl_mask_preds, mlvl_cls_preds - - def loss(self, - mlvl_mask_preds, - mlvl_cls_preds, - gt_labels, - gt_masks, - img_metas, - gt_bboxes=None, - **kwargs): - """Calculate the loss of total batch. - - Args: - mlvl_mask_preds (list[Tensor]): Multi-level mask prediction. - Each element in the list has shape - (batch_size, num_grids**2 ,h ,w). - mlvl_cls_preds (list[Tensor]): Multi-level scores. Each element - in the list has shape - (batch_size, num_classes, num_grids ,num_grids). - gt_labels (list[Tensor]): Labels of multiple images. - gt_masks (list[Tensor]): Ground truth masks of multiple images. - Each has shape (num_instances, h, w). - img_metas (list[dict]): Meta information of multiple images. - gt_bboxes (list[Tensor]): Ground truth bboxes of multiple - images. Default: None. - - Returns: - dict[str, Tensor]: A dictionary of loss components. - """ - num_levels = self.num_levels - num_imgs = len(gt_labels) - - featmap_sizes = [featmap.size()[-2:] for featmap in mlvl_mask_preds] - - # `BoolTensor` in `pos_masks` represent - # whether the corresponding point is - # positive - pos_mask_targets, labels, pos_masks = multi_apply( - self._get_targets_single, - gt_bboxes, - gt_labels, - gt_masks, - featmap_sizes=featmap_sizes) - - # change from the outside list meaning multi images - # to the outside list meaning multi levels - mlvl_pos_mask_targets = [[] for _ in range(num_levels)] - mlvl_pos_mask_preds = [[] for _ in range(num_levels)] - mlvl_pos_masks = [[] for _ in range(num_levels)] - mlvl_labels = [[] for _ in range(num_levels)] - for img_id in range(num_imgs): - assert num_levels == len(pos_mask_targets[img_id]) - for lvl in range(num_levels): - mlvl_pos_mask_targets[lvl].append( - pos_mask_targets[img_id][lvl]) - mlvl_pos_mask_preds[lvl].append( - mlvl_mask_preds[lvl][img_id, pos_masks[img_id][lvl], ...]) - mlvl_pos_masks[lvl].append(pos_masks[img_id][lvl].flatten()) - mlvl_labels[lvl].append(labels[img_id][lvl].flatten()) - - # cat multiple image - temp_mlvl_cls_preds = [] - for lvl in range(num_levels): - mlvl_pos_mask_targets[lvl] = torch.cat( - mlvl_pos_mask_targets[lvl], dim=0) - mlvl_pos_mask_preds[lvl] = torch.cat( - mlvl_pos_mask_preds[lvl], dim=0) - mlvl_pos_masks[lvl] = torch.cat(mlvl_pos_masks[lvl], dim=0) - mlvl_labels[lvl] = torch.cat(mlvl_labels[lvl], dim=0) - temp_mlvl_cls_preds.append(mlvl_cls_preds[lvl].permute( - 0, 2, 3, 1).reshape(-1, self.cls_out_channels)) - - num_pos = sum(item.sum() for item in mlvl_pos_masks) - # dice loss - loss_mask = [] - for pred, target in zip(mlvl_pos_mask_preds, mlvl_pos_mask_targets): - if pred.size()[0] == 0: - loss_mask.append(pred.sum().unsqueeze(0)) - continue - loss_mask.append( - self.loss_mask(pred, target, reduction_override='none')) - if num_pos > 0: - loss_mask = torch.cat(loss_mask).sum() / num_pos - else: - loss_mask = torch.cat(loss_mask).mean() - - flatten_labels = torch.cat(mlvl_labels) - flatten_cls_preds = torch.cat(temp_mlvl_cls_preds) - loss_cls = self.loss_cls( - 
flatten_cls_preds, flatten_labels, avg_factor=num_pos + 1) - return dict(loss_mask=loss_mask, loss_cls=loss_cls) - - def _get_targets_single(self, - gt_bboxes, - gt_labels, - gt_masks, - featmap_sizes=None): - """Compute targets for predictions of single image. - - Args: - gt_bboxes (Tensor): Ground truth bbox of each instance, - shape (num_gts, 4). - gt_labels (Tensor): Ground truth label of each instance, - shape (num_gts,). - gt_masks (Tensor): Ground truth mask of each instance, - shape (num_gts, h, w). - featmap_sizes (list[:obj:`torch.size`]): Size of each - feature map from feature pyramid, each element - means (feat_h, feat_w). Default: None. - - Returns: - Tuple: Usually returns a tuple containing targets for predictions. - - - mlvl_pos_mask_targets (list[Tensor]): Each element represent - the binary mask targets for positive points in this - level, has shape (num_pos, out_h, out_w). - - mlvl_labels (list[Tensor]): Each element is - classification labels for all - points in this level, has shape - (num_grid, num_grid). - - mlvl_pos_masks (list[Tensor]): Each element is - a `BoolTensor` to represent whether the - corresponding point in single level - is positive, has shape (num_grid **2). - """ - device = gt_labels.device - gt_areas = torch.sqrt((gt_bboxes[:, 2] - gt_bboxes[:, 0]) * - (gt_bboxes[:, 3] - gt_bboxes[:, 1])) - - mlvl_pos_mask_targets = [] - mlvl_labels = [] - mlvl_pos_masks = [] - for (lower_bound, upper_bound), stride, featmap_size, num_grid \ - in zip(self.scale_ranges, self.strides, - featmap_sizes, self.num_grids): - - mask_target = torch.zeros( - [num_grid**2, featmap_size[0], featmap_size[1]], - dtype=torch.uint8, - device=device) - # FG cat_id: [0, num_classes -1], BG cat_id: num_classes - labels = torch.zeros([num_grid, num_grid], - dtype=torch.int64, - device=device) + self.num_classes - pos_mask = torch.zeros([num_grid**2], - dtype=torch.bool, - device=device) - - gt_inds = ((gt_areas >= lower_bound) & - (gt_areas <= upper_bound)).nonzero().flatten() - if len(gt_inds) == 0: - mlvl_pos_mask_targets.append( - mask_target.new_zeros(0, featmap_size[0], featmap_size[1])) - mlvl_labels.append(labels) - mlvl_pos_masks.append(pos_mask) - continue - hit_gt_bboxes = gt_bboxes[gt_inds] - hit_gt_labels = gt_labels[gt_inds] - hit_gt_masks = gt_masks[gt_inds, ...] - - pos_w_ranges = 0.5 * (hit_gt_bboxes[:, 2] - - hit_gt_bboxes[:, 0]) * self.pos_scale - pos_h_ranges = 0.5 * (hit_gt_bboxes[:, 3] - - hit_gt_bboxes[:, 1]) * self.pos_scale - - # Make sure hit_gt_masks has a value - valid_mask_flags = hit_gt_masks.sum(dim=-1).sum(dim=-1) > 0 - output_stride = stride / 2 - - for gt_mask, gt_label, pos_h_range, pos_w_range, \ - valid_mask_flag in \ - zip(hit_gt_masks, hit_gt_labels, pos_h_ranges, - pos_w_ranges, valid_mask_flags): - if not valid_mask_flag: - continue - upsampled_size = (featmap_sizes[0][0] * 4, - featmap_sizes[0][1] * 4) - center_h, center_w = center_of_mass(gt_mask) - - coord_w = int( - floordiv((center_w / upsampled_size[1]), (1. / num_grid), - rounding_mode='trunc')) - coord_h = int( - floordiv((center_h / upsampled_size[0]), (1. / num_grid), - rounding_mode='trunc')) - - # left, top, right, down - top_box = max( - 0, - int( - floordiv( - (center_h - pos_h_range) / upsampled_size[0], - (1. / num_grid), - rounding_mode='trunc'))) - down_box = min( - num_grid - 1, - int( - floordiv( - (center_h + pos_h_range) / upsampled_size[0], - (1. 
/ num_grid), - rounding_mode='trunc'))) - left_box = max( - 0, - int( - floordiv( - (center_w - pos_w_range) / upsampled_size[1], - (1. / num_grid), - rounding_mode='trunc'))) - right_box = min( - num_grid - 1, - int( - floordiv( - (center_w + pos_w_range) / upsampled_size[1], - (1. / num_grid), - rounding_mode='trunc'))) - - top = max(top_box, coord_h - 1) - down = min(down_box, coord_h + 1) - left = max(coord_w - 1, left_box) - right = min(right_box, coord_w + 1) - - labels[top:(down + 1), left:(right + 1)] = gt_label - # ins - gt_mask = np.uint8(gt_mask.cpu().numpy()) - # Follow the original implementation, F.interpolate is - # different from cv2 and opencv - gt_mask = mmcv.imrescale(gt_mask, scale=1. / output_stride) - gt_mask = torch.from_numpy(gt_mask).to(device=device) - - for i in range(top, down + 1): - for j in range(left, right + 1): - index = int(i * num_grid + j) - mask_target[index, :gt_mask.shape[0], :gt_mask. - shape[1]] = gt_mask - pos_mask[index] = True - mlvl_pos_mask_targets.append(mask_target[pos_mask]) - mlvl_labels.append(labels) - mlvl_pos_masks.append(pos_mask) - return mlvl_pos_mask_targets, mlvl_labels, mlvl_pos_masks - - def get_results(self, mlvl_mask_preds, mlvl_cls_scores, img_metas, - **kwargs): - """Get multi-image mask results. - - Args: - mlvl_mask_preds (list[Tensor]): Multi-level mask prediction. - Each element in the list has shape - (batch_size, num_grids**2 ,h ,w). - mlvl_cls_scores (list[Tensor]): Multi-level scores. Each element - in the list has shape - (batch_size, num_classes, num_grids ,num_grids). - img_metas (list[dict]): Meta information of all images. - - Returns: - list[:obj:`InstanceData`]: Processed results of multiple - images.Each :obj:`InstanceData` usually contains - following keys. - - - scores (Tensor): Classification scores, has shape - (num_instance,). - - labels (Tensor): Has shape (num_instances,). - - masks (Tensor): Processed mask results, has - shape (num_instances, h, w). - """ - mlvl_cls_scores = [ - item.permute(0, 2, 3, 1) for item in mlvl_cls_scores - ] - assert len(mlvl_mask_preds) == len(mlvl_cls_scores) - num_levels = len(mlvl_cls_scores) - - results_list = [] - for img_id in range(len(img_metas)): - cls_pred_list = [ - mlvl_cls_scores[lvl][img_id].view(-1, self.cls_out_channels) - for lvl in range(num_levels) - ] - mask_pred_list = [ - mlvl_mask_preds[lvl][img_id] for lvl in range(num_levels) - ] - - cls_pred_list = torch.cat(cls_pred_list, dim=0) - mask_pred_list = torch.cat(mask_pred_list, dim=0) - - results = self._get_results_single( - cls_pred_list, mask_pred_list, img_meta=img_metas[img_id]) - results_list.append(results) - - return results_list - - def _get_results_single(self, cls_scores, mask_preds, img_meta, cfg=None): - """Get processed mask related results of single image. - - Args: - cls_scores (Tensor): Classification score of all points - in single image, has shape (num_points, num_classes). - mask_preds (Tensor): Mask prediction of all points in - single image, has shape (num_points, feat_h, feat_w). - img_meta (dict): Meta information of corresponding image. - cfg (dict, optional): Config used in test phase. - Default: None. - - Returns: - :obj:`InstanceData`: Processed results of single image. - it usually contains following keys. - - - scores (Tensor): Classification scores, has shape - (num_instance,). - - labels (Tensor): Has shape (num_instances,). - - masks (Tensor): Processed mask results, has - shape (num_instances, h, w). 
- """ - - def empty_results(results, cls_scores): - """Generate a empty results.""" - results.scores = cls_scores.new_ones(0) - results.masks = cls_scores.new_zeros(0, *results.ori_shape[:2]) - results.labels = cls_scores.new_ones(0) - return results - - cfg = self.test_cfg if cfg is None else cfg - assert len(cls_scores) == len(mask_preds) - results = InstanceData(img_meta) - - featmap_size = mask_preds.size()[-2:] - - img_shape = results.img_shape - ori_shape = results.ori_shape - - h, w, _ = img_shape - upsampled_size = (featmap_size[0] * 4, featmap_size[1] * 4) - - score_mask = (cls_scores > cfg.score_thr) - cls_scores = cls_scores[score_mask] - if len(cls_scores) == 0: - return empty_results(results, cls_scores) - - inds = score_mask.nonzero() - cls_labels = inds[:, 1] - - # Filter the mask mask with an area is smaller than - # stride of corresponding feature level - lvl_interval = cls_labels.new_tensor(self.num_grids).pow(2).cumsum(0) - strides = cls_scores.new_ones(lvl_interval[-1]) - strides[:lvl_interval[0]] *= self.strides[0] - for lvl in range(1, self.num_levels): - strides[lvl_interval[lvl - - 1]:lvl_interval[lvl]] *= self.strides[lvl] - strides = strides[inds[:, 0]] - mask_preds = mask_preds[inds[:, 0]] - - masks = mask_preds > cfg.mask_thr - sum_masks = masks.sum((1, 2)).float() - keep = sum_masks > strides - if keep.sum() == 0: - return empty_results(results, cls_scores) - masks = masks[keep] - mask_preds = mask_preds[keep] - sum_masks = sum_masks[keep] - cls_scores = cls_scores[keep] - cls_labels = cls_labels[keep] - - # maskness. - mask_scores = (mask_preds * masks).sum((1, 2)) / sum_masks - cls_scores *= mask_scores - - scores, labels, _, keep_inds = mask_matrix_nms( - masks, - cls_labels, - cls_scores, - mask_area=sum_masks, - nms_pre=cfg.nms_pre, - max_num=cfg.max_per_img, - kernel=cfg.kernel, - sigma=cfg.sigma, - filter_thr=cfg.filter_thr) - mask_preds = mask_preds[keep_inds] - mask_preds = F.interpolate( - mask_preds.unsqueeze(0), size=upsampled_size, - mode='bilinear')[:, :, :h, :w] - mask_preds = F.interpolate( - mask_preds, size=ori_shape[:2], mode='bilinear').squeeze(0) - masks = mask_preds > cfg.mask_thr - - results.masks = masks - results.labels = labels - results.scores = scores - - return results - - -@HEADS.register_module() -class DecoupledSOLOHead(SOLOHead): - """Decoupled SOLO mask head used in `SOLO: Segmenting Objects by Locations. - - `_ - - Args: - init_cfg (dict or list[dict], optional): Initialization config dict. 
- """ - - def __init__(self, - *args, - init_cfg=[ - dict(type='Normal', layer='Conv2d', std=0.01), - dict( - type='Normal', - std=0.01, - bias_prob=0.01, - override=dict(name='conv_mask_list_x')), - dict( - type='Normal', - std=0.01, - bias_prob=0.01, - override=dict(name='conv_mask_list_y')), - dict( - type='Normal', - std=0.01, - bias_prob=0.01, - override=dict(name='conv_cls')) - ], - **kwargs): - super(DecoupledSOLOHead, self).__init__( - *args, init_cfg=init_cfg, **kwargs) - - def _init_layers(self): - self.mask_convs_x = nn.ModuleList() - self.mask_convs_y = nn.ModuleList() - self.cls_convs = nn.ModuleList() - - for i in range(self.stacked_convs): - chn = self.in_channels + 1 if i == 0 else self.feat_channels - self.mask_convs_x.append( - ConvModule( - chn, - self.feat_channels, - 3, - stride=1, - padding=1, - norm_cfg=self.norm_cfg)) - self.mask_convs_y.append( - ConvModule( - chn, - self.feat_channels, - 3, - stride=1, - padding=1, - norm_cfg=self.norm_cfg)) - - chn = self.in_channels if i == 0 else self.feat_channels - self.cls_convs.append( - ConvModule( - chn, - self.feat_channels, - 3, - stride=1, - padding=1, - norm_cfg=self.norm_cfg)) - - self.conv_mask_list_x = nn.ModuleList() - self.conv_mask_list_y = nn.ModuleList() - for num_grid in self.num_grids: - self.conv_mask_list_x.append( - nn.Conv2d(self.feat_channels, num_grid, 3, padding=1)) - self.conv_mask_list_y.append( - nn.Conv2d(self.feat_channels, num_grid, 3, padding=1)) - self.conv_cls = nn.Conv2d( - self.feat_channels, self.cls_out_channels, 3, padding=1) - - def forward(self, feats): - assert len(feats) == self.num_levels - feats = self.resize_feats(feats) - mask_preds_x = [] - mask_preds_y = [] - cls_preds = [] - for i in range(self.num_levels): - x = feats[i] - mask_feat = x - cls_feat = x - # generate and concat the coordinate - coord_feat = generate_coordinate(mask_feat.size(), - mask_feat.device) - mask_feat_x = torch.cat([mask_feat, coord_feat[:, 0:1, ...]], 1) - mask_feat_y = torch.cat([mask_feat, coord_feat[:, 1:2, ...]], 1) - - for mask_layer_x, mask_layer_y in \ - zip(self.mask_convs_x, self.mask_convs_y): - mask_feat_x = mask_layer_x(mask_feat_x) - mask_feat_y = mask_layer_y(mask_feat_y) - - mask_feat_x = F.interpolate( - mask_feat_x, scale_factor=2, mode='bilinear') - mask_feat_y = F.interpolate( - mask_feat_y, scale_factor=2, mode='bilinear') - - mask_pred_x = self.conv_mask_list_x[i](mask_feat_x) - mask_pred_y = self.conv_mask_list_y[i](mask_feat_y) - - # cls branch - for j, cls_layer in enumerate(self.cls_convs): - if j == self.cls_down_index: - num_grid = self.num_grids[i] - cls_feat = F.interpolate( - cls_feat, size=num_grid, mode='bilinear') - cls_feat = cls_layer(cls_feat) - - cls_pred = self.conv_cls(cls_feat) - - if not self.training: - feat_wh = feats[0].size()[-2:] - upsampled_size = (feat_wh[0] * 2, feat_wh[1] * 2) - mask_pred_x = F.interpolate( - mask_pred_x.sigmoid(), - size=upsampled_size, - mode='bilinear') - mask_pred_y = F.interpolate( - mask_pred_y.sigmoid(), - size=upsampled_size, - mode='bilinear') - cls_pred = cls_pred.sigmoid() - # get local maximum - local_max = F.max_pool2d(cls_pred, 2, stride=1, padding=1) - keep_mask = local_max[:, :, :-1, :-1] == cls_pred - cls_pred = cls_pred * keep_mask - - mask_preds_x.append(mask_pred_x) - mask_preds_y.append(mask_pred_y) - cls_preds.append(cls_pred) - return mask_preds_x, mask_preds_y, cls_preds - - def loss(self, - mlvl_mask_preds_x, - mlvl_mask_preds_y, - mlvl_cls_preds, - gt_labels, - gt_masks, - img_metas, - gt_bboxes=None, - 
**kwargs): - """Calculate the loss of total batch. - - Args: - mlvl_mask_preds_x (list[Tensor]): Multi-level mask prediction - from x branch. Each element in the list has shape - (batch_size, num_grids ,h ,w). - mlvl_mask_preds_x (list[Tensor]): Multi-level mask prediction - from y branch. Each element in the list has shape - (batch_size, num_grids ,h ,w). - mlvl_cls_preds (list[Tensor]): Multi-level scores. Each element - in the list has shape - (batch_size, num_classes, num_grids ,num_grids). - gt_labels (list[Tensor]): Labels of multiple images. - gt_masks (list[Tensor]): Ground truth masks of multiple images. - Each has shape (num_instances, h, w). - img_metas (list[dict]): Meta information of multiple images. - gt_bboxes (list[Tensor]): Ground truth bboxes of multiple - images. Default: None. - - Returns: - dict[str, Tensor]: A dictionary of loss components. - """ - num_levels = self.num_levels - num_imgs = len(gt_labels) - featmap_sizes = [featmap.size()[-2:] for featmap in mlvl_mask_preds_x] - - pos_mask_targets, labels, \ - xy_pos_indexes = \ - multi_apply(self._get_targets_single, - gt_bboxes, - gt_labels, - gt_masks, - featmap_sizes=featmap_sizes) - - # change from the outside list meaning multi images - # to the outside list meaning multi levels - mlvl_pos_mask_targets = [[] for _ in range(num_levels)] - mlvl_pos_mask_preds_x = [[] for _ in range(num_levels)] - mlvl_pos_mask_preds_y = [[] for _ in range(num_levels)] - mlvl_labels = [[] for _ in range(num_levels)] - for img_id in range(num_imgs): - - for lvl in range(num_levels): - mlvl_pos_mask_targets[lvl].append( - pos_mask_targets[img_id][lvl]) - mlvl_pos_mask_preds_x[lvl].append( - mlvl_mask_preds_x[lvl][img_id, - xy_pos_indexes[img_id][lvl][:, 1]]) - mlvl_pos_mask_preds_y[lvl].append( - mlvl_mask_preds_y[lvl][img_id, - xy_pos_indexes[img_id][lvl][:, 0]]) - mlvl_labels[lvl].append(labels[img_id][lvl].flatten()) - - # cat multiple image - temp_mlvl_cls_preds = [] - for lvl in range(num_levels): - mlvl_pos_mask_targets[lvl] = torch.cat( - mlvl_pos_mask_targets[lvl], dim=0) - mlvl_pos_mask_preds_x[lvl] = torch.cat( - mlvl_pos_mask_preds_x[lvl], dim=0) - mlvl_pos_mask_preds_y[lvl] = torch.cat( - mlvl_pos_mask_preds_y[lvl], dim=0) - mlvl_labels[lvl] = torch.cat(mlvl_labels[lvl], dim=0) - temp_mlvl_cls_preds.append(mlvl_cls_preds[lvl].permute( - 0, 2, 3, 1).reshape(-1, self.cls_out_channels)) - - num_pos = 0. - # dice loss - loss_mask = [] - for pred_x, pred_y, target in \ - zip(mlvl_pos_mask_preds_x, - mlvl_pos_mask_preds_y, mlvl_pos_mask_targets): - num_masks = pred_x.size(0) - if num_masks == 0: - # make sure can get grad - loss_mask.append((pred_x.sum() + pred_y.sum()).unsqueeze(0)) - continue - num_pos += num_masks - pred_mask = pred_y.sigmoid() * pred_x.sigmoid() - loss_mask.append( - self.loss_mask(pred_mask, target, reduction_override='none')) - if num_pos > 0: - loss_mask = torch.cat(loss_mask).sum() / num_pos - else: - loss_mask = torch.cat(loss_mask).mean() - - # cate - flatten_labels = torch.cat(mlvl_labels) - flatten_cls_preds = torch.cat(temp_mlvl_cls_preds) - - loss_cls = self.loss_cls( - flatten_cls_preds, flatten_labels, avg_factor=num_pos + 1) - return dict(loss_mask=loss_mask, loss_cls=loss_cls) - - def _get_targets_single(self, - gt_bboxes, - gt_labels, - gt_masks, - featmap_sizes=None): - """Compute targets for predictions of single image. - - Args: - gt_bboxes (Tensor): Ground truth bbox of each instance, - shape (num_gts, 4). - gt_labels (Tensor): Ground truth label of each instance, - shape (num_gts,). 
- gt_masks (Tensor): Ground truth mask of each instance, - shape (num_gts, h, w). - featmap_sizes (list[:obj:`torch.size`]): Size of each - feature map from feature pyramid, each element - means (feat_h, feat_w). Default: None. - - Returns: - Tuple: Usually returns a tuple containing targets for predictions. - - - mlvl_pos_mask_targets (list[Tensor]): Each element represent - the binary mask targets for positive points in this - level, has shape (num_pos, out_h, out_w). - - mlvl_labels (list[Tensor]): Each element is - classification labels for all - points in this level, has shape - (num_grid, num_grid). - - mlvl_xy_pos_indexes (list[Tensor]): Each element - in the list contains the index of positive samples in - corresponding level, has shape (num_pos, 2), last - dimension 2 present (index_x, index_y). - """ - mlvl_pos_mask_targets, mlvl_labels, \ - mlvl_pos_masks = \ - super()._get_targets_single(gt_bboxes, gt_labels, gt_masks, - featmap_sizes=featmap_sizes) - - mlvl_xy_pos_indexes = [(item - self.num_classes).nonzero() - for item in mlvl_labels] - - return mlvl_pos_mask_targets, mlvl_labels, mlvl_xy_pos_indexes - - def get_results(self, - mlvl_mask_preds_x, - mlvl_mask_preds_y, - mlvl_cls_scores, - img_metas, - rescale=None, - **kwargs): - """Get multi-image mask results. - - Args: - mlvl_mask_preds_x (list[Tensor]): Multi-level mask prediction - from x branch. Each element in the list has shape - (batch_size, num_grids ,h ,w). - mlvl_mask_preds_y (list[Tensor]): Multi-level mask prediction - from y branch. Each element in the list has shape - (batch_size, num_grids ,h ,w). - mlvl_cls_scores (list[Tensor]): Multi-level scores. Each element - in the list has shape - (batch_size, num_classes ,num_grids ,num_grids). - img_metas (list[dict]): Meta information of all images. - - Returns: - list[:obj:`InstanceData`]: Processed results of multiple - images.Each :obj:`InstanceData` usually contains - following keys. - - - scores (Tensor): Classification scores, has shape - (num_instance,). - - labels (Tensor): Has shape (num_instances,). - - masks (Tensor): Processed mask results, has - shape (num_instances, h, w). - """ - mlvl_cls_scores = [ - item.permute(0, 2, 3, 1) for item in mlvl_cls_scores - ] - assert len(mlvl_mask_preds_x) == len(mlvl_cls_scores) - num_levels = len(mlvl_cls_scores) - - results_list = [] - for img_id in range(len(img_metas)): - cls_pred_list = [ - mlvl_cls_scores[i][img_id].view( - -1, self.cls_out_channels).detach() - for i in range(num_levels) - ] - mask_pred_list_x = [ - mlvl_mask_preds_x[i][img_id] for i in range(num_levels) - ] - mask_pred_list_y = [ - mlvl_mask_preds_y[i][img_id] for i in range(num_levels) - ] - - cls_pred_list = torch.cat(cls_pred_list, dim=0) - mask_pred_list_x = torch.cat(mask_pred_list_x, dim=0) - mask_pred_list_y = torch.cat(mask_pred_list_y, dim=0) - - results = self._get_results_single( - cls_pred_list, - mask_pred_list_x, - mask_pred_list_y, - img_meta=img_metas[img_id], - cfg=self.test_cfg) - results_list.append(results) - return results_list - - def _get_results_single(self, cls_scores, mask_preds_x, mask_preds_y, - img_meta, cfg): - """Get processed mask related results of single image. - - Args: - cls_scores (Tensor): Classification score of all points - in single image, has shape (num_points, num_classes). - mask_preds_x (Tensor): Mask prediction of x branch of - all points in single image, has shape - (sum_num_grids, feat_h, feat_w). 
- mask_preds_y (Tensor): Mask prediction of y branch of - all points in single image, has shape - (sum_num_grids, feat_h, feat_w). - img_meta (dict): Meta information of corresponding image. - cfg (dict): Config used in test phase. - - Returns: - :obj:`InstanceData`: Processed results of single image. - it usually contains following keys. - - - scores (Tensor): Classification scores, has shape - (num_instance,). - - labels (Tensor): Has shape (num_instances,). - - masks (Tensor): Processed mask results, has - shape (num_instances, h, w). - """ - - def empty_results(results, cls_scores): - """Generate a empty results.""" - results.scores = cls_scores.new_ones(0) - results.masks = cls_scores.new_zeros(0, *results.ori_shape[:2]) - results.labels = cls_scores.new_ones(0) - return results - - cfg = self.test_cfg if cfg is None else cfg - - results = InstanceData(img_meta) - img_shape = results.img_shape - ori_shape = results.ori_shape - h, w, _ = img_shape - featmap_size = mask_preds_x.size()[-2:] - upsampled_size = (featmap_size[0] * 4, featmap_size[1] * 4) - - score_mask = (cls_scores > cfg.score_thr) - cls_scores = cls_scores[score_mask] - inds = score_mask.nonzero() - lvl_interval = inds.new_tensor(self.num_grids).pow(2).cumsum(0) - num_all_points = lvl_interval[-1] - lvl_start_index = inds.new_ones(num_all_points) - num_grids = inds.new_ones(num_all_points) - seg_size = inds.new_tensor(self.num_grids).cumsum(0) - mask_lvl_start_index = inds.new_ones(num_all_points) - strides = inds.new_ones(num_all_points) - - lvl_start_index[:lvl_interval[0]] *= 0 - mask_lvl_start_index[:lvl_interval[0]] *= 0 - num_grids[:lvl_interval[0]] *= self.num_grids[0] - strides[:lvl_interval[0]] *= self.strides[0] - - for lvl in range(1, self.num_levels): - lvl_start_index[lvl_interval[lvl - 1]:lvl_interval[lvl]] *= \ - lvl_interval[lvl - 1] - mask_lvl_start_index[lvl_interval[lvl - 1]:lvl_interval[lvl]] *= \ - seg_size[lvl - 1] - num_grids[lvl_interval[lvl - 1]:lvl_interval[lvl]] *= \ - self.num_grids[lvl] - strides[lvl_interval[lvl - 1]:lvl_interval[lvl]] *= \ - self.strides[lvl] - - lvl_start_index = lvl_start_index[inds[:, 0]] - mask_lvl_start_index = mask_lvl_start_index[inds[:, 0]] - num_grids = num_grids[inds[:, 0]] - strides = strides[inds[:, 0]] - - y_lvl_offset = (inds[:, 0] - lvl_start_index) // num_grids - x_lvl_offset = (inds[:, 0] - lvl_start_index) % num_grids - y_inds = mask_lvl_start_index + y_lvl_offset - x_inds = mask_lvl_start_index + x_lvl_offset - - cls_labels = inds[:, 1] - mask_preds = mask_preds_x[x_inds, ...] * mask_preds_y[y_inds, ...] - - masks = mask_preds > cfg.mask_thr - sum_masks = masks.sum((1, 2)).float() - keep = sum_masks > strides - if keep.sum() == 0: - return empty_results(results, cls_scores) - - masks = masks[keep] - mask_preds = mask_preds[keep] - sum_masks = sum_masks[keep] - cls_scores = cls_scores[keep] - cls_labels = cls_labels[keep] - - # maskness. 
- mask_scores = (mask_preds * masks).sum((1, 2)) / sum_masks - cls_scores *= mask_scores - - scores, labels, _, keep_inds = mask_matrix_nms( - masks, - cls_labels, - cls_scores, - mask_area=sum_masks, - nms_pre=cfg.nms_pre, - max_num=cfg.max_per_img, - kernel=cfg.kernel, - sigma=cfg.sigma, - filter_thr=cfg.filter_thr) - mask_preds = mask_preds[keep_inds] - mask_preds = F.interpolate( - mask_preds.unsqueeze(0), size=upsampled_size, - mode='bilinear')[:, :, :h, :w] - mask_preds = F.interpolate( - mask_preds, size=ori_shape[:2], mode='bilinear').squeeze(0) - masks = mask_preds > cfg.mask_thr - - results.masks = masks - results.labels = labels - results.scores = scores - - return results - - -@HEADS.register_module() -class DecoupledSOLOLightHead(DecoupledSOLOHead): - """Decoupled Light SOLO mask head used in `SOLO: Segmenting Objects by - Locations `_ - - Args: - with_dcn (bool): Whether use dcn in mask_convs and cls_convs, - default: False. - init_cfg (dict or list[dict], optional): Initialization config dict. - """ - - def __init__(self, - *args, - dcn_cfg=None, - init_cfg=[ - dict(type='Normal', layer='Conv2d', std=0.01), - dict( - type='Normal', - std=0.01, - bias_prob=0.01, - override=dict(name='conv_mask_list_x')), - dict( - type='Normal', - std=0.01, - bias_prob=0.01, - override=dict(name='conv_mask_list_y')), - dict( - type='Normal', - std=0.01, - bias_prob=0.01, - override=dict(name='conv_cls')) - ], - **kwargs): - assert dcn_cfg is None or isinstance(dcn_cfg, dict) - self.dcn_cfg = dcn_cfg - super(DecoupledSOLOLightHead, self).__init__( - *args, init_cfg=init_cfg, **kwargs) - - def _init_layers(self): - self.mask_convs = nn.ModuleList() - self.cls_convs = nn.ModuleList() - - for i in range(self.stacked_convs): - if self.dcn_cfg is not None\ - and i == self.stacked_convs - 1: - conv_cfg = self.dcn_cfg - else: - conv_cfg = None - - chn = self.in_channels + 2 if i == 0 else self.feat_channels - self.mask_convs.append( - ConvModule( - chn, - self.feat_channels, - 3, - stride=1, - padding=1, - conv_cfg=conv_cfg, - norm_cfg=self.norm_cfg)) - - chn = self.in_channels if i == 0 else self.feat_channels - self.cls_convs.append( - ConvModule( - chn, - self.feat_channels, - 3, - stride=1, - padding=1, - conv_cfg=conv_cfg, - norm_cfg=self.norm_cfg)) - - self.conv_mask_list_x = nn.ModuleList() - self.conv_mask_list_y = nn.ModuleList() - for num_grid in self.num_grids: - self.conv_mask_list_x.append( - nn.Conv2d(self.feat_channels, num_grid, 3, padding=1)) - self.conv_mask_list_y.append( - nn.Conv2d(self.feat_channels, num_grid, 3, padding=1)) - self.conv_cls = nn.Conv2d( - self.feat_channels, self.cls_out_channels, 3, padding=1) - - def forward(self, feats): - assert len(feats) == self.num_levels - feats = self.resize_feats(feats) - mask_preds_x = [] - mask_preds_y = [] - cls_preds = [] - for i in range(self.num_levels): - x = feats[i] - mask_feat = x - cls_feat = x - # generate and concat the coordinate - coord_feat = generate_coordinate(mask_feat.size(), - mask_feat.device) - mask_feat = torch.cat([mask_feat, coord_feat], 1) - - for mask_layer in self.mask_convs: - mask_feat = mask_layer(mask_feat) - - mask_feat = F.interpolate( - mask_feat, scale_factor=2, mode='bilinear') - - mask_pred_x = self.conv_mask_list_x[i](mask_feat) - mask_pred_y = self.conv_mask_list_y[i](mask_feat) - - # cls branch - for j, cls_layer in enumerate(self.cls_convs): - if j == self.cls_down_index: - num_grid = self.num_grids[i] - cls_feat = F.interpolate( - cls_feat, size=num_grid, mode='bilinear') - cls_feat = 
cls_layer(cls_feat) - - cls_pred = self.conv_cls(cls_feat) - - if not self.training: - feat_wh = feats[0].size()[-2:] - upsampled_size = (feat_wh[0] * 2, feat_wh[1] * 2) - mask_pred_x = F.interpolate( - mask_pred_x.sigmoid(), - size=upsampled_size, - mode='bilinear') - mask_pred_y = F.interpolate( - mask_pred_y.sigmoid(), - size=upsampled_size, - mode='bilinear') - cls_pred = cls_pred.sigmoid() - # get local maximum - local_max = F.max_pool2d(cls_pred, 2, stride=1, padding=1) - keep_mask = local_max[:, :, :-1, :-1] == cls_pred - cls_pred = cls_pred * keep_mask - - mask_preds_x.append(mask_pred_x) - mask_preds_y.append(mask_pred_y) - cls_preds.append(cls_pred) - return mask_preds_x, mask_preds_y, cls_preds diff --git a/spaces/rockeycoss/Prompt-Segment-Anything-Demo/projects/instance_segment_anything/models/segment_anything/build_sam.py b/spaces/rockeycoss/Prompt-Segment-Anything-Demo/projects/instance_segment_anything/models/segment_anything/build_sam.py deleted file mode 100644 index 07abfca24e96eced7f13bdefd3212ce1b77b8999..0000000000000000000000000000000000000000 --- a/spaces/rockeycoss/Prompt-Segment-Anything-Demo/projects/instance_segment_anything/models/segment_anything/build_sam.py +++ /dev/null @@ -1,107 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. - -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -import torch - -from functools import partial - -from .modeling import ImageEncoderViT, MaskDecoder, PromptEncoder, Sam, TwoWayTransformer - - -def build_sam_vit_h(checkpoint=None): - return _build_sam( - encoder_embed_dim=1280, - encoder_depth=32, - encoder_num_heads=16, - encoder_global_attn_indexes=[7, 15, 23, 31], - checkpoint=checkpoint, - ) - - -build_sam = build_sam_vit_h - - -def build_sam_vit_l(checkpoint=None): - return _build_sam( - encoder_embed_dim=1024, - encoder_depth=24, - encoder_num_heads=16, - encoder_global_attn_indexes=[5, 11, 17, 23], - checkpoint=checkpoint, - ) - - -def build_sam_vit_b(checkpoint=None): - return _build_sam( - encoder_embed_dim=768, - encoder_depth=12, - encoder_num_heads=12, - encoder_global_attn_indexes=[2, 5, 8, 11], - checkpoint=checkpoint, - ) - - -sam_model_registry = { - "default": build_sam, - "vit_h": build_sam, - "vit_l": build_sam_vit_l, - "vit_b": build_sam_vit_b, -} - - -def _build_sam( - encoder_embed_dim, - encoder_depth, - encoder_num_heads, - encoder_global_attn_indexes, - checkpoint=None, -): - prompt_embed_dim = 256 - image_size = 1024 - vit_patch_size = 16 - image_embedding_size = image_size // vit_patch_size - sam = Sam( - image_encoder=ImageEncoderViT( - depth=encoder_depth, - embed_dim=encoder_embed_dim, - img_size=image_size, - mlp_ratio=4, - norm_layer=partial(torch.nn.LayerNorm, eps=1e-6), - num_heads=encoder_num_heads, - patch_size=vit_patch_size, - qkv_bias=True, - use_rel_pos=True, - global_attn_indexes=encoder_global_attn_indexes, - window_size=14, - out_chans=prompt_embed_dim, - ), - prompt_encoder=PromptEncoder( - embed_dim=prompt_embed_dim, - image_embedding_size=(image_embedding_size, image_embedding_size), - input_image_size=(image_size, image_size), - mask_in_chans=16, - ), - mask_decoder=MaskDecoder( - num_multimask_outputs=3, - transformer=TwoWayTransformer( - depth=2, - embedding_dim=prompt_embed_dim, - mlp_dim=2048, - num_heads=8, - ), - transformer_dim=prompt_embed_dim, - iou_head_depth=3, - iou_head_hidden_dim=256, - ), - pixel_mean=[123.675, 116.28, 103.53], - pixel_std=[58.395, 57.12, 
57.375], - ) - sam.eval() - if checkpoint is not None: - with open(checkpoint, "rb") as f: - state_dict = torch.load(f) - sam.load_state_dict(state_dict) - return sam diff --git a/spaces/rorallitri/biomedical-language-models/Just Cause 4: Day One Edition 5 DLCs Repack (16.4 GB).md b/spaces/rorallitri/biomedical-language-models/Just Cause 4: Day One Edition 5 DLCs Repack (16.4 GB).md deleted file mode 100644 index 7a60dfa17c5daf6abf54df1f56b5d7c8afb06093..0000000000000000000000000000000000000000 --- a/spaces/rorallitri/biomedical-language-models/Just Cause 4: Day One Edition 5 DLCs Repack (16.4 GB).md +++ /dev/null @@ -1,62 +0,0 @@ -## Just Cause 4: Day One Edition 5 DLCs Repack (16.4 GB) - - - - - - ![Just Cause 4: Day One Edition 5 DLCs Repack (16.4 GB)](https://encrypted-tbn3.gstatic.com/images?q=tbn:ANd9GcRsCI4DVduB9--60Qgu8lQMw26MTxsqq3ogwjIQuobKGJYyaFacBrcorew) - - - - - -**LINK → [https://denirade.blogspot.com/?download=2txos3](https://denirade.blogspot.com/?download=2txos3)** - - - - - - - - - - - - - -# Just Cause 4: Day One Edition 5 DLCs Repack (16.4 GB) - A Review - - - -If you are looking for a thrilling action-adventure game with stunning graphics and an open world to explore, you might want to check out Just Cause 4: Day One Edition 5 DLCs Repack (16.4 GB). This is a repack version of the original game that includes the base game and five downloadable content packs: Deathstalker Scorpion Pack, Renegade Pack, Neon Racer Pack, Golden Gear Pack and Digital Deluxe Content. The repack also reduces the game size from 53.9 GB to 16.4 GB, making it easier to download and install. - - - -Just Cause 4 is the fourth installment in the popular Just Cause series, developed by Avalanche Studios and published by Square Enix. The game follows the adventures of Rico Rodriguez, a rogue agent who travels to the fictional South American country of Solis to uncover the truth behind his father's death and stop a sinister plot by the Black Hand, a private military organization. Solis is a vast and diverse land with four distinct biomes: rainforest, grassland, alpine and desert. Each biome has its own weather system, such as tornadoes, sandstorms, blizzards and lightning storms, that affect the gameplay and environment. - - - -The game features a variety of vehicles, weapons and gadgets that Rico can use to cause chaos and destruction. Rico can also use his signature grappling hook, wingsuit and parachute to traverse the terrain and perform stunts. The game has a dynamic day-night cycle and a physics-based destruction system that allows players to interact with almost anything in the world. The game also has a story mode with missions and side quests, as well as a sandbox mode where players can create their own scenarios using the in-game tools. - - - -Just Cause 4: Day One Edition 5 DLCs Repack (16.4 GB) is a great option for fans of the Just Cause series or anyone who enjoys explosive action and freedom of choice. The game offers hours of fun and entertainment with its immersive gameplay and stunning visuals. The repack version also saves space and time without compromising quality or performance. If you are interested in downloading Just Cause 4: Day One Edition 5 DLCs Repack (16.4 GB), you can find it on various torrent sites or online platforms. - - - -One of the main attractions of Just Cause 4: Day One Edition 5 DLCs Repack (16.4 GB) is the variety and customization of the vehicles, weapons and gadgets that Rico can use. 
The game features over 100 vehicles, ranging from cars, motorcycles, boats, planes, helicopters, tanks, trains and more. Each vehicle has its own characteristics and can be upgraded with different mods and skins. The game also has over 80 weapons, such as pistols, rifles, shotguns, rocket launchers, grenades and more. Each weapon has its own functionality and can be modified with different attachments and scopes. The game also has several gadgets that Rico can use to enhance his abilities and cause mayhem. The most notable ones are the grappling hook, the wingsuit and the parachute. - - - -The grappling hook is Rico's signature tool that allows him to attach to almost any surface or object and pull himself towards it. The grappling hook can also be used to tether objects together and create various effects, such as explosions, electric shocks, balloons and boosters. The grappling hook can be customized with different loadouts and mods that change its behavior and appearance. The wingsuit is Rico's way of flying through the air and performing acrobatic maneuvers. The wingsuit can be used to glide over long distances and dodge enemy fire. The wingsuit can also be upgraded with different mods that enhance its speed, maneuverability and durability. The parachute is Rico's way of slowing down his descent and landing safely. The parachute can be used to steer in mid-air and deploy weapons or gadgets. The parachute can also be improved with different mods that increase its size, stability and resistance. - - - -Just Cause 4: Day One Edition 5 DLCs Repack (16.4 GB) also includes five downloadable content packs that add more content and features to the game. The Deathstalker Scorpion Pack adds a new vehicle, weapon and outfit inspired by the deadly scorpion. The Renegade Pack adds a new vehicle, weapon and outfit inspired by the rebellious spirit of Rico. The Neon Racer Pack adds a new vehicle, weapon and outfit inspired by the futuristic neon style. The Golden Gear Pack adds a new vehicle, weapon and outfit inspired by the luxurious gold theme. The Digital Deluxe Content adds a new vehicle, weapon and outfit inspired by the exclusive digital edition of the game. - - 1b8d091108 - - - - - diff --git a/spaces/rorallitri/biomedical-language-models/logs/(Pthc) (Mylola Info) Nidea Shower ((Hussyfan)).rar !!HOT!!.md b/spaces/rorallitri/biomedical-language-models/logs/(Pthc) (Mylola Info) Nidea Shower ((Hussyfan)).rar !!HOT!!.md deleted file mode 100644 index 7cef0136b13db5d69d6a41076a3fb1af8201223d..0000000000000000000000000000000000000000 --- a/spaces/rorallitri/biomedical-language-models/logs/(Pthc) (Mylola Info) Nidea Shower ((Hussyfan)).rar !!HOT!!.md +++ /dev/null @@ -1,6 +0,0 @@ -


              diff --git a/spaces/rorallitri/biomedical-language-models/logs/Daemon Tools Ultra 4.0.1.0425 Full Crack.md b/spaces/rorallitri/biomedical-language-models/logs/Daemon Tools Ultra 4.0.1.0425 Full Crack.md deleted file mode 100644 index 8c6477863c8ffcbdcb8fe6f00716d5e89bc86b76..0000000000000000000000000000000000000000 --- a/spaces/rorallitri/biomedical-language-models/logs/Daemon Tools Ultra 4.0.1.0425 Full Crack.md +++ /dev/null @@ -1,122 +0,0 @@ - -

              DAEMON Tools Ultra 4.0.1.0425 Full Crack: A Comprehensive Guide

              -

              If you are looking for a powerful and versatile imaging software that can handle various types of disc images, create virtual drives, mount VHD images, create bootable USB devices, and more, then you should consider DAEMON Tools Ultra 4.0.1.0425 Full Crack. This is the latest version of the popular DAEMON Tools software that offers a user-friendly interface and a rich set of features. In this article, we will show you how to download, install, and use DAEMON Tools Ultra 4.0.1.0425 Full Crack, as well as some of its main functions and benefits.

              -




              -

              How to Download and Install DAEMON Tools Ultra 4.0.1.0425 Full Crack

              -

              Downloading and installing DAEMON Tools Ultra 4.0.1.0425 Full Crack is very easy and fast. You just need to follow these simple steps:

              -
                -
              1. Click on one of the links below to download DAEMON Tools Ultra 4.0.1.0425 Full Crack ( Zippyshare 24 Mb ) or ( Embedupload 24 Mb ).
              2. -
              3. Extract the downloaded file using WinRAR or any other file archiver.
              4. -
              5. Run the setup file and follow the instructions to install DAEMON Tools Ultra 4 on your computer.
              6. -
              7. Copy the crack file from the crack folder and paste it into the installation directory of DAEMON Tools Ultra 4.
              8. -
              9. Run DAEMON Tools Ultra 4 and enjoy its full features.
              10. -
              -

              How to Use DAEMON Tools Ultra 4.0.1.0425 Full Crack

              -

              Using DAEMON Tools Ultra 4.0.1.0425 Full Crack is very simple and intuitive. You can access all its functions from the main window or from the DAEMON Tools Gadget on your Windows Desktop. Here are some of the most common tasks that you can perform with DAEMON Tools Ultra 4:

              -

              Mount disc images

              -

              DAEMON Tools Ultra 4 supports a wide range of disc image formats, such as *.mdx, *.mds/*.mdf, *.iso, *.b5t, *.b6t, *.bwt, *.ccd, *.cdi, *.bin/*.cue, *.ape/*.cue, *.flac/*.cue, *.nrg, *.isz. You can mount disc images in two ways:

              -
                -
              • Drag and drop an image onto the main window of DAEMON Tools Ultra 4 and it immediately gets mounted.
              • -
              • Use the “Quick Mount” option to mount up to 32 disc images with one click.
              • -
              -

              You can also customize the image parameters for future mounting in Image Catalog, such as device letter, mount point, emulation mode, etc.

              -

              Create disc images

              -

              DAEMON Tools Ultra 4 allows you to create disc images from CD, DVD, Blu-ray discs or from files and folders on your computer. You can also convert all supported image formats to *.mdf/*.mds, *.mdx

              -

              DAEMON Tools Ultra 4 allows you to create disc images from CD, DVD, Blu-ray discs or from files and folders on your computer. You can also convert all supported image formats to *.mdf/*.mds, *.mdx, *.iso. You can create disc images in two ways:

              -

              -
                -
              • Use the “Create a Data Image” wizard to make a custom image from files and folders.
              • -
              • Use the “Create a Disc Image” wizard to make an image of a physical disc.
              • -
              -

              You can also make compressed disc images or split one image to several files. You can protect disc images with password to prevent unauthorized access.

              -

              Create and mount VHD images

              -

              DAEMON Tools Ultra 4 enables you to create and mount Virtual Hard Disk (VHD) images with dynamic or fixed size. VHD images are files that simulate a hard disk drive and can be used to back up any of your data or to store operating system installation files. You can create and mount VHD images in two ways:

              -
                -
              • Use the “Create a VHD” wizard to make a new VHD image from scratch.
              • -
              • Use the “Add a VHD” option to mount an existing VHD image.
              • -
              -

              You can have easy access to your data stored in VHD file and choose now the mounting option – HDD or removable device.

              -

              Create bootable USB devices

              -

              DAEMON Tools Ultra 4 allows you to write bootable images to USB devices in a few clicks. You can store operating system installer on fast, reusable, durable and handy device and setup OS on notebooks without drives easily and quickly. You can create bootable USB devices in two ways:

              -
                -
              • Use the “Create a Bootable USB” wizard to make a new bootable USB device from an image file.
              • -
              • Use the “Burn an Image” option to write an existing image file to a USB device.
              • -
              -

              You can also erase a USB device if you want to reuse it for another purpose.

              -

              Create and mount RAM disks

              -

              DAEMON Tools Ultra 4 allows you to create and mount virtual RAM disks that use a block of memory. RAM disks are temporary storage devices that are faster than hard disk drives and can be used to keep your temporary files in the fastest storage to get the highest performance. You can also forget about hard disk fragmentation caused by undeleted temporary files and synchronize RAM disk with VHD to use it after the reboot. You can create and mount RAM disks in two ways:

              -
                -
              • Use the “Create a RAM Disk” wizard to make a new RAM disk with a custom size.
              • -
              • Use the “Add a RAM Disk” option to mount an existing RAM disk.
              • -
              -

              Benefits of DAEMON Tools Ultra 4.0.1.0425 Full Crack

              -

              DAEMON Tools Ultra 4.0.1.0425 Full Crack is not only a user-friendly application that brings together some of the most popular functions of DAEMON Tools Pro, but also offers some unique features that make it stand out from other imaging software. Some of the benefits of DAEMON Tools Ultra 4 are:

              -
                -
              • It has a brand-new design inspired by Windows 10, which makes it easy to use and navigate.
              • -
              • It has a GameSpace feature that lets you get more information relevant to discs in your Image Collection, such as game news, reviews, videos, screenshots, recommendations, ratings, etc.
              • -
              • It has an iSCSI Initiator feature that lets you work with iSCSI targets created with DAEMON Tools Net Data Server or third party iSCSI servers and use DT virtual devices to mount iSCSI targets as disc images.
              • -
              • It has an advanced imaging feature that lets you create bootable USB devices easily, create or edit disc images simple with new widgets, burn created image files to media discs, burn disc images with RMPS data, master bootable discs or images, manage your Image Collection, etc.
              • -
              • It has a media devices virtualization feature that lets you use “Quick Mount” option to mount and use up to 32 disc images right away, set up to 32 SCSI and 4 IDE virtual devices in advanced mode, change the device parameters if necessary, customize image parameters for future mounting in Image Catalog, etc.
              • -
              -

              Conclusion

              -

              In this article, we have shown you how to download, install, and use DAEMON Tools Ultra 4.0.1.0425 Full Crack, as well as some of its main functions and benefits. DAEMON Tools Ultra 4 is a powerful and versatile imaging software that can handle various types of disc images, create virtual drives, mount VHD images, create bootable USB devices, and more. It is also user-friendly and has a modern design that makes it easy to use and navigate. If you are looking for a reliable and advanced imaging software that can meet your needs, then you should give DAEMON Tools Ultra 4 a try.

              -

              How to update DAEMON Tools Ultra 4?

              -

              If you want to update DAEMON Tools Ultra 4 to the latest version, you can do it easily and automatically by following these steps:

              -
                -
              1. Open DAEMON Tools Ultra 4 and click on the Help menu.
              2. -
              3. Select Check for Updates and wait for the program to scan for available updates.
              4. -
              5. If there are any updates, click on Download and Install and follow the instructions to complete the update process.
              6. -
              7. Restart DAEMON Tools Ultra 4 and enjoy its new features and improvements.
              8. -
              -

              You can also check for updates manually by visiting the official website of DAEMON Tools and downloading the latest version of DAEMON Tools Ultra 4.

              -

              How to contact DAEMON Tools support?

              -

              If you have any questions, problems or suggestions regarding DAEMON Tools Ultra 4, you can contact DAEMON Tools support team by using one of these methods:

              -
                -
              • Visit the official website of DAEMON Tools and click on Support.
              • -
              • Fill out the online form with your name, email, subject and message and click on Send.
              • -
              • Wait for a reply from DAEMON Tools support team within 24 hours.
              • -
              -

              You can also visit the official forum of DAEMON Tools and post your question or issue there. You can also browse through the existing topics and find answers or solutions from other users or moderators.

              -

              Final Words

              -

              We hope that this article has helped you to learn more about DAEMON Tools Ultra 4.0.1.0425 Full Crack, how to download, install and use it, as well as some of its main functions and benefits. DAEMON Tools Ultra 4 is a powerful and versatile imaging software that can handle various types of disc images, create virtual drives, mount VHD images, create bootable USB devices, and more. It is also user-friendly and has a modern design that makes it easy to use and navigate. If you are looking for a reliable and advanced imaging software that can meet your needs, then you should give DAEMON Tools Ultra 4 a try.

              -


              A Comparison of DAEMON Tools Ultra 4 with Other Versions of DAEMON Tools

              -

              DAEMON Tools Ultra 4 is not the only version of DAEMON Tools software that you can use to work with disc images and virtual drives. There are also other versions of DAEMON Tools, such as DAEMON Tools Lite, DAEMON Tools Pro and DAEMON Tools Net, that have different features and functions. Here is a brief comparison of DAEMON Tools Ultra 4 with other versions of DAEMON Tools:

| Version | Features | Price |
| --- | --- | --- |
| DAEMON Tools Lite | This is the basic and free version of DAEMON Tools that allows you to create and mount up to 4 disc images and virtual drives. It supports ISO, MDS/MDF, MDX, B5T/B6T, BWT, CCD, CDI, BIN/CUE, APE/CUE, FLAC/CUE, NRG and ISZ image formats. It also has a simple interface and a gadget for Windows Desktop. | Free |
| DAEMON Tools Pro | This is the advanced and professional version of DAEMON Tools that allows you to create and mount up to 32 disc images and virtual drives. It supports all the image formats that DAEMON Tools Lite supports, plus VHD, VMDK and TrueCrypt images. It also has a full-fledged interface and a gadget for Windows Desktop. It also has some additional features, such as creating bootable USB devices, editing disc images, burning disc images to media discs, emulating up to 16 DT and 16 SCSI devices in advanced mode, etc. | $39.99 |
| DAEMON Tools Net | This is the network version of DAEMON Tools that allows you to create and mount disc images and virtual drives on multiple computers within a local network. It supports all the image formats that DAEMON Tools Pro supports. It also has a web interface and a gadget for Windows Desktop. It also has some additional features, such as creating iSCSI targets from disc images or physical devices, managing iSCSI targets from any computer within the network, etc. | $249.99 |
| DAEMON Tools Ultra 4 | This is the ultimate and most powerful version of DAEMON Tools that allows you to create and mount unlimited disc images and virtual drives. It supports all the image formats that DAEMON Tools Net supports, plus RAM disks. It also has a brand-new design inspired by Windows 10 and a gadget for Windows Desktop. It also has some unique features, such as creating and mounting VHD images, creating RAM disks, working with iSCSI targets created with DAEMON Tools Net Data Server or third party iSCSI servers, using the GameSpace feature to get more information relevant to discs in your Image Collection, etc. | $59.99 |
              -

              As you can see, each version of DAEMON Tools has its own advantages and disadvantages. You can choose the version that suits your needs and budget best.

              -

              Conclusion

              -

              In this article, we have shown you how to download, install and use DAEMON Tools Ultra 4.0.1.0425 Full Crack, as well as some of its main functions and benefits. We have also compared DAEMON Tools Ultra 4 with other versions of DAEMON Tools, such as DAEMON Tools Lite, DAEMON Tools Pro and DAEMON Tools Net, and answered some of the frequently asked questions about DAEMON Tools Ultra 4. DAEMON Tools Ultra 4 is a powerful and versatile imaging software that can handle various types of disc images, create virtual drives, mount VHD images, create bootable USB devices, and more. It is also user-friendly and has a modern design that makes it easy to use and navigate. If you are looking for a reliable and advanced imaging software that can meet your needs, then you should give DAEMON Tools Ultra 4 a try.

              -
              -
              \ No newline at end of file diff --git a/spaces/rorallitri/biomedical-language-models/logs/Download Manolete Pasodoble Partitura Pdf and Enjoy the Rhythm of the Bullfight.md b/spaces/rorallitri/biomedical-language-models/logs/Download Manolete Pasodoble Partitura Pdf and Enjoy the Rhythm of the Bullfight.md deleted file mode 100644 index 202718b9d7deaa3466b6ed391fed3f1c25717836..0000000000000000000000000000000000000000 --- a/spaces/rorallitri/biomedical-language-models/logs/Download Manolete Pasodoble Partitura Pdf and Enjoy the Rhythm of the Bullfight.md +++ /dev/null @@ -1,23 +0,0 @@ - -

In keeping with the folkloric stage shows of the period, Antoñita Moreno premiered Colores de España on 6 August 1947 at the Teatro de La Latina, written by Antonio Paso (hijo) and José Ruiz Azagra. On 28 August of that year, Manuel Rodríguez "Manolete" died in the bullring of Linares. Jacinto Guerrero and the Marqués de Luca de Tena, owner and director of ABC, composed a pasodoble and gave it to Antoñita to introduce. She premiered it at the afternoon performance of 6 September, with Guerrero accompanying her at the piano. That same day the record was already on sale, which Antoñita had cut a few days earlier together with the number "Moreno tiene que ser" from the revue La blanca doble. [Emilio García Carretero: Antoñita Moreno. La voz que nunca muere, Rayego, Madrid, 2011]

              -

              Manolete Pasodoble Partitura Pdf


              DOWNLOADhttps://tinurll.com/2uzooH



              -

              "En La Latina, la magnífica intérprete de la canción andaluza Antoñita Moreno renovó, con motivo de la función homenaje que le fue ofrecida sus triunfos sobre la escena. En Colores de España la sugestiva artista hizo gala de su preciosa voz y gran estilo, cantando insuperablemente el pasodoble La muerte de Manolete, dirigido por su autor, el ilustre maestro Guerrero, número que fue repetido entre una clamorosa ovación..." [Informaciones y noticias teatrales y cinematográficas, en ABC, 21 de septiembre de 1947, página 26]

              -

    Pasodoble (Spanish: "double step") is a fast-paced Spanish military march used by infantry troops. Its speed allowed troops to take 120 steps per minute (double the pace of a regular unit, hence its name). This military march later gave rise to a modern Spanish dance, a musical genre including both voice and instruments, and a genre of instrumental music often played during bullfights. Both the dance and the non-martial compositions are also called pasodoble.
    

              -

    All pasodobles have a binary rhythm. The musical structure consists of an introduction based on the dominant chord of the piece, followed by a first fragment based on the main tone and a second part, called "the trío", based on the subdominant and resolving once again onto the dominant chord. Each change is preceded by a brief transition. The last segment of the pasodoble is usually a return of "the trío", played more strongly.[1] The different types of pasodoble (popular, taurino, militar) can vary in rhythm, with the taurine pasodobles being the slowest and the popular ones being faster and often incorporating voice. Pasodoble as we know it started in Spain but is now played in a wide variety of Hispanic nations. Each region has developed its own subgenre and personal style of pasodoble, adjusting some formal aspects of the structure to fit its local musical tradition.[2] In modern Spain, the most prolific composition of new pasodobles takes place on the Levantine coast, associated with the festivals of Moors and Christians.
    
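    As a rough illustration only (not taken from any source score), the short Python sketch below lays out the sections described above as a small data structure, assuming a hypothetical piece in C major; the chord names are illustrative assumptions, not a transcription of a real pasodoble.

    ```python
    # Illustrative outline of a pasodoble's sections (hypothetical piece in C major).
    # Each entry pairs a section with the harmonic basis described in the text above.
    PASODOBLE_FORM = [
        {"section": "introduction", "harmonic_basis": "dominant", "example_chord": "G7"},
        {"section": "first fragment", "harmonic_basis": "tonic (main tone)", "example_chord": "C"},
        {"section": "trio", "harmonic_basis": "subdominant", "example_chord": "F"},
        {"section": "final trio (played strongly)", "harmonic_basis": "subdominant", "example_chord": "F"},
    ]

    def describe(form):
        """Print the sections in order, one per line."""
        for part in form:
            print(f'{part["section"]:<28} -> {part["harmonic_basis"]} ({part["example_chord"]})')

    if __name__ == "__main__":
        describe(PASODOBLE_FORM)
    ```
    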

              -

              -

    The facts known from physical historical evidence are that it was being written as early as the 18th century, since Spain has pasodoble scores dating back to 1780; that it was incorporated into comedies and adopted as a regulatory step for the Spanish infantry; and that the music was not introduced into bullfights until the 19th century.
    

              -

    One hypothesis suggests, based on the etymology of the name, that it comes from the French "pas-redouble", a fast march of the French infantry in the late 18th century. The pasodoble is claimed to have both Spanish and French characteristics: the modern steps often contain French terms, but the dance resembles the nature of the bullfight. It is said to have emerged from southern French culture during the 1930s. Supporters of this hypothesis, mostly French musicologists, suggested that the pasodoble was a way for the French to portray the techniques used in Spanish bullfights. However, this hypothesis fails to explain the existence of Spanish scores dating from 1780, the fact that the Spanish infantry already marched at double speed before the French army did, and the fact that French musicologists usually refer to bullfight-related movements or themes, a peculiarity that makes little sense since in Spain the pasodoble only became associated with bullfighting much later.
    

              -

              Famous bullfighters have been honored with pasodoble tunes named for them. Other tunes have been inspired by patriotic motifs or local characters. The pasodoble is well-known and used today for dance competitions.

              -

    During the early 20th century, the pasodoble became part of the repertoire of Italian American musicians in San Francisco playing in the ballo liscio style.[3] Four pasodobles, played by a Mexican American wedding band on mandolin, guitar, and violin, were collected by Sidney Robertson Cowell for the WPA California Folk Music Project in 1939.[4]
    

              -

              Also called "military pasodoble", it was created as, or keeps its role as, an infantry march. It is usually fast and lacks lyrics.[5] Famous examples are "Soldadito español", "El Abanico", "Los nardos", "Las Corsarias" or " Los Voluntarios"

              -

    These are often played during bullfights, or composed with that intense atmosphere in mind. They are slower and more dramatic than martial pasodobles, and also lack lyrics. This type of pasodoble is based on the music played at bullfights during the bullfighters' entrance (paseo) or during the passes (faena) just before the kill, and it is also composed to honor outstanding bullfighters.[6] Some of the most famous are Suspiros de España, España cañí, Agüero, La Gracia de Dios, El Gato Montés, Viva el pasodoble, Tercio de Quites, Pan y toros, Cielo Andaluz, La Morena de mi Copla, Francisco Alegre, Amparito Roca, El Beso, Plaza de las Ventas.
    

              -

    These are pasodobles that require an entire band and are almost exclusively written for popular parades and village celebrations. They often take colorful local characters and light-hearted subjects as inspiration. These pasodobles are very much alive in Spain; today, the largest center for the creation of new pasodobles is the southeast of the country, mainly the Valencian Community, where they are associated with the popular Moors and Christians festivals. The traditional ones can be heard at Spanish popular celebrations, patron-saint verbenas, and weddings.[8] Well-known examples are "Paquito el Chocolatero", "Fiesta en Benidorm", "Alegría Agostense" and "Pirata Quiero Ser".
    

              -

    The leader of this dance plays the part of the matador. The follower generally plays the part of the matador's cape, but can also represent the shadow of the matador or, in some figures, the flamenco dancer. The follower never represents the bull, although this is a common misconception. This form of pasodoble is a lively style of dance performed to duple-meter, march-like music, often in a theatrical context. It was mistakenly taken as the original form by English and French musicologists visiting Spain in the 20th century.
    

              -

    Tunas is the name given to brotherhoods of students who play popular music together in the street to earn some extra coins, or beneath the window of one member's beloved to help the lovestruck student get a date with her. Tunas have become one of the main forces keeping the Spanish pasodoble alive. They tend to adapt or repeat simple pieces that are already composed, but they sometimes write their own satirical pieces.[10]
    

              -

    Puerto Rican pasodobles are known for their nostalgic quality. Some of the most famous are: Ecos de Puerto Rico (El Maestro Ladi), Morena (Noro Morales), Cuando pienso en España (Juan Peña Reyes), Reminiscencias (Juan Peña Reyes), El trueno (Juan Peña Reyes), Himno a Humacao (Miguel López), Sol andaluz (Manuel Peña Vázquez).
    

              -

    Pasodoble is not as popular in Colombia as in other countries, but the Colombian pasodoble "Feria de Manizales" is an emblematic piece. It was composed in 1957, with lyrics by Guillermo González Ospina and music by Juan Mari Asins, inspired by the Spanish classic "España Cañi". The pasodoble is tied to the development of a parade and a dance with every "Queen of the city" of Manizales, festivities that last one week.
    

              -

    Many pasodoble songs are variations of España Cañi. The song has breaks or "highlights" at fixed positions (two highlights at syllabus levels, and three highlights plus a longer song at open levels). Highlights emphasize the music and are more powerful than the rest of the piece. Usually, dancers strike a dramatic pose and then hold the position until the end of the highlight. Traditionally, pasodoble routines are choreographed to match these highlights as well as the musical phrases. Accordingly, most ballroom pasodoble tunes are written with similar highlights (those without are simply avoided in competition).
    

              -

    Because of its heavily choreographed tradition, ballroom pasodoble is danced mostly competitively and almost never socially, or without a previously learned routine. That said, in Spain, France, Vietnam, Colombia, Costa Rica and some parts of Germany, it is danced socially as a led (unchoreographed) dance. In Venezuela, pasodoble is almost a must-have dance at weddings and big parties. It became especially famous thanks to the hit song "Guitarra Española" by Los Melódicos.
    

              -

    In competitive dance, modern pasodoble is combined with four other dances (samba, cha-cha-cha, rumba and jive) under the banner of International Latin. According to the IDSF classification, a modern pasodoble routine consists of two danced parts and one break in between for dancers of class D, and of three parts and two breaks in between for dancers of classes C, B and A.[12] Dancers below D class usually perform only the other four official dances of the Latin American program.
    

              aaccfb2cb3
              -
              -
              \ No newline at end of file diff --git a/spaces/rorallitri/biomedical-language-models/logs/HD Online Player (Varranger 2 New Version).md b/spaces/rorallitri/biomedical-language-models/logs/HD Online Player (Varranger 2 New Version).md deleted file mode 100644 index 0828c060c9977c44f5b1424dab203599c09c5762..0000000000000000000000000000000000000000 --- a/spaces/rorallitri/biomedical-language-models/logs/HD Online Player (Varranger 2 New Version).md +++ /dev/null @@ -1,97 +0,0 @@ -
              -

              HD Online Player (Varranger 2 New Version): A Software that Lets You Play and Arrange Music Like a Pro

              - -

    Do you love music and want to create your own musical projects? Do you want to play and arrange MIDI and MP3 files with ease and flexibility? Do you want to control external synthesizers and sound modules with your keyboard or controller? If you answered yes to any of these questions, then you should check out HD Online Player (Varranger 2 New Version), a program that lets you play and arrange music like a pro.
    

              -

              HD Online Player (Varranger 2 New Version)


              DOWNLOAD ->>> https://tinurll.com/2uzmmI



              - -

              What is HD Online Player (Varranger 2 New Version)?

              - -

    HD Online Player (Varranger 2 New Version) is a program that can play and arrange MIDI and MP3 files in various styles and genres of music, such as pop, rock, jazz, country, Latin, and more. It can also control external synthesizers and sound modules via MIDI, giving you access to a wide range of sounds and instruments. You can use it to perform live, record your own songs, or remix existing tracks.
    

              - -

              How does HD Online Player (Varranger 2 New Version) work?

              - -

              HD Online Player (Varranger 2 New Version) works by using style files that contain the patterns and arrangements for different types of music. You can load a style file from the style library or create your own style using the style editor. Then, you can select a tempo, a key, and a variation for the style. Next, you can play some chords on your keyboard or controller and listen to how HD Online Player (Varranger 2 New Version) arranges them automatically. You can also add some melody or solo tracks using the right hand section of your keyboard or controller. Finally, you can mix and edit your tracks using the mixer and the MIDI editor.
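    vArranger is a graphical application, so the snippet below is not its API and not vArranger code; it is only a minimal sketch of the kind of MIDI messages such an arranger sends to an external synthesizer or sound module, written with the third-party Python library mido. The port name, channel, program number and chord are assumptions chosen for illustration.

    ```python
    # Minimal sketch (not vArranger code): send a program change and a C-major
    # chord to an external MIDI sound module, using the "mido" library.
    import time
    import mido

    # Assumed port name -- replace with one of mido.get_output_names() on your system.
    PORT_NAME = "External Synth MIDI 1"

    with mido.open_output(PORT_NAME) as port:
        # Select an instrument on the module (program 0 is commonly a piano patch).
        port.send(mido.Message('program_change', channel=0, program=0))

        # Play a C-major chord (C4, E4, G4) for one second, roughly what an arranger
        # does when it detects the chord you play on the left-hand section.
        chord = [60, 64, 67]
        for note in chord:
            port.send(mido.Message('note_on', channel=0, note=note, velocity=90))
        time.sleep(1.0)
        for note in chord:
            port.send(mido.Message('note_off', channel=0, note=note, velocity=0))
    ```

    In practice an arranger sends many such messages per beat, one stream per style track (drums, bass, chords, melody), which is why a dedicated tool is far more convenient than hand-written scripts.
    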

              - -

              What are the benefits of using HD Online Player (Varranger 2 New Version)?

              - -

              Using HD Online Player (Varranger 2 New Version) has many benefits for music enthusiasts. Some of the benefits are:

              - -
                -
              • It is compatible with Windows, MAC, and Linux operating systems.
              • -
              • It is affordable and offers great value for money.
              • -
              • It is updated regularly with new features and improvements.
              • -
              • It has a simple and intuitive interface that lets you access all the functions easily.
              • -
              • It has a large library of sounds and instruments that you can use to create realistic and expressive arrangements.
              • -
              • It has a chord sequencer that allows you to create complex chord progressions and harmonies.
              • -
              • It has a MP3 player that lets you play along with your favorite songs or use them as backing tracks.
              • -
              • It has a recorder that lets you record your own songs or export them as WAV or MP3 files.
              • -
              • It has a live mode that lets you perform live with your keyboard or controller.
              • -
              - -

              Conclusion

              - -

    If you are looking for software that can play and arrange music in a professional and creative way, you should try HD Online Player (Varranger 2 New Version). It is a powerful and versatile program that can help you create impressive musical projects with ease. Whether you want to perform live, record your own songs, or remix existing tracks, it can do it all. Download HD Online Player (Varranger 2 New Version) today and discover the joy of playing and arranging music!
    

              -

              What are the reviews of HD Online Player (Varranger 2 New Version)?

              - -

              HD Online Player (Varranger 2 New Version) has received many positive reviews from users who have tried it. Here are some of the reviews from the web:

              - -
              -

              "I have been using vArranger 2 for a few months now and I am very impressed with it. It is very easy to use and has a lot of features that make it a great software for playing and arranging music. I especially like the chord sequencer and the MIDI editor that allow me to create my own styles and edit them to my liking. I also like the fact that it can control external synthesizers and sound modules via MIDI, which gives me more options for sounds and instruments. I highly recommend vArranger 2 to anyone who loves music and wants to create their own musical projects." - User review from Audiofanzine

              -

              -
              - -
              -

              "vArranger 2 is a software that I have been looking for a long time. It is a software that can play and arrange MIDI and MP3 files in various styles and genres of music, such as pop, rock, jazz, country, Latin, and more. It can also control external synthesizers and sound modules via MIDI, which is very useful for me as I have a lot of them. I use vArranger 2 to perform live, record my own songs, or remix existing tracks. It is very fun and easy to use, and it has a simple and intuitive interface that lets me access all the functions easily. It is also updated regularly with new features and improvements, which makes it even better. vArranger 2 is a software that I can't live without." - User review from Tistory

              -
              - -
              -

              "vArranger 2 is a software that lets me play and arrange music like a pro. It is a software that can play and arrange MIDI and MP3 files in various styles and genres of music, such as pop, rock, jazz, country, Latin, and more. It can also control external synthesizers and sound modules via MIDI, which gives me access to a wide range of sounds and instruments. I use vArranger 2 to perform live, record my own songs, or remix existing tracks. It has a lot of features that make it a great software for playing and arranging music, such as the chord sequencer, the mixer, the MIDI editor, the MP3 player, the recorder, and the live mode. It is also compatible with Windows, MAC, and Linux operating systems, which makes it very convenient for me. vArranger 2 is a software that I love." - User review from Aytunga

              -
              -

              How to download HD Online Player (Varranger 2 New Version)?

              - -

    If you want to download HD Online Player (Varranger 2 New Version), you can do so from the official website of vArranger. You can choose between a free trial version and a full version that requires a license key. The free trial lets you use the software for 30 days with some limitations, such as not being able to save your projects or export them as WAV or MP3 files. The full version gives you access to all the features and functions without any restrictions, along with updates and support from the developer and the community of users.
    

              - -

              To download HD Online Player (Varranger 2 New Version), you need to follow these steps:

              - -
                -
              1. Go to the official website of vArranger and click on the Download button.
              2. -
              3. Choose between the free trial version or the full version and click on the corresponding link.
              4. -
              5. Fill in your name and email address and click on the Submit button.
              6. -
              7. Check your email inbox for a confirmation message with a download link.
              8. -
              9. Click on the download link and save the file on your computer.
              10. -
              11. Run the file and follow the instructions to install the software on your computer.
              12. -
              13. Launch the software and enter your license key if you have purchased the full version.
              14. -
              15. Enjoy using HD Online Player (Varranger 2 New Version) for playing and arranging music!
              16. -
              -

              How to learn HD Online Player (Varranger 2 New Version) with tutorials?

              - -

              If you want to learn how to use HD Online Player (Varranger 2 New Version) for playing and arranging music, you can find some helpful tutorials on the internet. Here are some of the tutorials that you can watch or read:

              - -
                -
              • How to use vArranger 2 - This is a video tutorial by vArranger that shows you how to use the software for playing and arranging MIDI and MP3 files. It covers the basics of the interface, the style library, the chord sequencer, the mixer, the MIDI editor, the MP3 player, the recorder, and the live mode. You can watch this tutorial on YouTube or on the official website of vArranger.
              • -
              • How to use vArranger 2 with external synthesizers and sound modules - This is another video tutorial by vArranger that shows you how to use the software to control external synthesizers and sound modules via MIDI. It covers how to select your MIDI devices, how to assign them to different tracks, how to change sounds and instruments, and how to adjust volume and effects. You can watch this tutorial on YouTube or on the official website of vArranger.
              • -
              • How to use HD Online Player (Varranger 2 New Version) for Playing and Arranging Music - This is a written tutorial by Aytunga that shows you how to use the software for playing and arranging music in various styles and genres. It covers how to load a style file, how to select a tempo, a key, and a variation, how to play chords and melodies, how to mix and edit your tracks, how to save or export your project, and how to perform live. You can read this tutorial on Aytunga's website.
              • -
              - -

              How to get support for HD Online Player (Varranger 2 New Version)?

              - -

              If you need support for HD Online Player (Varranger 2 New Version), you can contact the developer or the community of users. Here are some ways to get support:

              - -
                -
              • Contact the developer - You can contact the developer of vArranger by email or by phone. You can find the contact information on the official website of vArranger. The developer is very responsive and helpful and will try to answer your questions or solve your problems as soon as possible.
              • -
              • Join the forum - You can join the forum of vArranger users and share your experiences, tips, suggestions, feedback, or questions with other users. You can also find useful information, resources, downloads, updates, and news about vArranger on the forum. You can access the forum from the official website of vArranger.
              • -
              • Follow on social media - You can follow vArranger on social media platforms such as Facebook, Twitter, Instagram, or YouTube. You can get updates, news, videos, photos, or reviews about vArranger on these platforms. You can also interact with other users or with the developer on these platforms.
              • -
              -

              Conclusion

              - -

    HD Online Player (Varranger 2 New Version) lets you play and arrange music like a pro. It plays and arranges MIDI and MP3 files in many styles and genres, such as pop, rock, jazz, country, and Latin, and it can control external synthesizers and sound modules via MIDI, giving you access to a wide range of sounds and instruments. You can use it to perform live, record your own songs, or remix existing tracks. It is compatible with Windows, MAC, and Linux, affordable, regularly updated, and easy to navigate thanks to its simple, intuitive interface. Its main tools include a large sound and instrument library for realistic, expressive arrangements; a chord sequencer for complex progressions and harmonies; a mixer for volume, pan, reverb, and other effects; a MIDI editor for notes, velocity, pitch bend, and other parameters; an MP3 player for playing along with your favorite songs or using them as backing tracks; a recorder for exporting your songs as WAV or MP3 files; and a live mode for performing with your keyboard or controller.
    

              - -

    To get it, download either the free 30-day trial or the full version (which requires a license key) from the official vArranger website. The trial cannot save projects or export them as WAV or MP3 files; the full version removes these restrictions and comes with updates and support from the developer and the community of users.
    

              - -

              If you want to learn how to use HD Online Player (Varranger 2 New Version) for playing and arranging music, you can find some helpful tutorials on the internet. You can watch or read tutorials that show you how to use the software for playing and arranging MIDI and MP3 files, how to use the software to control external synthesizers and sound modules via MIDI, and how to use the software for playing and arranging music in various styles and genres.

              - -

              If you need support for HD Online Player (Varranger 2 New Version), you can contact the developer or the community of users. You can contact the developer by email or by phone. You can join the forum of vArranger users and share your experiences, tips, suggestions, feedback, or questions with other users. You can also follow vArranger on social media platforms such as Facebook, Twitter, Instagram, or YouTube.

              - -

    HD Online Player (Varranger 2 New Version) is a program that can help you unleash your musical creativity and potential, playing and arranging music in a professional and creative way. Whether you want to perform live, record your own songs, or remix existing tracks, it can do it all. Download HD Online Player (Varranger 2 New Version) today and discover the joy of playing and arranging music!
    

              3cee63e6c2
              -
              -
              \ No newline at end of file diff --git a/spaces/sczhou/CodeFormer/CodeFormer/basicsr/metrics/psnr_ssim.py b/spaces/sczhou/CodeFormer/CodeFormer/basicsr/metrics/psnr_ssim.py deleted file mode 100644 index bbd950699c2495880236883861d9e199f900eae8..0000000000000000000000000000000000000000 --- a/spaces/sczhou/CodeFormer/CodeFormer/basicsr/metrics/psnr_ssim.py +++ /dev/null @@ -1,128 +0,0 @@ -import cv2 -import numpy as np - -from basicsr.metrics.metric_util import reorder_image, to_y_channel -from basicsr.utils.registry import METRIC_REGISTRY - - -@METRIC_REGISTRY.register() -def calculate_psnr(img1, img2, crop_border, input_order='HWC', test_y_channel=False): - """Calculate PSNR (Peak Signal-to-Noise Ratio). - - Ref: https://en.wikipedia.org/wiki/Peak_signal-to-noise_ratio - - Args: - img1 (ndarray): Images with range [0, 255]. - img2 (ndarray): Images with range [0, 255]. - crop_border (int): Cropped pixels in each edge of an image. These - pixels are not involved in the PSNR calculation. - input_order (str): Whether the input order is 'HWC' or 'CHW'. - Default: 'HWC'. - test_y_channel (bool): Test on Y channel of YCbCr. Default: False. - - Returns: - float: psnr result. - """ - - assert img1.shape == img2.shape, (f'Image shapes are differnet: {img1.shape}, {img2.shape}.') - if input_order not in ['HWC', 'CHW']: - raise ValueError(f'Wrong input_order {input_order}. Supported input_orders are ' '"HWC" and "CHW"') - img1 = reorder_image(img1, input_order=input_order) - img2 = reorder_image(img2, input_order=input_order) - img1 = img1.astype(np.float64) - img2 = img2.astype(np.float64) - - if crop_border != 0: - img1 = img1[crop_border:-crop_border, crop_border:-crop_border, ...] - img2 = img2[crop_border:-crop_border, crop_border:-crop_border, ...] - - if test_y_channel: - img1 = to_y_channel(img1) - img2 = to_y_channel(img2) - - mse = np.mean((img1 - img2)**2) - if mse == 0: - return float('inf') - return 20. * np.log10(255. / np.sqrt(mse)) - - -def _ssim(img1, img2): - """Calculate SSIM (structural similarity) for one channel images. - - It is called by func:`calculate_ssim`. - - Args: - img1 (ndarray): Images with range [0, 255] with order 'HWC'. - img2 (ndarray): Images with range [0, 255] with order 'HWC'. - - Returns: - float: ssim result. - """ - - C1 = (0.01 * 255)**2 - C2 = (0.03 * 255)**2 - - img1 = img1.astype(np.float64) - img2 = img2.astype(np.float64) - kernel = cv2.getGaussianKernel(11, 1.5) - window = np.outer(kernel, kernel.transpose()) - - mu1 = cv2.filter2D(img1, -1, window)[5:-5, 5:-5] - mu2 = cv2.filter2D(img2, -1, window)[5:-5, 5:-5] - mu1_sq = mu1**2 - mu2_sq = mu2**2 - mu1_mu2 = mu1 * mu2 - sigma1_sq = cv2.filter2D(img1**2, -1, window)[5:-5, 5:-5] - mu1_sq - sigma2_sq = cv2.filter2D(img2**2, -1, window)[5:-5, 5:-5] - mu2_sq - sigma12 = cv2.filter2D(img1 * img2, -1, window)[5:-5, 5:-5] - mu1_mu2 - - ssim_map = ((2 * mu1_mu2 + C1) * (2 * sigma12 + C2)) / ((mu1_sq + mu2_sq + C1) * (sigma1_sq + sigma2_sq + C2)) - return ssim_map.mean() - - -@METRIC_REGISTRY.register() -def calculate_ssim(img1, img2, crop_border, input_order='HWC', test_y_channel=False): - """Calculate SSIM (structural similarity). - - Ref: - Image quality assessment: From error visibility to structural similarity - - The results are the same as that of the official released MATLAB code in - https://ece.uwaterloo.ca/~z70wang/research/ssim/. - - For three-channel images, SSIM is calculated for each channel and then - averaged. - - Args: - img1 (ndarray): Images with range [0, 255]. 
- img2 (ndarray): Images with range [0, 255]. - crop_border (int): Cropped pixels in each edge of an image. These - pixels are not involved in the SSIM calculation. - input_order (str): Whether the input order is 'HWC' or 'CHW'. - Default: 'HWC'. - test_y_channel (bool): Test on Y channel of YCbCr. Default: False. - - Returns: - float: ssim result. - """ - - assert img1.shape == img2.shape, (f'Image shapes are differnet: {img1.shape}, {img2.shape}.') - if input_order not in ['HWC', 'CHW']: - raise ValueError(f'Wrong input_order {input_order}. Supported input_orders are ' '"HWC" and "CHW"') - img1 = reorder_image(img1, input_order=input_order) - img2 = reorder_image(img2, input_order=input_order) - img1 = img1.astype(np.float64) - img2 = img2.astype(np.float64) - - if crop_border != 0: - img1 = img1[crop_border:-crop_border, crop_border:-crop_border, ...] - img2 = img2[crop_border:-crop_border, crop_border:-crop_border, ...] - - if test_y_channel: - img1 = to_y_channel(img1) - img2 = to_y_channel(img2) - - ssims = [] - for i in range(img1.shape[2]): - ssims.append(_ssim(img1[..., i], img2[..., i])) - return np.array(ssims).mean() diff --git a/spaces/segments-tobias/conex/espnet/mt/mt_utils.py b/spaces/segments-tobias/conex/espnet/mt/mt_utils.py deleted file mode 100644 index 50aa792ba3846c71fa185e7d454a1985e7702ab7..0000000000000000000000000000000000000000 --- a/spaces/segments-tobias/conex/espnet/mt/mt_utils.py +++ /dev/null @@ -1,83 +0,0 @@ -#!/usr/bin/env python3 -# encoding: utf-8 - -# Copyright 2019 Kyoto University (Hirofumi Inaguma) -# Apache 2.0 (http://www.apache.org/licenses/LICENSE-2.0) - -"""Utility funcitons for the text translation task.""" - -import logging - - -# * ------------------ recognition related ------------------ * -def parse_hypothesis(hyp, char_list): - """Parse hypothesis. - - :param list hyp: recognition hypothesis - :param list char_list: list of characters - :return: recognition text string - :return: recognition token string - :return: recognition tokenid string - """ - # remove sos and get results - tokenid_as_list = list(map(int, hyp["yseq"][1:])) - token_as_list = [char_list[idx] for idx in tokenid_as_list] - score = float(hyp["score"]) - - # convert to string - tokenid = " ".join([str(idx) for idx in tokenid_as_list]) - token = " ".join(token_as_list) - text = "".join(token_as_list).replace("", " ") - - return text, token, tokenid, score - - -def add_results_to_json(js, nbest_hyps, char_list): - """Add N-best results to json. 
- - :param dict js: groundtruth utterance dict - :param list nbest_hyps: list of hypothesis - :param list char_list: list of characters - :return: N-best results added utterance dict - """ - # copy old json info - new_js = dict() - if "utt2spk" in js.keys(): - new_js["utt2spk"] = js["utt2spk"] - new_js["output"] = [] - - for n, hyp in enumerate(nbest_hyps, 1): - # parse hypothesis - rec_text, rec_token, rec_tokenid, score = parse_hypothesis(hyp, char_list) - - # copy ground-truth - if len(js["output"]) > 0: - out_dic = dict(js["output"][0].items()) - else: - out_dic = {"name": ""} - - # update name - out_dic["name"] += "[%d]" % n - - # add recognition results - out_dic["rec_text"] = rec_text - out_dic["rec_token"] = rec_token - out_dic["rec_tokenid"] = rec_tokenid - out_dic["score"] = score - - # add source reference - out_dic["text_src"] = js["output"][1]["text"] - out_dic["token_src"] = js["output"][1]["token"] - out_dic["tokenid_src"] = js["output"][1]["tokenid"] - - # add to list of N-best result dicts - new_js["output"].append(out_dic) - - # show 1-best result - if n == 1: - if "text" in out_dic.keys(): - logging.info("groundtruth: %s" % out_dic["text"]) - logging.info("prediction : %s" % out_dic["rec_text"]) - logging.info("source : %s" % out_dic["token_src"]) - - return new_js diff --git a/spaces/segments-tobias/conex/espnet/tts/__init__.py b/spaces/segments-tobias/conex/espnet/tts/__init__.py deleted file mode 100644 index b7f177368e62a5578b8706300e101f831a3972ac..0000000000000000000000000000000000000000 --- a/spaces/segments-tobias/conex/espnet/tts/__init__.py +++ /dev/null @@ -1 +0,0 @@ -"""Initialize sub package.""" diff --git a/spaces/segments/panoptic-segment-anything/GroundingDINO/groundingdino/util/misc.py b/spaces/segments/panoptic-segment-anything/GroundingDINO/groundingdino/util/misc.py deleted file mode 100644 index d64b84ef24bea0c98e76824feb1903f6bfebe7a5..0000000000000000000000000000000000000000 --- a/spaces/segments/panoptic-segment-anything/GroundingDINO/groundingdino/util/misc.py +++ /dev/null @@ -1,717 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved -""" -Misc functions, including distributed helpers. - -Mostly copy-paste from torchvision references. -""" -import colorsys -import datetime -import functools -import io -import json -import os -import pickle -import subprocess -import time -from collections import OrderedDict, defaultdict, deque -from typing import List, Optional - -import numpy as np -import torch -import torch.distributed as dist - -# needed due to empty tensor bug in pytorch and torchvision 0.5 -import torchvision -from torch import Tensor - -__torchvision_need_compat_flag = float(torchvision.__version__.split(".")[1]) < 7 -if __torchvision_need_compat_flag: - from torchvision.ops import _new_empty_tensor - from torchvision.ops.misc import _output_size - - -class SmoothedValue(object): - """Track a series of values and provide access to smoothed values over a - window or the global series average. - """ - - def __init__(self, window_size=20, fmt=None): - if fmt is None: - fmt = "{median:.4f} ({global_avg:.4f})" - self.deque = deque(maxlen=window_size) - self.total = 0.0 - self.count = 0 - self.fmt = fmt - - def update(self, value, n=1): - self.deque.append(value) - self.count += n - self.total += value * n - - def synchronize_between_processes(self): - """ - Warning: does not synchronize the deque! 
- """ - if not is_dist_avail_and_initialized(): - return - t = torch.tensor([self.count, self.total], dtype=torch.float64, device="cuda") - dist.barrier() - dist.all_reduce(t) - t = t.tolist() - self.count = int(t[0]) - self.total = t[1] - - @property - def median(self): - d = torch.tensor(list(self.deque)) - if d.shape[0] == 0: - return 0 - return d.median().item() - - @property - def avg(self): - d = torch.tensor(list(self.deque), dtype=torch.float32) - return d.mean().item() - - @property - def global_avg(self): - if os.environ.get("SHILONG_AMP", None) == "1": - eps = 1e-4 - else: - eps = 1e-6 - return self.total / (self.count + eps) - - @property - def max(self): - return max(self.deque) - - @property - def value(self): - return self.deque[-1] - - def __str__(self): - return self.fmt.format( - median=self.median, - avg=self.avg, - global_avg=self.global_avg, - max=self.max, - value=self.value, - ) - - -@functools.lru_cache() -def _get_global_gloo_group(): - """ - Return a process group based on gloo backend, containing all the ranks - The result is cached. - """ - - if dist.get_backend() == "nccl": - return dist.new_group(backend="gloo") - - return dist.group.WORLD - - -def all_gather_cpu(data): - """ - Run all_gather on arbitrary picklable data (not necessarily tensors) - Args: - data: any picklable object - Returns: - list[data]: list of data gathered from each rank - """ - - world_size = get_world_size() - if world_size == 1: - return [data] - - cpu_group = _get_global_gloo_group() - - buffer = io.BytesIO() - torch.save(data, buffer) - data_view = buffer.getbuffer() - device = "cuda" if cpu_group is None else "cpu" - tensor = torch.ByteTensor(data_view).to(device) - - # obtain Tensor size of each rank - local_size = torch.tensor([tensor.numel()], device=device, dtype=torch.long) - size_list = [torch.tensor([0], device=device, dtype=torch.long) for _ in range(world_size)] - if cpu_group is None: - dist.all_gather(size_list, local_size) - else: - print("gathering on cpu") - dist.all_gather(size_list, local_size, group=cpu_group) - size_list = [int(size.item()) for size in size_list] - max_size = max(size_list) - assert isinstance(local_size.item(), int) - local_size = int(local_size.item()) - - # receiving Tensor from all ranks - # we pad the tensor because torch all_gather does not support - # gathering tensors of different shapes - tensor_list = [] - for _ in size_list: - tensor_list.append(torch.empty((max_size,), dtype=torch.uint8, device=device)) - if local_size != max_size: - padding = torch.empty(size=(max_size - local_size,), dtype=torch.uint8, device=device) - tensor = torch.cat((tensor, padding), dim=0) - if cpu_group is None: - dist.all_gather(tensor_list, tensor) - else: - dist.all_gather(tensor_list, tensor, group=cpu_group) - - data_list = [] - for size, tensor in zip(size_list, tensor_list): - tensor = torch.split(tensor, [size, max_size - size], dim=0)[0] - buffer = io.BytesIO(tensor.cpu().numpy()) - obj = torch.load(buffer) - data_list.append(obj) - - return data_list - - -def all_gather(data): - """ - Run all_gather on arbitrary picklable data (not necessarily tensors) - Args: - data: any picklable object - Returns: - list[data]: list of data gathered from each rank - """ - - if os.getenv("CPU_REDUCE") == "1": - return all_gather_cpu(data) - - world_size = get_world_size() - if world_size == 1: - return [data] - - # serialized to a Tensor - buffer = pickle.dumps(data) - storage = torch.ByteStorage.from_buffer(buffer) - tensor = torch.ByteTensor(storage).to("cuda") - 
- # obtain Tensor size of each rank - local_size = torch.tensor([tensor.numel()], device="cuda") - size_list = [torch.tensor([0], device="cuda") for _ in range(world_size)] - dist.all_gather(size_list, local_size) - size_list = [int(size.item()) for size in size_list] - max_size = max(size_list) - - # receiving Tensor from all ranks - # we pad the tensor because torch all_gather does not support - # gathering tensors of different shapes - tensor_list = [] - for _ in size_list: - tensor_list.append(torch.empty((max_size,), dtype=torch.uint8, device="cuda")) - if local_size != max_size: - padding = torch.empty(size=(max_size - local_size,), dtype=torch.uint8, device="cuda") - tensor = torch.cat((tensor, padding), dim=0) - dist.all_gather(tensor_list, tensor) - - data_list = [] - for size, tensor in zip(size_list, tensor_list): - buffer = tensor.cpu().numpy().tobytes()[:size] - data_list.append(pickle.loads(buffer)) - - return data_list - - -def reduce_dict(input_dict, average=True): - """ - Args: - input_dict (dict): all the values will be reduced - average (bool): whether to do average or sum - Reduce the values in the dictionary from all processes so that all processes - have the averaged results. Returns a dict with the same fields as - input_dict, after reduction. - """ - world_size = get_world_size() - if world_size < 2: - return input_dict - with torch.no_grad(): - names = [] - values = [] - # sort the keys so that they are consistent across processes - for k in sorted(input_dict.keys()): - names.append(k) - values.append(input_dict[k]) - values = torch.stack(values, dim=0) - dist.all_reduce(values) - if average: - values /= world_size - reduced_dict = {k: v for k, v in zip(names, values)} - return reduced_dict - - -class MetricLogger(object): - def __init__(self, delimiter="\t"): - self.meters = defaultdict(SmoothedValue) - self.delimiter = delimiter - - def update(self, **kwargs): - for k, v in kwargs.items(): - if isinstance(v, torch.Tensor): - v = v.item() - assert isinstance(v, (float, int)) - self.meters[k].update(v) - - def __getattr__(self, attr): - if attr in self.meters: - return self.meters[attr] - if attr in self.__dict__: - return self.__dict__[attr] - raise AttributeError("'{}' object has no attribute '{}'".format(type(self).__name__, attr)) - - def __str__(self): - loss_str = [] - for name, meter in self.meters.items(): - # print(name, str(meter)) - # import ipdb;ipdb.set_trace() - if meter.count > 0: - loss_str.append("{}: {}".format(name, str(meter))) - return self.delimiter.join(loss_str) - - def synchronize_between_processes(self): - for meter in self.meters.values(): - meter.synchronize_between_processes() - - def add_meter(self, name, meter): - self.meters[name] = meter - - def log_every(self, iterable, print_freq, header=None, logger=None): - if logger is None: - print_func = print - else: - print_func = logger.info - - i = 0 - if not header: - header = "" - start_time = time.time() - end = time.time() - iter_time = SmoothedValue(fmt="{avg:.4f}") - data_time = SmoothedValue(fmt="{avg:.4f}") - space_fmt = ":" + str(len(str(len(iterable)))) + "d" - if torch.cuda.is_available(): - log_msg = self.delimiter.join( - [ - header, - "[{0" + space_fmt + "}/{1}]", - "eta: {eta}", - "{meters}", - "time: {time}", - "data: {data}", - "max mem: {memory:.0f}", - ] - ) - else: - log_msg = self.delimiter.join( - [ - header, - "[{0" + space_fmt + "}/{1}]", - "eta: {eta}", - "{meters}", - "time: {time}", - "data: {data}", - ] - ) - MB = 1024.0 * 1024.0 - for obj in iterable: - 
data_time.update(time.time() - end) - yield obj - # import ipdb; ipdb.set_trace() - iter_time.update(time.time() - end) - if i % print_freq == 0 or i == len(iterable) - 1: - eta_seconds = iter_time.global_avg * (len(iterable) - i) - eta_string = str(datetime.timedelta(seconds=int(eta_seconds))) - if torch.cuda.is_available(): - print_func( - log_msg.format( - i, - len(iterable), - eta=eta_string, - meters=str(self), - time=str(iter_time), - data=str(data_time), - memory=torch.cuda.max_memory_allocated() / MB, - ) - ) - else: - print_func( - log_msg.format( - i, - len(iterable), - eta=eta_string, - meters=str(self), - time=str(iter_time), - data=str(data_time), - ) - ) - i += 1 - end = time.time() - total_time = time.time() - start_time - total_time_str = str(datetime.timedelta(seconds=int(total_time))) - print_func( - "{} Total time: {} ({:.4f} s / it)".format( - header, total_time_str, total_time / len(iterable) - ) - ) - - -def get_sha(): - cwd = os.path.dirname(os.path.abspath(__file__)) - - def _run(command): - return subprocess.check_output(command, cwd=cwd).decode("ascii").strip() - - sha = "N/A" - diff = "clean" - branch = "N/A" - try: - sha = _run(["git", "rev-parse", "HEAD"]) - subprocess.check_output(["git", "diff"], cwd=cwd) - diff = _run(["git", "diff-index", "HEAD"]) - diff = "has uncommited changes" if diff else "clean" - branch = _run(["git", "rev-parse", "--abbrev-ref", "HEAD"]) - except Exception: - pass - message = f"sha: {sha}, status: {diff}, branch: {branch}" - return message - - -def collate_fn(batch): - # import ipdb; ipdb.set_trace() - batch = list(zip(*batch)) - batch[0] = nested_tensor_from_tensor_list(batch[0]) - return tuple(batch) - - -def _max_by_axis(the_list): - # type: (List[List[int]]) -> List[int] - maxes = the_list[0] - for sublist in the_list[1:]: - for index, item in enumerate(sublist): - maxes[index] = max(maxes[index], item) - return maxes - - -class NestedTensor(object): - def __init__(self, tensors, mask: Optional[Tensor]): - self.tensors = tensors - self.mask = mask - if mask == "auto": - self.mask = torch.zeros_like(tensors).to(tensors.device) - if self.mask.dim() == 3: - self.mask = self.mask.sum(0).to(bool) - elif self.mask.dim() == 4: - self.mask = self.mask.sum(1).to(bool) - else: - raise ValueError( - "tensors dim must be 3 or 4 but {}({})".format( - self.tensors.dim(), self.tensors.shape - ) - ) - - def imgsize(self): - res = [] - for i in range(self.tensors.shape[0]): - mask = self.mask[i] - maxH = (~mask).sum(0).max() - maxW = (~mask).sum(1).max() - res.append(torch.Tensor([maxH, maxW])) - return res - - def to(self, device): - # type: (Device) -> NestedTensor # noqa - cast_tensor = self.tensors.to(device) - mask = self.mask - if mask is not None: - assert mask is not None - cast_mask = mask.to(device) - else: - cast_mask = None - return NestedTensor(cast_tensor, cast_mask) - - def to_img_list_single(self, tensor, mask): - assert tensor.dim() == 3, "dim of tensor should be 3 but {}".format(tensor.dim()) - maxH = (~mask).sum(0).max() - maxW = (~mask).sum(1).max() - img = tensor[:, :maxH, :maxW] - return img - - def to_img_list(self): - """remove the padding and convert to img list - - Returns: - [type]: [description] - """ - if self.tensors.dim() == 3: - return self.to_img_list_single(self.tensors, self.mask) - else: - res = [] - for i in range(self.tensors.shape[0]): - tensor_i = self.tensors[i] - mask_i = self.mask[i] - res.append(self.to_img_list_single(tensor_i, mask_i)) - return res - - @property - def device(self): - return 
self.tensors.device - - def decompose(self): - return self.tensors, self.mask - - def __repr__(self): - return str(self.tensors) - - @property - def shape(self): - return {"tensors.shape": self.tensors.shape, "mask.shape": self.mask.shape} - - -def nested_tensor_from_tensor_list(tensor_list: List[Tensor]): - # TODO make this more general - if tensor_list[0].ndim == 3: - if torchvision._is_tracing(): - # nested_tensor_from_tensor_list() does not export well to ONNX - # call _onnx_nested_tensor_from_tensor_list() instead - return _onnx_nested_tensor_from_tensor_list(tensor_list) - - # TODO make it support different-sized images - max_size = _max_by_axis([list(img.shape) for img in tensor_list]) - # min_size = tuple(min(s) for s in zip(*[img.shape for img in tensor_list])) - batch_shape = [len(tensor_list)] + max_size - b, c, h, w = batch_shape - dtype = tensor_list[0].dtype - device = tensor_list[0].device - tensor = torch.zeros(batch_shape, dtype=dtype, device=device) - mask = torch.ones((b, h, w), dtype=torch.bool, device=device) - for img, pad_img, m in zip(tensor_list, tensor, mask): - pad_img[: img.shape[0], : img.shape[1], : img.shape[2]].copy_(img) - m[: img.shape[1], : img.shape[2]] = False - else: - raise ValueError("not supported") - return NestedTensor(tensor, mask) - - -# _onnx_nested_tensor_from_tensor_list() is an implementation of -# nested_tensor_from_tensor_list() that is supported by ONNX tracing. -@torch.jit.unused -def _onnx_nested_tensor_from_tensor_list(tensor_list: List[Tensor]) -> NestedTensor: - max_size = [] - for i in range(tensor_list[0].dim()): - max_size_i = torch.max( - torch.stack([img.shape[i] for img in tensor_list]).to(torch.float32) - ).to(torch.int64) - max_size.append(max_size_i) - max_size = tuple(max_size) - - # work around for - # pad_img[: img.shape[0], : img.shape[1], : img.shape[2]].copy_(img) - # m[: img.shape[1], :img.shape[2]] = False - # which is not yet supported in onnx - padded_imgs = [] - padded_masks = [] - for img in tensor_list: - padding = [(s1 - s2) for s1, s2 in zip(max_size, tuple(img.shape))] - padded_img = torch.nn.functional.pad(img, (0, padding[2], 0, padding[1], 0, padding[0])) - padded_imgs.append(padded_img) - - m = torch.zeros_like(img[0], dtype=torch.int, device=img.device) - padded_mask = torch.nn.functional.pad(m, (0, padding[2], 0, padding[1]), "constant", 1) - padded_masks.append(padded_mask.to(torch.bool)) - - tensor = torch.stack(padded_imgs) - mask = torch.stack(padded_masks) - - return NestedTensor(tensor, mask=mask) - - -def setup_for_distributed(is_master): - """ - This function disables printing when not in master process - """ - import builtins as __builtin__ - - builtin_print = __builtin__.print - - def print(*args, **kwargs): - force = kwargs.pop("force", False) - if is_master or force: - builtin_print(*args, **kwargs) - - __builtin__.print = print - - -def is_dist_avail_and_initialized(): - if not dist.is_available(): - return False - if not dist.is_initialized(): - return False - return True - - -def get_world_size(): - if not is_dist_avail_and_initialized(): - return 1 - return dist.get_world_size() - - -def get_rank(): - if not is_dist_avail_and_initialized(): - return 0 - return dist.get_rank() - - -def is_main_process(): - return get_rank() == 0 - - -def save_on_master(*args, **kwargs): - if is_main_process(): - torch.save(*args, **kwargs) - - -def init_distributed_mode(args): - if "WORLD_SIZE" in os.environ and os.environ["WORLD_SIZE"] != "": # 'RANK' in os.environ and - args.rank = 
int(os.environ["RANK"]) - args.world_size = int(os.environ["WORLD_SIZE"]) - args.gpu = args.local_rank = int(os.environ["LOCAL_RANK"]) - - # launch by torch.distributed.launch - # Single node - # python -m torch.distributed.launch --nproc_per_node=8 main.py --world-size 1 --rank 0 ... - # Multi nodes - # python -m torch.distributed.launch --nproc_per_node=8 main.py --world-size 2 --rank 0 --dist-url 'tcp://IP_OF_NODE0:FREEPORT' ... - # python -m torch.distributed.launch --nproc_per_node=8 main.py --world-size 2 --rank 1 --dist-url 'tcp://IP_OF_NODE0:FREEPORT' ... - # args.rank = int(os.environ.get('OMPI_COMM_WORLD_RANK')) - # local_world_size = int(os.environ['GPU_PER_NODE_COUNT']) - # args.world_size = args.world_size * local_world_size - # args.gpu = args.local_rank = int(os.environ['LOCAL_RANK']) - # args.rank = args.rank * local_world_size + args.local_rank - print( - "world size: {}, rank: {}, local rank: {}".format( - args.world_size, args.rank, args.local_rank - ) - ) - print(json.dumps(dict(os.environ), indent=2)) - elif "SLURM_PROCID" in os.environ: - args.rank = int(os.environ["SLURM_PROCID"]) - args.gpu = args.local_rank = int(os.environ["SLURM_LOCALID"]) - args.world_size = int(os.environ["SLURM_NPROCS"]) - - print( - "world size: {}, world rank: {}, local rank: {}, device_count: {}".format( - args.world_size, args.rank, args.local_rank, torch.cuda.device_count() - ) - ) - else: - print("Not using distributed mode") - args.distributed = False - args.world_size = 1 - args.rank = 0 - args.local_rank = 0 - return - - print("world_size:{} rank:{} local_rank:{}".format(args.world_size, args.rank, args.local_rank)) - args.distributed = True - torch.cuda.set_device(args.local_rank) - args.dist_backend = "nccl" - print("| distributed init (rank {}): {}".format(args.rank, args.dist_url), flush=True) - - torch.distributed.init_process_group( - backend=args.dist_backend, - world_size=args.world_size, - rank=args.rank, - init_method=args.dist_url, - ) - - print("Before torch.distributed.barrier()") - torch.distributed.barrier() - print("End torch.distributed.barrier()") - setup_for_distributed(args.rank == 0) - - -@torch.no_grad() -def accuracy(output, target, topk=(1,)): - """Computes the precision@k for the specified values of k""" - if target.numel() == 0: - return [torch.zeros([], device=output.device)] - maxk = max(topk) - batch_size = target.size(0) - - _, pred = output.topk(maxk, 1, True, True) - pred = pred.t() - correct = pred.eq(target.view(1, -1).expand_as(pred)) - - res = [] - for k in topk: - correct_k = correct[:k].view(-1).float().sum(0) - res.append(correct_k.mul_(100.0 / batch_size)) - return res - - -@torch.no_grad() -def accuracy_onehot(pred, gt): - """_summary_ - - Args: - pred (_type_): n, c - gt (_type_): n, c - """ - tp = ((pred - gt).abs().sum(-1) < 1e-4).float().sum() - acc = tp / gt.shape[0] * 100 - return acc - - -def interpolate(input, size=None, scale_factor=None, mode="nearest", align_corners=None): - # type: (Tensor, Optional[List[int]], Optional[float], str, Optional[bool]) -> Tensor - """ - Equivalent to nn.functional.interpolate, but with support for empty batch sizes. - This will eventually be supported natively by PyTorch, and this - class can go away. 
- """ - if __torchvision_need_compat_flag < 0.7: - if input.numel() > 0: - return torch.nn.functional.interpolate(input, size, scale_factor, mode, align_corners) - - output_shape = _output_size(2, input, size, scale_factor) - output_shape = list(input.shape[:-2]) + list(output_shape) - return _new_empty_tensor(input, output_shape) - else: - return torchvision.ops.misc.interpolate(input, size, scale_factor, mode, align_corners) - - -class color_sys: - def __init__(self, num_colors) -> None: - self.num_colors = num_colors - colors = [] - for i in np.arange(0.0, 360.0, 360.0 / num_colors): - hue = i / 360.0 - lightness = (50 + np.random.rand() * 10) / 100.0 - saturation = (90 + np.random.rand() * 10) / 100.0 - colors.append( - tuple([int(j * 255) for j in colorsys.hls_to_rgb(hue, lightness, saturation)]) - ) - self.colors = colors - - def __call__(self, idx): - return self.colors[idx] - - -def inverse_sigmoid(x, eps=1e-3): - x = x.clamp(min=0, max=1) - x1 = x.clamp(min=eps) - x2 = (1 - x).clamp(min=eps) - return torch.log(x1 / x2) - - -def clean_state_dict(state_dict): - new_state_dict = OrderedDict() - for k, v in state_dict.items(): - if k[:7] == "module.": - k = k[7:] # remove `module.` - new_state_dict[k] = v - return new_state_dict diff --git a/spaces/shainis/Art_Generation_with_Neural_Style_Transfer/app.py b/spaces/shainis/Art_Generation_with_Neural_Style_Transfer/app.py deleted file mode 100644 index 34613a7fd8887e269ab8ec51e5b130bb76c523cf..0000000000000000000000000000000000000000 --- a/spaces/shainis/Art_Generation_with_Neural_Style_Transfer/app.py +++ /dev/null @@ -1,69 +0,0 @@ -import os -import tensorflow as tf -os.environ['TFHUB_MODEL_LOAD_FORMAT'] = 'COMPRESSED' -import numpy as np -import PIL.Image -import gradio as gr -import tensorflow_hub as hub -import matplotlib.pyplot as plt - - -def tensor_to_image(tensor): - tensor = tensor*255 - tensor = np.array(tensor, dtype=np.uint8) - if np.ndim(tensor)>3: - assert tensor.shape[0] == 1 - tensor = tensor[0] - return PIL.Image.fromarray(tensor) - - -style_urls = { - 'Kanagawa great wave': 'The_Great_Wave_off_Kanagawa.jpg', - 'Kandinsky composition 7': 'Kandinsky_Composition_7.jpg', - 'Hubble pillars of creation': 'Pillars_of_creation_2014_HST_WFC3-UVIS_full-res_denoised.jpg', - 'Van gogh starry night': 'Van_Gogh_-_Starry_Night_-_Google_Art_Project.jpg', - 'Turner nantes': 'JMW_Turner_-_Nantes_from_the_Ile_Feydeau.jpg', - 'Munch scream': 'Edvard_Munch.jpg', - 'Picasso demoiselles avignon': 'Les_Demoiselles.jpg', - 'Picasso violin': 'picaso_violin.jpg', - 'Picasso bottle of rum': 'picaso_rum.jpg', - 'Fire': 'Large_bonfire.jpg', - 'Derkovits woman head': 'Derkovits_Gyula_Woman_head_1922.jpg', - 'Amadeo style life': 'Amadeo_Souza_Cardoso.jpg', - 'Derkovtis talig': 'Derkovits_Gyula_Talig.jpg', - 'Kadishman': 'kadishman.jpeg' -} - - -style_images = [k for k, v in style_urls.items()] - - -content_image_input = gr.inputs.Image(label="Content Image") -radio_style = gr.Radio(style_images, label="Choose Style") - - -def perform_neural_transfer(content_image_input, style_image_input): - - content_image = content_image_input.astype(np.float32)[np.newaxis, ...] / 255. - content_image = tf.image.resize(content_image, (400, 600)) - - style_image_input = style_urls[style_image_input] - style_image_input = plt.imread(style_image_input) - style_image = style_image_input.astype(np.float32)[np.newaxis, ...] / 255. 
- - style_image = tf.image.resize(style_image, (256, 256)) - - hub_module = hub.load('https://tfhub.dev/google/magenta/arbitrary-image-stylization-v1-256/2') - - outputs = hub_module(tf.constant(content_image), tf.constant(style_image)) - stylized_image = outputs[0] - - return tensor_to_image(stylized_image) - - -app_interface = gr.Interface(fn=perform_neural_transfer, - inputs=[content_image_input, radio_style], - outputs="image", - title="Art Generation with Neural Style Transfer", - ) -app_interface.launch() \ No newline at end of file diff --git a/spaces/shencc/gpt/crazy_functions/crazy_utils.py b/spaces/shencc/gpt/crazy_functions/crazy_utils.py deleted file mode 100644 index e54136c441e7d713b0e8f5a66de9fb8bae1b1f4c..0000000000000000000000000000000000000000 --- a/spaces/shencc/gpt/crazy_functions/crazy_utils.py +++ /dev/null @@ -1,608 +0,0 @@ -from toolbox import update_ui, get_conf, trimmed_format_exc - -def input_clipping(inputs, history, max_token_limit): - import numpy as np - from request_llm.bridge_all import model_info - enc = model_info["gpt-3.5-turbo"]['tokenizer'] - def get_token_num(txt): return len(enc.encode(txt, disallowed_special=())) - - mode = 'input-and-history' - # 当 输入部分的token占比 小于 全文的一半时,只裁剪历史 - input_token_num = get_token_num(inputs) - if input_token_num < max_token_limit//2: - mode = 'only-history' - max_token_limit = max_token_limit - input_token_num - - everything = [inputs] if mode == 'input-and-history' else [''] - everything.extend(history) - n_token = get_token_num('\n'.join(everything)) - everything_token = [get_token_num(e) for e in everything] - delta = max(everything_token) // 16 # 截断时的颗粒度 - - while n_token > max_token_limit: - where = np.argmax(everything_token) - encoded = enc.encode(everything[where], disallowed_special=()) - clipped_encoded = encoded[:len(encoded)-delta] - everything[where] = enc.decode(clipped_encoded)[:-1] # -1 to remove the may-be illegal char - everything_token[where] = get_token_num(everything[where]) - n_token = get_token_num('\n'.join(everything)) - - if mode == 'input-and-history': - inputs = everything[0] - else: - pass - history = everything[1:] - return inputs, history - -def request_gpt_model_in_new_thread_with_ui_alive( - inputs, inputs_show_user, llm_kwargs, - chatbot, history, sys_prompt, refresh_interval=0.2, - handle_token_exceed=True, - retry_times_at_unknown_error=2, - ): - """ - Request GPT model,请求GPT模型同时维持用户界面活跃。 - - 输入参数 Args (以_array结尾的输入变量都是列表,列表长度为子任务的数量,执行时,会把列表拆解,放到每个子线程中分别执行): - inputs (string): List of inputs (输入) - inputs_show_user (string): List of inputs to show user(展现在报告中的输入,借助此参数,在汇总报告中隐藏啰嗦的真实输入,增强报告的可读性) - top_p (float): Top p value for sampling from model distribution (GPT参数,浮点数) - temperature (float): Temperature value for sampling from model distribution(GPT参数,浮点数) - chatbot: chatbot inputs and outputs (用户界面对话窗口句柄,用于数据流可视化) - history (list): List of chat history (历史,对话历史列表) - sys_prompt (string): List of system prompts (系统输入,列表,用于输入给GPT的前提提示,比如你是翻译官怎样怎样) - refresh_interval (float, optional): Refresh interval for UI (default: 0.2) (刷新时间间隔频率,建议低于1,不可高于3,仅仅服务于视觉效果) - handle_token_exceed:是否自动处理token溢出的情况,如果选择自动处理,则会在溢出时暴力截断,默认开启 - retry_times_at_unknown_error:失败时的重试次数 - - 输出 Returns: - future: 输出,GPT返回的结果 - """ - import time - from concurrent.futures import ThreadPoolExecutor - from request_llm.bridge_all import predict_no_ui_long_connection - # 用户反馈 - chatbot.append([inputs_show_user, ""]) - yield from update_ui(chatbot=chatbot, history=[]) # 刷新界面 - executor = ThreadPoolExecutor(max_workers=16) - 
mutable = ["", time.time(), ""] - def _req_gpt(inputs, history, sys_prompt): - retry_op = retry_times_at_unknown_error - exceeded_cnt = 0 - while True: - # watchdog error - if len(mutable) >= 2 and (time.time()-mutable[1]) > 5: - raise RuntimeError("检测到程序终止。") - try: - # 【第一种情况】:顺利完成 - result = predict_no_ui_long_connection( - inputs=inputs, llm_kwargs=llm_kwargs, - history=history, sys_prompt=sys_prompt, observe_window=mutable) - return result - except ConnectionAbortedError as token_exceeded_error: - # 【第二种情况】:Token溢出 - if handle_token_exceed: - exceeded_cnt += 1 - # 【选择处理】 尝试计算比例,尽可能多地保留文本 - from toolbox import get_reduce_token_percent - p_ratio, n_exceed = get_reduce_token_percent(str(token_exceeded_error)) - MAX_TOKEN = 4096 - EXCEED_ALLO = 512 + 512 * exceeded_cnt - inputs, history = input_clipping(inputs, history, max_token_limit=MAX_TOKEN-EXCEED_ALLO) - mutable[0] += f'[Local Message] 警告,文本过长将进行截断,Token溢出数:{n_exceed}。\n\n' - continue # 返回重试 - else: - # 【选择放弃】 - tb_str = '```\n' + trimmed_format_exc() + '```' - mutable[0] += f"[Local Message] 警告,在执行过程中遭遇问题, Traceback:\n\n{tb_str}\n\n" - return mutable[0] # 放弃 - except: - # 【第三种情况】:其他错误:重试几次 - tb_str = '```\n' + trimmed_format_exc() + '```' - print(tb_str) - mutable[0] += f"[Local Message] 警告,在执行过程中遭遇问题, Traceback:\n\n{tb_str}\n\n" - if retry_op > 0: - retry_op -= 1 - mutable[0] += f"[Local Message] 重试中,请稍等 {retry_times_at_unknown_error-retry_op}/{retry_times_at_unknown_error}:\n\n" - if ("Rate limit reached" in tb_str) or ("Too Many Requests" in tb_str): - time.sleep(30) - time.sleep(5) - continue # 返回重试 - else: - time.sleep(5) - return mutable[0] # 放弃 - - # 提交任务 - future = executor.submit(_req_gpt, inputs, history, sys_prompt) - while True: - # yield一次以刷新前端页面 - time.sleep(refresh_interval) - # “喂狗”(看门狗) - mutable[1] = time.time() - if future.done(): - break - chatbot[-1] = [chatbot[-1][0], mutable[0]] - yield from update_ui(chatbot=chatbot, history=[]) # 刷新界面 - - final_result = future.result() - chatbot[-1] = [chatbot[-1][0], final_result] - yield from update_ui(chatbot=chatbot, history=[]) # 如果最后成功了,则删除报错信息 - return final_result - - -def request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency( - inputs_array, inputs_show_user_array, llm_kwargs, - chatbot, history_array, sys_prompt_array, - refresh_interval=0.2, max_workers=-1, scroller_max_len=30, - handle_token_exceed=True, show_user_at_complete=False, - retry_times_at_unknown_error=2, - ): - """ - Request GPT model using multiple threads with UI and high efficiency - 请求GPT模型的[多线程]版。 - 具备以下功能: - 实时在UI上反馈远程数据流 - 使用线程池,可调节线程池的大小避免openai的流量限制错误 - 处理中途中止的情况 - 网络等出问题时,会把traceback和已经接收的数据转入输出 - - 输入参数 Args (以_array结尾的输入变量都是列表,列表长度为子任务的数量,执行时,会把列表拆解,放到每个子线程中分别执行): - inputs_array (list): List of inputs (每个子任务的输入) - inputs_show_user_array (list): List of inputs to show user(每个子任务展现在报告中的输入,借助此参数,在汇总报告中隐藏啰嗦的真实输入,增强报告的可读性) - llm_kwargs: llm_kwargs参数 - chatbot: chatbot (用户界面对话窗口句柄,用于数据流可视化) - history_array (list): List of chat history (历史对话输入,双层列表,第一层列表是子任务分解,第二层列表是对话历史) - sys_prompt_array (list): List of system prompts (系统输入,列表,用于输入给GPT的前提提示,比如你是翻译官怎样怎样) - refresh_interval (float, optional): Refresh interval for UI (default: 0.2) (刷新时间间隔频率,建议低于1,不可高于3,仅仅服务于视觉效果) - max_workers (int, optional): Maximum number of threads (default: see config.py) (最大线程数,如果子任务非常多,需要用此选项防止高频地请求openai导致错误) - scroller_max_len (int, optional): Maximum length for scroller (default: 30)(数据流的显示最后收到的多少个字符,仅仅服务于视觉效果) - handle_token_exceed (bool, optional): (是否在输入过长时,自动缩减文本) - 
handle_token_exceed:是否自动处理token溢出的情况,如果选择自动处理,则会在溢出时暴力截断,默认开启 - show_user_at_complete (bool, optional): (在结束时,把完整输入-输出结果显示在聊天框) - retry_times_at_unknown_error:子任务失败时的重试次数 - - 输出 Returns: - list: List of GPT model responses (每个子任务的输出汇总,如果某个子任务出错,response中会携带traceback报错信息,方便调试和定位问题。) - """ - import time, random - from concurrent.futures import ThreadPoolExecutor - from request_llm.bridge_all import predict_no_ui_long_connection - assert len(inputs_array) == len(history_array) - assert len(inputs_array) == len(sys_prompt_array) - if max_workers == -1: # 读取配置文件 - try: max_workers, = get_conf('DEFAULT_WORKER_NUM') - except: max_workers = 8 - if max_workers <= 0: max_workers = 3 - # 屏蔽掉 chatglm的多线程,可能会导致严重卡顿 - if not (llm_kwargs['llm_model'].startswith('gpt-') or llm_kwargs['llm_model'].startswith('api2d-')): - max_workers = 1 - - executor = ThreadPoolExecutor(max_workers=max_workers) - n_frag = len(inputs_array) - # 用户反馈 - chatbot.append(["请开始多线程操作。", ""]) - yield from update_ui(chatbot=chatbot, history=[]) # 刷新界面 - # 跨线程传递 - mutable = [["", time.time(), "等待中"] for _ in range(n_frag)] - - # 子线程任务 - def _req_gpt(index, inputs, history, sys_prompt): - gpt_say = "" - retry_op = retry_times_at_unknown_error - exceeded_cnt = 0 - mutable[index][2] = "执行中" - while True: - # watchdog error - if len(mutable[index]) >= 2 and (time.time()-mutable[index][1]) > 5: - raise RuntimeError("检测到程序终止。") - try: - # 【第一种情况】:顺利完成 - # time.sleep(10); raise RuntimeError("测试") - gpt_say = predict_no_ui_long_connection( - inputs=inputs, llm_kwargs=llm_kwargs, history=history, - sys_prompt=sys_prompt, observe_window=mutable[index], console_slience=True - ) - mutable[index][2] = "已成功" - return gpt_say - except ConnectionAbortedError as token_exceeded_error: - # 【第二种情况】:Token溢出, - if handle_token_exceed: - exceeded_cnt += 1 - # 【选择处理】 尝试计算比例,尽可能多地保留文本 - from toolbox import get_reduce_token_percent - p_ratio, n_exceed = get_reduce_token_percent(str(token_exceeded_error)) - MAX_TOKEN = 4096 - EXCEED_ALLO = 512 + 512 * exceeded_cnt - inputs, history = input_clipping(inputs, history, max_token_limit=MAX_TOKEN-EXCEED_ALLO) - gpt_say += f'[Local Message] 警告,文本过长将进行截断,Token溢出数:{n_exceed}。\n\n' - mutable[index][2] = f"截断重试" - continue # 返回重试 - else: - # 【选择放弃】 - tb_str = '```\n' + trimmed_format_exc() + '```' - gpt_say += f"[Local Message] 警告,线程{index}在执行过程中遭遇问题, Traceback:\n\n{tb_str}\n\n" - if len(mutable[index][0]) > 0: gpt_say += "此线程失败前收到的回答:\n\n" + mutable[index][0] - mutable[index][2] = "输入过长已放弃" - return gpt_say # 放弃 - except: - # 【第三种情况】:其他错误 - tb_str = '```\n' + trimmed_format_exc() + '```' - print(tb_str) - gpt_say += f"[Local Message] 警告,线程{index}在执行过程中遭遇问题, Traceback:\n\n{tb_str}\n\n" - if len(mutable[index][0]) > 0: gpt_say += "此线程失败前收到的回答:\n\n" + mutable[index][0] - if retry_op > 0: - retry_op -= 1 - wait = random.randint(5, 20) - if ("Rate limit reached" in tb_str) or ("Too Many Requests" in tb_str): - wait = wait * 3 - fail_info = "OpenAI绑定信用卡可解除频率限制 " - else: - fail_info = "" - # 也许等待十几秒后,情况会好转 - for i in range(wait): - mutable[index][2] = f"{fail_info}等待重试 {wait-i}"; time.sleep(1) - # 开始重试 - mutable[index][2] = f"重试中 {retry_times_at_unknown_error-retry_op}/{retry_times_at_unknown_error}" - continue # 返回重试 - else: - mutable[index][2] = "已失败" - wait = 5 - time.sleep(5) - return gpt_say # 放弃 - - # 异步任务开始 - futures = [executor.submit(_req_gpt, index, inputs, history, sys_prompt) for index, inputs, history, sys_prompt in zip( - range(len(inputs_array)), inputs_array, history_array, sys_prompt_array)] - cnt = 0 - while 
True: - # yield一次以刷新前端页面 - time.sleep(refresh_interval) - cnt += 1 - worker_done = [h.done() for h in futures] - if all(worker_done): - executor.shutdown() - break - # 更好的UI视觉效果 - observe_win = [] - # 每个线程都要“喂狗”(看门狗) - for thread_index, _ in enumerate(worker_done): - mutable[thread_index][1] = time.time() - # 在前端打印些好玩的东西 - for thread_index, _ in enumerate(worker_done): - print_something_really_funny = "[ ...`"+mutable[thread_index][0][-scroller_max_len:].\ - replace('\n', '').replace('```', '...').replace( - ' ', '.').replace('
              ', '.....').replace('$', '.')+"`... ]" - observe_win.append(print_something_really_funny) - # 在前端打印些好玩的东西 - stat_str = ''.join([f'`{mutable[thread_index][2]}`: {obs}\n\n' - if not done else f'`{mutable[thread_index][2]}`\n\n' - for thread_index, done, obs in zip(range(len(worker_done)), worker_done, observe_win)]) - # 在前端打印些好玩的东西 - chatbot[-1] = [chatbot[-1][0], f'多线程操作已经开始,完成情况: \n\n{stat_str}' + ''.join(['.']*(cnt % 10+1))] - yield from update_ui(chatbot=chatbot, history=[]) # 刷新界面 - - # 异步任务结束 - gpt_response_collection = [] - for inputs_show_user, f in zip(inputs_show_user_array, futures): - gpt_res = f.result() - gpt_response_collection.extend([inputs_show_user, gpt_res]) - - # 是否在结束时,在界面上显示结果 - if show_user_at_complete: - for inputs_show_user, f in zip(inputs_show_user_array, futures): - gpt_res = f.result() - chatbot.append([inputs_show_user, gpt_res]) - yield from update_ui(chatbot=chatbot, history=[]) # 刷新界面 - time.sleep(0.3) - return gpt_response_collection - - -def breakdown_txt_to_satisfy_token_limit(txt, get_token_fn, limit): - def cut(txt_tocut, must_break_at_empty_line): # 递归 - if get_token_fn(txt_tocut) <= limit: - return [txt_tocut] - else: - lines = txt_tocut.split('\n') - estimated_line_cut = limit / get_token_fn(txt_tocut) * len(lines) - estimated_line_cut = int(estimated_line_cut) - for cnt in reversed(range(estimated_line_cut)): - if must_break_at_empty_line: - if lines[cnt] != "": - continue - print(cnt) - prev = "\n".join(lines[:cnt]) - post = "\n".join(lines[cnt:]) - if get_token_fn(prev) < limit: - break - if cnt == 0: - raise RuntimeError("存在一行极长的文本!") - # print(len(post)) - # 列表递归接龙 - result = [prev] - result.extend(cut(post, must_break_at_empty_line)) - return result - try: - return cut(txt, must_break_at_empty_line=True) - except RuntimeError: - return cut(txt, must_break_at_empty_line=False) - - -def force_breakdown(txt, limit, get_token_fn): - """ - 当无法用标点、空行分割时,我们用最暴力的方法切割 - """ - for i in reversed(range(len(txt))): - if get_token_fn(txt[:i]) < limit: - return txt[:i], txt[i:] - return "Tiktoken未知错误", "Tiktoken未知错误" - -def breakdown_txt_to_satisfy_token_limit_for_pdf(txt, get_token_fn, limit): - # 递归 - def cut(txt_tocut, must_break_at_empty_line, break_anyway=False): - if get_token_fn(txt_tocut) <= limit: - return [txt_tocut] - else: - lines = txt_tocut.split('\n') - estimated_line_cut = limit / get_token_fn(txt_tocut) * len(lines) - estimated_line_cut = int(estimated_line_cut) - cnt = 0 - for cnt in reversed(range(estimated_line_cut)): - if must_break_at_empty_line: - if lines[cnt] != "": - continue - prev = "\n".join(lines[:cnt]) - post = "\n".join(lines[cnt:]) - if get_token_fn(prev) < limit: - break - if cnt == 0: - if break_anyway: - prev, post = force_breakdown(txt_tocut, limit, get_token_fn) - else: - raise RuntimeError(f"存在一行极长的文本!{txt_tocut}") - # print(len(post)) - # 列表递归接龙 - result = [prev] - result.extend(cut(post, must_break_at_empty_line, break_anyway=break_anyway)) - return result - try: - # 第1次尝试,将双空行(\n\n)作为切分点 - return cut(txt, must_break_at_empty_line=True) - except RuntimeError: - try: - # 第2次尝试,将单空行(\n)作为切分点 - return cut(txt, must_break_at_empty_line=False) - except RuntimeError: - try: - # 第3次尝试,将英文句号(.)作为切分点 - res = cut(txt.replace('.', '。\n'), must_break_at_empty_line=False) # 这个中文的句号是故意的,作为一个标识而存在 - return [r.replace('。\n', '.') for r in res] - except RuntimeError as e: - try: - # 第4次尝试,将中文句号(。)作为切分点 - res = cut(txt.replace('。', '。。\n'), must_break_at_empty_line=False) - return [r.replace('。。\n', '。') for r in res] - 
except RuntimeError as e: - # 第5次尝试,没办法了,随便切一下敷衍吧 - return cut(txt, must_break_at_empty_line=False, break_anyway=True) - - - -def read_and_clean_pdf_text(fp): - """ - 这个函数用于分割pdf,用了很多trick,逻辑较乱,效果奇好 - - **输入参数说明** - - `fp`:需要读取和清理文本的pdf文件路径 - - **输出参数说明** - - `meta_txt`:清理后的文本内容字符串 - - `page_one_meta`:第一页清理后的文本内容列表 - - **函数功能** - 读取pdf文件并清理其中的文本内容,清理规则包括: - - 提取所有块元的文本信息,并合并为一个字符串 - - 去除短块(字符数小于100)并替换为回车符 - - 清理多余的空行 - - 合并小写字母开头的段落块并替换为空格 - - 清除重复的换行 - - 将每个换行符替换为两个换行符,使每个段落之间有两个换行符分隔 - """ - import fitz, copy - import re - import numpy as np - from colorful import print亮黄, print亮绿 - fc = 0 # Index 0 文本 - fs = 1 # Index 1 字体 - fb = 2 # Index 2 框框 - REMOVE_FOOT_NOTE = True # 是否丢弃掉 不是正文的内容 (比正文字体小,如参考文献、脚注、图注等) - REMOVE_FOOT_FFSIZE_PERCENT = 0.95 # 小于正文的?时,判定为不是正文(有些文章的正文部分字体大小不是100%统一的,有肉眼不可见的小变化) - def primary_ffsize(l): - """ - 提取文本块主字体 - """ - fsize_statiscs = {} - for wtf in l['spans']: - if wtf['size'] not in fsize_statiscs: fsize_statiscs[wtf['size']] = 0 - fsize_statiscs[wtf['size']] += len(wtf['text']) - return max(fsize_statiscs, key=fsize_statiscs.get) - - def ffsize_same(a,b): - """ - 提取字体大小是否近似相等 - """ - return abs((a-b)/max(a,b)) < 0.02 - - with fitz.open(fp) as doc: - meta_txt = [] - meta_font = [] - - meta_line = [] - meta_span = [] - ############################## <第 1 步,搜集初始信息> ################################## - for index, page in enumerate(doc): - # file_content += page.get_text() - text_areas = page.get_text("dict") # 获取页面上的文本信息 - for t in text_areas['blocks']: - if 'lines' in t: - pf = 998 - for l in t['lines']: - txt_line = "".join([wtf['text'] for wtf in l['spans']]) - if len(txt_line) == 0: continue - pf = primary_ffsize(l) - meta_line.append([txt_line, pf, l['bbox'], l]) - for wtf in l['spans']: # for l in t['lines']: - meta_span.append([wtf['text'], wtf['size'], len(wtf['text'])]) - # meta_line.append(["NEW_BLOCK", pf]) - # 块元提取 for each word segment with in line for each line cross-line words for each block - meta_txt.extend([" ".join(["".join([wtf['text'] for wtf in l['spans']]) for l in t['lines']]).replace( - '- ', '') for t in text_areas['blocks'] if 'lines' in t]) - meta_font.extend([np.mean([np.mean([wtf['size'] for wtf in l['spans']]) - for l in t['lines']]) for t in text_areas['blocks'] if 'lines' in t]) - if index == 0: - page_one_meta = [" ".join(["".join([wtf['text'] for wtf in l['spans']]) for l in t['lines']]).replace( - '- ', '') for t in text_areas['blocks'] if 'lines' in t] - - ############################## <第 2 步,获取正文主字体> ################################## - fsize_statiscs = {} - for span in meta_span: - if span[1] not in fsize_statiscs: fsize_statiscs[span[1]] = 0 - fsize_statiscs[span[1]] += span[2] - main_fsize = max(fsize_statiscs, key=fsize_statiscs.get) - if REMOVE_FOOT_NOTE: - give_up_fize_threshold = main_fsize * REMOVE_FOOT_FFSIZE_PERCENT - - ############################## <第 3 步,切分和重新整合> ################################## - mega_sec = [] - sec = [] - for index, line in enumerate(meta_line): - if index == 0: - sec.append(line[fc]) - continue - if REMOVE_FOOT_NOTE: - if meta_line[index][fs] <= give_up_fize_threshold: - continue - if ffsize_same(meta_line[index][fs], meta_line[index-1][fs]): - # 尝试识别段落 - if meta_line[index][fc].endswith('.') and\ - (meta_line[index-1][fc] != 'NEW_BLOCK') and \ - (meta_line[index][fb][2] - meta_line[index][fb][0]) < (meta_line[index-1][fb][2] - meta_line[index-1][fb][0]) * 0.7: - sec[-1] += line[fc] - sec[-1] += "\n\n" - else: - sec[-1] += " " - sec[-1] += line[fc] - else: - if (index+1 < len(meta_line)) 
and \ - meta_line[index][fs] > main_fsize: - # 单行 + 字体大 - mega_sec.append(copy.deepcopy(sec)) - sec = [] - sec.append("# " + line[fc]) - else: - # 尝试识别section - if meta_line[index-1][fs] > meta_line[index][fs]: - sec.append("\n" + line[fc]) - else: - sec.append(line[fc]) - mega_sec.append(copy.deepcopy(sec)) - - finals = [] - for ms in mega_sec: - final = " ".join(ms) - final = final.replace('- ', ' ') - finals.append(final) - meta_txt = finals - - ############################## <第 4 步,乱七八糟的后处理> ################################## - def 把字符太少的块清除为回车(meta_txt): - for index, block_txt in enumerate(meta_txt): - if len(block_txt) < 100: - meta_txt[index] = '\n' - return meta_txt - meta_txt = 把字符太少的块清除为回车(meta_txt) - - def 清理多余的空行(meta_txt): - for index in reversed(range(1, len(meta_txt))): - if meta_txt[index] == '\n' and meta_txt[index-1] == '\n': - meta_txt.pop(index) - return meta_txt - meta_txt = 清理多余的空行(meta_txt) - - def 合并小写开头的段落块(meta_txt): - def starts_with_lowercase_word(s): - pattern = r"^[a-z]+" - match = re.match(pattern, s) - if match: - return True - else: - return False - for _ in range(100): - for index, block_txt in enumerate(meta_txt): - if starts_with_lowercase_word(block_txt): - if meta_txt[index-1] != '\n': - meta_txt[index-1] += ' ' - else: - meta_txt[index-1] = '' - meta_txt[index-1] += meta_txt[index] - meta_txt[index] = '\n' - return meta_txt - meta_txt = 合并小写开头的段落块(meta_txt) - meta_txt = 清理多余的空行(meta_txt) - - meta_txt = '\n'.join(meta_txt) - # 清除重复的换行 - for _ in range(5): - meta_txt = meta_txt.replace('\n\n', '\n') - - # 换行 -> 双换行 - meta_txt = meta_txt.replace('\n', '\n\n') - - ############################## <第 5 步,展示分割效果> ################################## - # for f in finals: - # print亮黄(f) - # print亮绿('***************************') - - return meta_txt, page_one_meta - - -def get_files_from_everything(txt, type): # type='.md' - """ - 这个函数是用来获取指定目录下所有指定类型(如.md)的文件,并且对于网络上的文件,也可以获取它。 - 下面是对每个参数和返回值的说明: - 参数 - - txt: 路径或网址,表示要搜索的文件或者文件夹路径或网络上的文件。 - - type: 字符串,表示要搜索的文件类型。默认是.md。 - 返回值 - - success: 布尔值,表示函数是否成功执行。 - - file_manifest: 文件路径列表,里面包含以指定类型为后缀名的所有文件的绝对路径。 - - project_folder: 字符串,表示文件所在的文件夹路径。如果是网络上的文件,就是临时文件夹的路径。 - 该函数详细注释已添加,请确认是否满足您的需要。 - """ - import glob, os - - success = True - if txt.startswith('http'): - # 网络的远程文件 - import requests - from toolbox import get_conf - proxies, = get_conf('proxies') - r = requests.get(txt, proxies=proxies) - with open('./gpt_log/temp'+type, 'wb+') as f: f.write(r.content) - project_folder = './gpt_log/' - file_manifest = ['./gpt_log/temp'+type] - elif txt.endswith(type): - # 直接给定文件 - file_manifest = [txt] - project_folder = os.path.dirname(txt) - elif os.path.exists(txt): - # 本地路径,递归搜索 - project_folder = txt - file_manifest = [f for f in glob.glob(f'{project_folder}/**/*'+type, recursive=True)] - if len(file_manifest) == 0: - success = False - else: - project_folder = None - file_manifest = [] - success = False - - return success, file_manifest, project_folder diff --git a/spaces/shivammehta25/Diff-TTSG/diff_ttsg/hifigan/env.py b/spaces/shivammehta25/Diff-TTSG/diff_ttsg/hifigan/env.py deleted file mode 100644 index 91b0b5391d9d5c226861fd76581d82f67670c2a7..0000000000000000000000000000000000000000 --- a/spaces/shivammehta25/Diff-TTSG/diff_ttsg/hifigan/env.py +++ /dev/null @@ -1,17 +0,0 @@ -""" from https://github.com/jik876/hifi-gan """ - -import os -import shutil - - -class AttrDict(dict): - def __init__(self, *args, **kwargs): - super(AttrDict, self).__init__(*args, **kwargs) - self.__dict__ = self - - -def build_env(config, 
config_name, path): - t_path = os.path.join(path, config_name) - if config != t_path: - os.makedirs(path, exist_ok=True) - shutil.copyfile(config, os.path.join(path, config_name)) diff --git a/spaces/sidharthism/fashion-eye/netdissect/upsegmodel/prroi_pool/README.md b/spaces/sidharthism/fashion-eye/netdissect/upsegmodel/prroi_pool/README.md deleted file mode 100644 index bb98946d3b48a2069a58f179eb6da63e009c3849..0000000000000000000000000000000000000000 --- a/spaces/sidharthism/fashion-eye/netdissect/upsegmodel/prroi_pool/README.md +++ /dev/null @@ -1,66 +0,0 @@ -# PreciseRoIPooling -This repo implements the **Precise RoI Pooling** (PrRoI Pooling), proposed in the paper **Acquisition of Localization Confidence for Accurate Object Detection** published at ECCV 2018 (Oral Presentation). - -**Acquisition of Localization Confidence for Accurate Object Detection** - -_Borui Jiang*, Ruixuan Luo*, Jiayuan Mao*, Tete Xiao, Yuning Jiang_ (* indicates equal contribution.) - -https://arxiv.org/abs/1807.11590 - -## Brief - -In short, Precise RoI Pooling is an integration-based (bilinear interpolation) average pooling method for RoI Pooling. It avoids any quantization and has a continuous gradient on bounding box coordinates. It is: - -- different from the original RoI Pooling proposed in [Fast R-CNN](https://arxiv.org/abs/1504.08083). PrRoI Pooling uses average pooling instead of max pooling for each bin and has a continuous gradient on bounding box coordinates. That is, one can take the derivatives of some loss function w.r.t the coordinates of each RoI and optimize the RoI coordinates. -- different from the RoI Align proposed in [Mask R-CNN](https://arxiv.org/abs/1703.06870). PrRoI Pooling uses a full integration-based average pooling instead of sampling a constant number of points. This makes the gradient w.r.t. the coordinates continuous. - -For a better illustration, we illustrate RoI Pooling, RoI Align and PrRoI Pooing in the following figure. More details including the gradient computation can be found in our paper. - -
              - -## Implementation - -PrRoI Pooling was originally implemented by [Tete Xiao](http://tetexiao.com/) based on MegBrain, an (internal) deep learning framework built by Megvii Inc. It was later adapted into open-source deep learning frameworks. Currently, we only support PyTorch. Unfortunately, we don't have any specific plan for the adaptation into other frameworks such as TensorFlow, but any contributions (pull requests) will be more than welcome. - -## Usage (PyTorch 1.0) - -In the directory `pytorch/`, we provide a PyTorch-based implementation of PrRoI Pooling. It requires PyTorch 1.0+ and only supports CUDA (CPU mode is not implemented). -Since we use PyTorch JIT for cxx/cuda code compilation, to use the module in your code, simply do: - -``` -from prroi_pool import PrRoIPool2D - -avg_pool = PrRoIPool2D(window_height, window_width, spatial_scale) -roi_features = avg_pool(features, rois) - -# for those who want to use the "functional" - -from prroi_pool.functional import prroi_pool2d -roi_features = prroi_pool2d(features, rois, window_height, window_width, spatial_scale) -``` - - -## Usage (PyTorch 0.4) - -**!!! Please first checkout to the branch pytorch0.4.** - -In the directory `pytorch/`, we provide a PyTorch-based implementation of PrRoI Pooling. It requires PyTorch 0.4 and only supports CUDA (CPU mode is not implemented). -To use the PrRoI Pooling module, first goto `pytorch/prroi_pool` and execute `./travis.sh` to compile the essential components (you may need `nvcc` for this step). To use the module in your code, simply do: - -``` -from prroi_pool import PrRoIPool2D - -avg_pool = PrRoIPool2D(window_height, window_width, spatial_scale) -roi_features = avg_pool(features, rois) - -# for those who want to use the "functional" - -from prroi_pool.functional import prroi_pool2d -roi_features = prroi_pool2d(features, rois, window_height, window_width, spatial_scale) -``` - -Here, - -- RoI is an `m * 5` float tensor of format `(batch_index, x0, y0, x1, y1)`, following the convention in the original Caffe implementation of RoI Pooling, although in some frameworks the batch indices are provided by an integer tensor. -- `spatial_scale` is multiplied to the RoIs. For example, if your feature maps are down-sampled by a factor of 16 (w.r.t. the input image), you should use a spatial scale of `1/16`. -- The coordinates for RoI follows the [L, R) convension. That is, `(0, 0, 4, 4)` denotes a box of size `4x4`. 
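          A quick way to sanity-check the conventions above is a minimal usage sketch. This is an editor's addition, not part of the original README: the tensor shapes, image size, and RoI values are illustrative assumptions; only `PrRoIPool2D(window_height, window_width, spatial_scale)` and the `(batch_index, x0, y0, x1, y1)` RoI layout come from the README itself.

          ```
          import torch
          from prroi_pool import PrRoIPool2D

          # Feature map assumed to be downsampled 16x from a 608x800 image, so spatial_scale = 1/16.
          features = torch.rand(2, 256, 38, 50).cuda()

          # Each RoI is (batch_index, x0, y0, x1, y1) in input-image coordinates, [L, R) convention.
          rois = torch.tensor([
              [0.,   0.,   0.,  64.,  64.],
              [1., 128.,  96., 256., 224.],
          ]).cuda()

          avg_pool = PrRoIPool2D(7, 7, spatial_scale=1.0 / 16)  # 7x7 output bins per RoI
          roi_features = avg_pool(features, rois)               # shape: (2, 256, 7, 7)
          ```

          Because the module is CUDA-only, the tensors are moved to the GPU before the call.
          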
diff --git a/spaces/silencewing/server/youyou/js/tts.js b/spaces/silencewing/server/youyou/js/tts.js deleted file mode 100644 index fe24c078420c0e76f6fa121623d052f9ed353224..0000000000000000000000000000000000000000 --- a/spaces/silencewing/server/youyou/js/tts.js +++ /dev/null @@ -1,44 +0,0 @@ - - -// 全局声明 audio -let audio = null; - -// 实时获取后台返回的 音频流(MP3流)并进行播放 -function ttsPlay(params) { - return new Promise((resolve, reject) => { - axios({ - method: 'post', - url: 'https://tsn.baidu.com/text2audio?lan=zh&per=4121&cuid=baidu_speech_demo&idx=1&cod=2&lan=zh&ctp=1&pdt=220&aue=3&pit=5&ie=UTF-8&spd=4&tex=' + params, - responseType: 'arraybuffer' - }).then((response) => { - // 将 blob 数据转换成 url - let mp3Url = window.URL.createObjectURL(new Blob([response.data])) - - // 进行音频播放 - try { - var playCount = 10; - //是否已经声明过 - if (audio == null) { - audio = new Audio(); - audio.addEventListener('ended', function() { - // alert(playCount) - playCount = playCount - 1; - if(playCount <= 0) - localStorage.setItem('audioEnded', true); - else{ - setTimeout("audio.play()",1000) - - } - }, false); - } - if (mp3Url) { - audio.src = mp3Url; - // audio.multer = true; - audio.play(); - } - } catch (e) {} - }).catch((error) => { - reject(error); - }) - }) -} \ No newline at end of file diff --git a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Download Pixel Car Racer MOD APK and Unlock All the Features You Want.md b/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Download Pixel Car Racer MOD APK and Unlock All the Features You Want.md deleted file mode 100644 index e06a584cf20be9ddc68a4c4ffccd0c38d06d380c..0000000000000000000000000000000000000000 --- a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Download Pixel Car Racer MOD APK and Unlock All the Features You Want.md +++ /dev/null @@ -1,86 +0,0 @@ - -

              Pixel Car Racer: How to Download APK Mod and Enjoy Free Super Cars

              -

              If you are a fan of racing games, you might have heard of Pixel Car Racer, a retro-style arcade game that lets you customize and race your own cars. But did you know that you can download an APK mod for this game and get access to unlimited money, diamonds, and super cars for free? In this article, we will show you how to do that and what features you can enjoy with the APK mod.

              -

              Introduction

              -

              What is Pixel Car Racer?

              -

              Pixel Car Racer is a racing game developed by Studio Furukawa, a small indie team based in Canada. The game features pixel graphics, retro music, and over 1000 car parts to choose from. You can build your own garage, customize your cars, and race them in various modes, such as drag racing, street racing, or story mode. The game also has online multiplayer, leaderboards, and achievements.

              -

              pixel car racer download apk mod


              DOWNLOAD 🌟 https://ssurll.com/2uNY3p



              -

              Why download APK mod?

              -

              While Pixel Car Racer is free to play, it also has in-app purchases that require real money. You need money and diamonds to buy new cars, parts, crates, and upgrades. You also need to earn reputation points to unlock new levels and features. This can take a lot of time and effort, especially if you want to collect all the super cars in the game.

              -

              That's why some players prefer to download an APK mod for Pixel Car Racer, which is a modified version of the original game that gives you unlimited resources and unlocks everything for free. With an APK mod, you can enjoy the game without any limitations or restrictions.

              -

              How to download APK mod for Pixel Car Racer

              -

              Step 1: Find a reliable source

              -

              The first step to download an APK mod for Pixel Car Racer is to find a reliable source that offers the latest version of the mod. There are many websites that claim to provide APK mods, but not all of them are safe or trustworthy. Some of them may contain viruses, malware, or outdated files that can harm your device or compromise your privacy.
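          One practical precaution, if the site publishes a SHA-256 checksum for its files, is to verify the download before installing it. The short Python sketch below is an editor's illustration: the file name and expected hash are placeholders, and a matching checksum only helps if the published hash itself comes from a source you trust.

          ```python
          import hashlib

          APK_PATH = "pixel-car-racer-mod.apk"                    # placeholder: path to the downloaded file
          EXPECTED_SHA256 = "paste-the-published-checksum-here"   # placeholder: value listed by the publisher

          def sha256_of(path, chunk_size=1 << 20):
              # Hash the file in 1 MB chunks so large APKs do not need to fit in memory.
              digest = hashlib.sha256()
              with open(path, "rb") as f:
                  for chunk in iter(lambda: f.read(chunk_size), b""):
                      digest.update(chunk)
              return digest.hexdigest()

          actual = sha256_of(APK_PATH)
          print("checksum matches" if actual == EXPECTED_SHA256.lower() else "checksum MISMATCH: " + actual)
          ```
          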

              -

          One of the sources that we recommend is Find Me Apk, which is a website that provides high-quality APK mods for various games and apps. You can find the Pixel Car Racer MOD APK v.1.2.3 (Free Super Cars) on their website. This mod was updated on September 9th, 2021 and has over 10 million downloads.
          

              -

              Step 2: Enable unknown sources on your device

              -

              The next step is to enable unknown sources on your device, which allows you to install apps from sources other than the Google Play Store. To do this, go to your device settings and look for security or privacy options. Then, find the option that says "allow installation of apps from unknown sources" or something similar and toggle it on.

              -

              Note that this step may vary depending on your device model and Android version. If you are not sure how to do it, you can search online for instructions specific to your device.

              -

              Step 3: Download and install the APK file

              -

              After enabling unknown sources, you can proceed to download and install the APK file from Find Me Apk. To do this, go to their website and click on the download button for Pixel Car Racer MOD APK. Then, wait for the file to be downloaded on your device.

              -

              Once the file is downloaded, locate the file in your downloads folder and tap on it to start the installation process. Follow the instructions on the screen and wait for the installation to be completed.

              -

              pixel car racer mod apk unlimited money and diamonds
              -pixel car racer hack apk download android
              -pixel car racer mod apk latest version 2023
              -pixel car racer apk mod free shopping
              -pixel car racer mod apk unlocked all cars
              -pixel car racer mod apk ios download
              -pixel car racer mod apk revdl
              -pixel car racer mod apk no root
              -pixel car racer mod apk offline
              -pixel car racer mod apk unlimited crates
              -pixel car racer mod apk unlimited gold
              -pixel car racer mod apk android 1
              -pixel car racer mod apk rexdl
              -pixel car racer mod apk happymod
              -pixel car racer mod apk an1
              -pixel car racer mod apk unlimited everything
              -pixel car racer mod apk obb
              -pixel car racer mod apk pure
              -pixel car racer mod apk unlimited rp
              -pixel car racer mod apk all cars unlocked
              -pixel car racer mod apk unlimited gems
              -pixel car racer mod apk android republic
              -pixel car racer mod apk apkpure
              -pixel car racer mod apk blackmod
              -pixel car racer mod apk club
              -pixel car racer mod apk download for pc
              -pixel car racer mod apk download ios
              -pixel car racer mod apk download latest version
              -pixel car racer mod apk download 2023
              -pixel car racer mod apk free download
              -pixel car racer mod apk full unlocked
              -pixel car racer mod apk gamestechy
              -pixel car racer mod apk god mode
              -pixel car racer mod apk hack download
              -pixel car racer mod apk high compress
              -pixel car racer mod apk highly compressed
              -pixel car racer mod apk ihackedit
              -pixel car racer mod apk install
              -pixel car racer mod apk lenov.ru
              -pixel car racer mod apk mega.nz

              -

              Step 4: Launch the game and enjoy

              -

              The final step is to launch the game and enjoy the features of the APK mod. You can find the game icon on your home screen or app drawer and tap on it to open it. You will see that you have unlimited money, diamonds, and super cars in your account. You can also customize your cars, race them in different modes, and compete with other players online.

              -

              Features of Pixel Car Racer APK mod

              -

              Unlimited money and diamonds

              -

              One of the main features of the APK mod is that it gives you unlimited money and diamonds, which are the two currencies in the game. You can use them to buy new cars, parts, crates, and upgrades without worrying about running out of them. You can also use them to unlock new levels and features in the game.

              -

              Free super cars and customization options

              -

              Another feature of the APK mod is that it gives you free access to all the super cars in the game, which are normally very expensive and rare. You can choose from over 100 cars, including classic muscle cars, sports cars, exotic cars, and even futuristic cars. You can also customize your cars with over 1000 parts, such as engines, tires, spoilers, decals, and more. You can create your own unique style and show it off to your friends.

              -

              Realistic racing experience and gameplay modes

              -

              The APK mod also enhances the racing experience and gameplay modes of Pixel Car Racer. The game has realistic physics, sound effects, and graphics that make you feel like you are driving a real car. The game also has various gameplay modes, such as drag racing, street racing, or story mode. You can race against AI opponents or other players online. You can also challenge yourself with different difficulty levels and objectives.

              -

              Conclusion

              -

              Summary of the main points

              -

          In conclusion, Pixel Car Racer is a fun and addictive racing game that lets you customize and race your own cars. However, if you want to enjoy the game without any limitations or restrictions, you can download an APK mod for Pixel Car Racer that gives you unlimited money, diamonds, and super cars for free. You can also enjoy a realistic racing experience and varied gameplay modes with the APK mod.
          

              -

              Call to action and recommendation

              -

              If you are interested in downloading the APK mod for Pixel Car Racer, you can follow the steps that we have outlined in this article. You can also visit Find Me Apk to find more APK mods for other games and apps. We recommend that you download the APK mod for Pixel Car Racer today and have fun with your free super cars.

              -

              Frequently Asked Questions

              -

              Q: Is Pixel Car Racer APK mod safe to download?

              -

          A: Yes, Pixel Car Racer APK mod is safe to download from Find Me Apk, which is a reliable source that provides high-quality APK mods for various games and apps. However, you should always be careful when downloading any files from unknown sources and scan them with antivirus software before installing them.
          

              -

              Q: Do I need to root my device to use Pixel Car Racer APK mod?

              -

              A: No, you do not need to root your device to use Pixel Car Racer APK mod. You just need to enable unknown sources on your device settings and install the APK file as instructed in this article.

              -

              Q: Will I get banned from Pixel Car Racer if I use the APK mod?

              -

          A: There is a low risk of getting banned from Pixel Car Racer if you use the APK mod, as long as you do not abuse it or cheat in online multiplayer mode. However, we cannot guarantee that you will not get banned by the game developers or Google Play Store for using an unofficial version of the game. Therefore, we advise that you use the APK mod at your own discretion and risk.
          

              -

              Q: Can I update Pixel Car Racer if I use the APK mod?

              -

          A: Yes, you can update Pixel Car Racer if you use the APK mod, but you may lose some of the features or data of the mod if you do so. Therefore, we recommend that you back up your game data before updating it, or wait for a new version of the APK mod to be released by Find Me Apk.
          

              -

          Q: Can I play Pixel Car Racer offline if I use the APK mod?

          A: Yes, you can play Pixel Car Racer offline if you use the APK mod, as the game does not require an internet connection to run. However, you will not be able to access some of the online features, such as multiplayer mode, leaderboards, and achievements.
          

          
              -
              -
              \ No newline at end of file diff --git a/spaces/simsantonioii/MusicGen-Continuation/tests/common_utils/temp_utils.py b/spaces/simsantonioii/MusicGen-Continuation/tests/common_utils/temp_utils.py deleted file mode 100644 index d1e0367e979c8b9fea65472c373916d956ad5aaa..0000000000000000000000000000000000000000 --- a/spaces/simsantonioii/MusicGen-Continuation/tests/common_utils/temp_utils.py +++ /dev/null @@ -1,56 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -import os -import tempfile - - -class TempDirMixin: - """Mixin to provide easy access to temp dir. - """ - - temp_dir_ = None - - @classmethod - def get_base_temp_dir(cls): - # If AUDIOCRAFT_TEST_DIR is set, use it instead of temporary directory. - # this is handy for debugging. - key = "AUDIOCRAFT_TEST_DIR" - if key in os.environ: - return os.environ[key] - if cls.temp_dir_ is None: - cls.temp_dir_ = tempfile.TemporaryDirectory() - return cls.temp_dir_.name - - @classmethod - def tearDownClass(cls): - if cls.temp_dir_ is not None: - try: - cls.temp_dir_.cleanup() - cls.temp_dir_ = None - except PermissionError: - # On Windows there is a know issue with `shutil.rmtree`, - # which fails intermittenly. - # https://github.com/python/cpython/issues/74168 - # Following the above thread, we ignore it. - pass - super().tearDownClass() - - @property - def id(self): - return self.__class__.__name__ - - def get_temp_path(self, *paths): - temp_dir = os.path.join(self.get_base_temp_dir(), self.id) - path = os.path.join(temp_dir, *paths) - os.makedirs(os.path.dirname(path), exist_ok=True) - return path - - def get_temp_dir(self, *paths): - temp_dir = os.path.join(self.get_base_temp_dir(), self.id) - path = os.path.join(temp_dir, *paths) - os.makedirs(path, exist_ok=True) - return path diff --git a/spaces/sklearn-docs/Nearest_Neighbor_Regression/app.py b/spaces/sklearn-docs/Nearest_Neighbor_Regression/app.py deleted file mode 100644 index 9cffa2ceb5af721de721818d8812206099e9039f..0000000000000000000000000000000000000000 --- a/spaces/sklearn-docs/Nearest_Neighbor_Regression/app.py +++ /dev/null @@ -1,57 +0,0 @@ -import numpy as np -import matplotlib.pyplot as plt -from sklearn import neighbors -import gradio as gr - -def train_and_plot(weights, n_neighbors): - np.random.seed(0) - X = np.sort(5 * np.random.rand(40, 1), axis=0) - T = np.linspace(0, 5, 500)[:, np.newaxis] - y = np.sin(X).ravel() - - # Add noise to targets - y[::5] += 1 * (0.5 - np.random.rand(8)) - - knn = neighbors.KNeighborsRegressor(n_neighbors, weights=weights) - fit = knn.fit(X, y) - y_ = knn.predict(T) - score = knn.score(T, y_) - - plt.figure() - plt.scatter(X, y, color="darkorange", label="data") - plt.plot(T, y_, color="navy", label="prediction") - plt.axis("tight") - plt.legend() - plt.title("KNeighborsRegressor (k = %i, weights = '%s')" % (n_neighbors, weights)) - - plt.tight_layout() - return plt, score - - -with gr.Blocks() as demo: - link = "https://scikit-learn.org/stable/auto_examples/neighbors/plot_regression.html#sphx-glr-auto-examples-neighbors-plot-regression-py" - gr.Markdown("## Nearest Neighbors Regression") - gr.Markdown(f"This demo is based on this [scikit-learn example]({link}).") - gr.HTML("
              ") - gr.Markdown("In this demo, we learn a noise-infused sine function using k-Nearest Neighbor and observe how the function learned varies as we change the following hyperparameters:") - gr.Markdown("""1. Weight function - 2. Number of neighbors""") - - with gr.Row(): - weights = gr.Radio(['uniform', "distance"], label="Weights", info="Choose the weight function") - n_neighbors = gr.Slider(label="Neighbors", info="Choose the number of neighbors", minimum =1, maximum=15, step=1) - - - with gr.Row(): - with gr.Column(scale=2): - plot = gr.Plot(label="KNeighborsRegressor Plot") - with gr.Column(scale=1): - num = gr.Textbox(label="Test Accuracy") - - - weights.change(train_and_plot, inputs=[weights, n_neighbors], outputs=[plot, num]) - n_neighbors.change(train_and_plot, inputs=[weights, n_neighbors], outputs=[plot, num]) - - -if __name__ == "__main__": - demo.launch() \ No newline at end of file diff --git a/spaces/sneedium/dvatch_captcha_sneedium/README.md b/spaces/sneedium/dvatch_captcha_sneedium/README.md deleted file mode 100644 index 7717e121e950d87e4b64626f9d55a7c46aa15a37..0000000000000000000000000000000000000000 --- a/spaces/sneedium/dvatch_captcha_sneedium/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Bruh -emoji: ⚡ -colorFrom: green -colorTo: blue -sdk: gradio -sdk_version: 3.9.1 -app_file: app.py -pinned: true ---- - - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/spacy/healthsea-demo/app.py b/spaces/spacy/healthsea-demo/app.py deleted file mode 100644 index 5de8e0db52cb26ab808635619aaedc3cab7c811e..0000000000000000000000000000000000000000 --- a/spaces/spacy/healthsea-demo/app.py +++ /dev/null @@ -1,161 +0,0 @@ -import streamlit as st -from pathlib import Path -import json -from support_functions import HealthseaSearch - -# Header -with open("style.css") as f: - st.markdown("", unsafe_allow_html=True) - -# Intro -st.title("Welcome to Healthsea 🪐") - -intro, jellyfish = st.columns(2) -jellyfish.markdown("\n") - -intro.subheader("Create easier access to health✨") - -jellyfish.image("data/img/Jellymation.gif") -intro.markdown( - """Healthsea is an end-to-end spaCy v3 pipeline for analyzing user reviews to supplementary products and extracting their potential effects on health.""" -) -intro.markdown( - """The code for Healthsea is provided in this [github repository](https://github.com/explosion/healthsea). Visit our [blog post](https://explosion.ai/blog/healthsea) or more about the Healthsea project. - """ -) - -st.write( - """This app visualizes the results of Healthsea on a dataset of up to 1 million reviews to 10.000 products. You can use the app to search for any health aspect, whether it's a disease (e.g. joint pain) or a positive state of health (e.g. energy), the app returns a list of products and substances. - You can visit the [Healthsea Pipeline app](https://huggingface.co/spaces/spacy/healthsea-pipeline) for exploring the pipeline itself. - """ -) - -st.warning("""Healthsea is an experimental project and the results should not be used as a foundation for solving health problems. 
Nor do we want to give the impression that supplements are the answer to anyone's health issues.""") - -# Configuration -health_aspect_path = Path("data/health_aspects.json") -product_path = Path("data/products.json") -condition_path = Path("data/condition_vectors.json") -benefit_path = Path("data/benefit_vectors.json") - -# Load data -@st.cache(allow_output_mutation=True) -def load_data( - _health_aspect_path: Path, - _product_path: Path, - _condition_path: Path, - _benefit_path: Path, -): - with open(_health_aspect_path) as reader: - health_aspects = json.load(reader) - with open(_product_path) as reader: - products = json.load(reader) - with open(_condition_path) as reader: - conditions = json.load(reader) - with open(_benefit_path) as reader: - benefits = json.load(reader) - return health_aspects, products, conditions, benefits - - -# Functions -def kpi(n, text): - html = f""" -
              -

              {n}

              - {text} -
              - """ - return html - - -def central_text(text): - html = f"""

              {text}

              """ - return html - -# Loading data -health_aspects, products, conditions, benefits = load_data( - health_aspect_path, product_path, condition_path, benefit_path -) -search_engine = HealthseaSearch(health_aspects, products, conditions, benefits) - -# KPI -st.markdown("""---""") - -st.markdown(central_text("🎀 Dataset"), unsafe_allow_html=True) - -kpi_products, kpi_reviews, kpi_condition, kpi_benefit = st.columns(4) - -def round_to_k(value): - return str(round(value/1000,1))+"k" - -kpi_products.markdown(kpi(round_to_k(len(products)), "Products"), unsafe_allow_html=True) -kpi_reviews.markdown(kpi(round_to_k(int(933240)), "Reviews"), unsafe_allow_html=True) -kpi_condition.markdown(kpi(round_to_k(len(conditions)), "Conditions"), unsafe_allow_html=True) -kpi_benefit.markdown(kpi(round_to_k(len(benefits)), "Benefits"), unsafe_allow_html=True) - -st.markdown("""---""") - -# Expander -show_conditions, show_benefits = st.columns(2) - -with show_conditions.expander("Top mentioned Conditions"): - st.write(search_engine.get_all_conditions_df()) - -with show_benefits.expander("Top mentioned Benefits"): - st.write(search_engine.get_all_benefits_df()) - -st.markdown("""---""") - -# Search -search = st.text_input(label="Search for an health aspect", value="joint pain") -n = st.slider("Show top n results", min_value=10, max_value=1000, value=25) - -st.markdown("""---""") -st.markdown(central_text("🧃 Products"), unsafe_allow_html=True) - -st.info("""The product score is based on the results of Healthsea. Variables used for the score are: health effect prediction, product rating, helpful count and whether the review is considered a 'fake review'. """) - -# DataFrame -st.write(search_engine.get_products_df(search, n)) - -# KPI & Alias -aspect_alias = search_engine.get_aspect(search)["alias"] - -kpi_product_mentions, kpi_alias = st.columns(2) - -kpi_product_mentions.markdown(kpi(len(search_engine.get_aspect(search)["products"]), "Products"), unsafe_allow_html=True) - - -kpi_alias.markdown( - kpi(len(aspect_alias), "Similar health aspects"), - unsafe_allow_html=True, -) - -depth = st.slider("Depth", min_value=0, max_value=5, value=2) - -recursive_alias, recursive_edges = search_engine.get_recursive_alias(search,0,{},[],depth) - -vectors = [] -main_aspect = search_engine.get_aspect_meta(search) -vectors.append((main_aspect["name"], main_aspect["vector"])) -for aspect in aspect_alias: - current_aspect = search_engine.get_aspect_meta(aspect) - vectors.append((current_aspect["name"], current_aspect["vector"])) -st.markdown("\n") -st.info("""Health aspects with a high similarity (>=90%) are clustered together.""") -#search_engine.pyvis(vectors) -search_engine.pyvis2(recursive_alias,recursive_edges) - -st.markdown("""---""") - -# Substances -st.markdown(central_text("🍯 Substances"), unsafe_allow_html=True) -st.info("""Substance scores are based on product scores""") - -# DataFrame -st.write(search_engine.get_substances_df(search, n)) -kpi_substances, empty = st.columns(2) -kpi_substances.markdown( - kpi(len(search_engine.get_aspect(search)["substance"]), "Substances"), - unsafe_allow_html=True, -) diff --git a/spaces/sparanoid/demucs-gpu/app.py b/spaces/sparanoid/demucs-gpu/app.py deleted file mode 100644 index 6d96d66f69c7bb6467daf93236a38f491b6e2e22..0000000000000000000000000000000000000000 --- a/spaces/sparanoid/demucs-gpu/app.py +++ /dev/null @@ -1,28 +0,0 @@ -import gradio as gr -import os -from scipy.io.wavfile import write - -def inference(audio): - os.makedirs("out", exist_ok=True) - 
write('test.wav', audio[0], audio[1]) - os.system("python3 -m demucs.separate -n mdx_extra test.wav -o out") - return "./out/mdx_extra/test/vocals.wav","./out/mdx_extra/test/bass.wav",\ -"./out/mdx_extra/test/drums.wav","./out/mdx_extra/test/other.wav" - -title = "Demucs" -description = "Gradio demo for Demucs: Music Source Separation in the Waveform Domain. To use it, simply upload your audio, or click one of the examples to load them. Read more at the links below. This space will be switched to GPU only when I need it :)" -article = "

              Music Source Separation in the Waveform Domain | Github Repo

              " - -examples=[['test.mp3']] - -iface = gr.Interface( - inference, - gr.inputs.Audio(type="numpy", label="Input"), - [gr.outputs.Audio(type="file", label="Vocals"),gr.outputs.Audio(type="file", label="Bass"),gr.outputs.Audio(type="file", label="Drums"),gr.outputs.Audio(type="file", label="Other")], - title=title, - description=description, - article=article, - examples=examples - ) - -iface.launch(enable_queue=True) diff --git a/spaces/sriramelango/Social_Classification_Public/fairseq/examples/wav2vec/unsupervised/scripts/normalize_and_filter_text.py b/spaces/sriramelango/Social_Classification_Public/fairseq/examples/wav2vec/unsupervised/scripts/normalize_and_filter_text.py deleted file mode 100644 index c2bd16efb530af5af3f72ab0edb3044b4e9fcd5c..0000000000000000000000000000000000000000 --- a/spaces/sriramelango/Social_Classification_Public/fairseq/examples/wav2vec/unsupervised/scripts/normalize_and_filter_text.py +++ /dev/null @@ -1,72 +0,0 @@ -#!/usr/bin/env python3 -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import argparse -import fasttext as ft -import os -import regex -import sys - - -def get_parser(): - parser = argparse.ArgumentParser( - description="reads text from stdin and outputs normalized, lid-filtered version to stdout" - ) - parser.add_argument( - "--fasttext-model", - help="path to fasttext model", - default="lid.187.bin", - ) - parser.add_argument("--lang", help="language id", required=True) - parser.add_argument( - "--lid-threshold", - type=float, - help="threshold for this lang id probability", - default=0.4, - ) - - return parser - - -def main(): - parser = get_parser() - args = parser.parse_args() - filter_r = regex.compile(r"[^\p{L}\p{N}\p{M}\' \-]") - - lg = args.lang.lower() - lg_label = f"__label__{lg}" - thresh = args.lid_threshold - - if os.path.exists(args.fasttext_model): - model = ft.load_model(args.fasttext_model) - else: - print( - f"fasttext language id model {args.fasttext_model} not found. Proceeding without language filtering. " - f"To enable language filtering, please download the latest language id model " - f"from https://fasttext.cc/docs/en/language-identification.html", - file=sys.stderr, - ) - model = None - - for line in sys.stdin: - line = line.strip() - line = filter_r.sub(" ", line) - line = " ".join(line.split()) - - if model is not None: - lid, prob = model.predict(line, k=100) - try: - target_idx = lid.index(lg_label) - except ValueError: - continue - if target_idx == 0 or prob[target_idx] >= thresh: - print(line) - else: - print(line) - - -if __name__ == "__main__": - main() diff --git a/spaces/stomexserde/gpt4-ui/Examples/Abbyy Finereader 11 Professional Download.md b/spaces/stomexserde/gpt4-ui/Examples/Abbyy Finereader 11 Professional Download.md deleted file mode 100644 index bdf77c3b3371c08b3d74dc1e6793619145030ea4..0000000000000000000000000000000000000000 --- a/spaces/stomexserde/gpt4-ui/Examples/Abbyy Finereader 11 Professional Download.md +++ /dev/null @@ -1,22 +0,0 @@ -
              -

              How to Download and Install Abbyy Finereader 11 Professional Edition

              -

          Abbyy Finereader 11 Professional Edition is a powerful optical character recognition (OCR) application that can convert scanned documents, images, and PDFs into editable and searchable formats. It can also perform text recognition, document conversion, and PDF editing tasks with high accuracy and speed. If you are looking for a reliable and easy-to-use OCR solution, Abbyy Finereader 11 Professional Edition is a great choice.
          

              -

              Abbyy Finereader 11 Professional Download


              Download Ziphttps://urlgoal.com/2uI6Pb



              -

              In this article, we will show you how to download and install Abbyy Finereader 11 Professional Edition on your Windows PC. Follow these simple steps to get started:

              -
                -
              1. Go to the official website of Abbyy Finereader 11 Professional Edition and click on the "Download" button. You will be redirected to a page where you can choose your language and region.
              2. -
              3. Select your preferred language and region and click on the "Download trial version" button. You will need to fill out a short form with your name, email address, and company name (optional) to get the download link.
              4. -
              5. Check your email inbox for the download link and click on it to start downloading the setup file. The file size is about 350 MB, so it may take some time depending on your internet speed.
              6. -
              7. Once the download is complete, run the setup file and follow the on-screen instructions to install Abbyy Finereader 11 Professional Edition on your PC. You will need to accept the license agreement, choose the installation folder, and select the components you want to install.
              8. -
              9. After the installation is finished, you can launch Abbyy Finereader 11 Professional Edition from your desktop or start menu. You will need to activate the trial version with your email address and serial number that you received in your email.
              10. -
              11. Enjoy using Abbyy Finereader 11 Professional Edition for 30 days for free. You can scan, convert, edit, and share your documents with ease and efficiency.
              12. -
              -

              If you want to buy the full version of Abbyy Finereader 11 Professional Edition, you can do so from the official website or from an authorized reseller. The full version offers unlimited usage, technical support, and updates. You can also upgrade to Abbyy Finereader 15 Corporate Edition for more advanced features and functionalities.

              -

              We hope this article helped you download and install Abbyy Finereader 11 Professional Edition on your PC. If you have any questions or feedback, feel free to leave a comment below.

              -

              - -

              Abbyy Finereader 11 Professional Edition is compatible with Windows XP, Vista, 7, 8, and 10. It supports over 190 languages and can recognize text from various types of documents, such as invoices, contracts, forms, books, magazines, and more. It can also handle multiple-page documents and batch processing with ease.

              -

              One of the main features of Abbyy Finereader 11 Professional Edition is its ability to create and edit PDF files. You can create PDFs from any source, such as scanned documents, images, or Microsoft Office files. You can also edit existing PDFs by adding or deleting pages, changing the layout, adding comments, annotations, stamps, or signatures. You can also protect your PDFs with passwords, encryption, or digital signatures.

              -

              Another feature of Abbyy Finereader 11 Professional Edition is its integration with various applications and cloud services. You can export your scanned or converted documents to Microsoft Word, Excel, PowerPoint, or Outlook. You can also save them to Google Drive, Dropbox, OneDrive, SharePoint, or Evernote. You can also access your documents from any device using the Abbyy FineReader Online service.

          
              -
              -
              \ No newline at end of file diff --git a/spaces/stomexserde/gpt4-ui/Examples/CCleaner 5.13.5460 Professional Plus Crack And Serial Key LINK Download.md b/spaces/stomexserde/gpt4-ui/Examples/CCleaner 5.13.5460 Professional Plus Crack And Serial Key LINK Download.md deleted file mode 100644 index 549f498b2606df10d04dd6f85aae93ed1b0ed2f3..0000000000000000000000000000000000000000 --- a/spaces/stomexserde/gpt4-ui/Examples/CCleaner 5.13.5460 Professional Plus Crack And Serial Key LINK Download.md +++ /dev/null @@ -1,33 +0,0 @@ - -

              How to Use CCleaner 5.13.5460 Professional Plus Crack and Serial Key to Optimize Your PC

              -

              CCleaner is a popular tool for cleaning and optimizing your PC. It can remove junk files, cache, viruses, and other unwanted items from your hard disk, browser, and applications. It can also free up storage space, speed up your PC, and protect your privacy.

              -

              CCleaner 5.13.5460 Professional Plus Crack and Serial Key Download


              Download Filehttps://urlgoal.com/2uI7el



              -

              However, CCleaner has two versions: free and professional. The free version has some limitations, such as fewer features, less frequent updates, and no customer support. The professional version has more features, such as real-time monitoring, automatic updates, premium support, and more.

              -

          If you want to use the professional version of CCleaner for free, you might be tempted to download a crack and serial key from the internet. A crack is a program that modifies the original software to bypass its security or licensing system. A serial key is a code that activates the program.
          

              -

              However, downloading a crack and serial key for CCleaner 5.13.5460 Professional Plus is not a good idea. Here are some reasons why:

              -
                -
• It is illegal. You are violating the terms and conditions of the software developer and the copyright law. You could face legal consequences if you are caught.
• It is risky. You could download malware or viruses along with the crack and serial key. These could harm your PC, steal your data, or compromise your security.
• It is unreliable. You could get a fake or outdated crack and serial key that does not work or causes errors. You could also lose access to the program if it is detected or blocked by the developer.
              -

              Therefore, it is better to avoid using a crack and serial key for CCleaner 5.13.5460 Professional Plus. Instead, you should use the official version of CCleaner from the developer's website[^1^]. You can either use the free version or buy the professional version at a reasonable price.

              -

              If you use the official version of CCleaner, you will get the following benefits:

              -

              -
                -
• It is legal. You are respecting the rights and efforts of the software developer and the copyright law. You will not face any legal issues.
• It is safe. You will not download any malware or viruses along with the program. You will protect your PC, data, and security.
• It is reliable. You will get a genuine and updated program that works smoothly and efficiently. You will also get access to customer support and regular updates.
              -

              To use CCleaner 5.13.5460 Professional Plus legally and safely, follow these steps:

              -
                -
1. Go to the official website of CCleaner[^1^] and download the latest version of CCleaner.
2. Install CCleaner on your PC by following the instructions on the screen.
3. Launch CCleaner and choose either the free or the professional version.
4. If you choose the free version, you can start using CCleaner right away.
5. If you choose the professional version, you will need to buy a license key from the website[^1^] or enter a valid license key if you already have one.
6. Enter your name and license key in the registration window and click on Register.
7. Enjoy using CCleaner 5.13.5460 Professional Plus with all its features and benefits.
              -

              By using CCleaner 5.13.5460 Professional Plus legally and safely, you will be able to optimize your PC without any risks or hassles. You will also support the software developer and encourage them to create more useful programs for you.

              -
              -
              \ No newline at end of file diff --git a/spaces/stomexserde/gpt4-ui/Examples/Download Just Cause 2 Highly Compressed 10mb.md b/spaces/stomexserde/gpt4-ui/Examples/Download Just Cause 2 Highly Compressed 10mb.md deleted file mode 100644 index d3121c464a33dd40abe14bbf867d2a8f7e971d0c..0000000000000000000000000000000000000000 --- a/spaces/stomexserde/gpt4-ui/Examples/Download Just Cause 2 Highly Compressed 10mb.md +++ /dev/null @@ -1,44 +0,0 @@ - -

              Download Just Cause 2 Highly Compressed 10mb for PC

              -

              If you are looking for a way to download Just Cause 2 highly compressed 10mb for PC, then you have come to the right place. Just Cause 2 is an open world action-adventure game that was released in 2010 by Avalanche Studios and Eidos Interactive. The game features Rico Rodriguez, a secret agent who is sent to a fictional island in Southeast Asia called Panau to overthrow a dictator and find his former mentor. The game has a huge map of 1,000 square kilometers that can be explored by using various vehicles, weapons, and a grappling hook. The game also has a chaos system that rewards the player for causing destruction and completing missions.

              -

              Download Just Cause 2 Highly Compressed 10mb


              DOWNLOADhttps://urlgoal.com/2uIalD



              -

              In this article, we will show you how to download Just Cause 2 highly compressed 10mb for PC using a direct Google Drive link. This is a modified version of the game that has been compressed to reduce the file size and make it easier to download. The game is 100% working and has all the features of the original game, including the online mode. You will also get some screenshots of the game and the system requirements that you need to run it on your PC.

              -

              How to Download Just Cause 2 Highly Compressed 10mb for PC

              -

              To download Just Cause 2 highly compressed 10mb for PC, you need to follow these simple steps:

              -
                -
1. Click on the download button below to get the direct Google Drive link of the game.
2. Extract the zip file using WinRAR or any other extraction software (a scripted alternative is sketched right after this list).
3. Run the setup.exe file and install the game on your PC.
4. Enjoy playing Just Cause 2 highly compressed 10mb for PC.
              -
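Step 2 can also be scripted instead of done by hand. The minimal Python sketch below unpacks a downloaded archive with the standard-library zipfile module; the archive and folder names are assumptions used purely for illustration, so substitute whatever the download actually gives you (a .rar archive would need a different tool, such as WinRAR itself).

```python
import zipfile
from pathlib import Path

# Hypothetical names for illustration only -- adjust to the real download.
archive = Path("just_cause_2_highly_compressed.zip")
destination = Path("Just Cause 2")

destination.mkdir(parents=True, exist_ok=True)

with zipfile.ZipFile(archive) as zf:
    zf.extractall(destination)  # unpack every member of the archive
    print(f"Extracted {len(zf.namelist())} files into {destination.resolve()}")
```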

              Download Just Cause 2 Highly Compressed 10mb for PC

              -

              Features of Just Cause 2 Highly Compressed 10mb for PC

              -

              Here are some of the features of Just Cause 2 highly compressed 10mb for PC that you will enjoy:

              -

              -
                -
• You will get the original files of the game without any loss of quality or content.
• You can download the game in parts of 2 GB, 3 GB, or 4 GB according to your preference and internet speed.
• You can run the game on low-end PCs as well as high-end PCs.
• You will get every mission and cut-scene in the game as well as the online mode.
• You can use a lot of weapons and vehicles to fight against enemies and explore Panau.
              -

              Just Cause 2 Highly Compressed 10mb for PC System Requirements

              -

              Before you download Just Cause 2 highly compressed 10mb for PC, you need to make sure that your PC meets these minimum and recommended system requirements:

              -

              Minimum System Requirements

              -
                -
• Operating System: Windows Vista/7 (Windows XP is unsupported)
• Processor: Intel Core 2 Duo 2.6 GHz or AMD Phenom X3 2.4 GHz or equivalent
• Memory: 2 GB RAM
• Graphics: DX10 compatible graphics card with 256 MB of memory (Nvidia GeForce 8800 series / ATI Radeon HD 2600 Pro)
• DirectX: Version 10
• Storage: 10 GB available space
              -

              Recommended System Requirements

              -
                -
• Operating System: Windows Vista SP1 or Windows 7
• Processor: Intel Core i5-750 or AMD Phenom II X4 or equivalent
• Memory: 3 GB RAM
• Graphics: DX10 compatible graphics card with 512 MB of memory (Nvidia GeForce GTS 250 series / ATI Radeon HD 5750 series)
• DirectX: Version 10.1
                -
                -
                \ No newline at end of file diff --git a/spaces/stomexserde/gpt4-ui/Examples/EaseUS Data Recovery Wizard 12.8 Crack.md b/spaces/stomexserde/gpt4-ui/Examples/EaseUS Data Recovery Wizard 12.8 Crack.md deleted file mode 100644 index 034875d4e7aaa75035ad580623de32cff64d43c9..0000000000000000000000000000000000000000 --- a/spaces/stomexserde/gpt4-ui/Examples/EaseUS Data Recovery Wizard 12.8 Crack.md +++ /dev/null @@ -1,24 +0,0 @@ -
                -

                How to Recover Lost Data with EaseUS Data Recovery Wizard 12.8 Crack

                -

                Have you ever accidentally deleted some important files or formatted a hard drive without backup? If so, you may be looking for a way to recover your lost data quickly and easily. One of the most popular data recovery software on the market is EaseUS Data Recovery Wizard, which can help you restore files from various data loss scenarios, such as deletion, virus attack, system crash, partition loss, etc.

                -

                EaseUS Data Recovery Wizard 12.8 Crack


                Download File >>> https://urlgoal.com/2uI6Q9



                -

                However, EaseUS Data Recovery Wizard is not a free software. You need to pay for a license code to activate its full features and recover unlimited data. Some people may try to find a cracked version of EaseUS Data Recovery Wizard 12.8 online, hoping to save some money and get back their data. But is it safe and legal to use EaseUS Data Recovery Wizard 12.8 crack? What are the risks and disadvantages of using a cracked data recovery software? In this article, we will answer these questions and show you a better alternative to EaseUS Data Recovery Wizard 12.8 crack.

                -

                What is EaseUS Data Recovery Wizard 12.8 Crack?

                -

                EaseUS Data Recovery Wizard 12.8 crack is a pirated version of the official EaseUS Data Recovery Wizard software that has been modified by some hackers or unauthorized websites. The crack version claims to offer the same features and functions as the original software, but without requiring a license code or registration.

                -

                Some people may think that using EaseUS Data Recovery Wizard 12.8 crack is a smart way to save money and recover data for free. However, they may not realize that using a cracked data recovery software can bring more harm than good to their devices and data. Here are some of the risks and disadvantages of using EaseUS Data Recovery Wizard 12.8 crack:

                -
                  -
                • Virus or malware infection: The crack version of EaseUS Data Recovery Wizard 12.8 may contain viruses, malware, spyware, or ransomware that can damage your computer system, steal your personal information, or encrypt your files and demand a ransom.
                • Data recovery failure: The crack version of EaseUS Data Recovery Wizard 12.8 may not work properly or have some bugs that can cause data recovery failure, data corruption, or data overwrite. You may end up losing your data permanently or making it unrecoverable by any other means.
                • No technical support or update: The crack version of EaseUS Data Recovery Wizard 12.8 is not supported by the official developer or any authorized service provider. You will not be able to get any technical support or update if you encounter any problems or errors during the data recovery process.
                • Legal issues: The crack version of EaseUS Data Recovery Wizard 12.8 is illegal and violates the intellectual property rights of the original software developer. You may face legal consequences or penalties if you are caught using or distributing a cracked data recovery software.
                -

                A Better Alternative to EaseUS Data Recovery Wizard 12.8 Crack

                -

                As you can see, using EaseUS Data Recovery Wizard 12.8 crack is not worth the risk and trouble. Instead of using a cracked data recovery software, we recommend you to use a reliable and safe alternative that can help you recover your lost data without any hassle.

                -

                One of the best alternatives to EaseUS Data Recovery Wizard 12.8 crack is EaseUS Data Recovery Wizard Free Edition, which is an official and free version of EaseUS Data Recovery Wizard software that allows you to recover up to 2GB of data for free[^4^]. You can download it from the official website[^8^] and install it on your Windows PC or laptop.

                -

                -

                EaseUS Data Recovery Wizard Free Edition has many advantages over EaseUS Data Recovery Wizard 12.8 crack, such as:

                -
                  -
                • No virus or malware: EaseUS Data Recovery Wizard Free Edition is 100% clean and safe to use. It has no virus, malware, spyware, or ransomware that can harm your computer system or data.
                • High success rate

                  -
                  -
                  \ No newline at end of file diff --git a/spaces/stomexserde/gpt4-ui/Examples/FS2004 - Digital Aviation Piper Cheyenne without Crack ( Serial Key.md b/spaces/stomexserde/gpt4-ui/Examples/FS2004 - Digital Aviation Piper Cheyenne without Crack ( Serial Key.md deleted file mode 100644 index ba7841443c23afc27a3c81458b68edc81627106d..0000000000000000000000000000000000000000 --- a/spaces/stomexserde/gpt4-ui/Examples/FS2004 - Digital Aviation Piper Cheyenne without Crack ( Serial Key.md +++ /dev/null @@ -1,157 +0,0 @@ -
                  - - -

                  FS2004 - Digital Aviation Piper Cheyenne Review

                  - - -

If you are a fan of flight simulation games, you may have heard of FS2004, or Microsoft Flight Simulator 2004. It is one of the most popular and realistic flight simulation programs, and it lets you fly a wide range of aircraft in different scenarios and weather conditions. However, if you want to enhance your flying experience with more realistic and detailed aircraft models, you may want to try some add-ons that are compatible with FS2004.

                  -

One of the best add-ons for FS2004 is Digital Aviation Piper Cheyenne. It is an excellent reproduction of the Piper Cheyenne, a family of twin-engine turboprop aircraft used for various purposes, such as business, cargo, or personal transportation. This add-on features four different variants of the Piper Cheyenne, each with its own specifications and liveries. It also boasts a realistic cockpit and exterior model, a functional weather radar, a custom sound set, and many other features that will make you feel like you are flying the real thing.

                  -

                  FS2004 - Digital Aviation Piper Cheyenne *without Crack* :( Serial Key


                  Download Zip ✺✺✺ https://urlgoal.com/2uIaxi



                  -

                  Features of Digital Aviation Piper Cheyenne

                  -

                  This add-on offers a lot of features that will enhance your flight simulation experience with FS2004. Here are some of the main features that you can expect from this add-on:

                  -
                    -
                  • Different aircraft variants: You can choose from four different variants of the Piper Cheyenne, namely Cheyenne I, Cheyenne IA, Cheyenne II, and Cheyenne IIXL. Each variant has its own characteristics, such as engine power, fuel capacity, cruising speed, range, and payload. You can also select from various liveries for each variant, such as American Eagle, Air France, British Airways, Lufthansa, and more.
                  • Realistic cockpit and exterior model: You can enjoy a highly detailed and accurate cockpit and exterior model of the Piper Cheyenne, with all the instruments and controls working as they should. You can also interact with various switches, knobs, levers, and buttons in the cockpit, such as the autopilot, the GPS, the fuel selector, the landing gear, and more. The exterior model is also very realistic, with dynamic shine effects, animated propellers, moving flaps and ailerons, opening doors and windows, and more.
                  • Functional weather radar: You can use a fully functional weather radar in the cockpit of the Piper Cheyenne, which will show you the precipitation and cloud coverage in your vicinity. You can also adjust the range and tilt of the radar display to suit your needs. The weather radar will help you avoid bad weather conditions and plan your flight accordingly.
                  • Custom sound set: You can hear a custom sound set for the Piper Cheyenne, which was recorded from the real aircraft. The sound set includes engine sounds, propeller sounds, wind sounds, cockpit sounds, warning sounds, and more. The sound set will vary depending on the aircraft variant and the engine power.
                  • Other features: There are many other features that this add-on offers, such as realistic flight dynamics, custom animations, interactive checklists, realistic lighting effects, PDF manuals, and more. You can also customize some aspects of the add-on to your liking, such as the panel layout, the fuel loadout, the engine failures, and more.
                  -

                  Cheyenne I

                  -

                  The Cheyenne I is the first variant of the Piper Cheyenne family. It was introduced in 1974 and was powered by two Pratt & Whitney PT6A-28 turboprop engines. It had a maximum takeoff weight of 7,200 lbs (3,266 kg) and a maximum cruising speed of 242 knots (448 km/h). It had a range of 1,260 nautical miles (2,334 km) and a service ceiling of 26,000 feet (7,925 m). It could carry up to six passengers and one or two pilots.

                  -

                  This add-on features three liveries for the Cheyenne I: American Eagle (N777AE), Air France (F-GJAF), and British Airways (G-BAFZ).

                  -

                  Cheyenne IA

                  -

The Cheyenne IA is an improved version of the Cheyenne I. It was introduced in 1977 and was powered by two Pratt & Whitney PT6A-135 turboprop engines. It had a maximum takeoff weight of 7,200 lbs (3,266 kg) and a maximum cruising speed of 250 knots (463 km/h). It had a range of 1,350 nautical miles (2,500 km) and a service ceiling of 28,000 feet (8,534 m). It could carry up to six passengers and one or two pilots.
                  -

                  This add-on features three liveries for the Cheyenne IA: Lufthansa (D-ILHA), Swissair (HB-LSD), and United Airlines (N777UA).

                  -

                  Cheyenne II

                  -

                  The Cheyenne II is another improved version of the Cheyenne I. It was introduced in 1978 and was powered by two Pratt & Whitney PT6A-41 turboprop engines. It had a maximum takeoff weight of 8,750 lbs (3,969 kg) and a maximum cruising speed of 260 knots (482 km/h). It had a range of 1,600 nautical miles (2,963 km) and a service ceiling of 30,000 feet (9,144 m). It could carry up to eight passengers and one or two pilots.

                  -

                  This add-on features four liveries for the Cheyenne II: Air Canada (C-GJAC), Delta Air Lines (N777DL), KLM Royal Dutch Airlines (PH-KLM), and Qantas Airways (VH-QAN).

                  -

                  Cheyenne IIXL

                  -

                  The Cheyenne IIXL is the largest and most powerful variant of the Piper Cheyenne family. It was introduced in 1981 and was powered by two Pratt & Whitney PT6A-135A turboprop engines. It had a maximum takeoff weight of 10,450 lbs (4,740 kg) and a maximum cruising speed of 280 knots (519 km/h). It had a range of 1,800 nautical miles (3,334 km) and a service ceiling of 33,000 feet (10,058 m). It could carry up to nine passengers and one or two pilots.

                  -

                  -

                  This add-on features four liveries for the Cheyenne IIXL: American Airlines (N777AA), Emirates Airlines (A6-EAA), Japan Airlines (JA777J), and Singapore Airlines (9V-SAA).

                  -

                  System Requirements and Installation Guide

                  -

                  If you want to enjoy this add-on on your FS2004, you need to make sure that your computer meets the minimum or recommended system requirements for running this add-on. You also need to follow the installation guide carefully to avoid any problems or errors.

                  -

                  System Requirements


                  Installation Guide

                  -

                  To install this add-on on your FS2004, you need to follow these steps:

                  -
                    -
1. Download the add-on from the official website of Digital Aviation or from an authorized retailer, such as simMarket or Aerosoft. You will receive a zip file containing the installer and the serial key.
2. Extract the zip file to a temporary folder on your computer.
3. Run the installer and follow the instructions on the screen. You will need to enter the serial key when prompted.
4. Select the destination folder for the add-on. It is recommended to install it in the same folder as your FS2004.
5. Wait for the installation to complete. You may need to restart your computer after the installation.
6. Launch your FS2004 and select the Piper Cheyenne variant and livery of your choice from the aircraft menu.
7. Enjoy flying the Piper Cheyenne on your FS2004!
                  -

                  How to Get FS2004 - Digital Aviation Piper Cheyenne *without Crack* :( Serial Key

                  -

                  If you are wondering how to get this add-on without using a crack software, you are not alone. Many people are tempted to use a crack software to bypass the serial key requirement and get this add-on for free. However, this is not a good idea for many reasons. In this section, we will explain why you should avoid using a crack software and how you can get a genuine serial key for this add-on legally and ethically.

                  -

                  Legal and Ethical Issues of Using Crack Software

                  -

                  Using a crack software is illegal and unethical for several reasons. Here are some of them:

                  -
                    -
                  • Violating intellectual property rights: By using a crack software, you are violating the intellectual property rights of the developer of this add-on, Digital Aviation. They have invested a lot of time, money, and effort to create this add-on and they deserve to be compensated for their work. By using a crack software, you are stealing their product and depriving them of their rightful income.
                  • Harming the developer: By using a crack software, you are harming the developer of this add-on, Digital Aviation. They rely on the sales of their products to sustain their business and continue developing more add-ons for flight simulation fans. By using a crack software, you are reducing their sales and revenue, which may affect their ability to produce more quality add-ons in the future.
                  • Exposing yourself to malware and viruses: By using a crack software, you are exposing yourself to malware and viruses that may be hidden in the crack software. These malicious programs may damage your computer, steal your personal information, or compromise your online security. You may end up spending more money and time to fix your computer or recover your data than buying the add-on legally.
                  -

                  Technical and Performance Issues of Using Crack Software

                  -

                  Using a crack software is not only illegal and unethical, but also problematic in terms of technical and performance aspects. Here are some of the problems that you may encounter by using a crack software:

                  -
                    -
                  • Compatibility issues: By using a crack software, you may face compatibility issues with your FS2004 or other add-ons that you have installed. The crack software may not be compatible with the latest version of FS2004 or other add-ons, which may cause conflicts or errors. You may also miss out on updates or patches that may improve the functionality or performance of this add-on or other add-ons.
                  • Bugs, errors, crashes: By using a crack software, you may experience bugs, errors, or crashes while using this add-on or other add-ons. The crack software may not be tested or verified by the developer of this add-on or other add-ons, which may result in glitches or malfunctions. You may also lose your progress or data due to these issues.
                  • Poor quality: By using a crack software, you may not enjoy the full quality of this add-on or other add-ons. The crack software may degrade or alter some aspects of this add-on or other add-ons, such as graphics, sounds, animations, features, etc. You may also miss out on some features or functions that are only available in the genuine version of this add-on or other add-ons.
                  -

                  How to Get a Genuine Serial Key for This Add-on

                  -

                  If you want to avoid all these problems and enjoy this add-on legally and ethically, you need to get a genuine serial key for this add-on. There are several ways to do so, such as buying it from an authorized retailer, getting it from a giveaway site, or contacting the developer. Here are some details about each option:

                  -

                  Buying from an Authorized Retailer

                  -

                  The best and easiest way to get a genuine serial key for this add-on is to buy it from an authorized retailer, such as simMarket or Aerosoft. These are reputable and trusted online stores that sell various flight simulation products, including this add-on. You can buy this add-on from these retailers for a reasonable price and get a serial key instantly. You can also enjoy other benefits, such as customer support, refunds, discounts, and more.

                  -

                  To buy this add-on from an authorized retailer, you need to follow these steps:

                  -
                    -
1. Visit the official website of the retailer of your choice, such as simMarket or Aerosoft.
2. Search for FS2004 - Digital Aviation Piper Cheyenne in the product catalog or use the direct link: simMarket or Aerosoft.
3. Add the product to your shopping cart and proceed to checkout.
4. Enter your personal and payment information and confirm your order.
5. Receive your serial key by email or in your account page.
6. Download and install the add-on using the serial key.
                  -

                  Getting from a Giveaway Site

                  -

                  Another way to get a genuine serial key for this add-on is to get it from a giveaway site, such as Technadu or Jiho. These are websites that offer free serial keys for various software products, including this add-on. You can get this add-on from these sites for free and get a serial key randomly. However, you may also face some limitations and precautions, such as limited availability, expiration date, verification process, and more.

                  -

                  To get this add-on from a giveaway site, you need to follow these steps:

                  -
                    -
1. Visit the official website of the giveaway site of your choice, such as Technadu or Jiho.
2. Search for FS2004 - Digital Aviation Piper Cheyenne in the giveaway list or use the direct link: Technadu or Jiho.
3. Enter your email address and click on the "Get It Now" button.
4. Wait for the verification email and click on the link to confirm your participation.
5. Receive your serial key by email or in your account page.
6. Download and install the add-on using the serial key.
                  -

                  Contacting the Developer

                  -

                  The last way to get a genuine serial key for this add-on is to contact the developer of this add-on, Digital Aviation. They are the original creators of this add-on and they may have some spare serial keys that they can give away to some lucky customers. However, this is not a guaranteed or easy option, as you may need to convince them why you deserve a serial key and what you can offer them in return. You may also need to wait for a long time or never get a reply from them.

                  -

                  To contact the developer of this add-on, you need to follow these steps:

                  -
                    -
1. Visit the official website of Digital Aviation: http://www.digital-aviation.de/.
2. Go to the contact page: http://www.digital-aviation.de/contact.html.
3. Fill out the contact form with your name, email address, subject, and message.
4. In your message, explain why you want a serial key for this add-on and what you can offer them in return, such as feedback, promotion, donation, etc.
5. Click on the "Send" button and wait for their reply.
6. If you receive a positive reply with a serial key, download and install the add-on using the serial key.
                  -

                  Conclusion

                  -

In conclusion, FS2004 - Digital Aviation Piper Cheyenne is an amazing add-on for flight simulation enthusiasts who want to fly realistic and detailed models of the Piper Cheyenne aircraft family. It offers four different variants of the Piper Cheyenne, each with its own specifications and liveries. It also features a realistic cockpit and exterior model, a functional weather radar, a custom sound set, and many other features that will make you feel like you are flying the real thing. However, you should not use a crack to get this add-on for free, as doing so is illegal, unethical, and problematic. You should get a genuine serial key for this add-on by buying it from an authorized retailer, getting it from a giveaway site, or contacting the developer. By doing so, you will support the developer, enjoy the full quality of this add-on, and avoid any legal or technical issues.

                  -

                  FAQs

                  -

                  Here are some frequently asked questions and answers about this topic:

                  -
                    -
1. Q: Can I use this add-on on other flight simulation software, such as FSX or P3D?
   A: No, this add-on is only compatible with FS2004. If you want to use this add-on on other flight simulation software, you need to buy a different version of this add-on that is compatible with your software.
2. Q: How can I update this add-on to the latest version?
   A: If you bought this add-on from an authorized retailer, you can download the latest version of this add-on from their website or from your account page. If you got this add-on from a giveaway site or from the developer, you need to contact them and ask for the latest version of this add-on.
3. Q: How can I uninstall this add-on from my FS2004?
   A: To uninstall this add-on from your FS2004, you need to follow these steps:
   • Go to the folder where you installed this add-on on your FS2004.
   • Delete the folder named "Digital Aviation Piper Cheyenne".
   • Go to the folder where your FS2004 is installed.
   • Delete the file named "DA_Cheyenne.gau" in the "Gauges" folder.
   • Delete the file named "DA_Cheyenne.dll" in the "Modules" folder.
4. Q: How can I learn more about this add-on and its features?
   A: You can learn more about this add-on and its features by reading the PDF manuals that are included in the installation package. You can also visit the official website of Digital Aviation or their forum for more information and support.
5. Q: How can I contact Digital Aviation for feedback, suggestions, or issues?
   A: You can contact Digital Aviation by filling out the contact form on their website: http://www.digital-aviation.de/contact.html. You can also join their forum and post your feedback, suggestions, or issues there: http://www.digital-aviation.de/forum/.

                  -
                  -
                  \ No newline at end of file diff --git a/spaces/stomexserde/gpt4-ui/Examples/Fontographer 5 2 LINK Keygen For Mac.md b/spaces/stomexserde/gpt4-ui/Examples/Fontographer 5 2 LINK Keygen For Mac.md deleted file mode 100644 index aa3fb1291be4f5b12d1566f889a441ddc0fdca0c..0000000000000000000000000000000000000000 --- a/spaces/stomexserde/gpt4-ui/Examples/Fontographer 5 2 LINK Keygen For Mac.md +++ /dev/null @@ -1,201 +0,0 @@ - -

                  Fontographer 5.2 Keygen for Mac: How to Create and Edit Fonts Easily

                  -

                  If you are a designer or a publisher who works with fonts, you know how important it is to have a reliable and powerful font editor. You need a tool that can help you create and edit fonts in various formats, with ease and accuracy. You need a tool that can handle complex font projects, with multiple glyphs, styles, and languages. You need a tool that can export and share your fonts with other users and applications, without compromising quality or compatibility. You need Fontographer.

                  -

                  What is Fontographer and why do you need it?

                  -

                  Fontographer is a classic font editor for Mac OS X

                  -

                  Fontographer is one of the oldest and most popular font editors in the market. It was first released in 1986 by Altsys Corporation, and later acquired by Macromedia in 1995. In 2005, FontLab Ltd. bought Fontographer from Macromedia and updated it to run on modern operating systems. The latest version of Fontographer is 5.2, which was released in 2011.

                  -

                  Fontographer 5 2 Keygen For Mac


                  Download File »»» https://urlgoal.com/2uI9lZ



                  -

Fontographer is designed for Mac OS X users who want to create and edit fonts in a simple and intuitive way. It runs on Mac OS X 10.6.8 up to and including macOS 10.14 Mojave, but it does not work on macOS 10.15 Catalina, macOS 11 Big Sur, or newer versions of macOS.

                  -

                  Fontographer lets you create and edit fonts in various formats

                  -

                  Fontographer supports a wide range of font formats, including Type 1, TrueType, OpenType, Type 3, and Multiple Master fonts. You can open and generate fonts in any of these formats, as well as convert between them. You can also import and export fonts as EPS, SVG, AFM, PFM, TFM, OFM, INF, PFA, PFB files.

                  -

                  Fontographer allows you to create fonts from scratch or from existing sources. You can draw your own glyphs using the drawing tools, or import them from scanned images or vector graphics. You can also copy-paste or import glyphs from other font files or from FontLab Studio, TypeTool, or ScanFont using the VFB format.

                  -

                  Fontographer has many features and benefits for designers and publishers

                  -

                  Fontographer is not just a simple font editor. It is also a powerful font design tool that offers many features and benefits for designers and publishers who want to create professional-quality fonts.

                  -


                  Some of the features and benefits of Fontographer are:

                  -
                    -
                  • You can edit the metrics and kerning of your fonts, using the auto-kerning feature or the manual kerning tool. You can also use the optical metrics feature to adjust the spacing of your glyphs automatically.
                  • You can create and edit font families, with up to 64,000 glyphs per font. You can also create and edit font variations, using the Multiple Master feature. You can define up to four axes of variation, such as weight, width, slant, and optical size.
                  • You can add and edit OpenType features, such as ligatures, alternates, swashes, small caps, fractions, and more. You can also add and edit TrueType hints, which improve the appearance of your fonts on low-resolution screens.
                  • You can test and validate your fonts, using the font audit feature or the font preview feature. You can check your fonts for errors, such as missing or overlapping points, incorrect directions, bad curves, and more. You can also preview your fonts in various sizes, colors, and backgrounds.
• You can customize and automate your font editing workflow, using the scripting feature or the plug-in feature. You can write your own scripts in Python or FDK, or use the built-in scripts for common tasks (a small scripting sketch follows this list). You can also use plug-ins to extend the functionality of Fontographer.
                  -
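To give a flavor of what scripted automation around a font workflow can look like, here is a small Python sketch that batch-inspects generated font files. It does not use Fontographer's own scripting interface; instead it relies on the third-party fontTools library, and the fonts/ folder name is an assumption chosen for illustration.

```python
from pathlib import Path

from fontTools.ttLib import TTFont  # third-party: pip install fonttools

# Hypothetical folder holding fonts generated by Fontographer.
font_dir = Path("fonts")

for path in sorted(font_dir.glob("*.ttf")):
    font = TTFont(path)
    names = font["name"]
    family = names.getDebugName(1) or "<unknown family>"
    style = names.getDebugName(2) or "<unknown style>"
    glyph_count = font["maxp"].numGlyphs
    print(f"{path.name}: {family} {style}, {glyph_count} glyphs")
    font.close()
```

A loop like this can flag missing name records or unexpected glyph counts across a whole font family before you ship it.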

                  With Fontographer, you can create and edit fonts that meet your needs and expectations. You can also save time and money by using a single tool for all your font projects.

                  -

                  How to download and install Fontographer 5.2 for Mac

                  -

                  Download Fontographer 5.2 from the official website or a trusted source

                  -

                  The first step to use Fontographer 5.2 for Mac is to download it from a reliable source. You can download it from the official website of FontLab Ltd., which is the developer and distributor of Fontographer. The website is https://www.fontlab.com/font-editor/fontographer/.

                  -

                  -

                  On the website, you can find more information about Fontographer, such as its features, screenshots, tutorials, reviews, and FAQs. You can also find the download link for Fontographer 5.2 for Mac, which is a DMG file of about 70 MB.

                  -

                  Alternatively, you can download Fontographer 5.2 for Mac from other trusted sources, such as software download websites or online forums. However, you should be careful when downloading from these sources, as they may contain viruses or malware that can harm your computer or compromise your data. You should always scan the downloaded file with an antivirus program before opening it.

                  -

                  Install Fontographer 5.2 on your Mac using the installer

                  -

                  The next step to use Fontographer 5.2 for Mac is to install it on your Mac using the installer. To do this, you need to follow these steps:

                  -
                    -
1. Double-click on the downloaded DMG file to open it.
2. Drag and drop the Fontographer icon to the Applications folder.
3. Eject the DMG file from your Mac.
4. Open the Applications folder and double-click on the Fontographer icon to launch it.
                  -

                  The installation process is very simple and fast. It does not require any additional software or settings. However, you may need to enter your administrator password to complete the installation.

                  -

                  Activate Fontographer 5.2 with a serial number or a keygen

                  -

                  The final step to use Fontographer 5.2 for Mac is to activate it with a serial number or a keygen. A serial number is a unique code that you need to enter when you first run Fontographer 5.2 for Mac. A keygen is a program that generates a valid serial number for you.

                  -

                  You can get a serial number for Fontographer 5.2 for Mac in two ways:

                  -
                    -
                  • You can buy a license for Fontographer 5.2 for Mac from the official website of FontLab Ltd., which costs $399 USD. After you make the payment, you will receive an email with your serial number and a download link for Fontographer 5.2 for Mac.
• You can use a keygen for Fontographer 5.2 for Mac from a trusted source, such as a software crack website or an online forum. However, you should be careful when using a keygen, as it may contain viruses or malware that can harm your computer or compromise your data. You should always scan the keygen with an antivirus program before running it. You should also be aware that using a keygen may be illegal or unethical, as it violates the terms and conditions of FontLab Ltd.
                  -

                  To activate Fontographer 5.2 for Mac with a serial number or a keygen, you need to follow these steps:

                  -
                    -
1. Run Fontographer 5.2 for Mac for the first time.
2. Enter your name and email address in the registration window.
3. Enter your serial number or generate one with the keygen in the serial number field.
4. Click on the Activate button to complete the activation.
                  -

                  Once you activate Fontographer 5.2 for Mac, you can use it without any limitations or restrictions. You can also update it to the latest version from the official website of FontLab Ltd.

                  -

                  How to use Fontographer 5.2 for Mac to create and edit fonts

                  -

                  Open and generate fonts in Type 1, TrueType, OpenType, Type 3, and Multiple Master formats

                  -

                  One of the main functions of Fontographer 5.2 for Mac is to open and generate fonts in various formats. You can do this by using the File menu or the toolbar buttons.

                  -

                  To open a font file in Fontographer 5.2 for Mac, you need to follow these steps:

                  -
                    -
1. Click on the File menu and select Open.
2. Browse your computer and locate the font file you want to open.
3. Select the font file and click on the Open button.
                  -

                  You can also drag and drop the font file to the Fontographer icon or window to open it.

                  -

                  To generate a font file in Fontographer 5.2 for Mac, you need to follow these steps:

                  -
                    -
1. Click on the File menu and select Generate Fonts.
2. Select the format you want to generate your font in, such as Type 1, TrueType, OpenType, Type 3, or Multiple Master.
3. Choose the options you want to apply to your font, such as encoding, autohinting, subsetting, etc.
4. Click on the Generate button to create your font file.
                  -

                  You can also use the keyboard shortcut Command+G to generate your font file.

                  -

                  Use the drawing tools, metrics, and kerning features to design your fonts

                  -

                  Another main function of Fontographer 5.2 for Mac is to use the drawing tools, metrics, and kerning features to design your fonts. You can do this by using the Edit menu or the toolbar buttons.

                  -

                  To use the drawing tools in Fontographer 5.2 for Mac, you need to follow these steps:

                  -
                    -
1. Select a glyph or create a new one in the Font window or the Glyph window.
2. Click on the Edit menu and select Drawing Tools.
3. Select the drawing tool you want to use, such as Pen, Pencil, Rectangle, Ellipse, etc.
4. Draw your glyph using the drawing tool and adjust its shape and size using the handles and nodes.
                  -

                  You can also use the keyboard shortcuts or the toolbar buttons to select and use the drawing tools.

                  -

                  To use the metrics feature in Fontographer 5.2 for Mac, you need to follow these steps:

                  -
                    -
1. Select a glyph or a group of glyphs in the Font window or the Glyph window.
2. Click on the Edit menu and select Metrics.
3. Edit the metrics of your glyph or glyphs using the metrics window or the keyboard shortcuts. You can change the width, height, left sidebearing, right sidebearing, and advance width of your glyph or glyphs. You can also use the auto-spacing feature to adjust the metrics automatically.

                  You can also use the toolbar buttons or the context menu to access and use the metrics feature.

                  -

                  To use the kerning feature in Fontographer 5.2 for Mac, you need to follow these steps:

                  -
                    -
1. Select a pair of glyphs or a group of pairs in the Font window or the Glyph window.
2. Click on the Edit menu and select Kerning.
3. Edit the kerning of your pair or pairs using the kerning window or the keyboard shortcuts. You can change the kerning value, which is the amount of space between the pair of glyphs. You can also use the auto-kerning feature to adjust the kerning automatically.
                  -

                  You can also use the toolbar buttons or the context menu to access and use the kerning feature.

                  -

                  Use the font testing and validation tools to check your fonts for errors and compatibility

                  -

                  A third main function of Fontographer 5.2 for Mac is to use the font testing and validation tools to check your fonts for errors and compatibility. You can do this by using the Tools menu or the toolbar buttons.

                  -

                  To use the font audit feature in Fontographer 5.2 for Mac, you need to follow these steps:

                  -
                    -
1. Select a glyph or a group of glyphs in the Font window or the Glyph window.
2. Click on the Tools menu and select Font Audit.
3. Check your glyph or glyphs for errors using the font audit window or the keyboard shortcuts. You can find errors such as missing or overlapping points, incorrect directions, bad curves, and more. You can also fix some errors automatically using the fix button.
                  -

                  You can also use the toolbar button or the context menu to access and use the font audit feature.

                  -

                  To use the font preview feature in Fontographer 5.2 for Mac, you need to follow these steps:

                  -
                    -
1. Select a glyph or a group of glyphs in the Font window or the Glyph window.
2. Click on the Tools menu and select Preview.
3. Preview your glyph or glyphs in various sizes, colors, and backgrounds using the preview window or the keyboard shortcuts. You can also print your glyph or glyphs using the print button.

                  You can also use the toolbar button or the context menu to access and use the preview feature.

                  -

                  How to export and share your fonts with Fontographer 5.2 for Mac

                  -

                  Export your fonts as font files or as web fonts

                  -

                  A fourth main function of Fontographer 5.2 for Mac is to export and share your fonts with other users and applications. You can do this by using the File menu or the toolbar buttons.

                  -

                  To export your fonts as font files in Fontographer 5.2 for Mac, you need to follow these steps:

                  -
                    -
1. Select a font or a group of fonts in the Font window.
2. Click on the File menu and select Generate Fonts.
3. Select the format you want to export your font in, such as Type 1, TrueType, OpenType, Type 3, or Multiple Master.
4. Choose the options you want to apply to your font, such as encoding, autohinting, subsetting, etc.
5. Click on the Generate button to create your font file.
                  -

                  You can also use the keyboard shortcut Command+G to export your font file.

                  -

                  To export your fonts as web fonts in Fontographer 5.2 for Mac, you need to follow these steps:

                  -
                    -
1. Select a font or a group of fonts in the Font window.
2. Click on the File menu and select Generate Web Fonts.
3. Select the format you want to export your font in, such as WOFF, EOT, SVG, or TTF.
4. Choose the options you want to apply to your font, such as encoding, autohinting, subsetting, etc.
5. Click on the Generate button to create your web font file.
                  -

                  You can also use the keyboard shortcut Command+W to export your web font file.
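If you ever need to produce a WOFF file outside of Fontographer, for example to double-check a TrueType font you have already generated, the sketch below shows one way to do it with the third-party fontTools library rather than Fontographer itself. The file names are assumptions for illustration, and WOFF2 output would additionally require the brotli package.

```python
from fontTools.ttLib import TTFont  # third-party: pip install fonttools

# Hypothetical input and output names -- adjust to your own font.
source = "MyFont.ttf"
output = "MyFont.woff"

font = TTFont(source)
font.flavor = "woff"  # wrap the font data in a WOFF container on save
font.save(output)
font.close()
print(f"Wrote {output}")
```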

                  -

                  Share your fonts with other users or applications using the VFB format

                  -

                  A fifth main function of Fontographer 5.2 for Mac is to share your fonts with other users or applications using the VFB format. The VFB format is a proprietary format of FontLab Ltd., which allows you to exchange fonts between Fontographer and other FontLab products, such as FontLab Studio, TypeTool, or ScanFont. You can also use the VFB format to backup your fonts or to collaborate with other designers.

                  -

                  To share your fonts with other users or applications using the VFB format in Fontographer 5.2 for Mac, you need to follow these steps:

                  -
                    -
1. Select a font or a group of fonts in the Font window.
2. Click on the File menu and select Save As.
3. Select the VFB format from the drop-down menu and choose a location for your file.
4. Click on the Save button to create your VFB file.
                  -

                  You can also use the keyboard shortcut Command+S to save your VFB file.

                  -

                  Publish your fonts online or offline using the license agreement feature

                  -

                  A sixth main function of Fontographer 5.2 for Mac is to publish your fonts online or offline using the license agreement feature. The license agreement feature allows you to create and edit a license agreement for your fonts, which defines the terms and conditions of their use and distribution. You can use this feature to protect your intellectual property rights and to generate revenue from your fonts.

                  -

                  To publish your fonts online or offline using the license agreement feature in Fontographer 5.2 for Mac, you need to follow these steps:

                  -
                    -
1. Select a font or a group of fonts in the Font window.
2. Click on the File menu and select License Agreement.
3. Edit the license agreement for your fonts using the license agreement window or the keyboard shortcuts. You can write your own license agreement or use a template from the drop-down menu. You can also add your name, email, website, and signature to the license agreement.

                  You can also use the toolbar button or the context menu to access and use the license agreement feature.

                  -

                  Once you have created and edited your license agreement, you can attach it to your font files or web font files. You can also print it or save it as a PDF file. You can then publish your fonts online or offline, according to the terms and conditions of your license agreement.

                  -

                  Conclusion

                  -

                  Fontographer 5.2 for Mac is a classic font editor that allows you to create and edit fonts in various formats, with ease and accuracy. It has many features and benefits for designers and publishers who want to create professional-quality fonts. It also lets you export and share your fonts with other users and applications, without compromising quality or compatibility. It also lets you publish your fonts online or offline, using the license agreement feature.

                  -

                  If you are looking for a reliable and powerful font editor for Mac OS X, you should consider Fontographer 5.2 for Mac. You can download it from the official website of FontLab Ltd., or from other trusted sources. You can also activate it with a serial number or a keygen. You can then use it to create and edit fonts in various formats, with ease and accuracy.

                  -

                  FAQs

                  -

                  What are the system requirements for Fontographer 5.2 for Mac?

                  -

                  The system requirements for Fontographer 5.2 for Mac are:

                  -
                    -
                  • Mac OS X 10.6.8 or newer (except macOS 10.15 Catalina, macOS 11 Big Sur, or newer versions of macOS)
                  • -
                  • Intel processor
                  • -
                  • 1 GB of RAM
                  • -
                  • 100 MB of hard disk space
                  • -
                  • 1024 x 768 screen resolution
                  • -
                  • A mouse or a tablet
                  • -
                  -

                  How can I update Fontographer 5.2 for Mac to the latest version?

                  -

                  You can update Fontographer 5.2 for Mac to the latest version by following these steps:

                  -
                    -
                  1. Run Fontographer 5.2 for Mac.
                  2. -
                  3. Click on the Help menu and select Check for Updates.
                  4. -
                  5. If there is a newer version available, click on the Download button to download it.
                  6. -
                  7. Install the newer version on your Mac using the installer.
                  8. -
                  -

                  You can also check for updates manually by visiting the official website of FontLab Ltd., which is https://www.fontlab.com/font-editor/fontographer/.

                  -

                  How can I get help or support for Fontographer 5.2 for Mac?

                  -

                  You can get help or support for Fontographer 5.2 for Mac by using one of these methods:

                  -
                    -
                  • You can read the user manual, which is available in PDF format on the official website of FontLab Ltd., or in the Help menu of Fontographer 5.2 for Mac.
                  • -
                  • You can watch the video tutorials, which are available on the official website of FontLab Ltd., or on YouTube.
                  • -
                  • You can visit the online forum, which is available on https://forum.fontlab.com/fontographer/.
                  • -
                  • You can contact the technical support team, which is available by email at support@fontlab.com, or by phone at +1-202-747-4220.
                  • -
                  -

                  How can I learn more about font design and typography?

                  -

                  You can learn more about font design and typography by using one of these resources:

                  -
                    -
                  • You can read books, such as The Elements of Typographic Style by Robert Bringhurst, Designing Type by Karen Cheng, The Anatomy of Type by Stephen Coles, etc.
                  • -
                  • You can take courses, such as Typography Fundamentals by Lynda.com, Introduction to Typography by Coursera, Learn Font Making by Skillshare, etc.
                  • -
                  • You can follow blogs, such as Typographica, Fonts In Use, I Love Typography, etc.
                  • -
• You can join communities, such as Typophile, TypeDrawers, Typographica, etc.
                  • -
                  -

                  How can I buy or sell fonts online?

                  -

                  You can buy or sell fonts online by using one of these platforms:

                  -
                    -
                  • You can buy fonts from online font shops, such as MyFonts, FontShop, Fonts.com, etc.
                  • -
                  • You can sell fonts on online font marketplaces, such as Creative Market, Fontspring, YouWorkForThem, etc.
                  • -
                  • You can buy or sell fonts on online font subscription services, such as Adobe Fonts, Fontstand, SkyFonts, etc.
                  • -
                  -

                  However, before you buy or sell fonts online, you should always read and follow the license agreement of the fonts, which defines the terms and conditions of their use and distribution. You should also respect the intellectual property rights and the ethical standards of the font industry.

                  -

                  b2dd77e56b
                  -
                  -
                  \ No newline at end of file diff --git a/spaces/stratussox/yolov5_inference/utils/loggers/comet/README.md b/spaces/stratussox/yolov5_inference/utils/loggers/comet/README.md deleted file mode 100644 index 3a51cb9b5a25212c30f018d9db6e8887557650a1..0000000000000000000000000000000000000000 --- a/spaces/stratussox/yolov5_inference/utils/loggers/comet/README.md +++ /dev/null @@ -1,256 +0,0 @@ - - -# YOLOv5 with Comet - -This guide will cover how to use YOLOv5 with [Comet](https://bit.ly/yolov5-readme-comet) - -# About Comet - -Comet builds tools that help data scientists, engineers, and team leaders accelerate and optimize machine learning and deep learning models. - -Track and visualize model metrics in real time, save your hyperparameters, datasets, and model checkpoints, and visualize your model predictions with [Comet Custom Panels](https://bit.ly/yolov5-colab-comet-panels)! -Comet makes sure you never lose track of your work and makes it easy to share results and collaborate across teams of all sizes! - -# Getting Started - -## Install Comet - -```shell -pip install comet_ml -``` - -## Configure Comet Credentials - -There are two ways to configure Comet with YOLOv5. - -You can either set your credentials through enviroment variables - -**Environment Variables** - -```shell -export COMET_API_KEY= -export COMET_PROJECT_NAME= # This will default to 'yolov5' -``` - -Or create a `.comet.config` file in your working directory and set your credentials there. - -**Comet Configuration File** - -``` -[comet] -api_key= -project_name= # This will default to 'yolov5' -``` - -## Run the Training Script - -```shell -# Train YOLOv5s on COCO128 for 5 epochs -python train.py --img 640 --batch 16 --epochs 5 --data coco128.yaml --weights yolov5s.pt -``` - -That's it! Comet will automatically log your hyperparameters, command line arguments, training and valiation metrics. You can visualize and analyze your runs in the Comet UI - -yolo-ui - -# Try out an Example! -Check out an example of a [completed run here](https://www.comet.com/examples/comet-example-yolov5/a0e29e0e9b984e4a822db2a62d0cb357?experiment-tab=chart&showOutliers=true&smoothing=0&transformY=smoothing&xAxis=step&ref=yolov5&utm_source=yolov5&utm_medium=affilliate&utm_campaign=yolov5_comet_integration) - -Or better yet, try it out yourself in this Colab Notebook - -[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1RG0WOQyxlDlo5Km8GogJpIEJlg_5lyYO?usp=sharing) - -# Log automatically - -By default, Comet will log the following items - -## Metrics -- Box Loss, Object Loss, Classification Loss for the training and validation data -- mAP_0.5, mAP_0.5:0.95 metrics for the validation data. -- Precision and Recall for the validation data - -## Parameters - -- Model Hyperparameters -- All parameters passed through the command line options - -## Visualizations - -- Confusion Matrix of the model predictions on the validation data -- Plots for the PR and F1 curves across all classes -- Correlogram of the Class Labels - -# Configure Comet Logging - -Comet can be configured to log additional data either through command line flags passed to the training script -or through environment variables. - -```shell -export COMET_MODE=online # Set whether to run Comet in 'online' or 'offline' mode. Defaults to online -export COMET_MODEL_NAME= #Set the name for the saved model. Defaults to yolov5 -export COMET_LOG_CONFUSION_MATRIX=false # Set to disable logging a Comet Confusion Matrix. 
Defaults to true -export COMET_MAX_IMAGE_UPLOADS= # Controls how many total image predictions to log to Comet. Defaults to 100. -export COMET_LOG_PER_CLASS_METRICS=true # Set to log evaluation metrics for each detected class at the end of training. Defaults to false -export COMET_DEFAULT_CHECKPOINT_FILENAME= # Set this if you would like to resume training from a different checkpoint. Defaults to 'last.pt' -export COMET_LOG_BATCH_LEVEL_METRICS=true # Set this if you would like to log training metrics at the batch level. Defaults to false. -export COMET_LOG_PREDICTIONS=true # Set this to false to disable logging model predictions -``` - -## Logging Checkpoints with Comet - -Logging Models to Comet is disabled by default. To enable it, pass the `save-period` argument to the training script. This will save the -logged checkpoints to Comet based on the interval value provided by `save-period` - -```shell -python train.py \ ---img 640 \ ---batch 16 \ ---epochs 5 \ ---data coco128.yaml \ ---weights yolov5s.pt \ ---save-period 1 -``` - -## Logging Model Predictions - -By default, model predictions (images, ground truth labels and bounding boxes) will be logged to Comet. - -You can control the frequency of logged predictions and the associated images by passing the `bbox_interval` command line argument. Predictions can be visualized using Comet's Object Detection Custom Panel. This frequency corresponds to every Nth batch of data per epoch. In the example below, we are logging every 2nd batch of data for each epoch. - -**Note:** The YOLOv5 validation dataloader will default to a batch size of 32, so you will have to set the logging frequency accordingly. - -Here is an [example project using the Panel](https://www.comet.com/examples/comet-example-yolov5?shareable=YcwMiJaZSXfcEXpGOHDD12vA1&ref=yolov5&utm_source=yolov5&utm_medium=affilliate&utm_campaign=yolov5_comet_integration) - - -```shell -python train.py \ ---img 640 \ ---batch 16 \ ---epochs 5 \ ---data coco128.yaml \ ---weights yolov5s.pt \ ---bbox_interval 2 -``` - -### Controlling the number of Prediction Images logged to Comet - -When logging predictions from YOLOv5, Comet will log the images associated with each set of predictions. By default a maximum of 100 validation images are logged. You can increase or decrease this number using the `COMET_MAX_IMAGE_UPLOADS` environment variable. - -```shell -env COMET_MAX_IMAGE_UPLOADS=200 python train.py \ ---img 640 \ ---batch 16 \ ---epochs 5 \ ---data coco128.yaml \ ---weights yolov5s.pt \ ---bbox_interval 1 -``` - -### Logging Class Level Metrics - -Use the `COMET_LOG_PER_CLASS_METRICS` environment variable to log mAP, precision, recall, f1 for each class. - -```shell -env COMET_LOG_PER_CLASS_METRICS=true python train.py \ ---img 640 \ ---batch 16 \ ---epochs 5 \ ---data coco128.yaml \ ---weights yolov5s.pt -``` - -## Uploading a Dataset to Comet Artifacts - -If you would like to store your data using [Comet Artifacts](https://www.comet.com/docs/v2/guides/data-management/using-artifacts/#learn-more?ref=yolov5&utm_source=yolov5&utm_medium=affilliate&utm_campaign=yolov5_comet_integration), you can do so using the `upload_dataset` flag. - -The dataset be organized in the way described in the [YOLOv5 documentation](https://docs.ultralytics.com/tutorials/train-custom-datasets/#3-organize-directories). The dataset config `yaml` file must follow the same format as that of the `coco128.yaml` file. 
- -```shell -python train.py \ ---img 640 \ ---batch 16 \ ---epochs 5 \ ---data coco128.yaml \ ---weights yolov5s.pt \ ---upload_dataset -``` - -You can find the uploaded dataset in the Artifacts tab in your Comet Workspace -artifact-1 - -You can preview the data directly in the Comet UI. -artifact-2 - -Artifacts are versioned and also support adding metadata about the dataset. Comet will automatically log the metadata from your dataset `yaml` file -artifact-3 - -### Using a saved Artifact - -If you would like to use a dataset from Comet Artifacts, set the `path` variable in your dataset `yaml` file to point to the following Artifact resource URL. - -``` -# contents of artifact.yaml file -path: "comet:///:" -``` -Then pass this file to your training script in the following way - -```shell -python train.py \ ---img 640 \ ---batch 16 \ ---epochs 5 \ ---data artifact.yaml \ ---weights yolov5s.pt -``` - -Artifacts also allow you to track the lineage of data as it flows through your Experimentation workflow. Here you can see a graph that shows you all the experiments that have used your uploaded dataset. -artifact-4 - -## Resuming a Training Run - -If your training run is interrupted for any reason, e.g. disrupted internet connection, you can resume the run using the `resume` flag and the Comet Run Path. - -The Run Path has the following format `comet:////`. - -This will restore the run to its state before the interruption, which includes restoring the model from a checkpoint, restoring all hyperparameters and training arguments and downloading Comet dataset Artifacts if they were used in the original run. The resumed run will continue logging to the existing Experiment in the Comet UI - -```shell -python train.py \ ---resume "comet://" -``` - -## Hyperparameter Search with the Comet Optimizer - -YOLOv5 is also integrated with Comet's Optimizer, making is simple to visualie hyperparameter sweeps in the Comet UI. - -### Configuring an Optimizer Sweep - -To configure the Comet Optimizer, you will have to create a JSON file with the information about the sweep. An example file has been provided in `utils/loggers/comet/optimizer_config.json` - -```shell -python utils/loggers/comet/hpo.py \ - --comet_optimizer_config "utils/loggers/comet/optimizer_config.json" -``` - -The `hpo.py` script accepts the same arguments as `train.py`. If you wish to pass additional arguments to your sweep simply add them after -the script. - -```shell -python utils/loggers/comet/hpo.py \ - --comet_optimizer_config "utils/loggers/comet/optimizer_config.json" \ - --save-period 1 \ - --bbox_interval 1 -``` - -### Running a Sweep in Parallel - -```shell -comet optimizer -j utils/loggers/comet/hpo.py \ - utils/loggers/comet/optimizer_config.json" -``` - -### Visualizing Results - -Comet provides a number of ways to visualize the results of your sweep. Take a look at a [project with a completed sweep here](https://www.comet.com/examples/comet-example-yolov5/view/PrlArHGuuhDTKC1UuBmTtOSXD/panels?ref=yolov5&utm_source=yolov5&utm_medium=affilliate&utm_campaign=yolov5_comet_integration) - -hyperparameter-yolo diff --git a/spaces/studiobrn/SplitTrack/audiocraft/models/encodec.py b/spaces/studiobrn/SplitTrack/audiocraft/models/encodec.py deleted file mode 100644 index 69621a695887b0b41614c51cae020f6fd0af221d..0000000000000000000000000000000000000000 --- a/spaces/studiobrn/SplitTrack/audiocraft/models/encodec.py +++ /dev/null @@ -1,302 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. 
-# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -from abc import ABC, abstractmethod -import typing as tp - -from einops import rearrange -import torch -from torch import nn - -from .. import quantization as qt - - -class CompressionModel(ABC, nn.Module): - - @abstractmethod - def forward(self, x: torch.Tensor) -> qt.QuantizedResult: - ... - - @abstractmethod - def encode(self, x: torch.Tensor) -> tp.Tuple[torch.Tensor, tp.Optional[torch.Tensor]]: - """See `EncodecModel.encode`""" - ... - - @abstractmethod - def decode(self, codes: torch.Tensor, scale: tp.Optional[torch.Tensor] = None): - """See `EncodecModel.decode`""" - ... - - @property - @abstractmethod - def channels(self) -> int: - ... - - @property - @abstractmethod - def frame_rate(self) -> int: - ... - - @property - @abstractmethod - def sample_rate(self) -> int: - ... - - @property - @abstractmethod - def cardinality(self) -> int: - ... - - @property - @abstractmethod - def num_codebooks(self) -> int: - ... - - @property - @abstractmethod - def total_codebooks(self) -> int: - ... - - @abstractmethod - def set_num_codebooks(self, n: int): - """Set the active number of codebooks used by the quantizer. - """ - ... - - -class EncodecModel(CompressionModel): - """Encodec model operating on the raw waveform. - - Args: - encoder (nn.Module): Encoder network. - decoder (nn.Module): Decoder network. - quantizer (qt.BaseQuantizer): Quantizer network. - frame_rate (int): Frame rate for the latent representation. - sample_rate (int): Audio sample rate. - channels (int): Number of audio channels. - causal (bool): Whether to use a causal version of the model. - renormalize (bool): Whether to renormalize the audio before running the model. - """ - # we need assignement to override the property in the abstract class, - # I couldn't find a better way... - frame_rate: int = 0 - sample_rate: int = 0 - channels: int = 0 - - def __init__(self, - encoder: nn.Module, - decoder: nn.Module, - quantizer: qt.BaseQuantizer, - frame_rate: int, - sample_rate: int, - channels: int, - causal: bool = False, - renormalize: bool = False): - super().__init__() - self.encoder = encoder - self.decoder = decoder - self.quantizer = quantizer - self.frame_rate = frame_rate - self.sample_rate = sample_rate - self.channels = channels - self.renormalize = renormalize - self.causal = causal - if self.causal: - # we force disabling here to avoid handling linear overlap of segments - # as supported in original EnCodec codebase. - assert not self.renormalize, 'Causal model does not support renormalize' - - @property - def total_codebooks(self): - """Total number of quantizer codebooks available. - """ - return self.quantizer.total_codebooks - - @property - def num_codebooks(self): - """Active number of codebooks used by the quantizer. - """ - return self.quantizer.num_codebooks - - def set_num_codebooks(self, n: int): - """Set the active number of codebooks used by the quantizer. - """ - self.quantizer.set_num_codebooks(n) - - @property - def cardinality(self): - """Cardinality of each codebook. 
- """ - return self.quantizer.bins - - def preprocess(self, x: torch.Tensor) -> tp.Tuple[torch.Tensor, tp.Optional[torch.Tensor]]: - scale: tp.Optional[torch.Tensor] - if self.renormalize: - mono = x.mean(dim=1, keepdim=True) - volume = mono.pow(2).mean(dim=2, keepdim=True).sqrt() - scale = 1e-8 + volume - x = x / scale - scale = scale.view(-1, 1) - else: - scale = None - return x, scale - - def postprocess(self, - x: torch.Tensor, - scale: tp.Optional[torch.Tensor] = None) -> torch.Tensor: - if scale is not None: - assert self.renormalize - x = x * scale.view(-1, 1, 1) - return x - - def forward(self, x: torch.Tensor) -> qt.QuantizedResult: - assert x.dim() == 3 - length = x.shape[-1] - x, scale = self.preprocess(x) - - emb = self.encoder(x) - q_res = self.quantizer(emb, self.frame_rate) - out = self.decoder(q_res.x) - - # remove extra padding added by the encoder and decoder - assert out.shape[-1] >= length, (out.shape[-1], length) - out = out[..., :length] - - q_res.x = self.postprocess(out, scale) - - return q_res - - def encode(self, x: torch.Tensor) -> tp.Tuple[torch.Tensor, tp.Optional[torch.Tensor]]: - """Encode the given input tensor to quantized representation along with scale parameter. - - Args: - x (torch.Tensor): Float tensor of shape [B, C, T] - - Returns: - codes, scale (tp.Tuple[torch.Tensor, torch.Tensor]): Tuple composed of: - codes a float tensor of shape [B, K, T] with K the number of codebooks used and T the timestep. - scale a float tensor containing the scale for audio renormalizealization. - """ - assert x.dim() == 3 - x, scale = self.preprocess(x) - emb = self.encoder(x) - codes = self.quantizer.encode(emb) - return codes, scale - - def decode(self, codes: torch.Tensor, scale: tp.Optional[torch.Tensor] = None): - """Decode the given codes to a reconstructed representation, using the scale to perform - audio denormalization if needed. - - Args: - codes (torch.Tensor): Int tensor of shape [B, K, T] - scale (tp.Optional[torch.Tensor]): Float tensor containing the scale value. - - Returns: - out (torch.Tensor): Float tensor of shape [B, C, T], the reconstructed audio. - """ - emb = self.quantizer.decode(codes) - out = self.decoder(emb) - out = self.postprocess(out, scale) - # out contains extra padding added by the encoder and decoder - return out - - -class FlattenedCompressionModel(CompressionModel): - """Wraps a CompressionModel and flatten its codebooks, e.g. - instead of returning [B, K, T], return [B, S, T * (K // S)] with - S the number of codebooks per step, and `K // S` the number of 'virtual steps' - for each real time step. - - Args: - model (CompressionModel): compression model to wrap. - codebooks_per_step (int): number of codebooks to keep per step, - this must divide the number of codebooks provided by the wrapped model. - extend_cardinality (bool): if True, and for instance if codebooks_per_step = 1, - if each codebook has a cardinality N, then the first codebook will - use the range [0, N - 1], and the second [N, 2 N - 1] etc. - On decoding, this can lead to potentially invalid sequences. - Any invalid entry will be silently remapped to the proper range - with a modulo. 
- """ - def __init__(self, model: CompressionModel, codebooks_per_step: int = 1, - extend_cardinality: bool = True): - super().__init__() - self.model = model - self.codebooks_per_step = codebooks_per_step - self.extend_cardinality = extend_cardinality - - @property - def total_codebooks(self): - return self.model.total_codebooks - - @property - def num_codebooks(self): - """Active number of codebooks used by the quantizer. - - ..Warning:: this reports the number of codebooks after the flattening - of the codebooks! - """ - assert self.model.num_codebooks % self.codebooks_per_step == 0 - return self.codebooks_per_step - - def set_num_codebooks(self, n: int): - """Set the active number of codebooks used by the quantizer. - - ..Warning:: this sets the number of codebooks **before** the flattening - of the codebooks. - """ - assert n % self.codebooks_per_step == 0 - self.model.set_num_codebooks(n) - - @property - def num_virtual_steps(self) -> int: - """Return the number of virtual steps, e.g. one real step - will be split into that many steps. - """ - return self.model.num_codebooks // self.codebooks_per_step - - @property - def frame_rate(self) -> int: - return self.model.frame_rate * self.num_virtual_steps - - @property - def sample_rate(self) -> int: - return self.model.sample_rate - - @property - def channels(self) -> int: - return self.model.channels - - @property - def cardinality(self): - """Cardinality of each codebook. - """ - if self.extend_cardinality: - return self.model.cardinality * self.num_virtual_steps - else: - return self.model.cardinality - - def forward(self, x: torch.Tensor) -> qt.QuantizedResult: - raise NotImplementedError("Not supported, use encode and decode.") - - def encode(self, x: torch.Tensor) -> tp.Tuple[torch.Tensor, tp.Optional[torch.Tensor]]: - indices, scales = self.model.encode(x) - B, K, T = indices.shape - indices = rearrange(indices, 'b (k v) t -> b k t v', k=self.codebooks_per_step) - if self.extend_cardinality: - for virtual_step in range(1, self.num_virtual_steps): - indices[..., virtual_step] += self.model.cardinality * virtual_step - indices = rearrange(indices, 'b k t v -> b k (t v)') - return (indices, scales) - - def decode(self, codes: torch.Tensor, scale: tp.Optional[torch.Tensor] = None): - B, K, T = codes.shape - assert T % self.num_virtual_steps == 0 - codes = rearrange(codes, 'b k (t v) -> b (k v) t', v=self.num_virtual_steps) - # We silently ignore potential errors from the LM when - # using extend_cardinality. - codes = codes % self.model.cardinality - return self.model.decode(codes, scale) diff --git a/spaces/sudeepshouche/minimalist/README.md b/spaces/sudeepshouche/minimalist/README.md deleted file mode 100644 index b64718c2c1e740d45f6f49566b256e7cfedaf452..0000000000000000000000000000000000000000 --- a/spaces/sudeepshouche/minimalist/README.md +++ /dev/null @@ -1,26 +0,0 @@ ---- -tags: -- track-1 -- track-4 -- gradio -- gradio-theme -- minimalist -- minimalist-theme -- minimal-theme -- basic-theme -- custom-gradio-theme -title: minimalist -colorFrom: blue -colorTo: gray -sdk: gradio -sdk_version: 3.42.0 -app_file: app.py -pinned: true -license: apache-2.0 -emoji: ⚡ ---- -# minimalist -## Description -A minimalist theme for Gradio Apps -## Contributions -Thanks to [@sudeepshouche](https://huggingface.co/sudeepshouche) for adding this gradio theme! 
\ No newline at end of file diff --git a/spaces/sujithvamshi/vehicle-color-recognition/app.py b/spaces/sujithvamshi/vehicle-color-recognition/app.py deleted file mode 100644 index 13711199aee2c11d471f08598dcde129ece7a41b..0000000000000000000000000000000000000000 --- a/spaces/sujithvamshi/vehicle-color-recognition/app.py +++ /dev/null @@ -1,37 +0,0 @@ -import gradio as gr - -from utils import predict -with gr.Blocks(title="Vehicle Colour Recognition") as interface: - with gr.Row(): - with gr.Column(): - gr.Markdown(''' - # Real-Time Vehicle Tracking and Colour Recogniton - When using a computer vision model for real-time vehicle tracking or color recognition, the input frame is fed into the model, and the model then makes predictions based on the contents of the frame. The model processes the image or video frame, analyzes the objects and their attributes, and outputs the predictions in the form of bounding boxes, class probabilities, or color labels. - ''') - gr.Image('./samples/input_image.png',label="Input Frame") - gr.Markdown(''' - For example, in the case of real-time vehicle tracking using YOLOv5 and StrongSORT, the input frame is first processed by the YOLOv5 algorithm, which outputs bounding boxes and class probabilities for each detected object. The StrongSORT algorithm then takes these predictions and tracks the movement of vehicles over time, maintaining object identities and handling occlusions and other challenging scenarios. - ''') - gr.Image('./samples/detections.png',label="Output Frame") - gr.Image('./samples/flow.png') - gr.Markdown(''' - Similarly, in the case of color recognition using EfficientNet, the input frame is fed into the network, which processes the image and outputs color predictions for each pixel or region of the image. The network can be trained to recognize a wide range of colors, such as red, green, blue, yellow, and more, based on the color distribution in the training data. - ''') - gr.Image('./samples/detailed.png') - gr.Markdown(''' - In both cases, the input frame is processed in real-time, allowing for real-time tracking and color recognition in a video stream. The model's predictions are then used to perform a range of applications, such as traffic monitoring, surveillance, and more. - ''') - with gr.Column(): - gr.Markdown('

                  Try Our Colour Recognition Model Here!!!') - input_image = gr.Image(shape=(224, 224)) - with gr.Row(): - send_btn = gr.Button("Submit") - output_label=gr.Text(label="Predicted Colour") - send_btn.click(fn=predict, inputs=input_image,outputs=output_label) - with gr.Row(): - gr.Examples(['./samples/blue.jpg'], label='Sample images : Blue', inputs=input_image) - gr.Examples(['./samples/red.jpg'], label='Red', inputs=input_image) - gr.Examples(['./samples/black.jpg'], label='Black', inputs=input_image) - gr.Examples(['./samples/white.jpg'], label='White', inputs=input_image) - -interface.launch() diff --git a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Album Completo Eros Ramazzotti Noi Download Torrent !!LINK!!.md b/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Album Completo Eros Ramazzotti Noi Download Torrent !!LINK!!.md deleted file mode 100644 index 8adc3c9b3e9a591c6a4a554494f8e4c2b10515c1..0000000000000000000000000000000000000000 --- a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Album Completo Eros Ramazzotti Noi Download Torrent !!LINK!!.md +++ /dev/null @@ -1,6 +0,0 @@ -

                  album completo eros ramazzotti noi download torrent


                  Download File ——— https://cinurl.com/2uEYPz



                  -
                  -AC DC Full Discography Torrent FLAC Music Download ... full download Eros Ramazzotti Perfetto 2015 Deluxe Edition Mp3 Eros Ramazzotti ... 1fdad05405
                  -
                  -
                  -

                  diff --git a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Latest Version Of Adobe Flash Player Free Download And Install.md b/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Latest Version Of Adobe Flash Player Free Download And Install.md deleted file mode 100644 index aece5551b18f33a36524a832879669ee0d1385e1..0000000000000000000000000000000000000000 --- a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Latest Version Of Adobe Flash Player Free Download And Install.md +++ /dev/null @@ -1,88 +0,0 @@ - -

                  How to Get the Latest Version of Adobe Flash Player for Free

                  -

Adobe Flash Player is software that allows you to play and view multimedia content in your browser. It is widely used for animations, games, videos, and web applications. However, Adobe has announced that it will stop supporting Flash Player by the end of 2020, due to security and performance issues. Therefore, you may need to download and install the latest version of Adobe Flash Player for free before it becomes obsolete.

                  -

                  latest version of adobe flash player free download and install


                  Download File ✔✔✔ https://cinurl.com/2uEZ4P



                  -

                  In this article, we will show you how to get the latest version of Adobe Flash Player for free and install it on your Windows PC. We will also explain how to enable Flash Player on your browser, and answer some frequently asked questions about Flash Player.

                  -

                  Download and Install the Latest Version of Adobe Flash Player for Free

                  -

                  The first step to get the latest version of Adobe Flash Player for free is to download it from the official Adobe website. Here are the steps to follow:

                  -
                    -
                  1. Go to https://get.adobe.com/flashplayer/ and click on the yellow "Download now" button.
                  2. -
                  3. Uncheck the optional offers for McAfee products if you don't want them.
                  4. -
                  5. Choose your update preferences. We recommend selecting "Allow Adobe to install updates" to keep your Flash Player up to date.
                  6. -
                  7. Click on "Save File" and wait for the download to finish.
                  8. -
                  9. Open the downloaded file and follow the installation instructions.
                  10. -
                  -

                  Congratulations! You have successfully downloaded and installed the latest version of Adobe Flash Player for free on your Windows PC.

                  -

                  Enable Flash Player on Your Browser

                  -

                  The next step to get the latest version of Adobe Flash Player for free is to enable it on your browser. Depending on which browser you use, the steps may vary slightly. Here are some common browsers and how to enable Flash Player on them:

                  -

                  Chrome

                  -

If you use Chrome, you don't need to download Flash Player separately, because it is built into your browser. However, you need to enable it manually when you visit a site that uses Flash. Here are the steps to enable Flash Player in Chrome:

                  -
                    -
                  1. Click on the menu button (three dots) on the top right corner of your browser.
                  2. -
                  3. Select "Settings" from the drop-down menu.
                  4. -
                  5. Scroll down and click on "Advanced".
                  6. -
                  7. Click on "Site settings" under "Privacy and security".
                  8. -
                  9. Scroll down and click on "Flash".
                  10. -
                  11. Toggle the switch from "Block sites from running Flash (recommended)" to "Ask first".
                  12. -
                  -

                  Now, whenever you visit a site that has Flash content, you will see a prompt asking you to allow or block Flash. We suggest only allowing Flash on sites that you trust.

                  -

                  Microsoft Edge

                  -

If you use Microsoft Edge, you also don't need to download Flash Player separately, because it is built into your browser. However, you need to enable it manually when you visit a site that uses Flash. Here are the steps to enable Flash Player in Microsoft Edge:

                  -

                  -
                    -
                  1. Click on the menu button (three dots) on the top right corner of your browser.
                  2. -
                  3. Select "Settings" from the drop-down menu.
                  4. -
                  5. Click on "Advanced" on the left sidebar.
                  6. -
                  7. Toggle the switch from "Use Adobe Flash Player" to "On".
                  8. -
                  -

                  Now, whenever you visit a site that has Flash content, you will see a prompt asking you to allow or block Flash. We suggest only allowing Flash on sites that you trust.

                  -

                  Frequently Asked Questions About Adobe Flash Player

                  -

                  You may have some questions about Adobe Flash Player and its future. Here are some common questions and answers that may help you:

                  -

                  Is Adobe Flash Player Safe?

                  -

                  Adobe Flash Player is safe as long as you download it from the official Adobe website and keep it updated. However, some malicious websites may try to trick you into downloading fake or outdated versions of Flash Player that may contain viruses or malware. Therefore, you should always be careful when clicking on links or pop-ups that ask you to update or install Flash Player.

                  -

                  Is Adobe Flash Player Free?

                  -

                  Yes, Adobe Flash Player is free for personal use. You don't need to pay anything to download or install it on your PC or browser. However, if you are a developer or a business owner who wants to create or distribute Flash content, you may need to purchase a license from Adobe.

                  -

                  What Will Happen When Adobe Stops Supporting Flash Player?

                  -

                  When Adobe stops supporting Flash Player by the end of 2020, it means that it will no longer provide updates or security patches for it. This means that any existing Flash content may become vulnerable to hackers or incompatible with newer browsers or operating systems. Therefore, Adobe recommends that users uninstall Flash Player before its end of life date. Moreover, many browsers and websites have already started phasing out Flash in favor of newer technologies such as HTML5, CSS3, and JavaScript. This means that most of the web content that used to rely on Flash will no longer be available or accessible after 2020.

                  - -

                  Conclusion

                  - -

                  In this article, we have shown you how to get the latest version of Adobe Flash Player for free and install it on your Windows PC. We have also explained how to enable Flash Player on your browser, and answered some frequently asked questions about Flash Player. We hope this article has been helpful for you and that you have enjoyed watching or playing some amazing Flash content before it becomes obsolete.

                  -

                  Why You Need the Latest Version of Adobe Flash Player

                  -

                  One of the reasons why you need to get the latest version of Adobe Flash Player for free is to ensure that you have the best possible experience when watching or playing Flash content. The latest version of Flash Player offers improved performance, stability, and security. It also fixes some bugs and vulnerabilities that may affect your browser or system.

                  -

                  Another reason why you need to get the latest version of Adobe Flash Player for free is to avoid missing out on some Flash content that may not be compatible with older versions of Flash Player. Some websites or applications may require you to have the latest version of Flash Player to access their features or functions. If you don't have the latest version of Flash Player, you may encounter errors, crashes, or blank screens.

                  -

                  Therefore, it is important to get the latest version of Adobe Flash Player for free and install it on your PC as soon as possible. You can check your current version of Flash Player by visiting https://helpx.adobe.com/flash-player.html and compare it with the latest version available on the Adobe website.
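If you prefer to check from a web page rather than by eye, the plug-in can also be detected with a few lines of script. The sketch below only illustrates the classic navigator.plugins check that pages used while Flash was still supported; it is not an Adobe-provided API, and the getFlashInfo name is just a placeholder chosen for this example.

```typescript
// Minimal sketch: report whether the Flash plug-in is visible to the browser
// and, if it is, show the version text from its description string.
// Runs in any browser page; browsers that have removed Flash simply report
// that it is not installed.
function getFlashInfo(): string {
  const plugin = navigator.plugins.namedItem("Shockwave Flash");
  if (!plugin) {
    return "Flash Player is not installed or is disabled in this browser.";
  }
  // The description typically looks like "Shockwave Flash 32.0 r0".
  return `Flash Player detected: ${plugin.description}`;
}

console.log(getFlashInfo());
```

Pasting something like this into the browser console gives you a quick, scriptable version check alongside the Adobe help page mentioned above.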

                  -

                  How to Uninstall Adobe Flash Player

                  -

                  As we mentioned earlier, Adobe will stop supporting Flash Player by the end of 2020. This means that Flash Player will no longer receive any updates or security patches from Adobe. This also means that Flash Player will become obsolete and potentially unsafe to use after 2020.

                  -

                  Therefore, Adobe recommends that users uninstall Flash Player from their PC before its end of life date. This will help prevent any security risks or compatibility issues that may arise from using an outdated and unsupported software. Here are the steps to uninstall Flash Player from your Windows PC:

                  -
                    -
                  1. Go to https://helpx.adobe.com/flash-player/kb/uninstall-flash-player-windows.html and download the uninstaller for your version of Windows.
                  2. -
                  3. Close all browsers and applications that use Flash Player.
                  4. -
                  5. Run the uninstaller and follow the instructions.
                  6. -
                  7. Restart your computer.
                  8. -
                  9. Delete any remaining files or folders related to Flash Player from your system.
                  10. -
                  -

                  Congratulations! You have successfully uninstalled Flash Player from your Windows PC.

                  -

                  What Are the Benefits of Adobe Flash Player

                  -

                  Adobe Flash Player may be on its way out, but it still has some benefits that make it worth using until it is completely phased out. Here are some of the benefits of Adobe Flash Player:

                  -
                    -
                  • It supports a wide range of multimedia formats, such as SWF, FLV, MP4, MP3, PNG, JPEG, GIF, and more.
                  • -
                  • It enables rich and interactive web content, such as animations, games, videos, and web applications.
                  • -
                  • It is compatible with most browsers and operating systems, such as Windows, Mac OS, Linux, Android, and iOS.
                  • -
                  • It has a large and active community of developers and users who create and share Flash content.
                  • -
                  • It has a powerful and flexible scripting language called ActionScript that allows you to create dynamic and complex Flash content.
                  • -
                  -

                  Therefore, Adobe Flash Player still has some advantages that make it a useful and enjoyable software to have on your PC or browser.

                  -

                  What Are the Alternatives to Adobe Flash Player

                  -

                  As Adobe Flash Player is nearing its end of life, you may be wondering what are the alternatives to Adobe Flash Player that you can use to access or create web content. Here are some of the alternatives to Adobe Flash Player that you can consider:

                  -
                    -
                  • HTML5: HTML5 is the latest version of the HyperText Markup Language that is used to create web pages. HTML5 supports multimedia elements such as audio, video, canvas, and SVG without requiring any plugins or downloads. HTML5 is faster, more secure, and more responsive than Flash. It is also supported by all modern browsers and devices.
                  • -
                  • CSS3: CSS3 is the latest version of the Cascading Style Sheets that is used to style web pages. CSS3 allows you to create animations, transitions, transformations, gradients, shadows, and more without using any images or scripts. CSS3 is also faster, more secure, and more responsive than Flash. It is also supported by all modern browsers and devices.
                  • -
                  • JavaScript: JavaScript is a scripting language that is used to add interactivity and functionality to web pages. JavaScript can manipulate HTML and CSS elements, communicate with servers, store data, create games, and more. JavaScript is also faster, more secure, and more responsive than Flash. It is also supported by all modern browsers and devices.
                  • -
                  -

                  Therefore, HTML5, CSS3, and JavaScript are some of the alternatives to Adobe Flash Player that you can use to access or create web content that is more modern, efficient, and compatible.
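To make the HTML5 option concrete, here is a small illustrative sketch that swaps a legacy Flash embed for a native video element using only standard DOM calls. The element id "flash-player" and the file name "movie.mp4" are placeholders invented for this example, not values taken from this article.

```typescript
// Minimal sketch: replace a legacy Flash <object> embed with an HTML5 <video>
// element. Everything here is standard DOM API; no plug-in is required.
const flashEmbed = document.getElementById("flash-player"); // placeholder id

if (flashEmbed && flashEmbed.parentElement) {
  const video = document.createElement("video");
  video.src = "movie.mp4";   // an MP4 re-encode of the old Flash movie (placeholder)
  video.controls = true;     // use the browser's built-in playback controls
  video.width = 640;
  video.height = 360;
  flashEmbed.parentElement.replaceChild(video, flashEmbed);
}
```

Animations and interactivity that used to be scripted in ActionScript can usually be rebuilt the same way with CSS3 transitions and plain JavaScript, which is why browsers can drop Flash without losing those features.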

                  -

                  Conclusion

                  -

                  In this article, we have shown you how to get the latest version of Adobe Flash Player for free and install it on your Windows PC. We have also explained how to enable Flash Player on your browser, and answered some frequently asked questions about Flash Player. We have also discussed the benefits and alternatives of Adobe Flash Player, and why you may need to uninstall it before its end of life date.

                  -

                  We hope this article has been helpful for you and that you have enjoyed watching or playing some amazing Flash content before it becomes obsolete. If you have any questions or feedback, please feel free to leave a comment below.

                  3cee63e6c2
                  -
                  -
                  \ No newline at end of file diff --git a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Monster Girl Quest! (Save Data).zip.md b/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Monster Girl Quest! (Save Data).zip.md deleted file mode 100644 index 56fa7099868dd396762a369f67110e145f976fa1..0000000000000000000000000000000000000000 --- a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Monster Girl Quest! (Save Data).zip.md +++ /dev/null @@ -1,8 +0,0 @@ -

                  Monster Girl Quest! (Save Data).zip


                  Download Zip === https://cinurl.com/2uEXXn



                  -
                  -MonGirlDreams-Alpha-v23.9Tov23.9e-PatchData.zip 10 MB ... Try watching the loss/death scene from a game called "Monster Girl Quest" if you don't understand what I am... " Monster Girl Quest" is a game where you play as Monster Girl. -After she ran away from her parents' home, she was taken into slavery by two men who forced her to work on the docks. One day, while working at the docks, she witnessed a brutal attack on a woman, which caused her to be raped and killed by one of the men. -During the attack, she managed to survive, and when 8a78ff9644
                  -
                  -
                  -

                  diff --git a/spaces/svjack/ControlNet-Pose-Chinese/annotator/uniformer/mmcv/cnn/bricks/depthwise_separable_conv_module.py b/spaces/svjack/ControlNet-Pose-Chinese/annotator/uniformer/mmcv/cnn/bricks/depthwise_separable_conv_module.py deleted file mode 100644 index 722d5d8d71f75486e2db3008907c4eadfca41d63..0000000000000000000000000000000000000000 --- a/spaces/svjack/ControlNet-Pose-Chinese/annotator/uniformer/mmcv/cnn/bricks/depthwise_separable_conv_module.py +++ /dev/null @@ -1,96 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch.nn as nn - -from .conv_module import ConvModule - - -class DepthwiseSeparableConvModule(nn.Module): - """Depthwise separable convolution module. - - See https://arxiv.org/pdf/1704.04861.pdf for details. - - This module can replace a ConvModule with the conv block replaced by two - conv block: depthwise conv block and pointwise conv block. The depthwise - conv block contains depthwise-conv/norm/activation layers. The pointwise - conv block contains pointwise-conv/norm/activation layers. It should be - noted that there will be norm/activation layer in the depthwise conv block - if `norm_cfg` and `act_cfg` are specified. - - Args: - in_channels (int): Number of channels in the input feature map. - Same as that in ``nn._ConvNd``. - out_channels (int): Number of channels produced by the convolution. - Same as that in ``nn._ConvNd``. - kernel_size (int | tuple[int]): Size of the convolving kernel. - Same as that in ``nn._ConvNd``. - stride (int | tuple[int]): Stride of the convolution. - Same as that in ``nn._ConvNd``. Default: 1. - padding (int | tuple[int]): Zero-padding added to both sides of - the input. Same as that in ``nn._ConvNd``. Default: 0. - dilation (int | tuple[int]): Spacing between kernel elements. - Same as that in ``nn._ConvNd``. Default: 1. - norm_cfg (dict): Default norm config for both depthwise ConvModule and - pointwise ConvModule. Default: None. - act_cfg (dict): Default activation config for both depthwise ConvModule - and pointwise ConvModule. Default: dict(type='ReLU'). - dw_norm_cfg (dict): Norm config of depthwise ConvModule. If it is - 'default', it will be the same as `norm_cfg`. Default: 'default'. - dw_act_cfg (dict): Activation config of depthwise ConvModule. If it is - 'default', it will be the same as `act_cfg`. Default: 'default'. - pw_norm_cfg (dict): Norm config of pointwise ConvModule. If it is - 'default', it will be the same as `norm_cfg`. Default: 'default'. - pw_act_cfg (dict): Activation config of pointwise ConvModule. If it is - 'default', it will be the same as `act_cfg`. Default: 'default'. - kwargs (optional): Other shared arguments for depthwise and pointwise - ConvModule. See ConvModule for ref. - """ - - def __init__(self, - in_channels, - out_channels, - kernel_size, - stride=1, - padding=0, - dilation=1, - norm_cfg=None, - act_cfg=dict(type='ReLU'), - dw_norm_cfg='default', - dw_act_cfg='default', - pw_norm_cfg='default', - pw_act_cfg='default', - **kwargs): - super(DepthwiseSeparableConvModule, self).__init__() - assert 'groups' not in kwargs, 'groups should not be specified' - - # if norm/activation config of depthwise/pointwise ConvModule is not - # specified, use default config. 
- dw_norm_cfg = dw_norm_cfg if dw_norm_cfg != 'default' else norm_cfg - dw_act_cfg = dw_act_cfg if dw_act_cfg != 'default' else act_cfg - pw_norm_cfg = pw_norm_cfg if pw_norm_cfg != 'default' else norm_cfg - pw_act_cfg = pw_act_cfg if pw_act_cfg != 'default' else act_cfg - - # depthwise convolution - self.depthwise_conv = ConvModule( - in_channels, - in_channels, - kernel_size, - stride=stride, - padding=padding, - dilation=dilation, - groups=in_channels, - norm_cfg=dw_norm_cfg, - act_cfg=dw_act_cfg, - **kwargs) - - self.pointwise_conv = ConvModule( - in_channels, - out_channels, - 1, - norm_cfg=pw_norm_cfg, - act_cfg=pw_act_cfg, - **kwargs) - - def forward(self, x): - x = self.depthwise_conv(x) - x = self.pointwise_conv(x) - return x diff --git a/spaces/t13718236382/bingoGPT4/src/components/welcome-screen.tsx b/spaces/t13718236382/bingoGPT4/src/components/welcome-screen.tsx deleted file mode 100644 index f7449fcbb6c621875e235db98f2790bf7894fb0a..0000000000000000000000000000000000000000 --- a/spaces/t13718236382/bingoGPT4/src/components/welcome-screen.tsx +++ /dev/null @@ -1,34 +0,0 @@ -import { useBing } from '@/lib/hooks/use-bing' - -const exampleMessages = [ - { - heading: '🧐 提出复杂问题', - message: `我可以为我挑剔的只吃橙色食物的孩子做什么饭?` - }, - { - heading: '🙌 获取更好的答案', - message: '销量最高的 3 种宠物吸尘器有哪些优点和缺点?' - }, - { - heading: '🎨 获得创意灵感', - message: `以海盗的口吻写一首关于外太空鳄鱼的俳句` - } -] - -export function WelcomeScreen({ setInput }: Pick, 'setInput'>) { - return ( -
                  - {exampleMessages.map(example => ( - - ))} -
                  - ) -} diff --git a/spaces/taesiri/ChatGPT-ImageCaptioner/detic/modeling/roi_heads/detic_roi_heads.py b/spaces/taesiri/ChatGPT-ImageCaptioner/detic/modeling/roi_heads/detic_roi_heads.py deleted file mode 100644 index c87559359e0516443a43ed327110ec55fa4fa307..0000000000000000000000000000000000000000 --- a/spaces/taesiri/ChatGPT-ImageCaptioner/detic/modeling/roi_heads/detic_roi_heads.py +++ /dev/null @@ -1,271 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -import copy -import numpy as np -import json -import math -import torch -from torch import nn -from torch.autograd.function import Function -from typing import Dict, List, Optional, Tuple, Union -from torch.nn import functional as F - -from detectron2.config import configurable -from detectron2.layers import ShapeSpec -from detectron2.layers import batched_nms -from detectron2.structures import Boxes, Instances, pairwise_iou -from detectron2.utils.events import get_event_storage - -from detectron2.modeling.box_regression import Box2BoxTransform -from detectron2.modeling.roi_heads.fast_rcnn import fast_rcnn_inference -from detectron2.modeling.roi_heads.roi_heads import ROI_HEADS_REGISTRY, StandardROIHeads -from detectron2.modeling.roi_heads.cascade_rcnn import CascadeROIHeads, _ScaleGradient -from detectron2.modeling.roi_heads.box_head import build_box_head -from .detic_fast_rcnn import DeticFastRCNNOutputLayers -from ..debug import debug_second_stage - -from torch.cuda.amp import autocast - -@ROI_HEADS_REGISTRY.register() -class DeticCascadeROIHeads(CascadeROIHeads): - @configurable - def __init__( - self, - *, - mult_proposal_score: bool = False, - with_image_labels: bool = False, - add_image_box: bool = False, - image_box_size: float = 1.0, - ws_num_props: int = 512, - add_feature_to_prop: bool = False, - mask_weight: float = 1.0, - one_class_per_proposal: bool = False, - **kwargs, - ): - super().__init__(**kwargs) - self.mult_proposal_score = mult_proposal_score - self.with_image_labels = with_image_labels - self.add_image_box = add_image_box - self.image_box_size = image_box_size - self.ws_num_props = ws_num_props - self.add_feature_to_prop = add_feature_to_prop - self.mask_weight = mask_weight - self.one_class_per_proposal = one_class_per_proposal - - @classmethod - def from_config(cls, cfg, input_shape): - ret = super().from_config(cfg, input_shape) - ret.update({ - 'mult_proposal_score': cfg.MODEL.ROI_BOX_HEAD.MULT_PROPOSAL_SCORE, - 'with_image_labels': cfg.WITH_IMAGE_LABELS, - 'add_image_box': cfg.MODEL.ROI_BOX_HEAD.ADD_IMAGE_BOX, - 'image_box_size': cfg.MODEL.ROI_BOX_HEAD.IMAGE_BOX_SIZE, - 'ws_num_props': cfg.MODEL.ROI_BOX_HEAD.WS_NUM_PROPS, - 'add_feature_to_prop': cfg.MODEL.ROI_BOX_HEAD.ADD_FEATURE_TO_PROP, - 'mask_weight': cfg.MODEL.ROI_HEADS.MASK_WEIGHT, - 'one_class_per_proposal': cfg.MODEL.ROI_HEADS.ONE_CLASS_PER_PROPOSAL, - }) - return ret - - - @classmethod - def _init_box_head(self, cfg, input_shape): - ret = super()._init_box_head(cfg, input_shape) - del ret['box_predictors'] - cascade_bbox_reg_weights = cfg.MODEL.ROI_BOX_CASCADE_HEAD.BBOX_REG_WEIGHTS - box_predictors = [] - for box_head, bbox_reg_weights in zip(ret['box_heads'], \ - cascade_bbox_reg_weights): - box_predictors.append( - DeticFastRCNNOutputLayers( - cfg, box_head.output_shape, - box2box_transform=Box2BoxTransform(weights=bbox_reg_weights) - )) - ret['box_predictors'] = box_predictors - return ret - - - def _forward_box(self, features, proposals, targets=None, - ann_type='box', classifier_info=(None,None,None)): - """ - Add 
mult proposal scores at testing - Add ann_type - """ - if (not self.training) and self.mult_proposal_score: - if len(proposals) > 0 and proposals[0].has('scores'): - proposal_scores = [p.get('scores') for p in proposals] - else: - proposal_scores = [p.get('objectness_logits') for p in proposals] - - features = [features[f] for f in self.box_in_features] - head_outputs = [] # (predictor, predictions, proposals) - prev_pred_boxes = None - image_sizes = [x.image_size for x in proposals] - - for k in range(self.num_cascade_stages): - if k > 0: - proposals = self._create_proposals_from_boxes( - prev_pred_boxes, image_sizes, - logits=[p.objectness_logits for p in proposals]) - if self.training and ann_type in ['box']: - proposals = self._match_and_label_boxes( - proposals, k, targets) - predictions = self._run_stage(features, proposals, k, - classifier_info=classifier_info) - prev_pred_boxes = self.box_predictor[k].predict_boxes( - (predictions[0], predictions[1]), proposals) - head_outputs.append((self.box_predictor[k], predictions, proposals)) - - if self.training: - losses = {} - storage = get_event_storage() - for stage, (predictor, predictions, proposals) in enumerate(head_outputs): - with storage.name_scope("stage{}".format(stage)): - if ann_type != 'box': - stage_losses = {} - if ann_type in ['image', 'caption', 'captiontag']: - image_labels = [x._pos_category_ids for x in targets] - weak_losses = predictor.image_label_losses( - predictions, proposals, image_labels, - classifier_info=classifier_info, - ann_type=ann_type) - stage_losses.update(weak_losses) - else: # supervised - stage_losses = predictor.losses( - (predictions[0], predictions[1]), proposals, - classifier_info=classifier_info) - if self.with_image_labels: - stage_losses['image_loss'] = \ - predictions[0].new_zeros([1])[0] - losses.update({k + "_stage{}".format(stage): v \ - for k, v in stage_losses.items()}) - return losses - else: - # Each is a list[Tensor] of length #image. 
Each tensor is Ri x (K+1) - scores_per_stage = [h[0].predict_probs(h[1], h[2]) for h in head_outputs] - scores = [ - sum(list(scores_per_image)) * (1.0 / self.num_cascade_stages) - for scores_per_image in zip(*scores_per_stage) - ] - if self.mult_proposal_score: - scores = [(s * ps[:, None]) ** 0.5 \ - for s, ps in zip(scores, proposal_scores)] - if self.one_class_per_proposal: - scores = [s * (s == s[:, :-1].max(dim=1)[0][:, None]).float() for s in scores] - predictor, predictions, proposals = head_outputs[-1] - boxes = predictor.predict_boxes( - (predictions[0], predictions[1]), proposals) - pred_instances, _ = fast_rcnn_inference( - boxes, - scores, - image_sizes, - predictor.test_score_thresh, - predictor.test_nms_thresh, - predictor.test_topk_per_image, - ) - return pred_instances - - - def forward(self, images, features, proposals, targets=None, - ann_type='box', classifier_info=(None,None,None)): - ''' - enable debug and image labels - classifier_info is shared across the batch - ''' - if self.training: - if ann_type in ['box', 'prop', 'proptag']: - proposals = self.label_and_sample_proposals( - proposals, targets) - else: - proposals = self.get_top_proposals(proposals) - - losses = self._forward_box(features, proposals, targets, \ - ann_type=ann_type, classifier_info=classifier_info) - if ann_type == 'box' and targets[0].has('gt_masks'): - mask_losses = self._forward_mask(features, proposals) - losses.update({k: v * self.mask_weight \ - for k, v in mask_losses.items()}) - losses.update(self._forward_keypoint(features, proposals)) - else: - losses.update(self._get_empty_mask_loss( - features, proposals, - device=proposals[0].objectness_logits.device)) - return proposals, losses - else: - pred_instances = self._forward_box( - features, proposals, classifier_info=classifier_info) - pred_instances = self.forward_with_given_boxes(features, pred_instances) - return pred_instances, {} - - - def get_top_proposals(self, proposals): - for i in range(len(proposals)): - proposals[i].proposal_boxes.clip(proposals[i].image_size) - proposals = [p[:self.ws_num_props] for p in proposals] - for i, p in enumerate(proposals): - p.proposal_boxes.tensor = p.proposal_boxes.tensor.detach() - if self.add_image_box: - proposals[i] = self._add_image_box(p) - return proposals - - - def _add_image_box(self, p): - image_box = Instances(p.image_size) - n = 1 - h, w = p.image_size - f = self.image_box_size - image_box.proposal_boxes = Boxes( - p.proposal_boxes.tensor.new_tensor( - [w * (1. - f) / 2., - h * (1. - f) / 2., - w * (1. - (1. - f) / 2.), - h * (1. - (1. 
- f) / 2.)] - ).view(n, 4)) - image_box.objectness_logits = p.objectness_logits.new_ones(n) - return Instances.cat([p, image_box]) - - - def _get_empty_mask_loss(self, features, proposals, device): - if self.mask_on: - return {'loss_mask': torch.zeros( - (1, ), device=device, dtype=torch.float32)[0]} - else: - return {} - - - def _create_proposals_from_boxes(self, boxes, image_sizes, logits): - """ - Add objectness_logits - """ - boxes = [Boxes(b.detach()) for b in boxes] - proposals = [] - for boxes_per_image, image_size, logit in zip( - boxes, image_sizes, logits): - boxes_per_image.clip(image_size) - if self.training: - inds = boxes_per_image.nonempty() - boxes_per_image = boxes_per_image[inds] - logit = logit[inds] - prop = Instances(image_size) - prop.proposal_boxes = boxes_per_image - prop.objectness_logits = logit - proposals.append(prop) - return proposals - - - def _run_stage(self, features, proposals, stage, \ - classifier_info=(None,None,None)): - """ - Support classifier_info and add_feature_to_prop - """ - pool_boxes = [x.proposal_boxes for x in proposals] - box_features = self.box_pooler(features, pool_boxes) - box_features = _ScaleGradient.apply(box_features, 1.0 / self.num_cascade_stages) - box_features = self.box_head[stage](box_features) - if self.add_feature_to_prop: - feats_per_image = box_features.split( - [len(p) for p in proposals], dim=0) - for feat, p in zip(feats_per_image, proposals): - p.feat = feat - return self.box_predictor[stage]( - box_features, - classifier_info=classifier_info) diff --git a/spaces/taesiri/ChatGPT-ImageCaptioner/tools/remove_lvis_rare.py b/spaces/taesiri/ChatGPT-ImageCaptioner/tools/remove_lvis_rare.py deleted file mode 100644 index 06e4e881bfa50e2cd74747511a3ad2e8676e0c70..0000000000000000000000000000000000000000 --- a/spaces/taesiri/ChatGPT-ImageCaptioner/tools/remove_lvis_rare.py +++ /dev/null @@ -1,20 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -import argparse -import json - -if __name__ == '__main__': - parser = argparse.ArgumentParser() - parser.add_argument('--ann', default='datasets/lvis/lvis_v1_train.json') - args = parser.parse_args() - - print('Loading', args.ann) - data = json.load(open(args.ann, 'r')) - catid2freq = {x['id']: x['frequency'] for x in data['categories']} - print('ori #anns', len(data['annotations'])) - exclude = ['r'] - data['annotations'] = [x for x in data['annotations'] \ - if catid2freq[x['category_id']] not in exclude] - print('filtered #anns', len(data['annotations'])) - out_path = args.ann[:-5] + '_norare.json' - print('Saving to', out_path) - json.dump(data, open(out_path, 'w')) diff --git a/spaces/terfces0erbo/CollegeProjectV2/Bibliocad Vip Account Crack UPD.md b/spaces/terfces0erbo/CollegeProjectV2/Bibliocad Vip Account Crack UPD.md deleted file mode 100644 index bcdfa86d80b51150c6960955b80edbf8dff9c4ac..0000000000000000000000000000000000000000 --- a/spaces/terfces0erbo/CollegeProjectV2/Bibliocad Vip Account Crack UPD.md +++ /dev/null @@ -1,50 +0,0 @@ -

                  bibliocad vip account crack


Download https://bytlly.com/2uGl0j



                  - -iphone - -Call girls outcall - -Video girl sex in Sreemangal - -Sex vedeni nachke sutty nya - -Bisexual bitches in Wrexham - -Horny girl in San Francisco - -Video girls who are doing sex in louisville - -When you can give it on, love lady who likes sweet and sexy ass pictures and who has a beautiful body and that loves to have sex with a man that is understanding, hard and very generous with money. - -Stockholm escorts classifieds in Neuquén - -The World's Guide to Porn is easy to find. Stockholms escort annonser Örebro escort. Browse and use the best sites at the best prices. - -But since, it is a lot more than simply "looking through the pages of a very long catalog" as most sex web sites do. No doubt, at some point, the average person will want to know more than "who they are looking at. I think most of us can agree that the porn industry could use more intelligence when it comes to designing, filming, modeling and writing. At iVIP you will find porn videos of every type and every niche. All of the videos have been hand-picked by our porn editors and are full-length.Rebuild operation is completed in Nevada, West Virginia, Pennsylvania, Indiana, Virginia, Georgia, and California. - -After years of political wrangling, the Federal Emergency Management Agency has completed its planning, construction and post-disaster evaluation of repairs to more than 2,000 homes and businesses affected by the April 27 tornado. - -“Rebuild” is the name of FEMA’s national plan for the immediate and long-term needs of the affected communities. The plan was first implemented by FEMA in Texas after Hurricane Ike. - -There are six general areas of focus in the FEMA Rebuild operation: - -Improving the safety of homes and business - -Providing much-needed temporary housing - -Stabilizing and repairing critical infrastructure - -Economic recovery - -Creating opportunities for small businesses to bounce back - -Aiding residents to move back into their own homes - -As the operation has moved forward, FEMA has been making progress in some affected areas and has identified specific milestones. - -Click on each of the state links below for information about the Rebuild operation in each state.// - -// Copyright (c) 2008-2015 the Urho 4fefd39f24
                  -
                  -
                  -

                  diff --git a/spaces/terfces0erbo/CollegeProjectV2/Eiffel Tower Blueprints Pdf Download.md b/spaces/terfces0erbo/CollegeProjectV2/Eiffel Tower Blueprints Pdf Download.md deleted file mode 100644 index 12691b3da89735af81babdd84e4282b3ae496038..0000000000000000000000000000000000000000 --- a/spaces/terfces0erbo/CollegeProjectV2/Eiffel Tower Blueprints Pdf Download.md +++ /dev/null @@ -1,6 +0,0 @@ -

                  eiffel tower blueprints pdf download


                  Download File ✑ ✑ ✑ https://bytlly.com/2uGjCQ



                  -
                  -Postcode and map search; Where it operates; Charging times; Download maps; When ULEZ operates; ULEZ expansion - October 2021; Discounts and ... 1fdad05405
                  -
                  -
                  -

                  diff --git a/spaces/terfces0erbo/CollegeProjectV2/HDClone Professional V3.9.4-DOA Download VERIFIED.md b/spaces/terfces0erbo/CollegeProjectV2/HDClone Professional V3.9.4-DOA Download VERIFIED.md deleted file mode 100644 index ec2795aeb3bf14aca961febaf621022dcb3cbbfb..0000000000000000000000000000000000000000 --- a/spaces/terfces0erbo/CollegeProjectV2/HDClone Professional V3.9.4-DOA Download VERIFIED.md +++ /dev/null @@ -1,8 +0,0 @@ -
                  -

                  image links. hdclone.professional.v3.9.4.exe. 149731. 150664.4-doa. 10.29 mb. 11.09.2015. 1. business or personal. 2. customer. 3. developer.. hdclone professional edition v 3.4 retail crack. #title: vlc media player portable 2.0.3. free download. hdclone professional edition 3. #title:hdclone professional edition 3.4 crack. #tags:hdclone,professional. get more hdclone professional 3.4 ide & docs. you are about to download hdclone professional 3.4 ide & docs files. click on the link below and start downloading hdclone professional. 9. copyright.sauron vhd media studio 6 pro [2011.6, eng] latest free key 2018 [updated] lumiahdr.rar.

                  -

                  disclaimer: our lives depend on our knowledge and we are trying to provide our best effort to help you. save your money and enjoy hdclone professional 3.9.4-doa. enjoy your crack. get hdclone professional 3.4. serial-key. .

                  -

                  HDClone Professional v3.9.4-DOA download


                  DOWNLOAD ►►► https://bytlly.com/2uGj7i



                  -

This is the new version of HDClone Professional. Let's make everyone happy! The bugs are fixed and the new features are implemented. And above all, there is HDClone Professional v4.0. The price for the full 864 MB is 16 euro per license.

                  -

                  all our efforts combined are the result of your wish to work for a smaller license at a normal price! it is the future of the digital world, a world that is constantly.. forecast. while waiting on it, you will need the best utility to analyze, edit and copy your. what about the troubles? remember that at hdclone software we have a technical support team that answers all your. pro.zend.framework.techniques.sep.2009-attica.rar. python.3.for.absolute.txt. hdclone.professional.v3.9.4-doa.exe (6924kb) download the file and see what you've. , if the virus is more than 5 minutes. the program was diagnosed as a virus by various antivirus programs, including .

                  899543212b
                  -
                  -
                  \ No newline at end of file diff --git a/spaces/terfces0erbo/CollegeProjectV2/HoRNet SongKey MKIII V3.0.2 WiN OSX.md b/spaces/terfces0erbo/CollegeProjectV2/HoRNet SongKey MKIII V3.0.2 WiN OSX.md deleted file mode 100644 index e30fb8569b73e5fae23120b3bc4a8246e0fd3c74..0000000000000000000000000000000000000000 --- a/spaces/terfces0erbo/CollegeProjectV2/HoRNet SongKey MKIII V3.0.2 WiN OSX.md +++ /dev/null @@ -1,6 +0,0 @@ -

                  HoRNet SongKey MKIII v3.0.2 WiN OSX


                  Download File ——— https://bytlly.com/2uGkus



                  - -HoRNet SongKey MKIII v3.0.2 WiN MAC · September 10, 2019 baomay01. Download. » Read more · 2 comments HoRNet SongKey, SongKey MKIII v3.0.2 ... 1fdad05405
                  -
                  -
                  -

                  diff --git a/spaces/test-org-q/stable-diffusion/share_btn.py b/spaces/test-org-q/stable-diffusion/share_btn.py deleted file mode 100644 index 6337110ac0cf970ccc716e7b7577ad9768fe5cec..0000000000000000000000000000000000000000 --- a/spaces/test-org-q/stable-diffusion/share_btn.py +++ /dev/null @@ -1,68 +0,0 @@ -community_icon_html = """""" - -loading_icon_html = """""" - -share_js = """async () => { - async function uploadFile(file){ - const UPLOAD_URL = 'https://huggingface.co/uploads'; - const response = await fetch(UPLOAD_URL, { - method: 'POST', - headers: { - 'Content-Type': file.type, - 'X-Requested-With': 'XMLHttpRequest', - }, - body: file, /// <- File inherits from Blob - }); - const url = await response.text(); - return url; - } - - const gradioEl = document.querySelector('body > gradio-app'); - const imgEls = gradioEl.querySelectorAll('#gallery img'); - const promptTxt = gradioEl.querySelector('#prompt-text-input input').value; - const shareBtnEl = gradioEl.querySelector('#share-btn'); - const shareIconEl = gradioEl.querySelector('#share-btn-share-icon'); - const loadingIconEl = gradioEl.querySelector('#share-btn-loading-icon'); - - if(!imgEls.length){ - return; - }; - - shareBtnEl.style.pointerEvents = 'none'; - shareIconEl.style.display = 'none'; - loadingIconEl.style.removeProperty('display'); - - const files = await Promise.all( - [...imgEls].map(async (imgEl) => { - const res = await fetch(imgEl.src); - const blob = await res.blob(); - const imgId = Date.now() % 200; - const fileName = `diffuse-the-rest-${{imgId}}.png`; - return new File([blob], fileName, { type: 'image/png' }); - }) - ); - - const urls = await Promise.all(files.map((f) => uploadFile(f))); - const htmlImgs = urls.map(url => ``); - const descriptionMd = `
                  -${htmlImgs.join(`\n`)} -
                  `; - - const params = new URLSearchParams({ - title: promptTxt, - description: descriptionMd, - }); - - const paramsStr = params.toString(); - window.open(`https://huggingface.co/spaces/stabilityai/stable-diffusion/discussions/new?${paramsStr}`, '_blank'); - - shareBtnEl.style.removeProperty('pointer-events'); - shareIconEl.style.removeProperty('display'); - loadingIconEl.style.display = 'none'; -}""" \ No newline at end of file diff --git a/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/BSEB Matric Exam 2022 Dummy Admit Card Released Download and Check Details Here.md b/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/BSEB Matric Exam 2022 Dummy Admit Card Released Download and Check Details Here.md deleted file mode 100644 index a802886a52672cc77b1cd66ebf0c7d8343e7a081..0000000000000000000000000000000000000000 --- a/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/BSEB Matric Exam 2022 Dummy Admit Card Released Download and Check Details Here.md +++ /dev/null @@ -1,80 +0,0 @@ - -

                  Bihar Board Dummy Admit Card Download: Everything You Need to Know

                  -

                  If you are a student of class 10th or 12th in Bihar, you must be aware of the Bihar School Examination Board (BSEB) and its annual board exams. But do you know what a dummy admit card is and why it is important for you? In this article, we will tell you everything you need to know about the Bihar Board dummy admit card download process, how to check and correct your details, and what to do after downloading it. Read on to find out more.

                  -

                  How to Download Bihar Board Dummy Admit Card for Class 10th and 12th?

                  -

                  The BSEB releases the dummy admit cards for class 10th and 12th students every year before the final admit cards are issued. The dummy admit cards are meant for verification purposes only, and they allow the students to check their personal information, such as name, photograph, signature, and other details, on the admit card. In case of any discrepancies, the students can contact the board for correction before the final admit cards are released.

                  -

                  bihar board dummy admit card download


                  Download ✓✓✓ https://bltlly.com/2uOq0m



                  -

                  The Bihar Board dummy admit card download process is very simple and can be done online from the official website of the board. Here are the steps to follow:

                  -
                    -
1. Visit the official website of BSEB – biharboardonline.com.
2. Go to the Secondary/Senior Secondary Section.
3. Click on the Dummy Admit Card 2022 link.
4. Enter your Student Name, Father's Name, Date of Birth, and School/College Code, and click on View.
5. Your Dummy Admit Card 2022 will appear on the screen.
6. Download the admit card and take a printout of it.
7. You can also save the PDF of the Dummy Admit Card for future reference.
                  -

                  You can also use the direct links given below to download your dummy admit card:

                  - -

                  How to Check and Correct Bihar Board Dummy Admit Card Details?

                  -

                  After downloading your dummy admit card, you should carefully check all the details mentioned on it. If you find any error or mistake, you should immediately contact your school or college authority or the BSEB helpline number for correction. You can also apply for correction online by following these steps:

                  -
                    -
1. Visit the official website of BSEB – biharboardonline.com.
2. Go to the Secondary/Senior Secondary Section.
3. Click on the Correction in Dummy Admit Card link.
4. Log in with your User ID and Password.
5. Select the details that you want to correct and make the necessary changes.
6. Submit your request and take a printout of the confirmation page.
                  -

The last date for correction in the dummy admit card is October 25, 2021. Make sure that your details are correct before this date, as no changes will be allowed after it.

                  -

                  What to Do After Downloading Bihar Board Dummy Admit Card?

                  -

Once you have downloaded your dummy admit card and verified your details, keep it safe until the final admit card is released. The final admit card will be issued by the BSEB in January 2022, and you will be able to download it from the same website as the dummy admit card. You will need to carry the final admit card to the exam center along with valid ID proof. The final admit card will have the same details as the dummy admit card, except for the exam center name and code, which will be allotted by the board later.

                  -

                  The Bihar Board class 10th and 12th exams are expected to be held in February 2022. You should start preparing for the exams well in advance and revise all the topics thoroughly. You should also practice solving previous year papers and sample papers to get familiar with the exam pattern and difficulty level. You can also refer to the Bihar Board syllabus and exam schedule for more information.

                  -

                  How to download bihar board dummy admit card 2022
                  -Bihar board 10th dummy admit card 2022 online
                  -Bihar board 12th dummy admit card 2022 release date
                  -Bihar board inter dummy admit card 2022 download link
                  -Bihar board matric dummy admit card 2022 correction
                  -Bihar board online dummy admit card 2023 for class 10th and 12th
                  -Bihar board senior secondary dummy admit card 2022 login
                  -Bihar board secondary dummy admit card 2022 website
                  -Bihar board intermediate dummy admit card 2022 pdf
                  -Bihar board class 10th dummy admit card 2022 biharboardonline.com
                  -Bihar board class 12th dummy admit card 2022 seniorsecondary.biharboardonline.com
                  -Bihar board exam 2023 dummy admit card download process
                  -Bihar board exam 2022 dummy admit card download steps
                  -Bihar board exam date 2023 and dummy admit card details
                  -Bihar board exam date 2022 and dummy admit card information
                  -Bihar board registration form for dummy admit card 2023
                  -Bihar board registration form for dummy admit card 2022
                  -Bihar board registration fee for dummy admit card download
                  -Bihar board registration last date for dummy admit card correction
                  -Bihar board registration status for dummy admit card verification
                  -BSEB bihar board dummy admit card download for matric and inter students
                  -BSEB bihar board dummy admit card download for class 10th and class 12th students
                  -BSEB bihar board dummy admit card download for annual examination 2023
                  -BSEB bihar board dummy admit card download for annual examination 2022
                  -BSEB bihar board dummy admit card download official website link
                  -BSEB bihar board online.com dummy admit card download portal
                  -BSEB bihar board online.bihar.gov.in dummy admit card download page
                  -BSEB bihar board online.in Reg18 Login.aspx dummy admit card download site
                  -BSEB bihar board online.in Reg19 Login.aspx dummy admit card download site
                  -BSEB bihar board online.in Reg20 Login.aspx dummy admit card download site

                  -

                  Frequently Asked Questions About Bihar Board Dummy Admit Card

                  -

                  Here are some of the common questions that students may have about the Bihar Board dummy admit card:

                  -

                  Q1. What is the purpose of the dummy admit card?

                  -

                  A1. The dummy admit card is a provisional document that allows the students to check and verify their personal details on the admit card before the final admit card is issued. It helps to avoid any errors or discrepancies in the final admit card, which may cause problems during the board exams.

                  -

                  Q2. How can I download my dummy admit card if I forget my login details?

                  -

                  A2. If you forget your login details, you can use your student name, father’s name, date of birth, and school/college code to download your dummy admit card from the official website of BSEB. You can also contact your school or college authority or the BSEB helpline number for assistance.

                  -

                  Q3. Can I change my exam center or medium of instruction through the dummy admit card?

                  -

                  A3. No, you cannot change your exam center or medium of instruction through the dummy admit card. The exam center and medium of instruction are decided by the board and cannot be changed by the students. You can only correct your personal details, such as name, photograph, signature, etc., through the dummy admit card.

                  -

                  Q4. What is the difference between the dummy admit card and the final admit card?

                  -

                  A4. The dummy admit card and the final admit card are both issued by the BSEB for class 10th and 12th students. The dummy admit card is released before the final admit card and is meant for verification purposes only. The final admit card is released after the correction process is over and is mandatory for appearing in the board exams. The final admit card has the same details as the dummy admit card, except for the exam center name and code, which are allotted by the board later.

                  -

                  Q5. What if I lose my dummy admit card or final admit card?

                  -

A5. If you lose your dummy admit card or final admit card, do not panic; contact your school or college authority or the BSEB helpline number immediately. You can also download a duplicate copy of your dummy admit card or final admit card from the official website of BSEB by using your login details or your student name, father's name, date of birth, and school/college code.

                  -

                  I hope this article has helped you understand everything about the Bihar Board dummy admit card download process, how to check and correct your details, and what to do after downloading it. If you have any queries or suggestions, please feel free to leave a comment below. All the best for your board exams!

                  197e85843d
                  -
                  -
                  \ No newline at end of file diff --git a/spaces/tioseFevbu/cartoon-converter/scripts/Artforms 11th Edition Pdf Download VERIFIED.md b/spaces/tioseFevbu/cartoon-converter/scripts/Artforms 11th Edition Pdf Download VERIFIED.md deleted file mode 100644 index 090eec7eb6b7049ab7c229033679641ed36230bd..0000000000000000000000000000000000000000 --- a/spaces/tioseFevbu/cartoon-converter/scripts/Artforms 11th Edition Pdf Download VERIFIED.md +++ /dev/null @@ -1,16 +0,0 @@ -
                  -

                  Prebles' Artforms (11th Edition): A Comprehensive Guide to the World of Art

                  -

                  Are you looking for a book that will help you explore the reasons for creating art, the motivation for individual artists, and how art impacts its audience? If so, you might be interested in Prebles' Artforms (11th Edition), a textbook that covers a wide range of topics and themes related to art history, theory, and practice.

                  -

Prebles' Artforms (11th Edition) is written by Duane Preble, Sarah Preble, and Patrick L. Frank, who are experts in their fields and have extensive teaching experience. This edition is one of the most exhaustive revisions in the book's history, reflecting the dynamic environment of the art world today. It incorporates new scholarly research, responds to changing pedagogical needs, and covers recent work by artists around the world.

                  -

                  artforms 11th edition pdf download


                  DOWNLOAD ✦✦✦ https://urlcod.com/2uHx6M



                  -

                  The book is organized into six parts: The Language of Visual Experience, The Media of Art, Art as Cultural Heritage, The Modern World, The Postmodern World, and The Themes of Art. Each part contains chapters that introduce key concepts, terms, and examples of artworks from different periods, cultures, and styles. The book also features engaging activities and assessment tools that help students experience and interact with art. Some of these include ART 21 videos, Studio Technique videos, and Closer Look tours of works of art.

                  -

                  Prebles' Artforms (11th Edition) is a comprehensive and accessible guide that will enrich your understanding and appreciation of art. Whether you are a student, a teacher, or a general reader, you will find this book to be a valuable resource for learning about the diverse and fascinating world of art.

                  In this article, we will provide a brief overview of some of the main topics and themes that Prebles' Artforms (11th Edition) covers. We will also highlight some of the features and benefits of using this book as a learning tool.

                  -

                  The Language of Visual Experience

                  -

                  The first part of the book introduces the basic elements and principles of visual art, such as line, shape, color, texture, balance, rhythm, and proportion. It also explains how artists use these elements and principles to create meaning and expression in their works. The book provides examples of artworks from different cultures and historical periods that illustrate how artists manipulate visual language to communicate ideas and emotions.

                  -

                  The Media of Art

                  -

                  The second part of the book explores the various materials and techniques that artists use to create art, such as drawing, painting, printmaking, photography, sculpture, installation, performance, and digital media. It also discusses how the choice of media affects the form and content of art, as well as how the media evolves over time in response to social and technological changes. The book showcases artworks from different genres and styles that demonstrate the diversity and creativity of artistic media.

                  -

                  Art as Cultural Heritage

                  -

                  The third part of the book examines the role of art in different cultural contexts, such as religion, politics, society, and identity. It also analyzes how art reflects and influences the values, beliefs, and traditions of various civilizations and regions, such as ancient Egypt, Greece, Rome, China, India, Africa, the Americas, Europe, and Oceania. The book presents artworks from different periods and movements that reveal the cultural diversity and complexity of art history.

                  -

                  7b8c122e87
                  -
                  -
                  \ No newline at end of file diff --git a/spaces/tioseFevbu/cartoon-converter/scripts/Ebsoft AnnuCapt Belgique V14c Bilingualrar.md b/spaces/tioseFevbu/cartoon-converter/scripts/Ebsoft AnnuCapt Belgique V14c Bilingualrar.md deleted file mode 100644 index 03e0e70d67e807e7e854896c80a6c8c5cf4fde22..0000000000000000000000000000000000000000 --- a/spaces/tioseFevbu/cartoon-converter/scripts/Ebsoft AnnuCapt Belgique V14c Bilingualrar.md +++ /dev/null @@ -1,17 +0,0 @@ - -

                  Ebsoft AnnuCapt Belgique: A Powerful Tool for Prospecting in Belgium

                  -

If you are looking for a way to create targeted and qualified prospect lists in Belgium, you might want to check out Ebsoft AnnuCapt Belgique. This software allows you to extract data from various online directories, such as Pages Jaunes, and save it in Excel or CSV format. You can also access several complete files and more than 800,000 emails for B2B or B2C prospecting in France or abroad.

                  -

Ebsoft AnnuCapt Belgique is bilingual software that supports both French and Dutch. It is easy to use and has a fast, reliable extraction process. You can customize your searches by activity, location, category, or keywords, and filter the results by name, address, phone number, email, website, or any other field. You can then export the data to your CRM or email marketing software.

                  -

                  Ebsoft AnnuCapt Belgique V14c Bilingualrar


Download Zip https://urlcod.com/2uHxCi



                  -

                  Ebsoft AnnuCapt Belgique is a one-time purchase with no subscription or hidden fees. You can use it as much as you want and benefit from free updates and email support. The software is compatible with Windows XP, Vista, 7, 8, and 10. You can download a free trial version from the official website[^5^] or order the full version with a special offer[^6^].

                  -

                  With Ebsoft AnnuCapt Belgique, you can create your own prospect files for free and boost your sales performance. Whether you are a small business owner, a freelancer, a marketer, or a salesperson, you will find this software useful and efficient. Don't miss this opportunity to grow your business in Belgium and beyond.

                  - -

                  Ebsoft AnnuCapt Belgique is not only a data extraction tool, but also a data enrichment tool. You can use it to verify, update, or complete your existing prospect files. You can also use it to find new prospects from other sources, such as social media, websites, or blogs. You can then merge and deduplicate the data to create a clean and accurate database.

                  -

Ebsoft AnnuCapt Belgique is also secure and ethical software. It respects the privacy and data protection laws of the countries where it operates, and it does not collect or store any personal data without the owners' consent. It does not spam or harass prospects with unwanted messages; it only provides you with information that is publicly available and relevant to your business.

                  -

Ebsoft AnnuCapt Belgique is trusted and reputable software that has been on the market for more than 14 years. It has been used by thousands of customers from various industries and sectors. Testimonials and references are available on the official website and on YouTube, and you can contact customer service by email or phone if you have any questions or issues.

                  - -

                  If you are ready to take your prospecting to the next level, you should try Ebsoft AnnuCapt Belgique today. You can download the free trial version and see for yourself how easy and effective it is. You can also take advantage of the special offer and get the full version with the option to access the complete files and emails. You will be amazed by how much time and money you can save with this software.

                  -

                  -

                  Ebsoft AnnuCapt Belgique is the ultimate solution for creating and managing your prospect files in Belgium. It is a powerful, reliable, and affordable software that will help you grow your business and reach your goals. Don't hesitate any longer and order Ebsoft AnnuCapt Belgique now. You won't regret it.

                  7196e7f11a
                  -
                  -
                  \ No newline at end of file diff --git a/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/setuptools/command/register.py b/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/setuptools/command/register.py deleted file mode 100644 index b8266b9a60f8c363ba35f7b73befd7c9c7cb4abc..0000000000000000000000000000000000000000 --- a/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/setuptools/command/register.py +++ /dev/null @@ -1,18 +0,0 @@ -from distutils import log -import distutils.command.register as orig - -from setuptools.errors import RemovedCommandError - - -class register(orig.register): - """Formerly used to register packages on PyPI.""" - - def run(self): - msg = ( - "The register command has been removed, use twine to upload " - + "instead (https://pypi.org/p/twine)" - ) - - self.announce("ERROR: " + msg, log.ERROR) - - raise RemovedCommandError(msg) diff --git a/spaces/tmabraham/horse2zebra_cyclegan/README.md b/spaces/tmabraham/horse2zebra_cyclegan/README.md deleted file mode 100644 index ed102478daca9e538d717945e05a9f99a7028aa7..0000000000000000000000000000000000000000 --- a/spaces/tmabraham/horse2zebra_cyclegan/README.md +++ /dev/null @@ -1,9 +0,0 @@ ---- -title: Horse-to-Zebra CycleGAN with UPIT -emoji: 🐴 -colorFrom: yellow -colorTo: orange -sdk: gradio -app_file: app.py -pinned: false ---- \ No newline at end of file diff --git a/spaces/tomofi/MMOCR/mmocr/models/textrecog/recognizer/base.py b/spaces/tomofi/MMOCR/mmocr/models/textrecog/recognizer/base.py deleted file mode 100644 index 4c22fa9072104ba3cfe8fe83135e305ccea2edd1..0000000000000000000000000000000000000000 --- a/spaces/tomofi/MMOCR/mmocr/models/textrecog/recognizer/base.py +++ /dev/null @@ -1,232 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import warnings -from abc import ABCMeta, abstractmethod -from collections import OrderedDict - -import mmcv -import torch -import torch.distributed as dist -from mmcv.runner import BaseModule, auto_fp16 - -from mmocr.core import imshow_text_label - - -class BaseRecognizer(BaseModule, metaclass=ABCMeta): - """Base class for text recognition.""" - - def __init__(self, init_cfg=None): - super().__init__(init_cfg=init_cfg) - self.fp16_enabled = False - - @abstractmethod - def extract_feat(self, imgs): - """Extract features from images.""" - pass - - @abstractmethod - def forward_train(self, imgs, img_metas, **kwargs): - """ - Args: - img (tensor): tensors with shape (N, C, H, W). - Typically should be mean centered and std scaled. - img_metas (list[dict]): List of image info dict where each dict - has: 'img_shape', 'scale_factor', 'flip', and may also contain - 'filename', 'ori_shape', 'pad_shape', and 'img_norm_cfg'. - For details of the values of these keys, see - :class:`mmdet.datasets.pipelines.Collect`. - kwargs (keyword arguments): Specific to concrete implementation. - """ - pass - - @abstractmethod - def simple_test(self, img, img_metas, **kwargs): - pass - - @abstractmethod - def aug_test(self, imgs, img_metas, **kwargs): - """Test function with test time augmentation. - - Args: - imgs (list[tensor]): Tensor should have shape NxCxHxW, - which contains all images in the batch. - img_metas (list[list[dict]]): The metadata of images. - """ - pass - - def forward_test(self, imgs, img_metas, **kwargs): - """ - Args: - imgs (tensor | list[tensor]): Tensor should have shape NxCxHxW, - which contains all images in the batch. 
- img_metas (list[dict] | list[list[dict]]): - The outer list indicates images in a batch. - """ - if isinstance(imgs, list): - assert len(imgs) > 0 - assert imgs[0].size(0) == 1, ('aug test does not support ' - f'inference with batch size ' - f'{imgs[0].size(0)}') - assert len(imgs) == len(img_metas) - return self.aug_test(imgs, img_metas, **kwargs) - - return self.simple_test(imgs, img_metas, **kwargs) - - @auto_fp16(apply_to=('img', )) - def forward(self, img, img_metas, return_loss=True, **kwargs): - """Calls either :func:`forward_train` or :func:`forward_test` depending - on whether ``return_loss`` is ``True``. - - Note that img and img_meta are single-nested (i.e. tensor and - list[dict]). - """ - - if return_loss: - return self.forward_train(img, img_metas, **kwargs) - - if isinstance(img, list): - for idx, each_img in enumerate(img): - if each_img.dim() == 3: - img[idx] = each_img.unsqueeze(0) - else: - if len(img_metas) == 1 and isinstance(img_metas[0], list): - img_metas = img_metas[0] - - return self.forward_test(img, img_metas, **kwargs) - - def _parse_losses(self, losses): - """Parse the raw outputs (losses) of the network. - - Args: - losses (dict): Raw outputs of the network, which usually contain - losses and other necessary information. - - Returns: - tuple[tensor, dict]: (loss, log_vars), loss is the loss tensor - which may be a weighted sum of all losses, log_vars contains - all the variables to be sent to the logger. - """ - log_vars = OrderedDict() - for loss_name, loss_value in losses.items(): - if isinstance(loss_value, torch.Tensor): - log_vars[loss_name] = loss_value.mean() - elif isinstance(loss_value, list): - log_vars[loss_name] = sum(_loss.mean() for _loss in loss_value) - else: - raise TypeError( - f'{loss_name} is not a tensor or list of tensors') - - loss = sum(_value for _key, _value in log_vars.items() - if 'loss' in _key) - - log_vars['loss'] = loss - for loss_name, loss_value in log_vars.items(): - # reduce loss when distributed training - if dist.is_available() and dist.is_initialized(): - loss_value = loss_value.data.clone() - dist.all_reduce(loss_value.div_(dist.get_world_size())) - log_vars[loss_name] = loss_value.item() - - return loss, log_vars - - def train_step(self, data, optimizer): - """The iteration step during training. - - This method defines an iteration step during training, except for the - back propagation and optimizer update, which are done by an optimizer - hook. Note that in some complicated cases or models (e.g. GAN), - the whole process (including the back propagation and optimizer update) - is also defined by this method. - - Args: - data (dict): The outputs of dataloader. - optimizer (:obj:`torch.optim.Optimizer` | dict): The optimizer of - runner is passed to ``train_step()``. This argument is unused - and reserved. - - Returns: - dict: It should contain at least 3 keys: ``loss``, ``log_vars``, - ``num_samples``. - - - ``loss`` is a tensor for back propagation, which is a - weighted sum of multiple losses. - - ``log_vars`` contains all the variables to be sent to the - logger. - - ``num_samples`` indicates the batch size used for - averaging the logs (Note: for the - DDP model, num_samples refers to the batch size for each GPU). - """ - losses = self(**data) - loss, log_vars = self._parse_losses(losses) - - outputs = dict( - loss=loss, log_vars=log_vars, num_samples=len(data['img_metas'])) - - return outputs - - def val_step(self, data, optimizer): - """The iteration step during validation. 
- - This method shares the same signature as :func:`train_step`, but is - used during val epochs. Note that the evaluation after training epochs - is not implemented by this method, but by an evaluation hook. - """ - losses = self(**data) - loss, log_vars = self._parse_losses(losses) - - outputs = dict( - loss=loss, log_vars=log_vars, num_samples=len(data['img_metas'])) - - return outputs - - def show_result(self, - img, - result, - gt_label='', - win_name='', - show=False, - wait_time=0, - out_file=None, - **kwargs): - """Draw `result` on `img`. - - Args: - img (str or tensor): The image to be displayed. - result (dict): The results to draw on `img`. - gt_label (str): Ground truth label of img. - win_name (str): The window name. - wait_time (int): Value of waitKey param. - Default: 0. - show (bool): Whether to show the image. - Default: False. - out_file (str or None): The output filename. - Default: None. - - Returns: - img (tensor): Only if not `show` or `out_file`. - """ - img = mmcv.imread(img) - img = img.copy() - pred_label = None - if 'text' in result.keys(): - pred_label = result['text'] - - # if out_file specified, do not show image in window - if out_file is not None: - show = False - # draw text label - if pred_label is not None: - img = imshow_text_label( - img, - pred_label, - gt_label, - show=show, - win_name=win_name, - wait_time=wait_time, - out_file=out_file) - - if not (show or out_file): - warnings.warn('show==False and out_file is not specified, only ' - 'result image will be returned') - return img - - return img diff --git a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/configs/faster_rcnn/faster_rcnn_r50_caffe_fpn_mstrain_90k_coco.py b/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/configs/faster_rcnn/faster_rcnn_r50_caffe_fpn_mstrain_90k_coco.py deleted file mode 100644 index 74dca24f26422967501e7ba31c3f39ca324e031c..0000000000000000000000000000000000000000 --- a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/configs/faster_rcnn/faster_rcnn_r50_caffe_fpn_mstrain_90k_coco.py +++ /dev/null @@ -1,15 +0,0 @@ -_base_ = 'faster_rcnn_r50_caffe_fpn_mstrain_1x_coco.py' - -# learning policy -lr_config = dict( - policy='step', - warmup='linear', - warmup_iters=500, - warmup_ratio=0.001, - step=[60000, 80000]) - -# Runner type -runner = dict(_delete_=True, type='IterBasedRunner', max_iters=90000) - -checkpoint_config = dict(interval=10000) -evaluation = dict(interval=10000, metric='bbox') diff --git a/spaces/ttt246/brain/Brain/src/model/image_model.py b/spaces/ttt246/brain/Brain/src/model/image_model.py deleted file mode 100644 index f24551c5931ad76ffc6e1f2b9d031836dc5a7df9..0000000000000000000000000000000000000000 --- a/spaces/ttt246/brain/Brain/src/model/image_model.py +++ /dev/null @@ -1,15 +0,0 @@ -"""Image model to process & handle them""" -from typing import Any - -from Brain.src.model.basic_model import DataStatus - - -class ImageModel: - def __init__(self): - self.image_text = "" - self.image_name = "" - self.uuid = "" - self.status = DataStatus.CREATED - - def to_json(self) -> Any: - return {"image_name": self.image_name, "image_text": self.image_text} diff --git a/spaces/udion/BayesCap/ds.py b/spaces/udion/BayesCap/ds.py deleted file mode 100644 index 1fd82434bac595aad5e9cb78b6c755a2acaf92eb..0000000000000000000000000000000000000000 --- a/spaces/udion/BayesCap/ds.py +++ /dev/null @@ -1,485 +0,0 @@ -from __future__ import absolute_import, division, print_function - -import random -import copy -import io -import os -import numpy as np -from PIL import Image -import 
skimage.transform -from collections import Counter - - -import torch -import torch.utils.data as data -from torch import Tensor -from torch.utils.data import Dataset -from torchvision import transforms -from torchvision.transforms.functional import InterpolationMode as IMode - -import utils - -class ImgDset(Dataset): - """Customize the data set loading function and prepare low/high resolution image data in advance. - - Args: - dataroot (str): Training data set address - image_size (int): High resolution image size - upscale_factor (int): Image magnification - mode (str): Data set loading method, the training data set is for data enhancement, - and the verification data set is not for data enhancement - - """ - - def __init__(self, dataroot: str, image_size: int, upscale_factor: int, mode: str) -> None: - super(ImgDset, self).__init__() - self.filenames = [os.path.join(dataroot, x) for x in os.listdir(dataroot)] - - if mode == "train": - self.hr_transforms = transforms.Compose([ - transforms.RandomCrop(image_size), - transforms.RandomRotation(90), - transforms.RandomHorizontalFlip(0.5), - ]) - else: - self.hr_transforms = transforms.Resize(image_size) - - self.lr_transforms = transforms.Resize((image_size[0]//upscale_factor, image_size[1]//upscale_factor), interpolation=IMode.BICUBIC, antialias=True) - - def __getitem__(self, batch_index: int) -> [Tensor, Tensor]: - # Read a batch of image data - image = Image.open(self.filenames[batch_index]) - - # Transform image - hr_image = self.hr_transforms(image) - lr_image = self.lr_transforms(hr_image) - - # Convert image data into Tensor stream format (PyTorch). - # Note: The range of input and output is between [0, 1] - lr_tensor = utils.image2tensor(lr_image, range_norm=False, half=False) - hr_tensor = utils.image2tensor(hr_image, range_norm=False, half=False) - - return lr_tensor, hr_tensor - - def __len__(self) -> int: - return len(self.filenames) - - -class PairedImages_w_nameList(Dataset): - ''' - can act as supervised or un-supervised based on flists - ''' - def __init__(self, flist1, flist2, transform1=None, transform2=None, do_aug=False): - self.flist1 = flist1 - self.flist2 = flist2 - self.transform1 = transform1 - self.transform2 = transform2 - self.do_aug = do_aug - def __getitem__(self, index): - impath1 = self.flist1[index] - img1 = Image.open(impath1).convert('RGB') - impath2 = self.flist2[index] - img2 = Image.open(impath2).convert('RGB') - - img1 = utils.image2tensor(img1, range_norm=False, half=False) - img2 = utils.image2tensor(img2, range_norm=False, half=False) - - if self.transform1 is not None: - img1 = self.transform1(img1) - if self.transform2 is not None: - img2 = self.transform2(img2) - - return img1, img2 - def __len__(self): - return len(self.flist1) - -class PairedImages_w_nameList_npy(Dataset): - ''' - can act as supervised or un-supervised based on flists - ''' - def __init__(self, flist1, flist2, transform1=None, transform2=None, do_aug=False): - self.flist1 = flist1 - self.flist2 = flist2 - self.transform1 = transform1 - self.transform2 = transform2 - self.do_aug = do_aug - def __getitem__(self, index): - impath1 = self.flist1[index] - img1 = np.load(impath1) - impath2 = self.flist2[index] - img2 = np.load(impath2) - - if self.transform1 is not None: - img1 = self.transform1(img1) - if self.transform2 is not None: - img2 = self.transform2(img2) - - return img1, img2 - def __len__(self): - return len(self.flist1) - -# def call_paired(): -# root1='./GOPRO_3840FPS_AVG_3-21/train/blur/' -# 
root2='./GOPRO_3840FPS_AVG_3-21/train/sharp/' - -# flist1=glob.glob(root1+'/*/*.png') -# flist2=glob.glob(root2+'/*/*.png') - -# dset = PairedImages_w_nameList(root1,root2,flist1,flist2) - -#### KITTI depth - -def load_velodyne_points(filename): - """Load 3D point cloud from KITTI file format - (adapted from https://github.com/hunse/kitti) - """ - points = np.fromfile(filename, dtype=np.float32).reshape(-1, 4) - points[:, 3] = 1.0 # homogeneous - return points - - -def read_calib_file(path): - """Read KITTI calibration file - (from https://github.com/hunse/kitti) - """ - float_chars = set("0123456789.e+- ") - data = {} - with open(path, 'r') as f: - for line in f.readlines(): - key, value = line.split(':', 1) - value = value.strip() - data[key] = value - if float_chars.issuperset(value): - # try to cast to float array - try: - data[key] = np.array(list(map(float, value.split(' ')))) - except ValueError: - # casting error: data[key] already eq. value, so pass - pass - - return data - - -def sub2ind(matrixSize, rowSub, colSub): - """Convert row, col matrix subscripts to linear indices - """ - m, n = matrixSize - return rowSub * (n-1) + colSub - 1 - - -def generate_depth_map(calib_dir, velo_filename, cam=2, vel_depth=False): - """Generate a depth map from velodyne data - """ - # load calibration files - cam2cam = read_calib_file(os.path.join(calib_dir, 'calib_cam_to_cam.txt')) - velo2cam = read_calib_file(os.path.join(calib_dir, 'calib_velo_to_cam.txt')) - velo2cam = np.hstack((velo2cam['R'].reshape(3, 3), velo2cam['T'][..., np.newaxis])) - velo2cam = np.vstack((velo2cam, np.array([0, 0, 0, 1.0]))) - - # get image shape - im_shape = cam2cam["S_rect_02"][::-1].astype(np.int32) - - # compute projection matrix velodyne->image plane - R_cam2rect = np.eye(4) - R_cam2rect[:3, :3] = cam2cam['R_rect_00'].reshape(3, 3) - P_rect = cam2cam['P_rect_0'+str(cam)].reshape(3, 4) - P_velo2im = np.dot(np.dot(P_rect, R_cam2rect), velo2cam) - - # load velodyne points and remove all behind image plane (approximation) - # each row of the velodyne data is forward, left, up, reflectance - velo = load_velodyne_points(velo_filename) - velo = velo[velo[:, 0] >= 0, :] - - # project the points to the camera - velo_pts_im = np.dot(P_velo2im, velo.T).T - velo_pts_im[:, :2] = velo_pts_im[:, :2] / velo_pts_im[:, 2][..., np.newaxis] - - if vel_depth: - velo_pts_im[:, 2] = velo[:, 0] - - # check if in bounds - # use minus 1 to get the exact same value as KITTI matlab code - velo_pts_im[:, 0] = np.round(velo_pts_im[:, 0]) - 1 - velo_pts_im[:, 1] = np.round(velo_pts_im[:, 1]) - 1 - val_inds = (velo_pts_im[:, 0] >= 0) & (velo_pts_im[:, 1] >= 0) - val_inds = val_inds & (velo_pts_im[:, 0] < im_shape[1]) & (velo_pts_im[:, 1] < im_shape[0]) - velo_pts_im = velo_pts_im[val_inds, :] - - # project to image - depth = np.zeros((im_shape[:2])) - depth[velo_pts_im[:, 1].astype(np.int), velo_pts_im[:, 0].astype(np.int)] = velo_pts_im[:, 2] - - # find the duplicate points and choose the closest depth - inds = sub2ind(depth.shape, velo_pts_im[:, 1], velo_pts_im[:, 0]) - dupe_inds = [item for item, count in Counter(inds).items() if count > 1] - for dd in dupe_inds: - pts = np.where(inds == dd)[0] - x_loc = int(velo_pts_im[pts[0], 0]) - y_loc = int(velo_pts_im[pts[0], 1]) - depth[y_loc, x_loc] = velo_pts_im[pts, 2].min() - depth[depth < 0] = 0 - - return depth - -def pil_loader(path): - # open path as file to avoid ResourceWarning - # (https://github.com/python-pillow/Pillow/issues/835) - with open(path, 'rb') as f: - with Image.open(f) as img: 
- return img.convert('RGB') - - -class MonoDataset(data.Dataset): - """Superclass for monocular dataloaders - - Args: - data_path - filenames - height - width - frame_idxs - num_scales - is_train - img_ext - """ - def __init__(self, - data_path, - filenames, - height, - width, - frame_idxs, - num_scales, - is_train=False, - img_ext='.jpg'): - super(MonoDataset, self).__init__() - - self.data_path = data_path - self.filenames = filenames - self.height = height - self.width = width - self.num_scales = num_scales - self.interp = Image.ANTIALIAS - - self.frame_idxs = frame_idxs - - self.is_train = is_train - self.img_ext = img_ext - - self.loader = pil_loader - self.to_tensor = transforms.ToTensor() - - # We need to specify augmentations differently in newer versions of torchvision. - # We first try the newer tuple version; if this fails we fall back to scalars - try: - self.brightness = (0.8, 1.2) - self.contrast = (0.8, 1.2) - self.saturation = (0.8, 1.2) - self.hue = (-0.1, 0.1) - transforms.ColorJitter.get_params( - self.brightness, self.contrast, self.saturation, self.hue) - except TypeError: - self.brightness = 0.2 - self.contrast = 0.2 - self.saturation = 0.2 - self.hue = 0.1 - - self.resize = {} - for i in range(self.num_scales): - s = 2 ** i - self.resize[i] = transforms.Resize((self.height // s, self.width // s), - interpolation=self.interp) - - self.load_depth = self.check_depth() - - def preprocess(self, inputs, color_aug): - """Resize colour images to the required scales and augment if required - - We create the color_aug object in advance and apply the same augmentation to all - images in this item. This ensures that all images input to the pose network receive the - same augmentation. - """ - for k in list(inputs): - frame = inputs[k] - if "color" in k: - n, im, i = k - for i in range(self.num_scales): - inputs[(n, im, i)] = self.resize[i](inputs[(n, im, i - 1)]) - - for k in list(inputs): - f = inputs[k] - if "color" in k: - n, im, i = k - inputs[(n, im, i)] = self.to_tensor(f) - inputs[(n + "_aug", im, i)] = self.to_tensor(color_aug(f)) - - def __len__(self): - return len(self.filenames) - - def __getitem__(self, index): - """Returns a single training item from the dataset as a dictionary. - - Values correspond to torch tensors. - Keys in the dictionary are either strings or tuples: - - ("color", , ) for raw colour images, - ("color_aug", , ) for augmented colour images, - ("K", scale) or ("inv_K", scale) for camera intrinsics, - "stereo_T" for camera extrinsics, and - "depth_gt" for ground truth depth maps. - - is either: - an integer (e.g. 0, -1, or 1) representing the temporal step relative to 'index', - or - "s" for the opposite image in the stereo pair. 
- - is an integer representing the scale of the image relative to the fullsize image: - -1 images at native resolution as loaded from disk - 0 images resized to (self.width, self.height ) - 1 images resized to (self.width // 2, self.height // 2) - 2 images resized to (self.width // 4, self.height // 4) - 3 images resized to (self.width // 8, self.height // 8) - """ - inputs = {} - - do_color_aug = self.is_train and random.random() > 0.5 - do_flip = self.is_train and random.random() > 0.5 - - line = self.filenames[index].split() - folder = line[0] - - if len(line) == 3: - frame_index = int(line[1]) - else: - frame_index = 0 - - if len(line) == 3: - side = line[2] - else: - side = None - - for i in self.frame_idxs: - if i == "s": - other_side = {"r": "l", "l": "r"}[side] - inputs[("color", i, -1)] = self.get_color(folder, frame_index, other_side, do_flip) - else: - inputs[("color", i, -1)] = self.get_color(folder, frame_index + i, side, do_flip) - - # adjusting intrinsics to match each scale in the pyramid - for scale in range(self.num_scales): - K = self.K.copy() - - K[0, :] *= self.width // (2 ** scale) - K[1, :] *= self.height // (2 ** scale) - - inv_K = np.linalg.pinv(K) - - inputs[("K", scale)] = torch.from_numpy(K) - inputs[("inv_K", scale)] = torch.from_numpy(inv_K) - - if do_color_aug: - color_aug = transforms.ColorJitter.get_params( - self.brightness, self.contrast, self.saturation, self.hue) - else: - color_aug = (lambda x: x) - - self.preprocess(inputs, color_aug) - - for i in self.frame_idxs: - del inputs[("color", i, -1)] - del inputs[("color_aug", i, -1)] - - if self.load_depth: - depth_gt = self.get_depth(folder, frame_index, side, do_flip) - inputs["depth_gt"] = np.expand_dims(depth_gt, 0) - inputs["depth_gt"] = torch.from_numpy(inputs["depth_gt"].astype(np.float32)) - - if "s" in self.frame_idxs: - stereo_T = np.eye(4, dtype=np.float32) - baseline_sign = -1 if do_flip else 1 - side_sign = -1 if side == "l" else 1 - stereo_T[0, 3] = side_sign * baseline_sign * 0.1 - - inputs["stereo_T"] = torch.from_numpy(stereo_T) - - return inputs - - def get_color(self, folder, frame_index, side, do_flip): - raise NotImplementedError - - def check_depth(self): - raise NotImplementedError - - def get_depth(self, folder, frame_index, side, do_flip): - raise NotImplementedError - -class KITTIDataset(MonoDataset): - """Superclass for different types of KITTI dataset loaders - """ - def __init__(self, *args, **kwargs): - super(KITTIDataset, self).__init__(*args, **kwargs) - - # NOTE: Make sure your intrinsics matrix is *normalized* by the original image size. - # To normalize you need to scale the first row by 1 / image_width and the second row - # by 1 / image_height. Monodepth2 assumes a principal point to be exactly centered. - # If your principal point is far from the center you might need to disable the horizontal - # flip augmentation. 
- self.K = np.array([[0.58, 0, 0.5, 0], - [0, 1.92, 0.5, 0], - [0, 0, 1, 0], - [0, 0, 0, 1]], dtype=np.float32) - - self.full_res_shape = (1242, 375) - self.side_map = {"2": 2, "3": 3, "l": 2, "r": 3} - - def check_depth(self): - line = self.filenames[0].split() - scene_name = line[0] - frame_index = int(line[1]) - - velo_filename = os.path.join( - self.data_path, - scene_name, - "velodyne_points/data/{:010d}.bin".format(int(frame_index))) - - return os.path.isfile(velo_filename) - - def get_color(self, folder, frame_index, side, do_flip): - color = self.loader(self.get_image_path(folder, frame_index, side)) - - if do_flip: - color = color.transpose(Image.FLIP_LEFT_RIGHT) - - return color - - -class KITTIDepthDataset(KITTIDataset): - """KITTI dataset which uses the updated ground truth depth maps - """ - def __init__(self, *args, **kwargs): - super(KITTIDepthDataset, self).__init__(*args, **kwargs) - - def get_image_path(self, folder, frame_index, side): - f_str = "{:010d}{}".format(frame_index, self.img_ext) - image_path = os.path.join( - self.data_path, - folder, - "image_0{}/data".format(self.side_map[side]), - f_str) - return image_path - - def get_depth(self, folder, frame_index, side, do_flip): - f_str = "{:010d}.png".format(frame_index) - depth_path = os.path.join( - self.data_path, - folder, - "proj_depth/groundtruth/image_0{}".format(self.side_map[side]), - f_str) - - depth_gt = Image.open(depth_path) - depth_gt = depth_gt.resize(self.full_res_shape, Image.NEAREST) - depth_gt = np.array(depth_gt).astype(np.float32) / 256 - - if do_flip: - depth_gt = np.fliplr(depth_gt) - - return depth_gt \ No newline at end of file diff --git a/spaces/usbethFlerru/sovits-modelsV2/example/Cubase 8 Crack Rar Passwordl PATCHED.md b/spaces/usbethFlerru/sovits-modelsV2/example/Cubase 8 Crack Rar Passwordl PATCHED.md deleted file mode 100644 index 10259b45cca7b083709e084606f27ab173bc4b33..0000000000000000000000000000000000000000 --- a/spaces/usbethFlerru/sovits-modelsV2/example/Cubase 8 Crack Rar Passwordl PATCHED.md +++ /dev/null @@ -1,7 +0,0 @@ - -

                  They have no problem giving you a new Mac: they send you the new Mac, but they do not send the license or the old Mac. So I would have to buy a Mac and then pay $5,000 for a license. I don't want to end up buying three licenses at $5,000 each; that's a lot of money!

                  -

                  Cubase 8 Crack Rar Passwordl


                  Download Zip: https://urlcod.com/2uyVj8



                  -

                  A wise man said that everything old is new again. There was a time when people would copy FLAC to the hard drive and do side-by-side comparisons, or convert WAV to FLAC on the fly, as I did for 24 hours. Then they would look at FLAC and say, "Wow, we can't do that with MP3 or Ogg." FLAC was on everyone's mind back then. Now a free plugin has been in development for years, and it should reach a level of programming where both MP3 and Ogg can do the same. It will be amazing to see someone make a free plugin that does side-by-side comparisons for MP3 and Ogg; that should be fun to see. I am going to download some files right now, compare them side by side, and see what compression level they use. In my experience this compression matches FLAC for ease of use. The hard drive works like a flash drive: everything is stored on it. FLAC does this and has kept the entire album track list. I would like to see some free plugins do the same.

                  -

                  dwa, I am glad you have your website back and that you will be working on it every day. I hope you look into finishing that Cubase 8 portable add-on; several projects are still missing. I very much like your webpage, dwa. In three days I will have added a lot of things to a website I can sell, so people can access their music on a smartphone. If you had a very big warehouse this would be much easier. It would be a website selling products to music lovers. I have been researching a few things, and I think I will look into building websites for other people. Music is my way of life, and the online world is how I access my music.

                  899543212b
                  -
                  -
                  \ No newline at end of file diff --git a/spaces/vishnun/CRAFT-OCR/app.py b/spaces/vishnun/CRAFT-OCR/app.py deleted file mode 100644 index 8ab732bdffab66001e5a68a5b65c35397b81aba7..0000000000000000000000000000000000000000 --- a/spaces/vishnun/CRAFT-OCR/app.py +++ /dev/null @@ -1,20 +0,0 @@ -import gradio as gr -from craft_hw_ocr import OCR - -ocr = OCR.load_models() - -def do_ocr(inp): - img, results = OCR.detection(inp, ocr[2]) - bboxes, text = OCR.recoginition(img, results, ocr[0], ocr[1]) - return OCR.visualize(img, results), text - -inputs = gr.inputs.Image() -o1 = gr.outputs.Image() -o2 = gr.outputs.Textbox() - -title = "CRAFT-OCR" -description = "OCR of both handwriting and printed text using CRAFT Text detector and TrOCR recognition, detection of lines and extraction of them are happening here because TrOCR pre-trained models are modelled on IAM lines dataset and the same needs to be implemented here." -examples=[['example_1.png'],['example_2.jpg']] - -article = "

                  craft_hw_ocr | craft-text-detector

                  " -gr.Interface(fn=do_ocr, inputs=inputs, outputs=[o1, o2], title=title, description=description, article=article, examples=examples, enable_queue=True).launch() diff --git a/spaces/vumichien/canvas_controlnet/annotator/uniformer/exp/upernet_global_small/test_config_g.py b/spaces/vumichien/canvas_controlnet/annotator/uniformer/exp/upernet_global_small/test_config_g.py deleted file mode 100644 index e43737a98a3b174a9f2fe059c06d511144686459..0000000000000000000000000000000000000000 --- a/spaces/vumichien/canvas_controlnet/annotator/uniformer/exp/upernet_global_small/test_config_g.py +++ /dev/null @@ -1,38 +0,0 @@ -_base_ = [ - '../../configs/_base_/models/upernet_uniformer.py', - '../../configs/_base_/datasets/ade20k.py', - '../../configs/_base_/default_runtime.py', - '../../configs/_base_/schedules/schedule_160k.py' -] -model = dict( - backbone=dict( - type='UniFormer', - embed_dim=[64, 128, 320, 512], - layers=[3, 4, 8, 3], - head_dim=64, - drop_path_rate=0.25, - windows=False, - hybrid=False, - ), - decode_head=dict( - in_channels=[64, 128, 320, 512], - num_classes=150 - ), - auxiliary_head=dict( - in_channels=320, - num_classes=150 - )) - -# AdamW optimizer, no weight decay for position embedding & layer norm in backbone -optimizer = dict(_delete_=True, type='AdamW', lr=0.00006, betas=(0.9, 0.999), weight_decay=0.01, - paramwise_cfg=dict(custom_keys={'absolute_pos_embed': dict(decay_mult=0.), - 'relative_position_bias_table': dict(decay_mult=0.), - 'norm': dict(decay_mult=0.)})) - -lr_config = dict(_delete_=True, policy='poly', - warmup='linear', - warmup_iters=1500, - warmup_ratio=1e-6, - power=1.0, min_lr=0.0, by_epoch=False) - -data=dict(samples_per_gpu=2) \ No newline at end of file diff --git a/spaces/wilson1/bingai/README.md b/spaces/wilson1/bingai/README.md deleted file mode 100644 index 8652129d90245688641ee860f3ac75654bea8753..0000000000000000000000000000000000000000 --- a/spaces/wilson1/bingai/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Bingai -emoji: 😻 -colorFrom: green -colorTo: purple -sdk: docker -pinned: false -license: mit -app_port: 8080 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/wwwwwwww2/bingo/src/components/chat-suggestions.tsx b/spaces/wwwwwwww2/bingo/src/components/chat-suggestions.tsx deleted file mode 100644 index 48aec7c84e4407c482acdfcc7857fb0f660d12d3..0000000000000000000000000000000000000000 --- a/spaces/wwwwwwww2/bingo/src/components/chat-suggestions.tsx +++ /dev/null @@ -1,45 +0,0 @@ -import React, { useMemo } from 'react' -import Image from 'next/image' -import HelpIcon from '@/assets/images/help.svg' -import { SuggestedResponse } from '@/lib/bots/bing/types' -import { useBing } from '@/lib/hooks/use-bing' -import { atom, useAtom } from 'jotai' - -type Suggestions = SuggestedResponse[] -const helpSuggestions = ['为什么不回应某些主题', '告诉我更多关于必应的资迅', '必应如何使用 AI?'].map((text) => ({ text })) -const suggestionsAtom = atom([]) - -type ChatSuggestionsProps = React.ComponentProps<'div'> & Pick, 'setInput'> & { suggestions?: Suggestions } - -export function ChatSuggestions({ setInput, suggestions = [] }: ChatSuggestionsProps) { - const [currentSuggestions, setSuggestions] = useAtom(suggestionsAtom) - const toggleSuggestions = (() => { - if (currentSuggestions === helpSuggestions) { - setSuggestions(suggestions) - } else { - setSuggestions(helpSuggestions) - } - }) - - useMemo(() => { - setSuggestions(suggestions) - window.scrollBy(0, 2000) - }, [suggestions.length, 
setSuggestions]) - - return currentSuggestions?.length ? ( -

                  -
                  - - { - currentSuggestions.map(suggestion => ( - - )) - } -
                  -
                  - ) : null -} diff --git a/spaces/yangheng/Super-Resolution-Anime-Diffusion/RealESRGANv030/interface.py b/spaces/yangheng/Super-Resolution-Anime-Diffusion/RealESRGANv030/interface.py deleted file mode 100644 index cebf9fc96b819b6133d2bfb8ba9b1796397c7454..0000000000000000000000000000000000000000 --- a/spaces/yangheng/Super-Resolution-Anime-Diffusion/RealESRGANv030/interface.py +++ /dev/null @@ -1,206 +0,0 @@ -import cv2 -import numpy as np -from PIL import Image -import glob -import os -from basicsr.archs.rrdbnet_arch import RRDBNet -from basicsr.utils.download_util import load_file_from_url - -from realesrgan import RealESRGANer -from realesrgan.archs.srvgg_arch import SRVGGNetCompact - - -def realEsrgan( - model_name="RealESRGAN_x4plus_anime_6B", - model_path=None, - input_dir="inputs", - output_dir="results", - denoise_strength=0.5, - outscale=4, - suffix="out", - tile=200, - tile_pad=10, - pre_pad=0, - face_enhance=True, - alpha_upsampler="realsrgan", - out_ext="auto", - fp32=True, - gpu_id=None, -): - - # determine models according to model names - model_name = model_name.split(".")[0] - if model_name == "RealESRGAN_x4plus": # x4 RRDBNet model - model = RRDBNet( - num_in_ch=3, - num_out_ch=3, - num_feat=64, - num_block=23, - num_grow_ch=32, - scale=4, - ) - netscale = 4 - file_url = [ - "https://github.com/xinntao/Real-ESRGAN/releases/download/v0.1.0/RealESRGAN_x4plus.pth" - ] - elif model_name == "RealESRNet_x4plus": # x4 RRDBNet model - model = RRDBNet( - num_in_ch=3, - num_out_ch=3, - num_feat=64, - num_block=23, - num_grow_ch=32, - scale=4, - ) - netscale = 4 - file_url = [ - "https://github.com/xinntao/Real-ESRGAN/releases/download/v0.1.1/RealESRNet_x4plus.pth" - ] - elif model_name == "RealESRGAN_x4plus_anime_6B": # x4 RRDBNet model with 6 blocks - model = RRDBNet( - num_in_ch=3, num_out_ch=3, num_feat=64, num_block=6, num_grow_ch=32, scale=4 - ) - netscale = 4 - file_url = [ - "https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.2.4/RealESRGAN_x4plus_anime_6B.pth" - ] - elif model_name == "RealESRGAN_x2plus": # x2 RRDBNet model - model = RRDBNet( - num_in_ch=3, - num_out_ch=3, - num_feat=64, - num_block=23, - num_grow_ch=32, - scale=2, - ) - netscale = 2 - file_url = [ - "https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.1/RealESRGAN_x2plus.pth" - ] - elif model_name == "realesr-animevideov3": # x4 VGG-style model (XS size) - model = SRVGGNetCompact( - num_in_ch=3, - num_out_ch=3, - num_feat=64, - num_conv=16, - upscale=4, - act_type="prelu", - ) - netscale = 4 - file_url = [ - "https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.5.0/realesr-animevideov3.pth" - ] - elif model_name == "realesr-general-x4v3": # x4 VGG-style model (S size) - model = SRVGGNetCompact( - num_in_ch=3, - num_out_ch=3, - num_feat=64, - num_conv=32, - upscale=4, - act_type="prelu", - ) - netscale = 4 - file_url = [ - "https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.5.0/realesr-general-wdn-x4v3.pth", - "https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.5.0/realesr-general-x4v3.pth", - ] - - # determine model paths - if model_path is None: - model_path = os.path.join("weights", model_name + ".pth") - if not os.path.isfile(model_path): - ROOT_DIR = os.path.dirname(os.path.abspath(__file__)) - for url in file_url: - # model_path will be updated - model_path = load_file_from_url( - url=url, - model_dir=os.path.join(ROOT_DIR, "weights"), - progress=True, - file_name=None, - ) - - # use dni to control the denoise strength - dni_weight = 
None - if model_name == "realesr-general-x4v3" and denoise_strength != 1: - wdn_model_path = model_path.replace( - "realesr-general-x4v3", "realesr-general-wdn-x4v3" - ) - model_path = [model_path, wdn_model_path] - dni_weight = [denoise_strength, 1 - denoise_strength] - - # restorer - upsampler = RealESRGANer( - scale=netscale, - model_path=model_path, - dni_weight=dni_weight, - model=model, - tile=tile, - tile_pad=tile_pad, - pre_pad=pre_pad, - half=not fp32, - gpu_id=gpu_id, - ) - - if face_enhance: # Use GFPGAN for face enhancement - from gfpgan import GFPGANer - - face_enhancer = GFPGANer( - model_path="https://github.com/TencentARC/GFPGAN/releases/download/v1.3.0/GFPGANv1.3.pth", - upscale=outscale, - arch="clean", - channel_multiplier=2, - bg_upsampler=upsampler, - ) - os.makedirs(output_dir, exist_ok=True) - - if not isinstance(input_dir, list): - paths = [input_dir] - else: - paths = sorted(glob.glob(os.path.join(input_dir, "*"))) - - Imgs = [] - for idx, path in enumerate(paths): - print(f"Scaling x{outscale}:", path) - if isinstance(path, Image.Image): - img = path - img = cv2.cvtColor(np.asarray(img), cv2.COLOR_RGB2BGR) - imgname = f"img_{idx}" - else: - imgname, extension = os.path.splitext(os.path.basename(path)) - img = cv2.imread(path, cv2.IMREAD_UNCHANGED) - if len(img.shape) == 3 and img.shape[2] == 4: - img_mode = "RGBA" - else: - img_mode = None - - try: - if face_enhance: - _, _, output = face_enhancer.enhance( - img, has_aligned=False, only_center_face=False, paste_back=True - ) - else: - output, _ = upsampler.enhance(img, outscale=outscale) - except RuntimeError as error: - print("Error", error) - print( - "If you encounter CUDA or RAM out of memory, try to set --tile with a smaller number." - ) - else: - # if out_ext == "auto": - # extension = extension[1:] - # else: - # extension = out_ext - # if img_mode == "RGBA": # RGBA images should be saved in png format - # extension = "png" - # if suffix == "": - # save_path = os.path.join(output_dir, f"{imgname}.{extension}") - # else: - # save_path = os.path.join(output_dir, f"{imgname}_{suffix}.{extension}") - # - # cv2.imwrite(save_path, output) - - img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB) - img = Image.fromarray(img) - Imgs.append(img) - - return Imgs diff --git a/spaces/yangliuyi601/rvc-models/infer_pack/transforms.py b/spaces/yangliuyi601/rvc-models/infer_pack/transforms.py deleted file mode 100644 index a11f799e023864ff7082c1f49c0cc18351a13b47..0000000000000000000000000000000000000000 --- a/spaces/yangliuyi601/rvc-models/infer_pack/transforms.py +++ /dev/null @@ -1,209 +0,0 @@ -import torch -from torch.nn import functional as F - -import numpy as np - - -DEFAULT_MIN_BIN_WIDTH = 1e-3 -DEFAULT_MIN_BIN_HEIGHT = 1e-3 -DEFAULT_MIN_DERIVATIVE = 1e-3 - - -def piecewise_rational_quadratic_transform( - inputs, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=False, - tails=None, - tail_bound=1.0, - min_bin_width=DEFAULT_MIN_BIN_WIDTH, - min_bin_height=DEFAULT_MIN_BIN_HEIGHT, - min_derivative=DEFAULT_MIN_DERIVATIVE, -): - if tails is None: - spline_fn = rational_quadratic_spline - spline_kwargs = {} - else: - spline_fn = unconstrained_rational_quadratic_spline - spline_kwargs = {"tails": tails, "tail_bound": tail_bound} - - outputs, logabsdet = spline_fn( - inputs=inputs, - unnormalized_widths=unnormalized_widths, - unnormalized_heights=unnormalized_heights, - unnormalized_derivatives=unnormalized_derivatives, - inverse=inverse, - min_bin_width=min_bin_width, - 
min_bin_height=min_bin_height, - min_derivative=min_derivative, - **spline_kwargs - ) - return outputs, logabsdet - - -def searchsorted(bin_locations, inputs, eps=1e-6): - bin_locations[..., -1] += eps - return torch.sum(inputs[..., None] >= bin_locations, dim=-1) - 1 - - -def unconstrained_rational_quadratic_spline( - inputs, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=False, - tails="linear", - tail_bound=1.0, - min_bin_width=DEFAULT_MIN_BIN_WIDTH, - min_bin_height=DEFAULT_MIN_BIN_HEIGHT, - min_derivative=DEFAULT_MIN_DERIVATIVE, -): - inside_interval_mask = (inputs >= -tail_bound) & (inputs <= tail_bound) - outside_interval_mask = ~inside_interval_mask - - outputs = torch.zeros_like(inputs) - logabsdet = torch.zeros_like(inputs) - - if tails == "linear": - unnormalized_derivatives = F.pad(unnormalized_derivatives, pad=(1, 1)) - constant = np.log(np.exp(1 - min_derivative) - 1) - unnormalized_derivatives[..., 0] = constant - unnormalized_derivatives[..., -1] = constant - - outputs[outside_interval_mask] = inputs[outside_interval_mask] - logabsdet[outside_interval_mask] = 0 - else: - raise RuntimeError("{} tails are not implemented.".format(tails)) - - ( - outputs[inside_interval_mask], - logabsdet[inside_interval_mask], - ) = rational_quadratic_spline( - inputs=inputs[inside_interval_mask], - unnormalized_widths=unnormalized_widths[inside_interval_mask, :], - unnormalized_heights=unnormalized_heights[inside_interval_mask, :], - unnormalized_derivatives=unnormalized_derivatives[inside_interval_mask, :], - inverse=inverse, - left=-tail_bound, - right=tail_bound, - bottom=-tail_bound, - top=tail_bound, - min_bin_width=min_bin_width, - min_bin_height=min_bin_height, - min_derivative=min_derivative, - ) - - return outputs, logabsdet - - -def rational_quadratic_spline( - inputs, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=False, - left=0.0, - right=1.0, - bottom=0.0, - top=1.0, - min_bin_width=DEFAULT_MIN_BIN_WIDTH, - min_bin_height=DEFAULT_MIN_BIN_HEIGHT, - min_derivative=DEFAULT_MIN_DERIVATIVE, -): - if torch.min(inputs) < left or torch.max(inputs) > right: - raise ValueError("Input to a transform is not within its domain") - - num_bins = unnormalized_widths.shape[-1] - - if min_bin_width * num_bins > 1.0: - raise ValueError("Minimal bin width too large for the number of bins") - if min_bin_height * num_bins > 1.0: - raise ValueError("Minimal bin height too large for the number of bins") - - widths = F.softmax(unnormalized_widths, dim=-1) - widths = min_bin_width + (1 - min_bin_width * num_bins) * widths - cumwidths = torch.cumsum(widths, dim=-1) - cumwidths = F.pad(cumwidths, pad=(1, 0), mode="constant", value=0.0) - cumwidths = (right - left) * cumwidths + left - cumwidths[..., 0] = left - cumwidths[..., -1] = right - widths = cumwidths[..., 1:] - cumwidths[..., :-1] - - derivatives = min_derivative + F.softplus(unnormalized_derivatives) - - heights = F.softmax(unnormalized_heights, dim=-1) - heights = min_bin_height + (1 - min_bin_height * num_bins) * heights - cumheights = torch.cumsum(heights, dim=-1) - cumheights = F.pad(cumheights, pad=(1, 0), mode="constant", value=0.0) - cumheights = (top - bottom) * cumheights + bottom - cumheights[..., 0] = bottom - cumheights[..., -1] = top - heights = cumheights[..., 1:] - cumheights[..., :-1] - - if inverse: - bin_idx = searchsorted(cumheights, inputs)[..., None] - else: - bin_idx = searchsorted(cumwidths, inputs)[..., None] - - input_cumwidths = 
cumwidths.gather(-1, bin_idx)[..., 0] - input_bin_widths = widths.gather(-1, bin_idx)[..., 0] - - input_cumheights = cumheights.gather(-1, bin_idx)[..., 0] - delta = heights / widths - input_delta = delta.gather(-1, bin_idx)[..., 0] - - input_derivatives = derivatives.gather(-1, bin_idx)[..., 0] - input_derivatives_plus_one = derivatives[..., 1:].gather(-1, bin_idx)[..., 0] - - input_heights = heights.gather(-1, bin_idx)[..., 0] - - if inverse: - a = (inputs - input_cumheights) * ( - input_derivatives + input_derivatives_plus_one - 2 * input_delta - ) + input_heights * (input_delta - input_derivatives) - b = input_heights * input_derivatives - (inputs - input_cumheights) * ( - input_derivatives + input_derivatives_plus_one - 2 * input_delta - ) - c = -input_delta * (inputs - input_cumheights) - - discriminant = b.pow(2) - 4 * a * c - assert (discriminant >= 0).all() - - root = (2 * c) / (-b - torch.sqrt(discriminant)) - outputs = root * input_bin_widths + input_cumwidths - - theta_one_minus_theta = root * (1 - root) - denominator = input_delta + ( - (input_derivatives + input_derivatives_plus_one - 2 * input_delta) - * theta_one_minus_theta - ) - derivative_numerator = input_delta.pow(2) * ( - input_derivatives_plus_one * root.pow(2) - + 2 * input_delta * theta_one_minus_theta - + input_derivatives * (1 - root).pow(2) - ) - logabsdet = torch.log(derivative_numerator) - 2 * torch.log(denominator) - - return outputs, -logabsdet - else: - theta = (inputs - input_cumwidths) / input_bin_widths - theta_one_minus_theta = theta * (1 - theta) - - numerator = input_heights * ( - input_delta * theta.pow(2) + input_derivatives * theta_one_minus_theta - ) - denominator = input_delta + ( - (input_derivatives + input_derivatives_plus_one - 2 * input_delta) - * theta_one_minus_theta - ) - outputs = input_cumheights + numerator / denominator - - derivative_numerator = input_delta.pow(2) * ( - input_derivatives_plus_one * theta.pow(2) - + 2 * input_delta * theta_one_minus_theta - + input_derivatives * (1 - theta).pow(2) - ) - logabsdet = torch.log(derivative_numerator) - 2 * torch.log(denominator) - - return outputs, logabsdet diff --git a/spaces/yeqingmei123/face-test/e4e/utils/model_utils.py b/spaces/yeqingmei123/face-test/e4e/utils/model_utils.py deleted file mode 100644 index e51e95578f72b3218d6d832e3b604193cb68c1d7..0000000000000000000000000000000000000000 --- a/spaces/yeqingmei123/face-test/e4e/utils/model_utils.py +++ /dev/null @@ -1,35 +0,0 @@ -import torch -import argparse -from models.psp import pSp -from models.encoders.psp_encoders import Encoder4Editing - - -def setup_model(checkpoint_path, device='cuda'): - ckpt = torch.load(checkpoint_path, map_location='cpu') - opts = ckpt['opts'] - - opts['checkpoint_path'] = checkpoint_path - opts['device'] = device - opts = argparse.Namespace(**opts) - - net = pSp(opts) - net.eval() - net = net.to(device) - return net, opts - - -def load_e4e_standalone(checkpoint_path, device='cuda'): - ckpt = torch.load(checkpoint_path, map_location='cpu') - opts = argparse.Namespace(**ckpt['opts']) - e4e = Encoder4Editing(50, 'ir_se', opts) - e4e_dict = {k.replace('encoder.', ''): v for k, v in ckpt['state_dict'].items() if k.startswith('encoder.')} - e4e.load_state_dict(e4e_dict) - e4e.eval() - e4e = e4e.to(device) - latent_avg = ckpt['latent_avg'].to(device) - - def add_latent_avg(model, inputs, outputs): - return outputs + latent_avg.repeat(outputs.shape[0], 1, 1) - - e4e.register_forward_hook(add_latent_avg) - return e4e diff --git 
a/spaces/yerfor/SyntaSpeech/data_gen/tts/runs/train_mfa_align.py b/spaces/yerfor/SyntaSpeech/data_gen/tts/runs/train_mfa_align.py deleted file mode 100644 index daaeebe57690a8032be3d15c05d71701211604a7..0000000000000000000000000000000000000000 --- a/spaces/yerfor/SyntaSpeech/data_gen/tts/runs/train_mfa_align.py +++ /dev/null @@ -1,46 +0,0 @@ -import utils.commons.single_thread_env # NOQA -import glob -import subprocess -from textgrid import TextGrid -import os -from utils.commons.hparams import hparams, set_hparams - - -def train_mfa_align(mfa_outputs="mfa_outputs", - mfa_inputs="mfa_inputs", - model_name=None, pretrain_model_name=None, - mfa_cmd='train'): - CORPUS = hparams['processed_data_dir'].split("/")[-1] - NUM_JOB = int(os.getenv('N_PROC', os.cpu_count())) - env_vars = [f'CORPUS={CORPUS}', f'NUM_JOB={NUM_JOB}'] - if mfa_outputs is not None: - env_vars.append(f'MFA_OUTPUTS={mfa_outputs}') - if mfa_inputs is not None: - env_vars.append(f'MFA_INPUTS={mfa_inputs}') - if model_name is not None: - env_vars.append(f'MODEL_NAME={model_name}') - if pretrain_model_name is not None: - env_vars.append(f'PRETRAIN_MODEL_NAME={pretrain_model_name}') - if mfa_cmd is not None: - env_vars.append(f'MFA_CMD={mfa_cmd}') - env_str = ' '.join(env_vars) - print(f"| Run MFA for {CORPUS}. Env vars: {env_str}") - subprocess.check_call(f'{env_str} bash mfa_usr/run_mfa_train_align.sh', shell=True) - mfa_offset = hparams['preprocess_args']['mfa_offset'] - if mfa_offset > 0: - for tg_fn in glob.glob(f'{hparams["processed_data_dir"]}/{mfa_outputs}/*.TextGrid'): - tg = TextGrid.fromFile(tg_fn) - max_time = tg.maxTime - for tier in tg.tiers: - for interval in tier.intervals: - interval.maxTime = min(interval.maxTime + mfa_offset, max_time) - interval.minTime = min(interval.minTime + mfa_offset, max_time) - tier.intervals[0].minTime = 0 - tier.maxTime = min(tier.maxTime + mfa_offset, max_time) - tg.write(tg_fn) - TextGrid.fromFile(tg_fn) - - -if __name__ == '__main__': - set_hparams(print_hparams=False) - train_mfa_align() diff --git a/spaces/yerfor/SyntaSpeech/utils/commons/meters.py b/spaces/yerfor/SyntaSpeech/utils/commons/meters.py deleted file mode 100644 index e38790e9f292ec843a820dad73c9795eb2ab8daa..0000000000000000000000000000000000000000 --- a/spaces/yerfor/SyntaSpeech/utils/commons/meters.py +++ /dev/null @@ -1,42 +0,0 @@ -import time -import torch - - -class AvgrageMeter(object): - - def __init__(self): - self.reset() - - def reset(self): - self.avg = 0 - self.sum = 0 - self.cnt = 0 - - def update(self, val, n=1): - self.sum += val * n - self.cnt += n - self.avg = self.sum / self.cnt - - -class Timer: - timer_map = {} - - def __init__(self, name, enable=False): - if name not in Timer.timer_map: - Timer.timer_map[name] = 0 - self.name = name - self.enable = enable - - def __enter__(self): - if self.enable: - if torch.cuda.is_available(): - torch.cuda.synchronize() - self.t = time.time() - - def __exit__(self, exc_type, exc_val, exc_tb): - if self.enable: - if torch.cuda.is_available(): - torch.cuda.synchronize() - Timer.timer_map[self.name] += time.time() - self.t - if self.enable: - print(f'[Timer] {self.name}: {Timer.timer_map[self.name]}') diff --git a/spaces/ygtxr1997/ReliableSwap_Demo/third_party/GPEN/face_detect/data/__init__.py b/spaces/ygtxr1997/ReliableSwap_Demo/third_party/GPEN/face_detect/data/__init__.py deleted file mode 100644 index ea50ebaf88d64e75f4960bc99b14f138a343e575..0000000000000000000000000000000000000000 --- 
a/spaces/ygtxr1997/ReliableSwap_Demo/third_party/GPEN/face_detect/data/__init__.py +++ /dev/null @@ -1,3 +0,0 @@ -from .wider_face import WiderFaceDetection, detection_collate -from .data_augment import * -from .config import * diff --git a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/autoformer/__init__.py b/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/autoformer/__init__.py deleted file mode 100644 index f87bfdea532d61d4bc63802eced65f108328e666..0000000000000000000000000000000000000000 --- a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/autoformer/__init__.py +++ /dev/null @@ -1,63 +0,0 @@ -# Copyright 2023 The HuggingFace Team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -from typing import TYPE_CHECKING - -# rely on isort to merge the imports -from ...utils import OptionalDependencyNotAvailable, _LazyModule, is_torch_available - - -_import_structure = { - "configuration_autoformer": [ - "AUTOFORMER_PRETRAINED_CONFIG_ARCHIVE_MAP", - "AutoformerConfig", - ], -} - -try: - if not is_torch_available(): - raise OptionalDependencyNotAvailable() -except OptionalDependencyNotAvailable: - pass -else: - _import_structure["modeling_autoformer"] = [ - "AUTOFORMER_PRETRAINED_MODEL_ARCHIVE_LIST", - "AutoformerForPrediction", - "AutoformerModel", - "AutoformerPreTrainedModel", - ] - - -if TYPE_CHECKING: - from .configuration_autoformer import ( - AUTOFORMER_PRETRAINED_CONFIG_ARCHIVE_MAP, - AutoformerConfig, - ) - - try: - if not is_torch_available(): - raise OptionalDependencyNotAvailable() - except OptionalDependencyNotAvailable: - pass - else: - from .modeling_autoformer import ( - AUTOFORMER_PRETRAINED_MODEL_ARCHIVE_LIST, - AutoformerForPrediction, - AutoformerModel, - AutoformerPreTrainedModel, - ) - -else: - import sys - - sys.modules[__name__] = _LazyModule(__name__, globals()["__file__"], _import_structure, module_spec=__spec__) diff --git a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/big_bird/modeling_flax_big_bird.py b/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/big_bird/modeling_flax_big_bird.py deleted file mode 100644 index afdac2645f2652020c0e9fdd6b4d848b53a6899d..0000000000000000000000000000000000000000 --- a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/big_bird/modeling_flax_big_bird.py +++ /dev/null @@ -1,2634 +0,0 @@ -# coding=utf-8 -# Copyright 2021 The Google Flax Team Authors and The HuggingFace Inc. team. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
-# See the License for the specific language governing permissions and -# limitations under the License. - -from typing import Callable, Optional, Tuple - -import flax -import flax.linen as nn -import jax -import jax.numpy as jnp -from flax.core.frozen_dict import FrozenDict, freeze, unfreeze -from flax.linen import combine_masks, make_causal_mask -from flax.linen import partitioning as nn_partitioning -from flax.linen.attention import dot_product_attention_weights -from flax.traverse_util import flatten_dict, unflatten_dict -from jax import lax - -from ...modeling_flax_outputs import ( - FlaxBaseModelOutputWithPastAndCrossAttentions, - FlaxBaseModelOutputWithPooling, - FlaxBaseModelOutputWithPoolingAndCrossAttentions, - FlaxCausalLMOutputWithCrossAttentions, - FlaxMaskedLMOutput, - FlaxMultipleChoiceModelOutput, - FlaxSequenceClassifierOutput, - FlaxTokenClassifierOutput, -) -from ...modeling_flax_utils import ( - ACT2FN, - FlaxPreTrainedModel, - append_call_sample_docstring, - append_replace_return_docstrings, - overwrite_call_docstring, -) -from ...utils import ModelOutput, add_start_docstrings, add_start_docstrings_to_model_forward, logging -from .configuration_big_bird import BigBirdConfig - - -logger = logging.get_logger(__name__) - -_CHECKPOINT_FOR_DOC = "google/bigbird-roberta-base" -_CONFIG_FOR_DOC = "BigBirdConfig" - -remat = nn_partitioning.remat - - -@flax.struct.dataclass -class FlaxBigBirdForPreTrainingOutput(ModelOutput): - """ - Output type of [`BigBirdForPreTraining`]. - - Args: - prediction_logits (`jnp.ndarray` of shape `(batch_size, sequence_length, config.vocab_size)`): - Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax). - seq_relationship_logits (`jnp.ndarray` of shape `(batch_size, 2)`): - Prediction scores of the next sequence prediction (classification) head (scores of True/False continuation - before SoftMax). - hidden_states (`tuple(jnp.ndarray)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`): - Tuple of `jnp.ndarray` (one for the output of the embeddings + one for the output of each layer) of shape - `(batch_size, sequence_length, hidden_size)`. - - Hidden-states of the model at the output of each layer plus the initial embedding outputs. - attentions (`tuple(jnp.ndarray)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`): - Tuple of `jnp.ndarray` (one for each layer) of shape `(batch_size, num_heads, sequence_length, - sequence_length)`. - - Attentions weights after the attention softmax, used to compute the weighted average in the self-attention - heads. - """ - - prediction_logits: jnp.ndarray = None - seq_relationship_logits: jnp.ndarray = None - hidden_states: Optional[Tuple[jnp.ndarray]] = None - attentions: Optional[Tuple[jnp.ndarray]] = None - - -@flax.struct.dataclass -class FlaxBigBirdForQuestionAnsweringModelOutput(ModelOutput): - """ - Base class for outputs of question answering models. - - Args: - start_logits (`jnp.ndarray` of shape `(batch_size, sequence_length)`): - Span-start scores (before SoftMax). - end_logits (`jnp.ndarray` of shape `(batch_size, sequence_length)`): - Span-end scores (before SoftMax). - pooled_output (`jnp.ndarray` of shape `(batch_size, hidden_size)`): - pooled_output returned by FlaxBigBirdModel. 
- hidden_states (`tuple(jnp.ndarray)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`): - Tuple of `jnp.ndarray` (one for the output of the embeddings + one for the output of each layer) of shape - `(batch_size, sequence_length, hidden_size)`. - - Hidden-states of the model at the output of each layer plus the initial embedding outputs. - attentions (`tuple(jnp.ndarray)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`): - Tuple of `jnp.ndarray` (one for each layer) of shape `(batch_size, num_heads, sequence_length, - sequence_length)`. - - Attentions weights after the attention softmax, used to compute the weighted average in the self-attention - heads. - """ - - start_logits: jnp.ndarray = None - end_logits: jnp.ndarray = None - pooled_output: jnp.ndarray = None - hidden_states: Optional[Tuple[jnp.ndarray]] = None - attentions: Optional[Tuple[jnp.ndarray]] = None - - -BIG_BIRD_START_DOCSTRING = r""" - - This model inherits from [`FlaxPreTrainedModel`]. Check the superclass documentation for the generic methods the - library implements for all its model (such as downloading, saving and converting weights from PyTorch models) - - This model is also a Flax Linen [flax.linen.Module](https://flax.readthedocs.io/en/latest/flax.linen.html#module) - subclass. Use it as a regular Flax linen Module and refer to the Flax documentation for all matter related to - general usage and behavior. - - Finally, this model supports inherent JAX features such as: - - - [Just-In-Time (JIT) compilation](https://jax.readthedocs.io/en/latest/jax.html#just-in-time-compilation-jit) - - [Automatic Differentiation](https://jax.readthedocs.io/en/latest/jax.html#automatic-differentiation) - - [Vectorization](https://jax.readthedocs.io/en/latest/jax.html#vectorization-vmap) - - [Parallelization](https://jax.readthedocs.io/en/latest/jax.html#parallelization-pmap) - - Parameters: - config ([`BigBirdConfig`]): Model configuration class with all the parameters of the model. - Initializing with a config file does not load the weights associated with the model, only the - configuration. Check out the [`~FlaxPreTrainedModel.from_pretrained`] method to load the model weights. - dtype (`jax.numpy.dtype`, *optional*, defaults to `jax.numpy.float32`): - The data type of the computation. Can be one of `jax.numpy.float32`, `jax.numpy.float16` (on GPUs) and - `jax.numpy.bfloat16` (on TPUs). - - This can be used to enable mixed-precision training or half-precision inference on GPUs or TPUs. If - specified all the computation will be performed with the given `dtype`. - - **Note that this only specifies the dtype of the computation and does not influence the dtype of model - parameters.** - - If you wish to change the dtype of the model parameters, see [`~FlaxPreTrainedModel.to_fp16`] and - [`~FlaxPreTrainedModel.to_bf16`]. -""" - -BIG_BIRD_INPUTS_DOCSTRING = r""" - Args: - input_ids (`numpy.ndarray` of shape `({0})`): - Indices of input sequence tokens in the vocabulary. - - Indices can be obtained using [`AutoTokenizer`]. See [`PreTrainedTokenizer.encode`] and - [`PreTrainedTokenizer.__call__`] for details. - - [What are input IDs?](../glossary#input-ids) - attention_mask (`numpy.ndarray` of shape `({0})`, *optional*): - Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`: - - - 1 for tokens that are **not masked**, - - 0 for tokens that are **masked**. 
- - [What are attention masks?](../glossary#attention-mask) - token_type_ids (`numpy.ndarray` of shape `({0})`, *optional*): - Segment token indices to indicate first and second portions of the inputs. Indices are selected in `[0, - 1]`: - - - 0 corresponds to a *sentence A* token, - - 1 corresponds to a *sentence B* token. - - [What are token type IDs?](../glossary#token-type-ids) - position_ids (`numpy.ndarray` of shape `({0})`, *optional*): - Indices of positions of each input sequence tokens in the position embeddings. Selected in the range `[0, - config.max_position_embeddings - 1]`. - head_mask (`numpy.ndarray` of shape `({0})`, `optional): - Mask to nullify selected heads of the attention modules. Mask values selected in `[0, 1]`: - - - 1 indicates the head is **not masked**, - - 0 indicates the head is **masked**. - - return_dict (`bool`, *optional*): - Whether or not to return a [`~utils.ModelOutput`] instead of a plain tuple. - -""" - - -class FlaxBigBirdEmbeddings(nn.Module): - """Construct the embeddings from word, position and token_type embeddings.""" - - config: BigBirdConfig - dtype: jnp.dtype = jnp.float32 # the dtype of the computation - - # Copied from transformers.models.bert.modeling_flax_bert.FlaxBertEmbeddings.setup - def setup(self): - self.word_embeddings = nn.Embed( - self.config.vocab_size, - self.config.hidden_size, - embedding_init=jax.nn.initializers.normal(stddev=self.config.initializer_range), - dtype=self.dtype, - ) - self.position_embeddings = nn.Embed( - self.config.max_position_embeddings, - self.config.hidden_size, - embedding_init=jax.nn.initializers.normal(stddev=self.config.initializer_range), - dtype=self.dtype, - ) - self.token_type_embeddings = nn.Embed( - self.config.type_vocab_size, - self.config.hidden_size, - embedding_init=jax.nn.initializers.normal(stddev=self.config.initializer_range), - dtype=self.dtype, - ) - self.LayerNorm = nn.LayerNorm(epsilon=self.config.layer_norm_eps, dtype=self.dtype) - self.dropout = nn.Dropout(rate=self.config.hidden_dropout_prob) - - def __call__(self, input_ids, token_type_ids, position_ids, attention_mask, deterministic: bool = True): - # Embed - inputs_embeds = self.word_embeddings(input_ids.astype("i4")) - position_embeds = self.position_embeddings(position_ids.astype("i4")) - token_type_embeddings = self.token_type_embeddings(token_type_ids.astype("i4")) - - if self.config.rescale_embeddings: - inputs_embeds *= self.config.hidden_size**0.5 - - # Sum all embeddings - hidden_states = inputs_embeds + token_type_embeddings + position_embeds - - # Layer Norm - hidden_states = self.dropout(hidden_states, deterministic=deterministic) - hidden_states = self.LayerNorm(hidden_states) - return hidden_states - - -# Copied from transformers.models.bert.modeling_flax_bert.FlaxBertSelfAttention with Bert->BigBird -class FlaxBigBirdSelfAttention(nn.Module): - config: BigBirdConfig - causal: bool = False - dtype: jnp.dtype = jnp.float32 # the dtype of the computation - - def setup(self): - self.head_dim = self.config.hidden_size // self.config.num_attention_heads - if self.config.hidden_size % self.config.num_attention_heads != 0: - raise ValueError( - "`config.hidden_size`: {self.config.hidden_size} has to be a multiple of `config.num_attention_heads` " - " : {self.config.num_attention_heads}" - ) - - self.query = nn.Dense( - self.config.hidden_size, - dtype=self.dtype, - kernel_init=jax.nn.initializers.normal(self.config.initializer_range), - ) - self.key = nn.Dense( - self.config.hidden_size, - dtype=self.dtype, - 
kernel_init=jax.nn.initializers.normal(self.config.initializer_range), - ) - self.value = nn.Dense( - self.config.hidden_size, - dtype=self.dtype, - kernel_init=jax.nn.initializers.normal(self.config.initializer_range), - ) - - if self.causal: - self.causal_mask = make_causal_mask( - jnp.ones((1, self.config.max_position_embeddings), dtype="bool"), dtype="bool" - ) - - def _split_heads(self, hidden_states): - return hidden_states.reshape(hidden_states.shape[:2] + (self.config.num_attention_heads, self.head_dim)) - - def _merge_heads(self, hidden_states): - return hidden_states.reshape(hidden_states.shape[:2] + (self.config.hidden_size,)) - - @nn.compact - # Copied from transformers.models.bart.modeling_flax_bart.FlaxBartAttention._concatenate_to_cache - def _concatenate_to_cache(self, key, value, query, attention_mask): - """ - This function takes projected key, value states from a single input token and concatenates the states to cached - states from previous steps. This function is slighly adapted from the official Flax repository: - https://github.com/google/flax/blob/491ce18759622506588784b4fca0e4bf05f8c8cd/flax/linen/attention.py#L252 - """ - # detect if we're initializing by absence of existing cache data. - is_initialized = self.has_variable("cache", "cached_key") - cached_key = self.variable("cache", "cached_key", jnp.zeros, key.shape, key.dtype) - cached_value = self.variable("cache", "cached_value", jnp.zeros, value.shape, value.dtype) - cache_index = self.variable("cache", "cache_index", lambda: jnp.array(0, dtype=jnp.int32)) - - if is_initialized: - *batch_dims, max_length, num_heads, depth_per_head = cached_key.value.shape - # update key, value caches with our new 1d spatial slices - cur_index = cache_index.value - indices = (0,) * len(batch_dims) + (cur_index, 0, 0) - key = lax.dynamic_update_slice(cached_key.value, key, indices) - value = lax.dynamic_update_slice(cached_value.value, value, indices) - cached_key.value = key - cached_value.value = value - num_updated_cache_vectors = query.shape[1] - cache_index.value = cache_index.value + num_updated_cache_vectors - # causal mask for cached decoder self-attention: our single query position should only attend to those key positions that have already been generated and cached, not the remaining zero elements. 
- pad_mask = jnp.broadcast_to( - jnp.arange(max_length) < cur_index + num_updated_cache_vectors, - tuple(batch_dims) + (1, num_updated_cache_vectors, max_length), - ) - attention_mask = combine_masks(pad_mask, attention_mask) - return key, value, attention_mask - - def __call__( - self, - hidden_states, - attention_mask, - layer_head_mask, - key_value_states: Optional[jnp.array] = None, - init_cache: bool = False, - deterministic=True, - output_attentions: bool = False, - ): - # if key_value_states are provided this layer is used as a cross-attention layer - # for the decoder - is_cross_attention = key_value_states is not None - batch_size = hidden_states.shape[0] - - # get query proj - query_states = self.query(hidden_states) - # get key, value proj - if is_cross_attention: - # cross_attentions - key_states = self.key(key_value_states) - value_states = self.value(key_value_states) - else: - # self_attention - key_states = self.key(hidden_states) - value_states = self.value(hidden_states) - - query_states = self._split_heads(query_states) - key_states = self._split_heads(key_states) - value_states = self._split_heads(value_states) - - # handle cache prepare causal attention mask - if self.causal: - query_length, key_length = query_states.shape[1], key_states.shape[1] - if self.has_variable("cache", "cached_key"): - mask_shift = self.variables["cache"]["cache_index"] - max_decoder_length = self.variables["cache"]["cached_key"].shape[1] - causal_mask = lax.dynamic_slice( - self.causal_mask, (0, 0, mask_shift, 0), (1, 1, query_length, max_decoder_length) - ) - else: - causal_mask = self.causal_mask[:, :, :query_length, :key_length] - causal_mask = jnp.broadcast_to(causal_mask, (batch_size,) + causal_mask.shape[1:]) - - # combine masks if needed - if attention_mask is not None and self.causal: - attention_mask = jnp.broadcast_to(jnp.expand_dims(attention_mask, axis=(-3, -2)), causal_mask.shape) - attention_mask = combine_masks(attention_mask, causal_mask) - elif self.causal: - attention_mask = causal_mask - elif attention_mask is not None: - attention_mask = jnp.expand_dims(attention_mask, axis=(-3, -2)) - - # During fast autoregressive decoding, we feed one position at a time, - # and cache the keys and values step by step. - if self.causal and (self.has_variable("cache", "cached_key") or init_cache): - key_states, value_states, attention_mask = self._concatenate_to_cache( - key_states, value_states, query_states, attention_mask - ) - - # Convert the boolean attention mask to an attention bias. 
- if attention_mask is not None: - # attention mask in the form of attention bias - attention_bias = lax.select( - attention_mask > 0, - jnp.full(attention_mask.shape, 0.0).astype(self.dtype), - jnp.full(attention_mask.shape, jnp.finfo(self.dtype).min).astype(self.dtype), - ) - else: - attention_bias = None - - dropout_rng = None - if not deterministic and self.config.attention_probs_dropout_prob > 0.0: - dropout_rng = self.make_rng("dropout") - - attn_weights = dot_product_attention_weights( - query_states, - key_states, - bias=attention_bias, - dropout_rng=dropout_rng, - dropout_rate=self.config.attention_probs_dropout_prob, - broadcast_dropout=True, - deterministic=deterministic, - dtype=self.dtype, - precision=None, - ) - - # Mask heads if we want to - if layer_head_mask is not None: - attn_weights = jnp.einsum("...hqk,h->...hqk", attn_weights, layer_head_mask) - - attn_output = jnp.einsum("...hqk,...khd->...qhd", attn_weights, value_states) - attn_output = attn_output.reshape(attn_output.shape[:2] + (-1,)) - - outputs = (attn_output, attn_weights) if output_attentions else (attn_output,) - return outputs - - -class FlaxBigBirdBlockSparseAttention(nn.Module): - config: BigBirdConfig - block_sparse_seed: int = None - dtype: jnp.dtype = jnp.float32 - - def setup(self): - self.query = nn.Dense( - self.config.hidden_size, - dtype=self.dtype, - use_bias=self.config.use_bias, - kernel_init=jax.nn.initializers.normal(self.config.initializer_range), - ) - self.key = nn.Dense( - self.config.hidden_size, - dtype=self.dtype, - use_bias=self.config.use_bias, - kernel_init=jax.nn.initializers.normal(self.config.initializer_range), - ) - self.value = nn.Dense( - self.config.hidden_size, - dtype=self.dtype, - use_bias=self.config.use_bias, - kernel_init=jax.nn.initializers.normal(self.config.initializer_range), - ) - - @staticmethod - def transpose_for_scores(x, n_heads, head_size): - new_x_shape = x.shape[:-1] + (n_heads, head_size) - x = x.reshape(*new_x_shape) - return jnp.transpose(x, axes=(0, 2, 1, 3)) - - def __call__( - self, - hidden_states, - attention_mask, - deterministic=True, - output_attentions=False, - ): - n_heads = self.config.num_attention_heads - head_size = self.config.hidden_size // n_heads - - blocked_encoder_mask, band_mask, from_mask, to_mask = self.create_masks_for_block_sparse_attn( - attention_mask, self.config.block_size - ) - - query_layer = self.transpose_for_scores(self.query(hidden_states), n_heads, head_size) - key_layer = self.transpose_for_scores(self.key(hidden_states), n_heads, head_size) - value_layer = self.transpose_for_scores(self.value(hidden_states), n_heads, head_size) - - indices_prng_key = None - if not deterministic: - indices_prng_key = self.make_rng("indices") - - attn_output, attn_weights = self.bigbird_block_sparse_attention( - query_layer, - key_layer, - value_layer, - band_mask, - from_mask, - to_mask, - blocked_encoder_mask, - blocked_encoder_mask, - n_heads, - head_size, - indices_prng_key=indices_prng_key, - deterministic=deterministic, - plan_from_length=None, - plan_num_rand_blocks=None, - output_attentions=output_attentions, - ) - - outputs = (attn_output, attn_weights) if output_attentions else (attn_output,) - return outputs - - @staticmethod - def create_masks_for_block_sparse_attn(attention_mask, block_size: int): - batch_size, seq_length = attention_mask.shape - if seq_length % block_size != 0: - raise ValueError( - f"Sequence length must be multiple of block size, but sequence length is {seq_length}, while block" - f" size is 
{block_size}." - ) - - def create_band_mask_from_inputs(from_blocked_mask, to_blocked_mask): - """ - Create 3D attention mask from a 2D tensor mask. - - Args: - from_blocked_mask: 2D Tensor of shape [batch_size, - from_seq_length//from_block_size, from_block_size]. - to_blocked_mask: int32 Tensor of shape [batch_size, - to_seq_length//to_block_size, to_block_size]. - - Returns: - float Tensor of shape [batch_size, 1, from_seq_length//from_block_size-4, from_block_size, - 3*to_block_size]. - """ - exp_blocked_to_pad = jnp.concatenate( - [to_blocked_mask[:, 1:-3], to_blocked_mask[:, 2:-2], to_blocked_mask[:, 3:-1]], axis=2 - ) - band_mask = jnp.einsum("blq,blk->blqk", from_blocked_mask[:, 2:-2], exp_blocked_to_pad) - band_mask = jnp.expand_dims(band_mask, 1) - return band_mask - - blocked_encoder_mask = attention_mask.reshape(batch_size, seq_length // block_size, block_size) - band_mask = create_band_mask_from_inputs(blocked_encoder_mask, blocked_encoder_mask) - - from_mask = attention_mask.reshape(batch_size, 1, seq_length, 1) - to_mask = attention_mask.reshape(batch_size, 1, 1, seq_length) - - return blocked_encoder_mask, band_mask, from_mask, to_mask - - def bigbird_block_sparse_attention( - self, - query_layer, - key_layer, - value_layer, - band_mask, - from_mask, - to_mask, - from_blocked_mask, - to_blocked_mask, - n_heads, - head_size, - indices_prng_key: Optional[jax.random.PRNGKey] = None, - deterministic: Optional[bool] = True, - plan_from_length=None, - plan_num_rand_blocks=None, - output_attentions=None, - ): - # BigBird block-sparse attention as suggested in paper - - # ITC: - # global tokens: 2 x block_size - # window tokens: 3 x block_size - # random tokens: num_rand_tokens x block_size - - # ETC: - # global tokens: extra_globals_tokens + 2 x block_size - # window tokens: 3 x block_size - # random tokens: num_rand_tokens x block_size - - # Note: - # 1) Currently, ETC is not supported. - # 2) Window size is fixed to 3 blocks & it can be changed only by - # changing `block_size`. - # 3) Number of global blocks are fixed (2 blocks here) & global tokens can be - # controlled only by `block_size`. - - # attention is calculated separately for q[0], q[1], q[2:-2], q[-2], q[-1] in order to use special trick of - # shifting tokens (for calculating sliding attention). hence following code can be divided into 5 parts. 
- - bsz, _, from_seq_len, _ = query_layer.shape - to_seq_len = key_layer.shape[2] - from_block_size = to_block_size = self.config.block_size - - if from_seq_len % from_block_size != 0: - raise ValueError("Query sided sequence length must be multiple of block size") - - if to_seq_len % to_block_size != 0: - raise ValueError("Key/Value sided sequence length must be multiple of block size") - - if from_seq_len // from_block_size != to_seq_len // to_block_size: - raise ValueError("Error the number of blocks needs to be same!") - - n_rand_blocks = self.config.num_random_blocks - rsqrt_d = 1 / jnp.sqrt(head_size) - attn_mask_penalty = -10000.0 - - if from_seq_len in [1024, 3072, 4096]: # old plans used in paper - max_seqlen = self.config.max_position_embeddings - rand_attn = [ - self._bigbird_block_rand_mask( - max_seqlen, - max_seqlen, - from_block_size, - to_block_size, - n_rand_blocks, - indices_prng_key=indices_prng_key, - deterministic=deterministic, - last_idx=1024, - )[: (from_seq_len // from_block_size - 2)] - for _ in range(n_heads) - ] - else: - if plan_from_length is None: - plan_from_length, plan_num_rand_blocks = self._get_rand_attn_plan( - from_seq_len, from_block_size, n_rand_blocks - ) - rand_attn = self._bigbird_block_rand_mask_with_head( - from_seq_length=from_seq_len, - to_seq_length=to_seq_len, - from_block_size=from_block_size, - to_block_size=to_block_size, - num_heads=n_heads, - plan_from_length=plan_from_length, - plan_num_rand_blocks=plan_num_rand_blocks, - indices_prng_key=indices_prng_key, - ) - - rand_attn = jnp.stack(rand_attn, axis=0) - rand_attn = jnp.broadcast_to(rand_attn, (bsz,) + rand_attn.shape) - - rand_mask = self._create_rand_mask_from_inputs( - from_blocked_mask, to_blocked_mask, rand_attn, n_heads, n_rand_blocks, bsz, from_seq_len, from_block_size - ) - - blocked_query_matrix = query_layer.reshape(bsz, n_heads, from_seq_len // from_block_size, from_block_size, -1) - blocked_key_matrix = key_layer.reshape(bsz, n_heads, to_seq_len // to_block_size, to_block_size, -1) - blocked_value_matrix = value_layer.reshape(bsz, n_heads, to_seq_len // to_block_size, to_block_size, -1) - - shape = (bsz, n_heads, to_seq_len // to_block_size - 2, n_rand_blocks * to_block_size, -1) - gathered_key = self.jax_gather(blocked_key_matrix, rand_attn, batch_dims=2).reshape(*shape) - gathered_value = self.jax_gather(blocked_value_matrix, rand_attn, batch_dims=2).reshape(*shape) - - # 1st PART - # 1st block (global block) attention scores - # q[0] x (k[0], k[1], k[2], k[3], k[4] .... 
) - - # [bsz, n_heads, from_block_size, -1] x [bsz, n_heads, to_seq_len, -1] ==> [bsz, n_heads, from_block_size, to_seq_len] - first_product = jnp.einsum("bhqd,bhkd->bhqk", blocked_query_matrix[:, :, 0], key_layer) - - first_product = first_product * rsqrt_d - first_product += (1.0 - to_mask) * attn_mask_penalty - first_attn_weights = jax.nn.softmax(first_product, axis=-1) # [bsz, n_heads, from_block_size, to_seq_len] - - # [bsz, n_heads, from_block_size, to_seq_len] x [bsz, n_heads, to_seq_len, -1] ==> [bsz, n_heads, from_block_size, -1] - first_context_layer = jnp.einsum("bhqk,bhkd->bhqd", first_attn_weights, value_layer) - first_context_layer = jnp.expand_dims(first_context_layer, 2) - - # 2nd PART - # 2nd block attention scores - # q[1] x (sliding_keys, random_keys, global_keys) - # sliding key blocks -> 2nd, 3rd blocks - # global key blocks -> 1st block - - second_key_mat = jnp.concatenate( - [ - blocked_key_matrix[:, :, 0], - blocked_key_matrix[:, :, 1], - blocked_key_matrix[:, :, 2], - blocked_key_matrix[:, :, -1], - gathered_key[:, :, 0], - ], - axis=2, - ) # [bsz, n_heads, (4+n_rand_blocks)*to_block_size, -1] - second_value_mat = jnp.concatenate( - [ - blocked_value_matrix[:, :, 0], - blocked_value_matrix[:, :, 1], - blocked_value_matrix[:, :, 2], - blocked_value_matrix[:, :, -1], - gathered_value[:, :, 0], - ], - axis=2, - ) # [bsz, n_heads, (4+n_rand_blocks)*to_block_size, -1] - - # [bsz, n_heads, from_block_size, -1] x [bsz, n_heads, (4+n_rand_blocks)*to_block_size, -1] - # ==> [bsz, n_heads, from_block_size, (4+n_rand_blocks)*to_block_size] - second_product = jnp.einsum("bhqd,bhkd->bhqk", blocked_query_matrix[:, :, 1], second_key_mat) - second_seq_pad = jnp.concatenate( - [ - to_mask[:, :, :, : 3 * to_block_size], - to_mask[:, :, :, -to_block_size:], - jnp.ones([bsz, 1, 1, n_rand_blocks * to_block_size], dtype=to_mask.dtype), - ], - axis=3, - ) - second_rand_pad = jnp.concatenate( - [ - jnp.ones([bsz, n_heads, from_block_size, 4 * to_block_size], dtype=rand_mask.dtype), - rand_mask[:, :, 0], - ], - axis=3, - ) - second_product = second_product * rsqrt_d - second_product += (1.0 - jnp.minimum(second_seq_pad, second_rand_pad)) * attn_mask_penalty - second_attn_weights = jax.nn.softmax( - second_product, axis=-1 - ) # [bsz, n_heads, from_block_size, (4+n_rand_blocks)*to_block_size] - - # [bsz, n_heads, from_block_size, (4+r)*to_block_size] x [bsz, n_heads, (4+r)*to_block_size, -1] - # ==> [bsz, n_heads, from_block_size, -1] - second_context_layer = jnp.einsum("bhqk,bhkd->bhqd", second_attn_weights, second_value_mat) - second_context_layer = jnp.expand_dims(second_context_layer, 2) - - # 3rd PART - # Middle blocks attention scores - # q[-2:2] x (sliding_keys, random_keys, global_keys) - # sliding attn is calculated using special trick of shifting tokens as discussed in paper - # random keys are generated by taking random indices as per `rand_attn` - # global keys -> 1st & last block - - exp_blocked_key_matrix = jnp.concatenate( - [blocked_key_matrix[:, :, 1:-3], blocked_key_matrix[:, :, 2:-2], blocked_key_matrix[:, :, 3:-1]], axis=3 - ) # [bsz, n_heads, from_seq_len//from_block_size-4, 3*to_block_size, -1] - exp_blocked_value_matrix = jnp.concatenate( - [blocked_value_matrix[:, :, 1:-3], blocked_value_matrix[:, :, 2:-2], blocked_value_matrix[:, :, 3:-1]], - axis=3, - ) # [bsz, n_heads, from_seq_len//from_block_size-4, 3*to_block_size, -1] - middle_query_matrix = blocked_query_matrix[:, :, 2:-2] - - # sliding attention scores for q[-2:2] - # [bsz, n_heads, 
from_seq_len//from_block_size-4, from_block_size, -1] x [b, n_heads, from_seq_len//from_block_size-4, 3*to_block_size, -1] - inner_band_product = jnp.einsum("bhlqd,bhlkd->bhlqk", middle_query_matrix, exp_blocked_key_matrix) - # ==> [bsz, n_heads, from_seq_len//from_block_size-4, from_block_size, 3*to_block_size] - inner_band_product = inner_band_product * rsqrt_d - - # randn attention scores for q[-2:2] - # [bsz, n_heads, from_seq_len//from_block_size-4, from_block_size, -1] - # x [bsz, n_heads, from_seq_len//from_block_size-4, n_rand_blocks*to_block_size, -1] - rand_band_product = jnp.einsum("bhlqd,bhlkd->bhlqk", middle_query_matrix, gathered_key[:, :, 1:-1]) - # ==> [bsz, n_heads, from_seq_len//from_block_size-4, from_block_size, n_rand_blocks*to_block_size] - rand_band_product = rand_band_product * rsqrt_d - - # Including 1st block (since it's global) - # [bsz, n_heads, from_seq_len//from_block_size-4, from_block_size, -1] x [bsz, n_heads, to_block_size, -1] - # ==> [bsz, n_heads, from_seq_len//from_block_size-4, from_block_size, to_block_size] - first_band_product = jnp.einsum("bhlqd,bhkd->bhlqk", middle_query_matrix, blocked_key_matrix[:, :, 0]) - first_band_product = first_band_product * rsqrt_d - - # Including last block (since it's global) - # [bsz, n_heads, from_seq_len//from_block_size-4, from_block_size, -1] x [bsz, n_heads, to_block_size, -1] - # ==> [bsz, n_heads, from_seq_len//from_block_size-4, from_block_size, to_block_size] - last_band_product = jnp.einsum("bhlqd,bhkd->bhlqk", middle_query_matrix, blocked_key_matrix[:, :, -1]) - last_band_product = last_band_product * rsqrt_d - - # masking padded tokens - inner_band_product += (1.0 - band_mask) * attn_mask_penalty - first_band_product += (1.0 - jnp.expand_dims(to_mask[:, :, :, :to_block_size], 3)) * attn_mask_penalty - last_band_product += (1.0 - jnp.expand_dims(to_mask[:, :, :, -to_block_size:], 3)) * attn_mask_penalty - rand_band_product += (1.0 - rand_mask[:, :, 1:-1]) * attn_mask_penalty - - # completing attention scores matrix for all q[-2:2] - band_product = jnp.concatenate( - [first_band_product, inner_band_product, rand_band_product, last_band_product], axis=-1 - ) # [bsz, n_heads, from_seq_len//from_block_size-4, from_block_size, (5+n_rand_blocks)*to_block_size] - - # safely doing softmax since attention matrix is completed - attn_weights = jax.nn.softmax( - band_product, axis=-1 - ) # [bsz, n_heads, from_seq_len//from_block_size-4, from_block_size, (5+n_rand_blocks)*to_block_size] - - # contribution of sliding keys - # [bsz, n_heads, m//from_block_size-4, from_block_size, 3*to_block_size] - # x [bsz, n_heads, from_seq_len//from_block_size-4, 3*to_block_size, -1] - context_layer = jnp.einsum( - "bhlqk,bhlkd->bhlqd", attn_weights[:, :, :, :, to_block_size : 4 * to_block_size], exp_blocked_value_matrix - ) - # ==> [bsz, n_heads, from_seq_len//from_block_size-4, from_block_size, -1] - - # adding contribution of random keys - # [bsz, n_heads, from_seq_len//from_block_size-4, from_block_size, n_rand_blocks*to_block_size] - # x [bsz, n_heads, from_seq_len//from_block_size-4, n_rand_blocks*to_block_size, -1] - context_layer += jnp.einsum( - "bhlqk,bhlkd->bhlqd", - attn_weights[:, :, :, :, 4 * to_block_size : -to_block_size], - gathered_value[:, :, 1:-1], - ) - # ==> [bsz, n_heads, from_seq_len//from_block_size-4, from_block_size, -1] - - # adding contribution of global keys - # [bsz, n_heads, from_seq_len//from_block_size-4, from_block_size, to_block_size] x [bsz, n_heads, to_block_size, -1] - # ==> [bsz, n_heads, 
from_seq_len//from_block_size-4, from_block_size, -1] - context_layer += jnp.einsum( - "bhlqk,bhkd->bhlqd", attn_weights[:, :, :, :, :to_block_size], blocked_value_matrix[:, :, 0] - ) - # [bsz, n_heads, from_seq_len//from_block_size-4, from_block_size, to_block_size] x [bsz, n_heads, to_block_size, -1] - # ==> [bsz, n_heads, from_seq_len//from_block_size-4, from_block_size, -1] - context_layer += jnp.einsum( - "bhlqk,bhkd->bhlqd", attn_weights[:, :, :, :, -to_block_size:], blocked_value_matrix[:, :, -1] - ) - - # 4th PART - # last 2nd token attention scores - # q[-2] x (sliding_keys, random_keys, global_keys) - # sliding key blocks -> last 3 blocks - # global key block -> 1st block - # random key block -> based on indices stored in `randn_attn` - - second_last_key_mat = jnp.concatenate( - [ - blocked_key_matrix[:, :, 0], - blocked_key_matrix[:, :, -3], - blocked_key_matrix[:, :, -2], - blocked_key_matrix[:, :, -1], - gathered_key[:, :, -1], - ], - axis=2, - ) # [bsz, n_heads, (4+n_random_blocks)*to_block_size, -1] - second_last_value_mat = jnp.concatenate( - [ - blocked_value_matrix[:, :, 0], - blocked_value_matrix[:, :, -3], - blocked_value_matrix[:, :, -2], - blocked_value_matrix[:, :, -1], - gathered_value[:, :, -1], - ], - axis=2, - ) # [bsz, n_heads, (4+r)*to_block_size, -1] - - # [bsz, n_heads, from_block_size, -1] x [bsz, n_heads, (4+n_rand_blocks)*to_block_size, -1] - # ==> [bsz, n_heads, from_block_size, (4+n_rand_blocks)*to_block_size] - second_last_product = jnp.einsum("bhqd,bhkd->bhqk", blocked_query_matrix[:, :, -2], second_last_key_mat) - second_last_seq_pad = jnp.concatenate( - [ - to_mask[:, :, :, :to_block_size], - to_mask[:, :, :, -3 * to_block_size :], - jnp.ones([bsz, 1, 1, n_rand_blocks * to_block_size], dtype=to_mask.dtype), - ], - axis=3, - ) - second_last_rand_pad = jnp.concatenate( - [ - jnp.ones([bsz, n_heads, from_block_size, 4 * to_block_size], dtype=rand_mask.dtype), - rand_mask[:, :, -1], - ], - axis=3, - ) - second_last_product = second_last_product * rsqrt_d - second_last_product += (1.0 - jnp.minimum(second_last_seq_pad, second_last_rand_pad)) * attn_mask_penalty - second_last_attn_weights = jax.nn.softmax( - second_last_product, axis=-1 - ) # [bsz, n_heads, from_block_size, (4+n_rand_blocks)*to_block_size] - - # [bsz, n_heads, from_block_size, (4+n_rand_blocks)*to_block_size] x [bsz, n_heads, (4+n_rand_blocks)*to_block_size, -1] - # ==> [bsz, n_heads, from_block_size, -1] - second_last_context_layer = jnp.einsum("bhqk,bhkd->bhqd", second_last_attn_weights, second_last_value_mat) - second_last_context_layer = jnp.expand_dims(second_last_context_layer, 2) - - # 5th PART - # last block (global) attention scores - # q[-1] x (k[0], k[1], k[2], k[3], .... 
) - - # [bsz, n_heads, from_block_size, -1] x [bsz, n_heads, to_seq_len, -1] ==> [bsz, n_heads, from_block_size, to_seq_len] - last_product = jnp.einsum("bhqd,bhkd->bhqk", blocked_query_matrix[:, :, -1], key_layer) - last_product = last_product * rsqrt_d - last_product += (1.0 - to_mask) * attn_mask_penalty - last_attn_weights = jax.nn.softmax(last_product, axis=-1) # [bsz, n_heads, from_block_size, n] - - # [bsz, n_heads, from_block_size, to_seq_len] x [bsz, n_heads, to_seq_len, -1] ==> [bsz, n_heads, from_block_size, -1] - last_context_layer = jnp.einsum("bhqk,bhkd->bhqd", last_attn_weights, value_layer) - last_context_layer = jnp.expand_dims(last_context_layer, 2) - - # combining representations of all tokens - context_layer = jnp.concatenate( - [first_context_layer, second_context_layer, context_layer, second_last_context_layer, last_context_layer], - axis=2, - ) - context_layer = context_layer.reshape(bsz, n_heads, from_seq_len, -1) * from_mask - context_layer = jnp.transpose(context_layer, axes=(0, 2, 1, 3)).reshape(bsz, from_seq_len, -1) - - attention_probs = None - - return context_layer, attention_probs - - @staticmethod - def jax_gather(params, indices, batch_dims=2): - """ - Gather the indices from params correctly (equivalent to tf.gather but with modifications) - - Args: - params: (bsz, n_heads, num_blocks, block_size, head_dim) - indices: (bhlqk", from_blocked_mask[:, 1:-1], rand_mask) - return rand_mask - - @staticmethod - def _get_rand_attn_plan(from_seq_length, from_block_size, num_rand_blocks): - """ - Gives the plan of where to put random attention. - - Args: - from_seq_length: int. length of from sequence. - from_block_size: int. size of block in from sequence. - num_rand_blocks: int. Number of random chunks per row. - - Returns: - plan_from_length: ending location of from block plan_num_rand_blocks: number of random ending location for - each block - """ - - plan_from_length = [] - plan_num_rand_blocks = [] - if (2 * num_rand_blocks + 5) < (from_seq_length // from_block_size): - plan_from_length.append(int((2 * num_rand_blocks + 5) * from_block_size)) - plan_num_rand_blocks.append(num_rand_blocks) - plan_from_length.append(from_seq_length) - plan_num_rand_blocks.append(0) - elif (num_rand_blocks + 5) < (from_seq_length // from_block_size): - plan_from_length.append(int((num_rand_blocks + 5) * from_block_size)) - plan_num_rand_blocks.append(num_rand_blocks // 2) - plan_from_length.append(from_seq_length) - plan_num_rand_blocks.append(num_rand_blocks - (num_rand_blocks // 2)) - else: - plan_from_length.append(from_seq_length) - plan_num_rand_blocks.append(num_rand_blocks) - - return plan_from_length, plan_num_rand_blocks - - @staticmethod - def _bigbird_block_rand_mask( - from_seq_length, - to_seq_length, - from_block_size, - to_block_size, - num_rand_blocks, - indices_prng_key: Optional[jax.random.PRNGKey] = None, - deterministic: Optional[bool] = True, - last_idx: Optional[int] = -1, - ): - """ - Create adjacency list of random attention. - - Args: - from_seq_length: int. length of from sequence. - to_seq_length: int. length of to sequence. - from_block_size: int. size of block in from sequence. - to_block_size: int. size of block in to sequence. - num_rand_blocks: int. Number of random chunks per row. - indices_prng_key: jax.random.PRNGKey. PRNG key that is used to perform random jax operations. - deterministic: bool. When False random attention will be used. 
- last_idx: if -1 then num_rand_blocks blocks chosen anywhere in to sequence, - if positive then num_rand_blocks blocks chosen only up to last_idx. - - Returns: - adjacency list of size from_seq_length//from_block_size-2 by num_rand_blocks - """ - # using this method when from_seq_length in [1024, 3072, 4096] - - if from_seq_length // from_block_size != to_seq_length // to_block_size: - raise ValueError("Error the number of blocks needs to be same!") - rand_attn = jnp.zeros((from_seq_length // from_block_size - 2, num_rand_blocks), dtype=jnp.int32) - # deterministic nor randomness - if deterministic: - return rand_attn - - middle_seq = jnp.arange(1, to_seq_length // to_block_size - 1, dtype=jnp.int32) - last = to_seq_length // to_block_size - 1 - if last_idx > (2 * to_block_size): - last = (last_idx // to_block_size) - 1 - - r = num_rand_blocks # shorthand - for i in range(1, from_seq_length // from_block_size - 1): - start = i - 2 - end = i - if i == 1: - seq_values = jax.random.permutation(indices_prng_key, middle_seq[2:last])[:r] - rand_attn = rand_attn.at[i - 1].set(seq_values) - elif i == 2: - seq_values = jax.random.permutation(indices_prng_key, middle_seq[3:last])[:r] - rand_attn = rand_attn.at[i - 1].set(seq_values) - elif i == from_seq_length // from_block_size - 3: - seq_values = jax.random.permutation(indices_prng_key, middle_seq[:last])[:r] - rand_attn = rand_attn.at[i - 1].set(seq_values) - # Missing -3: should have been sliced till last-3 - elif i == from_seq_length // from_block_size - 2: - seq_values = jax.random.permutation(indices_prng_key, middle_seq[:last])[:r] - rand_attn = rand_attn.at[i - 1].set(seq_values) - # Missing -4: should have been sliced till last-4 - else: - if start > last: - start = last - seq_values = jax.random.permutation(indices_prng_key, middle_seq[:start])[:r] - rand_attn = rand_attn.at[i - 1].set(seq_values) - elif (end + 1) == last: - seq_values = jax.random.permutation(indices_prng_key, middle_seq[:start])[:r] - rand_attn = rand_attn.at[i - 1].set(seq_values) - else: - concat_values = jnp.concatenate((middle_seq[:start], middle_seq[end + 1 : last])) - seq_values = jax.random.permutation(indices_prng_key, concat_values)[:r] - rand_attn = rand_attn.at[i - 1].set(seq_values) - return rand_attn - - def _bigbird_block_rand_mask_with_head( - self, - from_seq_length, - to_seq_length, - from_block_size, - to_block_size, - num_heads, - plan_from_length, - plan_num_rand_blocks, - indices_prng_key: Optional[jax.random.PRNGKey] = None, - deterministic: Optional[bool] = True, - window_block_left=1, - window_block_right=1, - global_block_top=1, - global_block_bottom=1, - global_block_left=1, - global_block_right=1, - ): - """ - Create adjacency list of random attention. - - Args: - from_seq_length: int. length of from sequence. - to_seq_length: int. length of to sequence. - from_block_size: int. size of block in from sequence. - to_block_size: int. size of block in to sequence. - num_heads: int. total number of heads. - plan_from_length: list. plan from length where num_random_blocks are choosen from. - plan_num_rand_blocks: list. number of rand blocks within the plan. - indices_prng_key: jax.random.PRNGKey. PRNG key that is used to perform random jax operations. - deterministic: bool. When False random attention will be used. - window_block_left: int. number of blocks of window to left of a block. - window_block_right: int. number of blocks of window to right of a block. - global_block_top: int. number of blocks at the top. - global_block_bottom: int. 
number of blocks at the bottom. - global_block_left: int. Number of blocks globally used to the left. - global_block_right: int. Number of blocks globally used to the right. - - Returns: - adjacency list of size num_head where each element is of size from_seq_length//from_block_size-2 by - num_rand_blocks - """ - # using this method when from_seq_length not in [1024, 3072, 4096] - - if from_seq_length // from_block_size != to_seq_length // to_block_size: - raise ValueError("Error the number of blocks needs to be same!") - - if from_seq_length not in plan_from_length: - raise ValueError("Error from sequence length not in plan!") - - # Total number of blocks in the mmask - num_blocks = from_seq_length // from_block_size - # Number of blocks per plan - plan_block_length = jnp.array(plan_from_length) // from_block_size - # till when to follow plan - max_plan_idx = plan_from_length.index(from_seq_length) - - # Random Attention adjacency list - rand_attn = [ - jnp.zeros((num_blocks, sum(plan_num_rand_blocks[: max_plan_idx + 1])), dtype=jnp.int32) - for i in range(num_heads) - ] - - # deterministic - if deterministic: - for nh in range(num_heads): - rand_attn[nh] = rand_attn[nh][global_block_top : num_blocks - global_block_bottom, :] - return rand_attn - - # We will go iteratively over the plan blocks and pick random number of - # Attention blocks from the legally allowed blocks - for plan_idx in range(max_plan_idx + 1): - rnd_r_cnt = 0 - if plan_idx > 0: - # set the row for all from_blocks starting from 0 to - # plan_block_length[plan_idx-1] - # column indx start fromm plan_block_length[plan_idx-1] and ends at - # plan_block_length[plan_idx] - if plan_num_rand_blocks[plan_idx] > 0: - rnd_r_cnt = int(sum(plan_num_rand_blocks[:plan_idx])) - curr_r_cnt = int(sum(plan_num_rand_blocks[: plan_idx + 1])) - for blk_rw_idx in range(global_block_top, plan_block_length[plan_idx - 1]): - for h in range(num_heads): - single_block_row_attention = self._get_single_block_row_attention( - block_id=blk_rw_idx, - to_start_block_id=plan_block_length[plan_idx - 1], - to_end_block_id=plan_block_length[plan_idx], - num_rand_blocks=plan_num_rand_blocks[plan_idx], - window_block_left=window_block_left, - window_block_right=window_block_right, - global_block_left=global_block_left, - global_block_right=global_block_right, - indices_prng_key=indices_prng_key, - ) - rand_attn[h] = ( - rand_attn[h].at[blk_rw_idx, rnd_r_cnt:curr_r_cnt].set(single_block_row_attention) - ) - - for pl_id in range(plan_idx): - if plan_num_rand_blocks[pl_id] == 0: - continue - for blk_rw_idx in range(plan_block_length[plan_idx - 1], plan_block_length[plan_idx]): - rnd_r_cnt = 0 - to_start_block_id = 0 - if pl_id > 0: - rnd_r_cnt = int(sum(plan_num_rand_blocks[:pl_id])) - to_start_block_id = plan_block_length[pl_id - 1] - curr_r_cnt = int(sum(plan_num_rand_blocks[: pl_id + 1])) - for h in range(num_heads): - single_block_row_attention = self._get_single_block_row_attention( - block_id=blk_rw_idx, - to_start_block_id=to_start_block_id, - to_end_block_id=plan_block_length[pl_id], - num_rand_blocks=plan_num_rand_blocks[pl_id], - window_block_left=window_block_left, - window_block_right=window_block_right, - global_block_left=global_block_left, - global_block_right=global_block_right, - indices_prng_key=indices_prng_key, - ) - rand_attn[h] = ( - rand_attn[h].at[blk_rw_idx, rnd_r_cnt:curr_r_cnt].set(single_block_row_attention) - ) - - if plan_num_rand_blocks[plan_idx] == 0: - continue - curr_r_cnt = int(sum(plan_num_rand_blocks[: plan_idx + 1])) - 
from_start_block_id = global_block_top - to_start_block_id = 0 - if plan_idx > 0: - rnd_r_cnt = int(sum(plan_num_rand_blocks[:plan_idx])) - from_start_block_id = plan_block_length[plan_idx - 1] - to_start_block_id = plan_block_length[plan_idx - 1] - for blk_rw_idx in range(from_start_block_id, plan_block_length[plan_idx]): - for h in range(num_heads): - single_block_row_attention = self._get_single_block_row_attention( - block_id=blk_rw_idx, - to_start_block_id=to_start_block_id, - to_end_block_id=plan_block_length[plan_idx], - num_rand_blocks=plan_num_rand_blocks[plan_idx], - window_block_left=window_block_left, - window_block_right=window_block_right, - global_block_left=global_block_left, - global_block_right=global_block_right, - indices_prng_key=indices_prng_key, - ) - rand_attn[h] = rand_attn[h].at[blk_rw_idx, rnd_r_cnt:curr_r_cnt].set(single_block_row_attention) - - for nh in range(num_heads): - rand_attn[nh] = rand_attn[nh][global_block_top : num_blocks - global_block_bottom, :] - return rand_attn - - @staticmethod - def _get_single_block_row_attention( - block_id, - to_start_block_id, - to_end_block_id, - num_rand_blocks, - indices_prng_key: Optional[jax.random.PRNGKey] = None, - window_block_left=1, - window_block_right=1, - global_block_left=1, - global_block_right=1, - ): - """ - For a single row block get random row attention. - - Args: - block_id: int. block id of row. - to_start_block_id: int. random attention column start id. - to_end_block_id: int. random attention column end id. - num_rand_blocks: int. number of random blocks to be selected. - indices_prng_key: jax.random.PRNGKey. PRNG key that is used to perform random jax operations - window_block_left: int. number of blocks of window to left of a block. - window_block_right: int. number of blocks of window to right of a block. - global_block_left: int. Number of blocks globally used to the left. - global_block_right: int. Number of blocks globally used to the right. - - Returns: - row containing the random attention vector of size num_rand_blocks. 
- """ - # list of to_blocks from which to choose random attention - to_block_list = jnp.arange(to_start_block_id, to_end_block_id, dtype=jnp.int32) - # permute the blocks - perm_block = jax.random.permutation(indices_prng_key, to_block_list) - - # illegal blocks for the current block id, using window - illegal_blocks = list(range(block_id - window_block_left, block_id + window_block_right + 1)) - - # Add blocks at the start and at the end - illegal_blocks.extend(list(range(global_block_left))) - illegal_blocks.extend(list(range(to_end_block_id - global_block_right, to_end_block_id))) - - # The second from_block cannot choose random attention on second last to_block - if block_id == 1: - illegal_blocks.append(to_end_block_id - 2) - - # The second last from_block cannot choose random attention on second to_block - if block_id == to_end_block_id - 2: - illegal_blocks.append(1) - - selected_random_blocks = [] - - for i in range(to_end_block_id - to_start_block_id): - if perm_block[i] not in illegal_blocks: - selected_random_blocks.append(perm_block[i]) - if len(selected_random_blocks) == num_rand_blocks: - break - return jnp.array(selected_random_blocks, dtype=jnp.int32) - - -# Copied from transformers.models.bert.modeling_flax_bert.FlaxBertSelfOutput with Bert->BigBird -class FlaxBigBirdSelfOutput(nn.Module): - config: BigBirdConfig - dtype: jnp.dtype = jnp.float32 # the dtype of the computation - - def setup(self): - self.dense = nn.Dense( - self.config.hidden_size, - kernel_init=jax.nn.initializers.normal(self.config.initializer_range), - dtype=self.dtype, - ) - self.LayerNorm = nn.LayerNorm(epsilon=self.config.layer_norm_eps, dtype=self.dtype) - self.dropout = nn.Dropout(rate=self.config.hidden_dropout_prob) - - def __call__(self, hidden_states, input_tensor, deterministic: bool = True): - hidden_states = self.dense(hidden_states) - hidden_states = self.dropout(hidden_states, deterministic=deterministic) - hidden_states = self.LayerNorm(hidden_states + input_tensor) - return hidden_states - - -class FlaxBigBirdAttention(nn.Module): - config: BigBirdConfig - layer_id: int = None - causal: bool = False - dtype: jnp.dtype = jnp.float32 - - def setup(self): - if self.config.attention_type == "original_full": - self.self = FlaxBigBirdSelfAttention(self.config, causal=self.causal, dtype=self.dtype) - elif self.config.attention_type == "block_sparse": - self.self = FlaxBigBirdBlockSparseAttention(self.config, block_sparse_seed=self.layer_id, dtype=self.dtype) - else: - raise ValueError( - f"Your `config.attention_type` is {self.config.attention_type} but it can either be `original_full` or" - " `block_sparse`" - ) - - self.output = FlaxBigBirdSelfOutput(self.config, dtype=self.dtype) - - def __call__( - self, - hidden_states, - attention_mask, - layer_head_mask, - key_value_states=None, - init_cache=False, - deterministic=True, - output_attentions: bool = False, - ): - # Attention mask comes in as attention_mask.shape == (*batch_sizes, kv_length) - # FLAX expects: attention_mask.shape == (*batch_sizes, 1, 1, kv_length) such that it is broadcastable - # with attn_weights.shape == (*batch_sizes, num_heads, q_length, kv_length) - if self.config.attention_type == "original_full": - attn_outputs = self.self( - hidden_states, - attention_mask, - layer_head_mask=layer_head_mask, - key_value_states=key_value_states, - init_cache=init_cache, - deterministic=deterministic, - output_attentions=output_attentions, - ) - else: - attn_outputs = self.self( - hidden_states, - attention_mask, - 
deterministic=deterministic, - output_attentions=output_attentions, - ) - attn_output = attn_outputs[0] - hidden_states = self.output(attn_output, hidden_states, deterministic=deterministic) - - outputs = (hidden_states,) - - if output_attentions: - outputs += (attn_outputs[1],) - - return outputs - - -# Copied from transformers.models.bert.modeling_flax_bert.FlaxBertIntermediate with Bert->BigBird -class FlaxBigBirdIntermediate(nn.Module): - config: BigBirdConfig - dtype: jnp.dtype = jnp.float32 # the dtype of the computation - - def setup(self): - self.dense = nn.Dense( - self.config.intermediate_size, - kernel_init=jax.nn.initializers.normal(self.config.initializer_range), - dtype=self.dtype, - ) - self.activation = ACT2FN[self.config.hidden_act] - - def __call__(self, hidden_states): - hidden_states = self.dense(hidden_states) - hidden_states = self.activation(hidden_states) - return hidden_states - - -# Copied from transformers.models.bert.modeling_flax_bert.FlaxBertOutput with Bert->BigBird -class FlaxBigBirdOutput(nn.Module): - config: BigBirdConfig - dtype: jnp.dtype = jnp.float32 # the dtype of the computation - - def setup(self): - self.dense = nn.Dense( - self.config.hidden_size, - kernel_init=jax.nn.initializers.normal(self.config.initializer_range), - dtype=self.dtype, - ) - self.dropout = nn.Dropout(rate=self.config.hidden_dropout_prob) - self.LayerNorm = nn.LayerNorm(epsilon=self.config.layer_norm_eps, dtype=self.dtype) - - def __call__(self, hidden_states, attention_output, deterministic: bool = True): - hidden_states = self.dense(hidden_states) - hidden_states = self.dropout(hidden_states, deterministic=deterministic) - hidden_states = self.LayerNorm(hidden_states + attention_output) - return hidden_states - - -class FlaxBigBirdLayer(nn.Module): - config: BigBirdConfig - layer_id: int = None - dtype: jnp.dtype = jnp.float32 # the dtype of the computation - - def setup(self): - self.attention = FlaxBigBirdAttention( - self.config, layer_id=self.layer_id, causal=self.config.is_decoder, dtype=self.dtype - ) - self.intermediate = FlaxBigBirdIntermediate(self.config, dtype=self.dtype) - self.output = FlaxBigBirdOutput(self.config, dtype=self.dtype) - if self.config.add_cross_attention: - self.crossattention = FlaxBigBirdAttention(self.config, causal=False, dtype=self.dtype) - - # Copied from transformers.models.bert.modeling_flax_bert.FlaxBertLayer.__call__ with Bert->BigBird - def __call__( - self, - hidden_states, - attention_mask, - layer_head_mask, - encoder_hidden_states: Optional[jnp.ndarray] = None, - encoder_attention_mask: Optional[jnp.ndarray] = None, - init_cache: bool = False, - deterministic: bool = True, - output_attentions: bool = False, - ): - # Self Attention - attention_outputs = self.attention( - hidden_states, - attention_mask, - layer_head_mask=layer_head_mask, - init_cache=init_cache, - deterministic=deterministic, - output_attentions=output_attentions, - ) - attention_output = attention_outputs[0] - - # Cross-Attention Block - if encoder_hidden_states is not None: - cross_attention_outputs = self.crossattention( - attention_output, - attention_mask=encoder_attention_mask, - layer_head_mask=layer_head_mask, - key_value_states=encoder_hidden_states, - deterministic=deterministic, - output_attentions=output_attentions, - ) - attention_output = cross_attention_outputs[0] - - hidden_states = self.intermediate(attention_output) - hidden_states = self.output(hidden_states, attention_output, deterministic=deterministic) - - outputs = (hidden_states,) - - if 
output_attentions: - outputs += (attention_outputs[1],) - if encoder_hidden_states is not None: - outputs += (cross_attention_outputs[1],) - return outputs - - -class FlaxBigBirdLayerCollection(nn.Module): - config: BigBirdConfig - dtype: jnp.dtype = jnp.float32 # the dtype of the computation - gradient_checkpointing: bool = False - - def setup(self): - if self.gradient_checkpointing: - FlaxBigBirdCheckpointLayer = remat(FlaxBigBirdLayer, static_argnums=(5, 6, 7)) - self.layers = [ - FlaxBigBirdCheckpointLayer(self.config, layer_id=i, name=str(i), dtype=self.dtype) - for i in range(self.config.num_hidden_layers) - ] - else: - self.layers = [ - FlaxBigBirdLayer(self.config, layer_id=i, name=str(i), dtype=self.dtype) - for i in range(self.config.num_hidden_layers) - ] - - # Copied from transformers.models.bert.modeling_flax_bert.FlaxBertLayerCollection.__call__ with Bert->BigBird - def __call__( - self, - hidden_states, - attention_mask, - head_mask, - encoder_hidden_states: Optional[jnp.ndarray] = None, - encoder_attention_mask: Optional[jnp.ndarray] = None, - init_cache: bool = False, - deterministic: bool = True, - output_attentions: bool = False, - output_hidden_states: bool = False, - return_dict: bool = True, - ): - all_attentions = () if output_attentions else None - all_hidden_states = () if output_hidden_states else None - all_cross_attentions = () if (output_attentions and encoder_hidden_states is not None) else None - - # Check if head_mask has a correct number of layers specified if desired - if head_mask is not None: - if head_mask.shape[0] != (len(self.layers)): - raise ValueError( - f"The head_mask should be specified for {len(self.layers)} layers, but it is for " - f" {head_mask.shape[0]}." - ) - - for i, layer in enumerate(self.layers): - if output_hidden_states: - all_hidden_states += (hidden_states,) - - layer_outputs = layer( - hidden_states, - attention_mask, - head_mask[i] if head_mask is not None else None, - encoder_hidden_states, - encoder_attention_mask, - init_cache, - deterministic, - output_attentions, - ) - - hidden_states = layer_outputs[0] - - if output_attentions: - all_attentions += (layer_outputs[1],) - - if encoder_hidden_states is not None: - all_cross_attentions += (layer_outputs[2],) - - if output_hidden_states: - all_hidden_states += (hidden_states,) - - outputs = (hidden_states, all_hidden_states, all_attentions, all_cross_attentions) - - if not return_dict: - return tuple(v for v in outputs if v is not None) - - return FlaxBaseModelOutputWithPastAndCrossAttentions( - last_hidden_state=hidden_states, - hidden_states=all_hidden_states, - attentions=all_attentions, - cross_attentions=all_cross_attentions, - ) - - -# Copied from transformers.models.bert.modeling_flax_bert.FlaxBertEncoder with Bert->BigBird -class FlaxBigBirdEncoder(nn.Module): - config: BigBirdConfig - dtype: jnp.dtype = jnp.float32 # the dtype of the computation - gradient_checkpointing: bool = False - - def setup(self): - self.layer = FlaxBigBirdLayerCollection( - self.config, - dtype=self.dtype, - gradient_checkpointing=self.gradient_checkpointing, - ) - - def __call__( - self, - hidden_states, - attention_mask, - head_mask, - encoder_hidden_states: Optional[jnp.ndarray] = None, - encoder_attention_mask: Optional[jnp.ndarray] = None, - init_cache: bool = False, - deterministic: bool = True, - output_attentions: bool = False, - output_hidden_states: bool = False, - return_dict: bool = True, - ): - return self.layer( - hidden_states, - attention_mask, - head_mask=head_mask, - 
encoder_hidden_states=encoder_hidden_states, - encoder_attention_mask=encoder_attention_mask, - init_cache=init_cache, - deterministic=deterministic, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - ) - - -# Copied from transformers.models.bert.modeling_flax_bert.FlaxBertPredictionHeadTransform with Bert->BigBird -class FlaxBigBirdPredictionHeadTransform(nn.Module): - config: BigBirdConfig - dtype: jnp.dtype = jnp.float32 - - def setup(self): - self.dense = nn.Dense(self.config.hidden_size, dtype=self.dtype) - self.activation = ACT2FN[self.config.hidden_act] - self.LayerNorm = nn.LayerNorm(epsilon=self.config.layer_norm_eps, dtype=self.dtype) - - def __call__(self, hidden_states): - hidden_states = self.dense(hidden_states) - hidden_states = self.activation(hidden_states) - return self.LayerNorm(hidden_states) - - -# Copied from transformers.models.bert.modeling_flax_bert.FlaxBertLMPredictionHead with Bert->BigBird, np.ndarray->jnp.ndarray -class FlaxBigBirdLMPredictionHead(nn.Module): - config: BigBirdConfig - dtype: jnp.dtype = jnp.float32 - bias_init: Callable[..., jnp.ndarray] = jax.nn.initializers.zeros - - def setup(self): - self.transform = FlaxBigBirdPredictionHeadTransform(self.config, dtype=self.dtype) - self.decoder = nn.Dense(self.config.vocab_size, dtype=self.dtype, use_bias=False) - self.bias = self.param("bias", self.bias_init, (self.config.vocab_size,)) - - def __call__(self, hidden_states, shared_embedding=None): - hidden_states = self.transform(hidden_states) - - if shared_embedding is not None: - hidden_states = self.decoder.apply({"params": {"kernel": shared_embedding.T}}, hidden_states) - else: - hidden_states = self.decoder(hidden_states) - - bias = jnp.asarray(self.bias, self.dtype) - hidden_states += bias - return hidden_states - - -# Copied from transformers.models.bert.modeling_flax_bert.FlaxBertOnlyMLMHead with Bert->BigBird -class FlaxBigBirdOnlyMLMHead(nn.Module): - config: BigBirdConfig - dtype: jnp.dtype = jnp.float32 - - def setup(self): - self.predictions = FlaxBigBirdLMPredictionHead(self.config, dtype=self.dtype) - - def __call__(self, hidden_states, shared_embedding=None): - hidden_states = self.predictions(hidden_states, shared_embedding=shared_embedding) - return hidden_states - - -class FlaxBigBirdPreTrainingHeads(nn.Module): - config: BigBirdConfig - dtype: jnp.dtype = jnp.float32 - - def setup(self): - self.predictions = FlaxBigBirdLMPredictionHead(self.config, dtype=self.dtype) - self.seq_relationship = nn.Dense(2, dtype=self.dtype) - - def __call__(self, hidden_states, pooled_output, shared_embedding=None): - prediction_scores = self.predictions(hidden_states, shared_embedding=shared_embedding) - seq_relationship_score = self.seq_relationship(pooled_output) - return prediction_scores, seq_relationship_score - - -class FlaxBigBirdPreTrainedModel(FlaxPreTrainedModel): - """ - An abstract class to handle weights initialization and a simple interface for downloading and loading pretrained - models. 
- """ - - config_class = BigBirdConfig - base_model_prefix = "bert" - module_class: nn.Module = None - - def __init__( - self, - config: BigBirdConfig, - input_shape: Optional[tuple] = None, - seed: int = 0, - dtype: jnp.dtype = jnp.float32, - _do_init: bool = True, - gradient_checkpointing: bool = False, - **kwargs, - ): - module = self.module_class(config=config, dtype=dtype, gradient_checkpointing=gradient_checkpointing, **kwargs) - if config.attention_type == "block_sparse" and input_shape is None: - input_shape = (1, 12 * config.block_size) - elif input_shape is None: - input_shape = (1, 1) - - super().__init__(config, module, input_shape=input_shape, seed=seed, dtype=dtype, _do_init=_do_init) - - # Copied from transformers.models.bert.modeling_flax_bert.FlaxBertPreTrainedModel.enable_gradient_checkpointing - def enable_gradient_checkpointing(self): - self._module = self.module_class( - config=self.config, - dtype=self.dtype, - gradient_checkpointing=True, - ) - - def init_weights(self, rng: jax.random.PRNGKey, input_shape: Tuple, params: FrozenDict = None) -> FrozenDict: - # init input tensors - input_ids = jnp.zeros(input_shape, dtype="i4") - token_type_ids = jnp.zeros_like(input_ids) - position_ids = jnp.broadcast_to(jnp.arange(jnp.atleast_2d(input_ids).shape[-1]), input_shape) - attention_mask = jnp.ones_like(input_ids) - head_mask = jnp.ones((self.config.num_hidden_layers, self.config.num_attention_heads)) - - params_rng, dropout_rng, indices_rng = jax.random.split(rng, num=3) - rngs = {"params": params_rng, "dropout": dropout_rng, "indices": indices_rng} - - if self.config.add_cross_attention: - encoder_hidden_states = jnp.zeros(input_shape + (self.config.hidden_size,)) - encoder_attention_mask = attention_mask - module_init_outputs = self.module.init( - rngs, - input_ids, - attention_mask, - token_type_ids, - position_ids, - head_mask, - encoder_hidden_states, - encoder_attention_mask, - return_dict=False, - ) - else: - module_init_outputs = self.module.init( - rngs, - input_ids, - attention_mask, - token_type_ids, - position_ids, - head_mask, - return_dict=False, - ) - - random_params = module_init_outputs["params"] - - if params is not None: - random_params = flatten_dict(unfreeze(random_params)) - params = flatten_dict(unfreeze(params)) - for missing_key in self._missing_keys: - params[missing_key] = random_params[missing_key] - self._missing_keys = set() - return freeze(unflatten_dict(params)) - else: - return random_params - - # Copied from transformers.models.bart.modeling_flax_bart.FlaxBartDecoderPreTrainedModel.init_cache - def init_cache(self, batch_size, max_length): - r""" - Args: - batch_size (`int`): - batch_size used for fast auto-regressive decoding. Defines the batch size of the initialized cache. - max_length (`int`): - maximum possible length for auto-regressive decoding. Defines the sequence length of the initialized - cache. 
- """ - # init input variables to retrieve cache - input_ids = jnp.ones((batch_size, max_length), dtype="i4") - attention_mask = jnp.ones_like(input_ids, dtype="i4") - position_ids = jnp.broadcast_to(jnp.arange(jnp.atleast_2d(input_ids).shape[-1]), input_ids.shape) - - init_variables = self.module.init( - jax.random.PRNGKey(0), input_ids, attention_mask, position_ids, return_dict=False, init_cache=True - ) - return unfreeze(init_variables["cache"]) - - @add_start_docstrings_to_model_forward(BIG_BIRD_INPUTS_DOCSTRING.format("batch_size, sequence_length")) - def __call__( - self, - input_ids, - attention_mask=None, - token_type_ids=None, - position_ids=None, - head_mask=None, - encoder_hidden_states=None, - encoder_attention_mask=None, - params: dict = None, - dropout_rng: Optional[jax.random.PRNGKey] = None, - indices_rng: Optional[jax.random.PRNGKey] = None, - train: bool = False, - output_attentions: Optional[bool] = None, - output_hidden_states: Optional[bool] = None, - return_dict: Optional[bool] = None, - past_key_values: dict = None, - ): - output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions - output_hidden_states = ( - output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states - ) - return_dict = return_dict if return_dict is not None else self.config.return_dict - - # init input tensors if not passed - if token_type_ids is None: - token_type_ids = jnp.zeros_like(input_ids) - - if position_ids is None: - position_ids = jnp.broadcast_to(jnp.arange(jnp.atleast_2d(input_ids).shape[-1]), input_ids.shape) - - if attention_mask is None: - attention_mask = jnp.ones_like(input_ids) - - if head_mask is None: - head_mask = jnp.ones((self.config.num_hidden_layers, self.config.num_attention_heads)) - - # Handle any PRNG if needed - rngs = {} - if indices_rng is not None: - rngs["indices"] = indices_rng - - if dropout_rng is not None: - rngs["dropout"] = dropout_rng - - inputs = {"params": params or self.params} - - if self.config.add_cross_attention: - # if past_key_values are passed then cache is already initialized a private flag init_cache has to be passed - # down to ensure cache is used. 
It has to be made sure that cache is marked as mutable so that it can be - # changed by FlaxBigBirdAttention module - if past_key_values: - inputs["cache"] = past_key_values - mutable = ["cache"] - else: - mutable = False - - outputs = self.module.apply( - inputs, - jnp.array(input_ids, dtype="i4"), - jnp.array(attention_mask, dtype="i4"), - token_type_ids=jnp.array(token_type_ids, dtype="i4"), - position_ids=jnp.array(position_ids, dtype="i4"), - head_mask=jnp.array(head_mask, dtype="i4"), - encoder_hidden_states=encoder_hidden_states, - encoder_attention_mask=encoder_attention_mask, - deterministic=not train, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - rngs=rngs, - mutable=mutable, - ) - - # add updated cache to model output - if past_key_values is not None and return_dict: - outputs, past_key_values = outputs - outputs["past_key_values"] = unfreeze(past_key_values["cache"]) - return outputs - elif past_key_values is not None and not return_dict: - outputs, past_key_values = outputs - outputs = outputs[:1] + (unfreeze(past_key_values["cache"]),) + outputs[1:] - - else: - outputs = self.module.apply( - inputs, - jnp.array(input_ids, dtype="i4"), - jnp.array(attention_mask, dtype="i4"), - token_type_ids=jnp.array(token_type_ids, dtype="i4"), - position_ids=jnp.array(position_ids, dtype="i4"), - head_mask=jnp.array(head_mask, dtype="i4"), - deterministic=not train, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - rngs=rngs, - ) - - return outputs - - -class FlaxBigBirdModule(nn.Module): - config: BigBirdConfig - dtype: jnp.dtype = jnp.float32 # the dtype of the computation - add_pooling_layer: bool = True - gradient_checkpointing: bool = False - - def setup(self): - self.embeddings = FlaxBigBirdEmbeddings(self.config, dtype=self.dtype) - self.encoder = FlaxBigBirdEncoder( - self.config, dtype=self.dtype, gradient_checkpointing=self.gradient_checkpointing - ) - self.pooler = nn.Dense( - self.config.hidden_size, - kernel_init=jax.nn.initializers.normal(self.config.initializer_range), - dtype=self.dtype, - ) - - def __call__( - self, - input_ids, - attention_mask, - token_type_ids, - position_ids, - head_mask, - encoder_hidden_states: Optional[jnp.ndarray] = None, - encoder_attention_mask: Optional[jnp.ndarray] = None, - init_cache: bool = False, - deterministic: bool = True, - output_attentions: bool = False, - output_hidden_states: bool = False, - return_dict: bool = True, - ): - hidden_states = self.embeddings( - input_ids, token_type_ids, position_ids, attention_mask, deterministic=deterministic - ) - outputs = self.encoder( - hidden_states, - attention_mask, - head_mask=head_mask, - deterministic=deterministic, - encoder_hidden_states=encoder_hidden_states, - encoder_attention_mask=encoder_attention_mask, - init_cache=init_cache, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - ) - hidden_states = outputs[0] - - pooled = nn.tanh(self.pooler(hidden_states[:, 0, :])) if self.add_pooling_layer else None - - if not return_dict: - # if pooled is None, don't return it - if pooled is None: - return (hidden_states,) + outputs[1:] - return (hidden_states, pooled) + outputs[1:] - - return FlaxBaseModelOutputWithPoolingAndCrossAttentions( - last_hidden_state=hidden_states, - pooler_output=pooled, - hidden_states=outputs.hidden_states, - attentions=outputs.attentions, - 
cross_attentions=outputs.cross_attentions, - ) - - -@add_start_docstrings( - "The bare BigBird Model transformer outputting raw hidden-states without any specific head on top.", - BIG_BIRD_START_DOCSTRING, -) -# Copied from transformers.models.bert.modeling_flax_bert.FlaxBertModel with Bert->BigBird -class FlaxBigBirdModel(FlaxBigBirdPreTrainedModel): - module_class = FlaxBigBirdModule - - -append_call_sample_docstring(FlaxBigBirdModel, _CHECKPOINT_FOR_DOC, FlaxBaseModelOutputWithPooling, _CONFIG_FOR_DOC) - - -# Copied from transformers.models.bert.modeling_flax_bert.FlaxBertForPreTrainingModule with Bert->BigBird -class FlaxBigBirdForPreTrainingModule(nn.Module): - config: BigBirdConfig - dtype: jnp.dtype = jnp.float32 - gradient_checkpointing: bool = False - - def setup(self): - self.bert = FlaxBigBirdModule( - config=self.config, - dtype=self.dtype, - gradient_checkpointing=self.gradient_checkpointing, - ) - self.cls = FlaxBigBirdPreTrainingHeads(config=self.config, dtype=self.dtype) - - def __call__( - self, - input_ids, - attention_mask, - token_type_ids, - position_ids, - head_mask, - deterministic: bool = True, - output_attentions: bool = False, - output_hidden_states: bool = False, - return_dict: bool = True, - ): - # Model - outputs = self.bert( - input_ids, - attention_mask, - token_type_ids, - position_ids, - head_mask, - deterministic=deterministic, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - ) - - if self.config.tie_word_embeddings: - shared_embedding = self.bert.variables["params"]["embeddings"]["word_embeddings"]["embedding"] - else: - shared_embedding = None - - hidden_states = outputs[0] - pooled_output = outputs[1] - - prediction_scores, seq_relationship_score = self.cls( - hidden_states, pooled_output, shared_embedding=shared_embedding - ) - - if not return_dict: - return (prediction_scores, seq_relationship_score) + outputs[2:] - - return FlaxBigBirdForPreTrainingOutput( - prediction_logits=prediction_scores, - seq_relationship_logits=seq_relationship_score, - hidden_states=outputs.hidden_states, - attentions=outputs.attentions, - ) - - -@add_start_docstrings( - """ - BigBird Model with two heads on top as done during the pretraining: a `masked language modeling` head and a `next - sentence prediction (classification)` head. 
- """, - BIG_BIRD_START_DOCSTRING, -) -# Copied from transformers.models.bert.modeling_flax_bert.FlaxBertForPreTraining with Bert->BigBird -class FlaxBigBirdForPreTraining(FlaxBigBirdPreTrainedModel): - module_class = FlaxBigBirdForPreTrainingModule - - -FLAX_BIG_BIRD_FOR_PRETRAINING_DOCSTRING = """ - Returns: - - Example: - - ```python - >>> from transformers import AutoTokenizer, FlaxBigBirdForPreTraining - - >>> tokenizer = AutoTokenizer.from_pretrained("google/bigbird-roberta-base") - >>> model = FlaxBigBirdForPreTraining.from_pretrained("google/bigbird-roberta-base") - - >>> inputs = tokenizer("Hello, my dog is cute", return_tensors="np") - >>> outputs = model(**inputs) - - >>> prediction_logits = outputs.prediction_logits - >>> seq_relationship_logits = outputs.seq_relationship_logits - ``` -""" - -overwrite_call_docstring( - FlaxBigBirdForPreTraining, - BIG_BIRD_INPUTS_DOCSTRING.format("batch_size, sequence_length") + FLAX_BIG_BIRD_FOR_PRETRAINING_DOCSTRING, -) -append_replace_return_docstrings( - FlaxBigBirdForPreTraining, output_type=FlaxBigBirdForPreTrainingOutput, config_class=_CONFIG_FOR_DOC -) - - -# Copied from transformers.models.bert.modeling_flax_bert.FlaxBertForMaskedLMModule with Bert->BigBird -class FlaxBigBirdForMaskedLMModule(nn.Module): - config: BigBirdConfig - dtype: jnp.dtype = jnp.float32 - gradient_checkpointing: bool = False - - def setup(self): - self.bert = FlaxBigBirdModule( - config=self.config, - add_pooling_layer=False, - dtype=self.dtype, - gradient_checkpointing=self.gradient_checkpointing, - ) - self.cls = FlaxBigBirdOnlyMLMHead(config=self.config, dtype=self.dtype) - - def __call__( - self, - input_ids, - attention_mask, - token_type_ids, - position_ids, - head_mask, - deterministic: bool = True, - output_attentions: bool = False, - output_hidden_states: bool = False, - return_dict: bool = True, - ): - # Model - outputs = self.bert( - input_ids, - attention_mask, - token_type_ids, - position_ids, - head_mask, - deterministic=deterministic, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - ) - - hidden_states = outputs[0] - if self.config.tie_word_embeddings: - shared_embedding = self.bert.variables["params"]["embeddings"]["word_embeddings"]["embedding"] - else: - shared_embedding = None - - # Compute the prediction scores - logits = self.cls(hidden_states, shared_embedding=shared_embedding) - - if not return_dict: - return (logits,) + outputs[1:] - - return FlaxMaskedLMOutput( - logits=logits, - hidden_states=outputs.hidden_states, - attentions=outputs.attentions, - ) - - -@add_start_docstrings("""BigBird Model with a `language modeling` head on top.""", BIG_BIRD_START_DOCSTRING) -# Copied from transformers.models.bert.modeling_flax_bert.FlaxBertForMaskedLM with Bert->BigBird -class FlaxBigBirdForMaskedLM(FlaxBigBirdPreTrainedModel): - module_class = FlaxBigBirdForMaskedLMModule - - -append_call_sample_docstring(FlaxBigBirdForMaskedLM, _CHECKPOINT_FOR_DOC, FlaxMaskedLMOutput, _CONFIG_FOR_DOC) - - -class FlaxBigBirdClassificationHead(nn.Module): - """Head for sentence-level classification tasks.""" - - config: BigBirdConfig - dtype: jnp.dtype = jnp.float32 - - def setup(self): - self.dense = nn.Dense(self.config.hidden_size, dtype=self.dtype) - classifier_dropout = ( - self.config.classifier_dropout - if self.config.classifier_dropout is not None - else self.config.hidden_dropout_prob - ) - self.dropout = nn.Dropout(classifier_dropout) - self.out_proj = nn.Dense(self.config.num_labels, 
dtype=self.dtype) - - def __call__(self, features, deterministic=True): - x = features[:, 0, :] # take token (equiv. to [CLS]) - x = self.dropout(x, deterministic=deterministic) - x = self.dense(x) - x = ACT2FN[self.config.hidden_act](x) - x = self.dropout(x, deterministic=deterministic) - x = self.out_proj(x) - return x - - -class FlaxBigBirdForSequenceClassificationModule(nn.Module): - config: BigBirdConfig - dtype: jnp.dtype = jnp.float32 - gradient_checkpointing: bool = False - - def setup(self): - self.bert = FlaxBigBirdModule( - config=self.config, dtype=self.dtype, gradient_checkpointing=self.gradient_checkpointing - ) - self.classifier = FlaxBigBirdClassificationHead(self.config, dtype=self.dtype) - - def __call__( - self, - input_ids, - attention_mask, - token_type_ids, - position_ids, - head_mask, - deterministic: bool = True, - output_attentions: bool = False, - output_hidden_states: bool = False, - return_dict: bool = True, - ): - # Model - outputs = self.bert( - input_ids, - attention_mask, - token_type_ids, - position_ids, - head_mask, - deterministic=deterministic, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - ) - - sequence_output = outputs[0] - logits = self.classifier(sequence_output, deterministic=deterministic) - - if not return_dict: - return (logits,) + outputs[2:] - - return FlaxSequenceClassifierOutput( - logits=logits, - hidden_states=outputs.hidden_states, - attentions=outputs.attentions, - ) - - -@add_start_docstrings( - """ - BigBird Model transformer with a sequence classification/regression head on top (a linear layer on top of the - pooled output) e.g. for GLUE tasks. - """, - BIG_BIRD_START_DOCSTRING, -) -# Copied from transformers.models.bert.modeling_flax_bert.FlaxBertForSequenceClassification with Bert->BigBird -class FlaxBigBirdForSequenceClassification(FlaxBigBirdPreTrainedModel): - module_class = FlaxBigBirdForSequenceClassificationModule - - -append_call_sample_docstring( - FlaxBigBirdForSequenceClassification, - _CHECKPOINT_FOR_DOC, - FlaxSequenceClassifierOutput, - _CONFIG_FOR_DOC, -) - - -# Copied from transformers.models.bert.modeling_flax_bert.FlaxBertForMultipleChoiceModule with Bert->BigBird -class FlaxBigBirdForMultipleChoiceModule(nn.Module): - config: BigBirdConfig - dtype: jnp.dtype = jnp.float32 - gradient_checkpointing: bool = False - - def setup(self): - self.bert = FlaxBigBirdModule( - config=self.config, - dtype=self.dtype, - gradient_checkpointing=self.gradient_checkpointing, - ) - self.dropout = nn.Dropout(rate=self.config.hidden_dropout_prob) - self.classifier = nn.Dense(1, dtype=self.dtype) - - def __call__( - self, - input_ids, - attention_mask, - token_type_ids, - position_ids, - head_mask, - deterministic: bool = True, - output_attentions: bool = False, - output_hidden_states: bool = False, - return_dict: bool = True, - ): - num_choices = input_ids.shape[1] - input_ids = input_ids.reshape(-1, input_ids.shape[-1]) if input_ids is not None else None - attention_mask = attention_mask.reshape(-1, attention_mask.shape[-1]) if attention_mask is not None else None - token_type_ids = token_type_ids.reshape(-1, token_type_ids.shape[-1]) if token_type_ids is not None else None - position_ids = position_ids.reshape(-1, position_ids.shape[-1]) if position_ids is not None else None - - # Model - outputs = self.bert( - input_ids, - attention_mask, - token_type_ids, - position_ids, - head_mask, - deterministic=deterministic, - output_attentions=output_attentions, - 
output_hidden_states=output_hidden_states, - return_dict=return_dict, - ) - - pooled_output = outputs[1] - pooled_output = self.dropout(pooled_output, deterministic=deterministic) - logits = self.classifier(pooled_output) - - reshaped_logits = logits.reshape(-1, num_choices) - - if not return_dict: - return (reshaped_logits,) + outputs[2:] - - return FlaxMultipleChoiceModelOutput( - logits=reshaped_logits, - hidden_states=outputs.hidden_states, - attentions=outputs.attentions, - ) - - -@add_start_docstrings( - """ - BigBird Model with a multiple choice classification head on top (a linear layer on top of the pooled output and a - softmax) e.g. for RocStories/SWAG tasks. - """, - BIG_BIRD_START_DOCSTRING, -) -class FlaxBigBirdForMultipleChoice(FlaxBigBirdPreTrainedModel): - module_class = FlaxBigBirdForMultipleChoiceModule - - def __init__( - self, - config: BigBirdConfig, - input_shape: Optional[tuple] = None, - seed: int = 0, - dtype: jnp.dtype = jnp.float32, - _do_init: bool = True, - **kwargs, - ): - if config.attention_type == "block_sparse" and input_shape is None: - input_shape = (1, 1, 12 * config.block_size) - elif input_shape is None: - input_shape = (1, 1) - super().__init__(config, input_shape=input_shape, seed=seed, dtype=dtype, _do_init=_do_init) - - -overwrite_call_docstring( - FlaxBigBirdForMultipleChoice, BIG_BIRD_INPUTS_DOCSTRING.format("batch_size, num_choices, sequence_length") -) -append_call_sample_docstring( - FlaxBigBirdForMultipleChoice, - _CHECKPOINT_FOR_DOC, - FlaxMultipleChoiceModelOutput, - _CONFIG_FOR_DOC, -) - - -# Copied from transformers.models.bert.modeling_flax_bert.FlaxBertForTokenClassificationModule with Bert->BigBird -class FlaxBigBirdForTokenClassificationModule(nn.Module): - config: BigBirdConfig - dtype: jnp.dtype = jnp.float32 - gradient_checkpointing: bool = False - - def setup(self): - self.bert = FlaxBigBirdModule( - config=self.config, - dtype=self.dtype, - add_pooling_layer=False, - gradient_checkpointing=self.gradient_checkpointing, - ) - classifier_dropout = ( - self.config.classifier_dropout - if self.config.classifier_dropout is not None - else self.config.hidden_dropout_prob - ) - self.dropout = nn.Dropout(rate=classifier_dropout) - self.classifier = nn.Dense(self.config.num_labels, dtype=self.dtype) - - def __call__( - self, - input_ids, - attention_mask, - token_type_ids, - position_ids, - head_mask, - deterministic: bool = True, - output_attentions: bool = False, - output_hidden_states: bool = False, - return_dict: bool = True, - ): - # Model - outputs = self.bert( - input_ids, - attention_mask, - token_type_ids, - position_ids, - head_mask, - deterministic=deterministic, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - ) - - hidden_states = outputs[0] - hidden_states = self.dropout(hidden_states, deterministic=deterministic) - logits = self.classifier(hidden_states) - - if not return_dict: - return (logits,) + outputs[1:] - - return FlaxTokenClassifierOutput( - logits=logits, - hidden_states=outputs.hidden_states, - attentions=outputs.attentions, - ) - - -@add_start_docstrings( - """ - BigBird Model with a token classification head on top (a linear layer on top of the hidden-states output) e.g. for - Named-Entity-Recognition (NER) tasks. 
- """, - BIG_BIRD_START_DOCSTRING, -) -# Copied from transformers.models.bert.modeling_flax_bert.FlaxBertForTokenClassification with Bert->BigBird -class FlaxBigBirdForTokenClassification(FlaxBigBirdPreTrainedModel): - module_class = FlaxBigBirdForTokenClassificationModule - - -append_call_sample_docstring( - FlaxBigBirdForTokenClassification, - _CHECKPOINT_FOR_DOC, - FlaxTokenClassifierOutput, - _CONFIG_FOR_DOC, -) - - -class FlaxBigBirdForQuestionAnsweringHead(nn.Module): - config: BigBirdConfig - dtype: jnp.dtype = jnp.float32 - - def setup(self): - self.dropout = nn.Dropout(rate=self.config.hidden_dropout_prob) - self.intermediate = FlaxBigBirdIntermediate(self.config, dtype=self.dtype) - self.output = FlaxBigBirdOutput(self.config, dtype=self.dtype) - self.qa_outputs = nn.Dense(self.config.num_labels, dtype=self.dtype) - - def __call__(self, encoder_output, deterministic=True): - hidden_states = self.dropout(encoder_output, deterministic=deterministic) - hidden_states = self.intermediate(hidden_states) - hidden_states = self.output(hidden_states, encoder_output) - hidden_states = self.qa_outputs(hidden_states) - return hidden_states - - -class FlaxBigBirdForQuestionAnsweringModule(nn.Module): - config: BigBirdConfig - dtype: jnp.dtype = jnp.float32 - add_pooling_layer: bool = False - gradient_checkpointing: bool = False - - def setup(self): - self.config.num_labels = 2 - self.bert = FlaxBigBirdModule( - self.config, - dtype=self.dtype, - add_pooling_layer=self.add_pooling_layer, - gradient_checkpointing=self.gradient_checkpointing, - ) - self.qa_classifier = FlaxBigBirdForQuestionAnsweringHead(self.config, dtype=self.dtype) - - def __call__( - self, - input_ids, - attention_mask, - token_type_ids, - position_ids, - head_mask, - logits_mask=None, - deterministic: bool = True, - output_attentions: bool = False, - output_hidden_states: bool = False, - return_dict: bool = True, - ): - # Model - outputs = self.bert( - input_ids, - attention_mask, - token_type_ids, - position_ids, - head_mask, - deterministic=deterministic, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - ) - - hidden_states = outputs[0] - pooled_output = outputs[1] if self.add_pooling_layer else None - logits = self.qa_classifier(hidden_states, deterministic=deterministic) - - if logits_mask is not None: - # removing question tokens from the competition - logits = logits - logits_mask * 1e6 - - start_logits, end_logits = logits.split(self.config.num_labels, axis=-1) - start_logits = start_logits.squeeze(-1) - end_logits = end_logits.squeeze(-1) - - if not return_dict: - return (start_logits, end_logits) + outputs[1:] - - return FlaxBigBirdForQuestionAnsweringModelOutput( - start_logits=start_logits, - end_logits=end_logits, - pooled_output=pooled_output, - hidden_states=outputs.hidden_states, - attentions=outputs.attentions, - ) - - -@add_start_docstrings( - """ - BigBird Model with a span classification head on top for extractive question-answering tasks like SQuAD (a linear - layers on top of the hidden-states output to compute `span start logits` and `span end logits`). 
- """, - BIG_BIRD_START_DOCSTRING, -) -class FlaxBigBirdForQuestionAnswering(FlaxBigBirdPreTrainedModel): - module_class = FlaxBigBirdForQuestionAnsweringModule - - @add_start_docstrings_to_model_forward(BIG_BIRD_INPUTS_DOCSTRING.format("batch_size, sequence_length")) - def __call__( - self, - input_ids, - attention_mask=None, - token_type_ids=None, - position_ids=None, - head_mask=None, - question_lengths=None, - params: dict = None, - dropout_rng: Optional[jax.random.PRNGKey] = None, - indices_rng: Optional[jax.random.PRNGKey] = None, - train: bool = False, - output_attentions: Optional[bool] = None, - output_hidden_states: Optional[bool] = None, - return_dict: Optional[bool] = None, - ): - output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions - output_hidden_states = ( - output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states - ) - return_dict = return_dict if return_dict is not None else self.config.return_dict - - if position_ids is None: - position_ids = jnp.broadcast_to(jnp.arange(jnp.atleast_2d(input_ids).shape[-1]), input_ids.shape) - - if attention_mask is None: - attention_mask = jnp.ones_like(input_ids) - - if head_mask is None: - head_mask = jnp.ones((self.config.num_hidden_layers, self.config.num_attention_heads)) - - if question_lengths is None and input_ids is not None: - # assuming input_ids format: context - question_lengths = jnp.argmax((input_ids == self.config.sep_token_id).astype("i4"), axis=-1) + 1 - question_lengths = jnp.expand_dims(question_lengths, axis=1) - - seqlen = input_ids.shape[1] - - logits_mask = None - if question_lengths is not None: - # setting lengths logits to `-inf` - logits_mask = self.prepare_question_mask(question_lengths, seqlen) - if token_type_ids is None: - token_type_ids = (~logits_mask).astype("i4") - logits_mask = jnp.expand_dims(logits_mask, axis=2) - logits_mask = logits_mask.at[:, 0].set(False) - - # init input tensors if not passed - if token_type_ids is None: - token_type_ids = jnp.zeros_like(input_ids) - - # Handle any PRNG if needed - rngs = {} - if dropout_rng is not None: - rngs["dropout"] = dropout_rng - - if indices_rng is not None: - rngs["indices"] = indices_rng - - return self.module.apply( - {"params": params or self.params}, - jnp.array(input_ids, dtype="i4"), - jnp.array(attention_mask, dtype="i4"), - token_type_ids, - jnp.array(position_ids, dtype="i4"), - jnp.array(head_mask, dtype="i4"), - logits_mask, - not train, - output_attentions, - output_hidden_states, - return_dict, - rngs=rngs, - ) - - @staticmethod - def prepare_question_mask(q_lengths, maxlen: int): - # q_lengths -> (bz, 1) - mask = jnp.arange(0, maxlen) - mask = jnp.expand_dims(mask, axis=0) < q_lengths - return mask - - -append_call_sample_docstring( - FlaxBigBirdForQuestionAnswering, - _CHECKPOINT_FOR_DOC, - FlaxBigBirdForQuestionAnsweringModelOutput, - _CONFIG_FOR_DOC, -) - - -class FlaxBigBirdForCausalLMModule(nn.Module): - config: BigBirdConfig - dtype: jnp.dtype = jnp.float32 - gradient_checkpointing: bool = False - - def setup(self): - self.bert = FlaxBigBirdModule( - config=self.config, - add_pooling_layer=False, - dtype=self.dtype, - gradient_checkpointing=self.gradient_checkpointing, - ) - self.cls = FlaxBigBirdOnlyMLMHead(config=self.config, dtype=self.dtype) - - def __call__( - self, - input_ids, - attention_mask, - position_ids, - token_type_ids: Optional[jnp.ndarray] = None, - head_mask: Optional[jnp.ndarray] = None, - encoder_hidden_states: 
Optional[jnp.ndarray] = None, - encoder_attention_mask: Optional[jnp.ndarray] = None, - init_cache: bool = False, - deterministic: bool = True, - output_attentions: bool = False, - output_hidden_states: bool = False, - return_dict: bool = True, - ): - # Model - outputs = self.bert( - input_ids=input_ids, - attention_mask=attention_mask, - token_type_ids=token_type_ids, - position_ids=position_ids, - head_mask=head_mask, - encoder_hidden_states=encoder_hidden_states, - encoder_attention_mask=encoder_attention_mask, - init_cache=init_cache, - deterministic=deterministic, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - ) - - hidden_states = outputs[0] - if self.config.tie_word_embeddings: - shared_embedding = self.bert.variables["params"]["embeddings"]["word_embeddings"]["embedding"] - else: - shared_embedding = None - - # Compute the prediction scores - logits = self.cls(hidden_states, shared_embedding=shared_embedding) - - if not return_dict: - return (logits,) + outputs[1:] - - return FlaxCausalLMOutputWithCrossAttentions( - logits=logits, - hidden_states=outputs.hidden_states, - attentions=outputs.attentions, - cross_attentions=outputs.cross_attentions, - ) - - -@add_start_docstrings( - """ - BigBird Model with a language modeling head on top (a linear layer on top of the hidden-states output) e.g for - autoregressive tasks. - """, - BIG_BIRD_START_DOCSTRING, -) -# Copied from transformers.models.bert.modeling_flax_bert.FlaxBertForCausalLM with Bert->BigBird -class FlaxBigBirdForCausalLM(FlaxBigBirdPreTrainedModel): - module_class = FlaxBigBirdForCausalLMModule - - def prepare_inputs_for_generation(self, input_ids, max_length, attention_mask: Optional[jax.Array] = None): - # initializing the cache - batch_size, seq_length = input_ids.shape - - past_key_values = self.init_cache(batch_size, max_length) - # Note that usually one would have to put 0's in the attention_mask for x > input_ids.shape[-1] and x < cache_length. - # But since the decoder uses a causal mask, those positions are masked anyway. 
- # Thus, we can create a single static attention_mask here, which is more efficient for compilation - extended_attention_mask = jnp.ones((batch_size, max_length), dtype="i4") - if attention_mask is not None: - position_ids = attention_mask.cumsum(axis=-1) - 1 - extended_attention_mask = lax.dynamic_update_slice(extended_attention_mask, attention_mask, (0, 0)) - else: - position_ids = jnp.broadcast_to(jnp.arange(seq_length, dtype="i4")[None, :], (batch_size, seq_length)) - - return { - "past_key_values": past_key_values, - "attention_mask": extended_attention_mask, - "position_ids": position_ids, - } - - def update_inputs_for_generation(self, model_outputs, model_kwargs): - model_kwargs["past_key_values"] = model_outputs.past_key_values - model_kwargs["position_ids"] = model_kwargs["position_ids"][:, -1:] + 1 - return model_kwargs - - -append_call_sample_docstring( - FlaxBigBirdForCausalLM, - _CHECKPOINT_FOR_DOC, - FlaxCausalLMOutputWithCrossAttentions, - _CONFIG_FOR_DOC, -) diff --git a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/mobilebert/modeling_tf_mobilebert.py b/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/mobilebert/modeling_tf_mobilebert.py deleted file mode 100644 index bc508a47984e2ee704f0f981b622f6e3c22594a6..0000000000000000000000000000000000000000 --- a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/mobilebert/modeling_tf_mobilebert.py +++ /dev/null @@ -1,1640 +0,0 @@ -# coding=utf-8 -# Copyright 2018 The Google AI Language Team Authors and The HuggingFace Inc. team. -# Copyright (c) 2018, NVIDIA CORPORATION. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
-""" TF 2.0 MobileBERT model.""" - - -from __future__ import annotations - -import warnings -from dataclasses import dataclass -from typing import Optional, Tuple, Union - -import numpy as np -import tensorflow as tf - -from ...activations_tf import get_tf_activation -from ...modeling_tf_outputs import ( - TFBaseModelOutput, - TFBaseModelOutputWithPooling, - TFMaskedLMOutput, - TFMultipleChoiceModelOutput, - TFNextSentencePredictorOutput, - TFQuestionAnsweringModelOutput, - TFSequenceClassifierOutput, - TFTokenClassifierOutput, -) -from ...modeling_tf_utils import ( - TFMaskedLanguageModelingLoss, - TFModelInputType, - TFMultipleChoiceLoss, - TFNextSentencePredictionLoss, - TFPreTrainedModel, - TFQuestionAnsweringLoss, - TFSequenceClassificationLoss, - TFTokenClassificationLoss, - get_initializer, - keras_serializable, - unpack_inputs, -) -from ...tf_utils import check_embeddings_within_bounds, shape_list, stable_softmax -from ...utils import ( - ModelOutput, - add_code_sample_docstrings, - add_start_docstrings, - add_start_docstrings_to_model_forward, - logging, - replace_return_docstrings, -) -from .configuration_mobilebert import MobileBertConfig - - -logger = logging.get_logger(__name__) - -_CHECKPOINT_FOR_DOC = "google/mobilebert-uncased" -_CONFIG_FOR_DOC = "MobileBertConfig" - -# TokenClassification docstring -_CHECKPOINT_FOR_TOKEN_CLASSIFICATION = "vumichien/mobilebert-finetuned-ner" -_TOKEN_CLASS_EXPECTED_OUTPUT = "['I-ORG', 'I-ORG', 'O', 'O', 'O', 'O', 'O', 'I-LOC', 'O', 'I-LOC', 'I-LOC']" -_TOKEN_CLASS_EXPECTED_LOSS = 0.03 - -# QuestionAnswering docstring -_CHECKPOINT_FOR_QA = "vumichien/mobilebert-uncased-squad-v2" -_QA_EXPECTED_OUTPUT = "'a nice puppet'" -_QA_EXPECTED_LOSS = 3.98 -_QA_TARGET_START_INDEX = 12 -_QA_TARGET_END_INDEX = 13 - -# SequenceClassification docstring -_CHECKPOINT_FOR_SEQUENCE_CLASSIFICATION = "vumichien/emo-mobilebert" -_SEQ_CLASS_EXPECTED_OUTPUT = "'others'" -_SEQ_CLASS_EXPECTED_LOSS = "4.72" - -TF_MOBILEBERT_PRETRAINED_MODEL_ARCHIVE_LIST = [ - "google/mobilebert-uncased", - # See all MobileBERT models at https://huggingface.co/models?filter=mobilebert -] - - -# Copied from transformers.models.bert.modeling_tf_bert.TFBertPreTrainingLoss -class TFMobileBertPreTrainingLoss: - """ - Loss function suitable for BERT-like pretraining, that is, the task of pretraining a language model by combining - NSP + MLM. .. note:: Any label of -100 will be ignored (along with the corresponding logits) in the loss - computation. 
- """ - - def hf_compute_loss(self, labels: tf.Tensor, logits: tf.Tensor) -> tf.Tensor: - loss_fn = tf.keras.losses.SparseCategoricalCrossentropy( - from_logits=True, reduction=tf.keras.losses.Reduction.NONE - ) - - # Clip negative labels to zero here to avoid NaNs and errors - those positions will get masked later anyway - unmasked_lm_losses = loss_fn(y_true=tf.nn.relu(labels["labels"]), y_pred=logits[0]) - # make sure only labels that are not equal to -100 - # are taken into account for the loss computation - lm_loss_mask = tf.cast(labels["labels"] != -100, dtype=unmasked_lm_losses.dtype) - masked_lm_losses = unmasked_lm_losses * lm_loss_mask - reduced_masked_lm_loss = tf.reduce_sum(masked_lm_losses) / tf.reduce_sum(lm_loss_mask) - - # Clip negative labels to zero here to avoid NaNs and errors - those positions will get masked later anyway - unmasked_ns_loss = loss_fn(y_true=tf.nn.relu(labels["next_sentence_label"]), y_pred=logits[1]) - ns_loss_mask = tf.cast(labels["next_sentence_label"] != -100, dtype=unmasked_ns_loss.dtype) - masked_ns_loss = unmasked_ns_loss * ns_loss_mask - - reduced_masked_ns_loss = tf.reduce_sum(masked_ns_loss) / tf.reduce_sum(ns_loss_mask) - - return tf.reshape(reduced_masked_lm_loss + reduced_masked_ns_loss, (1,)) - - -class TFMobileBertIntermediate(tf.keras.layers.Layer): - def __init__(self, config, **kwargs): - super().__init__(**kwargs) - - self.dense = tf.keras.layers.Dense(config.intermediate_size, name="dense") - - if isinstance(config.hidden_act, str): - self.intermediate_act_fn = get_tf_activation(config.hidden_act) - else: - self.intermediate_act_fn = config.hidden_act - - def call(self, hidden_states): - hidden_states = self.dense(hidden_states) - hidden_states = self.intermediate_act_fn(hidden_states) - - return hidden_states - - -class TFLayerNorm(tf.keras.layers.LayerNormalization): - def __init__(self, feat_size, *args, **kwargs): - super().__init__(*args, **kwargs) - - -class TFNoNorm(tf.keras.layers.Layer): - def __init__(self, feat_size, epsilon=None, **kwargs): - super().__init__(**kwargs) - self.feat_size = feat_size - - def build(self, input_shape): - self.bias = self.add_weight("bias", shape=[self.feat_size], initializer="zeros") - self.weight = self.add_weight("weight", shape=[self.feat_size], initializer="ones") - super().build(input_shape) - - def call(self, inputs: tf.Tensor): - return inputs * self.weight + self.bias - - -NORM2FN = {"layer_norm": TFLayerNorm, "no_norm": TFNoNorm} - - -class TFMobileBertEmbeddings(tf.keras.layers.Layer): - """Construct the embeddings from word, position and token_type embeddings.""" - - def __init__(self, config, **kwargs): - super().__init__(**kwargs) - - self.trigram_input = config.trigram_input - self.embedding_size = config.embedding_size - self.config = config - self.hidden_size = config.hidden_size - self.max_position_embeddings = config.max_position_embeddings - self.initializer_range = config.initializer_range - self.embedding_transformation = tf.keras.layers.Dense(config.hidden_size, name="embedding_transformation") - - # self.LayerNorm is not snake-cased to stick with TensorFlow model variable name and be able to load - # any TensorFlow checkpoint file - self.LayerNorm = NORM2FN[config.normalization_type]( - config.hidden_size, epsilon=config.layer_norm_eps, name="LayerNorm" - ) - self.dropout = tf.keras.layers.Dropout(rate=config.hidden_dropout_prob) - - def build(self, input_shape): - with tf.name_scope("word_embeddings"): - self.weight = self.add_weight( - name="weight", - 
shape=[self.config.vocab_size, self.embedding_size], - initializer=get_initializer(initializer_range=self.initializer_range), - ) - - with tf.name_scope("token_type_embeddings"): - self.token_type_embeddings = self.add_weight( - name="embeddings", - shape=[self.config.type_vocab_size, self.hidden_size], - initializer=get_initializer(initializer_range=self.initializer_range), - ) - - with tf.name_scope("position_embeddings"): - self.position_embeddings = self.add_weight( - name="embeddings", - shape=[self.max_position_embeddings, self.hidden_size], - initializer=get_initializer(initializer_range=self.initializer_range), - ) - - super().build(input_shape) - - def call(self, input_ids=None, position_ids=None, token_type_ids=None, inputs_embeds=None, training=False): - """ - Applies embedding based on inputs tensor. - - Returns: - final_embeddings (`tf.Tensor`): output embedding tensor. - """ - assert not (input_ids is None and inputs_embeds is None) - - if input_ids is not None: - check_embeddings_within_bounds(input_ids, self.config.vocab_size) - inputs_embeds = tf.gather(params=self.weight, indices=input_ids) - - input_shape = shape_list(inputs_embeds)[:-1] - - if token_type_ids is None: - token_type_ids = tf.fill(dims=input_shape, value=0) - - if self.trigram_input: - # From the paper MobileBERT: a Compact Task-Agnostic BERT for Resource-Limited - # Devices (https://arxiv.org/abs/2004.02984) - # - # The embedding table in BERT models accounts for a substantial proportion of model size. To compress - # the embedding layer, we reduce the embedding dimension to 128 in MobileBERT. - # Then, we apply a 1D convolution with kernel size 3 on the raw token embedding to produce a 512 - # dimensional output. - inputs_embeds = tf.concat( - [ - tf.pad(inputs_embeds[:, 1:], ((0, 0), (0, 1), (0, 0))), - inputs_embeds, - tf.pad(inputs_embeds[:, :-1], ((0, 0), (1, 0), (0, 0))), - ], - axis=2, - ) - - if self.trigram_input or self.embedding_size != self.hidden_size: - inputs_embeds = self.embedding_transformation(inputs_embeds) - - if position_ids is None: - position_ids = tf.expand_dims(tf.range(start=0, limit=input_shape[-1]), axis=0) - - position_embeds = tf.gather(params=self.position_embeddings, indices=position_ids) - token_type_embeds = tf.gather(params=self.token_type_embeddings, indices=token_type_ids) - final_embeddings = inputs_embeds + position_embeds + token_type_embeds - final_embeddings = self.LayerNorm(inputs=final_embeddings) - final_embeddings = self.dropout(inputs=final_embeddings, training=training) - - return final_embeddings - - -class TFMobileBertSelfAttention(tf.keras.layers.Layer): - def __init__(self, config, **kwargs): - super().__init__(**kwargs) - if config.hidden_size % config.num_attention_heads != 0: - raise ValueError( - f"The hidden size ({config.hidden_size}) is not a multiple of the number of attention " - f"heads ({config.num_attention_heads}" - ) - - self.num_attention_heads = config.num_attention_heads - self.output_attentions = config.output_attentions - assert config.hidden_size % config.num_attention_heads == 0 - self.attention_head_size = int(config.true_hidden_size / config.num_attention_heads) - self.all_head_size = self.num_attention_heads * self.attention_head_size - - self.query = tf.keras.layers.Dense( - self.all_head_size, kernel_initializer=get_initializer(config.initializer_range), name="query" - ) - self.key = tf.keras.layers.Dense( - self.all_head_size, kernel_initializer=get_initializer(config.initializer_range), name="key" - ) - self.value = 
tf.keras.layers.Dense( - self.all_head_size, kernel_initializer=get_initializer(config.initializer_range), name="value" - ) - - self.dropout = tf.keras.layers.Dropout(config.attention_probs_dropout_prob) - - def transpose_for_scores(self, x, batch_size): - # Reshape from [batch_size, seq_length, all_head_size] to [batch_size, seq_length, num_attention_heads, attention_head_size] - x = tf.reshape(x, (batch_size, -1, self.num_attention_heads, self.attention_head_size)) - return tf.transpose(x, perm=[0, 2, 1, 3]) - - def call( - self, query_tensor, key_tensor, value_tensor, attention_mask, head_mask, output_attentions, training=False - ): - batch_size = shape_list(attention_mask)[0] - mixed_query_layer = self.query(query_tensor) - mixed_key_layer = self.key(key_tensor) - mixed_value_layer = self.value(value_tensor) - query_layer = self.transpose_for_scores(mixed_query_layer, batch_size) - key_layer = self.transpose_for_scores(mixed_key_layer, batch_size) - value_layer = self.transpose_for_scores(mixed_value_layer, batch_size) - - # Take the dot product between "query" and "key" to get the raw attention scores. - attention_scores = tf.matmul( - query_layer, key_layer, transpose_b=True - ) # (batch size, num_heads, seq_len_q, seq_len_k) - dk = tf.cast(shape_list(key_layer)[-1], dtype=attention_scores.dtype) # scale attention_scores - attention_scores = attention_scores / tf.math.sqrt(dk) - - if attention_mask is not None: - # Apply the attention mask is (precomputed for all layers in TFMobileBertModel call() function) - attention_mask = tf.cast(attention_mask, dtype=attention_scores.dtype) - attention_scores = attention_scores + attention_mask - - # Normalize the attention scores to probabilities. - attention_probs = stable_softmax(attention_scores, axis=-1) - - # This is actually dropping out entire tokens to attend to, which might - # seem a bit unusual, but is taken from the original Transformer paper. 
- attention_probs = self.dropout(attention_probs, training=training) - - # Mask heads if we want to - if head_mask is not None: - attention_probs = attention_probs * head_mask - - context_layer = tf.matmul(attention_probs, value_layer) - - context_layer = tf.transpose(context_layer, perm=[0, 2, 1, 3]) - context_layer = tf.reshape( - context_layer, (batch_size, -1, self.all_head_size) - ) # (batch_size, seq_len_q, all_head_size) - - outputs = (context_layer, attention_probs) if output_attentions else (context_layer,) - - return outputs - - -class TFMobileBertSelfOutput(tf.keras.layers.Layer): - def __init__(self, config, **kwargs): - super().__init__(**kwargs) - self.use_bottleneck = config.use_bottleneck - self.dense = tf.keras.layers.Dense( - config.true_hidden_size, kernel_initializer=get_initializer(config.initializer_range), name="dense" - ) - self.LayerNorm = NORM2FN[config.normalization_type]( - config.true_hidden_size, epsilon=config.layer_norm_eps, name="LayerNorm" - ) - if not self.use_bottleneck: - self.dropout = tf.keras.layers.Dropout(config.hidden_dropout_prob) - - def call(self, hidden_states, residual_tensor, training=False): - hidden_states = self.dense(hidden_states) - if not self.use_bottleneck: - hidden_states = self.dropout(hidden_states, training=training) - hidden_states = self.LayerNorm(hidden_states + residual_tensor) - return hidden_states - - -class TFMobileBertAttention(tf.keras.layers.Layer): - def __init__(self, config, **kwargs): - super().__init__(**kwargs) - self.self = TFMobileBertSelfAttention(config, name="self") - self.mobilebert_output = TFMobileBertSelfOutput(config, name="output") - - def prune_heads(self, heads): - raise NotImplementedError - - def call( - self, - query_tensor, - key_tensor, - value_tensor, - layer_input, - attention_mask, - head_mask, - output_attentions, - training=False, - ): - self_outputs = self.self( - query_tensor, key_tensor, value_tensor, attention_mask, head_mask, output_attentions, training=training - ) - - attention_output = self.mobilebert_output(self_outputs[0], layer_input, training=training) - outputs = (attention_output,) + self_outputs[1:] # add attentions if we output them - return outputs - - -class TFOutputBottleneck(tf.keras.layers.Layer): - def __init__(self, config, **kwargs): - super().__init__(**kwargs) - self.dense = tf.keras.layers.Dense(config.hidden_size, name="dense") - self.LayerNorm = NORM2FN[config.normalization_type]( - config.hidden_size, epsilon=config.layer_norm_eps, name="LayerNorm" - ) - self.dropout = tf.keras.layers.Dropout(config.hidden_dropout_prob) - - def call(self, hidden_states, residual_tensor, training=False): - layer_outputs = self.dense(hidden_states) - layer_outputs = self.dropout(layer_outputs, training=training) - layer_outputs = self.LayerNorm(layer_outputs + residual_tensor) - return layer_outputs - - -class TFMobileBertOutput(tf.keras.layers.Layer): - def __init__(self, config, **kwargs): - super().__init__(**kwargs) - self.use_bottleneck = config.use_bottleneck - self.dense = tf.keras.layers.Dense( - config.true_hidden_size, kernel_initializer=get_initializer(config.initializer_range), name="dense" - ) - self.LayerNorm = NORM2FN[config.normalization_type]( - config.true_hidden_size, epsilon=config.layer_norm_eps, name="LayerNorm" - ) - if not self.use_bottleneck: - self.dropout = tf.keras.layers.Dropout(config.hidden_dropout_prob) - else: - self.bottleneck = TFOutputBottleneck(config, name="bottleneck") - - def call(self, hidden_states, residual_tensor_1, residual_tensor_2, 
training=False): - hidden_states = self.dense(hidden_states) - if not self.use_bottleneck: - hidden_states = self.dropout(hidden_states, training=training) - hidden_states = self.LayerNorm(hidden_states + residual_tensor_1) - else: - hidden_states = self.LayerNorm(hidden_states + residual_tensor_1) - hidden_states = self.bottleneck(hidden_states, residual_tensor_2) - return hidden_states - - -class TFBottleneckLayer(tf.keras.layers.Layer): - def __init__(self, config, **kwargs): - super().__init__(**kwargs) - self.dense = tf.keras.layers.Dense(config.intra_bottleneck_size, name="dense") - self.LayerNorm = NORM2FN[config.normalization_type]( - config.intra_bottleneck_size, epsilon=config.layer_norm_eps, name="LayerNorm" - ) - - def call(self, inputs): - hidden_states = self.dense(inputs) - hidden_states = self.LayerNorm(hidden_states) - return hidden_states - - -class TFBottleneck(tf.keras.layers.Layer): - def __init__(self, config, **kwargs): - super().__init__(**kwargs) - self.key_query_shared_bottleneck = config.key_query_shared_bottleneck - self.use_bottleneck_attention = config.use_bottleneck_attention - self.bottleneck_input = TFBottleneckLayer(config, name="input") - if self.key_query_shared_bottleneck: - self.attention = TFBottleneckLayer(config, name="attention") - - def call(self, hidden_states): - # This method can return three different tuples of values. These different values make use of bottlenecks, - # which are linear layers used to project the hidden states to a lower-dimensional vector, reducing memory - # usage. These linear layer have weights that are learned during training. - # - # If `config.use_bottleneck_attention`, it will return the result of the bottleneck layer four times for the - # key, query, value, and "layer input" to be used by the attention layer. - # This bottleneck is used to project the hidden. This last layer input will be used as a residual tensor - # in the attention self output, after the attention scores have been computed. - # - # If not `config.use_bottleneck_attention` and `config.key_query_shared_bottleneck`, this will return - # four values, three of which have been passed through a bottleneck: the query and key, passed through the same - # bottleneck, and the residual layer to be applied in the attention self output, through another bottleneck. - # - # Finally, in the last case, the values for the query, key and values are the hidden states without bottleneck, - # and the residual layer will be this value passed through a bottleneck. 
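- # As a rough illustration (dimensions assume typical MobileBERT defaults, i.e. hidden_size=512
- # and intra_bottleneck_size=128): with `use_bottleneck_attention` all four returned tensors are
- # the 128-dim bottlenecked states; with `key_query_shared_bottleneck` the shared 128-dim
- # projection is returned for both query and key, the 512-dim hidden states act as values, and
- # the 128-dim bottleneck is the residual "layer input"; otherwise query, key and value are the
- # 512-dim hidden states and only the residual goes through the 128-dim bottleneck.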
- - bottlenecked_hidden_states = self.bottleneck_input(hidden_states) - if self.use_bottleneck_attention: - return (bottlenecked_hidden_states,) * 4 - elif self.key_query_shared_bottleneck: - shared_attention_input = self.attention(hidden_states) - return (shared_attention_input, shared_attention_input, hidden_states, bottlenecked_hidden_states) - else: - return (hidden_states, hidden_states, hidden_states, bottlenecked_hidden_states) - - -class TFFFNOutput(tf.keras.layers.Layer): - def __init__(self, config, **kwargs): - super().__init__(**kwargs) - self.dense = tf.keras.layers.Dense(config.true_hidden_size, name="dense") - self.LayerNorm = NORM2FN[config.normalization_type]( - config.true_hidden_size, epsilon=config.layer_norm_eps, name="LayerNorm" - ) - - def call(self, hidden_states, residual_tensor): - hidden_states = self.dense(hidden_states) - hidden_states = self.LayerNorm(hidden_states + residual_tensor) - return hidden_states - - -class TFFFNLayer(tf.keras.layers.Layer): - def __init__(self, config, **kwargs): - super().__init__(**kwargs) - self.intermediate = TFMobileBertIntermediate(config, name="intermediate") - self.mobilebert_output = TFFFNOutput(config, name="output") - - def call(self, hidden_states): - intermediate_output = self.intermediate(hidden_states) - layer_outputs = self.mobilebert_output(intermediate_output, hidden_states) - return layer_outputs - - -class TFMobileBertLayer(tf.keras.layers.Layer): - def __init__(self, config, **kwargs): - super().__init__(**kwargs) - self.use_bottleneck = config.use_bottleneck - self.num_feedforward_networks = config.num_feedforward_networks - self.attention = TFMobileBertAttention(config, name="attention") - self.intermediate = TFMobileBertIntermediate(config, name="intermediate") - self.mobilebert_output = TFMobileBertOutput(config, name="output") - - if self.use_bottleneck: - self.bottleneck = TFBottleneck(config, name="bottleneck") - if config.num_feedforward_networks > 1: - self.ffn = [TFFFNLayer(config, name=f"ffn.{i}") for i in range(config.num_feedforward_networks - 1)] - - def call(self, hidden_states, attention_mask, head_mask, output_attentions, training=False): - if self.use_bottleneck: - query_tensor, key_tensor, value_tensor, layer_input = self.bottleneck(hidden_states) - else: - query_tensor, key_tensor, value_tensor, layer_input = [hidden_states] * 4 - - attention_outputs = self.attention( - query_tensor, - key_tensor, - value_tensor, - layer_input, - attention_mask, - head_mask, - output_attentions, - training=training, - ) - - attention_output = attention_outputs[0] - s = (attention_output,) - - if self.num_feedforward_networks != 1: - for i, ffn_module in enumerate(self.ffn): - attention_output = ffn_module(attention_output) - s += (attention_output,) - - intermediate_output = self.intermediate(attention_output) - layer_output = self.mobilebert_output(intermediate_output, attention_output, hidden_states, training=training) - - outputs = ( - (layer_output,) - + attention_outputs[1:] - + ( - tf.constant(0), - query_tensor, - key_tensor, - value_tensor, - layer_input, - attention_output, - intermediate_output, - ) - + s - ) # add attentions if we output them - - return outputs - - -class TFMobileBertEncoder(tf.keras.layers.Layer): - def __init__(self, config, **kwargs): - super().__init__(**kwargs) - self.output_attentions = config.output_attentions - self.output_hidden_states = config.output_hidden_states - self.layer = [TFMobileBertLayer(config, name=f"layer_._{i}") for i in range(config.num_hidden_layers)] - - 
def call( - self, - hidden_states, - attention_mask, - head_mask, - output_attentions, - output_hidden_states, - return_dict, - training=False, - ): - all_hidden_states = () if output_hidden_states else None - all_attentions = () if output_attentions else None - for i, layer_module in enumerate(self.layer): - if output_hidden_states: - all_hidden_states = all_hidden_states + (hidden_states,) - - layer_outputs = layer_module( - hidden_states, attention_mask, head_mask[i], output_attentions, training=training - ) - - hidden_states = layer_outputs[0] - - if output_attentions: - all_attentions = all_attentions + (layer_outputs[1],) - - # Add last layer - if output_hidden_states: - all_hidden_states = all_hidden_states + (hidden_states,) - - if not return_dict: - return tuple(v for v in [hidden_states, all_hidden_states, all_attentions] if v is not None) - return TFBaseModelOutput( - last_hidden_state=hidden_states, hidden_states=all_hidden_states, attentions=all_attentions - ) - - -class TFMobileBertPooler(tf.keras.layers.Layer): - def __init__(self, config, **kwargs): - super().__init__(**kwargs) - self.do_activate = config.classifier_activation - if self.do_activate: - self.dense = tf.keras.layers.Dense( - config.hidden_size, - kernel_initializer=get_initializer(config.initializer_range), - activation="tanh", - name="dense", - ) - - def call(self, hidden_states): - # We "pool" the model by simply taking the hidden state corresponding - # to the first token. - first_token_tensor = hidden_states[:, 0] - if not self.do_activate: - return first_token_tensor - else: - pooled_output = self.dense(first_token_tensor) - return pooled_output - - -class TFMobileBertPredictionHeadTransform(tf.keras.layers.Layer): - def __init__(self, config, **kwargs): - super().__init__(**kwargs) - self.dense = tf.keras.layers.Dense( - config.hidden_size, kernel_initializer=get_initializer(config.initializer_range), name="dense" - ) - if isinstance(config.hidden_act, str): - self.transform_act_fn = get_tf_activation(config.hidden_act) - else: - self.transform_act_fn = config.hidden_act - self.LayerNorm = NORM2FN["layer_norm"](config.hidden_size, epsilon=config.layer_norm_eps, name="LayerNorm") - - def call(self, hidden_states): - hidden_states = self.dense(hidden_states) - hidden_states = self.transform_act_fn(hidden_states) - hidden_states = self.LayerNorm(hidden_states) - return hidden_states - - -class TFMobileBertLMPredictionHead(tf.keras.layers.Layer): - def __init__(self, config, **kwargs): - super().__init__(**kwargs) - self.transform = TFMobileBertPredictionHeadTransform(config, name="transform") - self.config = config - - def build(self, input_shape): - self.bias = self.add_weight(shape=(self.config.vocab_size,), initializer="zeros", trainable=True, name="bias") - self.dense = self.add_weight( - shape=(self.config.hidden_size - self.config.embedding_size, self.config.vocab_size), - initializer="zeros", - trainable=True, - name="dense/weight", - ) - self.decoder = self.add_weight( - shape=(self.config.vocab_size, self.config.embedding_size), - initializer="zeros", - trainable=True, - name="decoder/weight", - ) - super().build(input_shape) - - def get_output_embeddings(self): - return self - - def set_output_embeddings(self, value): - self.decoder = value - self.config.vocab_size = shape_list(value)[0] - - def get_bias(self): - return {"bias": self.bias} - - def set_bias(self, value): - self.bias = value["bias"] - self.config.vocab_size = shape_list(value["bias"])[0] - - def call(self, hidden_states): - 
hidden_states = self.transform(hidden_states) - hidden_states = tf.matmul(hidden_states, tf.concat([tf.transpose(self.decoder), self.dense], axis=0)) - hidden_states = hidden_states + self.bias - return hidden_states - - -class TFMobileBertMLMHead(tf.keras.layers.Layer): - def __init__(self, config, **kwargs): - super().__init__(**kwargs) - self.predictions = TFMobileBertLMPredictionHead(config, name="predictions") - - def call(self, sequence_output): - prediction_scores = self.predictions(sequence_output) - return prediction_scores - - -@keras_serializable -class TFMobileBertMainLayer(tf.keras.layers.Layer): - config_class = MobileBertConfig - - def __init__(self, config, add_pooling_layer=True, **kwargs): - super().__init__(**kwargs) - - self.config = config - self.num_hidden_layers = config.num_hidden_layers - self.output_attentions = config.output_attentions - self.output_hidden_states = config.output_hidden_states - self.return_dict = config.use_return_dict - - self.embeddings = TFMobileBertEmbeddings(config, name="embeddings") - self.encoder = TFMobileBertEncoder(config, name="encoder") - self.pooler = TFMobileBertPooler(config, name="pooler") if add_pooling_layer else None - - def get_input_embeddings(self): - return self.embeddings - - def set_input_embeddings(self, value): - self.embeddings.weight = value - self.embeddings.vocab_size = shape_list(value)[0] - - def _prune_heads(self, heads_to_prune): - """ - Prunes heads of the model. heads_to_prune: dict of {layer_num: list of heads to prune in this layer} See base - class PreTrainedModel - """ - raise NotImplementedError - - @unpack_inputs - def call( - self, - input_ids=None, - attention_mask=None, - token_type_ids=None, - position_ids=None, - head_mask=None, - inputs_embeds=None, - output_attentions=None, - output_hidden_states=None, - return_dict=None, - training=False, - ): - if input_ids is not None and inputs_embeds is not None: - raise ValueError("You cannot specify both input_ids and inputs_embeds at the same time") - elif input_ids is not None: - input_shape = shape_list(input_ids) - elif inputs_embeds is not None: - input_shape = shape_list(inputs_embeds)[:-1] - else: - raise ValueError("You have to specify either input_ids or inputs_embeds") - - if attention_mask is None: - attention_mask = tf.fill(input_shape, 1) - - if token_type_ids is None: - token_type_ids = tf.fill(input_shape, 0) - - embedding_output = self.embeddings(input_ids, position_ids, token_type_ids, inputs_embeds, training=training) - - # We create a 3D attention mask from a 2D tensor mask. - # Sizes are [batch_size, 1, 1, to_seq_length] - # So we can broadcast to [batch_size, num_heads, from_seq_length, to_seq_length] - # this attention mask is more simple than the triangular masking of causal attention - # used in OpenAI GPT, we just need to prepare the broadcast dimension here. - extended_attention_mask = tf.reshape(attention_mask, (input_shape[0], 1, 1, input_shape[1])) - - # Since attention_mask is 1.0 for positions we want to attend and 0.0 for - # masked positions, this operation will create a tensor which is 0.0 for - # positions we want to attend and -10000.0 for masked positions. - # Since we are adding it to the raw scores before the softmax, this is - # effectively the same as removing these entirely. 
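- # As a small worked example: an attention_mask row of [1, 1, 0] is turned into the additive
- # mask [0.0, 0.0, -10000.0] below, so the padded position contributes a negligible weight
- # after the softmax.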
- extended_attention_mask = tf.cast(extended_attention_mask, dtype=embedding_output.dtype) - one_cst = tf.constant(1.0, dtype=embedding_output.dtype) - ten_thousand_cst = tf.constant(-10000.0, dtype=embedding_output.dtype) - extended_attention_mask = tf.multiply(tf.subtract(one_cst, extended_attention_mask), ten_thousand_cst) - - # Prepare head mask if needed - # 1.0 in head_mask indicate we keep the head - # attention_probs has shape bsz x n_heads x N x N - # input head_mask has shape [num_heads] or [num_hidden_layers x num_heads] - # and head_mask is converted to shape [num_hidden_layers x batch x num_heads x seq_length x seq_length] - if head_mask is not None: - raise NotImplementedError - else: - head_mask = [None] * self.num_hidden_layers - - encoder_outputs = self.encoder( - embedding_output, - extended_attention_mask, - head_mask, - output_attentions, - output_hidden_states, - return_dict, - training=training, - ) - - sequence_output = encoder_outputs[0] - pooled_output = self.pooler(sequence_output) if self.pooler is not None else None - - if not return_dict: - return ( - sequence_output, - pooled_output, - ) + encoder_outputs[1:] - - return TFBaseModelOutputWithPooling( - last_hidden_state=sequence_output, - pooler_output=pooled_output, - hidden_states=encoder_outputs.hidden_states, - attentions=encoder_outputs.attentions, - ) - - -class TFMobileBertPreTrainedModel(TFPreTrainedModel): - """ - An abstract class to handle weights initialization and a simple interface for downloading and loading pretrained - models. - """ - - config_class = MobileBertConfig - base_model_prefix = "mobilebert" - - -@dataclass -class TFMobileBertForPreTrainingOutput(ModelOutput): - """ - Output type of [`TFMobileBertForPreTraining`]. - - Args: - prediction_logits (`tf.Tensor` of shape `(batch_size, sequence_length, config.vocab_size)`): - Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax). - seq_relationship_logits (`tf.Tensor` of shape `(batch_size, 2)`): - Prediction scores of the next sequence prediction (classification) head (scores of True/False continuation - before SoftMax). - hidden_states (`tuple(tf.Tensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`): - Tuple of `tf.Tensor` (one for the output of the embeddings + one for the output of each layer) of shape - `(batch_size, sequence_length, hidden_size)`. - - Hidden-states of the model at the output of each layer plus the initial embedding outputs. - attentions (`tuple(tf.Tensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`): - Tuple of `tf.Tensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, - sequence_length)`. - - Attentions weights after the attention softmax, used to compute the weighted average in the self-attention - heads. - """ - - loss: tf.Tensor | None = None - prediction_logits: tf.Tensor = None - seq_relationship_logits: tf.Tensor = None - hidden_states: Tuple[tf.Tensor] | None = None - attentions: Tuple[tf.Tensor] | None = None - - -MOBILEBERT_START_DOCSTRING = r""" - - This model inherits from [`TFPreTrainedModel`]. Check the superclass documentation for the generic methods the - library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads - etc.) - - This model is also a [tf.keras.Model](https://www.tensorflow.org/api_docs/python/tf/keras/Model) subclass. 
Use it - as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matter related to general usage and - behavior. - - - - TensorFlow models and layers in `transformers` accept two formats as input: - - - having all inputs as keyword arguments (like PyTorch models), or - - having all inputs as a list, tuple or dict in the first positional argument. - - The reason the second format is supported is that Keras methods prefer this format when passing inputs to models - and layers. Because of this support, when using methods like `model.fit()` things should "just work" for you - just - pass your inputs and labels in any format that `model.fit()` supports! If, however, you want to use the second - format outside of Keras methods like `fit()` and `predict()`, such as when creating your own layers or models with - the Keras `Functional` API, there are three possibilities you can use to gather all the input Tensors in the first - positional argument: - - - a single Tensor with `input_ids` only and nothing else: `model(input_ids)` - - a list of varying length with one or several input Tensors IN THE ORDER given in the docstring: - `model([input_ids, attention_mask])` or `model([input_ids, attention_mask, token_type_ids])` - - a dictionary with one or several input Tensors associated to the input names given in the docstring: - `model({"input_ids": input_ids, "token_type_ids": token_type_ids})` - - Note that when creating models and layers with - [subclassing](https://keras.io/guides/making_new_layers_and_models_via_subclassing/) then you don't need to worry - about any of this, as you can just pass inputs like you would to any other Python function! - - - - Parameters: - config ([`MobileBertConfig`]): Model configuration class with all the parameters of the model. - Initializing with a config file does not load the weights associated with the model, only the - configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. -""" - -MOBILEBERT_INPUTS_DOCSTRING = r""" - Args: - input_ids (`Numpy array` or `tf.Tensor` of shape `({0})`): - Indices of input sequence tokens in the vocabulary. - - Indices can be obtained using [`AutoTokenizer`]. See [`PreTrainedTokenizer.__call__`] and - [`PreTrainedTokenizer.encode`] for details. - - [What are input IDs?](../glossary#input-ids) - attention_mask (`Numpy array` or `tf.Tensor` of shape `({0})`, *optional*): - Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`: - - - 1 for tokens that are **not masked**, - - 0 for tokens that are **masked**. - - [What are attention masks?](../glossary#attention-mask) - token_type_ids (`Numpy array` or `tf.Tensor` of shape `({0})`, *optional*): - Segment token indices to indicate first and second portions of the inputs. Indices are selected in `[0, - 1]`: - - - 0 corresponds to a *sentence A* token, - - 1 corresponds to a *sentence B* token. - - [What are token type IDs?](../glossary#token-type-ids) - position_ids (`Numpy array` or `tf.Tensor` of shape `({0})`, *optional*): - Indices of positions of each input sequence tokens in the position embeddings. Selected in the range `[0, - config.max_position_embeddings - 1]`. - - [What are position IDs?](../glossary#position-ids) - head_mask (`Numpy array` or `tf.Tensor` of shape `(num_heads,)` or `(num_layers, num_heads)`, *optional*): - Mask to nullify selected heads of the self-attention modules. 
Mask values selected in `[0, 1]`: - - - 1 indicates the head is **not masked**, - - 0 indicates the head is **masked**. - - inputs_embeds (`tf.Tensor` of shape `({0}, hidden_size)`, *optional*): - Optionally, instead of passing `input_ids` you can choose to directly pass an embedded representation. This - is useful if you want more control over how to convert `input_ids` indices into associated vectors than the - model's internal embedding lookup matrix. - output_attentions (`bool`, *optional*): - Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned - tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the - config will be used instead. - output_hidden_states (`bool`, *optional*): - Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for - more detail. This argument can be used only in eager mode, in graph mode the value in the config will be - used instead. - return_dict (`bool`, *optional*): - Whether or not to return a [`~utils.ModelOutput`] instead of a plain tuple. This argument can be used in - eager mode, in graph mode the value will always be set to True. - training (`bool`, *optional*, defaults to `False`): - Whether or not to use the model in training mode (some modules like dropout modules have different - behaviors between training and evaluation). -""" - - -@add_start_docstrings( - "The bare MobileBert Model transformer outputting raw hidden-states without any specific head on top.", - MOBILEBERT_START_DOCSTRING, -) -class TFMobileBertModel(TFMobileBertPreTrainedModel): - def __init__(self, config, *inputs, **kwargs): - super().__init__(config, *inputs, **kwargs) - self.mobilebert = TFMobileBertMainLayer(config, name="mobilebert") - - @unpack_inputs - @add_start_docstrings_to_model_forward(MOBILEBERT_INPUTS_DOCSTRING.format("batch_size, sequence_length")) - @add_code_sample_docstrings( - checkpoint=_CHECKPOINT_FOR_DOC, - output_type=TFBaseModelOutputWithPooling, - config_class=_CONFIG_FOR_DOC, - ) - def call( - self, - input_ids: TFModelInputType | None = None, - attention_mask: np.ndarray | tf.Tensor | None = None, - token_type_ids: np.ndarray | tf.Tensor | None = None, - position_ids: np.ndarray | tf.Tensor | None = None, - head_mask: np.ndarray | tf.Tensor | None = None, - inputs_embeds: np.ndarray | tf.Tensor | None = None, - output_attentions: Optional[bool] = None, - output_hidden_states: Optional[bool] = None, - return_dict: Optional[bool] = None, - training: Optional[bool] = False, - ) -> Union[Tuple, TFBaseModelOutputWithPooling]: - outputs = self.mobilebert( - input_ids=input_ids, - attention_mask=attention_mask, - token_type_ids=token_type_ids, - position_ids=position_ids, - head_mask=head_mask, - inputs_embeds=inputs_embeds, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - training=training, - ) - - return outputs - - -@add_start_docstrings( - """ - MobileBert Model with two heads on top as done during the pretraining: a `masked language modeling` head and a - `next sentence prediction (classification)` head. 
- """, - MOBILEBERT_START_DOCSTRING, -) -class TFMobileBertForPreTraining(TFMobileBertPreTrainedModel, TFMobileBertPreTrainingLoss): - def __init__(self, config, *inputs, **kwargs): - super().__init__(config, *inputs, **kwargs) - self.mobilebert = TFMobileBertMainLayer(config, name="mobilebert") - self.predictions = TFMobileBertMLMHead(config, name="predictions___cls") - self.seq_relationship = TFMobileBertOnlyNSPHead(2, name="seq_relationship___cls") - - def get_lm_head(self): - return self.predictions.predictions - - def get_prefix_bias_name(self): - warnings.warn("The method get_prefix_bias_name is deprecated. Please use `get_bias` instead.", FutureWarning) - return self.name + "/" + self.predictions.name + "/" + self.predictions.predictions.name - - @unpack_inputs - @add_start_docstrings_to_model_forward(MOBILEBERT_INPUTS_DOCSTRING.format("batch_size, sequence_length")) - @replace_return_docstrings(output_type=TFMobileBertForPreTrainingOutput, config_class=_CONFIG_FOR_DOC) - def call( - self, - input_ids: TFModelInputType | None = None, - attention_mask: np.ndarray | tf.Tensor | None = None, - token_type_ids: np.ndarray | tf.Tensor | None = None, - position_ids: np.ndarray | tf.Tensor | None = None, - head_mask: np.ndarray | tf.Tensor | None = None, - inputs_embeds: np.ndarray | tf.Tensor | None = None, - output_attentions: Optional[bool] = None, - output_hidden_states: Optional[bool] = None, - return_dict: Optional[bool] = None, - labels: np.ndarray | tf.Tensor | None = None, - next_sentence_label: np.ndarray | tf.Tensor | None = None, - training: Optional[bool] = False, - ) -> Union[Tuple, TFMobileBertForPreTrainingOutput]: - r""" - Return: - - Examples: - - ```python - >>> import tensorflow as tf - >>> from transformers import AutoTokenizer, TFMobileBertForPreTraining - - >>> tokenizer = AutoTokenizer.from_pretrained("google/mobilebert-uncased") - >>> model = TFMobileBertForPreTraining.from_pretrained("google/mobilebert-uncased") - >>> input_ids = tf.constant(tokenizer.encode("Hello, my dog is cute"))[None, :] # Batch size 1 - >>> outputs = model(input_ids) - >>> prediction_scores, seq_relationship_scores = outputs[:2] - ```""" - outputs = self.mobilebert( - input_ids, - attention_mask=attention_mask, - token_type_ids=token_type_ids, - position_ids=position_ids, - head_mask=head_mask, - inputs_embeds=inputs_embeds, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - training=training, - ) - - sequence_output, pooled_output = outputs[:2] - prediction_scores = self.predictions(sequence_output) - seq_relationship_score = self.seq_relationship(pooled_output) - - total_loss = None - if labels is not None and next_sentence_label is not None: - d_labels = {"labels": labels} - d_labels["next_sentence_label"] = next_sentence_label - total_loss = self.hf_compute_loss(labels=d_labels, logits=(prediction_scores, seq_relationship_score)) - - if not return_dict: - output = (prediction_scores, seq_relationship_score) + outputs[2:] - return ((total_loss,) + output) if total_loss is not None else output - - return TFMobileBertForPreTrainingOutput( - loss=total_loss, - prediction_logits=prediction_scores, - seq_relationship_logits=seq_relationship_score, - hidden_states=outputs.hidden_states, - attentions=outputs.attentions, - ) - - -@add_start_docstrings("""MobileBert Model with a `language modeling` head on top.""", MOBILEBERT_START_DOCSTRING) -class TFMobileBertForMaskedLM(TFMobileBertPreTrainedModel, TFMaskedLanguageModelingLoss): - # 
names with a '.' represents the authorized unexpected/missing layers when a TF model is loaded from a PT model - _keys_to_ignore_on_load_unexpected = [ - r"pooler", - r"seq_relationship___cls", - r"cls.seq_relationship", - ] - - def __init__(self, config, *inputs, **kwargs): - super().__init__(config, *inputs, **kwargs) - - self.mobilebert = TFMobileBertMainLayer(config, add_pooling_layer=False, name="mobilebert") - self.predictions = TFMobileBertMLMHead(config, name="predictions___cls") - - def get_lm_head(self): - return self.predictions.predictions - - def get_prefix_bias_name(self): - warnings.warn("The method get_prefix_bias_name is deprecated. Please use `get_bias` instead.", FutureWarning) - return self.name + "/" + self.mlm.name + "/" + self.mlm.predictions.name - - @unpack_inputs - @add_start_docstrings_to_model_forward(MOBILEBERT_INPUTS_DOCSTRING.format("batch_size, sequence_length")) - @add_code_sample_docstrings( - checkpoint=_CHECKPOINT_FOR_DOC, - output_type=TFMaskedLMOutput, - config_class=_CONFIG_FOR_DOC, - expected_output="'paris'", - expected_loss=0.57, - ) - def call( - self, - input_ids: TFModelInputType | None = None, - attention_mask: np.ndarray | tf.Tensor | None = None, - token_type_ids: np.ndarray | tf.Tensor | None = None, - position_ids: np.ndarray | tf.Tensor | None = None, - head_mask: np.ndarray | tf.Tensor | None = None, - inputs_embeds: np.ndarray | tf.Tensor | None = None, - output_attentions: Optional[bool] = None, - output_hidden_states: Optional[bool] = None, - return_dict: Optional[bool] = None, - labels: np.ndarray | tf.Tensor | None = None, - training: Optional[bool] = False, - ) -> Union[Tuple, TFMaskedLMOutput]: - r""" - labels (`tf.Tensor` of shape `(batch_size, sequence_length)`, *optional*): - Labels for computing the masked language modeling loss. Indices should be in `[-100, 0, ..., - config.vocab_size]` (see `input_ids` docstring) Tokens with indices set to `-100` are ignored (masked), the - loss is only computed for the tokens with labels - """ - outputs = self.mobilebert( - input_ids, - attention_mask=attention_mask, - token_type_ids=token_type_ids, - position_ids=position_ids, - head_mask=head_mask, - inputs_embeds=inputs_embeds, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - training=training, - ) - sequence_output = outputs[0] - prediction_scores = self.predictions(sequence_output, training=training) - - loss = None if labels is None else self.hf_compute_loss(labels, prediction_scores) - - if not return_dict: - output = (prediction_scores,) + outputs[2:] - return ((loss,) + output) if loss is not None else output - - return TFMaskedLMOutput( - loss=loss, - logits=prediction_scores, - hidden_states=outputs.hidden_states, - attentions=outputs.attentions, - ) - - -class TFMobileBertOnlyNSPHead(tf.keras.layers.Layer): - def __init__(self, config, **kwargs): - super().__init__(**kwargs) - self.seq_relationship = tf.keras.layers.Dense(2, name="seq_relationship") - - def call(self, pooled_output): - seq_relationship_score = self.seq_relationship(pooled_output) - return seq_relationship_score - - -@add_start_docstrings( - """MobileBert Model with a `next sentence prediction (classification)` head on top.""", - MOBILEBERT_START_DOCSTRING, -) -class TFMobileBertForNextSentencePrediction(TFMobileBertPreTrainedModel, TFNextSentencePredictionLoss): - # names with a '.' 
represents the authorized unexpected/missing layers when a TF model is loaded from a PT model - _keys_to_ignore_on_load_unexpected = [r"predictions___cls", r"cls.predictions"] - - def __init__(self, config, *inputs, **kwargs): - super().__init__(config, *inputs, **kwargs) - - self.mobilebert = TFMobileBertMainLayer(config, name="mobilebert") - self.cls = TFMobileBertOnlyNSPHead(config, name="seq_relationship___cls") - - @unpack_inputs - @add_start_docstrings_to_model_forward(MOBILEBERT_INPUTS_DOCSTRING.format("batch_size, sequence_length")) - @replace_return_docstrings(output_type=TFNextSentencePredictorOutput, config_class=_CONFIG_FOR_DOC) - def call( - self, - input_ids: TFModelInputType | None = None, - attention_mask: np.ndarray | tf.Tensor | None = None, - token_type_ids: np.ndarray | tf.Tensor | None = None, - position_ids: np.ndarray | tf.Tensor | None = None, - head_mask: np.ndarray | tf.Tensor | None = None, - inputs_embeds: np.ndarray | tf.Tensor | None = None, - output_attentions: Optional[bool] = None, - output_hidden_states: Optional[bool] = None, - return_dict: Optional[bool] = None, - next_sentence_label: np.ndarray | tf.Tensor | None = None, - training: Optional[bool] = False, - ) -> Union[Tuple, TFNextSentencePredictorOutput]: - r""" - Return: - - Examples: - - ```python - >>> import tensorflow as tf - >>> from transformers import AutoTokenizer, TFMobileBertForNextSentencePrediction - - >>> tokenizer = AutoTokenizer.from_pretrained("google/mobilebert-uncased") - >>> model = TFMobileBertForNextSentencePrediction.from_pretrained("google/mobilebert-uncased") - - >>> prompt = "In Italy, pizza served in formal settings, such as at a restaurant, is presented unsliced." - >>> next_sentence = "The sky is blue due to the shorter wavelength of blue light." - >>> encoding = tokenizer(prompt, next_sentence, return_tensors="tf") - - >>> logits = model(encoding["input_ids"], token_type_ids=encoding["token_type_ids"])[0] - ```""" - outputs = self.mobilebert( - input_ids, - attention_mask=attention_mask, - token_type_ids=token_type_ids, - position_ids=position_ids, - head_mask=head_mask, - inputs_embeds=inputs_embeds, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - training=training, - ) - pooled_output = outputs[1] - seq_relationship_scores = self.cls(pooled_output) - - next_sentence_loss = ( - None - if next_sentence_label is None - else self.hf_compute_loss(labels=next_sentence_label, logits=seq_relationship_scores) - ) - - if not return_dict: - output = (seq_relationship_scores,) + outputs[2:] - return ((next_sentence_loss,) + output) if next_sentence_loss is not None else output - - return TFNextSentencePredictorOutput( - loss=next_sentence_loss, - logits=seq_relationship_scores, - hidden_states=outputs.hidden_states, - attentions=outputs.attentions, - ) - - -@add_start_docstrings( - """ - MobileBert Model transformer with a sequence classification/regression head on top (a linear layer on top of the - pooled output) e.g. for GLUE tasks. - """, - MOBILEBERT_START_DOCSTRING, -) -class TFMobileBertForSequenceClassification(TFMobileBertPreTrainedModel, TFSequenceClassificationLoss): - # names with a '.' 
represents the authorized unexpected/missing layers when a TF model is loaded from a PT model - _keys_to_ignore_on_load_unexpected = [ - r"predictions___cls", - r"seq_relationship___cls", - r"cls.predictions", - r"cls.seq_relationship", - ] - _keys_to_ignore_on_load_missing = [r"dropout"] - - def __init__(self, config, *inputs, **kwargs): - super().__init__(config, *inputs, **kwargs) - self.num_labels = config.num_labels - - self.mobilebert = TFMobileBertMainLayer(config, name="mobilebert") - classifier_dropout = ( - config.classifier_dropout if config.classifier_dropout is not None else config.hidden_dropout_prob - ) - self.dropout = tf.keras.layers.Dropout(classifier_dropout) - self.classifier = tf.keras.layers.Dense( - config.num_labels, kernel_initializer=get_initializer(config.initializer_range), name="classifier" - ) - - @unpack_inputs - @add_start_docstrings_to_model_forward(MOBILEBERT_INPUTS_DOCSTRING.format("batch_size, sequence_length")) - @add_code_sample_docstrings( - checkpoint=_CHECKPOINT_FOR_SEQUENCE_CLASSIFICATION, - output_type=TFSequenceClassifierOutput, - config_class=_CONFIG_FOR_DOC, - expected_output=_SEQ_CLASS_EXPECTED_OUTPUT, - expected_loss=_SEQ_CLASS_EXPECTED_LOSS, - ) - def call( - self, - input_ids: TFModelInputType | None = None, - attention_mask: np.ndarray | tf.Tensor | None = None, - token_type_ids: np.ndarray | tf.Tensor | None = None, - position_ids: np.ndarray | tf.Tensor | None = None, - head_mask: np.ndarray | tf.Tensor | None = None, - inputs_embeds: np.ndarray | tf.Tensor | None = None, - output_attentions: Optional[bool] = None, - output_hidden_states: Optional[bool] = None, - return_dict: Optional[bool] = None, - labels: np.ndarray | tf.Tensor | None = None, - training: Optional[bool] = False, - ) -> Union[Tuple, TFSequenceClassifierOutput]: - r""" - labels (`tf.Tensor` of shape `(batch_size,)`, *optional*): - Labels for computing the sequence classification/regression loss. Indices should be in `[0, ..., - config.num_labels - 1]`. If `config.num_labels == 1` a regression loss is computed (Mean-Square loss), If - `config.num_labels > 1` a classification loss is computed (Cross-Entropy). - """ - outputs = self.mobilebert( - input_ids, - attention_mask=attention_mask, - token_type_ids=token_type_ids, - position_ids=position_ids, - head_mask=head_mask, - inputs_embeds=inputs_embeds, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - training=training, - ) - pooled_output = outputs[1] - - pooled_output = self.dropout(pooled_output, training=training) - logits = self.classifier(pooled_output) - - loss = None if labels is None else self.hf_compute_loss(labels, logits) - - if not return_dict: - output = (logits,) + outputs[2:] - return ((loss,) + output) if loss is not None else output - - return TFSequenceClassifierOutput( - loss=loss, - logits=logits, - hidden_states=outputs.hidden_states, - attentions=outputs.attentions, - ) - - -@add_start_docstrings( - """ - MobileBert Model with a span classification head on top for extractive question-answering tasks like SQuAD (a - linear layers on top of the hidden-states output to compute `span start logits` and `span end logits`). - """, - MOBILEBERT_START_DOCSTRING, -) -class TFMobileBertForQuestionAnswering(TFMobileBertPreTrainedModel, TFQuestionAnsweringLoss): - # names with a '.' 
represents the authorized unexpected/missing layers when a TF model is loaded from a PT model - _keys_to_ignore_on_load_unexpected = [ - r"pooler", - r"predictions___cls", - r"seq_relationship___cls", - r"cls.predictions", - r"cls.seq_relationship", - ] - - def __init__(self, config, *inputs, **kwargs): - super().__init__(config, *inputs, **kwargs) - self.num_labels = config.num_labels - - self.mobilebert = TFMobileBertMainLayer(config, add_pooling_layer=False, name="mobilebert") - self.qa_outputs = tf.keras.layers.Dense( - config.num_labels, kernel_initializer=get_initializer(config.initializer_range), name="qa_outputs" - ) - - @unpack_inputs - @add_start_docstrings_to_model_forward(MOBILEBERT_INPUTS_DOCSTRING.format("batch_size, sequence_length")) - @add_code_sample_docstrings( - checkpoint=_CHECKPOINT_FOR_QA, - output_type=TFQuestionAnsweringModelOutput, - config_class=_CONFIG_FOR_DOC, - qa_target_start_index=_QA_TARGET_START_INDEX, - qa_target_end_index=_QA_TARGET_END_INDEX, - expected_output=_QA_EXPECTED_OUTPUT, - expected_loss=_QA_EXPECTED_LOSS, - ) - def call( - self, - input_ids: TFModelInputType | None = None, - attention_mask: np.ndarray | tf.Tensor | None = None, - token_type_ids: np.ndarray | tf.Tensor | None = None, - position_ids: np.ndarray | tf.Tensor | None = None, - head_mask: np.ndarray | tf.Tensor | None = None, - inputs_embeds: np.ndarray | tf.Tensor | None = None, - output_attentions: Optional[bool] = None, - output_hidden_states: Optional[bool] = None, - return_dict: Optional[bool] = None, - start_positions: np.ndarray | tf.Tensor | None = None, - end_positions: np.ndarray | tf.Tensor | None = None, - training: Optional[bool] = False, - ) -> Union[Tuple, TFQuestionAnsweringModelOutput]: - r""" - start_positions (`tf.Tensor` of shape `(batch_size,)`, *optional*): - Labels for position (index) of the start of the labelled span for computing the token classification loss. - Positions are clamped to the length of the sequence (`sequence_length`). Position outside of the sequence - are not taken into account for computing the loss. - end_positions (`tf.Tensor` of shape `(batch_size,)`, *optional*): - Labels for position (index) of the end of the labelled span for computing the token classification loss. - Positions are clamped to the length of the sequence (`sequence_length`). Position outside of the sequence - are not taken into account for computing the loss. 
- """ - outputs = self.mobilebert( - input_ids, - attention_mask=attention_mask, - token_type_ids=token_type_ids, - position_ids=position_ids, - head_mask=head_mask, - inputs_embeds=inputs_embeds, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - training=training, - ) - sequence_output = outputs[0] - - logits = self.qa_outputs(sequence_output) - start_logits, end_logits = tf.split(logits, 2, axis=-1) - start_logits = tf.squeeze(start_logits, axis=-1) - end_logits = tf.squeeze(end_logits, axis=-1) - - loss = None - if start_positions is not None and end_positions is not None: - labels = {"start_position": start_positions, "end_position": end_positions} - loss = self.hf_compute_loss(labels, (start_logits, end_logits)) - - if not return_dict: - output = (start_logits, end_logits) + outputs[2:] - return ((loss,) + output) if loss is not None else output - - return TFQuestionAnsweringModelOutput( - loss=loss, - start_logits=start_logits, - end_logits=end_logits, - hidden_states=outputs.hidden_states, - attentions=outputs.attentions, - ) - - -@add_start_docstrings( - """ - MobileBert Model with a multiple choice classification head on top (a linear layer on top of the pooled output and - a softmax) e.g. for RocStories/SWAG tasks. - """, - MOBILEBERT_START_DOCSTRING, -) -class TFMobileBertForMultipleChoice(TFMobileBertPreTrainedModel, TFMultipleChoiceLoss): - # names with a '.' represents the authorized unexpected/missing layers when a TF model is loaded from a PT model - _keys_to_ignore_on_load_unexpected = [ - r"predictions___cls", - r"seq_relationship___cls", - r"cls.predictions", - r"cls.seq_relationship", - ] - _keys_to_ignore_on_load_missing = [r"dropout"] - - def __init__(self, config, *inputs, **kwargs): - super().__init__(config, *inputs, **kwargs) - - self.mobilebert = TFMobileBertMainLayer(config, name="mobilebert") - self.dropout = tf.keras.layers.Dropout(config.hidden_dropout_prob) - self.classifier = tf.keras.layers.Dense( - 1, kernel_initializer=get_initializer(config.initializer_range), name="classifier" - ) - - @unpack_inputs - @add_start_docstrings_to_model_forward( - MOBILEBERT_INPUTS_DOCSTRING.format("batch_size, num_choices, sequence_length") - ) - @add_code_sample_docstrings( - checkpoint=_CHECKPOINT_FOR_DOC, - output_type=TFMultipleChoiceModelOutput, - config_class=_CONFIG_FOR_DOC, - ) - def call( - self, - input_ids: TFModelInputType | None = None, - attention_mask: np.ndarray | tf.Tensor | None = None, - token_type_ids: np.ndarray | tf.Tensor | None = None, - position_ids: np.ndarray | tf.Tensor | None = None, - head_mask: np.ndarray | tf.Tensor | None = None, - inputs_embeds: np.ndarray | tf.Tensor | None = None, - output_attentions: Optional[bool] = None, - output_hidden_states: Optional[bool] = None, - return_dict: Optional[bool] = None, - labels: np.ndarray | tf.Tensor | None = None, - training: Optional[bool] = False, - ) -> Union[Tuple, TFMultipleChoiceModelOutput]: - r""" - labels (`tf.Tensor` of shape `(batch_size,)`, *optional*): - Labels for computing the multiple choice classification loss. Indices should be in `[0, ..., num_choices]` - where `num_choices` is the size of the second dimension of the input tensors. 
(See `input_ids` above) - """ - if input_ids is not None: - num_choices = shape_list(input_ids)[1] - seq_length = shape_list(input_ids)[2] - else: - num_choices = shape_list(inputs_embeds)[1] - seq_length = shape_list(inputs_embeds)[2] - - flat_input_ids = tf.reshape(input_ids, (-1, seq_length)) if input_ids is not None else None - flat_attention_mask = tf.reshape(attention_mask, (-1, seq_length)) if attention_mask is not None else None - flat_token_type_ids = tf.reshape(token_type_ids, (-1, seq_length)) if token_type_ids is not None else None - flat_position_ids = tf.reshape(position_ids, (-1, seq_length)) if position_ids is not None else None - flat_inputs_embeds = ( - tf.reshape(inputs_embeds, (-1, seq_length, shape_list(inputs_embeds)[3])) - if inputs_embeds is not None - else None - ) - outputs = self.mobilebert( - flat_input_ids, - flat_attention_mask, - flat_token_type_ids, - flat_position_ids, - head_mask, - flat_inputs_embeds, - output_attentions, - output_hidden_states, - return_dict=return_dict, - training=training, - ) - pooled_output = outputs[1] - pooled_output = self.dropout(pooled_output, training=training) - logits = self.classifier(pooled_output) - reshaped_logits = tf.reshape(logits, (-1, num_choices)) - - loss = None if labels is None else self.hf_compute_loss(labels, reshaped_logits) - - if not return_dict: - output = (reshaped_logits,) + outputs[2:] - return ((loss,) + output) if loss is not None else output - - return TFMultipleChoiceModelOutput( - loss=loss, - logits=reshaped_logits, - hidden_states=outputs.hidden_states, - attentions=outputs.attentions, - ) - - -@add_start_docstrings( - """ - MobileBert Model with a token classification head on top (a linear layer on top of the hidden-states output) e.g. - for Named-Entity-Recognition (NER) tasks. - """, - MOBILEBERT_START_DOCSTRING, -) -class TFMobileBertForTokenClassification(TFMobileBertPreTrainedModel, TFTokenClassificationLoss): - # names with a '.' 
represents the authorized unexpected/missing layers when a TF model is loaded from a PT model - _keys_to_ignore_on_load_unexpected = [ - r"pooler", - r"predictions___cls", - r"seq_relationship___cls", - r"cls.predictions", - r"cls.seq_relationship", - ] - _keys_to_ignore_on_load_missing = [r"dropout"] - - def __init__(self, config, *inputs, **kwargs): - super().__init__(config, *inputs, **kwargs) - self.num_labels = config.num_labels - - self.mobilebert = TFMobileBertMainLayer(config, add_pooling_layer=False, name="mobilebert") - classifier_dropout = ( - config.classifier_dropout if config.classifier_dropout is not None else config.hidden_dropout_prob - ) - self.dropout = tf.keras.layers.Dropout(classifier_dropout) - self.classifier = tf.keras.layers.Dense( - config.num_labels, kernel_initializer=get_initializer(config.initializer_range), name="classifier" - ) - - @unpack_inputs - @add_start_docstrings_to_model_forward(MOBILEBERT_INPUTS_DOCSTRING.format("batch_size, sequence_length")) - @add_code_sample_docstrings( - checkpoint=_CHECKPOINT_FOR_TOKEN_CLASSIFICATION, - output_type=TFTokenClassifierOutput, - config_class=_CONFIG_FOR_DOC, - expected_output=_TOKEN_CLASS_EXPECTED_OUTPUT, - expected_loss=_TOKEN_CLASS_EXPECTED_LOSS, - ) - def call( - self, - input_ids: TFModelInputType | None = None, - attention_mask: np.ndarray | tf.Tensor | None = None, - token_type_ids: np.ndarray | tf.Tensor | None = None, - position_ids: np.ndarray | tf.Tensor | None = None, - head_mask: np.ndarray | tf.Tensor | None = None, - inputs_embeds: np.ndarray | tf.Tensor | None = None, - output_attentions: Optional[bool] = None, - output_hidden_states: Optional[bool] = None, - return_dict: Optional[bool] = None, - labels: np.ndarray | tf.Tensor | None = None, - training: Optional[bool] = False, - ) -> Union[Tuple, TFTokenClassifierOutput]: - r""" - labels (`tf.Tensor` of shape `(batch_size, sequence_length)`, *optional*): - Labels for computing the token classification loss. Indices should be in `[0, ..., config.num_labels - 1]`. - """ - outputs = self.mobilebert( - input_ids, - attention_mask=attention_mask, - token_type_ids=token_type_ids, - position_ids=position_ids, - head_mask=head_mask, - inputs_embeds=inputs_embeds, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - training=training, - ) - sequence_output = outputs[0] - - sequence_output = self.dropout(sequence_output, training=training) - logits = self.classifier(sequence_output) - - loss = None if labels is None else self.hf_compute_loss(labels, logits) - - if not return_dict: - output = (logits,) + outputs[2:] - return ((loss,) + output) if loss is not None else output - - return TFTokenClassifierOutput( - loss=loss, - logits=logits, - hidden_states=outputs.hidden_states, - attentions=outputs.attentions, - ) diff --git a/spaces/ynhe/AskAnything/models/grit_src/third_party/CenterNet2/tools/benchmark.py b/spaces/ynhe/AskAnything/models/grit_src/third_party/CenterNet2/tools/benchmark.py deleted file mode 100644 index aaac56400148f7b140b7c1356bbbc3b4293e5ce3..0000000000000000000000000000000000000000 --- a/spaces/ynhe/AskAnything/models/grit_src/third_party/CenterNet2/tools/benchmark.py +++ /dev/null @@ -1,197 +0,0 @@ -#!/usr/bin/env python -# Copyright (c) Facebook, Inc. and its affiliates. -""" -A script to benchmark builtin models. - -Note: this script has an extra dependency of psutil. 
-""" - -import itertools -import logging -import psutil -import torch -import tqdm -from fvcore.common.timer import Timer -from torch.nn.parallel import DistributedDataParallel - -from detectron2.checkpoint import DetectionCheckpointer -from detectron2.config import LazyConfig, get_cfg, instantiate -from detectron2.data import ( - DatasetFromList, - build_detection_test_loader, - build_detection_train_loader, -) -from detectron2.data.benchmark import DataLoaderBenchmark -from detectron2.engine import AMPTrainer, SimpleTrainer, default_argument_parser, hooks, launch -from detectron2.modeling import build_model -from detectron2.solver import build_optimizer -from detectron2.utils import comm -from detectron2.utils.collect_env import collect_env_info -from detectron2.utils.events import CommonMetricPrinter -from detectron2.utils.logger import setup_logger - -logger = logging.getLogger("detectron2") - - -def setup(args): - if args.config_file.endswith(".yaml"): - cfg = get_cfg() - cfg.merge_from_file(args.config_file) - cfg.SOLVER.BASE_LR = 0.001 # Avoid NaNs. Not useful in this script anyway. - cfg.merge_from_list(args.opts) - cfg.freeze() - else: - cfg = LazyConfig.load(args.config_file) - cfg = LazyConfig.apply_overrides(cfg, args.opts) - setup_logger(distributed_rank=comm.get_rank()) - return cfg - - -def create_data_benchmark(cfg, args): - if args.config_file.endswith(".py"): - dl_cfg = cfg.dataloader.train - dl_cfg._target_ = DataLoaderBenchmark - return instantiate(dl_cfg) - else: - kwargs = build_detection_train_loader.from_config(cfg) - kwargs.pop("aspect_ratio_grouping", None) - kwargs["_target_"] = DataLoaderBenchmark - return instantiate(kwargs) - - -def RAM_msg(): - vram = psutil.virtual_memory() - return "RAM Usage: {:.2f}/{:.2f} GB".format( - (vram.total - vram.available) / 1024 ** 3, vram.total / 1024 ** 3 - ) - - -def benchmark_data(args): - cfg = setup(args) - logger.info("After spawning " + RAM_msg()) - - benchmark = create_data_benchmark(cfg, args) - benchmark.benchmark_distributed(250, 10) - # test for a few more rounds - for k in range(10): - logger.info(f"Iteration {k} " + RAM_msg()) - benchmark.benchmark_distributed(250, 1) - - -def benchmark_data_advanced(args): - # benchmark dataloader with more details to help analyze performance bottleneck - cfg = setup(args) - benchmark = create_data_benchmark(cfg, args) - - if comm.get_rank() == 0: - benchmark.benchmark_dataset(100) - benchmark.benchmark_mapper(100) - benchmark.benchmark_workers(100, warmup=10) - benchmark.benchmark_IPC(100, warmup=10) - if comm.get_world_size() > 1: - benchmark.benchmark_distributed(100) - logger.info("Rerun ...") - benchmark.benchmark_distributed(100) - - -def benchmark_train(args): - cfg = setup(args) - model = build_model(cfg) - logger.info("Model:\n{}".format(model)) - if comm.get_world_size() > 1: - model = DistributedDataParallel( - model, device_ids=[comm.get_local_rank()], broadcast_buffers=False - ) - optimizer = build_optimizer(cfg, model) - checkpointer = DetectionCheckpointer(model, optimizer=optimizer) - checkpointer.load(cfg.MODEL.WEIGHTS) - - cfg.defrost() - cfg.DATALOADER.NUM_WORKERS = 2 - data_loader = build_detection_train_loader(cfg) - dummy_data = list(itertools.islice(data_loader, 100)) - - def f(): - data = DatasetFromList(dummy_data, copy=False, serialize=False) - while True: - yield from data - - max_iter = 400 - trainer = (AMPTrainer if cfg.SOLVER.AMP.ENABLED else SimpleTrainer)(model, f(), optimizer) - trainer.register_hooks( - [ - hooks.IterationTimer(), - 
hooks.PeriodicWriter([CommonMetricPrinter(max_iter)]), - hooks.TorchProfiler( - lambda trainer: trainer.iter == max_iter - 1, cfg.OUTPUT_DIR, save_tensorboard=True - ), - ] - ) - trainer.train(1, max_iter) - - -@torch.no_grad() -def benchmark_eval(args): - cfg = setup(args) - if args.config_file.endswith(".yaml"): - model = build_model(cfg) - DetectionCheckpointer(model).load(cfg.MODEL.WEIGHTS) - - cfg.defrost() - cfg.DATALOADER.NUM_WORKERS = 0 - data_loader = build_detection_test_loader(cfg, cfg.DATASETS.TEST[0]) - else: - model = instantiate(cfg.model) - model.to(cfg.train.device) - DetectionCheckpointer(model).load(cfg.train.init_checkpoint) - - cfg.dataloader.num_workers = 0 - data_loader = instantiate(cfg.dataloader.test) - - model.eval() - logger.info("Model:\n{}".format(model)) - dummy_data = DatasetFromList(list(itertools.islice(data_loader, 100)), copy=False) - - def f(): - while True: - yield from dummy_data - - for k in range(5): # warmup - model(dummy_data[k]) - - max_iter = 300 - timer = Timer() - with tqdm.tqdm(total=max_iter) as pbar: - for idx, d in enumerate(f()): - if idx == max_iter: - break - model(d) - pbar.update() - logger.info("{} iters in {} seconds.".format(max_iter, timer.seconds())) - - -if __name__ == "__main__": - parser = default_argument_parser() - parser.add_argument("--task", choices=["train", "eval", "data", "data_advanced"], required=True) - args = parser.parse_args() - assert not args.eval_only - - logger.info("Environment info:\n" + collect_env_info()) - if "data" in args.task: - print("Initial " + RAM_msg()) - if args.task == "data": - f = benchmark_data - if args.task == "data_advanced": - f = benchmark_data_advanced - elif args.task == "train": - """ - Note: training speed may not be representative. - The training cost of a R-CNN model varies with the content of the data - and the quality of the model. - """ - f = benchmark_train - elif args.task == "eval": - f = benchmark_eval - # only benchmark single-GPU inference. - assert args.num_gpus == 1 and args.num_machines == 1 - launch(f, args.num_gpus, args.num_machines, args.machine_rank, args.dist_url, args=(args,)) diff --git a/spaces/ysharma/LLaVA_v1/llava/model/builder.py b/spaces/ysharma/LLaVA_v1/llava/model/builder.py deleted file mode 100644 index 9b93ec7324ce47ee221c46237bd47d044ada8f62..0000000000000000000000000000000000000000 --- a/spaces/ysharma/LLaVA_v1/llava/model/builder.py +++ /dev/null @@ -1,151 +0,0 @@ -# Copyright 2023 Haotian Liu -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
- - -import os -import warnings -import shutil - -from transformers import AutoTokenizer, AutoModelForCausalLM, AutoConfig, BitsAndBytesConfig -import torch -from llava.model import * -from llava.constants import DEFAULT_IMAGE_PATCH_TOKEN, DEFAULT_IM_START_TOKEN, DEFAULT_IM_END_TOKEN - - -def load_pretrained_model(model_path, model_base, model_name, load_8bit=False, load_4bit=False, device_map="auto"): - kwargs = {"device_map": device_map} - kwargs["offload_folder"] = "offload" - - if load_8bit: - kwargs['load_in_8bit'] = True - elif load_4bit: - kwargs['load_in_4bit'] = True - kwargs['quantization_config'] = BitsAndBytesConfig( - load_in_4bit=True, - bnb_4bit_compute_dtype=torch.float16, - bnb_4bit_use_double_quant=True, - bnb_4bit_quant_type='nf4' - ) - else: - kwargs['torch_dtype'] = torch.float16 - - if 'llava' in model_name.lower(): - # Load LLaVA model - if 'lora' in model_name.lower() and model_base is None: - warnings.warn('There is `lora` in model name but no `model_base` is provided. If you are loading a LoRA model, please provide the `model_base` argument. Detailed instruction: https://github.com/haotian-liu/LLaVA#launch-a-model-worker-lora-weights-unmerged.') - if 'lora' in model_name.lower() and model_base is not None: - lora_cfg_pretrained = AutoConfig.from_pretrained(model_path) - tokenizer = AutoTokenizer.from_pretrained(model_base, use_fast=False) - print('Loading LLaVA from base model...') - model = LlavaLlamaForCausalLM.from_pretrained(model_base, low_cpu_mem_usage=True, config=lora_cfg_pretrained, **kwargs) - token_num, tokem_dim = model.lm_head.out_features, model.lm_head.in_features - if model.lm_head.weight.shape[0] != token_num: - model.lm_head.weight = torch.nn.Parameter(torch.empty(token_num, tokem_dim, device=model.device, dtype=model.dtype)) - model.model.embed_tokens.weight = torch.nn.Parameter(torch.empty(token_num, tokem_dim, device=model.device, dtype=model.dtype)) - - print('Loading additional LLaVA weights...') - if os.path.exists(os.path.join(model_path, 'non_lora_trainables.bin')): - non_lora_trainables = torch.load(os.path.join(model_path, 'non_lora_trainables.bin'), map_location='cpu') - else: - # this is probably from HF Hub - from huggingface_hub import hf_hub_download - def load_from_hf(repo_id, filename, subfolder=None): - cache_file = hf_hub_download( - repo_id=repo_id, - filename=filename, - subfolder=subfolder) - return torch.load(cache_file, map_location='cpu') - non_lora_trainables = load_from_hf(model_path, 'non_lora_trainables.bin') - non_lora_trainables = {(k[11:] if k.startswith('base_model.') else k): v for k, v in non_lora_trainables.items()} - if any(k.startswith('model.model.') for k in non_lora_trainables): - non_lora_trainables = {(k[6:] if k.startswith('model.') else k): v for k, v in non_lora_trainables.items()} - model.load_state_dict(non_lora_trainables, strict=False) - - from peft import PeftModel - print('Loading LoRA weights...') - model = PeftModel.from_pretrained(model, model_path) - print('Merging LoRA weights...') - model = model.merge_and_unload() - print('Model is loaded...') - elif model_base is not None: - # this may be mm projector only - print('Loading LLaVA from base model...') - if 'mpt' in model_name.lower(): - if not os.path.isfile(os.path.join(model_path, 'configuration_mpt.py')): - shutil.copyfile(os.path.join(model_base, 'configuration_mpt.py'), os.path.join(model_path, 'configuration_mpt.py')) - tokenizer = AutoTokenizer.from_pretrained(model_base, use_fast=True) - cfg_pretrained = 
AutoConfig.from_pretrained(model_path, trust_remote_code=True) - model = LlavaMPTForCausalLM.from_pretrained(model_base, low_cpu_mem_usage=True, config=cfg_pretrained, **kwargs) - else: - tokenizer = AutoTokenizer.from_pretrained(model_base, use_fast=False) - cfg_pretrained = AutoConfig.from_pretrained(model_path) - model = LlavaLlamaForCausalLM.from_pretrained(model_base, low_cpu_mem_usage=True, config=cfg_pretrained, **kwargs) - - mm_projector_weights = torch.load(os.path.join(model_path, 'mm_projector.bin'), map_location='cpu') - mm_projector_weights = {k: v.to(torch.float16) for k, v in mm_projector_weights.items()} - model.load_state_dict(mm_projector_weights, strict=False) - else: - if 'mpt' in model_name.lower(): - tokenizer = AutoTokenizer.from_pretrained(model_path, use_fast=True) - model = LlavaMPTForCausalLM.from_pretrained(model_path, low_cpu_mem_usage=True, **kwargs) - else: - tokenizer = AutoTokenizer.from_pretrained(model_path, use_fast=False) - model = LlavaLlamaForCausalLM.from_pretrained(model_path, low_cpu_mem_usage=True, **kwargs) - else: - # Load language model - if model_base is not None: - # PEFT model - from peft import PeftModel - tokenizer = AutoTokenizer.from_pretrained(model_base, use_fast=False) - model = AutoModelForCausalLM.from_pretrained(model_base, torch_dtype=torch.float16, low_cpu_mem_usage=True, device_map="auto") - print(f"Loading LoRA weights from {model_path}") - model = PeftModel.from_pretrained(model, model_path) - print(f"Merging weights") - model = model.merge_and_unload() - print('Convert to FP16...') - model.to(torch.float16) - else: - use_fast = False - if 'mpt' in model_name.lower(): - tokenizer = AutoTokenizer.from_pretrained(model_path, use_fast=True) - model = AutoModelForCausalLM.from_pretrained(model_path, low_cpu_mem_usage=True, trust_remote_code=True, **kwargs) - else: - tokenizer = AutoTokenizer.from_pretrained(model_path, use_fast=False) - model = AutoModelForCausalLM.from_pretrained(model_path, low_cpu_mem_usage=True, **kwargs) - - image_processor = None - - if 'llava' in model_name.lower(): - mm_use_im_start_end = getattr(model.config, "mm_use_im_start_end", False) - mm_use_im_patch_token = getattr(model.config, "mm_use_im_patch_token", True) - if mm_use_im_patch_token: - tokenizer.add_tokens([DEFAULT_IMAGE_PATCH_TOKEN], special_tokens=True) - if mm_use_im_start_end: - tokenizer.add_tokens([DEFAULT_IM_START_TOKEN, DEFAULT_IM_END_TOKEN], special_tokens=True) - model.resize_token_embeddings(len(tokenizer)) - - vision_tower = model.get_vision_tower() - if not vision_tower.is_loaded: - vision_tower.load_model() - - - vision_tower.to(device=model.device, dtype=torch.float16) - image_processor = vision_tower.image_processor - - if hasattr(model.config, "max_sequence_length"): - context_len = model.config.max_sequence_length - else: - context_len = 2048 - - return tokenizer, model, image_processor, context_len diff --git a/spaces/ysharma/text-to-ner-to-image-to-video/README.md b/spaces/ysharma/text-to-ner-to-image-to-video/README.md deleted file mode 100644 index 344abda3c251b62b59cc58ac0a087a28e3c71fa2..0000000000000000000000000000000000000000 --- a/spaces/ysharma/text-to-ner-to-image-to-video/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Test To Ner To Image To Video -emoji: 👁 -colorFrom: indigo -colorTo: indigo -sdk: gradio -sdk_version: 2.9.4 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference diff --git 
a/spaces/yufiofficial/MusicGenQ/CHANGELOG.md b/spaces/yufiofficial/MusicGenQ/CHANGELOG.md deleted file mode 100644 index 24fc214df236b40efead4b1585b01632d9658e9b..0000000000000000000000000000000000000000 --- a/spaces/yufiofficial/MusicGenQ/CHANGELOG.md +++ /dev/null @@ -1,23 +0,0 @@ -# Changelog - -All notable changes to this project will be documented in this file. - -The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/). - -## [0.0.2a] - TBD - -Improved demo, fixed top p (thanks @jnordberg). - -Compressor tanh on output to avoid clipping with some style (especially piano). -Now repeating the conditioning periodically if it is too short. - -More options when launching Gradio app locally (thanks @ashleykleynhans). - -Testing out PyTorch 2.0 memory efficient attention. - -Added extended generation (infinite length) by slowly moving the windows. -Note that other implementations exist: https://github.com/camenduru/MusicGen-colab. - -## [0.0.1] - 2023-06-09 - -Initial release, with model evaluation only. diff --git a/spaces/zenafey/prodia-studio/app.py b/spaces/zenafey/prodia-studio/app.py deleted file mode 100644 index 9cda8dc291fca3f7591c9e734430a08a4d4428c4..0000000000000000000000000000000000000000 --- a/spaces/zenafey/prodia-studio/app.py +++ /dev/null @@ -1,171 +0,0 @@ -""" -Original code by Zenafey -@zenafey -""" -import gradio as gr - -from engine import generate_sd, generate_sdxl, transform_sd, controlnet_sd, image_upscale, get_models -from const import CMODELS, CMODULES, SAMPLER_LIST, SDXL_MODEL_LIST - - -with gr.Blocks() as demo: - gr.Markdown(""" -

                  Prodia Studio
                  powered by Prodia Stable Diffusion API
                  """) - with gr.Tab("/sdxl/generate [BETA]"): - with gr.Row(): - with gr.Column(scale=6, min_width=600): - prompt = gr.Textbox("puppies in a cloud, 4k", placeholder="Prompt", show_label=False, lines=3) - negative_prompt = gr.Textbox(placeholder="Negative Prompt", show_label=False, lines=3) - with gr.Row(): - with gr.Column(): - sampler = gr.Dropdown(value="Euler a", show_label=True, label="Sampling Method", - choices=SAMPLER_LIST) - model = gr.Dropdown( - interactive=True, - value="sd_xl_base_1.0.safetensors [be9edd61]", - show_label=True, - label="Stable Diffusion XL Checkpoint", - choices=SDXL_MODEL_LIST - ) - seed = gr.Number(label="Seed", value=-1) - with gr.Column(): - steps = gr.Slider(label="Sampling Steps", minimum=1, maximum=50, value=25, step=1) - cfg_scale = gr.Slider(label="CFG Scale", minimum=1, maximum=20, value=7, step=1) - - text_button = gr.Button("Generate", variant='primary') - - with gr.Column(scale=7): - image_output = gr.Image() - - text_button.click(generate_sdxl, - inputs=[prompt, negative_prompt, model, steps, sampler, cfg_scale, seed], outputs=image_output) - with gr.Tab("/sd/generate"): - with gr.Row(): - with gr.Column(scale=6, min_width=600): - prompt = gr.Textbox("puppies in a cloud, 4k", placeholder="Prompt", show_label=False, lines=3) - negative_prompt = gr.Textbox(placeholder="Negative Prompt", show_label=False, lines=3) - with gr.Row(): - with gr.Column(): - sampler = gr.Dropdown(value="Euler a", show_label=True, label="Sampling Method", - choices=SAMPLER_LIST) - model = gr.Dropdown( - interactive=True, - value=get_models()[1], - show_label=True, - label="Stable Diffusion Checkpoint", - choices=get_models() - ) - upscale = gr.Checkbox(label="Upscale", value=True) - seed = gr.Number(label="Seed", value=-1) - with gr.Column(): - width = gr.Slider(label="Width", maximum=1024, value=512, step=8) - height = gr.Slider(label="Height", maximum=1024, value=512, step=8) - steps = gr.Slider(label="Sampling Steps", minimum=1, maximum=50, value=25, step=1) - cfg_scale = gr.Slider(label="CFG Scale", minimum=1, maximum=20, value=7, step=1) - - text_button = gr.Button("Generate", variant='primary') - - with gr.Column(scale=7): - image_output = gr.Image() - - text_button.click(generate_sd, - inputs=[prompt, negative_prompt, model, steps, sampler, cfg_scale, width, height, seed, - upscale], outputs=image_output) - - with gr.Tab("/sd/transform"): - with gr.Row(): - with gr.Row(): - with gr.Column(scale=6, min_width=600): - with gr.Row(): - with gr.Column(): - image_input = gr.Image(type='filepath') - with gr.Column(): - prompt = gr.Textbox("puppies in a cloud, 4k", label='Prompt', placeholder="Prompt", lines=3) - negative_prompt = gr.Textbox(placeholder="badly drawn", label='Negative Prompt', lines=3) - with gr.Row(): - with gr.Column(): - sampler = gr.Dropdown(value="Euler a", show_label=True, label="Sampling Method", choices=SAMPLER_LIST) - model = gr.Dropdown( - interactive=True, - value=get_models()[1], - show_label=True, - label="Stable Diffusion Checkpoint", - choices=get_models() - ) - upscale = gr.Checkbox(label="Upscale", value=True) - seed = gr.Number(label="Seed", value=-1) - with gr.Column(): - steps = gr.Slider(label="Sampling Steps", minimum=1, maximum=30, value=25, step=1) - cfg_scale = gr.Slider(label="CFG Scale", minimum=1, maximum=20, value=7, step=1) - denoising_strength = gr.Slider(label="Denoising Strength", minimum=0.1, maximum=1.0, value=0.7, step=0.1) - - text_button = gr.Button("Generate", variant='primary') - - with 
gr.Column(scale=7): - image_output = gr.Image() - - text_button.click(transform_sd, - inputs=[image_input, model, prompt, denoising_strength, negative_prompt, steps, cfg_scale, seed, upscale, sampler - ], outputs=image_output) - - with gr.Tab("/sd/controlnet"): - with gr.Row(): - with gr.Row(): - with gr.Column(scale=6, min_width=600): - with gr.Row(): - with gr.Column(): - image_input = gr.Image(type='filepath') - with gr.Column(): - prompt = gr.Textbox("puppies in a cloud, 4k", label='Prompt', placeholder="Prompt", lines=3) - negative_prompt = gr.Textbox(placeholder="badly drawn", label='Negative Prompt', lines=3) - with gr.Row(): - with gr.Column(): - sampler = gr.Dropdown(value="Euler a", show_label=True, label="Sampling Method", choices=SAMPLER_LIST) - model = gr.Dropdown( - interactive=True, - value="control_v11p_sd15_canny [d14c016b]", - show_label=True, - label="ControlNet Model", - choices=CMODELS - ) - module = gr.Dropdown( - interactive=True, - value="none", - show_label=True, - label="ControlNet Module", - choices=CMODULES - ) - seed = gr.Number(label="Seed", value=-1) - with gr.Column(): - width = gr.Slider(label="Width", maximum=1024, value=512, step=8) - height = gr.Slider(label="Height", maximum=1024, value=512, step=8) - steps = gr.Slider(label="Sampling Steps", minimum=1, maximum=30, value=25, step=1) - cfg_scale = gr.Slider(label="CFG Scale", minimum=1, maximum=20, value=7, step=1) - resize_mode = gr.Dropdown(label='resize_mode', value="0", choices=["0", "1", "2"]) - with gr.Row(): - threshold_a = gr.Number(label="threshold_a", value=100) - threshold_b = gr.Number(label="threshold_b", value=200) - - text_button = gr.Button("Generate", variant='primary') - - with gr.Column(scale=7): - image_output = gr.Image() - - text_button.click(controlnet_sd, - inputs=[image_input, model, module, threshold_a, threshold_b, resize_mode, prompt, - negative_prompt, steps, cfg_scale, seed, sampler, width, height], - outputs=image_output) - - with gr.Tab("/upscale"): - with gr.Row(): - with gr.Column(): - image_input = gr.Image(type='filepath') - scale_by = gr.Radio(['2', '4'], label="Scale by") - upscale_btn = gr.Button("Upscale!", variant='primary') - with gr.Column(): - image_output = gr.Image() - - upscale_btn.click(image_upscale, inputs=[image_input, scale_by], outputs=image_output) - -demo.launch(show_api=False) diff --git a/spaces/zestyoreo/vtryon/app.py b/spaces/zestyoreo/vtryon/app.py deleted file mode 100644 index f7f13da8ac4f078f6de45e3fa99e4bd85d2002ff..0000000000000000000000000000000000000000 --- a/spaces/zestyoreo/vtryon/app.py +++ /dev/null @@ -1,157 +0,0 @@ -import gradio as gr -import torch -import pickle -import time -from options.test_options import TestOptions -from data.data_loader_test import CreateDataLoader -from models.networks import ResUnetGenerator, load_checkpoint -from models.afwm import AFWM -import torch.nn as nn -import os -import numpy as np -import torch -import cv2 -import torch.nn.functional as F -from torchvision import utils -from util import flow_util - -def de_offset(s_grid): - [b,_,h,w] = s_grid.size() - - - x = torch.arange(w).view(1, -1).expand(h, -1).float() - y = torch.arange(h).view(-1, 1).expand(-1, w).float() - x = 2*x/(w-1)-1 - y = 2*y/(h-1)-1 - grid = torch.stack([x,y], dim=0).float().cuda() - grid = grid.unsqueeze(0).expand(b, -1, -1, -1) - - offset = grid - s_grid - - offset_x = offset[:,0,:,:] * (w-1) / 2 - offset_y = offset[:,1,:,:] * (h-1) / 2 - - offset = torch.cat((offset_y,offset_x),0) - - return offset - -def 
tryon(person,cloth,edge): - - #save images in folders - cv2.imwrite('./data/test_ma_img/000001_0.jpg', person) - cv2.imwrite('./data/test_edge/000001_1.jpg', edge) - cv2.imwrite('./data/test_clothes/000001_1.jpg', cloth) - - with open('opt.pkl', 'rb') as handle: - opt = pickle.load(handle) - - f2c = flow_util.flow2color() - start_epoch, epoch_iter = 1, 0 - - data_loader = CreateDataLoader(opt) - dataset = data_loader.load_data() - dataset_size = len(data_loader) #must be 1 - print(dataset_size) - - warp_model = AFWM(opt, 3) - print(warp_model) - warp_model.eval() - #warp_model.cuda() - load_checkpoint(warp_model, opt.warp_checkpoint) - - gen_model = ResUnetGenerator(7, 4, 5, ngf=64, norm_layer=nn.BatchNorm2d) - gen_model.eval() - #gen_model.cuda() - load_checkpoint(gen_model, opt.gen_checkpoint) - - total_steps = (start_epoch-1) * dataset_size + epoch_iter - step = 0 - step_per_batch = dataset_size / opt.batchSize - - if not os.path.exists('our_t_results'): - os.mkdir('our_t_results') - - for epoch in range(1,2): - - for i, data in enumerate(dataset, start=epoch_iter): - iter_start_time = time.time() - total_steps += opt.batchSize - epoch_iter += opt.batchSize - - real_image = data['image'] - clothes = data['clothes'] - ##edge is extracted from the clothes image with the built-in function in python - edge = data['edge'] - edge = torch.FloatTensor((edge.detach().numpy() > 0.5).astype(np.int64)) - clothes = clothes * edge - - flow_out = warp_model(real_image.cuda(), clothes.cuda()) - warped_cloth, last_flow, = flow_out - warped_edge = F.grid_sample(edge.cuda(), last_flow.permute(0, 2, 3, 1), - mode='bilinear', padding_mode='zeros') - - gen_inputs = torch.cat([real_image.cuda(), warped_cloth, warped_edge], 1) - gen_outputs = gen_model(gen_inputs) - p_rendered, m_composite = torch.split(gen_outputs, [3, 1], 1) - p_rendered = torch.tanh(p_rendered) - m_composite = torch.sigmoid(m_composite) - m_composite = m_composite * warped_edge - p_tryon = warped_cloth * m_composite + p_rendered * (1 - m_composite) - - path = 'results/' + opt.name - os.makedirs(path, exist_ok=True) - #sub_path = path + '/PFAFN' - #os.makedirs(sub_path,exist_ok=True) - print(data['p_name']) - - if step % 1 == 0: - - ## save try-on image only - - utils.save_image( - p_tryon, - os.path.join('./our_t_results', data['p_name'][0]), - nrow=int(1), - normalize=True, - value_range=(-1,1), - ) - - ## save person image, garment, flow, warped garment, and try-on image - - #a = real_image.float().cuda() - #b = clothes.cuda() - #flow_offset = de_offset(last_flow) - #flow_color = f2c(flow_offset).cuda() - #c= warped_cloth.cuda() - #d = p_tryon - #combine = torch.cat([a[0],b[0], flow_color, c[0], d[0]], 2).squeeze() - #utils.save_image( - # combine, - # os.path.join('./im_gar_flow_wg', data['p_name'][0]), - # nrow=int(1), - # normalize=True, - # range=(-1,1), - #) - - - step += 1 - if epoch_iter >= dataset_size: - break - - result_img = cv2.imread('./our_t_results/000001_0.jpg') - return result_img - -demo = gr.Interface(fn=tryon, - inputs=[gr.inputs.Image(label="Person"),gr.inputs.Image(label="Cloth"),gr.inputs.Image(label="Edge")], - outputs="image" - ) - -# def pp(inp1,inp2): -# return inp1+" hello "+inp2 - -# demo2 = gr.Interface(fn=pp, -# inputs=[gr.inputs.Textbox(lines=5, label="Input Text"),gr.inputs.Textbox(lines=5, label="Input Text2")], -# outputs=gr.outputs.Textbox(label="Generated Text"), - # ) - -demo.launch() \ No newline at end of file
| Minimum | Recommended |
| --- | --- |
| Windows XP/Vista/7/8/10 | Windows XP/Vista/7/8/10 |
| FS2004 with Service Pack 1 and 2 installed | FS2004 with Service Pack 1 and 2 installed |
| Pentium III 1.0 GHz or equivalent processor | Pentium IV 2.0 GHz or equivalent processor |
| 256 MB RAM | 512 MB RAM |
| 64 MB graphics card | 128 MB graphics card |
| 1 GB free hard disk space | 2 GB free hard disk space |
| Sound card | Sound card |
| CD-ROM drive | CD-ROM drive |