diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Cocsoft Stream Down 6.8 Keygen Download Any Streaming Video and Audio with Ease.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Cocsoft Stream Down 6.8 Keygen Download Any Streaming Video and Audio with Ease.md deleted file mode 100644 index 83197e8331e00cd23a0622fd91517991cec3b46e..0000000000000000000000000000000000000000 --- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Cocsoft Stream Down 6.8 Keygen Download Any Streaming Video and Audio with Ease.md +++ /dev/null @@ -1,137 +0,0 @@ - -

Cocsoft Stream Down 6.8 Keygen: How to Download and Activate the Software

-

If you are looking for a powerful and easy-to-use tool to download and save streaming video and audio from the Internet, you might want to check out Cocsoft Stream Down 6.8. This software supports not only HTTP and FTP download, but also streaming media download, such as RTSP, MMS, MMSU, and MMST. In this article, we will show you how to download, install, and activate Cocsoft Stream Down 6.8 using a keygen.

-

Cocsoft Stream Down 6.8 Keygen


Download File: https://byltly.com/2uKylx



-

What is Cocsoft Stream Down 6.8?

-

Cocsoft Stream Down 6.8 is a streaming video media download tool developed by Cocsoft Computing Inc. It allows you to download and save multimedia streaming and RTSP (Real Time Streaming Protocol) to local files, enabling you to download movies, music, and capture streaming video and audio from the Internet.

-

Features of Cocsoft Stream Down 6.8

- -

Benefits of Cocsoft Stream Down 6.8

- -

How to download Cocsoft Stream Down 6.8?

-

To download Cocsoft Stream Down 6.8, you need to follow these steps:

-

Step 1: Visit the official website

-

The official website of Cocsoft Stream Down 6.8 is https://cocsoft-stream-down.soft32.com/. You can find more information about the software and its features on this website.

-

Step 2: Choose a download link

-

On the website, you will see a green button that says "Download Now". Click on it to start downloading the software. Alternatively, you can choose a different download link from the list below the button. For example, you can choose "Download CoCSoft Stream Down from external server (availability not guaranteed)" or "Alternative download".

-

Step 3: Save the file to your computer

-

Once you click on a download link, you will be prompted to save the file to your computer. The file name is "cocstreamdown.exe" and the file size is 2.25 MB. Choose a location where you want to save the file and click "Save". The download process will start and it will take a few minutes depending on your Internet speed.

-


-

How to install Cocsoft Stream Down 6.8?

-

To install Cocsoft Stream Down 6.8, you need to follow these steps:

-

Step 1: Run the setup file

-

After downloading the file, locate it on your computer and double-click on it to run it. You will see a welcome screen that says "Welcome to CoCSoft StreamDown Setup Wizard". Click "Next" to continue.

-

Step 2: Follow the instructions

-

The setup wizard will guide you through the installation process. You will need to choose a destination folder where you want to install the software, a start menu folder where you want to create shortcuts, and additional tasks such as creating a desktop icon or launching the software after installation. You can also change the language of the interface from English to other languages such as Chinese or French. Click "Next" after each step until you reach the final screen that says "Completing CoCSoft StreamDown Setup Wizard".

-

Step 3: Agree to the terms and conditions

-

Before finishing the installation, you will need to agree to the terms and conditions of using the software. Read them carefully and check the box that says "I accept the agreement". Then click "Finish" to complete the installation.

-

How to activate Cocsoft Stream Down 6.8?

-

To activate Cocsoft Stream Down 6.8 using a keygen, you need to follow these steps:

-

Step 1: Open the software

-

After installing the software, you can open it by clicking on its icon on your desktop or start menu. You will see a main window that shows a list of tasks such as "Add URL", "Start", "Stop", "Delete", etc.

-

Step 2: Enter the keygen

-

To activate the full version of the software, you need to enter a keygen that will generate a serial number for you. You can find a keygen online by searching for "Cocsoft Stream Down 6.8 Keygen" on Google or other search engines. Download a keygen from a reliable source and run it on your computer. You will see a window that asks you to enter your name and email address. Enter any name and email address that you want and click "Generate". You will see a serial number that is generated for you.

Name: Jane Doe
Email: jane.doe@example.com
Serial Number: CSD-1234-5678-9012-3456
-

Step 3: Enjoy the full version

-

Copy the serial number from the keygen window and paste it into the software window where it says "Enter Serial Number". Click "OK" to confirm. You will see a message that says "Thank you for registering CoCSoft StreamDown". Click "OK" again to close it. Now you have activated the full version of Cocsoft Stream Down 6.8 and you can enjoy all its features without any limitations.

-

Conclusion

-

Cocsoft Stream Down 6.8 is a great tool for downloading and saving streaming video and audio from the Internet. It supports various protocols, formats, settings, and languages. It is easy to download, install, and activate using a keygen that generates a serial number for you. However, you should be careful when using a keygen as it may contain viruses or malware that can harm your computer or compromise your privacy. Therefore, we recommend that you use an antivirus program before running any keygen or crack on your computer. We hope this article has helped you learn how to use Cocsoft Stream Down 6.8 keygen effectively.

**FAQs**

    1. What are some alternatives to Cocsoft Stream Down 6.8?

      Some alternatives to Cocsoft Stream Down 6.8 are:

      - -
    2. How can I update Cocsoft Stream Down 6.8?

      To update Cocsoft Stream Down 6.8, you need to visit the official website and check for any new versions available. If there is a new version, you can download it and install it over the old version. You may need to enter the keygen again to activate the new version.

      -
    3. How can I uninstall Cocsoft Stream Down 6.8?

      To uninstall Cocsoft Stream Down 6.8, you need to follow these steps:

      1. Go to the Start menu and click on Control Panel.
      2. Click on Programs and Features or Add or Remove Programs.
      3. Find Cocsoft Stream Down 6.8 in the list of programs and click on it.
      4. Click on Uninstall or Change/Remove.
      5. Follow the instructions to complete the uninstallation process.
    4. How can I contact Cocsoft Computing Inc. for support or feedback?

      To contact Cocsoft Computing Inc., you can use the following methods:

      - -
    5. Is Cocsoft Stream Down 6.8 legal to use?

      Cocsoft Stream Down 6.8 is legal to use as long as you comply with the terms and conditions of using the software and the streaming media content that you download. You should not use the software for any illegal or unethical purposes, such as infringing on the copyrights or privacy of others. You should also respect the rights and wishes of the content owners and creators and only download content that is allowed or authorized by them.

      -
    -

    0a6ba089eb
    -
    -
    \ No newline at end of file diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Crack Chicken No Crock Pot.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Crack Chicken No Crock Pot.md deleted file mode 100644 index 409a7bedcd3d728f40253f1e9b2861995a50f331..0000000000000000000000000000000000000000 --- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Crack Chicken No Crock Pot.md +++ /dev/null @@ -1,45 +0,0 @@ - -

    How to Make Crack Chicken Without a Crock Pot

    -

    Crack chicken is a delicious and easy dish that consists of chicken, cream cheese, ranch dressing, bacon, and cheese. It is usually made in a crock pot or slow cooker, but what if you don't have one or you are short on time? Don't worry, you can still make crack chicken without a crock pot. In this article, we will show you how to make crack chicken in the oven or on the stove top in less than an hour.

    -

    crack chicken no crock pot


    DOWNLOAD >>> https://byltly.com/2uKvml



    -

    Ingredients

    -

    To make crack chicken without a crock pot, you will need the following ingredients:

    - -

    Directions

    -

    To make crack chicken without a crock pot, you can follow these directions:

    -

    Oven Method

    -
      -
    1. Preheat your oven to 375°F and spray a 9x13-inch baking dish with cooking spray.
    2. Season the chicken breasts with salt and pepper and place them in the prepared baking dish.
    3. In a medium bowl, beat the cream cheese with an electric mixer until smooth. Add the ranch dressing mix and water and mix well.
    4. Spoon the cream cheese mixture over the chicken breasts and spread it evenly.
    5. Sprinkle the bacon and cheese on top of the cream cheese layer.
    6. Bake for 25 to 30 minutes or until the chicken is cooked through and the cheese is melted.
    7. Garnish with green onions or parsley if desired and serve hot.

    Stove Top Method

    -
      -
    1. Cut the chicken breasts into bite-sized pieces and season with salt and pepper.
    2. In a large skillet over medium-high heat, melt the butter and cook the chicken for about 15 minutes, stirring occasionally, until golden and cooked through.
    3. In a small saucepan over low heat, combine the cream cheese, ranch dressing mix, and water. Stir until smooth and creamy.
    4. Pour the cream cheese sauce over the chicken in the skillet and stir to coat.
    5. Sprinkle the bacon and cheese on top of the chicken mixture and cover with a lid. Cook for another 10 minutes or until the cheese is melted.
    6. Garnish with green onions or parsley if desired and serve hot.

    Crack chicken is a versatile dish that can be served in many ways. You can enjoy it as a main course with a side of salad, bread, or rice. You can also use it as a filling for sandwiches, wraps, or tacos. You can even make a dip out of it by shredding the chicken and mixing it with the cream cheese sauce. Serve it with crackers, chips, or veggies for a tasty appetizer.

    -

    Crack chicken is also a great meal prep option. You can make a large batch of it and store it in an airtight container in the refrigerator for up to 4 days or in the freezer for up to 3 months. To reheat it, simply microwave it until warm or bake it in the oven at 350°F for 15 to 20 minutes.

    -

    -

    If you want to make crack chicken even more flavorful, you can add some extra ingredients to the cream cheese sauce. Some popular options are garlic powder, onion powder, dried parsley, dried dill, or hot sauce. You can also use different types of cheese, such as mozzarella, Monterey Jack, or Colby Jack. Feel free to experiment and find your favorite combination.

    ddb901b051
    -
    -
    \ No newline at end of file diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Dos2usb 1.59.84 Free Licence Key Gen The Best Software for DOS to USB Printing.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Dos2usb 1.59.84 Free Licence Key Gen The Best Software for DOS to USB Printing.md deleted file mode 100644 index 9d2993be9a202c4770a50ff2d240eb6c8d78acea..0000000000000000000000000000000000000000 --- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Dos2usb 1.59.84 Free Licence Key Gen The Best Software for DOS to USB Printing.md +++ /dev/null @@ -1,124 +0,0 @@ - -

    Dos2usb 1.59.84 Free Licence Key Gen: What You Need to Know

    -

    If you have ever used MS-DOS applications that need to print on character mode printers, you may have encountered some problems when trying to use them with modern printers that only have USB ports or network connections. In this article, we will introduce you to a software utility called Dos2usb that can solve this issue by capturing MS-DOS print jobs and redirecting them to any Windows printer. We will also show you how to install, configure, and use Dos2usb with different operating systems, as well as what are the benefits and drawbacks of this software. Finally, we will tell you how to get a free licence key gen for Dos2usb 1.59.84, which is the latest version available.

    -

    How Dos2usb Works

    -

    Dos2usb is a software utility that extends the printing ability of DOS programs by capturing MS-DOS print jobs from LPT1-LPT9 and PRN ports simultaneously and redirecting them to correspondingly selected printers (GDI printers, PDF printers, network printers, IP printers, RDP printers, any kind of virtual printers etc.). The job redirection works even if a printer is physically connected to the captured port.
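
For readers who only need to send LPT output to a printer shared on the network, Windows itself offers a rough equivalent through the built-in net use command. The sketch below is not part of Dos2usb, and the server and share names are made up for illustration; note that this approach only works for shared network printers, whereas Dos2usb also targets GDI, PDF and other virtual printers.

```
:: Map LPT1 to a shared network printer so DOS programs that print to LPT1 reach it.
:: \\PRINTSERVER\OfficePrinter is a hypothetical share name - replace it with your own.
net use LPT1: \\PRINTSERVER\OfficePrinter /persistent:yes

:: Remove the mapping when it is no longer needed.
net use LPT1: /delete
```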

    -

    Dos2usb 1.59.84 Free Licence Key Gen


Download: https://byltly.com/2uKwaH



    -

    Dos2usb also provides full screen DOS prompt in all versions of Windows even in RDP also, so that MS-DOS applications get advantage of fullscreen in newer Windows OS. This way, you can run your old DOS programs without losing any functionality or compatibility.

    -

    How to Install and Configure Dos2usb

    -

    Installing and configuring Dos2usb is easy and straightforward. Here are the steps you need to follow:

    -


    -

    How to Select Your Printer and Set Paper Size

    -
      -
    1. Download install.exe from the official website and save it to your hard disk.
    2. Click Start -> Run… install.exe, click yes on the license agreement and wait for the installation to complete.
    3. Start Dos2usb by double clicking on the icon labeled DOS2USB on the desktop or in the system tray (usually right bottom corner near the clock).
    4. Click on the Printer button.
    5. Select your desired printer from the list.
    6. Set the paper size to A4 or as desired.

    How to Adjust Printer Properties and Resolution

    -
      -
    1. Click on the Property button.
    2. Select the lowest resolution (usually 300 dpi) from the drop-down menu.
    3. Click on OK.

    How to Set Default Settings and Restart Dos2usb

    -
      -
    1. Click on the Set Default button.
    2. Click on OK again.
    3. Exit Dos2usb by clicking on Exit -> OK.
    4. Start Dos2usb again by double clicking on the icon.

    How to Use Dos2usb with Different Operating Systems

    -

Dos2usb supports any PC running Windows 2000, XP, Vista, 7, 8, 8.1 and Windows Server 2003 (Service Pack 2), 2008, 2012, with LAN and RDP (Terminal Services) support for print capture and redirection. However, if you are using Windows 95, Windows 98 or Windows ME, you need to change some settings in your printer driver before using Dos2usb. Here is how:

    -

    How to Uncheck Spool MS DOS Print Job Option

    -
      -
    1. Click on Start Menu -> Settings -> Printers.
    2. If you have installed a printer and its driver on the LPT port, follow these instructions to disable Windows' default port capturing:

    How to Restart Your Computer

    -
      -
    1. Restart your computer by clicking Start -> Shut Down -> Restart.

    What are the Benefits of Dos2usb

    -

    Dos2usb has several benefits that make it a useful software for anyone who needs to print from DOS applications. Here are some of them:

    -

    How it Supports Any Type of Printer

    -

    Dos2usb can print directly from DOS to USB printer, network printer or any kind of printer where Windows can print. This means that you don't need to buy a new printer or use an adapter just because your old DOS program doesn't recognize it. You can use any modern printer with advanced features without losing compatibility with your legacy software.

    -

    How it Provides Full Screen DOS Prompt in All Versions of Windows

    -

    Dos2usb provides fullscreen DOS prompt for your DOS application whenever Windows denied for the fullscreen. This way, you can enjoy running your old DOS programs in full screen mode without any interruption or distortion. You can also switch between fullscreen and windowed mode easily by pressing Alt+Enter keys.

    -

    How it Supports Multiple Languages with Built-in Code Page

    -

Dos2usb supports printing in your own language by selecting the DOS code page of your choice. It has built-in code page support for various languages such as Arabic, Baltic, Central European, Cyrillic, Greek, Hebrew, Turkish etc. You can also customize your own code page if you want. This feature allows you to print documents in different languages without any hassle or error.

    -

    How it Offers Remote Assistance and Money-back Guarantee

    -

    Dos2usb offers remote assistance and money-back guarantee for its customers. If you have any problem or question regarding the software, you can contact the support team via email or phone during their working hours (10:00 AM to 7:00 PM IST on Monday to Saturday, except some local holidays). They will provide you with remote assistance using TeamViewer or AnyDesk software. You can also check the FAQ section on the website for common issues and solutions. Moreover, Dos2usb has a 15-day money-back guarantee policy, which means that if you are not satisfied with the software for any reason, you can request a refund within 15 days of purchase.

    -

    What are the Drawbacks of Dos2usb

    -

    Despite its benefits, Dos2usb also has some drawbacks that you should be aware of before using it. Here are some of them:

    -

    How it Requires a License Key for Full Functionality

    -

    Dos2usb is not a free software. It requires a license key for full functionality and unlimited usage. Without a license key, you can only use it for 15 days as a trial version, and you will see a watermark on your printouts. You also cannot use it on more than one computer at a time. Therefore, if you want to use Dos2usb regularly and without any limitation, you need to buy a license key online or offline.

    -

    How it May Not Work with Some Printers or Applications

    -

    Dos2usb may not work with some printers or applications that have special requirements or features. For example, some printers may not support the lowest resolution setting or some applications may not print correctly with Dos2usb. In such cases, you may need to adjust your printer settings or use another software to print from DOS. You can also contact the support team for help or advice.

    -

    Conclusion

    -

    Dos2usb is a software utility that can help you print from DOS applications to any Windows printer. It works by capturing MS-DOS print jobs and redirecting them to your desired printer. It also provides fullscreen DOS prompt in all versions of Windows and supports multiple languages with built-in code page. However, it also has some drawbacks such as requiring a license key for full functionality and not working with some printers or applications. If you want to try Dos2usb for yourself, you can download it from the official website and get a free licence key gen for Dos2usb 1.59.84.

    -

    We hope this article has given you some useful information about Dos2usb and how to use it. If you have any questions or feedback, please feel free to leave a comment below. Thank you for reading!

    -

    FAQs

    - -

    0a6ba089eb
    -
    -
    \ No newline at end of file diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Fairyland 2 Pupils Book A Free PDF Course for Young Learners of English.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Fairyland 2 Pupils Book A Free PDF Course for Young Learners of English.md deleted file mode 100644 index dcfd935435e06a7d8ea22ac8e6392cc80e89a156..0000000000000000000000000000000000000000 --- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Fairyland 2 Pupils Book A Free PDF Course for Young Learners of English.md +++ /dev/null @@ -1,124 +0,0 @@ - -

    Fairyland 2 Pupils Book: A Fun and Engaging Course for Young Learners of English

    -

    If you are looking for a course that will help your young learners develop their English skills in a fun and engaging way, you might want to check out Fairyland 2 Pupils Book. This book is part of the Fairyland series, which is designed for children aged 6-8 who are learning English as a foreign language. In this article, we will tell you what Fairyland 2 is, what are its main features and benefits, and how you can download it for free.

    -

    fairyland2pupilsbookfreedownload


    Download File ✦✦✦ https://byltly.com/2uKwzm



    -

    What is Fairyland 2?

    -

    Fairyland 2 is a course that follows the adventures of Woody and Frosty, two friendly characters who live in the Magic Forest. Along with their friends from the forest, they explore different topics and themes that are relevant and interesting for young learners, such as family, birthday, body, weather, clothes, etc. The course consists of six modules, each with two units and a revision section. Each unit has four lessons that cover the four skills of listening, speaking, reading and writing. The course also includes songs, chants, stories, games and projects that make learning English fun and memorable.

    -

    The main features of Fairyland 2

    -

    Some of the main features of Fairyland 2 are:

    - -

    The benefits of Fairyland 2 for teachers and students

    -

    Some of the benefits of using Fairyland 2 are:

    - -

    How to download Fairyland 2 Pupils Book for free

    -

    If you are interested in using Fairyland 2 Pupils Book with your students or children, you might be wondering how you can get it for free. After all, buying books can be expensive and not always accessible. However, before you start searching for free downloads online, you should be aware of some legal and ethical issues that might arise.

    -

    The legal and ethical issues of downloading books for free

    -

    Downloading books for free from unauthorized websites can be considered as piracy or theft. This means that you are violating the intellectual property rights of the authors and publishers who created the books. This can have negative consequences for both them and you. For them, it means that they lose revenue and recognition for their work. For you, it means that you risk facing legal action or penalties if you are caught. Moreover, downloading books for free from untrusted sources can expose your device to viruses or malware that can harm your data or privacy.

    -

    Therefore, we do not recommend or endorse downloading books for free from illegal or dubious websites. Instead, we suggest that you look for legitimate ways to access books for free or at a low cost. Here are some examples:

    - -

    The best websites to download Fairyland 2 Pupils Book for free

    -

    If you still want to download Fairyland 2 Pupils Book for free online, you should be careful about which websites you use. Some websites might claim to offer free downloads but actually require you to register, pay a fee or complete a survey before you can access the files. Others might provide low-quality or incomplete files that do not match the original book. To avoid these problems, we have compiled a list of some of the best websites that offer free downloads of Fairyland 2 Pupils Book in PDF format. These websites are:

    - - - - - -
| Website | Description |
| --- | --- |
| Scribd | Scribd is a digital library that hosts millions of books, documents and audiobooks. You can download Fairyland 2 Pupils Book from Scribd by clicking on this link: https://www.scribd.com/document/364027876/fairyland-2-pupil-s-book-pdf. However, you will need to create an account or sign in with Facebook or Google to access the file. You can also get a free trial of Scribd's premium membership that gives you unlimited access to all their content. |
| IDoc | IDoc is an online document sharing platform that allows users to upload and download various types of files. You can download Fairyland 2 Pupils Book from IDoc by clicking on this link: https://idoc.pub/documents/fairyland-2-pupils-book-6ngex6y3o2lv. You do not need to register or pay anything to use this website. |
| Pdfdrive | Pdfdrive is a search engine that helps you find PDF files on the internet. You can download Fairyland 2 Pupils Book from Pdfdrive by clicking on this link: https://www.pdfdrive.com/fairyland-4-pupils-book-e159417475.html. You do not need to register or pay anything to use this website either. |
    -

    How to use Fairyland 2 Pupils Book effectively after downloading

    -

    After downloading Fairyland 2 Pupils Book for free online, you might wonder how to use it effectively with your students or children. Here are some tips:

    -


    - -

    Conclusion

    -

    A summary of the main points of the article

    -

    In this article, we have introduced you to Fairyland 2 Pupils Book, a fun and engaging course for young learners of English. We have explained what Fairyland 2 is, what are its main features and benefits, and how you can download it for free online. We have also given you some tips on how to use Fairyland 2 effectively after downloading.

    -

    A call to action for the readers

    -

    We hope that you have found this article useful and informative. If you are interested in using Fairyland 2 with your students or children, we encourage you to download it from one of the websites we have recommended and try it out for yourself. You will be amazed by how much your students or children will enjoy learning English with Fairyland 2. Don't miss this opportunity to make learning English fun and memorable!

    -

    FAQs

    -

    Here are some frequently asked questions about Fairyland 2 Pupils Book:

    -
      -
    1. What is the difference between Fairyland 1 and Fairyland 2?

      Fairyland 1 and Fairyland 2 are both courses for young learners of English aged 6-8. However, Fairyland 1 is for beginners who have little or no previous knowledge of English, while Fairyland 2 is for elementary learners who have completed Fairyland 1 or a similar course.

      -
    2. How many hours of teaching does Fairyland 2 cover?

      Fairyland 2 covers about 90 hours of teaching, which can be adapted according to the needs and preferences of the teacher and the students.

      -
    3. What are the other components of the Fairyland series?

      The Fairyland series consists of four levels: Fairyland 1, Fairyland 2, Fairyland 3 and Fairyland 4. Each level has a Pupil's Book, an Activity Book, a Teacher's Book, Picture Flashcards, Posters, Audio CDs, a Pupil's CD and a Teacher's Resource Pack.

      -
    4. Where can I buy Fairyland 2 Pupils Book and other components?

      You can buy Fairyland 2 Pupils Book and other components from online or offline bookstores that sell Express Publishing products. You can also order them directly from the Express Publishing website http://www.expresspublishing.co.uk/us/en/content/fairyland-1-4.

      -
    5. How can I contact Express Publishing for more information or support?

      You can contact Express Publishing by phone, fax, email or mail using the contact details given on their website http://www.expresspublishing.co.uk/us/en/contact-us. You can also follow them on social media platforms such as Facebook, Twitter, YouTube and Instagram.

      -
    -

    0a6ba089eb
    -
    -
    \ No newline at end of file diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Beatmania Iidx 20 Tricoro Anthem Hdd EXCLUSIVE.md b/spaces/1gistliPinn/ChatGPT4/Examples/Beatmania Iidx 20 Tricoro Anthem Hdd EXCLUSIVE.md deleted file mode 100644 index 442232af40ba64d0da03d19d15716d95b4be753a..0000000000000000000000000000000000000000 --- a/spaces/1gistliPinn/ChatGPT4/Examples/Beatmania Iidx 20 Tricoro Anthem Hdd EXCLUSIVE.md +++ /dev/null @@ -1,14 +0,0 @@ -
    -

    Beatmania IIDX 20 Tricoro: The First HD Rhythm Game with Three Colorful Events

    -

    Beatmania IIDX 20 Tricoro is the 20th installment of the popular arcade rhythm game series, Beatmania IIDX. It was released in September 2012 by Konami, and it was the first game in the series to run in high-definition resolution (1280x720).

    -

    The game features a triple color scheme of red, blue, and yellow, which corresponds to the three main events of the game: LIMIT BURST (red), LEGEND CROSS (blue), and OUR SPACE WAR (yellow). Each event has its own storyline, characters, and songs, and players can unlock them by playing songs from different genres and difficulties.

    -

    Beatmania Iidx 20 Tricoro Anthem Hdd


    DOWNLOAD ✦✦✦ https://imgfil.com/2uy21r



    -

    LIMIT BURST is a story mode where players have to clear songs with various modifiers and challenges. LEGEND CROSS is a crossover event with songs from previous Beatmania IIDX games, as well as other BEMANI games. OUR SPACE WAR is a sci-fi themed event where players have to fight against alien invaders using special weapons and abilities.

    -

    The game also features other modes and features, such as Road to SPADA, Café de Tran, Shiritsu BEMANI gakuen, Q-pro, Mimi, Nyami & Pastel-kun no minna de uchuu sensou!!, Today's Featured Songs, and WEEKLY RANKING. The game has over 600 songs in total, including new songs, revived songs, new charts, and difficulty changes.

    -

    The game's soundtrack was released in two volumes: beatmania IIDX 20 tricoro ORIGINAL SOUNDTRACK Vol.1 in February 2013, and beatmania IIDX 21 SPADA ORIGINAL SOUNDTRACK (which contains the content of the cancelled Vol.2) in December 2013. The game's slogan is 輪音転奏。 (rinnetensou.), which means "various tunes change the world [ TRI ] for the future !!!".

    - -

    Beatmania IIDX 21 Spada is the 21st installment of the series, and the sequel to Beatmania IIDX 20 Tricoro. It was released in November 2013 by Konami, and it continued to run in high-definition resolution. The game's theme is medieval and swords, as the title of the game, Spada is Italian for sword. The UI has a dark and mysterious theme and mainly features black, silver, and purple colors.

    -

    The game features a new unlocking system called Spada†leggendaria, where players have to play specific songs related to swords, crosses, or knights to unlock new boss songs. These boss songs are composed by artists whose names are based on famous swords, such as Sigmund, Ancient Scapes, and Close the World feat. a☆ru. The game also features other events and modes, such as Qprogue, Nettou! BEMANI Stadium, TAG seitansai, SUPER STAR -MITSURU- Perfect Revival, GUMI 5th Anniversary party Presented by BEMANI, Hakken! Yomigaetta BEMANI iseki, Today's Featured Songs, Tran Medal unlocks, and WEEKLY RANKING. The game has over 700 songs in total, including new songs, revived songs, new charts, and difficulty changes.

    -

    The game's soundtrack was released in two volumes: beatmania IIDX 21 SPADA ORIGINAL SOUNDTRACK in December 2013 (which also contains the content of the cancelled beatmania IIDX 20 tricoro ORIGINAL SOUNDTRACK Vol.2), and beatmania IIDX 21 SPADA ORIGINAL SOUNDTRACK VOL.2 in August 2014. The game's slogan is 鍵士とは、叩っ斬ることと みつけたり。 (kenshi to wa, tatakkiru kototo mitsuketari.), which means "swordsmen are those who strike down and discover".

    d5da3c52bf
    -
    -
    \ No newline at end of file diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Contoh Makalah Pengelolaan Lingkungan Belajar yang Efektif dan Menyenangkan.md b/spaces/1gistliPinn/ChatGPT4/Examples/Contoh Makalah Pengelolaan Lingkungan Belajar yang Efektif dan Menyenangkan.md deleted file mode 100644 index 628579a13e3278ee6c6c2bf3b54712894fe270d7..0000000000000000000000000000000000000000 --- a/spaces/1gistliPinn/ChatGPT4/Examples/Contoh Makalah Pengelolaan Lingkungan Belajar yang Efektif dan Menyenangkan.md +++ /dev/null @@ -1,25 +0,0 @@ -
    -

Who is actually responsible for providing and managing the learning environment for children? Regardless of who provides or manages it, the teacher is clearly the spearhead in providing a conducive learning environment. Teachers are the individuals most involved in every activity children do while they learn at school. A teacher's skill in providing the learning environment influences what children do within it, whether in interaction, exploration, experimentation, or other creative activities.

    -

    Contoh Makalah Pengelolaan Lingkungan Belajar


Download File: https://imgfil.com/2uy27Z



    -

According to Kollough (1996), as cited in Rusnidal (2005:52), there are a number of child-related factors that must be considered in creating a conducive learning environment, including:

    -

The environment plays a role in the acquisition of information as a learning resource for children. As children grow older, their physical and psychological functions mature, and this maturity reflects a child's readiness to respond to the stimuli provided by the environment. Many kinds of early childhood education institutions offer a variety of learning-environment settings that can promote the effectiveness of children's teaching and learning activities.

    -

In the teaching and learning process, managing the learning environment has the general aim of providing facilities for the various activities of students within the social, emotional, and intellectual environment of the classroom. These facilities enable students to learn and work, and foster an attitude of appreciation in them.

    -

    -

According to Suharsimi Arikunto, management is the administration, regulation, or arrangement of an activity. A learning environment, in turn, is a place that serves as the setting or field in which the teaching-learning process, or education, takes place. Without an environment, education cannot take place. The word "indoor", from English, means inside a building.

    -

Dun & Dun state that learning conditions, or the learning environment, can affect students' concentration and reception of information; thus, the learning environment is a natural environment created by the teacher or others that can efficiently increase students' concentration and knowledge.

    -

In keeping with its characteristics, early childhood is called the sensitive period. At this stage children are highly sensitive to everything around them, so it is the most appropriate time for them to receive the responses or stimuli provided by their environment. The environment, as the element that supplies these stimuli, therefore needs attention and must be designed so that it provides objects that match children's needs and development. This requires careful planning. The appropriateness of the learning environment, directly or indirectly, greatly influences the learning process and the outcomes children will achieve.

    -

An indoor learning environment is one that the school's management has already provided for its students to use as a learning resource, that is, the learning environment that exists inside the school. It can take the form of a library, a laboratory, an auditorium, and above all the classroom.

    -

The third collective discipline that concerns Peter Senge is mental models, a discipline that emphasizes developing sensitivity and perception, both in oneself and in those around us. Working with mental models can help us view current reality more clearly and honestly. Because mental models in education often cannot be discussed and remain hidden, the critical question for a learning school is how we can develop the capacity to talk productively and safely about dangerous and uncomfortable matters. In addition, school administrators must also continually and actively reflect on their assumptions about what happens in the classroom, students' levels of development, and students' home environments.

    -

A teacher's failure to reach learning objectives is directly proportional to the teacher's inability to manage the classroom. Indicators of this failure include low student achievement that does not meet the prescribed standards or benchmarks. Classroom management is therefore a very important teacher competency.

    -

It is clear here that effective classroom management is an absolute prerequisite for an effective teaching and learning process, hence the importance of classroom management in creating a conducive classroom atmosphere to improve the quality of learning. Classroom management is the duty and responsibility of the teacher, who must mobilize all the potential present in the class to keep the learning process going. This means every teacher is required to manage the class professionally so that a conducive atmosphere is created; supporting an optimal learning process demands that teachers know, understand, select, and apply approaches judged effective in creating such an atmosphere.

    -

It can be concluded that classroom management is the various kinds of activities a teacher deliberately carries out in order to create optimal conditions for the teaching-learning process in the classroom. It is closely tied to efforts to create and maintain those optimal conditions (stopping student behaviour that distracts the class, giving rewards, having students complete tasks on time, establishing productive group norms), and it includes managing both the people (the students) and the available facilities.

    -

According to Sudirman (in Djamarah 2006:170), the purpose of classroom management is essentially contained within the purpose of education. Its purpose is to provide facilities for students' various learning activities within the social, emotional, and intellectual environment of the classroom. These facilities allow students to learn and work, creating a satisfying social atmosphere, a disciplined atmosphere, and intellectual, emotional, and attitudinal development as well as appreciation in students. Suharsimi Arikunto (in Djamarah 2006:178), meanwhile, holds that the purpose of classroom management is for every child in the class to be able to work in an orderly manner so that the teaching objectives are quickly achieved, effectively and efficiently.

    -

Classroom management is thus intended to create conditions within the class group, in the form of a good classroom environment, that allow students to act according to their abilities. Its product must match the goals to be achieved: every child in the class can work in an orderly manner so that teaching objectives are quickly achieved, effectively and efficiently, and every teacher can take command of the class by using a variety of approaches suited to the problems at hand, creating an atmosphere that is conducive, effective, and efficient.

    -

The permissive approach to classroom management is a set of teacher activities that maximizes learners' freedom to act, on the view that obstructing this freedom can hinder learners' development. The various forms of this approach to classroom management largely leave all initiative and action to the learners themselves:

    -

Essentially, the teaching-learning process is the core of the educational process as a whole, and the teacher is one of the important factors determining its success in the classroom. Teachers are therefore required to improve their role and competence; a competent teacher is better able to create an effective learning environment and to manage the class so that student learning outcomes are at an optimal level. Adam and Decey (in Usman, 2003) describe the teacher's roles in the teaching-learning process as follows:

    -

A teacher must truly master the material to be taught as well as the media to be used; even the environment itself counts as a learning resource that the teacher must study. Students absorb material with differing abilities, so educators must be skilled at designing media to help students understand lessons easily. The skill of designing learning media is fundamental and must be mastered so that the lesson being taught can be absorbed easily by learners. There are many kinds of learning media in the classroom, for example anatomical models (torso), charts and mock-ups, LCD projectors, and OHP/OHT.

    -

The teacher's role is one of the important factors in determining the success of the teaching-learning process in the classroom. Teachers are therefore required to improve their role and competence; a competent teacher is better able to create an effective learning environment and to manage the class so that student learning outcomes are at an optimal level. Adam and Decey (in Usman, 2003) describe the teacher's roles in the teaching-learning process as follows:

    -

It is said that effective classroom management is an absolute prerequisite for an effective teaching and learning process, hence the importance of classroom management in creating a conducive classroom atmosphere to improve the quality of learning. Classroom management is the duty and responsibility of the teacher, who must mobilize all the potential present in the class to keep the learning process going.

    -

As professionals, teachers are required not only to manage learning but also to manage the classroom, that is, to create and maintain optimal learning conditions for achieving the teaching objectives. Therefore, in line with the government's efforts to raise quality at every level of education, applying classroom-management strategies in instruction is one alternative believed capable of solving the fundamental problems of education in this country.

    aaccfb2cb3
    -
    -
    \ No newline at end of file diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Contoh Proposal Pertandingan Bola 17.md b/spaces/1gistliPinn/ChatGPT4/Examples/Contoh Proposal Pertandingan Bola 17.md deleted file mode 100644 index abd4718357aece8b98f3198d10c8ed57c95ed39c..0000000000000000000000000000000000000000 --- a/spaces/1gistliPinn/ChatGPT4/Examples/Contoh Proposal Pertandingan Bola 17.md +++ /dev/null @@ -1,6 +0,0 @@ -

    Contoh Proposal Pertandingan Bola 17


    DOWNLOAD >>> https://imgfil.com/2uxXOt



    -
    -19, 2561 BE - sentence contoh bantuan dana HUT Viking. . OFFER BANTUAN DANA 9TH ANNIVERSARY OF VIKING RABER DAN MEMPERINGATI HARI KEMERDEKAAN. · 19.0921 AB - offer contoh seperti keluarga BANTUAN DANA DAN HUT Viking. . · 19.1801 AB - offer contoh seperti dan keluarga bepul DAN HUT Viking. . · 19.2561 AB - offer contoh seperti dan keluarga bepul BANTUAN DANA DAN HUT Viking. . · 19.2616 AB - offer contoh seperti dan keluarga bepul BANTUAN DANA DAN HUT Viking. . · 19.2621 AB - offer contoh seperti dan keluarga BANTUAN DANA DAN HUT Viking. . 19.2716 AB - sentence contoh seperti dan keluarga BANT 8a78ff9644
    -
    -
    -

    diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Download Angry Birds 2 and Experience the New Era of Slingshot Gameplay on Android.md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Download Angry Birds 2 and Experience the New Era of Slingshot Gameplay on Android.md deleted file mode 100644 index 97fc2643f3df153c40fa8b2def6f66f8ed1d6720..0000000000000000000000000000000000000000 --- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Download Angry Birds 2 and Experience the New Era of Slingshot Gameplay on Android.md +++ /dev/null @@ -1,107 +0,0 @@ -
    -

    How to Download Angry Birds 2 for Android

    -

    Are you looking for a fun and addictive game to play on your Android device? Do you want to join millions of players around the world in flinging birds at pigs and saving eggs? If so, then you should download Angry Birds 2, the sequel to the most popular physics-based game ever. In this article, we will tell you what Angry Birds 2 is, why you should play it, and how to download it for Android. We will also share some tips and tricks to help you master the game and have more fun.

    -

    What is Angry Birds 2?

    -

    Angry Birds 2 is a puzzle video game developed by Rovio Entertainment and released in 2015. It is the twelfth game in the Angry Birds series, and the direct sequel to the original Angry Birds. It is a free-to-play game with optional purchases for in-game currency.

    -

    download angry birds 2 for android


    DOWNLOAD »»» https://urlin.us/2uT19Q



    -

    The game follows the same basic premise as the previous games: you use a slingshot to launch birds at structures made of glass, wood, and stone, where pigs are hiding. Your goal is to destroy all the pigs and save the eggs. However, Angry Birds 2 also introduces some new features and improvements that make it more fun and challenging.

    -

    Why should you play Angry Birds 2?

    -

    There are many reasons why you should play Angry Birds 2, whether you are a fan of the franchise or not. Here are some of them:

    -

    Fun gameplay

    -

    Angry Birds 2 has a fun and addictive gameplay that will keep you entertained for hours. You can choose which bird to put in the slingshot and use their special abilities to defeat the pigs with strategy. You can also use spells to unleash powerful effects on the levels. The game has hundreds of levels with multiple stages, each with different challenges and surprises. You can also compete with other players in the arena or join a clan to cooperate with friends.

    -

    Amazing graphics

    -

    Angry Birds 2 has stunning graphics that make the game look good on any device. The game uses cel-shaded visuals that give it a cartoon-like style. The characters, structures, and landscapes are colorful and detailed. The animations are smooth and expressive. The game also has dynamic weather effects that change the atmosphere of each level.

    -

    Multiple modes and events

    -

    Angry Birds 2 has many modes and events that add variety and excitement to the game. You can play daily challenges, mighty eagle's bootcamp, tower of fortune, clans wars, seasonal events, limited-time events, and more. Each mode or event has its own rules, rewards, and leaderboards. You can also collect hats, feathers, gems, chests, stars, tickets, apples and more to customize your birds and boost their power. You can also unlock new birds and spells as you progress in the game.

    -

    How to download Angry Birds 2 for Android?

    -

    Downloading Angry Birds 2 for Android is easy and fast. You just need to follow these simple steps:

    -

    Requirements

    -

    Before you download the game, make sure you have the following requirements:

    - -

    Steps

    -

    Once you have the requirements, you can download the game from the Google Play Store by following these steps:

    -


    -
      -
            1. Open the Google Play Store app on your device
            2. Search for "Angry Birds 2" or use this link: Angry Birds 2 - Apps on Google Play
            3. Tap on the "Install" button and wait for the download to finish
            4. Tap on the "Open" button and enjoy the game
        
    -

    Tips and tricks

    -

    To play Angry Birds 2 better and get more rewards, you can use these tips and tricks:

    -

    Use the environment

    -

    The levels in Angry Birds 2 have many environmental elements that you can use to your advantage. For example, you can hit flowers to make them explode, portals to teleport your birds, fans to change the direction of your shots, and more. Experiment with different elements and see how they affect the outcome of each level.

    -

    Fill the Destructometer

    -

    The Destructometer is a meter that fills up as you cause more damage to the structures and pigs. When it is full, you get an extra card that lets you choose another bird or spell to use. You can also get extra cards by hitting golden pigs or collecting stars. Try to fill the Destructometer as much as possible to have more options and chances to win.

    -

    Choose your bird wisely

    -

    Each bird in Angry Birds 2 has a different ability and strength that can be useful against different materials and situations. For example, Red can knock back objects with his battle cry, Chuck can speed up and pierce through wood, Bomb can explode and destroy stone, Matilda can drop an egg bomb and fly upwards, and so on. You can also upgrade your birds by collecting feathers and increase their power level. Choose your bird wisely depending on the level layout and the materials you need to break.

    -

    Save your lives and gems

    -

    Angry Birds 2 is a free-to-play game, but it has some limitations that can affect your gameplay. For example, you have a limited number of lives that regenerate over time or can be refilled by spending gems or watching ads. Gems are the premium currency of the game that can be used to buy more lives, spells, chests, hats, and more. You can earn gems by completing achievements, winning arena matches, opening chests, or buying them with real money. To save your lives and gems, you should play smartly and avoid losing levels or retrying them too often. You should also spend your gems wisely and only on things that you really need or want.

    -

    Conclusion

    -

            Angry Birds 2 is a fun and addictive game that you can download for free on your Android device. It offers fun gameplay, amazing graphics, multiple modes and events, and many features that make it more enjoyable than ever. If you want to join the millions of players who love this game, follow our guide on how to download Angry Birds 2 for Android and start flinging birds at pigs today. You will not regret it!
        

    -

    FAQs

    -

    Here are some frequently asked questions and answers about Angry Birds 2:

    -
      -
            1. How many levels are there in Angry Birds 2?
        

      There are over 2000 levels in Angry Birds 2, divided into chapters with different themes and bosses. The game also adds new levels regularly with updates and events.

      -
            2. How do I get more spells in Angry Birds 2?
        

      You can get more spells by filling the Destructometer, hitting golden pigs, collecting stars, opening chests, winning arena matches, or buying them with gems.

      -
            3. How do I join a clan in Angry Birds 2?
        

      You can join a clan in Angry Birds 2 by tapping on the clan icon on the main screen and choosing a clan that suits your preferences and interests. You can also create your own clan or invite your friends to join your clan. Clans allow you to chat with other members, share tips and strategies, and participate in clan wars and events.

      -
            4. How do I unlock new birds in Angry Birds 2?
        

      You can unlock new birds in Angry Birds 2 by completing certain levels or chapters, opening chests, or buying them with gems. Some of the new birds are Silver, Stella, Bubbles, Hal, Terence, and Mighty Eagle.

      -
            5. How do I contact the support team of Angry Birds 2?
        

      If you have any issues or questions about the game, you can contact the support team of Angry Birds 2 by tapping on the settings icon on the main screen and then tapping on the help icon. You can also visit the official website of the game or the Rovio Entertainment website for more information and resources.

      -

        
    -
    -
    \ No newline at end of file diff --git a/spaces/1phancelerku/anime-remove-background/Car Parking Multiplayer Download Join Thousands of Players in an Open World Mode.md b/spaces/1phancelerku/anime-remove-background/Car Parking Multiplayer Download Join Thousands of Players in an Open World Mode.md deleted file mode 100644 index bff8b192bd1a51244a5f59a7a2ff4cb736af68e7..0000000000000000000000000000000000000000 --- a/spaces/1phancelerku/anime-remove-background/Car Parking Multiplayer Download Join Thousands of Players in an Open World Mode.md +++ /dev/null @@ -1,132 +0,0 @@ -
    -

    Can You Download Car Parking Multiplayer?

    -

    If you are looking for a fun and realistic parking simulator game, you might have heard of Car Parking Multiplayer. This game is more than just parking: it offers an open-world multiplayer mode, car tuning, free walking, and many other features. But can you download Car Parking Multiplayer on your device? The answer is yes, you can! In this article, we will show you what Car Parking Multiplayer is, how to download it on different devices, and some tips and tricks for playing it.

    -

    What is Car Parking Multiplayer?

    -

            Car Parking Multiplayer is a simulation game developed by olzhass. It is one of the most popular parking games on Google Play and the App Store, with over 100 million downloads and a 4.4-star rating. In this game, you can choose from over 130 cars with real interiors, drive them in various environments, park them in challenging situations, and customize them with different parts and vinyls. You can also join thousands of other players online, compete in races, exchange cars, chat with voice or text, and even role-play as a police officer or a taxi driver.
        

    -

        



    -

    Features of Car Parking Multiplayer

    -

    Some of the features that make Car Parking Multiplayer stand out from other parking games are:

    - -

    Benefits of Car Parking Multiplayer

    -

    Some of the benefits that you can get from playing Car Parking Multiplayer are:

    - -

    How to Download Car Parking Multiplayer on Different Devices?

    -

    The good news is that you can download Car Parking Multiplayer on various devices such as Android phones or tablets, iOS devices (iPhone or iPad), or PC (Windows or Mac). Here are the steps for each device:

    -

        

    -

    How to Download Car Parking Multiplayer on Android Devices?

    -

            If you have an Android device, you can download Car Parking Multiplayer from the Google Play Store. Here are the steps:
        

    -
      -
            1. Open the Google Play Store app on your device or go to play.google.com in your browser.
            2. Search for "Car Parking Multiplayer" in the search bar at the top or browse through the apps in the simulation category.
            3. Tap on the name of the app and then tap on Install (if the app is free) or the app's price (if the app is paid).
            4. If you are asked to grant permissions, tap on Accept or Allow. The app will start downloading and installing on your device.
            5. Once the app is installed, you can open it by tapping on Open or by finding it on your home screen or app drawer.
        
    -

    How to Download Car Parking Multiplayer on iOS Devices?

    -

            If you have an iOS device, you can download Car Parking Multiplayer from the App Store. Here are the steps:
        

    -
      -
            1. Open the App Store app on your device or go to apps.apple.com in your browser.
            2. Search for "Car Parking Multiplayer" in the search bar at the bottom or browse through the apps in the simulation category.
            3. Tap on the name of the app and then tap on Get (if the app is free) or the app's price (if the app is paid).
            4. If you are asked to enter your Apple ID password or use Touch ID or Face ID, do so. The app will start downloading and installing on your device.
            5. Once the app is installed, you can open it by tapping on Open or by finding it on your home screen.
        
    -

    How to Download Car Parking Multiplayer on PC?

    -

            If you want to play Car Parking Multiplayer on your PC, you will need to use an Android emulator. An Android emulator is software that lets you run Android apps on your PC. There are many Android emulators available, but we recommend BlueStacks, as it is one of the most popular and reliable ones. Here are the steps:
        

    -
      -
            1. Go to bluestacks.com and download the latest version of BlueStacks for your PC (Windows or Mac).
            2. Run the installer and follow the instructions to install BlueStacks on your PC.
            3. Launch BlueStacks and sign in with your Google account or create a new one.
            4. Go to the Google Play Store app within BlueStacks or click on the Play Store icon on the home screen.
            5. Search for "Car Parking Multiplayer" in the search bar at the top or browse through the apps in the simulation category.
            6. Click on the name of the app and then click on Install (if the app is free) or the app's price (if the app is paid).
            7. The app will start downloading and installing on your PC.
            8. Once the app is installed, you can open it by clicking on Open or by finding it on the home screen or app center.
        
    -

    Tips and Tricks for Playing Car Parking Multiplayer

    -

    Now that you have downloaded Car Parking Multiplayer on your device, you might be wondering how to play it and get better at it. Here are some tips and tricks that can help you:

    -

    How to Earn Money and Coins in Car Parking Multiplayer?

    -

    Money and coins are the main currencies in Car Parking Multiplayer. You can use them to buy new cars, upgrade your car, change your license plate, and more. There are several ways to earn money and coins in Car Parking Multiplayer, such as:

    - -

    How to Customize Your Car in Car Parking Multiplayer?

    -

    One of the fun aspects of Car Parking Multiplayer is customizing your car. You can change various aspects of your car such as color, wheels, suspension, engine, exhaust, gearbox, turbo, vinyls, body parts, and more. To customize your car, you need to go to a garage or a tuning shop. There are two types of garages in Car Parking Multiplayer: personal garage and public garage. A personal garage is where you can store your cars and access them anytime. A public garage is where you can find other players' cars and buy them if they are for sale. To go to a garage, you need to find a garage icon on the map and drive there. To go to a tuning shop, you need to find a tuning shop icon on the map and drive there. Once you are in a garage or a tuning shop, you can tap on the Customize button and start modifying your car. You can also preview your car before buying or applying any changes. To save your changes, you need to tap on the Save button and pay the required amount of money or coins.

    -

    How to Interact with Other Players in Car Parking Multiplayer?

    -

    Car Parking Multiplayer is not only a parking simulator, but also a social game. You can interact with other players in various ways, such as:

    - -

    Conclusion

    -

    Car Parking Multiplayer is a simulation game that offers more than just parking. It is a game that lets you drive, park, customize, and socialize with different cars and players. You can download Car Parking Multiplayer on your Android, iOS, or PC device and enjoy a realistic and immersive parking simulator game. If you are looking for a fun and challenging parking game, Car Parking Multiplayer is the game for you.

    -

    FAQs

    -

    Here are some frequently asked questions about Car Parking Multiplayer:

    -

        
    -
    -
    \ No newline at end of file diff --git a/spaces/1toTree/lora_test/ppdiffusers/pipelines/repaint/__init__.py b/spaces/1toTree/lora_test/ppdiffusers/pipelines/repaint/__init__.py deleted file mode 100644 index 4ae60c60bd825398fb4b6a0817e0288a21d21f13..0000000000000000000000000000000000000000 --- a/spaces/1toTree/lora_test/ppdiffusers/pipelines/repaint/__init__.py +++ /dev/null @@ -1,16 +0,0 @@ -# Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved. -# Copyright 2022 The HuggingFace Team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -# flake8: noqa -from .pipeline_repaint import RePaintPipeline diff --git a/spaces/801artistry/RVC801/infer/lib/train/process_ckpt.py b/spaces/801artistry/RVC801/infer/lib/train/process_ckpt.py deleted file mode 100644 index 36d359d5f853452da4e1a696a84b8457b8386c29..0000000000000000000000000000000000000000 --- a/spaces/801artistry/RVC801/infer/lib/train/process_ckpt.py +++ /dev/null @@ -1,261 +0,0 @@ -import os -import sys -import traceback -from collections import OrderedDict - -import torch - -from i18n import I18nAuto - -i18n = I18nAuto() - - -def savee(ckpt, sr, if_f0, name, epoch, version, hps): - try: - opt = OrderedDict() - opt["weight"] = {} - for key in ckpt.keys(): - if "enc_q" in key: - continue - opt["weight"][key] = ckpt[key].half() - opt["config"] = [ - hps.data.filter_length // 2 + 1, - 32, - hps.model.inter_channels, - hps.model.hidden_channels, - hps.model.filter_channels, - hps.model.n_heads, - hps.model.n_layers, - hps.model.kernel_size, - hps.model.p_dropout, - hps.model.resblock, - hps.model.resblock_kernel_sizes, - hps.model.resblock_dilation_sizes, - hps.model.upsample_rates, - hps.model.upsample_initial_channel, - hps.model.upsample_kernel_sizes, - hps.model.spk_embed_dim, - hps.model.gin_channels, - hps.data.sampling_rate, - ] - opt["info"] = "%sepoch" % epoch - opt["sr"] = sr - opt["f0"] = if_f0 - opt["version"] = version - torch.save(opt, "weights/%s.pth" % name) - return "Success." 
- except: - return traceback.format_exc() - - -def show_info(path): - try: - a = torch.load(path, map_location="cpu") - return "模型信息:%s\n采样率:%s\n模型是否输入音高引导:%s\n版本:%s" % ( - a.get("info", "None"), - a.get("sr", "None"), - a.get("f0", "None"), - a.get("version", "None"), - ) - except: - return traceback.format_exc() - - -def extract_small_model(path, name, sr, if_f0, info, version): - try: - ckpt = torch.load(path, map_location="cpu") - if "model" in ckpt: - ckpt = ckpt["model"] - opt = OrderedDict() - opt["weight"] = {} - for key in ckpt.keys(): - if "enc_q" in key: - continue - opt["weight"][key] = ckpt[key].half() - if sr == "40k": - opt["config"] = [ - 1025, - 32, - 192, - 192, - 768, - 2, - 6, - 3, - 0, - "1", - [3, 7, 11], - [[1, 3, 5], [1, 3, 5], [1, 3, 5]], - [10, 10, 2, 2], - 512, - [16, 16, 4, 4], - 109, - 256, - 40000, - ] - elif sr == "48k": - if version == "v1": - opt["config"] = [ - 1025, - 32, - 192, - 192, - 768, - 2, - 6, - 3, - 0, - "1", - [3, 7, 11], - [[1, 3, 5], [1, 3, 5], [1, 3, 5]], - [10, 6, 2, 2, 2], - 512, - [16, 16, 4, 4, 4], - 109, - 256, - 48000, - ] - else: - opt["config"] = [ - 1025, - 32, - 192, - 192, - 768, - 2, - 6, - 3, - 0, - "1", - [3, 7, 11], - [[1, 3, 5], [1, 3, 5], [1, 3, 5]], - [12, 10, 2, 2], - 512, - [24, 20, 4, 4], - 109, - 256, - 48000, - ] - elif sr == "32k": - if version == "v1": - opt["config"] = [ - 513, - 32, - 192, - 192, - 768, - 2, - 6, - 3, - 0, - "1", - [3, 7, 11], - [[1, 3, 5], [1, 3, 5], [1, 3, 5]], - [10, 4, 2, 2, 2], - 512, - [16, 16, 4, 4, 4], - 109, - 256, - 32000, - ] - else: - opt["config"] = [ - 513, - 32, - 192, - 192, - 768, - 2, - 6, - 3, - 0, - "1", - [3, 7, 11], - [[1, 3, 5], [1, 3, 5], [1, 3, 5]], - [10, 8, 2, 2], - 512, - [20, 16, 4, 4], - 109, - 256, - 32000, - ] - if info == "": - info = "Extracted model." - opt["info"] = info - opt["version"] = version - opt["sr"] = sr - opt["f0"] = int(if_f0) - torch.save(opt, "weights/%s.pth" % name) - return "Success." - except: - return traceback.format_exc() - - -def change_info(path, info, name): - try: - ckpt = torch.load(path, map_location="cpu") - ckpt["info"] = info - if name == "": - name = os.path.basename(path) - torch.save(ckpt, "weights/%s" % name) - return "Success." - except: - return traceback.format_exc() - - -def merge(path1, path2, alpha1, sr, f0, info, name, version): - try: - - def extract(ckpt): - a = ckpt["model"] - opt = OrderedDict() - opt["weight"] = {} - for key in a.keys(): - if "enc_q" in key: - continue - opt["weight"][key] = a[key] - return opt - - ckpt1 = torch.load(path1, map_location="cpu") - ckpt2 = torch.load(path2, map_location="cpu") - cfg = ckpt1["config"] - if "model" in ckpt1: - ckpt1 = extract(ckpt1) - else: - ckpt1 = ckpt1["weight"] - if "model" in ckpt2: - ckpt2 = extract(ckpt2) - else: - ckpt2 = ckpt2["weight"] - if sorted(list(ckpt1.keys())) != sorted(list(ckpt2.keys())): - return "Fail to merge the models. The model architectures are not the same." 
- opt = OrderedDict() - opt["weight"] = {} - for key in ckpt1.keys(): - # try: - if key == "emb_g.weight" and ckpt1[key].shape != ckpt2[key].shape: - min_shape0 = min(ckpt1[key].shape[0], ckpt2[key].shape[0]) - opt["weight"][key] = ( - alpha1 * (ckpt1[key][:min_shape0].float()) - + (1 - alpha1) * (ckpt2[key][:min_shape0].float()) - ).half() - else: - opt["weight"][key] = ( - alpha1 * (ckpt1[key].float()) + (1 - alpha1) * (ckpt2[key].float()) - ).half() - # except: - # pdb.set_trace() - opt["config"] = cfg - """ - if(sr=="40k"):opt["config"] = [1025, 32, 192, 192, 768, 2, 6, 3, 0, "1", [3, 7, 11], [[1, 3, 5], [1, 3, 5], [1, 3, 5]], [10, 10, 2, 2], 512, [16, 16, 4, 4,4], 109, 256, 40000] - elif(sr=="48k"):opt["config"] = [1025, 32, 192, 192, 768, 2, 6, 3, 0, "1", [3, 7, 11], [[1, 3, 5], [1, 3, 5], [1, 3, 5]], [10,6,2,2,2], 512, [16, 16, 4, 4], 109, 256, 48000] - elif(sr=="32k"):opt["config"] = [513, 32, 192, 192, 768, 2, 6, 3, 0, "1", [3, 7, 11], [[1, 3, 5], [1, 3, 5], [1, 3, 5]], [10, 4, 2, 2, 2], 512, [16, 16, 4, 4,4], 109, 256, 32000] - """ - opt["sr"] = sr - opt["f0"] = 1 if f0 == i18n("是") else 0 - opt["version"] = version - opt["info"] = info - torch.save(opt, "weights/%s.pth" % name) - return "Success." - except: - return traceback.format_exc() diff --git a/spaces/ADOPLE/ResumeAnalyzer/README.md b/spaces/ADOPLE/ResumeAnalyzer/README.md deleted file mode 100644 index 598153bb58232ae47671e4fb9f20507696cf67e3..0000000000000000000000000000000000000000 --- a/spaces/ADOPLE/ResumeAnalyzer/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: AdopleAI website ResumeAnalyser -emoji: 🏃 -colorFrom: gray -colorTo: indigo -sdk: gradio -sdk_version: 3.35.2 -app_file: app.py -pinned: false -duplicated_from: ADOPLE/AdopleAI-ResumeAnalyzer ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/AI-Hobbyist/Hoyo-RVC/docs/README.ja.md b/spaces/AI-Hobbyist/Hoyo-RVC/docs/README.ja.md deleted file mode 100644 index cf47bd5be4652c7fb3738090fc3e3e75c09e703f..0000000000000000000000000000000000000000 --- a/spaces/AI-Hobbyist/Hoyo-RVC/docs/README.ja.md +++ /dev/null @@ -1,104 +0,0 @@ -
    - -

    Retrieval-based-Voice-Conversion-WebUI

    -VITSに基づく使いやすい音声変換(voice changer)framework

    - -[![madewithlove](https://forthebadge.com/images/badges/built-with-love.svg)](https://github.com/liujing04/Retrieval-based-Voice-Conversion-WebUI) - -
    - -[![Open In Colab](https://img.shields.io/badge/Colab-F9AB00?style=for-the-badge&logo=googlecolab&color=525252)](https://colab.research.google.com/github/liujing04/Retrieval-based-Voice-Conversion-WebUI/blob/main/Retrieval_based_Voice_Conversion_WebUI.ipynb) -[![Licence](https://img.shields.io/github/license/liujing04/Retrieval-based-Voice-Conversion-WebUI?style=for-the-badge)](https://github.com/liujing04/Retrieval-based-Voice-Conversion-WebUI/blob/main/%E4%BD%BF%E7%94%A8%E9%9C%80%E9%81%B5%E5%AE%88%E7%9A%84%E5%8D%8F%E8%AE%AE-LICENSE.txt) -[![Huggingface](https://img.shields.io/badge/🤗%20-Spaces-yellow.svg?style=for-the-badge)](https://huggingface.co/lj1995/VoiceConversionWebUI/tree/main/) - -[![Discord](https://img.shields.io/badge/RVC%20Developers-Discord-7289DA?style=for-the-badge&logo=discord&logoColor=white)](https://discord.gg/HcsmBBGyVk) - -
    - ------- - -[**更新日誌**](https://github.com/liujing04/Retrieval-based-Voice-Conversion-WebUI/blob/main/Changelog_CN.md) - -[**English**](./README.en.md) | [**中文简体**](../README.md) | [**日本語**](./README.ja.md) | [**한국어**](./README.ko.md) ([**韓國語**](./README.ko.han.md)) - -> デモ動画は[こちら](https://www.bilibili.com/video/BV1pm4y1z7Gm/)でご覧ください。 - -> RVCによるリアルタイム音声変換: [w-okada/voice-changer](https://github.com/w-okada/voice-changer) - -> 著作権侵害を心配することなく使用できるように、基底モデルは約50時間の高品質なオープンソースデータセットで訓練されています。 - -> 今後も、次々と使用許可のある高品質な歌声の資料集を追加し、基底モデルを訓練する予定です。 - -## はじめに -本リポジトリには下記の特徴があります。 - -+ Top1検索を用いることで、生の特徴量を訓練用データセット特徴量に変換し、トーンリーケージを削減します。 -+ 比較的貧弱なGPUでも、高速かつ簡単に訓練できます。 -+ 少量のデータセットからでも、比較的良い結果を得ることができます。(10分以上のノイズの少ない音声を推奨します。) -+ モデルを融合することで、音声を混ぜることができます。(ckpt processingタブの、ckpt mergeを使用します。) -+ 使いやすいWebUI。 -+ UVR5 Modelも含んでいるため、人の声とBGMを素早く分離できます。 - -## 環境構築 -Poetryで依存関係をインストールすることをお勧めします。 - -下記のコマンドは、Python3.8以上の環境で実行する必要があります: -```bash -# PyTorch関連の依存関係をインストール。インストール済の場合は省略。 -# 参照先: https://pytorch.org/get-started/locally/ -pip install torch torchvision torchaudio - -#Windows+ Nvidia Ampere Architecture(RTX30xx)の場合、 #21 に従い、pytorchに対応するcuda versionを指定する必要があります。 -#pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu117 - -# PyTorch関連の依存関係をインストール。インストール済の場合は省略。 -# 参照先: https://python-poetry.org/docs/#installation -curl -sSL https://install.python-poetry.org | python3 - - -# Poetry経由で依存関係をインストール -poetry install -``` - -pipでも依存関係のインストールが可能です: - -```bash -pip install -r requirements.txt -``` - -## 基底modelsを準備 -RVCは推論/訓練のために、様々な事前訓練を行った基底モデルを必要とします。 - -modelsは[Hugging Face space](https://huggingface.co/lj1995/VoiceConversionWebUI/tree/main/)からダウンロードできます。 - -以下は、RVCに必要な基底モデルやその他のファイルの一覧です。 -```bash -hubert_base.pt - -./pretrained - -./uvr5_weights - -# ffmpegがすでにinstallされている場合は省略 -./ffmpeg -``` -その後、下記のコマンドでWebUIを起動します。 -```bash -python infer-web.py -``` -Windowsをお使いの方は、直接`RVC-beta.7z`をダウンロード後に展開し、`go-web.bat`をクリックすることで、WebUIを起動することができます。(7zipが必要です。) - -また、リポジトリに[小白简易教程.doc](./小白简易教程.doc)がありますので、参考にしてください(中国語版のみ)。 - -## 参考プロジェクト -+ [ContentVec](https://github.com/auspicious3000/contentvec/) -+ [VITS](https://github.com/jaywalnut310/vits) -+ [HIFIGAN](https://github.com/jik876/hifi-gan) -+ [Gradio](https://github.com/gradio-app/gradio) -+ [FFmpeg](https://github.com/FFmpeg/FFmpeg) -+ [Ultimate Vocal Remover](https://github.com/Anjok07/ultimatevocalremovergui) -+ [audio-slicer](https://github.com/openvpi/audio-slicer) - -## 貢献者(contributor)の皆様の尽力に感謝します - - - diff --git a/spaces/AIConsultant/MusicGen/audiocraft/losses/__init__.py b/spaces/AIConsultant/MusicGen/audiocraft/losses/__init__.py deleted file mode 100644 index d55107b2c11822cab749ed3683cf19020802898a..0000000000000000000000000000000000000000 --- a/spaces/AIConsultant/MusicGen/audiocraft/losses/__init__.py +++ /dev/null @@ -1,21 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. -"""Loss related classes and functions. 
In particular the loss balancer from -EnCodec, and the usual spectral losses.""" - -# flake8: noqa -from .balancer import Balancer -from .sisnr import SISNR -from .stftloss import ( - LogSTFTMagnitudeLoss, - MRSTFTLoss, - SpectralConvergenceLoss, - STFTLoss -) -from .specloss import ( - MelSpectrogramL1Loss, - MultiScaleMelSpectrogramLoss, -) diff --git a/spaces/AIFILMS/generate_human_motion/VQ-Trans/models/quantize_cnn.py b/spaces/AIFILMS/generate_human_motion/VQ-Trans/models/quantize_cnn.py deleted file mode 100644 index b796772749efda9a225bdcb0e7262791a972a710..0000000000000000000000000000000000000000 --- a/spaces/AIFILMS/generate_human_motion/VQ-Trans/models/quantize_cnn.py +++ /dev/null @@ -1,415 +0,0 @@ -import numpy as np -import torch -import torch.nn as nn -import torch.nn.functional as F - -class QuantizeEMAReset(nn.Module): - def __init__(self, nb_code, code_dim, args): - super().__init__() - self.nb_code = nb_code - self.code_dim = code_dim - self.mu = args.mu - self.reset_codebook() - - def reset_codebook(self): - self.init = False - self.code_sum = None - self.code_count = None - if torch.cuda.is_available(): - self.register_buffer('codebook', torch.zeros(self.nb_code, self.code_dim).cuda()) - else: - self.register_buffer('codebook', torch.zeros(self.nb_code, self.code_dim)) - - def _tile(self, x): - nb_code_x, code_dim = x.shape - if nb_code_x < self.nb_code: - n_repeats = (self.nb_code + nb_code_x - 1) // nb_code_x - std = 0.01 / np.sqrt(code_dim) - out = x.repeat(n_repeats, 1) - out = out + torch.randn_like(out) * std - else : - out = x - return out - - def init_codebook(self, x): - out = self._tile(x) - self.codebook = out[:self.nb_code] - self.code_sum = self.codebook.clone() - self.code_count = torch.ones(self.nb_code, device=self.codebook.device) - self.init = True - - @torch.no_grad() - def compute_perplexity(self, code_idx) : - # Calculate new centres - code_onehot = torch.zeros(self.nb_code, code_idx.shape[0], device=code_idx.device) # nb_code, N * L - code_onehot.scatter_(0, code_idx.view(1, code_idx.shape[0]), 1) - - code_count = code_onehot.sum(dim=-1) # nb_code - prob = code_count / torch.sum(code_count) - perplexity = torch.exp(-torch.sum(prob * torch.log(prob + 1e-7))) - return perplexity - - @torch.no_grad() - def update_codebook(self, x, code_idx): - - code_onehot = torch.zeros(self.nb_code, x.shape[0], device=x.device) # nb_code, N * L - code_onehot.scatter_(0, code_idx.view(1, x.shape[0]), 1) - - code_sum = torch.matmul(code_onehot, x) # nb_code, w - code_count = code_onehot.sum(dim=-1) # nb_code - - out = self._tile(x) - code_rand = out[:self.nb_code] - - # Update centres - self.code_sum = self.mu * self.code_sum + (1. - self.mu) * code_sum # w, nb_code - self.code_count = self.mu * self.code_count + (1. 
- self.mu) * code_count # nb_code - - usage = (self.code_count.view(self.nb_code, 1) >= 1.0).float() - code_update = self.code_sum.view(self.nb_code, self.code_dim) / self.code_count.view(self.nb_code, 1) - - self.codebook = usage * code_update + (1 - usage) * code_rand - prob = code_count / torch.sum(code_count) - perplexity = torch.exp(-torch.sum(prob * torch.log(prob + 1e-7))) - - - return perplexity - - def preprocess(self, x): - # NCT -> NTC -> [NT, C] - x = x.permute(0, 2, 1).contiguous() - x = x.view(-1, x.shape[-1]) - return x - - def quantize(self, x): - # Calculate latent code x_l - k_w = self.codebook.t() - distance = torch.sum(x ** 2, dim=-1, keepdim=True) - 2 * torch.matmul(x, k_w) + torch.sum(k_w ** 2, dim=0, - keepdim=True) # (N * L, b) - _, code_idx = torch.min(distance, dim=-1) - return code_idx - - def dequantize(self, code_idx): - x = F.embedding(code_idx, self.codebook) - return x - - - def forward(self, x): - N, width, T = x.shape - - # Preprocess - x = self.preprocess(x) - - # Init codebook if not inited - if self.training and not self.init: - self.init_codebook(x) - - # quantize and dequantize through bottleneck - code_idx = self.quantize(x) - x_d = self.dequantize(code_idx) - - # Update embeddings - if self.training: - perplexity = self.update_codebook(x, code_idx) - else : - perplexity = self.compute_perplexity(code_idx) - - # Loss - commit_loss = F.mse_loss(x, x_d.detach()) - - # Passthrough - x_d = x + (x_d - x).detach() - - # Postprocess - x_d = x_d.view(N, T, -1).permute(0, 2, 1).contiguous() #(N, DIM, T) - - return x_d, commit_loss, perplexity - - - -class Quantizer(nn.Module): - def __init__(self, n_e, e_dim, beta): - super(Quantizer, self).__init__() - - self.e_dim = e_dim - self.n_e = n_e - self.beta = beta - - self.embedding = nn.Embedding(self.n_e, self.e_dim) - self.embedding.weight.data.uniform_(-1.0 / self.n_e, 1.0 / self.n_e) - - def forward(self, z): - - N, width, T = z.shape - z = self.preprocess(z) - assert z.shape[-1] == self.e_dim - z_flattened = z.contiguous().view(-1, self.e_dim) - - # B x V - d = torch.sum(z_flattened ** 2, dim=1, keepdim=True) + \ - torch.sum(self.embedding.weight**2, dim=1) - 2 * \ - torch.matmul(z_flattened, self.embedding.weight.t()) - # B x 1 - min_encoding_indices = torch.argmin(d, dim=1) - z_q = self.embedding(min_encoding_indices).view(z.shape) - - # compute loss for embedding - loss = torch.mean((z_q - z.detach())**2) + self.beta * \ - torch.mean((z_q.detach() - z)**2) - - # preserve gradients - z_q = z + (z_q - z).detach() - z_q = z_q.view(N, T, -1).permute(0, 2, 1).contiguous() #(N, DIM, T) - - min_encodings = F.one_hot(min_encoding_indices, self.n_e).type(z.dtype) - e_mean = torch.mean(min_encodings, dim=0) - perplexity = torch.exp(-torch.sum(e_mean*torch.log(e_mean + 1e-10))) - return z_q, loss, perplexity - - def quantize(self, z): - - assert z.shape[-1] == self.e_dim - - # B x V - d = torch.sum(z ** 2, dim=1, keepdim=True) + \ - torch.sum(self.embedding.weight ** 2, dim=1) - 2 * \ - torch.matmul(z, self.embedding.weight.t()) - # B x 1 - min_encoding_indices = torch.argmin(d, dim=1) - return min_encoding_indices - - def dequantize(self, indices): - - index_flattened = indices.view(-1) - z_q = self.embedding(index_flattened) - z_q = z_q.view(indices.shape + (self.e_dim, )).contiguous() - return z_q - - def preprocess(self, x): - # NCT -> NTC -> [NT, C] - x = x.permute(0, 2, 1).contiguous() - x = x.view(-1, x.shape[-1]) - return x - - - -class QuantizeReset(nn.Module): - def __init__(self, nb_code, code_dim, args): 
- super().__init__() - self.nb_code = nb_code - self.code_dim = code_dim - self.reset_codebook() - self.codebook = nn.Parameter(torch.randn(nb_code, code_dim)) - - def reset_codebook(self): - self.init = False - self.code_count = None - - def _tile(self, x): - nb_code_x, code_dim = x.shape - if nb_code_x < self.nb_code: - n_repeats = (self.nb_code + nb_code_x - 1) // nb_code_x - std = 0.01 / np.sqrt(code_dim) - out = x.repeat(n_repeats, 1) - out = out + torch.randn_like(out) * std - else : - out = x - return out - - def init_codebook(self, x): - out = self._tile(x) - self.codebook = nn.Parameter(out[:self.nb_code]) - self.code_count = torch.ones(self.nb_code, device=self.codebook.device) - self.init = True - - @torch.no_grad() - def compute_perplexity(self, code_idx) : - # Calculate new centres - code_onehot = torch.zeros(self.nb_code, code_idx.shape[0], device=code_idx.device) # nb_code, N * L - code_onehot.scatter_(0, code_idx.view(1, code_idx.shape[0]), 1) - - code_count = code_onehot.sum(dim=-1) # nb_code - prob = code_count / torch.sum(code_count) - perplexity = torch.exp(-torch.sum(prob * torch.log(prob + 1e-7))) - return perplexity - - def update_codebook(self, x, code_idx): - - code_onehot = torch.zeros(self.nb_code, x.shape[0], device=x.device) # nb_code, N * L - code_onehot.scatter_(0, code_idx.view(1, x.shape[0]), 1) - - code_count = code_onehot.sum(dim=-1) # nb_code - - out = self._tile(x) - code_rand = out[:self.nb_code] - - # Update centres - self.code_count = code_count # nb_code - usage = (self.code_count.view(self.nb_code, 1) >= 1.0).float() - - self.codebook.data = usage * self.codebook.data + (1 - usage) * code_rand - prob = code_count / torch.sum(code_count) - perplexity = torch.exp(-torch.sum(prob * torch.log(prob + 1e-7))) - - - return perplexity - - def preprocess(self, x): - # NCT -> NTC -> [NT, C] - x = x.permute(0, 2, 1).contiguous() - x = x.view(-1, x.shape[-1]) - return x - - def quantize(self, x): - # Calculate latent code x_l - k_w = self.codebook.t() - distance = torch.sum(x ** 2, dim=-1, keepdim=True) - 2 * torch.matmul(x, k_w) + torch.sum(k_w ** 2, dim=0, - keepdim=True) # (N * L, b) - _, code_idx = torch.min(distance, dim=-1) - return code_idx - - def dequantize(self, code_idx): - x = F.embedding(code_idx, self.codebook) - return x - - - def forward(self, x): - N, width, T = x.shape - # Preprocess - x = self.preprocess(x) - # Init codebook if not inited - if self.training and not self.init: - self.init_codebook(x) - # quantize and dequantize through bottleneck - code_idx = self.quantize(x) - x_d = self.dequantize(code_idx) - # Update embeddings - if self.training: - perplexity = self.update_codebook(x, code_idx) - else : - perplexity = self.compute_perplexity(code_idx) - - # Loss - commit_loss = F.mse_loss(x, x_d.detach()) - - # Passthrough - x_d = x + (x_d - x).detach() - - # Postprocess - x_d = x_d.view(N, T, -1).permute(0, 2, 1).contiguous() #(N, DIM, T) - - return x_d, commit_loss, perplexity - -class QuantizeEMA(nn.Module): - def __init__(self, nb_code, code_dim, args): - super().__init__() - self.nb_code = nb_code - self.code_dim = code_dim - self.mu = 0.99 - self.reset_codebook() - - def reset_codebook(self): - self.init = False - self.code_sum = None - self.code_count = None - self.register_buffer('codebook', torch.zeros(self.nb_code, self.code_dim).cuda()) - - def _tile(self, x): - nb_code_x, code_dim = x.shape - if nb_code_x < self.nb_code: - n_repeats = (self.nb_code + nb_code_x - 1) // nb_code_x - std = 0.01 / np.sqrt(code_dim) - out = 
x.repeat(n_repeats, 1) - out = out + torch.randn_like(out) * std - else : - out = x - return out - - def init_codebook(self, x): - out = self._tile(x) - self.codebook = out[:self.nb_code] - self.code_sum = self.codebook.clone() - self.code_count = torch.ones(self.nb_code, device=self.codebook.device) - self.init = True - - @torch.no_grad() - def compute_perplexity(self, code_idx) : - # Calculate new centres - code_onehot = torch.zeros(self.nb_code, code_idx.shape[0], device=code_idx.device) # nb_code, N * L - code_onehot.scatter_(0, code_idx.view(1, code_idx.shape[0]), 1) - - code_count = code_onehot.sum(dim=-1) # nb_code - prob = code_count / torch.sum(code_count) - perplexity = torch.exp(-torch.sum(prob * torch.log(prob + 1e-7))) - return perplexity - - @torch.no_grad() - def update_codebook(self, x, code_idx): - - code_onehot = torch.zeros(self.nb_code, x.shape[0], device=x.device) # nb_code, N * L - code_onehot.scatter_(0, code_idx.view(1, x.shape[0]), 1) - - code_sum = torch.matmul(code_onehot, x) # nb_code, w - code_count = code_onehot.sum(dim=-1) # nb_code - - # Update centres - self.code_sum = self.mu * self.code_sum + (1. - self.mu) * code_sum # w, nb_code - self.code_count = self.mu * self.code_count + (1. - self.mu) * code_count # nb_code - - code_update = self.code_sum.view(self.nb_code, self.code_dim) / self.code_count.view(self.nb_code, 1) - - self.codebook = code_update - prob = code_count / torch.sum(code_count) - perplexity = torch.exp(-torch.sum(prob * torch.log(prob + 1e-7))) - - return perplexity - - def preprocess(self, x): - # NCT -> NTC -> [NT, C] - x = x.permute(0, 2, 1).contiguous() - x = x.view(-1, x.shape[-1]) - return x - - def quantize(self, x): - # Calculate latent code x_l - k_w = self.codebook.t() - distance = torch.sum(x ** 2, dim=-1, keepdim=True) - 2 * torch.matmul(x, k_w) + torch.sum(k_w ** 2, dim=0, - keepdim=True) # (N * L, b) - _, code_idx = torch.min(distance, dim=-1) - return code_idx - - def dequantize(self, code_idx): - x = F.embedding(code_idx, self.codebook) - return x - - - def forward(self, x): - N, width, T = x.shape - - # Preprocess - x = self.preprocess(x) - - # Init codebook if not inited - if self.training and not self.init: - self.init_codebook(x) - - # quantize and dequantize through bottleneck - code_idx = self.quantize(x) - x_d = self.dequantize(code_idx) - - # Update embeddings - if self.training: - perplexity = self.update_codebook(x, code_idx) - else : - perplexity = self.compute_perplexity(code_idx) - - # Loss - commit_loss = F.mse_loss(x, x_d.detach()) - - # Passthrough - x_d = x + (x_d - x).detach() - - # Postprocess - x_d = x_d.view(N, T, -1).permute(0, 2, 1).contiguous() #(N, DIM, T) - - return x_d, commit_loss, perplexity \ No newline at end of file diff --git a/spaces/AIGC-Audio/AudioGPT/audio_detection/audio_infer/utils/plot_statistics.py b/spaces/AIGC-Audio/AudioGPT/audio_detection/audio_infer/utils/plot_statistics.py deleted file mode 100644 index bebb28af3e3468e8422c6901e1aba9600270ef89..0000000000000000000000000000000000000000 --- a/spaces/AIGC-Audio/AudioGPT/audio_detection/audio_infer/utils/plot_statistics.py +++ /dev/null @@ -1,2034 +0,0 @@ -import os -import sys -import numpy as np -import argparse -import h5py -import time -import _pickle as cPickle -import _pickle -import matplotlib.pyplot as plt -import csv -from sklearn import metrics - -from utilities import (create_folder, get_filename, d_prime) -import config - - -def _load_metrics0(filename, sample_rate, window_size, hop_size, mel_bins, fmin, - fmax, 
data_type, model_type, loss_type, balanced, augmentation, batch_size): - workspace0 = '/mnt/cephfs_new_wj/speechsv/qiuqiang.kong/workspaces/pub_audioset_tagging_cnn_transfer' - statistics_path = os.path.join(workspace0, 'statistics', filename, - 'sample_rate={},window_size={},hop_size={},mel_bins={},fmin={},fmax={}'.format( - sample_rate, window_size, hop_size, mel_bins, fmin, fmax), - 'data_type={}'.format(data_type), model_type, - 'loss_type={}'.format(loss_type), 'balanced={}'.format(balanced), - 'augmentation={}'.format(augmentation), 'batch_size={}'.format(batch_size), - 'statistics.pkl') - - statistics_dict = cPickle.load(open(statistics_path, 'rb')) - - bal_map = np.array([statistics['average_precision'] for statistics in statistics_dict['bal']]) # (N, classes_num) - bal_map = np.mean(bal_map, axis=-1) - test_map = np.array([statistics['average_precision'] for statistics in statistics_dict['test']]) # (N, classes_num) - test_map = np.mean(test_map, axis=-1) - legend = '{}, {}, bal={}, aug={}, bs={}'.format(data_type, model_type, balanced, augmentation, batch_size) - - # return {'bal_map': bal_map, 'test_map': test_map, 'legend': legend} - return bal_map, test_map, legend - - -def _load_metrics0_classwise(filename, sample_rate, window_size, hop_size, mel_bins, fmin, - fmax, data_type, model_type, loss_type, balanced, augmentation, batch_size): - workspace0 = '/mnt/cephfs_new_wj/speechsv/qiuqiang.kong/workspaces/pub_audioset_tagging_cnn_transfer' - statistics_path = os.path.join(workspace0, 'statistics', filename, - 'sample_rate={},window_size={},hop_size={},mel_bins={},fmin={},fmax={}'.format( - sample_rate, window_size, hop_size, mel_bins, fmin, fmax), - 'data_type={}'.format(data_type), model_type, - 'loss_type={}'.format(loss_type), 'balanced={}'.format(balanced), - 'augmentation={}'.format(augmentation), 'batch_size={}'.format(batch_size), - 'statistics.pkl') - - statistics_dict = cPickle.load(open(statistics_path, 'rb')) - - return statistics_dict['test'][300]['average_precision'] - - -def _load_metrics0_classwise2(filename, sample_rate, window_size, hop_size, mel_bins, fmin, - fmax, data_type, model_type, loss_type, balanced, augmentation, batch_size): - workspace0 = '/mnt/cephfs_new_wj/speechsv/qiuqiang.kong/workspaces/pub_audioset_tagging_cnn_transfer' - statistics_path = os.path.join(workspace0, 'statistics', filename, - 'sample_rate={},window_size={},hop_size={},mel_bins={},fmin={},fmax={}'.format( - sample_rate, window_size, hop_size, mel_bins, fmin, fmax), - 'data_type={}'.format(data_type), model_type, - 'loss_type={}'.format(loss_type), 'balanced={}'.format(balanced), - 'augmentation={}'.format(augmentation), 'batch_size={}'.format(batch_size), - 'statistics.pkl') - - statistics_dict = cPickle.load(open(statistics_path, 'rb')) - - k = 270 - mAP = np.mean(statistics_dict['test'][k]['average_precision']) - mAUC = np.mean(statistics_dict['test'][k]['auc']) - dprime = d_prime(mAUC) - return mAP, mAUC, dprime - - -def _load_metrics_classwise(filename, sample_rate, window_size, hop_size, mel_bins, fmin, - fmax, data_type, model_type, loss_type, balanced, augmentation, batch_size): - workspace = '/mnt/cephfs_new_wj/speechsv/kongqiuqiang/workspaces/cvssp/pub_audioset_tagging_cnn' - statistics_path = os.path.join(workspace, 'statistics', filename, - 'sample_rate={},window_size={},hop_size={},mel_bins={},fmin={},fmax={}'.format( - sample_rate, window_size, hop_size, mel_bins, fmin, fmax), - 'data_type={}'.format(data_type), model_type, - 'loss_type={}'.format(loss_type), 
'balanced={}'.format(balanced), - 'augmentation={}'.format(augmentation), 'batch_size={}'.format(batch_size), - 'statistics.pkl') - - statistics_dict = cPickle.load(open(statistics_path, 'rb')) - - k = 300 - mAP = np.mean(statistics_dict['test'][k]['average_precision']) - mAUC = np.mean(statistics_dict['test'][k]['auc']) - dprime = d_prime(mAUC) - return mAP, mAUC, dprime - - -def plot(args): - - # Arguments & parameters - dataset_dir = args.dataset_dir - workspace = args.workspace - select = args.select - - classes_num = config.classes_num - max_plot_iteration = 1000000 - iterations = np.arange(0, max_plot_iteration, 2000) - - class_labels_indices_path = os.path.join(dataset_dir, 'metadata', - 'class_labels_indices.csv') - - save_out_path = 'results/{}.pdf'.format(select) - create_folder(os.path.dirname(save_out_path)) - - # Read labels - labels = config.labels - - # Plot - fig, ax = plt.subplots(1, 1, figsize=(15, 8)) - lines = [] - - def _load_metrics(filename, sample_rate, window_size, hop_size, mel_bins, fmin, - fmax, data_type, model_type, loss_type, balanced, augmentation, batch_size): - statistics_path = os.path.join(workspace, 'statistics', filename, - 'sample_rate={},window_size={},hop_size={},mel_bins={},fmin={},fmax={}'.format( - sample_rate, window_size, hop_size, mel_bins, fmin, fmax), - 'data_type={}'.format(data_type), model_type, - 'loss_type={}'.format(loss_type), 'balanced={}'.format(balanced), - 'augmentation={}'.format(augmentation), 'batch_size={}'.format(batch_size), - 'statistics.pkl') - - statistics_dict = cPickle.load(open(statistics_path, 'rb')) - - bal_map = np.array([statistics['average_precision'] for statistics in statistics_dict['bal']]) # (N, classes_num) - bal_map = np.mean(bal_map, axis=-1) - test_map = np.array([statistics['average_precision'] for statistics in statistics_dict['test']]) # (N, classes_num) - test_map = np.mean(test_map, axis=-1) - legend = '{}, {}, bal={}, aug={}, bs={}'.format(data_type, model_type, balanced, augmentation, batch_size) - - # return {'bal_map': bal_map, 'test_map': test_map, 'legend': legend} - return bal_map, test_map, legend - - bal_alpha = 0.3 - test_alpha = 1.0 - lines = [] - - if select == '1_cnn13': - (bal_map, test_map, legend) = _load_metrics('main', 32000, 1024, - 320, 64, 50, 14000, 'full_train', 'Cnn13', 'clip_bce', 'balanced', 'mixup', 32) - line, = ax.plot(bal_map, color='r', alpha=bal_alpha) - line, = ax.plot(test_map, label='cnn13', color='r', alpha=test_alpha) - lines.append(line) - - (bal_map, test_map, legend) = _load_metrics('main', 32000, 1024, - 320, 64, 50, 14000, 'full_train', 'Cnn13_no_dropout', 'clip_bce', 'balanced', 'mixup', 32) - line, = ax.plot(bal_map, color='b', alpha=bal_alpha) - line, = ax.plot(test_map, label='cnn13_no_specaug', color='b', alpha=test_alpha) - lines.append(line) - - (bal_map, test_map, legend) = _load_metrics('main', 32000, 1024, - 320, 64, 50, 14000, 'full_train', 'Cnn13_no_specaug', 'clip_bce', 'balanced', 'mixup', 32) - line, = ax.plot(bal_map, color='g', alpha=bal_alpha) - line, = ax.plot(test_map, label='Cnn13_no_dropout', color='g', alpha=test_alpha) - lines.append(line) - - (bal_map, test_map, legend) = _load_metrics('main', 32000, 1024, - 320, 64, 50, 14000, 'full_train', 'Cnn13', 'clip_bce', 'balanced', 'none', 32) - line, = ax.plot(bal_map, color='k', alpha=bal_alpha) - line, = ax.plot(test_map, label='Cnn13_no_mixup', color='k', alpha=test_alpha) - lines.append(line) - - (bal_map, test_map, legend) = _load_metrics('main', 32000, 1024, - 320, 64, 50, 14000, 
'full_train', 'Cnn13_mixup_in_wave', 'clip_bce', 'balanced', 'mixup', 32) - line, = ax.plot(bal_map, color='c', alpha=bal_alpha) - line, = ax.plot(test_map, label='Cnn13_mixup_in_wave', color='c', alpha=test_alpha) - lines.append(line) - - elif select == '1_pooling': - (bal_map, test_map, legend) = _load_metrics('main', 32000, 1024, - 320, 64, 50, 14000, 'full_train', 'Cnn13', 'clip_bce', 'balanced', 'mixup', 32) - line, = ax.plot(bal_map, color='r', alpha=bal_alpha) - line, = ax.plot(test_map, label='cnn13', color='r', alpha=test_alpha) - lines.append(line) - - (bal_map, test_map, legend) = _load_metrics('main', 32000, 1024, - 320, 64, 50, 14000, 'full_train', 'Cnn13_gwrp', 'clip_bce', 'balanced', 'mixup', 32) - line, = ax.plot(bal_map, color='b', alpha=bal_alpha) - line, = ax.plot(test_map, label='cnn13_gmpgapgwrp', color='b', alpha=test_alpha) - lines.append(line) - - (bal_map, test_map, legend) = _load_metrics('main', 32000, 1024, - 320, 64, 50, 14000, 'full_train', 'Cnn13_att', 'clip_bce', 'balanced', 'mixup', 32) - line, = ax.plot(bal_map, color='g', alpha=bal_alpha) - line, = ax.plot(test_map, label='cnn13_gmpgapatt', color='g', alpha=test_alpha) - lines.append(line) - - elif select == '1_resnet': - (bal_map, test_map, legend) = _load_metrics('main', 32000, 1024, - 320, 64, 50, 14000, 'full_train', 'Cnn13', 'clip_bce', 'balanced', 'mixup', 32) - line, = ax.plot(bal_map, color='r', alpha=bal_alpha) - line, = ax.plot(test_map, label='cnn13', color='r', alpha=test_alpha) - lines.append(line) - - (bal_map, test_map, legend) = _load_metrics('main', 32000, 1024, - 320, 64, 50, 14000, 'full_train', 'ResNet18', 'clip_bce', 'balanced', 'mixup', 32) - line, = ax.plot(bal_map, color='b', alpha=bal_alpha) - line, = ax.plot(test_map, label='ResNet18', color='b', alpha=test_alpha) - lines.append(line) - - (bal_map, test_map, legend) = _load_metrics('main', 32000, 1024, - 320, 64, 50, 14000, 'full_train', 'ResNet34', 'clip_bce', 'balanced', 'mixup', 32) - line, = ax.plot(bal_map, color='k', alpha=bal_alpha) - line, = ax.plot(test_map, label='resnet34', color='k', alpha=test_alpha) - lines.append(line) - - (bal_map, test_map, legend) = _load_metrics('main', 32000, 1024, - 320, 64, 50, 14000, 'full_train', 'ResNet50', 'clip_bce', 'balanced', 'mixup', 32) - line, = ax.plot(bal_map, color='c', alpha=bal_alpha) - line, = ax.plot(test_map, label='resnet50', color='c', alpha=test_alpha) - lines.append(line) - - elif select == '1_densenet': - (bal_map, test_map, legend) = _load_metrics('main', 32000, 1024, - 320, 64, 50, 14000, 'full_train', 'Cnn13', 'clip_bce', 'balanced', 'mixup', 32) - line, = ax.plot(bal_map, color='r', alpha=bal_alpha) - line, = ax.plot(test_map, label='cnn13', color='r', alpha=test_alpha) - lines.append(line) - - (bal_map, test_map, legend) = _load_metrics('main', 32000, 1024, - 320, 64, 50, 14000, 'full_train', 'DenseNet121', 'clip_bce', 'balanced', 'mixup', 32) - line, = ax.plot(bal_map, color='b', alpha=bal_alpha) - line, = ax.plot(test_map, label='densenet121', color='b', alpha=test_alpha) - lines.append(line) - - (bal_map, test_map, legend) = _load_metrics('main', 32000, 1024, - 320, 64, 50, 14000, 'full_train', 'DenseNet201', 'clip_bce', 'balanced', 'mixup', 32) - line, = ax.plot(bal_map, color='g', alpha=bal_alpha) - line, = ax.plot(test_map, label='densenet201', color='g', alpha=test_alpha) - lines.append(line) - - elif select == '1_cnn9': - (bal_map, test_map, legend) = _load_metrics('main', 32000, 1024, - 320, 64, 50, 14000, 'full_train', 'Cnn13', 'clip_bce', 'balanced', 
'mixup', 32) - line, = ax.plot(bal_map, color='r', alpha=bal_alpha) - line, = ax.plot(test_map, label='cnn13', color='r', alpha=test_alpha) - lines.append(line) - - (bal_map, test_map, legend) = _load_metrics('main', 32000, 1024, - 320, 64, 50, 14000, 'full_train', 'Cnn5', 'clip_bce', 'balanced', 'mixup', 32) - line, = ax.plot(bal_map, color='b', alpha=bal_alpha) - line, = ax.plot(test_map, label='cnn5', color='b', alpha=test_alpha) - lines.append(line) - - (bal_map, test_map, legend) = _load_metrics('main', 32000, 1024, - 320, 64, 50, 14000, 'full_train', 'Cnn9', 'clip_bce', 'balanced', 'mixup', 32) - line, = ax.plot(bal_map, color='g', alpha=bal_alpha) - line, = ax.plot(test_map, label='cnn9', color='g', alpha=test_alpha) - lines.append(line) - - elif select == '1_hop': - (bal_map, test_map, legend) = _load_metrics('main', 32000, 1024, - 320, 64, 50, 14000, 'full_train', 'Cnn13', 'clip_bce', 'balanced', 'mixup', 32) - line, = ax.plot(bal_map, color='r', alpha=bal_alpha) - line, = ax.plot(test_map, label='cnn13', color='r', alpha=test_alpha) - lines.append(line) - - (bal_map, test_map, legend) = _load_metrics('main', 32000, 1024, - 500, 64, 50, 14000, 'full_train', 'Cnn13', 'clip_bce', 'balanced', 'mixup', 32) - line, = ax.plot(bal_map, color='b', alpha=bal_alpha) - line, = ax.plot(test_map, label='cnn13_hop500', color='b', alpha=test_alpha) - lines.append(line) - - (bal_map, test_map, legend) = _load_metrics('main', 32000, 1024, - 640, 64, 50, 14000, 'full_train', 'Cnn13', 'clip_bce', 'balanced', 'mixup', 32) - line, = ax.plot(bal_map, color='g', alpha=bal_alpha) - line, = ax.plot(test_map, label='cnn13_hop640', color='g', alpha=test_alpha) - lines.append(line) - - (bal_map, test_map, legend) = _load_metrics('main', 32000, 1024, - 1000, 64, 50, 14000, 'full_train', 'Cnn13', 'clip_bce', 'balanced', 'mixup', 32) - line, = ax.plot(bal_map, color='k', alpha=bal_alpha) - line, = ax.plot(test_map, label='cnn13_hop1000', color='k', alpha=test_alpha) - lines.append(line) - - elif select == '1_emb': - (bal_map, test_map, legend) = _load_metrics('main', 32000, 1024, - 320, 64, 50, 14000, 'full_train', 'Cnn13', 'clip_bce', 'balanced', 'mixup', 32) - line, = ax.plot(bal_map, color='r', alpha=bal_alpha) - line, = ax.plot(test_map, label='cnn13', color='r', alpha=test_alpha) - lines.append(line) - - (bal_map, test_map, legend) = _load_metrics('main', 32000, 1024, - 320, 64, 50, 14000, 'full_train', 'Cnn13_emb32', 'clip_bce', 'balanced', 'mixup', 32) - line, = ax.plot(bal_map, color='b', alpha=bal_alpha) - line, = ax.plot(test_map, label='cnn13_emb32', color='b', alpha=test_alpha) - lines.append(line) - - (bal_map, test_map, legend) = _load_metrics('main', 32000, 1024, - 320, 64, 50, 14000, 'full_train', 'Cnn13_emb128', 'clip_bce', 'balanced', 'mixup', 32) - line, = ax.plot(bal_map, color='g', alpha=bal_alpha) - line, = ax.plot(test_map, label='cnn13_emb128', color='g', alpha=test_alpha) - lines.append(line) - - (bal_map, test_map, legend) = _load_metrics('main', 32000, 1024, - 320, 64, 50, 14000, 'full_train', 'Cnn13_emb512', 'clip_bce', 'balanced', 'mixup', 32) - line, = ax.plot(bal_map, color='k', alpha=bal_alpha) - line, = ax.plot(test_map, label='cnn13_emb512', color='k', alpha=test_alpha) - lines.append(line) - - elif select == '1_mobilenet': - (bal_map, test_map, legend) = _load_metrics('main', 32000, 1024, - 320, 64, 50, 14000, 'full_train', 'Cnn13', 'clip_bce', 'balanced', 'mixup', 32) - line, = ax.plot(bal_map, color='r', alpha=bal_alpha) - line, = ax.plot(test_map, label='cnn13', color='r', 
alpha=test_alpha) - lines.append(line) - - (bal_map, test_map, legend) = _load_metrics('main', 32000, 1024, - 320, 64, 50, 14000, 'full_train', 'MobileNetV1', 'clip_bce', 'balanced', 'mixup', 32) - line, = ax.plot(bal_map, color='b', alpha=bal_alpha) - line, = ax.plot(test_map, label='mobilenetv1', color='b', alpha=test_alpha) - lines.append(line) - - (bal_map, test_map, legend) = _load_metrics('main', 32000, 1024, - 320, 64, 50, 14000, 'full_train', 'MobileNetV2', 'clip_bce', 'balanced', 'mixup', 32) - line, = ax.plot(bal_map, color='g', alpha=bal_alpha) - line, = ax.plot(test_map, label='mobilenetv2', color='g', alpha=test_alpha) - lines.append(line) - - elif select == '1_waveform': - (bal_map, test_map, legend) = _load_metrics('main', 32000, 1024, - 320, 64, 50, 14000, 'full_train', 'Cnn13', 'clip_bce', 'balanced', 'mixup', 32) - line, = ax.plot(bal_map, color='r', alpha=bal_alpha) - line, = ax.plot(test_map, label='cnn13', color='r', alpha=test_alpha) - lines.append(line) - - (bal_map, test_map, legend) = _load_metrics('main', 32000, 1024, - 320, 64, 50, 14000, 'full_train', 'Cnn1d_LeeNet', 'clip_bce', 'balanced', 'mixup', 32) - line, = ax.plot(bal_map, color='g', alpha=bal_alpha) - line, = ax.plot(test_map, label='Cnn1d_LeeNet', color='g', alpha=test_alpha) - lines.append(line) - - (bal_map, test_map, legend) = _load_metrics('main', 32000, 1024, - 320, 64, 50, 14000, 'full_train', 'Cnn1d_LeeNet18', 'clip_bce', 'balanced', 'mixup', 32) - line, = ax.plot(bal_map, color='b', alpha=bal_alpha) - line, = ax.plot(test_map, label='Cnn1d_LeeNet18', color='b', alpha=test_alpha) - lines.append(line) - - (bal_map, test_map, legend) = _load_metrics('main', 32000, 1024, - 320, 64, 50, 14000, 'full_train', 'Cnn1d_DaiNet', 'clip_bce', 'balanced', 'mixup', 32) - line, = ax.plot(bal_map, color='k', alpha=bal_alpha) - line, = ax.plot(test_map, label='Cnn1d_DaiNet', color='k', alpha=test_alpha) - lines.append(line) - - (bal_map, test_map, legend) = _load_metrics('main', 32000, 1024, - 320, 64, 50, 14000, 'full_train', 'Cnn1d_ResNet34', 'clip_bce', 'balanced', 'mixup', 32) - line, = ax.plot(bal_map, color='c', alpha=bal_alpha) - line, = ax.plot(test_map, label='Cnn1d_ResNet34', color='c', alpha=test_alpha) - lines.append(line) - - (bal_map, test_map, legend) = _load_metrics('main', 32000, 1024, - 320, 64, 50, 14000, 'full_train', 'Cnn1d_ResNet50', 'clip_bce', 'balanced', 'mixup', 32) - line, = ax.plot(bal_map, color='m', alpha=bal_alpha) - line, = ax.plot(test_map, label='Cnn1d_ResNet50', color='m', alpha=test_alpha) - lines.append(line) - - elif select == '1_waveform_cnn2d': - (bal_map, test_map, legend) = _load_metrics('main', 32000, 1024, - 320, 64, 50, 14000, 'full_train', 'Cnn13', 'clip_bce', 'balanced', 'mixup', 32) - line, = ax.plot(bal_map, color='r', alpha=bal_alpha) - line, = ax.plot(test_map, label='cnn13', color='r', alpha=test_alpha) - lines.append(line) - - (bal_map, test_map, legend) = _load_metrics('main', 32000, 1024, - 320, 64, 50, 14000, 'full_train', 'Cnn13_SpAndWav', 'clip_bce', 'balanced', 'mixup', 32) - line, = ax.plot(bal_map, color='b', alpha=bal_alpha) - line, = ax.plot(test_map, label='Cnn13_SpAndWav', color='b', alpha=test_alpha) - lines.append(line) - - (bal_map, test_map, legend) = _load_metrics('main', 32000, 1024, - 320, 64, 50, 14000, 'full_train', 'Cnn13_WavCnn2d', 'clip_bce', 'balanced', 'mixup', 32) - line, = ax.plot(bal_map, color='g', alpha=bal_alpha) - line, = ax.plot(test_map, label='Cnn13_WavCnn2d', color='g', alpha=test_alpha) - lines.append(line) - - elif select 
== '1_decision_level': - (bal_map, test_map, legend) = _load_metrics('main', 32000, 1024, - 320, 64, 50, 14000, 'full_train', 'Cnn13', 'clip_bce', 'balanced', 'mixup', 32) - line, = ax.plot(bal_map, color='r', alpha=bal_alpha) - line, = ax.plot(test_map, label='cnn13', color='r', alpha=test_alpha) - lines.append(line) - - (bal_map, test_map, legend) = _load_metrics('main', 32000, 1024, - 320, 64, 50, 14000, 'full_train', 'Cnn13_DecisionLevelMax', 'clip_bce', 'balanced', 'mixup', 32) - line, = ax.plot(bal_map, color='b', alpha=bal_alpha) - line, = ax.plot(test_map, label='Cnn13_DecisionLevelMax', color='b', alpha=test_alpha) - lines.append(line) - - (bal_map, test_map, legend) = _load_metrics('main', 32000, 1024, - 320, 64, 50, 14000, 'full_train', 'Cnn13_DecisionLevelAvg', 'clip_bce', 'balanced', 'mixup', 32) - line, = ax.plot(bal_map, color='g', alpha=bal_alpha) - line, = ax.plot(test_map, label='Cnn13_DecisionLevelAvg', color='g', alpha=test_alpha) - lines.append(line) - - (bal_map, test_map, legend) = _load_metrics('main', 32000, 1024, - 320, 64, 50, 14000, 'full_train', 'Cnn13_DecisionLevelAtt', 'clip_bce', 'balanced', 'mixup', 32) - line, = ax.plot(bal_map, color='k', alpha=bal_alpha) - line, = ax.plot(test_map, label='Cnn13_DecisionLevelAtt', color='k', alpha=test_alpha) - lines.append(line) - - elif select == '1_transformer': - (bal_map, test_map, legend) = _load_metrics('main', 32000, 1024, - 320, 64, 50, 14000, 'full_train', 'Cnn13', 'clip_bce', 'balanced', 'mixup', 32) - line, = ax.plot(bal_map, color='r', alpha=bal_alpha) - line, = ax.plot(test_map, label='cnn13', color='r', alpha=test_alpha) - lines.append(line) - - (bal_map, test_map, legend) = _load_metrics('main', 32000, 1024, - 320, 64, 50, 14000, 'full_train', 'Cnn13_Transformer1', 'clip_bce', 'balanced', 'mixup', 32) - line, = ax.plot(bal_map, color='g', alpha=bal_alpha) - line, = ax.plot(test_map, label='Cnn13_Transformer1', color='g', alpha=test_alpha) - lines.append(line) - - (bal_map, test_map, legend) = _load_metrics('main', 32000, 1024, - 320, 64, 50, 14000, 'full_train', 'Cnn13_Transformer3', 'clip_bce', 'balanced', 'mixup', 32) - line, = ax.plot(bal_map, color='b', alpha=bal_alpha) - line, = ax.plot(test_map, label='Cnn13_Transformer3', color='b', alpha=test_alpha) - lines.append(line) - - (bal_map, test_map, legend) = _load_metrics('main', 32000, 1024, - 320, 64, 50, 14000, 'full_train', 'Cnn13_Transformer6', 'clip_bce', 'balanced', 'mixup', 32) - line, = ax.plot(bal_map, color='k', alpha=bal_alpha) - line, = ax.plot(test_map, label='Cnn13_Transformer6', color='k', alpha=test_alpha) - lines.append(line) - - elif select == '1_aug': - (bal_map, test_map, legend) = _load_metrics('main', 32000, 1024, - 320, 64, 50, 14000, 'full_train', 'Cnn14', 'clip_bce', 'balanced', 'mixup', 32) - line, = ax.plot(bal_map, color='r', alpha=bal_alpha) - line, = ax.plot(test_map, label='cnn14,balanced,mixup', color='r', alpha=test_alpha) - lines.append(line) - - (bal_map, test_map, legend) = _load_metrics('main', 32000, 1024, - 320, 64, 50, 14000, 'full_train', 'Cnn14', 'clip_bce', 'none', 'none', 32) - line, = ax.plot(bal_map, color='g', alpha=bal_alpha) - line, = ax.plot(test_map, label='cnn14,none,none', color='g', alpha=test_alpha) - lines.append(line) - - (bal_map, test_map, legend) = _load_metrics('main', 32000, 1024, - 320, 64, 50, 14000, 'full_train', 'Cnn14', 'clip_bce', 'balanced', 'none', 32) - line, = ax.plot(bal_map, color='b', alpha=bal_alpha) - line, = ax.plot(test_map, label='cnn14,balanced,none', color='b', 
alpha=test_alpha) - lines.append(line) - - (bal_map, test_map, legend) = _load_metrics('main', 32000, 1024, - 320, 64, 50, 14000, 'full_train', 'Cnn14', 'clip_bce', 'balanced', 'mixup_from_0_epoch', 32) - line, = ax.plot(bal_map, color='m', alpha=bal_alpha) - line, = ax.plot(test_map, label='cnn14,balanced,mixup_from_0_epoch', color='m', alpha=test_alpha) - lines.append(line) - - elif select == '1_bal_train_aug': - (bal_map, test_map, legend) = _load_metrics('main', 32000, 1024, - 320, 64, 50, 14000, 'balanced_train', 'Cnn14', 'clip_bce', 'balanced', 'mixup', 32) - line, = ax.plot(bal_map, color='r', alpha=bal_alpha) - line, = ax.plot(test_map, label='cnn14,balanced,mixup', color='r', alpha=test_alpha) - lines.append(line) - - (bal_map, test_map, legend) = _load_metrics('main', 32000, 1024, - 320, 64, 50, 14000, 'balanced_train', 'Cnn14', 'clip_bce', 'none', 'none', 32) - line, = ax.plot(bal_map, color='g', alpha=bal_alpha) - line, = ax.plot(test_map, label='cnn14,none,none', color='g', alpha=test_alpha) - lines.append(line) - - (bal_map, test_map, legend) = _load_metrics('main', 32000, 1024, - 320, 64, 50, 14000, 'balanced_train', 'Cnn14', 'clip_bce', 'balanced', 'none', 32) - line, = ax.plot(bal_map, color='b', alpha=bal_alpha) - line, = ax.plot(test_map, label='cnn14,balanced,none', color='b', alpha=test_alpha) - lines.append(line) - - (bal_map, test_map, legend) = _load_metrics('main', 32000, 1024, - 320, 64, 50, 14000, 'balanced_train', 'Cnn14', 'clip_bce', 'balanced', 'mixup_from_0_epoch', 32) - line, = ax.plot(bal_map, color='m', alpha=bal_alpha) - line, = ax.plot(test_map, label='cnn14,balanced,mixup_from_0_epoch', color='m', alpha=test_alpha) - lines.append(line) - - elif select == '1_sr': - (bal_map, test_map, legend) = _load_metrics('main', 32000, 1024, - 320, 64, 50, 14000, 'full_train', 'Cnn14', 'clip_bce', 'balanced', 'mixup', 32) - line, = ax.plot(bal_map, color='r', alpha=bal_alpha) - line, = ax.plot(test_map, label='cnn14', color='r', alpha=test_alpha) - lines.append(line) - - (bal_map, test_map, legend) = _load_metrics('main', 32000, 1024, - 320, 64, 50, 14000, 'full_train', 'Cnn14_16k', 'clip_bce', 'balanced', 'mixup', 32) - line, = ax.plot(bal_map, color='g', alpha=bal_alpha) - line, = ax.plot(test_map, label='cnn14_16k', color='g', alpha=test_alpha) - lines.append(line) - - (bal_map, test_map, legend) = _load_metrics('main', 32000, 1024, - 320, 64, 50, 14000, 'full_train', 'Cnn14_8k', 'clip_bce', 'balanced', 'mixup', 32) - line, = ax.plot(bal_map, color='b', alpha=bal_alpha) - line, = ax.plot(test_map, label='cnn14_8k', color='b', alpha=test_alpha) - lines.append(line) - - elif select == '1_time_domain': - (bal_map, test_map, legend) = _load_metrics('main', 32000, 1024, - 320, 64, 50, 14000, 'full_train', 'Cnn14', 'clip_bce', 'balanced', 'mixup', 32) - line, = ax.plot(bal_map, color='r', alpha=bal_alpha) - line, = ax.plot(test_map, label='cnn14', color='r', alpha=test_alpha) - lines.append(line) - - (bal_map, test_map, legend) = _load_metrics('main', 32000, 1024, - 320, 64, 50, 14000, 'full_train', 'Cnn14_mixup_time_domain', 'clip_bce', 'balanced', 'mixup', 32) - line, = ax.plot(bal_map, color='b', alpha=bal_alpha) - line, = ax.plot(test_map, label='cnn14_time_domain', color='b', alpha=test_alpha) - lines.append(line) - - elif select == '1_partial_full': - (bal_map, test_map, legend) = _load_metrics('main', 32000, 1024, - 320, 64, 50, 14000, 'full_train', 'Cnn14', 'clip_bce', 'balanced', 'mixup', 32) - line, = ax.plot(bal_map, color='r', alpha=bal_alpha) - line, = 
ax.plot(test_map, label='cnn14', color='r', alpha=test_alpha) - lines.append(line) - - (bal_map, test_map, legend) = _load_metrics('main', 32000, 1024, - 320, 64, 50, 14000, 'partial_0.9_full_train', 'Cnn14', 'clip_bce', 'balanced', 'mixup', 32) - line, = ax.plot(bal_map, color='b', alpha=bal_alpha) - line, = ax.plot(test_map, label='cnn14,partial_0.9', color='b', alpha=test_alpha) - lines.append(line) - - (bal_map, test_map, legend) = _load_metrics('main', 32000, 1024, - 320, 64, 50, 14000, 'partial_0.8_full_train', 'Cnn14', 'clip_bce', 'balanced', 'mixup', 32) - line, = ax.plot(bal_map, color='g', alpha=bal_alpha) - line, = ax.plot(test_map, label='cnn14,partial_0.8', color='g', alpha=test_alpha) - lines.append(line) - - (bal_map, test_map, legend) = _load_metrics('main', 32000, 1024, - 320, 64, 50, 14000, 'partial_0.7_full_train', 'Cnn14', 'clip_bce', 'balanced', 'mixup', 32) - line, = ax.plot(bal_map, color='k', alpha=bal_alpha) - line, = ax.plot(test_map, label='cnn14,partial_0.7', color='k', alpha=test_alpha) - lines.append(line) - - (bal_map, test_map, legend) = _load_metrics('main', 32000, 1024, - 320, 64, 50, 14000, 'partial_0.5_full_train', 'Cnn14', 'clip_bce', 'balanced', 'mixup', 32) - line, = ax.plot(bal_map, color='m', alpha=bal_alpha) - line, = ax.plot(test_map, label='cnn14,partial_0.5', color='m', alpha=test_alpha) - lines.append(line) - - elif select == '1_window': - (bal_map, test_map, legend) = _load_metrics('main', 32000, 1024, - 320, 64, 50, 14000, 'full_train', 'Cnn14', 'clip_bce', 'balanced', 'mixup', 32) - line, = ax.plot(bal_map, color='r', alpha=bal_alpha) - line, = ax.plot(test_map, label='cnn14', color='r', alpha=test_alpha) - lines.append(line) - - (bal_map, test_map, legend) = _load_metrics('main', 32000, 2048, - 320, 64, 50, 14000, 'full_train', 'Cnn14', 'clip_bce', 'balanced', 'mixup', 32) - line, = ax.plot(bal_map, color='r', alpha=bal_alpha) - line, = ax.plot(test_map, label='cnn14_win2048', color='b', alpha=test_alpha) - lines.append(line) - - elif select == '1_melbins': - (bal_map, test_map, legend) = _load_metrics('main', 32000, 1024, - 320, 64, 50, 14000, 'full_train', 'Cnn14', 'clip_bce', 'balanced', 'mixup', 32) - line, = ax.plot(bal_map, color='r', alpha=bal_alpha) - line, = ax.plot(test_map, label='cnn14', color='r', alpha=test_alpha) - lines.append(line) - - (bal_map, test_map, legend) = _load_metrics('main', 32000, 1024, - 320, 32, 50, 14000, 'full_train', 'Cnn14_mel32', 'clip_bce', 'balanced', 'mixup', 32) - line, = ax.plot(bal_map, color='r', alpha=bal_alpha) - line, = ax.plot(test_map, label='cnn14_mel32', color='b', alpha=test_alpha) - lines.append(line) - - (bal_map, test_map, legend) = _load_metrics('main', 32000, 1024, - 320, 128, 50, 14000, 'full_train', 'Cnn14_mel128', 'clip_bce', 'balanced', 'mixup', 32) - line, = ax.plot(bal_map, color='r', alpha=bal_alpha) - line, = ax.plot(test_map, label='cnn14_mel128', color='g', alpha=test_alpha) - lines.append(line) - - elif select == '1_alternate': - max_plot_iteration = 2000000 - iterations = np.arange(0, max_plot_iteration, 2000) - - (bal_map, test_map, legend) = _load_metrics('main', 32000, 1024, - 320, 64, 50, 14000, 'full_train', 'Cnn14', 'clip_bce', 'balanced', 'mixup', 32) - line, = ax.plot(bal_map, color='r', alpha=bal_alpha) - line, = ax.plot(test_map, label='cnn14', color='r', alpha=test_alpha) - lines.append(line) - - (bal_map, test_map, legend) = _load_metrics('main', 32000, 1024, - 320, 64, 50, 14000, 'full_train', 'Cnn14', 'clip_bce', 'alternate', 'mixup', 32) - line, = 
ax.plot(bal_map, color='b', alpha=bal_alpha) - line, = ax.plot(test_map, label='cnn14_alternate', color='b', alpha=test_alpha) - lines.append(line) - - elif select == '2_all': - iterations = np.arange(0, max_plot_iteration, 2000) - - (bal_map, test_map, legend) = _load_metrics0('main', 32000, 1024, - 320, 64, 50, 14000, 'full_train', 'Cnn13', 'clip_bce', 'balanced', 'mixup', 32) - line, = ax.plot(bal_map, color='b', alpha=bal_alpha) - line, = ax.plot(test_map, label='cnn13', color='b', alpha=test_alpha) - lines.append(line) - - (bal_map, test_map, legend) = _load_metrics0('main', 32000, 1024, - 320, 64, 50, 14000, 'full_train', 'Cnn9', 'clip_bce', 'balanced', 'mixup', 32) - line, = ax.plot(bal_map, color='b', alpha=bal_alpha) - line, = ax.plot(test_map, label='cnn9', color='r', alpha=test_alpha) - lines.append(line) - - (bal_map, test_map, legend) = _load_metrics0('main', 32000, 1024, - 320, 64, 50, 14000, 'full_train', 'Cnn5', 'clip_bce', 'balanced', 'mixup', 32) - line, = ax.plot(bal_map, color='b', alpha=bal_alpha) - line, = ax.plot(test_map, label='cnn5', color='g', alpha=test_alpha) - lines.append(line) - - (bal_map, test_map, legend) = _load_metrics0('main', 32000, 1024, - 320, 64, 50, 14000, 'full_train', 'MobileNetV1', 'clip_bce', 'balanced', 'mixup', 32) - line, = ax.plot(bal_map, color='b', alpha=bal_alpha) - line, = ax.plot(test_map, label='MobileNetV1', color='k', alpha=test_alpha) - lines.append(line) - - (bal_map, test_map, legend) = _load_metrics0('main', 32000, 1024, - 320, 64, 50, 14000, 'full_train', 'Cnn1d_ResNet34', 'clip_bce', 'balanced', 'mixup', 32) - line, = ax.plot(bal_map, color='b', alpha=bal_alpha) - line, = ax.plot(test_map, label='Cnn1d_ResNet34', color='grey', alpha=test_alpha) - lines.append(line) - - (bal_map, test_map, legend) = _load_metrics0('main', 32000, 1024, - 320, 64, 50, 14000, 'full_train', 'ResNet34', 'clip_bce', 'balanced', 'mixup', 32) - line, = ax.plot(bal_map, color='b', alpha=bal_alpha) - line, = ax.plot(test_map, label='ResNet34', color='grey', alpha=test_alpha) - lines.append(line) - - (bal_map, test_map, legend) = _load_metrics0('main', 32000, 1024, - 320, 64, 50, 14000, 'full_train', 'Cnn13_WavCnn2d', 'clip_bce', 'balanced', 'mixup', 32) - line, = ax.plot(bal_map, color='b', alpha=bal_alpha) - line, = ax.plot(test_map, label='Cnn13_WavCnn2d', color='m', alpha=test_alpha) - lines.append(line) - - (bal_map, test_map, legend) = _load_metrics0('main', 32000, 1024, - 320, 64, 50, 14000, 'full_train', 'Cnn13_SpAndWav', 'clip_bce', 'balanced', 'mixup', 32) - line, = ax.plot(bal_map, color='b', alpha=bal_alpha) - line, = ax.plot(test_map, label='Cnn13_SpAndWav', color='orange', alpha=test_alpha) - lines.append(line) - - elif select == '2_emb': - iterations = np.arange(0, max_plot_iteration, 2000) - - (bal_map, test_map, legend) = _load_metrics0('main', 32000, 1024, - 320, 64, 50, 14000, 'full_train', 'Cnn13', 'clip_bce', 'balanced', 'mixup', 32) - line, = ax.plot(bal_map, color='b', alpha=bal_alpha) - line, = ax.plot(test_map, label='cnn13', color='b', alpha=test_alpha) - lines.append(line) - - (bal_map, test_map, legend) = _load_metrics0('main', 32000, 1024, - 320, 64, 50, 14000, 'full_train', 'Cnn13_emb32', 'clip_bce', 'balanced', 'mixup', 32) - line, = ax.plot(bal_map, color='b', alpha=bal_alpha) - line, = ax.plot(test_map, label='Cnn13_emb32', color='r', alpha=test_alpha) - lines.append(line) - - (bal_map, test_map, legend) = _load_metrics0('main', 32000, 1024, - 320, 64, 50, 14000, 'full_train', 'Cnn13_emb128', 'clip_bce', 'balanced', 
'mixup', 32) - line, = ax.plot(bal_map, color='b', alpha=bal_alpha) - line, = ax.plot(test_map, label='Cnn13_128', color='k', alpha=test_alpha) - lines.append(line) - - (bal_map, test_map, legend) = _load_metrics0('main', 32000, 1024, - 320, 64, 50, 14000, 'full_train', 'Cnn13_emb512', 'clip_bce', 'balanced', 'mixup', 32) - line, = ax.plot(bal_map, color='b', alpha=bal_alpha) - line, = ax.plot(test_map, label='Cnn13_512', color='g', alpha=test_alpha) - lines.append(line) - - elif select == '2_aug': - iterations = np.arange(0, max_plot_iteration, 2000) - - (bal_map, test_map, legend) = _load_metrics0('main', 32000, 1024, - 320, 64, 50, 14000, 'full_train', 'Cnn13', 'clip_bce', 'balanced', 'mixup', 32) - line, = ax.plot(bal_map, color='b', alpha=bal_alpha) - line, = ax.plot(test_map, label='cnn13', color='b', alpha=test_alpha) - lines.append(line) - - (bal_map, test_map, legend) = _load_metrics('main', 32000, 1024, - 320, 64, 50, 14000, 'full_train', 'Cnn13_no_specaug', 'clip_bce', 'none', 'none', 32) - line, = ax.plot(bal_map, color='c', alpha=bal_alpha) - line, = ax.plot(test_map, label='cnn14,none,none', color='c', alpha=test_alpha) - lines.append(line) - - - - ax.set_ylim(0, 1.) - ax.set_xlim(0, len(iterations)) - ax.xaxis.set_ticks(np.arange(0, len(iterations), 25)) - ax.xaxis.set_ticklabels(np.arange(0, max_plot_iteration, 50000)) - ax.yaxis.set_ticks(np.arange(0, 1.01, 0.05)) - ax.yaxis.set_ticklabels(np.around(np.arange(0, 1.01, 0.05), decimals=2)) - ax.grid(color='b', linestyle='solid', linewidth=0.3) - plt.legend(handles=lines, loc=2) - # box = ax.get_position() - # ax.set_position([box.x0, box.y0, box.width * 0.8, box.height]) - # ax.legend(handles=lines, bbox_to_anchor=(1.0, 1.0)) - - plt.savefig(save_out_path) - print('Save figure to {}'.format(save_out_path)) - - -def plot_for_paper(args): - - # Arguments & parameters - dataset_dir = args.dataset_dir - workspace = args.workspace - select = args.select - - classes_num = config.classes_num - max_plot_iteration = 1000000 - iterations = np.arange(0, max_plot_iteration, 2000) - - class_labels_indices_path = os.path.join(dataset_dir, 'metadata', - 'class_labels_indices.csv') - - save_out_path = 'results/paper_{}.pdf'.format(select) - create_folder(os.path.dirname(save_out_path)) - - # Read labels - labels = config.labels - - # Plot - fig, ax = plt.subplots(1, 1, figsize=(6, 4)) - lines = [] - - def _load_metrics(filename, sample_rate, window_size, hop_size, mel_bins, fmin, - fmax, data_type, model_type, loss_type, balanced, augmentation, batch_size): - statistics_path = os.path.join(workspace, 'statistics', filename, - 'sample_rate={},window_size={},hop_size={},mel_bins={},fmin={},fmax={}'.format( - sample_rate, window_size, hop_size, mel_bins, fmin, fmax), - 'data_type={}'.format(data_type), model_type, - 'loss_type={}'.format(loss_type), 'balanced={}'.format(balanced), - 'augmentation={}'.format(augmentation), 'batch_size={}'.format(batch_size), - 'statistics.pkl') - - statistics_dict = cPickle.load(open(statistics_path, 'rb')) - - bal_map = np.array([statistics['average_precision'] for statistics in statistics_dict['bal']]) # (N, classes_num) - bal_map = np.mean(bal_map, axis=-1) - test_map = np.array([statistics['average_precision'] for statistics in statistics_dict['test']]) # (N, classes_num) - test_map = np.mean(test_map, axis=-1) - legend = '{}, {}, bal={}, aug={}, bs={}'.format(data_type, model_type, balanced, augmentation, batch_size) - - # return {'bal_map': bal_map, 'test_map': test_map, 'legend': legend} - return 
bal_map, test_map, legend - - bal_alpha = 0.3 - test_alpha = 1.0 - lines = [] - linewidth = 1. - - max_plot_iteration = 540000 - - if select == '2_all': - iterations = np.arange(0, max_plot_iteration, 2000) - - (bal_map, test_map, legend) = _load_metrics0('main', 32000, 1024, - 320, 64, 50, 14000, 'full_train', 'Cnn13', 'clip_bce', 'balanced', 'mixup', 32) - line, = ax.plot(bal_map, color='r', alpha=bal_alpha, linewidth=linewidth) - line, = ax.plot(test_map, label='CNN14', color='r', alpha=test_alpha, linewidth=linewidth) - lines.append(line) - - # (bal_map, test_map, legend) = _load_metrics0('main', 32000, 1024, - # 320, 64, 50, 14000, 'full_train', 'Cnn9', 'clip_bce', 'balanced', 'mixup', 32) - # line, = ax.plot(bal_map, color='b', alpha=bal_alpha) - # line, = ax.plot(test_map, label='cnn9', color='r', alpha=test_alpha) - # lines.append(line) - - # (bal_map, test_map, legend) = _load_metrics0('main', 32000, 1024, - # 320, 64, 50, 14000, 'full_train', 'Cnn5', 'clip_bce', 'balanced', 'mixup', 32) - # line, = ax.plot(bal_map, color='b', alpha=bal_alpha) - # line, = ax.plot(test_map, label='cnn5', color='g', alpha=test_alpha) - # lines.append(line) - - (bal_map, test_map, legend) = _load_metrics0('main', 32000, 1024, - 320, 64, 50, 14000, 'full_train', 'MobileNetV1', 'clip_bce', 'balanced', 'mixup', 32) - line, = ax.plot(bal_map, color='b', alpha=bal_alpha, linewidth=linewidth) - line, = ax.plot(test_map, label='MobileNetV1', color='b', alpha=test_alpha, linewidth=linewidth) - lines.append(line) - - # (bal_map, test_map, legend) = _load_metrics0('main', 32000, 1024, - # 320, 64, 50, 14000, 'full_train', 'Cnn1d_ResNet34', 'clip_bce', 'balanced', 'mixup', 32) - # line, = ax.plot(bal_map, color='b', alpha=bal_alpha) - # line, = ax.plot(test_map, label='Cnn1d_ResNet34', color='grey', alpha=test_alpha) - # lines.append(line) - - # (bal_map, test_map, legend) = _load_metrics0('main', 32000, 1024, - # 320, 64, 50, 14000, 'full_train', 'Cnn13_WavCnn2d', 'clip_bce', 'balanced', 'mixup', 32) - # line, = ax.plot(bal_map, color='g', alpha=bal_alpha, linewidth=linewidth) - # line, = ax.plot(test_map, label='Wavegram-CNN', color='g', alpha=test_alpha, linewidth=linewidth) - # lines.append(line) - - (bal_map, test_map, legend) = _load_metrics0('main', 32000, 1024, - 320, 64, 50, 14000, 'full_train', 'Cnn13_SpAndWav', 'clip_bce', 'balanced', 'mixup', 32) - line, = ax.plot(bal_map, color='g', alpha=bal_alpha, linewidth=linewidth) - line, = ax.plot(test_map, label='Wavegram-Logmel-CNN', color='g', alpha=test_alpha, linewidth=linewidth) - lines.append(line) - - elif select == '2_emb': - iterations = np.arange(0, max_plot_iteration, 2000) - - (bal_map, test_map, legend) = _load_metrics0('main', 32000, 1024, - 320, 64, 50, 14000, 'full_train', 'Cnn13', 'clip_bce', 'balanced', 'mixup', 32) - line, = ax.plot(bal_map, color='r', alpha=bal_alpha, linewidth=linewidth) - line, = ax.plot(test_map, label='CNN14,emb=2048', color='r', alpha=test_alpha, linewidth=linewidth) - lines.append(line) - - (bal_map, test_map, legend) = _load_metrics0('main', 32000, 1024, - 320, 64, 50, 14000, 'full_train', 'Cnn13_emb32', 'clip_bce', 'balanced', 'mixup', 32) - line, = ax.plot(bal_map, color='b', alpha=bal_alpha, linewidth=linewidth) - line, = ax.plot(test_map, label='CNN14,emb=32', color='b', alpha=test_alpha, linewidth=linewidth) - lines.append(line) - - (bal_map, test_map, legend) = _load_metrics0('main', 32000, 1024, - 320, 64, 50, 14000, 'full_train', 'Cnn13_emb128', 'clip_bce', 'balanced', 'mixup', 32) - line, = 
ax.plot(bal_map, color='g', alpha=bal_alpha, linewidth=linewidth) - line, = ax.plot(test_map, label='CNN14,emb=128', color='g', alpha=test_alpha, linewidth=linewidth) - lines.append(line) - - # (bal_map, test_map, legend) = _load_metrics0('main', 32000, 1024, - # 320, 64, 50, 14000, 'full_train', 'Cnn13_emb512', 'clip_bce', 'balanced', 'mixup', 32) - # line, = ax.plot(bal_map, color='g', alpha=bal_alpha) - # line, = ax.plot(test_map, label='Cnn13_512', color='g', alpha=test_alpha) - # lines.append(line) - - elif select == '2_bal': - iterations = np.arange(0, max_plot_iteration, 2000) - - (bal_map, test_map, legend) = _load_metrics0('main', 32000, 1024, - 320, 64, 50, 14000, 'full_train', 'Cnn13', 'clip_bce', 'balanced', 'mixup', 32) - line, = ax.plot(bal_map, color='r', alpha=bal_alpha, linewidth=linewidth) - line, = ax.plot(test_map, label='CNN14,bal,mixup (1.9m)', color='r', alpha=test_alpha, linewidth=linewidth) - lines.append(line) - - (bal_map, test_map, legend) = _load_metrics('main', 32000, 1024, - 320, 64, 50, 14000, 'full_train', 'Cnn14_mixup_time_domain', 'clip_bce', 'balanced', 'mixup', 32) - line, = ax.plot(bal_map, color='y', alpha=bal_alpha, linewidth=linewidth) - line, = ax.plot(test_map, label='CNN14,bal,mixup-wav (1.9m)', color='y', alpha=test_alpha, linewidth=linewidth) - lines.append(line) - - (bal_map, test_map, legend) = _load_metrics('main', 32000, 1024, - 320, 64, 50, 14000, 'full_train', 'Cnn14', 'clip_bce', 'none', 'none', 32) - line, = ax.plot(bal_map, color='b', alpha=bal_alpha, linewidth=linewidth) - line, = ax.plot(test_map, label='CNN14,no-bal,no-mixup (1.9m)', color='b', alpha=test_alpha, linewidth=linewidth) - lines.append(line) - - (bal_map, test_map, legend) = _load_metrics('main', 32000, 1024, - 320, 64, 50, 14000, 'full_train', 'Cnn14', 'clip_bce', 'balanced', 'none', 32) - line, = ax.plot(bal_map, color='g', alpha=bal_alpha, linewidth=linewidth) - line, = ax.plot(test_map, label='CNN14,bal,no-mixup (1.9m)', color='g', alpha=test_alpha, linewidth=linewidth) - lines.append(line) - - (bal_map, test_map, legend) = _load_metrics('main', 32000, 1024, - 320, 64, 50, 14000, 'balanced_train', 'Cnn14', 'clip_bce', 'balanced', 'none', 32) - line, = ax.plot(bal_map, color='k', alpha=bal_alpha, linewidth=linewidth) - line, = ax.plot(test_map, label='CNN14,bal,no-mixup (20k)', color='k', alpha=test_alpha, linewidth=linewidth) - lines.append(line) - - (bal_map, test_map, legend) = _load_metrics('main', 32000, 1024, - 320, 64, 50, 14000, 'balanced_train', 'Cnn14', 'clip_bce', 'balanced', 'mixup', 32) - line, = ax.plot(bal_map, color='m', alpha=bal_alpha, linewidth=linewidth) - line, = ax.plot(test_map, label='CNN14,bal,mixup (20k)', color='m', alpha=test_alpha, linewidth=linewidth) - lines.append(line) - - elif select == '2_sr': - iterations = np.arange(0, max_plot_iteration, 2000) - - (bal_map, test_map, legend) = _load_metrics('main', 32000, 1024, - 320, 64, 50, 14000, 'full_train', 'Cnn14', 'clip_bce', 'balanced', 'mixup', 32) - line, = ax.plot(bal_map, color='r', alpha=bal_alpha, linewidth=linewidth) - line, = ax.plot(test_map, label='CNN14,32kHz', color='r', alpha=test_alpha, linewidth=linewidth) - lines.append(line) - - (bal_map, test_map, legend) = _load_metrics('main', 32000, 1024, - 320, 64, 50, 14000, 'full_train', 'Cnn14_16k', 'clip_bce', 'balanced', 'mixup', 32) - line, = ax.plot(bal_map, color='b', alpha=bal_alpha, linewidth=linewidth) - line, = ax.plot(test_map, label='CNN14,16kHz', color='b', alpha=test_alpha, linewidth=linewidth) - lines.append(line) - 
- (bal_map, test_map, legend) = _load_metrics('main', 32000, 1024, - 320, 64, 50, 14000, 'full_train', 'Cnn14_8k', 'clip_bce', 'balanced', 'mixup', 32) - line, = ax.plot(bal_map, color='g', alpha=bal_alpha, linewidth=linewidth) - line, = ax.plot(test_map, label='CNN14,8kHz', color='g', alpha=test_alpha, linewidth=linewidth) - lines.append(line) - - elif select == '2_partial': - iterations = np.arange(0, max_plot_iteration, 2000) - - (bal_map, test_map, legend) = _load_metrics('main', 32000, 1024, - 320, 64, 50, 14000, 'full_train', 'Cnn14', 'clip_bce', 'balanced', 'mixup', 32) - line, = ax.plot(bal_map, color='r', alpha=bal_alpha, linewidth=linewidth) - line, = ax.plot(test_map, label='CNN14 (100% full)', color='r', alpha=test_alpha, linewidth=linewidth) - lines.append(line) - - # (bal_map, test_map, legend) = _load_metrics('main', 32000, 1024, - # 320, 64, 50, 14000, 'partial_0.9_full_train', 'Cnn14', 'clip_bce', 'balanced', 'mixup', 32) - # line, = ax.plot(bal_map, color='b', alpha=bal_alpha, linewidth=linewidth) - # line, = ax.plot(test_map, label='cnn14,partial_0.9', color='b', alpha=test_alpha, linewidth=linewidth) - # lines.append(line) - - (bal_map, test_map, legend) = _load_metrics('main', 32000, 1024, - 320, 64, 50, 14000, 'partial_0.8_full_train', 'Cnn14', 'clip_bce', 'balanced', 'mixup', 32) - line, = ax.plot(bal_map, color='b', alpha=bal_alpha, linewidth=linewidth) - line, = ax.plot(test_map, label='CNN14 (80% full)', color='b', alpha=test_alpha, linewidth=linewidth) - lines.append(line) - - # (bal_map, test_map, legend) = _load_metrics('main', 32000, 1024, - # 320, 64, 50, 14000, 'partial_0.7_full_train', 'Cnn14', 'clip_bce', 'balanced', 'mixup', 32) - # line, = ax.plot(bal_map, color='k', alpha=bal_alpha, linewidth=linewidth) - # line, = ax.plot(test_map, label='cnn14,partial_0.7', color='k', alpha=test_alpha, linewidth=linewidth) - # lines.append(line) - - (bal_map, test_map, legend) = _load_metrics('main', 32000, 1024, - 320, 64, 50, 14000, 'partial_0.5_full_train', 'Cnn14', 'clip_bce', 'balanced', 'mixup', 32) - line, = ax.plot(bal_map, color='g', alpha=bal_alpha, linewidth=linewidth) - line, = ax.plot(test_map, label='cnn14 (50% full)', color='g', alpha=test_alpha, linewidth=linewidth) - lines.append(line) - - elif select == '2_melbins': - iterations = np.arange(0, max_plot_iteration, 2000) - - (bal_map, test_map, legend) = _load_metrics('main', 32000, 1024, - 320, 64, 50, 14000, 'full_train', 'Cnn14', 'clip_bce', 'balanced', 'mixup', 32) - line, = ax.plot(bal_map, color='r', alpha=bal_alpha, linewidth=linewidth) - line, = ax.plot(test_map, label='CNN14,64-melbins', color='r', alpha=test_alpha, linewidth=linewidth) - lines.append(line) - - (bal_map, test_map, legend) = _load_metrics('main', 32000, 1024, - 320, 32, 50, 14000, 'full_train', 'Cnn14_mel32', 'clip_bce', 'balanced', 'mixup', 32) - line, = ax.plot(bal_map, color='r', alpha=bal_alpha) - line, = ax.plot(test_map, label='CNN14,32-melbins', color='b', alpha=test_alpha, linewidth=linewidth) - lines.append(line) - - (bal_map, test_map, legend) = _load_metrics('main', 32000, 1024, - 320, 128, 50, 14000, 'full_train', 'Cnn14_mel128', 'clip_bce', 'balanced', 'mixup', 32) - line, = ax.plot(bal_map, color='r', alpha=bal_alpha) - line, = ax.plot(test_map, label='CNN14,128-melbins', color='g', alpha=test_alpha, linewidth=linewidth) - lines.append(line) - - ax.set_ylim(0, 0.8) - ax.set_xlim(0, len(iterations)) - ax.set_xlabel('Iterations') - ax.set_ylabel('mAP') - ax.xaxis.set_ticks(np.arange(0, len(iterations), 50)) - # 
ax.xaxis.set_ticklabels(np.arange(0, max_plot_iteration, 50000)) - ax.xaxis.set_ticklabels(['0', '100k', '200k', '300k', '400k', '500k']) - ax.yaxis.set_ticks(np.arange(0, 0.81, 0.05)) - ax.yaxis.set_ticklabels(['0', '', '0.1', '', '0.2', '', '0.3', '', '0.4', '', '0.5', '', '0.6', '', '0.7', '', '0.8']) - # ax.yaxis.set_ticklabels(np.around(np.arange(0, 0.81, 0.05), decimals=2)) - ax.yaxis.grid(color='k', linestyle='solid', alpha=0.3, linewidth=0.3) - ax.xaxis.grid(color='k', linestyle='solid', alpha=0.3, linewidth=0.3) - plt.legend(handles=lines, loc=2) - plt.tight_layout(0, 0, 0) - # box = ax.get_position() - # ax.set_position([box.x0, box.y0, box.width * 0.8, box.height]) - # ax.legend(handles=lines, bbox_to_anchor=(1.0, 1.0)) - - plt.savefig(save_out_path) - print('Save figure to {}'.format(save_out_path)) - - -def plot_for_paper2(args): - - # Arguments & parameters - dataset_dir = args.dataset_dir - workspace = args.workspace - - classes_num = config.classes_num - max_plot_iteration = 1000000 - iterations = np.arange(0, max_plot_iteration, 2000) - - class_labels_indices_path = os.path.join(dataset_dir, 'metadata', - 'class_labels_indices.csv') - - save_out_path = 'results/paper2.pdf' - create_folder(os.path.dirname(save_out_path)) - - # Read labels - labels = config.labels - - # Plot - fig, ax = plt.subplots(2, 3, figsize=(14, 7)) - lines = [] - - def _load_metrics(filename, sample_rate, window_size, hop_size, mel_bins, fmin, - fmax, data_type, model_type, loss_type, balanced, augmentation, batch_size): - statistics_path = os.path.join(workspace, 'statistics', filename, - 'sample_rate={},window_size={},hop_size={},mel_bins={},fmin={},fmax={}'.format( - sample_rate, window_size, hop_size, mel_bins, fmin, fmax), - 'data_type={}'.format(data_type), model_type, - 'loss_type={}'.format(loss_type), 'balanced={}'.format(balanced), - 'augmentation={}'.format(augmentation), 'batch_size={}'.format(batch_size), - 'statistics.pkl') - - statistics_dict = cPickle.load(open(statistics_path, 'rb')) - - bal_map = np.array([statistics['average_precision'] for statistics in statistics_dict['bal']]) # (N, classes_num) - bal_map = np.mean(bal_map, axis=-1) - test_map = np.array([statistics['average_precision'] for statistics in statistics_dict['test']]) # (N, classes_num) - test_map = np.mean(test_map, axis=-1) - legend = '{}, {}, bal={}, aug={}, bs={}'.format(data_type, model_type, balanced, augmentation, batch_size) - - # return {'bal_map': bal_map, 'test_map': test_map, 'legend': legend} - return bal_map, test_map, legend - - def _load_metrics0(filename, sample_rate, window_size, hop_size, mel_bins, fmin, - fmax, data_type, model_type, loss_type, balanced, augmentation, batch_size): - workspace0 = '/mnt/cephfs_new_wj/speechsv/qiuqiang.kong/workspaces/pub_audioset_tagging_cnn_transfer' - statistics_path = os.path.join(workspace0, 'statistics', filename, - 'sample_rate={},window_size={},hop_size={},mel_bins={},fmin={},fmax={}'.format( - sample_rate, window_size, hop_size, mel_bins, fmin, fmax), - 'data_type={}'.format(data_type), model_type, - 'loss_type={}'.format(loss_type), 'balanced={}'.format(balanced), - 'augmentation={}'.format(augmentation), 'batch_size={}'.format(batch_size), - 'statistics.pkl') - - statistics_dict = cPickle.load(open(statistics_path, 'rb')) - - bal_map = np.array([statistics['average_precision'] for statistics in statistics_dict['bal']]) # (N, classes_num) - bal_map = np.mean(bal_map, axis=-1) - test_map = np.array([statistics['average_precision'] for statistics in 
statistics_dict['test']]) # (N, classes_num) - test_map = np.mean(test_map, axis=-1) - legend = '{}, {}, bal={}, aug={}, bs={}'.format(data_type, model_type, balanced, augmentation, batch_size) - - # return {'bal_map': bal_map, 'test_map': test_map, 'legend': legend} - return bal_map, test_map, legend - - bal_alpha = 0.3 - test_alpha = 1.0 - lines = [] - linewidth = 1. - - max_plot_iteration = 540000 - - if True: - iterations = np.arange(0, max_plot_iteration, 2000) - - (bal_map, test_map, legend) = _load_metrics0('main', 32000, 1024, - 320, 64, 50, 14000, 'full_train', 'Cnn13', 'clip_bce', 'balanced', 'mixup', 32) - line, = ax[0, 0].plot(bal_map, color='r', alpha=bal_alpha, linewidth=linewidth) - line, = ax[0, 0].plot(test_map, label='CNN14', color='r', alpha=test_alpha, linewidth=linewidth) - lines.append(line) - - # (bal_map, test_map, legend) = _load_metrics0('main', 32000, 1024, - # 320, 64, 50, 14000, 'full_train', 'Cnn9', 'clip_bce', 'balanced', 'mixup', 32) - # line, = ax.plot(bal_map, color='b', alpha=bal_alpha) - # line, = ax.plot(test_map, label='cnn9', color='r', alpha=test_alpha) - # lines.append(line) - - # (bal_map, test_map, legend) = _load_metrics0('main', 32000, 1024, - # 320, 64, 50, 14000, 'full_train', 'Cnn5', 'clip_bce', 'balanced', 'mixup', 32) - # line, = ax.plot(bal_map, color='b', alpha=bal_alpha) - # line, = ax.plot(test_map, label='cnn5', color='g', alpha=test_alpha) - # lines.append(line) - - (bal_map, test_map, legend) = _load_metrics0('main', 32000, 1024, - 320, 64, 50, 14000, 'full_train', 'MobileNetV1', 'clip_bce', 'balanced', 'mixup', 32) - line, = ax[0, 0].plot(bal_map, color='b', alpha=bal_alpha, linewidth=linewidth) - line, = ax[0, 0].plot(test_map, label='MobileNetV1', color='b', alpha=test_alpha, linewidth=linewidth) - lines.append(line) - - # (bal_map, test_map, legend) = _load_metrics0('main', 32000, 1024, - # 320, 64, 50, 14000, 'full_train', 'Cnn1d_ResNet34', 'clip_bce', 'balanced', 'mixup', 32) - # line, = ax.plot(bal_map, color='b', alpha=bal_alpha) - # line, = ax.plot(test_map, label='Cnn1d_ResNet34', color='grey', alpha=test_alpha) - # lines.append(line) - - # (bal_map, test_map, legend) = _load_metrics0('main', 32000, 1024, - # 320, 64, 50, 14000, 'full_train', 'ResNet34', 'clip_bce', 'balanced', 'mixup', 32) - # line, = ax[0, 0].plot(bal_map, color='k', alpha=bal_alpha, linewidth=linewidth) - # line, = ax[0, 0].plot(test_map, label='ResNet38', color='k', alpha=test_alpha, linewidth=linewidth) - # lines.append(line) - - # (bal_map, test_map, legend) = _load_metrics0('main', 32000, 1024, - # 320, 64, 50, 14000, 'full_train', 'Cnn13_WavCnn2d', 'clip_bce', 'balanced', 'mixup', 32) - # line, = ax.plot(bal_map, color='g', alpha=bal_alpha, linewidth=linewidth) - # line, = ax.plot(test_map, label='Wavegram-CNN', color='g', alpha=test_alpha, linewidth=linewidth) - # lines.append(line) - - (bal_map, test_map, legend) = _load_metrics0('main', 32000, 1024, - 320, 64, 50, 14000, 'full_train', 'Cnn13_SpAndWav', 'clip_bce', 'balanced', 'mixup', 32) - line, = ax[0, 0].plot(bal_map, color='g', alpha=bal_alpha, linewidth=linewidth) - line, = ax[0, 0].plot(test_map, label='Wavegram-Logmel-CNN', color='g', alpha=test_alpha, linewidth=linewidth) - lines.append(line) - - ax[0, 0].legend(handles=lines, loc=2) - ax[0, 0].set_title('(a) Comparison of architectures') - - if True: - lines = [] - iterations = np.arange(0, max_plot_iteration, 2000) - - (bal_map, test_map, legend) = _load_metrics0('main', 32000, 1024, - 320, 64, 50, 14000, 'full_train', 'Cnn13', 
'clip_bce', 'balanced', 'mixup', 32) - line, = ax[0, 1].plot(bal_map, color='r', alpha=bal_alpha, linewidth=linewidth) - line, = ax[0, 1].plot(test_map, label='CNN14,bal,mixup (1.9m)', color='r', alpha=test_alpha, linewidth=linewidth) - lines.append(line) - - (bal_map, test_map, legend) = _load_metrics('main', 32000, 1024, - 320, 64, 50, 14000, 'full_train', 'Cnn14', 'clip_bce', 'none', 'none', 32) - line, = ax[0, 1].plot(bal_map, color='b', alpha=bal_alpha, linewidth=linewidth) - line, = ax[0, 1].plot(test_map, label='CNN14,no-bal,no-mixup (1.9m)', color='b', alpha=test_alpha, linewidth=linewidth) - lines.append(line) - - (bal_map, test_map, legend) = _load_metrics('main', 32000, 1024, - 320, 64, 50, 14000, 'full_train', 'Cnn14_mixup_time_domain', 'clip_bce', 'balanced', 'mixup', 32) - line, = ax[0, 1].plot(bal_map, color='y', alpha=bal_alpha, linewidth=linewidth) - line, = ax[0, 1].plot(test_map, label='CNN14,bal,mixup-wav (1.9m)', color='y', alpha=test_alpha, linewidth=linewidth) - lines.append(line) - - (bal_map, test_map, legend) = _load_metrics('main', 32000, 1024, - 320, 64, 50, 14000, 'full_train', 'Cnn14', 'clip_bce', 'balanced', 'none', 32) - line, = ax[0, 1].plot(bal_map, color='g', alpha=bal_alpha, linewidth=linewidth) - line, = ax[0, 1].plot(test_map, label='CNN14,bal,no-mixup (1.9m)', color='g', alpha=test_alpha, linewidth=linewidth) - lines.append(line) - - (bal_map, test_map, legend) = _load_metrics('main', 32000, 1024, - 320, 64, 50, 14000, 'balanced_train', 'Cnn14', 'clip_bce', 'balanced', 'none', 32) - line, = ax[0, 1].plot(bal_map, color='k', alpha=bal_alpha, linewidth=linewidth) - line, = ax[0, 1].plot(test_map, label='CNN14,bal,no-mixup (20k)', color='k', alpha=test_alpha, linewidth=linewidth) - lines.append(line) - - (bal_map, test_map, legend) = _load_metrics('main', 32000, 1024, - 320, 64, 50, 14000, 'balanced_train', 'Cnn14', 'clip_bce', 'balanced', 'mixup', 32) - line, = ax[0, 1].plot(bal_map, color='m', alpha=bal_alpha, linewidth=linewidth) - line, = ax[0, 1].plot(test_map, label='CNN14,bal,mixup (20k)', color='m', alpha=test_alpha, linewidth=linewidth) - lines.append(line) - - ax[0, 1].legend(handles=lines, loc=2, fontsize=8) - - ax[0, 1].set_title('(b) Comparison of training data and augmentation') - - if True: - lines = [] - iterations = np.arange(0, max_plot_iteration, 2000) - - (bal_map, test_map, legend) = _load_metrics0('main', 32000, 1024, - 320, 64, 50, 14000, 'full_train', 'Cnn13', 'clip_bce', 'balanced', 'mixup', 32) - line, = ax[0, 2].plot(bal_map, color='r', alpha=bal_alpha, linewidth=linewidth) - line, = ax[0, 2].plot(test_map, label='CNN14,emb=2048', color='r', alpha=test_alpha, linewidth=linewidth) - lines.append(line) - - (bal_map, test_map, legend) = _load_metrics0('main', 32000, 1024, - 320, 64, 50, 14000, 'full_train', 'Cnn13_emb32', 'clip_bce', 'balanced', 'mixup', 32) - line, = ax[0, 2].plot(bal_map, color='b', alpha=bal_alpha, linewidth=linewidth) - line, = ax[0, 2].plot(test_map, label='CNN14,emb=32', color='b', alpha=test_alpha, linewidth=linewidth) - lines.append(line) - - (bal_map, test_map, legend) = _load_metrics0('main', 32000, 1024, - 320, 64, 50, 14000, 'full_train', 'Cnn13_emb128', 'clip_bce', 'balanced', 'mixup', 32) - line, = ax[0, 2].plot(bal_map, color='g', alpha=bal_alpha, linewidth=linewidth) - line, = ax[0, 2].plot(test_map, label='CNN14,emb=128', color='g', alpha=test_alpha, linewidth=linewidth) - lines.append(line) - - ax[0, 2].legend(handles=lines, loc=2) - ax[0, 2].set_title('(c) Comparison of embedding size') - - if 
True: - lines = [] - iterations = np.arange(0, max_plot_iteration, 2000) - - (bal_map, test_map, legend) = _load_metrics('main', 32000, 1024, - 320, 64, 50, 14000, 'full_train', 'Cnn14', 'clip_bce', 'balanced', 'mixup', 32) - line, = ax[1, 0].plot(bal_map, color='r', alpha=bal_alpha, linewidth=linewidth) - line, = ax[1, 0].plot(test_map, label='CNN14 (100% full)', color='r', alpha=test_alpha, linewidth=linewidth) - lines.append(line) - - (bal_map, test_map, legend) = _load_metrics('main', 32000, 1024, - 320, 64, 50, 14000, 'partial_0.8_full_train', 'Cnn14', 'clip_bce', 'balanced', 'mixup', 32) - line, = ax[1, 0].plot(bal_map, color='b', alpha=bal_alpha, linewidth=linewidth) - line, = ax[1, 0].plot(test_map, label='CNN14 (80% full)', color='b', alpha=test_alpha, linewidth=linewidth) - lines.append(line) - - (bal_map, test_map, legend) = _load_metrics('main', 32000, 1024, - 320, 64, 50, 14000, 'partial_0.5_full_train', 'Cnn14', 'clip_bce', 'balanced', 'mixup', 32) - line, = ax[1, 0].plot(bal_map, color='g', alpha=bal_alpha, linewidth=linewidth) - line, = ax[1, 0].plot(test_map, label='cnn14 (50% full)', color='g', alpha=test_alpha, linewidth=linewidth) - lines.append(line) - - ax[1, 0].legend(handles=lines, loc=2) - ax[1, 0].set_title('(d) Comparison of amount of training data') - - if True: - lines = [] - iterations = np.arange(0, max_plot_iteration, 2000) - - (bal_map, test_map, legend) = _load_metrics('main', 32000, 1024, - 320, 64, 50, 14000, 'full_train', 'Cnn14', 'clip_bce', 'balanced', 'mixup', 32) - line, = ax[1, 1].plot(bal_map, color='r', alpha=bal_alpha, linewidth=linewidth) - line, = ax[1, 1].plot(test_map, label='CNN14,32kHz', color='r', alpha=test_alpha, linewidth=linewidth) - lines.append(line) - - (bal_map, test_map, legend) = _load_metrics('main', 32000, 1024, - 320, 64, 50, 14000, 'full_train', 'Cnn14_16k', 'clip_bce', 'balanced', 'mixup', 32) - line, = ax[1, 1].plot(bal_map, color='b', alpha=bal_alpha, linewidth=linewidth) - line, = ax[1, 1].plot(test_map, label='CNN14,16kHz', color='b', alpha=test_alpha, linewidth=linewidth) - lines.append(line) - - (bal_map, test_map, legend) = _load_metrics('main', 32000, 1024, - 320, 64, 50, 14000, 'full_train', 'Cnn14_8k', 'clip_bce', 'balanced', 'mixup', 32) - line, = ax[1, 1].plot(bal_map, color='g', alpha=bal_alpha, linewidth=linewidth) - line, = ax[1, 1].plot(test_map, label='CNN14,8kHz', color='g', alpha=test_alpha, linewidth=linewidth) - lines.append(line) - - ax[1, 1].legend(handles=lines, loc=2) - ax[1, 1].set_title('(e) Comparison of sampling rate') - - if True: - lines = [] - iterations = np.arange(0, max_plot_iteration, 2000) - - (bal_map, test_map, legend) = _load_metrics('main', 32000, 1024, - 320, 64, 50, 14000, 'full_train', 'Cnn14', 'clip_bce', 'balanced', 'mixup', 32) - line, = ax[1, 2].plot(bal_map, color='r', alpha=bal_alpha, linewidth=linewidth) - line, = ax[1, 2].plot(test_map, label='CNN14,64-melbins', color='r', alpha=test_alpha, linewidth=linewidth) - lines.append(line) - - (bal_map, test_map, legend) = _load_metrics('main', 32000, 1024, - 320, 32, 50, 14000, 'full_train', 'Cnn14_mel32', 'clip_bce', 'balanced', 'mixup', 32) - line, = ax[1, 2].plot(bal_map, color='b', alpha=bal_alpha) - line, = ax[1, 2].plot(test_map, label='CNN14,32-melbins', color='b', alpha=test_alpha, linewidth=linewidth) - lines.append(line) - - (bal_map, test_map, legend) = _load_metrics('main', 32000, 1024, - 320, 128, 50, 14000, 'full_train', 'Cnn14_mel128', 'clip_bce', 'balanced', 'mixup', 32) - line, = ax[1, 2].plot(bal_map, color='g', 
alpha=bal_alpha) - line, = ax[1, 2].plot(test_map, label='CNN14,128-melbins', color='g', alpha=test_alpha, linewidth=linewidth) - lines.append(line) - - ax[1, 2].legend(handles=lines, loc=2) - ax[1, 2].set_title('(f) Comparison of mel bins number') - - for i in range(2): - for j in range(3): - ax[i, j].set_ylim(0, 0.8) - ax[i, j].set_xlim(0, len(iterations)) - ax[i, j].set_xlabel('Iterations') - ax[i, j].set_ylabel('mAP') - ax[i, j].xaxis.set_ticks(np.arange(0, len(iterations), 50)) - # ax.xaxis.set_ticklabels(np.arange(0, max_plot_iteration, 50000)) - ax[i, j].xaxis.set_ticklabels(['0', '100k', '200k', '300k', '400k', '500k']) - ax[i, j].yaxis.set_ticks(np.arange(0, 0.81, 0.05)) - ax[i, j].yaxis.set_ticklabels(['0', '', '0.1', '', '0.2', '', '0.3', '', '0.4', '', '0.5', '', '0.6', '', '0.7', '', '0.8']) - # ax.yaxis.set_ticklabels(np.around(np.arange(0, 0.81, 0.05), decimals=2)) - ax[i, j].yaxis.grid(color='k', linestyle='solid', alpha=0.3, linewidth=0.3) - ax[i, j].xaxis.grid(color='k', linestyle='solid', alpha=0.3, linewidth=0.3) - - plt.tight_layout(0, 1, 0) - # box = ax.get_position() - # ax.set_position([box.x0, box.y0, box.width * 0.8, box.height]) - # ax.legend(handles=lines, bbox_to_anchor=(1.0, 1.0)) - - plt.savefig(save_out_path) - print('Save figure to {}'.format(save_out_path)) - - -def table_values(args): - - # Arguments & parameters - dataset_dir = args.dataset_dir - workspace = args.workspace - select = args.select - - def _load_metrics(filename, sample_rate, window_size, hop_size, mel_bins, fmin, - fmax, data_type, model_type, loss_type, balanced, augmentation, batch_size, iteration): - statistics_path = os.path.join(workspace, 'statistics', filename, - 'sample_rate={},window_size={},hop_size={},mel_bins={},fmin={},fmax={}'.format( - sample_rate, window_size, hop_size, mel_bins, fmin, fmax), - 'data_type={}'.format(data_type), model_type, - 'loss_type={}'.format(loss_type), 'balanced={}'.format(balanced), - 'augmentation={}'.format(augmentation), 'batch_size={}'.format(batch_size), - 'statistics.pkl') - - statistics_dict = cPickle.load(open(statistics_path, 'rb')) - - idx = iteration // 2000 - mAP = np.mean(statistics_dict['test'][idx]['average_precision']) - mAUC = np.mean(statistics_dict['test'][idx]['auc']) - dprime = d_prime(mAUC) - - print('mAP: {:.3f}'.format(mAP)) - print('mAUC: {:.3f}'.format(mAUC)) - print('dprime: {:.3f}'.format(dprime)) - - - if select == 'cnn13': - iteration = 600000 - _load_metrics('main', 32000, 1024, - 320, 64, 50, 14000, 'full_train', 'Cnn13', 'clip_bce', 'balanced', 'mixup', 32, iteration) - - elif select == 'cnn5': - iteration = 440000 - _load_metrics('main', 32000, 1024, - 320, 64, 50, 14000, 'full_train', 'Cnn5', 'clip_bce', 'balanced', 'mixup', 32, iteration) - - elif select == 'cnn9': - iteration = 440000 - _load_metrics('main', 32000, 1024, - 320, 64, 50, 14000, 'full_train', 'Cnn9', 'clip_bce', 'balanced', 'mixup', 32, iteration) - - elif select == 'cnn13_decisionlevelmax': - iteration = 400000 - _load_metrics('main', 32000, 1024, - 320, 64, 50, 14000, 'full_train', 'Cnn13_DecisionLevelMax', 'clip_bce', 'balanced', 'mixup', 32, iteration) - - elif select == 'cnn13_decisionlevelavg': - iteration = 600000 - _load_metrics('main', 32000, 1024, - 320, 64, 50, 14000, 'full_train', 'Cnn13_DecisionLevelAvg', 'clip_bce', 'balanced', 'mixup', 32, iteration) - - elif select == 'cnn13_decisionlevelatt': - iteration = 600000 - _load_metrics('main', 32000, 1024, - 320, 64, 50, 14000, 'full_train', 'Cnn13_DecisionLevelAtt', 'clip_bce', 'balanced', 
'mixup', 32, iteration) - - elif select == 'cnn13_emb32': - iteration = 560000 - _load_metrics('main', 32000, 1024, - 320, 64, 50, 14000, 'full_train', 'Cnn13_emb32', 'clip_bce', 'balanced', 'mixup', 32, iteration) - - elif select == 'cnn13_emb128': - iteration = 560000 - _load_metrics('main', 32000, 1024, - 320, 64, 50, 14000, 'full_train', 'Cnn13_emb128', 'clip_bce', 'balanced', 'mixup', 32, iteration) - - elif select == 'cnn13_emb512': - iteration = 440000 - _load_metrics('main', 32000, 1024, - 320, 64, 50, 14000, 'full_train', 'Cnn13_emb512', 'clip_bce', 'balanced', 'mixup', 32, iteration) - - elif select == 'cnn13_hop500': - iteration = 440000 - _load_metrics('main', 32000, 1024, - 500, 64, 50, 14000, 'full_train', 'Cnn13', 'clip_bce', 'balanced', 'mixup', 32, iteration) - - elif select == 'cnn13_hop640': - iteration = 440000 - _load_metrics('main', 32000, 1024, - 640, 64, 50, 14000, 'full_train', 'Cnn13', 'clip_bce', 'balanced', 'mixup', 32, iteration) - - elif select == 'cnn13_hop1000': - iteration = 540000 - _load_metrics('main', 32000, 1024, - 1000, 64, 50, 14000, 'full_train', 'Cnn13', 'clip_bce', 'balanced', 'mixup', 32, iteration) - - elif select == 'mobilenetv1': - iteration = 560000 - _load_metrics('main', 32000, 1024, - 320, 64, 50, 14000, 'full_train', 'MobileNetV1', 'clip_bce', 'balanced', 'mixup', 32, iteration) - - elif select == 'mobilenetv2': - iteration = 560000 - _load_metrics('main', 32000, 1024, - 320, 64, 50, 14000, 'full_train', 'MobileNetV2', 'clip_bce', 'balanced', 'mixup', 32, iteration) - - elif select == 'resnet18': - iteration = 600000 - _load_metrics('main', 32000, 1024, - 320, 64, 50, 14000, 'full_train', 'ResNet18', 'clip_bce', 'balanced', 'mixup', 32, iteration) - - elif select == 'resnet34': - iteration = 600000 - _load_metrics('main', 32000, 1024, - 320, 64, 50, 14000, 'full_train', 'ResNet34', 'clip_bce', 'balanced', 'mixup', 32, iteration) - - elif select == 'resnet50': - iteration = 600000 - _load_metrics('main', 32000, 1024, - 320, 64, 50, 14000, 'full_train', 'ResNet50', 'clip_bce', 'balanced', 'mixup', 32, iteration) - - elif select == 'dainet': - iteration = 600000 - _load_metrics('main', 32000, 1024, - 320, 64, 50, 14000, 'full_train', 'Cnn1d_DaiNet', 'clip_bce', 'balanced', 'mixup', 32, iteration) - - elif select == 'leenet': - iteration = 540000 - _load_metrics('main', 32000, 1024, - 320, 64, 50, 14000, 'full_train', 'Cnn1d_LeeNet', 'clip_bce', 'balanced', 'mixup', 32, iteration) - - elif select == 'leenet18': - iteration = 440000 - _load_metrics('main', 32000, 1024, - 320, 64, 50, 14000, 'full_train', 'Cnn1d_LeeNet18', 'clip_bce', 'balanced', 'mixup', 32, iteration) - - elif select == 'resnet34_1d': - iteration = 500000 - _load_metrics('main', 32000, 1024, - 320, 64, 50, 14000, 'full_train', 'Cnn1d_ResNet34', 'clip_bce', 'balanced', 'mixup', 32, iteration) - - elif select == 'resnet50_1d': - iteration = 500000 - _load_metrics('main', 32000, 1024, - 320, 64, 50, 14000, 'full_train', 'Cnn1d_ResNet50', 'clip_bce', 'balanced', 'mixup', 32, iteration) - - elif select == 'waveform_cnn2d': - iteration = 660000 - _load_metrics('main', 32000, 1024, - 320, 64, 50, 14000, 'full_train', 'Cnn13_WavCnn2d', 'clip_bce', 'balanced', 'mixup', 32, iteration) - - elif select == 'waveform_spandwav': - iteration = 700000 - _load_metrics('main', 32000, 1024, - 320, 64, 50, 14000, 'full_train', 'Cnn13_SpAndWav', 'clip_bce', 'balanced', 'mixup', 32, iteration) - - -def crop_label(label): - max_len = 16 - if len(label) <= max_len: - return label - else: - words = 
label.split(' ') - cropped_label = '' - for w in words: - if len(cropped_label + ' ' + w) > max_len: - break - else: - cropped_label += ' {}'.format(w) - return cropped_label - -def add_comma(integer): - integer = int(integer) - if integer >= 1000: - return str(integer // 1000) + ',' + str(integer % 1000) - else: - return str(integer) - - -def plot_class_iteration(args): - - # Arguments & parameters - workspace = args.workspace - select = args.select - - save_out_path = 'results_map/class_iteration_map.pdf' - create_folder(os.path.dirname(save_out_path)) - - def _load_metrics(filename, sample_rate, window_size, hop_size, mel_bins, fmin, - fmax, data_type, model_type, loss_type, balanced, augmentation, batch_size, iteration): - statistics_path = os.path.join(workspace, 'statistics', filename, - 'sample_rate={},window_size={},hop_size={},mel_bins={},fmin={},fmax={}'.format( - sample_rate, window_size, hop_size, mel_bins, fmin, fmax), - 'data_type={}'.format(data_type), model_type, - 'loss_type={}'.format(loss_type), 'balanced={}'.format(balanced), - 'augmentation={}'.format(augmentation), 'batch_size={}'.format(batch_size), - 'statistics.pkl') - - statistics_dict = cPickle.load(open(statistics_path, 'rb')) - return statistics_dict - - iteration = 600000 - statistics_dict = _load_metrics('main', 32000, 1024, - 320, 64, 50, 14000, 'full_train', 'Cnn13', 'clip_bce', 'balanced', 'mixup', 32, iteration) - - mAP_mat = np.array([e['average_precision'] for e in statistics_dict['test']]) - mAP_mat = mAP_mat[0 : 300, :] - sorted_indexes = np.argsort(config.full_samples_per_class)[::-1] - - - fig, axs = plt.subplots(1, 3, figsize=(20, 5)) - ranges = [np.arange(0, 10), np.arange(250, 260), np.arange(517, 527)] - axs[0].set_ylabel('AP') - - for col in range(0, 3): - axs[col].set_ylim(0, 1.) 
- axs[col].set_xlim(0, 301) - axs[col].set_xlabel('Iterations') - axs[col].set_ylabel('AP') - axs[col].xaxis.set_ticks(np.arange(0, 301, 100)) - axs[col].xaxis.set_ticklabels(['0', '200k', '400k', '600k']) - lines = [] - for _ix in ranges[col]: - _label = crop_label(config.labels[sorted_indexes[_ix]]) + \ - ' ({})'.format(add_comma(config.full_samples_per_class[sorted_indexes[_ix]])) - line, = axs[col].plot(mAP_mat[:, sorted_indexes[_ix]], label=_label) - lines.append(line) - box = axs[col].get_position() - axs[col].set_position([box.x0, box.y0, box.width * 1., box.height]) - axs[col].legend(handles=lines, bbox_to_anchor=(1., 1.)) - axs[col].yaxis.grid(color='k', linestyle='solid', alpha=0.3, linewidth=0.3) - - plt.tight_layout(pad=4, w_pad=1, h_pad=1) - plt.savefig(save_out_path) - print(save_out_path) - - -def _load_old_metrics(workspace, filename, iteration, data_type): - - assert data_type in ['train', 'test'] - - stat_name = "stat_{}_iters.p".format(iteration) - - # Load stats - stat_path = os.path.join(workspace, "stats", filename, data_type, stat_name) - try: - stats = cPickle.load(open(stat_path, 'rb')) - except: - stats = cPickle.load(open(stat_path, 'rb'), encoding='latin1') - - precisions = [stat['precisions'] for stat in stats] - recalls = [stat['recalls'] for stat in stats] - maps = np.array([stat['AP'] for stat in stats]) - aucs = np.array([stat['auc'] for stat in stats]) - - return {'average_precision': maps, 'AUC': aucs} - -def _sort(ys): - sorted_idxes = np.argsort(ys) - sorted_idxes = sorted_idxes[::-1] - sorted_ys = ys[sorted_idxes] - sorted_lbs = [config.labels[e] for e in sorted_idxes] - return sorted_ys, sorted_idxes, sorted_lbs - -def load_data(hdf5_path): - with h5py.File(hdf5_path, 'r') as hf: - x = hf['x'][:] - y = hf['y'][:] - video_id_list = list(hf['video_id_list'][:]) - return x, y, video_id_list - -def get_avg_stats(workspace, bgn_iter, fin_iter, interval_iter, filename, data_type): - - assert data_type in ['train', 'test'] - bal_train_hdf5 = "/vol/vssp/msos/audioset/packed_features/bal_train.h5" - eval_hdf5 = "/vol/vssp/msos/audioset/packed_features/eval.h5" - unbal_train_hdf5 = "/vol/vssp/msos/audioset/packed_features/unbal_train.h5" - - t1 = time.time() - if data_type == 'test': - (te_x, te_y, te_id_list) = load_data(eval_hdf5) - elif data_type == 'train': - (te_x, te_y, te_id_list) = load_data(bal_train_hdf5) - y = te_y - - prob_dir = os.path.join(workspace, "probs", filename, data_type) - names = os.listdir(prob_dir) - - probs = [] - iters = range(bgn_iter, fin_iter, interval_iter) - for iter in iters: - pickle_path = os.path.join(prob_dir, "prob_%d_iters.p" % iter) - try: - prob = cPickle.load(open(pickle_path, 'rb')) - except: - prob = cPickle.load(open(pickle_path, 'rb'), encoding='latin1') - probs.append(prob) - - avg_prob = np.mean(np.array(probs), axis=0) - - n_out = y.shape[1] - stats = [] - for k in range(n_out): # around 7 seconds - (precisions, recalls, thresholds) = metrics.precision_recall_curve(y[:, k], avg_prob[:, k]) - avg_precision = metrics.average_precision_score(y[:, k], avg_prob[:, k], average=None) - (fpr, tpr, thresholds) = metrics.roc_curve(y[:, k], avg_prob[:, k]) - auc = metrics.roc_auc_score(y[:, k], avg_prob[:, k], average=None) - # eer = pp_data.eer(avg_prob[:, k], y[:, k]) - - skip = 1000 - dict = {'precisions': precisions[0::skip], 'recalls': recalls[0::skip], 'AP': avg_precision, - 'fpr': fpr[0::skip], 'fnr': 1. 
- tpr[0::skip], 'auc': auc} - - stats.append(dict) - - mAPs = np.array([e['AP'] for e in stats]) - aucs = np.array([e['auc'] for e in stats]) - - print("Get avg time: {}".format(time.time() - t1)) - - return {'average_precision': mAPs, 'auc': aucs} - - -def _samples_num_per_class(): - bal_train_hdf5 = "/vol/vssp/msos/audioset/packed_features/bal_train.h5" - eval_hdf5 = "/vol/vssp/msos/audioset/packed_features/eval.h5" - unbal_train_hdf5 = "/vol/vssp/msos/audioset/packed_features/unbal_train.h5" - - (x, y, id_list) = load_data(eval_hdf5) - eval_num = np.sum(y, axis=0) - - (x, y, id_list) = load_data(bal_train_hdf5) - bal_num = np.sum(y, axis=0) - - (x, y, id_list) = load_data(unbal_train_hdf5) - unbal_num = np.sum(y, axis=0) - - return bal_num, unbal_num, eval_num - - -def get_label_quality(): - - rate_csv = '/vol/vssp/msos/qk/workspaces/pub_audioset_tagging_cnn_transfer/metadata/qa_true_counts.csv' - - with open(rate_csv, 'r') as f: - reader = csv.reader(f, delimiter=',') - lis = list(reader) - - rates = [] - - for n in range(1, len(lis)): - li = lis[n] - if float(li[1]) == 0: - rate = None - else: - rate = float(li[2]) / float(li[1]) - rates.append(rate) - - return rates - - -def summary_stats(args): - # Arguments & parameters - workspace = args.workspace - - out_stat_path = os.path.join(workspace, 'results', 'stats_for_paper.pkl') - create_folder(os.path.dirname(out_stat_path)) - - # Old workspace - old_workspace = '/vol/vssp/msos/qk/workspaces/audioset_classification' - - # bal_train_metrics = _load_old_metrics(old_workspace, 'tmp127', 20000, 'train') - # eval_metrics = _load_old_metrics(old_workspace, 'tmp127', 20000, 'test') - - bal_train_metrics = get_avg_stats(old_workspace, bgn_iter=10000, fin_iter=50001, interval_iter=5000, filename='tmp127_re', data_type='train') - eval_metrics = get_avg_stats(old_workspace, bgn_iter=10000, fin_iter=50001, interval_iter=5000, filename='tmp127_re', data_type='test') - - maps0te = eval_metrics['average_precision'] - (maps0te, sorted_idxes, sorted_lbs) = _sort(maps0te) - - bal_num, unbal_num, eval_num = _samples_num_per_class() - - output_dict = { - 'labels': config.labels, - 'label_quality': get_label_quality(), - 'sorted_indexes_for_plot': sorted_idxes, - 'official_balanced_trainig_samples': bal_num, - 'official_unbalanced_training_samples': unbal_num, - 'official_eval_samples': eval_num, - 'downloaded_full_training_samples': config.full_samples_per_class, - 'averaging_instance_system_avg_9_probs_from_10000_to_50000_iterations': - {'bal_train': bal_train_metrics, 'eval': eval_metrics} - } - - def _load_metrics(filename, sample_rate, window_size, hop_size, mel_bins, fmin, - fmax, data_type, model_type, loss_type, balanced, augmentation, batch_size, iteration): - _workspace = '/vol/vssp/msos/qk/bytedance/workspaces_important/pub_audioset_tagging_cnn_transfer' - statistics_path = os.path.join(_workspace, 'statistics', filename, - 'sample_rate={},window_size={},hop_size={},mel_bins={},fmin={},fmax={}'.format( - sample_rate, window_size, hop_size, mel_bins, fmin, fmax), - 'data_type={}'.format(data_type), model_type, - 'loss_type={}'.format(loss_type), 'balanced={}'.format(balanced), - 'augmentation={}'.format(augmentation), 'batch_size={}'.format(batch_size), - 'statistics.pkl') - - statistics_dict = cPickle.load(open(statistics_path, 'rb')) - - _idx = iteration // 2000 - _dict = {'bal_train': {'average_precision': statistics_dict['bal'][_idx]['average_precision'], - 'auc': statistics_dict['bal'][_idx]['auc']}, - 'eval': {'average_precision': 
statistics_dict['test'][_idx]['average_precision'], - 'auc': statistics_dict['test'][_idx]['auc']}} - return _dict - - iteration = 600000 - output_dict['cnn13_system_iteration60k'] = _load_metrics('main', 32000, 1024, - 320, 64, 50, 14000, 'full_train', 'Cnn13', 'clip_bce', 'balanced', 'mixup', 32, iteration) - - iteration = 560000 - output_dict['mobilenetv1_system_iteration56k'] = _load_metrics('main', 32000, 1024, - 320, 64, 50, 14000, 'full_train', 'MobileNetV1', 'clip_bce', 'balanced', 'mixup', 32, iteration) - - cPickle.dump(output_dict, open(out_stat_path, 'wb')) - print('Write stats for paper to {}'.format(out_stat_path)) - - -def prepare_plot_long_4_rows(sorted_lbs): - N = len(sorted_lbs) - - f,(ax1a, ax2a, ax3a, ax4a) = plt.subplots(4, 1,sharey=False, facecolor='w', figsize=(10, 12)) - - fontsize = 5 - - K = 132 - ax1a.set_xlim(0, K) - ax2a.set_xlim(K, 2 * K) - ax3a.set_xlim(2 * K, 3 * K) - ax4a.set_xlim(3 * K, N) - - truncated_sorted_lbs = [] - for lb in sorted_lbs: - lb = lb[0 : 25] - words = lb.split(' ') - if len(words[-1]) < 3: - lb = ' '.join(words[0:-1]) - truncated_sorted_lbs.append(lb) - - ax1a.grid(which='major', axis='x', linestyle='-', alpha=0.3) - ax2a.grid(which='major', axis='x', linestyle='-', alpha=0.3) - ax3a.grid(which='major', axis='x', linestyle='-', alpha=0.3) - ax4a.grid(which='major', axis='x', linestyle='-', alpha=0.3) - - ax1a.set_yscale('log') - ax2a.set_yscale('log') - ax3a.set_yscale('log') - ax4a.set_yscale('log') - - ax1b = ax1a.twinx() - ax2b = ax2a.twinx() - ax3b = ax3a.twinx() - ax4b = ax4a.twinx() - ax1b.set_ylim(0., 1.) - ax2b.set_ylim(0., 1.) - ax3b.set_ylim(0., 1.) - ax4b.set_ylim(0., 1.) - ax1b.set_ylabel('Average precision') - ax2b.set_ylabel('Average precision') - ax3b.set_ylabel('Average precision') - ax4b.set_ylabel('Average precision') - - ax1b.yaxis.grid(color='grey', linestyle='--', alpha=0.5) - ax2b.yaxis.grid(color='grey', linestyle='--', alpha=0.5) - ax3b.yaxis.grid(color='grey', linestyle='--', alpha=0.5) - ax4b.yaxis.grid(color='grey', linestyle='--', alpha=0.5) - - ax1a.xaxis.set_ticks(np.arange(K)) - ax1a.xaxis.set_ticklabels(truncated_sorted_lbs[0:K], rotation=90, fontsize=fontsize) - ax1a.xaxis.tick_bottom() - ax1a.set_ylabel("Number of audio clips") - - ax2a.xaxis.set_ticks(np.arange(K, 2*K)) - ax2a.xaxis.set_ticklabels(truncated_sorted_lbs[K:2*K], rotation=90, fontsize=fontsize) - ax2a.xaxis.tick_bottom() - # ax2a.tick_params(left='off', which='both') - ax2a.set_ylabel("Number of audio clips") - - ax3a.xaxis.set_ticks(np.arange(2*K, 3*K)) - ax3a.xaxis.set_ticklabels(truncated_sorted_lbs[2*K:3*K], rotation=90, fontsize=fontsize) - ax3a.xaxis.tick_bottom() - ax3a.set_ylabel("Number of audio clips") - - ax4a.xaxis.set_ticks(np.arange(3*K, N)) - ax4a.xaxis.set_ticklabels(truncated_sorted_lbs[3*K:], rotation=90, fontsize=fontsize) - ax4a.xaxis.tick_bottom() - # ax4a.tick_params(left='off', which='both') - ax4a.set_ylabel("Number of audio clips") - - ax1a.spines['right'].set_visible(False) - ax1b.spines['right'].set_visible(False) - ax2a.spines['left'].set_visible(False) - ax2b.spines['left'].set_visible(False) - ax2a.spines['right'].set_visible(False) - ax2b.spines['right'].set_visible(False) - ax3a.spines['left'].set_visible(False) - ax3b.spines['left'].set_visible(False) - ax3a.spines['right'].set_visible(False) - ax3b.spines['right'].set_visible(False) - ax4a.spines['left'].set_visible(False) - ax4b.spines['left'].set_visible(False) - - plt.subplots_adjust(hspace = 0.8) - - return ax1a, ax2a, ax3a, ax4a, ax1b, ax2b, ax3b, ax4b 
- -def _scatter_4_rows(x, ax, ax2, ax3, ax4, s, c, marker='.', alpha=1.): - N = len(x) - ax.scatter(np.arange(N), x, s=s, c=c, marker=marker, alpha=alpha) - ax2.scatter(np.arange(N), x, s=s, c=c, marker=marker, alpha=alpha) - ax3.scatter(np.arange(N), x, s=s, c=c, marker=marker, alpha=alpha) - ax4.scatter(np.arange(N), x, s=s, c=c, marker=marker, alpha=alpha) - -def _plot_4_rows(x, ax, ax2, ax3, ax4, c, linewidth=1.0, alpha=1.0, label=""): - N = len(x) - ax.plot(x, c=c, linewidth=linewidth, alpha=alpha) - ax2.plot(x, c=c, linewidth=linewidth, alpha=alpha) - ax3.plot(x, c=c, linewidth=linewidth, alpha=alpha) - line, = ax4.plot(x, c=c, linewidth=linewidth, alpha=alpha, label=label) - return line - -def plot_long_fig(args): - # Arguments & parameters - workspace = args.workspace - - # Paths - stat_path = os.path.join(workspace, 'results', 'stats_for_paper.pkl') - save_out_path = 'results/long_fig.pdf' - create_folder(os.path.dirname(save_out_path)) - - # Stats - stats = cPickle.load(open(stat_path, 'rb')) - - N = len(config.labels) - sorted_indexes = stats['sorted_indexes_for_plot'] - sorted_labels = np.array(config.labels)[sorted_indexes] - audio_clips_per_class = stats['official_balanced_trainig_samples'] + stats['official_unbalanced_training_samples'] - audio_clips_per_class = audio_clips_per_class[sorted_indexes] - - (ax1a, ax2a, ax3a, ax4a, ax1b, ax2b, ax3b, ax4b) = prepare_plot_long_4_rows(sorted_labels) - - # plot the same data on both axes - ax1a.bar(np.arange(N), audio_clips_per_class, alpha=0.3) - ax2a.bar(np.arange(N), audio_clips_per_class, alpha=0.3) - ax3a.bar(np.arange(N), audio_clips_per_class, alpha=0.3) - ax4a.bar(np.arange(N), audio_clips_per_class, alpha=0.3) - - maps_avg_instances = stats['averaging_instance_system_avg_9_probs_from_10000_to_50000_iterations']['eval']['average_precision'] - maps_avg_instances = maps_avg_instances[sorted_indexes] - - maps_cnn13 = stats['cnn13_system_iteration60k']['eval']['average_precision'] - maps_cnn13 = maps_cnn13[sorted_indexes] - - maps_mobilenetv1 = stats['mobilenetv1_system_iteration56k']['eval']['average_precision'] - maps_mobilenetv1 = maps_mobilenetv1[sorted_indexes] - - maps_logmel_wavegram_cnn = _load_metrics0_classwise('main', 32000, 1024, - 320, 64, 50, 14000, 'full_train', 'Cnn13_SpAndWav', 'clip_bce', 'balanced', 'mixup', 32) - maps_logmel_wavegram_cnn = maps_logmel_wavegram_cnn[sorted_indexes] - - _scatter_4_rows(maps_avg_instances, ax1b, ax2b, ax3b, ax4b, s=5, c='k') - _scatter_4_rows(maps_cnn13, ax1b, ax2b, ax3b, ax4b, s=5, c='r') - _scatter_4_rows(maps_mobilenetv1, ax1b, ax2b, ax3b, ax4b, s=5, c='b') - _scatter_4_rows(maps_logmel_wavegram_cnn, ax1b, ax2b, ax3b, ax4b, s=5, c='g') - - linewidth = 0.7 - line0te = _plot_4_rows(maps_avg_instances, ax1b, ax2b, ax3b, ax4b, c='k', linewidth=linewidth, label='AP with averaging instances (baseline)') - line1te = _plot_4_rows(maps_cnn13, ax1b, ax2b, ax3b, ax4b, c='r', linewidth=linewidth, label='AP with CNN14') - line2te = _plot_4_rows(maps_mobilenetv1, ax1b, ax2b, ax3b, ax4b, c='b', linewidth=linewidth, label='AP with MobileNetV1') - line3te = _plot_4_rows(maps_logmel_wavegram_cnn, ax1b, ax2b, ax3b, ax4b, c='g', linewidth=linewidth, label='AP with Wavegram-Logmel-CNN') - - label_quality = stats['label_quality'] - sorted_rate = np.array(label_quality)[sorted_indexes] - for k in range(len(sorted_rate)): - if sorted_rate[k] and sorted_rate[k] == 1: - sorted_rate[k] = 0.99 - - ax1b.scatter(np.arange(N)[sorted_rate != None], sorted_rate[sorted_rate != None], s=12, c='r', 
linewidth=0.8, marker='+') - ax2b.scatter(np.arange(N)[sorted_rate != None], sorted_rate[sorted_rate != None], s=12, c='r', linewidth=0.8, marker='+') - ax3b.scatter(np.arange(N)[sorted_rate != None], sorted_rate[sorted_rate != None], s=12, c='r', linewidth=0.8, marker='+') - line_label_quality = ax4b.scatter(np.arange(N)[sorted_rate != None], sorted_rate[sorted_rate != None], s=12, c='r', linewidth=0.8, marker='+', label='Label quality') - ax1b.scatter(np.arange(N)[sorted_rate == None], 0.5 * np.ones(len(np.arange(N)[sorted_rate == None])), s=12, c='r', linewidth=0.8, marker='_') - ax2b.scatter(np.arange(N)[sorted_rate == None], 0.5 * np.ones(len(np.arange(N)[sorted_rate == None])), s=12, c='r', linewidth=0.8, marker='_') - ax3b.scatter(np.arange(N)[sorted_rate == None], 0.5 * np.ones(len(np.arange(N)[sorted_rate == None])), s=12, c='r', linewidth=0.8, marker='_') - ax4b.scatter(np.arange(N)[sorted_rate == None], 0.5 * np.ones(len(np.arange(N)[sorted_rate == None])), s=12, c='r', linewidth=0.8, marker='_') - - plt.legend(handles=[line0te, line1te, line2te, line3te, line_label_quality], fontsize=6, loc=1) - - plt.savefig(save_out_path) - print('Save fig to {}'.format(save_out_path)) - -def plot_flops(args): - - # Arguments & parameters - workspace = args.workspace - - # Paths - save_out_path = 'results_map/flops.pdf' - create_folder(os.path.dirname(save_out_path)) - - plt.figure(figsize=(5, 5)) - fig, ax = plt.subplots(1, 1) - - model_types = np.array(['Cnn6', 'Cnn10', 'Cnn14', 'ResNet22', 'ResNet38', 'ResNet54', - 'MobileNetV1', 'MobileNetV2', 'DaiNet', 'LeeNet', 'LeeNet18', - 'Res1dNet30', 'Res1dNet44', 'Wavegram-CNN', 'Wavegram-\nLogmel-CNN']) - flops = np.array([21.986, 21.986, 42.220, 30.081, 48.962, 54.563, 3.614, 2.810, - 30.395, 4.741, 26.369, 32.688, 61.833, 44.234, 53.510]) - mAPs = np.array([0.343, 0.380, 0.431, 0.430, 0.434, 0.429, 0.389, 0.383, 0.295, - 0.266, 0.336, 0.365, 0.355, 0.389, 0.439]) - - sorted_indexes = np.sort(flops) - ax.scatter(flops, mAPs) - - shift = [[1, 0.002], [1, -0.006], [-1, -0.014], [-2, 0.006], [-7, 0.006], - [1, -0.01], [0.5, 0.004], [-1, -0.014], [1, -0.007], [0.8, -0.008], - [1, -0.007], [1, 0.002], [-6, -0.015], [1, -0.008], [0.8, 0]] - - for i, model_type in enumerate(model_types): - ax.annotate(model_type, (flops[i] + shift[i][0], mAPs[i] + shift[i][1])) - - ax.plot(flops[[0, 1, 2]], mAPs[[0, 1, 2]]) - ax.plot(flops[[3, 4, 5]], mAPs[[3, 4, 5]]) - ax.plot(flops[[6, 7]], mAPs[[6, 7]]) - ax.plot(flops[[9, 10]], mAPs[[9, 10]]) - ax.plot(flops[[11, 12]], mAPs[[11, 12]]) - ax.plot(flops[[13, 14]], mAPs[[13, 14]]) - - ax.set_xlim(0, 70) - ax.set_ylim(0.2, 0.5) - ax.set_xlabel('Multi-adds (million)') - ax.set_ylabel('mAP') - - plt.tight_layout(0, 0, 0) - - plt.savefig(save_out_path) - print('Write out figure to {}'.format(save_out_path)) - - -def spearman(args): - - # Arguments & parameters - workspace = args.workspace - - # Paths - stat_path = os.path.join(workspace, 'results', 'stats_for_paper.pkl') - - # Stats - stats = cPickle.load(open(stat_path, 'rb')) - - label_quality = np.array([qu if qu else 0.5 for qu in stats['label_quality']]) - training_samples = np.array(stats['official_balanced_trainig_samples']) + \ - np.array(stats['official_unbalanced_training_samples']) - mAP = stats['averaging_instance_system_avg_9_probs_from_10000_to_50000_iterations']['eval']['average_precision'] - - import scipy - samples_spearman = scipy.stats.spearmanr(training_samples, mAP)[0] - quality_spearman = scipy.stats.spearmanr(label_quality, mAP)[0] - - 
print('Training samples spearman: {:.3f}'.format(samples_spearman)) - print('Quality spearman: {:.3f}'.format(quality_spearman)) - - -def print_results(args): - - (mAP, mAUC, dprime) = _load_metrics_classwise('main', 32000, 1024, 320, 64, 50, 14000, 'full_train', 'Cnn14', 'clip_bce', 'balanced', 'mixup', 32) - - (mAP, mAUC, dprime) = _load_metrics_classwise('main', 32000, 1024, 320, 64, 50, 14000, 'full_train', 'Cnn14_mixup_time_domain', 'clip_bce', 'balanced', 'mixup', 32) - - (mAP, mAUC, dprime) = _load_metrics_classwise('main', 32000, 1024, 320, 64, 50, 14000, 'full_train', 'Cnn14', 'clip_bce', 'balanced', 'none', 32) - - (mAP, mAUC, dprime) = _load_metrics_classwise('main', 32000, 1024, 320, 64, 50, 14000, 'full_train', 'Cnn14', 'clip_bce', 'none', 'none', 32) - - (mAP, mAUC, dprime) = _load_metrics_classwise('main', 32000, 1024, 320, 64, 50, 14000, 'balanced_train', 'Cnn14', 'clip_bce', 'none', 'none', 32) - - (mAP, mAUC, dprime) = _load_metrics_classwise('main', 32000, 1024, 320, 64, 50, 14000, 'balanced_train', 'Cnn14', 'clip_bce', 'balanced', 'none', 32) - - (mAP, mAUC, dprime) = _load_metrics_classwise('main', 32000, 1024, 320, 64, 50, 14000, 'balanced_train', 'Cnn14', 'clip_bce', 'balanced', 'mixup', 32) - - # - (mAP, mAUC, dprime) = _load_metrics0_classwise2('main', 32000, 1024, 320, 64, 50, 14000, 'full_train', 'Cnn13_emb32', 'clip_bce', 'balanced', 'mixup', 32) - - (mAP, mAUC, dprime) = _load_metrics0_classwise2('main', 32000, 1024, 320, 64, 50, 14000, 'full_train', 'Cnn13_emb128', 'clip_bce', 'balanced', 'mixup', 32) - - # partial - (mAP, mAUC, dprime) = _load_metrics_classwise('main', 32000, 1024, 320, 64, 50, 14000, 'partial_0.8_full_train', 'Cnn14', 'clip_bce', 'balanced', 'mixup', 32) - - (mAP, mAUC, dprime) = _load_metrics_classwise('main', 32000, 1024, 320, 64, 50, 14000, 'partial_0.5_full_train', 'Cnn14', 'clip_bce', 'balanced', 'mixup', 32) - - # Sample rate - (mAP, mAUC, dprime) = _load_metrics_classwise('main', 32000, 1024, 320, 64, 50, 14000, 'full_train', 'Cnn14_16k', 'clip_bce', 'balanced', 'mixup', 32) - - (mAP, mAUC, dprime) = _load_metrics_classwise('main', 32000, 1024, 320, 64, 50, 14000, 'full_train', 'Cnn14_8k', 'clip_bce', 'balanced', 'mixup', 32) - - # Mel bins - (mAP, mAUC, dprime) = _load_metrics_classwise('main', 32000, 1024, 320, 128, 50, 14000, 'full_train', 'Cnn14_mel128', 'clip_bce', 'balanced', 'mixup', 32) - - (mAP, mAUC, dprime) = _load_metrics_classwise('main', 32000, 1024, 320, 32, 50, 14000, 'full_train', 'Cnn14_mel32', 'clip_bce', 'balanced', 'mixup', 32) - - import crash - asdf - -if __name__ == '__main__': - - parser = argparse.ArgumentParser(description='') - subparsers = parser.add_subparsers(dest='mode') - - parser_plot = subparsers.add_parser('plot') - parser_plot.add_argument('--dataset_dir', type=str, required=True) - parser_plot.add_argument('--workspace', type=str, required=True) - parser_plot.add_argument('--select', type=str, required=True) - - parser_plot = subparsers.add_parser('plot_for_paper') - parser_plot.add_argument('--dataset_dir', type=str, required=True) - parser_plot.add_argument('--workspace', type=str, required=True) - parser_plot.add_argument('--select', type=str, required=True) - - parser_plot = subparsers.add_parser('plot_for_paper2') - parser_plot.add_argument('--dataset_dir', type=str, required=True) - parser_plot.add_argument('--workspace', type=str, required=True) - - parser_values = subparsers.add_parser('plot_class_iteration') - parser_values.add_argument('--workspace', type=str, required=True) - 
parser_values.add_argument('--select', type=str, required=True) - - parser_summary_stats = subparsers.add_parser('summary_stats') - parser_summary_stats.add_argument('--workspace', type=str, required=True) - - parser_plot_long = subparsers.add_parser('plot_long_fig') - parser_plot_long.add_argument('--workspace', type=str, required=True) - - parser_plot_flops = subparsers.add_parser('plot_flops') - parser_plot_flops.add_argument('--workspace', type=str, required=True) - - parser_spearman = subparsers.add_parser('spearman') - parser_spearman.add_argument('--workspace', type=str, required=True) - - parser_print = subparsers.add_parser('print') - parser_print.add_argument('--workspace', type=str, required=True) - - args = parser.parse_args() - - if args.mode == 'plot': - plot(args) - - elif args.mode == 'plot_for_paper': - plot_for_paper(args) - - elif args.mode == 'plot_for_paper2': - plot_for_paper2(args) - - elif args.mode == 'table_values': - table_values(args) - - elif args.mode == 'plot_class_iteration': - plot_class_iteration(args) - - elif args.mode == 'summary_stats': - summary_stats(args) - - elif args.mode == 'plot_long_fig': - plot_long_fig(args) - - elif args.mode == 'plot_flops': - plot_flops(args) - - elif args.mode == 'spearman': - spearman(args) - - elif args.mode == 'print': - print_results(args) - - else: - raise Exception('Error argument!') \ No newline at end of file diff --git a/spaces/AIGC-Audio/AudioGPT/text_to_speech/modules/tts/commons/align_ops.py b/spaces/AIGC-Audio/AudioGPT/text_to_speech/modules/tts/commons/align_ops.py deleted file mode 100644 index a190d63a3f3ba31f41754975569336a87c63089d..0000000000000000000000000000000000000000 --- a/spaces/AIGC-Audio/AudioGPT/text_to_speech/modules/tts/commons/align_ops.py +++ /dev/null @@ -1,25 +0,0 @@ -import torch -import torch.nn.functional as F - - -def build_word_mask(x2word, y2word): - return (x2word[:, :, None] == y2word[:, None, :]).long() - - -def mel2ph_to_mel2word(mel2ph, ph2word): - mel2word = (ph2word - 1).gather(1, (mel2ph - 1).clamp(min=0)) + 1 - mel2word = mel2word * (mel2ph > 0).long() - return mel2word - - -def clip_mel2token_to_multiple(mel2token, frames_multiple): - max_frames = mel2token.shape[1] // frames_multiple * frames_multiple - mel2token = mel2token[:, :max_frames] - return mel2token - - -def expand_states(h, mel2token): - h = F.pad(h, [0, 0, 1, 0]) - mel2token_ = mel2token[..., None].repeat([1, 1, h.shape[-1]]) - h = torch.gather(h, 1, mel2token_) # [B, T, H] - return h diff --git a/spaces/AIGC-Audio/AudioGPT/text_to_speech/modules/vocoder/parallel_wavegan/layers/residual_stack.py b/spaces/AIGC-Audio/AudioGPT/text_to_speech/modules/vocoder/parallel_wavegan/layers/residual_stack.py deleted file mode 100644 index 6e07c8803ad348dd923f6b7c0f7aff14aab9cf78..0000000000000000000000000000000000000000 --- a/spaces/AIGC-Audio/AudioGPT/text_to_speech/modules/vocoder/parallel_wavegan/layers/residual_stack.py +++ /dev/null @@ -1,75 +0,0 @@ -# -*- coding: utf-8 -*- - -# Copyright 2020 Tomoki Hayashi -# MIT License (https://opensource.org/licenses/MIT) - -"""Residual stack module in MelGAN.""" - -import torch - -from . import CausalConv1d - - -class ResidualStack(torch.nn.Module): - """Residual stack module introduced in MelGAN.""" - - def __init__(self, - kernel_size=3, - channels=32, - dilation=1, - bias=True, - nonlinear_activation="LeakyReLU", - nonlinear_activation_params={"negative_slope": 0.2}, - pad="ReflectionPad1d", - pad_params={}, - use_causal_conv=False, - ): - """Initialize ResidualStack module. 
- - Args: - kernel_size (int): Kernel size of dilation convolution layer. - channels (int): Number of channels of convolution layers. - dilation (int): Dilation factor. - bias (bool): Whether to add bias parameter in convolution layers. - nonlinear_activation (str): Activation function module name. - nonlinear_activation_params (dict): Hyperparameters for activation function. - pad (str): Padding function module name before dilated convolution layer. - pad_params (dict): Hyperparameters for padding function. - use_causal_conv (bool): Whether to use causal convolution. - - """ - super(ResidualStack, self).__init__() - - # defile residual stack part - if not use_causal_conv: - assert (kernel_size - 1) % 2 == 0, "Not support even number kernel size." - self.stack = torch.nn.Sequential( - getattr(torch.nn, nonlinear_activation)(**nonlinear_activation_params), - getattr(torch.nn, pad)((kernel_size - 1) // 2 * dilation, **pad_params), - torch.nn.Conv1d(channels, channels, kernel_size, dilation=dilation, bias=bias), - getattr(torch.nn, nonlinear_activation)(**nonlinear_activation_params), - torch.nn.Conv1d(channels, channels, 1, bias=bias), - ) - else: - self.stack = torch.nn.Sequential( - getattr(torch.nn, nonlinear_activation)(**nonlinear_activation_params), - CausalConv1d(channels, channels, kernel_size, dilation=dilation, - bias=bias, pad=pad, pad_params=pad_params), - getattr(torch.nn, nonlinear_activation)(**nonlinear_activation_params), - torch.nn.Conv1d(channels, channels, 1, bias=bias), - ) - - # defile extra layer for skip connection - self.skip_layer = torch.nn.Conv1d(channels, channels, 1, bias=bias) - - def forward(self, c): - """Calculate forward propagation. - - Args: - c (Tensor): Input tensor (B, channels, T). - - Returns: - Tensor: Output tensor (B, chennels, T). - - """ - return self.stack(c) + self.skip_layer(c) diff --git a/spaces/AIGC-Audio/AudioGPT/text_to_speech/modules/vocoder/parallel_wavegan/layers/upsample.py b/spaces/AIGC-Audio/AudioGPT/text_to_speech/modules/vocoder/parallel_wavegan/layers/upsample.py deleted file mode 100644 index 18c6397c420a81fadc5320e3a48f3249534decd8..0000000000000000000000000000000000000000 --- a/spaces/AIGC-Audio/AudioGPT/text_to_speech/modules/vocoder/parallel_wavegan/layers/upsample.py +++ /dev/null @@ -1,183 +0,0 @@ -# -*- coding: utf-8 -*- - -"""Upsampling module. - -This code is modified from https://github.com/r9y9/wavenet_vocoder. - -""" - -import numpy as np -import torch -import torch.nn.functional as F - -from . import Conv1d - - -class Stretch2d(torch.nn.Module): - """Stretch2d module.""" - - def __init__(self, x_scale, y_scale, mode="nearest"): - """Initialize Stretch2d module. - - Args: - x_scale (int): X scaling factor (Time axis in spectrogram). - y_scale (int): Y scaling factor (Frequency axis in spectrogram). - mode (str): Interpolation mode. - - """ - super(Stretch2d, self).__init__() - self.x_scale = x_scale - self.y_scale = y_scale - self.mode = mode - - def forward(self, x): - """Calculate forward propagation. - - Args: - x (Tensor): Input tensor (B, C, F, T). - - Returns: - Tensor: Interpolated tensor (B, C, F * y_scale, T * x_scale), - - """ - return F.interpolate( - x, scale_factor=(self.y_scale, self.x_scale), mode=self.mode) - - -class Conv2d(torch.nn.Conv2d): - """Conv2d module with customized initialization.""" - - def __init__(self, *args, **kwargs): - """Initialize Conv2d module.""" - super(Conv2d, self).__init__(*args, **kwargs) - - def reset_parameters(self): - """Reset parameters.""" - self.weight.data.fill_(1. 
/ np.prod(self.kernel_size)) - if self.bias is not None: - torch.nn.init.constant_(self.bias, 0.0) - - -class UpsampleNetwork(torch.nn.Module): - """Upsampling network module.""" - - def __init__(self, - upsample_scales, - nonlinear_activation=None, - nonlinear_activation_params={}, - interpolate_mode="nearest", - freq_axis_kernel_size=1, - use_causal_conv=False, - ): - """Initialize upsampling network module. - - Args: - upsample_scales (list): List of upsampling scales. - nonlinear_activation (str): Activation function name. - nonlinear_activation_params (dict): Arguments for specified activation function. - interpolate_mode (str): Interpolation mode. - freq_axis_kernel_size (int): Kernel size in the direction of frequency axis. - - """ - super(UpsampleNetwork, self).__init__() - self.use_causal_conv = use_causal_conv - self.up_layers = torch.nn.ModuleList() - for scale in upsample_scales: - # interpolation layer - stretch = Stretch2d(scale, 1, interpolate_mode) - self.up_layers += [stretch] - - # conv layer - assert (freq_axis_kernel_size - 1) % 2 == 0, "Not support even number freq axis kernel size." - freq_axis_padding = (freq_axis_kernel_size - 1) // 2 - kernel_size = (freq_axis_kernel_size, scale * 2 + 1) - if use_causal_conv: - padding = (freq_axis_padding, scale * 2) - else: - padding = (freq_axis_padding, scale) - conv = Conv2d(1, 1, kernel_size=kernel_size, padding=padding, bias=False) - self.up_layers += [conv] - - # nonlinear - if nonlinear_activation is not None: - nonlinear = getattr(torch.nn, nonlinear_activation)(**nonlinear_activation_params) - self.up_layers += [nonlinear] - - def forward(self, c): - """Calculate forward propagation. - - Args: - c : Input tensor (B, C, T). - - Returns: - Tensor: Upsampled tensor (B, C, T'), where T' = T * prod(upsample_scales). - - """ - c = c.unsqueeze(1) # (B, 1, C, T) - for f in self.up_layers: - if self.use_causal_conv and isinstance(f, Conv2d): - c = f(c)[..., :c.size(-1)] - else: - c = f(c) - return c.squeeze(1) # (B, C, T') - - -class ConvInUpsampleNetwork(torch.nn.Module): - """Convolution + upsampling network module.""" - - def __init__(self, - upsample_scales, - nonlinear_activation=None, - nonlinear_activation_params={}, - interpolate_mode="nearest", - freq_axis_kernel_size=1, - aux_channels=80, - aux_context_window=0, - use_causal_conv=False - ): - """Initialize convolution + upsampling network module. - - Args: - upsample_scales (list): List of upsampling scales. - nonlinear_activation (str): Activation function name. - nonlinear_activation_params (dict): Arguments for specified activation function. - mode (str): Interpolation mode. - freq_axis_kernel_size (int): Kernel size in the direction of frequency axis. - aux_channels (int): Number of channels of pre-convolutional layer. - aux_context_window (int): Context window size of the pre-convolutional layer. - use_causal_conv (bool): Whether to use causal structure. 
- - """ - super(ConvInUpsampleNetwork, self).__init__() - self.aux_context_window = aux_context_window - self.use_causal_conv = use_causal_conv and aux_context_window > 0 - # To capture wide-context information in conditional features - kernel_size = aux_context_window + 1 if use_causal_conv else 2 * aux_context_window + 1 - # NOTE(kan-bayashi): Here do not use padding because the input is already padded - self.conv_in = Conv1d(aux_channels, aux_channels, kernel_size=kernel_size, bias=False) - self.upsample = UpsampleNetwork( - upsample_scales=upsample_scales, - nonlinear_activation=nonlinear_activation, - nonlinear_activation_params=nonlinear_activation_params, - interpolate_mode=interpolate_mode, - freq_axis_kernel_size=freq_axis_kernel_size, - use_causal_conv=use_causal_conv, - ) - - def forward(self, c): - """Calculate forward propagation. - - Args: - c : Input tensor (B, C, T'). - - Returns: - Tensor: Upsampled tensor (B, C, T), - where T = (T' - aux_context_window * 2) * prod(upsample_scales). - - Note: - The length of inputs considers the context window size. - - """ - c_ = self.conv_in(c) - c = c_[:, :, :-self.aux_context_window] if self.use_causal_conv else c_ - return self.upsample(c) diff --git a/spaces/AIGText/GlyphControl/ldm/models/diffusion/sampling_util.py b/spaces/AIGText/GlyphControl/ldm/models/diffusion/sampling_util.py deleted file mode 100644 index 7eff02be6d7c54d43ee6680636ac0698dd3b3f33..0000000000000000000000000000000000000000 --- a/spaces/AIGText/GlyphControl/ldm/models/diffusion/sampling_util.py +++ /dev/null @@ -1,22 +0,0 @@ -import torch -import numpy as np - - -def append_dims(x, target_dims): - """Appends dimensions to the end of a tensor until it has target_dims dimensions. - From https://github.com/crowsonkb/k-diffusion/blob/master/k_diffusion/utils.py""" - dims_to_append = target_dims - x.ndim - if dims_to_append < 0: - raise ValueError(f'input has {x.ndim} dims but target_dims is {target_dims}, which is less') - return x[(...,) + (None,) * dims_to_append] - - -def norm_thresholding(x0, value): - s = append_dims(x0.pow(2).flatten(1).mean(1).sqrt().clamp(min=value), x0.ndim) - return x0 * (value / s) - - -def spatial_norm_thresholding(x0, value): - # b c h w - s = x0.pow(2).mean(1, keepdim=True).sqrt().clamp(min=value) - return x0 * (value / s) \ No newline at end of file diff --git a/spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_0_ClothesDetection/mmyolo/configs/yolov5/crowdhuman/yolov5_s-v61_8xb16-300e_ignore_crowdhuman.py b/spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_0_ClothesDetection/mmyolo/configs/yolov5/crowdhuman/yolov5_s-v61_8xb16-300e_ignore_crowdhuman.py deleted file mode 100644 index 90ba758a58a6168ee2c68086af28ae6a999bd739..0000000000000000000000000000000000000000 --- a/spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_0_ClothesDetection/mmyolo/configs/yolov5/crowdhuman/yolov5_s-v61_8xb16-300e_ignore_crowdhuman.py +++ /dev/null @@ -1,63 +0,0 @@ -_base_ = 'yolov5_s-v61_fast_8xb16-300e_crowdhuman.py' - -model = dict( - data_preprocessor=dict( - _delete_=True, - type='mmdet.DetDataPreprocessor', - mean=[0., 0., 0.], - std=[255., 255., 255.], - bgr_to_rgb=True), - bbox_head=dict(ignore_iof_thr=0.5)) - -img_scale = _base_.img_scale - -albu_train_transforms = [ - dict(type='Blur', p=0.01), - dict(type='MedianBlur', p=0.01), - dict(type='ToGray', p=0.01), - dict(type='CLAHE', p=0.01) -] - -pre_transform = [ - dict(type='LoadImageFromFile', file_client_args=_base_.file_client_args), - # only change this - 
dict(type='mmdet.LoadAnnotations', with_bbox=True) -] - -train_pipeline = [ - *pre_transform, - dict( - type='Mosaic', - img_scale=img_scale, - pad_val=114.0, - pre_transform=pre_transform), - dict( - type='YOLOv5RandomAffine', - max_rotate_degree=0.0, - max_shear_degree=0.0, - scaling_ratio_range=(0.5, 1.5), - # img_scale is (width, height) - border=(-img_scale[0] // 2, -img_scale[1] // 2), - border_val=(114, 114, 114)), - dict( - type='mmdet.Albu', - transforms=albu_train_transforms, - bbox_params=dict( - type='BboxParams', - format='pascal_voc', - label_fields=['gt_bboxes_labels', 'gt_ignore_flags']), - keymap={ - 'img': 'image', - 'gt_bboxes': 'bboxes' - }), - dict(type='YOLOv5HSVRandomAug'), - dict(type='mmdet.RandomFlip', prob=0.5), - dict( - type='mmdet.PackDetInputs', - meta_keys=('img_id', 'img_path', 'ori_shape', 'img_shape', 'flip', - 'flip_direction')) -] - -train_dataloader = dict( - collate_fn=dict(type='pseudo_collate'), - dataset=dict(pipeline=train_pipeline)) diff --git a/spaces/AchyuthGamer/OpenGPT/g4f/Provider/needs_auth/OpenaiChat.py b/spaces/AchyuthGamer/OpenGPT/g4f/Provider/needs_auth/OpenaiChat.py deleted file mode 100644 index c41909e340cffa45021acf97a33051abcb72f9db..0000000000000000000000000000000000000000 --- a/spaces/AchyuthGamer/OpenGPT/g4f/Provider/needs_auth/OpenaiChat.py +++ /dev/null @@ -1,125 +0,0 @@ -from __future__ import annotations - -import uuid, json, time - -from ..base_provider import AsyncGeneratorProvider -from ..helper import get_browser, get_cookies, format_prompt -from ...typing import AsyncGenerator -from ...requests import StreamSession - -class OpenaiChat(AsyncGeneratorProvider): - url = "https://chat.openai.com" - needs_auth = True - working = True - supports_gpt_35_turbo = True - _access_token = None - - @classmethod - async def create_async_generator( - cls, - model: str, - messages: list[dict[str, str]], - proxy: str = None, - access_token: str = None, - cookies: dict = None, - timeout: int = 30, - **kwargs: dict - ) -> AsyncGenerator: - proxies = {"https": proxy} - if not access_token: - access_token = await cls.get_access_token(cookies, proxies) - headers = { - "Accept": "text/event-stream", - "Authorization": f"Bearer {access_token}", - } - async with StreamSession(proxies=proxies, headers=headers, impersonate="chrome107", timeout=timeout) as session: - messages = [ - { - "id": str(uuid.uuid4()), - "author": {"role": "user"}, - "content": {"content_type": "text", "parts": [format_prompt(messages)]}, - }, - ] - data = { - "action": "next", - "messages": messages, - "conversation_id": None, - "parent_message_id": str(uuid.uuid4()), - "model": "text-davinci-002-render-sha", - "history_and_training_disabled": True, - } - async with session.post(f"{cls.url}/backend-api/conversation", json=data) as response: - response.raise_for_status() - last_message = "" - async for line in response.iter_lines(): - if line.startswith(b"data: "): - line = line[6:] - if line == b"[DONE]": - break - try: - line = json.loads(line) - except: - continue - if "message" not in line or "message_type" not in line["message"]["metadata"]: - continue - if line["message"]["metadata"]["message_type"] == "next": - new_message = line["message"]["content"]["parts"][0] - yield new_message[len(last_message):] - last_message = new_message - - @classmethod - def browse_access_token(cls) -> str: - try: - from selenium.webdriver.common.by import By - from selenium.webdriver.support.ui import WebDriverWait - from selenium.webdriver.support import expected_conditions as EC - - 
driver = get_browser() - except ImportError: - return - - driver.get(f"{cls.url}/") - try: - WebDriverWait(driver, 1200).until( - EC.presence_of_element_located((By.ID, "prompt-textarea")) - ) - javascript = "return (await (await fetch('/api/auth/session')).json())['accessToken']" - return driver.execute_script(javascript) - finally: - time.sleep(1) - driver.quit() - - @classmethod - async def fetch_access_token(cls, cookies: dict, proxies: dict = None) -> str: - async with StreamSession(proxies=proxies, cookies=cookies, impersonate="chrome107") as session: - async with session.get(f"{cls.url}/api/auth/session") as response: - response.raise_for_status() - auth = await response.json() - if "accessToken" in auth: - return auth["accessToken"] - - @classmethod - async def get_access_token(cls, cookies: dict = None, proxies: dict = None) -> str: - if not cls._access_token: - cookies = cookies if cookies else get_cookies("chat.openai.com") - if cookies: - cls._access_token = await cls.fetch_access_token(cookies, proxies) - if not cls._access_token: - cls._access_token = cls.browse_access_token() - if not cls._access_token: - raise RuntimeError("Read access token failed") - return cls._access_token - - @classmethod - @property - def params(cls): - params = [ - ("model", "str"), - ("messages", "list[dict[str, str]]"), - ("stream", "bool"), - ("proxy", "str"), - ("access_token", "str"), - ("cookies", "dict[str, str]") - ] - param = ", ".join([": ".join(p) for p in params]) - return f"g4f.provider.{cls.__name__} supports: ({param})" \ No newline at end of file diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/plugins/boids-plugin.js b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/plugins/boids-plugin.js deleted file mode 100644 index 5a26632e68f15aaf7645fd5cee33f94d998a3188..0000000000000000000000000000000000000000 --- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/plugins/boids-plugin.js +++ /dev/null @@ -1,18 +0,0 @@ -import Boids from './boids.js'; - -class BoidsPlugin extends Phaser.Plugins.BasePlugin { - - constructor(pluginManager) { - super(pluginManager); - } - - start() { - var eventEmitter = this.game.events; - eventEmitter.on('destroy', this.destroy, this); - } - - add(gameObject, config) { - return new Boids(gameObject, config); - } -} -export default BoidsPlugin; \ No newline at end of file diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/plugins/stringtemplate.js b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/plugins/stringtemplate.js deleted file mode 100644 index 6d42fbb0d0f5131372d8af3562e71f2738be7b60..0000000000000000000000000000000000000000 --- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/plugins/stringtemplate.js +++ /dev/null @@ -1,2 +0,0 @@ -import StringTemplate from './string/stringtemplate/StringTemplate.js'; -export default StringTemplate; \ No newline at end of file diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/basesizer/ClickMethods.js b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/basesizer/ClickMethods.js deleted file mode 100644 index 7691bbebba8f3f93a4360318fb8827c382ef3744..0000000000000000000000000000000000000000 --- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/basesizer/ClickMethods.js +++ /dev/null @@ -1,65 +0,0 @@ -import Click from '../click/Click.js'; - -export default { - onClick(gameObject, callback, scope, config) { - if (!gameObject) { - return this; - } - - if (typeof (gameObject) === 
'function') { - config = scope; - scope = callback; - callback = gameObject; - gameObject = this; - } - - if (gameObject._click === undefined) { - gameObject._click = new Click(gameObject, config); - } - gameObject._click.on('click', callback, scope); - - return this; - }, - - offClick(gameObject, callback, scope) { - if (typeof (gameObject) === 'function') { - scope = callback; - callback = gameObject; - gameObject = this; - } - - if (gameObject._click === undefined) { - return this; - } - gameObject._click.off('click', callback, scope); - - return this; - }, - - enableClick(gameObject, enabled) { - if (gameObject && typeof (gameObject) !== 'object') { - enabled = gameObject; - gameObject = this; - } - - if (gameObject._click === undefined) { - return this; - } - - gameObject._click.setEnable(enabled); - return this; - }, - - disableClick(gameObject) { - if (gameObject && typeof (gameObject) !== 'object') { - gameObject = this; - } - - if (gameObject._click === undefined) { - return this; - } - gameObject._click.setEnable(false); - - return this; - } -} \ No newline at end of file diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/gridsizer/Factory.d.ts b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/gridsizer/Factory.d.ts deleted file mode 100644 index 5b146e27a82ea22aa73e851b38436b1451be351a..0000000000000000000000000000000000000000 --- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/gridsizer/Factory.d.ts +++ /dev/null @@ -1,24 +0,0 @@ -import GridSizer from './GridSizer'; - - -export default function ( - config?: GridSizer.IConfig -): GridSizer; - -export default function ( - x: number, y: number, - config?: GridSizer.IConfig -): GridSizer; - -export default function ( - x: number, y: number, - width: number, height: number, - config?: GridSizer.IConfig -): GridSizer; - -export default function ( - x: number, y: number, - width: number, height: number, - column: number, row: number, - config?: GridSizer.IConfig -): GridSizer; \ No newline at end of file diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/sizer/GetChildrenSizers.js b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/sizer/GetChildrenSizers.js deleted file mode 100644 index ed9d013699d282645b567d9cae47df9f73a01d6f..0000000000000000000000000000000000000000 --- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/sizer/GetChildrenSizers.js +++ /dev/null @@ -1,15 +0,0 @@ -var GetChildrenSizers = function(out) { - if (out === undefined) { - out = []; - } - var children = this.sizerChildren, - child; - for (var i = 0, cnt = children.length; i < cnt; i++) { - child = children[i]; - if (child.isRexSizer) { - out.push(child); - } - } - return out; -} -export default GetChildrenSizers; \ No newline at end of file diff --git a/spaces/Akmyradov/TurkmenTTSweSTT/uroman/bin/de-accent.pl b/spaces/Akmyradov/TurkmenTTSweSTT/uroman/bin/de-accent.pl deleted file mode 100644 index d73ed8361f2a65377e605504b67d74d8fb1a755b..0000000000000000000000000000000000000000 --- a/spaces/Akmyradov/TurkmenTTSweSTT/uroman/bin/de-accent.pl +++ /dev/null @@ -1,201 +0,0 @@ -#!/usr/bin/perl -w - -sub print_version { - print STDERR "$0 version 1.1\n"; - print STDERR " Author: Ulf Hermjakob\n"; - print STDERR " Last changed: March 14, 2011\n"; -} - -sub print_usage { - print STDERR "$0 [options] < with_accents.txt > without_accents.txt\n"; - print STDERR " -h or -help\n"; - print STDERR " -v or -version\n"; -} - -sub 
de_accent_string { - local($s) = @_; - - # $s =~ tr/A-Z/a-z/; - unless (0) { - # Latin-1 - if ($s =~ /\xC3[\x80-\xBF]/) { - $s =~ s/(À|Á|Â|Ã|Ä|Å)/A/g; - $s =~ s/Æ/Ae/g; - $s =~ s/Ç/C/g; - $s =~ s/Ð/D/g; - $s =~ s/(È|É|Ê|Ë)/E/g; - $s =~ s/(Ì|Í|Î|Ï)/I/g; - $s =~ s/Ñ/N/g; - $s =~ s/(Ò|Ó|Ô|Õ|Ö|Ø)/O/g; - $s =~ s/(Ù|Ú|Û|Ü)/U/g; - $s =~ s/Þ/Th/g; - $s =~ s/Ý/Y/g; - $s =~ s/(à|á|â|ã|ä|å)/a/g; - $s =~ s/æ/ae/g; - $s =~ s/ç/c/g; - $s =~ s/(è|é|ê|ë)/e/g; - $s =~ s/(ì|í|î|ï)/i/g; - $s =~ s/ð/d/g; - $s =~ s/ñ/n/g; - $s =~ s/(ò|ó|ô|õ|ö)/o/g; - $s =~ s/ß/ss/g; - $s =~ s/þ/th/g; - $s =~ s/(ù|ú|û|ü)/u/g; - $s =~ s/(ý|ÿ)/y/g; - } - # Latin Extended-A - if ($s =~ /[\xC4-\xC5][\x80-\xBF]/) { - $s =~ s/(Ā|Ă|Ą)/A/g; - $s =~ s/(ā|ă|ą)/a/g; - $s =~ s/(Ć|Ĉ|Ċ|Č)/C/g; - $s =~ s/(ć|ĉ|ċ|č)/c/g; - $s =~ s/(Ď|Đ)/D/g; - $s =~ s/(ď|đ)/d/g; - $s =~ s/(Ē|Ĕ|Ė|Ę|Ě)/E/g; - $s =~ s/(ē|ĕ|ė|ę|ě)/e/g; - $s =~ s/(Ĝ|Ğ|Ġ|Ģ)/G/g; - $s =~ s/(ĝ|ğ|ġ|ģ)/g/g; - $s =~ s/(Ĥ|Ħ)/H/g; - $s =~ s/(ĥ|ħ)/h/g; - $s =~ s/(Ĩ|Ī|Ĭ|Į|İ)/I/g; - $s =~ s/(ĩ|ī|ĭ|į|ı)/i/g; - $s =~ s/IJ/Ij/g; - $s =~ s/ij/ij/g; - $s =~ s/Ĵ/J/g; - $s =~ s/ĵ/j/g; - $s =~ s/Ķ/K/g; - $s =~ s/(ķ|ĸ)/k/g; - $s =~ s/(Ĺ|Ļ|Ľ|Ŀ|Ł)/L/g; - $s =~ s/(ļ|ľ|ŀ|ł)/l/g; - $s =~ s/(Ń|Ņ|Ň|Ŋ)/N/g; - $s =~ s/(ń|ņ|ň|ʼn|ŋ)/n/g; - $s =~ s/(Ō|Ŏ|Ő)/O/g; - $s =~ s/(ō|ŏ|ő)/o/g; - $s =~ s/Œ/Oe/g; - $s =~ s/œ/oe/g; - $s =~ s/(Ŕ|Ŗ|Ř)/R/g; - $s =~ s/(ŕ|ŗ|ř)/r/g; - $s =~ s/(Ś|Ŝ|Ş|Š)/S/g; - $s =~ s/(ś|ŝ|ş|š|ſ)/s/g; - $s =~ s/(Ţ|Ť|Ŧ)/T/g; - $s =~ s/(ţ|ť|ŧ)/t/g; - $s =~ s/(Ũ|Ū|Ŭ|Ů|Ű|Ų)/U/g; - $s =~ s/(ũ|ū|ŭ|ů|ű|ų)/u/g; - $s =~ s/Ŵ/W/g; - $s =~ s/ŵ/w/g; - $s =~ s/(Ŷ|Ÿ)/Y/g; - $s =~ s/ŷ/y/g; - $s =~ s/(Ź|Ż|Ž)/Z/g; - $s =~ s/(ź|ż|ž)/z/g; - } - # Latin Extended Additional - if ($s =~ /\xE1[\xB8-\xBF][\x80-\xBF]/) { - $s =~ s/(ḁ|ạ|ả|ấ|ầ|ẩ|ẫ|ậ|ắ|ằ|ẳ|ẵ|ặ|ẚ)/a/g; - $s =~ s/(ḃ|ḅ|ḇ)/b/g; - $s =~ s/(ḉ)/c/g; - $s =~ s/(ḋ|ḍ|ḏ|ḑ|ḓ)/d/g; - $s =~ s/(ḕ|ḗ|ḙ|ḛ|ḝ|ẹ|ẻ|ẽ|ế|ề|ể|ễ|ệ)/e/g; - $s =~ s/(ḟ)/f/g; - $s =~ s/(ḡ)/g/g; - $s =~ s/(ḣ|ḥ|ḧ|ḩ|ḫ)/h/g; - $s =~ s/(ḭ|ḯ|ỉ|ị)/i/g; - $s =~ s/(ḱ|ḳ|ḵ)/k/g; - $s =~ s/(ḷ|ḹ|ḻ|ḽ)/l/g; - $s =~ s/(ḿ|ṁ|ṃ)/m/g; - $s =~ s/(ṅ|ṇ|ṉ|ṋ)/m/g; - $s =~ s/(ọ|ỏ|ố|ồ|ổ|ỗ|ộ|ớ|ờ|ở|ỡ|ợ|ṍ|ṏ|ṑ|ṓ)/o/g; - $s =~ s/(ṕ|ṗ)/p/g; - $s =~ s/(ṙ|ṛ|ṝ|ṟ)/r/g; - $s =~ s/(ṡ|ṣ|ṥ|ṧ|ṩ|ẛ)/s/g; - $s =~ s/(ṫ|ṭ|ṯ|ṱ)/t/g; - $s =~ s/(ṳ|ṵ|ṷ|ṹ|ṻ|ụ|ủ|ứ|ừ|ử|ữ|ự)/u/g; - $s =~ s/(ṽ|ṿ)/v/g; - $s =~ s/(ẁ|ẃ|ẅ|ẇ|ẉ|ẘ)/w/g; - $s =~ s/(ẋ|ẍ)/x/g; - $s =~ s/(ẏ|ỳ|ỵ|ỷ|ỹ|ẙ)/y/g; - $s =~ s/(ẑ|ẓ|ẕ)/z/g; - $s =~ s/(Ḁ|Ạ|Ả|Ấ|Ầ|Ẩ|Ẫ|Ậ|Ắ|Ằ|Ẳ|Ẵ|Ặ)/A/g; - $s =~ s/(Ḃ|Ḅ|Ḇ)/B/g; - $s =~ s/(Ḉ)/C/g; - $s =~ s/(Ḋ|Ḍ|Ḏ|Ḑ|Ḓ)/D/g; - $s =~ s/(Ḕ|Ḗ|Ḙ|Ḛ|Ḝ|Ẹ|Ẻ|Ẽ|Ế|Ề|Ể|Ễ|Ệ)/E/g; - $s =~ s/(Ḟ)/F/g; - $s =~ s/(Ḡ)/G/g; - $s =~ s/(Ḣ|Ḥ|Ḧ|Ḩ|Ḫ)/H/g; - $s =~ s/(Ḭ|Ḯ|Ỉ|Ị)/I/g; - $s =~ s/(Ḱ|Ḳ|Ḵ)/K/g; - $s =~ s/(Ḷ|Ḹ|Ḻ|Ḽ)/L/g; - $s =~ s/(Ḿ|Ṁ|Ṃ)/M/g; - $s =~ s/(Ṅ|Ṇ|Ṉ|Ṋ)/N/g; - $s =~ s/(Ṍ|Ṏ|Ṑ|Ṓ|Ọ|Ỏ|Ố|Ồ|Ổ|Ỗ|Ộ|Ớ|Ờ|Ở|Ỡ|Ợ)/O/g; - $s =~ s/(Ṕ|Ṗ)/P/g; - $s =~ s/(Ṙ|Ṛ|Ṝ|Ṟ)/R/g; - $s =~ s/(Ṡ|Ṣ|Ṥ|Ṧ|Ṩ)/S/g; - $s =~ s/(Ṫ|Ṭ|Ṯ|Ṱ)/T/g; - $s =~ s/(Ṳ|Ṵ|Ṷ|Ṹ|Ṻ|Ụ|Ủ|Ứ|Ừ|Ử|Ữ|Ự)/U/g; - $s =~ s/(Ṽ|Ṿ)/V/g; - $s =~ s/(Ẁ|Ẃ|Ẅ|Ẇ|Ẉ)/W/g; - $s =~ s/(Ẍ)/X/g; - $s =~ s/(Ẏ|Ỳ|Ỵ|Ỷ|Ỹ)/Y/g; - $s =~ s/(Ẑ|Ẓ|Ẕ)/Z/g; - } - # Greek letters - if ($s =~ /\xCE[\x86-\xAB]/) { - $s =~ s/ά/α/g; - $s =~ s/έ/ε/g; - $s =~ s/ί/ι/g; - $s =~ s/ϊ/ι/g; - $s =~ s/ΐ/ι/g; - $s =~ s/ό/ο/g; - $s =~ s/ύ/υ/g; - $s =~ s/ϋ/υ/g; - $s =~ s/ΰ/υ/g; - $s =~ s/ώ/ω/g; - $s =~ s/Ά/Α/g; - $s =~ s/Έ/Ε/g; - $s =~ s/Ή/Η/g; - $s =~ s/Ί/Ι/g; - $s =~ s/Ϊ/Ι/g; - $s =~ s/Ύ/Υ/g; - $s =~ s/Ϋ/Υ/g; - $s =~ s/Ώ/Ω/g; - } - # Cyrillic letters - if ($s =~ /\xD0[\x80-\xAF]/) { - $s =~ s/Ѐ/Е/g; - $s =~ s/Ё/Е/g; - $s =~ s/Ѓ/Г/g; - $s =~ s/Ќ/К/g; - $s =~ s/Ѝ/И/g; - $s =~ s/Й/И/g; - $s =~ s/ѐ/е/g; 
- $s =~ s/ё/е/g; - $s =~ s/ѓ/г/g; - $s =~ s/ќ/к/g; - $s =~ s/ѝ/и/g; - $s =~ s/й/и/g; - } - } - return $s; -} - -while (@ARGV) { - $arg = shift @ARGV; - if ($arg =~ /^-*(h|help)$/i) { - &print_usage; - exit 1; - } elsif ($arg =~ /^-*(v|version)$/i) { - &print_version; - exit 1; - } else { - print STDERR "Ignoring unrecognized argument $arg\n"; - } -} - -$line_number = 0; -while (<>) { - $line_number++; - print &de_accent_string($_); -} -exit 0; - diff --git a/spaces/Aloento/9Nine-PITS/transforms.py b/spaces/Aloento/9Nine-PITS/transforms.py deleted file mode 100644 index 122f91ebe290f153918b7717214d065d14180947..0000000000000000000000000000000000000000 --- a/spaces/Aloento/9Nine-PITS/transforms.py +++ /dev/null @@ -1,198 +0,0 @@ -# from https://github.com/jaywalnut310/vits -import numpy as np -import torch -from torch.nn import functional as F - -DEFAULT_MIN_BIN_WIDTH = 1e-3 -DEFAULT_MIN_BIN_HEIGHT = 1e-3 -DEFAULT_MIN_DERIVATIVE = 1e-3 - - -def piecewise_rational_quadratic_transform( - inputs, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=False, - tails=None, - tail_bound=1., - min_bin_width=DEFAULT_MIN_BIN_WIDTH, - min_bin_height=DEFAULT_MIN_BIN_HEIGHT, - min_derivative=DEFAULT_MIN_DERIVATIVE -): - if tails is None: - spline_fn = rational_quadratic_spline - spline_kwargs = {} - else: - spline_fn = unconstrained_rational_quadratic_spline - spline_kwargs = { - 'tails': tails, - 'tail_bound': tail_bound - } - - outputs, logabsdet = spline_fn( - inputs=inputs, - unnormalized_widths=unnormalized_widths, - unnormalized_heights=unnormalized_heights, - unnormalized_derivatives=unnormalized_derivatives, - inverse=inverse, - min_bin_width=min_bin_width, - min_bin_height=min_bin_height, - min_derivative=min_derivative, - **spline_kwargs - ) - return outputs, logabsdet - - -def searchsorted(bin_locations, inputs, eps=1e-6): - bin_locations[..., -1] += eps - return torch.sum( - inputs[..., None] >= bin_locations, - dim=-1 - ) - 1 - - -def unconstrained_rational_quadratic_spline( - inputs, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=False, - tails='linear', - tail_bound=1., - min_bin_width=DEFAULT_MIN_BIN_WIDTH, - min_bin_height=DEFAULT_MIN_BIN_HEIGHT, - min_derivative=DEFAULT_MIN_DERIVATIVE -): - inside_interval_mask = (inputs >= -tail_bound) & (inputs <= tail_bound) - outside_interval_mask = ~inside_interval_mask - - outputs = torch.zeros_like(inputs) - logabsdet = torch.zeros_like(inputs) - - if tails == 'linear': - unnormalized_derivatives = F.pad(unnormalized_derivatives, pad=(1, 1)) - constant = np.log(np.exp(1 - min_derivative) - 1) - unnormalized_derivatives[..., 0] = constant - unnormalized_derivatives[..., -1] = constant - - outputs[outside_interval_mask] = inputs[outside_interval_mask] - logabsdet[outside_interval_mask] = 0 - else: - raise RuntimeError('{} tails are not implemented.'.format(tails)) - - outputs[inside_interval_mask], logabsdet[inside_interval_mask] = rational_quadratic_spline( - inputs=inputs[inside_interval_mask], - unnormalized_widths=unnormalized_widths[inside_interval_mask, :], - unnormalized_heights=unnormalized_heights[inside_interval_mask, :], - unnormalized_derivatives=unnormalized_derivatives[inside_interval_mask, :], - inverse=inverse, - left=-tail_bound, right=tail_bound, bottom=-tail_bound, top=tail_bound, - min_bin_width=min_bin_width, - min_bin_height=min_bin_height, - min_derivative=min_derivative - ) - - return outputs, logabsdet - - -def rational_quadratic_spline( - inputs, - 
unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=False, - left=0., right=1., bottom=0., top=1., - min_bin_width=DEFAULT_MIN_BIN_WIDTH, - min_bin_height=DEFAULT_MIN_BIN_HEIGHT, - min_derivative=DEFAULT_MIN_DERIVATIVE -): - if torch.min(inputs) < left or torch.max(inputs) > right: - raise ValueError('Input to a transform is not within its domain') - - num_bins = unnormalized_widths.shape[-1] - - if min_bin_width * num_bins > 1.0: - raise ValueError('Minimal bin width too large for the number of bins') - if min_bin_height * num_bins > 1.0: - raise ValueError('Minimal bin height too large for the number of bins') - - widths = F.softmax(unnormalized_widths, dim=-1) - widths = min_bin_width + (1 - min_bin_width * num_bins) * widths - cumwidths = torch.cumsum(widths, dim=-1) - cumwidths = F.pad(cumwidths, pad=(1, 0), mode='constant', value=0.0) - cumwidths = (right - left) * cumwidths + left - cumwidths[..., 0] = left - cumwidths[..., -1] = right - widths = cumwidths[..., 1:] - cumwidths[..., :-1] - - derivatives = min_derivative + F.softplus(unnormalized_derivatives) - - heights = F.softmax(unnormalized_heights, dim=-1) - heights = min_bin_height + (1 - min_bin_height * num_bins) * heights - cumheights = torch.cumsum(heights, dim=-1) - cumheights = F.pad(cumheights, pad=(1, 0), mode='constant', value=0.0) - cumheights = (top - bottom) * cumheights + bottom - cumheights[..., 0] = bottom - cumheights[..., -1] = top - heights = cumheights[..., 1:] - cumheights[..., :-1] - - if inverse: - bin_idx = searchsorted(cumheights, inputs)[..., None] - else: - bin_idx = searchsorted(cumwidths, inputs)[..., None] - - input_cumwidths = cumwidths.gather(-1, bin_idx)[..., 0] - input_bin_widths = widths.gather(-1, bin_idx)[..., 0] - - input_cumheights = cumheights.gather(-1, bin_idx)[..., 0] - delta = heights / widths - input_delta = delta.gather(-1, bin_idx)[..., 0] - - input_derivatives = derivatives.gather(-1, bin_idx)[..., 0] - input_derivatives_plus_one = derivatives[..., 1:].gather(-1, bin_idx)[..., 0] - - input_heights = heights.gather(-1, bin_idx)[..., 0] - - if inverse: - a = (((inputs - input_cumheights) * (input_derivatives - + input_derivatives_plus_one - - 2 * input_delta) - + input_heights * (input_delta - input_derivatives))) - b = (input_heights * input_derivatives - - (inputs - input_cumheights) * (input_derivatives - + input_derivatives_plus_one - - 2 * input_delta)) - c = - input_delta * (inputs - input_cumheights) - - discriminant = b.pow(2) - 4 * a * c - assert (discriminant >= 0).all() - - root = (2 * c) / (-b - torch.sqrt(discriminant)) - outputs = root * input_bin_widths + input_cumwidths - - theta_one_minus_theta = root * (1 - root) - denominator = input_delta + ((input_derivatives + input_derivatives_plus_one - 2 * input_delta) - * theta_one_minus_theta) - derivative_numerator = input_delta.pow(2) * (input_derivatives_plus_one * root.pow(2) - + 2 * input_delta * theta_one_minus_theta - + input_derivatives * (1 - root).pow(2)) - logabsdet = torch.log(derivative_numerator) - 2 * torch.log(denominator) - - return outputs, -logabsdet - else: - theta = (inputs - input_cumwidths) / input_bin_widths - theta_one_minus_theta = theta * (1 - theta) - - numerator = input_heights * (input_delta * theta.pow(2) - + input_derivatives * theta_one_minus_theta) - denominator = input_delta + ((input_derivatives + input_derivatives_plus_one - 2 * input_delta) - * theta_one_minus_theta) - outputs = input_cumheights + numerator / denominator - - derivative_numerator = 
input_delta.pow(2) * (input_derivatives_plus_one * theta.pow(2) - + 2 * input_delta * theta_one_minus_theta - + input_derivatives * (1 - theta).pow(2)) - logabsdet = torch.log(derivative_numerator) - 2 * torch.log(denominator) - - return outputs, logabsdet diff --git a/spaces/Alpaca233/SadTalker/src/face3d/models/bfm.py b/spaces/Alpaca233/SadTalker/src/face3d/models/bfm.py deleted file mode 100644 index a75db682f02dd1979d4a7de1d11dd3aa5cdf5279..0000000000000000000000000000000000000000 --- a/spaces/Alpaca233/SadTalker/src/face3d/models/bfm.py +++ /dev/null @@ -1,331 +0,0 @@ -"""This script defines the parametric 3d face model for Deep3DFaceRecon_pytorch -""" - -import numpy as np -import torch -import torch.nn.functional as F -from scipy.io import loadmat -from src.face3d.util.load_mats import transferBFM09 -import os - -def perspective_projection(focal, center): - # return p.T (N, 3) @ (3, 3) - return np.array([ - focal, 0, center, - 0, focal, center, - 0, 0, 1 - ]).reshape([3, 3]).astype(np.float32).transpose() - -class SH: - def __init__(self): - self.a = [np.pi, 2 * np.pi / np.sqrt(3.), 2 * np.pi / np.sqrt(8.)] - self.c = [1/np.sqrt(4 * np.pi), np.sqrt(3.) / np.sqrt(4 * np.pi), 3 * np.sqrt(5.) / np.sqrt(12 * np.pi)] - - - -class ParametricFaceModel: - def __init__(self, - bfm_folder='./BFM', - recenter=True, - camera_distance=10., - init_lit=np.array([ - 0.8, 0, 0, 0, 0, 0, 0, 0, 0 - ]), - focal=1015., - center=112., - is_train=True, - default_name='BFM_model_front.mat'): - - if not os.path.isfile(os.path.join(bfm_folder, default_name)): - transferBFM09(bfm_folder) - - model = loadmat(os.path.join(bfm_folder, default_name)) - # mean face shape. [3*N,1] - self.mean_shape = model['meanshape'].astype(np.float32) - # identity basis. [3*N,80] - self.id_base = model['idBase'].astype(np.float32) - # expression basis. [3*N,64] - self.exp_base = model['exBase'].astype(np.float32) - # mean face texture. [3*N,1] (0-255) - self.mean_tex = model['meantex'].astype(np.float32) - # texture basis. [3*N,80] - self.tex_base = model['texBase'].astype(np.float32) - # face indices for each vertex that lies in. starts from 0. [N,8] - self.point_buf = model['point_buf'].astype(np.int64) - 1 - # vertex indices for each face. starts from 0. [F,3] - self.face_buf = model['tri'].astype(np.int64) - 1 - # vertex indices for 68 landmarks. starts from 0. [68,1] - self.keypoints = np.squeeze(model['keypoints']).astype(np.int64) - 1 - - if is_train: - # vertex indices for small face region to compute photometric error. starts from 0. - self.front_mask = np.squeeze(model['frontmask2_idx']).astype(np.int64) - 1 - # vertex indices for each face from small face region. starts from 0. 
[f,3] - self.front_face_buf = model['tri_mask2'].astype(np.int64) - 1 - # vertex indices for pre-defined skin region to compute reflectance loss - self.skin_mask = np.squeeze(model['skinmask']) - - if recenter: - mean_shape = self.mean_shape.reshape([-1, 3]) - mean_shape = mean_shape - np.mean(mean_shape, axis=0, keepdims=True) - self.mean_shape = mean_shape.reshape([-1, 1]) - - self.persc_proj = perspective_projection(focal, center) - self.device = 'cpu' - self.camera_distance = camera_distance - self.SH = SH() - self.init_lit = init_lit.reshape([1, 1, -1]).astype(np.float32) - - - def to(self, device): - self.device = device - for key, value in self.__dict__.items(): - if type(value).__module__ == np.__name__: - setattr(self, key, torch.tensor(value).to(device)) - - - def compute_shape(self, id_coeff, exp_coeff): - """ - Return: - face_shape -- torch.tensor, size (B, N, 3) - - Parameters: - id_coeff -- torch.tensor, size (B, 80), identity coeffs - exp_coeff -- torch.tensor, size (B, 64), expression coeffs - """ - batch_size = id_coeff.shape[0] - id_part = torch.einsum('ij,aj->ai', self.id_base, id_coeff) - exp_part = torch.einsum('ij,aj->ai', self.exp_base, exp_coeff) - face_shape = id_part + exp_part + self.mean_shape.reshape([1, -1]) - return face_shape.reshape([batch_size, -1, 3]) - - - def compute_texture(self, tex_coeff, normalize=True): - """ - Return: - face_texture -- torch.tensor, size (B, N, 3), in RGB order, range (0, 1.) - - Parameters: - tex_coeff -- torch.tensor, size (B, 80) - """ - batch_size = tex_coeff.shape[0] - face_texture = torch.einsum('ij,aj->ai', self.tex_base, tex_coeff) + self.mean_tex - if normalize: - face_texture = face_texture / 255. - return face_texture.reshape([batch_size, -1, 3]) - - - def compute_norm(self, face_shape): - """ - Return: - vertex_norm -- torch.tensor, size (B, N, 3) - - Parameters: - face_shape -- torch.tensor, size (B, N, 3) - """ - - v1 = face_shape[:, self.face_buf[:, 0]] - v2 = face_shape[:, self.face_buf[:, 1]] - v3 = face_shape[:, self.face_buf[:, 2]] - e1 = v1 - v2 - e2 = v2 - v3 - face_norm = torch.cross(e1, e2, dim=-1) - face_norm = F.normalize(face_norm, dim=-1, p=2) - face_norm = torch.cat([face_norm, torch.zeros(face_norm.shape[0], 1, 3).to(self.device)], dim=1) - - vertex_norm = torch.sum(face_norm[:, self.point_buf], dim=2) - vertex_norm = F.normalize(vertex_norm, dim=-1, p=2) - return vertex_norm - - - def compute_color(self, face_texture, face_norm, gamma): - """ - Return: - face_color -- torch.tensor, size (B, N, 3), range (0, 1.) - - Parameters: - face_texture -- torch.tensor, size (B, N, 3), from texture model, range (0, 1.) - face_norm -- torch.tensor, size (B, N, 3), rotated face normal - gamma -- torch.tensor, size (B, 27), SH coeffs - """ - batch_size = gamma.shape[0] - v_num = face_texture.shape[1] - a, c = self.SH.a, self.SH.c - gamma = gamma.reshape([batch_size, 3, 9]) - gamma = gamma + self.init_lit - gamma = gamma.permute(0, 2, 1) - Y = torch.cat([ - a[0] * c[0] * torch.ones_like(face_norm[..., :1]).to(self.device), - -a[1] * c[1] * face_norm[..., 1:2], - a[1] * c[1] * face_norm[..., 2:], - -a[1] * c[1] * face_norm[..., :1], - a[2] * c[2] * face_norm[..., :1] * face_norm[..., 1:2], - -a[2] * c[2] * face_norm[..., 1:2] * face_norm[..., 2:], - 0.5 * a[2] * c[2] / np.sqrt(3.) 
* (3 * face_norm[..., 2:] ** 2 - 1), - -a[2] * c[2] * face_norm[..., :1] * face_norm[..., 2:], - 0.5 * a[2] * c[2] * (face_norm[..., :1] ** 2 - face_norm[..., 1:2] ** 2) - ], dim=-1) - r = Y @ gamma[..., :1] - g = Y @ gamma[..., 1:2] - b = Y @ gamma[..., 2:] - face_color = torch.cat([r, g, b], dim=-1) * face_texture - return face_color - - - def compute_rotation(self, angles): - """ - Return: - rot -- torch.tensor, size (B, 3, 3) pts @ trans_mat - - Parameters: - angles -- torch.tensor, size (B, 3), radian - """ - - batch_size = angles.shape[0] - ones = torch.ones([batch_size, 1]).to(self.device) - zeros = torch.zeros([batch_size, 1]).to(self.device) - x, y, z = angles[:, :1], angles[:, 1:2], angles[:, 2:], - - rot_x = torch.cat([ - ones, zeros, zeros, - zeros, torch.cos(x), -torch.sin(x), - zeros, torch.sin(x), torch.cos(x) - ], dim=1).reshape([batch_size, 3, 3]) - - rot_y = torch.cat([ - torch.cos(y), zeros, torch.sin(y), - zeros, ones, zeros, - -torch.sin(y), zeros, torch.cos(y) - ], dim=1).reshape([batch_size, 3, 3]) - - rot_z = torch.cat([ - torch.cos(z), -torch.sin(z), zeros, - torch.sin(z), torch.cos(z), zeros, - zeros, zeros, ones - ], dim=1).reshape([batch_size, 3, 3]) - - rot = rot_z @ rot_y @ rot_x - return rot.permute(0, 2, 1) - - - def to_camera(self, face_shape): - face_shape[..., -1] = self.camera_distance - face_shape[..., -1] - return face_shape - - def to_image(self, face_shape): - """ - Return: - face_proj -- torch.tensor, size (B, N, 2), y direction is opposite to v direction - - Parameters: - face_shape -- torch.tensor, size (B, N, 3) - """ - # to image_plane - face_proj = face_shape @ self.persc_proj - face_proj = face_proj[..., :2] / face_proj[..., 2:] - - return face_proj - - - def transform(self, face_shape, rot, trans): - """ - Return: - face_shape -- torch.tensor, size (B, N, 3) pts @ rot + trans - - Parameters: - face_shape -- torch.tensor, size (B, N, 3) - rot -- torch.tensor, size (B, 3, 3) - trans -- torch.tensor, size (B, 3) - """ - return face_shape @ rot + trans.unsqueeze(1) - - - def get_landmarks(self, face_proj): - """ - Return: - face_lms -- torch.tensor, size (B, 68, 2) - - Parameters: - face_proj -- torch.tensor, size (B, N, 2) - """ - return face_proj[:, self.keypoints] - - def split_coeff(self, coeffs): - """ - Return: - coeffs_dict -- a dict of torch.tensors - - Parameters: - coeffs -- torch.tensor, size (B, 256) - """ - id_coeffs = coeffs[:, :80] - exp_coeffs = coeffs[:, 80: 144] - tex_coeffs = coeffs[:, 144: 224] - angles = coeffs[:, 224: 227] - gammas = coeffs[:, 227: 254] - translations = coeffs[:, 254:] - return { - 'id': id_coeffs, - 'exp': exp_coeffs, - 'tex': tex_coeffs, - 'angle': angles, - 'gamma': gammas, - 'trans': translations - } - def compute_for_render(self, coeffs): - """ - Return: - face_vertex -- torch.tensor, size (B, N, 3), in camera coordinate - face_color -- torch.tensor, size (B, N, 3), in RGB order - landmark -- torch.tensor, size (B, 68, 2), y direction is opposite to v direction - Parameters: - coeffs -- torch.tensor, size (B, 257) - """ - coef_dict = self.split_coeff(coeffs) - face_shape = self.compute_shape(coef_dict['id'], coef_dict['exp']) - rotation = self.compute_rotation(coef_dict['angle']) - - - face_shape_transformed = self.transform(face_shape, rotation, coef_dict['trans']) - face_vertex = self.to_camera(face_shape_transformed) - - face_proj = self.to_image(face_vertex) - landmark = self.get_landmarks(face_proj) - - face_texture = self.compute_texture(coef_dict['tex']) - face_norm = 
self.compute_norm(face_shape) - face_norm_roted = face_norm @ rotation - face_color = self.compute_color(face_texture, face_norm_roted, coef_dict['gamma']) - - return face_vertex, face_texture, face_color, landmark - - def compute_for_render_woRotation(self, coeffs): - """ - Return: - face_vertex -- torch.tensor, size (B, N, 3), in camera coordinate - face_color -- torch.tensor, size (B, N, 3), in RGB order - landmark -- torch.tensor, size (B, 68, 2), y direction is opposite to v direction - Parameters: - coeffs -- torch.tensor, size (B, 257) - """ - coef_dict = self.split_coeff(coeffs) - face_shape = self.compute_shape(coef_dict['id'], coef_dict['exp']) - #rotation = self.compute_rotation(coef_dict['angle']) - - - #face_shape_transformed = self.transform(face_shape, rotation, coef_dict['trans']) - face_vertex = self.to_camera(face_shape) - - face_proj = self.to_image(face_vertex) - landmark = self.get_landmarks(face_proj) - - face_texture = self.compute_texture(coef_dict['tex']) - face_norm = self.compute_norm(face_shape) - face_norm_roted = face_norm # @ rotation - face_color = self.compute_color(face_texture, face_norm_roted, coef_dict['gamma']) - - return face_vertex, face_texture, face_color, landmark - - -if __name__ == '__main__': - transferBFM09() \ No newline at end of file diff --git "a/spaces/Ameaou/academic-chatgpt3.1/crazy_functions/\344\270\213\350\275\275arxiv\350\256\272\346\226\207\347\277\273\350\257\221\346\221\230\350\246\201.py" "b/spaces/Ameaou/academic-chatgpt3.1/crazy_functions/\344\270\213\350\275\275arxiv\350\256\272\346\226\207\347\277\273\350\257\221\346\221\230\350\246\201.py" deleted file mode 100644 index 3da831fd07e361a532777c83bb02cff265b94abd..0000000000000000000000000000000000000000 --- "a/spaces/Ameaou/academic-chatgpt3.1/crazy_functions/\344\270\213\350\275\275arxiv\350\256\272\346\226\207\347\277\273\350\257\221\346\221\230\350\246\201.py" +++ /dev/null @@ -1,194 +0,0 @@ -from toolbox import update_ui -from toolbox import CatchException, report_execption, write_results_to_file, get_conf -import re, requests, unicodedata, os -from .crazy_utils import request_gpt_model_in_new_thread_with_ui_alive -def download_arxiv_(url_pdf): - if 'arxiv.org' not in url_pdf: - if ('.' 
in url_pdf) and ('/' not in url_pdf): - new_url = 'https://arxiv.org/abs/'+url_pdf - print('下载编号:', url_pdf, '自动定位:', new_url) - # download_arxiv_(new_url) - return download_arxiv_(new_url) - else: - print('不能识别的URL!') - return None - if 'abs' in url_pdf: - url_pdf = url_pdf.replace('abs', 'pdf') - url_pdf = url_pdf + '.pdf' - - url_abs = url_pdf.replace('.pdf', '').replace('pdf', 'abs') - title, other_info = get_name(_url_=url_abs) - - paper_id = title.split()[0] # '[1712.00559]' - if '2' in other_info['year']: - title = other_info['year'] + ' ' + title - - known_conf = ['NeurIPS', 'NIPS', 'Nature', 'Science', 'ICLR', 'AAAI'] - for k in known_conf: - if k in other_info['comment']: - title = k + ' ' + title - - download_dir = './gpt_log/arxiv/' - os.makedirs(download_dir, exist_ok=True) - - title_str = title.replace('?', '?')\ - .replace(':', ':')\ - .replace('\"', '“')\ - .replace('\n', '')\ - .replace(' ', ' ')\ - .replace(' ', ' ') - - requests_pdf_url = url_pdf - file_path = download_dir+title_str - # if os.path.exists(file_path): - # print('返回缓存文件') - # return './gpt_log/arxiv/'+title_str - - print('下载中') - proxies, = get_conf('proxies') - r = requests.get(requests_pdf_url, proxies=proxies) - with open(file_path, 'wb+') as f: - f.write(r.content) - print('下载完成') - - # print('输出下载命令:','aria2c -o \"%s\" %s'%(title_str,url_pdf)) - # subprocess.call('aria2c --all-proxy=\"172.18.116.150:11084\" -o \"%s\" %s'%(download_dir+title_str,url_pdf), shell=True) - - x = "%s %s %s.bib" % (paper_id, other_info['year'], other_info['authors']) - x = x.replace('?', '?')\ - .replace(':', ':')\ - .replace('\"', '“')\ - .replace('\n', '')\ - .replace(' ', ' ')\ - .replace(' ', ' ') - return './gpt_log/arxiv/'+title_str, other_info - - -def get_name(_url_): - import os - from bs4 import BeautifulSoup - print('正在获取文献名!') - print(_url_) - - # arxiv_recall = {} - # if os.path.exists('./arxiv_recall.pkl'): - # with open('./arxiv_recall.pkl', 'rb') as f: - # arxiv_recall = pickle.load(f) - - # if _url_ in arxiv_recall: - # print('在缓存中') - # return arxiv_recall[_url_] - - proxies, = get_conf('proxies') - res = requests.get(_url_, proxies=proxies) - - bs = BeautifulSoup(res.text, 'html.parser') - other_details = {} - - # get year - try: - year = bs.find_all(class_='dateline')[0].text - year = re.search(r'(\d{4})', year, re.M | re.I).group(1) - other_details['year'] = year - abstract = bs.find_all(class_='abstract mathjax')[0].text - other_details['abstract'] = abstract - except: - other_details['year'] = '' - print('年份获取失败') - - # get author - try: - authors = bs.find_all(class_='authors')[0].text - authors = authors.split('Authors:')[1] - other_details['authors'] = authors - except: - other_details['authors'] = '' - print('authors获取失败') - - # get comment - try: - comment = bs.find_all(class_='metatable')[0].text - real_comment = None - for item in comment.replace('\n', ' ').split(' '): - if 'Comments' in item: - real_comment = item - if real_comment is not None: - other_details['comment'] = real_comment - else: - other_details['comment'] = '' - except: - other_details['comment'] = '' - print('年份获取失败') - - title_str = BeautifulSoup( - res.text, 'html.parser').find('title').contents[0] - print('获取成功:', title_str) - # arxiv_recall[_url_] = (title_str+'.pdf', other_details) - # with open('./arxiv_recall.pkl', 'wb') as f: - # pickle.dump(arxiv_recall, f) - - return title_str+'.pdf', other_details - - - -@CatchException -def 下载arxiv论文并翻译摘要(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port): - - 
CRAZY_FUNCTION_INFO = "下载arxiv论文并翻译摘要,函数插件作者[binary-husky]。正在提取摘要并下载PDF文档……" - import glob - import os - - # 基本信息:功能、贡献者 - chatbot.append(["函数插件功能?", CRAZY_FUNCTION_INFO]) - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - - # 尝试导入依赖,如果缺少依赖,则给出安装建议 - try: - import pdfminer, bs4 - except: - report_execption(chatbot, history, - a = f"解析项目: {txt}", - b = f"导入软件依赖失败。使用该模块需要额外依赖,安装方法```pip install --upgrade pdfminer beautifulsoup4```。") - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - return - - # 清空历史,以免输入溢出 - history = [] - - # 提取摘要,下载PDF文档 - try: - pdf_path, info = download_arxiv_(txt) - except: - report_execption(chatbot, history, - a = f"解析项目: {txt}", - b = f"下载pdf文件未成功") - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - return - - # 翻译摘要等 - i_say = f"请你阅读以下学术论文相关的材料,提取摘要,翻译为中文。材料如下:{str(info)}" - i_say_show_user = f'请你阅读以下学术论文相关的材料,提取摘要,翻译为中文。论文:{pdf_path}' - chatbot.append((i_say_show_user, "[Local Message] waiting gpt response.")) - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - msg = '正常' - # ** gpt request ** - # 单线,获取文章meta信息 - gpt_say = yield from request_gpt_model_in_new_thread_with_ui_alive( - inputs=i_say, - inputs_show_user=i_say_show_user, - llm_kwargs=llm_kwargs, - chatbot=chatbot, history=[], - sys_prompt="Your job is to collect information from materials and translate to Chinese。", - ) - - chatbot[-1] = (i_say_show_user, gpt_say) - history.append(i_say_show_user); history.append(gpt_say) - yield from update_ui(chatbot=chatbot, history=history, msg=msg) # 刷新界面 - # 写入文件 - import shutil - # 重置文件的创建时间 - shutil.copyfile(pdf_path, f'./gpt_log/{os.path.basename(pdf_path)}'); os.remove(pdf_path) - res = write_results_to_file(history) - chatbot.append(("完成了吗?", res + "\n\nPDF文件也已经下载")) - yield from update_ui(chatbot=chatbot, history=history, msg=msg) # 刷新界面 - diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/tenacity/__init__.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/tenacity/__init__.py deleted file mode 100644 index 4f1603adeb6fcf9bc1c4a16a9b6e16223c6534f3..0000000000000000000000000000000000000000 --- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/tenacity/__init__.py +++ /dev/null @@ -1,608 +0,0 @@ -# Copyright 2016-2018 Julien Danjou -# Copyright 2017 Elisey Zanko -# Copyright 2016 Étienne Bersac -# Copyright 2016 Joshua Harlow -# Copyright 2013-2014 Ray Holder -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - - -import functools -import sys -import threading -import time -import typing as t -import warnings -from abc import ABC, abstractmethod -from concurrent import futures -from inspect import iscoroutinefunction - -# Import all built-in retry strategies for easier usage. 
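The file above this point has switched from the arXiv-download plugin to pip's vendored copy of the tenacity retry library (its body continues below). For orientation only, here is a minimal sketch of how tenacity's public decorator is typically used; it imports the standalone tenacity package rather than the pip._vendor copy, and the function, URL, and retry policy are illustrative assumptions, not part of the deleted file.

```python
# Minimal tenacity usage sketch (hypothetical example, not part of the diff above).
from tenacity import retry, stop_after_attempt, wait_exponential

@retry(stop=stop_after_attempt(3), wait=wait_exponential(multiplier=0.5, max=10))
def fetch_page(url: str) -> str:
    """Fetch a URL, retrying up to three times with exponential backoff on any exception."""
    import urllib.request
    with urllib.request.urlopen(url, timeout=5) as resp:  # a network error here triggers a retry
        return resp.read().decode("utf-8")

if __name__ == "__main__":
    print(len(fetch_page("https://example.org")))
```

The module re-exports these names in its __all__ list further down, and the retry() helper defined there accepts both the bare @retry form and @retry(...) with arguments.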
-from .retry import retry_base # noqa -from .retry import retry_all # noqa -from .retry import retry_always # noqa -from .retry import retry_any # noqa -from .retry import retry_if_exception # noqa -from .retry import retry_if_exception_type # noqa -from .retry import retry_if_exception_cause_type # noqa -from .retry import retry_if_not_exception_type # noqa -from .retry import retry_if_not_result # noqa -from .retry import retry_if_result # noqa -from .retry import retry_never # noqa -from .retry import retry_unless_exception_type # noqa -from .retry import retry_if_exception_message # noqa -from .retry import retry_if_not_exception_message # noqa - -# Import all nap strategies for easier usage. -from .nap import sleep # noqa -from .nap import sleep_using_event # noqa - -# Import all built-in stop strategies for easier usage. -from .stop import stop_after_attempt # noqa -from .stop import stop_after_delay # noqa -from .stop import stop_all # noqa -from .stop import stop_any # noqa -from .stop import stop_never # noqa -from .stop import stop_when_event_set # noqa - -# Import all built-in wait strategies for easier usage. -from .wait import wait_chain # noqa -from .wait import wait_combine # noqa -from .wait import wait_exponential # noqa -from .wait import wait_fixed # noqa -from .wait import wait_incrementing # noqa -from .wait import wait_none # noqa -from .wait import wait_random # noqa -from .wait import wait_random_exponential # noqa -from .wait import wait_random_exponential as wait_full_jitter # noqa -from .wait import wait_exponential_jitter # noqa - -# Import all built-in before strategies for easier usage. -from .before import before_log # noqa -from .before import before_nothing # noqa - -# Import all built-in after strategies for easier usage. -from .after import after_log # noqa -from .after import after_nothing # noqa - -# Import all built-in after strategies for easier usage. -from .before_sleep import before_sleep_log # noqa -from .before_sleep import before_sleep_nothing # noqa - -# Replace a conditional import with a hard-coded None so that pip does -# not attempt to use tornado even if it is present in the environment. -# If tornado is non-None, tenacity will attempt to execute some code -# that is sensitive to the version of tornado, which could break pip -# if an old version is found. -tornado = None # type: ignore - -if t.TYPE_CHECKING: - import types - - from .retry import RetryBaseT - from .stop import StopBaseT - from .wait import WaitBaseT - - -WrappedFnReturnT = t.TypeVar("WrappedFnReturnT") -WrappedFn = t.TypeVar("WrappedFn", bound=t.Callable[..., t.Any]) - - -class TryAgain(Exception): - """Always retry the executed function when raised.""" - - -NO_RESULT = object() - - -class DoAttempt: - pass - - -class DoSleep(float): - pass - - -class BaseAction: - """Base class for representing actions to take by retry object. 
- - Concrete implementations must define: - - __init__: to initialize all necessary fields - - REPR_FIELDS: class variable specifying attributes to include in repr(self) - - NAME: for identification in retry object methods and callbacks - """ - - REPR_FIELDS: t.Sequence[str] = () - NAME: t.Optional[str] = None - - def __repr__(self) -> str: - state_str = ", ".join(f"{field}={getattr(self, field)!r}" for field in self.REPR_FIELDS) - return f"{self.__class__.__name__}({state_str})" - - def __str__(self) -> str: - return repr(self) - - -class RetryAction(BaseAction): - REPR_FIELDS = ("sleep",) - NAME = "retry" - - def __init__(self, sleep: t.SupportsFloat) -> None: - self.sleep = float(sleep) - - -_unset = object() - - -def _first_set(first: t.Union[t.Any, object], second: t.Any) -> t.Any: - return second if first is _unset else first - - -class RetryError(Exception): - """Encapsulates the last attempt instance right before giving up.""" - - def __init__(self, last_attempt: "Future") -> None: - self.last_attempt = last_attempt - super().__init__(last_attempt) - - def reraise(self) -> "t.NoReturn": - if self.last_attempt.failed: - raise self.last_attempt.result() - raise self - - def __str__(self) -> str: - return f"{self.__class__.__name__}[{self.last_attempt}]" - - -class AttemptManager: - """Manage attempt context.""" - - def __init__(self, retry_state: "RetryCallState"): - self.retry_state = retry_state - - def __enter__(self) -> None: - pass - - def __exit__( - self, - exc_type: t.Optional[t.Type[BaseException]], - exc_value: t.Optional[BaseException], - traceback: t.Optional["types.TracebackType"], - ) -> t.Optional[bool]: - if exc_type is not None and exc_value is not None: - self.retry_state.set_exception((exc_type, exc_value, traceback)) - return True # Swallow exception. - else: - # We don't have the result, actually. 
- self.retry_state.set_result(None) - return None - - -class BaseRetrying(ABC): - def __init__( - self, - sleep: t.Callable[[t.Union[int, float]], None] = sleep, - stop: "StopBaseT" = stop_never, - wait: "WaitBaseT" = wait_none(), - retry: "RetryBaseT" = retry_if_exception_type(), - before: t.Callable[["RetryCallState"], None] = before_nothing, - after: t.Callable[["RetryCallState"], None] = after_nothing, - before_sleep: t.Optional[t.Callable[["RetryCallState"], None]] = None, - reraise: bool = False, - retry_error_cls: t.Type[RetryError] = RetryError, - retry_error_callback: t.Optional[t.Callable[["RetryCallState"], t.Any]] = None, - ): - self.sleep = sleep - self.stop = stop - self.wait = wait - self.retry = retry - self.before = before - self.after = after - self.before_sleep = before_sleep - self.reraise = reraise - self._local = threading.local() - self.retry_error_cls = retry_error_cls - self.retry_error_callback = retry_error_callback - - def copy( - self, - sleep: t.Union[t.Callable[[t.Union[int, float]], None], object] = _unset, - stop: t.Union["StopBaseT", object] = _unset, - wait: t.Union["WaitBaseT", object] = _unset, - retry: t.Union[retry_base, object] = _unset, - before: t.Union[t.Callable[["RetryCallState"], None], object] = _unset, - after: t.Union[t.Callable[["RetryCallState"], None], object] = _unset, - before_sleep: t.Union[t.Optional[t.Callable[["RetryCallState"], None]], object] = _unset, - reraise: t.Union[bool, object] = _unset, - retry_error_cls: t.Union[t.Type[RetryError], object] = _unset, - retry_error_callback: t.Union[t.Optional[t.Callable[["RetryCallState"], t.Any]], object] = _unset, - ) -> "BaseRetrying": - """Copy this object with some parameters changed if needed.""" - return self.__class__( - sleep=_first_set(sleep, self.sleep), - stop=_first_set(stop, self.stop), - wait=_first_set(wait, self.wait), - retry=_first_set(retry, self.retry), - before=_first_set(before, self.before), - after=_first_set(after, self.after), - before_sleep=_first_set(before_sleep, self.before_sleep), - reraise=_first_set(reraise, self.reraise), - retry_error_cls=_first_set(retry_error_cls, self.retry_error_cls), - retry_error_callback=_first_set(retry_error_callback, self.retry_error_callback), - ) - - def __repr__(self) -> str: - return ( - f"<{self.__class__.__name__} object at 0x{id(self):x} (" - f"stop={self.stop}, " - f"wait={self.wait}, " - f"sleep={self.sleep}, " - f"retry={self.retry}, " - f"before={self.before}, " - f"after={self.after})>" - ) - - @property - def statistics(self) -> t.Dict[str, t.Any]: - """Return a dictionary of runtime statistics. - - This dictionary will be empty when the controller has never been - ran. When it is running or has ran previously it should have (but - may not) have useful and/or informational keys and values when - running is underway and/or completed. - - .. warning:: The keys in this dictionary **should** be some what - stable (not changing), but there existence **may** - change between major releases as new statistics are - gathered or removed so before accessing keys ensure that - they actually exist and handle when they do not. - - .. note:: The values in this dictionary are local to the thread - running call (so if multiple threads share the same retrying - object - either directly or indirectly) they will each have - there own view of statistics they have collected (in the - future we may provide a way to aggregate the various - statistics from each thread). 
- """ - try: - return self._local.statistics # type: ignore[no-any-return] - except AttributeError: - self._local.statistics = t.cast(t.Dict[str, t.Any], {}) - return self._local.statistics - - def wraps(self, f: WrappedFn) -> WrappedFn: - """Wrap a function for retrying. - - :param f: A function to wraps for retrying. - """ - - @functools.wraps(f) - def wrapped_f(*args: t.Any, **kw: t.Any) -> t.Any: - return self(f, *args, **kw) - - def retry_with(*args: t.Any, **kwargs: t.Any) -> WrappedFn: - return self.copy(*args, **kwargs).wraps(f) - - wrapped_f.retry = self # type: ignore[attr-defined] - wrapped_f.retry_with = retry_with # type: ignore[attr-defined] - - return wrapped_f # type: ignore[return-value] - - def begin(self) -> None: - self.statistics.clear() - self.statistics["start_time"] = time.monotonic() - self.statistics["attempt_number"] = 1 - self.statistics["idle_for"] = 0 - - def iter(self, retry_state: "RetryCallState") -> t.Union[DoAttempt, DoSleep, t.Any]: # noqa - fut = retry_state.outcome - if fut is None: - if self.before is not None: - self.before(retry_state) - return DoAttempt() - - is_explicit_retry = fut.failed and isinstance(fut.exception(), TryAgain) - if not (is_explicit_retry or self.retry(retry_state)): - return fut.result() - - if self.after is not None: - self.after(retry_state) - - self.statistics["delay_since_first_attempt"] = retry_state.seconds_since_start - if self.stop(retry_state): - if self.retry_error_callback: - return self.retry_error_callback(retry_state) - retry_exc = self.retry_error_cls(fut) - if self.reraise: - raise retry_exc.reraise() - raise retry_exc from fut.exception() - - if self.wait: - sleep = self.wait(retry_state) - else: - sleep = 0.0 - retry_state.next_action = RetryAction(sleep) - retry_state.idle_for += sleep - self.statistics["idle_for"] += sleep - self.statistics["attempt_number"] += 1 - - if self.before_sleep is not None: - self.before_sleep(retry_state) - - return DoSleep(sleep) - - def __iter__(self) -> t.Generator[AttemptManager, None, None]: - self.begin() - - retry_state = RetryCallState(self, fn=None, args=(), kwargs={}) - while True: - do = self.iter(retry_state=retry_state) - if isinstance(do, DoAttempt): - yield AttemptManager(retry_state=retry_state) - elif isinstance(do, DoSleep): - retry_state.prepare_for_next_attempt() - self.sleep(do) - else: - break - - @abstractmethod - def __call__( - self, - fn: t.Callable[..., WrappedFnReturnT], - *args: t.Any, - **kwargs: t.Any, - ) -> WrappedFnReturnT: - pass - - -class Retrying(BaseRetrying): - """Retrying controller.""" - - def __call__( - self, - fn: t.Callable[..., WrappedFnReturnT], - *args: t.Any, - **kwargs: t.Any, - ) -> WrappedFnReturnT: - self.begin() - - retry_state = RetryCallState(retry_object=self, fn=fn, args=args, kwargs=kwargs) - while True: - do = self.iter(retry_state=retry_state) - if isinstance(do, DoAttempt): - try: - result = fn(*args, **kwargs) - except BaseException: # noqa: B902 - retry_state.set_exception(sys.exc_info()) # type: ignore[arg-type] - else: - retry_state.set_result(result) - elif isinstance(do, DoSleep): - retry_state.prepare_for_next_attempt() - self.sleep(do) - else: - return do # type: ignore[no-any-return] - - -if sys.version_info[1] >= 9: - FutureGenericT = futures.Future[t.Any] -else: - FutureGenericT = futures.Future - - -class Future(FutureGenericT): - """Encapsulates a (future or past) attempted call to a target function.""" - - def __init__(self, attempt_number: int) -> None: - super().__init__() - self.attempt_number = 
attempt_number - - @property - def failed(self) -> bool: - """Return whether a exception is being held in this future.""" - return self.exception() is not None - - @classmethod - def construct(cls, attempt_number: int, value: t.Any, has_exception: bool) -> "Future": - """Construct a new Future object.""" - fut = cls(attempt_number) - if has_exception: - fut.set_exception(value) - else: - fut.set_result(value) - return fut - - -class RetryCallState: - """State related to a single call wrapped with Retrying.""" - - def __init__( - self, - retry_object: BaseRetrying, - fn: t.Optional[WrappedFn], - args: t.Any, - kwargs: t.Any, - ) -> None: - #: Retry call start timestamp - self.start_time = time.monotonic() - #: Retry manager object - self.retry_object = retry_object - #: Function wrapped by this retry call - self.fn = fn - #: Arguments of the function wrapped by this retry call - self.args = args - #: Keyword arguments of the function wrapped by this retry call - self.kwargs = kwargs - - #: The number of the current attempt - self.attempt_number: int = 1 - #: Last outcome (result or exception) produced by the function - self.outcome: t.Optional[Future] = None - #: Timestamp of the last outcome - self.outcome_timestamp: t.Optional[float] = None - #: Time spent sleeping in retries - self.idle_for: float = 0.0 - #: Next action as decided by the retry manager - self.next_action: t.Optional[RetryAction] = None - - @property - def seconds_since_start(self) -> t.Optional[float]: - if self.outcome_timestamp is None: - return None - return self.outcome_timestamp - self.start_time - - def prepare_for_next_attempt(self) -> None: - self.outcome = None - self.outcome_timestamp = None - self.attempt_number += 1 - self.next_action = None - - def set_result(self, val: t.Any) -> None: - ts = time.monotonic() - fut = Future(self.attempt_number) - fut.set_result(val) - self.outcome, self.outcome_timestamp = fut, ts - - def set_exception( - self, exc_info: t.Tuple[t.Type[BaseException], BaseException, "types.TracebackType| None"] - ) -> None: - ts = time.monotonic() - fut = Future(self.attempt_number) - fut.set_exception(exc_info[1]) - self.outcome, self.outcome_timestamp = fut, ts - - def __repr__(self) -> str: - if self.outcome is None: - result = "none yet" - elif self.outcome.failed: - exception = self.outcome.exception() - result = f"failed ({exception.__class__.__name__} {exception})" - else: - result = f"returned {self.outcome.result()}" - - slept = float(round(self.idle_for, 2)) - clsname = self.__class__.__name__ - return f"<{clsname} {id(self)}: attempt #{self.attempt_number}; slept for {slept}; last result: {result}>" - - -@t.overload -def retry(func: WrappedFn) -> WrappedFn: - ... - - -@t.overload -def retry( - sleep: t.Callable[[t.Union[int, float]], None] = sleep, - stop: "StopBaseT" = stop_never, - wait: "WaitBaseT" = wait_none(), - retry: "RetryBaseT" = retry_if_exception_type(), - before: t.Callable[["RetryCallState"], None] = before_nothing, - after: t.Callable[["RetryCallState"], None] = after_nothing, - before_sleep: t.Optional[t.Callable[["RetryCallState"], None]] = None, - reraise: bool = False, - retry_error_cls: t.Type["RetryError"] = RetryError, - retry_error_callback: t.Optional[t.Callable[["RetryCallState"], t.Any]] = None, -) -> t.Callable[[WrappedFn], WrappedFn]: - ... - - -def retry(*dargs: t.Any, **dkw: t.Any) -> t.Any: - """Wrap a function with a new `Retrying` object. 
- - :param dargs: positional arguments passed to Retrying object - :param dkw: keyword arguments passed to the Retrying object - """ - # support both @retry and @retry() as valid syntax - if len(dargs) == 1 and callable(dargs[0]): - return retry()(dargs[0]) - else: - - def wrap(f: WrappedFn) -> WrappedFn: - if isinstance(f, retry_base): - warnings.warn( - f"Got retry_base instance ({f.__class__.__name__}) as callable argument, " - f"this will probably hang indefinitely (did you mean retry={f.__class__.__name__}(...)?)" - ) - r: "BaseRetrying" - if iscoroutinefunction(f): - r = AsyncRetrying(*dargs, **dkw) - elif tornado and hasattr(tornado.gen, "is_coroutine_function") and tornado.gen.is_coroutine_function(f): - r = TornadoRetrying(*dargs, **dkw) - else: - r = Retrying(*dargs, **dkw) - - return r.wraps(f) - - return wrap - - -from pip._vendor.tenacity._asyncio import AsyncRetrying # noqa:E402,I100 - -if tornado: - from pip._vendor.tenacity.tornadoweb import TornadoRetrying - - -__all__ = [ - "retry_base", - "retry_all", - "retry_always", - "retry_any", - "retry_if_exception", - "retry_if_exception_type", - "retry_if_exception_cause_type", - "retry_if_not_exception_type", - "retry_if_not_result", - "retry_if_result", - "retry_never", - "retry_unless_exception_type", - "retry_if_exception_message", - "retry_if_not_exception_message", - "sleep", - "sleep_using_event", - "stop_after_attempt", - "stop_after_delay", - "stop_all", - "stop_any", - "stop_never", - "stop_when_event_set", - "wait_chain", - "wait_combine", - "wait_exponential", - "wait_fixed", - "wait_incrementing", - "wait_none", - "wait_random", - "wait_random_exponential", - "wait_full_jitter", - "wait_exponential_jitter", - "before_log", - "before_nothing", - "after_log", - "after_nothing", - "before_sleep_log", - "before_sleep_nothing", - "retry", - "WrappedFn", - "TryAgain", - "NO_RESULT", - "DoAttempt", - "DoSleep", - "BaseAction", - "RetryAction", - "RetryError", - "AttemptManager", - "BaseRetrying", - "Retrying", - "Future", - "RetryCallState", - "AsyncRetrying", -] diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/_reqs.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/_reqs.py deleted file mode 100644 index ca7241746b18940a5f9a4bcd9dddd4b70a12e3f7..0000000000000000000000000000000000000000 --- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/_reqs.py +++ /dev/null @@ -1,19 +0,0 @@ -import setuptools.extern.jaraco.text as text - -from pkg_resources import Requirement - - -def parse_strings(strs): - """ - Yield requirement strings for each specification in `strs`. - - `strs` must be a string, or a (possibly-nested) iterable thereof. - """ - return text.join_continuation(map(text.drop_comment, text.yield_lines(strs))) - - -def parse(strs): - """ - Deprecated drop-in replacement for pkg_resources.parse_requirements. 
- """ - return map(Requirement, parse_strings(strs)) diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/sandbox.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/sandbox.py deleted file mode 100644 index 034fc80d20ea4a59d77af6f808dbcfc3b87612c3..0000000000000000000000000000000000000000 --- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/sandbox.py +++ /dev/null @@ -1,530 +0,0 @@ -import os -import sys -import tempfile -import operator -import functools -import itertools -import re -import contextlib -import pickle -import textwrap -import builtins - -import pkg_resources -from distutils.errors import DistutilsError -from pkg_resources import working_set - -if sys.platform.startswith('java'): - import org.python.modules.posix.PosixModule as _os -else: - _os = sys.modules[os.name] -try: - _file = file -except NameError: - _file = None -_open = open - - -__all__ = [ - "AbstractSandbox", - "DirectorySandbox", - "SandboxViolation", - "run_setup", -] - - -def _execfile(filename, globals, locals=None): - """ - Python 3 implementation of execfile. - """ - mode = 'rb' - with open(filename, mode) as stream: - script = stream.read() - if locals is None: - locals = globals - code = compile(script, filename, 'exec') - exec(code, globals, locals) - - -@contextlib.contextmanager -def save_argv(repl=None): - saved = sys.argv[:] - if repl is not None: - sys.argv[:] = repl - try: - yield saved - finally: - sys.argv[:] = saved - - -@contextlib.contextmanager -def save_path(): - saved = sys.path[:] - try: - yield saved - finally: - sys.path[:] = saved - - -@contextlib.contextmanager -def override_temp(replacement): - """ - Monkey-patch tempfile.tempdir with replacement, ensuring it exists - """ - os.makedirs(replacement, exist_ok=True) - - saved = tempfile.tempdir - - tempfile.tempdir = replacement - - try: - yield - finally: - tempfile.tempdir = saved - - -@contextlib.contextmanager -def pushd(target): - saved = os.getcwd() - os.chdir(target) - try: - yield saved - finally: - os.chdir(saved) - - -class UnpickleableException(Exception): - """ - An exception representing another Exception that could not be pickled. - """ - - @staticmethod - def dump(type, exc): - """ - Always return a dumped (pickled) type and exc. If exc can't be pickled, - wrap it in UnpickleableException first. - """ - try: - return pickle.dumps(type), pickle.dumps(exc) - except Exception: - # get UnpickleableException inside the sandbox - from setuptools.sandbox import UnpickleableException as cls - - return cls.dump(cls, cls(repr(exc))) - - -class ExceptionSaver: - """ - A Context Manager that will save an exception, serialized, and restore it - later. - """ - - def __enter__(self): - return self - - def __exit__(self, type, exc, tb): - if not exc: - return - - # dump the exception - self._saved = UnpickleableException.dump(type, exc) - self._tb = tb - - # suppress the exception - return True - - def resume(self): - "restore and re-raise any exception" - - if '_saved' not in vars(self): - return - - type, exc = map(pickle.loads, self._saved) - raise exc.with_traceback(self._tb) - - -@contextlib.contextmanager -def save_modules(): - """ - Context in which imported modules are saved. - - Translates exceptions internal to the context into the equivalent exception - outside the context. 
- """ - saved = sys.modules.copy() - with ExceptionSaver() as saved_exc: - yield saved - - sys.modules.update(saved) - # remove any modules imported since - del_modules = ( - mod_name - for mod_name in sys.modules - if mod_name not in saved - # exclude any encodings modules. See #285 - and not mod_name.startswith('encodings.') - ) - _clear_modules(del_modules) - - saved_exc.resume() - - -def _clear_modules(module_names): - for mod_name in list(module_names): - del sys.modules[mod_name] - - -@contextlib.contextmanager -def save_pkg_resources_state(): - saved = pkg_resources.__getstate__() - try: - yield saved - finally: - pkg_resources.__setstate__(saved) - - -@contextlib.contextmanager -def setup_context(setup_dir): - temp_dir = os.path.join(setup_dir, 'temp') - with save_pkg_resources_state(): - with save_modules(): - with save_path(): - hide_setuptools() - with save_argv(): - with override_temp(temp_dir): - with pushd(setup_dir): - # ensure setuptools commands are available - __import__('setuptools') - yield - - -_MODULES_TO_HIDE = { - 'setuptools', - 'distutils', - 'pkg_resources', - 'Cython', - '_distutils_hack', -} - - -def _needs_hiding(mod_name): - """ - >>> _needs_hiding('setuptools') - True - >>> _needs_hiding('pkg_resources') - True - >>> _needs_hiding('setuptools_plugin') - False - >>> _needs_hiding('setuptools.__init__') - True - >>> _needs_hiding('distutils') - True - >>> _needs_hiding('os') - False - >>> _needs_hiding('Cython') - True - """ - base_module = mod_name.split('.', 1)[0] - return base_module in _MODULES_TO_HIDE - - -def hide_setuptools(): - """ - Remove references to setuptools' modules from sys.modules to allow the - invocation to import the most appropriate setuptools. This technique is - necessary to avoid issues such as #315 where setuptools upgrading itself - would fail to find a function declared in the metadata. 
- """ - _distutils_hack = sys.modules.get('_distutils_hack', None) - if _distutils_hack is not None: - _distutils_hack.remove_shim() - - modules = filter(_needs_hiding, sys.modules) - _clear_modules(modules) - - -def run_setup(setup_script, args): - """Run a distutils setup script, sandboxed in its directory""" - setup_dir = os.path.abspath(os.path.dirname(setup_script)) - with setup_context(setup_dir): - try: - sys.argv[:] = [setup_script] + list(args) - sys.path.insert(0, setup_dir) - # reset to include setup dir, w/clean callback list - working_set.__init__() - working_set.callbacks.append(lambda dist: dist.activate()) - - with DirectorySandbox(setup_dir): - ns = dict(__file__=setup_script, __name__='__main__') - _execfile(setup_script, ns) - except SystemExit as v: - if v.args and v.args[0]: - raise - # Normal exit, just return - - -class AbstractSandbox: - """Wrap 'os' module and 'open()' builtin for virtualizing setup scripts""" - - _active = False - - def __init__(self): - self._attrs = [ - name - for name in dir(_os) - if not name.startswith('_') and hasattr(self, name) - ] - - def _copy(self, source): - for name in self._attrs: - setattr(os, name, getattr(source, name)) - - def __enter__(self): - self._copy(self) - if _file: - builtins.file = self._file - builtins.open = self._open - self._active = True - - def __exit__(self, exc_type, exc_value, traceback): - self._active = False - if _file: - builtins.file = _file - builtins.open = _open - self._copy(_os) - - def run(self, func): - """Run 'func' under os sandboxing""" - with self: - return func() - - def _mk_dual_path_wrapper(name): - original = getattr(_os, name) - - def wrap(self, src, dst, *args, **kw): - if self._active: - src, dst = self._remap_pair(name, src, dst, *args, **kw) - return original(src, dst, *args, **kw) - - return wrap - - for name in ["rename", "link", "symlink"]: - if hasattr(_os, name): - locals()[name] = _mk_dual_path_wrapper(name) - - def _mk_single_path_wrapper(name, original=None): - original = original or getattr(_os, name) - - def wrap(self, path, *args, **kw): - if self._active: - path = self._remap_input(name, path, *args, **kw) - return original(path, *args, **kw) - - return wrap - - if _file: - _file = _mk_single_path_wrapper('file', _file) - _open = _mk_single_path_wrapper('open', _open) - for name in [ - "stat", - "listdir", - "chdir", - "open", - "chmod", - "chown", - "mkdir", - "remove", - "unlink", - "rmdir", - "utime", - "lchown", - "chroot", - "lstat", - "startfile", - "mkfifo", - "mknod", - "pathconf", - "access", - ]: - if hasattr(_os, name): - locals()[name] = _mk_single_path_wrapper(name) - - def _mk_single_with_return(name): - original = getattr(_os, name) - - def wrap(self, path, *args, **kw): - if self._active: - path = self._remap_input(name, path, *args, **kw) - return self._remap_output(name, original(path, *args, **kw)) - return original(path, *args, **kw) - - return wrap - - for name in ['readlink', 'tempnam']: - if hasattr(_os, name): - locals()[name] = _mk_single_with_return(name) - - def _mk_query(name): - original = getattr(_os, name) - - def wrap(self, *args, **kw): - retval = original(*args, **kw) - if self._active: - return self._remap_output(name, retval) - return retval - - return wrap - - for name in ['getcwd', 'tmpnam']: - if hasattr(_os, name): - locals()[name] = _mk_query(name) - - def _validate_path(self, path): - """Called to remap or validate any path, whether input or output""" - return path - - def _remap_input(self, operation, path, *args, **kw): - """Called 
for path inputs""" - return self._validate_path(path) - - def _remap_output(self, operation, path): - """Called for path outputs""" - return self._validate_path(path) - - def _remap_pair(self, operation, src, dst, *args, **kw): - """Called for path pairs like rename, link, and symlink operations""" - return ( - self._remap_input(operation + '-from', src, *args, **kw), - self._remap_input(operation + '-to', dst, *args, **kw), - ) - - -if hasattr(os, 'devnull'): - _EXCEPTIONS = [os.devnull] -else: - _EXCEPTIONS = [] - - -class DirectorySandbox(AbstractSandbox): - """Restrict operations to a single subdirectory - pseudo-chroot""" - - write_ops = dict.fromkeys( - [ - "open", - "chmod", - "chown", - "mkdir", - "remove", - "unlink", - "rmdir", - "utime", - "lchown", - "chroot", - "mkfifo", - "mknod", - "tempnam", - ] - ) - - _exception_patterns = [] - "exempt writing to paths that match the pattern" - - def __init__(self, sandbox, exceptions=_EXCEPTIONS): - self._sandbox = os.path.normcase(os.path.realpath(sandbox)) - self._prefix = os.path.join(self._sandbox, '') - self._exceptions = [ - os.path.normcase(os.path.realpath(path)) for path in exceptions - ] - AbstractSandbox.__init__(self) - - def _violation(self, operation, *args, **kw): - from setuptools.sandbox import SandboxViolation - - raise SandboxViolation(operation, args, kw) - - if _file: - - def _file(self, path, mode='r', *args, **kw): - if mode not in ('r', 'rt', 'rb', 'rU', 'U') and not self._ok(path): - self._violation("file", path, mode, *args, **kw) - return _file(path, mode, *args, **kw) - - def _open(self, path, mode='r', *args, **kw): - if mode not in ('r', 'rt', 'rb', 'rU', 'U') and not self._ok(path): - self._violation("open", path, mode, *args, **kw) - return _open(path, mode, *args, **kw) - - def tmpnam(self): - self._violation("tmpnam") - - def _ok(self, path): - active = self._active - try: - self._active = False - realpath = os.path.normcase(os.path.realpath(path)) - return ( - self._exempted(realpath) - or realpath == self._sandbox - or realpath.startswith(self._prefix) - ) - finally: - self._active = active - - def _exempted(self, filepath): - start_matches = ( - filepath.startswith(exception) for exception in self._exceptions - ) - pattern_matches = ( - re.match(pattern, filepath) for pattern in self._exception_patterns - ) - candidates = itertools.chain(start_matches, pattern_matches) - return any(candidates) - - def _remap_input(self, operation, path, *args, **kw): - """Called for path inputs""" - if operation in self.write_ops and not self._ok(path): - self._violation(operation, os.path.realpath(path), *args, **kw) - return path - - def _remap_pair(self, operation, src, dst, *args, **kw): - """Called for path pairs like rename, link, and symlink operations""" - if not self._ok(src) or not self._ok(dst): - self._violation(operation, src, dst, *args, **kw) - return (src, dst) - - def open(self, file, flags, mode=0o777, *args, **kw): - """Called for low-level os.open()""" - if flags & WRITE_FLAGS and not self._ok(file): - self._violation("os.open", file, flags, mode, *args, **kw) - return _os.open(file, flags, mode, *args, **kw) - - -WRITE_FLAGS = functools.reduce( - operator.or_, - [ - getattr(_os, a, 0) - for a in "O_WRONLY O_RDWR O_APPEND O_CREAT O_TRUNC O_TEMPORARY".split() - ], -) - - -class SandboxViolation(DistutilsError): - """A setup script attempted to modify the filesystem outside the sandbox""" - - tmpl = textwrap.dedent( - """ - SandboxViolation: {cmd}{args!r} {kwargs} - - The package setup script has 
attempted to modify files on your system - that are not within the EasyInstall build area, and has been aborted. - - This package cannot be safely installed by EasyInstall, and may not - support alternate installation locations even if you run its setup - script by hand. Please inform the package's author and the EasyInstall - maintainers to find out if a fix or workaround is available. - """ - ).lstrip() - - def __str__(self): - cmd, args, kwargs = self.args - return self.tmpl.format(**locals()) diff --git a/spaces/Awesimo/jojogan/e4e/models/stylegan2/op/upfirdn2d.cpp b/spaces/Awesimo/jojogan/e4e/models/stylegan2/op/upfirdn2d.cpp deleted file mode 100644 index d2e633dc896433c205e18bc3e455539192ff968e..0000000000000000000000000000000000000000 --- a/spaces/Awesimo/jojogan/e4e/models/stylegan2/op/upfirdn2d.cpp +++ /dev/null @@ -1,23 +0,0 @@ -#include - - -torch::Tensor upfirdn2d_op(const torch::Tensor& input, const torch::Tensor& kernel, - int up_x, int up_y, int down_x, int down_y, - int pad_x0, int pad_x1, int pad_y0, int pad_y1); - -#define CHECK_CUDA(x) TORCH_CHECK(x.type().is_cuda(), #x " must be a CUDA tensor") -#define CHECK_CONTIGUOUS(x) TORCH_CHECK(x.is_contiguous(), #x " must be contiguous") -#define CHECK_INPUT(x) CHECK_CUDA(x); CHECK_CONTIGUOUS(x) - -torch::Tensor upfirdn2d(const torch::Tensor& input, const torch::Tensor& kernel, - int up_x, int up_y, int down_x, int down_y, - int pad_x0, int pad_x1, int pad_y0, int pad_y1) { - CHECK_CUDA(input); - CHECK_CUDA(kernel); - - return upfirdn2d_op(input, kernel, up_x, up_y, down_x, down_y, pad_x0, pad_x1, pad_y0, pad_y1); -} - -PYBIND11_MODULE(TORCH_EXTENSION_NAME, m) { - m.def("upfirdn2d", &upfirdn2d, "upfirdn2d (CUDA)"); -} \ No newline at end of file diff --git a/spaces/Awiny/Image2Paragraph/README.md b/spaces/Awiny/Image2Paragraph/README.md deleted file mode 100644 index 4a5e8b4e06dd7091161ea9f1cba906455daedc8f..0000000000000000000000000000000000000000 --- a/spaces/Awiny/Image2Paragraph/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Image2Paragraph -emoji: 🌖 -colorFrom: green -colorTo: red -sdk: gradio -sdk_version: 3.24.1 -app_file: app.py -pinned: false -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/detectron2/data/datasets/pascal_voc.py b/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/detectron2/data/datasets/pascal_voc.py deleted file mode 100644 index dbbf82cb96442bfa0cf05ed0f4dddf3645434b7e..0000000000000000000000000000000000000000 --- a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/detectron2/data/datasets/pascal_voc.py +++ /dev/null @@ -1,82 +0,0 @@ -# -*- coding: utf-8 -*- -# Copyright (c) Facebook, Inc. and its affiliates. 
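The detectron2 module that begins above (its body follows immediately below) defines load_voc_instances and register_pascal_voc for Pascal VOC annotations. As a point of reference, a hypothetical registration and lookup with that helper could look like this sketch; the dataset name and directory are assumptions, not values taken from the source.

```python
# Hypothetical usage of the register_pascal_voc helper defined in the file below.
from detectron2.data import DatasetCatalog, MetadataCatalog
from detectron2.data.datasets.pascal_voc import register_pascal_voc

# Register under a custom name to avoid clashing with detectron2's built-in VOC entries.
register_pascal_voc("my_voc_2007_trainval", dirname="datasets/VOC2007", split="trainval", year=2007)

dicts = DatasetCatalog.get("my_voc_2007_trainval")              # parses the XML annotations at lookup time
classes = MetadataCatalog.get("my_voc_2007_trainval").thing_classes
print(len(dicts), classes[:5])
```

Using a custom dataset name matters because detectron2 pre-registers the standard voc_2007_*/voc_2012_* splits and DatasetCatalog.register rejects duplicate names.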
- -import numpy as np -import os -import xml.etree.ElementTree as ET -from typing import List, Tuple, Union - -from detectron2.data import DatasetCatalog, MetadataCatalog -from detectron2.structures import BoxMode -from detectron2.utils.file_io import PathManager - -__all__ = ["load_voc_instances", "register_pascal_voc"] - - -# fmt: off -CLASS_NAMES = ( - "aeroplane", "bicycle", "bird", "boat", "bottle", "bus", "car", "cat", - "chair", "cow", "diningtable", "dog", "horse", "motorbike", "person", - "pottedplant", "sheep", "sofa", "train", "tvmonitor" -) -# fmt: on - - -def load_voc_instances(dirname: str, split: str, class_names: Union[List[str], Tuple[str, ...]]): - """ - Load Pascal VOC detection annotations to Detectron2 format. - - Args: - dirname: Contain "Annotations", "ImageSets", "JPEGImages" - split (str): one of "train", "test", "val", "trainval" - class_names: list or tuple of class names - """ - with PathManager.open(os.path.join(dirname, "ImageSets", "Main", split + ".txt")) as f: - fileids = np.loadtxt(f, dtype=np.str) - - # Needs to read many small annotation files. Makes sense at local - annotation_dirname = PathManager.get_local_path(os.path.join(dirname, "Annotations/")) - dicts = [] - for fileid in fileids: - anno_file = os.path.join(annotation_dirname, fileid + ".xml") - jpeg_file = os.path.join(dirname, "JPEGImages", fileid + ".jpg") - - with PathManager.open(anno_file) as f: - tree = ET.parse(f) - - r = { - "file_name": jpeg_file, - "image_id": fileid, - "height": int(tree.findall("./size/height")[0].text), - "width": int(tree.findall("./size/width")[0].text), - } - instances = [] - - for obj in tree.findall("object"): - cls = obj.find("name").text - # We include "difficult" samples in training. - # Based on limited experiments, they don't hurt accuracy. - # difficult = int(obj.find("difficult").text) - # if difficult == 1: - # continue - bbox = obj.find("bndbox") - bbox = [float(bbox.find(x).text) for x in ["xmin", "ymin", "xmax", "ymax"]] - # Original annotations are integers in the range [1, W or H] - # Assuming they mean 1-based pixel indices (inclusive), - # a box with annotation (xmin=1, xmax=W) covers the whole image. - # In coordinate space this is represented by (xmin=0, xmax=W) - bbox[0] -= 1.0 - bbox[1] -= 1.0 - instances.append( - {"category_id": class_names.index(cls), "bbox": bbox, "bbox_mode": BoxMode.XYXY_ABS} - ) - r["annotations"] = instances - dicts.append(r) - return dicts - - -def register_pascal_voc(name, dirname, split, year, class_names=CLASS_NAMES): - DatasetCatalog.register(name, lambda: load_voc_instances(dirname, split, class_names)) - MetadataCatalog.get(name).set( - thing_classes=list(class_names), dirname=dirname, year=year, split=split - ) diff --git a/spaces/Benson/text-generation/Examples/Bet 365.gr.md b/spaces/Benson/text-generation/Examples/Bet 365.gr.md deleted file mode 100644 index 3c1fe1107b90a9a055b82a746b9ea65307715707..0000000000000000000000000000000000000000 --- a/spaces/Benson/text-generation/Examples/Bet 365.gr.md +++ /dev/null @@ -1,80 +0,0 @@ -
    -

    Zenless Zone Zero: Un nuevo juego de acción ambientado en un mundo post-apocalíptico

    -

    ¿Estás buscando un nuevo juego de acción que desafíe tus habilidades y te sumerja en una historia emocionante? Si es así, es posible que quieras echar un vistazo a Zenless Zone Zero, un nuevo juego desarrollado por HoYoverse. En este artículo, te diremos todo lo que necesitas saber sobre este juego, incluyendo qué es, cómo descargarlo y jugarlo en el PC, y dónde encontrar más información sobre él. ¡Vamos a empezar!

    -

    ¿Qué es la zona cero de Zenless?

    -

    Zenless Zone Zero es un juego de rol que combina acción, aventura y misterio. El juego tiene lugar en un futuro próximo, donde ha ocurrido un misterioso desastre natural conocido como los "Hollows". Los Hollows han causado destrucción masiva y caos en todo el mundo, dejando solo una ciudad en pie: New Eridu. New Eridu es una ciudad que se ha adaptado a las nuevas condiciones y depende de los Hollows para sobrevivir. Sin embargo, esto también trae nuevos enemigos y peligros, como pandillas, mafias y monstruos. Como jugador, asumirás el papel de Proxy, una persona que guía a los visitantes por los Hollows para hacer turismo o explorar. También tendrás que luchar contra varias amenazas y descubrir los secretos detrás de los Hollows y New Eridu.

    -






    -

    La historia y la configuración del juego

    -

    La historia de Zenless Zone Zero se desarrolla en un mundo post-apocalíptico que ha sido devastado por los Hollows. Los Hollows son fenómenos misteriosos que han aparecido en todo el mundo, creando enormes cráteres y alterando el medio ambiente. Nadie sabe lo que los causó o lo que son, pero parecen tener algún tipo de inteligencia y poder. Algunas personas creen que son castigos divinos, mientras que otros piensan que son invasiones alienígenas. Lo único seguro es que lo han cambiado todo.

    - -

    La jugabilidad y características del juego

    -

    Zenless Zone Zero es un juego que ofrece muchas opciones de juego y características para que los jugadores disfruten. Podrás personalizar la apariencia, habilidades, armas y equipo de tu personaje a medida que avanzas en el juego. También podrás elegir a tus aliados de diferentes facciones y personajes, cada uno con sus propias personalidades y habilidades. Tendrás que cooperar con ellos en combate y diálogo, así como tomar decisiones que afectarán tus relaciones y resultados.

    -

    El juego también tiene un sistema de combate dinámico que te permite usar diferentes estrategias y tácticas dependiendo de tus enemigos y situaciones. Podrás cambiar entre ataques cuerpo a cuerpo y a distancia, usar habilidades y objetos especiales, esquivar y detener ataques, y realizar combos y finalizadores. También tendrá que lidiar con peligros y obstáculos ambientales, como trampas, explosivos, escombros y efectos climáticos. El juego desafiará tus reflejos y habilidades mientras te enfrentas a varios enemigos y jefes.

    Los personajes y facciones del juego

    - -

    ¿Cómo descargar y jugar Zenless Zone Zero en PC?

    -

    Zenless Zone Zero es un juego que está disponible para dispositivos móviles y PC. Sin embargo, si quieres disfrutar del juego en una pantalla más grande, con mejores gráficos, sonido y rendimiento, es posible que quieras jugarlo en PC. La mejor manera de hacerlo es mediante el uso de BlueStacks, que es un emulador de Android potente y confiable que le permite ejecutar cualquier aplicación o juego de Android en su PC. Estos son algunos de los beneficios de jugar Zenless Zone Zero en PC con BlueStacks:

    -

    Los beneficios de jugar en PC con BlueStacks

    - -

    Los pasos para instalar y ejecutar el juego en el PC con BlueStacks

    -
      -
    1. Descargar e instalar BlueStacks en su PC desde el sitio web oficial: [BlueStacks].
    2. Inicie BlueStacks e inicie sesión con su cuenta de Google.
    3. Buscar Zenless Zone Zero en la barra de búsqueda o ir a la aplicación Google Play Store.
    4. Haga clic en el icono del juego e instálelo en su PC.
    5. Una vez completada la instalación, haga clic en el icono del juego en la pantalla de inicio o en la pestaña Mis juegos.
    6. Disfruta jugando Zenless Zone Zero en PC con BlueStacks!

    Los consejos y trucos para disfrutar del juego en el PC con BlueStacks

    - - -

    Sitio oficial de Zenless Zone Zero y redes sociales

    -

    Si desea obtener más información sobre Zenless Zone Zero, es posible que desee visitar su sitio oficial y las cuentas de redes sociales. Allí, podrás encontrar más detalles sobre el juego, como su historia, personajes, características, etc. También podrás acceder a sus últimas noticias y actualizaciones, como nuevos lanzamientos, parches, eventos, etc. También podrás interactuar con otros fans y jugadores del juego, así como con los propios desarrolladores. Estos son algunos de los enlaces que puedes consultar:

    -

    El sitio oficial del juego

    -

    El sitio oficial de Zenless Zone Zero es [Zenless Zone Zero]. Allí, podrás encontrar todo lo que necesitas saber sobre el juego, como su resumen, tráiler, capturas de pantalla, requisitos del sistema, etc. También podrás descargar el juego de forma gratuita desde allí.

    -

    Las cuentas oficiales de redes sociales del juego

    - - -

    Las últimas noticias y actualizaciones del juego

    -

    Zenless Zone Zero es un juego que está siendo constantemente actualizado y mejorado por sus desarrolladores. Siempre están trabajando para añadir nuevos contenidos y características al juego, así como para solucionar errores y problemas. También están escuchando los comentarios y sugerencias de los jugadores y fans. Si desea mantenerse al día sobre lo que es nuevo y lo que viene a continuación para Zenless Zone Zero, es posible que desee consultar su blog o boletín de noticias. Allí, podrás leer sus últimos artículos y anuncios sobre el desarrollo y progreso del juego. También podrás registrarte en su lista de correo electrónico y recibir ofertas y recompensas exclusivas. Estos son algunos de los enlaces que puedes consultar:

    - -

    Conclusión

    -

    Zenless Zone Zero es un nuevo juego de acción que se desarrolla en un mundo post-apocalíptico donde ha ocurrido un misterioso desastre natural conocido como los Hollows. El juego te permite jugar como un Proxy, una persona que guía a los visitantes alrededor de los Hollows por varias razones. También tendrás que luchar contra varios enemigos y descubrir los secretos detrás de los Hollows y New Eridu.

    -

    -

    Si quieres jugar Zenless Zone Zero en PC con mejores gráficos, sonido, rendimiento y características, puedes usar BlueStacks, que es un potente emulador de Android que te permite ejecutar cualquier aplicación o juego de Android en tu PC. Puede descargar BlueStacks gratis desde su sitio web oficial e instalar Zenless Zone Zero en su PC fácilmente. También puede utilizar varias características y herramientas que BlueStacks ofrece para mejorar su experiencia de juego.

    - -

    Zenless Zone Zero es un juego que te mantendrá entretenido y comprometido durante horas con su emocionante historia, jugabilidad y características. Si eres un fan de los juegos de acción, definitivamente deberías intentarlo. Puedes descargarlo gratis desde la Google Play Store o el sitio oficial del juego. También puede jugar en el PC con BlueStacks para una mejor experiencia de juego. ¡No pierdas esta oportunidad de explorar los Huecos y el Nuevo Eridu con Zenless Zone Zero!

    -

    Preguntas frecuentes

    -

    Estas son algunas de las preguntas más frecuentes sobre Zenless Zone Zero:

    -
      -
    1. ¿Cuál es el género de Zenless Zone Zero?

      Zenless Zone Zero es un juego de rol que combina acción, aventura y misterio.

      -
    2. ¿Cuál es la calificación de Zenless Zone Zero?

      Zenless Zone Zero está clasificado T para Teen por la ESRB. Contiene violencia, sangre, lenguaje y temas sugerentes.

      -
    3. ¿Cuánto dura Zenless Zone Zero?

      Zenless Zone Zero tiene un tiempo de juego estimado de 20 horas para la historia principal y de unas 40 horas para completarlo al 100%.

      -
    4. ¿Zenless Zone Zero tiene modo multijugador?

      Zenless Zone Zero no tiene modo multijugador en este momento, pero podría añadirse en futuras actualizaciones.

      -
    5. ¿Tiene Zenless Zone Zero microtransacciones?

      Zenless Zone Zero no tiene microtransacciones, pero podría tener compras opcionales en la aplicación en futuras actualizaciones.

      -

    -
    -
    \ No newline at end of file diff --git a/spaces/Benson/text-generation/Examples/Carreras De Trfico De Coches - Juegos 3d.md b/spaces/Benson/text-generation/Examples/Carreras De Trfico De Coches - Juegos 3d.md deleted file mode 100644 index ba69e5a0e905b474a7ac0d88e14ebf4552490edb..0000000000000000000000000000000000000000 --- a/spaces/Benson/text-generation/Examples/Carreras De Trfico De Coches - Juegos 3d.md +++ /dev/null @@ -1,75 +0,0 @@ - -

    Carreras de tráfico de coches - Juegos 3D: Una guía para principiantes

    -

    Si te gusta conducir coches rápidos y tejer a través del tráfico, entonces es probable que disfrutes jugando a carreras de tráfico de coches - juegos 3d. Estos son juegos que simulan la emoción y el desafío de conducir en carreteras concurridas, autopistas y calles de la ciudad. Puedes elegir entre diferentes coches, modos y entornos, y probar tus habilidades y reflejos contra otros conductores, límites de tiempo y obstáculos.

    -

    En este artículo, le daremos una guía completa sobre lo que son las carreras de tráfico de automóviles - juegos en 3D, cómo jugarlos y dónde encontrarlos. También compartiremos algunos consejos y trucos para ayudarte a mejorar tu rendimiento y divertirte más. ¡Empecemos!

    -






    -

    ¿Qué son las carreras de tráfico de coches - juegos 3d?

    -

    Carreras de tráfico de coches - juegos 3d son un subgénero de juegos de conducción que se centran en las carreras a través del tráfico en gráficos 3D realistas. Son diferentes de otros juegos de carreras en que tienen elementos más dinámicos e impredecibles, como coches, camiones, autobuses, peatones y policía. También tienen más variedad y opciones de personalización, como diferentes coches, colores, mejoras, ajustes y misiones.

    -

    Las características y beneficios de las carreras de tráfico de coches - juegos 3d

    -

    Algunas de las características y beneficios de las carreras de tráfico de coches - juegos 3d son:

    - -

    Los tipos y géneros de carreras de tráfico de coches - juegos 3d

    -

    Hay muchos tipos y géneros de carreras de tráfico de coches - juegos 3d disponibles en línea o fuera de línea. Algunos de los más populares son:

    - -
    ¿Cómo jugar carreras de tráfico de coches - juegos 3d?
    -

    Jugar carreras de tráfico de coches - juegos 3d es simple e intuitivo. Solo tienes que seguir las instrucciones de la pantalla y utilizar el teclado, el ratón o los controles táctiles para controlar el coche. Aquí están algunos de los controles básicos y la mecánica de las carreras de tráfico de coches - juegos 3d:

    -

    The basic controls and mechanics of car traffic racing 3D games

    -

    Accelerating, braking, steering, and drifting

    -

      To accelerate your car, you can press the up arrow key, the W key, or the right pedal on the screen. To brake, press the down arrow key, the S key, or the left pedal on the screen. To steer, use the left and right arrow keys, the A and D keys, or tilt your device. To drift, press the spacebar, the Shift key, or swipe on the screen. A minimal, purely illustrative keyboard-handling sketch is shown below.
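    To make that mapping concrete, here is a tiny, purely illustrative sketch of how a desktop traffic-racing game might poll those keys each frame. It is not taken from any of the games mentioned in this guide; it simply uses pygame (assumed to be installed with `pip install pygame`) to show the usual convention of up/W to accelerate, down/S to brake, left-right/A-D to steer, and Space or Shift to drift.

```python
import pygame

# Minimal window and clock so the input loop has something to run against.
pygame.init()
screen = pygame.display.set_mode((640, 480))
clock = pygame.time.Clock()

speed = 0.0
steering = 0.0
running = True
while running:
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            running = False

    keys = pygame.key.get_pressed()
    if keys[pygame.K_UP] or keys[pygame.K_w]:       # accelerate
        speed += 0.5
    if keys[pygame.K_DOWN] or keys[pygame.K_s]:     # brake
        speed = max(0.0, speed - 1.0)
    if keys[pygame.K_LEFT] or keys[pygame.K_a]:     # steer left
        steering = -1.0
    elif keys[pygame.K_RIGHT] or keys[pygame.K_d]:  # steer right
        steering = 1.0
    else:
        steering = 0.0
    drifting = keys[pygame.K_SPACE] or keys[pygame.K_LSHIFT]  # drift modifier

    clock.tick(60)  # cap the loop at 60 updates per second

pygame.quit()
```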

    -

    Using nitro, the horn, and the lights

    - -

    Changing the camera view and perspective

    -

    To change the camera view and perspective, you can press the C key, the V key, or tap the camera icon on the screen. You can choose between different views, such as first person, third person, top-down, or rear view.

    -

    -

    Tips and tricks for car traffic racing 3D games

    -

    How to avoid traffic and obstacles

    -

    To avoid traffic and obstacles, you need to stay alert and attentive. Watch out for other cars, trucks, buses, pedestrians, police cars, traffic signs, barriers, cones, and more. Use your steering skills and reflexes to dodge or outrun them. You should also follow the traffic rules and signals, such as red lights, stop signs, speed limits, and lane markings. If you break them or cause accidents, you will lose points or be chased by the police.

    -

    How to earn money and upgrade your car

    -

    To earn money and upgrade your car, you need to complete missions and challenges. You can find them on the map or in the menu. They can be different kinds of tasks, such as reaching a certain speed or distance within a time limit, overtaking a certain number of cars, avoiding a certain number of collisions, collecting a certain number of coins, or winning a certain number of races. You can use the cash to buy new cars or upgrade existing ones. You can improve their performance, such as speed, acceleration, handling, braking, and nitro. You can also customize their appearance, such as color, paint, wheels, and stickers.

    -

    How to complete missions and challenges

    - -

    Where to find and download car traffic racing 3D games?

    -

    There are many websites and platforms where you can find and download car traffic racing 3D games. Some of the best are:

    -

    The best websites and platforms for car traffic racing 3D games

    -

    CrazyGames.com

    -

    This is a website where you can play hundreds of free online games in various categories, including car traffic racing 3D games. You don't need to download or install anything; just open your browser and click on the game you want to play. Some of the most popular car traffic racing 3D games on this website are Traffic Jam 3D, Traffic Run Online, and Traffic Racer Xmas. You can also rate, comment on, and share the games with your friends.

    -

    Google Play Store

    -

    This is a platform where you can download and install apps and games on your Android devices. You can browse through millions of apps and games in various categories, including car traffic racing 3D games. You can also read reviews, ratings, and descriptions of the apps and games before downloading them. Some of the most popular car traffic racing 3D games on this platform are Traffic Racer, Traffic Rider, and Traffic Tour. You can also update, uninstall, and manage the apps and games on your device.

    -

    Other sources and alternatives

    -

    There are also other sources and alternatives where you can find and download car traffic racing 3D games. For example, you can use search engines such as Google or Bing to look for websites that offer them. You can also use social media platforms such as Facebook or YouTube to watch videos or join groups dedicated to car traffic racing 3D games. You can also use online forums or blogs to get recommendations or feedback from other players who have tried them.

    -

    Conclusion

    - -

    Frequently asked questions

    -

    Here are some of the most frequently asked questions about car traffic racing 3D games:

    -
      -
    • Q: Are car traffic racing 3D games safe to play?
    • -
    • A: Yes, car traffic racing 3D games are safe to play as long as you take some basic precautions. For example, you should not play them while driving or operating heavy machinery; you should not play them for too long or without taking breaks; you should not play them if you have a medical condition that could affect your vision or hearing; you should not play them if they cause you stress or discomfort; and you should not play them if they interfere with your personal or professional life.
    • -
    • Q: Are car traffic racing 3D games free to play?
    • -
    • A: Yes, most car traffic racing 3D games are free to play online or offline. However, some of them may have in-app purchases or ads that require you to pay money or watch videos to access certain features or benefits. You can choose to accept or decline these offers at your own discretion.
    • -
    • Q: Are car traffic racing 3D games suitable for children?
    • -
    • A: That depends on the age and maturity of the children. Some car traffic racing 3D games may have content or themes that are not suitable for younger or more sensitive children, such as violence, crashes, explosions, blood, or gore. You should always check the ratings, reviews, and descriptions of the games before letting your children play them. You should also supervise their play and limit their screen time.
    • -
    • Q: How can I improve my skills and performance in car traffic racing 3D games?
    • -
    • A: There are many ways to improve your skills and performance in car traffic racing 3D games. Some of them are:
    • -
        - -
      • Watch tutorials and guides online or offline. You can find many videos and articles that teach you how to play car traffic racing 3D games better, such as how to use the controls, how to avoid traffic, how to drift, how to use nitro, and more.
      • -
      • Get feedback and advice from other players. You can join online communities or forums dedicated to car traffic racing 3D games and ask for tips, tricks, or suggestions from players with more experience or expertise.
      • -
      -
    • Q: What are some of the best car traffic racing 3D games to play?
    • -
    • A: There are many car traffic racing 3D games to choose from, but some of the best are:
    • -
        -
      • Traffic Jam 3D: a game where you try to reach checkpoints in time, score a specific number of points and distance within a given period, and more. You can choose from different cars, roads, and modes.
      • -
      • Traffic Racer: a game where you drive your car through highway traffic, earn cash, upgrade your car, and buy new ones. You can choose from different cars, environments, modes, and missions.
      • -
      • Traffic Rider: a game where you ride your motorcycle through highway traffic, earn cash, upgrade your bike, and buy new ones. You can choose from different bikes, environments, modes, and missions.
      • -
      -

    64aa2da5cf
    -
    -
    \ No newline at end of file diff --git a/spaces/Benson/text-generation/Examples/D3dx9_30.dll Descargar Resident Evil 4.md b/spaces/Benson/text-generation/Examples/D3dx9_30.dll Descargar Resident Evil 4.md deleted file mode 100644 index c9635f3035215699e6c723cbedb46084da5b7851..0000000000000000000000000000000000000000 --- a/spaces/Benson/text-generation/Examples/D3dx9_30.dll Descargar Resident Evil 4.md +++ /dev/null @@ -1,107 +0,0 @@ -
    -

    How to fix the d3dx9_30.dll error in Resident Evil 4

    -

    Resident Evil 4 is one of the best action-horror games ever made, but it can also be frustrating when you run into errors that keep you from playing it. One of the most common errors many players face is related to d3dx9_30.dll, a file that is part of Microsoft DirectX.

    -

    d3dx9_30.dll download resident evil 4


    DOWNLOADhttps://bltlly.com/2v6L2i



    -

    D3dx9_30.dll is a dynamic-link library that contains functions and data for graphics, sound, input, and networking. It is essential for running games and applications that use DirectX, such as Resident Evil 4. However, this file can sometimes become corrupted, deleted, or misplaced, causing various error messages to appear.

    -

    Some of the error messages you may see are:

    -
      -Not found -The file was not found -
    • The program can't start because d3dx9_30.dll is missing from your computer. Try reinstalling the program to fix this problem.
    • -
    • The code execution cannot proceed because d3dx9_30.dll was not found. Reinstalling the program may fix this problem.
    • -
    • d3dx9_30.dll is either not designed to run on Windows or it contains an error. Try installing the program again using the original installation media, or contact your system administrator or the software vendor for support.
    • -
    -

    If you are facing any of these errors, don't worry. There are some simple and effective solutions that can help you fix them and enjoy Resident Evil 4 without problems. In this article, we will show you how to fix the d3dx9_30.dll error in Resident Evil 4 using three different methods.

    -

    Solution 1: Download and install DirectX

    - -

    To download and install DirectX, follow these steps:

    -
      -
    1. Go to this link and click Download.
    2. -
    3. Save the dxwebsetup.exe file to your PC and run it.
    4. -
    5. Follow the on-screen instructions to complete the installation process.
    6. -
    7. Restart your PC and launch Resident Evil 4.
    8. -
    -

    This should fix any issue related to d3dx9_30.dll in Resident Evil 4; a quick way to verify that the file is actually in place is sketched below. However, if you still see the error message, you may need to reinstall the game itself.
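    If you want to confirm the result, you can also check for the file directly. The snippet below is a minimal sketch, assuming Windows is installed on C:; it simply looks in the two standard system folders where the DirectX runtime places its copies of the library.

```python
import os

# Standard locations for d3dx9_30.dll on a default Windows install.
# A 32-bit game such as Resident Evil 4 loads the copy under SysWOW64 on 64-bit Windows.
candidates = [
    r"C:\Windows\System32\d3dx9_30.dll",
    r"C:\Windows\SysWOW64\d3dx9_30.dll",
]

for path in candidates:
    status = "found" if os.path.isfile(path) else "missing"
    print(f"{path}: {status}")
```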

    -

    - Solution 2: Reinstall Resident Evil 4 -

    The second solution for fixing the d3dx9_30.dll error in Resident Evil 4 is to reinstall the game itself. Game files can sometimes become corrupted or lost for various reasons, such as virus infection, disk fragmentation, a power outage, or accidental deletion. Reinstalling the game can restore the original files and fix any errors.

    -

    To reinstall Resident Evil 4, uninstall it and install it again from the library of the platform you bought it on (for example, Steam). If the error persists, the third solution is to use a dedicated DLL repair tool such as Fortect; to do so, follow these steps:

    -
      -
    1. Go to this link and download Fortect from its official website.
    2. -
    3. Run the setup file and follow the on-screen instructions to install Fortect on your PC.
    4. -
    5. Launch Fortect and click Scan Now.
    6. -
    7. Wait for the scan to finish and then click Fix All.
    8. -
    9. Restart your PC and launch Resident Evil 4.
    10. -
    -

    Conclusion

    -

    The d3dx9_30.dll error is a common problem many players run into when trying to play Resident Evil 4. It can have several causes, such as an outdated DirectX installation, corrupted game files, or missing DLL files. However, it can easily be fixed using one of the three solutions provided in this article.

    -

    The first solution is to download and install the latest version of DirectX from the Microsoft website. The second solution is to reinstall Resident Evil 4 from Steam or another platform. The third solution is to use a dedicated DLL repair tool such as Fortect to automatically scan for, download, and replace missing or corrupted DLL files.

    -

    We hope this article has helped you fix the d3dx9_30.dll error in Resident Evil 4 and enjoy the game without interruptions. If you have any questions or comments, feel free to leave a comment below. We'd love to hear from you!

    -

    Frequently asked questions

    -

    What is d3dx9_30.dll?

    -

    D3dx9_30.dll is a dynamic-link library that contains functions and data for graphics, sound, input, and networking. It is part of Microsoft DirectX, a collection of APIs that allow multimedia and gaming applications to run smoothly on Windows.

    -

    Why do I need d3dx9_30.dll for Resident Evil 4?

    - -

    How do I know whether I have d3dx9_30.dll on my PC?

    -

    You can check whether you have d3dx9_30.dll on your PC by following these steps (a script-based alternative is sketched after the list):

    -
      -
    1. Press Windows + R to open the Run dialog.
    2. -
    3. Type dxdiag and press Enter.
    4. -
    5. A window will open with information about your DirectX version and system configuration.
    6. -
    7. Click the System tab and look for the DirectX version near the bottom.
    8. -
    9. If you see DirectX 9.0c or higher, you most likely have d3dx9_30.dll on your PC. If you see a lower version, you need to download and install the latest version of DirectX from the Microsoft website.
    10. -
    -
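    If you prefer to check from a script rather than opening dxdiag by hand, the legacy DirectX version string can also be read from the registry. This is only a rough sketch and makes an assumption: the HKLM\SOFTWARE\Microsoft\DirectX key is a legacy entry that typically reports 4.09.00.0904 (DirectX 9.0c) even on newer systems, so dxdiag remains the authoritative source.

```python
import winreg

# Legacy registry entry kept for compatibility; a value of 4.09.xx.xxxx
# corresponds to DirectX 9.0c.
with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, r"SOFTWARE\Microsoft\DirectX") as key:
    version, _ = winreg.QueryValueEx(key, "Version")

print(f"Reported DirectX runtime version: {version}")
```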

    How do I fix the d3dx9_30.dll error in Resident Evil 4?

    -

    You can fix the d3dx9_30.dll error in Resident Evil 4 using one of the three solutions provided in this article. The first solution is to download and install the latest version of DirectX from the Microsoft website. The second solution is to reinstall Resident Evil 4 from Steam or another platform. The third solution is to use a dedicated DLL repair tool such as Fortect to automatically scan for, download, and replace missing or corrupted DLL files.

    -

    What are the benefits of using a DLL repair tool like Fortect?

    -

    Using a DLL repair tool like Fortect has many benefits, such as:

    -
      -
    • It can fix DLL errors in minutes with just a few clicks.
    • -
    • It has a large database of more than 20 million DLL files that is updated regularly.
    • -
    • It can improve your PC's performance and stability by optimizing your registry and system settings.
    • -
    • It can protect your PC from malware and viruses by scanning for and removing threats.
    • -
    • It has an easy-to-use interface and a fast scan speed.
    • -
    -

    Where can I download Resident Evil 4?

    -

    You can download Resident Evil 4 from several platforms, such as:

    | Platform | Link |
    | --- | --- |
    | Steam | - |
    | GOG | this link |
    | Origin | this link |
    | Uplay | this link |
    | Epic Games Store | this link |

    64aa2da5cf
    -
    -
    \ No newline at end of file diff --git a/spaces/Benson/text-generation/Examples/Descargar Descargar Gratis Para Android.md b/spaces/Benson/text-generation/Examples/Descargar Descargar Gratis Para Android.md deleted file mode 100644 index e45417dea70a8e649eeed87462f771021d16e8b6..0000000000000000000000000000000000000000 --- a/spaces/Benson/text-generation/Examples/Descargar Descargar Gratis Para Android.md +++ /dev/null @@ -1,83 +0,0 @@ -
    -

    BombSquad Pro APK Free Download for Android

    -

    Do you love blowing up your friends in mini-games? Do you enjoy playing with pirates, ninjas, barbarians, and crazy chefs? Do you want unlimited fun with 8 players online or locally? If you answered yes to any of these questions, then you should download BombSquad Pro APK for your Android device. In this article, we will tell you what BombSquad Pro APK is, how to download and install it, how to play it, and why you should play it. Let's get started!

    -

    BombSquad Pro APK Free Download for Android


    DOWNLOAD · https://bltlly.com/2v6IQu



    -

    What is BombSquad Pro APK?

    -

    BombSquad Pro APK is a modified version of the original BombSquad game, a multiplayer action game developed by Eric Froemling. In this game, you can blow up your friends in a variety of mini-games ranging from capture-the-flag to hockey. You can also customize your character with different outfits and accessories, and use different bombs and power-ups to spice up the gameplay. The game features 8-player local or online multiplayer, as well as single-player and co-op modes. You can also create your own maps and mini-games using the built-in editor.

    -

    The main difference between BombSquad Pro APK and the original game is that the pro version unlocks all premium features for free. This means you can access all characters, maps, mini-games, and modes without paying anything. You can also enjoy the game without ads or interruptions. In addition, the pro version has some extra features that are not available in the original game, such as unlimited tickets, coins, health, and bombs.

    -

    Features of BombSquad Pro APK

    -

    Here are some of the features that make BombSquad Pro APK a great game to play:

    -
      -
    • It has impressive graphics and sound effects that create an immersive gaming experience.
    • -
    • It has simple and intuitive controls that are easy to learn and master.
    • -
    • It has a variety of mini-games that cater to different tastes and preferences.
    • - -
    • It has a social side that lets you chat with other players and invite your friends to join your game.
    • -
    • It has a small file size that does not take up much space on your device.
    • -
    • It is compatible with most Android devices and versions.
    • -
    -

    How to download and install BombSquad Pro APK?

    -

    If you want to download and install BombSquad Pro APK on your Android device, you can follow these simple steps (a small sideloading helper for installing from a computer is sketched after the list):

    -

    -
      -
    1. Go to [this link]( 1 ) and download the BombSquad Pro APK file to your device.
    2. -
    3. Once the download is complete, go to your device settings and allow installing apps from unknown sources.
    4. -
    5. Locate the downloaded file in your file manager and tap it to start the installation process.
    6. -
    7. Follow the on-screen instructions and wait for the installation to finish.
    8. -
    9. Launch the game from the app drawer and enjoy!
    10. -
    -
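    If you would rather sideload the APK from a computer instead of installing it on the phone itself, you can push it over USB with adb (Android Debug Bridge). The snippet below is only a convenience sketch: it assumes adb is installed and on your PATH, USB debugging is enabled on the device, and bombsquad_pro.apk is a placeholder name for whatever file you actually downloaded.

```python
import subprocess

APK_PATH = "bombsquad_pro.apk"  # placeholder: point this at the file you downloaded

# List connected devices first so a missing or unauthorized device is easy to spot.
subprocess.run(["adb", "devices"], check=True)

# -r keeps existing app data if an older version is already installed.
subprocess.run(["adb", "install", "-r", APK_PATH], check=True)
```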

    How to play BombSquad Pro APK?

    -

    BombSquad Pro APK is a very easy game to pick up, but it can also be very challenging and addictive. Here are some pointers on how to play it:

    -

    Game modes

    -

    The game has four main modes you can choose from:

    -
      -
    • Campaign mode: This is the single-player mode where you play through various levels and missions. You can also play this mode with a friend in co-op.
    • -
    • Mixed mode: This is the multiplayer mode where you can play with up to 8 players online or locally. You can choose from different mini-games that are selected at random from the playlist.
    • -
    • Free-for-all mode: This is the multiplayer mode where you can play with up to 8 players online or locally. You can choose from different mini-games that are based on individual skill and performance.
    • -

    -

    Tips

    -

    Here are some tips that can help you improve your game and have more fun:

    -
      -
    • Use the different bombs and power-ups to your advantage. You can find them scattered around the map or in the shop. Some of them are fire bombs, ice bombs, sticky bombs, land mines, boxing gloves, shields, jetpacks, and more.
    • -
    • Experiment with different characters and outfits. You can unlock them by earning tickets and coins or by buying them in the shop. Some of them are pirates, ninjas, barbarians, crazy chefs, robots, zombies, and more.
    • -
    • Create your own maps and mini-games using the built-in editor. You can customize the terrain, objects, rules, and settings of your own creations. You can also share them with other players and play their creations.
    • -
    • Chat with other players and invite your friends to join your game. You can use the in-game chat feature to communicate with other players and make new friends. You can also invite your friends to your game using a code or a link.
    • -
    • Have fun and don't take it too seriously. BombSquad Pro APK is a game meant to be enjoyed, not stressed over. Don't worry about losing or winning, just have fun!
    • -
    -

    Why should you play BombSquad Pro APK?

    -

    BombSquad Pro APK is a game you should play for many reasons. Here are some of them:

    -

    Benefits

    -

    BombSquad Pro APK has many benefits that can improve your well-being and happiness. Some of them are:

    -
      -
    • It can improve cognitive skills such as memory, attention, problem-solving, and creativity.
    • -
    • It can boost your mood and reduce your stress levels by providing entertainment and laughter.
    • -
    • It can foster your social skills and relationships by letting you interact with other players and friends.
    • - -
    -

    Advantages

    -

    BombSquad Pro APK has many advantages that make it stand out from other games. Some of them are:

    -
      -
    • It is free to download and play. You don't have to spend money to enjoy all of the game's features and content.
    • -
    • It is ad-free and uninterrupted. You don't have to deal with annoying ads or pop-ups that ruin your gaming experience.
    • -
    • It is updated regularly and has plenty of content. You don't have to worry about getting bored or running out of things to do in the game.
    • -
    • It is simple and user-friendly. You don't have to deal with complicated controls or settings in the game.
    • -
    -

    Conclusion

    -

    BombSquad Pro APK is a game you should definitely try if you like action, fun, and explosions. It lets you blow up your friends in a variety of exciting and entertaining mini-games. It offers plenty of features, content, customization, and social play. It benefits you in many ways and has many advantages over other games. You can download it for free on your Android device right now. What are you waiting for? Download BombSquad Pro APK today and have a blast!

    -

    Frequently asked questions

    -

    Here are some of the most frequently asked questions about BombSquad Pro APK:

    -
      -
    1. Is BombSquad Pro APK safe to download and install?
    2. -

      Yes, BombSquad Pro APK is safe to download and install on your device. It does not contain any viruses, malware, or spyware that could harm your device or data. However, you should always download it from a trusted source such as [this link] to avoid any risks.

      -
    3. Is BombSquad Pro APK legal to use?
    4. -

      Yes, BombSquad Pro APK is legal to use as long as you don't use it for illegal or unethical purposes. You should also respect the game's original developer and support them if you can.

      - -

      You can update BombSquad Pro APK by downloading the latest version from [this link] and installing it over the existing one. You can also check for updates in the game's settings and follow the instructions.

      -
    5. How do I uninstall BombSquad Pro APK?
    6. -

      You can uninstall BombSquad Pro APK by going to your device settings and finding the app in the list of installed apps. Then tap it and select the uninstall option. You can also delete the APK file from your device storage if you want.

      -
    7. How can I contact the developer of BombSquad Pro APK?
    8. -

      You can contact the developer of BombSquad Pro APK by visiting their official website or their social media pages. You can also send them an email or a message through the game's feedback option.

      -

    64aa2da5cf
    -
    -
    \ No newline at end of file diff --git a/spaces/BetterAPI/BetterChat/src/routes/r/[id]/+page.server.ts b/spaces/BetterAPI/BetterChat/src/routes/r/[id]/+page.server.ts deleted file mode 100644 index 1630b38f1a9bb264a5c54eb09d7533a19337b16e..0000000000000000000000000000000000000000 --- a/spaces/BetterAPI/BetterChat/src/routes/r/[id]/+page.server.ts +++ /dev/null @@ -1,18 +0,0 @@ -import type { PageServerLoad } from "./$types"; -import { collections } from "$lib/server/database"; -import { error } from "@sveltejs/kit"; - -export const load: PageServerLoad = async ({ params }) => { - const conversation = await collections.sharedConversations.findOne({ - _id: params.id, - }); - - if (!conversation) { - throw error(404, "Conversation not found"); - } - - return { - messages: conversation.messages, - title: conversation.title, - }; -}; diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/boto3/resources/factory.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/boto3/resources/factory.py deleted file mode 100644 index 5d9531b86ea64bbba52adc35eccf683c85921e19..0000000000000000000000000000000000000000 --- a/spaces/Big-Web/MMSD/env/Lib/site-packages/boto3/resources/factory.py +++ /dev/null @@ -1,600 +0,0 @@ -# Copyright 2014 Amazon.com, Inc. or its affiliates. All Rights Reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"). You -# may not use this file except in compliance with the License. A copy of -# the License is located at -# -# https://aws.amazon.com/apache2.0/ -# -# or in the "license" file accompanying this file. This file is -# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF -# ANY KIND, either express or implied. See the License for the specific -# language governing permissions and limitations under the License. - -import logging -from functools import partial - -from ..docs import docstring -from ..exceptions import ResourceLoadException -from .action import ServiceAction, WaiterAction -from .base import ResourceMeta, ServiceResource -from .collection import CollectionFactory -from .model import ResourceModel -from .response import ResourceHandler, build_identifiers - -logger = logging.getLogger(__name__) - - -class ResourceFactory: - """ - A factory to create new :py:class:`~boto3.resources.base.ServiceResource` - classes from a :py:class:`~boto3.resources.model.ResourceModel`. There are - two types of lookups that can be done: one on the service itself (e.g. an - SQS resource) and another on models contained within the service (e.g. an - SQS Queue resource). - """ - - def __init__(self, emitter): - self._collection_factory = CollectionFactory() - self._emitter = emitter - - def load_from_definition( - self, resource_name, single_resource_json_definition, service_context - ): - """ - Loads a resource from a model, creating a new - :py:class:`~boto3.resources.base.ServiceResource` subclass - with the correct properties and methods, named based on the service - and resource name, e.g. EC2.Instance. - - :type resource_name: string - :param resource_name: Name of the resource to look up. For services, - this should match the ``service_name``. - - :type single_resource_json_definition: dict - :param single_resource_json_definition: - The loaded json of a single service resource or resource - definition. - - :type service_context: :py:class:`~boto3.utils.ServiceContext` - :param service_context: Context about the AWS service - - :rtype: Subclass of :py:class:`~boto3.resources.base.ServiceResource` - :return: The service or resource class. 
- """ - logger.debug( - 'Loading %s:%s', service_context.service_name, resource_name - ) - - # Using the loaded JSON create a ResourceModel object. - resource_model = ResourceModel( - resource_name, - single_resource_json_definition, - service_context.resource_json_definitions, - ) - - # Do some renaming of the shape if there was a naming collision - # that needed to be accounted for. - shape = None - if resource_model.shape: - shape = service_context.service_model.shape_for( - resource_model.shape - ) - resource_model.load_rename_map(shape) - - # Set some basic info - meta = ResourceMeta( - service_context.service_name, resource_model=resource_model - ) - attrs = { - 'meta': meta, - } - - # Create and load all of attributes of the resource class based - # on the models. - - # Identifiers - self._load_identifiers( - attrs=attrs, - meta=meta, - resource_name=resource_name, - resource_model=resource_model, - ) - - # Load/Reload actions - self._load_actions( - attrs=attrs, - resource_name=resource_name, - resource_model=resource_model, - service_context=service_context, - ) - - # Attributes that get auto-loaded - self._load_attributes( - attrs=attrs, - meta=meta, - resource_name=resource_name, - resource_model=resource_model, - service_context=service_context, - ) - - # Collections and their corresponding methods - self._load_collections( - attrs=attrs, - resource_model=resource_model, - service_context=service_context, - ) - - # References and Subresources - self._load_has_relations( - attrs=attrs, - resource_name=resource_name, - resource_model=resource_model, - service_context=service_context, - ) - - # Waiter resource actions - self._load_waiters( - attrs=attrs, - resource_name=resource_name, - resource_model=resource_model, - service_context=service_context, - ) - - # Create the name based on the requested service and resource - cls_name = resource_name - if service_context.service_name == resource_name: - cls_name = 'ServiceResource' - cls_name = service_context.service_name + '.' + cls_name - - base_classes = [ServiceResource] - if self._emitter is not None: - self._emitter.emit( - f'creating-resource-class.{cls_name}', - class_attributes=attrs, - base_classes=base_classes, - service_context=service_context, - ) - return type(str(cls_name), tuple(base_classes), attrs) - - def _load_identifiers(self, attrs, meta, resource_model, resource_name): - """ - Populate required identifiers. These are arguments without which - the resource cannot be used. Identifiers become arguments for - operations on the resource. - """ - for identifier in resource_model.identifiers: - meta.identifiers.append(identifier.name) - attrs[identifier.name] = self._create_identifier( - identifier, resource_name - ) - - def _load_actions( - self, attrs, resource_name, resource_model, service_context - ): - """ - Actions on the resource become methods, with the ``load`` method - being a special case which sets internal data for attributes, and - ``reload`` is an alias for ``load``. 
- """ - if resource_model.load: - attrs['load'] = self._create_action( - action_model=resource_model.load, - resource_name=resource_name, - service_context=service_context, - is_load=True, - ) - attrs['reload'] = attrs['load'] - - for action in resource_model.actions: - attrs[action.name] = self._create_action( - action_model=action, - resource_name=resource_name, - service_context=service_context, - ) - - def _load_attributes( - self, attrs, meta, resource_name, resource_model, service_context - ): - """ - Load resource attributes based on the resource shape. The shape - name is referenced in the resource JSON, but the shape itself - is defined in the Botocore service JSON, hence the need for - access to the ``service_model``. - """ - if not resource_model.shape: - return - - shape = service_context.service_model.shape_for(resource_model.shape) - - identifiers = { - i.member_name: i - for i in resource_model.identifiers - if i.member_name - } - attributes = resource_model.get_attributes(shape) - for name, (orig_name, member) in attributes.items(): - if name in identifiers: - prop = self._create_identifier_alias( - resource_name=resource_name, - identifier=identifiers[name], - member_model=member, - service_context=service_context, - ) - else: - prop = self._create_autoload_property( - resource_name=resource_name, - name=orig_name, - snake_cased=name, - member_model=member, - service_context=service_context, - ) - attrs[name] = prop - - def _load_collections(self, attrs, resource_model, service_context): - """ - Load resource collections from the model. Each collection becomes - a :py:class:`~boto3.resources.collection.CollectionManager` instance - on the resource instance, which allows you to iterate and filter - through the collection's items. - """ - for collection_model in resource_model.collections: - attrs[collection_model.name] = self._create_collection( - resource_name=resource_model.name, - collection_model=collection_model, - service_context=service_context, - ) - - def _load_has_relations( - self, attrs, resource_name, resource_model, service_context - ): - """ - Load related resources, which are defined via a ``has`` - relationship but conceptually come in two forms: - - 1. A reference, which is a related resource instance and can be - ``None``, such as an EC2 instance's ``vpc``. - 2. A subresource, which is a resource constructor that will always - return a resource instance which shares identifiers/data with - this resource, such as ``s3.Bucket('name').Object('key')``. - """ - for reference in resource_model.references: - # This is a dangling reference, i.e. we have all - # the data we need to create the resource, so - # this instance becomes an attribute on the class. - attrs[reference.name] = self._create_reference( - reference_model=reference, - resource_name=resource_name, - service_context=service_context, - ) - - for subresource in resource_model.subresources: - # This is a sub-resource class you can create - # by passing in an identifier, e.g. s3.Bucket(name). 
- attrs[subresource.name] = self._create_class_partial( - subresource_model=subresource, - resource_name=resource_name, - service_context=service_context, - ) - - self._create_available_subresources_command( - attrs, resource_model.subresources - ) - - def _create_available_subresources_command(self, attrs, subresources): - _subresources = [subresource.name for subresource in subresources] - _subresources = sorted(_subresources) - - def get_available_subresources(factory_self): - """ - Returns a list of all the available sub-resources for this - Resource. - - :returns: A list containing the name of each sub-resource for this - resource - :rtype: list of str - """ - return _subresources - - attrs['get_available_subresources'] = get_available_subresources - - def _load_waiters( - self, attrs, resource_name, resource_model, service_context - ): - """ - Load resource waiters from the model. Each waiter allows you to - wait until a resource reaches a specific state by polling the state - of the resource. - """ - for waiter in resource_model.waiters: - attrs[waiter.name] = self._create_waiter( - resource_waiter_model=waiter, - resource_name=resource_name, - service_context=service_context, - ) - - def _create_identifier(factory_self, identifier, resource_name): - """ - Creates a read-only property for identifier attributes. - """ - - def get_identifier(self): - # The default value is set to ``None`` instead of - # raising an AttributeError because when resources are - # instantiated a check is made such that none of the - # identifiers have a value ``None``. If any are ``None``, - # a more informative user error than a generic AttributeError - # is raised. - return getattr(self, '_' + identifier.name, None) - - get_identifier.__name__ = str(identifier.name) - get_identifier.__doc__ = docstring.IdentifierDocstring( - resource_name=resource_name, - identifier_model=identifier, - include_signature=False, - ) - - return property(get_identifier) - - def _create_identifier_alias( - factory_self, resource_name, identifier, member_model, service_context - ): - """ - Creates a read-only property that aliases an identifier. - """ - - def get_identifier(self): - return getattr(self, '_' + identifier.name, None) - - get_identifier.__name__ = str(identifier.member_name) - get_identifier.__doc__ = docstring.AttributeDocstring( - service_name=service_context.service_name, - resource_name=resource_name, - attr_name=identifier.member_name, - event_emitter=factory_self._emitter, - attr_model=member_model, - include_signature=False, - ) - - return property(get_identifier) - - def _create_autoload_property( - factory_self, - resource_name, - name, - snake_cased, - member_model, - service_context, - ): - """ - Creates a new property on the resource to lazy-load its value - via the resource's ``load`` method (if it exists). - """ - # The property loader will check to see if this resource has already - # been loaded and return the cached value if possible. If not, then - # it first checks to see if it CAN be loaded (raise if not), then - # calls the load before returning the value. 
- def property_loader(self): - if self.meta.data is None: - if hasattr(self, 'load'): - self.load() - else: - raise ResourceLoadException( - f'{self.__class__.__name__} has no load method' - ) - - return self.meta.data.get(name) - - property_loader.__name__ = str(snake_cased) - property_loader.__doc__ = docstring.AttributeDocstring( - service_name=service_context.service_name, - resource_name=resource_name, - attr_name=snake_cased, - event_emitter=factory_self._emitter, - attr_model=member_model, - include_signature=False, - ) - - return property(property_loader) - - def _create_waiter( - factory_self, resource_waiter_model, resource_name, service_context - ): - """ - Creates a new wait method for each resource where both a waiter and - resource model is defined. - """ - waiter = WaiterAction( - resource_waiter_model, - waiter_resource_name=resource_waiter_model.name, - ) - - def do_waiter(self, *args, **kwargs): - waiter(self, *args, **kwargs) - - do_waiter.__name__ = str(resource_waiter_model.name) - do_waiter.__doc__ = docstring.ResourceWaiterDocstring( - resource_name=resource_name, - event_emitter=factory_self._emitter, - service_model=service_context.service_model, - resource_waiter_model=resource_waiter_model, - service_waiter_model=service_context.service_waiter_model, - include_signature=False, - ) - return do_waiter - - def _create_collection( - factory_self, resource_name, collection_model, service_context - ): - """ - Creates a new property on the resource to lazy-load a collection. - """ - cls = factory_self._collection_factory.load_from_definition( - resource_name=resource_name, - collection_model=collection_model, - service_context=service_context, - event_emitter=factory_self._emitter, - ) - - def get_collection(self): - return cls( - collection_model=collection_model, - parent=self, - factory=factory_self, - service_context=service_context, - ) - - get_collection.__name__ = str(collection_model.name) - get_collection.__doc__ = docstring.CollectionDocstring( - collection_model=collection_model, include_signature=False - ) - return property(get_collection) - - def _create_reference( - factory_self, reference_model, resource_name, service_context - ): - """ - Creates a new property on the resource to lazy-load a reference. - """ - # References are essentially an action with no request - # or response, so we can re-use the response handlers to - # build up resources from identifiers and data members. - handler = ResourceHandler( - search_path=reference_model.resource.path, - factory=factory_self, - resource_model=reference_model.resource, - service_context=service_context, - ) - - # Are there any identifiers that need access to data members? - # This is important when building the resource below since - # it requires the data to be loaded. - needs_data = any( - i.source == 'data' for i in reference_model.resource.identifiers - ) - - def get_reference(self): - # We need to lazy-evaluate the reference to handle circular - # references between resources. We do this by loading the class - # when first accessed. - # This is using a *response handler* so we need to make sure - # our data is loaded (if possible) and pass that data into - # the handler as if it were a response. This allows references - # to have their data loaded properly. 
- if needs_data and self.meta.data is None and hasattr(self, 'load'): - self.load() - return handler(self, {}, self.meta.data) - - get_reference.__name__ = str(reference_model.name) - get_reference.__doc__ = docstring.ReferenceDocstring( - reference_model=reference_model, include_signature=False - ) - return property(get_reference) - - def _create_class_partial( - factory_self, subresource_model, resource_name, service_context - ): - """ - Creates a new method which acts as a functools.partial, passing - along the instance's low-level `client` to the new resource - class' constructor. - """ - name = subresource_model.resource.type - - def create_resource(self, *args, **kwargs): - # We need a new method here because we want access to the - # instance's client. - positional_args = [] - - # We lazy-load the class to handle circular references. - json_def = service_context.resource_json_definitions.get(name, {}) - resource_cls = factory_self.load_from_definition( - resource_name=name, - single_resource_json_definition=json_def, - service_context=service_context, - ) - - # Assumes that identifiers are in order, which lets you do - # e.g. ``sqs.Queue('foo').Message('bar')`` to create a new message - # linked with the ``foo`` queue and which has a ``bar`` receipt - # handle. If we did kwargs here then future positional arguments - # would lead to failure. - identifiers = subresource_model.resource.identifiers - if identifiers is not None: - for identifier, value in build_identifiers(identifiers, self): - positional_args.append(value) - - return partial( - resource_cls, *positional_args, client=self.meta.client - )(*args, **kwargs) - - create_resource.__name__ = str(name) - create_resource.__doc__ = docstring.SubResourceDocstring( - resource_name=resource_name, - sub_resource_model=subresource_model, - service_model=service_context.service_model, - include_signature=False, - ) - return create_resource - - def _create_action( - factory_self, - action_model, - resource_name, - service_context, - is_load=False, - ): - """ - Creates a new method which makes a request to the underlying - AWS service. - """ - # Create the action in in this closure but before the ``do_action`` - # method below is invoked, which allows instances of the resource - # to share the ServiceAction instance. - action = ServiceAction( - action_model, factory=factory_self, service_context=service_context - ) - - # A resource's ``load`` method is special because it sets - # values on the resource instead of returning the response. - if is_load: - # We need a new method here because we want access to the - # instance via ``self``. - def do_action(self, *args, **kwargs): - response = action(self, *args, **kwargs) - self.meta.data = response - - # Create the docstring for the load/reload methods. - lazy_docstring = docstring.LoadReloadDocstring( - action_name=action_model.name, - resource_name=resource_name, - event_emitter=factory_self._emitter, - load_model=action_model, - service_model=service_context.service_model, - include_signature=False, - ) - else: - # We need a new method here because we want access to the - # instance via ``self``. - def do_action(self, *args, **kwargs): - response = action(self, *args, **kwargs) - - if hasattr(self, 'load'): - # Clear cached data. It will be reloaded the next - # time that an attribute is accessed. - # TODO: Make this configurable in the future? 
- self.meta.data = None - - return response - - lazy_docstring = docstring.ActionDocstring( - resource_name=resource_name, - event_emitter=factory_self._emitter, - action_model=action_model, - service_model=service_context.service_model, - include_signature=False, - ) - - do_action.__name__ = str(action_model.name) - do_action.__doc__ = lazy_docstring - return do_action diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/cachecontrol/compat.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/cachecontrol/compat.py deleted file mode 100644 index ccec9379dba2b03015ce123dd04a042f32431235..0000000000000000000000000000000000000000 --- a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/cachecontrol/compat.py +++ /dev/null @@ -1,32 +0,0 @@ -# SPDX-FileCopyrightText: 2015 Eric Larson -# -# SPDX-License-Identifier: Apache-2.0 - -try: - from urllib.parse import urljoin -except ImportError: - from urlparse import urljoin - - -try: - import cPickle as pickle -except ImportError: - import pickle - -# Handle the case where the requests module has been patched to not have -# urllib3 bundled as part of its source. -try: - from pip._vendor.requests.packages.urllib3.response import HTTPResponse -except ImportError: - from pip._vendor.urllib3.response import HTTPResponse - -try: - from pip._vendor.requests.packages.urllib3.util import is_fp_closed -except ImportError: - from pip._vendor.urllib3.util import is_fp_closed - -# Replicate some six behaviour -try: - text_type = unicode -except NameError: - text_type = str diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/pygments/unistring.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/pygments/unistring.py deleted file mode 100644 index 2e3c80869d9c1a70ee003d054a53f49c3f53a556..0000000000000000000000000000000000000000 --- a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/pygments/unistring.py +++ /dev/null @@ -1,153 +0,0 @@ -""" - pygments.unistring - ~~~~~~~~~~~~~~~~~~ - - Strings of all Unicode characters of a certain category. - Used for matching in Unicode-aware languages. Run to regenerate. - - Inspired by chartypes_create.py from the MoinMoin project. - - :copyright: Copyright 2006-2022 by the Pygments team, see AUTHORS. - :license: BSD, see LICENSE for details. 
-""" - -Cc = '\x00-\x1f\x7f-\x9f' - -Cf = '\xad\u0600-\u0605\u061c\u06dd\u070f\u08e2\u180e\u200b-\u200f\u202a-\u202e\u2060-\u2064\u2066-\u206f\ufeff\ufff9-\ufffb\U000110bd\U000110cd\U0001bca0-\U0001bca3\U0001d173-\U0001d17a\U000e0001\U000e0020-\U000e007f' - -Cn = '\u0378-\u0379\u0380-\u0383\u038b\u038d\u03a2\u0530\u0557-\u0558\u058b-\u058c\u0590\u05c8-\u05cf\u05eb-\u05ee\u05f5-\u05ff\u061d\u070e\u074b-\u074c\u07b2-\u07bf\u07fb-\u07fc\u082e-\u082f\u083f\u085c-\u085d\u085f\u086b-\u089f\u08b5\u08be-\u08d2\u0984\u098d-\u098e\u0991-\u0992\u09a9\u09b1\u09b3-\u09b5\u09ba-\u09bb\u09c5-\u09c6\u09c9-\u09ca\u09cf-\u09d6\u09d8-\u09db\u09de\u09e4-\u09e5\u09ff-\u0a00\u0a04\u0a0b-\u0a0e\u0a11-\u0a12\u0a29\u0a31\u0a34\u0a37\u0a3a-\u0a3b\u0a3d\u0a43-\u0a46\u0a49-\u0a4a\u0a4e-\u0a50\u0a52-\u0a58\u0a5d\u0a5f-\u0a65\u0a77-\u0a80\u0a84\u0a8e\u0a92\u0aa9\u0ab1\u0ab4\u0aba-\u0abb\u0ac6\u0aca\u0ace-\u0acf\u0ad1-\u0adf\u0ae4-\u0ae5\u0af2-\u0af8\u0b00\u0b04\u0b0d-\u0b0e\u0b11-\u0b12\u0b29\u0b31\u0b34\u0b3a-\u0b3b\u0b45-\u0b46\u0b49-\u0b4a\u0b4e-\u0b55\u0b58-\u0b5b\u0b5e\u0b64-\u0b65\u0b78-\u0b81\u0b84\u0b8b-\u0b8d\u0b91\u0b96-\u0b98\u0b9b\u0b9d\u0ba0-\u0ba2\u0ba5-\u0ba7\u0bab-\u0bad\u0bba-\u0bbd\u0bc3-\u0bc5\u0bc9\u0bce-\u0bcf\u0bd1-\u0bd6\u0bd8-\u0be5\u0bfb-\u0bff\u0c0d\u0c11\u0c29\u0c3a-\u0c3c\u0c45\u0c49\u0c4e-\u0c54\u0c57\u0c5b-\u0c5f\u0c64-\u0c65\u0c70-\u0c77\u0c8d\u0c91\u0ca9\u0cb4\u0cba-\u0cbb\u0cc5\u0cc9\u0cce-\u0cd4\u0cd7-\u0cdd\u0cdf\u0ce4-\u0ce5\u0cf0\u0cf3-\u0cff\u0d04\u0d0d\u0d11\u0d45\u0d49\u0d50-\u0d53\u0d64-\u0d65\u0d80-\u0d81\u0d84\u0d97-\u0d99\u0db2\u0dbc\u0dbe-\u0dbf\u0dc7-\u0dc9\u0dcb-\u0dce\u0dd5\u0dd7\u0de0-\u0de5\u0df0-\u0df1\u0df5-\u0e00\u0e3b-\u0e3e\u0e5c-\u0e80\u0e83\u0e85-\u0e86\u0e89\u0e8b-\u0e8c\u0e8e-\u0e93\u0e98\u0ea0\u0ea4\u0ea6\u0ea8-\u0ea9\u0eac\u0eba\u0ebe-\u0ebf\u0ec5\u0ec7\u0ece-\u0ecf\u0eda-\u0edb\u0ee0-\u0eff\u0f48\u0f6d-\u0f70\u0f98\u0fbd\u0fcd\u0fdb-\u0fff\u10c6\u10c8-\u10cc\u10ce-\u10cf\u1249\u124e-\u124f\u1257\u1259\u125e-\u125f\u1289\u128e-\u128f\u12b1\u12b6-\u12b7\u12bf\u12c1\u12c6-\u12c7\u12d7\u1311\u1316-\u1317\u135b-\u135c\u137d-\u137f\u139a-\u139f\u13f6-\u13f7\u13fe-\u13ff\u169d-\u169f\u16f9-\u16ff\u170d\u1715-\u171f\u1737-\u173f\u1754-\u175f\u176d\u1771\u1774-\u177f\u17de-\u17df\u17ea-\u17ef\u17fa-\u17ff\u180f\u181a-\u181f\u1879-\u187f\u18ab-\u18af\u18f6-\u18ff\u191f\u192c-\u192f\u193c-\u193f\u1941-\u1943\u196e-\u196f\u1975-\u197f\u19ac-\u19af\u19ca-\u19cf\u19db-\u19dd\u1a1c-\u1a1d\u1a5f\u1a7d-\u1a7e\u1a8a-\u1a8f\u1a9a-\u1a9f\u1aae-\u1aaf\u1abf-\u1aff\u1b4c-\u1b4f\u1b7d-\u1b7f\u1bf4-\u1bfb\u1c38-\u1c3a\u1c4a-\u1c4c\u1c89-\u1c8f\u1cbb-\u1cbc\u1cc8-\u1ccf\u1cfa-\u1cff\u1dfa\u1f16-\u1f17\u1f1e-\u1f1f\u1f46-\u1f47\u1f4e-\u1f4f\u1f58\u1f5a\u1f5c\u1f5e\u1f7e-\u1f7f\u1fb5\u1fc5\u1fd4-\u1fd5\u1fdc\u1ff0-\u1ff1\u1ff5\u1fff\u2065\u2072-\u2073\u208f\u209d-\u209f\u20c0-\u20cf\u20f1-\u20ff\u218c-\u218f\u2427-\u243f\u244b-\u245f\u2b74-\u2b75\u2b96-\u2b97\u2bc9\u2bff\u2c2f\u2c5f\u2cf4-\u2cf8\u2d26\u2d28-\u2d2c\u2d2e-\u2d2f\u2d68-\u2d6e\u2d71-\u2d7e\u2d97-\u2d9f\u2da7\u2daf\u2db7\u2dbf\u2dc7\u2dcf\u2dd7\u2ddf\u2e4f-\u2e7f\u2e9a\u2ef4-\u2eff\u2fd6-\u2fef\u2ffc-\u2fff\u3040\u3097-\u3098\u3100-\u3104\u3130\u318f\u31bb-\u31bf\u31e4-\u31ef\u321f\u32ff\u4db6-\u4dbf\u9ff0-\u9fff\ua48d-\ua48f\ua4c7-\ua4cf\ua62c-\ua63f\ua6f8-\ua6ff\ua7ba-\ua7f6\ua82c-\ua82f\ua83a-\ua83f\ua878-\ua87f\ua8c6-\ua8cd\ua8da-\ua8df\ua954-\ua95e\ua97d-\ua97f\ua9ce\ua9da-\ua9dd\ua9ff\uaa37-\uaa3f\uaa4e-\uaa4f\uaa5a-\uaa5b\uaac3-\uaada\uaaf7-\uab00\uab07-\uab08\uab0f-\uab10\uab17-\uab1f\uab27\uab2f\uab66-\uab6f\uabee-
\uabef\uabfa-\uabff\ud7a4-\ud7af\ud7c7-\ud7ca\ud7fc-\ud7ff\ufa6e-\ufa6f\ufada-\ufaff\ufb07-\ufb12\ufb18-\ufb1c\ufb37\ufb3d\ufb3f\ufb42\ufb45\ufbc2-\ufbd2\ufd40-\ufd4f\ufd90-\ufd91\ufdc8-\ufdef\ufdfe-\ufdff\ufe1a-\ufe1f\ufe53\ufe67\ufe6c-\ufe6f\ufe75\ufefd-\ufefe\uff00\uffbf-\uffc1\uffc8-\uffc9\uffd0-\uffd1\uffd8-\uffd9\uffdd-\uffdf\uffe7\uffef-\ufff8\ufffe-\uffff\U0001000c\U00010027\U0001003b\U0001003e\U0001004e-\U0001004f\U0001005e-\U0001007f\U000100fb-\U000100ff\U00010103-\U00010106\U00010134-\U00010136\U0001018f\U0001019c-\U0001019f\U000101a1-\U000101cf\U000101fe-\U0001027f\U0001029d-\U0001029f\U000102d1-\U000102df\U000102fc-\U000102ff\U00010324-\U0001032c\U0001034b-\U0001034f\U0001037b-\U0001037f\U0001039e\U000103c4-\U000103c7\U000103d6-\U000103ff\U0001049e-\U0001049f\U000104aa-\U000104af\U000104d4-\U000104d7\U000104fc-\U000104ff\U00010528-\U0001052f\U00010564-\U0001056e\U00010570-\U000105ff\U00010737-\U0001073f\U00010756-\U0001075f\U00010768-\U000107ff\U00010806-\U00010807\U00010809\U00010836\U00010839-\U0001083b\U0001083d-\U0001083e\U00010856\U0001089f-\U000108a6\U000108b0-\U000108df\U000108f3\U000108f6-\U000108fa\U0001091c-\U0001091e\U0001093a-\U0001093e\U00010940-\U0001097f\U000109b8-\U000109bb\U000109d0-\U000109d1\U00010a04\U00010a07-\U00010a0b\U00010a14\U00010a18\U00010a36-\U00010a37\U00010a3b-\U00010a3e\U00010a49-\U00010a4f\U00010a59-\U00010a5f\U00010aa0-\U00010abf\U00010ae7-\U00010aea\U00010af7-\U00010aff\U00010b36-\U00010b38\U00010b56-\U00010b57\U00010b73-\U00010b77\U00010b92-\U00010b98\U00010b9d-\U00010ba8\U00010bb0-\U00010bff\U00010c49-\U00010c7f\U00010cb3-\U00010cbf\U00010cf3-\U00010cf9\U00010d28-\U00010d2f\U00010d3a-\U00010e5f\U00010e7f-\U00010eff\U00010f28-\U00010f2f\U00010f5a-\U00010fff\U0001104e-\U00011051\U00011070-\U0001107e\U000110c2-\U000110cc\U000110ce-\U000110cf\U000110e9-\U000110ef\U000110fa-\U000110ff\U00011135\U00011147-\U0001114f\U00011177-\U0001117f\U000111ce-\U000111cf\U000111e0\U000111f5-\U000111ff\U00011212\U0001123f-\U0001127f\U00011287\U00011289\U0001128e\U0001129e\U000112aa-\U000112af\U000112eb-\U000112ef\U000112fa-\U000112ff\U00011304\U0001130d-\U0001130e\U00011311-\U00011312\U00011329\U00011331\U00011334\U0001133a\U00011345-\U00011346\U00011349-\U0001134a\U0001134e-\U0001134f\U00011351-\U00011356\U00011358-\U0001135c\U00011364-\U00011365\U0001136d-\U0001136f\U00011375-\U000113ff\U0001145a\U0001145c\U0001145f-\U0001147f\U000114c8-\U000114cf\U000114da-\U0001157f\U000115b6-\U000115b7\U000115de-\U000115ff\U00011645-\U0001164f\U0001165a-\U0001165f\U0001166d-\U0001167f\U000116b8-\U000116bf\U000116ca-\U000116ff\U0001171b-\U0001171c\U0001172c-\U0001172f\U00011740-\U000117ff\U0001183c-\U0001189f\U000118f3-\U000118fe\U00011900-\U000119ff\U00011a48-\U00011a4f\U00011a84-\U00011a85\U00011aa3-\U00011abf\U00011af9-\U00011bff\U00011c09\U00011c37\U00011c46-\U00011c4f\U00011c6d-\U00011c6f\U00011c90-\U00011c91\U00011ca8\U00011cb7-\U00011cff\U00011d07\U00011d0a\U00011d37-\U00011d39\U00011d3b\U00011d3e\U00011d48-\U00011d4f\U00011d5a-\U00011d5f\U00011d66\U00011d69\U00011d8f\U00011d92\U00011d99-\U00011d9f\U00011daa-\U00011edf\U00011ef9-\U00011fff\U0001239a-\U000123ff\U0001246f\U00012475-\U0001247f\U00012544-\U00012fff\U0001342f-\U000143ff\U00014647-\U000167ff\U00016a39-\U00016a3f\U00016a5f\U00016a6a-\U00016a6d\U00016a70-\U00016acf\U00016aee-\U00016aef\U00016af6-\U00016aff\U00016b46-\U00016b4f\U00016b5a\U00016b62\U00016b78-\U00016b7c\U00016b90-\U00016e3f\U00016e9b-\U00016eff\U00016f45-\U00016f4f\U00016f7f-\U00016f8e\U00016fa0-\U00016fdf\U00016fe2-\U00016fff\U000187f2-\U000187
ff\U00018af3-\U0001afff\U0001b11f-\U0001b16f\U0001b2fc-\U0001bbff\U0001bc6b-\U0001bc6f\U0001bc7d-\U0001bc7f\U0001bc89-\U0001bc8f\U0001bc9a-\U0001bc9b\U0001bca4-\U0001cfff\U0001d0f6-\U0001d0ff\U0001d127-\U0001d128\U0001d1e9-\U0001d1ff\U0001d246-\U0001d2df\U0001d2f4-\U0001d2ff\U0001d357-\U0001d35f\U0001d379-\U0001d3ff\U0001d455\U0001d49d\U0001d4a0-\U0001d4a1\U0001d4a3-\U0001d4a4\U0001d4a7-\U0001d4a8\U0001d4ad\U0001d4ba\U0001d4bc\U0001d4c4\U0001d506\U0001d50b-\U0001d50c\U0001d515\U0001d51d\U0001d53a\U0001d53f\U0001d545\U0001d547-\U0001d549\U0001d551\U0001d6a6-\U0001d6a7\U0001d7cc-\U0001d7cd\U0001da8c-\U0001da9a\U0001daa0\U0001dab0-\U0001dfff\U0001e007\U0001e019-\U0001e01a\U0001e022\U0001e025\U0001e02b-\U0001e7ff\U0001e8c5-\U0001e8c6\U0001e8d7-\U0001e8ff\U0001e94b-\U0001e94f\U0001e95a-\U0001e95d\U0001e960-\U0001ec70\U0001ecb5-\U0001edff\U0001ee04\U0001ee20\U0001ee23\U0001ee25-\U0001ee26\U0001ee28\U0001ee33\U0001ee38\U0001ee3a\U0001ee3c-\U0001ee41\U0001ee43-\U0001ee46\U0001ee48\U0001ee4a\U0001ee4c\U0001ee50\U0001ee53\U0001ee55-\U0001ee56\U0001ee58\U0001ee5a\U0001ee5c\U0001ee5e\U0001ee60\U0001ee63\U0001ee65-\U0001ee66\U0001ee6b\U0001ee73\U0001ee78\U0001ee7d\U0001ee7f\U0001ee8a\U0001ee9c-\U0001eea0\U0001eea4\U0001eeaa\U0001eebc-\U0001eeef\U0001eef2-\U0001efff\U0001f02c-\U0001f02f\U0001f094-\U0001f09f\U0001f0af-\U0001f0b0\U0001f0c0\U0001f0d0\U0001f0f6-\U0001f0ff\U0001f10d-\U0001f10f\U0001f16c-\U0001f16f\U0001f1ad-\U0001f1e5\U0001f203-\U0001f20f\U0001f23c-\U0001f23f\U0001f249-\U0001f24f\U0001f252-\U0001f25f\U0001f266-\U0001f2ff\U0001f6d5-\U0001f6df\U0001f6ed-\U0001f6ef\U0001f6fa-\U0001f6ff\U0001f774-\U0001f77f\U0001f7d9-\U0001f7ff\U0001f80c-\U0001f80f\U0001f848-\U0001f84f\U0001f85a-\U0001f85f\U0001f888-\U0001f88f\U0001f8ae-\U0001f8ff\U0001f90c-\U0001f90f\U0001f93f\U0001f971-\U0001f972\U0001f977-\U0001f979\U0001f97b\U0001f9a3-\U0001f9af\U0001f9ba-\U0001f9bf\U0001f9c3-\U0001f9cf\U0001fa00-\U0001fa5f\U0001fa6e-\U0001ffff\U0002a6d7-\U0002a6ff\U0002b735-\U0002b73f\U0002b81e-\U0002b81f\U0002cea2-\U0002ceaf\U0002ebe1-\U0002f7ff\U0002fa1e-\U000e0000\U000e0002-\U000e001f\U000e0080-\U000e00ff\U000e01f0-\U000effff\U000ffffe-\U000fffff\U0010fffe-\U0010ffff' - -Co = '\ue000-\uf8ff\U000f0000-\U000ffffd\U00100000-\U0010fffd' - -Cs = '\ud800-\udbff\\\udc00\udc01-\udfff' - -Ll = 
'a-z\xb5\xdf-\xf6\xf8-\xff\u0101\u0103\u0105\u0107\u0109\u010b\u010d\u010f\u0111\u0113\u0115\u0117\u0119\u011b\u011d\u011f\u0121\u0123\u0125\u0127\u0129\u012b\u012d\u012f\u0131\u0133\u0135\u0137-\u0138\u013a\u013c\u013e\u0140\u0142\u0144\u0146\u0148-\u0149\u014b\u014d\u014f\u0151\u0153\u0155\u0157\u0159\u015b\u015d\u015f\u0161\u0163\u0165\u0167\u0169\u016b\u016d\u016f\u0171\u0173\u0175\u0177\u017a\u017c\u017e-\u0180\u0183\u0185\u0188\u018c-\u018d\u0192\u0195\u0199-\u019b\u019e\u01a1\u01a3\u01a5\u01a8\u01aa-\u01ab\u01ad\u01b0\u01b4\u01b6\u01b9-\u01ba\u01bd-\u01bf\u01c6\u01c9\u01cc\u01ce\u01d0\u01d2\u01d4\u01d6\u01d8\u01da\u01dc-\u01dd\u01df\u01e1\u01e3\u01e5\u01e7\u01e9\u01eb\u01ed\u01ef-\u01f0\u01f3\u01f5\u01f9\u01fb\u01fd\u01ff\u0201\u0203\u0205\u0207\u0209\u020b\u020d\u020f\u0211\u0213\u0215\u0217\u0219\u021b\u021d\u021f\u0221\u0223\u0225\u0227\u0229\u022b\u022d\u022f\u0231\u0233-\u0239\u023c\u023f-\u0240\u0242\u0247\u0249\u024b\u024d\u024f-\u0293\u0295-\u02af\u0371\u0373\u0377\u037b-\u037d\u0390\u03ac-\u03ce\u03d0-\u03d1\u03d5-\u03d7\u03d9\u03db\u03dd\u03df\u03e1\u03e3\u03e5\u03e7\u03e9\u03eb\u03ed\u03ef-\u03f3\u03f5\u03f8\u03fb-\u03fc\u0430-\u045f\u0461\u0463\u0465\u0467\u0469\u046b\u046d\u046f\u0471\u0473\u0475\u0477\u0479\u047b\u047d\u047f\u0481\u048b\u048d\u048f\u0491\u0493\u0495\u0497\u0499\u049b\u049d\u049f\u04a1\u04a3\u04a5\u04a7\u04a9\u04ab\u04ad\u04af\u04b1\u04b3\u04b5\u04b7\u04b9\u04bb\u04bd\u04bf\u04c2\u04c4\u04c6\u04c8\u04ca\u04cc\u04ce-\u04cf\u04d1\u04d3\u04d5\u04d7\u04d9\u04db\u04dd\u04df\u04e1\u04e3\u04e5\u04e7\u04e9\u04eb\u04ed\u04ef\u04f1\u04f3\u04f5\u04f7\u04f9\u04fb\u04fd\u04ff\u0501\u0503\u0505\u0507\u0509\u050b\u050d\u050f\u0511\u0513\u0515\u0517\u0519\u051b\u051d\u051f\u0521\u0523\u0525\u0527\u0529\u052b\u052d\u052f\u0560-\u0588\u10d0-\u10fa\u10fd-\u10ff\u13f8-\u13fd\u1c80-\u1c88\u1d00-\u1d2b\u1d6b-\u1d77\u1d79-\u1d9a\u1e01\u1e03\u1e05\u1e07\u1e09\u1e0b\u1e0d\u1e0f\u1e11\u1e13\u1e15\u1e17\u1e19\u1e1b\u1e1d\u1e1f\u1e21\u1e23\u1e25\u1e27\u1e29\u1e2b\u1e2d\u1e2f\u1e31\u1e33\u1e35\u1e37\u1e39\u1e3b\u1e3d\u1e3f\u1e41\u1e43\u1e45\u1e47\u1e49\u1e4b\u1e4d\u1e4f\u1e51\u1e53\u1e55\u1e57\u1e59\u1e5b\u1e5d\u1e5f\u1e61\u1e63\u1e65\u1e67\u1e69\u1e6b\u1e6d\u1e6f\u1e71\u1e73\u1e75\u1e77\u1e79\u1e7b\u1e7d\u1e7f\u1e81\u1e83\u1e85\u1e87\u1e89\u1e8b\u1e8d\u1e8f\u1e91\u1e93\u1e95-\u1e9d\u1e9f\u1ea1\u1ea3\u1ea5\u1ea7\u1ea9\u1eab\u1ead\u1eaf\u1eb1\u1eb3\u1eb5\u1eb7\u1eb9\u1ebb\u1ebd\u1ebf\u1ec1\u1ec3\u1ec5\u1ec7\u1ec9\u1ecb\u1ecd\u1ecf\u1ed1\u1ed3\u1ed5\u1ed7\u1ed9\u1edb\u1edd\u1edf\u1ee1\u1ee3\u1ee5\u1ee7\u1ee9\u1eeb\u1eed\u1eef\u1ef1\u1ef3\u1ef5\u1ef7\u1ef9\u1efb\u1efd\u1eff-\u1f07\u1f10-\u1f15\u1f20-\u1f27\u1f30-\u1f37\u1f40-\u1f45\u1f50-\u1f57\u1f60-\u1f67\u1f70-\u1f7d\u1f80-\u1f87\u1f90-\u1f97\u1fa0-\u1fa7\u1fb0-\u1fb4\u1fb6-\u1fb7\u1fbe\u1fc2-\u1fc4\u1fc6-\u1fc7\u1fd0-\u1fd3\u1fd6-\u1fd7\u1fe0-\u1fe7\u1ff2-\u1ff4\u1ff6-\u1ff7\u210a\u210e-\u210f\u2113\u212f\u2134\u2139\u213c-\u213d\u2146-\u2149\u214e\u2184\u2c30-\u2c5e\u2c61\u2c65-\u2c66\u2c68\u2c6a\u2c6c\u2c71\u2c73-\u2c74\u2c76-\u2c7b\u2c81\u2c83\u2c85\u2c87\u2c89\u2c8b\u2c8d\u2c8f\u2c91\u2c93\u2c95\u2c97\u2c99\u2c9b\u2c9d\u2c9f\u2ca1\u2ca3\u2ca5\u2ca7\u2ca9\u2cab\u2cad\u2caf\u2cb1\u2cb3\u2cb5\u2cb7\u2cb9\u2cbb\u2cbd\u2cbf\u2cc1\u2cc3\u2cc5\u2cc7\u2cc9\u2ccb\u2ccd\u2ccf\u2cd1\u2cd3\u2cd5\u2cd7\u2cd9\u2cdb\u2cdd\u2cdf\u2ce1\u2ce3-\u2ce4\u2cec\u2cee\u2cf3\u2d00-\u2d25\u2d27\u2d2d\ua641\ua643\ua645\ua647\ua649\ua64b\ua64d\ua64f\ua651\ua653\ua655\ua657\ua659\ua65b\ua65d\ua65f\ua661\ua663\ua665\ua667\ua669\ua66b\ua66d\ua681\ua683\ua685\
ua687\ua689\ua68b\ua68d\ua68f\ua691\ua693\ua695\ua697\ua699\ua69b\ua723\ua725\ua727\ua729\ua72b\ua72d\ua72f-\ua731\ua733\ua735\ua737\ua739\ua73b\ua73d\ua73f\ua741\ua743\ua745\ua747\ua749\ua74b\ua74d\ua74f\ua751\ua753\ua755\ua757\ua759\ua75b\ua75d\ua75f\ua761\ua763\ua765\ua767\ua769\ua76b\ua76d\ua76f\ua771-\ua778\ua77a\ua77c\ua77f\ua781\ua783\ua785\ua787\ua78c\ua78e\ua791\ua793-\ua795\ua797\ua799\ua79b\ua79d\ua79f\ua7a1\ua7a3\ua7a5\ua7a7\ua7a9\ua7af\ua7b5\ua7b7\ua7b9\ua7fa\uab30-\uab5a\uab60-\uab65\uab70-\uabbf\ufb00-\ufb06\ufb13-\ufb17\uff41-\uff5a\U00010428-\U0001044f\U000104d8-\U000104fb\U00010cc0-\U00010cf2\U000118c0-\U000118df\U00016e60-\U00016e7f\U0001d41a-\U0001d433\U0001d44e-\U0001d454\U0001d456-\U0001d467\U0001d482-\U0001d49b\U0001d4b6-\U0001d4b9\U0001d4bb\U0001d4bd-\U0001d4c3\U0001d4c5-\U0001d4cf\U0001d4ea-\U0001d503\U0001d51e-\U0001d537\U0001d552-\U0001d56b\U0001d586-\U0001d59f\U0001d5ba-\U0001d5d3\U0001d5ee-\U0001d607\U0001d622-\U0001d63b\U0001d656-\U0001d66f\U0001d68a-\U0001d6a5\U0001d6c2-\U0001d6da\U0001d6dc-\U0001d6e1\U0001d6fc-\U0001d714\U0001d716-\U0001d71b\U0001d736-\U0001d74e\U0001d750-\U0001d755\U0001d770-\U0001d788\U0001d78a-\U0001d78f\U0001d7aa-\U0001d7c2\U0001d7c4-\U0001d7c9\U0001d7cb\U0001e922-\U0001e943' - -Lm = '\u02b0-\u02c1\u02c6-\u02d1\u02e0-\u02e4\u02ec\u02ee\u0374\u037a\u0559\u0640\u06e5-\u06e6\u07f4-\u07f5\u07fa\u081a\u0824\u0828\u0971\u0e46\u0ec6\u10fc\u17d7\u1843\u1aa7\u1c78-\u1c7d\u1d2c-\u1d6a\u1d78\u1d9b-\u1dbf\u2071\u207f\u2090-\u209c\u2c7c-\u2c7d\u2d6f\u2e2f\u3005\u3031-\u3035\u303b\u309d-\u309e\u30fc-\u30fe\ua015\ua4f8-\ua4fd\ua60c\ua67f\ua69c-\ua69d\ua717-\ua71f\ua770\ua788\ua7f8-\ua7f9\ua9cf\ua9e6\uaa70\uaadd\uaaf3-\uaaf4\uab5c-\uab5f\uff70\uff9e-\uff9f\U00016b40-\U00016b43\U00016f93-\U00016f9f\U00016fe0-\U00016fe1' - -Lo = 
'\xaa\xba\u01bb\u01c0-\u01c3\u0294\u05d0-\u05ea\u05ef-\u05f2\u0620-\u063f\u0641-\u064a\u066e-\u066f\u0671-\u06d3\u06d5\u06ee-\u06ef\u06fa-\u06fc\u06ff\u0710\u0712-\u072f\u074d-\u07a5\u07b1\u07ca-\u07ea\u0800-\u0815\u0840-\u0858\u0860-\u086a\u08a0-\u08b4\u08b6-\u08bd\u0904-\u0939\u093d\u0950\u0958-\u0961\u0972-\u0980\u0985-\u098c\u098f-\u0990\u0993-\u09a8\u09aa-\u09b0\u09b2\u09b6-\u09b9\u09bd\u09ce\u09dc-\u09dd\u09df-\u09e1\u09f0-\u09f1\u09fc\u0a05-\u0a0a\u0a0f-\u0a10\u0a13-\u0a28\u0a2a-\u0a30\u0a32-\u0a33\u0a35-\u0a36\u0a38-\u0a39\u0a59-\u0a5c\u0a5e\u0a72-\u0a74\u0a85-\u0a8d\u0a8f-\u0a91\u0a93-\u0aa8\u0aaa-\u0ab0\u0ab2-\u0ab3\u0ab5-\u0ab9\u0abd\u0ad0\u0ae0-\u0ae1\u0af9\u0b05-\u0b0c\u0b0f-\u0b10\u0b13-\u0b28\u0b2a-\u0b30\u0b32-\u0b33\u0b35-\u0b39\u0b3d\u0b5c-\u0b5d\u0b5f-\u0b61\u0b71\u0b83\u0b85-\u0b8a\u0b8e-\u0b90\u0b92-\u0b95\u0b99-\u0b9a\u0b9c\u0b9e-\u0b9f\u0ba3-\u0ba4\u0ba8-\u0baa\u0bae-\u0bb9\u0bd0\u0c05-\u0c0c\u0c0e-\u0c10\u0c12-\u0c28\u0c2a-\u0c39\u0c3d\u0c58-\u0c5a\u0c60-\u0c61\u0c80\u0c85-\u0c8c\u0c8e-\u0c90\u0c92-\u0ca8\u0caa-\u0cb3\u0cb5-\u0cb9\u0cbd\u0cde\u0ce0-\u0ce1\u0cf1-\u0cf2\u0d05-\u0d0c\u0d0e-\u0d10\u0d12-\u0d3a\u0d3d\u0d4e\u0d54-\u0d56\u0d5f-\u0d61\u0d7a-\u0d7f\u0d85-\u0d96\u0d9a-\u0db1\u0db3-\u0dbb\u0dbd\u0dc0-\u0dc6\u0e01-\u0e30\u0e32-\u0e33\u0e40-\u0e45\u0e81-\u0e82\u0e84\u0e87-\u0e88\u0e8a\u0e8d\u0e94-\u0e97\u0e99-\u0e9f\u0ea1-\u0ea3\u0ea5\u0ea7\u0eaa-\u0eab\u0ead-\u0eb0\u0eb2-\u0eb3\u0ebd\u0ec0-\u0ec4\u0edc-\u0edf\u0f00\u0f40-\u0f47\u0f49-\u0f6c\u0f88-\u0f8c\u1000-\u102a\u103f\u1050-\u1055\u105a-\u105d\u1061\u1065-\u1066\u106e-\u1070\u1075-\u1081\u108e\u1100-\u1248\u124a-\u124d\u1250-\u1256\u1258\u125a-\u125d\u1260-\u1288\u128a-\u128d\u1290-\u12b0\u12b2-\u12b5\u12b8-\u12be\u12c0\u12c2-\u12c5\u12c8-\u12d6\u12d8-\u1310\u1312-\u1315\u1318-\u135a\u1380-\u138f\u1401-\u166c\u166f-\u167f\u1681-\u169a\u16a0-\u16ea\u16f1-\u16f8\u1700-\u170c\u170e-\u1711\u1720-\u1731\u1740-\u1751\u1760-\u176c\u176e-\u1770\u1780-\u17b3\u17dc\u1820-\u1842\u1844-\u1878\u1880-\u1884\u1887-\u18a8\u18aa\u18b0-\u18f5\u1900-\u191e\u1950-\u196d\u1970-\u1974\u1980-\u19ab\u19b0-\u19c9\u1a00-\u1a16\u1a20-\u1a54\u1b05-\u1b33\u1b45-\u1b4b\u1b83-\u1ba0\u1bae-\u1baf\u1bba-\u1be5\u1c00-\u1c23\u1c4d-\u1c4f\u1c5a-\u1c77\u1ce9-\u1cec\u1cee-\u1cf1\u1cf5-\u1cf6\u2135-\u2138\u2d30-\u2d67\u2d80-\u2d96\u2da0-\u2da6\u2da8-\u2dae\u2db0-\u2db6\u2db8-\u2dbe\u2dc0-\u2dc6\u2dc8-\u2dce\u2dd0-\u2dd6\u2dd8-\u2dde\u3006\u303c\u3041-\u3096\u309f\u30a1-\u30fa\u30ff\u3105-\u312f\u3131-\u318e\u31a0-\u31ba\u31f0-\u31ff\u3400-\u4db5\u4e00-\u9fef\ua000-\ua014\ua016-\ua48c\ua4d0-\ua4f7\ua500-\ua60b\ua610-\ua61f\ua62a-\ua62b\ua66e\ua6a0-\ua6e5\ua78f\ua7f7\ua7fb-\ua801\ua803-\ua805\ua807-\ua80a\ua80c-\ua822\ua840-\ua873\ua882-\ua8b3\ua8f2-\ua8f7\ua8fb\ua8fd-\ua8fe\ua90a-\ua925\ua930-\ua946\ua960-\ua97c\ua984-\ua9b2\ua9e0-\ua9e4\ua9e7-\ua9ef\ua9fa-\ua9fe\uaa00-\uaa28\uaa40-\uaa42\uaa44-\uaa4b\uaa60-\uaa6f\uaa71-\uaa76\uaa7a\uaa7e-\uaaaf\uaab1\uaab5-\uaab6\uaab9-\uaabd\uaac0\uaac2\uaadb-\uaadc\uaae0-\uaaea\uaaf2\uab01-\uab06\uab09-\uab0e\uab11-\uab16\uab20-\uab26\uab28-\uab2e\uabc0-\uabe2\uac00-\ud7a3\ud7b0-\ud7c6\ud7cb-\ud7fb\uf900-\ufa6d\ufa70-\ufad9\ufb1d\ufb1f-\ufb28\ufb2a-\ufb36\ufb38-\ufb3c\ufb3e\ufb40-\ufb41\ufb43-\ufb44\ufb46-\ufbb1\ufbd3-\ufd3d\ufd50-\ufd8f\ufd92-\ufdc7\ufdf0-\ufdfb\ufe70-\ufe74\ufe76-\ufefc\uff66-\uff6f\uff71-\uff9d\uffa0-\uffbe\uffc2-\uffc7\uffca-\uffcf\uffd2-\uffd7\uffda-\uffdc\U00010000-\U0001000b\U0001000d-\U00010026\U00010028-\U0001003a\U0001003c-\U0001003d\U0001003f-\U0001004d\U00010050-\U0001005d\U00010080
-\U000100fa\U00010280-\U0001029c\U000102a0-\U000102d0\U00010300-\U0001031f\U0001032d-\U00010340\U00010342-\U00010349\U00010350-\U00010375\U00010380-\U0001039d\U000103a0-\U000103c3\U000103c8-\U000103cf\U00010450-\U0001049d\U00010500-\U00010527\U00010530-\U00010563\U00010600-\U00010736\U00010740-\U00010755\U00010760-\U00010767\U00010800-\U00010805\U00010808\U0001080a-\U00010835\U00010837-\U00010838\U0001083c\U0001083f-\U00010855\U00010860-\U00010876\U00010880-\U0001089e\U000108e0-\U000108f2\U000108f4-\U000108f5\U00010900-\U00010915\U00010920-\U00010939\U00010980-\U000109b7\U000109be-\U000109bf\U00010a00\U00010a10-\U00010a13\U00010a15-\U00010a17\U00010a19-\U00010a35\U00010a60-\U00010a7c\U00010a80-\U00010a9c\U00010ac0-\U00010ac7\U00010ac9-\U00010ae4\U00010b00-\U00010b35\U00010b40-\U00010b55\U00010b60-\U00010b72\U00010b80-\U00010b91\U00010c00-\U00010c48\U00010d00-\U00010d23\U00010f00-\U00010f1c\U00010f27\U00010f30-\U00010f45\U00011003-\U00011037\U00011083-\U000110af\U000110d0-\U000110e8\U00011103-\U00011126\U00011144\U00011150-\U00011172\U00011176\U00011183-\U000111b2\U000111c1-\U000111c4\U000111da\U000111dc\U00011200-\U00011211\U00011213-\U0001122b\U00011280-\U00011286\U00011288\U0001128a-\U0001128d\U0001128f-\U0001129d\U0001129f-\U000112a8\U000112b0-\U000112de\U00011305-\U0001130c\U0001130f-\U00011310\U00011313-\U00011328\U0001132a-\U00011330\U00011332-\U00011333\U00011335-\U00011339\U0001133d\U00011350\U0001135d-\U00011361\U00011400-\U00011434\U00011447-\U0001144a\U00011480-\U000114af\U000114c4-\U000114c5\U000114c7\U00011580-\U000115ae\U000115d8-\U000115db\U00011600-\U0001162f\U00011644\U00011680-\U000116aa\U00011700-\U0001171a\U00011800-\U0001182b\U000118ff\U00011a00\U00011a0b-\U00011a32\U00011a3a\U00011a50\U00011a5c-\U00011a83\U00011a86-\U00011a89\U00011a9d\U00011ac0-\U00011af8\U00011c00-\U00011c08\U00011c0a-\U00011c2e\U00011c40\U00011c72-\U00011c8f\U00011d00-\U00011d06\U00011d08-\U00011d09\U00011d0b-\U00011d30\U00011d46\U00011d60-\U00011d65\U00011d67-\U00011d68\U00011d6a-\U00011d89\U00011d98\U00011ee0-\U00011ef2\U00012000-\U00012399\U00012480-\U00012543\U00013000-\U0001342e\U00014400-\U00014646\U00016800-\U00016a38\U00016a40-\U00016a5e\U00016ad0-\U00016aed\U00016b00-\U00016b2f\U00016b63-\U00016b77\U00016b7d-\U00016b8f\U00016f00-\U00016f44\U00016f50\U00017000-\U000187f1\U00018800-\U00018af2\U0001b000-\U0001b11e\U0001b170-\U0001b2fb\U0001bc00-\U0001bc6a\U0001bc70-\U0001bc7c\U0001bc80-\U0001bc88\U0001bc90-\U0001bc99\U0001e800-\U0001e8c4\U0001ee00-\U0001ee03\U0001ee05-\U0001ee1f\U0001ee21-\U0001ee22\U0001ee24\U0001ee27\U0001ee29-\U0001ee32\U0001ee34-\U0001ee37\U0001ee39\U0001ee3b\U0001ee42\U0001ee47\U0001ee49\U0001ee4b\U0001ee4d-\U0001ee4f\U0001ee51-\U0001ee52\U0001ee54\U0001ee57\U0001ee59\U0001ee5b\U0001ee5d\U0001ee5f\U0001ee61-\U0001ee62\U0001ee64\U0001ee67-\U0001ee6a\U0001ee6c-\U0001ee72\U0001ee74-\U0001ee77\U0001ee79-\U0001ee7c\U0001ee7e\U0001ee80-\U0001ee89\U0001ee8b-\U0001ee9b\U0001eea1-\U0001eea3\U0001eea5-\U0001eea9\U0001eeab-\U0001eebb\U00020000-\U0002a6d6\U0002a700-\U0002b734\U0002b740-\U0002b81d\U0002b820-\U0002cea1\U0002ceb0-\U0002ebe0\U0002f800-\U0002fa1d' - -Lt = '\u01c5\u01c8\u01cb\u01f2\u1f88-\u1f8f\u1f98-\u1f9f\u1fa8-\u1faf\u1fbc\u1fcc\u1ffc' - -Lu = 
'A-Z\xc0-\xd6\xd8-\xde\u0100\u0102\u0104\u0106\u0108\u010a\u010c\u010e\u0110\u0112\u0114\u0116\u0118\u011a\u011c\u011e\u0120\u0122\u0124\u0126\u0128\u012a\u012c\u012e\u0130\u0132\u0134\u0136\u0139\u013b\u013d\u013f\u0141\u0143\u0145\u0147\u014a\u014c\u014e\u0150\u0152\u0154\u0156\u0158\u015a\u015c\u015e\u0160\u0162\u0164\u0166\u0168\u016a\u016c\u016e\u0170\u0172\u0174\u0176\u0178-\u0179\u017b\u017d\u0181-\u0182\u0184\u0186-\u0187\u0189-\u018b\u018e-\u0191\u0193-\u0194\u0196-\u0198\u019c-\u019d\u019f-\u01a0\u01a2\u01a4\u01a6-\u01a7\u01a9\u01ac\u01ae-\u01af\u01b1-\u01b3\u01b5\u01b7-\u01b8\u01bc\u01c4\u01c7\u01ca\u01cd\u01cf\u01d1\u01d3\u01d5\u01d7\u01d9\u01db\u01de\u01e0\u01e2\u01e4\u01e6\u01e8\u01ea\u01ec\u01ee\u01f1\u01f4\u01f6-\u01f8\u01fa\u01fc\u01fe\u0200\u0202\u0204\u0206\u0208\u020a\u020c\u020e\u0210\u0212\u0214\u0216\u0218\u021a\u021c\u021e\u0220\u0222\u0224\u0226\u0228\u022a\u022c\u022e\u0230\u0232\u023a-\u023b\u023d-\u023e\u0241\u0243-\u0246\u0248\u024a\u024c\u024e\u0370\u0372\u0376\u037f\u0386\u0388-\u038a\u038c\u038e-\u038f\u0391-\u03a1\u03a3-\u03ab\u03cf\u03d2-\u03d4\u03d8\u03da\u03dc\u03de\u03e0\u03e2\u03e4\u03e6\u03e8\u03ea\u03ec\u03ee\u03f4\u03f7\u03f9-\u03fa\u03fd-\u042f\u0460\u0462\u0464\u0466\u0468\u046a\u046c\u046e\u0470\u0472\u0474\u0476\u0478\u047a\u047c\u047e\u0480\u048a\u048c\u048e\u0490\u0492\u0494\u0496\u0498\u049a\u049c\u049e\u04a0\u04a2\u04a4\u04a6\u04a8\u04aa\u04ac\u04ae\u04b0\u04b2\u04b4\u04b6\u04b8\u04ba\u04bc\u04be\u04c0-\u04c1\u04c3\u04c5\u04c7\u04c9\u04cb\u04cd\u04d0\u04d2\u04d4\u04d6\u04d8\u04da\u04dc\u04de\u04e0\u04e2\u04e4\u04e6\u04e8\u04ea\u04ec\u04ee\u04f0\u04f2\u04f4\u04f6\u04f8\u04fa\u04fc\u04fe\u0500\u0502\u0504\u0506\u0508\u050a\u050c\u050e\u0510\u0512\u0514\u0516\u0518\u051a\u051c\u051e\u0520\u0522\u0524\u0526\u0528\u052a\u052c\u052e\u0531-\u0556\u10a0-\u10c5\u10c7\u10cd\u13a0-\u13f5\u1c90-\u1cba\u1cbd-\u1cbf\u1e00\u1e02\u1e04\u1e06\u1e08\u1e0a\u1e0c\u1e0e\u1e10\u1e12\u1e14\u1e16\u1e18\u1e1a\u1e1c\u1e1e\u1e20\u1e22\u1e24\u1e26\u1e28\u1e2a\u1e2c\u1e2e\u1e30\u1e32\u1e34\u1e36\u1e38\u1e3a\u1e3c\u1e3e\u1e40\u1e42\u1e44\u1e46\u1e48\u1e4a\u1e4c\u1e4e\u1e50\u1e52\u1e54\u1e56\u1e58\u1e5a\u1e5c\u1e5e\u1e60\u1e62\u1e64\u1e66\u1e68\u1e6a\u1e6c\u1e6e\u1e70\u1e72\u1e74\u1e76\u1e78\u1e7a\u1e7c\u1e7e\u1e80\u1e82\u1e84\u1e86\u1e88\u1e8a\u1e8c\u1e8e\u1e90\u1e92\u1e94\u1e9e\u1ea0\u1ea2\u1ea4\u1ea6\u1ea8\u1eaa\u1eac\u1eae\u1eb0\u1eb2\u1eb4\u1eb6\u1eb8\u1eba\u1ebc\u1ebe\u1ec0\u1ec2\u1ec4\u1ec6\u1ec8\u1eca\u1ecc\u1ece\u1ed0\u1ed2\u1ed4\u1ed6\u1ed8\u1eda\u1edc\u1ede\u1ee0\u1ee2\u1ee4\u1ee6\u1ee8\u1eea\u1eec\u1eee\u1ef0\u1ef2\u1ef4\u1ef6\u1ef8\u1efa\u1efc\u1efe\u1f08-\u1f0f\u1f18-\u1f1d\u1f28-\u1f2f\u1f38-\u1f3f\u1f48-\u1f4d\u1f59\u1f5b\u1f5d\u1f5f\u1f68-\u1f6f\u1fb8-\u1fbb\u1fc8-\u1fcb\u1fd8-\u1fdb\u1fe8-\u1fec\u1ff8-\u1ffb\u2102\u2107\u210b-\u210d\u2110-\u2112\u2115\u2119-\u211d\u2124\u2126\u2128\u212a-\u212d\u2130-\u2133\u213e-\u213f\u2145\u2183\u2c00-\u2c2e\u2c60\u2c62-\u2c64\u2c67\u2c69\u2c6b\u2c6d-\u2c70\u2c72\u2c75\u2c7e-\u2c80\u2c82\u2c84\u2c86\u2c88\u2c8a\u2c8c\u2c8e\u2c90\u2c92\u2c94\u2c96\u2c98\u2c9a\u2c9c\u2c9e\u2ca0\u2ca2\u2ca4\u2ca6\u2ca8\u2caa\u2cac\u2cae\u2cb0\u2cb2\u2cb4\u2cb6\u2cb8\u2cba\u2cbc\u2cbe\u2cc0\u2cc2\u2cc4\u2cc6\u2cc8\u2cca\u2ccc\u2cce\u2cd0\u2cd2\u2cd4\u2cd6\u2cd8\u2cda\u2cdc\u2cde\u2ce0\u2ce2\u2ceb\u2ced\u2cf2\ua640\ua642\ua644\ua646\ua648\ua64a\ua64c\ua64e\ua650\ua652\ua654\ua656\ua658\ua65a\ua65c\ua65e\ua660\ua662\ua664\ua666\ua668\ua66a\ua66c\ua680\ua682\ua684\ua686\ua688\ua68a\ua68c\ua68e\ua690\ua692\ua694\ua696\ua698\ua69a\ua722\ua724\u
a726\ua728\ua72a\ua72c\ua72e\ua732\ua734\ua736\ua738\ua73a\ua73c\ua73e\ua740\ua742\ua744\ua746\ua748\ua74a\ua74c\ua74e\ua750\ua752\ua754\ua756\ua758\ua75a\ua75c\ua75e\ua760\ua762\ua764\ua766\ua768\ua76a\ua76c\ua76e\ua779\ua77b\ua77d-\ua77e\ua780\ua782\ua784\ua786\ua78b\ua78d\ua790\ua792\ua796\ua798\ua79a\ua79c\ua79e\ua7a0\ua7a2\ua7a4\ua7a6\ua7a8\ua7aa-\ua7ae\ua7b0-\ua7b4\ua7b6\ua7b8\uff21-\uff3a\U00010400-\U00010427\U000104b0-\U000104d3\U00010c80-\U00010cb2\U000118a0-\U000118bf\U00016e40-\U00016e5f\U0001d400-\U0001d419\U0001d434-\U0001d44d\U0001d468-\U0001d481\U0001d49c\U0001d49e-\U0001d49f\U0001d4a2\U0001d4a5-\U0001d4a6\U0001d4a9-\U0001d4ac\U0001d4ae-\U0001d4b5\U0001d4d0-\U0001d4e9\U0001d504-\U0001d505\U0001d507-\U0001d50a\U0001d50d-\U0001d514\U0001d516-\U0001d51c\U0001d538-\U0001d539\U0001d53b-\U0001d53e\U0001d540-\U0001d544\U0001d546\U0001d54a-\U0001d550\U0001d56c-\U0001d585\U0001d5a0-\U0001d5b9\U0001d5d4-\U0001d5ed\U0001d608-\U0001d621\U0001d63c-\U0001d655\U0001d670-\U0001d689\U0001d6a8-\U0001d6c0\U0001d6e2-\U0001d6fa\U0001d71c-\U0001d734\U0001d756-\U0001d76e\U0001d790-\U0001d7a8\U0001d7ca\U0001e900-\U0001e921' - -Mc = '\u0903\u093b\u093e-\u0940\u0949-\u094c\u094e-\u094f\u0982-\u0983\u09be-\u09c0\u09c7-\u09c8\u09cb-\u09cc\u09d7\u0a03\u0a3e-\u0a40\u0a83\u0abe-\u0ac0\u0ac9\u0acb-\u0acc\u0b02-\u0b03\u0b3e\u0b40\u0b47-\u0b48\u0b4b-\u0b4c\u0b57\u0bbe-\u0bbf\u0bc1-\u0bc2\u0bc6-\u0bc8\u0bca-\u0bcc\u0bd7\u0c01-\u0c03\u0c41-\u0c44\u0c82-\u0c83\u0cbe\u0cc0-\u0cc4\u0cc7-\u0cc8\u0cca-\u0ccb\u0cd5-\u0cd6\u0d02-\u0d03\u0d3e-\u0d40\u0d46-\u0d48\u0d4a-\u0d4c\u0d57\u0d82-\u0d83\u0dcf-\u0dd1\u0dd8-\u0ddf\u0df2-\u0df3\u0f3e-\u0f3f\u0f7f\u102b-\u102c\u1031\u1038\u103b-\u103c\u1056-\u1057\u1062-\u1064\u1067-\u106d\u1083-\u1084\u1087-\u108c\u108f\u109a-\u109c\u17b6\u17be-\u17c5\u17c7-\u17c8\u1923-\u1926\u1929-\u192b\u1930-\u1931\u1933-\u1938\u1a19-\u1a1a\u1a55\u1a57\u1a61\u1a63-\u1a64\u1a6d-\u1a72\u1b04\u1b35\u1b3b\u1b3d-\u1b41\u1b43-\u1b44\u1b82\u1ba1\u1ba6-\u1ba7\u1baa\u1be7\u1bea-\u1bec\u1bee\u1bf2-\u1bf3\u1c24-\u1c2b\u1c34-\u1c35\u1ce1\u1cf2-\u1cf3\u1cf7\u302e-\u302f\ua823-\ua824\ua827\ua880-\ua881\ua8b4-\ua8c3\ua952-\ua953\ua983\ua9b4-\ua9b5\ua9ba-\ua9bb\ua9bd-\ua9c0\uaa2f-\uaa30\uaa33-\uaa34\uaa4d\uaa7b\uaa7d\uaaeb\uaaee-\uaaef\uaaf5\uabe3-\uabe4\uabe6-\uabe7\uabe9-\uabea\uabec\U00011000\U00011002\U00011082\U000110b0-\U000110b2\U000110b7-\U000110b8\U0001112c\U00011145-\U00011146\U00011182\U000111b3-\U000111b5\U000111bf-\U000111c0\U0001122c-\U0001122e\U00011232-\U00011233\U00011235\U000112e0-\U000112e2\U00011302-\U00011303\U0001133e-\U0001133f\U00011341-\U00011344\U00011347-\U00011348\U0001134b-\U0001134d\U00011357\U00011362-\U00011363\U00011435-\U00011437\U00011440-\U00011441\U00011445\U000114b0-\U000114b2\U000114b9\U000114bb-\U000114be\U000114c1\U000115af-\U000115b1\U000115b8-\U000115bb\U000115be\U00011630-\U00011632\U0001163b-\U0001163c\U0001163e\U000116ac\U000116ae-\U000116af\U000116b6\U00011720-\U00011721\U00011726\U0001182c-\U0001182e\U00011838\U00011a39\U00011a57-\U00011a58\U00011a97\U00011c2f\U00011c3e\U00011ca9\U00011cb1\U00011cb4\U00011d8a-\U00011d8e\U00011d93-\U00011d94\U00011d96\U00011ef5-\U00011ef6\U00016f51-\U00016f7e\U0001d165-\U0001d166\U0001d16d-\U0001d172' - -Me = '\u0488-\u0489\u1abe\u20dd-\u20e0\u20e2-\u20e4\ua670-\ua672' - -Mn = 
'\u0300-\u036f\u0483-\u0487\u0591-\u05bd\u05bf\u05c1-\u05c2\u05c4-\u05c5\u05c7\u0610-\u061a\u064b-\u065f\u0670\u06d6-\u06dc\u06df-\u06e4\u06e7-\u06e8\u06ea-\u06ed\u0711\u0730-\u074a\u07a6-\u07b0\u07eb-\u07f3\u07fd\u0816-\u0819\u081b-\u0823\u0825-\u0827\u0829-\u082d\u0859-\u085b\u08d3-\u08e1\u08e3-\u0902\u093a\u093c\u0941-\u0948\u094d\u0951-\u0957\u0962-\u0963\u0981\u09bc\u09c1-\u09c4\u09cd\u09e2-\u09e3\u09fe\u0a01-\u0a02\u0a3c\u0a41-\u0a42\u0a47-\u0a48\u0a4b-\u0a4d\u0a51\u0a70-\u0a71\u0a75\u0a81-\u0a82\u0abc\u0ac1-\u0ac5\u0ac7-\u0ac8\u0acd\u0ae2-\u0ae3\u0afa-\u0aff\u0b01\u0b3c\u0b3f\u0b41-\u0b44\u0b4d\u0b56\u0b62-\u0b63\u0b82\u0bc0\u0bcd\u0c00\u0c04\u0c3e-\u0c40\u0c46-\u0c48\u0c4a-\u0c4d\u0c55-\u0c56\u0c62-\u0c63\u0c81\u0cbc\u0cbf\u0cc6\u0ccc-\u0ccd\u0ce2-\u0ce3\u0d00-\u0d01\u0d3b-\u0d3c\u0d41-\u0d44\u0d4d\u0d62-\u0d63\u0dca\u0dd2-\u0dd4\u0dd6\u0e31\u0e34-\u0e3a\u0e47-\u0e4e\u0eb1\u0eb4-\u0eb9\u0ebb-\u0ebc\u0ec8-\u0ecd\u0f18-\u0f19\u0f35\u0f37\u0f39\u0f71-\u0f7e\u0f80-\u0f84\u0f86-\u0f87\u0f8d-\u0f97\u0f99-\u0fbc\u0fc6\u102d-\u1030\u1032-\u1037\u1039-\u103a\u103d-\u103e\u1058-\u1059\u105e-\u1060\u1071-\u1074\u1082\u1085-\u1086\u108d\u109d\u135d-\u135f\u1712-\u1714\u1732-\u1734\u1752-\u1753\u1772-\u1773\u17b4-\u17b5\u17b7-\u17bd\u17c6\u17c9-\u17d3\u17dd\u180b-\u180d\u1885-\u1886\u18a9\u1920-\u1922\u1927-\u1928\u1932\u1939-\u193b\u1a17-\u1a18\u1a1b\u1a56\u1a58-\u1a5e\u1a60\u1a62\u1a65-\u1a6c\u1a73-\u1a7c\u1a7f\u1ab0-\u1abd\u1b00-\u1b03\u1b34\u1b36-\u1b3a\u1b3c\u1b42\u1b6b-\u1b73\u1b80-\u1b81\u1ba2-\u1ba5\u1ba8-\u1ba9\u1bab-\u1bad\u1be6\u1be8-\u1be9\u1bed\u1bef-\u1bf1\u1c2c-\u1c33\u1c36-\u1c37\u1cd0-\u1cd2\u1cd4-\u1ce0\u1ce2-\u1ce8\u1ced\u1cf4\u1cf8-\u1cf9\u1dc0-\u1df9\u1dfb-\u1dff\u20d0-\u20dc\u20e1\u20e5-\u20f0\u2cef-\u2cf1\u2d7f\u2de0-\u2dff\u302a-\u302d\u3099-\u309a\ua66f\ua674-\ua67d\ua69e-\ua69f\ua6f0-\ua6f1\ua802\ua806\ua80b\ua825-\ua826\ua8c4-\ua8c5\ua8e0-\ua8f1\ua8ff\ua926-\ua92d\ua947-\ua951\ua980-\ua982\ua9b3\ua9b6-\ua9b9\ua9bc\ua9e5\uaa29-\uaa2e\uaa31-\uaa32\uaa35-\uaa36\uaa43\uaa4c\uaa7c\uaab0\uaab2-\uaab4\uaab7-\uaab8\uaabe-\uaabf\uaac1\uaaec-\uaaed\uaaf6\uabe5\uabe8\uabed\ufb1e\ufe00-\ufe0f\ufe20-\ufe2f\U000101fd\U000102e0\U00010376-\U0001037a\U00010a01-\U00010a03\U00010a05-\U00010a06\U00010a0c-\U00010a0f\U00010a38-\U00010a3a\U00010a3f\U00010ae5-\U00010ae6\U00010d24-\U00010d27\U00010f46-\U00010f50\U00011001\U00011038-\U00011046\U0001107f-\U00011081\U000110b3-\U000110b6\U000110b9-\U000110ba\U00011100-\U00011102\U00011127-\U0001112b\U0001112d-\U00011134\U00011173\U00011180-\U00011181\U000111b6-\U000111be\U000111c9-\U000111cc\U0001122f-\U00011231\U00011234\U00011236-\U00011237\U0001123e\U000112df\U000112e3-\U000112ea\U00011300-\U00011301\U0001133b-\U0001133c\U00011340\U00011366-\U0001136c\U00011370-\U00011374\U00011438-\U0001143f\U00011442-\U00011444\U00011446\U0001145e\U000114b3-\U000114b8\U000114ba\U000114bf-\U000114c0\U000114c2-\U000114c3\U000115b2-\U000115b5\U000115bc-\U000115bd\U000115bf-\U000115c0\U000115dc-\U000115dd\U00011633-\U0001163a\U0001163d\U0001163f-\U00011640\U000116ab\U000116ad\U000116b0-\U000116b5\U000116b7\U0001171d-\U0001171f\U00011722-\U00011725\U00011727-\U0001172b\U0001182f-\U00011837\U00011839-\U0001183a\U00011a01-\U00011a0a\U00011a33-\U00011a38\U00011a3b-\U00011a3e\U00011a47\U00011a51-\U00011a56\U00011a59-\U00011a5b\U00011a8a-\U00011a96\U00011a98-\U00011a99\U00011c30-\U00011c36\U00011c38-\U00011c3d\U00011c3f\U00011c92-\U00011ca7\U00011caa-\U00011cb0\U00011cb2-\U00011cb3\U00011cb5-\U00011cb6\U00011d31-\U00011d36\U00011d3a\U00011d3c-\U00011d3d\U00011d3f-\U000
11d45\U00011d47\U00011d90-\U00011d91\U00011d95\U00011d97\U00011ef3-\U00011ef4\U00016af0-\U00016af4\U00016b30-\U00016b36\U00016f8f-\U00016f92\U0001bc9d-\U0001bc9e\U0001d167-\U0001d169\U0001d17b-\U0001d182\U0001d185-\U0001d18b\U0001d1aa-\U0001d1ad\U0001d242-\U0001d244\U0001da00-\U0001da36\U0001da3b-\U0001da6c\U0001da75\U0001da84\U0001da9b-\U0001da9f\U0001daa1-\U0001daaf\U0001e000-\U0001e006\U0001e008-\U0001e018\U0001e01b-\U0001e021\U0001e023-\U0001e024\U0001e026-\U0001e02a\U0001e8d0-\U0001e8d6\U0001e944-\U0001e94a\U000e0100-\U000e01ef' - -Nd = '0-9\u0660-\u0669\u06f0-\u06f9\u07c0-\u07c9\u0966-\u096f\u09e6-\u09ef\u0a66-\u0a6f\u0ae6-\u0aef\u0b66-\u0b6f\u0be6-\u0bef\u0c66-\u0c6f\u0ce6-\u0cef\u0d66-\u0d6f\u0de6-\u0def\u0e50-\u0e59\u0ed0-\u0ed9\u0f20-\u0f29\u1040-\u1049\u1090-\u1099\u17e0-\u17e9\u1810-\u1819\u1946-\u194f\u19d0-\u19d9\u1a80-\u1a89\u1a90-\u1a99\u1b50-\u1b59\u1bb0-\u1bb9\u1c40-\u1c49\u1c50-\u1c59\ua620-\ua629\ua8d0-\ua8d9\ua900-\ua909\ua9d0-\ua9d9\ua9f0-\ua9f9\uaa50-\uaa59\uabf0-\uabf9\uff10-\uff19\U000104a0-\U000104a9\U00010d30-\U00010d39\U00011066-\U0001106f\U000110f0-\U000110f9\U00011136-\U0001113f\U000111d0-\U000111d9\U000112f0-\U000112f9\U00011450-\U00011459\U000114d0-\U000114d9\U00011650-\U00011659\U000116c0-\U000116c9\U00011730-\U00011739\U000118e0-\U000118e9\U00011c50-\U00011c59\U00011d50-\U00011d59\U00011da0-\U00011da9\U00016a60-\U00016a69\U00016b50-\U00016b59\U0001d7ce-\U0001d7ff\U0001e950-\U0001e959' - -Nl = '\u16ee-\u16f0\u2160-\u2182\u2185-\u2188\u3007\u3021-\u3029\u3038-\u303a\ua6e6-\ua6ef\U00010140-\U00010174\U00010341\U0001034a\U000103d1-\U000103d5\U00012400-\U0001246e' - -No = '\xb2-\xb3\xb9\xbc-\xbe\u09f4-\u09f9\u0b72-\u0b77\u0bf0-\u0bf2\u0c78-\u0c7e\u0d58-\u0d5e\u0d70-\u0d78\u0f2a-\u0f33\u1369-\u137c\u17f0-\u17f9\u19da\u2070\u2074-\u2079\u2080-\u2089\u2150-\u215f\u2189\u2460-\u249b\u24ea-\u24ff\u2776-\u2793\u2cfd\u3192-\u3195\u3220-\u3229\u3248-\u324f\u3251-\u325f\u3280-\u3289\u32b1-\u32bf\ua830-\ua835\U00010107-\U00010133\U00010175-\U00010178\U0001018a-\U0001018b\U000102e1-\U000102fb\U00010320-\U00010323\U00010858-\U0001085f\U00010879-\U0001087f\U000108a7-\U000108af\U000108fb-\U000108ff\U00010916-\U0001091b\U000109bc-\U000109bd\U000109c0-\U000109cf\U000109d2-\U000109ff\U00010a40-\U00010a48\U00010a7d-\U00010a7e\U00010a9d-\U00010a9f\U00010aeb-\U00010aef\U00010b58-\U00010b5f\U00010b78-\U00010b7f\U00010ba9-\U00010baf\U00010cfa-\U00010cff\U00010e60-\U00010e7e\U00010f1d-\U00010f26\U00010f51-\U00010f54\U00011052-\U00011065\U000111e1-\U000111f4\U0001173a-\U0001173b\U000118ea-\U000118f2\U00011c5a-\U00011c6c\U00016b5b-\U00016b61\U00016e80-\U00016e96\U0001d2e0-\U0001d2f3\U0001d360-\U0001d378\U0001e8c7-\U0001e8cf\U0001ec71-\U0001ecab\U0001ecad-\U0001ecaf\U0001ecb1-\U0001ecb4\U0001f100-\U0001f10c' - -Pc = '_\u203f-\u2040\u2054\ufe33-\ufe34\ufe4d-\ufe4f\uff3f' - -Pd = '\\-\u058a\u05be\u1400\u1806\u2010-\u2015\u2e17\u2e1a\u2e3a-\u2e3b\u2e40\u301c\u3030\u30a0\ufe31-\ufe32\ufe58\ufe63\uff0d' - -Pe = ')\\]}\u0f3b\u0f3d\u169c\u2046\u207e\u208e\u2309\u230b\u232a\u2769\u276b\u276d\u276f\u2771\u2773\u2775\u27c6\u27e7\u27e9\u27eb\u27ed\u27ef\u2984\u2986\u2988\u298a\u298c\u298e\u2990\u2992\u2994\u2996\u2998\u29d9\u29db\u29fd\u2e23\u2e25\u2e27\u2e29\u3009\u300b\u300d\u300f\u3011\u3015\u3017\u3019\u301b\u301e-\u301f\ufd3e\ufe18\ufe36\ufe38\ufe3a\ufe3c\ufe3e\ufe40\ufe42\ufe44\ufe48\ufe5a\ufe5c\ufe5e\uff09\uff3d\uff5d\uff60\uff63' - -Pf = '\xbb\u2019\u201d\u203a\u2e03\u2e05\u2e0a\u2e0d\u2e1d\u2e21' - -Pi = '\xab\u2018\u201b-\u201c\u201f\u2039\u2e02\u2e04\u2e09\u2e0c\u2e1c\u2e20' - -Po = 
"!-#%-'*,.-/:-;?-@\\\\\xa1\xa7\xb6-\xb7\xbf\u037e\u0387\u055a-\u055f\u0589\u05c0\u05c3\u05c6\u05f3-\u05f4\u0609-\u060a\u060c-\u060d\u061b\u061e-\u061f\u066a-\u066d\u06d4\u0700-\u070d\u07f7-\u07f9\u0830-\u083e\u085e\u0964-\u0965\u0970\u09fd\u0a76\u0af0\u0c84\u0df4\u0e4f\u0e5a-\u0e5b\u0f04-\u0f12\u0f14\u0f85\u0fd0-\u0fd4\u0fd9-\u0fda\u104a-\u104f\u10fb\u1360-\u1368\u166d-\u166e\u16eb-\u16ed\u1735-\u1736\u17d4-\u17d6\u17d8-\u17da\u1800-\u1805\u1807-\u180a\u1944-\u1945\u1a1e-\u1a1f\u1aa0-\u1aa6\u1aa8-\u1aad\u1b5a-\u1b60\u1bfc-\u1bff\u1c3b-\u1c3f\u1c7e-\u1c7f\u1cc0-\u1cc7\u1cd3\u2016-\u2017\u2020-\u2027\u2030-\u2038\u203b-\u203e\u2041-\u2043\u2047-\u2051\u2053\u2055-\u205e\u2cf9-\u2cfc\u2cfe-\u2cff\u2d70\u2e00-\u2e01\u2e06-\u2e08\u2e0b\u2e0e-\u2e16\u2e18-\u2e19\u2e1b\u2e1e-\u2e1f\u2e2a-\u2e2e\u2e30-\u2e39\u2e3c-\u2e3f\u2e41\u2e43-\u2e4e\u3001-\u3003\u303d\u30fb\ua4fe-\ua4ff\ua60d-\ua60f\ua673\ua67e\ua6f2-\ua6f7\ua874-\ua877\ua8ce-\ua8cf\ua8f8-\ua8fa\ua8fc\ua92e-\ua92f\ua95f\ua9c1-\ua9cd\ua9de-\ua9df\uaa5c-\uaa5f\uaade-\uaadf\uaaf0-\uaaf1\uabeb\ufe10-\ufe16\ufe19\ufe30\ufe45-\ufe46\ufe49-\ufe4c\ufe50-\ufe52\ufe54-\ufe57\ufe5f-\ufe61\ufe68\ufe6a-\ufe6b\uff01-\uff03\uff05-\uff07\uff0a\uff0c\uff0e-\uff0f\uff1a-\uff1b\uff1f-\uff20\uff3c\uff61\uff64-\uff65\U00010100-\U00010102\U0001039f\U000103d0\U0001056f\U00010857\U0001091f\U0001093f\U00010a50-\U00010a58\U00010a7f\U00010af0-\U00010af6\U00010b39-\U00010b3f\U00010b99-\U00010b9c\U00010f55-\U00010f59\U00011047-\U0001104d\U000110bb-\U000110bc\U000110be-\U000110c1\U00011140-\U00011143\U00011174-\U00011175\U000111c5-\U000111c8\U000111cd\U000111db\U000111dd-\U000111df\U00011238-\U0001123d\U000112a9\U0001144b-\U0001144f\U0001145b\U0001145d\U000114c6\U000115c1-\U000115d7\U00011641-\U00011643\U00011660-\U0001166c\U0001173c-\U0001173e\U0001183b\U00011a3f-\U00011a46\U00011a9a-\U00011a9c\U00011a9e-\U00011aa2\U00011c41-\U00011c45\U00011c70-\U00011c71\U00011ef7-\U00011ef8\U00012470-\U00012474\U00016a6e-\U00016a6f\U00016af5\U00016b37-\U00016b3b\U00016b44\U00016e97-\U00016e9a\U0001bc9f\U0001da87-\U0001da8b\U0001e95e-\U0001e95f" - -Ps = '(\\[{\u0f3a\u0f3c\u169b\u201a\u201e\u2045\u207d\u208d\u2308\u230a\u2329\u2768\u276a\u276c\u276e\u2770\u2772\u2774\u27c5\u27e6\u27e8\u27ea\u27ec\u27ee\u2983\u2985\u2987\u2989\u298b\u298d\u298f\u2991\u2993\u2995\u2997\u29d8\u29da\u29fc\u2e22\u2e24\u2e26\u2e28\u2e42\u3008\u300a\u300c\u300e\u3010\u3014\u3016\u3018\u301a\u301d\ufd3f\ufe17\ufe35\ufe37\ufe39\ufe3b\ufe3d\ufe3f\ufe41\ufe43\ufe47\ufe59\ufe5b\ufe5d\uff08\uff3b\uff5b\uff5f\uff62' - -Sc = '$\xa2-\xa5\u058f\u060b\u07fe-\u07ff\u09f2-\u09f3\u09fb\u0af1\u0bf9\u0e3f\u17db\u20a0-\u20bf\ua838\ufdfc\ufe69\uff04\uffe0-\uffe1\uffe5-\uffe6\U0001ecb0' - -Sk = '\\^`\xa8\xaf\xb4\xb8\u02c2-\u02c5\u02d2-\u02df\u02e5-\u02eb\u02ed\u02ef-\u02ff\u0375\u0384-\u0385\u1fbd\u1fbf-\u1fc1\u1fcd-\u1fcf\u1fdd-\u1fdf\u1fed-\u1fef\u1ffd-\u1ffe\u309b-\u309c\ua700-\ua716\ua720-\ua721\ua789-\ua78a\uab5b\ufbb2-\ufbc1\uff3e\uff40\uffe3\U0001f3fb-\U0001f3ff' - -Sm = 
'+<->|~\xac\xb1\xd7\xf7\u03f6\u0606-\u0608\u2044\u2052\u207a-\u207c\u208a-\u208c\u2118\u2140-\u2144\u214b\u2190-\u2194\u219a-\u219b\u21a0\u21a3\u21a6\u21ae\u21ce-\u21cf\u21d2\u21d4\u21f4-\u22ff\u2320-\u2321\u237c\u239b-\u23b3\u23dc-\u23e1\u25b7\u25c1\u25f8-\u25ff\u266f\u27c0-\u27c4\u27c7-\u27e5\u27f0-\u27ff\u2900-\u2982\u2999-\u29d7\u29dc-\u29fb\u29fe-\u2aff\u2b30-\u2b44\u2b47-\u2b4c\ufb29\ufe62\ufe64-\ufe66\uff0b\uff1c-\uff1e\uff5c\uff5e\uffe2\uffe9-\uffec\U0001d6c1\U0001d6db\U0001d6fb\U0001d715\U0001d735\U0001d74f\U0001d76f\U0001d789\U0001d7a9\U0001d7c3\U0001eef0-\U0001eef1' - -So = '\xa6\xa9\xae\xb0\u0482\u058d-\u058e\u060e-\u060f\u06de\u06e9\u06fd-\u06fe\u07f6\u09fa\u0b70\u0bf3-\u0bf8\u0bfa\u0c7f\u0d4f\u0d79\u0f01-\u0f03\u0f13\u0f15-\u0f17\u0f1a-\u0f1f\u0f34\u0f36\u0f38\u0fbe-\u0fc5\u0fc7-\u0fcc\u0fce-\u0fcf\u0fd5-\u0fd8\u109e-\u109f\u1390-\u1399\u1940\u19de-\u19ff\u1b61-\u1b6a\u1b74-\u1b7c\u2100-\u2101\u2103-\u2106\u2108-\u2109\u2114\u2116-\u2117\u211e-\u2123\u2125\u2127\u2129\u212e\u213a-\u213b\u214a\u214c-\u214d\u214f\u218a-\u218b\u2195-\u2199\u219c-\u219f\u21a1-\u21a2\u21a4-\u21a5\u21a7-\u21ad\u21af-\u21cd\u21d0-\u21d1\u21d3\u21d5-\u21f3\u2300-\u2307\u230c-\u231f\u2322-\u2328\u232b-\u237b\u237d-\u239a\u23b4-\u23db\u23e2-\u2426\u2440-\u244a\u249c-\u24e9\u2500-\u25b6\u25b8-\u25c0\u25c2-\u25f7\u2600-\u266e\u2670-\u2767\u2794-\u27bf\u2800-\u28ff\u2b00-\u2b2f\u2b45-\u2b46\u2b4d-\u2b73\u2b76-\u2b95\u2b98-\u2bc8\u2bca-\u2bfe\u2ce5-\u2cea\u2e80-\u2e99\u2e9b-\u2ef3\u2f00-\u2fd5\u2ff0-\u2ffb\u3004\u3012-\u3013\u3020\u3036-\u3037\u303e-\u303f\u3190-\u3191\u3196-\u319f\u31c0-\u31e3\u3200-\u321e\u322a-\u3247\u3250\u3260-\u327f\u328a-\u32b0\u32c0-\u32fe\u3300-\u33ff\u4dc0-\u4dff\ua490-\ua4c6\ua828-\ua82b\ua836-\ua837\ua839\uaa77-\uaa79\ufdfd\uffe4\uffe8\uffed-\uffee\ufffc-\ufffd\U00010137-\U0001013f\U00010179-\U00010189\U0001018c-\U0001018e\U00010190-\U0001019b\U000101a0\U000101d0-\U000101fc\U00010877-\U00010878\U00010ac8\U0001173f\U00016b3c-\U00016b3f\U00016b45\U0001bc9c\U0001d000-\U0001d0f5\U0001d100-\U0001d126\U0001d129-\U0001d164\U0001d16a-\U0001d16c\U0001d183-\U0001d184\U0001d18c-\U0001d1a9\U0001d1ae-\U0001d1e8\U0001d200-\U0001d241\U0001d245\U0001d300-\U0001d356\U0001d800-\U0001d9ff\U0001da37-\U0001da3a\U0001da6d-\U0001da74\U0001da76-\U0001da83\U0001da85-\U0001da86\U0001ecac\U0001f000-\U0001f02b\U0001f030-\U0001f093\U0001f0a0-\U0001f0ae\U0001f0b1-\U0001f0bf\U0001f0c1-\U0001f0cf\U0001f0d1-\U0001f0f5\U0001f110-\U0001f16b\U0001f170-\U0001f1ac\U0001f1e6-\U0001f202\U0001f210-\U0001f23b\U0001f240-\U0001f248\U0001f250-\U0001f251\U0001f260-\U0001f265\U0001f300-\U0001f3fa\U0001f400-\U0001f6d4\U0001f6e0-\U0001f6ec\U0001f6f0-\U0001f6f9\U0001f700-\U0001f773\U0001f780-\U0001f7d8\U0001f800-\U0001f80b\U0001f810-\U0001f847\U0001f850-\U0001f859\U0001f860-\U0001f887\U0001f890-\U0001f8ad\U0001f900-\U0001f90b\U0001f910-\U0001f93e\U0001f940-\U0001f970\U0001f973-\U0001f976\U0001f97a\U0001f97c-\U0001f9a2\U0001f9b0-\U0001f9b9\U0001f9c0-\U0001f9c2\U0001f9d0-\U0001f9ff\U0001fa60-\U0001fa6d' - -Zl = '\u2028' - -Zp = '\u2029' - -Zs = ' \xa0\u1680\u2000-\u200a\u202f\u205f\u3000' - -xid_continue = 
'0-9A-Z_a-z\xaa\xb5\xb7\xba\xc0-\xd6\xd8-\xf6\xf8-\u02c1\u02c6-\u02d1\u02e0-\u02e4\u02ec\u02ee\u0300-\u0374\u0376-\u0377\u037b-\u037d\u037f\u0386-\u038a\u038c\u038e-\u03a1\u03a3-\u03f5\u03f7-\u0481\u0483-\u0487\u048a-\u052f\u0531-\u0556\u0559\u0560-\u0588\u0591-\u05bd\u05bf\u05c1-\u05c2\u05c4-\u05c5\u05c7\u05d0-\u05ea\u05ef-\u05f2\u0610-\u061a\u0620-\u0669\u066e-\u06d3\u06d5-\u06dc\u06df-\u06e8\u06ea-\u06fc\u06ff\u0710-\u074a\u074d-\u07b1\u07c0-\u07f5\u07fa\u07fd\u0800-\u082d\u0840-\u085b\u0860-\u086a\u08a0-\u08b4\u08b6-\u08bd\u08d3-\u08e1\u08e3-\u0963\u0966-\u096f\u0971-\u0983\u0985-\u098c\u098f-\u0990\u0993-\u09a8\u09aa-\u09b0\u09b2\u09b6-\u09b9\u09bc-\u09c4\u09c7-\u09c8\u09cb-\u09ce\u09d7\u09dc-\u09dd\u09df-\u09e3\u09e6-\u09f1\u09fc\u09fe\u0a01-\u0a03\u0a05-\u0a0a\u0a0f-\u0a10\u0a13-\u0a28\u0a2a-\u0a30\u0a32-\u0a33\u0a35-\u0a36\u0a38-\u0a39\u0a3c\u0a3e-\u0a42\u0a47-\u0a48\u0a4b-\u0a4d\u0a51\u0a59-\u0a5c\u0a5e\u0a66-\u0a75\u0a81-\u0a83\u0a85-\u0a8d\u0a8f-\u0a91\u0a93-\u0aa8\u0aaa-\u0ab0\u0ab2-\u0ab3\u0ab5-\u0ab9\u0abc-\u0ac5\u0ac7-\u0ac9\u0acb-\u0acd\u0ad0\u0ae0-\u0ae3\u0ae6-\u0aef\u0af9-\u0aff\u0b01-\u0b03\u0b05-\u0b0c\u0b0f-\u0b10\u0b13-\u0b28\u0b2a-\u0b30\u0b32-\u0b33\u0b35-\u0b39\u0b3c-\u0b44\u0b47-\u0b48\u0b4b-\u0b4d\u0b56-\u0b57\u0b5c-\u0b5d\u0b5f-\u0b63\u0b66-\u0b6f\u0b71\u0b82-\u0b83\u0b85-\u0b8a\u0b8e-\u0b90\u0b92-\u0b95\u0b99-\u0b9a\u0b9c\u0b9e-\u0b9f\u0ba3-\u0ba4\u0ba8-\u0baa\u0bae-\u0bb9\u0bbe-\u0bc2\u0bc6-\u0bc8\u0bca-\u0bcd\u0bd0\u0bd7\u0be6-\u0bef\u0c00-\u0c0c\u0c0e-\u0c10\u0c12-\u0c28\u0c2a-\u0c39\u0c3d-\u0c44\u0c46-\u0c48\u0c4a-\u0c4d\u0c55-\u0c56\u0c58-\u0c5a\u0c60-\u0c63\u0c66-\u0c6f\u0c80-\u0c83\u0c85-\u0c8c\u0c8e-\u0c90\u0c92-\u0ca8\u0caa-\u0cb3\u0cb5-\u0cb9\u0cbc-\u0cc4\u0cc6-\u0cc8\u0cca-\u0ccd\u0cd5-\u0cd6\u0cde\u0ce0-\u0ce3\u0ce6-\u0cef\u0cf1-\u0cf2\u0d00-\u0d03\u0d05-\u0d0c\u0d0e-\u0d10\u0d12-\u0d44\u0d46-\u0d48\u0d4a-\u0d4e\u0d54-\u0d57\u0d5f-\u0d63\u0d66-\u0d6f\u0d7a-\u0d7f\u0d82-\u0d83\u0d85-\u0d96\u0d9a-\u0db1\u0db3-\u0dbb\u0dbd\u0dc0-\u0dc6\u0dca\u0dcf-\u0dd4\u0dd6\u0dd8-\u0ddf\u0de6-\u0def\u0df2-\u0df3\u0e01-\u0e3a\u0e40-\u0e4e\u0e50-\u0e59\u0e81-\u0e82\u0e84\u0e87-\u0e88\u0e8a\u0e8d\u0e94-\u0e97\u0e99-\u0e9f\u0ea1-\u0ea3\u0ea5\u0ea7\u0eaa-\u0eab\u0ead-\u0eb9\u0ebb-\u0ebd\u0ec0-\u0ec4\u0ec6\u0ec8-\u0ecd\u0ed0-\u0ed9\u0edc-\u0edf\u0f00\u0f18-\u0f19\u0f20-\u0f29\u0f35\u0f37\u0f39\u0f3e-\u0f47\u0f49-\u0f6c\u0f71-\u0f84\u0f86-\u0f97\u0f99-\u0fbc\u0fc6\u1000-\u1049\u1050-\u109d\u10a0-\u10c5\u10c7\u10cd\u10d0-\u10fa\u10fc-\u1248\u124a-\u124d\u1250-\u1256\u1258\u125a-\u125d\u1260-\u1288\u128a-\u128d\u1290-\u12b0\u12b2-\u12b5\u12b8-\u12be\u12c0\u12c2-\u12c5\u12c8-\u12d6\u12d8-\u1310\u1312-\u1315\u1318-\u135a\u135d-\u135f\u1369-\u1371\u1380-\u138f\u13a0-\u13f5\u13f8-\u13fd\u1401-\u166c\u166f-\u167f\u1681-\u169a\u16a0-\u16ea\u16ee-\u16f8\u1700-\u170c\u170e-\u1714\u1720-\u1734\u1740-\u1753\u1760-\u176c\u176e-\u1770\u1772-\u1773\u1780-\u17d3\u17d7\u17dc-\u17dd\u17e0-\u17e9\u180b-\u180d\u1810-\u1819\u1820-\u1878\u1880-\u18aa\u18b0-\u18f5\u1900-\u191e\u1920-\u192b\u1930-\u193b\u1946-\u196d\u1970-\u1974\u1980-\u19ab\u19b0-\u19c9\u19d0-\u19da\u1a00-\u1a1b\u1a20-\u1a5e\u1a60-\u1a7c\u1a7f-\u1a89\u1a90-\u1a99\u1aa7\u1ab0-\u1abd\u1b00-\u1b4b\u1b50-\u1b59\u1b6b-\u1b73\u1b80-\u1bf3\u1c00-\u1c37\u1c40-\u1c49\u1c4d-\u1c7d\u1c80-\u1c88\u1c90-\u1cba\u1cbd-\u1cbf\u1cd0-\u1cd2\u1cd4-\u1cf9\u1d00-\u1df9\u1dfb-\u1f15\u1f18-\u1f1d\u1f20-\u1f45\u1f48-\u1f4d\u1f50-\u1f57\u1f59\u1f5b\u1f5d\u1f5f-\u1f7d\u1f80-\u1fb4\u1fb6-\u1fbc\u1fbe\u1fc2-\u1fc4\u1fc6-\u1fcc\u1fd0-\u1fd3\u1fd6-\u1fdb\u1fe0
-\u1fec\u1ff2-\u1ff4\u1ff6-\u1ffc\u203f-\u2040\u2054\u2071\u207f\u2090-\u209c\u20d0-\u20dc\u20e1\u20e5-\u20f0\u2102\u2107\u210a-\u2113\u2115\u2118-\u211d\u2124\u2126\u2128\u212a-\u2139\u213c-\u213f\u2145-\u2149\u214e\u2160-\u2188\u2c00-\u2c2e\u2c30-\u2c5e\u2c60-\u2ce4\u2ceb-\u2cf3\u2d00-\u2d25\u2d27\u2d2d\u2d30-\u2d67\u2d6f\u2d7f-\u2d96\u2da0-\u2da6\u2da8-\u2dae\u2db0-\u2db6\u2db8-\u2dbe\u2dc0-\u2dc6\u2dc8-\u2dce\u2dd0-\u2dd6\u2dd8-\u2dde\u2de0-\u2dff\u3005-\u3007\u3021-\u302f\u3031-\u3035\u3038-\u303c\u3041-\u3096\u3099-\u309a\u309d-\u309f\u30a1-\u30fa\u30fc-\u30ff\u3105-\u312f\u3131-\u318e\u31a0-\u31ba\u31f0-\u31ff\u3400-\u4db5\u4e00-\u9fef\ua000-\ua48c\ua4d0-\ua4fd\ua500-\ua60c\ua610-\ua62b\ua640-\ua66f\ua674-\ua67d\ua67f-\ua6f1\ua717-\ua71f\ua722-\ua788\ua78b-\ua7b9\ua7f7-\ua827\ua840-\ua873\ua880-\ua8c5\ua8d0-\ua8d9\ua8e0-\ua8f7\ua8fb\ua8fd-\ua92d\ua930-\ua953\ua960-\ua97c\ua980-\ua9c0\ua9cf-\ua9d9\ua9e0-\ua9fe\uaa00-\uaa36\uaa40-\uaa4d\uaa50-\uaa59\uaa60-\uaa76\uaa7a-\uaac2\uaadb-\uaadd\uaae0-\uaaef\uaaf2-\uaaf6\uab01-\uab06\uab09-\uab0e\uab11-\uab16\uab20-\uab26\uab28-\uab2e\uab30-\uab5a\uab5c-\uab65\uab70-\uabea\uabec-\uabed\uabf0-\uabf9\uac00-\ud7a3\ud7b0-\ud7c6\ud7cb-\ud7fb\uf900-\ufa6d\ufa70-\ufad9\ufb00-\ufb06\ufb13-\ufb17\ufb1d-\ufb28\ufb2a-\ufb36\ufb38-\ufb3c\ufb3e\ufb40-\ufb41\ufb43-\ufb44\ufb46-\ufbb1\ufbd3-\ufc5d\ufc64-\ufd3d\ufd50-\ufd8f\ufd92-\ufdc7\ufdf0-\ufdf9\ufe00-\ufe0f\ufe20-\ufe2f\ufe33-\ufe34\ufe4d-\ufe4f\ufe71\ufe73\ufe77\ufe79\ufe7b\ufe7d\ufe7f-\ufefc\uff10-\uff19\uff21-\uff3a\uff3f\uff41-\uff5a\uff66-\uffbe\uffc2-\uffc7\uffca-\uffcf\uffd2-\uffd7\uffda-\uffdc\U00010000-\U0001000b\U0001000d-\U00010026\U00010028-\U0001003a\U0001003c-\U0001003d\U0001003f-\U0001004d\U00010050-\U0001005d\U00010080-\U000100fa\U00010140-\U00010174\U000101fd\U00010280-\U0001029c\U000102a0-\U000102d0\U000102e0\U00010300-\U0001031f\U0001032d-\U0001034a\U00010350-\U0001037a\U00010380-\U0001039d\U000103a0-\U000103c3\U000103c8-\U000103cf\U000103d1-\U000103d5\U00010400-\U0001049d\U000104a0-\U000104a9\U000104b0-\U000104d3\U000104d8-\U000104fb\U00010500-\U00010527\U00010530-\U00010563\U00010600-\U00010736\U00010740-\U00010755\U00010760-\U00010767\U00010800-\U00010805\U00010808\U0001080a-\U00010835\U00010837-\U00010838\U0001083c\U0001083f-\U00010855\U00010860-\U00010876\U00010880-\U0001089e\U000108e0-\U000108f2\U000108f4-\U000108f5\U00010900-\U00010915\U00010920-\U00010939\U00010980-\U000109b7\U000109be-\U000109bf\U00010a00-\U00010a03\U00010a05-\U00010a06\U00010a0c-\U00010a13\U00010a15-\U00010a17\U00010a19-\U00010a35\U00010a38-\U00010a3a\U00010a3f\U00010a60-\U00010a7c\U00010a80-\U00010a9c\U00010ac0-\U00010ac7\U00010ac9-\U00010ae6\U00010b00-\U00010b35\U00010b40-\U00010b55\U00010b60-\U00010b72\U00010b80-\U00010b91\U00010c00-\U00010c48\U00010c80-\U00010cb2\U00010cc0-\U00010cf2\U00010d00-\U00010d27\U00010d30-\U00010d39\U00010f00-\U00010f1c\U00010f27\U00010f30-\U00010f50\U00011000-\U00011046\U00011066-\U0001106f\U0001107f-\U000110ba\U000110d0-\U000110e8\U000110f0-\U000110f9\U00011100-\U00011134\U00011136-\U0001113f\U00011144-\U00011146\U00011150-\U00011173\U00011176\U00011180-\U000111c4\U000111c9-\U000111cc\U000111d0-\U000111da\U000111dc\U00011200-\U00011211\U00011213-\U00011237\U0001123e\U00011280-\U00011286\U00011288\U0001128a-\U0001128d\U0001128f-\U0001129d\U0001129f-\U000112a8\U000112b0-\U000112ea\U000112f0-\U000112f9\U00011300-\U00011303\U00011305-\U0001130c\U0001130f-\U00011310\U00011313-\U00011328\U0001132a-\U00011330\U00011332-\U00011333\U00011335-\U00011339\U0001133b-\U00011344\U00011347-\U0
0011348\U0001134b-\U0001134d\U00011350\U00011357\U0001135d-\U00011363\U00011366-\U0001136c\U00011370-\U00011374\U00011400-\U0001144a\U00011450-\U00011459\U0001145e\U00011480-\U000114c5\U000114c7\U000114d0-\U000114d9\U00011580-\U000115b5\U000115b8-\U000115c0\U000115d8-\U000115dd\U00011600-\U00011640\U00011644\U00011650-\U00011659\U00011680-\U000116b7\U000116c0-\U000116c9\U00011700-\U0001171a\U0001171d-\U0001172b\U00011730-\U00011739\U00011800-\U0001183a\U000118a0-\U000118e9\U000118ff\U00011a00-\U00011a3e\U00011a47\U00011a50-\U00011a83\U00011a86-\U00011a99\U00011a9d\U00011ac0-\U00011af8\U00011c00-\U00011c08\U00011c0a-\U00011c36\U00011c38-\U00011c40\U00011c50-\U00011c59\U00011c72-\U00011c8f\U00011c92-\U00011ca7\U00011ca9-\U00011cb6\U00011d00-\U00011d06\U00011d08-\U00011d09\U00011d0b-\U00011d36\U00011d3a\U00011d3c-\U00011d3d\U00011d3f-\U00011d47\U00011d50-\U00011d59\U00011d60-\U00011d65\U00011d67-\U00011d68\U00011d6a-\U00011d8e\U00011d90-\U00011d91\U00011d93-\U00011d98\U00011da0-\U00011da9\U00011ee0-\U00011ef6\U00012000-\U00012399\U00012400-\U0001246e\U00012480-\U00012543\U00013000-\U0001342e\U00014400-\U00014646\U00016800-\U00016a38\U00016a40-\U00016a5e\U00016a60-\U00016a69\U00016ad0-\U00016aed\U00016af0-\U00016af4\U00016b00-\U00016b36\U00016b40-\U00016b43\U00016b50-\U00016b59\U00016b63-\U00016b77\U00016b7d-\U00016b8f\U00016e40-\U00016e7f\U00016f00-\U00016f44\U00016f50-\U00016f7e\U00016f8f-\U00016f9f\U00016fe0-\U00016fe1\U00017000-\U000187f1\U00018800-\U00018af2\U0001b000-\U0001b11e\U0001b170-\U0001b2fb\U0001bc00-\U0001bc6a\U0001bc70-\U0001bc7c\U0001bc80-\U0001bc88\U0001bc90-\U0001bc99\U0001bc9d-\U0001bc9e\U0001d165-\U0001d169\U0001d16d-\U0001d172\U0001d17b-\U0001d182\U0001d185-\U0001d18b\U0001d1aa-\U0001d1ad\U0001d242-\U0001d244\U0001d400-\U0001d454\U0001d456-\U0001d49c\U0001d49e-\U0001d49f\U0001d4a2\U0001d4a5-\U0001d4a6\U0001d4a9-\U0001d4ac\U0001d4ae-\U0001d4b9\U0001d4bb\U0001d4bd-\U0001d4c3\U0001d4c5-\U0001d505\U0001d507-\U0001d50a\U0001d50d-\U0001d514\U0001d516-\U0001d51c\U0001d51e-\U0001d539\U0001d53b-\U0001d53e\U0001d540-\U0001d544\U0001d546\U0001d54a-\U0001d550\U0001d552-\U0001d6a5\U0001d6a8-\U0001d6c0\U0001d6c2-\U0001d6da\U0001d6dc-\U0001d6fa\U0001d6fc-\U0001d714\U0001d716-\U0001d734\U0001d736-\U0001d74e\U0001d750-\U0001d76e\U0001d770-\U0001d788\U0001d78a-\U0001d7a8\U0001d7aa-\U0001d7c2\U0001d7c4-\U0001d7cb\U0001d7ce-\U0001d7ff\U0001da00-\U0001da36\U0001da3b-\U0001da6c\U0001da75\U0001da84\U0001da9b-\U0001da9f\U0001daa1-\U0001daaf\U0001e000-\U0001e006\U0001e008-\U0001e018\U0001e01b-\U0001e021\U0001e023-\U0001e024\U0001e026-\U0001e02a\U0001e800-\U0001e8c4\U0001e8d0-\U0001e8d6\U0001e900-\U0001e94a\U0001e950-\U0001e959\U0001ee00-\U0001ee03\U0001ee05-\U0001ee1f\U0001ee21-\U0001ee22\U0001ee24\U0001ee27\U0001ee29-\U0001ee32\U0001ee34-\U0001ee37\U0001ee39\U0001ee3b\U0001ee42\U0001ee47\U0001ee49\U0001ee4b\U0001ee4d-\U0001ee4f\U0001ee51-\U0001ee52\U0001ee54\U0001ee57\U0001ee59\U0001ee5b\U0001ee5d\U0001ee5f\U0001ee61-\U0001ee62\U0001ee64\U0001ee67-\U0001ee6a\U0001ee6c-\U0001ee72\U0001ee74-\U0001ee77\U0001ee79-\U0001ee7c\U0001ee7e\U0001ee80-\U0001ee89\U0001ee8b-\U0001ee9b\U0001eea1-\U0001eea3\U0001eea5-\U0001eea9\U0001eeab-\U0001eebb\U00020000-\U0002a6d6\U0002a700-\U0002b734\U0002b740-\U0002b81d\U0002b820-\U0002cea1\U0002ceb0-\U0002ebe0\U0002f800-\U0002fa1d\U000e0100-\U000e01ef' - -xid_start = 
'A-Z_a-z\xaa\xb5\xba\xc0-\xd6\xd8-\xf6\xf8-\u02c1\u02c6-\u02d1\u02e0-\u02e4\u02ec\u02ee\u0370-\u0374\u0376-\u0377\u037b-\u037d\u037f\u0386\u0388-\u038a\u038c\u038e-\u03a1\u03a3-\u03f5\u03f7-\u0481\u048a-\u052f\u0531-\u0556\u0559\u0560-\u0588\u05d0-\u05ea\u05ef-\u05f2\u0620-\u064a\u066e-\u066f\u0671-\u06d3\u06d5\u06e5-\u06e6\u06ee-\u06ef\u06fa-\u06fc\u06ff\u0710\u0712-\u072f\u074d-\u07a5\u07b1\u07ca-\u07ea\u07f4-\u07f5\u07fa\u0800-\u0815\u081a\u0824\u0828\u0840-\u0858\u0860-\u086a\u08a0-\u08b4\u08b6-\u08bd\u0904-\u0939\u093d\u0950\u0958-\u0961\u0971-\u0980\u0985-\u098c\u098f-\u0990\u0993-\u09a8\u09aa-\u09b0\u09b2\u09b6-\u09b9\u09bd\u09ce\u09dc-\u09dd\u09df-\u09e1\u09f0-\u09f1\u09fc\u0a05-\u0a0a\u0a0f-\u0a10\u0a13-\u0a28\u0a2a-\u0a30\u0a32-\u0a33\u0a35-\u0a36\u0a38-\u0a39\u0a59-\u0a5c\u0a5e\u0a72-\u0a74\u0a85-\u0a8d\u0a8f-\u0a91\u0a93-\u0aa8\u0aaa-\u0ab0\u0ab2-\u0ab3\u0ab5-\u0ab9\u0abd\u0ad0\u0ae0-\u0ae1\u0af9\u0b05-\u0b0c\u0b0f-\u0b10\u0b13-\u0b28\u0b2a-\u0b30\u0b32-\u0b33\u0b35-\u0b39\u0b3d\u0b5c-\u0b5d\u0b5f-\u0b61\u0b71\u0b83\u0b85-\u0b8a\u0b8e-\u0b90\u0b92-\u0b95\u0b99-\u0b9a\u0b9c\u0b9e-\u0b9f\u0ba3-\u0ba4\u0ba8-\u0baa\u0bae-\u0bb9\u0bd0\u0c05-\u0c0c\u0c0e-\u0c10\u0c12-\u0c28\u0c2a-\u0c39\u0c3d\u0c58-\u0c5a\u0c60-\u0c61\u0c80\u0c85-\u0c8c\u0c8e-\u0c90\u0c92-\u0ca8\u0caa-\u0cb3\u0cb5-\u0cb9\u0cbd\u0cde\u0ce0-\u0ce1\u0cf1-\u0cf2\u0d05-\u0d0c\u0d0e-\u0d10\u0d12-\u0d3a\u0d3d\u0d4e\u0d54-\u0d56\u0d5f-\u0d61\u0d7a-\u0d7f\u0d85-\u0d96\u0d9a-\u0db1\u0db3-\u0dbb\u0dbd\u0dc0-\u0dc6\u0e01-\u0e30\u0e32\u0e40-\u0e46\u0e81-\u0e82\u0e84\u0e87-\u0e88\u0e8a\u0e8d\u0e94-\u0e97\u0e99-\u0e9f\u0ea1-\u0ea3\u0ea5\u0ea7\u0eaa-\u0eab\u0ead-\u0eb0\u0eb2\u0ebd\u0ec0-\u0ec4\u0ec6\u0edc-\u0edf\u0f00\u0f40-\u0f47\u0f49-\u0f6c\u0f88-\u0f8c\u1000-\u102a\u103f\u1050-\u1055\u105a-\u105d\u1061\u1065-\u1066\u106e-\u1070\u1075-\u1081\u108e\u10a0-\u10c5\u10c7\u10cd\u10d0-\u10fa\u10fc-\u1248\u124a-\u124d\u1250-\u1256\u1258\u125a-\u125d\u1260-\u1288\u128a-\u128d\u1290-\u12b0\u12b2-\u12b5\u12b8-\u12be\u12c0\u12c2-\u12c5\u12c8-\u12d6\u12d8-\u1310\u1312-\u1315\u1318-\u135a\u1380-\u138f\u13a0-\u13f5\u13f8-\u13fd\u1401-\u166c\u166f-\u167f\u1681-\u169a\u16a0-\u16ea\u16ee-\u16f8\u1700-\u170c\u170e-\u1711\u1720-\u1731\u1740-\u1751\u1760-\u176c\u176e-\u1770\u1780-\u17b3\u17d7\u17dc\u1820-\u1878\u1880-\u18a8\u18aa\u18b0-\u18f5\u1900-\u191e\u1950-\u196d\u1970-\u1974\u1980-\u19ab\u19b0-\u19c9\u1a00-\u1a16\u1a20-\u1a54\u1aa7\u1b05-\u1b33\u1b45-\u1b4b\u1b83-\u1ba0\u1bae-\u1baf\u1bba-\u1be5\u1c00-\u1c23\u1c4d-\u1c4f\u1c5a-\u1c7d\u1c80-\u1c88\u1c90-\u1cba\u1cbd-\u1cbf\u1ce9-\u1cec\u1cee-\u1cf1\u1cf5-\u1cf6\u1d00-\u1dbf\u1e00-\u1f15\u1f18-\u1f1d\u1f20-\u1f45\u1f48-\u1f4d\u1f50-\u1f57\u1f59\u1f5b\u1f5d\u1f5f-\u1f7d\u1f80-\u1fb4\u1fb6-\u1fbc\u1fbe\u1fc2-\u1fc4\u1fc6-\u1fcc\u1fd0-\u1fd3\u1fd6-\u1fdb\u1fe0-\u1fec\u1ff2-\u1ff4\u1ff6-\u1ffc\u2071\u207f\u2090-\u209c\u2102\u2107\u210a-\u2113\u2115\u2118-\u211d\u2124\u2126\u2128\u212a-\u2139\u213c-\u213f\u2145-\u2149\u214e\u2160-\u2188\u2c00-\u2c2e\u2c30-\u2c5e\u2c60-\u2ce4\u2ceb-\u2cee\u2cf2-\u2cf3\u2d00-\u2d25\u2d27\u2d2d\u2d30-\u2d67\u2d6f\u2d80-\u2d96\u2da0-\u2da6\u2da8-\u2dae\u2db0-\u2db6\u2db8-\u2dbe\u2dc0-\u2dc6\u2dc8-\u2dce\u2dd0-\u2dd6\u2dd8-\u2dde\u3005-\u3007\u3021-\u3029\u3031-\u3035\u3038-\u303c\u3041-\u3096\u309d-\u309f\u30a1-\u30fa\u30fc-\u30ff\u3105-\u312f\u3131-\u318e\u31a0-\u31ba\u31f0-\u31ff\u3400-\u4db5\u4e00-\u9fef\ua000-\ua48c\ua4d0-\ua4fd\ua500-\ua60c\ua610-\ua61f\ua62a-\ua62b\ua640-\ua66e\ua67f-\ua69d\ua6a0-\ua6ef\ua717-\ua71f\ua722-\ua788\ua78b-\ua7b9\ua7f7-\ua801\ua803-\ua805
\ua807-\ua80a\ua80c-\ua822\ua840-\ua873\ua882-\ua8b3\ua8f2-\ua8f7\ua8fb\ua8fd-\ua8fe\ua90a-\ua925\ua930-\ua946\ua960-\ua97c\ua984-\ua9b2\ua9cf\ua9e0-\ua9e4\ua9e6-\ua9ef\ua9fa-\ua9fe\uaa00-\uaa28\uaa40-\uaa42\uaa44-\uaa4b\uaa60-\uaa76\uaa7a\uaa7e-\uaaaf\uaab1\uaab5-\uaab6\uaab9-\uaabd\uaac0\uaac2\uaadb-\uaadd\uaae0-\uaaea\uaaf2-\uaaf4\uab01-\uab06\uab09-\uab0e\uab11-\uab16\uab20-\uab26\uab28-\uab2e\uab30-\uab5a\uab5c-\uab65\uab70-\uabe2\uac00-\ud7a3\ud7b0-\ud7c6\ud7cb-\ud7fb\uf900-\ufa6d\ufa70-\ufad9\ufb00-\ufb06\ufb13-\ufb17\ufb1d\ufb1f-\ufb28\ufb2a-\ufb36\ufb38-\ufb3c\ufb3e\ufb40-\ufb41\ufb43-\ufb44\ufb46-\ufbb1\ufbd3-\ufc5d\ufc64-\ufd3d\ufd50-\ufd8f\ufd92-\ufdc7\ufdf0-\ufdf9\ufe71\ufe73\ufe77\ufe79\ufe7b\ufe7d\ufe7f-\ufefc\uff21-\uff3a\uff41-\uff5a\uff66-\uff9d\uffa0-\uffbe\uffc2-\uffc7\uffca-\uffcf\uffd2-\uffd7\uffda-\uffdc\U00010000-\U0001000b\U0001000d-\U00010026\U00010028-\U0001003a\U0001003c-\U0001003d\U0001003f-\U0001004d\U00010050-\U0001005d\U00010080-\U000100fa\U00010140-\U00010174\U00010280-\U0001029c\U000102a0-\U000102d0\U00010300-\U0001031f\U0001032d-\U0001034a\U00010350-\U00010375\U00010380-\U0001039d\U000103a0-\U000103c3\U000103c8-\U000103cf\U000103d1-\U000103d5\U00010400-\U0001049d\U000104b0-\U000104d3\U000104d8-\U000104fb\U00010500-\U00010527\U00010530-\U00010563\U00010600-\U00010736\U00010740-\U00010755\U00010760-\U00010767\U00010800-\U00010805\U00010808\U0001080a-\U00010835\U00010837-\U00010838\U0001083c\U0001083f-\U00010855\U00010860-\U00010876\U00010880-\U0001089e\U000108e0-\U000108f2\U000108f4-\U000108f5\U00010900-\U00010915\U00010920-\U00010939\U00010980-\U000109b7\U000109be-\U000109bf\U00010a00\U00010a10-\U00010a13\U00010a15-\U00010a17\U00010a19-\U00010a35\U00010a60-\U00010a7c\U00010a80-\U00010a9c\U00010ac0-\U00010ac7\U00010ac9-\U00010ae4\U00010b00-\U00010b35\U00010b40-\U00010b55\U00010b60-\U00010b72\U00010b80-\U00010b91\U00010c00-\U00010c48\U00010c80-\U00010cb2\U00010cc0-\U00010cf2\U00010d00-\U00010d23\U00010f00-\U00010f1c\U00010f27\U00010f30-\U00010f45\U00011003-\U00011037\U00011083-\U000110af\U000110d0-\U000110e8\U00011103-\U00011126\U00011144\U00011150-\U00011172\U00011176\U00011183-\U000111b2\U000111c1-\U000111c4\U000111da\U000111dc\U00011200-\U00011211\U00011213-\U0001122b\U00011280-\U00011286\U00011288\U0001128a-\U0001128d\U0001128f-\U0001129d\U0001129f-\U000112a8\U000112b0-\U000112de\U00011305-\U0001130c\U0001130f-\U00011310\U00011313-\U00011328\U0001132a-\U00011330\U00011332-\U00011333\U00011335-\U00011339\U0001133d\U00011350\U0001135d-\U00011361\U00011400-\U00011434\U00011447-\U0001144a\U00011480-\U000114af\U000114c4-\U000114c5\U000114c7\U00011580-\U000115ae\U000115d8-\U000115db\U00011600-\U0001162f\U00011644\U00011680-\U000116aa\U00011700-\U0001171a\U00011800-\U0001182b\U000118a0-\U000118df\U000118ff\U00011a00\U00011a0b-\U00011a32\U00011a3a\U00011a50\U00011a5c-\U00011a83\U00011a86-\U00011a89\U00011a9d\U00011ac0-\U00011af8\U00011c00-\U00011c08\U00011c0a-\U00011c2e\U00011c40\U00011c72-\U00011c8f\U00011d00-\U00011d06\U00011d08-\U00011d09\U00011d0b-\U00011d30\U00011d46\U00011d60-\U00011d65\U00011d67-\U00011d68\U00011d6a-\U00011d89\U00011d98\U00011ee0-\U00011ef2\U00012000-\U00012399\U00012400-\U0001246e\U00012480-\U00012543\U00013000-\U0001342e\U00014400-\U00014646\U00016800-\U00016a38\U00016a40-\U00016a5e\U00016ad0-\U00016aed\U00016b00-\U00016b2f\U00016b40-\U00016b43\U00016b63-\U00016b77\U00016b7d-\U00016b8f\U00016e40-\U00016e7f\U00016f00-\U00016f44\U00016f50\U00016f93-\U00016f9f\U00016fe0-\U00016fe1\U00017000-\U000187f1\U00018800-\U00018af2\U0001b000-\U0001b1
1e\U0001b170-\U0001b2fb\U0001bc00-\U0001bc6a\U0001bc70-\U0001bc7c\U0001bc80-\U0001bc88\U0001bc90-\U0001bc99\U0001d400-\U0001d454\U0001d456-\U0001d49c\U0001d49e-\U0001d49f\U0001d4a2\U0001d4a5-\U0001d4a6\U0001d4a9-\U0001d4ac\U0001d4ae-\U0001d4b9\U0001d4bb\U0001d4bd-\U0001d4c3\U0001d4c5-\U0001d505\U0001d507-\U0001d50a\U0001d50d-\U0001d514\U0001d516-\U0001d51c\U0001d51e-\U0001d539\U0001d53b-\U0001d53e\U0001d540-\U0001d544\U0001d546\U0001d54a-\U0001d550\U0001d552-\U0001d6a5\U0001d6a8-\U0001d6c0\U0001d6c2-\U0001d6da\U0001d6dc-\U0001d6fa\U0001d6fc-\U0001d714\U0001d716-\U0001d734\U0001d736-\U0001d74e\U0001d750-\U0001d76e\U0001d770-\U0001d788\U0001d78a-\U0001d7a8\U0001d7aa-\U0001d7c2\U0001d7c4-\U0001d7cb\U0001e800-\U0001e8c4\U0001e900-\U0001e943\U0001ee00-\U0001ee03\U0001ee05-\U0001ee1f\U0001ee21-\U0001ee22\U0001ee24\U0001ee27\U0001ee29-\U0001ee32\U0001ee34-\U0001ee37\U0001ee39\U0001ee3b\U0001ee42\U0001ee47\U0001ee49\U0001ee4b\U0001ee4d-\U0001ee4f\U0001ee51-\U0001ee52\U0001ee54\U0001ee57\U0001ee59\U0001ee5b\U0001ee5d\U0001ee5f\U0001ee61-\U0001ee62\U0001ee64\U0001ee67-\U0001ee6a\U0001ee6c-\U0001ee72\U0001ee74-\U0001ee77\U0001ee79-\U0001ee7c\U0001ee7e\U0001ee80-\U0001ee89\U0001ee8b-\U0001ee9b\U0001eea1-\U0001eea3\U0001eea5-\U0001eea9\U0001eeab-\U0001eebb\U00020000-\U0002a6d6\U0002a700-\U0002b734\U0002b740-\U0002b81d\U0002b820-\U0002cea1\U0002ceb0-\U0002ebe0\U0002f800-\U0002fa1d' - -cats = ['Cc', 'Cf', 'Cn', 'Co', 'Cs', 'Ll', 'Lm', 'Lo', 'Lt', 'Lu', 'Mc', 'Me', 'Mn', 'Nd', 'Nl', 'No', 'Pc', 'Pd', 'Pe', 'Pf', 'Pi', 'Po', 'Ps', 'Sc', 'Sk', 'Sm', 'So', 'Zl', 'Zp', 'Zs'] - -# Generated from unidata 11.0.0 - -def combine(*args): - return ''.join(globals()[cat] for cat in args) - - -def allexcept(*args): - newcats = cats[:] - for arg in args: - newcats.remove(arg) - return ''.join(globals()[cat] for cat in newcats) - - -def _handle_runs(char_list): # pragma: no cover - buf = [] - for c in char_list: - if len(c) == 1: - if buf and buf[-1][1] == chr(ord(c)-1): - buf[-1] = (buf[-1][0], c) - else: - buf.append((c, c)) - else: - buf.append((c, c)) - for a, b in buf: - if a == b: - yield a - else: - yield '%s-%s' % (a, b) - - -if __name__ == '__main__': # pragma: no cover - import unicodedata - - categories = {'xid_start': [], 'xid_continue': []} - - with open(__file__) as fp: - content = fp.read() - - header = content[:content.find('Cc =')] - footer = content[content.find("def combine("):] - - for code in range(0x110000): - c = chr(code) - cat = unicodedata.category(c) - if ord(c) == 0xdc00: - # Hack to avoid combining this combining with the preceding high - # surrogate, 0xdbff, when doing a repr. - c = '\\' + c - elif ord(c) in (0x2d, 0x5b, 0x5c, 0x5d, 0x5e): - # Escape regex metachars. - c = '\\' + c - categories.setdefault(cat, []).append(c) - # XID_START and XID_CONTINUE are special categories used for matching - # identifiers in Python 3. 
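The `combine()` and `allexcept()` helpers defined above simply concatenate the per-category range strings so they can be spliced into regular-expression character classes. A minimal usage sketch follows, assuming the file is importable as `pygments.unistring` (the import path and the printed examples are illustrative, not taken from this diff):

```python
import re

from pygments import unistring as uni  # assumed import path for this module

# All cased letters: concatenation of the Lu, Ll and Lt range strings.
cased = uni.combine('Lu', 'Ll', 'Lt')
cased_re = re.compile('[%s]+' % cased)

# Every category except separators and the C* (control/format/etc.) groups.
printable = uni.allexcept('Zs', 'Zl', 'Zp', 'Cc', 'Cf', 'Cn', 'Co', 'Cs')
printable_re = re.compile('[%s]' % printable)

print(cased_re.match('Straße'))      # matches: every character is a cased letter
print(printable_re.match('\u2028'))  # None: LINE SEPARATOR (Zl) is excluded
```

Because the category strings already escape the regex metacharacters (`-`, `[`, `]`, `\`, `^`), the concatenated result can be dropped into a `[...]` class as-is.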
- if c.isidentifier(): - categories['xid_start'].append(c) - if ('a' + c).isidentifier(): - categories['xid_continue'].append(c) - - with open(__file__, 'w') as fp: - fp.write(header) - - for cat in sorted(categories): - val = ''.join(_handle_runs(categories[cat])) - fp.write('%s = %a\n\n' % (cat, val)) - - cats = sorted(categories) - cats.remove('xid_start') - cats.remove('xid_continue') - fp.write('cats = %r\n\n' % cats) - - fp.write('# Generated from unidata %s\n\n' % (unicodedata.unidata_version,)) - - fp.write(footer) diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/urllib3/response.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/urllib3/response.py deleted file mode 100644 index 0bd13d40b8ac751e4e57f2e4a2f7b447283dca9d..0000000000000000000000000000000000000000 --- a/spaces/Big-Web/MMSD/env/Lib/site-packages/urllib3/response.py +++ /dev/null @@ -1,885 +0,0 @@ -from __future__ import absolute_import - -import io -import logging -import sys -import warnings -import zlib -from contextlib import contextmanager -from socket import error as SocketError -from socket import timeout as SocketTimeout - -try: - try: - import brotlicffi as brotli - except ImportError: - import brotli -except ImportError: - brotli = None - -from . import util -from ._collections import HTTPHeaderDict -from .connection import BaseSSLError, HTTPException -from .exceptions import ( - BodyNotHttplibCompatible, - DecodeError, - HTTPError, - IncompleteRead, - InvalidChunkLength, - InvalidHeader, - ProtocolError, - ReadTimeoutError, - ResponseNotChunked, - SSLError, -) -from .packages import six -from .util.response import is_fp_closed, is_response_to_head - -log = logging.getLogger(__name__) - - -class DeflateDecoder(object): - def __init__(self): - self._first_try = True - self._data = b"" - self._obj = zlib.decompressobj() - - def __getattr__(self, name): - return getattr(self._obj, name) - - def decompress(self, data): - if not data: - return data - - if not self._first_try: - return self._obj.decompress(data) - - self._data += data - try: - decompressed = self._obj.decompress(data) - if decompressed: - self._first_try = False - self._data = None - return decompressed - except zlib.error: - self._first_try = False - self._obj = zlib.decompressobj(-zlib.MAX_WBITS) - try: - return self.decompress(self._data) - finally: - self._data = None - - -class GzipDecoderState(object): - - FIRST_MEMBER = 0 - OTHER_MEMBERS = 1 - SWALLOW_DATA = 2 - - -class GzipDecoder(object): - def __init__(self): - self._obj = zlib.decompressobj(16 + zlib.MAX_WBITS) - self._state = GzipDecoderState.FIRST_MEMBER - - def __getattr__(self, name): - return getattr(self._obj, name) - - def decompress(self, data): - ret = bytearray() - if self._state == GzipDecoderState.SWALLOW_DATA or not data: - return bytes(ret) - while True: - try: - ret += self._obj.decompress(data) - except zlib.error: - previous_state = self._state - # Ignore data after the first error - self._state = GzipDecoderState.SWALLOW_DATA - if previous_state == GzipDecoderState.OTHER_MEMBERS: - # Allow trailing garbage acceptable in other gzip clients - return bytes(ret) - raise - data = self._obj.unused_data - if not data: - return bytes(ret) - self._state = GzipDecoderState.OTHER_MEMBERS - self._obj = zlib.decompressobj(16 + zlib.MAX_WBITS) - - -if brotli is not None: - - class BrotliDecoder(object): - # Supports both 'brotlipy' and 'Brotli' packages - # since they share an import name. 
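The `DeflateDecoder` above copes with servers that send raw DEFLATE data (RFC 1951) instead of a properly zlib-wrapped stream (RFC 1950): it tries the normal decoder first and falls back to a negative window-bits value on the first error. A standalone sketch of the same retry idea, using only the standard `zlib` module (the `inflate` helper is illustrative and not part of urllib3):

```python
import zlib

def inflate(payload: bytes) -> bytes:
    """Decompress a 'deflate' body whether or not it carries a zlib header,
    mirroring the first-try/fallback strategy of DeflateDecoder above."""
    try:
        # RFC 1950: zlib-wrapped stream (what "deflate" is supposed to mean).
        return zlib.decompress(payload)
    except zlib.error:
        # RFC 1951: raw deflate stream, as sent by some non-conforming servers.
        return zlib.decompress(payload, -zlib.MAX_WBITS)

# Example: a raw deflate stream round-trips through the fallback branch.
compressor = zlib.compressobj(-1, zlib.DEFLATED, -zlib.MAX_WBITS)
data = compressor.compress(b"hello world") + compressor.flush()
assert inflate(data) == b"hello world"
```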
The top branches - # are for 'brotlipy' and bottom branches for 'Brotli' - def __init__(self): - self._obj = brotli.Decompressor() - if hasattr(self._obj, "decompress"): - self.decompress = self._obj.decompress - else: - self.decompress = self._obj.process - - def flush(self): - if hasattr(self._obj, "flush"): - return self._obj.flush() - return b"" - - -class MultiDecoder(object): - """ - From RFC7231: - If one or more encodings have been applied to a representation, the - sender that applied the encodings MUST generate a Content-Encoding - header field that lists the content codings in the order in which - they were applied. - """ - - def __init__(self, modes): - self._decoders = [_get_decoder(m.strip()) for m in modes.split(",")] - - def flush(self): - return self._decoders[0].flush() - - def decompress(self, data): - for d in reversed(self._decoders): - data = d.decompress(data) - return data - - -def _get_decoder(mode): - if "," in mode: - return MultiDecoder(mode) - - if mode == "gzip": - return GzipDecoder() - - if brotli is not None and mode == "br": - return BrotliDecoder() - - return DeflateDecoder() - - -class HTTPResponse(io.IOBase): - """ - HTTP Response container. - - Backwards-compatible with :class:`http.client.HTTPResponse` but the response ``body`` is - loaded and decoded on-demand when the ``data`` property is accessed. This - class is also compatible with the Python standard library's :mod:`io` - module, and can hence be treated as a readable object in the context of that - framework. - - Extra parameters for behaviour not present in :class:`http.client.HTTPResponse`: - - :param preload_content: - If True, the response's body will be preloaded during construction. - - :param decode_content: - If True, will attempt to decode the body based on the - 'content-encoding' header. - - :param original_response: - When this HTTPResponse wrapper is generated from an :class:`http.client.HTTPResponse` - object, it's convenient to include the original for debug purposes. It's - otherwise unused. - - :param retries: - The retries contains the last :class:`~urllib3.util.retry.Retry` that - was used during the request. - - :param enforce_content_length: - Enforce content length checking. Body returned by server must match - value of Content-Length header, if present. Otherwise, raise error. 
- """ - - CONTENT_DECODERS = ["gzip", "deflate"] - if brotli is not None: - CONTENT_DECODERS += ["br"] - REDIRECT_STATUSES = [301, 302, 303, 307, 308] - - def __init__( - self, - body="", - headers=None, - status=0, - version=0, - reason=None, - strict=0, - preload_content=True, - decode_content=True, - original_response=None, - pool=None, - connection=None, - msg=None, - retries=None, - enforce_content_length=False, - request_method=None, - request_url=None, - auto_close=True, - ): - - if isinstance(headers, HTTPHeaderDict): - self.headers = headers - else: - self.headers = HTTPHeaderDict(headers) - self.status = status - self.version = version - self.reason = reason - self.strict = strict - self.decode_content = decode_content - self.retries = retries - self.enforce_content_length = enforce_content_length - self.auto_close = auto_close - - self._decoder = None - self._body = None - self._fp = None - self._original_response = original_response - self._fp_bytes_read = 0 - self.msg = msg - self._request_url = request_url - - if body and isinstance(body, (six.string_types, bytes)): - self._body = body - - self._pool = pool - self._connection = connection - - if hasattr(body, "read"): - self._fp = body - - # Are we using the chunked-style of transfer encoding? - self.chunked = False - self.chunk_left = None - tr_enc = self.headers.get("transfer-encoding", "").lower() - # Don't incur the penalty of creating a list and then discarding it - encodings = (enc.strip() for enc in tr_enc.split(",")) - if "chunked" in encodings: - self.chunked = True - - # Determine length of response - self.length_remaining = self._init_length(request_method) - - # If requested, preload the body. - if preload_content and not self._body: - self._body = self.read(decode_content=decode_content) - - def get_redirect_location(self): - """ - Should we redirect and where to? - - :returns: Truthy redirect location string if we got a redirect status - code and valid location. ``None`` if redirect status and no - location. ``False`` if not a redirect status code. - """ - if self.status in self.REDIRECT_STATUSES: - return self.headers.get("location") - - return False - - def release_conn(self): - if not self._pool or not self._connection: - return - - self._pool._put_conn(self._connection) - self._connection = None - - def drain_conn(self): - """ - Read and discard any remaining HTTP response data in the response connection. - - Unread data in the HTTPResponse connection blocks the connection from being released back to the pool. - """ - try: - self.read() - except (HTTPError, SocketError, BaseSSLError, HTTPException): - pass - - @property - def data(self): - # For backwards-compat with earlier urllib3 0.4 and earlier. - if self._body: - return self._body - - if self._fp: - return self.read(cache_content=True) - - @property - def connection(self): - return self._connection - - def isclosed(self): - return is_fp_closed(self._fp) - - def tell(self): - """ - Obtain the number of bytes pulled over the wire so far. May differ from - the amount of content returned by :meth:``urllib3.response.HTTPResponse.read`` - if bytes are encoded on the wire (e.g, compressed). - """ - return self._fp_bytes_read - - def _init_length(self, request_method): - """ - Set initial length value for Response content if available. - """ - length = self.headers.get("content-length") - - if length is not None: - if self.chunked: - # This Response will fail with an IncompleteRead if it can't be - # received as chunked. 
This method falls back to attempt reading - # the response before raising an exception. - log.warning( - "Received response with both Content-Length and " - "Transfer-Encoding set. This is expressly forbidden " - "by RFC 7230 sec 3.3.2. Ignoring Content-Length and " - "attempting to process response as Transfer-Encoding: " - "chunked." - ) - return None - - try: - # RFC 7230 section 3.3.2 specifies multiple content lengths can - # be sent in a single Content-Length header - # (e.g. Content-Length: 42, 42). This line ensures the values - # are all valid ints and that as long as the `set` length is 1, - # all values are the same. Otherwise, the header is invalid. - lengths = set([int(val) for val in length.split(",")]) - if len(lengths) > 1: - raise InvalidHeader( - "Content-Length contained multiple " - "unmatching values (%s)" % length - ) - length = lengths.pop() - except ValueError: - length = None - else: - if length < 0: - length = None - - # Convert status to int for comparison - # In some cases, httplib returns a status of "_UNKNOWN" - try: - status = int(self.status) - except ValueError: - status = 0 - - # Check for responses that shouldn't include a body - if status in (204, 304) or 100 <= status < 200 or request_method == "HEAD": - length = 0 - - return length - - def _init_decoder(self): - """ - Set-up the _decoder attribute if necessary. - """ - # Note: content-encoding value should be case-insensitive, per RFC 7230 - # Section 3.2 - content_encoding = self.headers.get("content-encoding", "").lower() - if self._decoder is None: - if content_encoding in self.CONTENT_DECODERS: - self._decoder = _get_decoder(content_encoding) - elif "," in content_encoding: - encodings = [ - e.strip() - for e in content_encoding.split(",") - if e.strip() in self.CONTENT_DECODERS - ] - if len(encodings): - self._decoder = _get_decoder(content_encoding) - - DECODER_ERROR_CLASSES = (IOError, zlib.error) - if brotli is not None: - DECODER_ERROR_CLASSES += (brotli.error,) - - def _decode(self, data, decode_content, flush_decoder): - """ - Decode the data passed in and potentially flush the decoder. - """ - if not decode_content: - return data - - try: - if self._decoder: - data = self._decoder.decompress(data) - except self.DECODER_ERROR_CLASSES as e: - content_encoding = self.headers.get("content-encoding", "").lower() - raise DecodeError( - "Received response with content-encoding: %s, but " - "failed to decode it." % content_encoding, - e, - ) - if flush_decoder: - data += self._flush_decoder() - - return data - - def _flush_decoder(self): - """ - Flushes the decoder. Should only be called if the decoder is actually - being used. - """ - if self._decoder: - buf = self._decoder.decompress(b"") - return buf + self._decoder.flush() - - return b"" - - @contextmanager - def _error_catcher(self): - """ - Catch low-level python exceptions, instead re-raising urllib3 - variants, so that low-level exceptions are not leaked in the - high-level api. - - On exit, release the connection back to the pool. - """ - clean_exit = False - - try: - try: - yield - - except SocketTimeout: - # FIXME: Ideally we'd like to include the url in the ReadTimeoutError but - # there is yet no clean way to get at it from this context. - raise ReadTimeoutError(self._pool, None, "Read timed out.") - - except BaseSSLError as e: - # FIXME: Is there a better way to differentiate between SSLErrors? 
- if "read operation timed out" not in str(e): - # SSL errors related to framing/MAC get wrapped and reraised here - raise SSLError(e) - - raise ReadTimeoutError(self._pool, None, "Read timed out.") - - except (HTTPException, SocketError) as e: - # This includes IncompleteRead. - raise ProtocolError("Connection broken: %r" % e, e) - - # If no exception is thrown, we should avoid cleaning up - # unnecessarily. - clean_exit = True - finally: - # If we didn't terminate cleanly, we need to throw away our - # connection. - if not clean_exit: - # The response may not be closed but we're not going to use it - # anymore so close it now to ensure that the connection is - # released back to the pool. - if self._original_response: - self._original_response.close() - - # Closing the response may not actually be sufficient to close - # everything, so if we have a hold of the connection close that - # too. - if self._connection: - self._connection.close() - - # If we hold the original response but it's closed now, we should - # return the connection back to the pool. - if self._original_response and self._original_response.isclosed(): - self.release_conn() - - def _fp_read(self, amt): - """ - Read a response with the thought that reading the number of bytes - larger than can fit in a 32-bit int at a time via SSL in some - known cases leads to an overflow error that has to be prevented - if `amt` or `self.length_remaining` indicate that a problem may - happen. - - The known cases: - * 3.8 <= CPython < 3.9.7 because of a bug - https://github.com/urllib3/urllib3/issues/2513#issuecomment-1152559900. - * urllib3 injected with pyOpenSSL-backed SSL-support. - * CPython < 3.10 only when `amt` does not fit 32-bit int. - """ - assert self._fp - c_int_max = 2 ** 31 - 1 - if ( - ( - (amt and amt > c_int_max) - or (self.length_remaining and self.length_remaining > c_int_max) - ) - and not util.IS_SECURETRANSPORT - and (util.IS_PYOPENSSL or sys.version_info < (3, 10)) - ): - buffer = io.BytesIO() - # Besides `max_chunk_amt` being a maximum chunk size, it - # affects memory overhead of reading a response by this - # method in CPython. - # `c_int_max` equal to 2 GiB - 1 byte is the actual maximum - # chunk size that does not lead to an overflow error, but - # 256 MiB is a compromise. - max_chunk_amt = 2 ** 28 - while amt is None or amt != 0: - if amt is not None: - chunk_amt = min(amt, max_chunk_amt) - amt -= chunk_amt - else: - chunk_amt = max_chunk_amt - data = self._fp.read(chunk_amt) - if not data: - break - buffer.write(data) - del data # to reduce peak memory usage by `max_chunk_amt`. - return buffer.getvalue() - else: - # StringIO doesn't like amt=None - return self._fp.read(amt) if amt is not None else self._fp.read() - - def read(self, amt=None, decode_content=None, cache_content=False): - """ - Similar to :meth:`http.client.HTTPResponse.read`, but with two additional - parameters: ``decode_content`` and ``cache_content``. - - :param amt: - How much of the content to read. If specified, caching is skipped - because it doesn't make sense to cache partial content as the full - response. - - :param decode_content: - If True, will attempt to decode the body based on the - 'content-encoding' header. - - :param cache_content: - If True, will save the returned data such that the same result is - returned despite of the state of the underlying file object. This - is useful if you want the ``.data`` property to continue working - after having ``.read()`` the file object. (Overridden if ``amt`` is - set.) 
- """ - self._init_decoder() - if decode_content is None: - decode_content = self.decode_content - - if self._fp is None: - return - - flush_decoder = False - fp_closed = getattr(self._fp, "closed", False) - - with self._error_catcher(): - data = self._fp_read(amt) if not fp_closed else b"" - if amt is None: - flush_decoder = True - else: - cache_content = False - if ( - amt != 0 and not data - ): # Platform-specific: Buggy versions of Python. - # Close the connection when no data is returned - # - # This is redundant to what httplib/http.client _should_ - # already do. However, versions of python released before - # December 15, 2012 (http://bugs.python.org/issue16298) do - # not properly close the connection in all cases. There is - # no harm in redundantly calling close. - self._fp.close() - flush_decoder = True - if self.enforce_content_length and self.length_remaining not in ( - 0, - None, - ): - # This is an edge case that httplib failed to cover due - # to concerns of backward compatibility. We're - # addressing it here to make sure IncompleteRead is - # raised during streaming, so all calls with incorrect - # Content-Length are caught. - raise IncompleteRead(self._fp_bytes_read, self.length_remaining) - - if data: - self._fp_bytes_read += len(data) - if self.length_remaining is not None: - self.length_remaining -= len(data) - - data = self._decode(data, decode_content, flush_decoder) - - if cache_content: - self._body = data - - return data - - def stream(self, amt=2 ** 16, decode_content=None): - """ - A generator wrapper for the read() method. A call will block until - ``amt`` bytes have been read from the connection or until the - connection is closed. - - :param amt: - How much of the content to read. The generator will return up to - much data per iteration, but may return less. This is particularly - likely when using compressed data. However, the empty string will - never be returned. - - :param decode_content: - If True, will attempt to decode the body based on the - 'content-encoding' header. - """ - if self.chunked and self.supports_chunked_reads(): - for line in self.read_chunked(amt, decode_content=decode_content): - yield line - else: - while not is_fp_closed(self._fp): - data = self.read(amt=amt, decode_content=decode_content) - - if data: - yield data - - @classmethod - def from_httplib(ResponseCls, r, **response_kw): - """ - Given an :class:`http.client.HTTPResponse` instance ``r``, return a - corresponding :class:`urllib3.response.HTTPResponse` object. - - Remaining parameters are passed to the HTTPResponse constructor, along - with ``original_response=r``. - """ - headers = r.msg - - if not isinstance(headers, HTTPHeaderDict): - if six.PY2: - # Python 2.7 - headers = HTTPHeaderDict.from_httplib(headers) - else: - headers = HTTPHeaderDict(headers.items()) - - # HTTPResponse objects in Python 3 don't have a .strict attribute - strict = getattr(r, "strict", 0) - resp = ResponseCls( - body=r, - headers=headers, - status=r.status, - version=r.version, - reason=r.reason, - strict=strict, - original_response=r, - **response_kw - ) - return resp - - # Backwards-compatibility methods for http.client.HTTPResponse - def getheaders(self): - warnings.warn( - "HTTPResponse.getheaders() is deprecated and will be removed " - "in urllib3 v2.1.0. 
Instead access HTTPResponse.headers directly.", - category=DeprecationWarning, - stacklevel=2, - ) - return self.headers - - def getheader(self, name, default=None): - warnings.warn( - "HTTPResponse.getheader() is deprecated and will be removed " - "in urllib3 v2.1.0. Instead use HTTPResponse.headers.get(name, default).", - category=DeprecationWarning, - stacklevel=2, - ) - return self.headers.get(name, default) - - # Backwards compatibility for http.cookiejar - def info(self): - return self.headers - - # Overrides from io.IOBase - def close(self): - if not self.closed: - self._fp.close() - - if self._connection: - self._connection.close() - - if not self.auto_close: - io.IOBase.close(self) - - @property - def closed(self): - if not self.auto_close: - return io.IOBase.closed.__get__(self) - elif self._fp is None: - return True - elif hasattr(self._fp, "isclosed"): - return self._fp.isclosed() - elif hasattr(self._fp, "closed"): - return self._fp.closed - else: - return True - - def fileno(self): - if self._fp is None: - raise IOError("HTTPResponse has no file to get a fileno from") - elif hasattr(self._fp, "fileno"): - return self._fp.fileno() - else: - raise IOError( - "The file-like object this HTTPResponse is wrapped " - "around has no file descriptor" - ) - - def flush(self): - if ( - self._fp is not None - and hasattr(self._fp, "flush") - and not getattr(self._fp, "closed", False) - ): - return self._fp.flush() - - def readable(self): - # This method is required for `io` module compatibility. - return True - - def readinto(self, b): - # This method is required for `io` module compatibility. - temp = self.read(len(b)) - if len(temp) == 0: - return 0 - else: - b[: len(temp)] = temp - return len(temp) - - def supports_chunked_reads(self): - """ - Checks if the underlying file-like object looks like a - :class:`http.client.HTTPResponse` object. We do this by testing for - the fp attribute. If it is present we assume it returns raw chunks as - processed by read_chunked(). - """ - return hasattr(self._fp, "fp") - - def _update_chunk_length(self): - # First, we'll figure out length of a chunk and then - # we'll try to read it from socket. - if self.chunk_left is not None: - return - line = self._fp.fp.readline() - line = line.split(b";", 1)[0] - try: - self.chunk_left = int(line, 16) - except ValueError: - # Invalid chunked protocol response, abort. - self.close() - raise InvalidChunkLength(self, line) - - def _handle_chunk(self, amt): - returned_chunk = None - if amt is None: - chunk = self._fp._safe_read(self.chunk_left) - returned_chunk = chunk - self._fp._safe_read(2) # Toss the CRLF at the end of the chunk. - self.chunk_left = None - elif amt < self.chunk_left: - value = self._fp._safe_read(amt) - self.chunk_left = self.chunk_left - amt - returned_chunk = value - elif amt == self.chunk_left: - value = self._fp._safe_read(amt) - self._fp._safe_read(2) # Toss the CRLF at the end of the chunk. - self.chunk_left = None - returned_chunk = value - else: # amt > self.chunk_left - returned_chunk = self._fp._safe_read(self.chunk_left) - self._fp._safe_read(2) # Toss the CRLF at the end of the chunk. - self.chunk_left = None - return returned_chunk - - def read_chunked(self, amt=None, decode_content=None): - """ - Similar to :meth:`HTTPResponse.read`, but with an additional - parameter: ``decode_content``. - - :param amt: - How much of the content to read. If specified, caching is skipped - because it doesn't make sense to cache partial content as the full - response. 
- - :param decode_content: - If True, will attempt to decode the body based on the - 'content-encoding' header. - """ - self._init_decoder() - # FIXME: Rewrite this method and make it a class with a better structured logic. - if not self.chunked: - raise ResponseNotChunked( - "Response is not chunked. " - "Header 'transfer-encoding: chunked' is missing." - ) - if not self.supports_chunked_reads(): - raise BodyNotHttplibCompatible( - "Body should be http.client.HTTPResponse like. " - "It should have have an fp attribute which returns raw chunks." - ) - - with self._error_catcher(): - # Don't bother reading the body of a HEAD request. - if self._original_response and is_response_to_head(self._original_response): - self._original_response.close() - return - - # If a response is already read and closed - # then return immediately. - if self._fp.fp is None: - return - - while True: - self._update_chunk_length() - if self.chunk_left == 0: - break - chunk = self._handle_chunk(amt) - decoded = self._decode( - chunk, decode_content=decode_content, flush_decoder=False - ) - if decoded: - yield decoded - - if decode_content: - # On CPython and PyPy, we should never need to flush the - # decoder. However, on Jython we *might* need to, so - # lets defensively do it anyway. - decoded = self._flush_decoder() - if decoded: # Platform-specific: Jython. - yield decoded - - # Chunk content ends with \r\n: discard it. - while True: - line = self._fp.fp.readline() - if not line: - # Some sites may not end with '\r\n'. - break - if line == b"\r\n": - break - - # We read everything; close the "file". - if self._original_response: - self._original_response.close() - - def geturl(self): - """ - Returns the URL that was the source of this response. - If the request that generated this response redirected, this method - will return the final redirect location. 
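For reference, the wire format consumed by _update_chunk_length() and _handle_chunk() above is plain RFC 7230 chunked framing. A tiny standalone parser, shown only to make the framing concrete (it ignores trailers):

import io

def parse_chunked(raw: bytes) -> bytes:
    # Each chunk is "<hex size>[;extensions]\r\n<data>\r\n";
    # a zero-size chunk marks the end of the body.
    fp = io.BytesIO(raw)
    body = b""
    while True:
        size = int(fp.readline().split(b";", 1)[0], 16)
        if size == 0:
            break
        body += fp.read(size)
        fp.read(2)  # discard the CRLF that terminates the chunk
    return body

assert parse_chunked(b"4\r\nWiki\r\n5\r\npedia\r\n0\r\n\r\n") == b"Wikipedia"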
- """ - if self.retries is not None and len(self.retries.history): - return self.retries.history[-1].redirect_location - else: - return self._request_url - - def __iter__(self): - buffer = [] - for chunk in self.stream(decode_content=True): - if b"\n" in chunk: - chunk = chunk.split(b"\n") - yield b"".join(buffer) + chunk[0] + b"\n" - for x in chunk[1:-1]: - yield x + b"\n" - if chunk[-1]: - buffer = [chunk[-1]] - else: - buffer = [] - else: - buffer.append(chunk) - if buffer: - yield b"".join(buffer) diff --git a/spaces/BilalSardar/Voice-Cloning/README.md b/spaces/BilalSardar/Voice-Cloning/README.md deleted file mode 100644 index 00ebaaad0d708d9c58974d141a91355dbf489733..0000000000000000000000000000000000000000 --- a/spaces/BilalSardar/Voice-Cloning/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Voice Cloning -emoji: ⚡ -colorFrom: yellow -colorTo: yellow -sdk: gradio -sdk_version: 3.11 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/CVPR/Example-Echocardiogram-Segmentation/README.md b/spaces/CVPR/Example-Echocardiogram-Segmentation/README.md deleted file mode 100644 index 0e3be69dfc0351921b5cc8d655758609cb4b558c..0000000000000000000000000000000000000000 --- a/spaces/CVPR/Example-Echocardiogram-Segmentation/README.md +++ /dev/null @@ -1,45 +0,0 @@ ---- -title: Echocardiogram Segmentation -emoji: 🦀 -colorFrom: indigo -colorTo: gray -sdk: gradio -app_file: app.py -pinned: false ---- - -# Configuration - -`title`: _string_ -Display title for the Space - -`emoji`: _string_ -Space emoji (emoji-only character allowed) - -`colorFrom`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`colorTo`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`sdk`: _string_ -Can be either `gradio`, `streamlit`, or `static` - -`sdk_version` : _string_ -Only applicable for `streamlit` SDK. -See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions. - -`app_file`: _string_ -Path to your main application file (which contains either `gradio` or `streamlit` Python code, or `static` html code). -Path is relative to the root of the repository. - -`models`: _List[string]_ -HF model IDs (like "gpt2" or "deepset/roberta-base-squad2") used in the Space. -Will be parsed automatically from your code if not specified here. - -`datasets`: _List[string]_ -HF dataset IDs (like "common_voice" or "oscar-corpus/OSCAR-2109") used in the Space. -Will be parsed automatically from your code if not specified here. - -`pinned`: _boolean_ -Whether the Space stays on top of your list. diff --git a/spaces/CVPR/WALT/mmdet/models/losses/gfocal_loss.py b/spaces/CVPR/WALT/mmdet/models/losses/gfocal_loss.py deleted file mode 100644 index 9d3b8833dc50c76f6741db5341dbf8da3402d07b..0000000000000000000000000000000000000000 --- a/spaces/CVPR/WALT/mmdet/models/losses/gfocal_loss.py +++ /dev/null @@ -1,188 +0,0 @@ -import mmcv -import torch.nn as nn -import torch.nn.functional as F - -from ..builder import LOSSES -from .utils import weighted_loss - - -@mmcv.jit(derivate=True, coderize=True) -@weighted_loss -def quality_focal_loss(pred, target, beta=2.0): - r"""Quality Focal Loss (QFL) is from `Generalized Focal Loss: Learning - Qualified and Distributed Bounding Boxes for Dense Object Detection - `_. 
- - Args: - pred (torch.Tensor): Predicted joint representation of classification - and quality (IoU) estimation with shape (N, C), C is the number of - classes. - target (tuple([torch.Tensor])): Target category label with shape (N,) - and target quality label with shape (N,). - beta (float): The beta parameter for calculating the modulating factor. - Defaults to 2.0. - - Returns: - torch.Tensor: Loss tensor with shape (N,). - """ - assert len(target) == 2, """target for QFL must be a tuple of two elements, - including category label and quality label, respectively""" - # label denotes the category id, score denotes the quality score - label, score = target - - # negatives are supervised by 0 quality score - pred_sigmoid = pred.sigmoid() - scale_factor = pred_sigmoid - zerolabel = scale_factor.new_zeros(pred.shape) - loss = F.binary_cross_entropy_with_logits( - pred, zerolabel, reduction='none') * scale_factor.pow(beta) - - # FG cat_id: [0, num_classes -1], BG cat_id: num_classes - bg_class_ind = pred.size(1) - pos = ((label >= 0) & (label < bg_class_ind)).nonzero().squeeze(1) - pos_label = label[pos].long() - # positives are supervised by bbox quality (IoU) score - scale_factor = score[pos] - pred_sigmoid[pos, pos_label] - loss[pos, pos_label] = F.binary_cross_entropy_with_logits( - pred[pos, pos_label], score[pos], - reduction='none') * scale_factor.abs().pow(beta) - - loss = loss.sum(dim=1, keepdim=False) - return loss - - -@mmcv.jit(derivate=True, coderize=True) -@weighted_loss -def distribution_focal_loss(pred, label): - r"""Distribution Focal Loss (DFL) is from `Generalized Focal Loss: Learning - Qualified and Distributed Bounding Boxes for Dense Object Detection - `_. - - Args: - pred (torch.Tensor): Predicted general distribution of bounding boxes - (before softmax) with shape (N, n+1), n is the max value of the - integral set `{0, ..., n}` in paper. - label (torch.Tensor): Target distance label for bounding boxes with - shape (N,). - - Returns: - torch.Tensor: Loss tensor with shape (N,). - """ - dis_left = label.long() - dis_right = dis_left + 1 - weight_left = dis_right.float() - label - weight_right = label - dis_left.float() - loss = F.cross_entropy(pred, dis_left, reduction='none') * weight_left \ - + F.cross_entropy(pred, dis_right, reduction='none') * weight_right - return loss - - -@LOSSES.register_module() -class QualityFocalLoss(nn.Module): - r"""Quality Focal Loss (QFL) is a variant of `Generalized Focal Loss: - Learning Qualified and Distributed Bounding Boxes for Dense Object - Detection `_. - - Args: - use_sigmoid (bool): Whether sigmoid operation is conducted in QFL. - Defaults to True. - beta (float): The beta parameter for calculating the modulating factor. - Defaults to 2.0. - reduction (str): Options are "none", "mean" and "sum". - loss_weight (float): Loss weight of current loss. - """ - - def __init__(self, - use_sigmoid=True, - beta=2.0, - reduction='mean', - loss_weight=1.0): - super(QualityFocalLoss, self).__init__() - assert use_sigmoid is True, 'Only sigmoid in QFL supported now.' - self.use_sigmoid = use_sigmoid - self.beta = beta - self.reduction = reduction - self.loss_weight = loss_weight - - def forward(self, - pred, - target, - weight=None, - avg_factor=None, - reduction_override=None): - """Forward function. - - Args: - pred (torch.Tensor): Predicted joint representation of - classification and quality (IoU) estimation with shape (N, C), - C is the number of classes. 
- target (tuple([torch.Tensor])): Target category label with shape - (N,) and target quality label with shape (N,). - weight (torch.Tensor, optional): The weight of loss for each - prediction. Defaults to None. - avg_factor (int, optional): Average factor that is used to average - the loss. Defaults to None. - reduction_override (str, optional): The reduction method used to - override the original reduction method of the loss. - Defaults to None. - """ - assert reduction_override in (None, 'none', 'mean', 'sum') - reduction = ( - reduction_override if reduction_override else self.reduction) - if self.use_sigmoid: - loss_cls = self.loss_weight * quality_focal_loss( - pred, - target, - weight, - beta=self.beta, - reduction=reduction, - avg_factor=avg_factor) - else: - raise NotImplementedError - return loss_cls - - -@LOSSES.register_module() -class DistributionFocalLoss(nn.Module): - r"""Distribution Focal Loss (DFL) is a variant of `Generalized Focal Loss: - Learning Qualified and Distributed Bounding Boxes for Dense Object - Detection `_. - - Args: - reduction (str): Options are `'none'`, `'mean'` and `'sum'`. - loss_weight (float): Loss weight of current loss. - """ - - def __init__(self, reduction='mean', loss_weight=1.0): - super(DistributionFocalLoss, self).__init__() - self.reduction = reduction - self.loss_weight = loss_weight - - def forward(self, - pred, - target, - weight=None, - avg_factor=None, - reduction_override=None): - """Forward function. - - Args: - pred (torch.Tensor): Predicted general distribution of bounding - boxes (before softmax) with shape (N, n+1), n is the max value - of the integral set `{0, ..., n}` in paper. - target (torch.Tensor): Target distance label for bounding boxes - with shape (N,). - weight (torch.Tensor, optional): The weight of loss for each - prediction. Defaults to None. - avg_factor (int, optional): Average factor that is used to average - the loss. Defaults to None. - reduction_override (str, optional): The reduction method used to - override the original reduction method of the loss. - Defaults to None. 
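A worked example of the interpolation that distribution_focal_loss() performs, with made-up numbers: a continuous distance target of 2.4 over the integral set {0, ..., 7} is split between the neighbouring bins 2 and 3 with weights 0.6 and 0.4.

import torch
import torch.nn.functional as F

pred = torch.randn(1, 8)             # (N, n+1) logits before softmax, here n = 7
label = torch.tensor([2.4])          # continuous distance target

left = label.long()                  # bin 2
right = left + 1                     # bin 3
w_left = right.float() - label       # 0.6
w_right = label - left.float()       # 0.4
loss = (F.cross_entropy(pred, left, reduction="none") * w_left
        + F.cross_entropy(pred, right, reduction="none") * w_right)
print(float(loss))                   # same per-element value as distribution_focal_loss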
- """ - assert reduction_override in (None, 'none', 'mean', 'sum') - reduction = ( - reduction_override if reduction_override else self.reduction) - loss_cls = self.loss_weight * distribution_focal_loss( - pred, target, weight, reduction=reduction, avg_factor=avg_factor) - return loss_cls diff --git a/spaces/CVPR/lama-example/fetch_data/eval_sampler.py b/spaces/CVPR/lama-example/fetch_data/eval_sampler.py deleted file mode 100644 index bf2d70d875a44b5a74daeec9b4ba747600287f2a..0000000000000000000000000000000000000000 --- a/spaces/CVPR/lama-example/fetch_data/eval_sampler.py +++ /dev/null @@ -1,21 +0,0 @@ -import os -import random - - -val_files_path = os.path.abspath('.') + '/places_standard_dataset/original/val/' -val_files = [val_files_path + image for image in os.listdir(val_files_path)] - -print(f'found {len(val_files)} images in {val_files_path}') - -random.shuffle(val_files) -val_files_random = val_files[0:2000] - -list_of_random_val_files = os.path.abspath('.') \ -+ '/places_standard_dataset/original/eval_random_files.txt' - -print(f'copying 2000 random images to {list_of_random_val_files}') -with open(list_of_random_val_files, 'w') as fw: - for filename in val_files_random: - fw.write(filename+'\n') -print('...done') - diff --git a/spaces/CVPR/regionclip-demo/detectron2/export/c10.py b/spaces/CVPR/regionclip-demo/detectron2/export/c10.py deleted file mode 100644 index ffb47c6cf19ae07f334b751ccadd071ebbd25e2e..0000000000000000000000000000000000000000 --- a/spaces/CVPR/regionclip-demo/detectron2/export/c10.py +++ /dev/null @@ -1,527 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. - -import math -import torch -import torch.nn.functional as F - -from detectron2.layers import cat -from detectron2.layers.roi_align_rotated import ROIAlignRotated -from detectron2.modeling import poolers -from detectron2.modeling.proposal_generator import rpn -from detectron2.modeling.roi_heads.mask_head import mask_rcnn_inference -from detectron2.structures import Boxes, ImageList, Instances, Keypoints - -from .shared import alias, to_device - - -""" -This file contains caffe2-compatible implementation of several detectron2 components. -""" - - -class Caffe2Boxes(Boxes): - """ - Representing a list of detectron2.structures.Boxes from minibatch, each box - is represented by a 5d vector (batch index + 4 coordinates), or a 6d vector - (batch index + 5 coordinates) for RotatedBoxes. - """ - - def __init__(self, tensor): - assert isinstance(tensor, torch.Tensor) - assert tensor.dim() == 2 and tensor.size(-1) in [4, 5, 6], tensor.size() - # TODO: make tensor immutable when dim is Nx5 for Boxes, - # and Nx6 for RotatedBoxes? - self.tensor = tensor - - -# TODO clean up this class, maybe just extend Instances -class InstancesList(object): - """ - Tensor representation of a list of Instances object for a batch of images. - - When dealing with a batch of images with Caffe2 ops, a list of bboxes - (instances) are usually represented by single Tensor with size - (sigma(Ni), 5) or (sigma(Ni), 4) plus a batch split Tensor. This class is - for providing common functions to convert between these two representations. - """ - - def __init__(self, im_info, indices, extra_fields=None): - # [N, 3] -> (H, W, Scale) - self.im_info = im_info - # [N,] -> indice of batch to which the instance belongs - self.indices = indices - # [N, ...] 
- self.batch_extra_fields = extra_fields or {} - - self.image_size = self.im_info - - def get_fields(self): - """like `get_fields` in the Instances object, - but return each field in tensor representations""" - ret = {} - for k, v in self.batch_extra_fields.items(): - # if isinstance(v, torch.Tensor): - # tensor_rep = v - # elif isinstance(v, (Boxes, Keypoints)): - # tensor_rep = v.tensor - # else: - # raise ValueError("Can't find tensor representation for: {}".format()) - ret[k] = v - return ret - - def has(self, name): - return name in self.batch_extra_fields - - def set(self, name, value): - data_len = len(value) - if len(self.batch_extra_fields): - assert ( - len(self) == data_len - ), "Adding a field of length {} to a Instances of length {}".format(data_len, len(self)) - self.batch_extra_fields[name] = value - - def __setattr__(self, name, val): - if name in ["im_info", "indices", "batch_extra_fields", "image_size"]: - super().__setattr__(name, val) - else: - self.set(name, val) - - def __getattr__(self, name): - if name not in self.batch_extra_fields: - raise AttributeError("Cannot find field '{}' in the given Instances!".format(name)) - return self.batch_extra_fields[name] - - def __len__(self): - return len(self.indices) - - def flatten(self): - ret = [] - for _, v in self.batch_extra_fields.items(): - if isinstance(v, (Boxes, Keypoints)): - ret.append(v.tensor) - else: - ret.append(v) - return ret - - @staticmethod - def to_d2_instances_list(instances_list): - """ - Convert InstancesList to List[Instances]. The input `instances_list` can - also be a List[Instances], in this case this method is a non-op. - """ - if not isinstance(instances_list, InstancesList): - assert all(isinstance(x, Instances) for x in instances_list) - return instances_list - - ret = [] - for i, info in enumerate(instances_list.im_info): - instances = Instances(torch.Size([int(info[0].item()), int(info[1].item())])) - - ids = instances_list.indices == i - for k, v in instances_list.batch_extra_fields.items(): - if isinstance(v, torch.Tensor): - instances.set(k, v[ids]) - continue - elif isinstance(v, Boxes): - instances.set(k, v[ids, -4:]) - continue - - target_type, tensor_source = v - assert isinstance(tensor_source, torch.Tensor) - assert tensor_source.shape[0] == instances_list.indices.shape[0] - tensor_source = tensor_source[ids] - - if issubclass(target_type, Boxes): - instances.set(k, Boxes(tensor_source[:, -4:])) - elif issubclass(target_type, Keypoints): - instances.set(k, Keypoints(tensor_source)) - elif issubclass(target_type, torch.Tensor): - instances.set(k, tensor_source) - else: - raise ValueError("Can't handle targe type: {}".format(target_type)) - - ret.append(instances) - return ret - - -class Caffe2Compatible(object): - """ - A model can inherit this class to indicate that it can be traced and deployed with caffe2. - """ - - def _get_tensor_mode(self): - return self._tensor_mode - - def _set_tensor_mode(self, v): - self._tensor_mode = v - - tensor_mode = property(_get_tensor_mode, _set_tensor_mode) - """ - If true, the model expects C2-style tensor only inputs/outputs format. 
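The batched representation wrapped by Caffe2Boxes and InstancesList above is just a (sum Ni, 5) tensor whose first column is the batch index. A small illustrative sketch of recovering per-image boxes from it, roughly what to_d2_instances_list() does for each field:

import torch

# Three RoIs for a batch of two images; each row is (batch_index, x1, y1, x2, y2).
rois = torch.tensor([
    [0., 10., 10., 50., 60.],
    [0., 20., 30., 80., 90.],
    [1.,  5., 15., 40., 45.],
])
indices = rois[:, 0].long()
per_image = [rois[indices == i, 1:] for i in range(int(indices.max()) + 1)]
print([tuple(b.shape) for b in per_image])   # [(2, 4), (1, 4)]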
- """ - - -class Caffe2RPN(Caffe2Compatible, rpn.RPN): - def _generate_proposals( - self, images, objectness_logits_pred, anchor_deltas_pred, gt_instances=None - ): - assert isinstance(images, ImageList) - if self.tensor_mode: - im_info = images.image_sizes - else: - im_info = torch.tensor([[im_sz[0], im_sz[1], 1.0] for im_sz in images.image_sizes]).to( - images.tensor.device - ) - assert isinstance(im_info, torch.Tensor) - - rpn_rois_list = [] - rpn_roi_probs_list = [] - for scores, bbox_deltas, cell_anchors_tensor, feat_stride in zip( - objectness_logits_pred, - anchor_deltas_pred, - iter(self.anchor_generator.cell_anchors), - self.anchor_generator.strides, - ): - scores = scores.detach() - bbox_deltas = bbox_deltas.detach() - - rpn_rois, rpn_roi_probs = torch.ops._caffe2.GenerateProposals( - scores, - bbox_deltas, - im_info, - cell_anchors_tensor, - spatial_scale=1.0 / feat_stride, - pre_nms_topN=self.pre_nms_topk[self.training], - post_nms_topN=self.post_nms_topk[self.training], - nms_thresh=self.nms_thresh, - min_size=self.min_box_size, - # correct_transform_coords=True, # deprecated argument - angle_bound_on=True, # Default - angle_bound_lo=-180, - angle_bound_hi=180, - clip_angle_thresh=1.0, # Default - legacy_plus_one=False, - ) - rpn_rois_list.append(rpn_rois) - rpn_roi_probs_list.append(rpn_roi_probs) - - # For FPN in D2, in RPN all proposals from different levels are concated - # together, ranked and picked by top post_nms_topk. Then in ROIPooler - # it calculates level_assignments and calls the RoIAlign from - # the corresponding level. - - if len(objectness_logits_pred) == 1: - rpn_rois = rpn_rois_list[0] - rpn_roi_probs = rpn_roi_probs_list[0] - else: - assert len(rpn_rois_list) == len(rpn_roi_probs_list) - rpn_post_nms_topN = self.post_nms_topk[self.training] - - device = rpn_rois_list[0].device - input_list = [to_device(x, "cpu") for x in (rpn_rois_list + rpn_roi_probs_list)] - - # TODO remove this after confirming rpn_max_level/rpn_min_level - # is not needed in CollectRpnProposals. - feature_strides = list(self.anchor_generator.strides) - rpn_min_level = int(math.log2(feature_strides[0])) - rpn_max_level = int(math.log2(feature_strides[-1])) - assert (rpn_max_level - rpn_min_level + 1) == len( - rpn_rois_list - ), "CollectRpnProposals requires continuous levels" - - rpn_rois = torch.ops._caffe2.CollectRpnProposals( - input_list, - # NOTE: in current implementation, rpn_max_level and rpn_min_level - # are not needed, only the subtraction of two matters and it - # can be infer from the number of inputs. Keep them now for - # consistency. 
- rpn_max_level=2 + len(rpn_rois_list) - 1, - rpn_min_level=2, - rpn_post_nms_topN=rpn_post_nms_topN, - ) - rpn_rois = to_device(rpn_rois, device) - rpn_roi_probs = [] - - proposals = self.c2_postprocess(im_info, rpn_rois, rpn_roi_probs, self.tensor_mode) - return proposals, {} - - def forward(self, images, features, gt_instances=None): - assert not self.training - features = [features[f] for f in self.in_features] - objectness_logits_pred, anchor_deltas_pred = self.rpn_head(features) - return self._generate_proposals( - images, - objectness_logits_pred, - anchor_deltas_pred, - gt_instances, - ) - - @staticmethod - def c2_postprocess(im_info, rpn_rois, rpn_roi_probs, tensor_mode): - proposals = InstancesList( - im_info=im_info, - indices=rpn_rois[:, 0], - extra_fields={ - "proposal_boxes": Caffe2Boxes(rpn_rois), - "objectness_logits": (torch.Tensor, rpn_roi_probs), - }, - ) - if not tensor_mode: - proposals = InstancesList.to_d2_instances_list(proposals) - else: - proposals = [proposals] - return proposals - - -class Caffe2ROIPooler(Caffe2Compatible, poolers.ROIPooler): - @staticmethod - def c2_preprocess(box_lists): - assert all(isinstance(x, Boxes) for x in box_lists) - if all(isinstance(x, Caffe2Boxes) for x in box_lists): - # input is pure-tensor based - assert len(box_lists) == 1 - pooler_fmt_boxes = box_lists[0].tensor - else: - pooler_fmt_boxes = poolers.convert_boxes_to_pooler_format(box_lists) - return pooler_fmt_boxes - - def forward(self, x, box_lists): - assert not self.training - - pooler_fmt_boxes = self.c2_preprocess(box_lists) - num_level_assignments = len(self.level_poolers) - - if num_level_assignments == 1: - if isinstance(self.level_poolers[0], ROIAlignRotated): - c2_roi_align = torch.ops._caffe2.RoIAlignRotated - aligned = True - else: - c2_roi_align = torch.ops._caffe2.RoIAlign - aligned = self.level_poolers[0].aligned - - out = c2_roi_align( - x[0], - pooler_fmt_boxes, - order="NCHW", - spatial_scale=float(self.level_poolers[0].spatial_scale), - pooled_h=int(self.output_size[0]), - pooled_w=int(self.output_size[1]), - sampling_ratio=int(self.level_poolers[0].sampling_ratio), - aligned=aligned, - ) - return out - - device = pooler_fmt_boxes.device - assert ( - self.max_level - self.min_level + 1 == 4 - ), "Currently DistributeFpnProposals only support 4 levels" - fpn_outputs = torch.ops._caffe2.DistributeFpnProposals( - to_device(pooler_fmt_boxes, "cpu"), - roi_canonical_scale=self.canonical_box_size, - roi_canonical_level=self.canonical_level, - roi_max_level=self.max_level, - roi_min_level=self.min_level, - legacy_plus_one=False, - ) - fpn_outputs = [to_device(x, device) for x in fpn_outputs] - - rois_fpn_list = fpn_outputs[:-1] - rois_idx_restore_int32 = fpn_outputs[-1] - - roi_feat_fpn_list = [] - for roi_fpn, x_level, pooler in zip(rois_fpn_list, x, self.level_poolers): - if isinstance(pooler, ROIAlignRotated): - c2_roi_align = torch.ops._caffe2.RoIAlignRotated - aligned = True - else: - c2_roi_align = torch.ops._caffe2.RoIAlign - aligned = bool(pooler.aligned) - - roi_feat_fpn = c2_roi_align( - x_level, - roi_fpn, - order="NCHW", - spatial_scale=float(pooler.spatial_scale), - pooled_h=int(self.output_size[0]), - pooled_w=int(self.output_size[1]), - sampling_ratio=int(pooler.sampling_ratio), - aligned=aligned, - ) - roi_feat_fpn_list.append(roi_feat_fpn) - - roi_feat_shuffled = cat(roi_feat_fpn_list, dim=0) - assert roi_feat_shuffled.numel() > 0 and rois_idx_restore_int32.numel() > 0, ( - "Caffe2 export requires tracing with a model checkpoint + input that can 
produce valid" - " detections. But no detections were obtained with the given checkpoint and input!" - ) - roi_feat = torch.ops._caffe2.BatchPermutation(roi_feat_shuffled, rois_idx_restore_int32) - return roi_feat - - -class Caffe2FastRCNNOutputsInference: - def __init__(self, tensor_mode): - self.tensor_mode = tensor_mode # whether the output is caffe2 tensor mode - - def __call__(self, box_predictor, predictions, proposals): - """equivalent to FastRCNNOutputLayers.inference""" - num_classes = box_predictor.num_classes - score_thresh = box_predictor.test_score_thresh - nms_thresh = box_predictor.test_nms_thresh - topk_per_image = box_predictor.test_topk_per_image - is_rotated = len(box_predictor.box2box_transform.weights) == 5 - - if is_rotated: - box_dim = 5 - assert box_predictor.box2box_transform.weights[4] == 1, ( - "The weights for Rotated BBoxTransform in C2 have only 4 dimensions," - + " thus enforcing the angle weight to be 1 for now" - ) - box2box_transform_weights = box_predictor.box2box_transform.weights[:4] - else: - box_dim = 4 - box2box_transform_weights = box_predictor.box2box_transform.weights - - class_logits, box_regression = predictions - if num_classes + 1 == class_logits.shape[1]: - class_prob = F.softmax(class_logits, -1) - else: - assert num_classes == class_logits.shape[1] - class_prob = F.sigmoid(class_logits) - # BoxWithNMSLimit will infer num_classes from the shape of the class_prob - # So append a zero column as placeholder for the background class - class_prob = torch.cat((class_prob, torch.zeros(class_prob.shape[0], 1)), dim=1) - - assert box_regression.shape[1] % box_dim == 0 - cls_agnostic_bbox_reg = box_regression.shape[1] // box_dim == 1 - - input_tensor_mode = proposals[0].proposal_boxes.tensor.shape[1] == box_dim + 1 - - rois = type(proposals[0].proposal_boxes).cat([p.proposal_boxes for p in proposals]) - device, dtype = rois.tensor.device, rois.tensor.dtype - if input_tensor_mode: - im_info = proposals[0].image_size - rois = rois.tensor - else: - im_info = torch.tensor( - [[sz[0], sz[1], 1.0] for sz in [x.image_size for x in proposals]] - ) - batch_ids = cat( - [ - torch.full((b, 1), i, dtype=dtype, device=device) - for i, b in enumerate(len(p) for p in proposals) - ], - dim=0, - ) - rois = torch.cat([batch_ids, rois.tensor], dim=1) - - roi_pred_bbox, roi_batch_splits = torch.ops._caffe2.BBoxTransform( - to_device(rois, "cpu"), - to_device(box_regression, "cpu"), - to_device(im_info, "cpu"), - weights=box2box_transform_weights, - apply_scale=True, - rotated=is_rotated, - angle_bound_on=True, - angle_bound_lo=-180, - angle_bound_hi=180, - clip_angle_thresh=1.0, - legacy_plus_one=False, - ) - roi_pred_bbox = to_device(roi_pred_bbox, device) - roi_batch_splits = to_device(roi_batch_splits, device) - - nms_outputs = torch.ops._caffe2.BoxWithNMSLimit( - to_device(class_prob, "cpu"), - to_device(roi_pred_bbox, "cpu"), - to_device(roi_batch_splits, "cpu"), - score_thresh=float(score_thresh), - nms=float(nms_thresh), - detections_per_im=int(topk_per_image), - soft_nms_enabled=False, - soft_nms_method="linear", - soft_nms_sigma=0.5, - soft_nms_min_score_thres=0.001, - rotated=is_rotated, - cls_agnostic_bbox_reg=cls_agnostic_bbox_reg, - input_boxes_include_bg_cls=False, - output_classes_include_bg_cls=False, - legacy_plus_one=False, - ) - roi_score_nms = to_device(nms_outputs[0], device) - roi_bbox_nms = to_device(nms_outputs[1], device) - roi_class_nms = to_device(nms_outputs[2], device) - roi_batch_splits_nms = to_device(nms_outputs[3], device) - roi_keeps_nms 
= to_device(nms_outputs[4], device) - roi_keeps_size_nms = to_device(nms_outputs[5], device) - if not self.tensor_mode: - roi_class_nms = roi_class_nms.to(torch.int64) - - roi_batch_ids = cat( - [ - torch.full((b, 1), i, dtype=dtype, device=device) - for i, b in enumerate(int(x.item()) for x in roi_batch_splits_nms) - ], - dim=0, - ) - - roi_class_nms = alias(roi_class_nms, "class_nms") - roi_score_nms = alias(roi_score_nms, "score_nms") - roi_bbox_nms = alias(roi_bbox_nms, "bbox_nms") - roi_batch_splits_nms = alias(roi_batch_splits_nms, "batch_splits_nms") - roi_keeps_nms = alias(roi_keeps_nms, "keeps_nms") - roi_keeps_size_nms = alias(roi_keeps_size_nms, "keeps_size_nms") - - results = InstancesList( - im_info=im_info, - indices=roi_batch_ids[:, 0], - extra_fields={ - "pred_boxes": Caffe2Boxes(roi_bbox_nms), - "scores": roi_score_nms, - "pred_classes": roi_class_nms, - }, - ) - - if not self.tensor_mode: - results = InstancesList.to_d2_instances_list(results) - batch_splits = roi_batch_splits_nms.int().tolist() - kept_indices = list(roi_keeps_nms.to(torch.int64).split(batch_splits)) - else: - results = [results] - kept_indices = [roi_keeps_nms] - - return results, kept_indices - - -class Caffe2MaskRCNNInference: - def __call__(self, pred_mask_logits, pred_instances): - """equivalent to mask_head.mask_rcnn_inference""" - if all(isinstance(x, InstancesList) for x in pred_instances): - assert len(pred_instances) == 1 - mask_probs_pred = pred_mask_logits.sigmoid() - mask_probs_pred = alias(mask_probs_pred, "mask_fcn_probs") - pred_instances[0].pred_masks = mask_probs_pred - else: - mask_rcnn_inference(pred_mask_logits, pred_instances) - - -class Caffe2KeypointRCNNInference: - def __init__(self, use_heatmap_max_keypoint): - self.use_heatmap_max_keypoint = use_heatmap_max_keypoint - - def __call__(self, pred_keypoint_logits, pred_instances): - # just return the keypoint heatmap for now, - # there will be option to call HeatmapMaxKeypointOp - output = alias(pred_keypoint_logits, "kps_score") - if all(isinstance(x, InstancesList) for x in pred_instances): - assert len(pred_instances) == 1 - if self.use_heatmap_max_keypoint: - device = output.device - output = torch.ops._caffe2.HeatmapMaxKeypoint( - to_device(output, "cpu"), - pred_instances[0].pred_boxes.tensor, - should_output_softmax=True, # worth make it configerable? - ) - output = to_device(output, device) - output = alias(output, "keypoints_out") - pred_instances[0].pred_keypoints = output - return pred_keypoint_logits diff --git a/spaces/CofAI/README/README.md b/spaces/CofAI/README/README.md deleted file mode 100644 index 3915e6bb8632ee5d632e8c83a802ea412a5d46af..0000000000000000000000000000000000000000 --- a/spaces/CofAI/README/README.md +++ /dev/null @@ -1,10 +0,0 @@ ---- -title: README -emoji: 🏢 -colorFrom: green -colorTo: green -sdk: static -pinned: true ---- - -Мы организация CofAI, мы занимаемся разработкой ИИ, и мы некоммерческая организация! 
\ No newline at end of file diff --git a/spaces/Cyril666/my_abi/README.md b/spaces/Cyril666/my_abi/README.md deleted file mode 100644 index 0653d1b57513232da4dea99b3b4bca1f2d1c2108..0000000000000000000000000000000000000000 --- a/spaces/Cyril666/my_abi/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: My_abi -emoji: 💻 -colorFrom: pink -colorTo: gray -sdk: gradio -sdk_version: 2.9.4 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/cdn/assets/Login-2b7e7f3a.js b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/cdn/assets/Login-2b7e7f3a.js deleted file mode 100644 index 7f5c4b0134047b58bd4286a804b8df9a959388ae..0000000000000000000000000000000000000000 --- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/cdn/assets/Login-2b7e7f3a.js +++ /dev/null @@ -1,3 +0,0 @@ -import{S as j,e as q,s as A,N as h,k as $,K as C,U as L,p,o as v,z as x,v as w,A as c,x as k,O as g,P,M as B,R as H,h as N,j as S,t as I}from"./index-1d65707a.js";import{F as K}from"./Form-cd229de0.js";import{T}from"./Textbox-1f11d244.js";import{a as M}from"./Button-f155035a.js";import{C as R}from"./Column-6c43afc7.js";/* empty css */import"./BlockTitle-dee077e8.js";import"./Info-7c6961ef.js";import"./Copy-9f1657c4.js";/* empty css */function z(i){let e,s;return{c(){e=h("p"),s=P(i[0]),C(e,"class","auth svelte-1ogxbi0")},m(l,o){p(l,e,o),B(e,s)},p(l,o){o&1&&H(s,l[0])},d(l){l&&c(e)}}}function D(i){let e;return{c(){e=h("p"),e.textContent=`If you are visiting a HuggingFace Space in Incognito mode, you must - enable third party cookies.`,C(e,"class","auth svelte-1ogxbi0")},m(s,l){p(s,e,l)},d(s){s&&c(e)}}}function O(i){let e;return{c(){e=h("p"),e.textContent="Incorrect Credentials",C(e,"class","creds svelte-1ogxbi0")},m(s,l){p(s,e,l)},d(s){s&&c(e)}}}function U(i){let e,s,l,o,r,m;function d(n){i[8](n)}let _={label:"username",lines:1,show_label:!0,max_lines:1,mode:"dynamic"};i[3]!==void 0&&(_.value=i[3]),e=new T({props:_}),N.push(()=>S(e,"value",d)),e.$on("submit",i[6]);function b(n){i[9](n)}let u={label:"password",lines:1,show_label:!0,max_lines:1,mode:"dynamic",type:"password"};return i[4]!==void 0&&(u.value=i[4]),o=new T({props:u}),N.push(()=>S(o,"value",b)),o.$on("submit",i[6]),{c(){$(e.$$.fragment),l=g(),$(o.$$.fragment)},m(n,f){v(e,n,f),p(n,l,f),v(o,n,f),m=!0},p(n,f){const t={};!s&&f&8&&(s=!0,t.value=n[3],I(()=>s=!1)),e.$set(t);const a={};!r&&f&16&&(r=!0,a.value=n[4],I(()=>r=!1)),o.$set(a)},i(n){m||(x(e.$$.fragment,n),x(o.$$.fragment,n),m=!0)},o(n){w(e.$$.fragment,n),w(o.$$.fragment,n),m=!1},d(n){n&&c(l),k(e,n),k(o,n)}}}function E(i){let e;return{c(){e=P("Login")},m(s,l){p(s,e,l)},d(s){s&&c(e)}}}function G(i){let e,s,l,o,r,m,d,_,b,u=i[0]&&z(i),n=i[2]&&D(),f=i[5]&&O();return m=new K({props:{$$slots:{default:[U]},$$scope:{ctx:i}}}),_=new M({props:{size:"lg",variant:"primary",$$slots:{default:[E]},$$scope:{ctx:i}}}),_.$on("click",i[6]),{c(){e=h("h2"),e.textContent="Login",s=g(),u&&u.c(),l=g(),n&&n.c(),o=g(),f&&f.c(),r=g(),$(m.$$.fragment),d=g(),$(_.$$.fragment),C(e,"class","svelte-1ogxbi0")},m(t,a){p(t,e,a),p(t,s,a),u&&u.m(t,a),p(t,l,a),n&&n.m(t,a),p(t,o,a),f&&f.m(t,a),p(t,r,a),v(m,t,a),p(t,d,a),v(_,t,a),b=!0},p(t,a){t[0]?u?u.p(t,a):(u=z(t),u.c(),u.m(l.parentNode,l)):u&&(u.d(1),u=null),t[2]?n||(n=D(),n.c(),n.m(o.parentNode,o)):n&&(n.d(1),n=null),t[5]?f||(f=O(),f.c(),f.m(r.parentNode,r)):f&&(f.d(1),f=null);const 
y={};a&1048&&(y.$$scope={dirty:a,ctx:t}),m.$set(y);const F={};a&1024&&(F.$$scope={dirty:a,ctx:t}),_.$set(F)},i(t){b||(x(m.$$.fragment,t),x(_.$$.fragment,t),b=!0)},o(t){w(m.$$.fragment,t),w(_.$$.fragment,t),b=!1},d(t){t&&(c(e),c(s),c(l),c(o),c(r),c(d)),u&&u.d(t),n&&n.d(t),f&&f.d(t),k(m,t),k(_,t)}}}function J(i){let e,s,l;return s=new R({props:{variant:"panel",min_width:480,$$slots:{default:[G]},$$scope:{ctx:i}}}),{c(){e=h("div"),$(s.$$.fragment),C(e,"class","wrap svelte-1ogxbi0"),L(e,"min-h-screen",i[1])},m(o,r){p(o,e,r),v(s,e,null),l=!0},p(o,[r]){const m={};r&1085&&(m.$$scope={dirty:r,ctx:o}),s.$set(m),(!l||r&2)&&L(e,"min-h-screen",o[1])},i(o){l||(x(s.$$.fragment,o),l=!0)},o(o){w(s.$$.fragment,o),l=!1},d(o){o&&c(e),k(s)}}}function Q(i,e,s){let{root:l}=e,{auth_message:o}=e,{app_mode:r}=e,{space_id:m}=e,d="",_="",b=!1;const u=async()=>{const t=new FormData;t.append("username",d),t.append("password",_);let a=await fetch(l+"/login",{method:"POST",body:t});a.status===400?(s(5,b=!0),s(3,d=""),s(4,_="")):a.status==200&&location.reload()};function n(t){d=t,s(3,d)}function f(t){_=t,s(4,_)}return i.$$set=t=>{"root"in t&&s(7,l=t.root),"auth_message"in t&&s(0,o=t.auth_message),"app_mode"in t&&s(1,r=t.app_mode),"space_id"in t&&s(2,m=t.space_id)},[o,r,m,d,_,b,u,l,n,f]}class le extends j{constructor(e){super(),q(this,e,Q,J,A,{root:7,auth_message:0,app_mode:1,space_id:2})}}export{le as default}; -//# sourceMappingURL=Login-2b7e7f3a.js.map diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/httpcore/_sync/http2.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/httpcore/_sync/http2.py deleted file mode 100644 index d141d459a59d134beac3b2dffb17d17f29abcea4..0000000000000000000000000000000000000000 --- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/httpcore/_sync/http2.py +++ /dev/null @@ -1,589 +0,0 @@ -import enum -import logging -import time -import types -import typing - -import h2.config -import h2.connection -import h2.events -import h2.exceptions -import h2.settings - -from .._backends.base import NetworkStream -from .._exceptions import ( - ConnectionNotAvailable, - LocalProtocolError, - RemoteProtocolError, -) -from .._models import Origin, Request, Response -from .._synchronization import Lock, Semaphore, ShieldCancellation -from .._trace import Trace -from .interfaces import ConnectionInterface - -logger = logging.getLogger("httpcore.http2") - - -def has_body_headers(request: Request) -> bool: - return any( - k.lower() == b"content-length" or k.lower() == b"transfer-encoding" - for k, v in request.headers - ) - - -class HTTPConnectionState(enum.IntEnum): - ACTIVE = 1 - IDLE = 2 - CLOSED = 3 - - -class HTTP2Connection(ConnectionInterface): - READ_NUM_BYTES = 64 * 1024 - CONFIG = h2.config.H2Configuration(validate_inbound_headers=False) - - def __init__( - self, - origin: Origin, - stream: NetworkStream, - keepalive_expiry: typing.Optional[float] = None, - ): - self._origin = origin - self._network_stream = stream - self._keepalive_expiry: typing.Optional[float] = keepalive_expiry - self._h2_state = h2.connection.H2Connection(config=self.CONFIG) - self._state = HTTPConnectionState.IDLE - self._expire_at: typing.Optional[float] = None - self._request_count = 0 - self._init_lock = Lock() - self._state_lock = Lock() - self._read_lock = Lock() - self._write_lock = Lock() - self._sent_connection_init = False - self._used_all_stream_ids = False - self._connection_error = False - - # Mapping from stream ID to response stream events. 
- self._events: typing.Dict[ - int, - typing.Union[ - h2.events.ResponseReceived, - h2.events.DataReceived, - h2.events.StreamEnded, - h2.events.StreamReset, - ], - ] = {} - - # Connection terminated events are stored as state since - # we need to handle them for all streams. - self._connection_terminated: typing.Optional[ - h2.events.ConnectionTerminated - ] = None - - self._read_exception: typing.Optional[Exception] = None - self._write_exception: typing.Optional[Exception] = None - - def handle_request(self, request: Request) -> Response: - if not self.can_handle_request(request.url.origin): - # This cannot occur in normal operation, since the connection pool - # will only send requests on connections that handle them. - # It's in place simply for resilience as a guard against incorrect - # usage, for anyone working directly with httpcore connections. - raise RuntimeError( - f"Attempted to send request to {request.url.origin} on connection " - f"to {self._origin}" - ) - - with self._state_lock: - if self._state in (HTTPConnectionState.ACTIVE, HTTPConnectionState.IDLE): - self._request_count += 1 - self._expire_at = None - self._state = HTTPConnectionState.ACTIVE - else: - raise ConnectionNotAvailable() - - with self._init_lock: - if not self._sent_connection_init: - try: - kwargs = {"request": request} - with Trace("send_connection_init", logger, request, kwargs): - self._send_connection_init(**kwargs) - except BaseException as exc: - with ShieldCancellation(): - self.close() - raise exc - - self._sent_connection_init = True - - # Initially start with just 1 until the remote server provides - # its max_concurrent_streams value - self._max_streams = 1 - - local_settings_max_streams = ( - self._h2_state.local_settings.max_concurrent_streams - ) - self._max_streams_semaphore = Semaphore(local_settings_max_streams) - - for _ in range(local_settings_max_streams - self._max_streams): - self._max_streams_semaphore.acquire() - - self._max_streams_semaphore.acquire() - - try: - stream_id = self._h2_state.get_next_available_stream_id() - self._events[stream_id] = [] - except h2.exceptions.NoAvailableStreamIDError: # pragma: nocover - self._used_all_stream_ids = True - self._request_count -= 1 - raise ConnectionNotAvailable() - - try: - kwargs = {"request": request, "stream_id": stream_id} - with Trace("send_request_headers", logger, request, kwargs): - self._send_request_headers(request=request, stream_id=stream_id) - with Trace("send_request_body", logger, request, kwargs): - self._send_request_body(request=request, stream_id=stream_id) - with Trace( - "receive_response_headers", logger, request, kwargs - ) as trace: - status, headers = self._receive_response( - request=request, stream_id=stream_id - ) - trace.return_value = (status, headers) - - return Response( - status=status, - headers=headers, - content=HTTP2ConnectionByteStream(self, request, stream_id=stream_id), - extensions={ - "http_version": b"HTTP/2", - "network_stream": self._network_stream, - "stream_id": stream_id, - }, - ) - except BaseException as exc: # noqa: PIE786 - with ShieldCancellation(): - kwargs = {"stream_id": stream_id} - with Trace("response_closed", logger, request, kwargs): - self._response_closed(stream_id=stream_id) - - if isinstance(exc, h2.exceptions.ProtocolError): - # One case where h2 can raise a protocol error is when a - # closed frame has been seen by the state machine. - # - # This happens when one stream is reading, and encounters - # a GOAWAY event. 
Other flows of control may then raise - # a protocol error at any point they interact with the 'h2_state'. - # - # In this case we'll have stored the event, and should raise - # it as a RemoteProtocolError. - if self._connection_terminated: # pragma: nocover - raise RemoteProtocolError(self._connection_terminated) - # If h2 raises a protocol error in some other state then we - # must somehow have made a protocol violation. - raise LocalProtocolError(exc) # pragma: nocover - - raise exc - - def _send_connection_init(self, request: Request) -> None: - """ - The HTTP/2 connection requires some initial setup before we can start - using individual request/response streams on it. - """ - # Need to set these manually here instead of manipulating via - # __setitem__() otherwise the H2Connection will emit SettingsUpdate - # frames in addition to sending the undesired defaults. - self._h2_state.local_settings = h2.settings.Settings( - client=True, - initial_values={ - # Disable PUSH_PROMISE frames from the server since we don't do anything - # with them for now. Maybe when we support caching? - h2.settings.SettingCodes.ENABLE_PUSH: 0, - # These two are taken from h2 for safe defaults - h2.settings.SettingCodes.MAX_CONCURRENT_STREAMS: 100, - h2.settings.SettingCodes.MAX_HEADER_LIST_SIZE: 65536, - }, - ) - - # Some websites (*cough* Yahoo *cough*) balk at this setting being - # present in the initial handshake since it's not defined in the original - # RFC despite the RFC mandating ignoring settings you don't know about. - del self._h2_state.local_settings[ - h2.settings.SettingCodes.ENABLE_CONNECT_PROTOCOL - ] - - self._h2_state.initiate_connection() - self._h2_state.increment_flow_control_window(2**24) - self._write_outgoing_data(request) - - # Sending the request... - - def _send_request_headers(self, request: Request, stream_id: int) -> None: - """ - Send the request headers to a given stream ID. - """ - end_stream = not has_body_headers(request) - - # In HTTP/2 the ':authority' pseudo-header is used instead of 'Host'. - # In order to gracefully handle HTTP/1.1 and HTTP/2 we always require - # HTTP/1.1 style headers, and map them appropriately if we end up on - # an HTTP/2 connection. - authority = [v for k, v in request.headers if k.lower() == b"host"][0] - - headers = [ - (b":method", request.method), - (b":authority", authority), - (b":scheme", request.url.scheme), - (b":path", request.url.target), - ] + [ - (k.lower(), v) - for k, v in request.headers - if k.lower() - not in ( - b"host", - b"transfer-encoding", - ) - ] - - self._h2_state.send_headers(stream_id, headers, end_stream=end_stream) - self._h2_state.increment_flow_control_window(2**24, stream_id=stream_id) - self._write_outgoing_data(request) - - def _send_request_body(self, request: Request, stream_id: int) -> None: - """ - Iterate over the request body sending it to a given stream ID. - """ - if not has_body_headers(request): - return - - assert isinstance(request.stream, typing.Iterable) - for data in request.stream: - self._send_stream_data(request, stream_id, data) - self._send_end_stream(request, stream_id) - - def _send_stream_data( - self, request: Request, stream_id: int, data: bytes - ) -> None: - """ - Send a single chunk of data in one or more data frames. 
- """ - while data: - max_flow = self._wait_for_outgoing_flow(request, stream_id) - chunk_size = min(len(data), max_flow) - chunk, data = data[:chunk_size], data[chunk_size:] - self._h2_state.send_data(stream_id, chunk) - self._write_outgoing_data(request) - - def _send_end_stream(self, request: Request, stream_id: int) -> None: - """ - Send an empty data frame on on a given stream ID with the END_STREAM flag set. - """ - self._h2_state.end_stream(stream_id) - self._write_outgoing_data(request) - - # Receiving the response... - - def _receive_response( - self, request: Request, stream_id: int - ) -> typing.Tuple[int, typing.List[typing.Tuple[bytes, bytes]]]: - """ - Return the response status code and headers for a given stream ID. - """ - while True: - event = self._receive_stream_event(request, stream_id) - if isinstance(event, h2.events.ResponseReceived): - break - - status_code = 200 - headers = [] - for k, v in event.headers: - if k == b":status": - status_code = int(v.decode("ascii", errors="ignore")) - elif not k.startswith(b":"): - headers.append((k, v)) - - return (status_code, headers) - - def _receive_response_body( - self, request: Request, stream_id: int - ) -> typing.Iterator[bytes]: - """ - Iterator that returns the bytes of the response body for a given stream ID. - """ - while True: - event = self._receive_stream_event(request, stream_id) - if isinstance(event, h2.events.DataReceived): - amount = event.flow_controlled_length - self._h2_state.acknowledge_received_data(amount, stream_id) - self._write_outgoing_data(request) - yield event.data - elif isinstance(event, h2.events.StreamEnded): - break - - def _receive_stream_event( - self, request: Request, stream_id: int - ) -> typing.Union[ - h2.events.ResponseReceived, h2.events.DataReceived, h2.events.StreamEnded - ]: - """ - Return the next available event for a given stream ID. - - Will read more data from the network if required. - """ - while not self._events.get(stream_id): - self._receive_events(request, stream_id) - event = self._events[stream_id].pop(0) - if isinstance(event, h2.events.StreamReset): - raise RemoteProtocolError(event) - return event - - def _receive_events( - self, request: Request, stream_id: typing.Optional[int] = None - ) -> None: - """ - Read some data from the network until we see one or more events - for a given stream ID. - """ - with self._read_lock: - if self._connection_terminated is not None: - last_stream_id = self._connection_terminated.last_stream_id - if stream_id and last_stream_id and stream_id > last_stream_id: - self._request_count -= 1 - raise ConnectionNotAvailable() - raise RemoteProtocolError(self._connection_terminated) - - # This conditional is a bit icky. We don't want to block reading if we've - # actually got an event to return for a given stream. We need to do that - # check *within* the atomic read lock. Though it also need to be optional, - # because when we call it from `_wait_for_outgoing_flow` we *do* want to - # block until we've available flow control, event when we have events - # pending for the stream ID we're attempting to send on. 
- if stream_id is None or not self._events.get(stream_id): - events = self._read_incoming_data(request) - for event in events: - if isinstance(event, h2.events.RemoteSettingsChanged): - with Trace( - "receive_remote_settings", logger, request - ) as trace: - self._receive_remote_settings_change(event) - trace.return_value = event - - elif isinstance( - event, - ( - h2.events.ResponseReceived, - h2.events.DataReceived, - h2.events.StreamEnded, - h2.events.StreamReset, - ), - ): - if event.stream_id in self._events: - self._events[event.stream_id].append(event) - - elif isinstance(event, h2.events.ConnectionTerminated): - self._connection_terminated = event - - self._write_outgoing_data(request) - - def _receive_remote_settings_change(self, event: h2.events.Event) -> None: - max_concurrent_streams = event.changed_settings.get( - h2.settings.SettingCodes.MAX_CONCURRENT_STREAMS - ) - if max_concurrent_streams: - new_max_streams = min( - max_concurrent_streams.new_value, - self._h2_state.local_settings.max_concurrent_streams, - ) - if new_max_streams and new_max_streams != self._max_streams: - while new_max_streams > self._max_streams: - self._max_streams_semaphore.release() - self._max_streams += 1 - while new_max_streams < self._max_streams: - self._max_streams_semaphore.acquire() - self._max_streams -= 1 - - def _response_closed(self, stream_id: int) -> None: - self._max_streams_semaphore.release() - del self._events[stream_id] - with self._state_lock: - if self._connection_terminated and not self._events: - self.close() - - elif self._state == HTTPConnectionState.ACTIVE and not self._events: - self._state = HTTPConnectionState.IDLE - if self._keepalive_expiry is not None: - now = time.monotonic() - self._expire_at = now + self._keepalive_expiry - if self._used_all_stream_ids: # pragma: nocover - self.close() - - def close(self) -> None: - # Note that this method unilaterally closes the connection, and does - # not have any kind of locking in place around it. - self._h2_state.close_connection() - self._state = HTTPConnectionState.CLOSED - self._network_stream.close() - - # Wrappers around network read/write operations... - - def _read_incoming_data( - self, request: Request - ) -> typing.List[h2.events.Event]: - timeouts = request.extensions.get("timeout", {}) - timeout = timeouts.get("read", None) - - if self._read_exception is not None: - raise self._read_exception # pragma: nocover - - try: - data = self._network_stream.read(self.READ_NUM_BYTES, timeout) - if data == b"": - raise RemoteProtocolError("Server disconnected") - except Exception as exc: - # If we get a network error we should: - # - # 1. Save the exception and just raise it immediately on any future reads. - # (For example, this means that a single read timeout or disconnect will - # immediately close all pending streams. Without requiring multiple - # sequential timeouts.) - # 2. Mark the connection as errored, so that we don't accept any other - # incoming requests. 
- self._read_exception = exc - self._connection_error = True - raise exc - - events: typing.List[h2.events.Event] = self._h2_state.receive_data(data) - - return events - - def _write_outgoing_data(self, request: Request) -> None: - timeouts = request.extensions.get("timeout", {}) - timeout = timeouts.get("write", None) - - with self._write_lock: - data_to_send = self._h2_state.data_to_send() - - if self._write_exception is not None: - raise self._write_exception # pragma: nocover - - try: - self._network_stream.write(data_to_send, timeout) - except Exception as exc: # pragma: nocover - # If we get a network error we should: - # - # 1. Save the exception and just raise it immediately on any future write. - # (For example, this means that a single write timeout or disconnect will - # immediately close all pending streams. Without requiring multiple - # sequential timeouts.) - # 2. Mark the connection as errored, so that we don't accept any other - # incoming requests. - self._write_exception = exc - self._connection_error = True - raise exc - - # Flow control... - - def _wait_for_outgoing_flow(self, request: Request, stream_id: int) -> int: - """ - Returns the maximum allowable outgoing flow for a given stream. - - If the allowable flow is zero, then waits on the network until - WindowUpdated frames have increased the flow rate. - https://tools.ietf.org/html/rfc7540#section-6.9 - """ - local_flow: int = self._h2_state.local_flow_control_window(stream_id) - max_frame_size: int = self._h2_state.max_outbound_frame_size - flow = min(local_flow, max_frame_size) - while flow == 0: - self._receive_events(request) - local_flow = self._h2_state.local_flow_control_window(stream_id) - max_frame_size = self._h2_state.max_outbound_frame_size - flow = min(local_flow, max_frame_size) - return flow - - # Interface for connection pooling... - - def can_handle_request(self, origin: Origin) -> bool: - return origin == self._origin - - def is_available(self) -> bool: - return ( - self._state != HTTPConnectionState.CLOSED - and not self._connection_error - and not self._used_all_stream_ids - and not ( - self._h2_state.state_machine.state - == h2.connection.ConnectionState.CLOSED - ) - ) - - def has_expired(self) -> bool: - now = time.monotonic() - return self._expire_at is not None and now > self._expire_at - - def is_idle(self) -> bool: - return self._state == HTTPConnectionState.IDLE - - def is_closed(self) -> bool: - return self._state == HTTPConnectionState.CLOSED - - def info(self) -> str: - origin = str(self._origin) - return ( - f"{origin!r}, HTTP/2, {self._state.name}, " - f"Request Count: {self._request_count}" - ) - - def __repr__(self) -> str: - class_name = self.__class__.__name__ - origin = str(self._origin) - return ( - f"<{class_name} [{origin!r}, {self._state.name}, " - f"Request Count: {self._request_count}]>" - ) - - # These context managers are not used in the standard flow, but are - # useful for testing or working with connection instances directly. 
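# Illustrative sketch, assuming an `origin`, an established `network_stream`,
# and a `request` object constructed elsewhere (these names and the constructor
# arguments are assumptions for illustration, not a documented signature):
#
#     with HTTP2Connection(origin=origin, stream=network_stream) as conn:
#         response = conn.handle_request(request)
#
# Exiting the `with` block invokes __exit__, which unconditionally calls
# close(), shutting down the h2 state machine and the underlying network stream.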
- - def __enter__(self) -> "HTTP2Connection": - return self - - def __exit__( - self, - exc_type: typing.Optional[typing.Type[BaseException]] = None, - exc_value: typing.Optional[BaseException] = None, - traceback: typing.Optional[types.TracebackType] = None, - ) -> None: - self.close() - - -class HTTP2ConnectionByteStream: - def __init__( - self, connection: HTTP2Connection, request: Request, stream_id: int - ) -> None: - self._connection = connection - self._request = request - self._stream_id = stream_id - self._closed = False - - def __iter__(self) -> typing.Iterator[bytes]: - kwargs = {"request": self._request, "stream_id": self._stream_id} - try: - with Trace("receive_response_body", logger, self._request, kwargs): - for chunk in self._connection._receive_response_body( - request=self._request, stream_id=self._stream_id - ): - yield chunk - except BaseException as exc: - # If we get an exception while streaming the response, - # we want to close the response (and possibly the connection) - # before raising that exception. - with ShieldCancellation(): - self.close() - raise exc - - def close(self) -> None: - if not self._closed: - self._closed = True - kwargs = {"stream_id": self._stream_id} - with Trace("response_closed", logger, self._request, kwargs): - self._connection._response_closed(stream_id=self._stream_id) diff --git a/spaces/Detomo/ai-comic-generation/src/lib/pick.ts b/spaces/Detomo/ai-comic-generation/src/lib/pick.ts deleted file mode 100644 index 48dc2995f08d8c3774a9b7b35b808064313361a7..0000000000000000000000000000000000000000 --- a/spaces/Detomo/ai-comic-generation/src/lib/pick.ts +++ /dev/null @@ -1,2 +0,0 @@ - -export const pick = (items: string[]) => items[Math.floor(Math.random()*items.length)] diff --git a/spaces/EPFL-VILAB/MultiMAE/utils/dataset_regression.py b/spaces/EPFL-VILAB/MultiMAE/utils/dataset_regression.py deleted file mode 100644 index 9ff8749536e3b0d01dd24f4ec67434f1eddb9221..0000000000000000000000000000000000000000 --- a/spaces/EPFL-VILAB/MultiMAE/utils/dataset_regression.py +++ /dev/null @@ -1,136 +0,0 @@ -# Copyright (c) EPFL VILAB. -# All rights reserved. - -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. 
-# -------------------------------------------------------- -# Based on BEiT, timm, DINO, DeiT and MAE-priv code bases -# https://github.com/microsoft/unilm/tree/master/beit -# https://github.com/rwightman/pytorch-image-models/tree/master/timm -# https://github.com/facebookresearch/deit -# https://github.com/facebookresearch/dino -# https://github.com/BUPT-PRIV/MAE-priv -# -------------------------------------------------------- - -import numpy as np -import torch - -try: - import albumentations as A - from albumentations.pytorch import ToTensorV2 -except: - print('albumentations not installed') -# import cv2 -import torch.nn.functional as F - -from utils import (IMAGENET_DEFAULT_MEAN, IMAGENET_DEFAULT_STD, NYU_MEAN, - NYU_STD, PAD_MASK_VALUE) -from utils.dataset_folder import ImageFolder, MultiTaskImageFolder - - -def nyu_transform(train, additional_targets, input_size=512, color_aug=False): - if train: - augs = [ - A.SmallestMaxSize(max_size=input_size, p=1), - A.HorizontalFlip(p=0.5), - ] - if color_aug: augs += [ - # Color jittering from BYOL https://arxiv.org/pdf/2006.07733.pdf - A.ColorJitter( - brightness=0.1255, - contrast=0.4, - saturation=[0.5, 1.5], - hue=[-0.2, 0.2], - p=0.5 - ), - A.ToGray(p=0.3), - ] - augs += [ - A.RandomCrop(height=input_size, width=input_size, p=1), - A.Normalize(mean=IMAGENET_DEFAULT_MEAN, std=IMAGENET_DEFAULT_STD), - ToTensorV2(), - ] - - transform = A.Compose(augs, additional_targets=additional_targets) - - else: - transform = A.Compose([ - A.SmallestMaxSize(max_size=input_size, p=1), - A.CenterCrop(height=input_size, width=input_size), - A.Normalize(mean=IMAGENET_DEFAULT_MEAN, std=IMAGENET_DEFAULT_STD), - ToTensorV2(), - ], additional_targets=additional_targets) - - return transform - - -def simple_regression_transform(train, additional_targets, input_size=512, pad_value=(128, 128, 128), pad_mask_value=PAD_MASK_VALUE): - - if train: - transform = A.Compose([ - A.HorizontalFlip(p=0.5), - A.LongestMaxSize(max_size=input_size, p=1), - A.ColorJitter(brightness=0.4, contrast=0.4, saturation=0.2, hue=0.1, p=0.5), # Color jittering from MoCo-v3 / DINO - A.RandomScale(scale_limit=(0.1 - 1, 2.0 - 1), p=1), # This is LSJ (0.1, 2.0) - A.PadIfNeeded(min_height=input_size, min_width=input_size, - position=A.augmentations.PadIfNeeded.PositionType.TOP_LEFT, - border_mode=cv2.BORDER_CONSTANT, - value=pad_value, mask_value=pad_mask_value), - A.RandomCrop(height=input_size, width=input_size, p=1), - A.Normalize(mean=IMAGENET_DEFAULT_MEAN, std=IMAGENET_DEFAULT_STD), - ToTensorV2(), - ], additional_targets=additional_targets) - - else: - transform = A.Compose([ - A.LongestMaxSize(max_size=input_size, p=1), - A.PadIfNeeded(min_height=input_size, min_width=input_size, - position=A.augmentations.PadIfNeeded.PositionType.TOP_LEFT, - border_mode=cv2.BORDER_CONSTANT, - value=pad_value, mask_value=pad_mask_value), - A.Normalize(mean=IMAGENET_DEFAULT_MEAN, std=IMAGENET_DEFAULT_STD), - ToTensorV2(), - ], additional_targets=additional_targets) - - return transform - - -class DataAugmentationForRegression(object): - - def __init__(self, transform, mask_value=0.0): - self.transform = transform - self.mask_value = mask_value - - def __call__(self, task_dict): - - # Need to replace rgb key to image - task_dict['image'] = task_dict.pop('rgb') - # Convert to np.array - task_dict = {k: np.array(v) for k, v in task_dict.items()} - - task_dict = self.transform(**task_dict) - - task_dict['depth'] = (task_dict['depth'].float() - NYU_MEAN)/NYU_STD - - # And then replace it back to rgb - 
task_dict['rgb'] = task_dict.pop('image') - - task_dict['mask_valid'] = (task_dict['mask_valid'] == 255)[None] - - for task in task_dict: - if task in ['depth']: - img = task_dict[task] - if 'mask_valid' in task_dict: - mask_valid = task_dict['mask_valid'].squeeze() - img[~mask_valid] = self.mask_value - task_dict[task] = img.unsqueeze(0) - elif task in ['rgb']: - task_dict[task] = task_dict[task].to(torch.float) - - return task_dict - - -def build_regression_dataset(args, data_path, transform, max_images=None): - transform = DataAugmentationForRegression(transform=transform) - - return MultiTaskImageFolder(data_path, args.all_domains, transform=transform, prefixes=None, max_images=max_images) diff --git a/spaces/Egrt/GCycleGAN/nets/resnest/resnet.py b/spaces/Egrt/GCycleGAN/nets/resnest/resnet.py deleted file mode 100644 index 1ae6083a388cf3eb7b8a73197e13fb783fdce8fe..0000000000000000000000000000000000000000 --- a/spaces/Egrt/GCycleGAN/nets/resnest/resnet.py +++ /dev/null @@ -1,310 +0,0 @@ -##+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ -## Created by: Hang Zhang -## Email: zhanghang0704@gmail.com -## Copyright (c) 2020 -## -## LICENSE file in the root directory of this source tree -##+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ -"""ResNet variants""" -import math -import torch -import torch.nn as nn - -from .splat import SplAtConv2d - -__all__ = ['ResNet', 'Bottleneck'] - -class DropBlock2D(object): - def __init__(self, *args, **kwargs): - raise NotImplementedError - -class GlobalAvgPool2d(nn.Module): - def __init__(self): - """Global average pooling over the input's spatial dimensions""" - super(GlobalAvgPool2d, self).__init__() - - def forward(self, inputs): - return nn.functional.adaptive_avg_pool2d(inputs, 1).view(inputs.size(0), -1) - -class Bottleneck(nn.Module): - """ResNet Bottleneck - """ - # pylint: disable=unused-argument - expansion = 4 - def __init__(self, inplanes, planes, stride=1, downsample=None, - radix=1, cardinality=1, bottleneck_width=64, - avd=False, avd_first=False, dilation=1, is_first=False, - rectified_conv=False, rectify_avg=False, - norm_layer=None, dropblock_prob=0.0, last_gamma=False): - super(Bottleneck, self).__init__() - group_width = int(planes * (bottleneck_width / 64.)) * cardinality - self.conv1 = nn.Conv2d(inplanes, group_width, kernel_size=1, bias=False) - self.bn1 = norm_layer(group_width) - self.dropblock_prob = dropblock_prob - self.radix = radix - self.avd = avd and (stride > 1 or is_first) - self.avd_first = avd_first - - if self.avd: - self.avd_layer = nn.AvgPool2d(3, stride, padding=1) - stride = 1 - - if dropblock_prob > 0.0: - self.dropblock1 = DropBlock2D(dropblock_prob, 3) - if radix == 1: - self.dropblock2 = DropBlock2D(dropblock_prob, 3) - self.dropblock3 = DropBlock2D(dropblock_prob, 3) - - if radix >= 1: - self.conv2 = SplAtConv2d( - group_width, group_width, kernel_size=3, - stride=stride, padding=dilation, - dilation=dilation, groups=cardinality, bias=False, - radix=radix, rectify=rectified_conv, - rectify_avg=rectify_avg, - norm_layer=norm_layer, - dropblock_prob=dropblock_prob) - elif rectified_conv: - from rfconv import RFConv2d - self.conv2 = RFConv2d( - group_width, group_width, kernel_size=3, stride=stride, - padding=dilation, dilation=dilation, - groups=cardinality, bias=False, - average_mode=rectify_avg) - self.bn2 = norm_layer(group_width) - else: - self.conv2 = nn.Conv2d( - group_width, group_width, kernel_size=3, stride=stride, - padding=dilation, dilation=dilation, 
- groups=cardinality, bias=False) - self.bn2 = norm_layer(group_width) - - self.conv3 = nn.Conv2d( - group_width, planes * 4, kernel_size=1, bias=False) - self.bn3 = norm_layer(planes*4) - - if last_gamma: - from torch.nn.init import zeros_ - zeros_(self.bn3.weight) - self.relu = nn.ReLU(inplace=True) - self.downsample = downsample - self.dilation = dilation - self.stride = stride - - def forward(self, x): - residual = x - - out = self.conv1(x) - out = self.bn1(out) - if self.dropblock_prob > 0.0: - out = self.dropblock1(out) - out = self.relu(out) - - if self.avd and self.avd_first: - out = self.avd_layer(out) - - out = self.conv2(out) - if self.radix == 0: - out = self.bn2(out) - if self.dropblock_prob > 0.0: - out = self.dropblock2(out) - out = self.relu(out) - - if self.avd and not self.avd_first: - out = self.avd_layer(out) - - out = self.conv3(out) - out = self.bn3(out) - if self.dropblock_prob > 0.0: - out = self.dropblock3(out) - - if self.downsample is not None: - residual = self.downsample(x) - - out += residual - out = self.relu(out) - - return out - -class ResNet(nn.Module): - """ResNet Variants - - Parameters - ---------- - block : Block - Class for the residual block. Options are BasicBlockV1, BottleneckV1. - layers : list of int - Numbers of layers in each block - classes : int, default 1000 - Number of classification classes. - dilated : bool, default False - Applying dilation strategy to pretrained ResNet yielding a stride-8 model, - typically used in Semantic Segmentation. - norm_layer : object - Normalization layer used in backbone network (default: :class:`mxnet.gluon.nn.BatchNorm`; - for Synchronized Cross-GPU BachNormalization). - - Reference: - - - He, Kaiming, et al. "Deep residual learning for image recognition." Proceedings of the IEEE conference on computer vision and pattern recognition. 2016. - - - Yu, Fisher, and Vladlen Koltun. "Multi-scale context aggregation by dilated convolutions." 
- """ - # pylint: disable=unused-variable - def __init__(self, block, layers, radix=1, groups=1, bottleneck_width=64, - num_classes=1000, dilated=False, dilation=1, - deep_stem=False, stem_width=64, avg_down=False, - rectified_conv=False, rectify_avg=False, - avd=False, avd_first=False, - final_drop=0.0, dropblock_prob=0, - last_gamma=False, norm_layer=nn.BatchNorm2d): - self.cardinality = groups - self.bottleneck_width = bottleneck_width - # ResNet-D params - self.inplanes = stem_width*2 if deep_stem else 64 - self.avg_down = avg_down - self.last_gamma = last_gamma - # ResNeSt params - self.radix = radix - self.avd = avd - self.avd_first = avd_first - - super(ResNet, self).__init__() - self.rectified_conv = rectified_conv - self.rectify_avg = rectify_avg - if rectified_conv: - from rfconv import RFConv2d - conv_layer = RFConv2d - else: - conv_layer = nn.Conv2d - conv_kwargs = {'average_mode': rectify_avg} if rectified_conv else {} - ''' - if deep_stem: - self.conv1 = nn.Sequential( - conv_layer(3, stem_width, kernel_size=3, stride=2, padding=1, bias=False, **conv_kwargs), - norm_layer(stem_width), - nn.ReLU(inplace=True), - conv_layer(stem_width, stem_width, kernel_size=3, stride=1, padding=1, bias=False, **conv_kwargs), - norm_layer(stem_width), - nn.ReLU(inplace=True), - conv_layer(stem_width, stem_width*2, kernel_size=3, stride=1, padding=1, bias=False, **conv_kwargs), - ) - else: - self.conv1 = conv_layer(3, 64, kernel_size=7, stride=2, padding=3, - bias=False, **conv_kwargs) - self.bn1 = norm_layer(self.inplanes) - self.relu = nn.ReLU(inplace=True) - self.maxpool = nn.MaxPool2d(kernel_size=3, stride=2, padding=1) - ''' - #self.layer1 = self._make_layer(block, 64, layers[0], norm_layer=norm_layer, is_first=False) - self.layer1 = self._make_layer(block, 64, layers[0], stride=2, norm_layer=norm_layer, is_first=False) - self.layer2 = self._make_layer(block, 128, layers[1], stride=2, norm_layer=norm_layer) - if dilated or dilation == 4: - self.layer3 = self._make_layer(block, 256, layers[2], stride=1, - dilation=2, norm_layer=norm_layer, - dropblock_prob=dropblock_prob) - self.layer4 = self._make_layer(block, 512, layers[3], stride=1, - dilation=4, norm_layer=norm_layer, - dropblock_prob=dropblock_prob) - elif dilation==2: - self.layer3 = self._make_layer(block, 256, layers[2], stride=2, - dilation=1, norm_layer=norm_layer, - dropblock_prob=dropblock_prob) - self.layer4 = self._make_layer(block, 512, layers[3], stride=1, - dilation=2, norm_layer=norm_layer, - dropblock_prob=dropblock_prob) - else: - self.layer3 = self._make_layer(block, 256, layers[2], stride=2, - norm_layer=norm_layer, - dropblock_prob=dropblock_prob) - self.layer4 = self._make_layer(block, 512, layers[3], stride=2, - norm_layer=norm_layer, - dropblock_prob=dropblock_prob) - ''' - self.avgpool = GlobalAvgPool2d() - self.drop = nn.Dropout(final_drop) if final_drop > 0.0 else None - self.fc = nn.Linear(512 * block.expansion, num_classes) - - for m in self.modules(): - if isinstance(m, nn.Conv2d): - n = m.kernel_size[0] * m.kernel_size[1] * m.out_channels - m.weight.data.normal_(0, math.sqrt(2. 
/ n)) - elif isinstance(m, norm_layer): - m.weight.data.fill_(1) - m.bias.data.zero_() - ''' - def _make_layer(self, block, planes, blocks, stride=1, dilation=1, norm_layer=None, - dropblock_prob=0.0, is_first=True): - downsample = None - if stride != 1 or self.inplanes != planes * block.expansion: - down_layers = [] - if self.avg_down: - if dilation == 1: - down_layers.append(nn.AvgPool2d(kernel_size=stride, stride=stride, - ceil_mode=True, count_include_pad=False)) - else: - down_layers.append(nn.AvgPool2d(kernel_size=1, stride=1, - ceil_mode=True, count_include_pad=False)) - down_layers.append(nn.Conv2d(self.inplanes, planes * block.expansion, - kernel_size=1, stride=1, bias=False)) - else: - down_layers.append(nn.Conv2d(self.inplanes, planes * block.expansion, - kernel_size=1, stride=stride, bias=False)) - down_layers.append(norm_layer(planes * block.expansion)) - downsample = nn.Sequential(*down_layers) - - layers = [] - if dilation == 1 or dilation == 2: - layers.append(block(self.inplanes, planes, stride, downsample=downsample, - radix=self.radix, cardinality=self.cardinality, - bottleneck_width=self.bottleneck_width, - avd=self.avd, avd_first=self.avd_first, - dilation=1, is_first=is_first, rectified_conv=self.rectified_conv, - rectify_avg=self.rectify_avg, - norm_layer=norm_layer, dropblock_prob=dropblock_prob, - last_gamma=self.last_gamma)) - elif dilation == 4: - layers.append(block(self.inplanes, planes, stride, downsample=downsample, - radix=self.radix, cardinality=self.cardinality, - bottleneck_width=self.bottleneck_width, - avd=self.avd, avd_first=self.avd_first, - dilation=2, is_first=is_first, rectified_conv=self.rectified_conv, - rectify_avg=self.rectify_avg, - norm_layer=norm_layer, dropblock_prob=dropblock_prob, - last_gamma=self.last_gamma)) - else: - raise RuntimeError("=> unknown dilation size: {}".format(dilation)) - - self.inplanes = planes * block.expansion - for i in range(1, blocks): - layers.append(block(self.inplanes, planes, - radix=self.radix, cardinality=self.cardinality, - bottleneck_width=self.bottleneck_width, - avd=self.avd, avd_first=self.avd_first, - dilation=dilation, rectified_conv=self.rectified_conv, - rectify_avg=self.rectify_avg, - norm_layer=norm_layer, dropblock_prob=dropblock_prob, - last_gamma=self.last_gamma)) - - return nn.Sequential(*layers) - - def forward(self, x): - ''' - x = self.conv1(x) - x = self.bn1(x) - x = self.relu(x) - x = self.maxpool(x) - ''' - x = self.layer1(x) - x = self.layer2(x) - x = self.layer3(x) - x = self.layer4(x) - ''' - x = self.avgpool(x) - #x = x.view(x.size(0), -1) - x = torch.flatten(x, 1) - if self.drop: - x = self.drop(x) - x = self.fc(x) - ''' - return x diff --git a/spaces/EleutherAI/magma/magma/config.py b/spaces/EleutherAI/magma/magma/config.py deleted file mode 100644 index ff3cdb9335c3b6c0ac687c1495db45fe11471ef9..0000000000000000000000000000000000000000 --- a/spaces/EleutherAI/magma/magma/config.py +++ /dev/null @@ -1,144 +0,0 @@ -from dataclasses import dataclass, asdict -import yaml -from pprint import pprint -from .utils import is_main -import os -from pathlib import Path -import uuid - - -def load_config(path, config_dir=Path("configs")): - if not path.endswith(".yml"): - path += ".yml" - if not os.path.exists(path): - path = config_dir / path - with open(path, "r") as stream: - config = yaml.safe_load(stream) - return config - - -@dataclass -class MultimodalConfig: - - # Training: - # ------------------------------------------------------------ - - batch_size: int - train_steps: int - 
optimizer_name: str = "AdamW" - lr: float = 8.0e-4 - image_enc_lr: float = None - min_lr: float = 0.0 - lr_decay_iters: int = None - gradient_accumulation_steps: int = 1 - image_size: int = 256 - eval_every: int = 250 - eval_steps: int = 25 - zero_stage: int = 2 - gradient_clipping: float = 1.0 - warmup_num_steps: int = 100 - weight_decay: float = 0.00 - run_blind: bool = False - fine_tune: bool = False - load_optimizer: bool = True - - # Checkpointing: - # ------------------------------------------------------------ - save_every: int = 2500 - save: str = None - load: str = None - - # Data: - # ------------------------------------------------------------ - train_dataset_name: str = "conceptual_captions" - eval_dataset_name: str = "/data/conceptual_captions" - train_dataset_dir: str = "/data/coco_data" - eval_dataset_dir: str = "/data/coco_data" - eval_dataset_pct: float = 0.1 - - # Model architecture: - # ------------------------------------------------------------ - encoder_name: str = "clip" - tokenizer_name: str = "gpt2" - lm_name: str = "EleutherAI/gpt-j-6B" - image_seq_len: int = 2 - pretrained_img_encoder: bool = False - seq_len: int = None - - # Layer Freezing settings: - # ------------------------------------------------------------ - freeze_lm: bool = True - freeze_img_encoder: bool = True - - image_embed_dropout_prob: float = 0.0 - use_image_embed_layernorm: bool = False - - # Adapter settings: - # ------------------------------------------------------------ - adapter_config: dict = None - - # Classification Finetuning settings: - # ------------------------------------------------------------ - class_dict: dict = None # {num_classes: .., ckpt_path: .., classifier_type:, .., interface_type: .., interface_position: .., freeze_model: ..} - - # Logging settings: - # ------------------------------------------------------------ - name: str = None # name, just used for wandb logging - log_every: int = 1 - wandb_project: str = "magma" - - def print(self): - if is_main(): - print("-" * 100) - pprint(self.__dict__, indent=4) - print("-" * 100) - - def __post_init__(self): - self.is_classifier = self.class_dict is not None - if self.adapter_config is None: - self.adapter_config = {} - - # Deepspeed Settings: - # ------------------------------------------------------------ - if self.lr_decay_iters is None: - self.lr_scheduler = "WarmupLR" - self.scheduler_dict = { - "type": self.lr_scheduler, - "params": { - "warmup_min_lr": self.min_lr, - "warmup_max_lr": self.lr, - "warmup_num_steps": self.warmup_num_steps, - }, - } - else: - self.lr_scheduler = "WarmupDecayLR" - self.scheduler_dict = { - "type": self.lr_scheduler, - "params": { - "total_num_steps": self.lr_decay_iters, - "warmup_min_lr": self.min_lr, - "warmup_max_lr": self.lr, - "warmup_num_steps": self.warmup_num_steps, - }, - } - self.deepspeed_config_params = { - "train_batch_size": self.batch_size, - "gradient_accumulation_steps": self.gradient_accumulation_steps, - "gradient_clipping": self.gradient_clipping, - "fp16": {"enabled": True, "loss_scale_window": 250}, - "scheduler": self.scheduler_dict, - "zero_optimization": { - "stage": self.zero_stage, - "load_from_fp32_weights": False, - }, - } - - if self.name is None: - self.name = str(uuid.uuid4())[:8] - - @classmethod - def from_yml(cls, path): - return cls(**load_config(path)) - - def to_dict(self): - return asdict(self) diff --git a/spaces/EronSamez/RVC_HFmeu/Makefile b/spaces/EronSamez/RVC_HFmeu/Makefile deleted file mode 100644 index 
44de020e6feb7fcd58016d7c3c736681f533b597..0000000000000000000000000000000000000000 --- a/spaces/EronSamez/RVC_HFmeu/Makefile +++ /dev/null @@ -1,63 +0,0 @@ -.PHONY: -.ONESHELL: - -help: ## Show this help and exit - @grep -hE '^[A-Za-z0-9_ \-]*?:.*##.*$$' $(MAKEFILE_LIST) | sort | awk 'BEGIN {FS = ":.*?## "}; {printf "\033[36m%-30s\033[0m %s\n", $$1, $$2}' - -install: ## Install dependencies (Do everytime you start up a paperspace machine) - apt-get -y install build-essential python3-dev ffmpeg - pip install --upgrade setuptools wheel - pip install --upgrade pip - pip install faiss-gpu fairseq gradio ffmpeg ffmpeg-python praat-parselmouth pyworld numpy==1.23.5 numba==0.56.4 librosa==0.9.1 - pip install -r requirements.txt - pip install --upgrade lxml - apt-get update - apt -y install -qq aria2 - -basev1: ## Download version 1 pre-trained models (Do only once after cloning the fork) - mkdir -p pretrained uvr5_weights - git pull - aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained/D32k.pth -d pretrained -o D32k.pth - aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained/D40k.pth -d pretrained -o D40k.pth - aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained/D48k.pth -d pretrained -o D48k.pth - aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained/G32k.pth -d pretrained -o G32k.pth - aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained/G40k.pth -d pretrained -o G40k.pth - aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained/G48k.pth -d pretrained -o G48k.pth - aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained/f0D32k.pth -d pretrained -o f0D32k.pth - aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained/f0D40k.pth -d pretrained -o f0D40k.pth - aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained/f0D48k.pth -d pretrained -o f0D48k.pth - aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained/f0G32k.pth -d pretrained -o f0G32k.pth - aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained/f0G40k.pth -d pretrained -o f0G40k.pth - aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained/f0G48k.pth -d pretrained -o f0G48k.pth - aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/uvr5_weights/HP2-人声vocals+非人声instrumentals.pth -d uvr5_weights -o HP2-人声vocals+非人声instrumentals.pth - aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/uvr5_weights/HP5-主旋律人声vocals+其他instrumentals.pth -d uvr5_weights -o HP5-主旋律人声vocals+其他instrumentals.pth - aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/hubert_base.pt -d ./ -o 
hubert_base.pt - -basev2: ## Download version 2 pre-trained models (Do only once after cloning the fork) - mkdir -p pretrained_v2 uvr5_weights - git pull - aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained_v2/D32k.pth -d pretrained_v2 -o D32k.pth - aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained_v2/D40k.pth -d pretrained_v2 -o D40k.pth - aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained_v2/D48k.pth -d pretrained_v2 -o D48k.pth - aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained_v2/G32k.pth -d pretrained_v2 -o G32k.pth - aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained_v2/G40k.pth -d pretrained_v2 -o G40k.pth - aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained_v2/G48k.pth -d pretrained_v2 -o G48k.pth - aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained_v2/f0D32k.pth -d pretrained_v2 -o f0D32k.pth - aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained_v2/f0D40k.pth -d pretrained_v2 -o f0D40k.pth - aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained_v2/f0D48k.pth -d pretrained_v2 -o f0D48k.pth - aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained_v2/f0G32k.pth -d pretrained_v2 -o f0G32k.pth - aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained_v2/f0G40k.pth -d pretrained_v2 -o f0G40k.pth - aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained_v2/f0G48k.pth -d pretrained_v2 -o f0G48k.pth - aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/uvr5_weights/HP2-人声vocals+非人声instrumentals.pth -d uvr5_weights -o HP2-人声vocals+非人声instrumentals.pth - aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/uvr5_weights/HP5-主旋律人声vocals+其他instrumentals.pth -d uvr5_weights -o HP5-主旋律人声vocals+其他instrumentals.pth - aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/hubert_base.pt -d ./ -o hubert_base.pt - -run-ui: ## Run the python GUI - python infer-web.py --paperspace --pycmd python - -run-cli: ## Run the python CLI - python infer-web.py --pycmd python --is_cli - -tensorboard: ## Start the tensorboard (Run on separate terminal) - echo https://tensorboard-$$(hostname).clg07azjl.paperspacegradient.com - tensorboard --logdir logs --bind_all \ No newline at end of file diff --git a/spaces/EronSamez/RVC_HFmeu/infer/modules/train/extract/extract_f0_print.py b/spaces/EronSamez/RVC_HFmeu/infer/modules/train/extract/extract_f0_print.py deleted file mode 100644 index 14ef598d73b807974204664f100c828918199816..0000000000000000000000000000000000000000 --- 
a/spaces/EronSamez/RVC_HFmeu/infer/modules/train/extract/extract_f0_print.py +++ /dev/null @@ -1,298 +0,0 @@ -import os -import sys -import traceback - -import parselmouth - -now_dir = os.getcwd() -sys.path.append(now_dir) -import logging -from LazyImport import lazyload - -import numpy as np -import pyworld -torchcrepe = lazyload("torchcrepe") # Fork Feature. Crepe algo for training and preprocess -torch = lazyload("torch") -#from torch import Tensor # Fork Feature. Used for pitch prediction for torch crepe. -tqdm = lazyload("tqdm") -from infer.lib.audio import load_audio - -logging.getLogger("numba").setLevel(logging.WARNING) -from multiprocessing import Process - -exp_dir = sys.argv[1] -f = open("%s/extract_f0_feature.log" % exp_dir, "a+") - -DoFormant = False -Quefrency = 1.0 -Timbre = 1.0 - -def printt(strr): - print(strr) - f.write(f"{strr}\n") - f.flush() - - -n_p = int(sys.argv[2]) -f0method = sys.argv[3] -extraction_crepe_hop_length = 0 -try: - extraction_crepe_hop_length = int(sys.argv[4]) -except: - print("Temp Issue. echl is not being passed with argument!") - extraction_crepe_hop_length = 128 - -class FeatureInput(object): - def __init__(self, samplerate=16000, hop_size=160): - self.fs = samplerate - self.hop = hop_size - - self.f0_bin = 256 - self.f0_max = 1100.0 - self.f0_min = 50.0 - self.f0_mel_min = 1127 * np.log(1 + self.f0_min / 700) - self.f0_mel_max = 1127 * np.log(1 + self.f0_max / 700) - - def mncrepe(self, method, x, p_len, crepe_hop_length): - f0 = None - torch_device_index = 0 - torch_device = torch.device( - f"cuda:{torch_device_index % torch.cuda.device_count()}" - ) if torch.cuda.is_available() \ - else torch.device("mps") if torch.backends.mps.is_available() \ - else torch.device("cpu") - - audio = torch.from_numpy(x.astype(np.float32)).to(torch_device, copy=True) - audio /= torch.quantile(torch.abs(audio), 0.999) - audio = torch.unsqueeze(audio, dim=0) - if audio.ndim == 2 and audio.shape[0] > 1: - audio = torch.mean(audio, dim=0, keepdim=True).detach() - audio = audio.detach() - - if method == 'mangio-crepe': - pitch: torch.Tensor = torchcrepe.predict( - audio, - self.fs, - crepe_hop_length, - self.f0_min, - self.f0_max, - "full", - batch_size=crepe_hop_length * 2, - device=torch_device, - pad=True, - ) - p_len = p_len or x.shape[0] // crepe_hop_length - # Resize the pitch - source = np.array(pitch.squeeze(0).cpu().float().numpy()) - source[source < 0.001] = np.nan - target = np.interp( - np.arange(0, len(source) * p_len, len(source)) / p_len, - np.arange(0, len(source)), - source, - ) - f0 = np.nan_to_num(target) - - elif method == 'crepe': - batch_size = 512 - audio = torch.tensor(np.copy(x))[None].float() - f0, pd = torchcrepe.predict( - audio, - self.fs, - 160, - self.f0_min, - self.f0_max, - "full", - batch_size=batch_size, - device=torch_device, - return_periodicity=True, - ) - pd = torchcrepe.filter.median(pd, 3) - f0 = torchcrepe.filter.mean(f0, 3) - f0[pd < 0.1] = 0 - f0 = f0[0].cpu().numpy() - f0 = f0[1:] # Get rid of extra first frame - - return f0 - - def get_pm(self, x, p_len): - f0 = parselmouth.Sound(x, self.fs).to_pitch_ac( - time_step=160 / 16000, - voicing_threshold=0.6, - pitch_floor=self.f0_min, - pitch_ceiling=self.f0_max, - ).selected_array["frequency"] - - return np.pad( - f0, - [[max(0, (p_len - len(f0) + 1) // 2), max(0, p_len - len(f0) - (p_len - len(f0) + 1) // 2)]], - mode="constant" - ) - - def get_harvest(self, x): - f0_spectral = pyworld.harvest( - x.astype(np.double), - fs=self.fs, - f0_ceil=self.f0_max, - 
f0_floor=self.f0_min, - frame_period=1000 * self.hop / self.fs, - ) - return pyworld.stonemask(x.astype(np.double), *f0_spectral, self.fs) - - def get_dio(self, x): - f0_spectral = pyworld.dio( - x.astype(np.double), - fs=self.fs, - f0_ceil=self.f0_max, - f0_floor=self.f0_min, - frame_period=1000 * self.hop / self.fs, - ) - return pyworld.stonemask(x.astype(np.double), *f0_spectral, self.fs) - - def get_rmvpe(self, x): - if hasattr(self, "model_rmvpe") == False: - from infer.lib.rmvpe import RMVPE - - print("Loading rmvpe model") - self.model_rmvpe = RMVPE( - "assets/rmvpe/rmvpe.pt", is_half=False, device="cpu" - ) - return self.model_rmvpe.infer_from_audio(x, thred=0.03) - - def get_rmvpe_dml(self, x): - ... - - def get_f0_method_dict(self): - return { - "pm": self.get_pm, - "harvest": self.get_harvest, - "dio": self.get_dio, - "rmvpe": self.get_rmvpe - } - - def get_f0_hybrid_computation( - self, - methods_str, - x, - p_len, - crepe_hop_length, - ): - # Get various f0 methods from input to use in the computation stack - s = methods_str - s = s.split("hybrid")[1] - s = s.replace("[", "").replace("]", "") - methods = s.split("+") - f0_computation_stack = [] - - for method in methods: - if method in self.f0_method_dict: - f0 = self.f0_method_dict[method](x, p_len) if method == 'pm' else self.f0_method_dict[method](x) - f0_computation_stack.append(f0) - elif method == 'crepe' or method == 'mangio-crepe': - self.the_other_complex_function(x, method, crepe_hop_length) - - if len(f0_computation_stack) != 0: - f0_median_hybrid = np.nanmedian(f0_computation_stack, axis=0) if len(f0_computation_stack)>1 else f0_computation_stack[0] - return f0_median_hybrid - else: - raise ValueError("No valid methods were provided") - - def compute_f0(self, path, f0_method, crepe_hop_length): - x = load_audio(path, self.fs, DoFormant, Quefrency, Timbre) - p_len = x.shape[0] // self.hop - - if f0_method in self.f0_method_dict: - f0 = self.f0_method_dict[f0_method](x, p_len) if f0_method == 'pm' else self.f0_method_dict[f0_method](x) - elif f0_method in ['crepe', 'mangio-crepe']: - f0 = self.mncrepe(f0_method, x, p_len, crepe_hop_length) - elif "hybrid" in f0_method: # EXPERIMENTAL - # Perform hybrid median pitch estimation - f0 = self.get_f0_hybrid_computation( - f0_method, - x, - p_len, - crepe_hop_length, - ) - return f0 - - def coarse_f0(self, f0): - f0_mel = 1127 * np.log(1 + f0 / 700) - f0_mel[f0_mel > 0] = (f0_mel[f0_mel > 0] - self.f0_mel_min) * ( - self.f0_bin - 2 - ) / (self.f0_mel_max - self.f0_mel_min) + 1 - - # use 0 or 1 - f0_mel[f0_mel <= 1] = 1 - f0_mel[f0_mel > self.f0_bin - 1] = self.f0_bin - 1 - f0_coarse = np.rint(f0_mel).astype(int) - assert f0_coarse.max() <= 255 and f0_coarse.min() >= 1, ( - f0_coarse.max(), - f0_coarse.min(), - ) - return f0_coarse - - def go(self, paths, f0_method, crepe_hop_length, thread_n): - if len(paths) == 0: - printt("no-f0-todo") - return - with tqdm.tqdm(total=len(paths), leave=True, position=thread_n) as pbar: - description = f"thread:{thread_n}, f0ing, Hop-Length:{crepe_hop_length}" - pbar.set_description(description) - - for idx, (inp_path, opt_path1, opt_path2) in enumerate(paths): - try: - if ( - os.path.exists(opt_path1 + ".npy") - and os.path.exists(opt_path2 + ".npy") - ): - pbar.update(1) - continue - - featur_pit = self.compute_f0(inp_path, f0_method, crepe_hop_length) - np.save( - opt_path2, - featur_pit, - allow_pickle=False, - ) # nsf - coarse_pit = self.coarse_f0(featur_pit) - np.save( - opt_path1, - coarse_pit, - allow_pickle=False, - ) # ori - 
pbar.update(1) - except Exception as e: - printt(f"f0fail-{idx}-{inp_path}-{traceback.format_exc()}") - - -if __name__ == "__main__": - # exp_dir=r"E:\codes\py39\dataset\mi-test" - # n_p=16 - # f = open("%s/log_extract_f0.log"%exp_dir, "w") - printt(sys.argv) - featureInput = FeatureInput() - paths = [] - inp_root = "%s/1_16k_wavs" % (exp_dir) - opt_root1 = "%s/2a_f0" % (exp_dir) - opt_root2 = "%s/2b-f0nsf" % (exp_dir) - - os.makedirs(opt_root1, exist_ok=True) - os.makedirs(opt_root2, exist_ok=True) - for name in sorted(list(os.listdir(inp_root))): - inp_path = "%s/%s" % (inp_root, name) - if "spec" in inp_path: - continue - opt_path1 = "%s/%s" % (opt_root1, name) - opt_path2 = "%s/%s" % (opt_root2, name) - paths.append([inp_path, opt_path1, opt_path2]) - - ps = [] - print("Using f0 method: " + f0method) - for i in range(n_p): - p = Process( - target=featureInput.go, - args=(paths[i::n_p], f0method, extraction_crepe_hop_length, i), - ) - ps.append(p) - p.start() - for i in range(n_p): - ps[i].join() \ No newline at end of file diff --git a/spaces/FelixLuoX/codeformer/CodeFormer/basicsr/metrics/psnr_ssim.py b/spaces/FelixLuoX/codeformer/CodeFormer/basicsr/metrics/psnr_ssim.py deleted file mode 100644 index bbd950699c2495880236883861d9e199f900eae8..0000000000000000000000000000000000000000 --- a/spaces/FelixLuoX/codeformer/CodeFormer/basicsr/metrics/psnr_ssim.py +++ /dev/null @@ -1,128 +0,0 @@ -import cv2 -import numpy as np - -from basicsr.metrics.metric_util import reorder_image, to_y_channel -from basicsr.utils.registry import METRIC_REGISTRY - - -@METRIC_REGISTRY.register() -def calculate_psnr(img1, img2, crop_border, input_order='HWC', test_y_channel=False): - """Calculate PSNR (Peak Signal-to-Noise Ratio). - - Ref: https://en.wikipedia.org/wiki/Peak_signal-to-noise_ratio - - Args: - img1 (ndarray): Images with range [0, 255]. - img2 (ndarray): Images with range [0, 255]. - crop_border (int): Cropped pixels in each edge of an image. These - pixels are not involved in the PSNR calculation. - input_order (str): Whether the input order is 'HWC' or 'CHW'. - Default: 'HWC'. - test_y_channel (bool): Test on Y channel of YCbCr. Default: False. - - Returns: - float: psnr result. - """ - - assert img1.shape == img2.shape, (f'Image shapes are differnet: {img1.shape}, {img2.shape}.') - if input_order not in ['HWC', 'CHW']: - raise ValueError(f'Wrong input_order {input_order}. Supported input_orders are ' '"HWC" and "CHW"') - img1 = reorder_image(img1, input_order=input_order) - img2 = reorder_image(img2, input_order=input_order) - img1 = img1.astype(np.float64) - img2 = img2.astype(np.float64) - - if crop_border != 0: - img1 = img1[crop_border:-crop_border, crop_border:-crop_border, ...] - img2 = img2[crop_border:-crop_border, crop_border:-crop_border, ...] - - if test_y_channel: - img1 = to_y_channel(img1) - img2 = to_y_channel(img2) - - mse = np.mean((img1 - img2)**2) - if mse == 0: - return float('inf') - return 20. * np.log10(255. / np.sqrt(mse)) - - -def _ssim(img1, img2): - """Calculate SSIM (structural similarity) for one channel images. - - It is called by func:`calculate_ssim`. - - Args: - img1 (ndarray): Images with range [0, 255] with order 'HWC'. - img2 (ndarray): Images with range [0, 255] with order 'HWC'. - - Returns: - float: ssim result. 
- """ - - C1 = (0.01 * 255)**2 - C2 = (0.03 * 255)**2 - - img1 = img1.astype(np.float64) - img2 = img2.astype(np.float64) - kernel = cv2.getGaussianKernel(11, 1.5) - window = np.outer(kernel, kernel.transpose()) - - mu1 = cv2.filter2D(img1, -1, window)[5:-5, 5:-5] - mu2 = cv2.filter2D(img2, -1, window)[5:-5, 5:-5] - mu1_sq = mu1**2 - mu2_sq = mu2**2 - mu1_mu2 = mu1 * mu2 - sigma1_sq = cv2.filter2D(img1**2, -1, window)[5:-5, 5:-5] - mu1_sq - sigma2_sq = cv2.filter2D(img2**2, -1, window)[5:-5, 5:-5] - mu2_sq - sigma12 = cv2.filter2D(img1 * img2, -1, window)[5:-5, 5:-5] - mu1_mu2 - - ssim_map = ((2 * mu1_mu2 + C1) * (2 * sigma12 + C2)) / ((mu1_sq + mu2_sq + C1) * (sigma1_sq + sigma2_sq + C2)) - return ssim_map.mean() - - -@METRIC_REGISTRY.register() -def calculate_ssim(img1, img2, crop_border, input_order='HWC', test_y_channel=False): - """Calculate SSIM (structural similarity). - - Ref: - Image quality assessment: From error visibility to structural similarity - - The results are the same as that of the official released MATLAB code in - https://ece.uwaterloo.ca/~z70wang/research/ssim/. - - For three-channel images, SSIM is calculated for each channel and then - averaged. - - Args: - img1 (ndarray): Images with range [0, 255]. - img2 (ndarray): Images with range [0, 255]. - crop_border (int): Cropped pixels in each edge of an image. These - pixels are not involved in the SSIM calculation. - input_order (str): Whether the input order is 'HWC' or 'CHW'. - Default: 'HWC'. - test_y_channel (bool): Test on Y channel of YCbCr. Default: False. - - Returns: - float: ssim result. - """ - - assert img1.shape == img2.shape, (f'Image shapes are differnet: {img1.shape}, {img2.shape}.') - if input_order not in ['HWC', 'CHW']: - raise ValueError(f'Wrong input_order {input_order}. Supported input_orders are ' '"HWC" and "CHW"') - img1 = reorder_image(img1, input_order=input_order) - img2 = reorder_image(img2, input_order=input_order) - img1 = img1.astype(np.float64) - img2 = img2.astype(np.float64) - - if crop_border != 0: - img1 = img1[crop_border:-crop_border, crop_border:-crop_border, ...] - img2 = img2[crop_border:-crop_border, crop_border:-crop_border, ...] 
- - if test_y_channel: - img1 = to_y_channel(img1) - img2 = to_y_channel(img2) - - ssims = [] - for i in range(img1.shape[2]): - ssims.append(_ssim(img1[..., i], img2[..., i])) - return np.array(ssims).mean() diff --git "a/spaces/Fengbinbin/gpt-academic/crazy_functions/\350\260\267\346\255\214\346\243\200\347\264\242\345\260\217\345\212\251\346\211\213.py" "b/spaces/Fengbinbin/gpt-academic/crazy_functions/\350\260\267\346\255\214\346\243\200\347\264\242\345\260\217\345\212\251\346\211\213.py" deleted file mode 100644 index 834f0799e1dca6328454ca7ec8eaa29b6a167199..0000000000000000000000000000000000000000 --- "a/spaces/Fengbinbin/gpt-academic/crazy_functions/\350\260\267\346\255\214\346\243\200\347\264\242\345\260\217\345\212\251\346\211\213.py" +++ /dev/null @@ -1,108 +0,0 @@ -from .crazy_utils import request_gpt_model_in_new_thread_with_ui_alive -from toolbox import CatchException, report_execption, write_results_to_file -from toolbox import update_ui - -def get_meta_information(url, chatbot, history): - import requests - import arxiv - import difflib - from bs4 import BeautifulSoup - from toolbox import get_conf - proxies, = get_conf('proxies') - headers = { - 'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/105.0.0.0 Safari/537.36', - } - # 发送 GET 请求 - response = requests.get(url, proxies=proxies, headers=headers) - - # 解析网页内容 - soup = BeautifulSoup(response.text, "html.parser") - - def string_similar(s1, s2): - return difflib.SequenceMatcher(None, s1, s2).quick_ratio() - - profile = [] - # 获取所有文章的标题和作者 - for result in soup.select(".gs_ri"): - title = result.a.text.replace('\n', ' ').replace(' ', ' ') - author = result.select_one(".gs_a").text - try: - citation = result.select_one(".gs_fl > a[href*='cites']").text # 引用次数是链接中的文本,直接取出来 - except: - citation = 'cited by 0' - abstract = result.select_one(".gs_rs").text.strip() # 摘要在 .gs_rs 中的文本,需要清除首尾空格 - search = arxiv.Search( - query = title, - max_results = 1, - sort_by = arxiv.SortCriterion.Relevance, - ) - paper = next(search.results()) - if string_similar(title, paper.title) > 0.90: # same paper - abstract = paper.summary.replace('\n', ' ') - is_paper_in_arxiv = True - else: # different paper - abstract = abstract - is_paper_in_arxiv = False - paper = next(search.results()) - print(title) - print(author) - print(citation) - profile.append({ - 'title':title, - 'author':author, - 'citation':citation, - 'abstract':abstract, - 'is_paper_in_arxiv':is_paper_in_arxiv, - }) - - chatbot[-1] = [chatbot[-1][0], title + f'\n\n是否在arxiv中(不在arxiv中无法获取完整摘要):{is_paper_in_arxiv}\n\n' + abstract] - yield from update_ui(chatbot=chatbot, history=[]) # 刷新界面 - return profile - -@CatchException -def 谷歌检索小助手(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port): - # 基本信息:功能、贡献者 - chatbot.append([ - "函数插件功能?", - "分析用户提供的谷歌学术(google scholar)搜索页面中,出现的所有文章: binary-husky,插件初始化中..."]) - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - - # 尝试导入依赖,如果缺少依赖,则给出安装建议 - try: - import arxiv - import math - from bs4 import BeautifulSoup - except: - report_execption(chatbot, history, - a = f"解析项目: {txt}", - b = f"导入软件依赖失败。使用该模块需要额外依赖,安装方法```pip install --upgrade beautifulsoup4 arxiv```。") - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - return - - # 清空历史,以免输入溢出 - history = [] - meta_paper_info_list = yield from get_meta_information(txt, chatbot, history) - batchsize = 5 - for batch in range(math.ceil(len(meta_paper_info_list)/batchsize)): - if 
len(meta_paper_info_list[:batchsize]) > 0: - i_say = "下面是一些学术文献的数据,提取出以下内容:" + \ - "1、英文题目;2、中文题目翻译;3、作者;4、arxiv公开(is_paper_in_arxiv);4、引用数量(cite);5、中文摘要翻译。" + \ - f"以下是信息源:{str(meta_paper_info_list[:batchsize])}" - - inputs_show_user = f"请分析此页面中出现的所有文章:{txt},这是第{batch+1}批" - gpt_say = yield from request_gpt_model_in_new_thread_with_ui_alive( - inputs=i_say, inputs_show_user=inputs_show_user, - llm_kwargs=llm_kwargs, chatbot=chatbot, history=[], - sys_prompt="你是一个学术翻译,请从数据中提取信息。你必须使用Markdown表格。你必须逐个文献进行处理。" - ) - - history.extend([ f"第{batch+1}批", gpt_say ]) - meta_paper_info_list = meta_paper_info_list[batchsize:] - - chatbot.append(["状态?", - "已经全部完成,您可以试试让AI写一个Related Works,例如您可以继续输入Write an academic \"Related Works\" section about \"你搜索的研究领域\" for me."]) - msg = '正常' - yield from update_ui(chatbot=chatbot, history=history, msg=msg) # 刷新界面 - res = write_results_to_file(history) - chatbot.append(("完成了吗?", res)); - yield from update_ui(chatbot=chatbot, history=history, msg=msg) # 刷新界面 diff --git a/spaces/GT-RIPL/GPT-K/model/eva_vit.py b/spaces/GT-RIPL/GPT-K/model/eva_vit.py deleted file mode 100644 index 5680876d9b8227653ba93be0d0918485bd59495c..0000000000000000000000000000000000000000 --- a/spaces/GT-RIPL/GPT-K/model/eva_vit.py +++ /dev/null @@ -1,434 +0,0 @@ -# Based on EVA, BEIT, timm and DeiT code bases -# https://github.com/baaivision/EVA -# https://github.com/rwightman/pytorch-image-models/tree/master/timm -# https://github.com/microsoft/unilm/tree/master/beit -# https://github.com/facebookresearch/deit/ -# https://github.com/facebookresearch/dino -# --------------------------------------------------------' -import math -from functools import partial - -import torch -import torch.nn as nn -import torch.nn.functional as F -import torch.utils.checkpoint as checkpoint -from timm.models.layers import drop_path, to_2tuple, trunc_normal_ - -import sys -sys.path.append("./") -from model.utils import download_cached_file - - -class DropPath(nn.Module): - """Drop paths (Stochastic Depth) per sample (when applied in main path of residual blocks). 
- """ - def __init__(self, drop_prob=None): - super(DropPath, self).__init__() - self.drop_prob = drop_prob - - def forward(self, x): - return drop_path(x, self.drop_prob, self.training) - - def extra_repr(self) -> str: - return 'p={}'.format(self.drop_prob) - - -class Mlp(nn.Module): - def __init__(self, in_features, hidden_features=None, out_features=None, act_layer=nn.GELU, drop=0.): - super().__init__() - out_features = out_features or in_features - hidden_features = hidden_features or in_features - self.fc1 = nn.Linear(in_features, hidden_features) - self.act = act_layer() - self.fc2 = nn.Linear(hidden_features, out_features) - self.drop = nn.Dropout(drop) - - def forward(self, x): - x = self.fc1(x) - x = self.act(x) - # x = self.drop(x) - # commit this for the orignal BERT implement - x = self.fc2(x) - x = self.drop(x) - return x - - -class Attention(nn.Module): - def __init__( - self, dim, num_heads=8, qkv_bias=False, qk_scale=None, attn_drop=0., - proj_drop=0., window_size=None, attn_head_dim=None): - super().__init__() - self.num_heads = num_heads - head_dim = dim // num_heads - if attn_head_dim is not None: - head_dim = attn_head_dim - all_head_dim = head_dim * self.num_heads - self.scale = qk_scale or head_dim ** -0.5 - - self.qkv = nn.Linear(dim, all_head_dim * 3, bias=False) - if qkv_bias: - self.q_bias = nn.Parameter(torch.zeros(all_head_dim)) - self.v_bias = nn.Parameter(torch.zeros(all_head_dim)) - else: - self.q_bias = None - self.v_bias = None - - if window_size: - self.window_size = window_size - self.num_relative_distance = (2 * window_size[0] - 1) * (2 * window_size[1] - 1) + 3 - self.relative_position_bias_table = nn.Parameter( - torch.zeros(self.num_relative_distance, num_heads)) # 2*Wh-1 * 2*Ww-1, nH - # cls to token & token 2 cls & cls to cls - - # get pair-wise relative position index for each token inside the window - coords_h = torch.arange(window_size[0]) - coords_w = torch.arange(window_size[1]) - coords = torch.stack(torch.meshgrid([coords_h, coords_w])) # 2, Wh, Ww - coords_flatten = torch.flatten(coords, 1) # 2, Wh*Ww - relative_coords = coords_flatten[:, :, None] - coords_flatten[:, None, :] # 2, Wh*Ww, Wh*Ww - relative_coords = relative_coords.permute(1, 2, 0).contiguous() # Wh*Ww, Wh*Ww, 2 - relative_coords[:, :, 0] += window_size[0] - 1 # shift to start from 0 - relative_coords[:, :, 1] += window_size[1] - 1 - relative_coords[:, :, 0] *= 2 * window_size[1] - 1 - relative_position_index = \ - torch.zeros(size=(window_size[0] * window_size[1] + 1, ) * 2, dtype=relative_coords.dtype) - relative_position_index[1:, 1:] = relative_coords.sum(-1) # Wh*Ww, Wh*Ww - relative_position_index[0, 0:] = self.num_relative_distance - 3 - relative_position_index[0:, 0] = self.num_relative_distance - 2 - relative_position_index[0, 0] = self.num_relative_distance - 1 - - self.register_buffer("relative_position_index", relative_position_index) - else: - self.window_size = None - self.relative_position_bias_table = None - self.relative_position_index = None - - self.attn_drop = nn.Dropout(attn_drop) - self.proj = nn.Linear(all_head_dim, dim) - self.proj_drop = nn.Dropout(proj_drop) - - def forward(self, x, rel_pos_bias=None): - B, N, C = x.shape - qkv_bias = None - if self.q_bias is not None: - qkv_bias = torch.cat((self.q_bias, torch.zeros_like(self.v_bias, requires_grad=False), self.v_bias)) - # qkv = self.qkv(x).reshape(B, N, 3, self.num_heads, C // self.num_heads).permute(2, 0, 3, 1, 4) - qkv = F.linear(input=x, weight=self.qkv.weight, bias=qkv_bias) - qkv = 
qkv.reshape(B, N, 3, self.num_heads, -1).permute(2, 0, 3, 1, 4) - q, k, v = qkv[0], qkv[1], qkv[2] # make torchscript happy (cannot use tensor as tuple) - - q = q * self.scale - attn = (q @ k.transpose(-2, -1)) - - if self.relative_position_bias_table is not None: - relative_position_bias = \ - self.relative_position_bias_table[self.relative_position_index.view(-1)].view( - self.window_size[0] * self.window_size[1] + 1, - self.window_size[0] * self.window_size[1] + 1, -1) # Wh*Ww,Wh*Ww,nH - relative_position_bias = relative_position_bias.permute(2, 0, 1).contiguous() # nH, Wh*Ww, Wh*Ww - attn = attn + relative_position_bias.unsqueeze(0) - - if rel_pos_bias is not None: - attn = attn + rel_pos_bias - - attn = attn.softmax(dim=-1) - attn = self.attn_drop(attn) - - x = (attn @ v).transpose(1, 2).reshape(B, N, -1) - x = self.proj(x) - x = self.proj_drop(x) - return x - - -class Block(nn.Module): - - def __init__(self, dim, num_heads, mlp_ratio=4., qkv_bias=False, qk_scale=None, drop=0., attn_drop=0., - drop_path=0., init_values=None, act_layer=nn.GELU, norm_layer=nn.LayerNorm, - window_size=None, attn_head_dim=None): - super().__init__() - self.norm1 = norm_layer(dim) - self.attn = Attention( - dim, num_heads=num_heads, qkv_bias=qkv_bias, qk_scale=qk_scale, - attn_drop=attn_drop, proj_drop=drop, window_size=window_size, attn_head_dim=attn_head_dim) - # NOTE: drop path for stochastic depth, we shall see if this is better than dropout here - self.drop_path = DropPath(drop_path) if drop_path > 0. else nn.Identity() - self.norm2 = norm_layer(dim) - mlp_hidden_dim = int(dim * mlp_ratio) - self.mlp = Mlp(in_features=dim, hidden_features=mlp_hidden_dim, act_layer=act_layer, drop=drop) - - if init_values is not None and init_values > 0: - self.gamma_1 = nn.Parameter(init_values * torch.ones((dim)),requires_grad=True) - self.gamma_2 = nn.Parameter(init_values * torch.ones((dim)),requires_grad=True) - else: - self.gamma_1, self.gamma_2 = None, None - - def forward(self, x, rel_pos_bias=None): - if self.gamma_1 is None: - x = x + self.drop_path(self.attn(self.norm1(x), rel_pos_bias=rel_pos_bias)) - x = x + self.drop_path(self.mlp(self.norm2(x))) - else: - x = x + self.drop_path(self.gamma_1 * self.attn(self.norm1(x), rel_pos_bias=rel_pos_bias)) - x = x + self.drop_path(self.gamma_2 * self.mlp(self.norm2(x))) - return x - - -class PatchEmbed(nn.Module): - """ Image to Patch Embedding - """ - def __init__(self, img_size=224, patch_size=16, in_chans=3, embed_dim=768): - super().__init__() - img_size = to_2tuple(img_size) - patch_size = to_2tuple(patch_size) - num_patches = (img_size[1] // patch_size[1]) * (img_size[0] // patch_size[0]) - self.patch_shape = (img_size[0] // patch_size[0], img_size[1] // patch_size[1]) - self.img_size = img_size - self.patch_size = patch_size - self.num_patches = num_patches - - self.proj = nn.Conv2d(in_chans, embed_dim, kernel_size=patch_size, stride=patch_size) - - def forward(self, x, **kwargs): - B, C, H, W = x.shape - # FIXME look at relaxing size constraints - assert H == self.img_size[0] and W == self.img_size[1], \ - f"Input image size ({H}*{W}) doesn't match model ({self.img_size[0]}*{self.img_size[1]})." 
- x = self.proj(x).flatten(2).transpose(1, 2) - return x - - -class RelativePositionBias(nn.Module): - - def __init__(self, window_size, num_heads): - super().__init__() - self.window_size = window_size - self.num_relative_distance = (2 * window_size[0] - 1) * (2 * window_size[1] - 1) + 3 - self.relative_position_bias_table = nn.Parameter( - torch.zeros(self.num_relative_distance, num_heads)) # 2*Wh-1 * 2*Ww-1, nH - # cls to token & token 2 cls & cls to cls - - # get pair-wise relative position index for each token inside the window - coords_h = torch.arange(window_size[0]) - coords_w = torch.arange(window_size[1]) - coords = torch.stack(torch.meshgrid([coords_h, coords_w])) # 2, Wh, Ww - coords_flatten = torch.flatten(coords, 1) # 2, Wh*Ww - relative_coords = coords_flatten[:, :, None] - coords_flatten[:, None, :] # 2, Wh*Ww, Wh*Ww - relative_coords = relative_coords.permute(1, 2, 0).contiguous() # Wh*Ww, Wh*Ww, 2 - relative_coords[:, :, 0] += window_size[0] - 1 # shift to start from 0 - relative_coords[:, :, 1] += window_size[1] - 1 - relative_coords[:, :, 0] *= 2 * window_size[1] - 1 - relative_position_index = \ - torch.zeros(size=(window_size[0] * window_size[1] + 1,) * 2, dtype=relative_coords.dtype) - relative_position_index[1:, 1:] = relative_coords.sum(-1) # Wh*Ww, Wh*Ww - relative_position_index[0, 0:] = self.num_relative_distance - 3 - relative_position_index[0:, 0] = self.num_relative_distance - 2 - relative_position_index[0, 0] = self.num_relative_distance - 1 - - self.register_buffer("relative_position_index", relative_position_index) - - # trunc_normal_(self.relative_position_bias_table, std=.02) - - def forward(self): - relative_position_bias = \ - self.relative_position_bias_table[self.relative_position_index.view(-1)].view( - self.window_size[0] * self.window_size[1] + 1, - self.window_size[0] * self.window_size[1] + 1, -1) # Wh*Ww,Wh*Ww,nH - return relative_position_bias.permute(2, 0, 1).contiguous() # nH, Wh*Ww, Wh*Ww - - -class VisionTransformer(nn.Module): - """ Vision Transformer with support for patch or hybrid CNN input stage - """ - def __init__(self, img_size=224, patch_size=16, in_chans=3, num_classes=1000, embed_dim=768, depth=12, - num_heads=12, mlp_ratio=4., qkv_bias=False, qk_scale=None, drop_rate=0., attn_drop_rate=0., - drop_path_rate=0., norm_layer=nn.LayerNorm, init_values=None, - use_abs_pos_emb=True, use_rel_pos_bias=False, use_shared_rel_pos_bias=False, - use_mean_pooling=True, init_scale=0.001, use_checkpoint=False): - super().__init__() - self.image_size = img_size - self.num_classes = num_classes - self.num_features = self.embed_dim = embed_dim # num_features for consistency with other models - - self.patch_embed = PatchEmbed( - img_size=img_size, patch_size=patch_size, in_chans=in_chans, embed_dim=embed_dim) - num_patches = self.patch_embed.num_patches - - self.cls_token = nn.Parameter(torch.zeros(1, 1, embed_dim)) - if use_abs_pos_emb: - self.pos_embed = nn.Parameter(torch.zeros(1, num_patches + 1, embed_dim)) - else: - self.pos_embed = None - self.pos_drop = nn.Dropout(p=drop_rate) - - if use_shared_rel_pos_bias: - self.rel_pos_bias = RelativePositionBias(window_size=self.patch_embed.patch_shape, num_heads=num_heads) - else: - self.rel_pos_bias = None - self.use_checkpoint = use_checkpoint - - dpr = [x.item() for x in torch.linspace(0, drop_path_rate, depth)] # stochastic depth decay rule - self.use_rel_pos_bias = use_rel_pos_bias - self.blocks = nn.ModuleList([ - Block( - dim=embed_dim, num_heads=num_heads, mlp_ratio=mlp_ratio, 
qkv_bias=qkv_bias, qk_scale=qk_scale, - drop=drop_rate, attn_drop=attn_drop_rate, drop_path=dpr[i], norm_layer=norm_layer, - init_values=init_values, window_size=self.patch_embed.patch_shape if use_rel_pos_bias else None) - for i in range(depth)]) -# self.norm = nn.Identity() if use_mean_pooling else norm_layer(embed_dim) -# self.fc_norm = norm_layer(embed_dim) if use_mean_pooling else None -# self.head = nn.Linear(embed_dim, num_classes) if num_classes > 0 else nn.Identity() - - if self.pos_embed is not None: - trunc_normal_(self.pos_embed, std=.02) - trunc_normal_(self.cls_token, std=.02) - # trunc_normal_(self.mask_token, std=.02) -# if isinstance(self.head, nn.Linear): -# trunc_normal_(self.head.weight, std=.02) - self.apply(self._init_weights) - self.fix_init_weight() -# if isinstance(self.head, nn.Linear): -# self.head.weight.data.mul_(init_scale) -# self.head.bias.data.mul_(init_scale) - - def fix_init_weight(self): - def rescale(param, layer_id): - param.div_(math.sqrt(2.0 * layer_id)) - - for layer_id, layer in enumerate(self.blocks): - rescale(layer.attn.proj.weight.data, layer_id + 1) - rescale(layer.mlp.fc2.weight.data, layer_id + 1) - - def _init_weights(self, m): - if isinstance(m, nn.Linear): - trunc_normal_(m.weight, std=.02) - if isinstance(m, nn.Linear) and m.bias is not None: - nn.init.constant_(m.bias, 0) - elif isinstance(m, nn.LayerNorm): - nn.init.constant_(m.bias, 0) - nn.init.constant_(m.weight, 1.0) - - def get_classifier(self): - return self.head - - def reset_classifier(self, num_classes, global_pool=''): - self.num_classes = num_classes - self.head = nn.Linear(self.embed_dim, num_classes) if num_classes > 0 else nn.Identity() - - def forward_features(self, x): - x = self.patch_embed(x) - batch_size, seq_len, _ = x.size() - - cls_tokens = self.cls_token.expand(batch_size, -1, -1) # stole cls_tokens impl from Phil Wang, thanks - x = torch.cat((cls_tokens, x), dim=1) - if self.pos_embed is not None: - x = x + self.pos_embed - x = self.pos_drop(x) - - rel_pos_bias = self.rel_pos_bias() if self.rel_pos_bias is not None else None - for blk in self.blocks: - if self.use_checkpoint: - x = checkpoint.checkpoint(blk, x, rel_pos_bias) - else: - x = blk(x, rel_pos_bias) - return x -# x = self.norm(x) - -# if self.fc_norm is not None: -# t = x[:, 1:, :] -# return self.fc_norm(t.mean(1)) -# else: -# return x[:, 0] - - def forward(self, x): - x = self.forward_features(x) -# x = self.head(x) - return x - - def get_intermediate_layers(self, x): - x = self.patch_embed(x) - batch_size, seq_len, _ = x.size() - - cls_tokens = self.cls_token.expand(batch_size, -1, -1) # stole cls_tokens impl from Phil Wang, thanks - x = torch.cat((cls_tokens, x), dim=1) - if self.pos_embed is not None: - x = x + self.pos_embed - x = self.pos_drop(x) - - features = [] - rel_pos_bias = self.rel_pos_bias() if self.rel_pos_bias is not None else None - for blk in self.blocks: - x = blk(x, rel_pos_bias) - features.append(x) - - return features - - -def interpolate_pos_embed(model, checkpoint_model): - if 'pos_embed' in checkpoint_model: - pos_embed_checkpoint = checkpoint_model['pos_embed'].float() - embedding_size = pos_embed_checkpoint.shape[-1] - num_patches = model.patch_embed.num_patches - num_extra_tokens = model.pos_embed.shape[-2] - num_patches - # height (== width) for the checkpoint position embedding - orig_size = int((pos_embed_checkpoint.shape[-2] - num_extra_tokens) ** 0.5) - # height (== width) for the new position embedding - new_size = int(num_patches ** 0.5) - # class_token and 
dist_token are kept unchanged - if orig_size != new_size: - print("Position interpolate from %dx%d to %dx%d" % (orig_size, orig_size, new_size, new_size)) - extra_tokens = pos_embed_checkpoint[:, :num_extra_tokens] - # only the position tokens are interpolated - pos_tokens = pos_embed_checkpoint[:, num_extra_tokens:] - pos_tokens = pos_tokens.reshape(-1, orig_size, orig_size, embedding_size).permute(0, 3, 1, 2) - pos_tokens = torch.nn.functional.interpolate( - pos_tokens, size=(new_size, new_size), mode='bicubic', align_corners=False) - pos_tokens = pos_tokens.permute(0, 2, 3, 1).flatten(1, 2) - new_pos_embed = torch.cat((extra_tokens, pos_tokens), dim=1) - checkpoint_model['pos_embed'] = new_pos_embed - - -def convert_weights_to_fp16(model: nn.Module): - """Convert applicable model parameters to fp16""" - - def _convert_weights_to_fp16(l): - if isinstance(l, (nn.Conv1d, nn.Conv2d, nn.Linear)): - l.weight.data = l.weight.data.half() - if l.bias is not None: - l.bias.data = l.bias.data.half() - -# if isinstance(l, (nn.MultiheadAttention, Attention)): -# for attr in [*[f"{s}_proj_weight" for s in ["in", "q", "k", "v"]], "in_proj_bias", "bias_k", "bias_v"]: -# tensor = getattr(l, attr) -# if tensor is not None: -# tensor.data = tensor.data.half() - - model.apply(_convert_weights_to_fp16) - - -def create_eva_vit_g(img_size=224,drop_path_rate=0.4,use_checkpoint=False,precision="fp16"): - model = VisionTransformer( - img_size=img_size, - patch_size=14, - use_mean_pooling=False, - embed_dim=1408, - depth=39, - num_heads=1408//88, - mlp_ratio=4.3637, - qkv_bias=True, - drop_path_rate=drop_path_rate, - norm_layer=partial(nn.LayerNorm, eps=1e-6), - use_checkpoint=use_checkpoint, - ) - url = "https://storage.googleapis.com/sfr-vision-language-research/LAVIS/models/BLIP2/eva_vit_g.pth" - cached_file = download_cached_file( - url, check_hash=False, progress=True - ) - state_dict = torch.load(cached_file, map_location="cpu") - interpolate_pos_embed(model,state_dict) - - incompatible_keys = model.load_state_dict(state_dict, strict=False) -# print(incompatible_keys) - - if precision == "fp16": -# model.to("cuda") - convert_weights_to_fp16(model) - return model \ No newline at end of file diff --git a/spaces/Gen-Sim/Gen-Sim/cliport/generated_tasks/align_spheres_in_colored_zones.py b/spaces/Gen-Sim/Gen-Sim/cliport/generated_tasks/align_spheres_in_colored_zones.py deleted file mode 100644 index b1a78e12855ebf28e6d25f2455a08e6822bc3dc8..0000000000000000000000000000000000000000 --- a/spaces/Gen-Sim/Gen-Sim/cliport/generated_tasks/align_spheres_in_colored_zones.py +++ /dev/null @@ -1,54 +0,0 @@ -import numpy as np -import os -import pybullet as p -import random -from cliport.tasks import primitives -from cliport.tasks.grippers import Spatula -from cliport.tasks.task import Task -from cliport.utils import utils -import numpy as np -from cliport.tasks.task import Task -from cliport.utils import utils -import pybullet as p - -class AlignSpheresInColoredZones(Task): - """Align spheres of different colors in the matching colored zones.""" - - def __init__(self): - super().__init__() - self.max_steps = 20 - self.lang_template = "place the {color} sphere in the {color} zone" - self.task_completed_desc = "done aligning spheres in colored zones." - self.additional_reset() - - def reset(self, env): - super().reset(env) - - # Define colors - colors = ['red', 'blue', 'green', 'yellow'] - color_names = ['red', 'blue', 'green', 'yellow'] - - # Add zones. 
- zone_size = (0.12, 0.12, 0) - zone_urdf = 'zone/zone.urdf' - zone_poses = [] - for color in colors: - zone_pose = self.get_random_pose(env, zone_size) - env.add_object(zone_urdf, zone_pose, 'fixed', color=utils.COLORS[color]) - zone_poses.append(zone_pose) - - # Add spheres. - sphere_size = (0.04, 0.04, 0.04) - sphere_urdf = 'sphere/sphere-template.urdf' - spheres = [] - for i, color in enumerate(colors): - sphere_pose = self.get_random_pose(env, sphere_size) - replace = {'DIM': sphere_size, 'HALF': (sphere_size[0] / 2, sphere_size[1] / 2, sphere_size[2] / 2)} - sphere_urdf = self.fill_template(sphere_urdf, replace) - sphere_id = env.add_object(sphere_urdf, sphere_pose, color=utils.COLORS[color]) - spheres.append(sphere_id) - - # Add goal - self.add_goal(objs=[sphere_id], matches=np.ones((1, 1)), targ_poses=[zone_poses[i]], replace=False, - rotations=False, metric='pose', params=None, step_max_reward=1, - language_goal=self.lang_template.format(color=color_names[i])) \ No newline at end of file diff --git a/spaces/Godrose0728/sound-link/text/cleaners.py b/spaces/Godrose0728/sound-link/text/cleaners.py deleted file mode 100644 index eedbeaee8ad73dd4aaf6c12e3f900fc34a1ee630..0000000000000000000000000000000000000000 --- a/spaces/Godrose0728/sound-link/text/cleaners.py +++ /dev/null @@ -1,150 +0,0 @@ -import re -import pyopenjtalk - -pyopenjtalk._lazy_init() - - -def japanese_cleaners(text): - from text.japanese import japanese_to_romaji_with_accent - text = japanese_to_romaji_with_accent(text) - text = re.sub(r'([A-Za-z])$', r'\1.', text) - return text - - -def japanese_cleaners2(text): - return japanese_cleaners(text).replace('ts', 'ʦ').replace('...', '…') - - -def korean_cleaners(text): - '''Pipeline for Korean text''' - from text.korean import latin_to_hangul, number_to_hangul, divide_hangul - text = latin_to_hangul(text) - text = number_to_hangul(text) - text = divide_hangul(text) - text = re.sub(r'([\u3131-\u3163])$', r'\1.', text) - return text - - -def chinese_cleaners(text): - '''Pipeline for Chinese text''' - from text.mandarin import number_to_chinese, chinese_to_bopomofo, latin_to_bopomofo - text = number_to_chinese(text) - text = chinese_to_bopomofo(text) - text = latin_to_bopomofo(text) - text = re.sub(r'([ˉˊˇˋ˙])$', r'\1。', text) - return text - - -def zh_ja_mixture_cleaners(text): - from text.mandarin import chinese_to_romaji - from text.japanese import japanese_to_romaji_with_accent - text = re.sub(r'\[ZH\](.*?)\[ZH\]', - lambda x: chinese_to_romaji(x.group(1)) + ' ', text) - text = re.sub(r'\[JA\](.*?)\[JA\]', lambda x: japanese_to_romaji_with_accent( - x.group(1)).replace('ts', 'ʦ').replace('u', 'ɯ').replace('...', '…') + ' ', text) - text = re.sub(r'\s+$', '', text) - text = re.sub(r'([^\.,!\?\-…~])$', r'\1.', text) - return text - - -def sanskrit_cleaners(text): - text = text.replace('॥', '।').replace('ॐ', 'ओम्') - if text[-1] != '।': - text += ' ।' - return text - - -def cjks_cleaners(text): - from text.mandarin import chinese_to_lazy_ipa - from text.japanese import japanese_to_ipa - from text.korean import korean_to_lazy_ipa - from text.sanskrit import devanagari_to_ipa - from text.english import english_to_lazy_ipa - text = re.sub(r'\[ZH\](.*?)\[ZH\]', - lambda x: chinese_to_lazy_ipa(x.group(1)) + ' ', text) - text = re.sub(r'\[JA\](.*?)\[JA\]', - lambda x: japanese_to_ipa(x.group(1)) + ' ', text) - text = re.sub(r'\[KO\](.*?)\[KO\]', - lambda x: korean_to_lazy_ipa(x.group(1)) + ' ', text) - text = re.sub(r'\[SA\](.*?)\[SA\]', - lambda x: devanagari_to_ipa(x.group(1)) 
+ ' ', text) - text = re.sub(r'\[EN\](.*?)\[EN\]', - lambda x: english_to_lazy_ipa(x.group(1)) + ' ', text) - text = re.sub(r'\s+$', '', text) - text = re.sub(r'([^\.,!\?\-…~])$', r'\1.', text) - return text - - -def cjke_cleaners(text): - from text.mandarin import chinese_to_lazy_ipa - from text.japanese import japanese_to_ipa - from text.korean import korean_to_ipa - from text.english import english_to_ipa2 - text = re.sub(r'\[ZH\](.*?)\[ZH\]', lambda x: chinese_to_lazy_ipa(x.group(1)).replace( - 'ʧ', 'tʃ').replace('ʦ', 'ts').replace('ɥan', 'ɥæn') + ' ', text) - text = re.sub(r'\[JA\](.*?)\[JA\]', lambda x: japanese_to_ipa(x.group(1)).replace('ʧ', 'tʃ').replace( - 'ʦ', 'ts').replace('ɥan', 'ɥæn').replace('ʥ', 'dz') + ' ', text) - text = re.sub(r'\[KO\](.*?)\[KO\]', - lambda x: korean_to_ipa(x.group(1)) + ' ', text) - text = re.sub(r'\[EN\](.*?)\[EN\]', lambda x: english_to_ipa2(x.group(1)).replace('ɑ', 'a').replace( - 'ɔ', 'o').replace('ɛ', 'e').replace('ɪ', 'i').replace('ʊ', 'u') + ' ', text) - text = re.sub(r'\s+$', '', text) - text = re.sub(r'([^\.,!\?\-…~])$', r'\1.', text) - return text - - -def cjke_cleaners2(text): - from text.mandarin import chinese_to_ipa - from text.japanese import japanese_to_ipa2 - from text.korean import korean_to_ipa - from text.english import english_to_ipa2 - text = re.sub(r'\[ZH\](.*?)\[ZH\]', - lambda x: chinese_to_ipa(x.group(1)) + ' ', text) - text = re.sub(r'\[JA\](.*?)\[JA\]', - lambda x: japanese_to_ipa2(x.group(1)) + ' ', text) - text = re.sub(r'\[KO\](.*?)\[KO\]', - lambda x: korean_to_ipa(x.group(1)) + ' ', text) - text = re.sub(r'\[EN\](.*?)\[EN\]', - lambda x: english_to_ipa2(x.group(1)) + ' ', text) - text = re.sub(r'\s+$', '', text) - text = re.sub(r'([^\.,!\?\-…~])$', r'\1.', text) - return text - - -def thai_cleaners(text): - from text.thai import num_to_thai, latin_to_thai - text = num_to_thai(text) - text = latin_to_thai(text) - return text - - -def shanghainese_cleaners(text): - from text.shanghainese import shanghainese_to_ipa - text = shanghainese_to_ipa(text) - text = re.sub(r'([^\.,!\?\-…~])$', r'\1.', text) - return text - - -def chinese_dialect_cleaners(text): - from text.mandarin import chinese_to_ipa2 - from text.japanese import japanese_to_ipa3 - from text.shanghainese import shanghainese_to_ipa - from text.cantonese import cantonese_to_ipa - from text.english import english_to_lazy_ipa2 - from text.ngu_dialect import ngu_dialect_to_ipa - text = re.sub(r'\[ZH\](.*?)\[ZH\]', - lambda x: chinese_to_ipa2(x.group(1)) + ' ', text) - text = re.sub(r'\[JA\](.*?)\[JA\]', - lambda x: japanese_to_ipa3(x.group(1)).replace('Q', 'ʔ') + ' ', text) - text = re.sub(r'\[SH\](.*?)\[SH\]', lambda x: shanghainese_to_ipa(x.group(1)).replace('1', '˥˧').replace('5', - '˧˧˦').replace( - '6', '˩˩˧').replace('7', '˥').replace('8', '˩˨').replace('ᴀ', 'ɐ').replace('ᴇ', 'e') + ' ', text) - text = re.sub(r'\[GD\](.*?)\[GD\]', - lambda x: cantonese_to_ipa(x.group(1)) + ' ', text) - text = re.sub(r'\[EN\](.*?)\[EN\]', - lambda x: english_to_lazy_ipa2(x.group(1)) + ' ', text) - text = re.sub(r'\[([A-Z]{2})\](.*?)\[\1\]', lambda x: ngu_dialect_to_ipa(x.group(2), x.group( - 1)).replace('ʣ', 'dz').replace('ʥ', 'dʑ').replace('ʦ', 'ts').replace('ʨ', 'tɕ') + ' ', text) - text = re.sub(r'\s+$', '', text) - text = re.sub(r'([^\.,!\?\-…~])$', r'\1.', text) - return text diff --git a/spaces/GolDNenex/Super-Resolution-Anime-Diffusion/RealESRGANv030/docs/anime_model.md b/spaces/GolDNenex/Super-Resolution-Anime-Diffusion/RealESRGANv030/docs/anime_model.md deleted file mode 
100644 index 213328d92d0dbaeb188f8ef0f47192e74efeaccc..0000000000000000000000000000000000000000 --- a/spaces/GolDNenex/Super-Resolution-Anime-Diffusion/RealESRGANv030/docs/anime_model.md +++ /dev/null @@ -1,68 +0,0 @@ -# Anime Model - -:white_check_mark: We add [*RealESRGAN_x4plus_anime_6B.pth*](https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.2.4/RealESRGAN_x4plus_anime_6B.pth), which is optimized for **anime** images with much smaller model size. - -- [How to Use](#how-to-use) - - [PyTorch Inference](#pytorch-inference) - - [ncnn Executable File](#ncnn-executable-file) -- [Comparisons with waifu2x](#comparisons-with-waifu2x) -- [Comparisons with Sliding Bars](#comparisons-with-sliding-bars) - -

    - -The following is a video comparison with sliding bar. You may need to use the full-screen mode for better visual quality, as the original image is large; otherwise, you may encounter aliasing issue. - - - -## How to Use - -### PyTorch Inference - -Pre-trained models: [RealESRGAN_x4plus_anime_6B](https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.2.4/RealESRGAN_x4plus_anime_6B.pth) - -```bash -# download model -wget https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.2.4/RealESRGAN_x4plus_anime_6B.pth -P weights -# inference -python inference_realesrgan.py -n RealESRGAN_x4plus_anime_6B -i inputs -``` - -### ncnn Executable File - -Download the latest portable [Windows](https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.5.0/realesrgan-ncnn-vulkan-20220424-windows.zip) / [Linux](https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.5.0/realesrgan-ncnn-vulkan-20220424-ubuntu.zip) / [MacOS](https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.5.0/realesrgan-ncnn-vulkan-20220424-macos.zip) **executable files for Intel/AMD/Nvidia GPU**. - -Taking the Windows as example, run: - -```bash -./realesrgan-ncnn-vulkan.exe -i input.jpg -o output.png -n realesrgan-x4plus-anime -``` - -## Comparisons with waifu2x - -We compare Real-ESRGAN-anime with [waifu2x](https://github.com/nihui/waifu2x-ncnn-vulkan). We use the `-n 2 -s 4` for waifu2x. - -
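For reference, the waifu2x side of this comparison can be reproduced with a command along these lines (a sketch assuming the waifu2x-ncnn-vulkan executable and its standard `-i/-o/-n/-s` flags; this invocation is not part of the original document):

```bash
# waifu2x-ncnn-vulkan with denoise level 2 and 4x upscaling, matching the `-n 2 -s 4` setting above
./waifu2x-ncnn-vulkan -i input.jpg -o output_waifu2x.png -n 2 -s 4
```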

    - -## Comparisons with Sliding Bars - -The following are video comparisons with sliding bar. You may need to use the full-screen mode for better visual quality, as the original image is large; otherwise, you may encounter aliasing issue. - - - - diff --git a/spaces/Gradio-Blocks/uniformer_image_detection/configs/fpg/faster_rcnn_r50_fpn_crop640_50e_coco.py b/spaces/Gradio-Blocks/uniformer_image_detection/configs/fpg/faster_rcnn_r50_fpn_crop640_50e_coco.py deleted file mode 100644 index 95f4e91f203bad8367942fc24b838da9fbf62947..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/uniformer_image_detection/configs/fpg/faster_rcnn_r50_fpn_crop640_50e_coco.py +++ /dev/null @@ -1,68 +0,0 @@ -_base_ = [ - '../_base_/models/faster_rcnn_r50_fpn.py', - '../_base_/datasets/coco_detection.py', - '../_base_/schedules/schedule_1x.py', '../_base_/default_runtime.py' -] -norm_cfg = dict(type='BN', requires_grad=True) -model = dict( - backbone=dict(norm_cfg=norm_cfg, norm_eval=False), - neck=dict(norm_cfg=norm_cfg), - roi_head=dict(bbox_head=dict(norm_cfg=norm_cfg))) -dataset_type = 'CocoDataset' -data_root = 'data/coco/' -img_norm_cfg = dict( - mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True) -train_pipeline = [ - dict(type='LoadImageFromFile'), - dict(type='LoadAnnotations', with_bbox=True, with_mask=True), - dict( - type='Resize', - img_scale=(640, 640), - ratio_range=(0.8, 1.2), - keep_ratio=True), - dict(type='RandomCrop', crop_size=(640, 640)), - dict(type='RandomFlip', flip_ratio=0.5), - dict(type='Normalize', **img_norm_cfg), - dict(type='Pad', size=(640, 640)), - dict(type='DefaultFormatBundle'), - dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels']), -] -test_pipeline = [ - dict(type='LoadImageFromFile'), - dict( - type='MultiScaleFlipAug', - img_scale=(640, 640), - flip=False, - transforms=[ - dict(type='Resize', keep_ratio=True), - dict(type='RandomFlip'), - dict(type='Normalize', **img_norm_cfg), - dict(type='Pad', size_divisor=64), - dict(type='ImageToTensor', keys=['img']), - dict(type='Collect', keys=['img']), - ]) -] -data = dict( - samples_per_gpu=8, - workers_per_gpu=4, - train=dict(pipeline=train_pipeline), - val=dict(pipeline=test_pipeline), - test=dict(pipeline=test_pipeline)) -# learning policy -optimizer = dict( - type='SGD', - lr=0.08, - momentum=0.9, - weight_decay=0.0001, - paramwise_cfg=dict(norm_decay_mult=0, bypass_duplicate=True)) -optimizer_config = dict(grad_clip=None) -# learning policy -lr_config = dict( - policy='step', - warmup='linear', - warmup_iters=1000, - warmup_ratio=0.1, - step=[30, 40]) -# runtime settings -runner = dict(max_epochs=50) -evaluation = dict(interval=2) diff --git a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/ccnet/ccnet_r101-d8_512x512_40k_voc12aug.py b/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/ccnet/ccnet_r101-d8_512x512_40k_voc12aug.py deleted file mode 100644 index d7eb668f39bbd22a1f42628428bc19d1645e9865..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/ccnet/ccnet_r101-d8_512x512_40k_voc12aug.py +++ /dev/null @@ -1,2 +0,0 @@ -_base_ = './ccnet_r50-d8_512x512_40k_voc12aug.py' -model = dict(pretrained='open-mmlab://resnet101_v1c', backbone=dict(depth=101)) diff --git a/spaces/HaloMaster/chinesesummary/fengshen/models/auto/__init__.py b/spaces/HaloMaster/chinesesummary/fengshen/models/auto/__init__.py deleted file mode 100644 index 
ef185f32cc2d9f9b30db1a6a681ce2df34936351..0000000000000000000000000000000000000000 --- a/spaces/HaloMaster/chinesesummary/fengshen/models/auto/__init__.py +++ /dev/null @@ -1,56 +0,0 @@ -# coding=utf-8 -# Copyright 2021 The IDEA Authors. All rights reserved. - -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at - -# http://www.apache.org/licenses/LICENSE-2.0 - -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -from typing import TYPE_CHECKING - -from transformers.file_utils import _LazyModule, is_torch_available - - -_import_structure = { - "auto_factory": ["get_values"], - "configuration_auto": ["ALL_PRETRAINED_CONFIG_ARCHIVE_MAP", "CONFIG_MAPPING", "MODEL_NAMES_MAPPING", "AutoConfig"], - "tokenization_auto": ["TOKENIZER_MAPPING", "AutoTokenizer"], -} - -if is_torch_available(): - _import_structure["modeling_auto"] = [ - "AutoModel", - "AutoModelForMaskedLM", - "AutoModelForMultipleChoice", - "AutoModelForPreTraining", - "AutoModelForQuestionAnswering", - "AutoModelForSequenceClassification", - "AutoModelForTokenClassification", - ] - -if TYPE_CHECKING: - from .auto_factory import get_values - from .configuration_auto import ALL_PRETRAINED_CONFIG_ARCHIVE_MAP, CONFIG_MAPPING, MODEL_NAMES_MAPPING, AutoConfig - from .tokenization_auto import TOKENIZER_MAPPING, AutoTokenizer - if is_torch_available(): - from .modeling_auto import ( - AutoModel, - AutoModelForMaskedLM, - AutoModelForMultipleChoice, - AutoModelForPreTraining, - AutoModelForQuestionAnswering, - AutoModelForSequenceClassification, - AutoModelForTokenClassification, - ) - -else: - import sys - - sys.modules[__name__] = _LazyModule(__name__, globals()["__file__"], _import_structure) diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/adaptive_span/adaptive_span_loss.py b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/adaptive_span/adaptive_span_loss.py deleted file mode 100644 index 056245807e5f8d313a8ad5be68aea4e285f4f580..0000000000000000000000000000000000000000 --- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/adaptive_span/adaptive_span_loss.py +++ /dev/null @@ -1,106 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import math -from dataclasses import dataclass - -import torch.nn.functional as F -from fairseq import metrics, utils -from fairseq.criterions import register_criterion -from fairseq.criterions.cross_entropy import CrossEntropyCriterion -from fairseq.dataclass import FairseqDataclass -from omegaconf import II - - -@dataclass -class AdaptiveSpanCriterionConfig(FairseqDataclass): - sentence_avg: bool = II("optimization.sentence_avg") - - -@register_criterion("adaptive_span_loss", dataclass=AdaptiveSpanCriterionConfig) -class AdaptiveSpanCriterion(CrossEntropyCriterion): - def __init__(self, task, sentence_avg): - super().__init__(task, sentence_avg) - - def forward(self, model, sample, reduce=True): - """Compute the loss for the given sample. 
- - Returns a tuple with three elements: - 1) the loss here is summed, different from the adaptive span code - 2) the sample size, which is used as the denominator for the gradient - 3) logging outputs to display while training - """ - net_output = model(**sample["net_input"]) - loss, aux_loss, avg_span, max_span = self.compute_loss( - model, net_output, sample, reduce=reduce - ) - sample_size = ( - sample["target"].size(0) if self.sentence_avg else sample["ntokens"] - ) - loss /= sample_size - total_loss = loss + aux_loss - sample_size = 1 - - logging_output = { - "loss": loss.data, - "ntokens": sample["ntokens"], - "nsentences": sample["target"].size(0), - "sample_size": sample_size, - "total_loss": total_loss.data, - "avg_span": avg_span * sample_size, - "max_span": max_span * sample_size, - } - return total_loss, sample_size, logging_output - - def compute_loss(self, model, net_output, sample, reduce=True): - loss, _ = super().compute_loss(model, net_output, sample, reduce) - aux_loss = model.get_aux_loss() - avg_span = model.get_current_avg_span() - max_span = model.get_current_max_span() - return loss, aux_loss, avg_span, max_span - - @staticmethod - def reduce_metrics(logging_outputs) -> None: - """Aggregate logging outputs from data parallel training.""" - loss_sum = sum(log.get("loss", 0) for log in logging_outputs) - ntokens = sum(log.get("ntokens", 0) for log in logging_outputs) - sample_size = sum(log.get("sample_size", 0) for log in logging_outputs) - total_loss_sum = sum(log.get("total_loss", 0) for log in logging_outputs) - avg_span_sum = sum(log.get("avg_span", 0) for log in logging_outputs) - max_span_sum = sum(log.get("max_span", 0) for log in logging_outputs) - - # we divide by log(2) to convert the loss from base e to base 2 - metrics.log_scalar( - "loss", loss_sum / sample_size / math.log(2), sample_size, round=3 - ) - metrics.log_scalar("avg_span", avg_span_sum / sample_size, sample_size, round=3) - metrics.log_scalar("max_span", max_span_sum / sample_size, sample_size, round=3) - # total loss contains the L1 norm on adaptive-span - metrics.log_scalar( - "total_loss", - total_loss_sum / sample_size / math.log(2), - sample_size, - round=3, - ) - if sample_size != ntokens: - metrics.log_scalar( - "nll_loss", loss_sum / ntokens / math.log(2), ntokens, round=3 - ) - metrics.log_derived( - "ppl", lambda meters: utils.get_perplexity(meters["nll_loss"].avg) - ) - else: - metrics.log_derived( - "ppl", lambda meters: utils.get_perplexity(meters["loss"].avg) - ) - - @staticmethod - def logging_outputs_can_be_summed() -> bool: - """ - Whether the logging outputs returned by `forward` can be summed - across workers prior to calling `reduce_metrics`. Setting this - to True will improves distributed training speed. - """ - return True diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/speech_text_joint_to_text/models/__init__.py b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/speech_text_joint_to_text/models/__init__.py deleted file mode 100644 index 7a394c7e4f25bfef8603596ca3629e65ca7b0d8b..0000000000000000000000000000000000000000 --- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/speech_text_joint_to_text/models/__init__.py +++ /dev/null @@ -1,14 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
- -import importlib -import os - -for file in os.listdir(os.path.dirname(__file__)): - if file.endswith(".py") and not file.startswith("_"): - model_name = file[: file.find(".py")] - importlib.import_module( - "examples.speech_text_joint_to_text.models." + model_name - ) diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/modules/sparse_transformer_sentence_encoder.py b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/modules/sparse_transformer_sentence_encoder.py deleted file mode 100644 index f41ec09327fe80b50d20674e7482794ce45c531c..0000000000000000000000000000000000000000 --- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/modules/sparse_transformer_sentence_encoder.py +++ /dev/null @@ -1,96 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import torch.nn as nn -from fairseq.modules import TransformerSentenceEncoder -from fairseq.modules.sparse_transformer_sentence_encoder_layer import ( - SparseTransformerSentenceEncoderLayer, -) - - -class SparseTransformerSentenceEncoder(TransformerSentenceEncoder): - """ - Sparse implementation of the TransformerSentenceEncoder - - see SparseMultiheadAttention - """ - - def __init__( - self, - padding_idx: int, - vocab_size: int, - num_encoder_layers: int = 6, - embedding_dim: int = 768, - ffn_embedding_dim: int = 3072, - num_attention_heads: int = 8, - dropout: float = 0.1, - attention_dropout: float = 0.1, - activation_dropout: float = 0.1, - max_seq_len: int = 256, - num_segments: int = 2, - use_position_embeddings: bool = True, - offset_positions_by_padding: bool = True, - encoder_normalize_before: bool = False, - apply_bert_init: bool = False, - activation_fn: str = "relu", - learned_pos_embedding: bool = True, - embed_scale: float = None, - freeze_embeddings: bool = False, - n_trans_layers_to_freeze: int = 0, - export: bool = False, - is_bidirectional: bool = True, - stride: int = 32, - expressivity: int = 8, - ) -> None: - - super().__init__( - padding_idx, - vocab_size, - num_encoder_layers, - embedding_dim, - ffn_embedding_dim, - num_attention_heads, - dropout, - attention_dropout, - activation_dropout, - max_seq_len, - num_segments, - use_position_embeddings, - offset_positions_by_padding, - encoder_normalize_before, - apply_bert_init, - activation_fn, - learned_pos_embedding, - embed_scale, - freeze_embeddings, - n_trans_layers_to_freeze, - export, - ) - - self.layers = nn.ModuleList( - [ - SparseTransformerSentenceEncoderLayer( - embedding_dim=self.embedding_dim, - ffn_embedding_dim=ffn_embedding_dim, - num_attention_heads=num_attention_heads, - dropout=dropout, - attention_dropout=attention_dropout, - activation_dropout=activation_dropout, - activation_fn=activation_fn, - export=export, - is_bidirectional=is_bidirectional, - stride=stride, - expressivity=expressivity, - ) - for _ in range(num_encoder_layers) - ] - ) - - def freeze_module_params(m): - if m is not None: - for p in m.parameters(): - p.requires_grad = False - - for layer in range(n_trans_layers_to_freeze): - freeze_module_params(self.layers[layer]) diff --git a/spaces/Harveenchadha/Vakyansh-Hindi-TTS/ttsv/scripts/inference/api.sh b/spaces/Harveenchadha/Vakyansh-Hindi-TTS/ttsv/scripts/inference/api.sh deleted file mode 100644 index 4f6ce2a2147f69e5b3da851c8222bef830056338..0000000000000000000000000000000000000000 --- a/spaces/Harveenchadha/Vakyansh-Hindi-TTS/ttsv/scripts/inference/api.sh +++ 
/dev/null @@ -1,8 +0,0 @@ -gender='male' -glowdir='../../checkpoints/glow/'$gender'/' -hifidir='../../checkpoints/hifi/'$gender'/' -device='cpu' -lang='en' - - -python ../../utils/inference/api.py -a $glowdir -v $hifidir -d $device -L $lang diff --git a/spaces/Harveenchadha/Vakyansh-Tamil-TTS/ttsv/utils/data/resample.py b/spaces/Harveenchadha/Vakyansh-Tamil-TTS/ttsv/utils/data/resample.py deleted file mode 100644 index c77109ef4d5142cd9094f46dd186a17571071ab8..0000000000000000000000000000000000000000 --- a/spaces/Harveenchadha/Vakyansh-Tamil-TTS/ttsv/utils/data/resample.py +++ /dev/null @@ -1,59 +0,0 @@ -import argparse -import librosa -import numpy as np -import os -import scipy -import scipy.io.wavfile -import sys - -from glob import glob -from tqdm import tqdm -from joblib import Parallel, delayed - - -def check_directories(dir_input, dir_output): - if not os.path.exists(dir_input): - sys.exit("Error: Input directory does not exist: {}".format(dir_input)) - if not os.path.exists(dir_output): - sys.exit("Error: Output directory does not exist: {}".format(dir_output)) - abs_a = os.path.abspath(dir_input) - abs_b = os.path.abspath(dir_output) - if abs_a == abs_b: - sys.exit("Error: Paths are the same: {}".format(abs_a)) - - -def resample_file(input_filename, output_filename, sample_rate): - mono = ( - True # librosa converts signal to mono by default, so I'm just surfacing this - ) - audio, existing_rate = librosa.load(input_filename, sr=sample_rate, mono=mono) - audio /= 1.414 # Scale to [-1.0, 1.0] - audio *= 32767 # Scale to int16 - audio = audio.astype(np.int16) - scipy.io.wavfile.write(output_filename, sample_rate, audio) - - -def downsample_wav_files(input_dir, output_dir, output_sample_rate): - check_directories(input_dir, output_dir) - inp_wav_paths = glob(input_dir + "/*.wav") - out_wav_paths = [ - os.path.join(output_dir, os.path.basename(p)) for p in inp_wav_paths - ] - _ = Parallel(n_jobs=-1)( - delayed(resample_file)(i, o, output_sample_rate) - for i, o in tqdm(zip(inp_wav_paths, out_wav_paths)) - ) - - -def parse_args(): - parser = argparse.ArgumentParser() - parser.add_argument("--input_dir", "-i", type=str, required=True) - parser.add_argument("--output_dir", "-o", type=str, required=True) - parser.add_argument("--output_sample_rate", "-s", type=int, required=True) - return parser.parse_args() - - -if __name__ == "__main__": - args = parse_args() - downsample_wav_files(args.input_dir, args.output_dir, args.output_sample_rate) - print(f"\n\tCompleted") diff --git a/spaces/Harveenchadha/en_to_indic_translation/legacy/tpu_training_instructions.md b/spaces/Harveenchadha/en_to_indic_translation/legacy/tpu_training_instructions.md deleted file mode 100644 index 41c9092811f50188c21b459c3033a59d769be8c8..0000000000000000000000000000000000000000 --- a/spaces/Harveenchadha/en_to_indic_translation/legacy/tpu_training_instructions.md +++ /dev/null @@ -1,92 +0,0 @@ -## Instructions to run on Google cloud TPUs -Before starting these steps, make sure to prepare the dataset (normalization -> bpe -> .. 
-> binarization) following the steps in indicTrans workflow or do these steps on a cpu instance before launching the tpu instance (to save time and costs) - -### Creating TPU instance - -- Create a cpu instance on gcp with `torch-xla` image like: -```bash -gcloud compute --project=${PROJECT_ID} instances create \ - --zone= \ - --machine-type=n1-standard-16 \ - --image-family=torch-xla \ - --image-project=ml-images \ - --boot-disk-size=200GB \ - --scopes=https://www.googleapis.com/auth/cloud-platform -``` -- Once the instance is created, Launch a Cloud TPU (from your cpu vm instance) using the following command (you can change the `accelerator_type` according to your needs): -```bash -gcloud compute tpus create \ ---zone= \ ---network=default \ ---version=pytorch-1.7 \ ---accelerator-type=v3-8 -``` - (or) -Create a new tpu using the GUI in https://console.cloud.google.com/compute/tpus and make sure to select `version` as `pytorch 1.7`. - -- Once the tpu is launched, identify its ip address: -```bash -# you can run this inside cpu instance and note down the IP address which is located under the NETWORK_ENDPOINTS column -gcloud compute tpus list --zone=us-central1-a -``` - (or) -Go to https://console.cloud.google.com/compute/tpus and note down ip address for the created TPU from the `interal ip` column - -### Installing Fairseq, getting data on the cpu instance - -- Activate the `torch xla 1.7` conda environment and install necessary libs for IndicTrans (**Excluding FairSeq**): -```bash -conda activate torch-xla-1.7 -pip install sacremoses pandas mock sacrebleu tensorboardX pyarrow -``` -- Configure environment variables for TPU: -```bash -export TPU_IP_ADDRESS=ip-address; \ -export XRT_TPU_CONFIG="tpu_worker;0;$TPU_IP_ADDRESS:8470" -``` -- Download the prepared binarized data for FairSeq - -- Clone the latest version of Fairseq (this supports tpu) and install from source. There is an [issue](https://github.com/pytorch/fairseq/issues/3259) with the latest commit and hence we use a different commit to install from source (This may have been fixed in the latest master but we have not tested it.) -```bash -git clone https://github.com/pytorch/fairseq.git -git checkout da9eaba12d82b9bfc1442f0e2c6fc1b895f4d35d -pip install --editable ./ -``` - -- Start TPU training -```bash -# this is for using all tpu cores -export MKL_SERVICE_FORCE_INTEL=1 - -fairseq-train {expdir}/exp2_m2o_baseline/final_bin \ ---max-source-positions=200 \ ---max-target-positions=200 \ ---max-update=1000000 \ ---save-interval=5 \ ---arch=transformer \ ---attention-dropout=0.1 \ ---criterion=label_smoothed_cross_entropy \ ---source-lang=SRC \ ---lr-scheduler=inverse_sqrt \ ---skip-invalid-size-inputs-valid-test \ ---target-lang=TGT \ ---label-smoothing=0.1 \ ---update-freq=1 \ ---optimizer adam \ ---adam-betas '(0.9, 0.98)' \ ---warmup-init-lr 1e-07 \ ---lr 0.0005 \ ---warmup-updates 4000 \ ---dropout 0.2 \ ---weight-decay 0.0 \ ---tpu \ ---distributed-world-size 8 \ ---max-tokens 8192 \ ---num-batch-buckets 8 \ ---tensorboard-logdir {expdir}/exp2_m2o_baseline/tensorboard \ ---save-dir {expdir}/exp2_m2o_baseline/model \ ---keep-last-epochs 5 \ ---patience 5 -``` - -**Note** While training, we noticed that the training was slower on tpus, compared to using multiple GPUs, we have documented some issues and [filed an issue](https://github.com/pytorch/fairseq/issues/3317) at fairseq repo for advice. We'll update this section as we learn more about efficient training on TPUs. 
Also feel free to open an issue/pull request if you find a bug or know an efficient method to make code train faster on tpus. diff --git a/spaces/Hexamind/QnA/app.py b/spaces/Hexamind/QnA/app.py deleted file mode 100644 index 15e0778d36f64a77533d79373100877eabf47f95..0000000000000000000000000000000000000000 --- a/spaces/Hexamind/QnA/app.py +++ /dev/null @@ -1,36 +0,0 @@ -import pandas as pd -import os -from langchain.llms import OpenAI -import chromadb - -from config import * -from src.control.control import Controller -from src.tools.retriever import Retriever -from src.tools.llm import LlmAgent -from src.model.doc import Doc -import src.view.view as view - -os.environ["TOKENIZERS_PARALLELISM"] = "true" - -if not "OPENAI_API_KEY" in os.environ: - from config_key import OPENAI_API_KEY - os.environ['OPENAI_API_KEY'] = OPENAI_API_KEY - -doc_content = Doc(content_en_path) -doc_plan = Doc(plan_path) -doc_content_fr = Doc(content_fr_path) - -client_db = chromadb.Client() -retriever = Retriever(client_db, doc_plan, doc_content, doc_content_fr, collection_name) - -llm_model = OpenAI(temperature=0) -llm = LlmAgent(llm_model) - -specials['remote_rate_df'] = pd.read_csv(specials['remote_rate_path']) -specials['accommodation_meal_df'] = pd.read_csv(specials['accommodation_meal_path']) -controller = Controller(retriever=retriever, llm=llm, content_language=content_language, plan_language=plan_language, - specials=specials) - -qna = view.run(ctrl=controller, config=view_config) - -qna.queue().launch() diff --git a/spaces/Hila/RobustViT/SegmentationTest/utils/metric.py b/spaces/Hila/RobustViT/SegmentationTest/utils/metric.py deleted file mode 100644 index a820609873ec4fc7c3428e95b19baf97515cf792..0000000000000000000000000000000000000000 --- a/spaces/Hila/RobustViT/SegmentationTest/utils/metric.py +++ /dev/null @@ -1,12 +0,0 @@ -class Metric(object): - """Base class for all metrics. - From: https://github.com/pytorch/tnt/blob/master/torchnet/meter/meter.py - """ - def reset(self): - pass - - def add(self): - pass - - def value(self): - pass \ No newline at end of file diff --git a/spaces/Hoodady/3DFuse/my/README.md b/spaces/Hoodady/3DFuse/my/README.md deleted file mode 100644 index 5daa1c788deef956d5cb6399ecba2c96d947d827..0000000000000000000000000000000000000000 --- a/spaces/Hoodady/3DFuse/my/README.md +++ /dev/null @@ -1,2 +0,0 @@ -a personal tookit for experiment management; -some of the designs patterns are inspired by detectron2 diff --git a/spaces/HuangLab/CELL-E_2-Image_Prediction/prediction.py b/spaces/HuangLab/CELL-E_2-Image_Prediction/prediction.py deleted file mode 100644 index 91cc94bd3532a1a70fa7c6a793c2a5658f223f69..0000000000000000000000000000000000000000 --- a/spaces/HuangLab/CELL-E_2-Image_Prediction/prediction.py +++ /dev/null @@ -1,51 +0,0 @@ -import os -os.chdir('..') -base_dir = os.getcwd() -from dataloader import CellLoader - - -def run_image_prediction( - sequence_input, - nucleus_image, - model, - device -): - """ - Run Celle model with provided inputs and display results. 
- - :param sequence: Path to sequence file - :param nucleus_image_path: Path to nucleus image - :param protein_image_path: Path to protein image (optional) - :param model_ckpt_path: Path to model checkpoint - :param model_config_path: Path to model config - """ - # Instantiate dataset object - dataset = CellLoader( - sequence_mode="embedding", - vocab="esm2", - split_key="val", - crop_method="center", - resize=600, - crop_size=256, - text_seq_len=1000, - pad_mode="end", - threshold="median", - ) - - # Convert SEQUENCE to sequence using dataset.tokenize_sequence() - sequence = dataset.tokenize_sequence(sequence_input) - - # Sample from model using provided sequence and nucleus image - _, _, _, predicted_threshold, predicted_heatmap = model.celle.sample( - text=sequence.to(device), - condition=nucleus_image.to(device), - timesteps=1, - temperature=1, - progress=False, - ) - - # Move predicted_threshold and predicted_heatmap to CPU and select first element of batch - predicted_threshold = predicted_threshold.cpu()[0, 0] - predicted_heatmap = predicted_heatmap.cpu()[0, 0] - - return predicted_threshold, predicted_heatmap \ No newline at end of file diff --git a/spaces/HugoDzz/super-godot-galaxy/build/_app/immutable/entry/app.ea8cc3e0.js b/spaces/HugoDzz/super-godot-galaxy/build/_app/immutable/entry/app.ea8cc3e0.js deleted file mode 100644 index 2192f20d3fae276af08b641bdd145cfd199cce72..0000000000000000000000000000000000000000 --- a/spaces/HugoDzz/super-godot-galaxy/build/_app/immutable/entry/app.ea8cc3e0.js +++ /dev/null @@ -1 +0,0 @@ -import{S as V,i as q,s as U,a as j,e as h,c as z,b as w,d as p,f as y,g as d,h as g,j as W,o as F,k as G,l as H,m as J,n as N,p as m,q as K,r as M,u as Q,v as L,w as P,x as k,y as v,z as A,A as E,B as R}from"../chunks/index.9af7eb9c.js";const X="modulepreload",Y=function(a,e){return new URL(a,e).href},B={},S=function(e,n,i){if(!n||n.length===0)return e();const s=document.getElementsByTagName("link");return Promise.all(n.map(f=>{if(f=Y(f,i),f in B)return;B[f]=!0;const t=f.endsWith(".css"),r=t?'[rel="stylesheet"]':"";if(!!i)for(let l=s.length-1;l>=0;l--){const _=s[l];if(_.href===f&&(!t||_.rel==="stylesheet"))return}else if(document.querySelector(`link[href="${f}"]${r}`))return;const o=document.createElement("link");if(o.rel=t?"stylesheet":X,t||(o.as="script",o.crossOrigin=""),o.href=f,document.head.appendChild(o),t)return new Promise((l,_)=>{o.addEventListener("load",l),o.addEventListener("error",()=>_(new Error(`Unable to preload CSS for ${f}`)))})})).then(()=>e())},ie={};function Z(a){let e,n,i;var s=a[1][0];function f(t){return{props:{data:t[3],form:t[2]}}}return s&&(e=k(s,f(a)),a[12](e)),{c(){e&&v(e.$$.fragment),n=h()},l(t){e&&A(e.$$.fragment,t),n=h()},m(t,r){e&&E(e,t,r),w(t,n,r),i=!0},p(t,r){const u={};if(r&8&&(u.data=t[3]),r&4&&(u.form=t[2]),r&2&&s!==(s=t[1][0])){if(e){L();const o=e;p(o.$$.fragment,1,0,()=>{R(o,1)}),y()}s?(e=k(s,f(t)),t[12](e),v(e.$$.fragment),d(e.$$.fragment,1),E(e,n.parentNode,n)):e=null}else s&&e.$set(u)},i(t){i||(e&&d(e.$$.fragment,t),i=!0)},o(t){e&&p(e.$$.fragment,t),i=!1},d(t){a[12](null),t&&g(n),e&&R(e,t)}}}function $(a){let e,n,i;var s=a[1][0];function f(t){return{props:{data:t[3],$$slots:{default:[x]},$$scope:{ctx:t}}}}return s&&(e=k(s,f(a)),a[11](e)),{c(){e&&v(e.$$.fragment),n=h()},l(t){e&&A(e.$$.fragment,t),n=h()},m(t,r){e&&E(e,t,r),w(t,n,r),i=!0},p(t,r){const u={};if(r&8&&(u.data=t[3]),r&8215&&(u.$$scope={dirty:r,ctx:t}),r&2&&s!==(s=t[1][0])){if(e){L();const 
o=e;p(o.$$.fragment,1,0,()=>{R(o,1)}),y()}s?(e=k(s,f(t)),t[11](e),v(e.$$.fragment),d(e.$$.fragment,1),E(e,n.parentNode,n)):e=null}else s&&e.$set(u)},i(t){i||(e&&d(e.$$.fragment,t),i=!0)},o(t){e&&p(e.$$.fragment,t),i=!1},d(t){a[11](null),t&&g(n),e&&R(e,t)}}}function x(a){let e,n,i;var s=a[1][1];function f(t){return{props:{data:t[4],form:t[2]}}}return s&&(e=k(s,f(a)),a[10](e)),{c(){e&&v(e.$$.fragment),n=h()},l(t){e&&A(e.$$.fragment,t),n=h()},m(t,r){e&&E(e,t,r),w(t,n,r),i=!0},p(t,r){const u={};if(r&16&&(u.data=t[4]),r&4&&(u.form=t[2]),r&2&&s!==(s=t[1][1])){if(e){L();const o=e;p(o.$$.fragment,1,0,()=>{R(o,1)}),y()}s?(e=k(s,f(t)),t[10](e),v(e.$$.fragment),d(e.$$.fragment,1),E(e,n.parentNode,n)):e=null}else s&&e.$set(u)},i(t){i||(e&&d(e.$$.fragment,t),i=!0)},o(t){e&&p(e.$$.fragment,t),i=!1},d(t){a[10](null),t&&g(n),e&&R(e,t)}}}function C(a){let e,n=a[6]&&D(a);return{c(){e=G("div"),n&&n.c(),this.h()},l(i){e=H(i,"DIV",{id:!0,"aria-live":!0,"aria-atomic":!0,style:!0});var s=J(e);n&&n.l(s),s.forEach(g),this.h()},h(){N(e,"id","svelte-announcer"),N(e,"aria-live","assertive"),N(e,"aria-atomic","true"),m(e,"position","absolute"),m(e,"left","0"),m(e,"top","0"),m(e,"clip","rect(0 0 0 0)"),m(e,"clip-path","inset(50%)"),m(e,"overflow","hidden"),m(e,"white-space","nowrap"),m(e,"width","1px"),m(e,"height","1px")},m(i,s){w(i,e,s),n&&n.m(e,null)},p(i,s){i[6]?n?n.p(i,s):(n=D(i),n.c(),n.m(e,null)):n&&(n.d(1),n=null)},d(i){i&&g(e),n&&n.d()}}}function D(a){let e;return{c(){e=K(a[7])},l(n){e=M(n,a[7])},m(n,i){w(n,e,i)},p(n,i){i&128&&Q(e,n[7])},d(n){n&&g(e)}}}function ee(a){let e,n,i,s,f;const t=[$,Z],r=[];function u(l,_){return l[1][1]?0:1}e=u(a),n=r[e]=t[e](a);let o=a[5]&&C(a);return{c(){n.c(),i=j(),o&&o.c(),s=h()},l(l){n.l(l),i=z(l),o&&o.l(l),s=h()},m(l,_){r[e].m(l,_),w(l,i,_),o&&o.m(l,_),w(l,s,_),f=!0},p(l,[_]){let b=e;e=u(l),e===b?r[e].p(l,_):(L(),p(r[b],1,1,()=>{r[b]=null}),y(),n=r[e],n?n.p(l,_):(n=r[e]=t[e](l),n.c()),d(n,1),n.m(i.parentNode,i)),l[5]?o?o.p(l,_):(o=C(l),o.c(),o.m(s.parentNode,s)):o&&(o.d(1),o=null)},i(l){f||(d(n),f=!0)},o(l){p(n),f=!1},d(l){r[e].d(l),l&&g(i),o&&o.d(l),l&&g(s)}}}function te(a,e,n){let{stores:i}=e,{page:s}=e,{constructors:f}=e,{components:t=[]}=e,{form:r}=e,{data_0:u=null}=e,{data_1:o=null}=e;W(i.page.notify);let l=!1,_=!1,b=null;F(()=>{const c=i.page.subscribe(()=>{l&&(n(6,_=!0),n(7,b=document.title||"untitled page"))});return n(5,l=!0),c});function I(c){P[c?"unshift":"push"](()=>{t[1]=c,n(0,t)})}function O(c){P[c?"unshift":"push"](()=>{t[0]=c,n(0,t)})}function T(c){P[c?"unshift":"push"](()=>{t[0]=c,n(0,t)})}return a.$$set=c=>{"stores"in c&&n(8,i=c.stores),"page"in c&&n(9,s=c.page),"constructors"in c&&n(1,f=c.constructors),"components"in c&&n(0,t=c.components),"form"in c&&n(2,r=c.form),"data_0"in c&&n(3,u=c.data_0),"data_1"in c&&n(4,o=c.data_1)},a.$$.update=()=>{a.$$.dirty&768&&i.page.set(s)},[t,f,r,u,o,l,_,b,i,s,I,O,T]}class se extends V{constructor(e){super(),q(this,e,te,ee,U,{stores:8,page:9,constructors:1,components:0,form:2,data_0:3,data_1:4})}}const 
re=[()=>S(()=>import("../nodes/0.22dae059.js"),["../nodes/0.22dae059.js","../chunks/index.9af7eb9c.js","../assets/0.15589e04.css"],import.meta.url),()=>S(()=>import("../nodes/1.7a9a475b.js"),["../nodes/1.7a9a475b.js","../chunks/index.9af7eb9c.js","../chunks/stores.be116e24.js","../chunks/singletons.1f11d8d9.js"],import.meta.url),()=>S(()=>import("../nodes/2.ae94ff6d.js"),["../nodes/2.ae94ff6d.js","../chunks/index.9af7eb9c.js","../chunks/stores.be116e24.js","../chunks/singletons.1f11d8d9.js"],import.meta.url)],oe=[],ae={"/":[2]},le={handleError:({error:a})=>{console.error(a)}};export{ae as dictionary,le as hooks,ie as matchers,re as nodes,se as root,oe as server_loads}; diff --git a/spaces/ICML2022/OFA/fairseq/examples/wav2vec/unsupervised/kaldi_self_train/st/local/prepare_data_from_w2v.py b/spaces/ICML2022/OFA/fairseq/examples/wav2vec/unsupervised/kaldi_self_train/st/local/prepare_data_from_w2v.py deleted file mode 100644 index 66954ea5c9f3f3330e3230860229c7c4046a5d6a..0000000000000000000000000000000000000000 --- a/spaces/ICML2022/OFA/fairseq/examples/wav2vec/unsupervised/kaldi_self_train/st/local/prepare_data_from_w2v.py +++ /dev/null @@ -1,56 +0,0 @@ -import kaldi_io -import numpy as np -import os - - -def get_parser(): - import argparse - parser = argparse.ArgumentParser() - parser.add_argument("w2v_dir", help="wav2vec feature and text directory") - parser.add_argument("tar_root", help="output data directory in kaldi's format") - parser.add_argument("split", help="name of the subset") - parser.add_argument("--label", default="", help="if specified, copy labels too") - return parser - -def main(): - parser = get_parser() - args = parser.parse_args() - - tar_dir = os.path.join(args.tar_root, args.split) - os.makedirs(tar_dir, exist_ok=True) - - lengths_path = os.path.join(args.w2v_dir, f"{args.split}.lengths") - with open(lengths_path) as f: - lengths = [int(line.rstrip()) for line in f] - offsets = [0] + np.cumsum(lengths[:-1]).tolist() - feats = np.load( - os.path.join(args.w2v_dir, f"{args.split}.npy"), - mmap_mode="r" - ) - assert feats.shape[0] == sum(lengths), \ - f"lengths mismatch {feats.shape[0]} != {sum(lengths)}" - - ark_path = os.path.join(tar_dir, "feats.ark") - scp_path = os.path.join(tar_dir, "feats.scp") - wspec = f"ark:| copy-feats --compress=true ark:- ark,scp:{ark_path},{scp_path}" - with kaldi_io.open_or_fd(wspec, "wb") as f: - for idx, (offset, length) in enumerate(zip(offsets, lengths)): - feat = feats[offset:offset+length] - kaldi_io.write_mat(f, feat, key=f"utt{idx:010d}") - - u2s_path = os.path.join(tar_dir, "utt2spk") - s2u_path = os.path.join(tar_dir, "spk2utt") - with open(u2s_path, "w") as f_u2s, open(s2u_path, "w") as f_s2u: - for idx in range(len(lengths)): - f_u2s.write(f"utt{idx:010d} utt{idx:010d}\n") - f_s2u.write(f"utt{idx:010d} utt{idx:010d}\n") - - if bool(args.label): - lab_path = os.path.join(args.w2v_dir, f"{args.split}.{args.label}") - txt_path = os.path.join(tar_dir, "text") - with open(lab_path) as f_lab, open(txt_path, "w") as f_txt: - for idx, line in enumerate(f_lab): - f_txt.write(f"utt{idx:010d} {line}") - -if __name__ == "__main__": - main() diff --git a/spaces/ICML2022/OFA/fairseq/fairseq/criterions/legacy_masked_lm.py b/spaces/ICML2022/OFA/fairseq/fairseq/criterions/legacy_masked_lm.py deleted file mode 100644 index c70608c5a143b7b4fbd8c58dfcf9f873639d379c..0000000000000000000000000000000000000000 --- a/spaces/ICML2022/OFA/fairseq/fairseq/criterions/legacy_masked_lm.py +++ /dev/null @@ -1,177 +0,0 @@ -# Copyright (c) Facebook, Inc. 
and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import math - -import torch -import torch.nn.functional as F -from fairseq import metrics, utils -from fairseq.criterions import FairseqCriterion, register_criterion - - -def compute_cross_entropy_loss(logits, targets, ignore_index=-100): - """ - Function to compute the cross entropy loss. The default value of - ignore_index is the same as the default value for F.cross_entropy in - pytorch. - """ - assert logits.size(0) == targets.size( - -1 - ), "Logits and Targets tensor shapes don't match up" - - loss = F.nll_loss( - F.log_softmax(logits, -1, dtype=torch.float32), - targets, - reduction="sum", - ignore_index=ignore_index, - ) - return loss - - -@register_criterion("legacy_masked_lm_loss") -class LegacyMaskedLmLoss(FairseqCriterion): - """ - Implementation for the loss used in masked language model (MLM) training. - This optionally also computes the next sentence prediction (NSP) loss and - adds it to the overall loss based on the specified args. There are three - cases to consider: - 1) Generic MLM training without NSP loss. In this case sentence_targets - and sentence_logits are both None. - 2) BERT training without NSP loss. In this case sentence_targets is - not None but sentence_logits is None and we should not be computing - a sentence level loss. - 3) BERT training with NSP loss. In this case both sentence_targets and - sentence_logits are not None and we should be computing a sentence - level loss. The weight of the sentence level loss is specified as - an argument. - """ - - def __init__(self, task, masked_lm_only, nsp_loss_weight): - super().__init__(task) - self.masked_lm_only = masked_lm_only - self.nsp_loss_weight = nsp_loss_weight - - @staticmethod - def add_args(parser): - """Args for MaskedLM Loss""" - # Default for masked_lm_only is False so as to not break BERT training - parser.add_argument( - "--masked-lm-only", - default=False, - action="store_true", - help="compute MLM loss only", - ) - parser.add_argument( - "--nsp-loss-weight", - default=1.0, - type=float, - help="weight for next sentence prediction" " loss (default 1)", - ) - - def forward(self, model, sample, reduce=True): - """Compute the loss for the given sample. - Returns a tuple with three elements: - 1) the loss - 2) the sample size, which is used as the denominator for the gradient - 3) logging outputs to display while training - """ - lm_logits, output_metadata = model(**sample["net_input"]) - - # reshape lm_logits from (N,T,C) to (N*T,C) - lm_logits = lm_logits.view(-1, lm_logits.size(-1)) - lm_targets = sample["lm_target"].view(-1) - lm_loss = compute_cross_entropy_loss(lm_logits, lm_targets, self.padding_idx) - - # compute the number of tokens for which loss is computed. This is used - # to normalize the loss - ntokens = utils.strip_pad(lm_targets, self.padding_idx).numel() - loss = lm_loss / ntokens - nsentences = sample["nsentences"] - # nsentences = 0 - - # Compute sentence loss if masked_lm_only is False - sentence_loss = None - if not self.masked_lm_only: - sentence_logits = output_metadata["sentence_logits"] - sentence_targets = sample["sentence_target"].view(-1) - # This needs to be recomputed due to some differences between - # TokenBlock and BlockPair dataset. This can be resolved with a - # refactor of BERTModel which we will do in the future. 
- # TODO: Remove this after refactor of BERTModel - nsentences = sentence_targets.size(0) - - # Check for logits being none which can happen when remove_heads - # is set to true in the BERT model. Ideally we should set - # masked_lm_only to true in this case, but that requires some - # refactor in the BERT model. - if sentence_logits is not None: - sentence_loss = compute_cross_entropy_loss( - sentence_logits, sentence_targets - ) - - loss += self.nsp_loss_weight * (sentence_loss / nsentences) - - # NOTE: as we are summing up per token mlm loss and per sentence nsp loss - # we don't need to use sample_size as denominator for the gradient - # here sample_size is just used for logging - sample_size = 1 - logging_output = { - "loss": utils.item(loss.data) if reduce else loss.data, - "lm_loss": utils.item(lm_loss.data) if reduce else lm_loss.data, - # sentence loss is not always computed - "sentence_loss": ( - (utils.item(sentence_loss.data) if reduce else sentence_loss.data) - if sentence_loss is not None - else 0.0 - ), - "ntokens": ntokens, - "nsentences": nsentences, - "sample_size": sample_size, - } - return loss, sample_size, logging_output - - @staticmethod - def reduce_metrics(logging_outputs) -> None: - """Aggregate logging outputs from data parallel training.""" - lm_loss_sum = sum(log.get("lm_loss", 0) for log in logging_outputs) - sentence_loss_sum = sum(log.get("sentence_loss", 0) for log in logging_outputs) - ntokens = sum(log.get("ntokens", 0) for log in logging_outputs) - nsentences = sum(log.get("nsentences", 0) for log in logging_outputs) - sample_size = sum(log.get("sample_size", 0) for log in logging_outputs) - agg_loss = sum(log.get("loss", 0) for log in logging_outputs) - - metrics.log_scalar( - "loss", - agg_loss / sample_size / math.log(2) if sample_size > 0 else 0.0, - sample_size, - round=3, - ) - metrics.log_scalar( - "lm_loss", - lm_loss_sum / ntokens / math.log(2) if ntokens > 0 else 0.0, - ntokens, - round=3, - ) - metrics.log_scalar( - "sentence_loss", - sentence_loss_sum / nsentences / math.log(2) if nsentences > 0 else 0.0, - nsentences, - round=3, - ) - metrics.log_scalar( - "nll_loss", - lm_loss_sum / ntokens / math.log(2) if ntokens > 0 else 0.0, - ntokens, - round=3, - ) - - @staticmethod - def logging_outputs_can_be_summed() -> bool: - """ - Whether the logging outputs returned by `forward` can be summed - across workers prior to calling `reduce_metrics`. Setting this - to True will improves distributed training speed. - """ - return True diff --git a/spaces/ICML2022/OFA/fairseq/fairseq/data/audio/speech_to_text_dataset.py b/spaces/ICML2022/OFA/fairseq/fairseq/data/audio/speech_to_text_dataset.py deleted file mode 100644 index 164bf413e4fd41b895348c9ef0bb57421843eb17..0000000000000000000000000000000000000000 --- a/spaces/ICML2022/OFA/fairseq/fairseq/data/audio/speech_to_text_dataset.py +++ /dev/null @@ -1,525 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
- -import csv -import io -import logging -import re -from collections import defaultdict -from pathlib import Path -from typing import Dict, List, Optional -from dataclasses import dataclass - -import numpy as np -import torch -from fairseq.data import ( - ConcatDataset, - Dictionary, - FairseqDataset, - ResamplingDataset, - data_utils as fairseq_data_utils, -) -from fairseq.data.audio.audio_utils import ( - get_fbank, - get_waveform, - read_from_stored_zip, - is_npy_data, - is_sf_audio_data, - parse_path, - FEATURE_OR_SF_AUDIO_FILE_EXTENSIONS, -) -from fairseq.data.audio.feature_transforms import CompositeAudioFeatureTransform -from fairseq.data.audio.data_cfg import S2TDataConfig - - -logger = logging.getLogger(__name__) - - -def get_features_from_npy_or_audio(path): - ext = Path(path).suffix - if ext not in FEATURE_OR_SF_AUDIO_FILE_EXTENSIONS: - raise ValueError(f'Unsupported file format for "{path}"') - return np.load(path) if ext == ".npy" else get_fbank(path) - - -def get_features_or_waveform_from_stored_zip( - path, byte_offset, byte_size, need_waveform=False, use_sample_rate=None, -): - assert path.endswith(".zip") - data = read_from_stored_zip(path, byte_offset, byte_size) - f = io.BytesIO(data) - if is_npy_data(data): - features_or_waveform = np.load(f) - elif is_sf_audio_data(data): - features_or_waveform = \ - get_waveform( - f, always_2d=False, output_sample_rate=use_sample_rate - )[0] if need_waveform else get_fbank(f) - else: - raise ValueError(f'Unknown file format for "{path}"') - return features_or_waveform - - -def get_features_or_waveform( - path: str, need_waveform=False, use_sample_rate=None -): - """Get speech features from .npy file or waveform from .wav/.flac file. - The file may be inside an uncompressed ZIP file and is accessed via byte - offset and length. - - Args: - path (str): File path in the format of "<.npy/.wav/.flac path>" or - "::". - need_waveform (bool): return waveform instead of features. - use_sample_rate (int): change sample rate for the input wave file - - Returns: - features_or_waveform (numpy.ndarray): speech features or waveform. - """ - _path, slice_ptr = parse_path(path) - if len(slice_ptr) == 0: - if need_waveform: - return get_waveform( - _path, always_2d=False, output_sample_rate=use_sample_rate - )[0] - return get_features_from_npy_or_audio(_path) - elif len(slice_ptr) == 2: - features_or_waveform = get_features_or_waveform_from_stored_zip( - _path, slice_ptr[0], slice_ptr[1], need_waveform=need_waveform, - use_sample_rate=use_sample_rate - ) - else: - raise ValueError(f"Invalid path: {path}") - - return features_or_waveform - - -def _collate_frames( - frames: List[torch.Tensor], is_audio_input: bool = False -) -> torch.Tensor: - """ - Convert a list of 2D frames into a padded 3D tensor - Args: - frames (list): list of 2D frames of size L[i]*f_dim. 
Where L[i] is - length of i-th frame and f_dim is static dimension of features - Returns: - 3D tensor of size len(frames)*len_max*f_dim where len_max is max of L[i] - """ - max_len = max(frame.size(0) for frame in frames) - if is_audio_input: - out = frames[0].new_zeros((len(frames), max_len)) - else: - out = frames[0].new_zeros((len(frames), max_len, frames[0].size(1))) - for i, v in enumerate(frames): - out[i, : v.size(0)] = v - return out - - -@dataclass -class SpeechToTextDatasetItem(object): - index: int - source: torch.Tensor - target: Optional[torch.Tensor] = None - speaker_id: Optional[int] = None - - -class SpeechToTextDataset(FairseqDataset): - LANG_TAG_TEMPLATE = "" - - def __init__( - self, - split: str, - is_train_split: bool, - cfg: S2TDataConfig, - audio_paths: List[str], - n_frames: List[int], - src_texts: Optional[List[str]] = None, - tgt_texts: Optional[List[str]] = None, - speakers: Optional[List[str]] = None, - src_langs: Optional[List[str]] = None, - tgt_langs: Optional[List[str]] = None, - ids: Optional[List[str]] = None, - tgt_dict: Optional[Dictionary] = None, - pre_tokenizer=None, - bpe_tokenizer=None, - n_frames_per_step=1, - speaker_to_id=None - ): - self.split, self.is_train_split = split, is_train_split - self.cfg = cfg - self.audio_paths, self.n_frames = audio_paths, n_frames - self.n_samples = len(audio_paths) - assert len(n_frames) == self.n_samples > 0 - assert src_texts is None or len(src_texts) == self.n_samples - assert tgt_texts is None or len(tgt_texts) == self.n_samples - assert speakers is None or len(speakers) == self.n_samples - assert src_langs is None or len(src_langs) == self.n_samples - assert tgt_langs is None or len(tgt_langs) == self.n_samples - assert ids is None or len(ids) == self.n_samples - assert (tgt_dict is None and tgt_texts is None) or ( - tgt_dict is not None and tgt_texts is not None - ) - self.src_texts, self.tgt_texts = src_texts, tgt_texts - self.src_langs, self.tgt_langs = src_langs, tgt_langs - self.speakers = speakers - self.tgt_dict = tgt_dict - self.check_tgt_lang_tag() - self.ids = ids - self.shuffle = cfg.shuffle if is_train_split else False - - self.feature_transforms = CompositeAudioFeatureTransform.from_config_dict( - self.cfg.get_feature_transforms(split, is_train_split) - ) - - self.pre_tokenizer = pre_tokenizer - self.bpe_tokenizer = bpe_tokenizer - self.n_frames_per_step = n_frames_per_step - self.speaker_to_id = speaker_to_id - - self.tgt_lens = self.get_tgt_lens_and_check_oov() - - logger.info(self.__repr__()) - - def get_tgt_lens_and_check_oov(self): - if self.tgt_texts is None: - return [0 for _ in range(self.n_samples)] - tgt_lens = [] - n_tokens, n_oov_tokens = 0, 0 - for i in range(self.n_samples): - tokenized = self.get_tokenized_tgt_text(i).split(" ") - oov_tokens = [ - t - for t in tokenized - if self.tgt_dict.index(t) == self.tgt_dict.unk_index - ] - n_tokens += len(tokenized) - n_oov_tokens += len(oov_tokens) - tgt_lens.append(len(tokenized)) - logger.info(f"'{self.split}' has {n_oov_tokens / n_tokens * 100:.2f}% OOV") - return tgt_lens - - def __repr__(self): - return ( - self.__class__.__name__ - + f'(split="{self.split}", n_samples={self.n_samples:_}, ' - f"prepend_tgt_lang_tag={self.cfg.prepend_tgt_lang_tag}, " - f"shuffle={self.shuffle}, transforms={self.feature_transforms}, " - f"n_frames_per_step={self.n_frames_per_step}" - ) - - @classmethod - def is_lang_tag(cls, token): - pattern = cls.LANG_TAG_TEMPLATE.replace("{}", "(.*)") - return re.match(pattern, token) - - def check_tgt_lang_tag(self): 
- if self.cfg.prepend_tgt_lang_tag: - assert self.tgt_langs is not None and self.tgt_dict is not None - tgt_lang_tags = [ - self.LANG_TAG_TEMPLATE.format(t) for t in set(self.tgt_langs) - ] - assert all(t in self.tgt_dict for t in tgt_lang_tags) - - @classmethod - def tokenize(cls, tokenizer, text: str): - return text if tokenizer is None else tokenizer.encode(text) - - def get_tokenized_tgt_text(self, index: int): - text = self.tokenize(self.pre_tokenizer, self.tgt_texts[index]) - text = self.tokenize(self.bpe_tokenizer, text) - return text - - def pack_frames(self, feature: torch.Tensor): - if self.n_frames_per_step == 1: - return feature - n_packed_frames = feature.shape[0] // self.n_frames_per_step - feature = feature[:self.n_frames_per_step * n_packed_frames] - return feature.reshape(n_packed_frames, -1) - - @classmethod - def get_lang_tag_idx(cls, lang: str, dictionary: Dictionary): - lang_tag_idx = dictionary.index(cls.LANG_TAG_TEMPLATE.format(lang)) - assert lang_tag_idx != dictionary.unk() - return lang_tag_idx - - def __getitem__(self, index: int) -> SpeechToTextDatasetItem: - source = get_features_or_waveform( - self.audio_paths[index], - need_waveform=self.cfg.use_audio_input, - use_sample_rate=self.cfg.use_sample_rate, - ) - if self.feature_transforms is not None: - assert not self.cfg.use_audio_input - source = self.feature_transforms(source) - source = torch.from_numpy(source).float() - source = self.pack_frames(source) - - target = None - if self.tgt_texts is not None: - tokenized = self.get_tokenized_tgt_text(index) - target = self.tgt_dict.encode_line( - tokenized, add_if_not_exist=False, append_eos=True - ).long() - if self.cfg.prepend_tgt_lang_tag: - lang_tag_idx = self.get_lang_tag_idx( - self.tgt_langs[index], self.tgt_dict - ) - target = torch.cat((torch.LongTensor([lang_tag_idx]), target), 0) - - speaker_id = None - if self.speaker_to_id is not None: - speaker_id = self.speaker_to_id[self.speakers[index]] - return SpeechToTextDatasetItem( - index=index, source=source, target=target, speaker_id=speaker_id - ) - - def __len__(self): - return self.n_samples - - def collater( - self, samples: List[SpeechToTextDatasetItem], return_order: bool = False - ) -> Dict: - if len(samples) == 0: - return {} - indices = torch.tensor([x.index for x in samples], dtype=torch.long) - frames = _collate_frames([x.source for x in samples], self.cfg.use_audio_input) - # sort samples by descending number of frames - n_frames = torch.tensor([x.source.size(0) for x in samples], dtype=torch.long) - n_frames, order = n_frames.sort(descending=True) - indices = indices.index_select(0, order) - frames = frames.index_select(0, order) - - target, target_lengths = None, None - prev_output_tokens = None - ntokens = None - if self.tgt_texts is not None: - target = fairseq_data_utils.collate_tokens( - [x.target for x in samples], - self.tgt_dict.pad(), - self.tgt_dict.eos(), - left_pad=False, - move_eos_to_beginning=False, - ) - target = target.index_select(0, order) - target_lengths = torch.tensor( - [x.target.size(0) for x in samples], dtype=torch.long - ).index_select(0, order) - prev_output_tokens = fairseq_data_utils.collate_tokens( - [x.target for x in samples], - self.tgt_dict.pad(), - self.tgt_dict.eos(), - left_pad=False, - move_eos_to_beginning=True, - ) - prev_output_tokens = prev_output_tokens.index_select(0, order) - ntokens = sum(x.target.size(0) for x in samples) - - speaker = None - if self.speaker_to_id is not None: - speaker = torch.tensor( - [s.speaker_id for s in samples], 
dtype=torch.long - ).index_select(0, order).view(-1, 1) - - net_input = { - "src_tokens": frames, - "src_lengths": n_frames, - "prev_output_tokens": prev_output_tokens, - } - out = { - "id": indices, - "net_input": net_input, - "speaker": speaker, - "target": target, - "target_lengths": target_lengths, - "ntokens": ntokens, - "nsentences": len(samples), - } - if return_order: - out["order"] = order - return out - - def num_tokens(self, index): - return self.n_frames[index] - - def size(self, index): - return self.n_frames[index], self.tgt_lens[index] - - @property - def sizes(self): - return np.array(self.n_frames) - - @property - def can_reuse_epoch_itr_across_epochs(self): - return True - - def ordered_indices(self): - if self.shuffle: - order = [np.random.permutation(len(self))] - else: - order = [np.arange(len(self))] - # first by descending order of # of frames then by original/random order - order.append([-n for n in self.n_frames]) - return np.lexsort(order) - - def prefetch(self, indices): - raise False - - -class SpeechToTextDatasetCreator(object): - # mandatory columns - KEY_ID, KEY_AUDIO, KEY_N_FRAMES = "id", "audio", "n_frames" - KEY_TGT_TEXT = "tgt_text" - # optional columns - KEY_SPEAKER, KEY_SRC_TEXT = "speaker", "src_text" - KEY_SRC_LANG, KEY_TGT_LANG = "src_lang", "tgt_lang" - # default values - DEFAULT_SPEAKER = DEFAULT_SRC_TEXT = DEFAULT_LANG = "" - - @classmethod - def _from_list( - cls, - split_name: str, - is_train_split, - samples: List[Dict], - cfg: S2TDataConfig, - tgt_dict, - pre_tokenizer, - bpe_tokenizer, - n_frames_per_step, - speaker_to_id - ) -> SpeechToTextDataset: - audio_root = Path(cfg.audio_root) - ids = [s[cls.KEY_ID] for s in samples] - audio_paths = [(audio_root / s[cls.KEY_AUDIO]).as_posix() for s in samples] - n_frames = [int(s[cls.KEY_N_FRAMES]) for s in samples] - tgt_texts = [s[cls.KEY_TGT_TEXT] for s in samples] - src_texts = [s.get(cls.KEY_SRC_TEXT, cls.DEFAULT_SRC_TEXT) for s in samples] - speakers = [s.get(cls.KEY_SPEAKER, cls.DEFAULT_SPEAKER) for s in samples] - src_langs = [s.get(cls.KEY_SRC_LANG, cls.DEFAULT_LANG) for s in samples] - tgt_langs = [s.get(cls.KEY_TGT_LANG, cls.DEFAULT_LANG) for s in samples] - return SpeechToTextDataset( - split_name, - is_train_split, - cfg, - audio_paths, - n_frames, - src_texts=src_texts, - tgt_texts=tgt_texts, - speakers=speakers, - src_langs=src_langs, - tgt_langs=tgt_langs, - ids=ids, - tgt_dict=tgt_dict, - pre_tokenizer=pre_tokenizer, - bpe_tokenizer=bpe_tokenizer, - n_frames_per_step=n_frames_per_step, - speaker_to_id=speaker_to_id - ) - - @classmethod - def get_size_ratios( - cls, datasets: List[SpeechToTextDataset], alpha: float = 1.0 - ) -> List[float]: - """Size ratios for temperature-based sampling - (https://arxiv.org/abs/1907.05019)""" - - id_to_lp, lp_to_sz = {}, defaultdict(int) - for ds in datasets: - lang_pairs = {f"{s}->{t}" for s, t in zip(ds.src_langs, ds.tgt_langs)} - assert len(lang_pairs) == 1 - lang_pair = list(lang_pairs)[0] - id_to_lp[ds.split] = lang_pair - lp_to_sz[lang_pair] += sum(ds.n_frames) - - sz_sum = sum(v for v in lp_to_sz.values()) - lp_to_prob = {k: v / sz_sum for k, v in lp_to_sz.items()} - lp_to_tgt_prob = {k: v ** alpha for k, v in lp_to_prob.items()} - prob_sum = sum(v for v in lp_to_tgt_prob.values()) - lp_to_tgt_prob = {k: v / prob_sum for k, v in lp_to_tgt_prob.items()} - lp_to_sz_ratio = { - k: (lp_to_tgt_prob[k] * sz_sum) / v for k, v in lp_to_sz.items() - } - size_ratio = [lp_to_sz_ratio[id_to_lp[ds.split]] for ds in datasets] - - p_formatted = { - k: 
f"{lp_to_prob[k]:.3f}->{lp_to_tgt_prob[k]:.3f}" for k in lp_to_sz - } - logger.info(f"sampling probability balancing: {p_formatted}") - sr_formatted = {ds.split: f"{r:.3f}" for ds, r in zip(datasets, size_ratio)} - logger.info(f"balanced sampling size ratio: {sr_formatted}") - return size_ratio - - @classmethod - def _load_samples_from_tsv(cls, root: str, split: str): - tsv_path = Path(root) / f"{split}.tsv" - if not tsv_path.is_file(): - raise FileNotFoundError(f"Dataset not found: {tsv_path}") - with open(tsv_path) as f: - reader = csv.DictReader( - f, - delimiter="\t", - quotechar=None, - doublequote=False, - lineterminator="\n", - quoting=csv.QUOTE_NONE, - ) - samples = [dict(e) for e in reader] - if len(samples) == 0: - raise ValueError(f"Empty manifest: {tsv_path}") - return samples - - @classmethod - def _from_tsv( - cls, - root: str, - cfg: S2TDataConfig, - split: str, - tgt_dict, - is_train_split: bool, - pre_tokenizer, - bpe_tokenizer, - n_frames_per_step, - speaker_to_id - ) -> SpeechToTextDataset: - samples = cls._load_samples_from_tsv(root, split) - return cls._from_list( - split, is_train_split, samples, cfg, tgt_dict, pre_tokenizer, - bpe_tokenizer, n_frames_per_step, speaker_to_id - ) - - @classmethod - def from_tsv( - cls, - root: str, - cfg: S2TDataConfig, - splits: str, - tgt_dict, - pre_tokenizer, - bpe_tokenizer, - is_train_split: bool, - epoch: int, - seed: int, - n_frames_per_step: int = 1, - speaker_to_id=None - ) -> SpeechToTextDataset: - datasets = [ - cls._from_tsv( - root, cfg, split, tgt_dict, is_train_split, pre_tokenizer, - bpe_tokenizer, n_frames_per_step, speaker_to_id - ) - for split in splits.split(",") - ] - - if is_train_split and len(datasets) > 1 and cfg.sampling_alpha != 1.0: - # temperature-based sampling - size_ratios = cls.get_size_ratios(datasets, alpha=cfg.sampling_alpha) - datasets = [ - ResamplingDataset( - d, size_ratio=r, seed=seed, epoch=epoch, replace=(r >= 1.0) - ) - for r, d in zip(size_ratios, datasets) - ] - - return ConcatDataset(datasets) if len(datasets) > 1 else datasets[0] diff --git a/spaces/Illumotion/Koboldcpp/include/CL/cl_d3d10.h b/spaces/Illumotion/Koboldcpp/include/CL/cl_d3d10.h deleted file mode 100644 index 0d9950bed71a163132e8757928e08ba5194a0336..0000000000000000000000000000000000000000 --- a/spaces/Illumotion/Koboldcpp/include/CL/cl_d3d10.h +++ /dev/null @@ -1,154 +0,0 @@ -/******************************************************************************* - * Copyright (c) 2008-2020 The Khronos Group Inc. - * - * Licensed under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. 
- ******************************************************************************/ - -#ifndef __OPENCL_CL_D3D10_H -#define __OPENCL_CL_D3D10_H - -#if defined(_MSC_VER) -#if _MSC_VER >=1500 -#pragma warning( push ) -#pragma warning( disable : 4201 ) -#pragma warning( disable : 5105 ) -#endif -#endif -#include -#if defined(_MSC_VER) -#if _MSC_VER >=1500 -#pragma warning( pop ) -#endif -#endif -#include -#include - -#ifdef __cplusplus -extern "C" { -#endif - -/****************************************************************************** - * cl_khr_d3d10_sharing */ -#define cl_khr_d3d10_sharing 1 - -typedef cl_uint cl_d3d10_device_source_khr; -typedef cl_uint cl_d3d10_device_set_khr; - -/******************************************************************************/ - -/* Error Codes */ -#define CL_INVALID_D3D10_DEVICE_KHR -1002 -#define CL_INVALID_D3D10_RESOURCE_KHR -1003 -#define CL_D3D10_RESOURCE_ALREADY_ACQUIRED_KHR -1004 -#define CL_D3D10_RESOURCE_NOT_ACQUIRED_KHR -1005 - -/* cl_d3d10_device_source_nv */ -#define CL_D3D10_DEVICE_KHR 0x4010 -#define CL_D3D10_DXGI_ADAPTER_KHR 0x4011 - -/* cl_d3d10_device_set_nv */ -#define CL_PREFERRED_DEVICES_FOR_D3D10_KHR 0x4012 -#define CL_ALL_DEVICES_FOR_D3D10_KHR 0x4013 - -/* cl_context_info */ -#define CL_CONTEXT_D3D10_DEVICE_KHR 0x4014 -#define CL_CONTEXT_D3D10_PREFER_SHARED_RESOURCES_KHR 0x402C - -/* cl_mem_info */ -#define CL_MEM_D3D10_RESOURCE_KHR 0x4015 - -/* cl_image_info */ -#define CL_IMAGE_D3D10_SUBRESOURCE_KHR 0x4016 - -/* cl_command_type */ -#define CL_COMMAND_ACQUIRE_D3D10_OBJECTS_KHR 0x4017 -#define CL_COMMAND_RELEASE_D3D10_OBJECTS_KHR 0x4018 - -/******************************************************************************/ - -typedef cl_int (CL_API_CALL *clGetDeviceIDsFromD3D10KHR_fn)( - cl_platform_id platform, - cl_d3d10_device_source_khr d3d_device_source, - void * d3d_object, - cl_d3d10_device_set_khr d3d_device_set, - cl_uint num_entries, - cl_device_id * devices, - cl_uint * num_devices) CL_API_SUFFIX__VERSION_1_0; - -typedef cl_mem (CL_API_CALL *clCreateFromD3D10BufferKHR_fn)( - cl_context context, - cl_mem_flags flags, - ID3D10Buffer * resource, - cl_int * errcode_ret) CL_API_SUFFIX__VERSION_1_0; - -typedef cl_mem (CL_API_CALL *clCreateFromD3D10Texture2DKHR_fn)( - cl_context context, - cl_mem_flags flags, - ID3D10Texture2D * resource, - UINT subresource, - cl_int * errcode_ret) CL_API_SUFFIX__VERSION_1_0; - -typedef cl_mem (CL_API_CALL *clCreateFromD3D10Texture3DKHR_fn)( - cl_context context, - cl_mem_flags flags, - ID3D10Texture3D * resource, - UINT subresource, - cl_int * errcode_ret) CL_API_SUFFIX__VERSION_1_0; - -typedef cl_int (CL_API_CALL *clEnqueueAcquireD3D10ObjectsKHR_fn)( - cl_command_queue command_queue, - cl_uint num_objects, - const cl_mem * mem_objects, - cl_uint num_events_in_wait_list, - const cl_event * event_wait_list, - cl_event * event) CL_API_SUFFIX__VERSION_1_0; - -typedef cl_int (CL_API_CALL *clEnqueueReleaseD3D10ObjectsKHR_fn)( - cl_command_queue command_queue, - cl_uint num_objects, - const cl_mem * mem_objects, - cl_uint num_events_in_wait_list, - const cl_event * event_wait_list, - cl_event * event) CL_API_SUFFIX__VERSION_1_0; - -/*************************************************************** -* cl_intel_sharing_format_query_d3d10 -***************************************************************/ -#define cl_intel_sharing_format_query_d3d10 1 - -/* when cl_khr_d3d10_sharing is supported */ - -extern CL_API_ENTRY cl_int CL_API_CALL -clGetSupportedD3D10TextureFormatsINTEL( - cl_context context, - 
cl_mem_flags flags, - cl_mem_object_type image_type, - cl_uint num_entries, - DXGI_FORMAT* d3d10_formats, - cl_uint* num_texture_formats) ; - -typedef cl_int (CL_API_CALL * -clGetSupportedD3D10TextureFormatsINTEL_fn)( - cl_context context, - cl_mem_flags flags, - cl_mem_object_type image_type, - cl_uint num_entries, - DXGI_FORMAT* d3d10_formats, - cl_uint* num_texture_formats) ; - -#ifdef __cplusplus -} -#endif - -#endif /* __OPENCL_CL_D3D10_H */ - diff --git a/spaces/JUNGU/VToonify/vtoonify/model/encoder/encoders/model_irse.py b/spaces/JUNGU/VToonify/vtoonify/model/encoder/encoders/model_irse.py deleted file mode 100644 index 6698d9705321dd4a27681ea15204e9ffaa51f62a..0000000000000000000000000000000000000000 --- a/spaces/JUNGU/VToonify/vtoonify/model/encoder/encoders/model_irse.py +++ /dev/null @@ -1,84 +0,0 @@ -from torch.nn import Linear, Conv2d, BatchNorm1d, BatchNorm2d, PReLU, Dropout, Sequential, Module -from model.encoder.encoders.helpers import get_blocks, Flatten, bottleneck_IR, bottleneck_IR_SE, l2_norm - -""" -Modified Backbone implementation from [TreB1eN](https://github.com/TreB1eN/InsightFace_Pytorch) -""" - - -class Backbone(Module): - def __init__(self, input_size, num_layers, mode='ir', drop_ratio=0.4, affine=True): - super(Backbone, self).__init__() - assert input_size in [112, 224], "input_size should be 112 or 224" - assert num_layers in [50, 100, 152], "num_layers should be 50, 100 or 152" - assert mode in ['ir', 'ir_se'], "mode should be ir or ir_se" - blocks = get_blocks(num_layers) - if mode == 'ir': - unit_module = bottleneck_IR - elif mode == 'ir_se': - unit_module = bottleneck_IR_SE - self.input_layer = Sequential(Conv2d(3, 64, (3, 3), 1, 1, bias=False), - BatchNorm2d(64), - PReLU(64)) - if input_size == 112: - self.output_layer = Sequential(BatchNorm2d(512), - Dropout(drop_ratio), - Flatten(), - Linear(512 * 7 * 7, 512), - BatchNorm1d(512, affine=affine)) - else: - self.output_layer = Sequential(BatchNorm2d(512), - Dropout(drop_ratio), - Flatten(), - Linear(512 * 14 * 14, 512), - BatchNorm1d(512, affine=affine)) - - modules = [] - for block in blocks: - for bottleneck in block: - modules.append(unit_module(bottleneck.in_channel, - bottleneck.depth, - bottleneck.stride)) - self.body = Sequential(*modules) - - def forward(self, x): - x = self.input_layer(x) - x = self.body(x) - x = self.output_layer(x) - return l2_norm(x) - - -def IR_50(input_size): - """Constructs a ir-50 model.""" - model = Backbone(input_size, num_layers=50, mode='ir', drop_ratio=0.4, affine=False) - return model - - -def IR_101(input_size): - """Constructs a ir-101 model.""" - model = Backbone(input_size, num_layers=100, mode='ir', drop_ratio=0.4, affine=False) - return model - - -def IR_152(input_size): - """Constructs a ir-152 model.""" - model = Backbone(input_size, num_layers=152, mode='ir', drop_ratio=0.4, affine=False) - return model - - -def IR_SE_50(input_size): - """Constructs a ir_se-50 model.""" - model = Backbone(input_size, num_layers=50, mode='ir_se', drop_ratio=0.4, affine=False) - return model - - -def IR_SE_101(input_size): - """Constructs a ir_se-101 model.""" - model = Backbone(input_size, num_layers=100, mode='ir_se', drop_ratio=0.4, affine=False) - return model - - -def IR_SE_152(input_size): - """Constructs a ir_se-152 model.""" - model = Backbone(input_size, num_layers=152, mode='ir_se', drop_ratio=0.4, affine=False) - return model diff --git a/spaces/Jamel887/Rvc-tio887/lib/infer_pack/modules.py b/spaces/Jamel887/Rvc-tio887/lib/infer_pack/modules.py deleted file mode 
100644 index c83289df7c79a4810dacd15c050148544ba0b6a9..0000000000000000000000000000000000000000 --- a/spaces/Jamel887/Rvc-tio887/lib/infer_pack/modules.py +++ /dev/null @@ -1,522 +0,0 @@ -import copy -import math -import numpy as np -import scipy -import torch -from torch import nn -from torch.nn import functional as F - -from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d -from torch.nn.utils import weight_norm, remove_weight_norm - -from lib.infer_pack import commons -from lib.infer_pack.commons import init_weights, get_padding -from lib.infer_pack.transforms import piecewise_rational_quadratic_transform - - -LRELU_SLOPE = 0.1 - - -class LayerNorm(nn.Module): - def __init__(self, channels, eps=1e-5): - super().__init__() - self.channels = channels - self.eps = eps - - self.gamma = nn.Parameter(torch.ones(channels)) - self.beta = nn.Parameter(torch.zeros(channels)) - - def forward(self, x): - x = x.transpose(1, -1) - x = F.layer_norm(x, (self.channels,), self.gamma, self.beta, self.eps) - return x.transpose(1, -1) - - -class ConvReluNorm(nn.Module): - def __init__( - self, - in_channels, - hidden_channels, - out_channels, - kernel_size, - n_layers, - p_dropout, - ): - super().__init__() - self.in_channels = in_channels - self.hidden_channels = hidden_channels - self.out_channels = out_channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.p_dropout = p_dropout - assert n_layers > 1, "Number of layers should be larger than 0." - - self.conv_layers = nn.ModuleList() - self.norm_layers = nn.ModuleList() - self.conv_layers.append( - nn.Conv1d( - in_channels, hidden_channels, kernel_size, padding=kernel_size // 2 - ) - ) - self.norm_layers.append(LayerNorm(hidden_channels)) - self.relu_drop = nn.Sequential(nn.ReLU(), nn.Dropout(p_dropout)) - for _ in range(n_layers - 1): - self.conv_layers.append( - nn.Conv1d( - hidden_channels, - hidden_channels, - kernel_size, - padding=kernel_size // 2, - ) - ) - self.norm_layers.append(LayerNorm(hidden_channels)) - self.proj = nn.Conv1d(hidden_channels, out_channels, 1) - self.proj.weight.data.zero_() - self.proj.bias.data.zero_() - - def forward(self, x, x_mask): - x_org = x - for i in range(self.n_layers): - x = self.conv_layers[i](x * x_mask) - x = self.norm_layers[i](x) - x = self.relu_drop(x) - x = x_org + self.proj(x) - return x * x_mask - - -class DDSConv(nn.Module): - """ - Dialted and Depth-Separable Convolution - """ - - def __init__(self, channels, kernel_size, n_layers, p_dropout=0.0): - super().__init__() - self.channels = channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.p_dropout = p_dropout - - self.drop = nn.Dropout(p_dropout) - self.convs_sep = nn.ModuleList() - self.convs_1x1 = nn.ModuleList() - self.norms_1 = nn.ModuleList() - self.norms_2 = nn.ModuleList() - for i in range(n_layers): - dilation = kernel_size**i - padding = (kernel_size * dilation - dilation) // 2 - self.convs_sep.append( - nn.Conv1d( - channels, - channels, - kernel_size, - groups=channels, - dilation=dilation, - padding=padding, - ) - ) - self.convs_1x1.append(nn.Conv1d(channels, channels, 1)) - self.norms_1.append(LayerNorm(channels)) - self.norms_2.append(LayerNorm(channels)) - - def forward(self, x, x_mask, g=None): - if g is not None: - x = x + g - for i in range(self.n_layers): - y = self.convs_sep[i](x * x_mask) - y = self.norms_1[i](y) - y = F.gelu(y) - y = self.convs_1x1[i](y) - y = self.norms_2[i](y) - y = F.gelu(y) - y = self.drop(y) - x = x + y - return x * x_mask - - -class WN(torch.nn.Module): - 
def __init__( - self, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=0, - p_dropout=0, - ): - super(WN, self).__init__() - assert kernel_size % 2 == 1 - self.hidden_channels = hidden_channels - self.kernel_size = (kernel_size,) - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.gin_channels = gin_channels - self.p_dropout = p_dropout - - self.in_layers = torch.nn.ModuleList() - self.res_skip_layers = torch.nn.ModuleList() - self.drop = nn.Dropout(p_dropout) - - if gin_channels != 0: - cond_layer = torch.nn.Conv1d( - gin_channels, 2 * hidden_channels * n_layers, 1 - ) - self.cond_layer = torch.nn.utils.weight_norm(cond_layer, name="weight") - - for i in range(n_layers): - dilation = dilation_rate**i - padding = int((kernel_size * dilation - dilation) / 2) - in_layer = torch.nn.Conv1d( - hidden_channels, - 2 * hidden_channels, - kernel_size, - dilation=dilation, - padding=padding, - ) - in_layer = torch.nn.utils.weight_norm(in_layer, name="weight") - self.in_layers.append(in_layer) - - # last one is not necessary - if i < n_layers - 1: - res_skip_channels = 2 * hidden_channels - else: - res_skip_channels = hidden_channels - - res_skip_layer = torch.nn.Conv1d(hidden_channels, res_skip_channels, 1) - res_skip_layer = torch.nn.utils.weight_norm(res_skip_layer, name="weight") - self.res_skip_layers.append(res_skip_layer) - - def forward(self, x, x_mask, g=None, **kwargs): - output = torch.zeros_like(x) - n_channels_tensor = torch.IntTensor([self.hidden_channels]) - - if g is not None: - g = self.cond_layer(g) - - for i in range(self.n_layers): - x_in = self.in_layers[i](x) - if g is not None: - cond_offset = i * 2 * self.hidden_channels - g_l = g[:, cond_offset : cond_offset + 2 * self.hidden_channels, :] - else: - g_l = torch.zeros_like(x_in) - - acts = commons.fused_add_tanh_sigmoid_multiply(x_in, g_l, n_channels_tensor) - acts = self.drop(acts) - - res_skip_acts = self.res_skip_layers[i](acts) - if i < self.n_layers - 1: - res_acts = res_skip_acts[:, : self.hidden_channels, :] - x = (x + res_acts) * x_mask - output = output + res_skip_acts[:, self.hidden_channels :, :] - else: - output = output + res_skip_acts - return output * x_mask - - def remove_weight_norm(self): - if self.gin_channels != 0: - torch.nn.utils.remove_weight_norm(self.cond_layer) - for l in self.in_layers: - torch.nn.utils.remove_weight_norm(l) - for l in self.res_skip_layers: - torch.nn.utils.remove_weight_norm(l) - - -class ResBlock1(torch.nn.Module): - def __init__(self, channels, kernel_size=3, dilation=(1, 3, 5)): - super(ResBlock1, self).__init__() - self.convs1 = nn.ModuleList( - [ - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=dilation[0], - padding=get_padding(kernel_size, dilation[0]), - ) - ), - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=dilation[1], - padding=get_padding(kernel_size, dilation[1]), - ) - ), - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=dilation[2], - padding=get_padding(kernel_size, dilation[2]), - ) - ), - ] - ) - self.convs1.apply(init_weights) - - self.convs2 = nn.ModuleList( - [ - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=1, - padding=get_padding(kernel_size, 1), - ) - ), - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=1, - padding=get_padding(kernel_size, 1), - ) - ), - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=1, - 
padding=get_padding(kernel_size, 1), - ) - ), - ] - ) - self.convs2.apply(init_weights) - - def forward(self, x, x_mask=None): - for c1, c2 in zip(self.convs1, self.convs2): - xt = F.leaky_relu(x, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c1(xt) - xt = F.leaky_relu(xt, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c2(xt) - x = xt + x - if x_mask is not None: - x = x * x_mask - return x - - def remove_weight_norm(self): - for l in self.convs1: - remove_weight_norm(l) - for l in self.convs2: - remove_weight_norm(l) - - -class ResBlock2(torch.nn.Module): - def __init__(self, channels, kernel_size=3, dilation=(1, 3)): - super(ResBlock2, self).__init__() - self.convs = nn.ModuleList( - [ - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=dilation[0], - padding=get_padding(kernel_size, dilation[0]), - ) - ), - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=dilation[1], - padding=get_padding(kernel_size, dilation[1]), - ) - ), - ] - ) - self.convs.apply(init_weights) - - def forward(self, x, x_mask=None): - for c in self.convs: - xt = F.leaky_relu(x, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c(xt) - x = xt + x - if x_mask is not None: - x = x * x_mask - return x - - def remove_weight_norm(self): - for l in self.convs: - remove_weight_norm(l) - - -class Log(nn.Module): - def forward(self, x, x_mask, reverse=False, **kwargs): - if not reverse: - y = torch.log(torch.clamp_min(x, 1e-5)) * x_mask - logdet = torch.sum(-y, [1, 2]) - return y, logdet - else: - x = torch.exp(x) * x_mask - return x - - -class Flip(nn.Module): - def forward(self, x, *args, reverse=False, **kwargs): - x = torch.flip(x, [1]) - if not reverse: - logdet = torch.zeros(x.size(0)).to(dtype=x.dtype, device=x.device) - return x, logdet - else: - return x - - -class ElementwiseAffine(nn.Module): - def __init__(self, channels): - super().__init__() - self.channels = channels - self.m = nn.Parameter(torch.zeros(channels, 1)) - self.logs = nn.Parameter(torch.zeros(channels, 1)) - - def forward(self, x, x_mask, reverse=False, **kwargs): - if not reverse: - y = self.m + torch.exp(self.logs) * x - y = y * x_mask - logdet = torch.sum(self.logs * x_mask, [1, 2]) - return y, logdet - else: - x = (x - self.m) * torch.exp(-self.logs) * x_mask - return x - - -class ResidualCouplingLayer(nn.Module): - def __init__( - self, - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - p_dropout=0, - gin_channels=0, - mean_only=False, - ): - assert channels % 2 == 0, "channels should be divisible by 2" - super().__init__() - self.channels = channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.half_channels = channels // 2 - self.mean_only = mean_only - - self.pre = nn.Conv1d(self.half_channels, hidden_channels, 1) - self.enc = WN( - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - p_dropout=p_dropout, - gin_channels=gin_channels, - ) - self.post = nn.Conv1d(hidden_channels, self.half_channels * (2 - mean_only), 1) - self.post.weight.data.zero_() - self.post.bias.data.zero_() - - def forward(self, x, x_mask, g=None, reverse=False): - x0, x1 = torch.split(x, [self.half_channels] * 2, 1) - h = self.pre(x0) * x_mask - h = self.enc(h, x_mask, g=g) - stats = self.post(h) * x_mask - if not self.mean_only: - m, logs = torch.split(stats, [self.half_channels] * 2, 1) - else: - m = stats - logs = 
torch.zeros_like(m) - - if not reverse: - x1 = m + x1 * torch.exp(logs) * x_mask - x = torch.cat([x0, x1], 1) - logdet = torch.sum(logs, [1, 2]) - return x, logdet - else: - x1 = (x1 - m) * torch.exp(-logs) * x_mask - x = torch.cat([x0, x1], 1) - return x - - def remove_weight_norm(self): - self.enc.remove_weight_norm() - - -class ConvFlow(nn.Module): - def __init__( - self, - in_channels, - filter_channels, - kernel_size, - n_layers, - num_bins=10, - tail_bound=5.0, - ): - super().__init__() - self.in_channels = in_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.num_bins = num_bins - self.tail_bound = tail_bound - self.half_channels = in_channels // 2 - - self.pre = nn.Conv1d(self.half_channels, filter_channels, 1) - self.convs = DDSConv(filter_channels, kernel_size, n_layers, p_dropout=0.0) - self.proj = nn.Conv1d( - filter_channels, self.half_channels * (num_bins * 3 - 1), 1 - ) - self.proj.weight.data.zero_() - self.proj.bias.data.zero_() - - def forward(self, x, x_mask, g=None, reverse=False): - x0, x1 = torch.split(x, [self.half_channels] * 2, 1) - h = self.pre(x0) - h = self.convs(h, x_mask, g=g) - h = self.proj(h) * x_mask - - b, c, t = x0.shape - h = h.reshape(b, c, -1, t).permute(0, 1, 3, 2) # [b, cx?, t] -> [b, c, t, ?] - - unnormalized_widths = h[..., : self.num_bins] / math.sqrt(self.filter_channels) - unnormalized_heights = h[..., self.num_bins : 2 * self.num_bins] / math.sqrt( - self.filter_channels - ) - unnormalized_derivatives = h[..., 2 * self.num_bins :] - - x1, logabsdet = piecewise_rational_quadratic_transform( - x1, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=reverse, - tails="linear", - tail_bound=self.tail_bound, - ) - - x = torch.cat([x0, x1], 1) * x_mask - logdet = torch.sum(logabsdet * x_mask, [1, 2]) - if not reverse: - return x, logdet - else: - return x diff --git a/spaces/KPCGD/bingo/src/components/chat-notification.tsx b/spaces/KPCGD/bingo/src/components/chat-notification.tsx deleted file mode 100644 index 4be24d0f1755c8058698cfa66c736d8d4792475a..0000000000000000000000000000000000000000 --- a/spaces/KPCGD/bingo/src/components/chat-notification.tsx +++ /dev/null @@ -1,77 +0,0 @@ -import { useEffect } from 'react' -import Image from 'next/image' - -import IconWarning from '@/assets/images/warning.svg' -import { ChatError, ErrorCode, ChatMessageModel } from '@/lib/bots/bing/types' -import { ExternalLink } from './external-link' -import { useBing } from '@/lib/hooks/use-bing' - -export interface ChatNotificationProps extends Pick, 'bot'> { - message?: ChatMessageModel -} - -function getAction(error: ChatError, reset: () => void) { - if (error.code === ErrorCode.THROTTLE_LIMIT) { - reset() - return ( -
- You have reached the maximum number of messages per day; please switch accounts or try again after a day -
- ) - } - if (error.code === ErrorCode.BING_FORBIDDEN) { - return ( - - Your account has been blacklisted; please try switching accounts or applying to have it unblocked - - ) - } - if (error.code === ErrorCode.CONVERSATION_LIMIT) { - return ( -
- The current topic has ended; click - Restart - to begin a new conversation -
- ) - } - if (error.code === ErrorCode.BING_CAPTCHA) { - return ( - - Click to pass the CAPTCHA verification - - ) - } - if (error.code === ErrorCode.BING_UNAUTHORIZED) { - reset() - return ( - Identity information was not found or has expired; click here to set it again - ) - } - return error.message -} - -export function ChatNotification({ message, bot }: ChatNotificationProps) { - useEffect(() => { - window.scrollBy(0, 2000) - }, [message]) - - if (!message?.error) return - - return ( -
    - error - {getAction(message.error, () => bot.resetConversation())} -
    - ) -} diff --git a/spaces/Kangarroar/ApplioRVC-Inference/tools/rvc_for_realtime.py b/spaces/Kangarroar/ApplioRVC-Inference/tools/rvc_for_realtime.py deleted file mode 100644 index f746cde4dfd9c3b87fe844304aa3a975d68b3433..0000000000000000000000000000000000000000 --- a/spaces/Kangarroar/ApplioRVC-Inference/tools/rvc_for_realtime.py +++ /dev/null @@ -1,381 +0,0 @@ -import os -import sys -import traceback -import logging - -logger = logging.getLogger(__name__) - -from time import time as ttime - -import fairseq -import faiss -import numpy as np -import parselmouth -import pyworld -import scipy.signal as signal -import torch -import torch.nn as nn -import torch.nn.functional as F -import torchcrepe - -from infer.lib.infer_pack.models import ( - SynthesizerTrnMs256NSFsid, - SynthesizerTrnMs256NSFsid_nono, - SynthesizerTrnMs768NSFsid, - SynthesizerTrnMs768NSFsid_nono, -) - -now_dir = os.getcwd() -sys.path.append(now_dir) -from multiprocessing import Manager as M - -from configs.config import Config - -config = Config() - -mm = M() -if config.dml == True: - - def forward_dml(ctx, x, scale): - ctx.scale = scale - res = x.clone().detach() - return res - - fairseq.modules.grad_multiply.GradMultiply.forward = forward_dml - - -# config.device=torch.device("cpu")########强制cpu测试 -# config.is_half=False########强制cpu测试 -class RVC: - def __init__( - self, - key, - pth_path, - index_path, - index_rate, - n_cpu, - inp_q, - opt_q, - device, - last_rvc=None, - ) -> None: - """ - 初始化 - """ - try: - global config - self.inp_q = inp_q - self.opt_q = opt_q - # device="cpu"########强制cpu测试 - self.device = device - self.f0_up_key = key - self.time_step = 160 / 16000 * 1000 - self.f0_min = 50 - self.f0_max = 1100 - self.f0_mel_min = 1127 * np.log(1 + self.f0_min / 700) - self.f0_mel_max = 1127 * np.log(1 + self.f0_max / 700) - self.sr = 16000 - self.window = 160 - self.n_cpu = n_cpu - if index_rate != 0: - self.index = faiss.read_index(index_path) - self.big_npy = self.index.reconstruct_n(0, self.index.ntotal) - logger.info("Index search enabled") - self.pth_path = pth_path - self.index_path = index_path - self.index_rate = index_rate - - if last_rvc is None: - models, _, _ = fairseq.checkpoint_utils.load_model_ensemble_and_task( - ["assets/hubert/hubert_base.pt"], - suffix="", - ) - hubert_model = models[0] - hubert_model = hubert_model.to(device) - if config.is_half: - hubert_model = hubert_model.half() - else: - hubert_model = hubert_model.float() - hubert_model.eval() - self.model = hubert_model - else: - self.model = last_rvc.model - - if last_rvc is None or last_rvc.pth_path != self.pth_path: - cpt = torch.load(self.pth_path, map_location="cpu") - self.tgt_sr = cpt["config"][-1] - cpt["config"][-3] = cpt["weight"]["emb_g.weight"].shape[0] - self.if_f0 = cpt.get("f0", 1) - self.version = cpt.get("version", "v1") - if self.version == "v1": - if self.if_f0 == 1: - self.net_g = SynthesizerTrnMs256NSFsid( - *cpt["config"], is_half=config.is_half - ) - else: - self.net_g = SynthesizerTrnMs256NSFsid_nono(*cpt["config"]) - elif self.version == "v2": - if self.if_f0 == 1: - self.net_g = SynthesizerTrnMs768NSFsid( - *cpt["config"], is_half=config.is_half - ) - else: - self.net_g = SynthesizerTrnMs768NSFsid_nono(*cpt["config"]) - del self.net_g.enc_q - logger.debug(self.net_g.load_state_dict(cpt["weight"], strict=False)) - self.net_g.eval().to(device) - # print(2333333333,device,config.device,self.device)#net_g是device,hubert是config.device - if config.is_half: - self.net_g = self.net_g.half() - else: - self.net_g = 
self.net_g.float() - self.is_half = config.is_half - else: - self.tgt_sr = last_rvc.tgt_sr - self.if_f0 = last_rvc.if_f0 - self.version = last_rvc.version - self.net_g = last_rvc.net_g - self.is_half = last_rvc.is_half - - if last_rvc is not None and hasattr(last_rvc, "model_rmvpe"): - self.model_rmvpe = last_rvc.model_rmvpe - except: - logger.warn(traceback.format_exc()) - - def change_key(self, new_key): - self.f0_up_key = new_key - - def change_index_rate(self, new_index_rate): - if new_index_rate != 0 and self.index_rate == 0: - self.index = faiss.read_index(self.index_path) - self.big_npy = self.index.reconstruct_n(0, self.index.ntotal) - logger.info("Index search enabled") - self.index_rate = new_index_rate - - def get_f0_post(self, f0): - f0_min = self.f0_min - f0_max = self.f0_max - f0_mel_min = 1127 * np.log(1 + f0_min / 700) - f0_mel_max = 1127 * np.log(1 + f0_max / 700) - f0bak = f0.copy() - f0_mel = 1127 * np.log(1 + f0 / 700) - f0_mel[f0_mel > 0] = (f0_mel[f0_mel > 0] - f0_mel_min) * 254 / ( - f0_mel_max - f0_mel_min - ) + 1 - f0_mel[f0_mel <= 1] = 1 - f0_mel[f0_mel > 255] = 255 - f0_coarse = np.rint(f0_mel).astype(np.int32) - return f0_coarse, f0bak - - def get_f0(self, x, f0_up_key, n_cpu, method="harvest"): - n_cpu = int(n_cpu) - if method == "crepe": - return self.get_f0_crepe(x, f0_up_key) - if method == "rmvpe": - return self.get_f0_rmvpe(x, f0_up_key) - if method == "pm": - p_len = x.shape[0] // 160 + 1 - f0 = ( - parselmouth.Sound(x, 16000) - .to_pitch_ac( - time_step=0.01, - voicing_threshold=0.6, - pitch_floor=50, - pitch_ceiling=1100, - ) - .selected_array["frequency"] - ) - - pad_size = (p_len - len(f0) + 1) // 2 - if pad_size > 0 or p_len - len(f0) - pad_size > 0: - # print(pad_size, p_len - len(f0) - pad_size) - f0 = np.pad( - f0, [[pad_size, p_len - len(f0) - pad_size]], mode="constant" - ) - - f0 *= pow(2, f0_up_key / 12) - return self.get_f0_post(f0) - if n_cpu == 1: - f0, t = pyworld.harvest( - x.astype(np.double), - fs=16000, - f0_ceil=1100, - f0_floor=50, - frame_period=10, - ) - f0 = signal.medfilt(f0, 3) - f0 *= pow(2, f0_up_key / 12) - return self.get_f0_post(f0) - f0bak = np.zeros(x.shape[0] // 160 + 1, dtype=np.float64) - length = len(x) - part_length = 160 * ((length // 160 - 1) // n_cpu + 1) - n_cpu = (length // 160 - 1) // (part_length // 160) + 1 - ts = ttime() - res_f0 = mm.dict() - for idx in range(n_cpu): - tail = part_length * (idx + 1) + 320 - if idx == 0: - self.inp_q.put((idx, x[:tail], res_f0, n_cpu, ts)) - else: - self.inp_q.put( - (idx, x[part_length * idx - 320 : tail], res_f0, n_cpu, ts) - ) - while 1: - res_ts = self.opt_q.get() - if res_ts == ts: - break - f0s = [i[1] for i in sorted(res_f0.items(), key=lambda x: x[0])] - for idx, f0 in enumerate(f0s): - if idx == 0: - f0 = f0[:-3] - elif idx != n_cpu - 1: - f0 = f0[2:-3] - else: - f0 = f0[2:] - f0bak[ - part_length * idx // 160 : part_length * idx // 160 + f0.shape[0] - ] = f0 - f0bak = signal.medfilt(f0bak, 3) - f0bak *= pow(2, f0_up_key / 12) - return self.get_f0_post(f0bak) - - def get_f0_crepe(self, x, f0_up_key): - if "privateuseone" in str(self.device): ###不支持dml,cpu又太慢用不成,拿pm顶替 - return self.get_f0(x, f0_up_key, 1, "pm") - audio = torch.tensor(np.copy(x))[None].float() - # print("using crepe,device:%s"%self.device) - f0, pd = torchcrepe.predict( - audio, - self.sr, - 160, - self.f0_min, - self.f0_max, - "full", - batch_size=512, - # device=self.device if self.device.type!="privateuseone" else "cpu",###crepe不用半精度全部是全精度所以不愁###cpu延迟高到没法用 - device=self.device, - 
return_periodicity=True, - ) - pd = torchcrepe.filter.median(pd, 3) - f0 = torchcrepe.filter.mean(f0, 3) - f0[pd < 0.1] = 0 - f0 = f0[0].cpu().numpy() - f0 *= pow(2, f0_up_key / 12) - return self.get_f0_post(f0) - - def get_f0_rmvpe(self, x, f0_up_key): - if hasattr(self, "model_rmvpe") == False: - from infer.lib.rmvpe import RMVPE - - logger.info("Loading rmvpe model") - self.model_rmvpe = RMVPE( - # "rmvpe.pt", is_half=self.is_half if self.device.type!="privateuseone" else False, device=self.device if self.device.type!="privateuseone"else "cpu"####dml时强制对rmvpe用cpu跑 - # "rmvpe.pt", is_half=False, device=self.device####dml配置 - # "rmvpe.pt", is_half=False, device="cpu"####锁定cpu配置 - "assets/rmvpe/rmvpe.pt", - is_half=self.is_half, - device=self.device, ####正常逻辑 - ) - # self.model_rmvpe = RMVPE("aug2_58000_half.pt", is_half=self.is_half, device=self.device) - f0 = self.model_rmvpe.infer_from_audio(x, thred=0.03) - f0 *= pow(2, f0_up_key / 12) - return self.get_f0_post(f0) - - def infer( - self, - feats: torch.Tensor, - indata: np.ndarray, - block_frame_16k, - rate, - cache_pitch, - cache_pitchf, - f0method, - ) -> np.ndarray: - feats = feats.view(1, -1) - if config.is_half: - feats = feats.half() - else: - feats = feats.float() - feats = feats.to(self.device) - t1 = ttime() - with torch.no_grad(): - padding_mask = torch.BoolTensor(feats.shape).to(self.device).fill_(False) - inputs = { - "source": feats, - "padding_mask": padding_mask, - "output_layer": 9 if self.version == "v1" else 12, - } - logits = self.model.extract_features(**inputs) - feats = ( - self.model.final_proj(logits[0]) if self.version == "v1" else logits[0] - ) - feats = F.pad(feats, (0, 0, 1, 0)) - t2 = ttime() - try: - if hasattr(self, "index") and self.index_rate != 0: - leng_replace_head = int(rate * feats[0].shape[0]) - npy = feats[0][-leng_replace_head:].cpu().numpy().astype("float32") - score, ix = self.index.search(npy, k=8) - weight = np.square(1 / score) - weight /= weight.sum(axis=1, keepdims=True) - npy = np.sum(self.big_npy[ix] * np.expand_dims(weight, axis=2), axis=1) - if config.is_half: - npy = npy.astype("float16") - feats[0][-leng_replace_head:] = ( - torch.from_numpy(npy).unsqueeze(0).to(self.device) * self.index_rate - + (1 - self.index_rate) * feats[0][-leng_replace_head:] - ) - else: - logger.warn("Index search FAILED or disabled") - except: - traceback.print_exc() - logger.warn("Index search FAILED") - feats = F.interpolate(feats.permute(0, 2, 1), scale_factor=2).permute(0, 2, 1) - t3 = ttime() - if self.if_f0 == 1: - pitch, pitchf = self.get_f0(indata, self.f0_up_key, self.n_cpu, f0method) - start_frame = block_frame_16k // 160 - end_frame = len(cache_pitch) - (pitch.shape[0] - 4) + start_frame - cache_pitch[:] = np.append(cache_pitch[start_frame:end_frame], pitch[3:-1]) - cache_pitchf[:] = np.append( - cache_pitchf[start_frame:end_frame], pitchf[3:-1] - ) - p_len = min(feats.shape[1], 13000, cache_pitch.shape[0]) - else: - cache_pitch, cache_pitchf = None, None - p_len = min(feats.shape[1], 13000) - t4 = ttime() - feats = feats[:, :p_len, :] - if self.if_f0 == 1: - cache_pitch = cache_pitch[:p_len] - cache_pitchf = cache_pitchf[:p_len] - cache_pitch = torch.LongTensor(cache_pitch).unsqueeze(0).to(self.device) - cache_pitchf = torch.FloatTensor(cache_pitchf).unsqueeze(0).to(self.device) - p_len = torch.LongTensor([p_len]).to(self.device) - ii = 0 # sid - sid = torch.LongTensor([ii]).to(self.device) - with torch.no_grad(): - if self.if_f0 == 1: - # 
print(12222222222,feats.device,p_len.device,cache_pitch.device,cache_pitchf.device,sid.device,rate2) - infered_audio = ( - self.net_g.infer( - feats, p_len, cache_pitch, cache_pitchf, sid, rate - )[0][0, 0] - .data - .float() - ) - else: - infered_audio = ( - self.net_g.infer(feats, p_len, sid, rate)[0][0, 0] - .data - .float() - ) - t5 = ttime() - logger.info( - "Spent time: fea = %.2fs, index = %.2fs, f0 = %.2fs, model = %.2fs", - t2 - t1, - t3 - t2, - t4 - t3, - t5 - t4, - ) - return infered_audio \ No newline at end of file diff --git a/spaces/KarloDarlo/3D_Photo_Inpainting/app.py b/spaces/KarloDarlo/3D_Photo_Inpainting/app.py deleted file mode 100644 index 1285dc5f05cda55f8ff710394a2297789b6ee4e1..0000000000000000000000000000000000000000 --- a/spaces/KarloDarlo/3D_Photo_Inpainting/app.py +++ /dev/null @@ -1,230 +0,0 @@ -# Repo source: https://github.com/vt-vl-lab/3d-photo-inpainting - -import os -#os.environ['QT_DEBUG_PLUGINS'] = '1' - -import subprocess -#subprocess.run('ldd /home/user/.local/lib/python3.8/site-packages/PyQt5/Qt/plugins/platforms/libqxcb.so', shell=True) -#subprocess.run('pip list', shell=True) -subprocess.run('nvidia-smi', shell=True) -os.mkdir("image") - -from pyvirtualdisplay import Display -display = Display(visible=0, size=(1920, 1080)).start() -#subprocess.run('echo $DISPLAY', shell=True) - -# 3d inpainting imports -import numpy as np -import argparse -import glob -import os -from functools import partial -import vispy -import scipy.misc as misc -from tqdm import tqdm -import yaml -import time -import sys -from mesh import write_ply, read_ply, output_3d_photo -from utils import get_MiDaS_samples, read_MiDaS_depth -import torch -import cv2 -from skimage.transform import resize -import imageio -import copy -from networks import Inpaint_Color_Net, Inpaint_Depth_Net, Inpaint_Edge_Net -from MiDaS.run import run_depth -from boostmonodepth_utils import run_boostmonodepth -from MiDaS.monodepth_net import MonoDepthNet -import MiDaS.MiDaS_utils as MiDaS_utils -from bilateral_filtering import sparse_bilateral_filtering - -import torch - -# gradio imports -import gradio as gr -import uuid -from PIL import Image -from pathlib import Path -import shutil -from time import sleep - -def inpaint(img_name, num_frames, fps): - - config = yaml.load(open('argument.yml', 'r')) - - config['num_frames'] = num_frames - config['fps'] = fps - - if torch.cuda.is_available(): - config['gpu_ids'] = 0 - - if config['offscreen_rendering'] is True: - vispy.use(app='egl') - - os.makedirs(config['mesh_folder'], exist_ok=True) - os.makedirs(config['video_folder'], exist_ok=True) - os.makedirs(config['depth_folder'], exist_ok=True) - sample_list = get_MiDaS_samples(config['src_folder'], config['depth_folder'], config, config['specific'], img_name.stem) - normal_canvas, all_canvas = None, None - - if isinstance(config["gpu_ids"], int) and (config["gpu_ids"] >= 0): - device = config["gpu_ids"] - else: - device = "cpu" - - print(f"running on device {device}") - - for idx in tqdm(range(len(sample_list))): - depth = None - sample = sample_list[idx] - print("Current Source ==> ", sample['src_pair_name']) - mesh_fi = os.path.join(config['mesh_folder'], sample['src_pair_name'] +'.ply') - image = imageio.imread(sample['ref_img_fi']) - - print(f"Running depth extraction at {time.time()}") - if config['use_boostmonodepth'] is True: - run_boostmonodepth(sample['ref_img_fi'], config['src_folder'], config['depth_folder']) - elif config['require_midas'] is True: - run_depth([sample['ref_img_fi']], 
config['src_folder'], config['depth_folder'], - config['MiDaS_model_ckpt'], MonoDepthNet, MiDaS_utils, target_w=640) - - if 'npy' in config['depth_format']: - config['output_h'], config['output_w'] = np.load(sample['depth_fi']).shape[:2] - else: - config['output_h'], config['output_w'] = imageio.imread(sample['depth_fi']).shape[:2] - frac = config['longer_side_len'] / max(config['output_h'], config['output_w']) - config['output_h'], config['output_w'] = int(config['output_h'] * frac), int(config['output_w'] * frac) - config['original_h'], config['original_w'] = config['output_h'], config['output_w'] - if image.ndim == 2: - image = image[..., None].repeat(3, -1) - if np.sum(np.abs(image[..., 0] - image[..., 1])) == 0 and np.sum(np.abs(image[..., 1] - image[..., 2])) == 0: - config['gray_image'] = True - else: - config['gray_image'] = False - image = cv2.resize(image, (config['output_w'], config['output_h']), interpolation=cv2.INTER_AREA) - depth = read_MiDaS_depth(sample['depth_fi'], 3.0, config['output_h'], config['output_w']) - mean_loc_depth = depth[depth.shape[0]//2, depth.shape[1]//2] - if not(config['load_ply'] is True and os.path.exists(mesh_fi)): - vis_photos, vis_depths = sparse_bilateral_filtering(depth.copy(), image.copy(), config, num_iter=config['sparse_iter'], spdb=False) - depth = vis_depths[-1] - model = None - torch.cuda.empty_cache() - print("Start Running 3D_Photo ...") - print(f"Loading edge model at {time.time()}") - depth_edge_model = Inpaint_Edge_Net(init_weights=True) - depth_edge_weight = torch.load(config['depth_edge_model_ckpt'], - map_location=torch.device(device)) - depth_edge_model.load_state_dict(depth_edge_weight) - depth_edge_model = depth_edge_model.to(device) - depth_edge_model.eval() - - print(f"Loading depth model at {time.time()}") - depth_feat_model = Inpaint_Depth_Net() - depth_feat_weight = torch.load(config['depth_feat_model_ckpt'], - map_location=torch.device(device)) - depth_feat_model.load_state_dict(depth_feat_weight, strict=True) - depth_feat_model = depth_feat_model.to(device) - depth_feat_model.eval() - depth_feat_model = depth_feat_model.to(device) - print(f"Loading rgb model at {time.time()}") - rgb_model = Inpaint_Color_Net() - rgb_feat_weight = torch.load(config['rgb_feat_model_ckpt'], - map_location=torch.device(device)) - rgb_model.load_state_dict(rgb_feat_weight) - rgb_model.eval() - rgb_model = rgb_model.to(device) - graph = None - - - print(f"Writing depth ply (and basically doing everything) at {time.time()}") - rt_info = write_ply(image, - depth, - sample['int_mtx'], - mesh_fi, - config, - rgb_model, - depth_edge_model, - depth_edge_model, - depth_feat_model) - - if rt_info is False: - continue - rgb_model = None - color_feat_model = None - depth_edge_model = None - depth_feat_model = None - torch.cuda.empty_cache() - if config['save_ply'] is True or config['load_ply'] is True: - verts, colors, faces, Height, Width, hFov, vFov = read_ply(mesh_fi) - else: - verts, colors, faces, Height, Width, hFov, vFov = rt_info - - - print(f"Making video at {time.time()}") - videos_poses, video_basename = copy.deepcopy(sample['tgts_poses']), sample['tgt_name'] - top = (config.get('original_h') // 2 - sample['int_mtx'][1, 2] * config['output_h']) - left = (config.get('original_w') // 2 - sample['int_mtx'][0, 2] * config['output_w']) - down, right = top + config['output_h'], left + config['output_w'] - border = [int(xx) for xx in [top, down, left, right]] - normal_canvas, all_canvas = output_3d_photo(verts.copy(), colors.copy(), faces.copy(), 
copy.deepcopy(Height), copy.deepcopy(Width), copy.deepcopy(hFov), copy.deepcopy(vFov), - copy.deepcopy(sample['tgt_pose']), sample['video_postfix'], copy.deepcopy(sample['ref_pose']), copy.deepcopy(config['video_folder']), - image.copy(), copy.deepcopy(sample['int_mtx']), config, image, - videos_poses, video_basename, config.get('original_h'), config.get('original_w'), border=border, depth=depth, normal_canvas=normal_canvas, all_canvas=all_canvas, - mean_loc_depth=mean_loc_depth) - -def resizer(input_img, max_img_size=512): - width, height = input_img.size - long_edge = height if height >= width else width - if long_edge > max_img_size: - ratio = max_img_size / long_edge - resized_width = int(ratio * width) - resized_height = int(ratio * height) - resized_input_img = input_img.resize((resized_width, resized_height), resample=2) - return resized_input_img - - else: - return input_img - -def main_app(input_img, num_frames, fps): - - # resize down - input_img = resizer(input_img) - - # Save image in necessary folder for inpainting - #img_name = Path(str(uuid.uuid4()) + '.jpg') - img_name = Path('sample.jpg') - save_folder = Path('image') - input_img.save(save_folder/img_name) - - inpaint(img_name, num_frames, fps) - - #subprocess.run('ls -l', shell=True) - #subprocess.run('ls image -l', shell=True) - #subprocess.run('ls video/ -l', shell=True) - - # Get output video path & return - input_img_path = str(save_folder/img_name) - out_vid_path = 'video/{0}_circle.mp4'.format(img_name.stem) - - return out_vid_path - -video_choices = ['dolly-zoom-in', 'zoom-in', 'circle', 'swing'] -gradio_inputs = [gr.Image(type='pil', label='Input Image'), - gr.Slider(minimum=60, maximum=240, step=1, default=120, label="Number of Frames"), - gr.Slider(minimum=10, maximum=40, step=1, default=20, label="Frames per Second (FPS)")] - -gradio_outputs = [gr.Video(label='Output Video')] -examples = [ ['moon.jpg', 60, 10], ['dog.jpg', 60, 10] ] - -description="Convert an image into a trajectory-following video. Images are automatically resized down to a max edge of 512. | NOTE: The current runtime for a sample is around 400-700 seconds. Running on a lower number of frames could help! Do be patient as this is on CPU-only, BUT if this space maybe gets a GPU one day, it's already configured to run with GPU-support :) If you have a GPU, feel free to use the author's original repo (linked at the bottom of this path, they have a collab notebook!) You can also run this space/gradio app locally!" - -article = "

    3D Photography using Context-aware Layered Depth Inpainting | Github Project Page | Github Repo

    " - -iface = gr.Interface(fn=main_app, inputs=gradio_inputs , outputs=gradio_outputs, examples=examples, - title='3D Image Inpainting', - description=description, - article=article, - allow_flagging='never', - theme="default", - cache_examples=False).launch(enable_queue=True, debug=True) diff --git a/spaces/Kevin676/ChatGPT-with-Voice-Cloning-in-Chinese/encoder/model.py b/spaces/Kevin676/ChatGPT-with-Voice-Cloning-in-Chinese/encoder/model.py deleted file mode 100644 index e050d3204d8f1becdf0f8b3133470708e5420cea..0000000000000000000000000000000000000000 --- a/spaces/Kevin676/ChatGPT-with-Voice-Cloning-in-Chinese/encoder/model.py +++ /dev/null @@ -1,135 +0,0 @@ -from encoder.params_model import * -from encoder.params_data import * -from scipy.interpolate import interp1d -from sklearn.metrics import roc_curve -from torch.nn.utils import clip_grad_norm_ -from scipy.optimize import brentq -from torch import nn -import numpy as np -import torch - - -class SpeakerEncoder(nn.Module): - def __init__(self, device, loss_device): - super().__init__() - self.loss_device = loss_device - - # Network defition - self.lstm = nn.LSTM(input_size=mel_n_channels, - hidden_size=model_hidden_size, - num_layers=model_num_layers, - batch_first=True).to(device) - self.linear = nn.Linear(in_features=model_hidden_size, - out_features=model_embedding_size).to(device) - self.relu = torch.nn.ReLU().to(device) - - # Cosine similarity scaling (with fixed initial parameter values) - self.similarity_weight = nn.Parameter(torch.tensor([10.])).to(loss_device) - self.similarity_bias = nn.Parameter(torch.tensor([-5.])).to(loss_device) - - # Loss - self.loss_fn = nn.CrossEntropyLoss().to(loss_device) - - def do_gradient_ops(self): - # Gradient scale - self.similarity_weight.grad *= 0.01 - self.similarity_bias.grad *= 0.01 - - # Gradient clipping - clip_grad_norm_(self.parameters(), 3, norm_type=2) - - def forward(self, utterances, hidden_init=None): - """ - Computes the embeddings of a batch of utterance spectrograms. - - :param utterances: batch of mel-scale filterbanks of same duration as a tensor of shape - (batch_size, n_frames, n_channels) - :param hidden_init: initial hidden state of the LSTM as a tensor of shape (num_layers, - batch_size, hidden_size). Will default to a tensor of zeros if None. - :return: the embeddings as a tensor of shape (batch_size, embedding_size) - """ - # Pass the input through the LSTM layers and retrieve all outputs, the final hidden state - # and the final cell state. - out, (hidden, cell) = self.lstm(utterances, hidden_init) - - # We take only the hidden state of the last layer - embeds_raw = self.relu(self.linear(hidden[-1])) - - # L2-normalize it - embeds = embeds_raw / (torch.norm(embeds_raw, dim=1, keepdim=True) + 1e-5) - - return embeds - - def similarity_matrix(self, embeds): - """ - Computes the similarity matrix according the section 2.1 of GE2E. - - :param embeds: the embeddings as a tensor of shape (speakers_per_batch, - utterances_per_speaker, embedding_size) - :return: the similarity matrix as a tensor of shape (speakers_per_batch, - utterances_per_speaker, speakers_per_batch) - """ - speakers_per_batch, utterances_per_speaker = embeds.shape[:2] - - # Inclusive centroids (1 per speaker). 
Cloning is needed for reverse differentiation - centroids_incl = torch.mean(embeds, dim=1, keepdim=True) - centroids_incl = centroids_incl.clone() / (torch.norm(centroids_incl, dim=2, keepdim=True) + 1e-5) - - # Exclusive centroids (1 per utterance) - centroids_excl = (torch.sum(embeds, dim=1, keepdim=True) - embeds) - centroids_excl /= (utterances_per_speaker - 1) - centroids_excl = centroids_excl.clone() / (torch.norm(centroids_excl, dim=2, keepdim=True) + 1e-5) - - # Similarity matrix. The cosine similarity of already 2-normed vectors is simply the dot - # product of these vectors (which is just an element-wise multiplication reduced by a sum). - # We vectorize the computation for efficiency. - sim_matrix = torch.zeros(speakers_per_batch, utterances_per_speaker, - speakers_per_batch).to(self.loss_device) - mask_matrix = 1 - np.eye(speakers_per_batch, dtype=np.int) - for j in range(speakers_per_batch): - mask = np.where(mask_matrix[j])[0] - sim_matrix[mask, :, j] = (embeds[mask] * centroids_incl[j]).sum(dim=2) - sim_matrix[j, :, j] = (embeds[j] * centroids_excl[j]).sum(dim=1) - - ## Even more vectorized version (slower maybe because of transpose) - # sim_matrix2 = torch.zeros(speakers_per_batch, speakers_per_batch, utterances_per_speaker - # ).to(self.loss_device) - # eye = np.eye(speakers_per_batch, dtype=np.int) - # mask = np.where(1 - eye) - # sim_matrix2[mask] = (embeds[mask[0]] * centroids_incl[mask[1]]).sum(dim=2) - # mask = np.where(eye) - # sim_matrix2[mask] = (embeds * centroids_excl).sum(dim=2) - # sim_matrix2 = sim_matrix2.transpose(1, 2) - - sim_matrix = sim_matrix * self.similarity_weight + self.similarity_bias - return sim_matrix - - def loss(self, embeds): - """ - Computes the softmax loss according the section 2.1 of GE2E. - - :param embeds: the embeddings as a tensor of shape (speakers_per_batch, - utterances_per_speaker, embedding_size) - :return: the loss and the EER for this batch of embeddings. - """ - speakers_per_batch, utterances_per_speaker = embeds.shape[:2] - - # Loss - sim_matrix = self.similarity_matrix(embeds) - sim_matrix = sim_matrix.reshape((speakers_per_batch * utterances_per_speaker, - speakers_per_batch)) - ground_truth = np.repeat(np.arange(speakers_per_batch), utterances_per_speaker) - target = torch.from_numpy(ground_truth).long().to(self.loss_device) - loss = self.loss_fn(sim_matrix, target) - - # EER (not backpropagated) - with torch.no_grad(): - inv_argmax = lambda i: np.eye(1, speakers_per_batch, i, dtype=np.int)[0] - labels = np.array([inv_argmax(i) for i in ground_truth]) - preds = sim_matrix.detach().cpu().numpy() - - # Snippet from https://yangcha.github.io/EER-ROC/ - fpr, tpr, thresholds = roc_curve(labels.flatten(), preds.flatten()) - eer = brentq(lambda x: 1. - x - interp1d(fpr, tpr)(x), 0., 1.) - - return loss, eer diff --git a/spaces/Kevin676/Real-Time-Voice-Cloning/synthesizer_train.py b/spaces/Kevin676/Real-Time-Voice-Cloning/synthesizer_train.py deleted file mode 100644 index 2743d590d882f209734b68921b84a9d23492942c..0000000000000000000000000000000000000000 --- a/spaces/Kevin676/Real-Time-Voice-Cloning/synthesizer_train.py +++ /dev/null @@ -1,35 +0,0 @@ -from synthesizer.hparams import hparams -from synthesizer.train import train -from utils.argutils import print_args -import argparse - - -if __name__ == "__main__": - parser = argparse.ArgumentParser() - parser.add_argument("run_id", type=str, help= \ - "Name for this model instance. 
If a model state from the same run ID was previously " - "saved, the training will restart from there. Pass -f to overwrite saved states and " - "restart from scratch.") - parser.add_argument("syn_dir", type=str, default=argparse.SUPPRESS, help= \ - "Path to the synthesizer directory that contains the ground truth mel spectrograms, " - "the wavs and the embeds.") - parser.add_argument("-m", "--models_dir", type=str, default="synthesizer/saved_models/", help=\ - "Path to the output directory that will contain the saved model weights and the logs.") - parser.add_argument("-s", "--save_every", type=int, default=1000, help= \ - "Number of steps between updates of the model on the disk. Set to 0 to never save the " - "model.") - parser.add_argument("-b", "--backup_every", type=int, default=25000, help= \ - "Number of steps between backups of the model. Set to 0 to never make backups of the " - "model.") - parser.add_argument("-f", "--force_restart", action="store_true", help= \ - "Do not load any saved model and restart from scratch.") - parser.add_argument("--hparams", default="", - help="Hyperparameter overrides as a comma-separated list of name=value " - "pairs") - args = parser.parse_args() - print_args(args, parser) - - args.hparams = hparams.parse(args.hparams) - - # Run the training - train(**vars(args)) diff --git a/spaces/Kimata/multimodal-deepfakes/pipeline.py b/spaces/Kimata/multimodal-deepfakes/pipeline.py deleted file mode 100644 index b2afed71cb74c4ded445e3dd43da69f2969e0131..0000000000000000000000000000000000000000 --- a/spaces/Kimata/multimodal-deepfakes/pipeline.py +++ /dev/null @@ -1,206 +0,0 @@ -import os -import cv2 -import torch -import zipfile -import librosa -import numpy as np -import tensorflow_addons -import tensorflow as tf -from facenet_pytorch import MTCNN -from rawnet import RawNet - -#Set random seed for reproducibility. -tf.random.set_seed(42) - -local_zip = "./efficientnet-b0.zip" -zip_ref = zipfile.ZipFile(local_zip, 'r') -zip_ref.extractall() -zip_ref.close() - - -# Load models. -model = tf.keras.models.load_model("efficientnet-b0/") - - - -class DetectionPipeline: - """Pipeline class for detecting faces in the frames of a video file.""" - - def __init__(self, n_frames=None, batch_size=60, resize=None, input_modality = 'video'): - """Constructor for DetectionPipeline class. - - Keyword Arguments: - n_frames {int} -- Total number of frames to load. These will be evenly spaced - throughout the video. If not specified (i.e., None), all frames will be loaded. - (default: {None}) - batch_size {int} -- Batch size to use with MTCNN face detector. (default: {32}) - resize {float} -- Fraction by which to resize frames from original prior to face - detection. A value less than 1 results in downsampling and a value greater than - 1 result in upsampling. (default: {None}) - """ - self.n_frames = n_frames - self.batch_size = batch_size - self.resize = resize - self.input_modality = input_modality - - def __call__(self, filename): - """Load frames from an MP4 video and detect faces. - - Arguments: - filename {str} -- Path to video. 
- """ - # Create video reader and find length - if self.input_modality == 'video': - print('Input modality is video.') - v_cap = cv2.VideoCapture(filename) - v_len = int(v_cap.get(cv2.CAP_PROP_FRAME_COUNT)) - - # Pick 'n_frames' evenly spaced frames to sample - if self.n_frames is None: - sample = np.arange(0, v_len) - else: - sample = np.linspace(0, v_len - 1, self.n_frames).astype(int) - - # Loop through frames - faces = [] - frames = [] - for j in range(v_len): - success = v_cap.grab() - if j in sample: - # Load frame - success, frame = v_cap.retrieve() - if not success: - continue - frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB) - - # Resize frame to desired size - if self.resize is not None: - frame = frame.resize([int(d * self.resize) for d in frame.size]) - frames.append(frame) - - # When batch is full, detect faces and reset frame list - if len(frames) % self.batch_size == 0 or j == sample[-1]: - face2 = cv2.resize(frame, (224, 224)) - faces.append(face2) - - v_cap.release() - return faces - - elif self.input_modality == 'image': - print('Input modality is image.') - #Perform inference for image modality. - print('Reading image') - # print(f"Image path is: {filename}") - image = cv2.cvtColor(filename, cv2.COLOR_BGR2RGB) - image = cv2.resize(image, (224, 224)) - - # if not face.any(): - # print("No faces found...") - - return image - - elif self.input_modality == 'audio': - print("INput modality is audio.") - - #Load audio. - x, sr = librosa.load(filename) - x_pt = torch.Tensor(x) - x_pt = torch.unsqueeze(x_pt, dim = 0) - return x_pt - - else: - raise ValueError("Invalid input modality. Must be either 'video' or image") - -detection_video_pipeline = DetectionPipeline(n_frames=5, batch_size=1, input_modality='video') -detection_image_pipeline = DetectionPipeline(batch_size = 1, input_modality = 'image') - -def deepfakes_video_predict(input_video): - - faces = detection_video_pipeline(input_video) - total = 0 - real_res = [] - fake_res = [] - - for face in faces: - - face2 = face/255 - pred = model.predict(np.expand_dims(face2, axis=0))[0] - real, fake = pred[0], pred[1] - real_res.append(real) - fake_res.append(fake) - - total+=1 - - pred2 = pred[1] - - if pred2 > 0.5: - fake+=1 - else: - real+=1 - real_mean = np.mean(real_res) - fake_mean = np.mean(fake_res) - print(f"Real Faces: {real_mean}") - print(f"Fake Faces: {fake_mean}") - text = "" - - if real_mean >= 0.5: - text = "The video is REAL. \n Deepfakes Confidence: " + str(round(100 - (real_mean*100), 3)) + "%" - else: - text = "The video is FAKE. \n Deepfakes Confidence: " + str(round(fake_mean*100, 3)) + "%" - - return text - - -def deepfakes_image_predict(input_image): - faces = detection_image_pipeline(input_image) - face2 = faces/255 - pred = model.predict(np.expand_dims(face2, axis = 0))[0] - real, fake = pred[0], pred[1] - if real > 0.5: - text2 = "The image is REAL. \n Deepfakes Confidence: " + str(round(100 - (real*100), 3)) + "%" - else: - text2 = "The image is FAKE. \n Deepfakes Confidence: " + str(round(fake*100, 3)) + "%" - return text2 - -def load_audio_model(): - d_args = { - "nb_samp": 64600, - "first_conv": 1024, - "in_channels": 1, - "filts": [20, [20, 20], [20, 128], [128, 128]], - "blocks": [2, 4], - "nb_fc_node": 1024, - "gru_node": 1024, - "nb_gru_layer": 3, - "nb_classes": 2} - - model = RawNet(d_args = d_args, device='cpu') - - #Load ckpt. 
- model_dict = model.state_dict() - ckpt = torch.load('RawNet2.pth', map_location=torch.device('cpu')) - model.load_state_dict(ckpt, model_dict) - return model - -audio_label_map = { - 0: "Real audio", - 1: "Fake audio" -} - -def deepfakes_audio_predict(input_audio): - #Perform inference on audio. - x, sr = input_audio - x_pt = torch.Tensor(x) - x_pt = torch.unsqueeze(x_pt, dim = 0) - - #Load model. - model = load_audio_model() - - #Perform inference. - grads = model(x_pt) - - #Get the argmax. - grads_np = grads.detach().numpy() - result = np.argmax(grads_np) - - return audio_label_map[result] diff --git a/spaces/ML701G7/taim-gan/src/models/modules/cond_augment.py b/spaces/ML701G7/taim-gan/src/models/modules/cond_augment.py deleted file mode 100644 index 4bab9d86afda570670760d2f1b8bc2ba96085251..0000000000000000000000000000000000000000 --- a/spaces/ML701G7/taim-gan/src/models/modules/cond_augment.py +++ /dev/null @@ -1,57 +0,0 @@ -"""Conditioning Augmentation Module""" - -from typing import Any - -import torch -from torch import nn - - -class CondAugmentation(nn.Module): - """Conditioning Augmentation Module""" - - def __init__(self, D: int, conditioning_dim: int): - """ - :param D: Dimension of the text embedding space [D from AttnGAN paper] - :param conditioning_dim: Dimension of the conditioning space - """ - super().__init__() - self.cond_dim = conditioning_dim - self.cond_augment = nn.Linear(D, conditioning_dim * 4, bias=True) - self.glu = nn.GLU(dim=1) - - def encode(self, text_embedding: torch.Tensor) -> Any: - """ - This function encodes the text embedding into the conditioning space - :param text_embedding: Text embedding - :return: Conditioning embedding - """ - x_tensor = self.glu(self.cond_augment(text_embedding)) - mu_tensor = x_tensor[:, : self.cond_dim] - logvar = x_tensor[:, self.cond_dim :] - return mu_tensor, logvar - - def sample(self, mu_tensor: torch.Tensor, logvar: torch.Tensor) -> torch.Tensor: - """ - This function samples from the Gaussian distribution - :param mu: Mean of the Gaussian distribution - :param logvar: Log variance of the Gaussian distribution - :return: Sample from the Gaussian distribution - """ - std = torch.exp(0.5 * logvar) - eps = torch.randn_like( - std - ) # check if this should add requires_grad = True to this tensor? - return mu_tensor + eps * std - - def forward(self, text_embedding: torch.Tensor) -> Any: - """ - This function encodes the text embedding into the conditioning space, - and samples from the Gaussian distribution. - :param text_embedding: Text embedding - :return c_hat: Conditioning embedding (C^ from StackGAN++ paper) - :return mu: Mean of the Gaussian distribution - :return logvar: Log variance of the Gaussian distribution - """ - mu_tensor, logvar = self.encode(text_embedding) - c_hat = self.sample(mu_tensor, logvar) - return c_hat, mu_tensor, logvar diff --git a/spaces/Mahiruoshi/Lovelive_Nijigasaki_VITS/monotonic_align/__init__.py b/spaces/Mahiruoshi/Lovelive_Nijigasaki_VITS/monotonic_align/__init__.py deleted file mode 100644 index 3d7009c40fea3a98168e3e3bc9ae061e91327422..0000000000000000000000000000000000000000 --- a/spaces/Mahiruoshi/Lovelive_Nijigasaki_VITS/monotonic_align/__init__.py +++ /dev/null @@ -1,19 +0,0 @@ -import numpy as np -import torch -from .monotonic_align.core import maximum_path_c - - -def maximum_path(neg_cent, mask): - """ Cython optimized version. 
- neg_cent: [b, t_t, t_s] - mask: [b, t_t, t_s] - """ - device = neg_cent.device - dtype = neg_cent.dtype - neg_cent = neg_cent.data.cpu().numpy().astype(np.float32) - path = np.zeros(neg_cent.shape, dtype=np.int32) - - t_t_max = mask.sum(1)[:, 0].data.cpu().numpy().astype(np.int32) - t_s_max = mask.sum(2)[:, 0].data.cpu().numpy().astype(np.int32) - maximum_path_c(path, neg_cent, t_t_max, t_s_max) - return torch.from_numpy(path).to(device=device, dtype=dtype) diff --git a/spaces/Marshalls/testmtd/script_train_gcp_dev.sh b/spaces/Marshalls/testmtd/script_train_gcp_dev.sh deleted file mode 100644 index 4a2bb1f7a1d2507cd873077ded8af35d56ebb6d5..0000000000000000000000000000000000000000 --- a/spaces/Marshalls/testmtd/script_train_gcp_dev.sh +++ /dev/null @@ -1,72 +0,0 @@ -#!/bin/bash - -export TPU_IP_ADDRESS=10.104.22.146; -#export TPU_IP_ADDRESS=10.95.66.34; -export XRT_TPU_CONFIG="tpu_worker;0;$TPU_IP_ADDRESS:8470" -export TPU_NAME="grpc://$TPU_IP_ADDRESS:8470" -#export XRT_WORKERS="localservice:0;grpc://localhost:40934" -#export XRT_DEVICE_MAP="CPU:0;/job:localservice/replica:0/task:0/device:XLA_CPU:0|GPU:0;/job:localservice/replica:0/task:0/device:XLA_GPU:0" -#export PYTHONPATH=$SCRATCH/:${PYTHONPATH} -#export PYTHONPATH=/gpfsscratch/rech/imi/usc19dv/lib/python3.7/site-packages:${PYTHONPATH} - -py=python3 - -#root_dir=$SCRATCH/data -root_dir=data - -####aistpp_60hz -#data_dir=${root_dir}/scaled_features -#hparams_file=aistpp_60hz/transflower_aistpp_expmap -#hparams_file=aistpp_60hz/transglower_aistpp_expmap - -####aistpp_20hz -data_dir=${root_dir}/aistpp_20hz -#exp=$1 -exp=testing -#exp=transglower_aistpp_expmap -#exp=transglower_residual_aistpp_expmap -#exp=transflower_residual_aistpp_expmap -#exp=transflower_aistpp_expmap -#exp=residualflower2_transflower_aistpp_expmap -#exp=moglow_aistpp_expmap -#hparams_file=aistpp_20hz/${exp} -hparams_file=aistpp_20hz/mowgli_aistpp_expmap_testing - -## Fix: needs vmapped version of transformer: -#hparams_file=aistpp_20hz/residualflower2_moglow_aistpp_expmap - -####dance_combined -#data_dir=${root_dir}/dance_combined -#exp=$1 -#exp=transflower_expmap -#exp=moglow_expmap -#hparams_file=dance_combined/${exp} - -#exp=${exp}_future3_actnorm -#exp=${exp}_future3 -#exp=${exp}_future3 - -echo $exp - -$py training/train.py --data_dir=${data_dir} --max_epochs=1000\ - --model=mowgli2 \ - --do_validation \ - --val_batch_size=32 \ - --batch_size=32 \ - --experiment_name=$exp\ - --workers=$(nproc) \ - --tpu_cores=8 \ - --hparams_file=training/hparams/${hparams_file}.yaml \ - #--continue_train \ - #--load_weights_only \ - #--stage2 \ - #--prior_use_x_transformers \ - #--output_lengths="3" \ - #--max_prior_loss_weight=0.01 \ - #--accelerator=ddp \ - #--scales="[[16,0]]" \ -# --use_rotary_pos_emb \ - #--residual_scales="[[16,0]]" -# --glow_norm_layer="actnorm" \ - #--use_pos_emb_output \ - #--gpus=2 \ diff --git a/spaces/MatrixYao/how_many_data_points_zh/naacl_demo/demo_utils.py b/spaces/MatrixYao/how_many_data_points_zh/naacl_demo/demo_utils.py deleted file mode 100644 index 703fcf09a13f0577cf3f44da0eb981f029333d88..0000000000000000000000000000000000000000 --- a/spaces/MatrixYao/how_many_data_points_zh/naacl_demo/demo_utils.py +++ /dev/null @@ -1,514 +0,0 @@ -import math - -import pandas as pd -import numpy as np -from itertools import product -import shapely -from bokeh.models import Span, Label, ColumnDataSource, Whisker -from bokeh.plotting import figure, show -from shapely.geometry import Polygon -import matplotlib as mpl -import matplotlib.pyplot as plt -import 
seaborn - -task_patterns = { - "CB": [0, 3], - "RTE": [0, 3], - "BoolQ": [0, 3, 5], - "MNLI": [0, 3], - "COPA": [0, 1], - "WSC": [0, 1, 2], - "WiC": [0, 1], - "MultiRC": [0, 1, 2], -} -task_reps = {"CB": 4, "RTE": 4, "BoolQ": 4, "MNLI": 4, "COPA": 4, "WSC": 4, "WiC": 4, "MultiRC": 4} -task_best_pattern = {"CB": 0, "RTE": 0, "BoolQ": 0, "MNLI": 0, "COPA": 1, "WSC": 0, "WiC": 0, "MultiRC": 1} -task_metric_short = { - "CB": "f1-macro", - "RTE": "acc", - "BoolQ": "acc", - "MNLI": "acc", - "COPA": "acc", - "WSC": "acc", - "WiC": "acc", - "MultiRC": "f1", -} -task_metrics = { - "CB": "F1-macro", - "RTE": "accuracy", - "BoolQ": "accuracy", - "MNLI": "accuracy", - "COPA": "accuracy", - "WSC": "accuracy", - "WiC": "accuracy", - "MultiRC": "F1", -} -task_neutral = { - "CB": True, - "RTE": True, - "BoolQ": True, - "MNLI": True, - "COPA": False, - "WSC": False, - "multirc": True, - "WiC": True, - "MultiRC": True, -} -neutral_tasks = [ - "BoolQ", - "CB", - "MNLI", - "MultiRC", - "RTE", - "WiC", -] -tasks = sorted(task_patterns.keys()) - -pvp_colors = ["goldenrod", "blanchedalmond", "floralwhite"] -ctl_colors = ["crimson", "salmon", "mistyrose"] -clf_colors = ["indigo", "plum", "thistle"] - - -def prompt_boolq(passage, question, pattern): - if pattern == 0: - return f"""{passage} Based on the previous passage, {question} [YES/NO]""" - if pattern == 1: - return f"""{passage} Question: {question} Answer: [YES/NO]""" - if pattern == 2: - return f"""Based on the following passage, {question} [YES/NO] {passage}""" - - -def advantage_text(advantage): - model_type = ( - """分类头法""" - if advantage < 0 - else """提示法""" - ) - return f"""{model_type} 优势: {abs(advantage):.2f} 条样本""" - - -def average_advantage_text(advantage): - model_type = ( - """分类头法""" - if advantage < 0 - else """提示法""" - ) - return f"""Average {model_type} 优势: {abs(advantage):.2f} 条样本""" - - -def naming_convention(task, seed, pvp_index=None, neutral=False): - method = f"PVP {pvp_index}" if pvp_index is not None else "CLF" - model = "roberta" - if neutral: - verbalizer = "neutral" - else: - verbalizer = None - return ( - f"{method} {model}" - + (f" {verbalizer} verbalizer" if verbalizer is not None else "") - + f" seed {seed} - test-{task_metric_short[task]}-all-p" - ) - - -def get_data(task): - url = f"https://raw.githubusercontent.com/TevenLeScao/pet/master/exported_results/{task.lower()}/wandb_export.csv" - df = pd.read_csv(url) - training_points = df["training_points"] - - head_performances = np.transpose(np.array([df[naming_convention(task, i)] for i in range(task_reps[task])])) - pattern_performances = {} - for pattern in task_patterns[task]: - pattern_performances[pattern] = { - "normal": np.transpose(np.array([df[naming_convention(task, i, pattern)] for i in range(task_reps[task])])) - } - if task_neutral[task]: - pattern_performances[pattern]["neutral"] = np.transpose( - np.array([df[naming_convention(task, i, pattern, True)] for i in range(task_reps[task])]) - ) - - return training_points, head_performances, pattern_performances - - -def reduct(performances, reduction="accmax", final_pattern=0, verbalizer="normal", exclude=None): - # Combining the different runs for each experimental set-up - reducted = None - - if isinstance(performances, dict): - performances = performances[final_pattern][verbalizer] - if exclude is not None: - performances = np.delete(performances, exclude, axis=1) - - if reduction == "avg": - # Average - reducted = np.nanmean(performances, axis=1) - - if reduction == "std": - # Standard deviation - reducted = 
np.nanstd(performances, axis=1) - - if reduction == "max": - # Maximum - reducted = np.nanmax(performances, axis=1) - - if reduction == "accmax": - # This makes the maximum curve monotonic - max_performance = np.nanmax(performances, axis=1) - reducted = np.maximum.accumulate(max_performance) - - assert reducted is not None, "unrecognized reduction method" - return reducted - - -def find_surrounding_points(perf, clf_results, pvp_results): - for i, clf_result in enumerate(clf_results): - if i - 1 > 0 and clf_result == clf_results[i - 1]: - continue - if clf_result > perf: - if i == 0: - raise ValueError(f"value {perf} too small") - else: - break - for j, pvp_result in enumerate(pvp_results): - if j - 1 > 0 and pvp_result == pvp_results[j - 1]: - continue - if pvp_result > perf: - if j == 0: - raise ValueError(f"value {perf} too small") - else: - break - return i - 1, j - 1 - - -def interpolate(perf, x1, x2, y1, y2): - return x1 + (perf - y1) * (x2 - x1) / (y2 - y1) - - -def interpolate_from_idx(perf, idx, results, training_points): - return interpolate(perf, training_points[idx], training_points[idx + 1], results[idx], results[idx + 1]) - - -def interpolate_from_perf(perf, overlapping_range, training_points, clf_results, pvp_results): - if not overlapping_range[0] <= perf <= overlapping_range[1]: - raise ValueError(f"perf {perf} not in acceptable bounds {overlapping_range}") - clf_idx, pvp_idx = find_surrounding_points(perf, clf_results, pvp_results) - return interpolate_from_idx(perf, clf_idx, clf_results, training_points), interpolate_from_idx( - perf, pvp_idx, pvp_results, training_points - ) - - -def data_difference(perf, overlapping_range, training_points, clf_results, pvp_results): - x1, x2 = interpolate_from_perf(perf, overlapping_range, training_points, clf_results, pvp_results) - return x1 - x2 - - -def calculate_overlap(clf_results, pvp_results, full_range=False): - if full_range: - return (min(min(clf_results), min(pvp_results)), max(max(clf_results), max(pvp_results))) - else: - return (max(min(clf_results), min(pvp_results)), min(max(clf_results), max(pvp_results))) - - -def calculate_range(overlapping_range, number_of_points): - integral_range = ( - overlapping_range[0] + i / (number_of_points + 1) * (overlapping_range[1] - overlapping_range[0]) - for i in range(1, number_of_points + 1) - ) - return integral_range - - -def calculate_differences(integral_range, overlapping_range, training_points, clf_results, pvp_results): - differences = [ - data_difference(y, overlapping_range, training_points, clf_results, pvp_results) for y in integral_range - ] - return differences - - -def calculate_offset(training_points, clf_results, pvp_results, number_of_points=1000): - overlapping_range = calculate_overlap(clf_results, pvp_results) - integral_range = calculate_range(overlapping_range, number_of_points) - differences = calculate_differences(integral_range, overlapping_range, training_points, clf_results, pvp_results) - offset = sum(differences) / number_of_points - return offset - - -def intersection_with_range(training_points, results, band): - result_polygon = Polygon( - [(training_points[i], results[i]) for i in range(len(training_points))] - + [(training_points[-1], 0), (training_points[0], 0)] - ) - return result_polygon.intersection(band) - - -def fill_polygon(fig, polygon, color, label=None, alpha=1.0): - if polygon.is_empty or isinstance(polygon, shapely.geometry.LineString): - return - if isinstance(polygon, Polygon): - xs, ys = polygon.exterior.xy - fig.patch(xs, ys, 
color=color, alpha=alpha) - else: - for geom in polygon.geoms: - if isinstance(geom, shapely.geometry.LineString): - continue - xs, ys = geom.exterior.xy - fig.patch(xs, ys, color=color, alpha=alpha) - label = None - - -label_order = { - "head run": 0, - "head advantage": 1, - "control run": 2, - "optimization advantage": 3, - "prompting run": 4, - "semantics advantage": 5, - "region of comparison": 6, -} - - -def metric_tap( - event, overlapping_range, training_points, clf_results, pvp_results, advantage_box, advantage_plot -): - _, metric_value = event.x, event.y - try: - advantage_value = data_difference(metric_value, overlapping_range, training_points, clf_results, pvp_results) - advantage_box.text = advantage_text(advantage_value) - if not isinstance(advantage_plot.renderers[-1], Span): - metric_line = Span( - location=metric_value, - line_alpha=0.7, - dimension="width", - line_color=clf_colors[0] if advantage_value < 0 else pvp_colors[0], - line_dash="dashed", - line_width=1, - ) - advantage_plot.renderers.extend([metric_line]) - else: - advantage_plot.renderers[-1].location = metric_value - advantage_plot.renderers[-1].line_color = clf_colors[0] if advantage_value < 0 else pvp_colors[0] - # clicking outside the region - except ValueError: - pass - - -def plot_polygons_bokeh(task, training_points, clf_results, pvp_results, clf_colors, pvp_colors, x_log_scale=False): - overlapping_range = calculate_overlap(clf_results, pvp_results, False) - full_range = calculate_overlap(clf_results, pvp_results, True) - middle_y = (full_range[0] + full_range[1]) / 2 - - fig = figure(plot_height=400, plot_width=800, max_height=400, max_width=800, - x_axis_type="log" if x_log_scale else "linear", title="分类头法及提示法在各规模的训练子集上的性能") - - fig.circle(training_points, clf_results, color=clf_colors[0], legend="分类头法") - fig.circle(training_points, pvp_results, color=pvp_colors[0], legend="提示法") - fig.line(training_points, clf_results, color=clf_colors[0], alpha=1) - fig.line(training_points, pvp_results, color=pvp_colors[0], alpha=1) - fig.xaxis.axis_label = "训练子集规模" - fig.yaxis.axis_label = task_metrics[task] - fig.patch( - [training_points[0], training_points[0], training_points[-1], training_points[-1]], - [overlapping_range[0], overlapping_range[1], overlapping_range[1], overlapping_range[0]], - color="black", - fill_alpha=0, - line_width=0, - legend="比较区域", - hatch_alpha=0.14, - hatch_scale=40, - hatch_pattern="/", - ) - - band = Polygon( - [ - (training_points[0], overlapping_range[0]), - (training_points[0], overlapping_range[1]), - (training_points[-1], overlapping_range[1]), - (training_points[-1], overlapping_range[0]), - ] - ) - full_band = Polygon( - [ - (training_points[0], full_range[0]), - (training_points[0], full_range[1]), - (training_points[-1], full_range[1]), - (training_points[-1], full_range[0]), - ] - ) - clf_polygon = intersection_with_range(training_points, clf_results, band) - pvp_polygon = intersection_with_range(training_points, pvp_results, band) - full_clf_polygon = intersection_with_range(training_points, clf_results, full_band) - full_pvp_polygon = intersection_with_range(training_points, pvp_results, full_band) - - clf_inside_area = clf_polygon.difference(pvp_polygon) - pvp_inside_area = pvp_polygon.difference(clf_polygon) - clf_outside_area = (full_clf_polygon.difference(full_pvp_polygon)).difference(clf_inside_area) - pvp_outside_area = (full_pvp_polygon.difference(full_clf_polygon)).difference(pvp_inside_area) - - fill_polygon(fig, clf_outside_area, clf_colors[1], alpha=0.13) - 
fill_polygon(fig, pvp_outside_area, pvp_colors[1], alpha=0.18) - fill_polygon( - fig, clf_inside_area, clf_colors[1], alpha=0.4, label="head advantage" if task == "WiC" else None - ) - fill_polygon(fig, pvp_inside_area, pvp_colors[1], alpha=0.4, label="prompting advantage") - - fig.line([training_points[0], training_points[-1]], [overlapping_range[0], overlapping_range[0]], color="dimgrey") - fig.line([training_points[0], training_points[-1]], [overlapping_range[1], overlapping_range[1]], color="dimgrey") - - vline = Span( - location=training_points[-1], dimension="height", line_color="black", line_width=2.5, line_dash="dashed" - ) - end_label = Label( - x=training_points[-1], y=middle_y, text="数据集总大小", angle=90, angle_units="deg", text_align="center" - ) - fig.renderers.extend([vline, end_label]) - - fig.legend.location = "bottom_right" - - return fig - - -def plot_three_polygons_bokeh( - task, training_points, clf_results, pvp_results, ctl_results, clf_colors, pvp_colors, ctl_colors, - x_log_scale=False -): - overlapping_range = calculate_overlap(clf_results, pvp_results, False) - full_range = calculate_overlap(clf_results, pvp_results, True) - middle_y = (full_range[0] + full_range[1]) / 2 - - fig = figure(plot_height=400, plot_width=800, max_height=400, max_width=800, - x_axis_type="log" if x_log_scale else "linear", title="分类头法、提示法以及空言语器提示法在各规模的训练子集上的性能") - fig.xaxis.axis_label = "训练子集规模" - fig.yaxis.axis_label = task_metrics[task] - fig.circle(training_points, clf_results, color=clf_colors[0], legend="分类头法") - fig.circle(training_points, pvp_results, color=pvp_colors[0], legend="提示法") - fig.circle(training_points, ctl_results, color=ctl_colors[0], legend="空言语器提示法") - fig.line(training_points, clf_results, color=clf_colors[0], alpha=1) - fig.line(training_points, pvp_results, color=pvp_colors[0], alpha=1) - fig.line(training_points, ctl_results, color=ctl_colors[0], alpha=1) - - fig.patch( - [training_points[0], training_points[0], training_points[-1], training_points[-1]], - [overlapping_range[0], overlapping_range[1], overlapping_range[1], overlapping_range[0]], - color="black", - fill_alpha=0, - line_width=0, - legend="比较区域", - hatch_alpha=0.14, - hatch_scale=40, - hatch_pattern="/", - ) - - band = Polygon( - [ - (training_points[0], overlapping_range[0]), - (training_points[0], overlapping_range[1]), - (training_points[-1], overlapping_range[1]), - (training_points[-1], overlapping_range[0]), - ] - ) - full_band = Polygon( - [ - (training_points[0], full_range[0]), - (training_points[0], full_range[1]), - (training_points[-1], full_range[1]), - (training_points[-1], full_range[0]), - ] - ) - - clf_polygon = intersection_with_range(training_points, clf_results, band) - pvp_polygon = intersection_with_range(training_points, pvp_results, band) - ctl_polygon = intersection_with_range(training_points, ctl_results, band) - - full_clf_polygon = intersection_with_range(training_points, clf_results, full_band) - full_pvp_polygon = intersection_with_range(training_points, pvp_results, full_band) - full_ctl_polygon = intersection_with_range(training_points, ctl_results, full_band) - - clf_inside_area = clf_polygon.difference(ctl_polygon) - pvp_inside_area = pvp_polygon.difference(clf_polygon).difference(ctl_polygon) - ctl_inside_area = ctl_polygon.difference(clf_polygon) - - clf_outside_area = (full_clf_polygon.difference(full_ctl_polygon)).difference(clf_inside_area) - pvp_outside_area = (full_pvp_polygon.difference(full_clf_polygon).difference(ctl_polygon)).difference( - pvp_inside_area - 
) - ctl_outside_area = (full_ctl_polygon.difference(full_clf_polygon)).difference(pvp_inside_area) - - fill_polygon( - fig, clf_inside_area, clf_colors[1], alpha=0.4, label="head advantage" if task == "WiC" else None - ) - fill_polygon(fig, pvp_inside_area, pvp_colors[1], alpha=0.4, label="prompting advantage") - fill_polygon(fig, ctl_inside_area, ctl_colors[1], alpha=0.4, label="null verbalizer advantage") - fill_polygon(fig, clf_outside_area, clf_colors[1], alpha=0.13) - fill_polygon(fig, pvp_outside_area, pvp_colors[1], alpha=0.18) - fill_polygon(fig, ctl_outside_area, ctl_colors[1], alpha=0.13) - - fig.line([training_points[0], training_points[-1]], [overlapping_range[0], overlapping_range[0]], color="dimgrey") - fig.line([training_points[0], training_points[-1]], [overlapping_range[1], overlapping_range[1]], color="dimgrey") - - vline = Span( - location=training_points[-1], dimension="height", line_color="black", line_width=2.5, line_dash="dashed" - ) - end_label = Label( - x=training_points[-1], y=middle_y, text="数据集总大小", angle=90, angle_units="deg", text_align="center" - ) - fig.renderers.extend([vline, end_label]) - - fig.legend.location = "bottom_right" - - return fig - - -def pattern_graph(task): - fig = figure(plot_height=400, plot_width=800, max_height=400, max_width=800, x_axis_type="log", title="Performance over training subset sizes of different prompt patterns") - fig.xaxis.axis_label = "训练子集规模" - fig.yaxis.axis_label = task_metrics[task] - url = f"https://raw.githubusercontent.com/TevenLeScao/pet/master/exported_results/{task.lower()}/wandb_export.csv" - df = pd.read_csv(url) - expanded_training_points = np.array(list(df["training_points"]) * task_reps[task] * len(task_patterns[task])) - data = np.array(df[[naming_convention(task, seed, pattern) for pattern in task_patterns[task] for seed in - range(task_reps[task])]]) - data = data.reshape(-1, task_reps[task]) - col_med = np.nanmean(data, axis=1) - # Find indices that you need to replace - inds = np.where(np.isnan(data)) - # Place column means in the indices. 
Align the arrays using take - data[inds] = np.take(col_med, inds[0]) - data = data.reshape(len(df["training_points"]), -1) - data = data.transpose().reshape(-1) - data = data + np.random.normal(0, 0.01, len(data)) - pattern = np.array([i // (len(data) // len(task_patterns[task])) for i in range(len(data))]) - seed = np.array([0, 1, 2, 3] * (len(data) // task_reps[task])) - long_df = pd.DataFrame(np.stack((expanded_training_points, pattern, seed, data), axis=1), - columns=["training_points", "pattern", "seed", task_metrics[task]]) - long_df['pattern'] = long_df['pattern'].astype(int).astype(str) - gby_pattern = long_df.groupby('pattern') - pattern_colors = ["royalblue", "darkturquoise", "darkviolet"] - - for i, (pattern, pattern_df) in enumerate(gby_pattern): - gby_training_points = pattern_df.groupby('training_points') - x = [training_point for training_point, training_point_df in gby_training_points] - y_max = list([np.max(training_point_df[task_metrics[task]]) for training_point, training_point_df in gby_training_points]) - y_min = list([np.min(training_point_df[task_metrics[task]]) for training_point, training_point_df in gby_training_points]) - y = list([np.median(training_point_df[task_metrics[task]]) for training_point, training_point_df in gby_training_points]) - fig.circle(x, y, color=pattern_colors[i], alpha=1, legend=f"模式 {i}") - fig.line(x, y, color=pattern_colors[i], alpha=1) - fig.varea(x=x, y1=y_max, y2=y_min, color=pattern_colors[i], alpha=0.11) - # source = ColumnDataSource(data=dict(base=x, lower=y_min, upper=y_max)) - # w = Whisker(source=source, base="base", upper="upper", lower="lower", line_color=pattern_colors[i], line_alpha=0.3) - # w.upper_head.line_color = pattern_colors[i] - # w.lower_head.line_color = pattern_colors[i] - # fig.add_layout(w) - - return fig - - - -def cubic_easing(t): - if t < 0.5: - return 4 * t * t * t - p = 2 * t - 2 - return 0.5 * p * p * p + 1 - - -def circ_easing(t): - if t < 0.5: - return 0.5 * (1 - math.sqrt(1 - 4 * (t * t))) - return 0.5 * (math.sqrt(-((2 * t) - 3) * ((2 * t) - 1)) + 1) diff --git a/spaces/Mecca/whisper-webui/app-local.py b/spaces/Mecca/whisper-webui/app-local.py deleted file mode 100644 index c7717d096ca5f95177f0dba03cd62ca729bae9f3..0000000000000000000000000000000000000000 --- a/spaces/Mecca/whisper-webui/app-local.py +++ /dev/null @@ -1,5 +0,0 @@ -# Run the app with no audio file restrictions -from app import create_ui -from src.config import ApplicationConfig - -create_ui(ApplicationConfig.create_default(input_audio_max_duration=-1)) \ No newline at end of file diff --git a/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/mmseg/models/segmentors/cascade_encoder_decoder.py b/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/mmseg/models/segmentors/cascade_encoder_decoder.py deleted file mode 100644 index 873957d8d6468147c994493d92ff5c1b15bfb703..0000000000000000000000000000000000000000 --- a/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/mmseg/models/segmentors/cascade_encoder_decoder.py +++ /dev/null @@ -1,98 +0,0 @@ -from torch import nn - -from annotator.uniformer.mmseg.core import add_prefix -from annotator.uniformer.mmseg.ops import resize -from .. import builder -from ..builder import SEGMENTORS -from .encoder_decoder import EncoderDecoder - - -@SEGMENTORS.register_module() -class CascadeEncoderDecoder(EncoderDecoder): - """Cascade Encoder Decoder segmentors. - - CascadeEncoderDecoder almost the same as EncoderDecoder, while decoders of - CascadeEncoderDecoder are cascaded. 
The output of previous decoder_head - will be the input of next decoder_head. - """ - - def __init__(self, - num_stages, - backbone, - decode_head, - neck=None, - auxiliary_head=None, - train_cfg=None, - test_cfg=None, - pretrained=None): - self.num_stages = num_stages - super(CascadeEncoderDecoder, self).__init__( - backbone=backbone, - decode_head=decode_head, - neck=neck, - auxiliary_head=auxiliary_head, - train_cfg=train_cfg, - test_cfg=test_cfg, - pretrained=pretrained) - - def _init_decode_head(self, decode_head): - """Initialize ``decode_head``""" - assert isinstance(decode_head, list) - assert len(decode_head) == self.num_stages - self.decode_head = nn.ModuleList() - for i in range(self.num_stages): - self.decode_head.append(builder.build_head(decode_head[i])) - self.align_corners = self.decode_head[-1].align_corners - self.num_classes = self.decode_head[-1].num_classes - - def init_weights(self, pretrained=None): - """Initialize the weights in backbone and heads. - - Args: - pretrained (str, optional): Path to pre-trained weights. - Defaults to None. - """ - self.backbone.init_weights(pretrained=pretrained) - for i in range(self.num_stages): - self.decode_head[i].init_weights() - if self.with_auxiliary_head: - if isinstance(self.auxiliary_head, nn.ModuleList): - for aux_head in self.auxiliary_head: - aux_head.init_weights() - else: - self.auxiliary_head.init_weights() - - def encode_decode(self, img, img_metas): - """Encode images with backbone and decode into a semantic segmentation - map of the same size as input.""" - x = self.extract_feat(img) - out = self.decode_head[0].forward_test(x, img_metas, self.test_cfg) - for i in range(1, self.num_stages): - out = self.decode_head[i].forward_test(x, out, img_metas, - self.test_cfg) - out = resize( - input=out, - size=img.shape[2:], - mode='bilinear', - align_corners=self.align_corners) - return out - - def _decode_head_forward_train(self, x, img_metas, gt_semantic_seg): - """Run forward function and calculate loss for decode head in - training.""" - losses = dict() - - loss_decode = self.decode_head[0].forward_train( - x, img_metas, gt_semantic_seg, self.train_cfg) - - losses.update(add_prefix(loss_decode, 'decode_0')) - - for i in range(1, self.num_stages): - # forward test again, maybe unnecessary for most methods. 
- prev_outputs = self.decode_head[i - 1].forward_test( - x, img_metas, self.test_cfg) - loss_decode = self.decode_head[i].forward_train( - x, prev_outputs, img_metas, gt_semantic_seg, self.train_cfg) - losses.update(add_prefix(loss_decode, f'decode_{i}')) - - return losses diff --git a/spaces/Mellow-ai/PhotoAI_Mellow/ldm/modules/midas/__init__.py b/spaces/Mellow-ai/PhotoAI_Mellow/ldm/modules/midas/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/NATSpeech/DiffSpeech/data_gen/tts/runs/adapt_mfa_align.py b/spaces/NATSpeech/DiffSpeech/data_gen/tts/runs/adapt_mfa_align.py deleted file mode 100644 index cadb6cbb502f852279248c98566b4616f32b1311..0000000000000000000000000000000000000000 --- a/spaces/NATSpeech/DiffSpeech/data_gen/tts/runs/adapt_mfa_align.py +++ /dev/null @@ -1,18 +0,0 @@ -import utils.commons.single_thread_env # NOQA -import os -import subprocess -from utils.commons.hparams import hparams, set_hparams - - -def adapt_mfa_align(): - CORPUS = hparams['processed_data_dir'].split("/")[-1] - print(f"| Run MFA for {CORPUS}.") - NUM_JOB = int(os.getenv('N_PROC', os.cpu_count())) - subprocess.check_call( - f'CORPUS={CORPUS} NUM_JOB={NUM_JOB} bash scripts/run_mfa_adapt.sh', - shell=True) - - -if __name__ == '__main__': - set_hparams(print_hparams=False) - adapt_mfa_align() diff --git a/spaces/NATSpeech/PortaSpeech/egs/datasets/audio/lj/preprocess.py b/spaces/NATSpeech/PortaSpeech/egs/datasets/audio/lj/preprocess.py deleted file mode 100644 index a3d45c9aa855bb7ce40b5e8374547014350fa92b..0000000000000000000000000000000000000000 --- a/spaces/NATSpeech/PortaSpeech/egs/datasets/audio/lj/preprocess.py +++ /dev/null @@ -1,9 +0,0 @@ -from data_gen.tts.base_preprocess import BasePreprocessor - - -class LJPreprocess(BasePreprocessor): - def meta_data(self): - for l in open(f'{self.raw_data_dir}/metadata.csv').readlines(): - item_name, _, txt = l.strip().split("|") - wav_fn = f"{self.raw_data_dir}/wavs/{item_name}.wav" - yield {'item_name': item_name, 'wav_fn': wav_fn, 'txt': txt} diff --git a/spaces/NCTCMumbai/NCTC/models/official/nlp/configs/bert_test.py b/spaces/NCTCMumbai/NCTC/models/official/nlp/configs/bert_test.py deleted file mode 100644 index c734b190ea71697350cc0fb84cf50582afdb96b3..0000000000000000000000000000000000000000 --- a/spaces/NCTCMumbai/NCTC/models/official/nlp/configs/bert_test.py +++ /dev/null @@ -1,65 +0,0 @@ -# Lint as: python3 -# Copyright 2020 The TensorFlow Authors. All Rights Reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
-# ============================================================================== -"""Tests for BERT configurations and models instantiation.""" - -import tensorflow as tf - -from official.nlp.configs import bert -from official.nlp.configs import encoders - - -class BertModelsTest(tf.test.TestCase): - - def test_network_invocation(self): - config = bert.BertPretrainerConfig( - encoder=encoders.TransformerEncoderConfig(vocab_size=10, num_layers=1)) - _ = bert.instantiate_bertpretrainer_from_cfg(config) - - # Invokes with classification heads. - config = bert.BertPretrainerConfig( - encoder=encoders.TransformerEncoderConfig(vocab_size=10, num_layers=1), - cls_heads=[ - bert.ClsHeadConfig( - inner_dim=10, num_classes=2, name="next_sentence") - ]) - _ = bert.instantiate_bertpretrainer_from_cfg(config) - - with self.assertRaises(ValueError): - config = bert.BertPretrainerConfig( - encoder=encoders.TransformerEncoderConfig( - vocab_size=10, num_layers=1), - cls_heads=[ - bert.ClsHeadConfig( - inner_dim=10, num_classes=2, name="next_sentence"), - bert.ClsHeadConfig( - inner_dim=10, num_classes=2, name="next_sentence") - ]) - _ = bert.instantiate_bertpretrainer_from_cfg(config) - - def test_checkpoint_items(self): - config = bert.BertPretrainerConfig( - encoder=encoders.TransformerEncoderConfig(vocab_size=10, num_layers=1), - cls_heads=[ - bert.ClsHeadConfig( - inner_dim=10, num_classes=2, name="next_sentence") - ]) - encoder = bert.instantiate_bertpretrainer_from_cfg(config) - self.assertSameElements(encoder.checkpoint_items.keys(), - ["encoder", "next_sentence.pooler_dense"]) - - -if __name__ == "__main__": - tf.test.main() diff --git a/spaces/NCTCMumbai/NCTC/models/official/nlp/modeling/networks/albert_transformer_encoder_test.py b/spaces/NCTCMumbai/NCTC/models/official/nlp/modeling/networks/albert_transformer_encoder_test.py deleted file mode 100644 index 44368e494ae04dd9b92c63987e6881aabd8ff4c2..0000000000000000000000000000000000000000 --- a/spaces/NCTCMumbai/NCTC/models/official/nlp/modeling/networks/albert_transformer_encoder_test.py +++ /dev/null @@ -1,174 +0,0 @@ -# Copyright 2019 The TensorFlow Authors. All Rights Reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -# ============================================================================== -"""Tests for ALBERT transformer-based text encoder network.""" - -from __future__ import absolute_import -from __future__ import division -from __future__ import print_function - -from absl.testing import parameterized -import numpy as np -import tensorflow as tf - -from tensorflow.python.keras import keras_parameterized # pylint: disable=g-direct-tensorflow-import -from official.nlp.modeling.networks import albert_transformer_encoder - - -# This decorator runs the test in V1, V2-Eager, and V2-Functional mode. It -# guarantees forward compatibility of this code for the V2 switchover. 
-@keras_parameterized.run_all_keras_modes -class AlbertTransformerEncoderTest(keras_parameterized.TestCase): - - def tearDown(self): - super(AlbertTransformerEncoderTest, self).tearDown() - tf.keras.mixed_precision.experimental.set_policy("float32") - - @parameterized.named_parameters( - dict(testcase_name="default", expected_dtype=tf.float32), - dict( - testcase_name="with_float16_dtype", - expected_dtype=tf.float16), - ) - def test_network_creation(self, expected_dtype): - hidden_size = 32 - sequence_length = 21 - - kwargs = dict( - vocab_size=100, - hidden_size=hidden_size, - sequence_length=sequence_length, - num_attention_heads=2, - num_layers=3) - if expected_dtype == tf.float16: - tf.keras.mixed_precision.experimental.set_policy("mixed_float16") - - # Create a small TransformerEncoder for testing. - test_network = albert_transformer_encoder.AlbertTransformerEncoder(**kwargs) - - # Create the inputs (note that the first dimension is implicit). - word_ids = tf.keras.Input(shape=(sequence_length,), dtype=tf.int32) - mask = tf.keras.Input(shape=(sequence_length,), dtype=tf.int32) - type_ids = tf.keras.Input(shape=(sequence_length,), dtype=tf.int32) - data, pooled = test_network([word_ids, mask, type_ids]) - - expected_data_shape = [None, sequence_length, hidden_size] - expected_pooled_shape = [None, hidden_size] - self.assertAllEqual(expected_data_shape, data.shape.as_list()) - self.assertAllEqual(expected_pooled_shape, pooled.shape.as_list()) - - # If float_dtype is set to float16, the data output is float32 (from a layer - # norm) and pool output should be float16. - self.assertEqual(tf.float32, data.dtype) - self.assertEqual(expected_dtype, pooled.dtype) - - # ALBERT has additonal 'embedding_hidden_mapping_in' weights and - # it shares transformer weights. - self.assertNotEmpty( - [x for x in test_network.weights if "embedding_projection/" in x.name]) - self.assertNotEmpty( - [x for x in test_network.weights if "transformer/" in x.name]) - self.assertEmpty( - [x for x in test_network.weights if "transformer/layer" in x.name]) - - def test_network_invocation(self): - hidden_size = 32 - sequence_length = 21 - vocab_size = 57 - num_types = 7 - # Create a small TransformerEncoder for testing. - test_network = albert_transformer_encoder.AlbertTransformerEncoder( - vocab_size=vocab_size, - embedding_width=8, - hidden_size=hidden_size, - sequence_length=sequence_length, - num_attention_heads=2, - num_layers=3, - type_vocab_size=num_types) - self.assertTrue( - test_network._position_embedding_layer._use_dynamic_slicing) - # Create the inputs (note that the first dimension is implicit). - word_ids = tf.keras.Input(shape=(sequence_length,), dtype=tf.int32) - mask = tf.keras.Input(shape=(sequence_length,), dtype=tf.int32) - type_ids = tf.keras.Input(shape=(sequence_length,), dtype=tf.int32) - data, pooled = test_network([word_ids, mask, type_ids]) - - # Create a model based off of this network: - model = tf.keras.Model([word_ids, mask, type_ids], [data, pooled]) - - # Invoke the model. We can't validate the output data here (the model is too - # complex) but this will catch structural runtime errors. 
- batch_size = 3 - word_id_data = np.random.randint( - vocab_size, size=(batch_size, sequence_length)) - mask_data = np.random.randint(2, size=(batch_size, sequence_length)) - type_id_data = np.random.randint( - num_types, size=(batch_size, sequence_length)) - _ = model.predict([word_id_data, mask_data, type_id_data]) - - # Creates a TransformerEncoder with max_sequence_length != sequence_length - max_sequence_length = 128 - test_network = albert_transformer_encoder.AlbertTransformerEncoder( - vocab_size=vocab_size, - embedding_width=8, - hidden_size=hidden_size, - sequence_length=sequence_length, - max_sequence_length=max_sequence_length, - num_attention_heads=2, - num_layers=3, - type_vocab_size=num_types) - self.assertTrue(test_network._position_embedding_layer._use_dynamic_slicing) - model = tf.keras.Model([word_ids, mask, type_ids], [data, pooled]) - _ = model.predict([word_id_data, mask_data, type_id_data]) - - def test_serialize_deserialize(self): - tf.keras.mixed_precision.experimental.set_policy("mixed_float16") - # Create a network object that sets all of its config options. - kwargs = dict( - vocab_size=100, - embedding_width=8, - hidden_size=32, - num_layers=3, - num_attention_heads=2, - sequence_length=21, - max_sequence_length=21, - type_vocab_size=12, - intermediate_size=1223, - activation="relu", - dropout_rate=0.05, - attention_dropout_rate=0.22, - initializer="glorot_uniform") - network = albert_transformer_encoder.AlbertTransformerEncoder(**kwargs) - - expected_config = dict(kwargs) - expected_config["activation"] = tf.keras.activations.serialize( - tf.keras.activations.get(expected_config["activation"])) - expected_config["initializer"] = tf.keras.initializers.serialize( - tf.keras.initializers.get(expected_config["initializer"])) - self.assertEqual(network.get_config(), expected_config) - - # Create another network object from the first object's config. - new_network = ( - albert_transformer_encoder.AlbertTransformerEncoder.from_config( - network.get_config())) - - # Validate that the config can be forced to JSON. - _ = new_network.to_json() - - # If the serialization was successful, the new config should match the old. - self.assertAllEqual(network.get_config(), new_network.get_config()) - - -if __name__ == "__main__": - tf.test.main() diff --git a/spaces/NCTCMumbai/NCTC/models/research/audioset/yamnet/params.py b/spaces/NCTCMumbai/NCTC/models/research/audioset/yamnet/params.py deleted file mode 100644 index 5d848ad71695f2fdb29eddea5b7c135509fa5fe2..0000000000000000000000000000000000000000 --- a/spaces/NCTCMumbai/NCTC/models/research/audioset/yamnet/params.py +++ /dev/null @@ -1,42 +0,0 @@ -# Copyright 2019 The TensorFlow Authors All Rights Reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -# ============================================================================== - -"""Hyperparameters for YAMNet.""" - -# The following hyperparameters (except PATCH_HOP_SECONDS) were used to train YAMNet, -# so expect some variability in performance if you change these. 
The patch hop can -# be changed arbitrarily: a smaller hop should give you more patches from the same -# clip and possibly better performance at a larger computational cost. -SAMPLE_RATE = 16000 -STFT_WINDOW_SECONDS = 0.025 -STFT_HOP_SECONDS = 0.010 -MEL_BANDS = 64 -MEL_MIN_HZ = 125 -MEL_MAX_HZ = 7500 -LOG_OFFSET = 0.001 -PATCH_WINDOW_SECONDS = 0.96 -PATCH_HOP_SECONDS = 0.48 - -PATCH_FRAMES = int(round(PATCH_WINDOW_SECONDS / STFT_HOP_SECONDS)) -PATCH_BANDS = MEL_BANDS -NUM_CLASSES = 521 -CONV_PADDING = 'same' -BATCHNORM_CENTER = True -BATCHNORM_SCALE = False -BATCHNORM_EPSILON = 1e-4 -CLASSIFIER_ACTIVATION = 'sigmoid' - -FEATURES_LAYER_NAME = 'features' -EXAMPLE_PREDICTIONS_LAYER_NAME = 'predictions' diff --git a/spaces/NKU-AMT/AMT/networks/amts.py b/spaces/NKU-AMT/AMT/networks/amts.py deleted file mode 100644 index 1fcb01717bf9ad891fd25b5aa465221705f34f9f..0000000000000000000000000000000000000000 --- a/spaces/NKU-AMT/AMT/networks/amts.py +++ /dev/null @@ -1,152 +0,0 @@ -import torch -import torch.nn as nn -from networks.blocks.raft import ( - coords_grid, - SmallUpdateBlock, BidirCorrBlock -) -from networks.blocks.feat_enc import ( - SmallEncoder -) -from networks.blocks.ifrnet import ( - resize, - Encoder, - InitDecoder, - IntermediateDecoder -) -from networks.blocks.multi_flow import ( - multi_flow_combine, - MultiFlowDecoder -) - -class Model(nn.Module): - def __init__(self, - corr_radius=3, - corr_lvls=4, - num_flows=3, - channels=[20, 32, 44, 56], - skip_channels=20): - super(Model, self).__init__() - self.radius = corr_radius - self.corr_levels = corr_lvls - self.num_flows = num_flows - self.channels = channels - self.skip_channels = skip_channels - - self.feat_encoder = SmallEncoder(output_dim=84, norm_fn='instance', dropout=0.) - self.encoder = Encoder(channels) - - self.decoder4 = InitDecoder(channels[3], channels[2], skip_channels) - self.decoder3 = IntermediateDecoder(channels[2], channels[1], skip_channels) - self.decoder2 = IntermediateDecoder(channels[1], channels[0], skip_channels) - self.decoder1 = MultiFlowDecoder(channels[0], skip_channels, num_flows) - - self.update4 = self._get_updateblock(44) - self.update3 = self._get_updateblock(32, 2) - self.update2 = self._get_updateblock(20, 4) - - self.comb_block = nn.Sequential( - nn.Conv2d(3*num_flows, 6*num_flows, 3, 1, 1), - nn.PReLU(6*num_flows), - nn.Conv2d(6*num_flows, 3, 3, 1, 1), - ) - - def _get_updateblock(self, cdim, scale_factor=None): - return SmallUpdateBlock(cdim=cdim, hidden_dim=76, flow_dim=20, corr_dim=64, - fc_dim=68, scale_factor=scale_factor, - corr_levels=self.corr_levels, radius=self.radius) - - def _corr_scale_lookup(self, corr_fn, coord, flow0, flow1, embt, downsample=1): - # convert t -> 0 to 0 -> 1 | convert t -> 1 to 1 -> 0 - # based on linear assumption - t1_scale = 1. / embt - t0_scale = 1. / (1. 
- embt) - if downsample != 1: - inv = 1 / downsample - flow0 = inv * resize(flow0, scale_factor=inv) - flow1 = inv * resize(flow1, scale_factor=inv) - - corr0, corr1 = corr_fn(coord + flow1 * t1_scale, coord + flow0 * t0_scale) - corr = torch.cat([corr0, corr1], dim=1) - flow = torch.cat([flow0, flow1], dim=1) - return corr, flow - - def forward(self, img0, img1, embt, scale_factor=1.0, eval=False, **kwargs): - mean_ = torch.cat([img0, img1], 2).mean(1, keepdim=True).mean(2, keepdim=True).mean(3, keepdim=True) - img0 = img0 - mean_ - img1 = img1 - mean_ - img0_ = resize(img0, scale_factor) if scale_factor != 1.0 else img0 - img1_ = resize(img1, scale_factor) if scale_factor != 1.0 else img1 - b, _, h, w = img0_.shape - coord = coords_grid(b, h // 8, w // 8, img0.device) - - fmap0, fmap1 = self.feat_encoder([img0_, img1_]) # [1, 128, H//8, W//8] - corr_fn = BidirCorrBlock(fmap0, fmap1, radius=self.radius, num_levels=self.corr_levels) - - # f0_1: [1, c0, H//2, W//2] | f0_2: [1, c1, H//4, W//4] - # f0_3: [1, c2, H//8, W//8] | f0_4: [1, c3, H//16, W//16] - f0_1, f0_2, f0_3, f0_4 = self.encoder(img0_) - f1_1, f1_2, f1_3, f1_4 = self.encoder(img1_) - - ######################################### the 4th decoder ######################################### - up_flow0_4, up_flow1_4, ft_3_ = self.decoder4(f0_4, f1_4, embt) - corr_4, flow_4 = self._corr_scale_lookup(corr_fn, coord, - up_flow0_4, up_flow1_4, - embt, downsample=1) - - # residue update with lookup corr - delta_ft_3_, delta_flow_4 = self.update4(ft_3_, flow_4, corr_4) - delta_flow0_4, delta_flow1_4 = torch.chunk(delta_flow_4, 2, 1) - up_flow0_4 = up_flow0_4 + delta_flow0_4 - up_flow1_4 = up_flow1_4 + delta_flow1_4 - ft_3_ = ft_3_ + delta_ft_3_ - - ######################################### the 3rd decoder ######################################### - up_flow0_3, up_flow1_3, ft_2_ = self.decoder3(ft_3_, f0_3, f1_3, up_flow0_4, up_flow1_4) - corr_3, flow_3 = self._corr_scale_lookup(corr_fn, - coord, up_flow0_3, up_flow1_3, - embt, downsample=2) - - # residue update with lookup corr - delta_ft_2_, delta_flow_3 = self.update3(ft_2_, flow_3, corr_3) - delta_flow0_3, delta_flow1_3 = torch.chunk(delta_flow_3, 2, 1) - up_flow0_3 = up_flow0_3 + delta_flow0_3 - up_flow1_3 = up_flow1_3 + delta_flow1_3 - ft_2_ = ft_2_ + delta_ft_2_ - - ######################################### the 2nd decoder ######################################### - up_flow0_2, up_flow1_2, ft_1_ = self.decoder2(ft_2_, f0_2, f1_2, up_flow0_3, up_flow1_3) - corr_2, flow_2 = self._corr_scale_lookup(corr_fn, - coord, up_flow0_2, up_flow1_2, - embt, downsample=4) - - # residue update with lookup corr - delta_ft_1_, delta_flow_2 = self.update2(ft_1_, flow_2, corr_2) - delta_flow0_2, delta_flow1_2 = torch.chunk(delta_flow_2, 2, 1) - up_flow0_2 = up_flow0_2 + delta_flow0_2 - up_flow1_2 = up_flow1_2 + delta_flow1_2 - ft_1_ = ft_1_ + delta_ft_1_ - - ######################################### the 1st decoder ######################################### - up_flow0_1, up_flow1_1, mask, img_res = self.decoder1(ft_1_, f0_1, f1_1, up_flow0_2, up_flow1_2) - - if scale_factor != 1.0: - up_flow0_1 = resize(up_flow0_1, scale_factor=(1.0/scale_factor)) * (1.0/scale_factor) - up_flow1_1 = resize(up_flow1_1, scale_factor=(1.0/scale_factor)) * (1.0/scale_factor) - mask = resize(mask, scale_factor=(1.0/scale_factor)) - img_res = resize(img_res, scale_factor=(1.0/scale_factor)) - - imgt_pred = multi_flow_combine(self.comb_block, img0, img1, up_flow0_1, up_flow1_1, - mask, img_res, mean_) - imgt_pred = 
torch.clamp(imgt_pred, 0, 1) - - if eval: - return { 'imgt_pred': imgt_pred, } - else: - up_flow0_1 = up_flow0_1.reshape(b, self.num_flows, 2, h, w) - up_flow1_1 = up_flow1_1.reshape(b, self.num_flows, 2, h, w) - return { - 'imgt_pred': imgt_pred, - 'flow0_pred': [up_flow0_1, up_flow0_2, up_flow0_3, up_flow0_4], - 'flow1_pred': [up_flow1_1, up_flow1_2, up_flow1_3, up_flow1_4], - 'ft_pred': [ft_1_, ft_2_, ft_3_], - } diff --git a/spaces/NonnaRose/Image-Caption/app2.py b/spaces/NonnaRose/Image-Caption/app2.py deleted file mode 100644 index e60f8a871e0cdbaa698f40b1619358ad610c2634..0000000000000000000000000000000000000000 --- a/spaces/NonnaRose/Image-Caption/app2.py +++ /dev/null @@ -1,50 +0,0 @@ -import torch -import gradio as gr -import re -from transformers import AutoTokenizer, ViTFeatureExtractor, VisionEncoderDecoderModel - -device='cpu' -encoder_checkpoint = "nlpconnect/vit-gpt2-image-captioning" -decoder_checkpoint = "nlpconnect/vit-gpt2-image-captioning" -model_checkpoint = "nlpconnect/vit-gpt2-image-captioning" -feature_extractor = ViTFeatureExtractor.from_pretrained(encoder_checkpoint) -tokenizer = AutoTokenizer.from_pretrained(decoder_checkpoint) -model = VisionEncoderDecoderModel.from_pretrained(model_checkpoint).to(device) - -def predict(image,max_length=64, num_beams=4): - image = image.convert('RGB') - image = feature_extractor(image, return_tensors="pt").pixel_values.to(device) - clean_text = lambda x: x.replace('<|endoftext|>','').split('\n')[0] - caption_ids = model.generate(image, max_length = max_length)[0] - caption_text = clean_text(tokenizer.decode(caption_ids)) - return caption_text - -def set_example_image(example: list) -> dict: - return gr.Image.update(value=example[0]) -css = ''' -h1#title { - text-align: center; -} -h3#header { - text-align: center; -} -img#overview { - max-width: 800px; - max-height: 600px; -} -img#style-image { - max-width: 1000px; - max-height: 600px; -} -''' -demo = gr.Blocks(css=css) -with demo: - gr.Markdown('''

    <h1 id="title">Image Caption 🖼️</h1>

    ''') - gr.Markdown('''Made by : Shreyas Dixit''') - with gr.Column(): - input = gr.inputs.Image(label="Upload your Image", type = 'pil', optional=True) - output = gr.outputs.Textbox(type="auto",label="Captions") - btn = gr.Button("Genrate Caption") - btn.click(fn=predict, inputs=input, outputs=output) - -demo.launch() \ No newline at end of file diff --git a/spaces/Nyashi/rvc-models-epic/infer_pack/attentions.py b/spaces/Nyashi/rvc-models-epic/infer_pack/attentions.py deleted file mode 100644 index 77cb63ffccf3e33badf22d50862a64ba517b487f..0000000000000000000000000000000000000000 --- a/spaces/Nyashi/rvc-models-epic/infer_pack/attentions.py +++ /dev/null @@ -1,417 +0,0 @@ -import copy -import math -import numpy as np -import torch -from torch import nn -from torch.nn import functional as F - -from infer_pack import commons -from infer_pack import modules -from infer_pack.modules import LayerNorm - - -class Encoder(nn.Module): - def __init__( - self, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size=1, - p_dropout=0.0, - window_size=10, - **kwargs - ): - super().__init__() - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.window_size = window_size - - self.drop = nn.Dropout(p_dropout) - self.attn_layers = nn.ModuleList() - self.norm_layers_1 = nn.ModuleList() - self.ffn_layers = nn.ModuleList() - self.norm_layers_2 = nn.ModuleList() - for i in range(self.n_layers): - self.attn_layers.append( - MultiHeadAttention( - hidden_channels, - hidden_channels, - n_heads, - p_dropout=p_dropout, - window_size=window_size, - ) - ) - self.norm_layers_1.append(LayerNorm(hidden_channels)) - self.ffn_layers.append( - FFN( - hidden_channels, - hidden_channels, - filter_channels, - kernel_size, - p_dropout=p_dropout, - ) - ) - self.norm_layers_2.append(LayerNorm(hidden_channels)) - - def forward(self, x, x_mask): - attn_mask = x_mask.unsqueeze(2) * x_mask.unsqueeze(-1) - x = x * x_mask - for i in range(self.n_layers): - y = self.attn_layers[i](x, x, attn_mask) - y = self.drop(y) - x = self.norm_layers_1[i](x + y) - - y = self.ffn_layers[i](x, x_mask) - y = self.drop(y) - x = self.norm_layers_2[i](x + y) - x = x * x_mask - return x - - -class Decoder(nn.Module): - def __init__( - self, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size=1, - p_dropout=0.0, - proximal_bias=False, - proximal_init=True, - **kwargs - ): - super().__init__() - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.proximal_bias = proximal_bias - self.proximal_init = proximal_init - - self.drop = nn.Dropout(p_dropout) - self.self_attn_layers = nn.ModuleList() - self.norm_layers_0 = nn.ModuleList() - self.encdec_attn_layers = nn.ModuleList() - self.norm_layers_1 = nn.ModuleList() - self.ffn_layers = nn.ModuleList() - self.norm_layers_2 = nn.ModuleList() - for i in range(self.n_layers): - self.self_attn_layers.append( - MultiHeadAttention( - hidden_channels, - hidden_channels, - n_heads, - p_dropout=p_dropout, - proximal_bias=proximal_bias, - proximal_init=proximal_init, - ) - ) - self.norm_layers_0.append(LayerNorm(hidden_channels)) - self.encdec_attn_layers.append( - MultiHeadAttention( - hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout - ) - ) - 
self.norm_layers_1.append(LayerNorm(hidden_channels)) - self.ffn_layers.append( - FFN( - hidden_channels, - hidden_channels, - filter_channels, - kernel_size, - p_dropout=p_dropout, - causal=True, - ) - ) - self.norm_layers_2.append(LayerNorm(hidden_channels)) - - def forward(self, x, x_mask, h, h_mask): - """ - x: decoder input - h: encoder output - """ - self_attn_mask = commons.subsequent_mask(x_mask.size(2)).to( - device=x.device, dtype=x.dtype - ) - encdec_attn_mask = h_mask.unsqueeze(2) * x_mask.unsqueeze(-1) - x = x * x_mask - for i in range(self.n_layers): - y = self.self_attn_layers[i](x, x, self_attn_mask) - y = self.drop(y) - x = self.norm_layers_0[i](x + y) - - y = self.encdec_attn_layers[i](x, h, encdec_attn_mask) - y = self.drop(y) - x = self.norm_layers_1[i](x + y) - - y = self.ffn_layers[i](x, x_mask) - y = self.drop(y) - x = self.norm_layers_2[i](x + y) - x = x * x_mask - return x - - -class MultiHeadAttention(nn.Module): - def __init__( - self, - channels, - out_channels, - n_heads, - p_dropout=0.0, - window_size=None, - heads_share=True, - block_length=None, - proximal_bias=False, - proximal_init=False, - ): - super().__init__() - assert channels % n_heads == 0 - - self.channels = channels - self.out_channels = out_channels - self.n_heads = n_heads - self.p_dropout = p_dropout - self.window_size = window_size - self.heads_share = heads_share - self.block_length = block_length - self.proximal_bias = proximal_bias - self.proximal_init = proximal_init - self.attn = None - - self.k_channels = channels // n_heads - self.conv_q = nn.Conv1d(channels, channels, 1) - self.conv_k = nn.Conv1d(channels, channels, 1) - self.conv_v = nn.Conv1d(channels, channels, 1) - self.conv_o = nn.Conv1d(channels, out_channels, 1) - self.drop = nn.Dropout(p_dropout) - - if window_size is not None: - n_heads_rel = 1 if heads_share else n_heads - rel_stddev = self.k_channels**-0.5 - self.emb_rel_k = nn.Parameter( - torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) - * rel_stddev - ) - self.emb_rel_v = nn.Parameter( - torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) - * rel_stddev - ) - - nn.init.xavier_uniform_(self.conv_q.weight) - nn.init.xavier_uniform_(self.conv_k.weight) - nn.init.xavier_uniform_(self.conv_v.weight) - if proximal_init: - with torch.no_grad(): - self.conv_k.weight.copy_(self.conv_q.weight) - self.conv_k.bias.copy_(self.conv_q.bias) - - def forward(self, x, c, attn_mask=None): - q = self.conv_q(x) - k = self.conv_k(c) - v = self.conv_v(c) - - x, self.attn = self.attention(q, k, v, mask=attn_mask) - - x = self.conv_o(x) - return x - - def attention(self, query, key, value, mask=None): - # reshape [b, d, t] -> [b, n_h, t, d_k] - b, d, t_s, t_t = (*key.size(), query.size(2)) - query = query.view(b, self.n_heads, self.k_channels, t_t).transpose(2, 3) - key = key.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3) - value = value.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3) - - scores = torch.matmul(query / math.sqrt(self.k_channels), key.transpose(-2, -1)) - if self.window_size is not None: - assert ( - t_s == t_t - ), "Relative attention is only available for self-attention." 
- key_relative_embeddings = self._get_relative_embeddings(self.emb_rel_k, t_s) - rel_logits = self._matmul_with_relative_keys( - query / math.sqrt(self.k_channels), key_relative_embeddings - ) - scores_local = self._relative_position_to_absolute_position(rel_logits) - scores = scores + scores_local - if self.proximal_bias: - assert t_s == t_t, "Proximal bias is only available for self-attention." - scores = scores + self._attention_bias_proximal(t_s).to( - device=scores.device, dtype=scores.dtype - ) - if mask is not None: - scores = scores.masked_fill(mask == 0, -1e4) - if self.block_length is not None: - assert ( - t_s == t_t - ), "Local attention is only available for self-attention." - block_mask = ( - torch.ones_like(scores) - .triu(-self.block_length) - .tril(self.block_length) - ) - scores = scores.masked_fill(block_mask == 0, -1e4) - p_attn = F.softmax(scores, dim=-1) # [b, n_h, t_t, t_s] - p_attn = self.drop(p_attn) - output = torch.matmul(p_attn, value) - if self.window_size is not None: - relative_weights = self._absolute_position_to_relative_position(p_attn) - value_relative_embeddings = self._get_relative_embeddings( - self.emb_rel_v, t_s - ) - output = output + self._matmul_with_relative_values( - relative_weights, value_relative_embeddings - ) - output = ( - output.transpose(2, 3).contiguous().view(b, d, t_t) - ) # [b, n_h, t_t, d_k] -> [b, d, t_t] - return output, p_attn - - def _matmul_with_relative_values(self, x, y): - """ - x: [b, h, l, m] - y: [h or 1, m, d] - ret: [b, h, l, d] - """ - ret = torch.matmul(x, y.unsqueeze(0)) - return ret - - def _matmul_with_relative_keys(self, x, y): - """ - x: [b, h, l, d] - y: [h or 1, m, d] - ret: [b, h, l, m] - """ - ret = torch.matmul(x, y.unsqueeze(0).transpose(-2, -1)) - return ret - - def _get_relative_embeddings(self, relative_embeddings, length): - max_relative_position = 2 * self.window_size + 1 - # Pad first before slice to avoid using cond ops. - pad_length = max(length - (self.window_size + 1), 0) - slice_start_position = max((self.window_size + 1) - length, 0) - slice_end_position = slice_start_position + 2 * length - 1 - if pad_length > 0: - padded_relative_embeddings = F.pad( - relative_embeddings, - commons.convert_pad_shape([[0, 0], [pad_length, pad_length], [0, 0]]), - ) - else: - padded_relative_embeddings = relative_embeddings - used_relative_embeddings = padded_relative_embeddings[ - :, slice_start_position:slice_end_position - ] - return used_relative_embeddings - - def _relative_position_to_absolute_position(self, x): - """ - x: [b, h, l, 2*l-1] - ret: [b, h, l, l] - """ - batch, heads, length, _ = x.size() - # Concat columns of pad to shift from relative to absolute indexing. - x = F.pad(x, commons.convert_pad_shape([[0, 0], [0, 0], [0, 0], [0, 1]])) - - # Concat extra elements so to add up to shape (len+1, 2*len-1). - x_flat = x.view([batch, heads, length * 2 * length]) - x_flat = F.pad( - x_flat, commons.convert_pad_shape([[0, 0], [0, 0], [0, length - 1]]) - ) - - # Reshape and slice out the padded elements. 
- x_final = x_flat.view([batch, heads, length + 1, 2 * length - 1])[ - :, :, :length, length - 1 : - ] - return x_final - - def _absolute_position_to_relative_position(self, x): - """ - x: [b, h, l, l] - ret: [b, h, l, 2*l-1] - """ - batch, heads, length, _ = x.size() - # padd along column - x = F.pad( - x, commons.convert_pad_shape([[0, 0], [0, 0], [0, 0], [0, length - 1]]) - ) - x_flat = x.view([batch, heads, length**2 + length * (length - 1)]) - # add 0's in the beginning that will skew the elements after reshape - x_flat = F.pad(x_flat, commons.convert_pad_shape([[0, 0], [0, 0], [length, 0]])) - x_final = x_flat.view([batch, heads, length, 2 * length])[:, :, :, 1:] - return x_final - - def _attention_bias_proximal(self, length): - """Bias for self-attention to encourage attention to close positions. - Args: - length: an integer scalar. - Returns: - a Tensor with shape [1, 1, length, length] - """ - r = torch.arange(length, dtype=torch.float32) - diff = torch.unsqueeze(r, 0) - torch.unsqueeze(r, 1) - return torch.unsqueeze(torch.unsqueeze(-torch.log1p(torch.abs(diff)), 0), 0) - - -class FFN(nn.Module): - def __init__( - self, - in_channels, - out_channels, - filter_channels, - kernel_size, - p_dropout=0.0, - activation=None, - causal=False, - ): - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.activation = activation - self.causal = causal - - if causal: - self.padding = self._causal_padding - else: - self.padding = self._same_padding - - self.conv_1 = nn.Conv1d(in_channels, filter_channels, kernel_size) - self.conv_2 = nn.Conv1d(filter_channels, out_channels, kernel_size) - self.drop = nn.Dropout(p_dropout) - - def forward(self, x, x_mask): - x = self.conv_1(self.padding(x * x_mask)) - if self.activation == "gelu": - x = x * torch.sigmoid(1.702 * x) - else: - x = torch.relu(x) - x = self.drop(x) - x = self.conv_2(self.padding(x * x_mask)) - return x * x_mask - - def _causal_padding(self, x): - if self.kernel_size == 1: - return x - pad_l = self.kernel_size - 1 - pad_r = 0 - padding = [[0, 0], [0, 0], [pad_l, pad_r]] - x = F.pad(x, commons.convert_pad_shape(padding)) - return x - - def _same_padding(self, x): - if self.kernel_size == 1: - return x - pad_l = (self.kernel_size - 1) // 2 - pad_r = self.kernel_size // 2 - padding = [[0, 0], [0, 0], [pad_l, pad_r]] - x = F.pad(x, commons.convert_pad_shape(padding)) - return x diff --git a/spaces/OAOA/DifFace/README.md b/spaces/OAOA/DifFace/README.md deleted file mode 100644 index fed65b781f7ed926d45419323d08ef361e9faca1..0000000000000000000000000000000000000000 --- a/spaces/OAOA/DifFace/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: DifFace -emoji: whale -colorFrom: green -colorTo: blue -sdk: gradio -sdk_version: 3.14.0 -app_file: app.py -pinned: false -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/wav2vec/unsupervised/kaldi_self_train/st/local/prepare_lang_word.sh b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/wav2vec/unsupervised/kaldi_self_train/st/local/prepare_lang_word.sh deleted file mode 100644 index a7ea3877beefe1d4d53f9f7e32b004d8ce01e22a..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/wav2vec/unsupervised/kaldi_self_train/st/local/prepare_lang_word.sh +++ /dev/null @@ 
-1,35 +0,0 @@ -#!/bin/bash - -num_sil_states=3 -num_nonsil_states=1 - -. ./cmd.sh -. ./path.sh -. parse_options.sh - -set -eux - -dict=$1 -data_dir=$2 -lexicon=$3 - -dict_dir=$data_dir/local/dict_word -tmplm_dir=$data_dir/local/lang_tmp_word -lm_dir=$data_dir/lang_word - -mkdir -p $dict_dir $tmplm_dir $lm_dir - -# prepare dict -echo "SIL" > $dict_dir/silence_phones.txt -echo "SIL" > $dict_dir/optional_silence.txt -awk '{print $1}' $dict > $dict_dir/nonsilence_phones.txt - -(echo "!SIL SIL"; echo " SIL";) | cat - $lexicon > $dict_dir/lexicon.txt - -echo "SIL" > $dict_dir/extra_questions.txt -awk '{printf $1" "} END {printf "\n"}' $dict >> $dict_dir/extra_questions.txt - -# prepare lang -utils/prepare_lang.sh --position-dependent-phones false \ - --num_sil_states $num_sil_states --num_nonsil_states $num_nonsil_states \ - $dict_dir "" $tmplm_dir $lm_dir diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/models/bart/hub_interface.py b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/models/bart/hub_interface.py deleted file mode 100644 index 4d47d9751837c744b1d0d460117b78fcbeeb12d8..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/models/bart/hub_interface.py +++ /dev/null @@ -1,208 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import copy -import logging -from typing import Dict, List - -import numpy as np -import torch -import torch.nn as nn -import torch.nn.functional as F -from fairseq import utils -from fairseq.data import encoders -from fairseq.hub_utils import GeneratorHubInterface -from omegaconf import open_dict - - -logger = logging.getLogger(__name__) - - -class BARTHubInterface(GeneratorHubInterface): - """A simple PyTorch Hub interface to BART. - - Usage: https://github.com/pytorch/fairseq/tree/main/examples/bart - """ - - def __init__(self, cfg, task, model): - super().__init__(cfg, task, [model]) - self.model = self.models[0] - - def encode( - self, sentence: str, *addl_sentences, no_separator=True - ) -> torch.LongTensor: - """ - BPE-encode a sentence (or multiple sentences). - - Every sequence begins with a beginning-of-sentence (``) symbol. - Every sentence ends with an end-of-sentence (``). - - Example (single sentence): ` a b c ` - Example (sentence pair): ` d e f 1 2 3 ` - - The BPE encoding follows GPT-2. One subtle detail is that the GPT-2 BPE - requires leading spaces. 
For example::
-
-            >>> bart.encode('Hello world').tolist()
-            [0, 31414, 232, 2]
-            >>> bart.encode(' world').tolist()
-            [0, 232, 2]
-            >>> bart.encode('world').tolist()
-            [0, 8331, 2]
-        """
-        tokens = self.bpe.encode(sentence)
-        if len(tokens.split(" ")) > min(self.max_positions) - 2:
-            tokens = " ".join(tokens.split(" ")[: min(self.max_positions) - 2])
-        bpe_sentence = "<s> " + tokens + " </s>"
-        for s in addl_sentences:
-            bpe_sentence += " </s>" if not no_separator else ""
-            bpe_sentence += " " + self.bpe.encode(s) + " </s>"
-        tokens = self.task.source_dictionary.encode_line(bpe_sentence, append_eos=False)
-        return tokens.long()
-
-    def decode(self, tokens: torch.LongTensor):
-        assert tokens.dim() == 1
-        tokens = tokens.cpu().numpy()
-        if tokens[0] == self.task.source_dictionary.bos():
-            tokens = tokens[1:]  # remove <s>
-        eos_mask = tokens == self.task.source_dictionary.eos()
-        doc_mask = eos_mask[1:] & eos_mask[:-1]
-        sentences = np.split(tokens, doc_mask.nonzero()[0] + 1)
-        sentences = [
-            self.bpe.decode(self.task.source_dictionary.string(s)) for s in sentences
-        ]
-        if len(sentences) == 1:
-            return sentences[0]
-        return sentences
-
-    def _build_sample(self, src_tokens: List[torch.LongTensor]):
-        # assert torch.is_tensor(src_tokens)
-        dataset = self.task.build_dataset_for_inference(
-            src_tokens,
-            [x.numel() for x in src_tokens],
-        )
-        sample = dataset.collater(dataset)
-        sample = utils.apply_to_sample(lambda tensor: tensor.to(self.device), sample)
-        return sample
-
-    def generate(
-        self,
-        tokenized_sentences: List[torch.LongTensor],
-        *args,
-        inference_step_args=None,
-        skip_invalid_size_inputs=False,
-        **kwargs
-    ) -> List[List[Dict[str, torch.Tensor]]]:
-        inference_step_args = inference_step_args or {}
-        if "prefix_tokens" in inference_step_args:
-            raise NotImplementedError("prefix generation not implemented for BART")
-        res = []
-        for batch in self._build_batches(tokenized_sentences, skip_invalid_size_inputs):
-            src_tokens = batch['net_input']['src_tokens']
-            inference_step_args["prefix_tokens"] = src_tokens.new_full(
-                (src_tokens.size(0), 1), fill_value=self.task.source_dictionary.bos()
-            ).to(device=self.device)
-            results = super().generate(
-                src_tokens,
-                *args,
-                inference_step_args=inference_step_args,
-                skip_invalid_size_inputs=skip_invalid_size_inputs,
-                **kwargs
-            )
-            for id, hypos in zip(batch['id'].tolist(), results):
-                res.append((id, hypos))
-        res = [hypos for _, hypos in sorted(res, key=lambda x: x[0])]
-        return res
-
-    def extract_features(
-        self, tokens: torch.LongTensor, return_all_hiddens: bool = False
-    ) -> torch.Tensor:
-        if tokens.dim() == 1:
-            tokens = tokens.unsqueeze(0)
-        if tokens.size(-1) > min(self.model.max_positions()):
-            raise ValueError(
-                "tokens exceeds maximum length: {} > {}".format(
-                    tokens.size(-1), self.model.max_positions()
-                )
-            )
-        tokens.to(device=self.device),
-        prev_output_tokens = tokens.clone()
-
-        prev_output_tokens[:, 0] = tokens.gather(
-            1,
-            (tokens.ne(self.task.source_dictionary.pad()).sum(dim=1) - 1).unsqueeze(-1),
-        ).squeeze()
-
-        prev_output_tokens[:, 1:] = tokens[:, :-1]
-        features, extra = self.model(
-            src_tokens=tokens,
-            src_lengths=None,
-            prev_output_tokens=prev_output_tokens,
-            features_only=True,
-            return_all_hiddens=return_all_hiddens,
-        )
-        if return_all_hiddens:
-            # convert from T x B x C -> B x T x C
-            inner_states = extra["inner_states"]
-            return [inner_state.transpose(0, 1) for inner_state in inner_states]
-        else:
-            return features  # just the last layer's features
-
-    def register_classification_head(
-        self, name: str,
num_classes: int = None, embedding_size: int = None, **kwargs
-    ):
-        self.model.register_classification_head(
-            name, num_classes=num_classes, embedding_size=embedding_size, **kwargs
-        )
-
-    def predict(self, head: str, tokens: torch.LongTensor, return_logits: bool = False):
-        if tokens.dim() == 1:
-            tokens = tokens.unsqueeze(0)
-        features = self.extract_features(tokens.to(device=self.device))
-        sentence_representation = features[
-            tokens.eq(self.task.source_dictionary.eos()), :
-        ].view(features.size(0), -1, features.size(-1))[:, -1, :]
-
-        logits = self.model.classification_heads[head](sentence_representation)
-        if return_logits:
-            return logits
-        return F.log_softmax(logits, dim=-1)
-
-    def fill_mask(
-        self,
-        masked_inputs: List[str],
-        topk: int = 5,
-        match_source_len: bool = True,
-        **generate_kwargs
-    ):
-        masked_token = '<mask>'
-        batch_tokens = []
-        for masked_input in masked_inputs:
-            assert masked_token in masked_input, \
-                "please add one {} token for the input".format(masked_token)
-
-            text_spans = masked_input.split(masked_token)
-            text_spans_bpe = (' {0} '.format(masked_token)).join(
-                [self.bpe.encode(text_span.rstrip()) for text_span in text_spans]
-            ).strip()
-            tokens = self.task.source_dictionary.encode_line(
-                '<s> ' + text_spans_bpe + ' </s>',
-                append_eos=False,
-                add_if_not_exist=False,
-            ).long()
-            batch_tokens.append(tokens)
-
-        # ensure beam size is at least as big as topk
-        generate_kwargs['beam'] = max(
-            topk,
-            generate_kwargs.get('beam', -1),
-        )
-        generate_kwargs['match_source_len'] = match_source_len
-        batch_hypos = self.generate(batch_tokens, **generate_kwargs)
-
-        return [
-            [(self.decode(hypo['tokens']), hypo['score']) for hypo in hypos[:topk]]
-            for hypos in batch_hypos
-        ]
diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/models/roberta/model_camembert.py b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/models/roberta/model_camembert.py
deleted file mode 100644
index 46447546fafb4a0a887b481022cac07631047c80..0000000000000000000000000000000000000000
--- a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/models/roberta/model_camembert.py
+++ /dev/null
@@ -1,50 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-""" -CamemBERT: a Tasty French Language Model -""" - -from fairseq.models import register_model - -from .hub_interface import RobertaHubInterface -from .model import RobertaModel - - -@register_model("camembert") -class CamembertModel(RobertaModel): - @classmethod - def hub_models(cls): - return { - "camembert": "http://dl.fbaipublicfiles.com/fairseq/models/camembert-base.tar.gz", - "camembert.v0": "http://dl.fbaipublicfiles.com/fairseq/models/camembert-base.tar.gz", - "camembert-base": "http://dl.fbaipublicfiles.com/fairseq/models/camembert-base.tar.gz", - "camembert-large": "http://dl.fbaipublicfiles.com/fairseq/models/camembert-large.tar.gz", - "camembert-base-ccnet": "http://dl.fbaipublicfiles.com/fairseq/models/camembert-base-ccnet.tar.gz", - "camembert-base-ccnet-4gb": "http://dl.fbaipublicfiles.com/fairseq/models/camembert-base-ccnet-4gb.tar.gz", - "camembert-base-wikipedia-4gb": "http://dl.fbaipublicfiles.com/fairseq/models/camembert-base-wikipedia-4gb.tar.gz", - "camembert-base-oscar-4gb": "http://dl.fbaipublicfiles.com/fairseq/models/camembert-base-oscar-4gb.tar.gz", - } - - @classmethod - def from_pretrained( - cls, - model_name_or_path, - checkpoint_file="model.pt", - data_name_or_path=".", - bpe="sentencepiece", - **kwargs - ): - from fairseq import hub_utils - - x = hub_utils.from_pretrained( - model_name_or_path, - checkpoint_file, - data_name_or_path, - archive_map=cls.hub_models(), - bpe=bpe, - load_checkpoint_heads=True, - **kwargs, - ) - return RobertaHubInterface(x["args"], x["task"], x["models"][0]) diff --git a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/models/text_to_speech/vocoder.py b/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/models/text_to_speech/vocoder.py deleted file mode 100644 index 65d9f9f06bfe7ffa3ed332bb41c4cdd65ac2b916..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/models/text_to_speech/vocoder.py +++ /dev/null @@ -1,197 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import logging -import json -from typing import Dict - -import numpy as np -import torch -from torch import nn -import torch.nn.functional as F - -from fairseq.data.audio.audio_utils import ( - get_window, get_fourier_basis, get_mel_filters, TTSSpectrogram -) -from fairseq.data.audio.speech_to_text_dataset import S2TDataConfig -from fairseq.models.text_to_speech.hifigan import Generator as HiFiGANModel - -logger = logging.getLogger(__name__) - - -class PseudoInverseMelScale(torch.nn.Module): - def __init__(self, n_stft, n_mels, sample_rate, f_min, f_max) -> None: - super(PseudoInverseMelScale, self).__init__() - self.n_mels = n_mels - basis = get_mel_filters( - sample_rate, (n_stft - 1) * 2, n_mels, f_min, f_max - ) - basis = torch.pinverse(basis) # F x F_mel - self.register_buffer('basis', basis) - - def forward(self, melspec: torch.Tensor) -> torch.Tensor: - # pack batch - shape = melspec.shape # B_1 x ... 
x B_K x F_mel x T - n_mels, time = shape[-2], shape[-1] - melspec = melspec.view(-1, n_mels, time) - - freq, _ = self.basis.size() # F x F_mel - assert self.n_mels == n_mels, (self.n_mels, n_mels) - specgram = self.basis.matmul(melspec).clamp(min=0) - - # unpack batch - specgram = specgram.view(shape[:-2] + (freq, time)) - return specgram - - -class GriffinLim(torch.nn.Module): - def __init__( - self, n_fft: int, win_length: int, hop_length: int, n_iter: int, - window_fn=torch.hann_window - ): - super(GriffinLim, self).__init__() - self.transform = TTSSpectrogram( - n_fft, win_length, hop_length, return_phase=True - ) - - basis = get_fourier_basis(n_fft) - basis = torch.pinverse(n_fft / hop_length * basis).T[:, None, :] - basis *= get_window(window_fn, n_fft, win_length) - self.register_buffer('basis', basis) - - self.n_fft = n_fft - self.win_length = win_length - self.hop_length = hop_length - self.n_iter = n_iter - - self.tiny = 1.1754944e-38 - - @classmethod - def get_window_sum_square( - cls, n_frames, hop_length, win_length, n_fft, - window_fn=torch.hann_window - ) -> torch.Tensor: - w_sq = get_window(window_fn, n_fft, win_length) ** 2 - n = n_fft + hop_length * (n_frames - 1) - x = torch.zeros(n, dtype=torch.float32) - for i in range(n_frames): - ofst = i * hop_length - x[ofst: min(n, ofst + n_fft)] += w_sq[:max(0, min(n_fft, n - ofst))] - return x - - def inverse(self, magnitude: torch.Tensor, phase) -> torch.Tensor: - x = torch.cat( - [magnitude * torch.cos(phase), magnitude * torch.sin(phase)], - dim=1 - ) - x = F.conv_transpose1d(x, self.basis, stride=self.hop_length) - win_sum_sq = self.get_window_sum_square( - magnitude.shape[-1], hop_length=self.hop_length, - win_length=self.win_length, n_fft=self.n_fft - ).to(magnitude.device) - # remove modulation effects - approx_nonzero_indices = win_sum_sq > self.tiny - x[:, :, approx_nonzero_indices] /= win_sum_sq[approx_nonzero_indices] - x *= self.n_fft / self.hop_length - x = x[:, :, self.n_fft // 2:] - x = x[:, :, :-self.n_fft // 2:] - return x - - def forward(self, specgram: torch.Tensor) -> torch.Tensor: - angles = np.angle(np.exp(2j * np.pi * np.random.rand(*specgram.shape))) - angles = torch.from_numpy(angles).to(specgram) - _specgram = specgram.view(-1, specgram.shape[-2], specgram.shape[-1]) - waveform = self.inverse(_specgram, angles).squeeze(1) - for _ in range(self.n_iter): - _, angles = self.transform(waveform) - waveform = self.inverse(_specgram, angles).squeeze(1) - return waveform.squeeze(0) - - -class GriffinLimVocoder(nn.Module): - def __init__(self, sample_rate, win_size, hop_size, n_fft, - n_mels, f_min, f_max, window_fn, - spec_bwd_max_iter=32, - fp16=False): - super().__init__() - self.inv_mel_transform = PseudoInverseMelScale( - n_stft=n_fft // 2 + 1, n_mels=n_mels, sample_rate=sample_rate, - f_min=f_min, f_max=f_max - ) - self.gl_transform = GriffinLim( - n_fft=n_fft, win_length=win_size, hop_length=hop_size, - window_fn=window_fn, n_iter=spec_bwd_max_iter - ) - if fp16: - self.half() - self.inv_mel_transform.half() - self.gl_transform.half() - else: - self.float() - self.inv_mel_transform.float() - self.gl_transform.float() - - def forward(self, x): - # x: (B x) T x D -> (B x) 1 x T - # NOTE: batched forward produces noisier waveform. 
recommend running - # one utterance at a time - self.eval() - x = x.exp().transpose(-1, -2) - x = self.inv_mel_transform(x) - x = self.gl_transform(x) - return x - - @classmethod - def from_data_cfg(cls, args, data_cfg: S2TDataConfig): - feat_cfg = data_cfg.config["features"] - window_fn = getattr(torch, feat_cfg["window_fn"] + "_window") - return cls( - sample_rate=feat_cfg["sample_rate"], - win_size=int(feat_cfg["win_len_t"] * feat_cfg["sample_rate"]), - hop_size=int(feat_cfg["hop_len_t"] * feat_cfg["sample_rate"]), - n_fft=feat_cfg["n_fft"], n_mels=feat_cfg["n_mels"], - f_min=feat_cfg["f_min"], f_max=feat_cfg["f_max"], - window_fn=window_fn, spec_bwd_max_iter=args.spec_bwd_max_iter, - fp16=args.fp16 - ) - - -class HiFiGANVocoder(nn.Module): - def __init__( - self, checkpoint_path: str, model_cfg: Dict[str, str], - fp16: bool = False - ) -> None: - super().__init__() - self.model = HiFiGANModel(model_cfg) - state_dict = torch.load(checkpoint_path) - self.model.load_state_dict(state_dict["generator"]) - if fp16: - self.model.half() - logger.info(f"loaded HiFiGAN checkpoint from {checkpoint_path}") - - def forward(self, x: torch.Tensor) -> torch.Tensor: - # (B x) T x D -> (B x) 1 x T - model = self.model.eval() - if len(x.shape) == 2: - return model(x.unsqueeze(0).transpose(1, 2)).detach().squeeze(0) - else: - return model(x.transpose(-1, -2)).detach() - - @classmethod - def from_data_cfg(cls, args, data_cfg: S2TDataConfig): - vocoder_cfg = data_cfg.vocoder - assert vocoder_cfg.get("type", "griffin_lim") == "hifigan" - with open(vocoder_cfg["config"]) as f: - model_cfg = json.load(f) - return cls(vocoder_cfg["checkpoint"], model_cfg, fp16=args.fp16) - - -def get_vocoder(args, data_cfg: S2TDataConfig): - if args.vocoder == "griffin_lim": - return GriffinLimVocoder.from_data_cfg(args, data_cfg) - elif args.vocoder == "hifigan": - return HiFiGANVocoder.from_data_cfg(args, data_cfg) - else: - raise ValueError("Unknown vocoder") diff --git a/spaces/OFA-Sys/OFA-Visual_Grounding/data/ofa_dataset.py b/spaces/OFA-Sys/OFA-Visual_Grounding/data/ofa_dataset.py deleted file mode 100644 index 02d856c28016b3a1c020fed483afe0aa797bf50f..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Visual_Grounding/data/ofa_dataset.py +++ /dev/null @@ -1,74 +0,0 @@ -import logging -import re -import torch.utils.data -from fairseq.data import FairseqDataset - -logger = logging.getLogger(__name__) - - -class OFADataset(FairseqDataset): - def __init__(self, split, dataset, bpe, src_dict, tgt_dict): - self.split = split - self.dataset = dataset - self.bpe = bpe - self.src_dict = src_dict - self.tgt_dict = tgt_dict - - self.bos = src_dict.bos() - self.eos = src_dict.eos() - self.pad = src_dict.pad() - self.bos_item = torch.LongTensor([self.bos]) - self.eos_item = torch.LongTensor([self.eos]) - - def __len__(self): - return len(self.dataset) - - def encode_text(self, text, length=None, append_bos=False, append_eos=False, use_bpe=True): - s = self.tgt_dict.encode_line( - line=self.bpe.encode(text) if use_bpe else text, - add_if_not_exist=False, - append_eos=False - ).long() - if length is not None: - s = s[:length] - if append_bos: - s = torch.cat([self.bos_item, s]) - if append_eos: - s = torch.cat([s, self.eos_item]) - return s - - def pre_question(self, question, max_ques_words): - question = question.lower().lstrip(",.!?*#:;~").replace('-', ' ').replace('/', ' ') - - question = re.sub( - r"\s{2,}", - ' ', - question, - ) - question = question.rstrip('\n') - question = question.strip(' ') - - # truncate 
question
-        question_words = question.split(' ')
-        if len(question_words) > max_ques_words:
-            question = ' '.join(question_words[:max_ques_words])
-
-        return question
-
-    def pre_caption(self, caption, max_words):
-        caption = caption.lower().lstrip(",.!?*#:;~").replace('-', ' ').replace('/', ' ').replace('<person>', 'person')
-
-        caption = re.sub(
-            r"\s{2,}",
-            ' ',
-            caption,
-        )
-        caption = caption.rstrip('\n')
-        caption = caption.strip(' ')
-
-        # truncate caption
-        caption_words = caption.split(' ')
-        if len(caption_words) > max_words:
-            caption = ' '.join(caption_words[:max_words])
-
-        return caption
diff --git a/spaces/OFA-Sys/OFA-vqa/fairseq/examples/wav2vec/unsupervised/w2vu_generate.py b/spaces/OFA-Sys/OFA-vqa/fairseq/examples/wav2vec/unsupervised/w2vu_generate.py
deleted file mode 100644
index 6177239dc75f6937d036462a5a2379aaee202e7d..0000000000000000000000000000000000000000
--- a/spaces/OFA-Sys/OFA-vqa/fairseq/examples/wav2vec/unsupervised/w2vu_generate.py
+++ /dev/null
@@ -1,707 +0,0 @@
-#!/usr/bin/env python3 -u
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-"""
-Run inference for pre-processed data with a trained model.
-"""
-
-import ast
-from collections import namedtuple
-from dataclasses import dataclass, field
-from enum import Enum, auto
-import hydra
-from hydra.core.config_store import ConfigStore
-import logging
-import math
-import os
-from omegaconf import OmegaConf
-from typing import Optional
-import sys
-
-import editdistance
-import torch
-
-from hydra.core.hydra_config import HydraConfig
-
-from fairseq import checkpoint_utils, progress_bar, tasks, utils
-from fairseq.data.data_utils import post_process
-from fairseq.dataclass.configs import FairseqDataclass, FairseqConfig
-from fairseq.logging.meters import StopwatchMeter
-from omegaconf import open_dict
-
-from examples.speech_recognition.kaldi.kaldi_decoder import KaldiDecoderConfig
-
-logging.root.setLevel(logging.INFO)
-logging.basicConfig(stream=sys.stdout, level=logging.INFO)
-logger = logging.getLogger(__name__)
-
-
-class DecoderType(Enum):
-    VITERBI = auto()
-    KENLM = auto()
-    FAIRSEQ = auto()
-    KALDI = auto()
-
-
-@dataclass
-class UnsupGenerateConfig(FairseqDataclass):
-    fairseq: FairseqConfig = FairseqConfig()
-    lm_weight: float = field(
-        default=2.0,
-        metadata={"help": "language model weight"},
-    )
-    w2l_decoder: DecoderType = field(
-        default=DecoderType.VITERBI,
-        metadata={"help": "type of decoder to use"},
-    )
-    kaldi_decoder_config: Optional[KaldiDecoderConfig] = None
-    lexicon: Optional[str] = field(
-        default=None,
-        metadata={
-            "help": "path to lexicon.
This is also used to 'phonemize' for unsupvised param tuning" - }, - ) - lm_model: Optional[str] = field( - default=None, - metadata={"help": "path to language model (kenlm or fairseq)"}, - ) - unit_lm: bool = field( - default=False, - metadata={"help": "whether to use unit lm"}, - ) - beam_threshold: float = field( - default=50.0, - metadata={"help": "beam score threshold"}, - ) - beam_size_token: float = field( - default=100.0, - metadata={"help": "max tokens per beam"}, - ) - beam: int = field( - default=5, - metadata={"help": "decoder beam size"}, - ) - nbest: int = field( - default=1, - metadata={"help": "number of results to return"}, - ) - word_score: float = field( - default=1.0, - metadata={"help": "word score to add at end of word"}, - ) - unk_weight: float = field( - default=-math.inf, - metadata={"help": "unknown token weight"}, - ) - sil_weight: float = field( - default=0.0, - metadata={"help": "silence token weight"}, - ) - targets: Optional[str] = field( - default=None, - metadata={"help": "extension of ground truth labels to compute UER"}, - ) - results_path: Optional[str] = field( - default=None, - metadata={"help": "where to store results"}, - ) - post_process: Optional[str] = field( - default=None, - metadata={"help": "how to post process results"}, - ) - vocab_usage_power: float = field( - default=2, - metadata={"help": "for unsupervised param tuning"}, - ) - - viterbi_transcript: Optional[str] = field( - default=None, - metadata={"help": "for unsupervised param tuning"}, - ) - min_lm_ppl: float = field( - default=0, - metadata={"help": "for unsupervised param tuning"}, - ) - min_vt_uer: float = field( - default=0, - metadata={"help": "for unsupervised param tuning"}, - ) - - blank_weight: float = field( - default=0, - metadata={"help": "value to add or set for blank emission"}, - ) - blank_mode: str = field( - default="set", - metadata={ - "help": "can be add or set, how to modify blank emission with blank weight" - }, - ) - sil_is_blank: bool = field( - default=False, - metadata={"help": "if true, token is same as blank token"}, - ) - - unsupervised_tuning: bool = field( - default=False, - metadata={ - "help": "if true, returns a score based on unsupervised param selection metric instead of UER" - }, - ) - is_ax: bool = field( - default=False, - metadata={ - "help": "if true, assumes we are using ax for tuning and returns a tuple for ax to consume" - }, - ) - - -def get_dataset_itr(cfg, task): - return task.get_batch_iterator( - dataset=task.dataset(cfg.fairseq.dataset.gen_subset), - max_tokens=cfg.fairseq.dataset.max_tokens, - max_sentences=cfg.fairseq.dataset.batch_size, - max_positions=(sys.maxsize, sys.maxsize), - ignore_invalid_inputs=cfg.fairseq.dataset.skip_invalid_size_inputs_valid_test, - required_batch_size_multiple=cfg.fairseq.dataset.required_batch_size_multiple, - num_shards=cfg.fairseq.dataset.num_shards, - shard_id=cfg.fairseq.dataset.shard_id, - num_workers=cfg.fairseq.dataset.num_workers, - data_buffer_size=cfg.fairseq.dataset.data_buffer_size, - ).next_epoch_itr(shuffle=False) - - -def process_predictions( - cfg: UnsupGenerateConfig, - hypos, - tgt_dict, - target_tokens, - res_files, -): - retval = [] - word_preds = [] - transcriptions = [] - dec_scores = [] - - for i, hypo in enumerate(hypos[: min(len(hypos), cfg.nbest)]): - if torch.is_tensor(hypo["tokens"]): - tokens = hypo["tokens"].int().cpu() - tokens = tokens[tokens >= tgt_dict.nspecial] - hyp_pieces = tgt_dict.string(tokens) - else: - hyp_pieces = " ".join(hypo["tokens"]) - - if "words" in 
hypo and len(hypo["words"]) > 0: - hyp_words = " ".join(hypo["words"]) - else: - hyp_words = post_process(hyp_pieces, cfg.post_process) - - to_write = {} - if res_files is not None: - to_write[res_files["hypo.units"]] = hyp_pieces - to_write[res_files["hypo.words"]] = hyp_words - - tgt_words = "" - if target_tokens is not None: - if isinstance(target_tokens, str): - tgt_pieces = tgt_words = target_tokens - else: - tgt_pieces = tgt_dict.string(target_tokens) - tgt_words = post_process(tgt_pieces, cfg.post_process) - - if res_files is not None: - to_write[res_files["ref.units"]] = tgt_pieces - to_write[res_files["ref.words"]] = tgt_words - - if not cfg.fairseq.common_eval.quiet: - logger.info(f"HYPO {i}:" + hyp_words) - if tgt_words: - logger.info("TARGET:" + tgt_words) - - if "am_score" in hypo and "lm_score" in hypo: - logger.info( - f"DECODER AM SCORE: {hypo['am_score']}, DECODER LM SCORE: {hypo['lm_score']}, DECODER SCORE: {hypo['score']}" - ) - elif "score" in hypo: - logger.info(f"DECODER SCORE: {hypo['score']}") - - logger.info("___________________") - - hyp_words_arr = hyp_words.split() - tgt_words_arr = tgt_words.split() - - retval.append( - ( - editdistance.eval(hyp_words_arr, tgt_words_arr), - len(hyp_words_arr), - len(tgt_words_arr), - hyp_pieces, - hyp_words, - ) - ) - word_preds.append(hyp_words_arr) - transcriptions.append(to_write) - dec_scores.append(-hypo.get("score", 0)) # negate cuz kaldi returns NLL - - if len(retval) > 1: - best = None - for r, t in zip(retval, transcriptions): - if best is None or r[0] < best[0][0]: - best = r, t - for dest, tran in best[1].items(): - print(tran, file=dest) - dest.flush() - return best[0] - - assert len(transcriptions) == 1 - for dest, tran in transcriptions[0].items(): - print(tran, file=dest) - - return retval[0] - - -def prepare_result_files(cfg: UnsupGenerateConfig): - def get_res_file(file_prefix): - if cfg.fairseq.dataset.num_shards > 1: - file_prefix = f"{cfg.fairseq.dataset.shard_id}_{file_prefix}" - path = os.path.join( - cfg.results_path, - "{}{}.txt".format( - cfg.fairseq.dataset.gen_subset, - file_prefix, - ), - ) - return open(path, "w", buffering=1) - - if not cfg.results_path: - return None - - return { - "hypo.words": get_res_file(""), - "hypo.units": get_res_file("_units"), - "ref.words": get_res_file("_ref"), - "ref.units": get_res_file("_ref_units"), - "hypo.nbest.words": get_res_file("_nbest_words"), - } - - -def optimize_models(cfg: UnsupGenerateConfig, use_cuda, models): - """Optimize ensemble for generation""" - for model in models: - model.eval() - if cfg.fairseq.common.fp16: - model.half() - if use_cuda: - model.cuda() - - -GenResult = namedtuple( - "GenResult", - [ - "count", - "errs_t", - "gen_timer", - "lengths_hyp_unit_t", - "lengths_hyp_t", - "lengths_t", - "lm_score_t", - "num_feats", - "num_sentences", - "num_symbols", - "vt_err_t", - "vt_length_t", - ], -) - - -def generate(cfg: UnsupGenerateConfig, models, saved_cfg, use_cuda): - task = tasks.setup_task(cfg.fairseq.task) - saved_cfg.task.labels = cfg.fairseq.task.labels - task.load_dataset(cfg.fairseq.dataset.gen_subset, task_cfg=saved_cfg.task) - # Set dictionary - tgt_dict = task.target_dictionary - logger.info( - "| {} {} {} examples".format( - cfg.fairseq.task.data, - cfg.fairseq.dataset.gen_subset, - len(task.dataset(cfg.fairseq.dataset.gen_subset)), - ) - ) - # Load dataset (possibly sharded) - itr = get_dataset_itr(cfg, task) - # Initialize generator - gen_timer = StopwatchMeter() - - def build_generator(cfg: UnsupGenerateConfig): - w2l_decoder 
= cfg.w2l_decoder - if w2l_decoder == DecoderType.VITERBI: - from examples.speech_recognition.w2l_decoder import W2lViterbiDecoder - - return W2lViterbiDecoder(cfg, task.target_dictionary) - elif w2l_decoder == DecoderType.KENLM: - from examples.speech_recognition.w2l_decoder import W2lKenLMDecoder - - return W2lKenLMDecoder(cfg, task.target_dictionary) - elif w2l_decoder == DecoderType.FAIRSEQ: - from examples.speech_recognition.w2l_decoder import W2lFairseqLMDecoder - - return W2lFairseqLMDecoder(cfg, task.target_dictionary) - elif w2l_decoder == DecoderType.KALDI: - from examples.speech_recognition.kaldi.kaldi_decoder import KaldiDecoder - - assert cfg.kaldi_decoder_config is not None - - return KaldiDecoder( - cfg.kaldi_decoder_config, - cfg.beam, - ) - else: - raise NotImplementedError( - "only wav2letter decoders with (viterbi, kenlm, fairseqlm) options are supported at the moment but found " - + str(w2l_decoder) - ) - - generator = build_generator(cfg) - - kenlm = None - fairseq_lm = None - if cfg.lm_model is not None: - import kenlm - - kenlm = kenlm.Model(cfg.lm_model) - - num_sentences = 0 - if cfg.results_path is not None and not os.path.exists(cfg.results_path): - os.makedirs(cfg.results_path) - - res_files = prepare_result_files(cfg) - errs_t = 0 - lengths_hyp_t = 0 - lengths_hyp_unit_t = 0 - lengths_t = 0 - count = 0 - num_feats = 0 - all_hyp_pieces = [] - all_hyp_words = [] - - num_symbols = ( - len([s for s in tgt_dict.symbols if not s.startswith("madeup")]) - - tgt_dict.nspecial - ) - targets = None - if cfg.targets is not None: - tgt_path = os.path.join( - cfg.fairseq.task.data, cfg.fairseq.dataset.gen_subset + "." + cfg.targets - ) - if os.path.exists(tgt_path): - with open(tgt_path, "r") as f: - targets = f.read().splitlines() - viterbi_transcript = None - if cfg.viterbi_transcript is not None and len(cfg.viterbi_transcript) > 0: - logger.info(f"loading viterbi transcript from {cfg.viterbi_transcript}") - with open(cfg.viterbi_transcript, "r") as vf: - viterbi_transcript = vf.readlines() - viterbi_transcript = [v.rstrip().split() for v in viterbi_transcript] - - gen_timer.start() - - start = 0 - end = len(itr) - - hypo_futures = None - if cfg.w2l_decoder == DecoderType.KALDI: - logger.info("Extracting features") - hypo_futures = [] - samples = [] - with progress_bar.build_progress_bar(cfg.fairseq.common, itr) as t: - for i, sample in enumerate(t): - if "net_input" not in sample or i < start or i >= end: - continue - if "padding_mask" not in sample["net_input"]: - sample["net_input"]["padding_mask"] = None - - hypos, num_feats = gen_hypos( - generator, models, num_feats, sample, task, use_cuda - ) - hypo_futures.append(hypos) - samples.append(sample) - itr = list(zip(hypo_futures, samples)) - start = 0 - end = len(itr) - logger.info("Finished extracting features") - - with progress_bar.build_progress_bar(cfg.fairseq.common, itr) as t: - for i, sample in enumerate(t): - if i < start or i >= end: - continue - - if hypo_futures is not None: - hypos, sample = sample - hypos = [h.result() for h in hypos] - else: - if "net_input" not in sample: - continue - - hypos, num_feats = gen_hypos( - generator, models, num_feats, sample, task, use_cuda - ) - - for i, sample_id in enumerate(sample["id"].tolist()): - if targets is not None: - target_tokens = targets[sample_id] - elif "target" in sample or "target_label" in sample: - toks = ( - sample["target"][i, :] - if "target_label" not in sample - else sample["target_label"][i, :] - ) - - target_tokens = utils.strip_pad(toks, 
tgt_dict.pad()).int().cpu() - else: - target_tokens = None - - # Process top predictions - ( - errs, - length_hyp, - length, - hyp_pieces, - hyp_words, - ) = process_predictions( - cfg, - hypos[i], - tgt_dict, - target_tokens, - res_files, - ) - errs_t += errs - lengths_hyp_t += length_hyp - lengths_hyp_unit_t += ( - len(hyp_pieces) if len(hyp_pieces) > 0 else len(hyp_words) - ) - lengths_t += length - count += 1 - all_hyp_pieces.append(hyp_pieces) - all_hyp_words.append(hyp_words) - - num_sentences += ( - sample["nsentences"] if "nsentences" in sample else sample["id"].numel() - ) - - lm_score_sum = 0 - if kenlm is not None: - - if cfg.unit_lm: - lm_score_sum = sum(kenlm.score(w) for w in all_hyp_pieces) - else: - lm_score_sum = sum(kenlm.score(w) for w in all_hyp_words) - elif fairseq_lm is not None: - lm_score_sum = sum(fairseq_lm.score([h.split() for h in all_hyp_words])[0]) - - vt_err_t = 0 - vt_length_t = 0 - if viterbi_transcript is not None: - unit_hyps = [] - if cfg.targets is not None and cfg.lexicon is not None: - lex = {} - with open(cfg.lexicon, "r") as lf: - for line in lf: - items = line.rstrip().split() - lex[items[0]] = items[1:] - for h in all_hyp_pieces: - hyp_ws = [] - for w in h.split(): - assert w in lex, w - hyp_ws.extend(lex[w]) - unit_hyps.append(hyp_ws) - - else: - unit_hyps.extend([h.split() for h in all_hyp_words]) - - vt_err_t = sum( - editdistance.eval(vt, h) for vt, h in zip(viterbi_transcript, unit_hyps) - ) - - vt_length_t = sum(len(h) for h in viterbi_transcript) - - if res_files is not None: - for r in res_files.values(): - r.close() - - gen_timer.stop(lengths_hyp_t) - - return GenResult( - count, - errs_t, - gen_timer, - lengths_hyp_unit_t, - lengths_hyp_t, - lengths_t, - lm_score_sum, - num_feats, - num_sentences, - num_symbols, - vt_err_t, - vt_length_t, - ) - - -def gen_hypos(generator, models, num_feats, sample, task, use_cuda): - sample = utils.move_to_cuda(sample) if use_cuda else sample - - if "features" in sample["net_input"]: - sample["net_input"]["dense_x_only"] = True - num_feats += ( - sample["net_input"]["features"].shape[0] - * sample["net_input"]["features"].shape[1] - ) - hypos = task.inference_step(generator, models, sample, None) - return hypos, num_feats - - -def main(cfg: UnsupGenerateConfig, model=None): - if ( - cfg.fairseq.dataset.max_tokens is None - and cfg.fairseq.dataset.batch_size is None - ): - cfg.fairseq.dataset.max_tokens = 1024000 - - use_cuda = torch.cuda.is_available() and not cfg.fairseq.common.cpu - - task = tasks.setup_task(cfg.fairseq.task) - - overrides = ast.literal_eval(cfg.fairseq.common_eval.model_overrides) - - if cfg.fairseq.task._name == "unpaired_audio_text": - overrides["model"] = { - "blank_weight": cfg.blank_weight, - "blank_mode": cfg.blank_mode, - "blank_is_sil": cfg.sil_is_blank, - "no_softmax": True, - "segmentation": { - "type": "NONE", - }, - } - else: - overrides["model"] = { - "blank_weight": cfg.blank_weight, - "blank_mode": cfg.blank_mode, - } - - if model is None: - # Load ensemble - logger.info("| loading model(s) from {}".format(cfg.fairseq.common_eval.path)) - models, saved_cfg = checkpoint_utils.load_model_ensemble( - cfg.fairseq.common_eval.path.split("\\"), - arg_overrides=overrides, - task=task, - suffix=cfg.fairseq.checkpoint.checkpoint_suffix, - strict=(cfg.fairseq.checkpoint.checkpoint_shard_count == 1), - num_shards=cfg.fairseq.checkpoint.checkpoint_shard_count, - ) - optimize_models(cfg, use_cuda, models) - else: - models = [model] - saved_cfg = cfg.fairseq - - with 
open_dict(saved_cfg.task): - saved_cfg.task.shuffle = False - saved_cfg.task.sort_by_length = False - - gen_result = generate(cfg, models, saved_cfg, use_cuda) - - wer = None - if gen_result.lengths_t > 0: - wer = gen_result.errs_t * 100.0 / gen_result.lengths_t - logger.info(f"WER: {wer}") - - lm_ppl = float("inf") - - if gen_result.lm_score_t != 0 and gen_result.lengths_hyp_t > 0: - hyp_len = gen_result.lengths_hyp_t - lm_ppl = math.pow( - 10, -gen_result.lm_score_t / (hyp_len + gen_result.num_sentences) - ) - logger.info(f"LM PPL: {lm_ppl}") - - logger.info( - "| Processed {} sentences ({} tokens) in {:.1f}s ({:.2f}" - " sentences/s, {:.2f} tokens/s)".format( - gen_result.num_sentences, - gen_result.gen_timer.n, - gen_result.gen_timer.sum, - gen_result.num_sentences / gen_result.gen_timer.sum, - 1.0 / gen_result.gen_timer.avg, - ) - ) - - vt_diff = None - if gen_result.vt_length_t > 0: - vt_diff = gen_result.vt_err_t / gen_result.vt_length_t - vt_diff = max(cfg.min_vt_uer, vt_diff) - - lm_ppl = max(cfg.min_lm_ppl, lm_ppl) - - if not cfg.unsupervised_tuning == 0: - weighted_score = wer - else: - weighted_score = math.log(lm_ppl) * (vt_diff or 1.0) - - res = ( - f"| Generate {cfg.fairseq.dataset.gen_subset} with beam={cfg.beam}, " - f"lm_weight={cfg.kaldi_decoder_config.acoustic_scale if cfg.kaldi_decoder_config else cfg.lm_weight}, " - f"word_score={cfg.word_score}, sil_weight={cfg.sil_weight}, blank_weight={cfg.blank_weight}, " - f"WER: {wer}, LM_PPL: {lm_ppl}, num feats: {gen_result.num_feats}, " - f"length: {gen_result.lengths_hyp_t}, UER to viterbi: {(vt_diff or 0) * 100}, score: {weighted_score}" - ) - - logger.info(res) - # print(res) - - return task, weighted_score - - -@hydra.main( - config_path=os.path.join("../../..", "fairseq", "config"), config_name="config" -) -def hydra_main(cfg): - with open_dict(cfg): - # make hydra logging work with ddp (see # see https://github.com/facebookresearch/hydra/issues/1126) - cfg.job_logging_cfg = OmegaConf.to_container( - HydraConfig.get().job_logging, resolve=True - ) - - cfg = OmegaConf.create( - OmegaConf.to_container(cfg, resolve=False, enum_to_str=False) - ) - OmegaConf.set_struct(cfg, True) - logger.info(cfg) - - utils.import_user_module(cfg.fairseq.common) - - _, score = main(cfg) - - if cfg.is_ax: - return score, None - return score - - -def cli_main(): - try: - from hydra._internal.utils import get_args - - cfg_name = get_args().config_name or "config" - except: - logger.warning("Failed to get config name from hydra args") - cfg_name = "config" - - cs = ConfigStore.instance() - cs.store(name=cfg_name, node=UnsupGenerateConfig) - hydra_main() - - -if __name__ == "__main__": - cli_main() diff --git a/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/modules/quantization/scalar/modules/qact.py b/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/modules/quantization/scalar/modules/qact.py deleted file mode 100644 index c5dd1d63362423ab0cfc381dddabb547a3b44c72..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/modules/quantization/scalar/modules/qact.py +++ /dev/null @@ -1,88 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import torch - -from ..ops import emulate_int - - -class ActivationQuantizer: - """ - Fake scalar quantization of the activations using a forward hook. - - Args: - - module. 
a nn.Module for which we quantize the *post-activations* - - p: proportion of activations to quantize, set by default to 1 - - update_step: to recompute quantization parameters - - bits: number of bits for quantization - - method: choose among {"tensor", "histogram", "channel"} - - clamp_threshold: to prevent gradients overflow - - Remarks: - - Parameters scale and zero_point are recomputed every update_step - forward pass to reduce the overhead - - For the list of quantization methods and number of bits, see ops.py - - To remove the hook from the module, simply call self.handle.remove() - - At test time, the activations are fully quantized - - We use the straight-through estimator so that the gradients - back-propagate nicely in the network, this is implemented with - the detach() trick - - The activations are hard-clamped in [-clamp_threshold, clamp_threshold] - to prevent overflow during the backward pass - """ - - def __init__( - self, - module, - p=1, - update_step=1000, - bits=8, - method="histogram", - clamp_threshold=5, - ): - self.module = module - self.p = p - self.update_step = update_step - self.counter = 0 - self.bits = bits - self.method = method - self.clamp_threshold = clamp_threshold - self.handle = None - self.register_hook() - - def register_hook(self): - # forward hook - def quantize_hook(module, x, y): - - # update parameters every 1000 iterations - if self.counter % self.update_step == 0: - self.scale = None - self.zero_point = None - self.counter += 1 - - # train with QuantNoise and evaluate the fully quantized network - p = self.p if self.module.training else 1 - - # quantize activations - y_q, self.scale, self.zero_point = emulate_int( - y.detach(), - bits=self.bits, - method=self.method, - scale=self.scale, - zero_point=self.zero_point, - ) - - # mask to apply noise - mask = torch.zeros_like(y) - mask.bernoulli_(1 - p) - noise = (y_q - y).masked_fill(mask.bool(), 0) - - # using straight-through estimator (STE) - clamp_low = -self.scale * self.zero_point - clamp_high = self.scale * (2 ** self.bits - 1 - self.zero_point) - return torch.clamp(y, clamp_low.item(), clamp_high.item()) + noise.detach() - - # register hook - self.handle = self.module.register_forward_hook(quantize_hook) diff --git a/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/detectron2/data/build.py b/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/detectron2/data/build.py deleted file mode 100644 index a31369d1693f86154a7a9249fc043d49f3e9f390..0000000000000000000000000000000000000000 --- a/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/detectron2/data/build.py +++ /dev/null @@ -1,542 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. 
-import itertools -import logging -import numpy as np -import operator -import pickle -from typing import Any, Callable, Dict, List, Optional, Union -import torch -import torch.utils.data as torchdata -from tabulate import tabulate -from termcolor import colored - -from detectron2.config import configurable -from detectron2.structures import BoxMode -from detectron2.utils.comm import get_world_size -from detectron2.utils.env import seed_all_rng -from detectron2.utils.file_io import PathManager -from detectron2.utils.logger import _log_api_usage, log_first_n - -from .catalog import DatasetCatalog, MetadataCatalog -from .common import AspectRatioGroupedDataset, DatasetFromList, MapDataset, ToIterableDataset -from .dataset_mapper import DatasetMapper -from .detection_utils import check_metadata_consistency -from .samplers import ( - InferenceSampler, - RandomSubsetTrainingSampler, - RepeatFactorTrainingSampler, - TrainingSampler, -) - -""" -This file contains the default logic to build a dataloader for training or testing. -""" - -__all__ = [ - "build_batch_data_loader", - "build_detection_train_loader", - "build_detection_test_loader", - "get_detection_dataset_dicts", - "load_proposals_into_dataset", - "print_instances_class_histogram", -] - - -def filter_images_with_only_crowd_annotations(dataset_dicts): - """ - Filter out images with none annotations or only crowd annotations - (i.e., images without non-crowd annotations). - A common training-time preprocessing on COCO dataset. - - Args: - dataset_dicts (list[dict]): annotations in Detectron2 Dataset format. - - Returns: - list[dict]: the same format, but filtered. - """ - num_before = len(dataset_dicts) - - def valid(anns): - for ann in anns: - if ann.get("iscrowd", 0) == 0: - return True - return False - - dataset_dicts = [x for x in dataset_dicts if valid(x["annotations"])] - num_after = len(dataset_dicts) - logger = logging.getLogger(__name__) - logger.info( - "Removed {} images with no usable annotations. {} images left.".format( - num_before - num_after, num_after - ) - ) - return dataset_dicts - - -def filter_images_with_few_keypoints(dataset_dicts, min_keypoints_per_image): - """ - Filter out images with too few number of keypoints. - - Args: - dataset_dicts (list[dict]): annotations in Detectron2 Dataset format. - - Returns: - list[dict]: the same format as dataset_dicts, but filtered. - """ - num_before = len(dataset_dicts) - - def visible_keypoints_in_image(dic): - # Each keypoints field has the format [x1, y1, v1, ...], where v is visibility - annotations = dic["annotations"] - return sum( - (np.array(ann["keypoints"][2::3]) > 0).sum() - for ann in annotations - if "keypoints" in ann - ) - - dataset_dicts = [ - x for x in dataset_dicts if visible_keypoints_in_image(x) >= min_keypoints_per_image - ] - num_after = len(dataset_dicts) - logger = logging.getLogger(__name__) - logger.info( - "Removed {} images with fewer than {} keypoints.".format( - num_before - num_after, min_keypoints_per_image - ) - ) - return dataset_dicts - - -def load_proposals_into_dataset(dataset_dicts, proposal_file): - """ - Load precomputed object proposals into the dataset. - - The proposal file should be a pickled dict with the following keys: - - - "ids": list[int] or list[str], the image ids - - "boxes": list[np.ndarray], each is an Nx4 array of boxes corresponding to the image id - - "objectness_logits": list[np.ndarray], each is an N sized array of objectness scores - corresponding to the boxes. - - "bbox_mode": the BoxMode of the boxes array. 
Defaults to ``BoxMode.XYXY_ABS``. - - Args: - dataset_dicts (list[dict]): annotations in Detectron2 Dataset format. - proposal_file (str): file path of pre-computed proposals, in pkl format. - - Returns: - list[dict]: the same format as dataset_dicts, but added proposal field. - """ - logger = logging.getLogger(__name__) - logger.info("Loading proposals from: {}".format(proposal_file)) - - with PathManager.open(proposal_file, "rb") as f: - proposals = pickle.load(f, encoding="latin1") - - # Rename the key names in D1 proposal files - rename_keys = {"indexes": "ids", "scores": "objectness_logits"} - for key in rename_keys: - if key in proposals: - proposals[rename_keys[key]] = proposals.pop(key) - - # Fetch the indexes of all proposals that are in the dataset - # Convert image_id to str since they could be int. - img_ids = set({str(record["image_id"]) for record in dataset_dicts}) - id_to_index = {str(id): i for i, id in enumerate(proposals["ids"]) if str(id) in img_ids} - - # Assuming default bbox_mode of precomputed proposals are 'XYXY_ABS' - bbox_mode = BoxMode(proposals["bbox_mode"]) if "bbox_mode" in proposals else BoxMode.XYXY_ABS - - for record in dataset_dicts: - # Get the index of the proposal - i = id_to_index[str(record["image_id"])] - - boxes = proposals["boxes"][i] - objectness_logits = proposals["objectness_logits"][i] - # Sort the proposals in descending order of the scores - inds = objectness_logits.argsort()[::-1] - record["proposal_boxes"] = boxes[inds] - record["proposal_objectness_logits"] = objectness_logits[inds] - record["proposal_bbox_mode"] = bbox_mode - - return dataset_dicts - - -def print_instances_class_histogram(dataset_dicts, class_names): - """ - Args: - dataset_dicts (list[dict]): list of dataset dicts. - class_names (list[str]): list of class names (zero-indexed). - """ - num_classes = len(class_names) - hist_bins = np.arange(num_classes + 1) - histogram = np.zeros((num_classes,), dtype=np.int) - for entry in dataset_dicts: - annos = entry["annotations"] - classes = np.asarray( - [x["category_id"] for x in annos if not x.get("iscrowd", 0)], dtype=np.int - ) - if len(classes): - assert classes.min() >= 0, f"Got an invalid category_id={classes.min()}" - assert ( - classes.max() < num_classes - ), f"Got an invalid category_id={classes.max()} for a dataset of {num_classes} classes" - histogram += np.histogram(classes, bins=hist_bins)[0] - - N_COLS = min(6, len(class_names) * 2) - - def short_name(x): - # make long class names shorter. useful for lvis - if len(x) > 13: - return x[:11] + ".." - return x - - data = list( - itertools.chain(*[[short_name(class_names[i]), int(v)] for i, v in enumerate(histogram)]) - ) - total_num_instances = sum(data[1::2]) - data.extend([None] * (N_COLS - (len(data) % N_COLS))) - if num_classes > 1: - data.extend(["total", total_num_instances]) - data = itertools.zip_longest(*[data[i::N_COLS] for i in range(N_COLS)]) - table = tabulate( - data, - headers=["category", "#instances"] * (N_COLS // 2), - tablefmt="pipe", - numalign="left", - stralign="center", - ) - log_first_n( - logging.INFO, - "Distribution of instances among all {} categories:\n".format(num_classes) - + colored(table, "cyan"), - key="message", - ) - - -def get_detection_dataset_dicts( - names, - filter_empty=True, - min_keypoints=0, - proposal_files=None, - check_consistency=True, -): - """ - Load and prepare dataset dicts for instance detection/segmentation and semantic segmentation. 
- - Args: - names (str or list[str]): a dataset name or a list of dataset names - filter_empty (bool): whether to filter out images without instance annotations - min_keypoints (int): filter out images with fewer keypoints than - `min_keypoints`. Set to 0 to do nothing. - proposal_files (list[str]): if given, a list of object proposal files - that match each dataset in `names`. - check_consistency (bool): whether to check if datasets have consistent metadata. - - Returns: - list[dict]: a list of dicts following the standard dataset dict format. - """ - if isinstance(names, str): - names = [names] - assert len(names), names - dataset_dicts = [DatasetCatalog.get(dataset_name) for dataset_name in names] - for dataset_name, dicts in zip(names, dataset_dicts): - assert len(dicts), "Dataset '{}' is empty!".format(dataset_name) - - if proposal_files is not None: - assert len(names) == len(proposal_files) - # load precomputed proposals from proposal files - dataset_dicts = [ - load_proposals_into_dataset(dataset_i_dicts, proposal_file) - for dataset_i_dicts, proposal_file in zip(dataset_dicts, proposal_files) - ] - - if isinstance(dataset_dicts[0], torchdata.Dataset): - return torchdata.ConcatDataset(dataset_dicts) - - dataset_dicts = list(itertools.chain.from_iterable(dataset_dicts)) - - has_instances = "annotations" in dataset_dicts[0] - if filter_empty and has_instances: - dataset_dicts = filter_images_with_only_crowd_annotations(dataset_dicts) - if min_keypoints > 0 and has_instances: - dataset_dicts = filter_images_with_few_keypoints(dataset_dicts, min_keypoints) - - if check_consistency and has_instances: - try: - class_names = MetadataCatalog.get(names[0]).thing_classes - check_metadata_consistency("thing_classes", names) - print_instances_class_histogram(dataset_dicts, class_names) - except AttributeError: # class names are not available for this dataset - pass - - assert len(dataset_dicts), "No valid data found in {}.".format(",".join(names)) - return dataset_dicts - - -def build_batch_data_loader( - dataset, - sampler, - total_batch_size, - *, - aspect_ratio_grouping=False, - num_workers=0, - collate_fn=None, -): - """ - Build a batched dataloader. The main differences from `torch.utils.data.DataLoader` are: - 1. support aspect ratio grouping options - 2. use no "batch collation", because this is common for detection training - - Args: - dataset (torch.utils.data.Dataset): a pytorch map-style or iterable dataset. - sampler (torch.utils.data.sampler.Sampler or None): a sampler that produces indices. - Must be provided iff. ``dataset`` is a map-style dataset. - total_batch_size, aspect_ratio_grouping, num_workers, collate_fn: see - :func:`build_detection_train_loader`. - - Returns: - iterable[list]. Length of each list is the batch size of the current - GPU. Each element in the list comes from the dataset. 
- """ - world_size = get_world_size() - assert ( - total_batch_size > 0 and total_batch_size % world_size == 0 - ), "Total batch size ({}) must be divisible by the number of gpus ({}).".format( - total_batch_size, world_size - ) - batch_size = total_batch_size // world_size - - if isinstance(dataset, torchdata.IterableDataset): - assert sampler is None, "sampler must be None if dataset is IterableDataset" - else: - dataset = ToIterableDataset(dataset, sampler) - - if aspect_ratio_grouping: - data_loader = torchdata.DataLoader( - dataset, - num_workers=num_workers, - collate_fn=operator.itemgetter(0), # don't batch, but yield individual elements - worker_init_fn=worker_init_reset_seed, - ) # yield individual mapped dict - data_loader = AspectRatioGroupedDataset(data_loader, batch_size) - if collate_fn is None: - return data_loader - return MapDataset(data_loader, collate_fn) - else: - return torchdata.DataLoader( - dataset, - batch_size=batch_size, - drop_last=True, - num_workers=num_workers, - collate_fn=trivial_batch_collator if collate_fn is None else collate_fn, - worker_init_fn=worker_init_reset_seed, - ) - - -def _train_loader_from_config(cfg, mapper=None, *, dataset=None, sampler=None): - if dataset is None: - dataset = get_detection_dataset_dicts( - cfg.DATASETS.TRAIN, - filter_empty=cfg.DATALOADER.FILTER_EMPTY_ANNOTATIONS, - min_keypoints=cfg.MODEL.ROI_KEYPOINT_HEAD.MIN_KEYPOINTS_PER_IMAGE - if cfg.MODEL.KEYPOINT_ON - else 0, - proposal_files=cfg.DATASETS.PROPOSAL_FILES_TRAIN if cfg.MODEL.LOAD_PROPOSALS else None, - ) - _log_api_usage("dataset." + cfg.DATASETS.TRAIN[0]) - - if mapper is None: - mapper = DatasetMapper(cfg, True) - - if sampler is None: - sampler_name = cfg.DATALOADER.SAMPLER_TRAIN - logger = logging.getLogger(__name__) - logger.info("Using training sampler {}".format(sampler_name)) - if sampler_name == "TrainingSampler": - sampler = TrainingSampler(len(dataset)) - elif sampler_name == "RepeatFactorTrainingSampler": - repeat_factors = RepeatFactorTrainingSampler.repeat_factors_from_category_frequency( - dataset, cfg.DATALOADER.REPEAT_THRESHOLD - ) - sampler = RepeatFactorTrainingSampler(repeat_factors) - elif sampler_name == "RandomSubsetTrainingSampler": - sampler = RandomSubsetTrainingSampler(len(dataset), cfg.DATALOADER.RANDOM_SUBSET_RATIO) - else: - raise ValueError("Unknown training sampler: {}".format(sampler_name)) - - return { - "dataset": dataset, - "sampler": sampler, - "mapper": mapper, - "total_batch_size": cfg.SOLVER.IMS_PER_BATCH, - "aspect_ratio_grouping": cfg.DATALOADER.ASPECT_RATIO_GROUPING, - "num_workers": cfg.DATALOADER.NUM_WORKERS, - } - - -@configurable(from_config=_train_loader_from_config) -def build_detection_train_loader( - dataset, - *, - mapper, - sampler=None, - total_batch_size, - aspect_ratio_grouping=True, - num_workers=0, - collate_fn=None, -): - """ - Build a dataloader for object detection with some default features. - - Args: - dataset (list or torch.utils.data.Dataset): a list of dataset dicts, - or a pytorch dataset (either map-style or iterable). It can be obtained - by using :func:`DatasetCatalog.get` or :func:`get_detection_dataset_dicts`. - mapper (callable): a callable which takes a sample (dict) from dataset and - returns the format to be consumed by the model. - When using cfg, the default choice is ``DatasetMapper(cfg, is_train=True)``. - sampler (torch.utils.data.sampler.Sampler or None): a sampler that produces - indices to be applied on ``dataset``. 
- If ``dataset`` is map-style, the default sampler is a :class:`TrainingSampler`, - which coordinates an infinite random shuffle sequence across all workers. - Sampler must be None if ``dataset`` is iterable. - total_batch_size (int): total batch size across all workers. - aspect_ratio_grouping (bool): whether to group images with similar - aspect ratio for efficiency. When enabled, it requires each - element in dataset be a dict with keys "width" and "height". - num_workers (int): number of parallel data loading workers - collate_fn: a function that determines how to do batching, same as the argument of - `torch.utils.data.DataLoader`. Defaults to do no collation and return a list of - data. No collation is OK for small batch size and simple data structures. - If your batch size is large and each sample contains too many small tensors, - it's more efficient to collate them in data loader. - - Returns: - torch.utils.data.DataLoader: - a dataloader. Each output from it is a ``list[mapped_element]`` of length - ``total_batch_size / num_workers``, where ``mapped_element`` is produced - by the ``mapper``. - """ - if isinstance(dataset, list): - dataset = DatasetFromList(dataset, copy=False) - if mapper is not None: - dataset = MapDataset(dataset, mapper) - - if isinstance(dataset, torchdata.IterableDataset): - assert sampler is None, "sampler must be None if dataset is IterableDataset" - else: - if sampler is None: - sampler = TrainingSampler(len(dataset)) - assert isinstance(sampler, torchdata.Sampler), f"Expect a Sampler but got {type(sampler)}" - return build_batch_data_loader( - dataset, - sampler, - total_batch_size, - aspect_ratio_grouping=aspect_ratio_grouping, - num_workers=num_workers, - collate_fn=collate_fn, - ) - - -def _test_loader_from_config(cfg, dataset_name, mapper=None): - """ - Uses the given `dataset_name` argument (instead of the names in cfg), because the - standard practice is to evaluate each test set individually (not combining them). - """ - if isinstance(dataset_name, str): - dataset_name = [dataset_name] - - dataset = get_detection_dataset_dicts( - dataset_name, - filter_empty=False, - proposal_files=[ - cfg.DATASETS.PROPOSAL_FILES_TEST[list(cfg.DATASETS.TEST).index(x)] for x in dataset_name - ] - if cfg.MODEL.LOAD_PROPOSALS - else None, - ) - if mapper is None: - mapper = DatasetMapper(cfg, False) - return { - "dataset": dataset, - "mapper": mapper, - "num_workers": cfg.DATALOADER.NUM_WORKERS, - "sampler": InferenceSampler(len(dataset)), - } - - -@configurable(from_config=_test_loader_from_config) -def build_detection_test_loader( - dataset: Union[List[Any], torchdata.Dataset], - *, - mapper: Callable[[Dict[str, Any]], Any], - sampler: Optional[torchdata.Sampler] = None, - batch_size: int = 1, - num_workers: int = 0, - collate_fn: Optional[Callable[[List[Any]], Any]] = None, -) -> torchdata.DataLoader: - """ - Similar to `build_detection_train_loader`, with default batch size = 1, - and sampler = :class:`InferenceSampler`. This sampler coordinates all workers - to produce the exact set of all samples. - - Args: - dataset: a list of dataset dicts, - or a pytorch dataset (either map-style or iterable). They can be obtained - by using :func:`DatasetCatalog.get` or :func:`get_detection_dataset_dicts`. - mapper: a callable which takes a sample (dict) from dataset - and returns the format to be consumed by the model. - When using cfg, the default choice is ``DatasetMapper(cfg, is_train=False)``. - sampler: a sampler that produces - indices to be applied on ``dataset``. 
Default to :class:`InferenceSampler`, - which splits the dataset across all workers. Sampler must be None - if `dataset` is iterable. - batch_size: the batch size of the data loader to be created. - Default to 1 image per worker since this is the standard when reporting - inference time in papers. - num_workers: number of parallel data loading workers - collate_fn: same as the argument of `torch.utils.data.DataLoader`. - Defaults to do no collation and return a list of data. - - Returns: - DataLoader: a torch DataLoader, that loads the given detection - dataset, with test-time transformation and batching. - - Examples: - :: - data_loader = build_detection_test_loader( - DatasetRegistry.get("my_test"), - mapper=DatasetMapper(...)) - - # or, instantiate with a CfgNode: - data_loader = build_detection_test_loader(cfg, "my_test") - """ - if isinstance(dataset, list): - dataset = DatasetFromList(dataset, copy=False) - if mapper is not None: - dataset = MapDataset(dataset, mapper) - if isinstance(dataset, torchdata.IterableDataset): - assert sampler is None, "sampler must be None if dataset is IterableDataset" - else: - if sampler is None: - sampler = InferenceSampler(len(dataset)) - return torchdata.DataLoader( - dataset, - batch_size=batch_size, - sampler=sampler, - drop_last=False, - num_workers=num_workers, - collate_fn=trivial_batch_collator if collate_fn is None else collate_fn, - ) - - -def trivial_batch_collator(batch): - """ - A batch collator that does nothing. - """ - return batch - - -def worker_init_reset_seed(worker_id): - initial_seed = torch.initial_seed() % 2 ** 31 - seed_all_rng(initial_seed + worker_id) diff --git a/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/tests/config/root_cfg.py b/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/tests/config/root_cfg.py deleted file mode 100644 index 33d1d4bd2d9ddf31d55c655c49d13a8b7ac7b376..0000000000000000000000000000000000000000 --- a/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/tests/config/root_cfg.py +++ /dev/null @@ -1,14 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -from itertools import count - -from detectron2.config import LazyCall as L - -from .dir1.dir1_a import dir1a_dict, dir1a_str - -dir1a_dict.a = "modified" - -# modification above won't affect future imports -from .dir1.dir1_b import dir1b_dict, dir1b_str - - -lazyobj = L(count)(x=dir1a_str, y=dir1b_str) diff --git a/spaces/PAIR/PAIR-Diffusion/annotator/OneFormer/oneformer/modeling/transformer_decoder/transformer.py b/spaces/PAIR/PAIR-Diffusion/annotator/OneFormer/oneformer/modeling/transformer_decoder/transformer.py deleted file mode 100644 index cd07525673b9b1165e1fdd0c9990a8f29c84f199..0000000000000000000000000000000000000000 --- a/spaces/PAIR/PAIR-Diffusion/annotator/OneFormer/oneformer/modeling/transformer_decoder/transformer.py +++ /dev/null @@ -1,376 +0,0 @@ -# ------------------------------------------------------------------------------ -# Reference: https://github.com/facebookresearch/Mask2Former/blob/main/mask2former/modeling/transformer_decoder/transformer.py -# Modified by Jitesh Jain (https://github.com/praeclarumjj3) -# ------------------------------------------------------------------------------ - -""" -Transformer class. 
- -Copy-paste from torch.nn.Transformer with modifications: - * positional encodings are passed in MHattention - * extra LN at the end of encoder is removed - * decoder returns a stack of activations from all decoding layers -""" -import copy -from typing import List, Optional - -import torch -import torch.nn.functional as F -from torch import Tensor, nn - - -class Transformer(nn.Module): - def __init__( - self, - d_model=512, - nhead=8, - num_encoder_layers=6, - num_decoder_layers=6, - dim_feedforward=2048, - dropout=0.1, - activation="relu", - normalize_before=False, - return_intermediate_dec=False, - ): - super().__init__() - - encoder_layer = TransformerEncoderLayer( - d_model, nhead, dim_feedforward, dropout, activation, normalize_before - ) - encoder_norm = nn.LayerNorm(d_model) if normalize_before else None - self.encoder = TransformerEncoder(encoder_layer, num_encoder_layers, encoder_norm) - - decoder_layer = TransformerDecoderLayer( - d_model, nhead, dim_feedforward, dropout, activation, normalize_before - ) - decoder_norm = nn.LayerNorm(d_model) - self.decoder = TransformerDecoder( - decoder_layer, - num_decoder_layers, - decoder_norm, - return_intermediate=return_intermediate_dec, - ) - - self._reset_parameters() - - self.d_model = d_model - self.nhead = nhead - - def _reset_parameters(self): - for p in self.parameters(): - if p.dim() > 1: - nn.init.xavier_uniform_(p) - - def forward(self, src, mask, query_embed, pos_embed, task_token=None): - # flatten NxCxHxW to HWxNxC - bs, c, h, w = src.shape - src = src.flatten(2).permute(2, 0, 1) - pos_embed = pos_embed.flatten(2).permute(2, 0, 1) - query_embed = query_embed.unsqueeze(1).repeat(1, bs, 1) - if mask is not None: - mask = mask.flatten(1) - - if task_token is None: - tgt = torch.zeros_like(query_embed) - else: - tgt = task_token.repeat(query_embed.shape[0], 1, 1) - - memory = self.encoder(src, src_key_padding_mask=mask, pos=pos_embed) - hs = self.decoder( - tgt, memory, memory_key_padding_mask=mask, pos=pos_embed, query_pos=query_embed - ) - return hs.transpose(1, 2), memory.permute(1, 2, 0).view(bs, c, h, w) - - -class TransformerEncoder(nn.Module): - def __init__(self, encoder_layer, num_layers, norm=None): - super().__init__() - self.layers = _get_clones(encoder_layer, num_layers) - self.num_layers = num_layers - self.norm = norm - - def forward( - self, - src, - mask: Optional[Tensor] = None, - src_key_padding_mask: Optional[Tensor] = None, - pos: Optional[Tensor] = None, - ): - output = src - - for layer in self.layers: - output = layer( - output, src_mask=mask, src_key_padding_mask=src_key_padding_mask, pos=pos - ) - - if self.norm is not None: - output = self.norm(output) - - return output - - -class TransformerDecoder(nn.Module): - def __init__(self, decoder_layer, num_layers, norm=None, return_intermediate=False): - super().__init__() - self.layers = _get_clones(decoder_layer, num_layers) - self.num_layers = num_layers - self.norm = norm - self.return_intermediate = return_intermediate - - def forward( - self, - tgt, - memory, - tgt_mask: Optional[Tensor] = None, - memory_mask: Optional[Tensor] = None, - tgt_key_padding_mask: Optional[Tensor] = None, - memory_key_padding_mask: Optional[Tensor] = None, - pos: Optional[Tensor] = None, - query_pos: Optional[Tensor] = None, - ): - output = tgt - - intermediate = [] - - for layer in self.layers: - output = layer( - output, - memory, - tgt_mask=tgt_mask, - memory_mask=memory_mask, - tgt_key_padding_mask=tgt_key_padding_mask, - 
memory_key_padding_mask=memory_key_padding_mask, - pos=pos, - query_pos=query_pos, - ) - if self.return_intermediate: - intermediate.append(self.norm(output)) - - if self.norm is not None: - output = self.norm(output) - if self.return_intermediate: - intermediate.pop() - intermediate.append(output) - - if self.return_intermediate: - return torch.stack(intermediate) - - return output.unsqueeze(0) - - -class TransformerEncoderLayer(nn.Module): - def __init__( - self, - d_model, - nhead, - dim_feedforward=2048, - dropout=0.1, - activation="relu", - normalize_before=False, - ): - super().__init__() - self.self_attn = nn.MultiheadAttention(d_model, nhead, dropout=dropout) - # Implementation of Feedforward model - self.linear1 = nn.Linear(d_model, dim_feedforward) - self.dropout = nn.Dropout(dropout) - self.linear2 = nn.Linear(dim_feedforward, d_model) - - self.norm1 = nn.LayerNorm(d_model) - self.norm2 = nn.LayerNorm(d_model) - self.dropout1 = nn.Dropout(dropout) - self.dropout2 = nn.Dropout(dropout) - - self.activation = _get_activation_fn(activation) - self.normalize_before = normalize_before - - def with_pos_embed(self, tensor, pos: Optional[Tensor]): - return tensor if pos is None else tensor + pos - - def forward_post( - self, - src, - src_mask: Optional[Tensor] = None, - src_key_padding_mask: Optional[Tensor] = None, - pos: Optional[Tensor] = None, - ): - q = k = self.with_pos_embed(src, pos) - src2 = self.self_attn( - q, k, value=src, attn_mask=src_mask, key_padding_mask=src_key_padding_mask - )[0] - src = src + self.dropout1(src2) - src = self.norm1(src) - src2 = self.linear2(self.dropout(self.activation(self.linear1(src)))) - src = src + self.dropout2(src2) - src = self.norm2(src) - return src - - def forward_pre( - self, - src, - src_mask: Optional[Tensor] = None, - src_key_padding_mask: Optional[Tensor] = None, - pos: Optional[Tensor] = None, - ): - src2 = self.norm1(src) - q = k = self.with_pos_embed(src2, pos) - src2 = self.self_attn( - q, k, value=src2, attn_mask=src_mask, key_padding_mask=src_key_padding_mask - )[0] - src = src + self.dropout1(src2) - src2 = self.norm2(src) - src2 = self.linear2(self.dropout(self.activation(self.linear1(src2)))) - src = src + self.dropout2(src2) - return src - - def forward( - self, - src, - src_mask: Optional[Tensor] = None, - src_key_padding_mask: Optional[Tensor] = None, - pos: Optional[Tensor] = None, - ): - if self.normalize_before: - return self.forward_pre(src, src_mask, src_key_padding_mask, pos) - return self.forward_post(src, src_mask, src_key_padding_mask, pos) - - -class TransformerDecoderLayer(nn.Module): - def __init__( - self, - d_model, - nhead, - dim_feedforward=2048, - dropout=0.1, - activation="relu", - normalize_before=False, - ): - super().__init__() - self.self_attn = nn.MultiheadAttention(d_model, nhead, dropout=dropout) - self.multihead_attn = nn.MultiheadAttention(d_model, nhead, dropout=dropout) - # Implementation of Feedforward model - self.linear1 = nn.Linear(d_model, dim_feedforward) - self.dropout = nn.Dropout(dropout) - self.linear2 = nn.Linear(dim_feedforward, d_model) - - self.norm1 = nn.LayerNorm(d_model) - self.norm2 = nn.LayerNorm(d_model) - self.norm3 = nn.LayerNorm(d_model) - self.dropout1 = nn.Dropout(dropout) - self.dropout2 = nn.Dropout(dropout) - self.dropout3 = nn.Dropout(dropout) - - self.activation = _get_activation_fn(activation) - self.normalize_before = normalize_before - - def with_pos_embed(self, tensor, pos: Optional[Tensor]): - return tensor if pos is None else tensor + pos - - def forward_post( 
- self, - tgt, - memory, - tgt_mask: Optional[Tensor] = None, - memory_mask: Optional[Tensor] = None, - tgt_key_padding_mask: Optional[Tensor] = None, - memory_key_padding_mask: Optional[Tensor] = None, - pos: Optional[Tensor] = None, - query_pos: Optional[Tensor] = None, - ): - q = k = self.with_pos_embed(tgt, query_pos) - tgt2 = self.self_attn( - q, k, value=tgt, attn_mask=tgt_mask, key_padding_mask=tgt_key_padding_mask - )[0] - tgt = tgt + self.dropout1(tgt2) - tgt = self.norm1(tgt) - tgt2 = self.multihead_attn( - query=self.with_pos_embed(tgt, query_pos), - key=self.with_pos_embed(memory, pos), - value=memory, - attn_mask=memory_mask, - key_padding_mask=memory_key_padding_mask, - )[0] - tgt = tgt + self.dropout2(tgt2) - tgt = self.norm2(tgt) - tgt2 = self.linear2(self.dropout(self.activation(self.linear1(tgt)))) - tgt = tgt + self.dropout3(tgt2) - tgt = self.norm3(tgt) - return tgt - - def forward_pre( - self, - tgt, - memory, - tgt_mask: Optional[Tensor] = None, - memory_mask: Optional[Tensor] = None, - tgt_key_padding_mask: Optional[Tensor] = None, - memory_key_padding_mask: Optional[Tensor] = None, - pos: Optional[Tensor] = None, - query_pos: Optional[Tensor] = None, - ): - tgt2 = self.norm1(tgt) - q = k = self.with_pos_embed(tgt2, query_pos) - tgt2 = self.self_attn( - q, k, value=tgt2, attn_mask=tgt_mask, key_padding_mask=tgt_key_padding_mask - )[0] - tgt = tgt + self.dropout1(tgt2) - tgt2 = self.norm2(tgt) - tgt2 = self.multihead_attn( - query=self.with_pos_embed(tgt2, query_pos), - key=self.with_pos_embed(memory, pos), - value=memory, - attn_mask=memory_mask, - key_padding_mask=memory_key_padding_mask, - )[0] - tgt = tgt + self.dropout2(tgt2) - tgt2 = self.norm3(tgt) - tgt2 = self.linear2(self.dropout(self.activation(self.linear1(tgt2)))) - tgt = tgt + self.dropout3(tgt2) - return tgt - - def forward( - self, - tgt, - memory, - tgt_mask: Optional[Tensor] = None, - memory_mask: Optional[Tensor] = None, - tgt_key_padding_mask: Optional[Tensor] = None, - memory_key_padding_mask: Optional[Tensor] = None, - pos: Optional[Tensor] = None, - query_pos: Optional[Tensor] = None, - ): - if self.normalize_before: - return self.forward_pre( - tgt, - memory, - tgt_mask, - memory_mask, - tgt_key_padding_mask, - memory_key_padding_mask, - pos, - query_pos, - ) - return self.forward_post( - tgt, - memory, - tgt_mask, - memory_mask, - tgt_key_padding_mask, - memory_key_padding_mask, - pos, - query_pos, - ) - - -def _get_clones(module, N): - return nn.ModuleList([copy.deepcopy(module) for i in range(N)]) - - -def _get_activation_fn(activation): - """Return an activation function given a string""" - if activation == "relu": - return F.relu - if activation == "gelu": - return F.gelu - if activation == "glu": - return F.glu - raise RuntimeError(f"activation should be relu/gelu, not {activation}.") diff --git a/spaces/PAIR/Text2Video-Zero/annotator/uniformer/mmcv/fileio/__init__.py b/spaces/PAIR/Text2Video-Zero/annotator/uniformer/mmcv/fileio/__init__.py deleted file mode 100644 index 2051b85f7e59bff7bdbaa131849ce8cd31f059a4..0000000000000000000000000000000000000000 --- a/spaces/PAIR/Text2Video-Zero/annotator/uniformer/mmcv/fileio/__init__.py +++ /dev/null @@ -1,11 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-from .file_client import BaseStorageBackend, FileClient -from .handlers import BaseFileHandler, JsonHandler, PickleHandler, YamlHandler -from .io import dump, load, register_handler -from .parse import dict_from_file, list_from_file - -__all__ = [ - 'BaseStorageBackend', 'FileClient', 'load', 'dump', 'register_handler', - 'BaseFileHandler', 'JsonHandler', 'PickleHandler', 'YamlHandler', - 'list_from_file', 'dict_from_file' -] diff --git a/spaces/PKaushik/humandetect/yolov6/core/inferer.py b/spaces/PKaushik/humandetect/yolov6/core/inferer.py deleted file mode 100644 index bde7261bc5287cfef011601b52e1a904eece12cd..0000000000000000000000000000000000000000 --- a/spaces/PKaushik/humandetect/yolov6/core/inferer.py +++ /dev/null @@ -1,206 +0,0 @@ -#!/usr/bin/env python3 -# -*- coding:utf-8 -*- -import os -import os.path as osp -import math -from tqdm import tqdm -import numpy as np -import cv2 -import torch -from PIL import ImageFont - -from yolov6.utils.events import LOGGER, load_yaml -from yolov6.layers.common import DetectBackend -from yolov6.data.data_augment import letterbox -from yolov6.utils.nms import non_max_suppression -from yolov6.utils.torch_utils import get_model_info - - -class Inferer: - def __init__(self, source, weights, device, yaml, img_size, half): - import glob - from yolov6.data.datasets import IMG_FORMATS - - self.__dict__.update(locals()) - - # Init model - self.device = device - self.img_size = img_size - cuda = self.device != 'cpu' and torch.cuda.is_available() - self.device = torch.device('cuda:0' if cuda else 'cpu') - self.model = DetectBackend(weights, device=self.device) - self.stride = self.model.stride - self.class_names = load_yaml(yaml)['names'] - self.img_size = self.check_img_size(self.img_size, s=self.stride) # check image size - - # Half precision - if half & (self.device.type != 'cpu'): - self.model.model.half() - else: - self.model.model.float() - half = False - - if self.device.type != 'cpu': - self.model(torch.zeros(1, 3, *self.img_size).to(self.device).type_as(next(self.model.model.parameters()))) # warmup - - # Load data - if os.path.isdir(source): - img_paths = sorted(glob.glob(os.path.join(source, '*.*'))) # dir - elif os.path.isfile(source): - img_paths = [source] # files - else: - raise Exception(f'Invalid path: {source}') - self.img_paths = [img_path for img_path in img_paths if img_path.split('.')[-1].lower() in IMG_FORMATS] - - # Switch model to deploy status - self.model_switch(self.model, self.img_size) - - def model_switch(self, model, img_size): - ''' Model switch to deploy status ''' - from yolov6.layers.common import RepVGGBlock - for layer in model.modules(): - if isinstance(layer, RepVGGBlock): - layer.switch_to_deploy() - - LOGGER.info("Switch model to deploy modality.") - - def infer(self, conf_thres, iou_thres, classes, agnostic_nms, max_det, save_dir, save_txt, save_img, hide_labels, hide_conf): - ''' Model Inference and results visualization ''' - - for img_path in tqdm(self.img_paths): - img, img_src = self.precess_image(img_path, self.img_size, self.stride, self.half) - img = img.to(self.device) - if len(img.shape) == 3: - img = img[None] - # expand for batch dim - pred_results = self.model(img) - det = non_max_suppression(pred_results, conf_thres, iou_thres, classes, agnostic_nms, max_det=max_det)[0] - - save_path = osp.join(save_dir, osp.basename(img_path)) # im.jpg - txt_path = osp.join(save_dir, 'labels', osp.splitext(osp.basename(img_path))[0]) - - gn = torch.tensor(img_src.shape)[[1, 0, 1, 0]] # normalization gain whwh - 
img_ori = img_src - - # check image and font - assert img_ori.data.contiguous, 'Image needs to be contiguous. Please apply to input images with np.ascontiguousarray(im).' - self.font_check() - - if len(det): - det[:, :4] = self.rescale(img.shape[2:], det[:, :4], img_src.shape).round() - - for *xyxy, conf, cls in reversed(det): - if save_txt: # Write to file - xywh = (self.box_convert(torch.tensor(xyxy).view(1, 4)) / gn).view(-1).tolist() # normalized xywh - line = (cls, *xywh, conf) - with open(txt_path + '.txt', 'a') as f: - f.write(('%g ' * len(line)).rstrip() % line + '\n') - - if save_img: - class_num = int(cls) # integer class - label = None if hide_labels else (self.class_names[class_num] if hide_conf else f'{self.class_names[class_num]} {conf:.2f}') - - self.plot_box_and_label(img_ori, max(round(sum(img_ori.shape) / 2 * 0.003), 2), xyxy, label, color=self.generate_colors(class_num, True)) - - img_src = np.asarray(img_ori) - - # Save results (image with detections) - if save_img: - cv2.imwrite(save_path, img_src) - - @staticmethod - def precess_image(path, img_size, stride, half): - '''Process image before image inference.''' - try: - img_src = cv2.imread(path) - assert img_src is not None, f'Invalid image: {path}' - except Exception as e: - LOGGER.warning(e) - image = letterbox(img_src, img_size, stride=stride)[0] - - # Convert - image = image.transpose((2, 0, 1))[::-1] # HWC to CHW, BGR to RGB - image = torch.from_numpy(np.ascontiguousarray(image)) - image = image.half() if half else image.float() # uint8 to fp16/32 - image /= 255 # 0 - 255 to 0.0 - 1.0 - - return image, img_src - - @staticmethod - def rescale(ori_shape, boxes, target_shape): - '''Rescale the output to the original image shape''' - ratio = min(ori_shape[0] / target_shape[0], ori_shape[1] / target_shape[1]) - padding = (ori_shape[1] - target_shape[1] * ratio) / 2, (ori_shape[0] - target_shape[0] * ratio) / 2 - - boxes[:, [0, 2]] -= padding[0] - boxes[:, [1, 3]] -= padding[1] - boxes[:, :4] /= ratio - - boxes[:, 0].clamp_(0, target_shape[1]) # x1 - boxes[:, 1].clamp_(0, target_shape[0]) # y1 - boxes[:, 2].clamp_(0, target_shape[1]) # x2 - boxes[:, 3].clamp_(0, target_shape[0]) # y2 - - return boxes - - def check_img_size(self, img_size, s=32, floor=0): - """Make sure image size is a multiple of stride s in each dimension, and return a new shape list of image.""" - if isinstance(img_size, int): # integer i.e. img_size=640 - new_size = max(self.make_divisible(img_size, int(s)), floor) - elif isinstance(img_size, list): # list i.e. img_size=[640, 480] - new_size = [max(self.make_divisible(x, int(s)), floor) for x in img_size] - else: - raise Exception(f"Unsupported type of img_size: {type(img_size)}") - - if new_size != img_size: - print(f'WARNING: --img-size {img_size} must be multiple of max stride {s}, updating to {new_size}') - return new_size if isinstance(img_size,list) else [new_size]*2 - - def make_divisible(self, x, divisor): - # Upward revision the value x to make it evenly divisible by the divisor. 
- return math.ceil(x / divisor) * divisor - - @staticmethod - def plot_box_and_label(image, lw, box, label='', color=(128, 128, 128), txt_color=(255, 255, 255)): - # Add one xyxy box to image with label - p1, p2 = (int(box[0]), int(box[1])), (int(box[2]), int(box[3])) - cv2.rectangle(image, p1, p2, color, thickness=lw, lineType=cv2.LINE_AA) - if label: - tf = max(lw - 1, 1) # font thickness - w, h = cv2.getTextSize(label, 0, fontScale=lw / 3, thickness=tf)[0] # text width, height - outside = p1[1] - h - 3 >= 0 # label fits outside box - p2 = p1[0] + w, p1[1] - h - 3 if outside else p1[1] + h + 3 - cv2.rectangle(image, p1, p2, color, -1, cv2.LINE_AA) # filled - cv2.putText(image, label, (p1[0], p1[1] - 2 if outside else p1[1] + h + 2), 0, lw / 3, txt_color, - thickness=tf, lineType=cv2.LINE_AA) - - @staticmethod - def font_check(font='./yolov6/utils/Arial.ttf', size=10): - # Return a PIL TrueType Font, downloading to CONFIG_DIR if necessary - assert osp.exists(font), f'font path not exists: {font}' - try: - return ImageFont.truetype(str(font) if font.exists() else font.name, size) - except Exception as e: # download if missing - return ImageFont.truetype(str(font), size) - - @staticmethod - def box_convert(x): - # Convert boxes with shape [n, 4] from [x1, y1, x2, y2] to [x, y, w, h] where x1y1=top-left, x2y2=bottom-right - y = x.clone() if isinstance(x, torch.Tensor) else np.copy(x) - y[:, 0] = (x[:, 0] + x[:, 2]) / 2 # x center - y[:, 1] = (x[:, 1] + x[:, 3]) / 2 # y center - y[:, 2] = x[:, 2] - x[:, 0] # width - y[:, 3] = x[:, 3] - x[:, 1] # height - return y - - @staticmethod - def generate_colors(i, bgr=False): - hex = ('FF3838', 'FF9D97', 'FF701F', 'FFB21D', 'CFD231', '48F90A', '92CC17', '3DDB86', '1A9334', '00D4BB', - '2C99A8', '00C2FF', '344593', '6473FF', '0018EC', '8438FF', '520085', 'CB38FF', 'FF95C8', 'FF37C7') - palette = [] - for iter in hex: - h = '#' + iter - palette.append(tuple(int(h[1 + i:1 + i + 2], 16) for i in (0, 2, 4))) - num = len(palette) - color = palette[int(i) % num] - return (color[2], color[1], color[0]) if bgr else color diff --git a/spaces/PSLD/PSLD/stable-diffusion/ldm/models/diffusion/ddim.py b/spaces/PSLD/PSLD/stable-diffusion/ldm/models/diffusion/ddim.py deleted file mode 100644 index fb31215db5c3f3f703f15987d7eee6a179c9f7ec..0000000000000000000000000000000000000000 --- a/spaces/PSLD/PSLD/stable-diffusion/ldm/models/diffusion/ddim.py +++ /dev/null @@ -1,241 +0,0 @@ -"""SAMPLING ONLY.""" - -import torch -import numpy as np -from tqdm import tqdm -from functools import partial - -from ldm.modules.diffusionmodules.util import make_ddim_sampling_parameters, make_ddim_timesteps, noise_like, \ - extract_into_tensor - - -class DDIMSampler(object): - def __init__(self, model, schedule="linear", **kwargs): - super().__init__() - self.model = model - self.ddpm_num_timesteps = model.num_timesteps - self.schedule = schedule - - def register_buffer(self, name, attr): - if type(attr) == torch.Tensor: - if attr.device != torch.device("cuda"): - attr = attr.to(torch.device("cuda")) - setattr(self, name, attr) - - def make_schedule(self, ddim_num_steps, ddim_discretize="uniform", ddim_eta=0., verbose=True): - self.ddim_timesteps = make_ddim_timesteps(ddim_discr_method=ddim_discretize, num_ddim_timesteps=ddim_num_steps, - num_ddpm_timesteps=self.ddpm_num_timesteps,verbose=verbose) - alphas_cumprod = self.model.alphas_cumprod - assert alphas_cumprod.shape[0] == self.ddpm_num_timesteps, 'alphas have to be defined for each timestep' - to_torch = lambda x: 
x.clone().detach().to(torch.float32).to(self.model.device) - - self.register_buffer('betas', to_torch(self.model.betas)) - self.register_buffer('alphas_cumprod', to_torch(alphas_cumprod)) - self.register_buffer('alphas_cumprod_prev', to_torch(self.model.alphas_cumprod_prev)) - - # calculations for diffusion q(x_t | x_{t-1}) and others - self.register_buffer('sqrt_alphas_cumprod', to_torch(np.sqrt(alphas_cumprod.cpu()))) - self.register_buffer('sqrt_one_minus_alphas_cumprod', to_torch(np.sqrt(1. - alphas_cumprod.cpu()))) - self.register_buffer('log_one_minus_alphas_cumprod', to_torch(np.log(1. - alphas_cumprod.cpu()))) - self.register_buffer('sqrt_recip_alphas_cumprod', to_torch(np.sqrt(1. / alphas_cumprod.cpu()))) - self.register_buffer('sqrt_recipm1_alphas_cumprod', to_torch(np.sqrt(1. / alphas_cumprod.cpu() - 1))) - - # ddim sampling parameters - ddim_sigmas, ddim_alphas, ddim_alphas_prev = make_ddim_sampling_parameters(alphacums=alphas_cumprod.cpu(), - ddim_timesteps=self.ddim_timesteps, - eta=ddim_eta,verbose=verbose) - self.register_buffer('ddim_sigmas', ddim_sigmas) - self.register_buffer('ddim_alphas', ddim_alphas) - self.register_buffer('ddim_alphas_prev', ddim_alphas_prev) - self.register_buffer('ddim_sqrt_one_minus_alphas', np.sqrt(1. - ddim_alphas)) - sigmas_for_original_sampling_steps = ddim_eta * torch.sqrt( - (1 - self.alphas_cumprod_prev) / (1 - self.alphas_cumprod) * ( - 1 - self.alphas_cumprod / self.alphas_cumprod_prev)) - self.register_buffer('ddim_sigmas_for_original_num_steps', sigmas_for_original_sampling_steps) - - @torch.no_grad() - def sample(self, - S, - batch_size, - shape, - conditioning=None, - callback=None, - normals_sequence=None, - img_callback=None, - quantize_x0=False, - eta=0., - mask=None, - x0=None, - temperature=1., - noise_dropout=0., - score_corrector=None, - corrector_kwargs=None, - verbose=True, - x_T=None, - log_every_t=100, - unconditional_guidance_scale=1., - unconditional_conditioning=None, - # this has to come in the same format as the conditioning, # e.g. as encoded tokens, ... 
- **kwargs - ): - if conditioning is not None: - if isinstance(conditioning, dict): - cbs = conditioning[list(conditioning.keys())[0]].shape[0] - if cbs != batch_size: - print(f"Warning: Got {cbs} conditionings but batch-size is {batch_size}") - else: - if conditioning.shape[0] != batch_size: - print(f"Warning: Got {conditioning.shape[0]} conditionings but batch-size is {batch_size}") - - self.make_schedule(ddim_num_steps=S, ddim_eta=eta, verbose=verbose) - # sampling - C, H, W = shape - size = (batch_size, C, H, W) - print(f'Data shape for DDIM sampling is {size}, eta {eta}') - - samples, intermediates = self.ddim_sampling(conditioning, size, - callback=callback, - img_callback=img_callback, - quantize_denoised=quantize_x0, - mask=mask, x0=x0, - ddim_use_original_steps=False, - noise_dropout=noise_dropout, - temperature=temperature, - score_corrector=score_corrector, - corrector_kwargs=corrector_kwargs, - x_T=x_T, - log_every_t=log_every_t, - unconditional_guidance_scale=unconditional_guidance_scale, - unconditional_conditioning=unconditional_conditioning, - ) - return samples, intermediates - - @torch.no_grad() - def ddim_sampling(self, cond, shape, - x_T=None, ddim_use_original_steps=False, - callback=None, timesteps=None, quantize_denoised=False, - mask=None, x0=None, img_callback=None, log_every_t=100, - temperature=1., noise_dropout=0., score_corrector=None, corrector_kwargs=None, - unconditional_guidance_scale=1., unconditional_conditioning=None,): - device = self.model.betas.device - b = shape[0] - if x_T is None: - img = torch.randn(shape, device=device) - else: - img = x_T - - if timesteps is None: - timesteps = self.ddpm_num_timesteps if ddim_use_original_steps else self.ddim_timesteps - elif timesteps is not None and not ddim_use_original_steps: - subset_end = int(min(timesteps / self.ddim_timesteps.shape[0], 1) * self.ddim_timesteps.shape[0]) - 1 - timesteps = self.ddim_timesteps[:subset_end] - - intermediates = {'x_inter': [img], 'pred_x0': [img]} - time_range = reversed(range(0,timesteps)) if ddim_use_original_steps else np.flip(timesteps) - total_steps = timesteps if ddim_use_original_steps else timesteps.shape[0] - print(f"Running DDIM Sampling with {total_steps} timesteps") - - iterator = tqdm(time_range, desc='DDIM Sampler', total=total_steps) - - for i, step in enumerate(iterator): - index = total_steps - i - 1 - ts = torch.full((b,), step, device=device, dtype=torch.long) - - if mask is not None: - assert x0 is not None - img_orig = self.model.q_sample(x0, ts) # TODO: deterministic forward pass? - img = img_orig * mask + (1. 
- mask) * img - - outs = self.p_sample_ddim(img, cond, ts, index=index, use_original_steps=ddim_use_original_steps, - quantize_denoised=quantize_denoised, temperature=temperature, - noise_dropout=noise_dropout, score_corrector=score_corrector, - corrector_kwargs=corrector_kwargs, - unconditional_guidance_scale=unconditional_guidance_scale, - unconditional_conditioning=unconditional_conditioning) - img, pred_x0 = outs - if callback: callback(i) - if img_callback: img_callback(pred_x0, i) - - if index % log_every_t == 0 or index == total_steps - 1: - intermediates['x_inter'].append(img) - intermediates['pred_x0'].append(pred_x0) - - return img, intermediates - - @torch.no_grad() - def p_sample_ddim(self, x, c, t, index, repeat_noise=False, use_original_steps=False, quantize_denoised=False, - temperature=1., noise_dropout=0., score_corrector=None, corrector_kwargs=None, - unconditional_guidance_scale=1., unconditional_conditioning=None): - b, *_, device = *x.shape, x.device - - if unconditional_conditioning is None or unconditional_guidance_scale == 1.: - e_t = self.model.apply_model(x, t, c) - else: - x_in = torch.cat([x] * 2) - t_in = torch.cat([t] * 2) - c_in = torch.cat([unconditional_conditioning, c]) - e_t_uncond, e_t = self.model.apply_model(x_in, t_in, c_in).chunk(2) - e_t = e_t_uncond + unconditional_guidance_scale * (e_t - e_t_uncond) - - if score_corrector is not None: - assert self.model.parameterization == "eps" - e_t = score_corrector.modify_score(self.model, e_t, x, t, c, **corrector_kwargs) - - alphas = self.model.alphas_cumprod if use_original_steps else self.ddim_alphas - alphas_prev = self.model.alphas_cumprod_prev if use_original_steps else self.ddim_alphas_prev - sqrt_one_minus_alphas = self.model.sqrt_one_minus_alphas_cumprod if use_original_steps else self.ddim_sqrt_one_minus_alphas - sigmas = self.model.ddim_sigmas_for_original_num_steps if use_original_steps else self.ddim_sigmas - # select parameters corresponding to the currently considered timestep - a_t = torch.full((b, 1, 1, 1), alphas[index], device=device) - a_prev = torch.full((b, 1, 1, 1), alphas_prev[index], device=device) - sigma_t = torch.full((b, 1, 1, 1), sigmas[index], device=device) - sqrt_one_minus_at = torch.full((b, 1, 1, 1), sqrt_one_minus_alphas[index],device=device) - - # current prediction for x_0 - pred_x0 = (x - sqrt_one_minus_at * e_t) / a_t.sqrt() - if quantize_denoised: - pred_x0, _, *_ = self.model.first_stage_model.quantize(pred_x0) - # direction pointing to x_t - dir_xt = (1. 
- a_prev - sigma_t**2).sqrt() * e_t - noise = sigma_t * noise_like(x.shape, device, repeat_noise) * temperature - if noise_dropout > 0.: - noise = torch.nn.functional.dropout(noise, p=noise_dropout) - x_prev = a_prev.sqrt() * pred_x0 + dir_xt + noise - return x_prev, pred_x0 - - @torch.no_grad() - def stochastic_encode(self, x0, t, use_original_steps=False, noise=None): - # fast, but does not allow for exact reconstruction - # t serves as an index to gather the correct alphas - if use_original_steps: - sqrt_alphas_cumprod = self.sqrt_alphas_cumprod - sqrt_one_minus_alphas_cumprod = self.sqrt_one_minus_alphas_cumprod - else: - sqrt_alphas_cumprod = torch.sqrt(self.ddim_alphas) - sqrt_one_minus_alphas_cumprod = self.ddim_sqrt_one_minus_alphas - - if noise is None: - noise = torch.randn_like(x0) - return (extract_into_tensor(sqrt_alphas_cumprod, t, x0.shape) * x0 + - extract_into_tensor(sqrt_one_minus_alphas_cumprod, t, x0.shape) * noise) - - @torch.no_grad() - def decode(self, x_latent, cond, t_start, unconditional_guidance_scale=1.0, unconditional_conditioning=None, - use_original_steps=False): - - timesteps = np.arange(self.ddpm_num_timesteps) if use_original_steps else self.ddim_timesteps - timesteps = timesteps[:t_start] - - time_range = np.flip(timesteps) - total_steps = timesteps.shape[0] - print(f"Running DDIM Sampling with {total_steps} timesteps") - - iterator = tqdm(time_range, desc='Decoding image', total=total_steps) - x_dec = x_latent - for i, step in enumerate(iterator): - index = total_steps - i - 1 - ts = torch.full((x_latent.shape[0],), step, device=x_latent.device, dtype=torch.long) - x_dec, _ = self.p_sample_ddim(x_dec, cond, ts, index=index, use_original_steps=use_original_steps, - unconditional_guidance_scale=unconditional_guidance_scale, - unconditional_conditioning=unconditional_conditioning) - return x_dec \ No newline at end of file diff --git a/spaces/Paulog731/SD-2.1-Img2Img/README.md b/spaces/Paulog731/SD-2.1-Img2Img/README.md deleted file mode 100644 index 5e73afa8766a75b9bd6a5843cc270e91cbbb8431..0000000000000000000000000000000000000000 --- a/spaces/Paulog731/SD-2.1-Img2Img/README.md +++ /dev/null @@ -1,14 +0,0 @@ ---- -title: SD 2.1 Img2Img -emoji: 👀 -colorFrom: purple -colorTo: indigo -sdk: gradio -sdk_version: 3.11.0 -app_file: app.py -pinned: false -license: mit -duplicated_from: trysem/SD-2.1-Img2Img ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/PeepDaSlan9/AutoGPT/main.py b/spaces/PeepDaSlan9/AutoGPT/main.py deleted file mode 100644 index 160addc390b94a8b143a3a2e18991a560f9b032e..0000000000000000000000000000000000000000 --- a/spaces/PeepDaSlan9/AutoGPT/main.py +++ /dev/null @@ -1 +0,0 @@ -from autogpt import main diff --git a/spaces/PeepDaSlan9/conceptofmind-Yarn-Llama-2-7b-128k/app.py b/spaces/PeepDaSlan9/conceptofmind-Yarn-Llama-2-7b-128k/app.py deleted file mode 100644 index fb6250efeb13a20564c0bdb498d6722994566783..0000000000000000000000000000000000000000 --- a/spaces/PeepDaSlan9/conceptofmind-Yarn-Llama-2-7b-128k/app.py +++ /dev/null @@ -1,3 +0,0 @@ -import gradio as gr - -gr.Interface.load("models/conceptofmind/Yarn-Llama-2-7b-128k").launch() \ No newline at end of file diff --git a/spaces/Pinwheel/GLIP-BLIP-Object-Detection-VQA/maskrcnn_benchmark/utils/collect_env.py b/spaces/Pinwheel/GLIP-BLIP-Object-Detection-VQA/maskrcnn_benchmark/utils/collect_env.py deleted file mode 100644 index 2d0641dda61c9950cb54d0552106246248e571ef..0000000000000000000000000000000000000000 --- 
a/spaces/Pinwheel/GLIP-BLIP-Object-Detection-VQA/maskrcnn_benchmark/utils/collect_env.py +++ /dev/null @@ -1,14 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved. -import PIL - -from torch.utils.collect_env import get_pretty_env_info - - -def get_pil_version(): - return "\n Pillow ({})".format(PIL.__version__) - - -def collect_env_info(): - env_str = get_pretty_env_info() - env_str += get_pil_version() - return env_str diff --git a/spaces/Pinwheel/GLIP-BLIP-Object-Detection-VQA/maskrcnn_benchmark/utils/model_serialization.py b/spaces/Pinwheel/GLIP-BLIP-Object-Detection-VQA/maskrcnn_benchmark/utils/model_serialization.py deleted file mode 100644 index 01669fd076bc543096aafaccf42e3b256db91ec2..0000000000000000000000000000000000000000 --- a/spaces/Pinwheel/GLIP-BLIP-Object-Detection-VQA/maskrcnn_benchmark/utils/model_serialization.py +++ /dev/null @@ -1,157 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved. -from collections import OrderedDict, defaultdict -import logging -import math -import torch - -from maskrcnn_benchmark.utils.imports import import_file - -def resize_2d(posemb, shape_new): - # Rescale the grid of position embeddings when loading from state_dict. Adapted from - # https://github.com/google-research/vision_transformer/blob/00883dd691c63a6830751563748663526e811cee/vit_jax/checkpoint.py#L224 - ntok_new = shape_new[0] - gs_old = int(math.sqrt(len(posemb))) # 2 * w - 1 - gs_new = int(math.sqrt(ntok_new)) # 2 * w - 1 - posemb_grid = posemb.reshape(1, gs_old, gs_old, -1).permute(0, 3, 1, 2) - posemb_grid = torch.nn.functional.interpolate(posemb_grid, size=(gs_new, gs_new), mode='bilinear') - posemb_grid = posemb_grid.permute(0, 2, 3, 1).reshape(gs_new * gs_new, -1) - return posemb_grid - -def align_and_update_state_dicts(model_state_dict, loaded_state_dict, reshape_keys=['pos_bias_table'], use_weightmap=False): - """ - Strategy: suppose that the models that we will create will have prefixes appended - to each of its keys, for example due to an extra level of nesting that the original - pre-trained weights from ImageNet won't contain. For example, model.state_dict() - might return backbone[0].body.res2.conv1.weight, while the pre-trained model contains - res2.conv1.weight. We thus want to match both parameters together. - For that, we look for each model weight, look among all loaded keys if there is one - that is a suffix of the current weight name, and use it if that's the case. - If multiple matches exist, take the one with longest size - of the corresponding name. For example, for the same model as before, the pretrained - weight file can contain both res2.conv1.weight, as well as conv1.weight. In this case, - we want to match backbone[0].body.conv1.weight to conv1.weight, and - backbone[0].body.res2.conv1.weight to res2.conv1.weight. 
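A minimal sketch of the suffix-matching strategy described in the docstring above, using hypothetical key names (among loaded keys that are suffixes of a model key, the longest match wins):

# Illustrative only; these key names are made up for the example.
model_keys = ["backbone.body.res2.conv1.weight", "backbone.body.conv1.weight"]
loaded_keys = ["res2.conv1.weight", "conv1.weight"]
for mk in model_keys:
    candidates = [lk for lk in loaded_keys if mk.endswith(lk)]
    best = max(candidates, key=len) if candidates else None
    print(f"{mk} <- {best}")
# backbone.body.res2.conv1.weight <- res2.conv1.weight
# backbone.body.conv1.weight <- conv1.weight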
- """ - current_keys = sorted(list(model_state_dict.keys())) - loaded_keys = sorted(list(loaded_state_dict.keys())) - # get a matrix of string matches, where each (i, j) entry correspond to the size of the - # loaded_key string, if it matches - match_matrix = [ - len(j) if i.endswith(j) else 0 for i in current_keys for j in loaded_keys - ] - match_matrix = torch.as_tensor(match_matrix).view( - len(current_keys), len(loaded_keys) - ) - max_match_size, idxs = match_matrix.max(1) - # remove indices that correspond to no-match - idxs[max_match_size == 0] = -1 - - matched_keys = [] - # used for logging - max_size = max([len(key) for key in current_keys]) if current_keys else 1 - max_size_loaded = max([len(key) for key in loaded_keys]) if loaded_keys else 1 - log_str_template = "{: <{}} loaded from {: <{}} of shape {}" - logger = logging.getLogger(__name__) - for idx_new, idx_old in enumerate(idxs.tolist()): - if idx_old == -1: - continue - key = current_keys[idx_new] - key_old = loaded_keys[idx_old] - if model_state_dict[key].shape != loaded_state_dict[key_old].shape: - if any([k in key_old for k in reshape_keys]): - new_shape = model_state_dict[key].shape - logger.warning('Reshaping {} -> {}. \n'.format(key_old, key)) - model_state_dict[key] = resize_2d(loaded_state_dict[key_old], new_shape) - elif use_weightmap and 'cls_logits' in key: - coco_in_objects365_inds = [ - 227, 26, 55, 202, 2, 44, 338, 346, 32, 336, 118, 299, 218, - 25, 361, 59, 95, 161, 278, 82, 110, 22, 364, 134, 9, 350, - 152, 323, 304, 130, 285, 289, 16, 172, 17, 18, 283, 305, - 321, 35, 362, 88, 127, 174, 292, 37, 11, 6, 267, 212, 41, - 58, 162, 237, 98, 48, 63, 81, 247, 23, 94, 326, 349, 178, - 203, 259, 171, 60, 198, 213, 325, 282, 258, 33, 71, 353, - 273, 318, 148, 330 - ] - logger.info("Use coco_in_objects365_inds labelmap for COCO detection because of size mis-match, " - "Reshaping {} -> {}. \n".format(key_old, key)) - new_shape = model_state_dict[key].shape - assert new_shape[0] == len(coco_in_objects365_inds) - weight_inds_old = torch.as_tensor(coco_in_objects365_inds).to(loaded_state_dict[key_old].device) - model_state_dict[key] = loaded_state_dict[key_old][weight_inds_old].to(model_state_dict[key].device) - else: - logger.info('Skip due to size mismatch: {} -> {}. 
\n'.format(key_old, key)) - continue - else: - model_state_dict[key] = loaded_state_dict[key_old] - matched_keys.append(key) - logger.info( - log_str_template.format( - key, - max_size, - key_old, - max_size_loaded, - tuple(loaded_state_dict[key_old].shape), - ) - ) - missing_keys = set(current_keys)-set(matched_keys) - if len(missing_keys): - groups = _group_checkpoint_keys(missing_keys) - msg_per_group = sorted(k + _group_to_str(v) for k, v in groups.items()) - msg = '\n'.join(sorted(msg_per_group)) - logger.warning('Some layers unloaded with pre-trained weight: \n' + msg) - -def strip_prefix_if_present(state_dict, prefix): - keys = sorted(state_dict.keys()) - if not all(key.startswith(prefix) for key in keys): - return state_dict - stripped_state_dict = OrderedDict() - for key, value in state_dict.items(): - stripped_state_dict[key.replace(prefix, "", 1)] = value - return stripped_state_dict - -def load_state_dict(model, loaded_state_dict): - model_state_dict = model.state_dict() - # if the state_dict comes from a model that was wrapped in a - # DataParallel or DistributedDataParallel during serialization, - # remove the "module" prefix before performing the matching - loaded_state_dict = strip_prefix_if_present(loaded_state_dict, prefix="module.") - align_and_update_state_dicts(model_state_dict, loaded_state_dict) - - # use strict loading - model.load_state_dict(model_state_dict) - -def _group_checkpoint_keys(keys): - """ - Group keys based on common prefixes. A prefix is the string up to the final - "." in each key. - Args: - keys (list[str]): list of parameter names, i.e. keys in the model - checkpoint dict. - Returns: - dict[list]: keys with common prefixes are grouped into lists. - """ - groups = defaultdict(list) - for key in keys: - pos = key.rfind(".") - if pos >= 0: - head, tail = key[:pos], [key[pos + 1 :]] - else: - head, tail = key, [] - groups[head].extend(tail) - return groups - -def _group_to_str(group): - """ - Format a group of parameter name suffixes into a loggable string. - Args: - group (list[str]): list of parameter name suffixes. - Returns: - str: formated string. - """ - if len(group) == 0: - return "" - - if len(group) == 1: - return "." 
+ group[0] - - return ".{" + ", ".join(sorted(group)) + "}" \ No newline at end of file diff --git a/spaces/Podtekatel/ArcaneSVK2/inference/center_crop.py b/spaces/Podtekatel/ArcaneSVK2/inference/center_crop.py deleted file mode 100644 index 5ef5008869aa2882ea8c26b5dc72579b236ef644..0000000000000000000000000000000000000000 --- a/spaces/Podtekatel/ArcaneSVK2/inference/center_crop.py +++ /dev/null @@ -1,24 +0,0 @@ -import numpy as np - - -# From albumentations -def center_crop(img: np.ndarray, crop_height: int, crop_width: int): - height, width = img.shape[:2] - if height < crop_height or width < crop_width: - raise ValueError( - "Requested crop size ({crop_height}, {crop_width}) is " - "larger than the image size ({height}, {width})".format( - crop_height=crop_height, crop_width=crop_width, height=height, width=width - ) - ) - x1, y1, x2, y2 = get_center_crop_coords(height, width, crop_height, crop_width) - img = img[y1:y2, x1:x2] - return img - - -def get_center_crop_coords(height: int, width: int, crop_height: int, crop_width: int): - y1 = (height - crop_height) // 2 - y2 = y1 + crop_height - x1 = (width - crop_width) // 2 - x2 = x1 + crop_width - return x1, y1, x2, y2 diff --git a/spaces/Prof-Reza/Audiocraft_Music-Audio_Generation/audiocraft/utils/utils.py b/spaces/Prof-Reza/Audiocraft_Music-Audio_Generation/audiocraft/utils/utils.py deleted file mode 100644 index 3135d70e949a058095ef84dd87b49384546c465c..0000000000000000000000000000000000000000 --- a/spaces/Prof-Reza/Audiocraft_Music-Audio_Generation/audiocraft/utils/utils.py +++ /dev/null @@ -1,298 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -from concurrent.futures import ProcessPoolExecutor -from contextlib import contextmanager -from functools import wraps, lru_cache -import hashlib -import json -import logging -from pathlib import Path -import typing as tp - -import flashy -import flashy.distrib -import omegaconf -import torch -from torch.nn.utils.rnn import pad_sequence - - -logger = logging.getLogger(__name__) - - -def model_hash(model: torch.nn.Module) -> str: - """Return a model hash. This should allow us to track regressions in model init - from the logs of past experiments. - """ - hasher = hashlib.sha1() - for p in model.parameters(): - hasher.update(p.data.cpu().numpy().tobytes()) - return hasher.hexdigest() - - -def dict_from_config(cfg: omegaconf.DictConfig) -> dict: - """Convenience function to map an omegaconf configuration to a dictionary. - - Args: - cfg (omegaconf.DictConfig): Original configuration to map to dict. - Returns: - dict: Config as dictionary object. - """ - dct = omegaconf.OmegaConf.to_container(cfg, resolve=True) - assert isinstance(dct, dict) - return dct - - -def random_subset(dataset, max_samples: int, seed: int = 42) -> torch.utils.data.Subset: - if max_samples >= len(dataset): - return dataset - - generator = torch.Generator().manual_seed(seed) - perm = torch.randperm(len(dataset), generator=generator) - return torch.utils.data.Subset(dataset, perm[:max_samples].tolist()) - - -def get_loader(dataset, num_samples: tp.Optional[int], batch_size: int, - num_workers: int, seed: int, **kwargs) -> torch.utils.data.DataLoader: - """Convenience function to load dataset into a dataloader with optional subset sampling. - - Args: - dataset: Dataset to load. - num_samples (Optional[int]): Number of samples to limit subset size. 
- batch_size (int): Batch size. - num_workers (int): Number of workers for data loading. - seed (int): Random seed. - """ - if num_samples is not None: - dataset = random_subset(dataset, num_samples, seed) - - dataloader = flashy.distrib.loader( - dataset, - batch_size=batch_size, - num_workers=num_workers, - **kwargs - ) - return dataloader - - -def get_dataset_from_loader(dataloader): - dataset = dataloader.dataset - if isinstance(dataset, torch.utils.data.Subset): - return dataset.dataset - else: - return dataset - - -def multinomial(input: torch.Tensor, num_samples: int, replacement=False, *, generator=None): - """torch.multinomial with arbitrary number of dimensions, and number of candidates on the last dimension. - - Args: - input (torch.Tensor): The input tensor containing probabilities. - num_samples (int): Number of samples to draw. - replacement (bool): Whether to draw with replacement or not. - Keywords args: - generator (torch.Generator): A pseudorandom number generator for sampling. - Returns: - torch.Tensor: Last dimension contains num_samples indices - sampled from the multinomial probability distribution - located in the last dimension of tensor input. - """ - input_ = input.reshape(-1, input.shape[-1]) - output_ = torch.multinomial(input_, num_samples=num_samples, replacement=replacement, generator=generator) - output = output_.reshape(*list(input.shape[:-1]), -1) - return output - - -def sample_top_k(probs: torch.Tensor, k: int) -> torch.Tensor: - """Sample next token from top K values along the last dimension of the input probs tensor. - - Args: - probs (torch.Tensor): Input probabilities with token candidates on the last dimension. - k (int): The k in “top-k”. - Returns: - torch.Tensor: Sampled tokens. - """ - top_k_value, _ = torch.topk(probs, k, dim=-1) - min_value_top_k = top_k_value[..., [-1]] - probs *= (probs >= min_value_top_k).float() - probs.div_(probs.sum(dim=-1, keepdim=True)) - next_token = multinomial(probs, num_samples=1) - return next_token - - -def sample_top_p(probs: torch.Tensor, p: float) -> torch.Tensor: - """Sample next token from top P probabilities along the last dimension of the input probs tensor. - - Args: - probs (torch.Tensor): Input probabilities with token candidates on the last dimension. - p (int): The p in “top-p”. - Returns: - torch.Tensor: Sampled tokens. - """ - probs_sort, probs_idx = torch.sort(probs, dim=-1, descending=True) - probs_sum = torch.cumsum(probs_sort, dim=-1) - mask = probs_sum - probs_sort > p - probs_sort *= (~mask).float() - probs_sort.div_(probs_sort.sum(dim=-1, keepdim=True)) - next_token = multinomial(probs_sort, num_samples=1) - next_token = torch.gather(probs_idx, -1, next_token) - return next_token - - -class DummyPoolExecutor: - """Dummy pool executor to use when we actually have only 1 worker. - (e.g. instead of ProcessPoolExecutor). 
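A quick sanity check of the top-k sampling helper defined earlier in this module (a sketch, assuming sample_top_k is in scope; note it renormalizes the probability tensor in place, hence the clone):

import torch
probs = torch.tensor([[0.1, 0.2, 0.6, 0.1]])
token = sample_top_k(probs.clone(), k=2)  # clone: the helper modifies probs in place
assert token.item() in (1, 2)             # only the two most likely tokens can be drawn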
- """ - class DummyResult: - def __init__(self, func, *args, **kwargs): - self.func = func - self.args = args - self.kwargs = kwargs - - def result(self): - return self.func(*self.args, **self.kwargs) - - def __init__(self, workers, mp_context=None): - pass - - def submit(self, func, *args, **kwargs): - return DummyPoolExecutor.DummyResult(func, *args, **kwargs) - - def __enter__(self): - return self - - def __exit__(self, exc_type, exc_value, exc_tb): - return - - -def get_pool_executor(num_workers: int, mp_context=None): - return ProcessPoolExecutor(num_workers, mp_context) if num_workers > 1 else DummyPoolExecutor(1) - - -def length_to_mask(lengths: torch.Tensor, max_len: tp.Optional[int] = None) -> torch.Tensor: - """Utility function to convert a tensor of sequence lengths to a mask (useful when working on padded sequences). - For example: [3, 5] => [[1, 1, 1, 0, 0], [1, 1, 1, 1, 1]] - - Args: - lengths (torch.Tensor): tensor with lengths - max_len (int): can set the max length manually. Defaults to None. - Returns: - torch.Tensor: mask with 0s where there is pad tokens else 1s - """ - assert len(lengths.shape) == 1, "Length shape should be 1 dimensional." - final_length = lengths.max().item() if not max_len else max_len - final_length = max(final_length, 1) # if all seqs are of len zero we don't want a zero-size tensor - return torch.arange(final_length)[None, :].to(lengths.device) < lengths[:, None] - - -def hash_trick(word: str, vocab_size: int) -> int: - """Hash trick to pair each word with an index - - Args: - word (str): word we wish to convert to an index - vocab_size (int): size of the vocabulary - Returns: - int: index of the word in the embedding LUT - """ - hash = int(hashlib.sha256(word.encode("utf-8")).hexdigest(), 16) - return hash % vocab_size - - -def with_rank_rng(base_seed: int = 1234): - """Decorator for a function so that the function will use a Random Number Generator - whose state depend on the GPU rank. The original RNG state is restored upon returning. - - Args: - base_seed (int): Random seed. - """ - def _decorator(fun: tp.Callable): - @wraps(fun) - def _decorated(*args, **kwargs): - state = torch.get_rng_state() - seed = base_seed ^ flashy.distrib.rank() - torch.manual_seed(seed) - logger.debug('Rank dependent seed set to %d', seed) - try: - return fun(*args, **kwargs) - finally: - torch.set_rng_state(state) - logger.debug('RNG state restored.') - return _decorated - return _decorator - - -def collate(tensors: tp.List[torch.Tensor], dim: int = 0) -> tp.Tuple[torch.Tensor, torch.Tensor]: - """Get a list of tensors and collate them to a single tensor. according to the following logic: - - `dim` specifies the time dimension which will be stacked and padded. - - The output will contain 1 new dimension (dimension index 0) which will be the size of - of the original list. - - Args: - tensors (tp.List[torch.Tensor]): List of tensors to collate. - dim (int): Dimension which will be stacked and padded. - Returns: - tp.Tuple[torch.Tensor, torch.Tensor]: - torch.Tensor: Stacked and padded tensor. The output will contain 1 new dimension - (dimension index 0) which will be the size of the original list. - torch.Tensor: Tensor containing length of original tensor sizes (without padding). 
- """ - tensors = [x.transpose(0, dim) for x in tensors] - lens = torch.LongTensor([len(x) for x in tensors]) - padded_tensors = pad_sequence(tensors) - padded_tensors = padded_tensors.transpose(0, 1) - padded_tensors = padded_tensors.transpose(1, dim + 1) - return padded_tensors, lens - - -# TODO: Move to flashy? -def copy_state(state: tp.Any, device: tp.Union[torch.device, str] = 'cpu', - dtype: tp.Optional[torch.dtype] = None) -> tp.Any: - if isinstance(state, torch.Tensor): - if dtype is None or not state.is_floating_point(): - dtype = state.dtype - return state.detach().to(device=device, dtype=dtype, copy=True) - elif isinstance(state, dict): - return {k: copy_state(v, device, dtype) for k, v in state.items()} - elif isinstance(state, list): - return [copy_state(v, device, dtype) for v in state] - - -# TODO: Move to flashy? -@contextmanager -def swap_state(model, state, **kwargs): - old_state = copy_state(model.state_dict()) - model.load_state_dict(state, **kwargs) - try: - yield - finally: - model.load_state_dict(old_state) - - -@lru_cache(None) -def warn_once(logger, msg): - """Warn about a given message only once.""" - logger.warning(msg) - - -def is_jsonable(x: tp.Any): - """Check if an object can be serialized into a json:""" - try: - json.dumps(x) - return True - except (TypeError, OverflowError): - return False - - -def load_clap_state_dict(clap_model, path: tp.Union[str, Path]): - """Wrapper around state dict loading of CLAP model - addressing compatibility issues between CLAP and AudioCraft - HuggingFace transformer version. - See: https://github.com/LAION-AI/CLAP/issues/118 - """ - from clap_module.factory import load_state_dict # type: ignore - pkg = load_state_dict(path) - pkg.pop('text_branch.embeddings.position_ids', None) - clap_model.model.load_state_dict(pkg) diff --git a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/requests/hooks.py b/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/requests/hooks.py deleted file mode 100644 index d181ba2ec2e55d274897315887b78fbdca757da8..0000000000000000000000000000000000000000 --- a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/requests/hooks.py +++ /dev/null @@ -1,33 +0,0 @@ -""" -requests.hooks -~~~~~~~~~~~~~~ - -This module provides the capabilities for the Requests hooks system. - -Available hooks: - -``response``: - The response generated from a Request. -""" -HOOKS = ["response"] - - -def default_hooks(): - return {event: [] for event in HOOKS} - - -# TODO: response is the only one - - -def dispatch_hook(key, hooks, hook_data, **kwargs): - """Dispatches a hook dictionary on a given piece of data.""" - hooks = hooks or {} - hooks = hooks.get(key) - if hooks: - if hasattr(hooks, "__call__"): - hooks = [hooks] - for hook in hooks: - _hook_data = hook(hook_data, **kwargs) - if _hook_data is not None: - hook_data = _hook_data - return hook_data diff --git a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/setuptools/_vendor/pyparsing/__init__.py b/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/setuptools/_vendor/pyparsing/__init__.py deleted file mode 100644 index 7802ff158d83eb88e6dbe78d9cd33ca14341662a..0000000000000000000000000000000000000000 --- a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/setuptools/_vendor/pyparsing/__init__.py +++ /dev/null @@ -1,331 +0,0 @@ -# module pyparsing.py -# -# Copyright (c) 2003-2022 Paul T. 
McGuire -# -# Permission is hereby granted, free of charge, to any person obtaining -# a copy of this software and associated documentation files (the -# "Software"), to deal in the Software without restriction, including -# without limitation the rights to use, copy, modify, merge, publish, -# distribute, sublicense, and/or sell copies of the Software, and to -# permit persons to whom the Software is furnished to do so, subject to -# the following conditions: -# -# The above copyright notice and this permission notice shall be -# included in all copies or substantial portions of the Software. -# -# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, -# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF -# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. -# IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY -# CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, -# TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE -# SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. -# - -__doc__ = """ -pyparsing module - Classes and methods to define and execute parsing grammars -============================================================================= - -The pyparsing module is an alternative approach to creating and -executing simple grammars, vs. the traditional lex/yacc approach, or the -use of regular expressions. With pyparsing, you don't need to learn -a new syntax for defining grammars or matching expressions - the parsing -module provides a library of classes that you use to construct the -grammar directly in Python. - -Here is a program to parse "Hello, World!" (or any greeting of the form -``", !"``), built up using :class:`Word`, -:class:`Literal`, and :class:`And` elements -(the :meth:`'+'` operators create :class:`And` expressions, -and the strings are auto-converted to :class:`Literal` expressions):: - - from pyparsing import Word, alphas - - # define grammar of a greeting - greet = Word(alphas) + "," + Word(alphas) + "!" - - hello = "Hello, World!" - print(hello, "->", greet.parse_string(hello)) - -The program outputs the following:: - - Hello, World! -> ['Hello', ',', 'World', '!'] - -The Python representation of the grammar is quite readable, owing to the -self-explanatory class names, and the use of :class:`'+'`, -:class:`'|'`, :class:`'^'` and :class:`'&'` operators. - -The :class:`ParseResults` object returned from -:class:`ParserElement.parseString` can be -accessed as a nested list, a dictionary, or an object with named -attributes. - -The pyparsing module handles some of the problems that are typically -vexing when writing text parsers: - - - extra or missing whitespace (the above program will also handle - "Hello,World!", "Hello , World !", etc.) - - quoted strings - - embedded comments - - -Getting Started - ------------------ -Visit the classes :class:`ParserElement` and :class:`ParseResults` to -see the base classes that most other pyparsing -classes inherit from. 
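For instance, a comma-separated list of integers can be parsed with just a couple of these classes (a sketch, assuming pyparsing 3.x is installed):

from pyparsing import Word, nums, delimited_list

integer = Word(nums)
csv_numbers = delimited_list(integer)                    # comma-delimited by default
print(csv_numbers.parse_string("1, 2, 3").as_list())     # ['1', '2', '3']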
Use the docstrings for examples of how to: - - - construct literal match expressions from :class:`Literal` and - :class:`CaselessLiteral` classes - - construct character word-group expressions using the :class:`Word` - class - - see how to create repetitive expressions using :class:`ZeroOrMore` - and :class:`OneOrMore` classes - - use :class:`'+'`, :class:`'|'`, :class:`'^'`, - and :class:`'&'` operators to combine simple expressions into - more complex ones - - associate names with your parsed results using - :class:`ParserElement.setResultsName` - - access the parsed data, which is returned as a :class:`ParseResults` - object - - find some helpful expression short-cuts like :class:`delimitedList` - and :class:`oneOf` - - find more useful common expressions in the :class:`pyparsing_common` - namespace class -""" -from typing import NamedTuple - - -class version_info(NamedTuple): - major: int - minor: int - micro: int - releaselevel: str - serial: int - - @property - def __version__(self): - return ( - "{}.{}.{}".format(self.major, self.minor, self.micro) - + ( - "{}{}{}".format( - "r" if self.releaselevel[0] == "c" else "", - self.releaselevel[0], - self.serial, - ), - "", - )[self.releaselevel == "final"] - ) - - def __str__(self): - return "{} {} / {}".format(__name__, self.__version__, __version_time__) - - def __repr__(self): - return "{}.{}({})".format( - __name__, - type(self).__name__, - ", ".join("{}={!r}".format(*nv) for nv in zip(self._fields, self)), - ) - - -__version_info__ = version_info(3, 0, 9, "final", 0) -__version_time__ = "05 May 2022 07:02 UTC" -__version__ = __version_info__.__version__ -__versionTime__ = __version_time__ -__author__ = "Paul McGuire " - -from .util import * -from .exceptions import * -from .actions import * -from .core import __diag__, __compat__ -from .results import * -from .core import * -from .core import _builtin_exprs as core_builtin_exprs -from .helpers import * -from .helpers import _builtin_exprs as helper_builtin_exprs - -from .unicode import unicode_set, UnicodeRangeList, pyparsing_unicode as unicode -from .testing import pyparsing_test as testing -from .common import ( - pyparsing_common as common, - _builtin_exprs as common_builtin_exprs, -) - -# define backward compat synonyms -if "pyparsing_unicode" not in globals(): - pyparsing_unicode = unicode -if "pyparsing_common" not in globals(): - pyparsing_common = common -if "pyparsing_test" not in globals(): - pyparsing_test = testing - -core_builtin_exprs += common_builtin_exprs + helper_builtin_exprs - - -__all__ = [ - "__version__", - "__version_time__", - "__author__", - "__compat__", - "__diag__", - "And", - "AtLineStart", - "AtStringStart", - "CaselessKeyword", - "CaselessLiteral", - "CharsNotIn", - "Combine", - "Dict", - "Each", - "Empty", - "FollowedBy", - "Forward", - "GoToColumn", - "Group", - "IndentedBlock", - "Keyword", - "LineEnd", - "LineStart", - "Literal", - "Located", - "PrecededBy", - "MatchFirst", - "NoMatch", - "NotAny", - "OneOrMore", - "OnlyOnce", - "OpAssoc", - "Opt", - "Optional", - "Or", - "ParseBaseException", - "ParseElementEnhance", - "ParseException", - "ParseExpression", - "ParseFatalException", - "ParseResults", - "ParseSyntaxException", - "ParserElement", - "PositionToken", - "QuotedString", - "RecursiveGrammarException", - "Regex", - "SkipTo", - "StringEnd", - "StringStart", - "Suppress", - "Token", - "TokenConverter", - "White", - "Word", - "WordEnd", - "WordStart", - "ZeroOrMore", - "Char", - "alphanums", - "alphas", - "alphas8bit", - "any_close_tag", - 
"any_open_tag", - "c_style_comment", - "col", - "common_html_entity", - "counted_array", - "cpp_style_comment", - "dbl_quoted_string", - "dbl_slash_comment", - "delimited_list", - "dict_of", - "empty", - "hexnums", - "html_comment", - "identchars", - "identbodychars", - "java_style_comment", - "line", - "line_end", - "line_start", - "lineno", - "make_html_tags", - "make_xml_tags", - "match_only_at_col", - "match_previous_expr", - "match_previous_literal", - "nested_expr", - "null_debug_action", - "nums", - "one_of", - "printables", - "punc8bit", - "python_style_comment", - "quoted_string", - "remove_quotes", - "replace_with", - "replace_html_entity", - "rest_of_line", - "sgl_quoted_string", - "srange", - "string_end", - "string_start", - "trace_parse_action", - "unicode_string", - "with_attribute", - "indentedBlock", - "original_text_for", - "ungroup", - "infix_notation", - "locatedExpr", - "with_class", - "CloseMatch", - "token_map", - "pyparsing_common", - "pyparsing_unicode", - "unicode_set", - "condition_as_parse_action", - "pyparsing_test", - # pre-PEP8 compatibility names - "__versionTime__", - "anyCloseTag", - "anyOpenTag", - "cStyleComment", - "commonHTMLEntity", - "countedArray", - "cppStyleComment", - "dblQuotedString", - "dblSlashComment", - "delimitedList", - "dictOf", - "htmlComment", - "javaStyleComment", - "lineEnd", - "lineStart", - "makeHTMLTags", - "makeXMLTags", - "matchOnlyAtCol", - "matchPreviousExpr", - "matchPreviousLiteral", - "nestedExpr", - "nullDebugAction", - "oneOf", - "opAssoc", - "pythonStyleComment", - "quotedString", - "removeQuotes", - "replaceHTMLEntity", - "replaceWith", - "restOfLine", - "sglQuotedString", - "stringEnd", - "stringStart", - "traceParseAction", - "unicodeString", - "withAttribute", - "indentedBlock", - "originalTextFor", - "infixNotation", - "locatedExpr", - "withClass", - "tokenMap", - "conditionAsParseAction", - "autoname_elements", -] diff --git a/spaces/Realcat/image-matching-webui/hloc/pipelines/Aachen/pipeline.py b/spaces/Realcat/image-matching-webui/hloc/pipelines/Aachen/pipeline.py deleted file mode 100644 index 1dbf8ab8838d5400482dd2e6ef2e9cb28c40cfea..0000000000000000000000000000000000000000 --- a/spaces/Realcat/image-matching-webui/hloc/pipelines/Aachen/pipeline.py +++ /dev/null @@ -1,102 +0,0 @@ -from pathlib import Path -from pprint import pformat -import argparse - -from ... import extract_features, match_features -from ... import pairs_from_covisibility, pairs_from_retrieval -from ... 
import colmap_from_nvm, triangulation, localize_sfm - - -parser = argparse.ArgumentParser() -parser.add_argument( - "--dataset", - type=Path, - default="datasets/aachen", - help="Path to the dataset, default: %(default)s", -) -parser.add_argument( - "--outputs", - type=Path, - default="outputs/aachen", - help="Path to the output directory, default: %(default)s", -) -parser.add_argument( - "--num_covis", - type=int, - default=20, - help="Number of image pairs for SfM, default: %(default)s", -) -parser.add_argument( - "--num_loc", - type=int, - default=50, - help="Number of image pairs for loc, default: %(default)s", -) -args = parser.parse_args() - -# Setup the paths -dataset = args.dataset -images = dataset / "images/images_upright/" - -outputs = args.outputs # where everything will be saved -sift_sfm = outputs / "sfm_sift" # from which we extract the reference poses -reference_sfm = ( - outputs / "sfm_superpoint+superglue" -) # the SfM model we will build -sfm_pairs = ( - outputs / f"pairs-db-covis{args.num_covis}.txt" -) # top-k most covisible in SIFT model -loc_pairs = ( - outputs / f"pairs-query-netvlad{args.num_loc}.txt" -) # top-k retrieved by NetVLAD -results = ( - outputs / f"Aachen_hloc_superpoint+superglue_netvlad{args.num_loc}.txt" -) - -# list the standard configurations available -print(f"Configs for feature extractors:\n{pformat(extract_features.confs)}") -print(f"Configs for feature matchers:\n{pformat(match_features.confs)}") - -# pick one of the configurations for extraction and matching -retrieval_conf = extract_features.confs["netvlad"] -feature_conf = extract_features.confs["superpoint_aachen"] -matcher_conf = match_features.confs["superglue"] - -features = extract_features.main(feature_conf, images, outputs) - -colmap_from_nvm.main( - dataset / "3D-models/aachen_cvpr2018_db.nvm", - dataset / "3D-models/database_intrinsics.txt", - dataset / "aachen.db", - sift_sfm, -) -pairs_from_covisibility.main(sift_sfm, sfm_pairs, num_matched=args.num_covis) -sfm_matches = match_features.main( - matcher_conf, sfm_pairs, feature_conf["output"], outputs -) - -triangulation.main( - reference_sfm, sift_sfm, images, sfm_pairs, features, sfm_matches -) - -global_descriptors = extract_features.main(retrieval_conf, images, outputs) -pairs_from_retrieval.main( - global_descriptors, - loc_pairs, - args.num_loc, - query_prefix="query", - db_model=reference_sfm, -) -loc_matches = match_features.main( - matcher_conf, loc_pairs, feature_conf["output"], outputs -) - -localize_sfm.main( - reference_sfm, - dataset / "queries/*_time_queries_with_intrinsics.txt", - loc_pairs, - features, - loc_matches, - results, - covisibility_clustering=False, -) # not required with SuperPoint+SuperGlue diff --git a/spaces/Realcat/image-matching-webui/third_party/ASpanFormer/scripts/reproduce_test/indoor.sh b/spaces/Realcat/image-matching-webui/third_party/ASpanFormer/scripts/reproduce_test/indoor.sh deleted file mode 100644 index 41e5c76a146fb84a2296f7fc63e6da881c0c8e03..0000000000000000000000000000000000000000 --- a/spaces/Realcat/image-matching-webui/third_party/ASpanFormer/scripts/reproduce_test/indoor.sh +++ /dev/null @@ -1,31 +0,0 @@ -#!/bin/bash -l -# a indoor_ds model with the pos_enc impl bug fixed. 
- -SCRIPTPATH=$(dirname $(readlink -f "$0")) -PROJECT_DIR="${SCRIPTPATH}/../../" - -# conda activate loftr -export PYTHONPATH=$PROJECT_DIR:$PYTHONPATH -cd $PROJECT_DIR - -data_cfg_path="configs/data/scannet_test_1500.py" -main_cfg_path="configs/aspan/indoor/aspan_test.py" -ckpt_path='weights/indoor.ckpt' -dump_dir="dump/indoor_dump" -profiler_name="inference" -n_nodes=1 # mannually keep this the same with --nodes -n_gpus_per_node=-1 -torch_num_workers=4 -batch_size=1 # per gpu - -python -u ./test.py \ - ${data_cfg_path} \ - ${main_cfg_path} \ - --ckpt_path=${ckpt_path} \ - --dump_dir=${dump_dir} \ - --gpus=${n_gpus_per_node} --num_nodes=${n_nodes} --accelerator="ddp" \ - --batch_size=${batch_size} --num_workers=${torch_num_workers}\ - --profiler_name=${profiler_name} \ - --benchmark \ - --mode integrated - \ No newline at end of file diff --git a/spaces/Realcat/image-matching-webui/third_party/Roma/roma/models/transformer/layers/block.py b/spaces/Realcat/image-matching-webui/third_party/Roma/roma/models/transformer/layers/block.py deleted file mode 100644 index 1b5f5158f073788d3d5fe3e09742d4485ef26441..0000000000000000000000000000000000000000 --- a/spaces/Realcat/image-matching-webui/third_party/Roma/roma/models/transformer/layers/block.py +++ /dev/null @@ -1,284 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -# References: -# https://github.com/facebookresearch/dino/blob/master/vision_transformer.py -# https://github.com/rwightman/pytorch-image-models/tree/master/timm/layers/patch_embed.py - -import logging -from typing import Callable, List, Any, Tuple, Dict - -import torch -from torch import nn, Tensor - -from .attention import Attention, MemEffAttention -from .drop_path import DropPath -from .layer_scale import LayerScale -from .mlp import Mlp - - -logger = logging.getLogger("dinov2") - - -try: - from xformers.ops import fmha - from xformers.ops import scaled_index_add, index_select_cat - - XFORMERS_AVAILABLE = True -except ImportError: - logger.warning("xFormers not available") - XFORMERS_AVAILABLE = False - - -class Block(nn.Module): - def __init__( - self, - dim: int, - num_heads: int, - mlp_ratio: float = 4.0, - qkv_bias: bool = False, - proj_bias: bool = True, - ffn_bias: bool = True, - drop: float = 0.0, - attn_drop: float = 0.0, - init_values=None, - drop_path: float = 0.0, - act_layer: Callable[..., nn.Module] = nn.GELU, - norm_layer: Callable[..., nn.Module] = nn.LayerNorm, - attn_class: Callable[..., nn.Module] = Attention, - ffn_layer: Callable[..., nn.Module] = Mlp, - ) -> None: - super().__init__() - # print(f"biases: qkv: {qkv_bias}, proj: {proj_bias}, ffn: {ffn_bias}") - self.norm1 = norm_layer(dim) - self.attn = attn_class( - dim, - num_heads=num_heads, - qkv_bias=qkv_bias, - proj_bias=proj_bias, - attn_drop=attn_drop, - proj_drop=drop, - ) - self.ls1 = ( - LayerScale(dim, init_values=init_values) if init_values else nn.Identity() - ) - self.drop_path1 = DropPath(drop_path) if drop_path > 0.0 else nn.Identity() - - self.norm2 = norm_layer(dim) - mlp_hidden_dim = int(dim * mlp_ratio) - self.mlp = ffn_layer( - in_features=dim, - hidden_features=mlp_hidden_dim, - act_layer=act_layer, - drop=drop, - bias=ffn_bias, - ) - self.ls2 = ( - LayerScale(dim, init_values=init_values) if init_values else nn.Identity() - ) - self.drop_path2 = DropPath(drop_path) if drop_path > 0.0 else nn.Identity() - - 
self.sample_drop_ratio = drop_path - - def forward(self, x: Tensor) -> Tensor: - def attn_residual_func(x: Tensor) -> Tensor: - return self.ls1(self.attn(self.norm1(x))) - - def ffn_residual_func(x: Tensor) -> Tensor: - return self.ls2(self.mlp(self.norm2(x))) - - if self.training and self.sample_drop_ratio > 0.1: - # the overhead is compensated only for a drop path rate larger than 0.1 - x = drop_add_residual_stochastic_depth( - x, - residual_func=attn_residual_func, - sample_drop_ratio=self.sample_drop_ratio, - ) - x = drop_add_residual_stochastic_depth( - x, - residual_func=ffn_residual_func, - sample_drop_ratio=self.sample_drop_ratio, - ) - elif self.training and self.sample_drop_ratio > 0.0: - x = x + self.drop_path1(attn_residual_func(x)) - x = x + self.drop_path1(ffn_residual_func(x)) # FIXME: drop_path2 - else: - x = x + attn_residual_func(x) - x = x + ffn_residual_func(x) - return x - - -def drop_add_residual_stochastic_depth( - x: Tensor, - residual_func: Callable[[Tensor], Tensor], - sample_drop_ratio: float = 0.0, -) -> Tensor: - # 1) extract subset using permutation - b, n, d = x.shape - sample_subset_size = max(int(b * (1 - sample_drop_ratio)), 1) - brange = (torch.randperm(b, device=x.device))[:sample_subset_size] - x_subset = x[brange] - - # 2) apply residual_func to get residual - residual = residual_func(x_subset) - - x_flat = x.flatten(1) - residual = residual.flatten(1) - - residual_scale_factor = b / sample_subset_size - - # 3) add the residual - x_plus_residual = torch.index_add( - x_flat, 0, brange, residual.to(dtype=x.dtype), alpha=residual_scale_factor - ) - return x_plus_residual.view_as(x) - - -def get_branges_scales(x, sample_drop_ratio=0.0): - b, n, d = x.shape - sample_subset_size = max(int(b * (1 - sample_drop_ratio)), 1) - brange = (torch.randperm(b, device=x.device))[:sample_subset_size] - residual_scale_factor = b / sample_subset_size - return brange, residual_scale_factor - - -def add_residual(x, brange, residual, residual_scale_factor, scaling_vector=None): - if scaling_vector is None: - x_flat = x.flatten(1) - residual = residual.flatten(1) - x_plus_residual = torch.index_add( - x_flat, 0, brange, residual.to(dtype=x.dtype), alpha=residual_scale_factor - ) - else: - x_plus_residual = scaled_index_add( - x, - brange, - residual.to(dtype=x.dtype), - scaling=scaling_vector, - alpha=residual_scale_factor, - ) - return x_plus_residual - - -attn_bias_cache: Dict[Tuple, Any] = {} - - -def get_attn_bias_and_cat(x_list, branges=None): - """ - this will perform the index select, cat the tensors, and provide the attn_bias from cache - """ - batch_sizes = ( - [b.shape[0] for b in branges] - if branges is not None - else [x.shape[0] for x in x_list] - ) - all_shapes = tuple((b, x.shape[1]) for b, x in zip(batch_sizes, x_list)) - if all_shapes not in attn_bias_cache.keys(): - seqlens = [] - for b, x in zip(batch_sizes, x_list): - for _ in range(b): - seqlens.append(x.shape[1]) - attn_bias = fmha.BlockDiagonalMask.from_seqlens(seqlens) - attn_bias._batch_sizes = batch_sizes - attn_bias_cache[all_shapes] = attn_bias - - if branges is not None: - cat_tensors = index_select_cat([x.flatten(1) for x in x_list], branges).view( - 1, -1, x_list[0].shape[-1] - ) - else: - tensors_bs1 = tuple(x.reshape([1, -1, *x.shape[2:]]) for x in x_list) - cat_tensors = torch.cat(tensors_bs1, dim=1) - - return attn_bias_cache[all_shapes], cat_tensors - - -def drop_add_residual_stochastic_depth_list( - x_list: List[Tensor], - residual_func: Callable[[Tensor, Any], Tensor], - 
sample_drop_ratio: float = 0.0, - scaling_vector=None, -) -> Tensor: - # 1) generate random set of indices for dropping samples in the batch - branges_scales = [ - get_branges_scales(x, sample_drop_ratio=sample_drop_ratio) for x in x_list - ] - branges = [s[0] for s in branges_scales] - residual_scale_factors = [s[1] for s in branges_scales] - - # 2) get attention bias and index+concat the tensors - attn_bias, x_cat = get_attn_bias_and_cat(x_list, branges) - - # 3) apply residual_func to get residual, and split the result - residual_list = attn_bias.split(residual_func(x_cat, attn_bias=attn_bias)) # type: ignore - - outputs = [] - for x, brange, residual, residual_scale_factor in zip( - x_list, branges, residual_list, residual_scale_factors - ): - outputs.append( - add_residual( - x, brange, residual, residual_scale_factor, scaling_vector - ).view_as(x) - ) - return outputs - - -class NestedTensorBlock(Block): - def forward_nested(self, x_list: List[Tensor]) -> List[Tensor]: - """ - x_list contains a list of tensors to nest together and run - """ - assert isinstance(self.attn, MemEffAttention) - - if self.training and self.sample_drop_ratio > 0.0: - - def attn_residual_func(x: Tensor, attn_bias=None) -> Tensor: - return self.attn(self.norm1(x), attn_bias=attn_bias) - - def ffn_residual_func(x: Tensor, attn_bias=None) -> Tensor: - return self.mlp(self.norm2(x)) - - x_list = drop_add_residual_stochastic_depth_list( - x_list, - residual_func=attn_residual_func, - sample_drop_ratio=self.sample_drop_ratio, - scaling_vector=self.ls1.gamma - if isinstance(self.ls1, LayerScale) - else None, - ) - x_list = drop_add_residual_stochastic_depth_list( - x_list, - residual_func=ffn_residual_func, - sample_drop_ratio=self.sample_drop_ratio, - scaling_vector=self.ls2.gamma - if isinstance(self.ls1, LayerScale) - else None, - ) - return x_list - else: - - def attn_residual_func(x: Tensor, attn_bias=None) -> Tensor: - return self.ls1(self.attn(self.norm1(x), attn_bias=attn_bias)) - - def ffn_residual_func(x: Tensor, attn_bias=None) -> Tensor: - return self.ls2(self.mlp(self.norm2(x))) - - attn_bias, x = get_attn_bias_and_cat(x_list) - x = x + attn_residual_func(x, attn_bias=attn_bias) - x = x + ffn_residual_func(x) - return attn_bias.split(x) - - def forward(self, x_or_x_list): - if isinstance(x_or_x_list, Tensor): - return super().forward(x_or_x_list) - elif isinstance(x_or_x_list, list): - assert ( - XFORMERS_AVAILABLE - ), "Please install xFormers for nested tensors usage" - return self.forward_nested(x_or_x_list) - else: - raise AssertionError diff --git a/spaces/Ridwanz/sdrv1_4/app.py b/spaces/Ridwanz/sdrv1_4/app.py deleted file mode 100644 index 3812eb4041fd9ca07b2cada96fb099a2d76ac0d1..0000000000000000000000000000000000000000 --- a/spaces/Ridwanz/sdrv1_4/app.py +++ /dev/null @@ -1,196 +0,0 @@ -from diffusers import StableDiffusionPipeline, StableDiffusionImg2ImgPipeline, DPMSolverMultistepScheduler -import gradio as gr -import torch -from PIL import Image - -model_id = 'SG161222/Realistic_Vision_V1.4' -prefix = 'RAW photo,' - -scheduler = DPMSolverMultistepScheduler.from_pretrained(model_id, subfolder="scheduler") - -pipe = StableDiffusionPipeline.from_pretrained( - model_id, - torch_dtype=torch.float16 if torch.cuda.is_available() else torch.float32, - scheduler=scheduler) - -pipe_i2i = StableDiffusionImg2ImgPipeline.from_pretrained( - model_id, - torch_dtype=torch.float16 if torch.cuda.is_available() else torch.float32, - scheduler=scheduler) - -if torch.cuda.is_available(): - pipe = pipe.to("cuda") 
- pipe_i2i = pipe_i2i.to("cuda") - -def error_str(error, title="Error"): - return f"""#### {title} - {error}""" if error else "" - - -def _parse_args(prompt, generator): - parser = argparse.ArgumentParser( - description="making it work." - ) - parser.add_argument( - "--no-half-vae", help="no half vae" - ) - - cmdline_args = parser.parse_args() - command = cmdline_args.command - conf_file = cmdline_args.conf_file - conf_args = Arguments(conf_file) - opt = conf_args.readArguments() - - if cmdline_args.config_overrides: - for config_override in cmdline_args.config_overrides.split(";"): - config_override = config_override.strip() - if config_override: - var_val = config_override.split("=") - assert ( - len(var_val) == 2 - ), f"Config override '{var_val}' does not have the form 'VAR=val'" - conf_args.add_opt(opt, var_val[0], var_val[1], force_override=True) - -def inference(prompt, guidance, steps, width=512, height=512, seed=0, img=None, strength=0.5, neg_prompt="", auto_prefix=False): - generator = torch.Generator('cuda').manual_seed(seed) if seed != 0 else None - prompt = f"{prefix} {prompt}" if auto_prefix else prompt - - try: - if img is not None: - return img_to_img(prompt, neg_prompt, img, strength, guidance, steps, width, height, generator), None - else: - return txt_to_img(prompt, neg_prompt, guidance, steps, width, height, generator), None - except Exception as e: - return None, error_str(e) - - - -def txt_to_img(prompt, neg_prompt, guidance, steps, width, height, generator): - - result = pipe( - prompt, - negative_prompt = neg_prompt, - num_inference_steps = int(steps), - guidance_scale = guidance, - width = width, - height = height, - generator = generator) - - return result.images[0] - -def img_to_img(prompt, neg_prompt, img, strength, guidance, steps, width, height, generator): - - ratio = min(height / img.height, width / img.width) - img = img.resize((int(img.width * ratio), int(img.height * ratio)), Image.LANCZOS) - result = pipe_i2i( - prompt, - negative_prompt = neg_prompt, - init_image = img, - num_inference_steps = int(steps), - strength = strength, - guidance_scale = guidance, - width = width, - height = height, - generator = generator) - - return result.images[0] - - def fake_safety_checker(images, **kwargs): - return result.images[0], [False] * len(images) - - pipe.safety_checker = fake_safety_checker - -css = """.main-div div{display:inline-flex;align-items:center;gap:.8rem;font-size:1.75rem}.main-div div h1{font-weight:900;margin-bottom:7px}.main-div p{margin-bottom:10px;font-size:94%}a{text-decoration:underline}.tabs{margin-top:0;margin-bottom:0}#gallery{min-height:20rem} -""" -with gr.Blocks(css=css) as demo: - gr.HTML( - f""" -
-            Realistic Vision V1.4 ⚡
-            Demo for Realistic Vision V1.4 Stable Diffusion model by Eugene. {"" if prefix else ""} Running on {"GPU 🔥" if torch.cuda.is_available() else f"CPU ⚡"}.
-            Please use the prompt template below to get an example of the desired generation results:
-            Prompt:
-            RAW photo, *subject*, (high detailed skin:1.2), 8k uhd, dslr, soft lighting, high quality, film grain, Fujifilm XT3
-            Example: RAW photo, a close up portrait photo of 26 y.o woman in wastelander clothes, long haircut, pale skin, slim body, background is city ruins,
-            (high detailed skin:1.2), 8k uhd, dslr, soft lighting, high quality, film grain, Fujifilm XT3
-            Important note: The "RAW photo" in the prompt may degrade the result in v1.4.
-            Negative Prompt:
-            (deformed iris, deformed pupils, semi-realistic, cgi, 3d, render, sketch, cartoon, drawing, anime:1.4), text, close up, cropped, out of frame, worst quality,
-            low quality, jpeg artifacts, ugly, duplicate, morbid, mutilated, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, mutation, deformed, blurry,
-            dehydrated, bad anatomy, bad proportions, extra limbs, cloned face, disfigured, gross proportions, malformed limbs, missing arms, missing legs, extra arms,
-            extra legs, fused fingers, too many fingers, long neck
-            Have Fun & Enjoy ⚡ //THAFX
    - """ - ) - with gr.Row(): - - with gr.Column(scale=55): - with gr.Group(): - with gr.Row(): - prompt = gr.Textbox(label="Prompt", show_label=False,max_lines=2,placeholder=f"{prefix} [your prompt]").style(container=False) - generate = gr.Button(value="Generate").style(rounded=(False, True, True, False)) - - image_out = gr.Image(height=512) - error_output = gr.Markdown() - - with gr.Column(scale=45): - with gr.Tab("Options"): - with gr.Group(): - neg_prompt = gr.Textbox(label="Negative prompt", placeholder="What to exclude from the image") - auto_prefix = gr.Checkbox(label="Prefix styling tokens automatically (RAW photo,)", value=prefix, visible=prefix) - - with gr.Row(): - guidance = gr.Slider(label="Guidance scale", value=7.5, maximum=15) - steps = gr.Slider(label="Steps", value=25, minimum=2, maximum=75, step=1) - - with gr.Row(): - width = gr.Slider(label="Width", value=512, minimum=64, maximum=1024, step=8) - height = gr.Slider(label="Height", value=512, minimum=64, maximum=1024, step=8) - - seed = gr.Slider(0, 2147483647, label='Seed (0 = random)', value=0, step=1) - - with gr.Tab("Image to image"): - with gr.Group(): - image = gr.Image(label="Image", height=256, tool="editor", type="pil") - strength = gr.Slider(label="Transformation strength", minimum=0, maximum=1, step=0.01, value=0.5) - #with gr.Tab("Prompts"): - #with gr.Group(): - #neg_prompt = gr.Textbox(label="Negative prompt", placeholder="What to exclude from the image") - # gr.JSON(value=lambda: random.choice([ test ])), - - - - auto_prefix.change(lambda x: gr.update(placeholder=f"{prefix} [your prompt]" if x else "[Your prompt]"), inputs=auto_prefix, outputs=prompt, queue=False) - - inputs = [prompt, guidance, steps, width, height, seed, image, strength, neg_prompt, auto_prefix] - outputs = [image_out, error_output] - prompt.submit(inference, inputs=inputs, outputs=outputs) - generate.click(inference, inputs=inputs, outputs=outputs) - - - -demo.queue(concurrency_count=1) -demo.launch() diff --git a/spaces/Ritori/TTS_Yui/text/__init__.py b/spaces/Ritori/TTS_Yui/text/__init__.py deleted file mode 100644 index 02ecf0e741145fe0d6c1ede752acd7027b934af6..0000000000000000000000000000000000000000 --- a/spaces/Ritori/TTS_Yui/text/__init__.py +++ /dev/null @@ -1,74 +0,0 @@ -""" from https://github.com/keithito/tacotron """ -import re -from text import cleaners -from text.symbols import symbols - - -# Mappings from symbol to numeric ID and vice versa: -_symbol_to_id = {s: i for i, s in enumerate(symbols)} -_id_to_symbol = {i: s for i, s in enumerate(symbols)} - -# Regular expression matching text enclosed in curly braces: -_curly_re = re.compile(r'(.*?)\{(.+?)\}(.*)') - - -def text_to_sequence(text, cleaner_names): - '''Converts a string of text to a sequence of IDs corresponding to the symbols in the text. - - The text can optionally have ARPAbet sequences enclosed in curly braces embedded - in it. For example, "Turn left on {HH AW1 S S T AH0 N} Street." 
- - Args: - text: string to convert to a sequence - cleaner_names: names of the cleaner functions to run the text through - - Returns: - List of integers corresponding to the symbols in the text - ''' - sequence = [] - - # Check for curly braces and treat their contents as ARPAbet: - while len(text): - m = _curly_re.match(text) - if not m: - sequence += _symbols_to_sequence(_clean_text(text, cleaner_names)) - break - sequence += _symbols_to_sequence(_clean_text(m.group(1), cleaner_names)) - sequence += _arpabet_to_sequence(m.group(2)) - text = m.group(3) - - return sequence - - -def sequence_to_text(sequence): - '''Converts a sequence of IDs back to a string''' - result = '' - for symbol_id in sequence: - if symbol_id in _id_to_symbol: - s = _id_to_symbol[symbol_id] - # Enclose ARPAbet back in curly braces: - if len(s) > 1 and s[0] == '@': - s = '{%s}' % s[1:] - result += s - return result.replace('}{', ' ') - - -def _clean_text(text, cleaner_names): - for name in cleaner_names: - cleaner = getattr(cleaners, name) - if not cleaner: - raise Exception('Unknown cleaner: %s' % name) - text = cleaner(text) - return text - - -def _symbols_to_sequence(symbols): - return [_symbol_to_id[s] for s in symbols if _should_keep_symbol(s)] - - -def _arpabet_to_sequence(text): - return _symbols_to_sequence(['@' + s for s in text.split()]) - - -def _should_keep_symbol(s): - return s in _symbol_to_id and s is not '_' and s is not '~' diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet/utils/util_mixins.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet/utils/util_mixins.py deleted file mode 100644 index 69669a3ca943eebe0f138b2784c5b61724196bbe..0000000000000000000000000000000000000000 --- a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet/utils/util_mixins.py +++ /dev/null @@ -1,104 +0,0 @@ -"""This module defines the :class:`NiceRepr` mixin class, which defines a -``__repr__`` and ``__str__`` method that only depend on a custom ``__nice__`` -method, which you must define. This means you only have to overload one -function instead of two. Furthermore, if the object defines a ``__len__`` -method, then the ``__nice__`` method defaults to something sensible, otherwise -it is treated as abstract and raises ``NotImplementedError``. - -To use simply have your object inherit from :class:`NiceRepr` -(multi-inheritance should be ok). - -This code was copied from the ubelt library: https://github.com/Erotemic/ubelt - -Example: - >>> # Objects that define __nice__ have a default __str__ and __repr__ - >>> class Student(NiceRepr): - ... def __init__(self, name): - ... self.name = name - ... def __nice__(self): - ... return self.name - >>> s1 = Student('Alice') - >>> s2 = Student('Bob') - >>> print(f's1 = {s1}') - >>> print(f's2 = {s2}') - s1 = - s2 = - -Example: - >>> # Objects that define __len__ have a default __nice__ - >>> class Group(NiceRepr): - ... def __init__(self, data): - ... self.data = data - ... def __len__(self): - ... return len(self.data) - >>> g = Group([1, 2, 3]) - >>> print(f'g = {g}') - g = -""" -import warnings - - -class NiceRepr(object): - """Inherit from this class and define ``__nice__`` to "nicely" print your - objects. - - Defines ``__str__`` and ``__repr__`` in terms of ``__nice__`` function - Classes that inherit from :class:`NiceRepr` should redefine ``__nice__``. - If the inheriting class has a ``__len__``, method then the default - ``__nice__`` method will return its length. - - Example: - >>> class Foo(NiceRepr): - ... def __nice__(self): - ... 
return 'info' - >>> foo = Foo() - >>> assert str(foo) == '' - >>> assert repr(foo).startswith('>> class Bar(NiceRepr): - ... pass - >>> bar = Bar() - >>> import pytest - >>> with pytest.warns(None) as record: - >>> assert 'object at' in str(bar) - >>> assert 'object at' in repr(bar) - - Example: - >>> class Baz(NiceRepr): - ... def __len__(self): - ... return 5 - >>> baz = Baz() - >>> assert str(baz) == '' - """ - - def __nice__(self): - """str: a "nice" summary string describing this module""" - if hasattr(self, '__len__'): - # It is a common pattern for objects to use __len__ in __nice__ - # As a convenience we define a default __nice__ for these objects - return str(len(self)) - else: - # In all other cases force the subclass to overload __nice__ - raise NotImplementedError( - f'Define the __nice__ method for {self.__class__!r}') - - def __repr__(self): - """str: the string of the module""" - try: - nice = self.__nice__() - classname = self.__class__.__name__ - return f'<{classname}({nice}) at {hex(id(self))}>' - except NotImplementedError as ex: - warnings.warn(str(ex), category=RuntimeWarning) - return object.__repr__(self) - - def __str__(self): - """str: the string of the module""" - try: - classname = self.__class__.__name__ - nice = self.__nice__() - return f'<{classname}({nice})>' - except NotImplementedError as ex: - warnings.warn(str(ex), category=RuntimeWarning) - return object.__repr__(self) diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet_null/models/detectors/base.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet_null/models/detectors/base.py deleted file mode 100644 index 89134f3696ead442a5ff57184e9d256fdf7d0ba4..0000000000000000000000000000000000000000 --- a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet_null/models/detectors/base.py +++ /dev/null @@ -1,355 +0,0 @@ -from abc import ABCMeta, abstractmethod -from collections import OrderedDict - -import mmcv -import numpy as np -import torch -import torch.distributed as dist -import torch.nn as nn -from mmcv.runner import auto_fp16 -from mmcv.utils import print_log - -from mmdet.core.visualization import imshow_det_bboxes -from mmdet.utils import get_root_logger - - -class BaseDetector(nn.Module, metaclass=ABCMeta): - """Base class for detectors.""" - - def __init__(self): - super(BaseDetector, self).__init__() - self.fp16_enabled = False - - @property - def with_neck(self): - """bool: whether the detector has a neck""" - return hasattr(self, 'neck') and self.neck is not None - - # TODO: these properties need to be carefully handled - # for both single stage & two stage detectors - @property - def with_shared_head(self): - """bool: whether the detector has a shared head in the RoI Head""" - return hasattr(self, 'roi_head') and self.roi_head.with_shared_head - - @property - def with_bbox(self): - """bool: whether the detector has a bbox head""" - return ((hasattr(self, 'roi_head') and self.roi_head.with_bbox) - or (hasattr(self, 'bbox_head') and self.bbox_head is not None)) - - @property - def with_mask(self): - """bool: whether the detector has a mask head""" - return ((hasattr(self, 'roi_head') and self.roi_head.with_mask) - or (hasattr(self, 'mask_head') and self.mask_head is not None)) - - @abstractmethod - def extract_feat(self, imgs): - """Extract features from images.""" - pass - - def extract_feats(self, imgs): - """Extract features from multiple images. - - Args: - imgs (list[torch.Tensor]): A list of images. 
The images are - augmented from the same image but in different ways. - - Returns: - list[torch.Tensor]: Features of different images - """ - assert isinstance(imgs, list) - return [self.extract_feat(img) for img in imgs] - - def forward_train(self, imgs, img_metas, **kwargs): - """ - Args: - img (list[Tensor]): List of tensors of shape (1, C, H, W). - Typically these should be mean centered and std scaled. - img_metas (list[dict]): List of image info dict where each dict - has: 'img_shape', 'scale_factor', 'flip', and may also contain - 'filename', 'ori_shape', 'pad_shape', and 'img_norm_cfg'. - For details on the values of these keys, see - :class:`mmdet.datasets.pipelines.Collect`. - kwargs (keyword arguments): Specific to concrete implementation. - """ - # NOTE the batched image size information may be useful, e.g. - # in DETR, this is needed for the construction of masks, which is - # then used for the transformer_head. - batch_input_shape = tuple(imgs[0].size()[-2:]) - for img_meta in img_metas: - img_meta['batch_input_shape'] = batch_input_shape - - async def async_simple_test(self, img, img_metas, **kwargs): - raise NotImplementedError - - @abstractmethod - def simple_test(self, img, img_metas, **kwargs): - pass - - @abstractmethod - def aug_test(self, imgs, img_metas, **kwargs): - """Test function with test time augmentation.""" - pass - - def init_weights(self, pretrained=None): - """Initialize the weights in detector. - - Args: - pretrained (str, optional): Path to pre-trained weights. - Defaults to None. - """ - if pretrained is not None: - logger = get_root_logger() - print_log(f'load model from: {pretrained}', logger=logger) - - async def aforward_test(self, *, img, img_metas, **kwargs): - for var, name in [(img, 'img'), (img_metas, 'img_metas')]: - if not isinstance(var, list): - raise TypeError(f'{name} must be a list, but got {type(var)}') - - num_augs = len(img) - if num_augs != len(img_metas): - raise ValueError(f'num of augmentations ({len(img)}) ' - f'!= num of image metas ({len(img_metas)})') - # TODO: remove the restriction of samples_per_gpu == 1 when prepared - samples_per_gpu = img[0].size(0) - assert samples_per_gpu == 1 - - if num_augs == 1: - return await self.async_simple_test(img[0], img_metas[0], **kwargs) - else: - raise NotImplementedError - - def forward_test(self, imgs, img_metas, **kwargs): - """ - Args: - imgs (List[Tensor]): the outer list indicates test-time - augmentations and inner Tensor should have a shape NxCxHxW, - which contains all images in the batch. - img_metas (List[List[dict]]): the outer list indicates test-time - augs (multiscale, flip, etc.) and the inner list indicates - images in a batch. - """ - for var, name in [(imgs, 'imgs'), (img_metas, 'img_metas')]: - if not isinstance(var, list): - raise TypeError(f'{name} must be a list, but got {type(var)}') - - num_augs = len(imgs) - if num_augs != len(img_metas): - raise ValueError(f'num of augmentations ({len(imgs)}) ' - f'!= num of image meta ({len(img_metas)})') - - # NOTE the batched image size information may be useful, e.g. - # in DETR, this is needed for the construction of masks, which is - # then used for the transformer_head. - for img, img_meta in zip(imgs, img_metas): - batch_size = len(img_meta) - for img_id in range(batch_size): - img_meta[img_id]['batch_input_shape'] = tuple(img.size()[-2:]) - - if num_augs == 1: - # proposals (List[List[Tensor]]): the outer list indicates - # test-time augs (multiscale, flip, etc.) and the inner list - # indicates images in a batch. 
- # The Tensor should have a shape Px4, where P is the number of - # proposals. - if 'proposals' in kwargs: - kwargs['proposals'] = kwargs['proposals'][0] - return self.simple_test(imgs[0], img_metas[0], **kwargs) - else: - assert imgs[0].size(0) == 1, 'aug test does not support ' \ - 'inference with batch size ' \ - f'{imgs[0].size(0)}' - # TODO: support test augmentation for predefined proposals - assert 'proposals' not in kwargs - return self.aug_test(imgs, img_metas, **kwargs) - - @auto_fp16(apply_to=('img', )) - def forward(self, img, img_metas, return_loss=True, **kwargs): - """Calls either :func:`forward_train` or :func:`forward_test` depending - on whether ``return_loss`` is ``True``. - - Note this setting will change the expected inputs. When - ``return_loss=True``, img and img_meta are single-nested (i.e. Tensor - and List[dict]), and when ``resturn_loss=False``, img and img_meta - should be double nested (i.e. List[Tensor], List[List[dict]]), with - the outer list indicating test time augmentations. - """ - if return_loss: - return self.forward_train(img, img_metas, **kwargs) - else: - return self.forward_test(img, img_metas, **kwargs) - - def _parse_losses(self, losses): - """Parse the raw outputs (losses) of the network. - - Args: - losses (dict): Raw output of the network, which usually contain - losses and other necessary infomation. - - Returns: - tuple[Tensor, dict]: (loss, log_vars), loss is the loss tensor \ - which may be a weighted sum of all losses, log_vars contains \ - all the variables to be sent to the logger. - """ - log_vars = OrderedDict() - for loss_name, loss_value in losses.items(): - if isinstance(loss_value, torch.Tensor): - log_vars[loss_name] = loss_value.mean() - elif isinstance(loss_value, list): - log_vars[loss_name] = sum(_loss.mean() for _loss in loss_value) - else: - raise TypeError( - f'{loss_name} is not a tensor or list of tensors') - - loss = sum(_value for _key, _value in log_vars.items() - if 'loss' in _key) - - log_vars['loss'] = loss - for loss_name, loss_value in log_vars.items(): - # reduce loss when distributed training - if dist.is_available() and dist.is_initialized(): - loss_value = loss_value.data.clone() - dist.all_reduce(loss_value.div_(dist.get_world_size())) - log_vars[loss_name] = loss_value.item() - - return loss, log_vars - - def train_step(self, data, optimizer): - """The iteration step during training. - - This method defines an iteration step during training, except for the - back propagation and optimizer updating, which are done in an optimizer - hook. Note that in some complicated cases or models, the whole process - including back propagation and optimizer updating is also defined in - this method, such as GAN. - - Args: - data (dict): The output of dataloader. - optimizer (:obj:`torch.optim.Optimizer` | dict): The optimizer of - runner is passed to ``train_step()``. This argument is unused - and reserved. - - Returns: - dict: It should contain at least 3 keys: ``loss``, ``log_vars``, \ - ``num_samples``. - - - ``loss`` is a tensor for back propagation, which can be a \ - weighted sum of multiple losses. - - ``log_vars`` contains all the variables to be sent to the - logger. - - ``num_samples`` indicates the batch size (when the model is \ - DDP, it means the batch size on each GPU), which is used for \ - averaging the logs. 
- """ - losses = self(**data) - loss, log_vars = self._parse_losses(losses) - - outputs = dict( - loss=loss, log_vars=log_vars, num_samples=len(data['img_metas'])) - - return outputs - - def val_step(self, data, optimizer): - """The iteration step during validation. - - This method shares the same signature as :func:`train_step`, but used - during val epochs. Note that the evaluation after training epochs is - not implemented with this method, but an evaluation hook. - """ - losses = self(**data) - loss, log_vars = self._parse_losses(losses) - - outputs = dict( - loss=loss, log_vars=log_vars, num_samples=len(data['img_metas'])) - - return outputs - - def show_result(self, - img, - result, - score_thr=0.3, - bbox_color=(72, 101, 241), - text_color=(72, 101, 241), - mask_color=None, - thickness=2, - font_size=13, - win_name='', - show=False, - wait_time=0, - out_file=None): - """Draw `result` over `img`. - - Args: - img (str or Tensor): The image to be displayed. - result (Tensor or tuple): The results to draw over `img` - bbox_result or (bbox_result, segm_result). - score_thr (float, optional): Minimum score of bboxes to be shown. - Default: 0.3. - bbox_color (str or tuple(int) or :obj:`Color`):Color of bbox lines. - The tuple of color should be in BGR order. Default: 'green' - text_color (str or tuple(int) or :obj:`Color`):Color of texts. - The tuple of color should be in BGR order. Default: 'green' - mask_color (None or str or tuple(int) or :obj:`Color`): - Color of masks. The tuple of color should be in BGR order. - Default: None - thickness (int): Thickness of lines. Default: 2 - font_size (int): Font size of texts. Default: 13 - win_name (str): The window name. Default: '' - wait_time (float): Value of waitKey param. - Default: 0. - show (bool): Whether to show the image. - Default: False. - out_file (str or None): The filename to write the image. - Default: None. 
- - Returns: - img (Tensor): Only if not `show` or `out_file` - """ - img = mmcv.imread(img) - img = img.copy() - if isinstance(result, tuple): - bbox_result, segm_result = result - if isinstance(segm_result, tuple): - segm_result = segm_result[0] # ms rcnn - else: - bbox_result, segm_result = result, None - bboxes = np.vstack(bbox_result) - labels = [ - np.full(bbox.shape[0], i, dtype=np.int32) - for i, bbox in enumerate(bbox_result) - ] - labels = np.concatenate(labels) - # draw segmentation masks - segms = None - if segm_result is not None and len(labels) > 0: # non empty - segms = mmcv.concat_list(segm_result) - if isinstance(segms[0], torch.Tensor): - segms = torch.stack(segms, dim=0).detach().cpu().numpy() - else: - segms = np.stack(segms, axis=0) - # if out_file specified, do not show image in window - if out_file is not None: - show = False - # draw bounding boxes - img = imshow_det_bboxes( - img, - bboxes, - labels, - segms, - class_names=self.CLASSES, - score_thr=score_thr, - bbox_color=bbox_color, - text_color=text_color, - mask_color=mask_color, - thickness=thickness, - font_size=font_size, - win_name=win_name, - show=show, - wait_time=wait_time, - out_file=out_file) - - if not (show or out_file): - return img diff --git a/spaces/SERER/VITS-Umamusume-voice-synthesizer/text/cleaners.py b/spaces/SERER/VITS-Umamusume-voice-synthesizer/text/cleaners.py deleted file mode 100644 index c80e113b2b81a66134800dbdaa29c7d96a0152a7..0000000000000000000000000000000000000000 --- a/spaces/SERER/VITS-Umamusume-voice-synthesizer/text/cleaners.py +++ /dev/null @@ -1,146 +0,0 @@ -import re - - -def japanese_cleaners(text): - from text.japanese import japanese_to_romaji_with_accent - text = japanese_to_romaji_with_accent(text) - text = re.sub(r'([A-Za-z])$', r'\1.', text) - return text - - -def japanese_cleaners2(text): - return japanese_cleaners(text).replace('ts', 'ʦ').replace('...', '…') - - -def korean_cleaners(text): - '''Pipeline for Korean text''' - from text.korean import latin_to_hangul, number_to_hangul, divide_hangul - text = latin_to_hangul(text) - text = number_to_hangul(text) - text = divide_hangul(text) - text = re.sub(r'([\u3131-\u3163])$', r'\1.', text) - return text - - -def chinese_cleaners(text): - '''Pipeline for Chinese text''' - from text.mandarin import number_to_chinese, chinese_to_bopomofo, latin_to_bopomofo - text = number_to_chinese(text) - text = chinese_to_bopomofo(text) - text = latin_to_bopomofo(text) - text = re.sub(r'([ˉˊˇˋ˙])$', r'\1。', text) - return text - - -def zh_ja_mixture_cleaners(text): - from text.mandarin import chinese_to_romaji - from text.japanese import japanese_to_romaji_with_accent - text = re.sub(r'\[ZH\](.*?)\[ZH\]', - lambda x: chinese_to_romaji(x.group(1))+' ', text) - text = re.sub(r'\[JA\](.*?)\[JA\]', lambda x: japanese_to_romaji_with_accent( - x.group(1)).replace('ts', 'ʦ').replace('u', 'ɯ').replace('...', '…')+' ', text) - text = re.sub(r'\s+$', '', text) - text = re.sub(r'([^\.,!\?\-…~])$', r'\1.', text) - return text - - -def sanskrit_cleaners(text): - text = text.replace('॥', '।').replace('ॐ', 'ओम्') - if text[-1] != '।': - text += ' ।' - return text - - -def cjks_cleaners(text): - from text.mandarin import chinese_to_lazy_ipa - from text.japanese import japanese_to_ipa - from text.korean import korean_to_lazy_ipa - from text.sanskrit import devanagari_to_ipa - from text.english import english_to_lazy_ipa - text = re.sub(r'\[ZH\](.*?)\[ZH\]', - lambda x: chinese_to_lazy_ipa(x.group(1))+' ', text) - text = re.sub(r'\[JA\](.*?)\[JA\]', - 
lambda x: japanese_to_ipa(x.group(1))+' ', text) - text = re.sub(r'\[KO\](.*?)\[KO\]', - lambda x: korean_to_lazy_ipa(x.group(1))+' ', text) - text = re.sub(r'\[SA\](.*?)\[SA\]', - lambda x: devanagari_to_ipa(x.group(1))+' ', text) - text = re.sub(r'\[EN\](.*?)\[EN\]', - lambda x: english_to_lazy_ipa(x.group(1))+' ', text) - text = re.sub(r'\s+$', '', text) - text = re.sub(r'([^\.,!\?\-…~])$', r'\1.', text) - return text - - -def cjke_cleaners(text): - from text.mandarin import chinese_to_lazy_ipa - from text.japanese import japanese_to_ipa - from text.korean import korean_to_ipa - from text.english import english_to_ipa2 - text = re.sub(r'\[ZH\](.*?)\[ZH\]', lambda x: chinese_to_lazy_ipa(x.group(1)).replace( - 'ʧ', 'tʃ').replace('ʦ', 'ts').replace('ɥan', 'ɥæn')+' ', text) - text = re.sub(r'\[JA\](.*?)\[JA\]', lambda x: japanese_to_ipa(x.group(1)).replace('ʧ', 'tʃ').replace( - 'ʦ', 'ts').replace('ɥan', 'ɥæn').replace('ʥ', 'dz')+' ', text) - text = re.sub(r'\[KO\](.*?)\[KO\]', - lambda x: korean_to_ipa(x.group(1))+' ', text) - text = re.sub(r'\[EN\](.*?)\[EN\]', lambda x: english_to_ipa2(x.group(1)).replace('ɑ', 'a').replace( - 'ɔ', 'o').replace('ɛ', 'e').replace('ɪ', 'i').replace('ʊ', 'u')+' ', text) - text = re.sub(r'\s+$', '', text) - text = re.sub(r'([^\.,!\?\-…~])$', r'\1.', text) - return text - - -def cjke_cleaners2(text): - from text.mandarin import chinese_to_ipa - from text.japanese import japanese_to_ipa2 - from text.korean import korean_to_ipa - from text.english import english_to_ipa2 - text = re.sub(r'\[ZH\](.*?)\[ZH\]', - lambda x: chinese_to_ipa(x.group(1))+' ', text) - text = re.sub(r'\[JA\](.*?)\[JA\]', - lambda x: japanese_to_ipa2(x.group(1))+' ', text) - text = re.sub(r'\[KO\](.*?)\[KO\]', - lambda x: korean_to_ipa(x.group(1))+' ', text) - text = re.sub(r'\[EN\](.*?)\[EN\]', - lambda x: english_to_ipa2(x.group(1))+' ', text) - text = re.sub(r'\s+$', '', text) - text = re.sub(r'([^\.,!\?\-…~])$', r'\1.', text) - return text - - -def thai_cleaners(text): - from text.thai import num_to_thai, latin_to_thai - text = num_to_thai(text) - text = latin_to_thai(text) - return text - - -def shanghainese_cleaners(text): - from text.shanghainese import shanghainese_to_ipa - text = shanghainese_to_ipa(text) - text = re.sub(r'([^\.,!\?\-…~])$', r'\1.', text) - return text - - -def chinese_dialect_cleaners(text): - from text.mandarin import chinese_to_ipa2 - from text.japanese import japanese_to_ipa3 - from text.shanghainese import shanghainese_to_ipa - from text.cantonese import cantonese_to_ipa - from text.english import english_to_lazy_ipa2 - from text.ngu_dialect import ngu_dialect_to_ipa - text = re.sub(r'\[ZH\](.*?)\[ZH\]', - lambda x: chinese_to_ipa2(x.group(1))+' ', text) - text = re.sub(r'\[JA\](.*?)\[JA\]', - lambda x: japanese_to_ipa3(x.group(1)).replace('Q', 'ʔ')+' ', text) - text = re.sub(r'\[SH\](.*?)\[SH\]', lambda x: shanghainese_to_ipa(x.group(1)).replace('1', '˥˧').replace('5', - '˧˧˦').replace('6', '˩˩˧').replace('7', '˥').replace('8', '˩˨').replace('ᴀ', 'ɐ').replace('ᴇ', 'e')+' ', text) - text = re.sub(r'\[GD\](.*?)\[GD\]', - lambda x: cantonese_to_ipa(x.group(1))+' ', text) - text = re.sub(r'\[EN\](.*?)\[EN\]', - lambda x: english_to_lazy_ipa2(x.group(1))+' ', text) - text = re.sub(r'\[([A-Z]{2})\](.*?)\[\1\]', lambda x: ngu_dialect_to_ipa(x.group(2), x.group( - 1)).replace('ʣ', 'dz').replace('ʥ', 'dʑ').replace('ʦ', 'ts').replace('ʨ', 'tɕ')+' ', text) - text = re.sub(r'\s+$', '', text) - text = re.sub(r'([^\.,!\?\-…~])$', r'\1.', text) - return text diff --git 
a/spaces/SankarSrin/image-matting-app/ppmatting/datasets/composition_1k.py b/spaces/SankarSrin/image-matting-app/ppmatting/datasets/composition_1k.py deleted file mode 100644 index 854b29bed6d91f20616060c3cee50fc21dc5b8f2..0000000000000000000000000000000000000000 --- a/spaces/SankarSrin/image-matting-app/ppmatting/datasets/composition_1k.py +++ /dev/null @@ -1,31 +0,0 @@ -# Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -import os -import math - -import cv2 -import numpy as np -import random -import paddle -from paddleseg.cvlibs import manager - -import ppmatting.transforms as T -from ppmatting.datasets.matting_dataset import MattingDataset - - -@manager.DATASETS.add_component -class Composition1K(MattingDataset): - def __init__(self, **kwargs): - super().__init__(**kwargs) diff --git a/spaces/SankarSrin/image-matting-app/ppmatting/models/gca.py b/spaces/SankarSrin/image-matting-app/ppmatting/models/gca.py deleted file mode 100644 index 369a913570682f85ea696beaf3b78b7c2ec88141..0000000000000000000000000000000000000000 --- a/spaces/SankarSrin/image-matting-app/ppmatting/models/gca.py +++ /dev/null @@ -1,305 +0,0 @@ -# copyright (c) 2022 PaddlePaddle Authors. All Rights Reserve. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
- -# The gca code was heavily based on https://github.com/Yaoyi-Li/GCA-Matting -# and https://github.com/open-mmlab/mmediting - -import paddle -import paddle.nn as nn -import paddle.nn.functional as F -from paddleseg.models import layers -from paddleseg import utils -from paddleseg.cvlibs import manager, param_init - -from ppmatting.models.layers import GuidedCxtAtten - - -@manager.MODELS.add_component -class GCABaseline(nn.Layer): - def __init__(self, backbone, pretrained=None): - super().__init__() - self.encoder = backbone - self.decoder = ResShortCut_D_Dec([2, 3, 3, 2]) - - def forward(self, inputs): - - x = paddle.concat([inputs['img'], inputs['trimap'] / 255], axis=1) - embedding, mid_fea = self.encoder(x) - alpha_pred = self.decoder(embedding, mid_fea) - - if self.training: - logit_dict = {'alpha_pred': alpha_pred, } - loss_dict = {} - alpha_gt = inputs['alpha'] - loss_dict["alpha"] = F.l1_loss(alpha_pred, alpha_gt) - loss_dict["all"] = loss_dict["alpha"] - return logit_dict, loss_dict - - return alpha_pred - - -@manager.MODELS.add_component -class GCA(GCABaseline): - def __init__(self, backbone, pretrained=None): - super().__init__(backbone, pretrained) - self.decoder = ResGuidedCxtAtten_Dec([2, 3, 3, 2]) - - -def conv5x5(in_planes, out_planes, stride=1, groups=1, dilation=1): - """5x5 convolution with padding""" - return nn.Conv2D( - in_planes, - out_planes, - kernel_size=5, - stride=stride, - padding=2, - groups=groups, - bias_attr=False, - dilation=dilation) - - -def conv3x3(in_planes, out_planes, stride=1, groups=1, dilation=1): - """3x3 convolution with padding""" - return nn.Conv2D( - in_planes, - out_planes, - kernel_size=3, - stride=stride, - padding=dilation, - groups=groups, - bias_attr=False, - dilation=dilation) - - -def conv1x1(in_planes, out_planes, stride=1): - """1x1 convolution""" - return nn.Conv2D( - in_planes, out_planes, kernel_size=1, stride=stride, bias_attr=False) - - -class BasicBlock(nn.Layer): - expansion = 1 - - def __init__(self, - inplanes, - planes, - stride=1, - upsample=None, - norm_layer=None, - large_kernel=False): - super().__init__() - if norm_layer is None: - norm_layer = nn.BatchNorm - self.stride = stride - conv = conv5x5 if large_kernel else conv3x3 - # Both self.conv1 and self.downsample layers downsample the input when stride != 1 - if self.stride > 1: - self.conv1 = nn.utils.spectral_norm( - nn.Conv2DTranspose( - inplanes, - inplanes, - kernel_size=4, - stride=2, - padding=1, - bias_attr=False)) - else: - self.conv1 = nn.utils.spectral_norm(conv(inplanes, inplanes)) - self.bn1 = norm_layer(inplanes) - self.activation = nn.LeakyReLU(0.2) - self.conv2 = nn.utils.spectral_norm(conv(inplanes, planes)) - self.bn2 = norm_layer(planes) - self.upsample = upsample - - def forward(self, x): - identity = x - - out = self.conv1(x) - out = self.bn1(out) - out = self.activation(out) - - out = self.conv2(out) - out = self.bn2(out) - - if self.upsample is not None: - identity = self.upsample(x) - - out += identity - out = self.activation(out) - - return out - - -class ResNet_D_Dec(nn.Layer): - def __init__(self, - layers=[3, 4, 4, 2], - norm_layer=None, - large_kernel=False, - late_downsample=False): - super().__init__() - - if norm_layer is None: - norm_layer = nn.BatchNorm - self._norm_layer = norm_layer - self.large_kernel = large_kernel - self.kernel_size = 5 if self.large_kernel else 3 - - self.inplanes = 512 if layers[0] > 0 else 256 - self.late_downsample = late_downsample - self.midplanes = 64 if late_downsample else 32 - - self.conv1 = 
nn.utils.spectral_norm( - nn.Conv2DTranspose( - self.midplanes, - 32, - kernel_size=4, - stride=2, - padding=1, - bias_attr=False)) - self.bn1 = norm_layer(32) - self.leaky_relu = nn.LeakyReLU(0.2) - self.conv2 = nn.Conv2D( - 32, - 1, - kernel_size=self.kernel_size, - stride=1, - padding=self.kernel_size // 2) - self.upsample = nn.UpsamplingNearest2D(scale_factor=2) - self.tanh = nn.Tanh() - self.layer1 = self._make_layer(BasicBlock, 256, layers[0], stride=2) - self.layer2 = self._make_layer(BasicBlock, 128, layers[1], stride=2) - self.layer3 = self._make_layer(BasicBlock, 64, layers[2], stride=2) - self.layer4 = self._make_layer( - BasicBlock, self.midplanes, layers[3], stride=2) - - self.init_weight() - - def _make_layer(self, block, planes, blocks, stride=1): - if blocks == 0: - return nn.Sequential(nn.Identity()) - norm_layer = self._norm_layer - upsample = None - if stride != 1: - upsample = nn.Sequential( - nn.UpsamplingNearest2D(scale_factor=2), - nn.utils.spectral_norm( - conv1x1(self.inplanes, planes * block.expansion)), - norm_layer(planes * block.expansion), ) - elif self.inplanes != planes * block.expansion: - upsample = nn.Sequential( - nn.utils.spectral_norm( - conv1x1(self.inplanes, planes * block.expansion)), - norm_layer(planes * block.expansion), ) - - layers = [ - block(self.inplanes, planes, stride, upsample, norm_layer, - self.large_kernel) - ] - self.inplanes = planes * block.expansion - for _ in range(1, blocks): - layers.append( - block( - self.inplanes, - planes, - norm_layer=norm_layer, - large_kernel=self.large_kernel)) - - return nn.Sequential(*layers) - - def forward(self, x, mid_fea): - x = self.layer1(x) # N x 256 x 32 x 32 - print(x.shape) - x = self.layer2(x) # N x 128 x 64 x 64 - print(x.shape) - x = self.layer3(x) # N x 64 x 128 x 128 - print(x.shape) - x = self.layer4(x) # N x 32 x 256 x 256 - print(x.shape) - x = self.conv1(x) - x = self.bn1(x) - x = self.leaky_relu(x) - x = self.conv2(x) - - alpha = (self.tanh(x) + 1.0) / 2.0 - - return alpha - - def init_weight(self): - for layer in self.sublayers(): - if isinstance(layer, nn.Conv2D): - - if hasattr(layer, "weight_orig"): - param = layer.weight_orig - else: - param = layer.weight - param_init.xavier_uniform(param) - - elif isinstance(layer, (nn.BatchNorm, nn.SyncBatchNorm)): - param_init.constant_init(layer.weight, value=1.0) - param_init.constant_init(layer.bias, value=0.0) - - elif isinstance(layer, BasicBlock): - param_init.constant_init(layer.bn2.weight, value=0.0) - - -class ResShortCut_D_Dec(ResNet_D_Dec): - def __init__(self, - layers, - norm_layer=None, - large_kernel=False, - late_downsample=False): - super().__init__( - layers, norm_layer, large_kernel, late_downsample=late_downsample) - - def forward(self, x, mid_fea): - fea1, fea2, fea3, fea4, fea5 = mid_fea['shortcut'] - x = self.layer1(x) + fea5 - x = self.layer2(x) + fea4 - x = self.layer3(x) + fea3 - x = self.layer4(x) + fea2 - x = self.conv1(x) - x = self.bn1(x) - x = self.leaky_relu(x) + fea1 - x = self.conv2(x) - - alpha = (self.tanh(x) + 1.0) / 2.0 - - return alpha - - -class ResGuidedCxtAtten_Dec(ResNet_D_Dec): - def __init__(self, - layers, - norm_layer=None, - large_kernel=False, - late_downsample=False): - super().__init__( - layers, norm_layer, large_kernel, late_downsample=late_downsample) - self.gca = GuidedCxtAtten(128, 128) - - def forward(self, x, mid_fea): - fea1, fea2, fea3, fea4, fea5 = mid_fea['shortcut'] - im = mid_fea['image_fea'] - x = self.layer1(x) + fea5 # N x 256 x 32 x 32 - x = self.layer2(x) + fea4 # N x 128 
x 64 x 64 - x = self.gca(im, x, mid_fea['unknown']) # contextual attention - x = self.layer3(x) + fea3 # N x 64 x 128 x 128 - x = self.layer4(x) + fea2 # N x 32 x 256 x 256 - x = self.conv1(x) - x = self.bn1(x) - x = self.leaky_relu(x) + fea1 - x = self.conv2(x) - - alpha = (self.tanh(x) + 1.0) / 2.0 - - return alpha diff --git a/spaces/SeViLA/SeViLA/sevila_checkpoints/__init__.py b/spaces/SeViLA/SeViLA/sevila_checkpoints/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/ServerX/PorcoDiaz/infer/modules/train/extract/extract_f0_print.py b/spaces/ServerX/PorcoDiaz/infer/modules/train/extract/extract_f0_print.py deleted file mode 100644 index 14ef598d73b807974204664f100c828918199816..0000000000000000000000000000000000000000 --- a/spaces/ServerX/PorcoDiaz/infer/modules/train/extract/extract_f0_print.py +++ /dev/null @@ -1,298 +0,0 @@ -import os -import sys -import traceback - -import parselmouth - -now_dir = os.getcwd() -sys.path.append(now_dir) -import logging -from LazyImport import lazyload - -import numpy as np -import pyworld -torchcrepe = lazyload("torchcrepe") # Fork Feature. Crepe algo for training and preprocess -torch = lazyload("torch") -#from torch import Tensor # Fork Feature. Used for pitch prediction for torch crepe. -tqdm = lazyload("tqdm") -from infer.lib.audio import load_audio - -logging.getLogger("numba").setLevel(logging.WARNING) -from multiprocessing import Process - -exp_dir = sys.argv[1] -f = open("%s/extract_f0_feature.log" % exp_dir, "a+") - -DoFormant = False -Quefrency = 1.0 -Timbre = 1.0 - -def printt(strr): - print(strr) - f.write(f"{strr}\n") - f.flush() - - -n_p = int(sys.argv[2]) -f0method = sys.argv[3] -extraction_crepe_hop_length = 0 -try: - extraction_crepe_hop_length = int(sys.argv[4]) -except: - print("Temp Issue. 
echl is not being passed with argument!") - extraction_crepe_hop_length = 128 - -class FeatureInput(object): - def __init__(self, samplerate=16000, hop_size=160): - self.fs = samplerate - self.hop = hop_size - - self.f0_bin = 256 - self.f0_max = 1100.0 - self.f0_min = 50.0 - self.f0_mel_min = 1127 * np.log(1 + self.f0_min / 700) - self.f0_mel_max = 1127 * np.log(1 + self.f0_max / 700) - - def mncrepe(self, method, x, p_len, crepe_hop_length): - f0 = None - torch_device_index = 0 - torch_device = torch.device( - f"cuda:{torch_device_index % torch.cuda.device_count()}" - ) if torch.cuda.is_available() \ - else torch.device("mps") if torch.backends.mps.is_available() \ - else torch.device("cpu") - - audio = torch.from_numpy(x.astype(np.float32)).to(torch_device, copy=True) - audio /= torch.quantile(torch.abs(audio), 0.999) - audio = torch.unsqueeze(audio, dim=0) - if audio.ndim == 2 and audio.shape[0] > 1: - audio = torch.mean(audio, dim=0, keepdim=True).detach() - audio = audio.detach() - - if method == 'mangio-crepe': - pitch: torch.Tensor = torchcrepe.predict( - audio, - self.fs, - crepe_hop_length, - self.f0_min, - self.f0_max, - "full", - batch_size=crepe_hop_length * 2, - device=torch_device, - pad=True, - ) - p_len = p_len or x.shape[0] // crepe_hop_length - # Resize the pitch - source = np.array(pitch.squeeze(0).cpu().float().numpy()) - source[source < 0.001] = np.nan - target = np.interp( - np.arange(0, len(source) * p_len, len(source)) / p_len, - np.arange(0, len(source)), - source, - ) - f0 = np.nan_to_num(target) - - elif method == 'crepe': - batch_size = 512 - audio = torch.tensor(np.copy(x))[None].float() - f0, pd = torchcrepe.predict( - audio, - self.fs, - 160, - self.f0_min, - self.f0_max, - "full", - batch_size=batch_size, - device=torch_device, - return_periodicity=True, - ) - pd = torchcrepe.filter.median(pd, 3) - f0 = torchcrepe.filter.mean(f0, 3) - f0[pd < 0.1] = 0 - f0 = f0[0].cpu().numpy() - f0 = f0[1:] # Get rid of extra first frame - - return f0 - - def get_pm(self, x, p_len): - f0 = parselmouth.Sound(x, self.fs).to_pitch_ac( - time_step=160 / 16000, - voicing_threshold=0.6, - pitch_floor=self.f0_min, - pitch_ceiling=self.f0_max, - ).selected_array["frequency"] - - return np.pad( - f0, - [[max(0, (p_len - len(f0) + 1) // 2), max(0, p_len - len(f0) - (p_len - len(f0) + 1) // 2)]], - mode="constant" - ) - - def get_harvest(self, x): - f0_spectral = pyworld.harvest( - x.astype(np.double), - fs=self.fs, - f0_ceil=self.f0_max, - f0_floor=self.f0_min, - frame_period=1000 * self.hop / self.fs, - ) - return pyworld.stonemask(x.astype(np.double), *f0_spectral, self.fs) - - def get_dio(self, x): - f0_spectral = pyworld.dio( - x.astype(np.double), - fs=self.fs, - f0_ceil=self.f0_max, - f0_floor=self.f0_min, - frame_period=1000 * self.hop / self.fs, - ) - return pyworld.stonemask(x.astype(np.double), *f0_spectral, self.fs) - - def get_rmvpe(self, x): - if hasattr(self, "model_rmvpe") == False: - from infer.lib.rmvpe import RMVPE - - print("Loading rmvpe model") - self.model_rmvpe = RMVPE( - "assets/rmvpe/rmvpe.pt", is_half=False, device="cpu" - ) - return self.model_rmvpe.infer_from_audio(x, thred=0.03) - - def get_rmvpe_dml(self, x): - ... 
- - def get_f0_method_dict(self): - return { - "pm": self.get_pm, - "harvest": self.get_harvest, - "dio": self.get_dio, - "rmvpe": self.get_rmvpe - } - - def get_f0_hybrid_computation( - self, - methods_str, - x, - p_len, - crepe_hop_length, - ): - # Get various f0 methods from input to use in the computation stack - s = methods_str - s = s.split("hybrid")[1] - s = s.replace("[", "").replace("]", "") - methods = s.split("+") - f0_computation_stack = [] - - for method in methods: - if method in self.f0_method_dict: - f0 = self.f0_method_dict[method](x, p_len) if method == 'pm' else self.f0_method_dict[method](x) - f0_computation_stack.append(f0) - elif method == 'crepe' or method == 'mangio-crepe': - self.the_other_complex_function(x, method, crepe_hop_length) - - if len(f0_computation_stack) != 0: - f0_median_hybrid = np.nanmedian(f0_computation_stack, axis=0) if len(f0_computation_stack)>1 else f0_computation_stack[0] - return f0_median_hybrid - else: - raise ValueError("No valid methods were provided") - - def compute_f0(self, path, f0_method, crepe_hop_length): - x = load_audio(path, self.fs, DoFormant, Quefrency, Timbre) - p_len = x.shape[0] // self.hop - - if f0_method in self.f0_method_dict: - f0 = self.f0_method_dict[f0_method](x, p_len) if f0_method == 'pm' else self.f0_method_dict[f0_method](x) - elif f0_method in ['crepe', 'mangio-crepe']: - f0 = self.mncrepe(f0_method, x, p_len, crepe_hop_length) - elif "hybrid" in f0_method: # EXPERIMENTAL - # Perform hybrid median pitch estimation - f0 = self.get_f0_hybrid_computation( - f0_method, - x, - p_len, - crepe_hop_length, - ) - return f0 - - def coarse_f0(self, f0): - f0_mel = 1127 * np.log(1 + f0 / 700) - f0_mel[f0_mel > 0] = (f0_mel[f0_mel > 0] - self.f0_mel_min) * ( - self.f0_bin - 2 - ) / (self.f0_mel_max - self.f0_mel_min) + 1 - - # use 0 or 1 - f0_mel[f0_mel <= 1] = 1 - f0_mel[f0_mel > self.f0_bin - 1] = self.f0_bin - 1 - f0_coarse = np.rint(f0_mel).astype(int) - assert f0_coarse.max() <= 255 and f0_coarse.min() >= 1, ( - f0_coarse.max(), - f0_coarse.min(), - ) - return f0_coarse - - def go(self, paths, f0_method, crepe_hop_length, thread_n): - if len(paths) == 0: - printt("no-f0-todo") - return - with tqdm.tqdm(total=len(paths), leave=True, position=thread_n) as pbar: - description = f"thread:{thread_n}, f0ing, Hop-Length:{crepe_hop_length}" - pbar.set_description(description) - - for idx, (inp_path, opt_path1, opt_path2) in enumerate(paths): - try: - if ( - os.path.exists(opt_path1 + ".npy") - and os.path.exists(opt_path2 + ".npy") - ): - pbar.update(1) - continue - - featur_pit = self.compute_f0(inp_path, f0_method, crepe_hop_length) - np.save( - opt_path2, - featur_pit, - allow_pickle=False, - ) # nsf - coarse_pit = self.coarse_f0(featur_pit) - np.save( - opt_path1, - coarse_pit, - allow_pickle=False, - ) # ori - pbar.update(1) - except Exception as e: - printt(f"f0fail-{idx}-{inp_path}-{traceback.format_exc()}") - - -if __name__ == "__main__": - # exp_dir=r"E:\codes\py39\dataset\mi-test" - # n_p=16 - # f = open("%s/log_extract_f0.log"%exp_dir, "w") - printt(sys.argv) - featureInput = FeatureInput() - paths = [] - inp_root = "%s/1_16k_wavs" % (exp_dir) - opt_root1 = "%s/2a_f0" % (exp_dir) - opt_root2 = "%s/2b-f0nsf" % (exp_dir) - - os.makedirs(opt_root1, exist_ok=True) - os.makedirs(opt_root2, exist_ok=True) - for name in sorted(list(os.listdir(inp_root))): - inp_path = "%s/%s" % (inp_root, name) - if "spec" in inp_path: - continue - opt_path1 = "%s/%s" % (opt_root1, name) - opt_path2 = "%s/%s" % (opt_root2, name) - 
paths.append([inp_path, opt_path1, opt_path2]) - - ps = [] - print("Using f0 method: " + f0method) - for i in range(n_p): - p = Process( - target=featureInput.go, - args=(paths[i::n_p], f0method, extraction_crepe_hop_length, i), - ) - ps.append(p) - p.start() - for i in range(n_p): - ps[i].join() \ No newline at end of file diff --git a/spaces/Silentlin/DiffSinger/modules/hifigan/mel_utils.py b/spaces/Silentlin/DiffSinger/modules/hifigan/mel_utils.py deleted file mode 100644 index 06e0f7d4d16fa3e4aefc8949347455f5a6e938da..0000000000000000000000000000000000000000 --- a/spaces/Silentlin/DiffSinger/modules/hifigan/mel_utils.py +++ /dev/null @@ -1,80 +0,0 @@ -import numpy as np -import torch -import torch.utils.data -from librosa.filters import mel as librosa_mel_fn -from scipy.io.wavfile import read - -MAX_WAV_VALUE = 32768.0 - - -def load_wav(full_path): - sampling_rate, data = read(full_path) - return data, sampling_rate - - -def dynamic_range_compression(x, C=1, clip_val=1e-5): - return np.log(np.clip(x, a_min=clip_val, a_max=None) * C) - - -def dynamic_range_decompression(x, C=1): - return np.exp(x) / C - - -def dynamic_range_compression_torch(x, C=1, clip_val=1e-5): - return torch.log(torch.clamp(x, min=clip_val) * C) - - -def dynamic_range_decompression_torch(x, C=1): - return torch.exp(x) / C - - -def spectral_normalize_torch(magnitudes): - output = dynamic_range_compression_torch(magnitudes) - return output - - -def spectral_de_normalize_torch(magnitudes): - output = dynamic_range_decompression_torch(magnitudes) - return output - - -mel_basis = {} -hann_window = {} - - -def mel_spectrogram(y, hparams, center=False, complex=False): - # hop_size: 512 # For 22050Hz, 275 ~= 12.5 ms (0.0125 * sample_rate) - # win_size: 2048 # For 22050Hz, 1100 ~= 50 ms (If None, win_size: fft_size) (0.05 * sample_rate) - # fmin: 55 # Set this to 55 if your speaker is male! if female, 95 should help taking off noise. (To test depending on dataset. Pitch info: male~[65, 260], female~[100, 525]) - # fmax: 10000 # To be increased/reduced depending on data. - # fft_size: 2048 # Extra window size is filled with 0 paddings to match this parameter - # n_fft, num_mels, sampling_rate, hop_size, win_size, fmin, fmax, - n_fft = hparams['fft_size'] - num_mels = hparams['audio_num_mel_bins'] - sampling_rate = hparams['audio_sample_rate'] - hop_size = hparams['hop_size'] - win_size = hparams['win_size'] - fmin = hparams['fmin'] - fmax = hparams['fmax'] - y = y.clamp(min=-1., max=1.) 
- global mel_basis, hann_window - if fmax not in mel_basis: - mel = librosa_mel_fn(sampling_rate, n_fft, num_mels, fmin, fmax) - mel_basis[str(fmax) + '_' + str(y.device)] = torch.from_numpy(mel).float().to(y.device) - hann_window[str(y.device)] = torch.hann_window(win_size).to(y.device) - - y = torch.nn.functional.pad(y.unsqueeze(1), (int((n_fft - hop_size) / 2), int((n_fft - hop_size) / 2)), - mode='reflect') - y = y.squeeze(1) - - spec = torch.stft(y, n_fft, hop_length=hop_size, win_length=win_size, window=hann_window[str(y.device)], - center=center, pad_mode='reflect', normalized=False, onesided=True) - - if not complex: - spec = torch.sqrt(spec.pow(2).sum(-1) + (1e-9)) - spec = torch.matmul(mel_basis[str(fmax) + '_' + str(y.device)], spec) - spec = spectral_normalize_torch(spec) - else: - B, C, T, _ = spec.shape - spec = spec.transpose(1, 2) # [B, T, n_fft, 2] - return spec diff --git a/spaces/Sing11104/bingo-11104/Dockerfile b/spaces/Sing11104/bingo-11104/Dockerfile deleted file mode 100644 index c677b05b75f7e4b2beee8c97fb47957a0861a83e..0000000000000000000000000000000000000000 --- a/spaces/Sing11104/bingo-11104/Dockerfile +++ /dev/null @@ -1,7 +0,0 @@ -FROM weaigc/bingo:latest - -ARG DEBIAN_FRONTEND=noninteractive - -ENV BING_HEADER "" - -CMD npm start diff --git a/spaces/Slep/CondViT-LRVSF-Demo/src/style.css b/spaces/Slep/CondViT-LRVSF-Demo/src/style.css deleted file mode 100644 index dad2d62ce1694590c3c8eb319324a6574248f3e6..0000000000000000000000000000000000000000 --- a/spaces/Slep/CondViT-LRVSF-Demo/src/style.css +++ /dev/null @@ -1,44 +0,0 @@ -/* OUTPUT */ -#html_output, -#html_examples { - display: flex; - align-items: center; - justify-content: center; - flex-wrap: wrap; -} - -#html_output>img { - align-self: center; - height: 200px; - border: 2px solid; - border-color: var(--block-border-color); - border-radius: var(--block-radius); - margin: 1.5em; -} - -/* EXAMPLE */ -#html_examples>figure>img { - align-self: center; - height: 100px; - border: 2px solid; - border-color: var(--block-border-color); - border-radius: var(--block-radius); - margin: .7em; -} - -#html_examples>figure { - transition-duration: 0.2s; -} - -#html_examples>figure:hover { - transform: scale(1.2); - cursor: pointer; -} - -#html_examples>figure>figcaption { - text-align: center; -} - -#preset_examples { - display: none; -} \ No newline at end of file diff --git a/spaces/StarbucksCN/starbucks_doc/faq/robot_manager.py b/spaces/StarbucksCN/starbucks_doc/faq/robot_manager.py deleted file mode 100644 index afb2988f8a83d63f373f95a7229176977d9ac081..0000000000000000000000000000000000000000 --- a/spaces/StarbucksCN/starbucks_doc/faq/robot_manager.py +++ /dev/null @@ -1,130 +0,0 @@ -from abc import ABC, abstractmethod -from typing import Any - -from llama_index import load_index_from_storage -from llama_index.indices.query.base import BaseQueryEngine -from llama_index.indices.response import ResponseMode - -from core.helper import LifecycleHelper -from core.lifecycle import Lifecycle -from llama.service_context import ServiceContextManager -from llama.storage_context import StorageContextManager -# from few_shot import get_few_shot_template - -from langchain import PromptTemplate, FewShotPromptTemplate -examples = [ - { - "question": "戴帽卫衣可以穿了吗?", - "answer": - """ - 可以的,颜色需要符合上衣标准要求。 - """ - }, - { - "question": "下装的标准是什么?", - "answer": - """ -1.伙伴可以穿着长裤或及膝短裤,也可以穿裙子(包括连衣裙),但需要是纯色并且长度及膝或过膝。伙伴不应穿着颜色不均匀的牛仔裤,宽松下垂、破洞或者做旧效果的牛仔裤也不能穿。出于安全考虑,伙伴也不应穿着皮裤、瑜伽裤、弹力纤维裤和紧身裤(包括黑色连裤袜)。 -2.颜色要求:卡其色、深蓝色、深灰色、黑色。 -""" - } -] - - -def 
get_few_shot_template() -> str: - template = "Question: {question}, answer: {answer}\n" - rendered_strings = [] - for item in examples: - rendered_string = template.format(**item) - rendered_strings.append(rendered_string) - output = "\n".join(rendered_strings) - return output - - -class FAQRobot(ABC): - @abstractmethod - def ask(self, question: str) -> Any: - pass - - -class AzureOpenAIFAQWikiRobot(FAQRobot): - query_engine: BaseQueryEngine - - def __init__(self, query_engine: BaseQueryEngine) -> None: - super().__init__() - self.query_engine = query_engine - - def ask(self, question: str) -> Any: - print("question: ", question) - response = self.query_engine.query(question) - print("response type: ", type(response)) - return response.__str__() - - -class FAQRobotManager(Lifecycle): - @abstractmethod - def get_robot(self) -> FAQRobot: - pass - - -DEFAULT_QA_PROMPT_TMPL_PREFIX = ( - "Given examples below.\n" - "---------------------\n" -) -DEFAULT_QA_PROMPT_TMPL_SUFFIX = ( - "---------------------\n" - "Context information is below.\n" - "---------------------\n" - "{context_str}\n" - "---------------------\n" - "Given the context information and not prior knowledge, " - "either say '不好意思,我从文档中无法找到答案' or answer the function: {query_str}\n" -) - -class AzureFAQRobotManager(FAQRobotManager): - service_context_manager: ServiceContextManager - storage_context_manager: StorageContextManager - query_engine: BaseQueryEngine - - def __init__( - self, - service_context_manager: ServiceContextManager, - storage_context_manager: StorageContextManager, - ) -> None: - super().__init__() - self.service_context_manager = service_context_manager - self.storage_context_manager = storage_context_manager - - def get_robot(self) -> FAQRobot: - return AzureOpenAIFAQWikiRobot(self.query_engine) - - def do_init(self) -> None: - LifecycleHelper.initialize_if_possible(self.service_context_manager) - LifecycleHelper.initialize_if_possible(self.storage_context_manager) - - def do_start(self) -> None: - LifecycleHelper.start_if_possible(self.service_context_manager) - LifecycleHelper.start_if_possible(self.storage_context_manager) - index = load_index_from_storage( - storage_context=self.storage_context_manager.storage_context, - service_context=self.service_context_manager.get_service_context(), - ) - from llama_index import Prompt - few_shot_examples = get_few_shot_template() - - self.query_engine = index.as_query_engine( - service_context=self.service_context_manager.get_service_context(), - response_mode=ResponseMode.REFINE, - similarity_top_k=2, - text_qa_template=Prompt("\n".join([DEFAULT_QA_PROMPT_TMPL_PREFIX, - few_shot_examples, - DEFAULT_QA_PROMPT_TMPL_SUFFIX])) - ) - - def do_stop(self) -> None: - LifecycleHelper.stop_if_possible(self.storage_context_manager) - LifecycleHelper.stop_if_possible(self.service_context_manager) - - def do_dispose(self) -> None: - LifecycleHelper.dispose_if_possible(self.storage_context_manager) - LifecycleHelper.dispose_if_possible(self.service_context_manager) diff --git a/spaces/StatsByZach/app/README.md b/spaces/StatsByZach/app/README.md deleted file mode 100644 index c6cc054cd7fea45bcfdb0c3d0a0c4590c62656d9..0000000000000000000000000000000000000000 --- a/spaces/StatsByZach/app/README.md +++ /dev/null @@ -1,20 +0,0 @@ ---- -title: Shiny for Python template -emoji: 🌍 -colorFrom: yellow -colorTo: indigo -sdk: docker -pinned: false -license: mit -duplicated_from: posit/shiny-for-python-template ---- - -This is a templated Space for [Shiny for 
Python](https://shiny.rstudio.com/py/). - -To get started with a new app do the following: - -1) Install Shiny with `pip install shiny` -2) Create a new app with `shiny create .` -3) Then run the app with `shiny run --reload` - -To learn more about this framework please see the [Documentation](https://shiny.rstudio.com/py/docs/overview.html). diff --git a/spaces/SuYuanS/AudioCraft_Plus/audiocraft/grids/audiogen/__init__.py b/spaces/SuYuanS/AudioCraft_Plus/audiocraft/grids/audiogen/__init__.py deleted file mode 100644 index 8a0a2688450ce120088b79c3314a2f267394dc11..0000000000000000000000000000000000000000 --- a/spaces/SuYuanS/AudioCraft_Plus/audiocraft/grids/audiogen/__init__.py +++ /dev/null @@ -1,6 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. -"""AudioGen grids.""" diff --git a/spaces/Sumsub/Sumsub-ffs-demo/model_loader.py b/spaces/Sumsub/Sumsub-ffs-demo/model_loader.py deleted file mode 100644 index f923136991a4098624e7859376d2435384aa379f..0000000000000000000000000000000000000000 --- a/spaces/Sumsub/Sumsub-ffs-demo/model_loader.py +++ /dev/null @@ -1,59 +0,0 @@ -from enum import Enum -import torch - -from model_classes import Model200M, Model5M, SyntheticV2 -from model_transforms import transform_200M, transform_5M, transform_synthetic - -class ModelType(str, Enum): - MIDJOURNEY_200M = "midjourney_200M" - DIFFUSIONS_200M = "diffusions_200M" - MIDJOURNEY_5M = "midjourney_5M" - DIFFUSIONS_5M = "diffusions_5M" - SYNTHETIC_DETECTOR_V2 = "synthetic_detector_v2" - - def __str__(self): - return str(self.value) - - @staticmethod - def get_list(): - return [model_type.value for model_type in ModelType] - -def load_model(value: ModelType): - model = type_to_class[value] - path = type_to_path[value] - ckpt = torch.load(path, map_location=torch.device('cpu')) - model.load_state_dict(ckpt) - model.eval() - return model - -type_to_class = { - ModelType.MIDJOURNEY_200M : Model200M(), - ModelType.DIFFUSIONS_200M : Model200M(), - ModelType.MIDJOURNEY_5M : Model5M(), - ModelType.DIFFUSIONS_5M : Model5M(), - ModelType.SYNTHETIC_DETECTOR_V2 : SyntheticV2(), -} - -type_to_path = { - ModelType.MIDJOURNEY_200M : 'models/midjourney200M.pt', - ModelType.DIFFUSIONS_200M : 'models/diffusions200M.pt', - ModelType.MIDJOURNEY_5M : 'models/midjourney5M.pt', - ModelType.DIFFUSIONS_5M : 'models/diffusions5M.pt', - ModelType.SYNTHETIC_DETECTOR_V2 : 'models/synthetic_detector_v2.pt', -} - -type_to_loaded_model = { - ModelType.MIDJOURNEY_200M: load_model(ModelType.MIDJOURNEY_200M), - ModelType.DIFFUSIONS_200M: load_model(ModelType.DIFFUSIONS_200M), - ModelType.MIDJOURNEY_5M: load_model(ModelType.MIDJOURNEY_5M), - ModelType.DIFFUSIONS_5M: load_model(ModelType.DIFFUSIONS_5M), - ModelType.SYNTHETIC_DETECTOR_V2: load_model(ModelType.SYNTHETIC_DETECTOR_V2) -} - -type_to_transforms = { - ModelType.MIDJOURNEY_200M: transform_200M, - ModelType.DIFFUSIONS_200M: transform_200M, - ModelType.MIDJOURNEY_5M: transform_5M, - ModelType.DIFFUSIONS_5M: transform_5M, - ModelType.SYNTHETIC_DETECTOR_V2: transform_synthetic -} \ No newline at end of file diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/core/tests/test_run.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/core/tests/test_run.py deleted file mode 100644 index 9687786b46a4ab660474ebc10758413143b717a4..0000000000000000000000000000000000000000 --- 
a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/core/tests/test_run.py +++ /dev/null @@ -1,626 +0,0 @@ -# encoding: utf-8 -"""Tests for code execution (%run and related), which is particularly tricky. - -Because of how %run manages namespaces, and the fact that we are trying here to -verify subtle object deletion and reference counting issues, the %run tests -will be kept in this separate file. This makes it easier to aggregate in one -place the tricks needed to handle it; most other magics are much easier to test -and we do so in a common test_magic file. - -Note that any test using `run -i` should make sure to do a `reset` afterwards, -as otherwise it may influence later tests. -""" - -# Copyright (c) IPython Development Team. -# Distributed under the terms of the Modified BSD License. - - - -import functools -import os -import platform -import random -import string -import sys -import textwrap -import unittest -from os.path import join as pjoin -from unittest.mock import patch - -import pytest -from tempfile import TemporaryDirectory - -from IPython.core import debugger -from IPython.testing import decorators as dec -from IPython.testing import tools as tt -from IPython.utils.io import capture_output - - -def doctest_refbug(): - """Very nasty problem with references held by multiple runs of a script. - See: https://github.com/ipython/ipython/issues/141 - - In [1]: _ip.clear_main_mod_cache() - # random - - In [2]: %run refbug - - In [3]: call_f() - lowercased: hello - - In [4]: %run refbug - - In [5]: call_f() - lowercased: hello - lowercased: hello - """ - - -def doctest_run_builtins(): - r"""Check that %run doesn't damage __builtins__. - - In [1]: import tempfile - - In [2]: bid1 = id(__builtins__) - - In [3]: fname = tempfile.mkstemp('.py')[1] - - In [3]: f = open(fname, 'w', encoding='utf-8') - - In [4]: dummy= f.write('pass\n') - - In [5]: f.flush() - - In [6]: t1 = type(__builtins__) - - In [7]: %run $fname - - In [7]: f.close() - - In [8]: bid2 = id(__builtins__) - - In [9]: t2 = type(__builtins__) - - In [10]: t1 == t2 - Out[10]: True - - In [10]: bid1 == bid2 - Out[10]: True - - In [12]: try: - ....: os.unlink(fname) - ....: except: - ....: pass - ....: - """ - - -def doctest_run_option_parser(): - r"""Test option parser in %run. - - In [1]: %run print_argv.py - [] - - In [2]: %run print_argv.py print*.py - ['print_argv.py'] - - In [3]: %run -G print_argv.py print*.py - ['print*.py'] - - """ - - -@dec.skip_win32 -def doctest_run_option_parser_for_posix(): - r"""Test option parser in %run (Linux/OSX specific). - - You need double quote to escape glob in POSIX systems: - - In [1]: %run print_argv.py print\\*.py - ['print*.py'] - - You can't use quote to escape glob in POSIX systems: - - In [2]: %run print_argv.py 'print*.py' - ['print_argv.py'] - - """ - - -doctest_run_option_parser_for_posix.__skip_doctest__ = sys.platform == "win32" - - -@dec.skip_if_not_win32 -def doctest_run_option_parser_for_windows(): - r"""Test option parser in %run (Windows specific). - - In Windows, you can't escape ``*` `by backslash: - - In [1]: %run print_argv.py print\\*.py - ['print\\\\*.py'] - - You can use quote to escape glob: - - In [2]: %run print_argv.py 'print*.py' - ["'print*.py'"] - - """ - - -doctest_run_option_parser_for_windows.__skip_doctest__ = sys.platform != "win32" - - -def doctest_reset_del(): - """Test that resetting doesn't cause errors in __del__ methods. 
- - In [2]: class A(object): - ...: def __del__(self): - ...: print(str("Hi")) - ...: - - In [3]: a = A() - - In [4]: get_ipython().reset(); import gc; x = gc.collect(0) - Hi - - In [5]: 1+1 - Out[5]: 2 - """ - -# For some tests, it will be handy to organize them in a class with a common -# setup that makes a temp file - -class TestMagicRunPass(tt.TempFileMixin): - - def setUp(self): - content = "a = [1,2,3]\nb = 1" - self.mktmp(content) - - def run_tmpfile(self): - _ip = get_ipython() - # This fails on Windows if self.tmpfile.name has spaces or "~" in it. - # See below and ticket https://bugs.launchpad.net/bugs/366353 - _ip.run_line_magic("run", self.fname) - - def run_tmpfile_p(self): - _ip = get_ipython() - # This fails on Windows if self.tmpfile.name has spaces or "~" in it. - # See below and ticket https://bugs.launchpad.net/bugs/366353 - _ip.run_line_magic("run", "-p %s" % self.fname) - - def test_builtins_id(self): - """Check that %run doesn't damage __builtins__ """ - _ip = get_ipython() - # Test that the id of __builtins__ is not modified by %run - bid1 = id(_ip.user_ns['__builtins__']) - self.run_tmpfile() - bid2 = id(_ip.user_ns['__builtins__']) - assert bid1 == bid2 - - def test_builtins_type(self): - """Check that the type of __builtins__ doesn't change with %run. - - However, the above could pass if __builtins__ was already modified to - be a dict (it should be a module) by a previous use of %run. So we - also check explicitly that it really is a module: - """ - _ip = get_ipython() - self.run_tmpfile() - assert type(_ip.user_ns["__builtins__"]) == type(sys) - - def test_run_profile(self): - """Test that the option -p, which invokes the profiler, do not - crash by invoking execfile""" - self.run_tmpfile_p() - - def test_run_debug_twice(self): - # https://github.com/ipython/ipython/issues/10028 - _ip = get_ipython() - with tt.fake_input(["c"]): - _ip.run_line_magic("run", "-d %s" % self.fname) - with tt.fake_input(["c"]): - _ip.run_line_magic("run", "-d %s" % self.fname) - - def test_run_debug_twice_with_breakpoint(self): - """Make a valid python temp file.""" - _ip = get_ipython() - with tt.fake_input(["b 2", "c", "c"]): - _ip.run_line_magic("run", "-d %s" % self.fname) - - with tt.fake_input(["c"]): - with tt.AssertNotPrints("KeyError"): - _ip.run_line_magic("run", "-d %s" % self.fname) - - -class TestMagicRunSimple(tt.TempFileMixin): - - def test_simpledef(self): - """Test that simple class definitions work.""" - src = ("class foo: pass\n" - "def f(): return foo()") - self.mktmp(src) - _ip.run_line_magic("run", str(self.fname)) - _ip.run_cell("t = isinstance(f(), foo)") - assert _ip.user_ns["t"] is True - - @pytest.mark.xfail( - platform.python_implementation() == "PyPy", - reason="expecting __del__ call on exit is unreliable and doesn't happen on PyPy", - ) - def test_obj_del(self): - """Test that object's __del__ methods are called on exit.""" - src = ("class A(object):\n" - " def __del__(self):\n" - " print('object A deleted')\n" - "a = A()\n") - self.mktmp(src) - err = None - tt.ipexec_validate(self.fname, 'object A deleted', err) - - def test_aggressive_namespace_cleanup(self): - """Test that namespace cleanup is not too aggressive GH-238 - - Returning from another run magic deletes the namespace""" - # see ticket https://github.com/ipython/ipython/issues/238 - - with tt.TempFileMixin() as empty: - empty.mktmp("") - # On Windows, the filename will have \users in it, so we need to use the - # repr so that the \u becomes \\u. 
- src = ( - "ip = get_ipython()\n" - "for i in range(5):\n" - " try:\n" - " ip.magic(%r)\n" - " except NameError as e:\n" - " print(i)\n" - " break\n" % ("run " + empty.fname) - ) - self.mktmp(src) - _ip.run_line_magic("run", str(self.fname)) - _ip.run_cell("ip == get_ipython()") - assert _ip.user_ns["i"] == 4 - - def test_run_second(self): - """Test that running a second file doesn't clobber the first, gh-3547""" - self.mktmp("avar = 1\n" "def afunc():\n" " return avar\n") - - with tt.TempFileMixin() as empty: - empty.mktmp("") - - _ip.run_line_magic("run", self.fname) - _ip.run_line_magic("run", empty.fname) - assert _ip.user_ns["afunc"]() == 1 - - def test_tclass(self): - mydir = os.path.dirname(__file__) - tc = os.path.join(mydir, "tclass") - src = f"""\ -import gc -%run "{tc}" C-first -gc.collect(0) -%run "{tc}" C-second -gc.collect(0) -%run "{tc}" C-third -gc.collect(0) -%reset -f -""" - self.mktmp(src, ".ipy") - out = """\ -ARGV 1-: ['C-first'] -ARGV 1-: ['C-second'] -tclass.py: deleting object: C-first -ARGV 1-: ['C-third'] -tclass.py: deleting object: C-second -tclass.py: deleting object: C-third -""" - err = None - tt.ipexec_validate(self.fname, out, err) - - def test_run_i_after_reset(self): - """Check that %run -i still works after %reset (gh-693)""" - src = "yy = zz\n" - self.mktmp(src) - _ip.run_cell("zz = 23") - try: - _ip.run_line_magic("run", "-i %s" % self.fname) - assert _ip.user_ns["yy"] == 23 - finally: - _ip.run_line_magic("reset", "-f") - - _ip.run_cell("zz = 23") - try: - _ip.run_line_magic("run", "-i %s" % self.fname) - assert _ip.user_ns["yy"] == 23 - finally: - _ip.run_line_magic("reset", "-f") - - def test_unicode(self): - """Check that files in odd encodings are accepted.""" - mydir = os.path.dirname(__file__) - na = os.path.join(mydir, "nonascii.py") - _ip.magic('run "%s"' % na) - assert _ip.user_ns["u"] == "Ўт№Ф" - - def test_run_py_file_attribute(self): - """Test handling of `__file__` attribute in `%run .py`.""" - src = "t = __file__\n" - self.mktmp(src) - _missing = object() - file1 = _ip.user_ns.get("__file__", _missing) - _ip.run_line_magic("run", self.fname) - file2 = _ip.user_ns.get("__file__", _missing) - - # Check that __file__ was equal to the filename in the script's - # namespace. - assert _ip.user_ns["t"] == self.fname - - # Check that __file__ was not leaked back into user_ns. - assert file1 == file2 - - def test_run_ipy_file_attribute(self): - """Test handling of `__file__` attribute in `%run `.""" - src = "t = __file__\n" - self.mktmp(src, ext='.ipy') - _missing = object() - file1 = _ip.user_ns.get("__file__", _missing) - _ip.run_line_magic("run", self.fname) - file2 = _ip.user_ns.get("__file__", _missing) - - # Check that __file__ was equal to the filename in the script's - # namespace. - assert _ip.user_ns["t"] == self.fname - - # Check that __file__ was not leaked back into user_ns. 
- assert file1 == file2 - - def test_run_formatting(self): - """ Test that %run -t -N does not raise a TypeError for N > 1.""" - src = "pass" - self.mktmp(src) - _ip.run_line_magic("run", "-t -N 1 %s" % self.fname) - _ip.run_line_magic("run", "-t -N 10 %s" % self.fname) - - def test_ignore_sys_exit(self): - """Test the -e option to ignore sys.exit()""" - src = "import sys; sys.exit(1)" - self.mktmp(src) - with tt.AssertPrints("SystemExit"): - _ip.run_line_magic("run", self.fname) - - with tt.AssertNotPrints("SystemExit"): - _ip.run_line_magic("run", "-e %s" % self.fname) - - def test_run_nb(self): - """Test %run notebook.ipynb""" - pytest.importorskip("nbformat") - from nbformat import v4, writes - nb = v4.new_notebook( - cells=[ - v4.new_markdown_cell("The Ultimate Question of Everything"), - v4.new_code_cell("answer=42") - ] - ) - src = writes(nb, version=4) - self.mktmp(src, ext='.ipynb') - - _ip.run_line_magic("run", self.fname) - - assert _ip.user_ns["answer"] == 42 - - def test_run_nb_error(self): - """Test %run notebook.ipynb error""" - pytest.importorskip("nbformat") - from nbformat import v4, writes - - # %run when a file name isn't provided - pytest.raises(Exception, _ip.magic, "run") - - # %run when a file doesn't exist - pytest.raises(Exception, _ip.magic, "run foobar.ipynb") - - # %run on a notebook with an error - nb = v4.new_notebook( - cells=[ - v4.new_code_cell("0/0") - ] - ) - src = writes(nb, version=4) - self.mktmp(src, ext='.ipynb') - pytest.raises(Exception, _ip.magic, "run %s" % self.fname) - - def test_file_options(self): - src = ('import sys\n' - 'a = " ".join(sys.argv[1:])\n') - self.mktmp(src) - test_opts = "-x 3 --verbose" - _ip.run_line_magic("run", "{0} {1}".format(self.fname, test_opts)) - assert _ip.user_ns["a"] == test_opts - - -class TestMagicRunWithPackage(unittest.TestCase): - - def writefile(self, name, content): - path = os.path.join(self.tempdir.name, name) - d = os.path.dirname(path) - if not os.path.isdir(d): - os.makedirs(d) - with open(path, "w", encoding="utf-8") as f: - f.write(textwrap.dedent(content)) - - def setUp(self): - self.package = package = 'tmp{0}'.format(''.join([random.choice(string.ascii_letters) for i in range(10)])) - """Temporary (probably) valid python package name.""" - - self.value = int(random.random() * 10000) - - self.tempdir = TemporaryDirectory() - self.__orig_cwd = os.getcwd() - sys.path.insert(0, self.tempdir.name) - - self.writefile(os.path.join(package, '__init__.py'), '') - self.writefile(os.path.join(package, 'sub.py'), """ - x = {0!r} - """.format(self.value)) - self.writefile(os.path.join(package, 'relative.py'), """ - from .sub import x - """) - self.writefile(os.path.join(package, 'absolute.py'), """ - from {0}.sub import x - """.format(package)) - self.writefile(os.path.join(package, 'args.py'), """ - import sys - a = " ".join(sys.argv[1:]) - """.format(package)) - - def tearDown(self): - os.chdir(self.__orig_cwd) - sys.path[:] = [p for p in sys.path if p != self.tempdir.name] - self.tempdir.cleanup() - - def check_run_submodule(self, submodule, opts=""): - _ip.user_ns.pop("x", None) - _ip.run_line_magic( - "run", "{2} -m {0}.{1}".format(self.package, submodule, opts) - ) - self.assertEqual( - _ip.user_ns["x"], - self.value, - "Variable `x` is not loaded from module `{0}`.".format(submodule), - ) - - def test_run_submodule_with_absolute_import(self): - self.check_run_submodule('absolute') - - def test_run_submodule_with_relative_import(self): - """Run submodule that has a relative import statement (#2727).""" 
- self.check_run_submodule('relative') - - def test_prun_submodule_with_absolute_import(self): - self.check_run_submodule('absolute', '-p') - - def test_prun_submodule_with_relative_import(self): - self.check_run_submodule('relative', '-p') - - def with_fake_debugger(func): - @functools.wraps(func) - def wrapper(*args, **kwds): - with patch.object(debugger.Pdb, 'run', staticmethod(eval)): - return func(*args, **kwds) - return wrapper - - @with_fake_debugger - def test_debug_run_submodule_with_absolute_import(self): - self.check_run_submodule('absolute', '-d') - - @with_fake_debugger - def test_debug_run_submodule_with_relative_import(self): - self.check_run_submodule('relative', '-d') - - def test_module_options(self): - _ip.user_ns.pop("a", None) - test_opts = "-x abc -m test" - _ip.run_line_magic("run", "-m {0}.args {1}".format(self.package, test_opts)) - assert _ip.user_ns["a"] == test_opts - - def test_module_options_with_separator(self): - _ip.user_ns.pop("a", None) - test_opts = "-x abc -m test" - _ip.run_line_magic("run", "-m {0}.args -- {1}".format(self.package, test_opts)) - assert _ip.user_ns["a"] == test_opts - - -def test_run__name__(): - with TemporaryDirectory() as td: - path = pjoin(td, "foo.py") - with open(path, "w", encoding="utf-8") as f: - f.write("q = __name__") - - _ip.user_ns.pop("q", None) - _ip.run_line_magic("run", "{}".format(path)) - assert _ip.user_ns.pop("q") == "__main__" - - _ip.run_line_magic("run", "-n {}".format(path)) - assert _ip.user_ns.pop("q") == "foo" - - try: - _ip.run_line_magic("run", "-i -n {}".format(path)) - assert _ip.user_ns.pop("q") == "foo" - finally: - _ip.run_line_magic("reset", "-f") - - -def test_run_tb(): - """Test traceback offset in %run""" - with TemporaryDirectory() as td: - path = pjoin(td, "foo.py") - with open(path, "w", encoding="utf-8") as f: - f.write( - "\n".join( - [ - "def foo():", - " return bar()", - "def bar():", - " raise RuntimeError('hello!')", - "foo()", - ] - ) - ) - with capture_output() as io: - _ip.run_line_magic("run", "{}".format(path)) - out = io.stdout - assert "execfile" not in out - assert "RuntimeError" in out - assert out.count("---->") == 3 - del ip.user_ns['bar'] - del ip.user_ns['foo'] - - -def test_multiprocessing_run(): - """Set we can run mutiprocesgin without messing up up main namespace - - Note that import `nose.tools as nt` mdify the value s - sys.module['__mp_main__'] so we need to temporarily set it to None to test - the issue. 
- """ - with TemporaryDirectory() as td: - mpm = sys.modules.get('__mp_main__') - sys.modules['__mp_main__'] = None - try: - path = pjoin(td, "test.py") - with open(path, "w", encoding="utf-8") as f: - f.write("import multiprocessing\nprint('hoy')") - with capture_output() as io: - _ip.run_line_magic('run', path) - _ip.run_cell("i_m_undefined") - out = io.stdout - assert "hoy" in out - assert "AttributeError" not in out - assert "NameError" in out - assert out.count("---->") == 1 - except: - raise - finally: - sys.modules['__mp_main__'] = mpm - - -def test_script_tb(): - """Test traceback offset in `ipython script.py`""" - with TemporaryDirectory() as td: - path = pjoin(td, "foo.py") - with open(path, "w", encoding="utf-8") as f: - f.write( - "\n".join( - [ - "def foo():", - " return bar()", - "def bar():", - " raise RuntimeError('hello!')", - "foo()", - ] - ) - ) - out, err = tt.ipexec(path) - assert "execfile" not in out - assert "RuntimeError" in out - assert out.count("---->") == 3 diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/testing/plugin/simple.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/testing/plugin/simple.py deleted file mode 100644 index 35fbfd2fbdced20195bd18a37218fb909cc9b83c..0000000000000000000000000000000000000000 --- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/testing/plugin/simple.py +++ /dev/null @@ -1,44 +0,0 @@ -"""Simple example using doctests. - -This file just contains doctests both using plain python and IPython prompts. -All tests should be loaded by Pytest. -""" - -def pyfunc(): - """Some pure python tests... - - >>> pyfunc() - 'pyfunc' - - >>> import os - - >>> 2+3 - 5 - - >>> for i in range(3): - ... print(i, end=' ') - ... print(i+1, end=' ') - ... - 0 1 1 2 2 3 - """ - return 'pyfunc' - - -def ipyfunc(): - """Some IPython tests... 
- - In [1]: ipyfunc() - Out[1]: 'ipyfunc' - - In [2]: import os - - In [3]: 2+3 - Out[3]: 5 - - In [4]: for i in range(3): - ...: print(i, end=' ') - ...: print(i+1, end=' ') - ...: - Out[4]: 0 1 1 2 2 3 - """ - return "ipyfunc" diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/aiohttp/web_exceptions.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/aiohttp/web_exceptions.py deleted file mode 100644 index ae706a1806299a1f13f3a905b4582c52bda5450c..0000000000000000000000000000000000000000 --- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/aiohttp/web_exceptions.py +++ /dev/null @@ -1,441 +0,0 @@ -import warnings -from typing import Any, Dict, Iterable, List, Optional, Set # noqa - -from yarl import URL - -from .typedefs import LooseHeaders, StrOrURL -from .web_response import Response - -__all__ = ( - "HTTPException", - "HTTPError", - "HTTPRedirection", - "HTTPSuccessful", - "HTTPOk", - "HTTPCreated", - "HTTPAccepted", - "HTTPNonAuthoritativeInformation", - "HTTPNoContent", - "HTTPResetContent", - "HTTPPartialContent", - "HTTPMultipleChoices", - "HTTPMovedPermanently", - "HTTPFound", - "HTTPSeeOther", - "HTTPNotModified", - "HTTPUseProxy", - "HTTPTemporaryRedirect", - "HTTPPermanentRedirect", - "HTTPClientError", - "HTTPBadRequest", - "HTTPUnauthorized", - "HTTPPaymentRequired", - "HTTPForbidden", - "HTTPNotFound", - "HTTPMethodNotAllowed", - "HTTPNotAcceptable", - "HTTPProxyAuthenticationRequired", - "HTTPRequestTimeout", - "HTTPConflict", - "HTTPGone", - "HTTPLengthRequired", - "HTTPPreconditionFailed", - "HTTPRequestEntityTooLarge", - "HTTPRequestURITooLong", - "HTTPUnsupportedMediaType", - "HTTPRequestRangeNotSatisfiable", - "HTTPExpectationFailed", - "HTTPMisdirectedRequest", - "HTTPUnprocessableEntity", - "HTTPFailedDependency", - "HTTPUpgradeRequired", - "HTTPPreconditionRequired", - "HTTPTooManyRequests", - "HTTPRequestHeaderFieldsTooLarge", - "HTTPUnavailableForLegalReasons", - "HTTPServerError", - "HTTPInternalServerError", - "HTTPNotImplemented", - "HTTPBadGateway", - "HTTPServiceUnavailable", - "HTTPGatewayTimeout", - "HTTPVersionNotSupported", - "HTTPVariantAlsoNegotiates", - "HTTPInsufficientStorage", - "HTTPNotExtended", - "HTTPNetworkAuthenticationRequired", -) - - -############################################################ -# HTTP Exceptions -############################################################ - - -class HTTPException(Response, Exception): - - # You should set in subclasses: - # status = 200 - - status_code = -1 - empty_body = False - - __http_exception__ = True - - def __init__( - self, - *, - headers: Optional[LooseHeaders] = None, - reason: Optional[str] = None, - body: Any = None, - text: Optional[str] = None, - content_type: Optional[str] = None, - ) -> None: - if body is not None: - warnings.warn( - "body argument is deprecated for http web exceptions", - DeprecationWarning, - ) - Response.__init__( - self, - status=self.status_code, - headers=headers, - reason=reason, - body=body, - text=text, - content_type=content_type, - ) - Exception.__init__(self, self.reason) - if self.body is None and not self.empty_body: - self.text = f"{self.status}: {self.reason}" - - def __bool__(self) -> bool: - return True - - -class HTTPError(HTTPException): - """Base class for exceptions with status codes in the 400s and 500s.""" - - -class HTTPRedirection(HTTPException): - """Base class for exceptions with status codes in the 300s.""" - - -class HTTPSuccessful(HTTPException): - """Base class for exceptions with status codes in the 
200s.""" - - -class HTTPOk(HTTPSuccessful): - status_code = 200 - - -class HTTPCreated(HTTPSuccessful): - status_code = 201 - - -class HTTPAccepted(HTTPSuccessful): - status_code = 202 - - -class HTTPNonAuthoritativeInformation(HTTPSuccessful): - status_code = 203 - - -class HTTPNoContent(HTTPSuccessful): - status_code = 204 - empty_body = True - - -class HTTPResetContent(HTTPSuccessful): - status_code = 205 - empty_body = True - - -class HTTPPartialContent(HTTPSuccessful): - status_code = 206 - - -############################################################ -# 3xx redirection -############################################################ - - -class _HTTPMove(HTTPRedirection): - def __init__( - self, - location: StrOrURL, - *, - headers: Optional[LooseHeaders] = None, - reason: Optional[str] = None, - body: Any = None, - text: Optional[str] = None, - content_type: Optional[str] = None, - ) -> None: - if not location: - raise ValueError("HTTP redirects need a location to redirect to.") - super().__init__( - headers=headers, - reason=reason, - body=body, - text=text, - content_type=content_type, - ) - self.headers["Location"] = str(URL(location)) - self.location = location - - -class HTTPMultipleChoices(_HTTPMove): - status_code = 300 - - -class HTTPMovedPermanently(_HTTPMove): - status_code = 301 - - -class HTTPFound(_HTTPMove): - status_code = 302 - - -# This one is safe after a POST (the redirected location will be -# retrieved with GET): -class HTTPSeeOther(_HTTPMove): - status_code = 303 - - -class HTTPNotModified(HTTPRedirection): - # FIXME: this should include a date or etag header - status_code = 304 - empty_body = True - - -class HTTPUseProxy(_HTTPMove): - # Not a move, but looks a little like one - status_code = 305 - - -class HTTPTemporaryRedirect(_HTTPMove): - status_code = 307 - - -class HTTPPermanentRedirect(_HTTPMove): - status_code = 308 - - -############################################################ -# 4xx client error -############################################################ - - -class HTTPClientError(HTTPError): - pass - - -class HTTPBadRequest(HTTPClientError): - status_code = 400 - - -class HTTPUnauthorized(HTTPClientError): - status_code = 401 - - -class HTTPPaymentRequired(HTTPClientError): - status_code = 402 - - -class HTTPForbidden(HTTPClientError): - status_code = 403 - - -class HTTPNotFound(HTTPClientError): - status_code = 404 - - -class HTTPMethodNotAllowed(HTTPClientError): - status_code = 405 - - def __init__( - self, - method: str, - allowed_methods: Iterable[str], - *, - headers: Optional[LooseHeaders] = None, - reason: Optional[str] = None, - body: Any = None, - text: Optional[str] = None, - content_type: Optional[str] = None, - ) -> None: - allow = ",".join(sorted(allowed_methods)) - super().__init__( - headers=headers, - reason=reason, - body=body, - text=text, - content_type=content_type, - ) - self.headers["Allow"] = allow - self.allowed_methods: Set[str] = set(allowed_methods) - self.method = method.upper() - - -class HTTPNotAcceptable(HTTPClientError): - status_code = 406 - - -class HTTPProxyAuthenticationRequired(HTTPClientError): - status_code = 407 - - -class HTTPRequestTimeout(HTTPClientError): - status_code = 408 - - -class HTTPConflict(HTTPClientError): - status_code = 409 - - -class HTTPGone(HTTPClientError): - status_code = 410 - - -class HTTPLengthRequired(HTTPClientError): - status_code = 411 - - -class HTTPPreconditionFailed(HTTPClientError): - status_code = 412 - - -class HTTPRequestEntityTooLarge(HTTPClientError): - status_code = 413 
- - def __init__(self, max_size: float, actual_size: float, **kwargs: Any) -> None: - kwargs.setdefault( - "text", - "Maximum request body size {} exceeded, " - "actual body size {}".format(max_size, actual_size), - ) - super().__init__(**kwargs) - - -class HTTPRequestURITooLong(HTTPClientError): - status_code = 414 - - -class HTTPUnsupportedMediaType(HTTPClientError): - status_code = 415 - - -class HTTPRequestRangeNotSatisfiable(HTTPClientError): - status_code = 416 - - -class HTTPExpectationFailed(HTTPClientError): - status_code = 417 - - -class HTTPMisdirectedRequest(HTTPClientError): - status_code = 421 - - -class HTTPUnprocessableEntity(HTTPClientError): - status_code = 422 - - -class HTTPFailedDependency(HTTPClientError): - status_code = 424 - - -class HTTPUpgradeRequired(HTTPClientError): - status_code = 426 - - -class HTTPPreconditionRequired(HTTPClientError): - status_code = 428 - - -class HTTPTooManyRequests(HTTPClientError): - status_code = 429 - - -class HTTPRequestHeaderFieldsTooLarge(HTTPClientError): - status_code = 431 - - -class HTTPUnavailableForLegalReasons(HTTPClientError): - status_code = 451 - - def __init__( - self, - link: str, - *, - headers: Optional[LooseHeaders] = None, - reason: Optional[str] = None, - body: Any = None, - text: Optional[str] = None, - content_type: Optional[str] = None, - ) -> None: - super().__init__( - headers=headers, - reason=reason, - body=body, - text=text, - content_type=content_type, - ) - self.headers["Link"] = '<%s>; rel="blocked-by"' % link - self.link = link - - -############################################################ -# 5xx Server Error -############################################################ -# Response status codes beginning with the digit "5" indicate cases in -# which the server is aware that it has erred or is incapable of -# performing the request. Except when responding to a HEAD request, the -# server SHOULD include an entity containing an explanation of the error -# situation, and whether it is a temporary or permanent condition. User -# agents SHOULD display any included entity to the user. These response -# codes are applicable to any request method. 
- - -class HTTPServerError(HTTPError): - pass - - -class HTTPInternalServerError(HTTPServerError): - status_code = 500 - - -class HTTPNotImplemented(HTTPServerError): - status_code = 501 - - -class HTTPBadGateway(HTTPServerError): - status_code = 502 - - -class HTTPServiceUnavailable(HTTPServerError): - status_code = 503 - - -class HTTPGatewayTimeout(HTTPServerError): - status_code = 504 - - -class HTTPVersionNotSupported(HTTPServerError): - status_code = 505 - - -class HTTPVariantAlsoNegotiates(HTTPServerError): - status_code = 506 - - -class HTTPInsufficientStorage(HTTPServerError): - status_code = 507 - - -class HTTPNotExtended(HTTPServerError): - status_code = 510 - - -class HTTPNetworkAuthenticationRequired(HTTPServerError): - status_code = 511 diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/altair/utils/theme.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/altair/utils/theme.py deleted file mode 100644 index 10dc6fa8a81646ed7e9fa8d6be4e1634ec14e7d8..0000000000000000000000000000000000000000 --- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/altair/utils/theme.py +++ /dev/null @@ -1,10 +0,0 @@ -"""Utilities for registering and working with themes""" - -from .plugin_registry import PluginRegistry -from typing import Callable - -ThemeType = Callable[..., dict] - - -class ThemeRegistry(PluginRegistry[ThemeType]): - pass diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/dateutil/tz/_common.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/dateutil/tz/_common.py deleted file mode 100644 index e6ac11831522b266114d5b68ee1da298e3aeb14a..0000000000000000000000000000000000000000 --- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/dateutil/tz/_common.py +++ /dev/null @@ -1,419 +0,0 @@ -from six import PY2 - -from functools import wraps - -from datetime import datetime, timedelta, tzinfo - - -ZERO = timedelta(0) - -__all__ = ['tzname_in_python2', 'enfold'] - - -def tzname_in_python2(namefunc): - """Change unicode output into bytestrings in Python 2 - - tzname() API changed in Python 3. It used to return bytes, but was changed - to unicode strings - """ - if PY2: - @wraps(namefunc) - def adjust_encoding(*args, **kwargs): - name = namefunc(*args, **kwargs) - if name is not None: - name = name.encode() - - return name - - return adjust_encoding - else: - return namefunc - - -# The following is adapted from Alexander Belopolsky's tz library -# https://github.com/abalkin/tz -if hasattr(datetime, 'fold'): - # This is the pre-python 3.6 fold situation - def enfold(dt, fold=1): - """ - Provides a unified interface for assigning the ``fold`` attribute to - datetimes both before and after the implementation of PEP-495. - - :param fold: - The value for the ``fold`` attribute in the returned datetime. This - should be either 0 or 1. - - :return: - Returns an object for which ``getattr(dt, 'fold', 0)`` returns - ``fold`` for all versions of Python. In versions prior to - Python 3.6, this is a ``_DatetimeWithFold`` object, which is a - subclass of :py:class:`datetime.datetime` with the ``fold`` - attribute added, if ``fold`` is 1. - - .. versionadded:: 2.6.0 - """ - return dt.replace(fold=fold) - -else: - class _DatetimeWithFold(datetime): - """ - This is a class designed to provide a PEP 495-compliant interface for - Python versions before 3.6. It is used only for dates in a fold, so - the ``fold`` attribute is fixed at ``1``. - - .. 
versionadded:: 2.6.0 - """ - __slots__ = () - - def replace(self, *args, **kwargs): - """ - Return a datetime with the same attributes, except for those - attributes given new values by whichever keyword arguments are - specified. Note that tzinfo=None can be specified to create a naive - datetime from an aware datetime with no conversion of date and time - data. - - This is reimplemented in ``_DatetimeWithFold`` because pypy3 will - return a ``datetime.datetime`` even if ``fold`` is unchanged. - """ - argnames = ( - 'year', 'month', 'day', 'hour', 'minute', 'second', - 'microsecond', 'tzinfo' - ) - - for arg, argname in zip(args, argnames): - if argname in kwargs: - raise TypeError('Duplicate argument: {}'.format(argname)) - - kwargs[argname] = arg - - for argname in argnames: - if argname not in kwargs: - kwargs[argname] = getattr(self, argname) - - dt_class = self.__class__ if kwargs.get('fold', 1) else datetime - - return dt_class(**kwargs) - - @property - def fold(self): - return 1 - - def enfold(dt, fold=1): - """ - Provides a unified interface for assigning the ``fold`` attribute to - datetimes both before and after the implementation of PEP-495. - - :param fold: - The value for the ``fold`` attribute in the returned datetime. This - should be either 0 or 1. - - :return: - Returns an object for which ``getattr(dt, 'fold', 0)`` returns - ``fold`` for all versions of Python. In versions prior to - Python 3.6, this is a ``_DatetimeWithFold`` object, which is a - subclass of :py:class:`datetime.datetime` with the ``fold`` - attribute added, if ``fold`` is 1. - - .. versionadded:: 2.6.0 - """ - if getattr(dt, 'fold', 0) == fold: - return dt - - args = dt.timetuple()[:6] - args += (dt.microsecond, dt.tzinfo) - - if fold: - return _DatetimeWithFold(*args) - else: - return datetime(*args) - - -def _validate_fromutc_inputs(f): - """ - The CPython version of ``fromutc`` checks that the input is a ``datetime`` - object and that ``self`` is attached as its ``tzinfo``. - """ - @wraps(f) - def fromutc(self, dt): - if not isinstance(dt, datetime): - raise TypeError("fromutc() requires a datetime argument") - if dt.tzinfo is not self: - raise ValueError("dt.tzinfo is not self") - - return f(self, dt) - - return fromutc - - -class _tzinfo(tzinfo): - """ - Base class for all ``dateutil`` ``tzinfo`` objects. - """ - - def is_ambiguous(self, dt): - """ - Whether or not the "wall time" of a given datetime is ambiguous in this - zone. - - :param dt: - A :py:class:`datetime.datetime`, naive or time zone aware. - - - :return: - Returns ``True`` if ambiguous, ``False`` otherwise. - - .. versionadded:: 2.6.0 - """ - - dt = dt.replace(tzinfo=self) - - wall_0 = enfold(dt, fold=0) - wall_1 = enfold(dt, fold=1) - - same_offset = wall_0.utcoffset() == wall_1.utcoffset() - same_dt = wall_0.replace(tzinfo=None) == wall_1.replace(tzinfo=None) - - return same_dt and not same_offset - - def _fold_status(self, dt_utc, dt_wall): - """ - Determine the fold status of a "wall" datetime, given a representation - of the same datetime as a (naive) UTC datetime. This is calculated based - on the assumption that ``dt.utcoffset() - dt.dst()`` is constant for all - datetimes, and that this offset is the actual number of hours separating - ``dt_utc`` and ``dt_wall``. - - :param dt_utc: - Representation of the datetime as UTC - - :param dt_wall: - Representation of the datetime as "wall time". 
This parameter must - either have a `fold` attribute or have a fold-naive - :class:`datetime.tzinfo` attached, otherwise the calculation may - fail. - """ - if self.is_ambiguous(dt_wall): - delta_wall = dt_wall - dt_utc - _fold = int(delta_wall == (dt_utc.utcoffset() - dt_utc.dst())) - else: - _fold = 0 - - return _fold - - def _fold(self, dt): - return getattr(dt, 'fold', 0) - - def _fromutc(self, dt): - """ - Given a timezone-aware datetime in a given timezone, calculates a - timezone-aware datetime in a new timezone. - - Since this is the one time that we *know* we have an unambiguous - datetime object, we take this opportunity to determine whether the - datetime is ambiguous and in a "fold" state (e.g. if it's the first - occurrence, chronologically, of the ambiguous datetime). - - :param dt: - A timezone-aware :class:`datetime.datetime` object. - """ - - # Re-implement the algorithm from Python's datetime.py - dtoff = dt.utcoffset() - if dtoff is None: - raise ValueError("fromutc() requires a non-None utcoffset() " - "result") - - # The original datetime.py code assumes that `dst()` defaults to - # zero during ambiguous times. PEP 495 inverts this presumption, so - # for pre-PEP 495 versions of python, we need to tweak the algorithm. - dtdst = dt.dst() - if dtdst is None: - raise ValueError("fromutc() requires a non-None dst() result") - delta = dtoff - dtdst - - dt += delta - # Set fold=1 so we can default to being in the fold for - # ambiguous dates. - dtdst = enfold(dt, fold=1).dst() - if dtdst is None: - raise ValueError("fromutc(): dt.dst gave inconsistent " - "results; cannot convert") - return dt + dtdst - - @_validate_fromutc_inputs - def fromutc(self, dt): - """ - Given a timezone-aware datetime in a given timezone, calculates a - timezone-aware datetime in a new timezone. - - Since this is the one time that we *know* we have an unambiguous - datetime object, we take this opportunity to determine whether the - datetime is ambiguous and in a "fold" state (e.g. if it's the first - occurrence, chronologically, of the ambiguous datetime). - - :param dt: - A timezone-aware :class:`datetime.datetime` object. - """ - dt_wall = self._fromutc(dt) - - # Calculate the fold status given the two datetimes. - _fold = self._fold_status(dt, dt_wall) - - # Set the default fold value for ambiguous dates - return enfold(dt_wall, fold=_fold) - - -class tzrangebase(_tzinfo): - """ - This is an abstract base class for time zones represented by an annual - transition into and out of DST. Child classes should implement the following - methods: - - * ``__init__(self, *args, **kwargs)`` - * ``transitions(self, year)`` - this is expected to return a tuple of - datetimes representing the DST on and off transitions in standard - time. - - A fully initialized ``tzrangebase`` subclass should also provide the - following attributes: - * ``hasdst``: Boolean whether or not the zone uses DST. - * ``_dst_offset`` / ``_std_offset``: :class:`datetime.timedelta` objects - representing the respective UTC offsets. - * ``_dst_abbr`` / ``_std_abbr``: Strings representing the timezone short - abbreviations in DST and STD, respectively. - * ``_hasdst``: Whether or not the zone has DST. - - .. 
versionadded:: 2.6.0 - """ - def __init__(self): - raise NotImplementedError('tzrangebase is an abstract base class') - - def utcoffset(self, dt): - isdst = self._isdst(dt) - - if isdst is None: - return None - elif isdst: - return self._dst_offset - else: - return self._std_offset - - def dst(self, dt): - isdst = self._isdst(dt) - - if isdst is None: - return None - elif isdst: - return self._dst_base_offset - else: - return ZERO - - @tzname_in_python2 - def tzname(self, dt): - if self._isdst(dt): - return self._dst_abbr - else: - return self._std_abbr - - def fromutc(self, dt): - """ Given a datetime in UTC, return local time """ - if not isinstance(dt, datetime): - raise TypeError("fromutc() requires a datetime argument") - - if dt.tzinfo is not self: - raise ValueError("dt.tzinfo is not self") - - # Get transitions - if there are none, fixed offset - transitions = self.transitions(dt.year) - if transitions is None: - return dt + self.utcoffset(dt) - - # Get the transition times in UTC - dston, dstoff = transitions - - dston -= self._std_offset - dstoff -= self._std_offset - - utc_transitions = (dston, dstoff) - dt_utc = dt.replace(tzinfo=None) - - isdst = self._naive_isdst(dt_utc, utc_transitions) - - if isdst: - dt_wall = dt + self._dst_offset - else: - dt_wall = dt + self._std_offset - - _fold = int(not isdst and self.is_ambiguous(dt_wall)) - - return enfold(dt_wall, fold=_fold) - - def is_ambiguous(self, dt): - """ - Whether or not the "wall time" of a given datetime is ambiguous in this - zone. - - :param dt: - A :py:class:`datetime.datetime`, naive or time zone aware. - - - :return: - Returns ``True`` if ambiguous, ``False`` otherwise. - - .. versionadded:: 2.6.0 - """ - if not self.hasdst: - return False - - start, end = self.transitions(dt.year) - - dt = dt.replace(tzinfo=None) - return (end <= dt < end + self._dst_base_offset) - - def _isdst(self, dt): - if not self.hasdst: - return False - elif dt is None: - return None - - transitions = self.transitions(dt.year) - - if transitions is None: - return False - - dt = dt.replace(tzinfo=None) - - isdst = self._naive_isdst(dt, transitions) - - # Handle ambiguous dates - if not isdst and self.is_ambiguous(dt): - return not self._fold(dt) - else: - return isdst - - def _naive_isdst(self, dt, transitions): - dston, dstoff = transitions - - dt = dt.replace(tzinfo=None) - - if dston < dstoff: - isdst = dston <= dt < dstoff - else: - isdst = not dstoff <= dt < dston - - return isdst - - @property - def _dst_base_offset(self): - return self._dst_offset - self._std_offset - - __hash__ = None - - def __ne__(self, other): - return not (self == other) - - def __repr__(self): - return "%s(...)" % self.__class__.__name__ - - __reduce__ = object.__reduce__ diff --git a/spaces/Suniilkumaar/SwapMukham/face_analyser.py b/spaces/Suniilkumaar/SwapMukham/face_analyser.py deleted file mode 100644 index 69a5955a34b27b98f52087f5654e2c243378ae6a..0000000000000000000000000000000000000000 --- a/spaces/Suniilkumaar/SwapMukham/face_analyser.py +++ /dev/null @@ -1,194 +0,0 @@ -import os -import cv2 -import numpy as np -from tqdm import tqdm -from utils import scale_bbox_from_center - -detect_conditions = [ - "best detection", - "left most", - "right most", - "top most", - "bottom most", - "middle", - "biggest", - "smallest", -] - -swap_options_list = [ - "All Face", - "Specific Face", - "Age less than", - "Age greater than", - "All Male", - "All Female", - "Left Most", - "Right Most", - "Top Most", - "Bottom Most", - "Middle", - "Biggest", - "Smallest", -] - 
-def get_single_face(faces, method="best detection"): - total_faces = len(faces) - if total_faces == 1: - return faces[0] - - print(f"{total_faces} face detected. Using {method} face.") - if method == "best detection": - return sorted(faces, key=lambda face: face["det_score"])[-1] - elif method == "left most": - return sorted(faces, key=lambda face: face["bbox"][0])[0] - elif method == "right most": - return sorted(faces, key=lambda face: face["bbox"][0])[-1] - elif method == "top most": - return sorted(faces, key=lambda face: face["bbox"][1])[0] - elif method == "bottom most": - return sorted(faces, key=lambda face: face["bbox"][1])[-1] - elif method == "middle": - return sorted(faces, key=lambda face: ( - (face["bbox"][0] + face["bbox"][2]) / 2 - 0.5) ** 2 + - ((face["bbox"][1] + face["bbox"][3]) / 2 - 0.5) ** 2)[len(faces) // 2] - elif method == "biggest": - return sorted(faces, key=lambda face: (face["bbox"][2] - face["bbox"][0]) * (face["bbox"][3] - face["bbox"][1]))[-1] - elif method == "smallest": - return sorted(faces, key=lambda face: (face["bbox"][2] - face["bbox"][0]) * (face["bbox"][3] - face["bbox"][1]))[0] - - -def analyse_face(image, model, return_single_face=True, detect_condition="best detection", scale=1.0): - faces = model.get(image) - if scale != 1: # landmark-scale - for i, face in enumerate(faces): - landmark = face['kps'] - center = np.mean(landmark, axis=0) - landmark = center + (landmark - center) * scale - faces[i]['kps'] = landmark - - if not return_single_face: - return faces - - return get_single_face(faces, method=detect_condition) - - -def cosine_distance(a, b): - a /= np.linalg.norm(a) - b /= np.linalg.norm(b) - return 1 - np.dot(a, b) - - -def get_analysed_data(face_analyser, image_sequence, source_data, swap_condition="All face", detect_condition="left most", scale=1.0): - if swap_condition != "Specific Face": - source_path, age = source_data - source_image = cv2.imread(source_path) - analysed_source = analyse_face(source_image, face_analyser, return_single_face=True, detect_condition=detect_condition, scale=scale) - else: - analysed_source_specifics = [] - source_specifics, threshold = source_data - for source, specific in zip(*source_specifics): - if source is None or specific is None: - continue - analysed_source = analyse_face(source, face_analyser, return_single_face=True, detect_condition=detect_condition, scale=scale) - analysed_specific = analyse_face(specific, face_analyser, return_single_face=True, detect_condition=detect_condition, scale=scale) - analysed_source_specifics.append([analysed_source, analysed_specific]) - - analysed_target_list = [] - analysed_source_list = [] - whole_frame_eql_list = [] - num_faces_per_frame = [] - - total_frames = len(image_sequence) - curr_idx = 0 - for curr_idx, frame_path in tqdm(enumerate(image_sequence), total=total_frames, desc="Analysing face data"): - frame = cv2.imread(frame_path) - analysed_faces = analyse_face(frame, face_analyser, return_single_face=False, detect_condition=detect_condition, scale=scale) - - n_faces = 0 - for analysed_face in analysed_faces: - if swap_condition == "All Face": - analysed_target_list.append(analysed_face) - analysed_source_list.append(analysed_source) - whole_frame_eql_list.append(frame_path) - n_faces += 1 - elif swap_condition == "Age less than" and analysed_face["age"] < age: - analysed_target_list.append(analysed_face) - analysed_source_list.append(analysed_source) - whole_frame_eql_list.append(frame_path) - n_faces += 1 - elif swap_condition == "Age greater than" and 
analysed_face["age"] > age: - analysed_target_list.append(analysed_face) - analysed_source_list.append(analysed_source) - whole_frame_eql_list.append(frame_path) - n_faces += 1 - elif swap_condition == "All Male" and analysed_face["gender"] == 1: - analysed_target_list.append(analysed_face) - analysed_source_list.append(analysed_source) - whole_frame_eql_list.append(frame_path) - n_faces += 1 - elif swap_condition == "All Female" and analysed_face["gender"] == 0: - analysed_target_list.append(analysed_face) - analysed_source_list.append(analysed_source) - whole_frame_eql_list.append(frame_path) - n_faces += 1 - elif swap_condition == "Specific Face": - for analysed_source, analysed_specific in analysed_source_specifics: - distance = cosine_distance(analysed_specific["embedding"], analysed_face["embedding"]) - if distance < threshold: - analysed_target_list.append(analysed_face) - analysed_source_list.append(analysed_source) - whole_frame_eql_list.append(frame_path) - n_faces += 1 - - if swap_condition == "Left Most": - analysed_face = get_single_face(analysed_faces, method="left most") - analysed_target_list.append(analysed_face) - analysed_source_list.append(analysed_source) - whole_frame_eql_list.append(frame_path) - n_faces += 1 - - elif swap_condition == "Right Most": - analysed_face = get_single_face(analysed_faces, method="right most") - analysed_target_list.append(analysed_face) - analysed_source_list.append(analysed_source) - whole_frame_eql_list.append(frame_path) - n_faces += 1 - - elif swap_condition == "Top Most": - analysed_face = get_single_face(analysed_faces, method="top most") - analysed_target_list.append(analysed_face) - analysed_source_list.append(analysed_source) - whole_frame_eql_list.append(frame_path) - n_faces += 1 - - elif swap_condition == "Bottom Most": - analysed_face = get_single_face(analysed_faces, method="bottom most") - analysed_target_list.append(analysed_face) - analysed_source_list.append(analysed_source) - whole_frame_eql_list.append(frame_path) - n_faces += 1 - - elif swap_condition == "Middle": - analysed_face = get_single_face(analysed_faces, method="middle") - analysed_target_list.append(analysed_face) - analysed_source_list.append(analysed_source) - whole_frame_eql_list.append(frame_path) - n_faces += 1 - - elif swap_condition == "Biggest": - analysed_face = get_single_face(analysed_faces, method="biggest") - analysed_target_list.append(analysed_face) - analysed_source_list.append(analysed_source) - whole_frame_eql_list.append(frame_path) - n_faces += 1 - - elif swap_condition == "Smallest": - analysed_face = get_single_face(analysed_faces, method="smallest") - analysed_target_list.append(analysed_face) - analysed_source_list.append(analysed_source) - whole_frame_eql_list.append(frame_path) - n_faces += 1 - - num_faces_per_frame.append(n_faces) - - return analysed_target_list, analysed_source_list, whole_frame_eql_list, num_faces_per_frame diff --git a/spaces/Superlang/ImageProcessor/annotator/leres/pix2pix/options/base_options.py b/spaces/Superlang/ImageProcessor/annotator/leres/pix2pix/options/base_options.py deleted file mode 100644 index 533a1e88a7e8494223f6994e6861c93667754f83..0000000000000000000000000000000000000000 --- a/spaces/Superlang/ImageProcessor/annotator/leres/pix2pix/options/base_options.py +++ /dev/null @@ -1,156 +0,0 @@ -import argparse -import os -from ...pix2pix.util import util -# import torch -from ...pix2pix import models -# import pix2pix.data -import numpy as np - -class BaseOptions(): - """This class defines options used 
during both training and test time. - - It also implements several helper functions such as parsing, printing, and saving the options. - It also gathers additional options defined in functions in both dataset class and model class. - """ - - def __init__(self): - """Reset the class; indicates the class hasn't been initailized""" - self.initialized = False - - def initialize(self, parser): - """Define the common options that are used in both training and test.""" - # basic parameters - parser.add_argument('--dataroot', help='path to images (should have subfolders trainA, trainB, valA, valB, etc)') - parser.add_argument('--name', type=str, default='void', help='mahdi_unet_new, scaled_unet') - parser.add_argument('--gpu_ids', type=str, default='0', help='gpu ids: e.g. 0 0,1,2, 0,2. use -1 for CPU') - parser.add_argument('--checkpoints_dir', type=str, default='./pix2pix/checkpoints', help='models are saved here') - # model parameters - parser.add_argument('--model', type=str, default='cycle_gan', help='chooses which model to use. [cycle_gan | pix2pix | test | colorization]') - parser.add_argument('--input_nc', type=int, default=2, help='# of input image channels: 3 for RGB and 1 for grayscale') - parser.add_argument('--output_nc', type=int, default=1, help='# of output image channels: 3 for RGB and 1 for grayscale') - parser.add_argument('--ngf', type=int, default=64, help='# of gen filters in the last conv layer') - parser.add_argument('--ndf', type=int, default=64, help='# of discrim filters in the first conv layer') - parser.add_argument('--netD', type=str, default='basic', help='specify discriminator architecture [basic | n_layers | pixel]. The basic model is a 70x70 PatchGAN. n_layers allows you to specify the layers in the discriminator') - parser.add_argument('--netG', type=str, default='resnet_9blocks', help='specify generator architecture [resnet_9blocks | resnet_6blocks | unet_256 | unet_128]') - parser.add_argument('--n_layers_D', type=int, default=3, help='only used if netD==n_layers') - parser.add_argument('--norm', type=str, default='instance', help='instance normalization or batch normalization [instance | batch | none]') - parser.add_argument('--init_type', type=str, default='normal', help='network initialization [normal | xavier | kaiming | orthogonal]') - parser.add_argument('--init_gain', type=float, default=0.02, help='scaling factor for normal, xavier and orthogonal.') - parser.add_argument('--no_dropout', action='store_true', help='no dropout for the generator') - # dataset parameters - parser.add_argument('--dataset_mode', type=str, default='unaligned', help='chooses how datasets are loaded. [unaligned | aligned | single | colorization]') - parser.add_argument('--direction', type=str, default='AtoB', help='AtoB or BtoA') - parser.add_argument('--serial_batches', action='store_true', help='if true, takes images in order to make batches, otherwise takes them randomly') - parser.add_argument('--num_threads', default=4, type=int, help='# threads for loading data') - parser.add_argument('--batch_size', type=int, default=1, help='input batch size') - parser.add_argument('--load_size', type=int, default=672, help='scale images to this size') - parser.add_argument('--crop_size', type=int, default=672, help='then crop to this size') - parser.add_argument('--max_dataset_size', type=int, default=10000, help='Maximum number of samples allowed per dataset. 
If the dataset directory contains more than max_dataset_size, only a subset is loaded.') - parser.add_argument('--preprocess', type=str, default='resize_and_crop', help='scaling and cropping of images at load time [resize_and_crop | crop | scale_width | scale_width_and_crop | none]') - parser.add_argument('--no_flip', action='store_true', help='if specified, do not flip the images for data augmentation') - parser.add_argument('--display_winsize', type=int, default=256, help='display window size for both visdom and HTML') - # additional parameters - parser.add_argument('--epoch', type=str, default='latest', help='which epoch to load? set to latest to use latest cached model') - parser.add_argument('--load_iter', type=int, default='0', help='which iteration to load? if load_iter > 0, the code will load models by iter_[load_iter]; otherwise, the code will load models by [epoch]') - parser.add_argument('--verbose', action='store_true', help='if specified, print more debugging information') - parser.add_argument('--suffix', default='', type=str, help='customized suffix: opt.name = opt.name + suffix: e.g., {model}_{netG}_size{load_size}') - - parser.add_argument('--data_dir', type=str, required=False, - help='input files directory images can be .png .jpg .tiff') - parser.add_argument('--output_dir', type=str, required=False, - help='result dir. result depth will be png. vides are JMPG as avi') - parser.add_argument('--savecrops', type=int, required=False) - parser.add_argument('--savewholeest', type=int, required=False) - parser.add_argument('--output_resolution', type=int, required=False, - help='0 for no restriction 1 for resize to input size') - parser.add_argument('--net_receptive_field_size', type=int, required=False) - parser.add_argument('--pix2pixsize', type=int, required=False) - parser.add_argument('--generatevideo', type=int, required=False) - parser.add_argument('--depthNet', type=int, required=False, help='0: midas 1:strurturedRL') - parser.add_argument('--R0', action='store_true') - parser.add_argument('--R20', action='store_true') - parser.add_argument('--Final', action='store_true') - parser.add_argument('--colorize_results', action='store_true') - parser.add_argument('--max_res', type=float, default=np.inf) - - self.initialized = True - return parser - - def gather_options(self): - """Initialize our parser with basic options(only once). - Add additional model-specific and dataset-specific options. - These options are defined in the function - in model and dataset classes. - """ - if not self.initialized: # check if it has been initialized - parser = argparse.ArgumentParser(formatter_class=argparse.ArgumentDefaultsHelpFormatter) - parser = self.initialize(parser) - - # get the basic options - opt, _ = parser.parse_known_args() - - # modify model-related parser options - model_name = opt.model - model_option_setter = models.get_option_setter(model_name) - parser = model_option_setter(parser, self.isTrain) - opt, _ = parser.parse_known_args() # parse again with new defaults - - # modify dataset-related parser options - # dataset_name = opt.dataset_mode - # dataset_option_setter = pix2pix.data.get_option_setter(dataset_name) - # parser = dataset_option_setter(parser, self.isTrain) - - # save and return the parser - self.parser = parser - #return parser.parse_args() #EVIL - return opt - - def print_options(self, opt): - """Print and save options - - It will print both current options and default values(if different). 
- It will save options into a text file / [checkpoints_dir] / opt.txt - """ - message = '' - message += '----------------- Options ---------------\n' - for k, v in sorted(vars(opt).items()): - comment = '' - default = self.parser.get_default(k) - if v != default: - comment = '\t[default: %s]' % str(default) - message += '{:>25}: {:<30}{}\n'.format(str(k), str(v), comment) - message += '----------------- End -------------------' - print(message) - - # save to the disk - expr_dir = os.path.join(opt.checkpoints_dir, opt.name) - util.mkdirs(expr_dir) - file_name = os.path.join(expr_dir, '{}_opt.txt'.format(opt.phase)) - with open(file_name, 'wt') as opt_file: - opt_file.write(message) - opt_file.write('\n') - - def parse(self): - """Parse our options, create checkpoints directory suffix, and set up gpu device.""" - opt = self.gather_options() - opt.isTrain = self.isTrain # train or test - - # process opt.suffix - if opt.suffix: - suffix = ('_' + opt.suffix.format(**vars(opt))) if opt.suffix != '' else '' - opt.name = opt.name + suffix - - #self.print_options(opt) - - # set gpu ids - str_ids = opt.gpu_ids.split(',') - opt.gpu_ids = [] - for str_id in str_ids: - id = int(str_id) - if id >= 0: - opt.gpu_ids.append(id) - #if len(opt.gpu_ids) > 0: - # torch.cuda.set_device(opt.gpu_ids[0]) - - self.opt = opt - return self.opt diff --git a/spaces/Superlang/ImageProcessor/annotator/oneformer/pycocotools/__init__.py b/spaces/Superlang/ImageProcessor/annotator/oneformer/pycocotools/__init__.py deleted file mode 100644 index 3f7d85bba884ea8f83fc6ab2a1e6ade80d98d4d9..0000000000000000000000000000000000000000 --- a/spaces/Superlang/ImageProcessor/annotator/oneformer/pycocotools/__init__.py +++ /dev/null @@ -1 +0,0 @@ -__author__ = 'tylin' diff --git a/spaces/Superying/vits-uma-genshin-honkai/utils.py b/spaces/Superying/vits-uma-genshin-honkai/utils.py deleted file mode 100644 index ee4b01ddfbe8173965371b29f770f3e87615fe71..0000000000000000000000000000000000000000 --- a/spaces/Superying/vits-uma-genshin-honkai/utils.py +++ /dev/null @@ -1,225 +0,0 @@ -import os -import sys -import argparse -import logging -import json -import subprocess -import numpy as np -import librosa -import torch - -MATPLOTLIB_FLAG = False - -logging.basicConfig(stream=sys.stdout, level=logging.DEBUG) -logger = logging - - -def load_checkpoint(checkpoint_path, model, optimizer=None): - assert os.path.isfile(checkpoint_path) - checkpoint_dict = torch.load(checkpoint_path, map_location='cpu') - iteration = checkpoint_dict['iteration'] - learning_rate = checkpoint_dict['learning_rate'] - if optimizer is not None: - optimizer.load_state_dict(checkpoint_dict['optimizer']) - saved_state_dict = checkpoint_dict['model'] - if hasattr(model, 'module'): - state_dict = model.module.state_dict() - else: - state_dict = model.state_dict() - new_state_dict= {} - for k, v in state_dict.items(): - try: - new_state_dict[k] = saved_state_dict[k] - except: - logger.info("%s is not in the checkpoint" % k) - new_state_dict[k] = v - if hasattr(model, 'module'): - model.module.load_state_dict(new_state_dict) - else: - model.load_state_dict(new_state_dict) - logger.info("Loaded checkpoint '{}' (iteration {})" .format( - checkpoint_path, iteration)) - return model, optimizer, learning_rate, iteration - - -def plot_spectrogram_to_numpy(spectrogram): - global MATPLOTLIB_FLAG - if not MATPLOTLIB_FLAG: - import matplotlib - matplotlib.use("Agg") - MATPLOTLIB_FLAG = True - mpl_logger = logging.getLogger('matplotlib') - mpl_logger.setLevel(logging.WARNING) - import 
matplotlib.pylab as plt - import numpy as np - - fig, ax = plt.subplots(figsize=(10,2)) - im = ax.imshow(spectrogram, aspect="auto", origin="lower", - interpolation='none') - plt.colorbar(im, ax=ax) - plt.xlabel("Frames") - plt.ylabel("Channels") - plt.tight_layout() - - fig.canvas.draw() - data = np.fromstring(fig.canvas.tostring_rgb(), dtype=np.uint8, sep='') - data = data.reshape(fig.canvas.get_width_height()[::-1] + (3,)) - plt.close() - return data - - -def plot_alignment_to_numpy(alignment, info=None): - global MATPLOTLIB_FLAG - if not MATPLOTLIB_FLAG: - import matplotlib - matplotlib.use("Agg") - MATPLOTLIB_FLAG = True - mpl_logger = logging.getLogger('matplotlib') - mpl_logger.setLevel(logging.WARNING) - import matplotlib.pylab as plt - import numpy as np - - fig, ax = plt.subplots(figsize=(6, 4)) - im = ax.imshow(alignment.transpose(), aspect='auto', origin='lower', - interpolation='none') - fig.colorbar(im, ax=ax) - xlabel = 'Decoder timestep' - if info is not None: - xlabel += '\n\n' + info - plt.xlabel(xlabel) - plt.ylabel('Encoder timestep') - plt.tight_layout() - - fig.canvas.draw() - data = np.fromstring(fig.canvas.tostring_rgb(), dtype=np.uint8, sep='') - data = data.reshape(fig.canvas.get_width_height()[::-1] + (3,)) - plt.close() - return data - - -def load_audio_to_torch(full_path, target_sampling_rate): - audio, sampling_rate = librosa.load(full_path, sr=target_sampling_rate, mono=True) - return torch.FloatTensor(audio.astype(np.float32)) - - -def load_filepaths_and_text(filename, split="|"): - with open(filename, encoding='utf-8') as f: - filepaths_and_text = [line.strip().split(split) for line in f] - return filepaths_and_text - - -def get_hparams(init=True): - parser = argparse.ArgumentParser() - parser.add_argument('-c', '--config', type=str, default="./configs/base.json", - help='JSON file for configuration') - parser.add_argument('-m', '--model', type=str, required=True, - help='Model name') - - args = parser.parse_args() - model_dir = os.path.join("./logs", args.model) - - if not os.path.exists(model_dir): - os.makedirs(model_dir) - - config_path = args.config - config_save_path = os.path.join(model_dir, "config.json") - if init: - with open(config_path, "r") as f: - data = f.read() - with open(config_save_path, "w") as f: - f.write(data) - else: - with open(config_save_path, "r") as f: - data = f.read() - config = json.loads(data) - - hparams = HParams(**config) - hparams.model_dir = model_dir - return hparams - - -def get_hparams_from_dir(model_dir): - config_save_path = os.path.join(model_dir, "config.json") - with open(config_save_path, "r") as f: - data = f.read() - config = json.loads(data) - - hparams =HParams(**config) - hparams.model_dir = model_dir - return hparams - - -def get_hparams_from_file(config_path): - with open(config_path, "r") as f: - data = f.read() - config = json.loads(data) - - hparams =HParams(**config) - return hparams - - -def check_git_hash(model_dir): - source_dir = os.path.dirname(os.path.realpath(__file__)) - if not os.path.exists(os.path.join(source_dir, ".git")): - logger.warn("{} is not a git repository, therefore hash value comparison will be ignored.".format( - source_dir - )) - return - - cur_hash = subprocess.getoutput("git rev-parse HEAD") - - path = os.path.join(model_dir, "githash") - if os.path.exists(path): - saved_hash = open(path).read() - if saved_hash != cur_hash: - logger.warn("git hash values are different. 
{}(saved) != {}(current)".format( - saved_hash[:8], cur_hash[:8])) - else: - open(path, "w").write(cur_hash) - - -def get_logger(model_dir, filename="train.log"): - global logger - logger = logging.getLogger(os.path.basename(model_dir)) - logger.setLevel(logging.DEBUG) - - formatter = logging.Formatter("%(asctime)s\t%(name)s\t%(levelname)s\t%(message)s") - if not os.path.exists(model_dir): - os.makedirs(model_dir) - h = logging.FileHandler(os.path.join(model_dir, filename)) - h.setLevel(logging.DEBUG) - h.setFormatter(formatter) - logger.addHandler(h) - return logger - - -class HParams(): - def __init__(self, **kwargs): - for k, v in kwargs.items(): - if type(v) == dict: - v = HParams(**v) - self[k] = v - - def keys(self): - return self.__dict__.keys() - - def items(self): - return self.__dict__.items() - - def values(self): - return self.__dict__.values() - - def __len__(self): - return len(self.__dict__) - - def __getitem__(self, key): - return getattr(self, key) - - def __setitem__(self, key, value): - return setattr(self, key, value) - - def __contains__(self, key): - return key in self.__dict__ - - def __repr__(self): - return self.__dict__.__repr__() diff --git a/spaces/THEMUNCHERCRUNCHER/teachif/README.md b/spaces/THEMUNCHERCRUNCHER/teachif/README.md deleted file mode 100644 index 5277250349af92bc2514e477ef3bcdb660371972..0000000000000000000000000000000000000000 --- a/spaces/THEMUNCHERCRUNCHER/teachif/README.md +++ /dev/null @@ -1,11 +0,0 @@ ---- -title: Teachif -emoji: 🏢 -colorFrom: indigo -colorTo: green -sdk: docker -pinned: false -license: cc-by-nd-4.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_internal/commands/search.py b/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_internal/commands/search.py deleted file mode 100644 index 03ed925b246dd551ec2ef45095ed6cad00fd2745..0000000000000000000000000000000000000000 --- a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_internal/commands/search.py +++ /dev/null @@ -1,174 +0,0 @@ -import logging -import shutil -import sys -import textwrap -import xmlrpc.client -from collections import OrderedDict -from optparse import Values -from typing import TYPE_CHECKING, Dict, List, Optional - -from pip._vendor.packaging.version import parse as parse_version - -from pip._internal.cli.base_command import Command -from pip._internal.cli.req_command import SessionCommandMixin -from pip._internal.cli.status_codes import NO_MATCHES_FOUND, SUCCESS -from pip._internal.exceptions import CommandError -from pip._internal.metadata import get_default_environment -from pip._internal.models.index import PyPI -from pip._internal.network.xmlrpc import PipXmlrpcTransport -from pip._internal.utils.logging import indent_log -from pip._internal.utils.misc import write_output - -if TYPE_CHECKING: - from typing import TypedDict - - class TransformedHit(TypedDict): - name: str - summary: str - versions: List[str] - - -logger = logging.getLogger(__name__) - - -class SearchCommand(Command, SessionCommandMixin): - """Search for PyPI packages whose name or summary contains .""" - - usage = """ - %prog [options] """ - ignore_require_venv = True - - def add_options(self) -> None: - self.cmd_opts.add_option( - "-i", - "--index", - dest="index", - metavar="URL", - default=PyPI.pypi_url, - help="Base URL of Python Package Index (default %default)", - ) - - 
self.parser.insert_option_group(0, self.cmd_opts) - - def run(self, options: Values, args: List[str]) -> int: - if not args: - raise CommandError("Missing required argument (search query).") - query = args - pypi_hits = self.search(query, options) - hits = transform_hits(pypi_hits) - - terminal_width = None - if sys.stdout.isatty(): - terminal_width = shutil.get_terminal_size()[0] - - print_results(hits, terminal_width=terminal_width) - if pypi_hits: - return SUCCESS - return NO_MATCHES_FOUND - - def search(self, query: List[str], options: Values) -> List[Dict[str, str]]: - index_url = options.index - - session = self.get_default_session(options) - - transport = PipXmlrpcTransport(index_url, session) - pypi = xmlrpc.client.ServerProxy(index_url, transport) - try: - hits = pypi.search({"name": query, "summary": query}, "or") - except xmlrpc.client.Fault as fault: - message = "XMLRPC request failed [code: {code}]\n{string}".format( - code=fault.faultCode, - string=fault.faultString, - ) - raise CommandError(message) - assert isinstance(hits, list) - return hits - - -def transform_hits(hits: List[Dict[str, str]]) -> List["TransformedHit"]: - """ - The list from pypi is really a list of versions. We want a list of - packages with the list of versions stored inline. This converts the - list from pypi into one we can use. - """ - packages: Dict[str, "TransformedHit"] = OrderedDict() - for hit in hits: - name = hit["name"] - summary = hit["summary"] - version = hit["version"] - - if name not in packages.keys(): - packages[name] = { - "name": name, - "summary": summary, - "versions": [version], - } - else: - packages[name]["versions"].append(version) - - # if this is the highest version, replace summary and score - if version == highest_version(packages[name]["versions"]): - packages[name]["summary"] = summary - - return list(packages.values()) - - -def print_dist_installation_info(name: str, latest: str) -> None: - env = get_default_environment() - dist = env.get_distribution(name) - if dist is not None: - with indent_log(): - if dist.version == latest: - write_output("INSTALLED: %s (latest)", dist.version) - else: - write_output("INSTALLED: %s", dist.version) - if parse_version(latest).pre: - write_output( - "LATEST: %s (pre-release; install" - " with `pip install --pre`)", - latest, - ) - else: - write_output("LATEST: %s", latest) - - -def print_results( - hits: List["TransformedHit"], - name_column_width: Optional[int] = None, - terminal_width: Optional[int] = None, -) -> None: - if not hits: - return - if name_column_width is None: - name_column_width = ( - max( - [ - len(hit["name"]) + len(highest_version(hit.get("versions", ["-"]))) - for hit in hits - ] - ) - + 4 - ) - - for hit in hits: - name = hit["name"] - summary = hit["summary"] or "" - latest = highest_version(hit.get("versions", ["-"])) - if terminal_width is not None: - target_width = terminal_width - name_column_width - 5 - if target_width > 10: - # wrap and indent summary to fit terminal - summary_lines = textwrap.wrap(summary, target_width) - summary = ("\n" + " " * (name_column_width + 3)).join(summary_lines) - - name_latest = f"{name} ({latest})" - line = f"{name_latest:{name_column_width}} - {summary}" - try: - write_output(line) - print_dist_installation_info(name, latest) - except UnicodeEncodeError: - pass - - -def highest_version(versions: List[str]) -> str: - return max(versions, key=parse_version) diff --git a/spaces/TencentARC/VLog/models/grit_src/grit/data/custom_dataset_mapper.py 
b/spaces/TencentARC/VLog/models/grit_src/grit/data/custom_dataset_mapper.py deleted file mode 100644 index 1e21edb3d151dafdca5c4debfb7341a9ed0efdd9..0000000000000000000000000000000000000000 --- a/spaces/TencentARC/VLog/models/grit_src/grit/data/custom_dataset_mapper.py +++ /dev/null @@ -1,149 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved -# Modified by Jialian Wu from https://github.com/facebookresearch/Detic/blob/main/detic/data/custom_dataset_mapper.py -import copy -import numpy as np -import torch - -from detectron2.config import configurable - -from detectron2.data import detection_utils as utils -from detectron2.data import transforms as T -from detectron2.data.dataset_mapper import DatasetMapper -from .custom_build_augmentation import build_custom_augmentation -from itertools import compress -import logging - -__all__ = ["CustomDatasetMapper", "ObjDescription"] -logger = logging.getLogger(__name__) - - -class CustomDatasetMapper(DatasetMapper): - @configurable - def __init__(self, is_train: bool, - dataset_augs=[], - **kwargs): - if is_train: - self.dataset_augs = [T.AugmentationList(x) for x in dataset_augs] - super().__init__(is_train, **kwargs) - - @classmethod - def from_config(cls, cfg, is_train: bool = True): - ret = super().from_config(cfg, is_train) - if is_train: - if cfg.INPUT.CUSTOM_AUG == 'EfficientDetResizeCrop': - dataset_scales = cfg.DATALOADER.DATASET_INPUT_SCALE - dataset_sizes = cfg.DATALOADER.DATASET_INPUT_SIZE - ret['dataset_augs'] = [ - build_custom_augmentation(cfg, True, scale, size) \ - for scale, size in zip(dataset_scales, dataset_sizes)] - else: - assert cfg.INPUT.CUSTOM_AUG == 'ResizeShortestEdge' - min_sizes = cfg.DATALOADER.DATASET_MIN_SIZES - max_sizes = cfg.DATALOADER.DATASET_MAX_SIZES - ret['dataset_augs'] = [ - build_custom_augmentation( - cfg, True, min_size=mi, max_size=ma) \ - for mi, ma in zip(min_sizes, max_sizes)] - else: - ret['dataset_augs'] = [] - - return ret - - def __call__(self, dataset_dict): - dataset_dict_out = self.prepare_data(dataset_dict) - - # When augmented image is too small, do re-augmentation - retry = 0 - while (dataset_dict_out["image"].shape[1] < 32 or dataset_dict_out["image"].shape[2] < 32): - retry += 1 - if retry == 100: - logger.info('Retry 100 times for augmentation. Make sure the image size is not too small.') - logger.info('Find image information below') - logger.info(dataset_dict) - dataset_dict_out = self.prepare_data(dataset_dict) - - return dataset_dict_out - - def prepare_data(self, dataset_dict_in): - dataset_dict = copy.deepcopy(dataset_dict_in) - if 'file_name' in dataset_dict: - ori_image = utils.read_image( - dataset_dict["file_name"], format=self.image_format) - else: - ori_image, _, _ = self.tar_dataset[dataset_dict["tar_index"]] - ori_image = utils._apply_exif_orientation(ori_image) - ori_image = utils.convert_PIL_to_numpy(ori_image, self.image_format) - utils.check_image_size(dataset_dict, ori_image) - - aug_input = T.AugInput(copy.deepcopy(ori_image), sem_seg=None) - if self.is_train: - transforms = \ - self.dataset_augs[dataset_dict['dataset_source']](aug_input) - else: - transforms = self.augmentations(aug_input) - image, sem_seg_gt = aug_input.image, aug_input.sem_seg - - image_shape = image.shape[:2] - dataset_dict["image"] = torch.as_tensor( - np.ascontiguousarray(image.transpose(2, 0, 1))) - - if not self.is_train: - # USER: Modify this if you want to keep them for some reason. 
- dataset_dict.pop("annotations", None) - return dataset_dict - - if "annotations" in dataset_dict: - if len(dataset_dict["annotations"]) > 0: - object_descriptions = [an['object_description'] for an in dataset_dict["annotations"]] - else: - object_descriptions = [] - # USER: Modify this if you want to keep them for some reason. - for anno in dataset_dict["annotations"]: - if not self.use_instance_mask: - anno.pop("segmentation", None) - if not self.use_keypoint: - anno.pop("keypoints", None) - - all_annos = [ - (utils.transform_instance_annotations( - obj, transforms, image_shape, - keypoint_hflip_indices=self.keypoint_hflip_indices, - ), obj.get("iscrowd", 0)) - for obj in dataset_dict.pop("annotations") - ] - annos = [ann[0] for ann in all_annos if ann[1] == 0] - instances = utils.annotations_to_instances( - annos, image_shape, mask_format=self.instance_mask_format - ) - - instances.gt_object_descriptions = ObjDescription(object_descriptions) - - del all_annos - if self.recompute_boxes: - instances.gt_boxes = instances.gt_masks.get_bounding_boxes() - dataset_dict["instances"] = utils.filter_empty_instances(instances) - - return dataset_dict - - -class ObjDescription: - def __init__(self, object_descriptions): - self.data = object_descriptions - - def __getitem__(self, item): - assert type(item) == torch.Tensor - assert item.dim() == 1 - if len(item) > 0: - assert item.dtype == torch.int64 or item.dtype == torch.bool - if item.dtype == torch.int64: - return ObjDescription([self.data[x.item()] for x in item]) - elif item.dtype == torch.bool: - return ObjDescription(list(compress(self.data, item))) - - return ObjDescription(list(compress(self.data, item))) - - def __len__(self): - return len(self.data) - - def __repr__(self): - return "ObjDescription({})".format(self.data) \ No newline at end of file diff --git a/spaces/TeraTTS/TTS/infer_onnx.py b/spaces/TeraTTS/TTS/infer_onnx.py deleted file mode 100644 index 9176341766d39ceeb9cd2319848902af019996f2..0000000000000000000000000000000000000000 --- a/spaces/TeraTTS/TTS/infer_onnx.py +++ /dev/null @@ -1,90 +0,0 @@ -import scipy.io.wavfile -import os -import onnxruntime -import numpy as np -from huggingface_hub import snapshot_download -from num2words import num2words -import re -from transliterate import translit -import json - -class TTS: - def __init__(self, model_name: str, save_path: str = "./model", add_time_to_end: float = 0.8) -> None: - if not os.path.exists(save_path): - os.mkdir(save_path) - - model_dir = os.path.join(save_path, model_name) - - if not os.path.exists(model_dir): - snapshot_download(repo_id=model_name, - allow_patterns=["*.txt", "*.onnx", "*.json"], - local_dir=model_dir, - local_dir_use_symlinks=False - ) - - self.model = onnxruntime.InferenceSession(os.path.join(model_dir, "exported/model.onnx"), providers=['CPUExecutionProvider']) - with open(os.path.join(model_dir, "exported/config.json")) as config_file: - self.config = json.load(config_file)["model_config"] - - if os.path.exists(os.path.join(model_dir, "exported/dictionary.txt")): - from tokenizer import TokenizerG2P - print("Use g2p") - self.tokenizer = TokenizerG2P(os.path.join(model_dir, "exported")) - - else: - from tokenizer import TokenizerGRUUT - print("Use gruut") - self.tokenizer = TokenizerGRUUT(os.path.join(model_dir, "exported")) - - self.add_time_to_end = add_time_to_end - - - def _add_silent(self, audio, silence_duration: float = 1.0, sample_rate: int = 22050): - num_samples_silence = int(sample_rate * silence_duration) - silence_array = 
np.zeros(num_samples_silence, dtype=np.float32) - audio_with_silence = np.concatenate((audio, silence_array), axis=0) - return audio_with_silence - - - def save_wav(self, audio, path:str, sample_rate: int = 22050): - '''save audio to wav''' - scipy.io.wavfile.write(path, sample_rate, audio) - - - def _intersperse(self, lst, item): - result = [item] * (len(lst) * 2 + 1) - result[1::2] = lst - return result - - def _get_seq(self, text): - phoneme_ids = self.tokenizer._get_seq(text) - phoneme_ids_inter = self._intersperse(phoneme_ids, 0) - return phoneme_ids_inter - - def _num2wordsshor(self, match): - match = match.group() - ret = num2words(match, lang ='ru') - return ret - - def __call__(self, text: str, length_scale=1.2): - text = translit(text, 'ru') - text = re.sub(r'\d+',self._num2wordsshor,text) - phoneme_ids = self._get_seq(text) - text = np.expand_dims(np.array(phoneme_ids, dtype=np.int64), 0) - text_lengths = np.array([text.shape[1]], dtype=np.int64) - scales = np.array( - [0.667, length_scale, 0.8], - dtype=np.float32, - ) - audio = self.model.run( - None, - { - "input": text, - "input_lengths": text_lengths, - "scales": scales, - "sid": None, - }, - )[0][0,0][0] - - audio = self._add_silent(audio, silence_duration = self.add_time_to_end, sample_rate=self.config["samplerate"]) - return audio \ No newline at end of file diff --git a/spaces/Tetel/chat/EdgeGPT/conversation_style.py b/spaces/Tetel/chat/EdgeGPT/conversation_style.py deleted file mode 100644 index 284ae24b387333b63cd866ab5fa691e7592b337d..0000000000000000000000000000000000000000 --- a/spaces/Tetel/chat/EdgeGPT/conversation_style.py +++ /dev/null @@ -1,63 +0,0 @@ -from enum import Enum - -try: - from typing import Union, Literal -except ImportError: - from typing_extensions import Literal -from typing import Optional - - -class ConversationStyle(Enum): - creative = [ - "nlu_direct_response_filter", - "deepleo", - "disable_emoji_spoken_text", - "responsible_ai_policy_235", - "enablemm", - "iycapbing", - "iyxapbing", - "rai271", - "prtime2t", - "smartname", - "enbsnptrc", - "dv3sugg", - "iyoloxap", - "iyoloneutral", - "h3imaginative", - "saharagenconv5", - "dsblhlthcrd", - "clgalileo", - "gencontentv3", - ] - balanced = [ - "nlu_direct_response_filter", - "deepleo", - "disable_emoji_spoken_text", - "responsible_ai_policy_235", - "enablemm", - "galileo", - "saharagenconv5", - "objopinion", - "dsblhlthcrd", - "dv3sugg", - "autosave", - ] - precise = [ - "nlu_direct_response_filter", - "deepleo", - "disable_emoji_spoken_text", - "responsible_ai_policy_235", - "enablemm", - "h3precise", - "objopinion", - "dsblhlthcrd", - "dv3sugg", - "autosave", - "clgalileo", - "gencontentv3", - ] - - -CONVERSATION_STYLE_TYPE = Optional[ - Union[ConversationStyle, Literal["creative", "balanced", "precise"]] -] diff --git a/spaces/ThirdEyeData/Maximum_Repair_Prediction/app.py b/spaces/ThirdEyeData/Maximum_Repair_Prediction/app.py deleted file mode 100644 index 3f6a2eb38aad293fcec0469b274a1cf3f15503bf..0000000000000000000000000000000000000000 --- a/spaces/ThirdEyeData/Maximum_Repair_Prediction/app.py +++ /dev/null @@ -1,106 +0,0 @@ - -# import required libraries - -import pandas as pd -import numpy as np -import matplotlib.pyplot as plt -import seaborn as sns - -from datetime import datetime -from datetime import timedelta -from sklearn.model_selection import RandomizedSearchCV, GridSearchCV, train_test_split -from sklearn.ensemble import RandomForestRegressor -from sklearn.metrics import r2_score -from sklearn.preprocessing import LabelEncoder 
-from sklearn.preprocessing import StandardScaler -import streamlit as st -import warnings -warnings.filterwarnings('ignore') - - - -st.title("Prediction of Maximum Number of Repairs") -st.sidebar.header('Enter the Components Details here') -st.write("""This model helps to know the probable maximum number of times a component can be repaired. -After which, we can straight away replace it with a new component""") -import pandas as pd -import numpy as np -import pickle - -# load the saved model using pickle -with open('max_repair_model.pkl', 'rb') as file: - model = pickle.load(file) - -# Load the saved manufacturer label encoder object using pickle -with open('manufacturer_le.pkl', 'rb') as file1: - le = pickle.load(file1) - -# DATA from user -def user_report(): - manufacturer = st.sidebar.selectbox("Manufacturer", - ("JKL Company", "GHI Company","DEF Company","ABC Company","XYZ Company" )) - if manufacturer=='JKL Company': - manufacturer=3 - elif manufacturer=="GHI Company": - manufacturer=2 - elif manufacturer=="DEF Company": - manufacturer=1 - elif manufacturer=="ABC Company": - manufacturer =0 - else: - manufacturer=4 - component_age = st.sidebar.slider('Component Age (in hours)', 100,250, 300 ) - total_operating_hours = st.sidebar.slider('Total Operating Hours)', 400,1500, 500 ) - operating_temperature = st.sidebar.slider('Operating Temperature', 70,80, 75 ) - humidity = st.sidebar.slider('Humidity', 50,70, 55 ) - Vibration_Level = st.sidebar.slider('Vibration Level', 2,4, 2 ) - Pressure = st.sidebar.slider('Pressure', 28,32, 30 ) - Power_Input_Voltage= st.sidebar.slider('Power Input Voltage (V)',105,120,115) - previous_number_of_repairs = st.sidebar.number_input('Enter the Previous Number of Repairs Undergone 0 to 5 )',min_value=0,max_value=5,step=1) - load_factor = st.sidebar.slider('Load Factor',3,10,4) - engine_speed=st.sidebar.slider('Engine Speed',7000,8000,7800) - Oil_Temperature=st.sidebar.slider('Oil Temperature',170,185,172) - - - user_report_data = { - 'Manufacturer': manufacturer, - 'Component_Age': component_age, - 'Total_Operating_Hours': total_operating_hours, - 'Operating_Temperature': operating_temperature, - 'Humidity': humidity, - 'Vibration_Level': Vibration_Level, - 'Pressure': Pressure, - 'Power_Input_Voltage': Power_Input_Voltage, - 'Previous_number_of_repairs': previous_number_of_repairs, - 'Load_Factor': load_factor, - 'Engine_Speed': engine_speed, - 'Oil_Temperature':Oil_Temperature - } - report_data = pd.DataFrame(user_report_data, index=[0]) - - return report_data - -#Customer Data -user_data = user_report() -st.subheader("Component Details") -st.write(user_data) - - -# define the prediction function -def predict_max_number_of_repairs(user_data): - - # encode the manufacturer using the loaded LabelEncoder object - #manufacturer_encoded = le.transform([manufacturer])[0] - - - - # make the prediction using the loaded model and input data - predicted_max_number_of_repairs = model.predict(user_data) - - # return the predicted max number of repairs as output - return np.round(predicted_max_number_of_repairs[0]) -# Function calling -y_pred = int(predict_max_number_of_repairs(user_data)) -st.write("Click here to see the Predictions") -if st.button("Predict"): - st.subheader(f"Maximun Number of Repairs is {y_pred} ") \ No newline at end of file diff --git a/spaces/TohsakaSu/AQI-predictor/README.md b/spaces/TohsakaSu/AQI-predictor/README.md deleted file mode 100644 index bf6884c6500208792752c2c1da9d7044bc2569ab..0000000000000000000000000000000000000000 --- 
a/spaces/TohsakaSu/AQI-predictor/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: AQI Predictor -emoji: 🐨 -colorFrom: green -colorTo: yellow -sdk: gradio -sdk_version: 3.35.2 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/UserXTheUnknown/stablediffusion-infinity/README.md b/spaces/UserXTheUnknown/stablediffusion-infinity/README.md deleted file mode 100644 index a36895a07dc78ac3d7350d5216d1d267bf09b557..0000000000000000000000000000000000000000 --- a/spaces/UserXTheUnknown/stablediffusion-infinity/README.md +++ /dev/null @@ -1,14 +0,0 @@ ---- -title: Stablediffusion Infinity -emoji: ♾️ -colorFrom: blue -colorTo: purple -sdk: gradio -sdk_version: 3.10.1 -app_file: app.py -pinned: true -license: apache-2.0 -duplicated_from: lnyan/stablediffusion-infinity ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Vision-CAIR/minigpt4/minigpt4/conversation/__init__.py b/spaces/Vision-CAIR/minigpt4/minigpt4/conversation/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/WindVChen/INR-Harmon/model/backbone.py b/spaces/WindVChen/INR-Harmon/model/backbone.py deleted file mode 100644 index 6ef7b61ca1bf5a22e9ac62cf9519dd1b68832cbe..0000000000000000000000000000000000000000 --- a/spaces/WindVChen/INR-Harmon/model/backbone.py +++ /dev/null @@ -1,79 +0,0 @@ -import torch.nn as nn - -from .hrnetv2.hrnet_ocr import HighResolutionNet -from .hrnetv2.modifiers import LRMult -from .base.basic_blocks import MaxPoolDownSize -from .base.ih_model import IHModelWithBackbone, DeepImageHarmonization - - -def build_backbone(name, opt): - return eval(name)(opt) - - -class baseline(IHModelWithBackbone): - def __init__(self, opt, ocr=64): - base_config = {'model': DeepImageHarmonization, - 'params': {'depth': 7, 'batchnorm_from': 2, 'image_fusion': True, 'opt': opt}} - - params = base_config['params'] - - backbone = HRNetV2(opt, ocr=ocr) - - params.update(dict( - backbone_from=2, - backbone_channels=backbone.output_channels, - backbone_mode='cat', - opt=opt - )) - base_model = base_config['model'](**params) - - super(baseline, self).__init__(base_model, backbone, False, 'sum', opt=opt) - - -class HRNetV2(nn.Module): - def __init__( - self, opt, - cat_outputs=True, - pyramid_channels=-1, pyramid_depth=4, - width=18, ocr=128, small=False, - lr_mult=0.1, pretained=True - ): - super(HRNetV2, self).__init__() - self.opt = opt - self.cat_outputs = cat_outputs - self.ocr_on = ocr > 0 and cat_outputs - self.pyramid_on = pyramid_channels > 0 and cat_outputs - - self.hrnet = HighResolutionNet(width, 2, ocr_width=ocr, small=small, opt=opt) - self.hrnet.apply(LRMult(lr_mult)) - if self.ocr_on: - self.hrnet.ocr_distri_head.apply(LRMult(1.0)) - self.hrnet.ocr_gather_head.apply(LRMult(1.0)) - self.hrnet.conv3x3_ocr.apply(LRMult(1.0)) - - hrnet_cat_channels = [width * 2 ** i for i in range(4)] - if self.pyramid_on: - self.output_channels = [pyramid_channels] * 4 - elif self.ocr_on: - self.output_channels = [ocr * 2] - elif self.cat_outputs: - self.output_channels = [sum(hrnet_cat_channels)] - else: - self.output_channels = hrnet_cat_channels - - if self.pyramid_on: - downsize_in_channels = ocr * 2 if self.ocr_on else sum(hrnet_cat_channels) - self.downsize = MaxPoolDownSize(downsize_in_channels, pyramid_channels, pyramid_channels, pyramid_depth) - - if pretained: - 
self.load_pretrained_weights( - "./pretrained_models/hrnetv2_w18_imagenet_pretrained.pth") - - self.output_resolution = (opt.input_size // 8) ** 2 - - def forward(self, image, mask, mask_features=None): - outputs = list(self.hrnet(image, mask, mask_features)) - return outputs - - def load_pretrained_weights(self, pretrained_path): - self.hrnet.load_pretrained_weights(pretrained_path) diff --git a/spaces/WindVChen/INR-Harmon/model/hrnetv2/resnetv1b.py b/spaces/WindVChen/INR-Harmon/model/hrnetv2/resnetv1b.py deleted file mode 100644 index 4ad24cef5bde19f2627cfd3f755636f37cfb39ac..0000000000000000000000000000000000000000 --- a/spaces/WindVChen/INR-Harmon/model/hrnetv2/resnetv1b.py +++ /dev/null @@ -1,276 +0,0 @@ -import torch -import torch.nn as nn -GLUON_RESNET_TORCH_HUB = 'rwightman/pytorch-pretrained-gluonresnet' - - -class BasicBlockV1b(nn.Module): - expansion = 1 - - def __init__(self, inplanes, planes, stride=1, dilation=1, downsample=None, - previous_dilation=1, norm_layer=nn.BatchNorm2d): - super(BasicBlockV1b, self).__init__() - self.conv1 = nn.Conv2d(inplanes, planes, kernel_size=3, stride=stride, - padding=dilation, dilation=dilation, bias=False) - self.bn1 = norm_layer(planes) - self.conv2 = nn.Conv2d(planes, planes, kernel_size=3, stride=1, - padding=previous_dilation, dilation=previous_dilation, bias=False) - self.bn2 = norm_layer(planes) - - self.relu = nn.ReLU(inplace=True) - self.downsample = downsample - self.stride = stride - - def forward(self, x): - residual = x - - out = self.conv1(x) - out = self.bn1(out) - out = self.relu(out) - - out = self.conv2(out) - out = self.bn2(out) - - if self.downsample is not None: - residual = self.downsample(x) - - out = out + residual - out = self.relu(out) - - return out - - -class BottleneckV1b(nn.Module): - expansion = 4 - - def __init__(self, inplanes, planes, stride=1, dilation=1, downsample=None, - previous_dilation=1, norm_layer=nn.BatchNorm2d): - super(BottleneckV1b, self).__init__() - self.conv1 = nn.Conv2d(inplanes, planes, kernel_size=1, bias=False) - self.bn1 = norm_layer(planes) - - self.conv2 = nn.Conv2d(planes, planes, kernel_size=3, stride=stride, - padding=dilation, dilation=dilation, bias=False) - self.bn2 = norm_layer(planes) - - self.conv3 = nn.Conv2d(planes, planes * self.expansion, kernel_size=1, bias=False) - self.bn3 = norm_layer(planes * self.expansion) - - self.relu = nn.ReLU(inplace=True) - self.downsample = downsample - self.stride = stride - - def forward(self, x): - residual = x - - out = self.conv1(x) - out = self.bn1(out) - out = self.relu(out) - - out = self.conv2(out) - out = self.bn2(out) - out = self.relu(out) - - out = self.conv3(out) - out = self.bn3(out) - - if self.downsample is not None: - residual = self.downsample(x) - - out = out + residual - out = self.relu(out) - - return out - - -class ResNetV1b(nn.Module): - """ Pre-trained ResNetV1b Model, which produces the strides of 8 featuremaps at conv5. - - Parameters - ---------- - block : Block - Class for the residual block. Options are BasicBlockV1, BottleneckV1. - layers : list of int - Numbers of layers in each block - classes : int, default 1000 - Number of classification classes. - dilated : bool, default False - Applying dilation strategy to pretrained ResNet yielding a stride-8 model, - typically used in Semantic Segmentation. - norm_layer : object - Normalization layer used (default: :class:`nn.BatchNorm2d`) - deep_stem : bool, default False - Whether to replace the 7x7 conv1 with 3 3x3 convolution layers. 
- avg_down : bool, default False - Whether to use average pooling for projection skip connection between stages/downsample. - final_drop : float, default 0.0 - Dropout ratio before the final classification layer. - - Reference: - - He, Kaiming, et al. "Deep residual learning for image recognition." - Proceedings of the IEEE conference on computer vision and pattern recognition. 2016. - - - Yu, Fisher, and Vladlen Koltun. "Multi-scale context aggregation by dilated convolutions." - """ - def __init__(self, block, layers, classes=1000, dilated=True, deep_stem=False, stem_width=32, - avg_down=False, final_drop=0.0, norm_layer=nn.BatchNorm2d): - self.inplanes = stem_width*2 if deep_stem else 64 - super(ResNetV1b, self).__init__() - if not deep_stem: - self.conv1 = nn.Conv2d(3, 64, kernel_size=7, stride=2, padding=3, bias=False) - else: - self.conv1 = nn.Sequential( - nn.Conv2d(3, stem_width, kernel_size=3, stride=2, padding=1, bias=False), - norm_layer(stem_width), - nn.ReLU(True), - nn.Conv2d(stem_width, stem_width, kernel_size=3, stride=1, padding=1, bias=False), - norm_layer(stem_width), - nn.ReLU(True), - nn.Conv2d(stem_width, 2*stem_width, kernel_size=3, stride=1, padding=1, bias=False) - ) - self.bn1 = norm_layer(self.inplanes) - self.relu = nn.ReLU(True) - self.maxpool = nn.MaxPool2d(3, stride=2, padding=1) - self.layer1 = self._make_layer(block, 64, layers[0], avg_down=avg_down, - norm_layer=norm_layer) - self.layer2 = self._make_layer(block, 128, layers[1], stride=2, avg_down=avg_down, - norm_layer=norm_layer) - if dilated: - self.layer3 = self._make_layer(block, 256, layers[2], stride=1, dilation=2, - avg_down=avg_down, norm_layer=norm_layer) - self.layer4 = self._make_layer(block, 512, layers[3], stride=1, dilation=4, - avg_down=avg_down, norm_layer=norm_layer) - else: - self.layer3 = self._make_layer(block, 256, layers[2], stride=2, - avg_down=avg_down, norm_layer=norm_layer) - self.layer4 = self._make_layer(block, 512, layers[3], stride=2, - avg_down=avg_down, norm_layer=norm_layer) - self.avgpool = nn.AdaptiveAvgPool2d((1, 1)) - self.drop = None - if final_drop > 0.0: - self.drop = nn.Dropout(final_drop) - self.fc = nn.Linear(512 * block.expansion, classes) - - def _make_layer(self, block, planes, blocks, stride=1, dilation=1, - avg_down=False, norm_layer=nn.BatchNorm2d): - downsample = None - if stride != 1 or self.inplanes != planes * block.expansion: - downsample = [] - if avg_down: - if dilation == 1: - downsample.append( - nn.AvgPool2d(kernel_size=stride, stride=stride, ceil_mode=True, count_include_pad=False) - ) - else: - downsample.append( - nn.AvgPool2d(kernel_size=1, stride=1, ceil_mode=True, count_include_pad=False) - ) - downsample.extend([ - nn.Conv2d(self.inplanes, out_channels=planes * block.expansion, - kernel_size=1, stride=1, bias=False), - norm_layer(planes * block.expansion) - ]) - downsample = nn.Sequential(*downsample) - else: - downsample = nn.Sequential( - nn.Conv2d(self.inplanes, out_channels=planes * block.expansion, - kernel_size=1, stride=stride, bias=False), - norm_layer(planes * block.expansion) - ) - - layers = [] - if dilation in (1, 2): - layers.append(block(self.inplanes, planes, stride, dilation=1, downsample=downsample, - previous_dilation=dilation, norm_layer=norm_layer)) - elif dilation == 4: - layers.append(block(self.inplanes, planes, stride, dilation=2, downsample=downsample, - previous_dilation=dilation, norm_layer=norm_layer)) - else: - raise RuntimeError("=> unknown dilation size: {}".format(dilation)) - - self.inplanes = planes * 
block.expansion - for _ in range(1, blocks): - layers.append(block(self.inplanes, planes, dilation=dilation, - previous_dilation=dilation, norm_layer=norm_layer)) - - return nn.Sequential(*layers) - - def forward(self, x): - x = self.conv1(x) - x = self.bn1(x) - x = self.relu(x) - x = self.maxpool(x) - - x = self.layer1(x) - x = self.layer2(x) - x = self.layer3(x) - x = self.layer4(x) - - x = self.avgpool(x) - x = x.view(x.size(0), -1) - if self.drop is not None: - x = self.drop(x) - x = self.fc(x) - - return x - - -def _safe_state_dict_filtering(orig_dict, model_dict_keys): - filtered_orig_dict = {} - for k, v in orig_dict.items(): - if k in model_dict_keys: - filtered_orig_dict[k] = v - else: - print(f"[ERROR] Failed to load <{k}> in backbone") - return filtered_orig_dict - - -def resnet34_v1b(pretrained=False, **kwargs): - model = ResNetV1b(BasicBlockV1b, [3, 4, 6, 3], **kwargs) - if pretrained: - model_dict = model.state_dict() - filtered_orig_dict = _safe_state_dict_filtering( - torch.hub.load(GLUON_RESNET_TORCH_HUB, 'gluon_resnet34_v1b', pretrained=True).state_dict(), - model_dict.keys() - ) - model_dict.update(filtered_orig_dict) - model.load_state_dict(model_dict) - return model - - -def resnet50_v1s(pretrained=False, **kwargs): - model = ResNetV1b(BottleneckV1b, [3, 4, 6, 3], deep_stem=True, stem_width=64, **kwargs) - if pretrained: - model_dict = model.state_dict() - filtered_orig_dict = _safe_state_dict_filtering( - torch.hub.load(GLUON_RESNET_TORCH_HUB, 'gluon_resnet50_v1s', pretrained=True).state_dict(), - model_dict.keys() - ) - model_dict.update(filtered_orig_dict) - model.load_state_dict(model_dict) - return model - - -def resnet101_v1s(pretrained=False, **kwargs): - model = ResNetV1b(BottleneckV1b, [3, 4, 23, 3], deep_stem=True, stem_width=64, **kwargs) - if pretrained: - model_dict = model.state_dict() - filtered_orig_dict = _safe_state_dict_filtering( - torch.hub.load(GLUON_RESNET_TORCH_HUB, 'gluon_resnet101_v1s', pretrained=True).state_dict(), - model_dict.keys() - ) - model_dict.update(filtered_orig_dict) - model.load_state_dict(model_dict) - return model - - -def resnet152_v1s(pretrained=False, **kwargs): - model = ResNetV1b(BottleneckV1b, [3, 8, 36, 3], deep_stem=True, stem_width=64, **kwargs) - if pretrained: - model_dict = model.state_dict() - filtered_orig_dict = _safe_state_dict_filtering( - torch.hub.load(GLUON_RESNET_TORCH_HUB, 'gluon_resnet152_v1s', pretrained=True).state_dict(), - model_dict.keys() - ) - model_dict.update(filtered_orig_dict) - model.load_state_dict(model_dict) - return model diff --git a/spaces/XS-1/BW_IMAGE_VIDEO_COLORIZER/setup.py b/spaces/XS-1/BW_IMAGE_VIDEO_COLORIZER/setup.py deleted file mode 100644 index 2e008ded9f468399c645ca45c4ada90acb6d3d54..0000000000000000000000000000000000000000 --- a/spaces/XS-1/BW_IMAGE_VIDEO_COLORIZER/setup.py +++ /dev/null @@ -1,40 +0,0 @@ -from setuptools import setup, find_packages - - -def get_description(): - return "Deep Learning library for colorizing and restoring old images and video" - - -# def get_long_description(): -# with open("README.md") as f: -# return f.read() - - -def get_requirements(): - with open("requirements.txt") as f: - return f.read().splitlines() - - -setup( - name="DeOldify", - version="0.0.1", - packages=find_packages(exclude=["tests"]), - url="https://github.com/jantic/DeOldify", - license="MIT License", - description=get_description(), - # long_description=get_long_description(), - # long_description_content_type="text/markdown", - classifiers=[ - "Development Status :: 4 - 
Beta", - "Framework :: Jupyter", - "Intended Audience :: Developers", - "Intended Audience :: Science/Research", - "License :: OSI Approved :: MIT License", - "Programming Language :: Python :: 3.6", - "Programming Language :: Python :: 3.7", - "Topic :: Scientific/Engineering :: Artificial Intelligence", - "Topic :: Software Development :: Libraries :: Python Modules", - ], - install_requires=get_requirements(), - python_requires=">=3.6", -) diff --git a/spaces/XingHe0127/Chatbot/modules/base_model.py b/spaces/XingHe0127/Chatbot/modules/base_model.py deleted file mode 100644 index ba3fc62514123d55f01e56460499c6a12e9ceaf6..0000000000000000000000000000000000000000 --- a/spaces/XingHe0127/Chatbot/modules/base_model.py +++ /dev/null @@ -1,551 +0,0 @@ -from __future__ import annotations -from typing import TYPE_CHECKING, List - -import logging -import json -import commentjson as cjson -import os -import sys -import requests -import urllib3 -import traceback - -from tqdm import tqdm -import colorama -from duckduckgo_search import ddg -import asyncio -import aiohttp -from enum import Enum - -from .presets import * -from .llama_func import * -from .utils import * -from . import shared -from .config import retrieve_proxy - - -class ModelType(Enum): - Unknown = -1 - OpenAI = 0 - ChatGLM = 1 - LLaMA = 2 - XMBot = 3 - - @classmethod - def get_type(cls, model_name: str): - model_type = None - model_name_lower = model_name.lower() - if "gpt" in model_name_lower: - model_type = ModelType.OpenAI - elif "chatglm" in model_name_lower: - model_type = ModelType.ChatGLM - elif "llama" in model_name_lower or "alpaca" in model_name_lower: - model_type = ModelType.LLaMA - elif "xmbot" in model_name_lower: - model_type = ModelType.XMBot - else: - model_type = ModelType.Unknown - return model_type - - -class BaseLLMModel: - def __init__( - self, - model_name, - system_prompt="", - temperature=1.0, - top_p=1.0, - n_choices=1, - stop=None, - max_generation_token=None, - presence_penalty=0, - frequency_penalty=0, - logit_bias=None, - user="", - ) -> None: - self.history = [] - self.all_token_counts = [] - self.model_name = model_name - self.model_type = ModelType.get_type(model_name) - try: - self.token_upper_limit = MODEL_TOKEN_LIMIT[model_name] - except KeyError: - self.token_upper_limit = DEFAULT_TOKEN_LIMIT - self.interrupted = False - self.system_prompt = system_prompt - self.api_key = None - self.need_api_key = False - self.single_turn = False - - self.temperature = temperature - self.top_p = top_p - self.n_choices = n_choices - self.stop_sequence = stop - self.max_generation_token = None - self.presence_penalty = presence_penalty - self.frequency_penalty = frequency_penalty - self.logit_bias = logit_bias - self.user_identifier = user - - def get_answer_stream_iter(self): - """stream predict, need to be implemented - conversations are stored in self.history, with the most recent question, in OpenAI format - should return a generator, each time give the next word (str) in the answer - """ - logging.warning("stream predict not implemented, using at once predict instead") - response, _ = self.get_answer_at_once() - yield response - - def get_answer_at_once(self): - """predict at once, need to be implemented - conversations are stored in self.history, with the most recent question, in OpenAI format - Should return: - the answer (str) - total token count (int) - """ - logging.warning("at once predict not implemented, using stream predict instead") - response_iter = self.get_answer_stream_iter() - count = 0 - for 
response in response_iter: - count += 1 - return response, sum(self.all_token_counts) + count - - def billing_info(self): - """get billing infomation, inplement if needed""" - logging.warning("billing info not implemented, using default") - return BILLING_NOT_APPLICABLE_MSG - - def count_token(self, user_input): - """get token count from input, implement if needed""" - logging.warning("token count not implemented, using default") - return len(user_input) - - def stream_next_chatbot(self, inputs, chatbot, fake_input=None, display_append=""): - def get_return_value(): - return chatbot, status_text - - status_text = i18n("开始实时传输回答……") - if fake_input: - chatbot.append((fake_input, "")) - else: - chatbot.append((inputs, "")) - - user_token_count = self.count_token(inputs) - self.all_token_counts.append(user_token_count) - logging.debug(f"输入token计数: {user_token_count}") - - stream_iter = self.get_answer_stream_iter() - - for partial_text in stream_iter: - chatbot[-1] = (chatbot[-1][0], partial_text + display_append) - self.all_token_counts[-1] += 1 - status_text = self.token_message() - yield get_return_value() - if self.interrupted: - self.recover() - break - self.history.append(construct_assistant(partial_text)) - - def next_chatbot_at_once(self, inputs, chatbot, fake_input=None, display_append=""): - if fake_input: - chatbot.append((fake_input, "")) - else: - chatbot.append((inputs, "")) - if fake_input is not None: - user_token_count = self.count_token(fake_input) - else: - user_token_count = self.count_token(inputs) - self.all_token_counts.append(user_token_count) - ai_reply, total_token_count = self.get_answer_at_once() - self.history.append(construct_assistant(ai_reply)) - if fake_input is not None: - self.history[-2] = construct_user(fake_input) - chatbot[-1] = (chatbot[-1][0], ai_reply + display_append) - if fake_input is not None: - self.all_token_counts[-1] += count_token(construct_assistant(ai_reply)) - else: - self.all_token_counts[-1] = total_token_count - sum(self.all_token_counts) - status_text = self.token_message() - return chatbot, status_text - - def handle_file_upload(self, files, chatbot): - """if the model accepts multi modal input, implement this function""" - status = gr.Markdown.update() - if files: - construct_index(self.api_key, file_src=files) - status = "索引构建完成" - return gr.Files.update(), chatbot, status - - def prepare_inputs(self, real_inputs, use_websearch, files, reply_language, chatbot): - fake_inputs = None - display_append = [] - limited_context = False - fake_inputs = real_inputs - if files: - from llama_index.indices.vector_store.base_query import GPTVectorStoreIndexQuery - from llama_index.indices.query.schema import QueryBundle - from langchain.embeddings.huggingface import HuggingFaceEmbeddings - from langchain.chat_models import ChatOpenAI - from llama_index import ( - GPTSimpleVectorIndex, - ServiceContext, - LangchainEmbedding, - OpenAIEmbedding, - ) - limited_context = True - msg = "加载索引中……" - logging.info(msg) - # yield chatbot + [(inputs, "")], msg - index = construct_index(self.api_key, file_src=files) - assert index is not None, "获取索引失败" - msg = "索引获取成功,生成回答中……" - logging.info(msg) - if local_embedding or self.model_type != ModelType.OpenAI: - embed_model = LangchainEmbedding(HuggingFaceEmbeddings()) - else: - embed_model = OpenAIEmbedding() - # yield chatbot + [(inputs, "")], msg - with retrieve_proxy(): - prompt_helper = PromptHelper( - max_input_size=4096, - num_output=5, - max_chunk_overlap=20, - chunk_size_limit=600, - ) - from llama_index 
import ServiceContext - - service_context = ServiceContext.from_defaults( - prompt_helper=prompt_helper, embed_model=embed_model - ) - query_object = GPTVectorStoreIndexQuery( - index.index_struct, - service_context=service_context, - similarity_top_k=5, - vector_store=index._vector_store, - docstore=index._docstore, - ) - query_bundle = QueryBundle(real_inputs) - nodes = query_object.retrieve(query_bundle) - reference_results = [n.node.text for n in nodes] - reference_results = add_source_numbers(reference_results, use_source=False) - display_append = add_details(reference_results) - display_append = "\n\n" + "".join(display_append) - real_inputs = ( - replace_today(PROMPT_TEMPLATE) - .replace("{query_str}", real_inputs) - .replace("{context_str}", "\n\n".join(reference_results)) - .replace("{reply_language}", reply_language) - ) - elif use_websearch: - limited_context = True - search_results = ddg(real_inputs, max_results=5) - reference_results = [] - for idx, result in enumerate(search_results): - logging.debug(f"搜索结果{idx + 1}:{result}") - domain_name = urllib3.util.parse_url(result["href"]).host - reference_results.append([result["body"], result["href"]]) - display_append.append( - # f"{idx+1}. [{domain_name}]({result['href']})\n" - f"
<li><a href=\"{result['href']}\" target=\"_blank\">{domain_name}</a></li>\n"
-                )
-            reference_results = add_source_numbers(reference_results)
-            display_append = "<ol>\n\n" + "".join(display_append) + "</ol>
    " - real_inputs = ( - replace_today(WEBSEARCH_PTOMPT_TEMPLATE) - .replace("{query}", real_inputs) - .replace("{web_results}", "\n\n".join(reference_results)) - .replace("{reply_language}", reply_language) - ) - else: - display_append = "" - return limited_context, fake_inputs, display_append, real_inputs, chatbot - - def predict( - self, - inputs, - chatbot, - stream=False, - use_websearch=False, - files=None, - reply_language="中文", - should_check_token_count=True, - ): # repetition_penalty, top_k - - status_text = "开始生成回答……" - logging.info( - "输入为:" + colorama.Fore.BLUE + f"{inputs}" + colorama.Style.RESET_ALL - ) - if should_check_token_count: - yield chatbot + [(inputs, "")], status_text - if reply_language == "跟随问题语言(不稳定)": - reply_language = "the same language as the question, such as English, 中文, 日本語, Español, Français, or Deutsch." - - limited_context, fake_inputs, display_append, inputs, chatbot = self.prepare_inputs(real_inputs=inputs, use_websearch=use_websearch, files=files, reply_language=reply_language, chatbot=chatbot) - yield chatbot + [(fake_inputs, "")], status_text - - if ( - self.need_api_key and - self.api_key is None - and not shared.state.multi_api_key - ): - status_text = STANDARD_ERROR_MSG + NO_APIKEY_MSG - logging.info(status_text) - chatbot.append((inputs, "")) - if len(self.history) == 0: - self.history.append(construct_user(inputs)) - self.history.append("") - self.all_token_counts.append(0) - else: - self.history[-2] = construct_user(inputs) - yield chatbot + [(inputs, "")], status_text - return - elif len(inputs.strip()) == 0: - status_text = STANDARD_ERROR_MSG + NO_INPUT_MSG - logging.info(status_text) - yield chatbot + [(inputs, "")], status_text - return - - if self.single_turn: - self.history = [] - self.all_token_counts = [] - self.history.append(construct_user(inputs)) - - try: - if stream: - logging.debug("使用流式传输") - iter = self.stream_next_chatbot( - inputs, - chatbot, - fake_input=fake_inputs, - display_append=display_append, - ) - for chatbot, status_text in iter: - yield chatbot, status_text - else: - logging.debug("不使用流式传输") - chatbot, status_text = self.next_chatbot_at_once( - inputs, - chatbot, - fake_input=fake_inputs, - display_append=display_append, - ) - yield chatbot, status_text - except Exception as e: - traceback.print_exc() - status_text = STANDARD_ERROR_MSG + str(e) - yield chatbot, status_text - - if len(self.history) > 1 and self.history[-1]["content"] != inputs: - logging.info( - "回答为:" - + colorama.Fore.BLUE - + f"{self.history[-1]['content']}" - + colorama.Style.RESET_ALL - ) - - if limited_context: - # self.history = self.history[-4:] - # self.all_token_counts = self.all_token_counts[-2:] - self.history = [] - self.all_token_counts = [] - - max_token = self.token_upper_limit - TOKEN_OFFSET - - if sum(self.all_token_counts) > max_token and should_check_token_count: - count = 0 - while ( - sum(self.all_token_counts) - > self.token_upper_limit * REDUCE_TOKEN_FACTOR - and sum(self.all_token_counts) > 0 - ): - count += 1 - del self.all_token_counts[0] - del self.history[:2] - logging.info(status_text) - status_text = f"为了防止token超限,模型忘记了早期的 {count} 轮对话" - yield chatbot, status_text - - def retry( - self, - chatbot, - stream=False, - use_websearch=False, - files=None, - reply_language="中文", - ): - logging.debug("重试中……") - if len(self.history) > 0: - inputs = self.history[-2]["content"] - del self.history[-2:] - self.all_token_counts.pop() - elif len(chatbot) > 0: - inputs = chatbot[-1][0] - else: - yield chatbot, 
f"{STANDARD_ERROR_MSG}上下文是空的" - return - - iter = self.predict( - inputs, - chatbot, - stream=stream, - use_websearch=use_websearch, - files=files, - reply_language=reply_language, - ) - for x in iter: - yield x - logging.debug("重试完毕") - - # def reduce_token_size(self, chatbot): - # logging.info("开始减少token数量……") - # chatbot, status_text = self.next_chatbot_at_once( - # summarize_prompt, - # chatbot - # ) - # max_token_count = self.token_upper_limit * REDUCE_TOKEN_FACTOR - # num_chat = find_n(self.all_token_counts, max_token_count) - # logging.info(f"previous_token_count: {self.all_token_counts}, keeping {num_chat} chats") - # chatbot = chatbot[:-1] - # self.history = self.history[-2*num_chat:] if num_chat > 0 else [] - # self.all_token_counts = self.all_token_counts[-num_chat:] if num_chat > 0 else [] - # msg = f"保留了最近{num_chat}轮对话" - # logging.info(msg) - # logging.info("减少token数量完毕") - # return chatbot, msg + "," + self.token_message(self.all_token_counts if len(self.all_token_counts) > 0 else [0]) - - def interrupt(self): - self.interrupted = True - - def recover(self): - self.interrupted = False - - def set_token_upper_limit(self, new_upper_limit): - self.token_upper_limit = new_upper_limit - print(f"token上限设置为{new_upper_limit}") - - def set_temperature(self, new_temperature): - self.temperature = new_temperature - - def set_top_p(self, new_top_p): - self.top_p = new_top_p - - def set_n_choices(self, new_n_choices): - self.n_choices = new_n_choices - - def set_stop_sequence(self, new_stop_sequence: str): - new_stop_sequence = new_stop_sequence.split(",") - self.stop_sequence = new_stop_sequence - - def set_max_tokens(self, new_max_tokens): - self.max_generation_token = new_max_tokens - - def set_presence_penalty(self, new_presence_penalty): - self.presence_penalty = new_presence_penalty - - def set_frequency_penalty(self, new_frequency_penalty): - self.frequency_penalty = new_frequency_penalty - - def set_logit_bias(self, logit_bias): - logit_bias = logit_bias.split() - bias_map = {} - encoding = tiktoken.get_encoding("cl100k_base") - for line in logit_bias: - word, bias_amount = line.split(":") - if word: - for token in encoding.encode(word): - bias_map[token] = float(bias_amount) - self.logit_bias = bias_map - - def set_user_identifier(self, new_user_identifier): - self.user_identifier = new_user_identifier - - def set_system_prompt(self, new_system_prompt): - self.system_prompt = new_system_prompt - - def set_key(self, new_access_key): - self.api_key = new_access_key.strip() - msg = i18n("API密钥更改为了") + hide_middle_chars(self.api_key) - logging.info(msg) - return new_access_key, msg - - def set_single_turn(self, new_single_turn): - self.single_turn = new_single_turn - - def reset(self): - self.history = [] - self.all_token_counts = [] - self.interrupted = False - return [], self.token_message([0]) - - def delete_first_conversation(self): - if self.history: - del self.history[:2] - del self.all_token_counts[0] - return self.token_message() - - def delete_last_conversation(self, chatbot): - if len(chatbot) > 0 and STANDARD_ERROR_MSG in chatbot[-1][1]: - msg = "由于包含报错信息,只删除chatbot记录" - chatbot.pop() - return chatbot, self.history - if len(self.history) > 0: - self.history.pop() - self.history.pop() - if len(chatbot) > 0: - msg = "删除了一组chatbot对话" - chatbot.pop() - if len(self.all_token_counts) > 0: - msg = "删除了一组对话的token计数记录" - self.all_token_counts.pop() - msg = "删除了一组对话" - return chatbot, msg - - def token_message(self, token_lst=None): - if token_lst is None: - token_lst = 
self.all_token_counts - token_sum = 0 - for i in range(len(token_lst)): - token_sum += sum(token_lst[: i + 1]) - return i18n("Token 计数: ") + f"{sum(token_lst)}" + i18n(",本次对话累计消耗了 ") + f"{token_sum} tokens" - - def save_chat_history(self, filename, chatbot, user_name): - if filename == "": - return - if not filename.endswith(".json"): - filename += ".json" - return save_file(filename, self.system_prompt, self.history, chatbot, user_name) - - def export_markdown(self, filename, chatbot, user_name): - if filename == "": - return - if not filename.endswith(".md"): - filename += ".md" - return save_file(filename, self.system_prompt, self.history, chatbot, user_name) - - def load_chat_history(self, filename, chatbot, user_name): - logging.debug(f"{user_name} 加载对话历史中……") - if type(filename) != str: - filename = filename.name - try: - with open(os.path.join(HISTORY_DIR, user_name, filename), "r") as f: - json_s = json.load(f) - try: - if type(json_s["history"][0]) == str: - logging.info("历史记录格式为旧版,正在转换……") - new_history = [] - for index, item in enumerate(json_s["history"]): - if index % 2 == 0: - new_history.append(construct_user(item)) - else: - new_history.append(construct_assistant(item)) - json_s["history"] = new_history - logging.info(new_history) - except: - # 没有对话历史 - pass - logging.debug(f"{user_name} 加载对话历史完毕") - self.history = json_s["history"] - return filename, json_s["system"], json_s["chatbot"] - except FileNotFoundError: - logging.warning(f"{user_name} 没有找到对话历史文件,不执行任何操作") - return filename, self.system_prompt, chatbot diff --git a/spaces/YanzBotz/YanzBotz-Models/vc_infer_pipeline.py b/spaces/YanzBotz/YanzBotz-Models/vc_infer_pipeline.py deleted file mode 100644 index 82c15f59a8072e1b317fa1d750ccc1b814a6989d..0000000000000000000000000000000000000000 --- a/spaces/YanzBotz/YanzBotz-Models/vc_infer_pipeline.py +++ /dev/null @@ -1,443 +0,0 @@ -import numpy as np, parselmouth, torch, pdb, sys, os -from time import time as ttime -import torch.nn.functional as F -import scipy.signal as signal -import pyworld, os, traceback, faiss, librosa, torchcrepe -from scipy import signal -from functools import lru_cache - -now_dir = os.getcwd() -sys.path.append(now_dir) - -bh, ah = signal.butter(N=5, Wn=48, btype="high", fs=16000) - -input_audio_path2wav = {} - - -@lru_cache -def cache_harvest_f0(input_audio_path, fs, f0max, f0min, frame_period): - audio = input_audio_path2wav[input_audio_path] - f0, t = pyworld.harvest( - audio, - fs=fs, - f0_ceil=f0max, - f0_floor=f0min, - frame_period=frame_period, - ) - f0 = pyworld.stonemask(audio, f0, t, fs) - return f0 - - -def change_rms(data1, sr1, data2, sr2, rate): # 1是输入音频,2是输出音频,rate是2的占比 - # print(data1.max(),data2.max()) - rms1 = librosa.feature.rms( - y=data1, frame_length=sr1 // 2 * 2, hop_length=sr1 // 2 - ) # 每半秒一个点 - rms2 = librosa.feature.rms(y=data2, frame_length=sr2 // 2 * 2, hop_length=sr2 // 2) - rms1 = torch.from_numpy(rms1) - rms1 = F.interpolate( - rms1.unsqueeze(0), size=data2.shape[0], mode="linear" - ).squeeze() - rms2 = torch.from_numpy(rms2) - rms2 = F.interpolate( - rms2.unsqueeze(0), size=data2.shape[0], mode="linear" - ).squeeze() - rms2 = torch.max(rms2, torch.zeros_like(rms2) + 1e-6) - data2 *= ( - torch.pow(rms1, torch.tensor(1 - rate)) - * torch.pow(rms2, torch.tensor(rate - 1)) - ).numpy() - return data2 - - -class VC(object): - def __init__(self, tgt_sr, config): - self.x_pad, self.x_query, self.x_center, self.x_max, self.is_half = ( - config.x_pad, - config.x_query, - config.x_center, - config.x_max, - config.is_half, - ) - 
self.sr = 16000 # hubert输入采样率 - self.window = 160 # 每帧点数 - self.t_pad = self.sr * self.x_pad # 每条前后pad时间 - self.t_pad_tgt = tgt_sr * self.x_pad - self.t_pad2 = self.t_pad * 2 - self.t_query = self.sr * self.x_query # 查询切点前后查询时间 - self.t_center = self.sr * self.x_center # 查询切点位置 - self.t_max = self.sr * self.x_max # 免查询时长阈值 - self.device = config.device - - def get_f0( - self, - input_audio_path, - x, - p_len, - f0_up_key, - f0_method, - filter_radius, - inp_f0=None, - ): - global input_audio_path2wav - time_step = self.window / self.sr * 1000 - f0_min = 50 - f0_max = 1100 - f0_mel_min = 1127 * np.log(1 + f0_min / 700) - f0_mel_max = 1127 * np.log(1 + f0_max / 700) - if f0_method == "pm": - f0 = ( - parselmouth.Sound(x, self.sr) - .to_pitch_ac( - time_step=time_step / 1000, - voicing_threshold=0.6, - pitch_floor=f0_min, - pitch_ceiling=f0_max, - ) - .selected_array["frequency"] - ) - pad_size = (p_len - len(f0) + 1) // 2 - if pad_size > 0 or p_len - len(f0) - pad_size > 0: - f0 = np.pad( - f0, [[pad_size, p_len - len(f0) - pad_size]], mode="constant" - ) - elif f0_method == "harvest": - input_audio_path2wav[input_audio_path] = x.astype(np.double) - f0 = cache_harvest_f0(input_audio_path, self.sr, f0_max, f0_min, 10) - if filter_radius > 2: - f0 = signal.medfilt(f0, 3) - elif f0_method == "crepe": - model = "full" - # Pick a batch size that doesn't cause memory errors on your gpu - batch_size = 512 - # Compute pitch using first gpu - audio = torch.tensor(np.copy(x))[None].float() - f0, pd = torchcrepe.predict( - audio, - self.sr, - self.window, - f0_min, - f0_max, - model, - batch_size=batch_size, - device=self.device, - return_periodicity=True, - ) - pd = torchcrepe.filter.median(pd, 3) - f0 = torchcrepe.filter.mean(f0, 3) - f0[pd < 0.1] = 0 - f0 = f0[0].cpu().numpy() - elif f0_method == "rmvpe": - if hasattr(self, "model_rmvpe") == False: - from rmvpe import RMVPE - - print("loading rmvpe model") - self.model_rmvpe = RMVPE( - "rmvpe.pt", is_half=self.is_half, device=self.device - ) - f0 = self.model_rmvpe.infer_from_audio(x, thred=0.03) - f0 *= pow(2, f0_up_key / 12) - # with open("test.txt","w")as f:f.write("\n".join([str(i)for i in f0.tolist()])) - tf0 = self.sr // self.window # 每秒f0点数 - if inp_f0 is not None: - delta_t = np.round( - (inp_f0[:, 0].max() - inp_f0[:, 0].min()) * tf0 + 1 - ).astype("int16") - replace_f0 = np.interp( - list(range(delta_t)), inp_f0[:, 0] * 100, inp_f0[:, 1] - ) - shape = f0[self.x_pad * tf0 : self.x_pad * tf0 + len(replace_f0)].shape[0] - f0[self.x_pad * tf0 : self.x_pad * tf0 + len(replace_f0)] = replace_f0[ - :shape - ] - # with open("test_opt.txt","w")as f:f.write("\n".join([str(i)for i in f0.tolist()])) - f0bak = f0.copy() - f0_mel = 1127 * np.log(1 + f0 / 700) - f0_mel[f0_mel > 0] = (f0_mel[f0_mel > 0] - f0_mel_min) * 254 / ( - f0_mel_max - f0_mel_min - ) + 1 - f0_mel[f0_mel <= 1] = 1 - f0_mel[f0_mel > 255] = 255 - f0_coarse = np.rint(f0_mel).astype(np.int) - return f0_coarse, f0bak # 1-0 - - def vc( - self, - model, - net_g, - sid, - audio0, - pitch, - pitchf, - times, - index, - big_npy, - index_rate, - version, - protect, - ): # ,file_index,file_big_npy - feats = torch.from_numpy(audio0) - if self.is_half: - feats = feats.half() - else: - feats = feats.float() - if feats.dim() == 2: # double channels - feats = feats.mean(-1) - assert feats.dim() == 1, feats.dim() - feats = feats.view(1, -1) - padding_mask = torch.BoolTensor(feats.shape).to(self.device).fill_(False) - - inputs = { - "source": feats.to(self.device), - "padding_mask": padding_mask, - 
"output_layer": 9 if version == "v1" else 12, - } - t0 = ttime() - with torch.no_grad(): - logits = model.extract_features(**inputs) - feats = model.final_proj(logits[0]) if version == "v1" else logits[0] - if protect < 0.5 and pitch != None and pitchf != None: - feats0 = feats.clone() - if ( - isinstance(index, type(None)) == False - and isinstance(big_npy, type(None)) == False - and index_rate != 0 - ): - npy = feats[0].cpu().numpy() - if self.is_half: - npy = npy.astype("float32") - - # _, I = index.search(npy, 1) - # npy = big_npy[I.squeeze()] - - score, ix = index.search(npy, k=8) - weight = np.square(1 / score) - weight /= weight.sum(axis=1, keepdims=True) - npy = np.sum(big_npy[ix] * np.expand_dims(weight, axis=2), axis=1) - - if self.is_half: - npy = npy.astype("float16") - feats = ( - torch.from_numpy(npy).unsqueeze(0).to(self.device) * index_rate - + (1 - index_rate) * feats - ) - - feats = F.interpolate(feats.permute(0, 2, 1), scale_factor=2).permute(0, 2, 1) - if protect < 0.5 and pitch != None and pitchf != None: - feats0 = F.interpolate(feats0.permute(0, 2, 1), scale_factor=2).permute( - 0, 2, 1 - ) - t1 = ttime() - p_len = audio0.shape[0] // self.window - if feats.shape[1] < p_len: - p_len = feats.shape[1] - if pitch != None and pitchf != None: - pitch = pitch[:, :p_len] - pitchf = pitchf[:, :p_len] - - if protect < 0.5 and pitch != None and pitchf != None: - pitchff = pitchf.clone() - pitchff[pitchf > 0] = 1 - pitchff[pitchf < 1] = protect - pitchff = pitchff.unsqueeze(-1) - feats = feats * pitchff + feats0 * (1 - pitchff) - feats = feats.to(feats0.dtype) - p_len = torch.tensor([p_len], device=self.device).long() - with torch.no_grad(): - if pitch != None and pitchf != None: - audio1 = ( - (net_g.infer(feats, p_len, pitch, pitchf, sid)[0][0, 0]) - .data.cpu() - .float() - .numpy() - ) - else: - audio1 = ( - (net_g.infer(feats, p_len, sid)[0][0, 0]).data.cpu().float().numpy() - ) - del feats, p_len, padding_mask - if torch.cuda.is_available(): - torch.cuda.empty_cache() - t2 = ttime() - times[0] += t1 - t0 - times[2] += t2 - t1 - return audio1 - - def pipeline( - self, - model, - net_g, - sid, - audio, - input_audio_path, - times, - f0_up_key, - f0_method, - file_index, - # file_big_npy, - index_rate, - if_f0, - filter_radius, - tgt_sr, - resample_sr, - rms_mix_rate, - version, - protect, - f0_file=None, - ): - if ( - file_index != "" - # and file_big_npy != "" - # and os.path.exists(file_big_npy) == True - and os.path.exists(file_index) == True - and index_rate != 0 - ): - try: - index = faiss.read_index(file_index) - # big_npy = np.load(file_big_npy) - big_npy = index.reconstruct_n(0, index.ntotal) - except: - traceback.print_exc() - index = big_npy = None - else: - index = big_npy = None - audio = signal.filtfilt(bh, ah, audio) - audio_pad = np.pad(audio, (self.window // 2, self.window // 2), mode="reflect") - opt_ts = [] - if audio_pad.shape[0] > self.t_max: - audio_sum = np.zeros_like(audio) - for i in range(self.window): - audio_sum += audio_pad[i : i - self.window] - for t in range(self.t_center, audio.shape[0], self.t_center): - opt_ts.append( - t - - self.t_query - + np.where( - np.abs(audio_sum[t - self.t_query : t + self.t_query]) - == np.abs(audio_sum[t - self.t_query : t + self.t_query]).min() - )[0][0] - ) - s = 0 - audio_opt = [] - t = None - t1 = ttime() - audio_pad = np.pad(audio, (self.t_pad, self.t_pad), mode="reflect") - p_len = audio_pad.shape[0] // self.window - inp_f0 = None - if hasattr(f0_file, "name") == True: - try: - with open(f0_file.name, "r") 
as f: - lines = f.read().strip("\n").split("\n") - inp_f0 = [] - for line in lines: - inp_f0.append([float(i) for i in line.split(",")]) - inp_f0 = np.array(inp_f0, dtype="float32") - except: - traceback.print_exc() - sid = torch.tensor(sid, device=self.device).unsqueeze(0).long() - pitch, pitchf = None, None - if if_f0 == 1: - pitch, pitchf = self.get_f0( - input_audio_path, - audio_pad, - p_len, - f0_up_key, - f0_method, - filter_radius, - inp_f0, - ) - pitch = pitch[:p_len] - pitchf = pitchf[:p_len] - if self.device == "mps": - pitchf = pitchf.astype(np.float32) - pitch = torch.tensor(pitch, device=self.device).unsqueeze(0).long() - pitchf = torch.tensor(pitchf, device=self.device).unsqueeze(0).float() - t2 = ttime() - times[1] += t2 - t1 - for t in opt_ts: - t = t // self.window * self.window - if if_f0 == 1: - audio_opt.append( - self.vc( - model, - net_g, - sid, - audio_pad[s : t + self.t_pad2 + self.window], - pitch[:, s // self.window : (t + self.t_pad2) // self.window], - pitchf[:, s // self.window : (t + self.t_pad2) // self.window], - times, - index, - big_npy, - index_rate, - version, - protect, - )[self.t_pad_tgt : -self.t_pad_tgt] - ) - else: - audio_opt.append( - self.vc( - model, - net_g, - sid, - audio_pad[s : t + self.t_pad2 + self.window], - None, - None, - times, - index, - big_npy, - index_rate, - version, - protect, - )[self.t_pad_tgt : -self.t_pad_tgt] - ) - s = t - if if_f0 == 1: - audio_opt.append( - self.vc( - model, - net_g, - sid, - audio_pad[t:], - pitch[:, t // self.window :] if t is not None else pitch, - pitchf[:, t // self.window :] if t is not None else pitchf, - times, - index, - big_npy, - index_rate, - version, - protect, - )[self.t_pad_tgt : -self.t_pad_tgt] - ) - else: - audio_opt.append( - self.vc( - model, - net_g, - sid, - audio_pad[t:], - None, - None, - times, - index, - big_npy, - index_rate, - version, - protect, - )[self.t_pad_tgt : -self.t_pad_tgt] - ) - audio_opt = np.concatenate(audio_opt) - if rms_mix_rate != 1: - audio_opt = change_rms(audio, 16000, audio_opt, tgt_sr, rms_mix_rate) - if resample_sr >= 16000 and tgt_sr != resample_sr: - audio_opt = librosa.resample( - audio_opt, orig_sr=tgt_sr, target_sr=resample_sr - ) - audio_max = np.abs(audio_opt).max() / 0.99 - max_int16 = 32768 - if audio_max > 1: - max_int16 /= audio_max - audio_opt = (audio_opt * max_int16).astype(np.int16) - del pitch, pitchf, sid - if torch.cuda.is_available(): - torch.cuda.empty_cache() - return audio_opt diff --git a/spaces/Yiqin/ChatVID/model/utils/generate_tf_record.py b/spaces/Yiqin/ChatVID/model/utils/generate_tf_record.py deleted file mode 100644 index 881a91935a3a7980215d6b96a4ab1fcf599277cf..0000000000000000000000000000000000000000 --- a/spaces/Yiqin/ChatVID/model/utils/generate_tf_record.py +++ /dev/null @@ -1,278 +0,0 @@ -# Copyright 2021 DeepMind Technologies Limited. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# https://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
-"""Python script to generate TFRecords of SequenceExample from csv.""" - -import contextlib -import math -import os -from typing import Optional, Sequence - -from absl import app -from absl import flags -import numpy as np -import pandas as pd -import tensorflow as tf -from tqdm import tqdm - -flags.DEFINE_string("csv_path", None, "Input csv") -flags.DEFINE_string("output_path", None, "Tfrecords output path.") -flags.DEFINE_string( - "features_path", - None, - "In case features are stored in individual files and not in the csv.", -) -flags.DEFINE_integer( - "num_shards", - -1, - ( - "Number of shards to output, -1 means" - "it will automatically adapt to the sqrt(num_examples)." - ), -) -flags.DEFINE_bool("shuffle_csv", False, "Whether or not to shuffle the csv.") -FLAGS = flags.FLAGS - - -@contextlib.contextmanager -def _close_on_exit(writers): - """Call close on all writers on exit.""" - try: - yield writers - finally: - for writer in writers: - writer.close() - - -def add_float_list(key: str, values: Sequence[float], - sequence: tf.train.SequenceExample): - sequence.feature_lists.feature_list[key].feature.add( - ).float_list.value[:] = values - - -def add_bytes_list(key: str, values: Sequence[bytes], - sequence: tf.train.SequenceExample): - sequence.feature_lists.feature_list[key].feature.add( - ).bytes_list.value[:] = values - - -def add_int_list(key: str, values: Sequence[int], - sequence: tf.train.SequenceExample): - sequence.feature_lists.feature_list[key].feature.add( - ).int64_list.value[:] = values - - -def set_context_int_list(key: str, value: Sequence[int], - sequence: tf.train.SequenceExample): - sequence.context.feature[key].int64_list.value[:] = value - - -def set_context_bytes(key: str, value: bytes, - sequence: tf.train.SequenceExample): - sequence.context.feature[key].bytes_list.value[:] = (value,) - - -def set_context_float(key: str, value: float, - sequence: tf.train.SequenceExample): - sequence.context.feature[key].float_list.value[:] = (value,) - - -def set_context_int(key: str, value: int, sequence: tf.train.SequenceExample): - sequence.context.feature[key].int64_list.value[:] = (value,) - - -def generate_sequence_example(video_id: str, - start: Optional[Sequence[float]], - end: Optional[Sequence[float]], - caption: Optional[Sequence[str]], - asr_start: Sequence[float], - asr_end: Sequence[float], - asr_string: Sequence[str], - features: Sequence[Sequence[float]], - duration: int, - split: Sequence[int] = None): - """Generate a sequence example.""" - - # Initiate the sequence example. - seq_example = tf.train.SequenceExample() - - # Add dense captioning annotations if these exist. - if caption is not None: - for s, e, c in zip(start, end, caption): - seq_example.context.feature[ - "video/timestamps/start" - ].int64_list.value.append(s) - seq_example.context.feature[ - "video/timestamps/end" - ].int64_list.value.append(e) - seq_example.context.feature["caption/string"].bytes_list.value.append( - c.encode() - ) - - # Add ASR. - if asr_start: - for s, e, c in zip(asr_start, asr_end, asr_string): - seq_example.context.feature[ - "ASR/timestamps/start" - ].int64_list.value.append(s) - seq_example.context.feature["ASR/timestamps/end"].int64_list.value.append( - e - ) - seq_example.context.feature["ASR/string"].bytes_list.value.append( - c.encode() - ) - - # Add visual features. 
- for f in features: - add_float_list("image/clip_embeddings", f, seq_example) - - if split is not None: - for s in split: - seq_example.context.feature["split"].int64_list.value.append(s) - - # Add other metadata. - set_context_bytes("videoid", video_id.encode(), seq_example) - set_context_int("video/duration", duration, seq_example) - return seq_example - -def generate(video_info): - # reads the input csv. - # input_csv = pd.read_csv(FLAGS.csv_path) - # if FLAGS.num_shards == -1: - # num_shards = int(math.sqrt(len(video_info))) - # else: - # num_shards = FLAGS.num_shards - num_shards = 1 - # Set up the TFRecordWriters. - # basename = os.path.splitext(os.path.basename(FLAGS.csv_path))[0] - basename = video_info['basename'] - shard_names = [ - os.path.join(video_info['output_path'], f"{basename}-{i:05d}-of-{num_shards:05d}") - for i in range(num_shards) - ] - writers = [tf.io.TFRecordWriter(shard_name) for shard_name in shard_names] - - with _close_on_exit(writers) as writers: - for i in tqdm(range(len(video_info))): - print( - "Processing example %d of %d (%d%%) \r" % - (i, len(video_info), i * 100 / len(video_info)), - end="") - # no gds needed - start = None - end = None - caption = None - - asr_start = video_info["asr_start"] - if isinstance(asr_start, str): - asr_start = eval(asr_start) # pylint:disable=eval-used - asr_end = video_info["asr_end"] - if isinstance(asr_end, str): - asr_end = eval(asr_end) # pylint:disable=eval-used - asr_string = video_info["asr_string"] - if isinstance(asr_string, str): - asr_string = eval(asr_string) # pylint:disable=eval-used - video_id = video_info["video_id"] - split = None - # pylint:disable=eval-used - if "features" not in video_info: # load on the fly - assert video_info['features_path'] - features = list( - np.load(os.path.join(video_info['features_path'], video_id + ".npy")) - ) - else: - features = video_info["features"] # pylint:disable=eval-used - duration = int(video_info["duration"]) - seq_ex = generate_sequence_example( - video_id, - start, - end, - caption, - asr_start, - asr_end, - asr_string, - features, - duration, - split) - writers[i % len(writers)].write(seq_ex.SerializeToString()) - -def main(*args): - # reads the input csv. - input_csv = pd.read_csv(FLAGS.csv_path) - if FLAGS.num_shards == -1: - num_shards = int(math.sqrt(len(input_csv))) - else: - num_shards = FLAGS.num_shards - # Set up the TFRecordWriters. 
- basename = os.path.splitext(os.path.basename(FLAGS.csv_path))[0] - shard_names = [ - os.path.join(FLAGS.output_path, f"{basename}-{i:05d}-of-{num_shards:05d}") - for i in range(num_shards) - ] - writers = [tf.io.TFRecordWriter(shard_name) for shard_name in shard_names] - - if FLAGS.shuffle_csv: - input_csv = input_csv.sample(frac=1) - with _close_on_exit(writers) as writers: - for i in tqdm(range(len(input_csv))): - print( - "Processing example %d of %d (%d%%) \r" % - (i, len(input_csv), i * 100 / len(input_csv)), - end="") - if "caption" in input_csv: - start = eval(input_csv["start"].values[i]) # pylint:disable=eval-used - end = eval(input_csv["end"].values[i]) # pylint:disable=eval-used - caption = eval(input_csv["caption"].values[i]) # pylint:disable=eval-used - else: - start = None - end = None - caption = None - asr_start = input_csv["asr_start"].values[i] - if isinstance(asr_start, str): - asr_start = eval(asr_start) # pylint:disable=eval-used - asr_end = input_csv["asr_end"].values[i] - if isinstance(asr_end, str): - asr_end = eval(asr_end) # pylint:disable=eval-used - asr_string = input_csv["asr_string"].values[i] - if isinstance(asr_string, str): - asr_string = eval(asr_string) # pylint:disable=eval-used - video_id = input_csv["video_id"].values[i] - split = None - if "split" in input_csv: - split = input_csv["split"].values[i] - if isinstance(split, str): - split = eval(split) # pylint:disable=eval-used - if "features" not in input_csv: # load on the fly - assert FLAGS.features_path - features = list( - np.load(os.path.join(FLAGS.features_path, video_id + ".npy")) - ) - else: - features = eval(input_csv["features"].values[i]) # pylint:disable=eval-used - duration = int(input_csv["duration"].values[i]) - seq_ex = generate_sequence_example( - video_id, - start, - end, - caption, - asr_start, - asr_end, - asr_string, - features, - duration, - split) - writers[i % len(writers)].write(seq_ex.SerializeToString()) - - -if __name__ == "__main__": - app.run(main) diff --git a/spaces/Yiqin/ChatVID/model/vision/grit_src/third_party/CenterNet2/detectron2/data/common.py b/spaces/Yiqin/ChatVID/model/vision/grit_src/third_party/CenterNet2/detectron2/data/common.py deleted file mode 100644 index d6b8742417abc897f5faa190db1341bbe7b2940d..0000000000000000000000000000000000000000 --- a/spaces/Yiqin/ChatVID/model/vision/grit_src/third_party/CenterNet2/detectron2/data/common.py +++ /dev/null @@ -1,241 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -import copy -import itertools -import logging -import numpy as np -import pickle -import random -import torch.utils.data as data -from torch.utils.data.sampler import Sampler - -from detectron2.utils.serialize import PicklableWrapper - -__all__ = ["MapDataset", "DatasetFromList", "AspectRatioGroupedDataset", "ToIterableDataset"] - - -def _shard_iterator_dataloader_worker(iterable): - # Shard the iterable if we're currently inside pytorch dataloader worker. - worker_info = data.get_worker_info() - if worker_info is None or worker_info.num_workers == 1: - # do nothing - yield from iterable - else: - yield from itertools.islice(iterable, worker_info.id, None, worker_info.num_workers) - - -class _MapIterableDataset(data.IterableDataset): - """ - Map a function over elements in an IterableDataset. - - Similar to pytorch's MapIterDataPipe, but support filtering when map_func - returns None. - - This class is not public-facing. Will be called by `MapDataset`. 
- """ - - def __init__(self, dataset, map_func): - self._dataset = dataset - self._map_func = PicklableWrapper(map_func) # wrap so that a lambda will work - - def __len__(self): - return len(self._dataset) - - def __iter__(self): - for x in map(self._map_func, self._dataset): - if x is not None: - yield x - - -class MapDataset(data.Dataset): - """ - Map a function over the elements in a dataset. - """ - - def __init__(self, dataset, map_func): - """ - Args: - dataset: a dataset where map function is applied. Can be either - map-style or iterable dataset. When given an iterable dataset, - the returned object will also be an iterable dataset. - map_func: a callable which maps the element in dataset. map_func can - return None to skip the data (e.g. in case of errors). - How None is handled depends on the style of `dataset`. - If `dataset` is map-style, it randomly tries other elements. - If `dataset` is iterable, it skips the data and tries the next. - """ - self._dataset = dataset - self._map_func = PicklableWrapper(map_func) # wrap so that a lambda will work - - self._rng = random.Random(42) - self._fallback_candidates = set(range(len(dataset))) - - def __new__(cls, dataset, map_func): - is_iterable = isinstance(dataset, data.IterableDataset) - if is_iterable: - return _MapIterableDataset(dataset, map_func) - else: - return super().__new__(cls) - - def __getnewargs__(self): - return self._dataset, self._map_func - - def __len__(self): - return len(self._dataset) - - def __getitem__(self, idx): - retry_count = 0 - cur_idx = int(idx) - - while True: - data = self._map_func(self._dataset[cur_idx]) - if data is not None: - self._fallback_candidates.add(cur_idx) - return data - - # _map_func fails for this idx, use a random new index from the pool - retry_count += 1 - self._fallback_candidates.discard(cur_idx) - cur_idx = self._rng.sample(self._fallback_candidates, k=1)[0] - - if retry_count >= 3: - logger = logging.getLogger(__name__) - logger.warning( - "Failed to apply `_map_func` for idx: {}, retry count: {}".format( - idx, retry_count - ) - ) - - -class DatasetFromList(data.Dataset): - """ - Wrap a list to a torch Dataset. It produces elements of the list as data. - """ - - def __init__(self, lst: list, copy: bool = True, serialize: bool = True): - """ - Args: - lst (list): a list which contains elements to produce. - copy (bool): whether to deepcopy the element when producing it, - so that the result can be modified in place without affecting the - source in the list. - serialize (bool): whether to hold memory using serialized objects, when - enabled, data loader workers can use shared RAM from master - process instead of making a copy. 
- """ - self._lst = lst - self._copy = copy - self._serialize = serialize - - def _serialize(data): - buffer = pickle.dumps(data, protocol=-1) - return np.frombuffer(buffer, dtype=np.uint8) - - if self._serialize: - logger = logging.getLogger(__name__) - logger.info( - "Serializing {} elements to byte tensors and concatenating them all ...".format( - len(self._lst) - ) - ) - self._lst = [_serialize(x) for x in self._lst] - self._addr = np.asarray([len(x) for x in self._lst], dtype=np.int64) - self._addr = np.cumsum(self._addr) - self._lst = np.concatenate(self._lst) - logger.info("Serialized dataset takes {:.2f} MiB".format(len(self._lst) / 1024 ** 2)) - - def __len__(self): - if self._serialize: - return len(self._addr) - else: - return len(self._lst) - - def __getitem__(self, idx): - if self._serialize: - start_addr = 0 if idx == 0 else self._addr[idx - 1].item() - end_addr = self._addr[idx].item() - bytes = memoryview(self._lst[start_addr:end_addr]) - return pickle.loads(bytes) - elif self._copy: - return copy.deepcopy(self._lst[idx]) - else: - return self._lst[idx] - - -class ToIterableDataset(data.IterableDataset): - """ - Convert an old indices-based (also called map-style) dataset - to an iterable-style dataset. - """ - - def __init__(self, dataset: data.Dataset, sampler: Sampler, shard_sampler: bool = True): - """ - Args: - dataset: an old-style dataset with ``__getitem__`` - sampler: a cheap iterable that produces indices to be applied on ``dataset``. - shard_sampler: whether to shard the sampler based on the current pytorch data loader - worker id. When an IterableDataset is forked by pytorch's DataLoader into multiple - workers, it is responsible for sharding its data based on worker id so that workers - don't produce identical data. - - Most samplers (like our TrainingSampler) do not shard based on dataloader worker id - and this argument should be set to True. But certain samplers may be already - sharded, in that case this argument should be set to False. - """ - assert not isinstance(dataset, data.IterableDataset), dataset - assert isinstance(sampler, Sampler), sampler - self.dataset = dataset - self.sampler = sampler - self.shard_sampler = shard_sampler - - def __iter__(self): - if not self.shard_sampler: - sampler = self.sampler - else: - # With map-style dataset, `DataLoader(dataset, sampler)` runs the - # sampler in main process only. But `DataLoader(ToIterableDataset(dataset, sampler))` - # will run sampler in every of the N worker. So we should only keep 1/N of the ids on - # each worker. The assumption is that sampler is cheap to iterate so it's fine to - # discard ids in workers. - sampler = _shard_iterator_dataloader_worker(self.sampler) - for idx in sampler: - yield self.dataset[idx] - - def __len__(self): - return len(self.sampler) - - -class AspectRatioGroupedDataset(data.IterableDataset): - """ - Batch data that have similar aspect ratio together. - In this implementation, images whose aspect ratio < (or >) 1 will - be batched together. - This improves training speed because the images then need less padding - to form a batch. - - It assumes the underlying dataset produces dicts with "width" and "height" keys. - It will then produce a list of original dicts with length = batch_size, - all with similar aspect ratios. - """ - - def __init__(self, dataset, batch_size): - """ - Args: - dataset: an iterable. Each element must be a dict with keys - "width" and "height", which will be used to batch data. 
- batch_size (int): - """ - self.dataset = dataset - self.batch_size = batch_size - self._buckets = [[] for _ in range(2)] - # Hard-coded two aspect ratio groups: w > h and w < h. - # Can add support for more aspect ratio groups, but doesn't seem useful - - def __iter__(self): - for d in self.dataset: - w, h = d["width"], d["height"] - bucket_id = 0 if w > h else 1 - bucket = self._buckets[bucket_id] - bucket.append(d) - if len(bucket) == self.batch_size: - yield bucket[:] - del bucket[:] diff --git a/spaces/YlcldKlns/bing/src/pages/api/blob.ts b/spaces/YlcldKlns/bing/src/pages/api/blob.ts deleted file mode 100644 index fecd48031916b2284b8958892196e0a1ad420421..0000000000000000000000000000000000000000 --- a/spaces/YlcldKlns/bing/src/pages/api/blob.ts +++ /dev/null @@ -1,40 +0,0 @@ -'use server' - -import { NextApiRequest, NextApiResponse } from 'next' -import { Readable } from 'node:stream' -import { fetch } from '@/lib/isomorphic' - -const API_DOMAIN = 'https://www.bing.com' - -export default async function handler(req: NextApiRequest, res: NextApiResponse) { - try { - const { bcid } = req.query - - const { headers, body } = await fetch(`${API_DOMAIN}/images/blob?bcid=${bcid}`, - { - method: 'GET', - headers: { - "sec-ch-ua": "\"Not/A)Brand\";v=\"99\", \"Google Chrome\";v=\"115\", \"Chromium\";v=\"115\"", - "sec-ch-ua-mobile": "?0", - "sec-ch-ua-platform": "\"Windows\"", - "Referrer-Policy": "origin-when-cross-origin", - }, - }, - ) - - res.writeHead(200, { - 'Content-Length': headers.get('content-length')!, - 'Content-Type': headers.get('content-type')!, - }) - // @ts-ignore - return Readable.fromWeb(body!).pipe(res) - } catch (e) { - console.log('Error', e) - return res.json({ - result: { - value: 'UploadFailed', - message: `${e}` - } - }) - } -} diff --git a/spaces/YouLiXiya/Mobile-SAM/segment_anything/segment_anything/modeling/mask_decoder_hq.py b/spaces/YouLiXiya/Mobile-SAM/segment_anything/segment_anything/modeling/mask_decoder_hq.py deleted file mode 100644 index c4576f3495ae72d639b2278c4c252e3e02e5d424..0000000000000000000000000000000000000000 --- a/spaces/YouLiXiya/Mobile-SAM/segment_anything/segment_anything/modeling/mask_decoder_hq.py +++ /dev/null @@ -1,232 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# Modified by HQ-SAM team -# All rights reserved. - -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -import torch -from torch import nn -from torch.nn import functional as F - -from typing import List, Tuple, Type - -from .common import LayerNorm2d - - -class MaskDecoderHQ(nn.Module): - def __init__( - self, - *, - transformer_dim: int, - transformer: nn.Module, - num_multimask_outputs: int = 3, - activation: Type[nn.Module] = nn.GELU, - iou_head_depth: int = 3, - iou_head_hidden_dim: int = 256, - vit_dim: int = 1024, - ) -> None: - """ - Predicts masks given an image and prompt embeddings, using a - transformer architecture. 
- - Arguments: - transformer_dim (int): the channel dimension of the transformer - transformer (nn.Module): the transformer used to predict masks - num_multimask_outputs (int): the number of masks to predict - when disambiguating masks - activation (nn.Module): the type of activation to use when - upscaling masks - iou_head_depth (int): the depth of the MLP used to predict - mask quality - iou_head_hidden_dim (int): the hidden dimension of the MLP - used to predict mask quality - """ - super().__init__() - self.transformer_dim = transformer_dim - self.transformer = transformer - - self.num_multimask_outputs = num_multimask_outputs - - self.iou_token = nn.Embedding(1, transformer_dim) - self.num_mask_tokens = num_multimask_outputs + 1 - self.mask_tokens = nn.Embedding(self.num_mask_tokens, transformer_dim) - - self.output_upscaling = nn.Sequential( - nn.ConvTranspose2d(transformer_dim, transformer_dim // 4, kernel_size=2, stride=2), - LayerNorm2d(transformer_dim // 4), - activation(), - nn.ConvTranspose2d(transformer_dim // 4, transformer_dim // 8, kernel_size=2, stride=2), - activation(), - ) - self.output_hypernetworks_mlps = nn.ModuleList( - [ - MLP(transformer_dim, transformer_dim, transformer_dim // 8, 3) - for i in range(self.num_mask_tokens) - ] - ) - - self.iou_prediction_head = MLP( - transformer_dim, iou_head_hidden_dim, self.num_mask_tokens, iou_head_depth - ) - - # HQ-SAM parameters - self.hf_token = nn.Embedding(1, transformer_dim) # HQ-Ouptput-Token - self.hf_mlp = MLP(transformer_dim, transformer_dim, transformer_dim // 8, 3) # corresponding new MLP layer for HQ-Ouptput-Token - self.num_mask_tokens = self.num_mask_tokens + 1 - - # three conv fusion layers for obtaining HQ-Feature - self.compress_vit_feat = nn.Sequential( - nn.ConvTranspose2d(vit_dim, transformer_dim, kernel_size=2, stride=2), - LayerNorm2d(transformer_dim), - nn.GELU(), - nn.ConvTranspose2d(transformer_dim, transformer_dim // 8, kernel_size=2, stride=2)) - - self.embedding_encoder = nn.Sequential( - nn.ConvTranspose2d(transformer_dim, transformer_dim // 4, kernel_size=2, stride=2), - LayerNorm2d(transformer_dim // 4), - nn.GELU(), - nn.ConvTranspose2d(transformer_dim // 4, transformer_dim // 8, kernel_size=2, stride=2), - ) - self.embedding_maskfeature = nn.Sequential( - nn.Conv2d(transformer_dim // 8, transformer_dim // 4, 3, 1, 1), - LayerNorm2d(transformer_dim // 4), - nn.GELU(), - nn.Conv2d(transformer_dim // 4, transformer_dim // 8, 3, 1, 1)) - - - - def forward( - self, - image_embeddings: torch.Tensor, - image_pe: torch.Tensor, - sparse_prompt_embeddings: torch.Tensor, - dense_prompt_embeddings: torch.Tensor, - multimask_output: bool, - hq_token_only: bool, - interm_embeddings: torch.Tensor, - ) -> Tuple[torch.Tensor, torch.Tensor]: - """ - Predict masks given image and prompt embeddings. - - Arguments: - image_embeddings (torch.Tensor): the embeddings from the ViT image encoder - image_pe (torch.Tensor): positional encoding with the shape of image_embeddings - sparse_prompt_embeddings (torch.Tensor): the embeddings of the points and boxes - dense_prompt_embeddings (torch.Tensor): the embeddings of the mask inputs - multimask_output (bool): Whether to return multiple masks or a single - mask. 
- - Returns: - torch.Tensor: batched predicted masks - torch.Tensor: batched predictions of mask quality - """ - vit_features = interm_embeddings[0].permute(0, 3, 1, 2) # early-layer ViT feature, after 1st global attention block in ViT - hq_features = self.embedding_encoder(image_embeddings) + self.compress_vit_feat(vit_features) - - masks, iou_pred = self.predict_masks( - image_embeddings=image_embeddings, - image_pe=image_pe, - sparse_prompt_embeddings=sparse_prompt_embeddings, - dense_prompt_embeddings=dense_prompt_embeddings, - hq_features=hq_features, - ) - - # Select the correct mask or masks for output - if multimask_output: - # mask with highest score - mask_slice = slice(1,self.num_mask_tokens-1) - iou_pred = iou_pred[:, mask_slice] - iou_pred, max_iou_idx = torch.max(iou_pred,dim=1) - iou_pred = iou_pred.unsqueeze(1) - masks_multi = masks[:, mask_slice, :, :] - masks_sam = masks_multi[torch.arange(masks_multi.size(0)),max_iou_idx].unsqueeze(1) - else: - # singale mask output, default - mask_slice = slice(0, 1) - iou_pred = iou_pred[:,mask_slice] - masks_sam = masks[:,mask_slice] - - masks_hq = masks[:,slice(self.num_mask_tokens-1, self.num_mask_tokens)] - if hq_token_only: - masks = masks_hq - else: - masks = masks_sam + masks_hq - # Prepare output - return masks, iou_pred - - def predict_masks( - self, - image_embeddings: torch.Tensor, - image_pe: torch.Tensor, - sparse_prompt_embeddings: torch.Tensor, - dense_prompt_embeddings: torch.Tensor, - hq_features: torch.Tensor, - ) -> Tuple[torch.Tensor, torch.Tensor]: - """Predicts masks. See 'forward' for more details.""" - # Concatenate output tokens - output_tokens = torch.cat([self.iou_token.weight, self.mask_tokens.weight, self.hf_token.weight], dim=0) - output_tokens = output_tokens.unsqueeze(0).expand(sparse_prompt_embeddings.size(0), -1, -1) - tokens = torch.cat((output_tokens, sparse_prompt_embeddings), dim=1) - - # Expand per-image data in batch direction to be per-mask - src = torch.repeat_interleave(image_embeddings, tokens.shape[0], dim=0) - src = src + dense_prompt_embeddings - pos_src = torch.repeat_interleave(image_pe, tokens.shape[0], dim=0) - b, c, h, w = src.shape - - # Run the transformer - hs, src = self.transformer(src, pos_src, tokens) - iou_token_out = hs[:, 0, :] - mask_tokens_out = hs[:, 1 : (1 + self.num_mask_tokens), :] - - # Upscale mask embeddings and predict masks using the mask tokens - src = src.transpose(1, 2).view(b, c, h, w) - - upscaled_embedding_sam = self.output_upscaling(src) - upscaled_embedding_hq = self.embedding_maskfeature(upscaled_embedding_sam) + hq_features.repeat(b,1,1,1) - - hyper_in_list: List[torch.Tensor] = [] - for i in range(self.num_mask_tokens): - if i < self.num_mask_tokens - 1: - hyper_in_list.append(self.output_hypernetworks_mlps[i](mask_tokens_out[:, i, :])) - else: - hyper_in_list.append(self.hf_mlp(mask_tokens_out[:, i, :])) - - hyper_in = torch.stack(hyper_in_list, dim=1) - b, c, h, w = upscaled_embedding_sam.shape - - masks_sam = (hyper_in[:,:self.num_mask_tokens-1] @ upscaled_embedding_sam.view(b, c, h * w)).view(b, -1, h, w) - masks_sam_hq = (hyper_in[:,self.num_mask_tokens-1:] @ upscaled_embedding_hq.view(b, c, h * w)).view(b, -1, h, w) - masks = torch.cat([masks_sam,masks_sam_hq],dim=1) - # Generate mask quality predictions - iou_pred = self.iou_prediction_head(iou_token_out) - - return masks, iou_pred - - -# Lightly adapted from -# https://github.com/facebookresearch/MaskFormer/blob/main/mask_former/modeling/transformer/transformer_predictor.py # noqa -class 
MLP(nn.Module): - def __init__( - self, - input_dim: int, - hidden_dim: int, - output_dim: int, - num_layers: int, - sigmoid_output: bool = False, - ) -> None: - super().__init__() - self.num_layers = num_layers - h = [hidden_dim] * (num_layers - 1) - self.layers = nn.ModuleList( - nn.Linear(n, k) for n, k in zip([input_dim] + h, h + [output_dim]) - ) - self.sigmoid_output = sigmoid_output - - def forward(self, x): - for i, layer in enumerate(self.layers): - x = F.relu(layer(x)) if i < self.num_layers - 1 else layer(x) - if self.sigmoid_output: - x = F.sigmoid(x) - return x \ No newline at end of file diff --git a/spaces/YuAnthony/Audio-Caption/coco_caption/pycocoevalcap/cider/cider.py b/spaces/YuAnthony/Audio-Caption/coco_caption/pycocoevalcap/cider/cider.py deleted file mode 100644 index e2a4447ed89b309e27f941d52c31d44f21691705..0000000000000000000000000000000000000000 --- a/spaces/YuAnthony/Audio-Caption/coco_caption/pycocoevalcap/cider/cider.py +++ /dev/null @@ -1,60 +0,0 @@ -# Filename: cider.py -# -# Description: Describes the class to compute the CIDEr (Consensus-Based Image Description Evaluation) Metric -# by Vedantam, Zitnick, and Parikh (http://arxiv.org/abs/1411.5726) -# -# Creation Date: Sun Feb 8 14:16:54 2015 -# -# Authors: Ramakrishna Vedantam and Tsung-Yi Lin - -# ================================================================= -# This code was pulled from https://github.com/tylin/coco-caption -# and refactored for Python 3. -# Image-specific names and comments have also been changed to be audio-specific -# ================================================================= - -from .cider_scorer import CiderScorer -import pdb - -class Cider: - """ - Main Class to compute the CIDEr metric - - """ - def __init__(self, test=None, refs=None, n=4, sigma=6.0): - # set cider to sum over 1 to 4-grams - self._n = n - # set the standard deviation parameter for gaussian penalty - self._sigma = sigma - - def compute_score(self, gts, res): - """ - Main function to compute CIDEr score - :param hypo_for_audio (dict) : dictionary with key
-

Conclusion and FAQs

-

In this article, we have shown you how to download X Recorder, a screen recording app for Android devices. We have also explained how to use it to record your screen, and how to download it on your PC or Mac using an Android emulator or an alternative screen recorder like Movavi Screen Recorder. We hope you found this article helpful and informative. If you have any questions or feedback, please feel free to leave a comment below. Here are some FAQs that might answer some of your queries:

-

Q: Is X Recorder safe to use?

-

A: Yes, X Recorder is safe to use as long as you download it from the official Google Play Store or the verified website. The app does not contain any malware, viruses, or spyware. However, you should be careful about what you record and share, as some apps or websites may have privacy policies or terms of service that prohibit screen recording.

-

Q: How can I remove the watermark from X Recorder?

-

A: X Recorder does not add any watermark to your recordings by default. However, if you want to add your own logo or text, you can do so in the app settings. You can also remove it anytime by tapping on the watermark icon on the floating window and turning it off.

-

Q: How can I record internal audio with X Recorder?

-

A: X Recorder supports internal audio recording for Android 10 or above devices. You can enable it in the app settings by choosing "Internal sound" as the audio source. For lower Android versions, you can try using a third-party app like Internal Audio Plugin or SCR Pro, but they may require root access or special permissions.

-

Q: How can I live stream with X Recorder?

-

A: X Recorder allows you to live stream your screen to YouTube, Facebook, Twitch, and other platforms. You can enable it in the app settings by choosing "Live" as the recording mode. You will need to sign in with your account and choose the platform, title, description, quality, and privacy of your live stream. Then, you can start live streaming by tapping on the red circle icon.

-

Q: How can I contact X Recorder support?

-

A: If you have any issues or suggestions regarding X Recorder, you can contact the app support team by sending an email to videostudio.feedback@gmail.com. You can also visit their website for more information and FAQs.

401be4b1e0
-
-
\ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/Mafia City Hack Get Unlimited Gold and VIP Status in Minutes.md b/spaces/congsaPfin/Manga-OCR/logs/Mafia City Hack Get Unlimited Gold and VIP Status in Minutes.md deleted file mode 100644 index 79cae842966f0b389d2c41ea81979fc30d48f74d..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Mafia City Hack Get Unlimited Gold and VIP Status in Minutes.md +++ /dev/null @@ -1,127 +0,0 @@ -
-

Mafia City Hack: How to Get Unlimited Gold and Resources in 2023

-

Mafia City is a popular strategy game that lets you become a mafia boss and build your own criminal empire. You can recruit gangsters, steal from banks, form alliances with other players, and fight for turf and power. The game is available on Android, iOS, PC, and Mac platforms.

-

mafia city hack


DOWNLOADhttps://urlca.com/2uO63S



-

However, playing Mafia City can be challenging and time-consuming if you don't have enough gold and resources. Gold is the premium currency in the game that can be used to buy items, speed up tasks, unlock features, and more. Resources are the basic materials that you need to upgrade your buildings, train your troops, research technologies, and more.

-

If you want to get unlimited gold and resources in Mafia City without spending real money or waiting for long hours, you may want to hack the game. Hacking a game means modifying its source code or data in order to gain an advantage. For example, you may hack a game to get more gold, resources, health, speed, damage, or other benefits. Hacking a game can make it more fun, easy, or interesting to play.

-

However, hacking a game also comes with some risks and challenges. Hacking an online game is against the terms of service and may result in account suspension or ban. Hacking a game may also be considered unfair or cheating by other players and may ruin their gaming experience. Hacking a game may also require some technical skills, tools, and methods that are not easy to learn or use.

-

In this article, we will show you how to hack Mafia City with some of the most popular and effective hack tools and methods available in 2023. We will also provide you with some tips and tricks on how to hack the game safely and smartly. We will cover the following topics:

- -

Before we start, we want to remind you that hacking any online game is risky and may have consequences. We do not encourage or endorse hacking any game for malicious or illegal purposes. We only provide this information for educational and entertainment purposes. You are responsible for your own actions and decisions when hacking any game.

-

How to Hack Mafia City with Cheat Engine on PC

-

Cheat Engine is one of the most popular and powerful hack tools for PC games. It is free, open-source software that allows you to scan and modify the memory of any running process on your computer. You can use Cheat Engine to change the value of any variable in a game, such as gold, resources, health, speed, damage, etc.

-

To hack Mafia City with Cheat Engine on PC, you need to follow these steps:

-

mafia city hack gold
-mafia city hack apk
-mafia city hack ios
-mafia city hack android
-mafia city hack no verification
-mafia city hack online
-mafia city hack 2023
-mafia city hack download
-mafia city hack mod
-mafia city hack generator
-mafia city hack tool
-mafia city hack without human verification
-mafia city hack unlimited gold
-mafia city hack reddit
-mafia city hack version
-mafia city hack app
-mafia city hack free
-mafia city hack no survey
-mafia city hack cheat engine
-mafia city hack pc
-mafia city hack game guardian
-mafia city hack lucky patcher
-mafia city hack script
-mafia city hack website
-mafia city hack legit
-mafia city hack vip
-mafia city hack codes
-mafia city hack diamonds
-mafia city hack cydia
-mafia city hack jailbreak
-mafia city hack bluestacks
-mafia city hack obb
-mafia city hack root
-mafia city hack forum
-mafia city hack discord
-mafia city hack quora
-mafia city hack youtube
-mafia city hack review
-mafia city hack tutorial
-mafia city hack tips
-mafia city hack tricks
-mafia city hack guide
-mafia city hack video
-mafia city hack blogspot
-mafia city hack wordpress
-mafia city bot hacks

-
    -
1. Download and install Cheat Engine from . Make sure you have the latest version of the software.
2. Launch Mafia City on your PC using an emulator like BlueStacks or NoxPlayer. You can download these emulators from or . Make sure you have the latest version of the emulator.
3. Launch Cheat Engine on your PC and click on the "Select a process to open" button. It looks like a computer icon with a magnifying glass.
4. Select the process that corresponds to your emulator. For example, if you are using BlueStacks, select "BlueStacks.exe". If you are using NoxPlayer, select "Nox.exe". Click on "Open".
5. Go back to Mafia City and check your current amount of gold or resources. For example, if you have 1000 gold, remember this number.
6. Go back to Cheat Engine and click on the "First Scan" button. Enter your current amount of gold or resources in the "Value" box. For example, if you have 1000 gold, enter "1000". Make sure you select the correct value type from the drop-down menu. For example, if you are scanning for gold or resources, select "4 Bytes". Click on "First Scan".
7. Cheat Engine will scan the memory of your emulator process and display a list of addresses that match your value. These addresses are the locations where your gold or resources are stored in memory.
8. Go back to Mafia City and spend or earn some gold or resources. For example, if you have 1000 gold, buy something that costs 100 gold or complete a task that rewards you with 100 gold. Your new amount of gold should be 900 or 1100.
9. Go back to Cheat Engine and click on the "Next Scan" button. Enter your new amount of gold or resources in the "Value" box. For example, if you have 900 or 1100 gold, enter "900" or "1100". Click on "Next Scan".
10. Cheat Engine will scan the memory of your emulator process again and display a shorter list of addresses that match your new value. These addresses are the locations where your gold or resources are stored in memory after you spent or earned some.
11. Repeat steps 7 to 10 until you have only one address left in the list. This address is the location where your gold or resources are stored in memory.
12. Select the address from the list and double-click on it. It will be added to the bottom panel of Cheat Engine.
13. Select the address from the bottom panel and double-click on its value. A window will pop up where you can change its value.
14. Enter any value that you want for your gold or resources in the window. For example, if you want to have 999999 gold, enter "999999". Click on "OK".
15. Go back to Mafia City and check your new amount of gold or resources. It should be the same as the value that you entered in Cheat Engine. For example, if you entered "999999", you should have 999999 gold.
16. Congratulations, you have successfully hacked Mafia City with Cheat Engine on PC. You can now enjoy the game with unlimited gold and resources.
-

Note: You may need to repeat these steps every time you launch the game or change the level. You may also need to adjust the value type or scan type depending on the game version or update. You may also need to use other features of Cheat Engine such as pointers, scripts, or speed hack to hack other aspects of the game.
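To make the scan-and-narrow idea in the steps above easier to follow, here is a minimal, purely conceptual Python sketch. It does not read or write any real process memory (Cheat Engine itself handles that); the snapshot dictionary and the first_scan/next_scan names are hypothetical stand-ins used only to illustrate how repeated filtering reduces many candidate addresses to one.

# Conceptual sketch of value scanning: each scan keeps only the addresses
# whose stored value matches what you currently see in the game.

def first_scan(snapshot, target):
    # snapshot: dict mapping address -> value (a stand-in for a real memory read)
    return [addr for addr, value in snapshot.items() if value == target]

def next_scan(snapshot, candidates, new_target):
    # keep only the previously matching addresses that now hold the new value
    return [addr for addr in candidates if snapshot.get(addr) == new_target]

# Example: two rounds of narrowing are often enough to isolate a single address.
snapshot = {0x1000: 1000, 0x2000: 1000, 0x3000: 50}
candidates = first_scan(snapshot, 1000)             # [0x1000, 0x2000]
snapshot[0x1000] = 900                               # the gold counter changed in-game
candidates = next_scan(snapshot, candidates, 900)    # [0x1000]

Each "Next Scan" in Cheat Engine plays the same role as next_scan here, which is why the candidate list keeps shrinking until a single address remains.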

-

How to Hack Mafia City with Lucky Patcher on Android

-

Lucky Patcher is another popular and powerful hack tool for Android games. It is a free and easy-to-use app that allows you to patch and modify the APK files of any installed app on your device. You can use Lucky Patcher to remove ads, license verification, in-app purchases, and other restrictions from any app. You can also use Lucky Patcher to change the permissions, signatures, and components of any app.

-

To hack Mafia City with Lucky Patcher on Android, you need to follow these steps:

-
    -
  1. Download and install Lucky Patcher from . Make sure you have the latest version of the app.
  2. -
  3. Launch Lucky Patcher on your Android device and grant it root access if prompted. Rooting your device means gaining full control over it and unlocking its hidden features. You can root your device using apps like KingRoot, SuperSU, or Magisk. You can download these apps from or . Make sure you have the latest version of the app.
  4. -
  5. Select Mafia City from the list of installed apps in Lucky Patcher. Tap on it and select "Menu of Patches".
  6. -
  7. Select "Create Modified APK File". This will create a new APK file of Mafia City with your desired patches and modifications.
  8. -
  9. Select "APK with MultiPatch". This will allow you to apply multiple patches and modifications to Mafia City at once.
  10. -
  11. Select the patches and modifications that you want to apply to Mafia City. For example, if you want to get unlimited gold and resources, you may select "Remove License Verification", "Remove Google Ads", "Support patch for InApp and LVL emulation", "Change Permissions", and "Disable signature verification in the package manager". Tap on "Apply".
  12. -
  13. Lucky Patcher will start creating a modified APK file of Mafia City with your selected patches and modifications. Wait for it to finish.
  14. -
  15. Once done, tap on "Go to file". This will take you to the location where the modified APK file of Mafia City is saved.
  16. -
  17. Tap on the modified APK file of Mafia City and select "Uninstall and Install". This will uninstall the original version of Mafia City from your device and install the modified version instead.
  18. -
  19. Launch the modified version of Mafia City on your device and check your new amount of gold and resources. It should be unlimited or increased according to your patches and modifications.
  20. -
  21. Congratulations, you have successfully hacked Mafia City with Lucky Patcher on Android. You can now enjoy the game with unlimited gold and resources.
  22. -
-

Note: You may need to repeat these steps every time you update the game or change the device. You may also need to adjust the patches and modifications depending on the game version or update. You may also need to use other features of Lucky Patcher such as custom patches, backup/restore, clone, or freeze to hack other aspects of the game.

How to Hack Mafia City with Other Tools and Methods

-

Cheat Engine and Lucky Patcher are not the only hack tools and methods that you can use to hack Mafia City. There are many other tools and methods that you can try to get unlimited gold and resources in the game. Some of them are:

- -

However, you should be careful when using these tools and methods as they may not work as advertised or may have hidden risks or costs. Some of them may be outdated, detected, unsafe, or fake. Some of them may contain viruses, malware, spyware, or adware that can harm your device or steal your personal information. Some of them may require you to pay money, share your account details, or complete shady tasks that can compromise your security or privacy.

-

Therefore, you should always do your research before using any hack tool or method for Mafia City. You should check the reviews, ratings, comments, feedback, and testimonials of other users who have used the tool or method before. You should also scan it with antivirus or anti-malware software, and back up your device and game data, before using it.

-

Conclusion

-

Mafia City is a fun and addictive game that lets you become a mafia boss and build your own criminal empire. However, if you want to get unlimited gold and resources in the game without spending real money or waiting for long hours, you may want to hack the game.

-

In this article, we have shown you how to hack Mafia City with some of the most popular and effective hack tools and methods available in 2023. We have also provided you with some tips and tricks on how to hack the game safely and smartly.

-

We hope you have enjoyed this article and learned something new from it. We invite you to try out the hack tools and methods mentioned in this article and see how they work for you. However, we also remind you that hacking any online game is risky and may have consequences. You should always be careful and responsible when hacking any game.

-

Thank you for reading this article and happy hacking!

-

FAQs

-

401be4b1e0
-
-
\ No newline at end of file diff --git a/spaces/contluForse/HuggingGPT/Beadtool-4-Serial-Full-Version-BEST.md b/spaces/contluForse/HuggingGPT/Beadtool-4-Serial-Full-Version-BEST.md deleted file mode 100644 index e8fa70c492376d71c853da1a3c92589bff662e6a..0000000000000000000000000000000000000000 --- a/spaces/contluForse/HuggingGPT/Beadtool-4-Serial-Full-Version-BEST.md +++ /dev/null @@ -1,96 +0,0 @@ -## Beadtool 4 Serial Full Version - - - - - - ![Beadtool 4 Serial Full Version - - - - - -

LINK === 0 else nn.Identity() - - def zero_init_last(self): - nn.init.zeros_(self.conv3.weight) - - def forward(self, x): - x_preact = self.norm1(x) - - # shortcut branch - shortcut = x - if self.downsample is not None: - shortcut = self.downsample(x_preact) - - # residual branch - x = self.conv1(x_preact) - x = self.conv2(self.norm2(x)) - x = self.conv3(self.norm3(x)) - x = self.drop_path(x) - return x + shortcut - - -class Bottleneck(nn.Module): - """Non Pre-activation bottleneck block, equiv to V1.5/V1b Bottleneck. Used for ViT. - """ - def __init__( - self, in_chs, out_chs=None, bottle_ratio=0.25, stride=1, dilation=1, first_dilation=None, groups=1, - act_layer=None, conv_layer=None, norm_layer=None, proj_layer=None, drop_path_rate=0.): - super().__init__() - first_dilation = first_dilation or dilation - act_layer = act_layer or nn.ReLU - conv_layer = conv_layer or StdConv2d - norm_layer = norm_layer or partial(GroupNormAct, num_groups=32) - out_chs = out_chs or in_chs - mid_chs = make_div(out_chs * bottle_ratio) - - if proj_layer is not None: - self.downsample = proj_layer( - in_chs, out_chs, stride=stride, dilation=dilation, preact=False, - conv_layer=conv_layer, norm_layer=norm_layer) - else: - self.downsample = None - - self.conv1 = conv_layer(in_chs, mid_chs, 1) - self.norm1 = norm_layer(mid_chs) - self.conv2 = conv_layer(mid_chs, mid_chs, 3, stride=stride, dilation=first_dilation, groups=groups) - self.norm2 = norm_layer(mid_chs) - self.conv3 = conv_layer(mid_chs, out_chs, 1) - self.norm3 = norm_layer(out_chs, apply_act=False) - self.drop_path = DropPath(drop_path_rate) if drop_path_rate > 0 else nn.Identity() - self.act3 = act_layer(inplace=True) - - def zero_init_last(self): - nn.init.zeros_(self.norm3.weight) - - def forward(self, x): - # shortcut branch - shortcut = x - if self.downsample is not None: - shortcut = self.downsample(x) - - # residual - x = self.conv1(x) - x = self.norm1(x) - x = self.conv2(x) - x = self.norm2(x) - x = self.conv3(x) - x = self.norm3(x) - x = self.drop_path(x) - x = self.act3(x + shortcut) - return x - - -class DownsampleConv(nn.Module): - def __init__( - self, in_chs, out_chs, stride=1, dilation=1, first_dilation=None, preact=True, - conv_layer=None, norm_layer=None): - super(DownsampleConv, self).__init__() - self.conv = conv_layer(in_chs, out_chs, 1, stride=stride) - self.norm = nn.Identity() if preact else norm_layer(out_chs, apply_act=False) - - def forward(self, x): - return self.norm(self.conv(x)) - - -class DownsampleAvg(nn.Module): - def __init__( - self, in_chs, out_chs, stride=1, dilation=1, first_dilation=None, - preact=True, conv_layer=None, norm_layer=None): - """ AvgPool Downsampling as in 'D' ResNet variants. 
This is not in RegNet space but I might experiment.""" - super(DownsampleAvg, self).__init__() - avg_stride = stride if dilation == 1 else 1 - if stride > 1 or dilation > 1: - avg_pool_fn = AvgPool2dSame if avg_stride == 1 and dilation > 1 else nn.AvgPool2d - self.pool = avg_pool_fn(2, avg_stride, ceil_mode=True, count_include_pad=False) - else: - self.pool = nn.Identity() - self.conv = conv_layer(in_chs, out_chs, 1, stride=1) - self.norm = nn.Identity() if preact else norm_layer(out_chs, apply_act=False) - - def forward(self, x): - return self.norm(self.conv(self.pool(x))) - - -class ResNetStage(nn.Module): - """ResNet Stage.""" - def __init__(self, in_chs, out_chs, stride, dilation, depth, bottle_ratio=0.25, groups=1, - avg_down=False, block_dpr=None, block_fn=PreActBottleneck, - act_layer=None, conv_layer=None, norm_layer=None, **block_kwargs): - super(ResNetStage, self).__init__() - first_dilation = 1 if dilation in (1, 2) else 2 - layer_kwargs = dict(act_layer=act_layer, conv_layer=conv_layer, norm_layer=norm_layer) - proj_layer = DownsampleAvg if avg_down else DownsampleConv - prev_chs = in_chs - self.blocks = nn.Sequential() - for block_idx in range(depth): - drop_path_rate = block_dpr[block_idx] if block_dpr else 0. - stride = stride if block_idx == 0 else 1 - self.blocks.add_module(str(block_idx), block_fn( - prev_chs, out_chs, stride=stride, dilation=dilation, bottle_ratio=bottle_ratio, groups=groups, - first_dilation=first_dilation, proj_layer=proj_layer, drop_path_rate=drop_path_rate, - **layer_kwargs, **block_kwargs)) - prev_chs = out_chs - first_dilation = dilation - proj_layer = None - - def forward(self, x): - x = self.blocks(x) - return x - - -def is_stem_deep(stem_type): - return any([s in stem_type for s in ('deep', 'tiered')]) - - -def create_resnetv2_stem( - in_chs, out_chs=64, stem_type='', preact=True, - conv_layer=StdConv2d, norm_layer=partial(GroupNormAct, num_groups=32)): - stem = OrderedDict() - assert stem_type in ('', 'fixed', 'same', 'deep', 'deep_fixed', 'deep_same', 'tiered') - - # NOTE conv padding mode can be changed by overriding the conv_layer def - if is_stem_deep(stem_type): - # A 3 deep 3x3 conv stack as in ResNet V1D models - if 'tiered' in stem_type: - stem_chs = (3 * out_chs // 8, out_chs // 2) # 'T' resnets in resnet.py - else: - stem_chs = (out_chs // 2, out_chs // 2) # 'D' ResNets - stem['conv1'] = conv_layer(in_chs, stem_chs[0], kernel_size=3, stride=2) - stem['norm1'] = norm_layer(stem_chs[0]) - stem['conv2'] = conv_layer(stem_chs[0], stem_chs[1], kernel_size=3, stride=1) - stem['norm2'] = norm_layer(stem_chs[1]) - stem['conv3'] = conv_layer(stem_chs[1], out_chs, kernel_size=3, stride=1) - if not preact: - stem['norm3'] = norm_layer(out_chs) - else: - # The usual 7x7 stem conv - stem['conv'] = conv_layer(in_chs, out_chs, kernel_size=7, stride=2) - if not preact: - stem['norm'] = norm_layer(out_chs) - - if 'fixed' in stem_type: - # 'fixed' SAME padding approximation that is used in BiT models - stem['pad'] = nn.ConstantPad2d(1, 0.) - stem['pool'] = nn.MaxPool2d(kernel_size=3, stride=2, padding=0) - elif 'same' in stem_type: - # full, input size based 'SAME' padding, used in ViT Hybrid model - stem['pool'] = create_pool2d('max', kernel_size=3, stride=2, padding='same') - else: - # the usual PyTorch symmetric padding - stem['pool'] = nn.MaxPool2d(kernel_size=3, stride=2, padding=1) - - return nn.Sequential(stem) - - -class ResNetV2(nn.Module): - """Implementation of Pre-activation (v2) ResNet mode. 
- """ - - def __init__( - self, layers, channels=(256, 512, 1024, 2048), - num_classes=1000, in_chans=3, global_pool='avg', output_stride=32, - width_factor=1, stem_chs=64, stem_type='', avg_down=False, preact=True, - act_layer=nn.ReLU, conv_layer=StdConv2d, norm_layer=partial(GroupNormAct, num_groups=32), - drop_rate=0., drop_path_rate=0., zero_init_last=True): - super().__init__() - self.num_classes = num_classes - self.drop_rate = drop_rate - wf = width_factor - - self.feature_info = [] - stem_chs = make_div(stem_chs * wf) - self.stem = create_resnetv2_stem( - in_chans, stem_chs, stem_type, preact, conv_layer=conv_layer, norm_layer=norm_layer) - stem_feat = ('stem.conv3' if is_stem_deep(stem_type) else 'stem.conv') if preact else 'stem.norm' - self.feature_info.append(dict(num_chs=stem_chs, reduction=2, module=stem_feat)) - - prev_chs = stem_chs - curr_stride = 4 - dilation = 1 - block_dprs = [x.tolist() for x in torch.linspace(0, drop_path_rate, sum(layers)).split(layers)] - block_fn = PreActBottleneck if preact else Bottleneck - self.stages = nn.Sequential() - for stage_idx, (d, c, bdpr) in enumerate(zip(layers, channels, block_dprs)): - out_chs = make_div(c * wf) - stride = 1 if stage_idx == 0 else 2 - if curr_stride >= output_stride: - dilation *= stride - stride = 1 - stage = ResNetStage( - prev_chs, out_chs, stride=stride, dilation=dilation, depth=d, avg_down=avg_down, - act_layer=act_layer, conv_layer=conv_layer, norm_layer=norm_layer, block_dpr=bdpr, block_fn=block_fn) - prev_chs = out_chs - curr_stride *= stride - self.feature_info += [dict(num_chs=prev_chs, reduction=curr_stride, module=f'stages.{stage_idx}')] - self.stages.add_module(str(stage_idx), stage) - - self.num_features = prev_chs - self.norm = norm_layer(self.num_features) if preact else nn.Identity() - self.head = ClassifierHead( - self.num_features, num_classes, pool_type=global_pool, drop_rate=self.drop_rate, use_conv=True) - - self.init_weights(zero_init_last=zero_init_last) - - def init_weights(self, zero_init_last=True): - named_apply(partial(_init_weights, zero_init_last=zero_init_last), self) - - @torch.jit.ignore() - def load_pretrained(self, checkpoint_path, prefix='resnet/'): - _load_weights(self, checkpoint_path, prefix) - - def get_classifier(self): - return self.head.fc - - def reset_classifier(self, num_classes, global_pool='avg'): - self.num_classes = num_classes - self.head = ClassifierHead( - self.num_features, num_classes, pool_type=global_pool, drop_rate=self.drop_rate, use_conv=True) - - def forward_features(self, x): - x = self.stem(x) - x = self.stages(x) - x = self.norm(x) - return x - - def forward(self, x): - x = self.forward_features(x) - x = self.head(x) - return x - - -def _init_weights(module: nn.Module, name: str = '', zero_init_last=True): - if isinstance(module, nn.Linear) or ('head.fc' in name and isinstance(module, nn.Conv2d)): - nn.init.normal_(module.weight, mean=0.0, std=0.01) - nn.init.zeros_(module.bias) - elif isinstance(module, nn.Conv2d): - nn.init.kaiming_normal_(module.weight, mode='fan_out', nonlinearity='relu') - if module.bias is not None: - nn.init.zeros_(module.bias) - elif isinstance(module, (nn.BatchNorm2d, nn.LayerNorm, nn.GroupNorm)): - nn.init.ones_(module.weight) - nn.init.zeros_(module.bias) - elif zero_init_last and hasattr(module, 'zero_init_last'): - module.zero_init_last() - - -@torch.no_grad() -def _load_weights(model: nn.Module, checkpoint_path: str, prefix: str = 'resnet/'): - import numpy as np - - def t2p(conv_weights): - """Possibly convert HWIO to 
OIHW.""" - if conv_weights.ndim == 4: - conv_weights = conv_weights.transpose([3, 2, 0, 1]) - return torch.from_numpy(conv_weights) - - weights = np.load(checkpoint_path) - stem_conv_w = adapt_input_conv( - model.stem.conv.weight.shape[1], t2p(weights[f'{prefix}root_block/standardized_conv2d/kernel'])) - model.stem.conv.weight.copy_(stem_conv_w) - model.norm.weight.copy_(t2p(weights[f'{prefix}group_norm/gamma'])) - model.norm.bias.copy_(t2p(weights[f'{prefix}group_norm/beta'])) - if isinstance(getattr(model.head, 'fc', None), nn.Conv2d) and \ - model.head.fc.weight.shape[0] == weights[f'{prefix}head/conv2d/kernel'].shape[-1]: - model.head.fc.weight.copy_(t2p(weights[f'{prefix}head/conv2d/kernel'])) - model.head.fc.bias.copy_(t2p(weights[f'{prefix}head/conv2d/bias'])) - for i, (sname, stage) in enumerate(model.stages.named_children()): - for j, (bname, block) in enumerate(stage.blocks.named_children()): - cname = 'standardized_conv2d' - block_prefix = f'{prefix}block{i + 1}/unit{j + 1:02d}/' - block.conv1.weight.copy_(t2p(weights[f'{block_prefix}a/{cname}/kernel'])) - block.conv2.weight.copy_(t2p(weights[f'{block_prefix}b/{cname}/kernel'])) - block.conv3.weight.copy_(t2p(weights[f'{block_prefix}c/{cname}/kernel'])) - block.norm1.weight.copy_(t2p(weights[f'{block_prefix}a/group_norm/gamma'])) - block.norm2.weight.copy_(t2p(weights[f'{block_prefix}b/group_norm/gamma'])) - block.norm3.weight.copy_(t2p(weights[f'{block_prefix}c/group_norm/gamma'])) - block.norm1.bias.copy_(t2p(weights[f'{block_prefix}a/group_norm/beta'])) - block.norm2.bias.copy_(t2p(weights[f'{block_prefix}b/group_norm/beta'])) - block.norm3.bias.copy_(t2p(weights[f'{block_prefix}c/group_norm/beta'])) - if block.downsample is not None: - w = weights[f'{block_prefix}a/proj/{cname}/kernel'] - block.downsample.conv.weight.copy_(t2p(w)) - - -def _create_resnetv2(variant, pretrained=False, **kwargs): - feature_cfg = dict(flatten_sequential=True) - return build_model_with_cfg( - ResNetV2, variant, pretrained, - default_cfg=default_cfgs[variant], - feature_cfg=feature_cfg, - pretrained_custom_load=True, - **kwargs) - - -def _create_resnetv2_bit(variant, pretrained=False, **kwargs): - return _create_resnetv2( - variant, pretrained=pretrained, stem_type='fixed', conv_layer=partial(StdConv2d, eps=1e-8), **kwargs) - - -@register_model -def resnetv2_50x1_bitm(pretrained=False, **kwargs): - return _create_resnetv2_bit( - 'resnetv2_50x1_bitm', pretrained=pretrained, layers=[3, 4, 6, 3], width_factor=1, **kwargs) - - -@register_model -def resnetv2_50x3_bitm(pretrained=False, **kwargs): - return _create_resnetv2_bit( - 'resnetv2_50x3_bitm', pretrained=pretrained, layers=[3, 4, 6, 3], width_factor=3, **kwargs) - - -@register_model -def resnetv2_101x1_bitm(pretrained=False, **kwargs): - return _create_resnetv2_bit( - 'resnetv2_101x1_bitm', pretrained=pretrained, layers=[3, 4, 23, 3], width_factor=1, **kwargs) - - -@register_model -def resnetv2_101x3_bitm(pretrained=False, **kwargs): - return _create_resnetv2_bit( - 'resnetv2_101x3_bitm', pretrained=pretrained, layers=[3, 4, 23, 3], width_factor=3, **kwargs) - - -@register_model -def resnetv2_152x2_bitm(pretrained=False, **kwargs): - return _create_resnetv2_bit( - 'resnetv2_152x2_bitm', pretrained=pretrained, layers=[3, 8, 36, 3], width_factor=2, **kwargs) - - -@register_model -def resnetv2_152x4_bitm(pretrained=False, **kwargs): - return _create_resnetv2_bit( - 'resnetv2_152x4_bitm', pretrained=pretrained, layers=[3, 8, 36, 3], width_factor=4, **kwargs) - - -@register_model -def 
resnetv2_50x1_bitm_in21k(pretrained=False, **kwargs): - return _create_resnetv2_bit( - 'resnetv2_50x1_bitm_in21k', pretrained=pretrained, num_classes=kwargs.pop('num_classes', 21843), - layers=[3, 4, 6, 3], width_factor=1, **kwargs) - - -@register_model -def resnetv2_50x3_bitm_in21k(pretrained=False, **kwargs): - return _create_resnetv2_bit( - 'resnetv2_50x3_bitm_in21k', pretrained=pretrained, num_classes=kwargs.pop('num_classes', 21843), - layers=[3, 4, 6, 3], width_factor=3, **kwargs) - - -@register_model -def resnetv2_101x1_bitm_in21k(pretrained=False, **kwargs): - return _create_resnetv2( - 'resnetv2_101x1_bitm_in21k', pretrained=pretrained, num_classes=kwargs.pop('num_classes', 21843), - layers=[3, 4, 23, 3], width_factor=1, **kwargs) - - -@register_model -def resnetv2_101x3_bitm_in21k(pretrained=False, **kwargs): - return _create_resnetv2_bit( - 'resnetv2_101x3_bitm_in21k', pretrained=pretrained, num_classes=kwargs.pop('num_classes', 21843), - layers=[3, 4, 23, 3], width_factor=3, **kwargs) - - -@register_model -def resnetv2_152x2_bitm_in21k(pretrained=False, **kwargs): - return _create_resnetv2_bit( - 'resnetv2_152x2_bitm_in21k', pretrained=pretrained, num_classes=kwargs.pop('num_classes', 21843), - layers=[3, 8, 36, 3], width_factor=2, **kwargs) - - -@register_model -def resnetv2_152x4_bitm_in21k(pretrained=False, **kwargs): - return _create_resnetv2_bit( - 'resnetv2_152x4_bitm_in21k', pretrained=pretrained, num_classes=kwargs.pop('num_classes', 21843), - layers=[3, 8, 36, 3], width_factor=4, **kwargs) - - -@register_model -def resnetv2_50x1_bit_distilled(pretrained=False, **kwargs): - """ ResNetV2-50x1-BiT Distilled - Paper: Knowledge distillation: A good teacher is patient and consistent - https://arxiv.org/abs/2106.05237 - """ - return _create_resnetv2_bit( - 'resnetv2_50x1_bit_distilled', pretrained=pretrained, layers=[3, 4, 6, 3], width_factor=1, **kwargs) - - -@register_model -def resnetv2_152x2_bit_teacher(pretrained=False, **kwargs): - """ ResNetV2-152x2-BiT Teacher - Paper: Knowledge distillation: A good teacher is patient and consistent - https://arxiv.org/abs/2106.05237 - """ - return _create_resnetv2_bit( - 'resnetv2_152x2_bit_teacher', pretrained=pretrained, layers=[3, 8, 36, 3], width_factor=2, **kwargs) - - -@register_model -def resnetv2_152x2_bit_teacher_384(pretrained=False, **kwargs): - """ ResNetV2-152xx-BiT Teacher @ 384x384 - Paper: Knowledge distillation: A good teacher is patient and consistent - https://arxiv.org/abs/2106.05237 - """ - return _create_resnetv2_bit( - 'resnetv2_152x2_bit_teacher_384', pretrained=pretrained, layers=[3, 8, 36, 3], width_factor=2, **kwargs) - - -@register_model -def resnetv2_50(pretrained=False, **kwargs): - return _create_resnetv2( - 'resnetv2_50', pretrained=pretrained, - layers=[3, 4, 6, 3], conv_layer=create_conv2d, norm_layer=BatchNormAct2d, **kwargs) - - -@register_model -def resnetv2_50d(pretrained=False, **kwargs): - return _create_resnetv2( - 'resnetv2_50d', pretrained=pretrained, - layers=[3, 4, 6, 3], conv_layer=create_conv2d, norm_layer=BatchNormAct2d, - stem_type='deep', avg_down=True, **kwargs) - - -@register_model -def resnetv2_50t(pretrained=False, **kwargs): - return _create_resnetv2( - 'resnetv2_50t', pretrained=pretrained, - layers=[3, 4, 6, 3], conv_layer=create_conv2d, norm_layer=BatchNormAct2d, - stem_type='tiered', avg_down=True, **kwargs) - - -@register_model -def resnetv2_101(pretrained=False, **kwargs): - return _create_resnetv2( - 'resnetv2_101', pretrained=pretrained, - layers=[3, 4, 23, 3], 
conv_layer=create_conv2d, norm_layer=BatchNormAct2d, **kwargs) - - -@register_model -def resnetv2_101d(pretrained=False, **kwargs): - return _create_resnetv2( - 'resnetv2_101d', pretrained=pretrained, - layers=[3, 4, 23, 3], conv_layer=create_conv2d, norm_layer=BatchNormAct2d, - stem_type='deep', avg_down=True, **kwargs) - - -@register_model -def resnetv2_152(pretrained=False, **kwargs): - return _create_resnetv2( - 'resnetv2_152', pretrained=pretrained, - layers=[3, 8, 36, 3], conv_layer=create_conv2d, norm_layer=BatchNormAct2d, **kwargs) - - -@register_model -def resnetv2_152d(pretrained=False, **kwargs): - return _create_resnetv2( - 'resnetv2_152d', pretrained=pretrained, - layers=[3, 8, 36, 3], conv_layer=create_conv2d, norm_layer=BatchNormAct2d, - stem_type='deep', avg_down=True, **kwargs) - - -# @register_model -# def resnetv2_50ebd(pretrained=False, **kwargs): -# # FIXME for testing w/ TPU + PyTorch XLA -# return _create_resnetv2( -# 'resnetv2_50d', pretrained=pretrained, -# layers=[3, 4, 6, 3], conv_layer=create_conv2d, norm_layer=EvoNormBatch2d, -# stem_type='deep', avg_down=True, **kwargs) -# -# -# @register_model -# def resnetv2_50esd(pretrained=False, **kwargs): -# # FIXME for testing w/ TPU + PyTorch XLA -# return _create_resnetv2( -# 'resnetv2_50d', pretrained=pretrained, -# layers=[3, 4, 6, 3], conv_layer=create_conv2d, norm_layer=EvoNormSample2d, -# stem_type='deep', avg_down=True, **kwargs) diff --git a/spaces/cooelf/Multimodal-CoT/timm/scheduler/scheduler.py b/spaces/cooelf/Multimodal-CoT/timm/scheduler/scheduler.py deleted file mode 100644 index 21d51509c87a0783c6b61986c574a3ed5366e165..0000000000000000000000000000000000000000 --- a/spaces/cooelf/Multimodal-CoT/timm/scheduler/scheduler.py +++ /dev/null @@ -1,105 +0,0 @@ -from typing import Dict, Any - -import torch - - -class Scheduler: - """ Parameter Scheduler Base Class - A scheduler base class that can be used to schedule any optimizer parameter groups. - - Unlike the builtin PyTorch schedulers, this is intended to be consistently called - * At the END of each epoch, before incrementing the epoch count, to calculate next epoch's value - * At the END of each optimizer update, after incrementing the update count, to calculate next update's value - - The schedulers built on this should try to remain as stateless as possible (for simplicity). - - This family of schedulers is attempting to avoid the confusion of the meaning of 'last_epoch' - and -1 values for special behaviour. All epoch and update counts must be tracked in the training - code and explicitly passed in to the schedulers on the corresponding step or step_update call. 
- - Based on ideas from: - * https://github.com/pytorch/fairseq/tree/master/fairseq/optim/lr_scheduler - * https://github.com/allenai/allennlp/tree/master/allennlp/training/learning_rate_schedulers - """ - - def __init__(self, - optimizer: torch.optim.Optimizer, - param_group_field: str, - noise_range_t=None, - noise_type='normal', - noise_pct=0.67, - noise_std=1.0, - noise_seed=None, - initialize: bool = True) -> None: - self.optimizer = optimizer - self.param_group_field = param_group_field - self._initial_param_group_field = f"initial_{param_group_field}" - if initialize: - for i, group in enumerate(self.optimizer.param_groups): - if param_group_field not in group: - raise KeyError(f"{param_group_field} missing from param_groups[{i}]") - group.setdefault(self._initial_param_group_field, group[param_group_field]) - else: - for i, group in enumerate(self.optimizer.param_groups): - if self._initial_param_group_field not in group: - raise KeyError(f"{self._initial_param_group_field} missing from param_groups[{i}]") - self.base_values = [group[self._initial_param_group_field] for group in self.optimizer.param_groups] - self.metric = None # any point to having this for all? - self.noise_range_t = noise_range_t - self.noise_pct = noise_pct - self.noise_type = noise_type - self.noise_std = noise_std - self.noise_seed = noise_seed if noise_seed is not None else 42 - self.update_groups(self.base_values) - - def state_dict(self) -> Dict[str, Any]: - return {key: value for key, value in self.__dict__.items() if key != 'optimizer'} - - def load_state_dict(self, state_dict: Dict[str, Any]) -> None: - self.__dict__.update(state_dict) - - def get_epoch_values(self, epoch: int): - return None - - def get_update_values(self, num_updates: int): - return None - - def step(self, epoch: int, metric: float = None) -> None: - self.metric = metric - values = self.get_epoch_values(epoch) - if values is not None: - values = self._add_noise(values, epoch) - self.update_groups(values) - - def step_update(self, num_updates: int, metric: float = None): - self.metric = metric - values = self.get_update_values(num_updates) - if values is not None: - values = self._add_noise(values, num_updates) - self.update_groups(values) - - def update_groups(self, values): - if not isinstance(values, (list, tuple)): - values = [values] * len(self.optimizer.param_groups) - for param_group, value in zip(self.optimizer.param_groups, values): - param_group[self.param_group_field] = value - - def _add_noise(self, lrs, t): - if self.noise_range_t is not None: - if isinstance(self.noise_range_t, (list, tuple)): - apply_noise = self.noise_range_t[0] <= t < self.noise_range_t[1] - else: - apply_noise = t >= self.noise_range_t - if apply_noise: - g = torch.Generator() - g.manual_seed(self.noise_seed + t) - if self.noise_type == 'normal': - while True: - # resample if noise out of percent limit, brute force but shouldn't spin much - noise = torch.randn(1, generator=g).item() - if abs(noise) < self.noise_pct: - break - else: - noise = 2 * (torch.rand(1, generator=g).item() - 0.5) * self.noise_pct - lrs = [v + v * noise for v in lrs] - return lrs diff --git a/spaces/cpluoiudy00001/QQsign/devices/device_8950.js b/spaces/cpluoiudy00001/QQsign/devices/device_8950.js deleted file mode 100644 index fe1caad4a8c5eb07633510e1d8a890197056a211..0000000000000000000000000000000000000000 --- a/spaces/cpluoiudy00001/QQsign/devices/device_8950.js +++ /dev/null @@ -1,344 +0,0 @@ -"use strict"; -var __importDefault = (this && this.__importDefault) || function 
(mod) { - return (mod && mod.__esModule) ? mod : { "default": mod }; -}; -Object.defineProperty(exports, "__esModule", { value: true }); -exports.getApkInfo = exports.Platform = exports.Device = exports.generateFullDevice = exports.generateShortDevice = void 0; -const crypto_1 = require("crypto"); -const constants_1 = require("./constants"); -const axios_1 = __importDefault(require("axios")); -const algo_1 = require("./algo"); -function generateImei() { - let imei = `86${(0, constants_1.randomString)(12, '0123456789')}`; - function calcSP(imei) { - let sum = 0; - for (let i = 0; i < imei.length; ++i) { - if (i % 2) { - let j = parseInt(imei[i]) * 2; - sum += j % 10 + Math.floor(j / 10); - } - else { - sum += parseInt(imei[i]); - } - } - return (100 - sum) % 10; - } - return imei + calcSP(imei); -} -/** 生成短设备信息 */ -function generateShortDevice() { - const randstr = (length, num = false) => { - const map = num ? '0123456789' : '0123456789abcdef'; - return (0, constants_1.randomString)(length, map); - }; - return { - "--begin--": "该设备为随机生成,丢失后不能得到原先配置", - product: `ILPP-${randstr(5).toUpperCase()}`, - device: `${randstr(5).toUpperCase()}`, - board: `${randstr(5).toUpperCase()}`, - brand: `${randstr(4).toUpperCase()}`, - model: `ICQQ ${randstr(4).toUpperCase()}`, - wifi_ssid: `HUAWEI-${randstr(7)}`, - bootloader: `U-boot`, - android_id: `IL.${randstr(7, true)}.${randstr(4, true)}`, - boot_id: `${randstr(8)}-${randstr(4)}-${randstr(4)}-${randstr(4)}-${randstr(12)}`, - proc_version: `Linux version 5.10.101-android12-${randstr(8)}`, - mac_address: `2D:${randstr(2).toUpperCase()}:${randstr(2).toUpperCase()}:${randstr(2).toUpperCase()}:${randstr(2).toUpperCase()}:${randstr(2).toUpperCase()}`, - ip_address: `192.168.${randstr(2, true)}.${randstr(2, true)}`, - imei: `${generateImei()}`, - incremental: `${randstr(10, true).toUpperCase()}`, - "--end--": "修改后可能需要重新验证设备。" - }; -} -exports.generateShortDevice = generateShortDevice; -/** 生成完整设备信息 */ -function generateFullDevice(apk, d) { - if (!d) - d = generateShortDevice(); - return { - display: d.android_id, - product: d.product, - device: d.device, - board: d.board, - brand: d.brand, - model: d.model, - bootloader: d.bootloader, - fingerprint: `${d.brand}/${d.product}/${d.device}:10/${d.android_id}/${d.incremental}:user/release-keys`, - boot_id: d.boot_id, - proc_version: d.proc_version, - baseband: "", - sim: "T-Mobile", - os_type: "android", - mac_address: d.mac_address, - ip_address: d.ip_address, - wifi_bssid: d.mac_address, - wifi_ssid: d.wifi_ssid, - imei: d.imei, - android_id: (0, constants_1.md5)(d.android_id).toString("hex"), - apn: "wifi", - version: { - incremental: d.incremental, - release: "10", - codename: "REL", - sdk: 29, - }, - imsi: (0, crypto_1.randomBytes)(16), - guid: (0, constants_1.md5)(Buffer.concat([Buffer.from(d.imei), Buffer.from(d.mac_address)])), - }; -} -exports.generateFullDevice = generateFullDevice; -class Device { - constructor(apk, d) { - this.apk = apk; - this.secret = 'ZdJqM15EeO2zWc08'; - this.publicKey = `-----BEGIN PUBLIC KEY----- -MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQDEIxgwoutfwoJxcGQeedgP7FG9 -qaIuS0qzfR8gWkrkTZKM2iWHn2ajQpBRZjMSoSf6+KJGvar2ORhBfpDXyVtZCKpq -LQ+FLkpncClKVIrBwv6PHyUvuCb0rIarmgDnzkfQAqVufEtR64iazGDKatvJ9y6B -9NMbHddGSAUmRTCrHQIDAQAB ------END PUBLIC KEY-----`; - if (!d) - d = generateShortDevice(); - Object.assign(this, generateFullDevice(apk, d)); - } - async getQIMEI() { - if (this.apk.app_key === "") { - return; - } - const k = (0, constants_1.randomString)(16); - const key = (0, 
algo_1.encryptPKCS1)(this.publicKey, k); - const time = Date.now(); - const nonce = (0, constants_1.randomString)(16); - const payload = this.genRandomPayloadByDevice(); - const params = (0, algo_1.aesEncrypt)(JSON.stringify(payload), k).toString('base64'); - try { - const { data } = await axios_1.default.post("https://snowflake.qq.com/ola/android", { - key, - params, - time, nonce, - sign: (0, constants_1.md5)(key + params + time + nonce + this.secret).toString("hex"), - extra: '' - }, { - headers: { - 'User-Agent': `Dalvik/2.1.0 (Linux; U; Android ${this.version.release}; PCRT00 Build/N2G48H)`, - 'Content-Type': "application/json" - } - }); - if (data?.code !== 0) { - return; - } - const { q16, q36 } = JSON.parse((0, algo_1.aesDecrypt)(data.data, k)); - this.qImei16 = q16; - this.qImei36 = q36; - } - catch { - } - } - genRandomPayloadByDevice() { - const fixedRand = (max = 1, min = 0) => { - if (max < min) - [max, min] = [min, max]; - const diff = max - min; - return Math.floor(Math.random() * diff) + min; - }; - const reserved = { - "harmony": "0", - "clone": Math.random() > 0.5 ? "1" : "0", - "containe": "", - "oz": "", - "oo": "", - "kelong": Math.random() > 0.5 ? "1" : "0", - "uptimes": (0, constants_1.formatTime)(new Date()), - "multiUser": Math.random() > 0.5 ? "1" : "0", - "bod": this.board, - "brd": this.brand, - "dv": this.device, - "firstLevel": "", - "manufact": this.brand, - "name": this.model, - "host": "se.infra", - "kernel": this.fingerprint - }; - const timestamp = Date.now(); - this.mtime = this.mtime || Date.now(); - const mtime1 = new Date(this.mtime || Date.now()); - const dateFormat = (fmt, time = Date.now()) => (0, constants_1.formatTime)(time, fmt); - const mtimeStr1 = dateFormat("YYYY-mm-ddHHMMSS", mtime1) + "." + this.imei.slice(2, 11); - const mtime2 = new Date(this.mtime - parseInt(this.imei.slice(2, 4))); - const mtimeStr2 = dateFormat("YYYY-mm-ddHHMMSS", mtime2) + "." 
+ this.imei.slice(5, 14); - let beaconIdArr = [ - `${(0, constants_1.formatTime)(new Date(timestamp + fixedRand(60, 0)))}.${String(fixedRand(99, 0)).padStart(2, '0')}0000000`, - mtimeStr1, - '0000000000000000', - (0, constants_1.md5)(this.android_id + this.imei).toString("hex").slice(0, 16), - ...new Array(4).fill(false).map((_) => fixedRand(10000000, 1000000)), - this.boot_id, - '1', - fixedRand(5, 0), - fixedRand(5, 0), - `${(0, constants_1.formatTime)(new Date(timestamp + fixedRand(60, 0)))}.${String(fixedRand(99, 0)).padStart(2, '0')}0000000`, - `${(0, constants_1.formatTime)(new Date(timestamp + fixedRand(60, 0)))}.${String(fixedRand(99, 0)).padStart(2, '0')}0000000`, - fixedRand(5, 0), - fixedRand(100, 10), - `${(0, constants_1.formatTime)(new Date(timestamp + fixedRand(60, 0)))}.${String(fixedRand(99, 0)).padStart(2, '0')}0000000`, - `${(0, constants_1.formatTime)(new Date(timestamp + fixedRand(60, 0)))}.${String(fixedRand(99, 0)).padStart(2, '0')}0000000`, - fixedRand(50000, 10000), - fixedRand(100, 10), - `${(0, constants_1.formatTime)(new Date(timestamp + fixedRand(60, 0)))}.${String(fixedRand(99, 0)).padStart(2, '0')}0000000`, - mtimeStr2, - fixedRand(10000, 1000), - fixedRand(5, 0), - `${dateFormat("YYYY-mm-ddHHMMSS")}.${String(((10 + parseInt(this.imei.slice(5, 7))) % 100)).padStart(2, "0")}0000000`, - `${dateFormat("YYYY-mm-ddHHMMSS")}.${String(((11 + parseInt(this.imei.slice(5, 7))) % 100)).padStart(2, "0")}0000000`, - fixedRand(10000, 1000), - fixedRand(100, 10), - `${dateFormat("YYYY-mm-ddHHMMSS")}.${String(((11 + parseInt(this.imei.slice(5, 7))) % 100)).padStart(2, "0")}0000000`, - `${dateFormat("YYYY-mm-ddHHMMSS")}.${String(((11 + parseInt(this.imei.slice(5, 7))) % 100)).padStart(2, "0")}0000000`, - fixedRand(10000, 1000), - fixedRand(5, 0), - `${(0, constants_1.formatTime)(new Date(timestamp + fixedRand(60, 0)))}.${String(fixedRand(99, 0)).padStart(2, '0')}0000000`, - `${(0, constants_1.formatTime)(new Date(timestamp + fixedRand(60, 0)))}.${String(fixedRand(99, 0)).padStart(2, '0')}0000000`, - fixedRand(5, 0), - fixedRand(100, 10), - `${(0, constants_1.formatTime)(new Date(timestamp + fixedRand(60, 0)))}.${String(fixedRand(99, 0)).padStart(2, '0')}0000000`, - `${(0, constants_1.formatTime)(new Date(timestamp + fixedRand(60, 0)))}.${String(fixedRand(99, 0)).padStart(2, '0')}0000000`, - fixedRand(5, 0), - fixedRand(5, 0), - ].map((str, idx) => `k${idx + 1}:${str}`); - return { - "androidId": this.android_id, - "platformId": 1, - "appKey": this.apk.app_key, - "appVersion": this.apk.version, - "beaconIdSrc": beaconIdArr.join(';'), - "brand": this.brand, - "channelId": "2017", - "cid": "", - "imei": this.imei, - "imsi": this.imsi.toString("hex"), - "mac": this.mac_address, - "model": this.model, - "networkType": "unknown", - "oaid": "", - "osVersion": `Android ${this.version.release},level ${this.version.sdk}`, - "qimei": "", - "qimei36": "", - "sdkVersion": "1.2.13.6", - "targetSdkVersion": "26", - "audit": "", - "userId": "{}", - "packageId": this.apk.id, - "deviceType": this.display, - "sdkName": "", - "reserved": JSON.stringify(reserved), - }; - } -} -exports.Device = Device; -/** 支持的登录设备平台 */ -var Platform; -(function (Platform) { - Platform[Platform["Android"] = 1] = "Android"; - Platform[Platform["aPad"] = 2] = "aPad"; - Platform[Platform["Watch"] = 3] = "Watch"; - Platform[Platform["iMac"] = 4] = "iMac"; - Platform[Platform["iPad"] = 5] = "iPad"; - Platform[Platform["Tim"] = 6] = "Tim"; -})(Platform || (exports.Platform = Platform = {})); -const mobile = { - id: 
"com.tencent.mobileqq", - app_key: '0S200MNJT807V3GE', - name: "A8.9.50.f5a7d351", - version: "8.9.50.10650", - ver: "8.9.50", - sign: Buffer.from('A6 B7 45 BF 24 A2 C2 77 52 77 16 F6 F3 6E B6 8D'.split(' ').map(s => parseInt(s, 16))), - buildtime: 1676531414, - appid: 16, - subid: 537155547, - bitmap: 150470524, - main_sig_map: 16724722, - sub_sig_map: 0x10400, - sdkver: "6.0.0.2535", - display: "Android", - qua: 'V1_AND_SQ_8.9.50_3898_YYB_D', - ssover: 19, -}; -const tim = { - id: "com.tencent.tim", - app_key: '0S200MNJT807V3GE', - name: "A3.5.1.3168", - version: "3.5.1.3168", - ver: "3.5.1", - sign: Buffer.from('775e696d09856872fdd8ab4f3f06b1e0', 'hex'), - buildtime: 1630062176, - appid: 16, - subid: 537150355, - bitmap: 150470524, - main_sig_map: 16724722, - sub_sig_map: 0x10400, - sdkver: "6.0.0.2484", - display: "Tim", - qua: "V1_AND_SQ_8.3.9_351_TIM_D", - ssover: 18, -}; -const watch = { - id: "com.tencent.qqlite", - app_key: '0S200MNJT807V3GE', - name: "A2.0.8", - version: "2.0.8", - ver: "2.0.8", - sign: Buffer.from('A6 B7 45 BF 24 A2 C2 77 52 77 16 F6 F3 6E B6 8D'.split(' ').map(s => parseInt(s, 16))), - buildtime: 1559564731, - appid: 16, - subid: 537065138, - bitmap: 16252796, - main_sig_map: 16724722, - sub_sig_map: 0x10400, - sdkver: "6.0.0.2365", - display: "Watch", - qua: '', - ssover: 5 -}; -const hd = { - id: "com.tencent.minihd.qq", - app_key: '0S200MNJT807V3GE', - name: "A5.9.3.3468", - version: "5.9.3.3468", - ver: "5.9.3", - sign: Buffer.from('AA 39 78 F4 1F D9 6F F9 91 4A 66 9E 18 64 74 C7'.split(' ').map(s => parseInt(s, 16))), - buildtime: 1637427966, - appid: 16, - subid: 537128930, - bitmap: 150470524, - main_sig_map: 1970400, - sub_sig_map: 66560, - sdkver: "6.0.0.2433", - display: "iMac", - qua: '', - ssover: 12 -}; -const apklist = { - [Platform.Android]: mobile, - [Platform.Tim]: tim, - [Platform.aPad]: { - ...mobile, - subid: 537155599, - display: 'aPad' - }, - [Platform.Watch]: watch, - [Platform.iMac]: { ...hd }, - [Platform.iPad]: { - ...mobile, - subid: 537155074, - sign: hd.sign, - name: 'A8.9.50.611', - version: 'A8.9.50.611', - sdkver: '6.0.0.2535', - qua: 'V1_AND_SQ_8.9.50_3898_YYB_D', - display: 'iPad' - }, -}; -function getApkInfo(p) { - return apklist[p] || apklist[Platform.Android]; -} -exports.getApkInfo = getApkInfo; diff --git a/spaces/cvlab/zero123-live/taming-transformers/taming/modules/losses/vqperceptual.py b/spaces/cvlab/zero123-live/taming-transformers/taming/modules/losses/vqperceptual.py deleted file mode 100644 index c2febd445728479d4cd9aacdb2572cb1f1af04db..0000000000000000000000000000000000000000 --- a/spaces/cvlab/zero123-live/taming-transformers/taming/modules/losses/vqperceptual.py +++ /dev/null @@ -1,136 +0,0 @@ -import torch -import torch.nn as nn -import torch.nn.functional as F - -from taming.modules.losses.lpips import LPIPS -from taming.modules.discriminator.model import NLayerDiscriminator, weights_init - - -class DummyLoss(nn.Module): - def __init__(self): - super().__init__() - - -def adopt_weight(weight, global_step, threshold=0, value=0.): - if global_step < threshold: - weight = value - return weight - - -def hinge_d_loss(logits_real, logits_fake): - loss_real = torch.mean(F.relu(1. - logits_real)) - loss_fake = torch.mean(F.relu(1. 
+ logits_fake)) - d_loss = 0.5 * (loss_real + loss_fake) - return d_loss - - -def vanilla_d_loss(logits_real, logits_fake): - d_loss = 0.5 * ( - torch.mean(torch.nn.functional.softplus(-logits_real)) + - torch.mean(torch.nn.functional.softplus(logits_fake))) - return d_loss - - -class VQLPIPSWithDiscriminator(nn.Module): - def __init__(self, disc_start, codebook_weight=1.0, pixelloss_weight=1.0, - disc_num_layers=3, disc_in_channels=3, disc_factor=1.0, disc_weight=1.0, - perceptual_weight=1.0, use_actnorm=False, disc_conditional=False, - disc_ndf=64, disc_loss="hinge"): - super().__init__() - assert disc_loss in ["hinge", "vanilla"] - self.codebook_weight = codebook_weight - self.pixel_weight = pixelloss_weight - self.perceptual_loss = LPIPS().eval() - self.perceptual_weight = perceptual_weight - - self.discriminator = NLayerDiscriminator(input_nc=disc_in_channels, - n_layers=disc_num_layers, - use_actnorm=use_actnorm, - ndf=disc_ndf - ).apply(weights_init) - self.discriminator_iter_start = disc_start - if disc_loss == "hinge": - self.disc_loss = hinge_d_loss - elif disc_loss == "vanilla": - self.disc_loss = vanilla_d_loss - else: - raise ValueError(f"Unknown GAN loss '{disc_loss}'.") - print(f"VQLPIPSWithDiscriminator running with {disc_loss} loss.") - self.disc_factor = disc_factor - self.discriminator_weight = disc_weight - self.disc_conditional = disc_conditional - - def calculate_adaptive_weight(self, nll_loss, g_loss, last_layer=None): - if last_layer is not None: - nll_grads = torch.autograd.grad(nll_loss, last_layer, retain_graph=True)[0] - g_grads = torch.autograd.grad(g_loss, last_layer, retain_graph=True)[0] - else: - nll_grads = torch.autograd.grad(nll_loss, self.last_layer[0], retain_graph=True)[0] - g_grads = torch.autograd.grad(g_loss, self.last_layer[0], retain_graph=True)[0] - - d_weight = torch.norm(nll_grads) / (torch.norm(g_grads) + 1e-4) - d_weight = torch.clamp(d_weight, 0.0, 1e4).detach() - d_weight = d_weight * self.discriminator_weight - return d_weight - - def forward(self, codebook_loss, inputs, reconstructions, optimizer_idx, - global_step, last_layer=None, cond=None, split="train"): - rec_loss = torch.abs(inputs.contiguous() - reconstructions.contiguous()) - if self.perceptual_weight > 0: - p_loss = self.perceptual_loss(inputs.contiguous(), reconstructions.contiguous()) - rec_loss = rec_loss + self.perceptual_weight * p_loss - else: - p_loss = torch.tensor([0.0]) - - nll_loss = rec_loss - #nll_loss = torch.sum(nll_loss) / nll_loss.shape[0] - nll_loss = torch.mean(nll_loss) - - # now the GAN part - if optimizer_idx == 0: - # generator update - if cond is None: - assert not self.disc_conditional - logits_fake = self.discriminator(reconstructions.contiguous()) - else: - assert self.disc_conditional - logits_fake = self.discriminator(torch.cat((reconstructions.contiguous(), cond), dim=1)) - g_loss = -torch.mean(logits_fake) - - try: - d_weight = self.calculate_adaptive_weight(nll_loss, g_loss, last_layer=last_layer) - except RuntimeError: - assert not self.training - d_weight = torch.tensor(0.0) - - disc_factor = adopt_weight(self.disc_factor, global_step, threshold=self.discriminator_iter_start) - loss = nll_loss + d_weight * disc_factor * g_loss + self.codebook_weight * codebook_loss.mean() - - log = {"{}/total_loss".format(split): loss.clone().detach().mean(), - "{}/quant_loss".format(split): codebook_loss.detach().mean(), - "{}/nll_loss".format(split): nll_loss.detach().mean(), - "{}/rec_loss".format(split): rec_loss.detach().mean(), - 
"{}/p_loss".format(split): p_loss.detach().mean(), - "{}/d_weight".format(split): d_weight.detach(), - "{}/disc_factor".format(split): torch.tensor(disc_factor), - "{}/g_loss".format(split): g_loss.detach().mean(), - } - return loss, log - - if optimizer_idx == 1: - # second pass for discriminator update - if cond is None: - logits_real = self.discriminator(inputs.contiguous().detach()) - logits_fake = self.discriminator(reconstructions.contiguous().detach()) - else: - logits_real = self.discriminator(torch.cat((inputs.contiguous().detach(), cond), dim=1)) - logits_fake = self.discriminator(torch.cat((reconstructions.contiguous().detach(), cond), dim=1)) - - disc_factor = adopt_weight(self.disc_factor, global_step, threshold=self.discriminator_iter_start) - d_loss = disc_factor * self.disc_loss(logits_real, logits_fake) - - log = {"{}/disc_loss".format(split): d_loss.clone().detach().mean(), - "{}/logits_real".format(split): logits_real.detach().mean(), - "{}/logits_fake".format(split): logits_fake.detach().mean() - } - return d_loss, log diff --git a/spaces/dawdqd/ChuanhuChatGPT/modules/webui_locale.py b/spaces/dawdqd/ChuanhuChatGPT/modules/webui_locale.py deleted file mode 100644 index 1ce4d97b9b41cbb2d9be3fdadc4c85f6ef897604..0000000000000000000000000000000000000000 --- a/spaces/dawdqd/ChuanhuChatGPT/modules/webui_locale.py +++ /dev/null @@ -1,26 +0,0 @@ -import os -import locale -import commentjson as json - -class I18nAuto: - def __init__(self): - if os.path.exists("config.json"): - with open("config.json", "r", encoding='utf-8') as f: - config = json.load(f) - else: - config = {} - lang_config = config.get("language", "auto") - language = os.environ.get("LANGUAGE", lang_config) - if language == "auto": - language = locale.getdefaultlocale()[0] # get the language code of the system (ex. 
zh_CN) - self.language_map = {} - self.file_is_exists = os.path.isfile(f"./locale/{language}.json") - if self.file_is_exists: - with open(f"./locale/{language}.json", "r", encoding="utf-8") as f: - self.language_map.update(json.load(f)) - - def __call__(self, key): - if self.file_is_exists and key in self.language_map: - return self.language_map[key] - else: - return key diff --git a/spaces/dawdqd/ChuanhuChatGPT/web_assets/stylesheet/custom-components.css b/spaces/dawdqd/ChuanhuChatGPT/web_assets/stylesheet/custom-components.css deleted file mode 100644 index 633c4cd958b8f45d6f185aa81adcf26f07043ea8..0000000000000000000000000000000000000000 --- a/spaces/dawdqd/ChuanhuChatGPT/web_assets/stylesheet/custom-components.css +++ /dev/null @@ -1,240 +0,0 @@ - -/* user-info */ -#user-info.block { - white-space: nowrap; - position: absolute; left: 8em; top: .8em; - z-index: var(--layer-2); - box-shadow: var(--block-shadow); - border: none!important; border-radius: var(--block-label-radius); - background: var(--color-accent); - padding: var(--block-label-padding); - font-size: var(--block-label-text-size); line-height: var(--line-sm); - width: auto; max-height: 30px!important; - opacity: 1; - transition: opacity 0.3s ease-in-out; -} -#user-info.block .wrap { - opacity: 0; -} -#user-info p { - color: white; - font-weight: var(--block-label-text-weight); -} -#user-info.info-transparent { - opacity: 0; - transition: opacity 1s ease-in-out; -} - - -/* updater */ -#toast-update { - position: absolute; - display: flex; - top: -500px; - width: 100%; - justify-content: center; - z-index: var(--layer-top); - transition: top 0.3s ease-out; -} -#check-chuanhu-update { - position: absolute; - align-items: center; - display: flex; - flex-direction: column; - justify-content: center; - margin: var(--size-6) var(--size-4); - box-shadow: var(--shadow-drop-lg); - border: 1px solid var(--block-label-border-color); - border-radius: var(--container-radius); - background: var(--background-fill-primary); - padding: var(--size-4) var(--size-6); - min-width: 360px; - max-width: 480px; - overflow: hidden; - pointer-events: auto; -} -#version-info-title { - font-size: 1.2em; - font-weight: bold; - text-align: start; - width: 100%; -} -#release-note-wrap { - width: 100%; - max-width: 400px; - height: 120px; - border: solid 1px var(--border-color-primary); - overflow: auto; - padding: 0 8px; -} -#release-note-wrap.hideK { - display: none; -} -.btn-update-group { - display: flex; - justify-content: space-evenly; - align-items: center; - width: 100%; - padding-top: 10px; -} -.btn-update-group.hideK { - display: none; -} -#updating-info { - margin: 16px 0px 24px; - text-align: start; - width: 100%; -} - - -#usage-display p, #usage-display span { - margin: 0; - font-size: .85em; - color: var(--body-text-color-subdued); -} -.progress-bar { - background-color: var(--input-background-fill);; - margin: .5em 0 !important; - height: 20px; - border-radius: 10px; - overflow: hidden; -} -.progress { - background-color: var(--block-title-background-fill); - height: 100%; - border-radius: 10px; - text-align: right; - transition: width 0.5s ease-in-out; -} -.progress-text { - /* color: white; */ - color: var(--color-accent) !important; - font-size: 1em !important; - font-weight: bold; - padding-right: 10px; - line-height: 20px; -} - - -/* 亮暗色模式切换 */ -#apSwitch input[type="checkbox"] { - margin: 0 !important; -} -#apSwitch label.apSwitch { - display: flex; - align-items: center; - cursor: pointer; - color: var(--body-text-color); - font-weight: 
var(--checkbox-label-text-weight); - font-size: var(--checkbox-label-text-size); - line-height: var(--line-md); - margin: 2px 0 !important; -} -input[type="checkbox"]#apSwitch-checkbox::before { - background: none !important; - content: '🌞'; - border: none !important; - box-shadow: none !important; - font-size: 22px; - top: -4.4px; - left: -1px; -} -input:checked[type="checkbox"]#apSwitch-checkbox::before { - content: '🌚'; - left: 16px; -} - -/* .apSwitch { - top: 2px; - display: inline-block; - height: 22px; - position: relative; - width: 40px; - border-radius: 11px; - box-shadow: inset 0 0 1px 0 rgba(0,0,0,0.05), inset 0 0 2px 0 rgba(0,0,0,0.08) !important; -} -.apSwitch input { - display: none !important; -} -.apSlider { - background-color: var(--neutral-200); - bottom: 0; - cursor: pointer; - left: 0; - position: absolute; - right: 0; - top: 0; - transition: .4s; - font-size: 22px; - border-radius: 11px; -} -.apSlider::before { - transform: scale(0.9); - position: absolute; - transition: .4s; - content: "🌞"; -} -input:checked + .apSlider { - background-color: var(--primary-600); -} -input:checked + .apSlider::before { - transform: translateX(18px); - content:"🌚"; -} */ - -/* switch-checkbox */ -.switch-checkbox label { - flex-direction: row-reverse; - justify-content: space-between; -} -.switch-checkbox input[type="checkbox"] + span { - margin-left: 0 !important; -} - -.switch-checkbox input[type="checkbox"] { - -moz-appearance: none; - appearance: none; - -webkit-appearance: none; - outline: none; -} - -.switch-checkbox input[type="checkbox"] { - display: inline-block !important; - position: relative !important; - border: none !important; - outline: none; - width: 40px !important; - height: 22px !important; - border-radius: 11px !important; - background-image: none !important; - box-shadow: inset 0 0 1px 0 rgba(0,0,0,0.05), inset 0 0 2px 0 rgba(0,0,0,0.08) !important; - background-image: none !important; - background-color: var(--switch-checkbox-color-light) !important; - transition: .2s ease background-color; -} -.dark .switch-checkbox input[type="checkbox"] { - background-color: var(--switch-checkbox-color-dark) !important; -} -.switch-checkbox input[type="checkbox"]::before { - content: ""; - position: absolute; - width: 22px; - height: 22px; - top: 0; - left: 0; - background: #FFFFFF; - border: 0.5px solid rgba(0,0,0,0.02); - box-shadow: 0 0 0 0 rgba(0,0,0,0.15), 0 1px 0 0 rgba(0,0,0,0.05); - transform: scale(0.9); - border-radius: 11px !important; - transition: .4s ease all; - box-shadow: var(--input-shadow); -} -.switch-checkbox input:checked[type="checkbox"] { - background-color: var(--primary-600) !important; -} -.switch-checkbox input:checked[type="checkbox"]::before { - background-color: #fff; - left: 18px; -} - diff --git a/spaces/dblitzz21/food-spoonycal/README.md b/spaces/dblitzz21/food-spoonycal/README.md deleted file mode 100644 index a9c42ec5cfe9938227522cf0e54bd156308083de..0000000000000000000000000000000000000000 --- a/spaces/dblitzz21/food-spoonycal/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Food Spoonycal -emoji: 💩 -colorFrom: gray -colorTo: indigo -sdk: gradio -sdk_version: 3.1.1 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/fontTools/ttLib/tables/T_S_I_P_.py b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/fontTools/ttLib/tables/T_S_I_P_.py 
deleted file mode 100644 index 1abc02590c240377177d4ac12fe4848720e24959..0000000000000000000000000000000000000000 --- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/fontTools/ttLib/tables/T_S_I_P_.py +++ /dev/null @@ -1,5 +0,0 @@ -from .T_S_I_V_ import table_T_S_I_V_ - - -class table_T_S_I_P_(table_T_S_I_V_): - pass diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/fsspec/dircache.py b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/fsspec/dircache.py deleted file mode 100644 index eca19566b135e5a7a4f6e7407d56411ec58bfe44..0000000000000000000000000000000000000000 --- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/fsspec/dircache.py +++ /dev/null @@ -1,98 +0,0 @@ -import time -from collections.abc import MutableMapping -from functools import lru_cache - - -class DirCache(MutableMapping): - """ - Caching of directory listings, in a structure like:: - - {"path0": [ - {"name": "path0/file0", - "size": 123, - "type": "file", - ... - }, - {"name": "path0/file1", - }, - ... - ], - "path1": [...] - } - - Parameters to this class control listing expiry or indeed turn - caching off - """ - - def __init__( - self, - use_listings_cache=True, - listings_expiry_time=None, - max_paths=None, - **kwargs, - ): - """ - - Parameters - ---------- - use_listings_cache: bool - If False, this cache never returns items, but always reports KeyError, - and setting items has no effect - listings_expiry_time: int or float (optional) - Time in seconds that a listing is considered valid. If None, - listings do not expire. - max_paths: int (optional) - The number of most recent listings that are considered valid; 'recent' - refers to when the entry was set. 
- """ - self._cache = {} - self._times = {} - if max_paths: - self._q = lru_cache(max_paths + 1)(lambda key: self._cache.pop(key, None)) - self.use_listings_cache = use_listings_cache - self.listings_expiry_time = listings_expiry_time - self.max_paths = max_paths - - def __getitem__(self, item): - if self.listings_expiry_time is not None: - if self._times.get(item, 0) - time.time() < -self.listings_expiry_time: - del self._cache[item] - if self.max_paths: - self._q(item) - return self._cache[item] # maybe raises KeyError - - def clear(self): - self._cache.clear() - - def __len__(self): - return len(self._cache) - - def __contains__(self, item): - try: - self[item] - return True - except KeyError: - return False - - def __setitem__(self, key, value): - if not self.use_listings_cache: - return - if self.max_paths: - self._q(key) - self._cache[key] = value - if self.listings_expiry_time is not None: - self._times[key] = time.time() - - def __delitem__(self, key): - del self._cache[key] - - def __iter__(self): - entries = list(self._cache) - - return (k for k in entries if k in self) - - def __reduce__(self): - return ( - DirCache, - (self.use_listings_cache, self.listings_expiry_time, self.max_paths), - ) diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/gradio/templates/frontend/assets/index-7bf0115a.js b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/gradio/templates/frontend/assets/index-7bf0115a.js deleted file mode 100644 index ee5b4c56f68f00a366ceb9927a6bd9efdd199b0b..0000000000000000000000000000000000000000 --- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/gradio/templates/frontend/assets/index-7bf0115a.js +++ /dev/null @@ -1,4 +0,0 @@ -import{S as we,e as ke,s as ve,f as Re,g as b,h as I,j as C,n as ae,k as N,m as F,o as $,Y as H,p as z,b as ge,af as He,B as Ee,t as ee,x as te,I as Me,P as Ve,Z as bl,w as P,r as re,u as R,v as _e,C as Ie,ag as Jl,F as q,G as K,H as J,ah as Yl,N as Pe,ai as Xl,_ as je,E as M,X as Te,aj as gl,O as De,T as Le,a9 as Gl,ab as Ql,ac as Zl,ad as Wl,ak as V,V as wl,ae as kl,Q as vl,R as Al}from"./index-39fce9e2.js";import{a as Sl,B as El}from"./Button-79f6e3bf.js";import{U as pl}from"./UploadText-61f66d92.js";import{U as xl}from"./Upload-78d05dac.js";import{M as $l}from"./ModifyUpload-02c07c98.js";import{B as Vl}from"./BlockLabel-b1428685.js";import{n as yl}from"./ModifyUpload.svelte_svelte_type_style_lang-14b768c9.js";import{I as en}from"./IconButton-0ac328a0.js";import{E as ln}from"./Empty-16d6169a.js";import{S as nn,u as tn}from"./ShareButton-c9a8cbaf.js";import{D as an}from"./Download-0afd7f1a.js";function sn(l){let e,t,n,i;return{c(){e=Re("svg"),t=Re("path"),n=Re("circle"),i=Re("circle"),b(t,"d","M9 18V5l12-2v13"),b(n,"cx","6"),b(n,"cy","18"),b(n,"r","3"),b(i,"cx","18"),b(i,"cy","16"),b(i,"r","3"),b(e,"xmlns","http://www.w3.org/2000/svg"),b(e,"width","100%"),b(e,"height","100%"),b(e,"viewBox","0 0 24 24"),b(e,"fill","none"),b(e,"stroke","currentColor"),b(e,"stroke-width","1.5"),b(e,"stroke-linecap","round"),b(e,"stroke-linejoin","round"),b(e,"class","feather feather-music")},m(s,a){I(s,e,a),C(e,t),C(e,n),C(e,i)},p:ae,i:ae,o:ae,d(s){s&&N(e)}}}class ze extends we{constructor(e){super(),ke(this,e,null,sn,ve,{})}}function qe(l,e,t){const n=l.slice();return n[27]=e[t],n[29]=t,n}function Ke(l){let e,t,n,i,s=(l[6]==="label"||l[7]==="label")&&Je(l);return{c(){e=F("span"),s&&s.c(),b(e,"class","pip first"),b(e,"style",t=l[14]+": 
0%;"),H(e,"selected",l[17](l[0])),H(e,"in-range",l[16](l[0]))},m(a,u){I(a,e,u),s&&s.m(e,null),n||(i=[z(e,"click",function(){ge(l[20](l[0]))&&l[20](l[0]).apply(this,arguments)}),z(e,"touchend",He(function(){ge(l[20](l[0]))&&l[20](l[0]).apply(this,arguments)}))],n=!0)},p(a,u){l=a,l[6]==="label"||l[7]==="label"?s?s.p(l,u):(s=Je(l),s.c(),s.m(e,null)):s&&(s.d(1),s=null),u&16384&&t!==(t=l[14]+": 0%;")&&b(e,"style",t),u&131073&&H(e,"selected",l[17](l[0])),u&65537&&H(e,"in-range",l[16](l[0]))},d(a){a&&N(e),s&&s.d(),n=!1,Ee(i)}}}function Je(l){let e,t=l[12](l[0],0,0)+"",n,i=l[10]&&Ye(l),s=l[11]&&Xe(l);return{c(){e=F("span"),i&&i.c(),n=ee(t),s&&s.c(),b(e,"class","pipVal")},m(a,u){I(a,e,u),i&&i.m(e,null),C(e,n),s&&s.m(e,null)},p(a,u){a[10]?i?i.p(a,u):(i=Ye(a),i.c(),i.m(e,n)):i&&(i.d(1),i=null),u&4097&&t!==(t=a[12](a[0],0,0)+"")&&te(n,t),a[11]?s?s.p(a,u):(s=Xe(a),s.c(),s.m(e,null)):s&&(s.d(1),s=null)},d(a){a&&N(e),i&&i.d(),s&&s.d()}}}function Ye(l){let e,t;return{c(){e=F("span"),t=ee(l[10]),b(e,"class","pipVal-prefix")},m(n,i){I(n,e,i),C(e,t)},p(n,i){i&1024&&te(t,n[10])},d(n){n&&N(e)}}}function Xe(l){let e,t;return{c(){e=F("span"),t=ee(l[11]),b(e,"class","pipVal-suffix")},m(n,i){I(n,e,i),C(e,t)},p(n,i){i&2048&&te(t,n[11])},d(n){n&&N(e)}}}function Ge(l){let e,t=Me(Array(l[19]+1)),n=[];for(let i=0;iv}=e,{focus:p=void 0}=e,{orientationStart:A=void 0}=e,{percentOf:x=void 0}=e,{moveHandle:ne=void 0}=e;function se(v){ne(void 0,v)}return l.$$set=v=>{"range"in v&&t(21,f=v.range),"min"in v&&t(0,g=v.min),"max"in v&&t(1,r=v.max),"step"in v&&t(22,m=v.step),"values"in v&&t(23,_=v.values),"vertical"in v&&t(2,d=v.vertical),"reversed"in v&&t(3,c=v.reversed),"hoverable"in v&&t(4,S=v.hoverable),"disabled"in v&&t(5,D=v.disabled),"pipstep"in v&&t(24,w=v.pipstep),"all"in v&&t(6,L=v.all),"first"in v&&t(7,Y=v.first),"last"in v&&t(8,T=v.last),"rest"in v&&t(9,G=v.rest),"prefix"in v&&t(10,O=v.prefix),"suffix"in v&&t(11,X=v.suffix),"formatter"in v&&t(12,Q=v.formatter),"focus"in v&&t(13,p=v.focus),"orientationStart"in v&&t(14,A=v.orientationStart),"percentOf"in v&&t(15,x=v.percentOf),"moveHandle"in v&&t(25,ne=v.moveHandle)},l.$$.update=()=>{l.$$.dirty&20971527&&t(26,n=w||((r-g)/m>=(d?50:100)?(r-g)/(d?10:20):1)),l.$$.dirty&71303171&&t(19,i=parseInt((r-g)/(m*n),10)),l.$$.dirty&71303169&&t(18,s=function(v){return g+v*m*n}),l.$$.dirty&8388608&&t(17,a=function(v){return _.some(ie=>ie===v)}),l.$$.dirty&10485760&&t(16,u=function(v){if(f==="min")return _[0]>v;if(f==="max")return _[0]v})},[g,r,d,c,S,D,L,Y,T,G,O,X,Q,p,A,x,u,a,s,i,se,f,m,_,w,ne,n]}class on extends we{constructor(e){super(),ke(this,e,fn,un,ve,{range:21,min:0,max:1,step:22,values:23,vertical:2,reversed:3,hoverable:4,disabled:5,pipstep:24,all:6,first:7,last:8,rest:9,prefix:10,suffix:11,formatter:12,focus:13,orientationStart:14,percentOf:15,moveHandle:25})}}function tl(l,e,t){const n=l.slice();return n[63]=e[t],n[65]=t,n}function il(l){let e,t=l[21](l[63],l[65],l[23](l[63]))+"",n,i=l[18]&&al(l),s=l[19]&&sl(l);return{c(){e=F("span"),i&&i.c(),n=ee(t),s&&s.c(),b(e,"class","rangeFloat")},m(a,u){I(a,e,u),i&&i.m(e,null),C(e,n),s&&s.m(e,null)},p(a,u){a[18]?i?i.p(a,u):(i=al(a),i.c(),i.m(e,n)):i&&(i.d(1),i=null),u[0]&10485761&&t!==(t=a[21](a[63],a[65],a[23](a[63]))+"")&&te(n,t),a[19]?s?s.p(a,u):(s=sl(a),s.c(),s.m(e,null)):s&&(s.d(1),s=null)},d(a){a&&N(e),i&&i.d(),s&&s.d()}}}function al(l){let e,t;return{c(){e=F("span"),t=ee(l[18]),b(e,"class","rangeFloat-prefix")},m(n,i){I(n,e,i),C(e,t)},p(n,i){i[0]&262144&&te(t,n[18])},d(n){n&&N(e)}}}function sl(l){let 
e,t;return{c(){e=F("span"),t=ee(l[19]),b(e,"class","rangeFloat-suffix")},m(n,i){I(n,e,i),C(e,t)},p(n,i){i[0]&524288&&te(t,n[19])},d(n){n&&N(e)}}}function ul(l){let e,t,n,i,s,a,u,f,g,r,m,_,d=l[7]&&il(l);return{c(){e=F("span"),t=F("span"),n=$(),d&&d.c(),b(t,"class","rangeNub"),b(e,"role","slider"),b(e,"class","rangeHandle"),b(e,"data-handle",l[65]),b(e,"style",i=l[28]+": "+l[29][l[65]]+"%; z-index: "+(l[26]===l[65]?3:2)+";"),b(e,"aria-valuemin",s=l[2]===!0&&l[65]===1?l[0][0]:l[3]),b(e,"aria-valuemax",a=l[2]===!0&&l[65]===0?l[0][1]:l[4]),b(e,"aria-valuenow",u=l[63]),b(e,"aria-valuetext",f=""+(l[18]+l[21](l[63],l[65],l[23](l[63]))+l[19])),b(e,"aria-orientation",g=l[6]?"vertical":"horizontal"),b(e,"aria-disabled",l[10]),b(e,"disabled",l[10]),b(e,"tabindex",r=l[10]?-1:0),H(e,"active",l[24]&&l[26]===l[65]),H(e,"press",l[25]&&l[26]===l[65])},m(c,S){I(c,e,S),C(e,t),C(e,n),d&&d.m(e,null),m||(_=[z(e,"blur",l[33]),z(e,"focus",l[34]),z(e,"keydown",l[35])],m=!0)},p(c,S){c[7]?d?d.p(c,S):(d=il(c),d.c(),d.m(e,null)):d&&(d.d(1),d=null),S[0]&872415232&&i!==(i=c[28]+": "+c[29][c[65]]+"%; z-index: "+(c[26]===c[65]?3:2)+";")&&b(e,"style",i),S[0]&13&&s!==(s=c[2]===!0&&c[65]===1?c[0][0]:c[3])&&b(e,"aria-valuemin",s),S[0]&21&&a!==(a=c[2]===!0&&c[65]===0?c[0][1]:c[4])&&b(e,"aria-valuemax",a),S[0]&1&&u!==(u=c[63])&&b(e,"aria-valuenow",u),S[0]&11272193&&f!==(f=""+(c[18]+c[21](c[63],c[65],c[23](c[63]))+c[19]))&&b(e,"aria-valuetext",f),S[0]&64&&g!==(g=c[6]?"vertical":"horizontal")&&b(e,"aria-orientation",g),S[0]&1024&&b(e,"aria-disabled",c[10]),S[0]&1024&&b(e,"disabled",c[10]),S[0]&1024&&r!==(r=c[10]?-1:0)&&b(e,"tabindex",r),S[0]&83886080&&H(e,"active",c[24]&&c[26]===c[65]),S[0]&100663296&&H(e,"press",c[25]&&c[26]===c[65])},d(c){c&&N(e),d&&d.d(),m=!1,Ee(_)}}}function fl(l){let e,t;return{c(){e=F("span"),b(e,"class","rangeBar"),b(e,"style",t=l[28]+": "+l[31](l[29])+"%; "+l[27]+": "+l[32](l[29])+"%;")},m(n,i){I(n,e,i)},p(n,i){i[0]&939524096&&t!==(t=n[28]+": "+n[31](n[29])+"%; "+n[27]+": "+n[32](n[29])+"%;")&&b(e,"style",t)},d(n){n&&N(e)}}}function ol(l){let e,t;return e=new on({props:{values:l[0],min:l[3],max:l[4],step:l[5],range:l[2],vertical:l[6],reversed:l[8],orientationStart:l[28],hoverable:l[9],disabled:l[10],all:l[13],first:l[14],last:l[15],rest:l[16],pipstep:l[12],prefix:l[18],suffix:l[19],formatter:l[20],focus:l[24],percentOf:l[23],moveHandle:l[30]}}),{c(){q(e.$$.fragment)},m(n,i){K(e,n,i),t=!0},p(n,i){const s={};i[0]&1&&(s.values=n[0]),i[0]&8&&(s.min=n[3]),i[0]&16&&(s.max=n[4]),i[0]&32&&(s.step=n[5]),i[0]&4&&(s.range=n[2]),i[0]&64&&(s.vertical=n[6]),i[0]&256&&(s.reversed=n[8]),i[0]&268435456&&(s.orientationStart=n[28]),i[0]&512&&(s.hoverable=n[9]),i[0]&1024&&(s.disabled=n[10]),i[0]&8192&&(s.all=n[13]),i[0]&16384&&(s.first=n[14]),i[0]&32768&&(s.last=n[15]),i[0]&65536&&(s.rest=n[16]),i[0]&4096&&(s.pipstep=n[12]),i[0]&262144&&(s.prefix=n[18]),i[0]&524288&&(s.suffix=n[19]),i[0]&1048576&&(s.formatter=n[20]),i[0]&16777216&&(s.focus=n[24]),i[0]&8388608&&(s.percentOf=n[23]),e.$set(s)},i(n){t||(P(e.$$.fragment,n),t=!0)},o(n){R(e.$$.fragment,n),t=!1},d(n){J(e,n)}}}function rn(l){let e,t,n,i,s,a,u=Me(l[0]),f=[];for(let 
m=0;m{r=null}),_e()),(!i||_[0]&131072)&&b(e,"id",m[17]),(!i||_[0]&4)&&H(e,"range",m[2]),(!i||_[0]&1024)&&H(e,"disabled",m[10]),(!i||_[0]&512)&&H(e,"hoverable",m[9]),(!i||_[0]&64)&&H(e,"vertical",m[6]),(!i||_[0]&256)&&H(e,"reversed",m[8]),(!i||_[0]&16777216)&&H(e,"focus",m[24]),(!i||_[0]&4)&&H(e,"min",m[2]==="min"),(!i||_[0]&4)&&H(e,"max",m[2]==="max"),(!i||_[0]&2048)&&H(e,"pips",m[11]),(!i||_[0]&122880)&&H(e,"pip-labels",m[13]==="label"||m[14]==="label"||m[15]==="label"||m[16]==="label")},i(m){i||(P(r),i=!0)},o(m){R(r),i=!1},d(m){m&&N(e),bl(f,m),g&&g.d(),r&&r.d(),l[49](null),s=!1,Ee(a)}}}function rl(l){if(!l)return-1;for(var e=0;l=l.previousElementSibling;)e++;return e}function Fe(l){return l.type.includes("touch")?l.touches[0]:l}function _n(l,e,t){let n,i,s,a,u,f,g=ae,r=()=>(g(),g=Yl(W,o=>t(29,f=o)),W);l.$$.on_destroy.push(()=>g());let{slider:m}=e,{range:_=!1}=e,{pushy:d=!1}=e,{min:c=0}=e,{max:S=100}=e,{step:D=1}=e,{values:w=[(S+c)/2]}=e,{vertical:L=!1}=e,{float:Y=!1}=e,{reversed:T=!1}=e,{hoverable:G=!0}=e,{disabled:O=!1}=e,{pips:X=!1}=e,{pipstep:Q=void 0}=e,{all:p=void 0}=e,{first:A=void 0}=e,{last:x=void 0}=e,{rest:ne=void 0}=e,{id:se=void 0}=e,{prefix:v=""}=e,{suffix:ie=""}=e,{formatter:de=(o,y,U)=>o}=e,{handleFormatter:be=de}=e,{precision:ue=2}=e,{springValues:me={stiffness:.15,damping:.4}}=e;const le=Ie();let he=0,k=!1,fe=!1,oe=!1,Ae=!1,Z=w.length-1,E,j,W;function Se(o){const y=m.querySelectorAll(".handle"),U=Array.prototype.includes.call(y,o),B=Array.prototype.some.call(y,ce=>ce.contains(o));return U||B}function Oe(o){return _==="min"||_==="max"?o.slice(0,1):_?o.slice(0,2):o}function Ne(){return m.getBoundingClientRect()}function Be(o){const y=Ne();let U=0,B=0,ce=0;L?(U=o.clientY-y.top,B=U/y.height*100,B=T?B:100-B):(U=o.clientX-y.left,B=U/y.width*100,B=T?100-B:B),ce=(S-c)/100*B+c;let Ce;return _===!0&&w[0]===w[1]?ce>w[1]?1:0:(Ce=w.indexOf([...w].sort((ql,Kl)=>Math.abs(ce-ql)-Math.abs(ce-Kl))[0]),Ce)}function h(o){const y=Ne();let U=0,B=0,ce=0;L?(U=o.clientY-y.top,B=U/y.height*100,B=T?B:100-B):(U=o.clientX-y.left,B=U/y.width*100,B=T?100-B:B),ce=(S-c)/100*B+c,ye(Z,ce)}function ye(o,y){return y=s(y),typeof o>"u"&&(o=Z),_&&(o===0&&y>w[1]?d?t(0,w[1]=y,w):y=w[1]:o===1&&ys(o))})}function Ue(){!O&&le("stop",{activeHandle:Z,startValue:E,value:w[Z],values:w.map(o=>s(o))})}function Cl(){!O&&le("change",{activeHandle:Z,startValue:E,previousValue:typeof j>"u"?E:j,value:w[Z],values:w.map(o=>s(o))})}function jl(o){Pe[o?"unshift":"push"](()=>{m=o,t(1,m)})}return l.$$set=o=>{"slider"in o&&t(1,m=o.slider),"range"in o&&t(2,_=o.range),"pushy"in o&&t(43,d=o.pushy),"min"in o&&t(3,c=o.min),"max"in o&&t(4,S=o.max),"step"in o&&t(5,D=o.step),"values"in o&&t(0,w=o.values),"vertical"in o&&t(6,L=o.vertical),"float"in o&&t(7,Y=o.float),"reversed"in o&&t(8,T=o.reversed),"hoverable"in o&&t(9,G=o.hoverable),"disabled"in o&&t(10,O=o.disabled),"pips"in o&&t(11,X=o.pips),"pipstep"in o&&t(12,Q=o.pipstep),"all"in o&&t(13,p=o.all),"first"in o&&t(14,A=o.first),"last"in o&&t(15,x=o.last),"rest"in o&&t(16,ne=o.rest),"id"in o&&t(17,se=o.id),"prefix"in o&&t(18,v=o.prefix),"suffix"in o&&t(19,ie=o.suffix),"formatter"in o&&t(20,de=o.formatter),"handleFormatter"in o&&t(21,be=o.handleFormatter),"precision"in o&&t(44,ue=o.precision),"springValues"in o&&t(45,me=o.springValues)},l.$$.update=()=>{l.$$.dirty[0]&24&&t(48,i=function(o){return o<=c?c:o>=S?S:o}),l.$$.dirty[0]&56|l.$$.dirty[1]&139264&&t(47,s=function(o){if(o<=c)return c;if(o>=S)return S;let y=(o-c)%D,U=o-y;return 
Math.abs(y)*2>=D&&(U+=y>0?D:-D),U=i(U),parseFloat(U.toFixed(ue))}),l.$$.dirty[0]&24|l.$$.dirty[1]&8192&&t(23,n=function(o){let y=(o-c)/(S-c)*100;return isNaN(y)||y<=0?0:y>=100?100:parseFloat(y.toFixed(ue))}),l.$$.dirty[0]&12582937|l.$$.dirty[1]&114688&&(Array.isArray(w)||(t(0,w=[(S+c)/2]),console.error("'values' prop should be an Array (https://github.com/simeydotme/svelte-range-slider-pips#slider-props)")),t(0,w=Oe(w.map(o=>s(o)))),he!==w.length?r(t(22,W=Jl(w.map(o=>n(o)),me))):W.set(w.map(o=>n(o))),t(46,he=w.length)),l.$$.dirty[0]&320&&t(28,a=L?T?"top":"bottom":T?"right":"left"),l.$$.dirty[0]&320&&t(27,u=L?T?"bottom":"top":T?"left":"right")},[w,m,_,c,S,D,L,Y,T,G,O,X,Q,p,A,x,ne,se,v,ie,de,be,W,n,k,oe,Z,u,a,f,ye,Hl,Il,Nl,Rl,Ml,Tl,Dl,Ll,Ol,Bl,Fl,zl,d,ue,me,he,s,i,jl]}class dn extends we{constructor(e){super(),ke(this,e,_n,rn,ve,{slider:1,range:2,pushy:43,min:3,max:4,step:5,values:0,vertical:6,float:7,reversed:8,hoverable:9,disabled:10,pips:11,pipstep:12,all:13,first:14,last:15,rest:16,id:17,prefix:18,suffix:19,formatter:20,handleFormatter:21,precision:44,springValues:45},null,[-1,-1,-1])}}function Pl(l,{crop_values:e,autoplay:t}={}){function n(){if(e===void 0)return;const s=e[0]/100*l.duration,a=e[1]/100*l.duration;l.currentTimea&&(l.currentTime=s,l.pause())}async function i(){t&&(l.pause(),await l.play())}return l.addEventListener("loadeddata",i),l.addEventListener("timeupdate",n),{destroy(){l.removeEventListener("loadeddata",i),l.removeEventListener("timeupdate",n)}}}function mn(l){let e,t,n,i,s,a,u,f,g,r,m;e=new $l({props:{editable:l[7],absolute:!0}}),e.$on("clear",l[14]),e.$on("edit",l[27]);let _=l[9]==="edit"&&l[10]?.duration&&_l(l);return{c(){q(e.$$.fragment),t=$(),n=F("audio"),u=$(),_&&_.c(),f=Ve(),n.controls=!0,b(n,"preload","metadata"),Te(n.src,i=l[1]?.data)||b(n,"src",i),b(n,"data-testid",s=`${l[2]}-audio`),b(n,"class","svelte-1thnwz")},m(d,c){K(e,d,c),I(d,t,c),I(d,n,c),l[28](n),I(d,u,c),_&&_.m(d,c),I(d,f,c),g=!0,r||(m=[gl(a=Pl.call(null,n,{autoplay:l[6],crop_values:l[11]})),z(n,"play",l[24]),z(n,"pause",l[25]),z(n,"ended",l[17])],r=!0)},p(d,c){const S={};c[0]&128&&(S.editable=d[7]),e.$set(S),(!g||c[0]&2&&!Te(n.src,i=d[1]?.data))&&b(n,"src",i),(!g||c[0]&4&&s!==(s=`${d[2]}-audio`))&&b(n,"data-testid",s),a&&ge(a.update)&&c[0]&2112&&a.update.call(null,{autoplay:d[6],crop_values:d[11]}),d[9]==="edit"&&d[10]?.duration?_?(_.p(d,c),c[0]&1536&&P(_,1)):(_=_l(d),_.c(),P(_,1),_.m(f.parentNode,f)):_&&(re(),R(_,1,1,()=>{_=null}),_e())},i(d){g||(P(e.$$.fragment,d),P(_),g=!0)},o(d){R(e.$$.fragment,d),R(_),g=!1},d(d){d&&(N(t),N(n),N(u),N(f)),J(e,d),l[28](null),_&&_.d(d),r=!1,Ee(m)}}}function hn(l){let e,t,n,i;const s=[bn,cn],a=[];function u(f,g){return f[4]==="microphone"?0:f[4]==="upload"?1:-1}return~(e=u(l))&&(t=a[e]=s[e](l)),{c(){t&&t.c(),n=Ve()},m(f,g){~e&&a[e].m(f,g),I(f,n,g),i=!0},p(f,g){let r=e;e=u(f),e===r?~e&&a[e].p(f,g):(t&&(re(),R(a[r],1,1,()=>{a[r]=null}),_e()),~e?(t=a[e],t?t.p(f,g):(t=a[e]=s[e](f),t.c()),P(t,1),t.m(n.parentNode,n)):t=null)},i(f){i||(P(t),i=!0)},o(f){R(t),i=!1},d(f){f&&N(n),~e&&a[e].d(f)}}}function _l(l){let e,t,n;function i(a){l[29](a)}let s={range:!0,min:0,max:100,step:1};return l[11]!==void 0&&(s.values=l[11]),e=new dn({props:s}),Pe.push(()=>De(e,"values",i)),e.$on("change",l[15]),{c(){q(e.$$.fragment)},m(a,u){K(e,a,u),n=!0},p(a,u){const f={};!t&&u[0]&2048&&(t=!0,f.values=a[11],Le(()=>t=!1)),e.$set(f)},i(a){n||(P(e.$$.fragment,a),n=!0)},o(a){R(e.$$.fragment,a),n=!1},d(a){J(e,a)}}}function cn(l){let e,t,n;function i(a){l[26](a)}let 
s={filetype:"audio/aac,audio/midi,audio/mpeg,audio/ogg,audio/wav,audio/x-wav,audio/opus,audio/webm,audio/flac,audio/vnd.rn-realaudio,audio/x-ms-wma,audio/x-aiff,audio/amr,audio/*",$$slots:{default:[gn]},$$scope:{ctx:l}};return l[0]!==void 0&&(s.dragging=l[0]),e=new xl({props:s}),Pe.push(()=>De(e,"dragging",i)),e.$on("load",l[16]),{c(){q(e.$$.fragment)},m(a,u){K(e,a,u),n=!0},p(a,u){const f={};u[0]&1073741824&&(f.$$scope={dirty:u,ctx:a}),!t&&u[0]&1&&(t=!0,f.dragging=a[0],Le(()=>t=!1)),e.$set(f)},i(a){n||(P(e.$$.fragment,a),n=!0)},o(a){R(e.$$.fragment,a),n=!1},d(a){J(e,a)}}}function bn(l){let e,t,n,i;const s=[kn,wn],a=[];function u(f,g){return f[8]?0:1}return t=u(l),n=a[t]=s[t](l),{c(){e=F("div"),n.c(),b(e,"class","mic-wrap svelte-1thnwz")},m(f,g){I(f,e,g),a[t].m(e,null),i=!0},p(f,g){let r=t;t=u(f),t===r?a[t].p(f,g):(re(),R(a[r],1,1,()=>{a[r]=null}),_e(),n=a[t],n?n.p(f,g):(n=a[t]=s[t](f),n.c()),P(n,1),n.m(e,null))},i(f){i||(P(n),i=!0)},o(f){R(n),i=!1},d(f){f&&N(e),a[t].d()}}}function gn(l){let e;const t=l[23].default,n=Gl(t,l,l[30],null);return{c(){n&&n.c()},m(i,s){n&&n.m(i,s),e=!0},p(i,s){n&&n.p&&(!e||s[0]&1073741824)&&Ql(n,t,i,i[30],e?Wl(t,i[30],s,null):Zl(i[30]),null)},i(i){e||(P(n,i),e=!0)},o(i){R(n,i),e=!1},d(i){n&&n.d(i)}}}function wn(l){let e,t;return e=new Sl({props:{size:"sm",$$slots:{default:[vn]},$$scope:{ctx:l}}}),e.$on("click",l[12]),{c(){q(e.$$.fragment)},m(n,i){K(e,n,i),t=!0},p(n,i){const s={};i[0]&1073741824&&(s.$$scope={dirty:i,ctx:n}),e.$set(s)},i(n){t||(P(e.$$.fragment,n),t=!0)},o(n){R(e.$$.fragment,n),t=!1},d(n){J(e,n)}}}function kn(l){let e,t;return e=new Sl({props:{size:"sm",$$slots:{default:[An]},$$scope:{ctx:l}}}),e.$on("click",l[13]),{c(){q(e.$$.fragment)},m(n,i){K(e,n,i),t=!0},p(n,i){const s={};i[0]&1073741824&&(s.$$scope={dirty:i,ctx:n}),e.$set(s)},i(n){t||(P(e.$$.fragment,n),t=!0)},o(n){R(e.$$.fragment,n),t=!1},d(n){J(e,n)}}}function vn(l){let e,t;return{c(){e=F("span"),e.innerHTML='',t=ee(` - Record from microphone`),b(e,"class","record-icon svelte-1thnwz")},m(n,i){I(n,e,i),I(n,t,i)},p:ae,d(n){n&&(N(e),N(t))}}}function An(l){let e,t;return{c(){e=F("span"),e.innerHTML=' ',t=ee(` - Stop recording`),b(e,"class","record-icon svelte-1thnwz")},m(n,i){I(n,e,i),I(n,t,i)},p:ae,d(n){n&&(N(e),N(t))}}}function Sn(l){let e,t,n,i,s,a;e=new Vl({props:{show_label:l[3],Icon:ze,float:l[4]==="upload"&&l[1]===null,label:l[2]||"Audio"}});const u=[hn,mn],f=[];function g(r,m){return r[1]===null||r[5]?0:1}return n=g(l),i=f[n]=u[n](l),{c(){q(e.$$.fragment),t=$(),i.c(),s=Ve()},m(r,m){K(e,r,m),I(r,t,m),f[n].m(r,m),I(r,s,m),a=!0},p(r,m){const _={};m[0]&8&&(_.show_label=r[3]),m[0]&18&&(_.float=r[4]==="upload"&&r[1]===null),m[0]&4&&(_.label=r[2]||"Audio"),e.$set(_);let d=n;n=g(r),n===d?f[n].p(r,m):(re(),R(f[d],1,1,()=>{f[d]=null}),_e(),i=f[n],i?i.p(r,m):(i=f[n]=u[n](r),i.c()),P(i,1),i.m(s.parentNode,s))},i(r){a||(P(e.$$.fragment,r),P(i),a=!0)},o(r){R(e.$$.fragment,r),R(i),a=!1},d(r){r&&(N(t),N(s)),J(e,r),f[n].d(r)}}}const En=500,dl=44;function Vn(l){return new Promise((e,t)=>{let n=new FileReader;n.onerror=t,n.onload=()=>e(n.result),n.readAsDataURL(l)})}function yn(l,e,t){let{$$slots:n={},$$scope:i}=e,{value:s=null}=e,{label:a}=e,{show_label:u=!0}=e,{name:f=""}=e,{source:g}=e,{pending:r=!1}=e,{streaming:m=!1}=e,{autoplay:_=!1}=e,{show_edit_button:d=!0}=e,c=!1,S,D="",w,L=[],Y=!1,T,G=!1,O=[0,100],X=[],Q;function 
p(){Q=[je(()=>import("./module-3b9777eb.js"),["./module-3b9777eb.js","./index-39fce9e2.js","./index-9b163635.css"],import.meta.url),je(()=>import("./module-1791af61.js"),[],import.meta.url)]}m&&p();const A=Ie(),x=async(E,j)=>{let W=new Blob(E,{type:"audio/wav"});t(1,s={data:await Vn(W),name:"audio.wav"}),A(j,s)};async function ne(){let E;try{E=await navigator.mediaDevices.getUserMedia({audio:!0})}catch(j){if(j instanceof DOMException&&j.name=="NotAllowedError"){A("error","Please allow access to the microphone for recording.");return}throw j}if(E!=null){if(m){const[{MediaRecorder:j,register:W},{connect:Se}]=await Promise.all(Q);await W(await Se()),S=new j(E,{mimeType:"audio/wav"}),S.addEventListener("dataavailable",se)}else S=new MediaRecorder(E),S.addEventListener("dataavailable",j=>{X.push(j.data)}),S.addEventListener("stop",async()=>{t(8,c=!1),await x(X,"change"),await x(X,"stop_recording"),X=[]});G=!0}}async function se(E){let j=await E.data.arrayBuffer(),W=new Uint8Array(j);if(w||(t(20,w=new Uint8Array(j.slice(0,dl))),W=new Uint8Array(j.slice(dl))),r)L.push(W);else{let Se=[w].concat(L,[W]);x(Se,"stream"),t(21,L=[])}}async function v(){t(8,c=!0),A("start_recording"),G||await ne(),t(20,w=void 0),m?S.start(En):S.start()}Xl(()=>{S&&S.state!=="inactive"&&S.stop()});function ie(){S.stop(),m&&(t(8,c=!1),r&&t(22,Y=!0))}function de(){A("change",null),A("clear"),t(9,D=""),t(1,s=null)}function be({detail:{values:E}}){s&&(A("change",{data:s.data,name:f,crop_min:E[0],crop_max:E[1]}),A("edit"))}function ue({detail:E}){t(1,s=E),A("change",{data:E.data,name:E.name}),A("upload",E)}function me(){A("stop"),A("end")}let{dragging:le=!1}=e;function he(E){M.call(this,l,E)}function k(E){M.call(this,l,E)}function fe(E){le=E,t(0,le)}const oe=()=>t(9,D="edit");function Ae(E){Pe[E?"unshift":"push"](()=>{T=E,t(10,T)})}function Z(E){O=E,t(11,O)}return l.$$set=E=>{"value"in E&&t(1,s=E.value),"label"in E&&t(2,a=E.label),"show_label"in E&&t(3,u=E.show_label),"name"in E&&t(18,f=E.name),"source"in E&&t(4,g=E.source),"pending"in E&&t(19,r=E.pending),"streaming"in E&&t(5,m=E.streaming),"autoplay"in E&&t(6,_=E.autoplay),"show_edit_button"in E&&t(7,d=E.show_edit_button),"dragging"in E&&t(0,le=E.dragging),"$$scope"in E&&t(30,i=E.$$scope)},l.$$.update=()=>{if(l.$$.dirty[0]&7864320&&Y&&r===!1&&(t(22,Y=!1),w&&L)){let E=[w].concat(L);t(21,L=[]),x(E,"stream")}l.$$.dirty[0]&1&&A("drag",le)},[le,s,a,u,g,m,_,d,c,D,T,O,v,ie,de,be,ue,me,f,r,w,L,Y,n,he,k,fe,oe,Ae,Z,i]}class Pn extends we{constructor(e){super(),ke(this,e,yn,Sn,ve,{value:1,label:2,show_label:3,name:18,source:4,pending:19,streaming:5,autoplay:6,show_edit_button:7,dragging:0},null,[-1,-1])}}function Hn(l){let e,t;return e=new pl({props:{type:"audio"}}),{c(){q(e.$$.fragment)},m(n,i){K(e,n,i),t=!0},p:ae,i(n){t||(P(e.$$.fragment,n),t=!0)},o(n){R(e.$$.fragment,n),t=!1},d(n){J(e,n)}}}function In(l){let e,t,n,i;const s=[l[1]];let a={};for(let u=0;ut(0,f=k),x=({detail:k})=>{t(0,f=k),n("stream",f)},ne=({detail:k})=>t(18,p=k);function se(k){M.call(this,l,k)}function v(k){M.call(this,l,k)}function ie(k){M.call(this,l,k)}function de(k){M.call(this,l,k)}function be(k){M.call(this,l,k)}function ue(k){M.call(this,l,k)}function me(k){M.call(this,l,k)}function le(k){M.call(this,l,k)}const he=({detail:k})=>{t(1,T=T||{}),t(1,T.status="error",T),n("error",k)};return l.$$set=k=>{"elem_id"in k&&t(2,i=k.elem_id),"elem_classes"in k&&t(3,s=k.elem_classes),"visible"in k&&t(4,a=k.visible),"mode"in k&&t(5,u=k.mode),"value"in k&&t(0,f=k.value),"name"in k&&t(6,g=k.name),"source"in 
k&&t(7,r=k.source),"label"in k&&t(8,m=k.label),"root"in k&&t(20,_=k.root),"show_label"in k&&t(9,d=k.show_label),"pending"in k&&t(10,c=k.pending),"streaming"in k&&t(11,S=k.streaming),"root_url"in k&&t(21,D=k.root_url),"container"in k&&t(12,w=k.container),"scale"in k&&t(13,L=k.scale),"min_width"in k&&t(14,Y=k.min_width),"loading_status"in k&&t(1,T=k.loading_status),"autoplay"in k&&t(15,G=k.autoplay),"show_edit_button"in k&&t(16,O=k.show_edit_button)},l.$$.update=()=>{l.$$.dirty[0]&3145729&&t(17,Q=yl(f,_,D)),l.$$.dirty[0]&4194305&&JSON.stringify(f)!==JSON.stringify(X)&&(t(22,X=f),n("change"))},[f,T,i,s,a,u,g,r,m,d,c,S,w,L,Y,G,O,Q,p,n,_,D,X,A,x,ne,se,v,ie,de,be,ue,me,le,he]}class Mn extends we{constructor(e){super(),ke(this,e,Rn,Nn,ve,{elem_id:2,elem_classes:3,visible:4,mode:5,value:0,name:6,source:7,label:8,root:20,show_label:9,pending:10,streaming:11,root_url:21,container:12,scale:13,min_width:14,loading_status:1,autoplay:15,show_edit_button:16},null,[-1,-1])}get elem_id(){return this.$$.ctx[2]}set elem_id(e){this.$$set({elem_id:e}),V()}get elem_classes(){return this.$$.ctx[3]}set elem_classes(e){this.$$set({elem_classes:e}),V()}get visible(){return this.$$.ctx[4]}set visible(e){this.$$set({visible:e}),V()}get mode(){return this.$$.ctx[5]}set mode(e){this.$$set({mode:e}),V()}get value(){return this.$$.ctx[0]}set value(e){this.$$set({value:e}),V()}get name(){return this.$$.ctx[6]}set name(e){this.$$set({name:e}),V()}get source(){return this.$$.ctx[7]}set source(e){this.$$set({source:e}),V()}get label(){return this.$$.ctx[8]}set label(e){this.$$set({label:e}),V()}get root(){return this.$$.ctx[20]}set root(e){this.$$set({root:e}),V()}get show_label(){return this.$$.ctx[9]}set show_label(e){this.$$set({show_label:e}),V()}get pending(){return this.$$.ctx[10]}set pending(e){this.$$set({pending:e}),V()}get streaming(){return this.$$.ctx[11]}set streaming(e){this.$$set({streaming:e}),V()}get root_url(){return this.$$.ctx[21]}set root_url(e){this.$$set({root_url:e}),V()}get container(){return this.$$.ctx[12]}set container(e){this.$$set({container:e}),V()}get scale(){return this.$$.ctx[13]}set scale(e){this.$$set({scale:e}),V()}get min_width(){return this.$$.ctx[14]}set min_width(e){this.$$set({min_width:e}),V()}get loading_status(){return this.$$.ctx[1]}set loading_status(e){this.$$set({loading_status:e}),V()}get autoplay(){return this.$$.ctx[15]}set autoplay(e){this.$$set({autoplay:e}),V()}get show_edit_button(){return this.$$.ctx[16]}set show_edit_button(e){this.$$set({show_edit_button:e}),V()}}function ml(l){let e,t,n,i=l[4]&&hl(l),s=l[5]&&cl(l);return{c(){e=F("div"),i&&i.c(),t=$(),s&&s.c(),b(e,"class","icon-buttons svelte-pq78xp")},m(a,u){I(a,e,u),i&&i.m(e,null),C(e,t),s&&s.m(e,null),n=!0},p(a,u){a[4]?i?(i.p(a,u),u&16&&P(i,1)):(i=hl(a),i.c(),P(i,1),i.m(e,t)):i&&(re(),R(i,1,1,()=>{i=null}),_e()),a[5]?s?(s.p(a,u),u&32&&P(s,1)):(s=cl(a),s.c(),P(s,1),s.m(e,null)):s&&(re(),R(s,1,1,()=>{s=null}),_e())},i(a){n||(P(i),P(s),n=!0)},o(a){R(i),R(s),n=!1},d(a){a&&N(e),i&&i.d(),s&&s.d()}}}function hl(l){let e,t,n,i,s;return t=new en({props:{Icon:an,label:"Download"}}),{c(){e=F("a"),q(t.$$.fragment),b(e,"href",n=l[0].data),b(e,"target",window.__is_colab__?"_blank":null),b(e,"download",i=l[0].name)},m(a,u){I(a,e,u),K(t,e,null),s=!0},p(a,u){(!s||u&1&&n!==(n=a[0].data))&&b(e,"href",n),(!s||u&1&&i!==(i=a[0].name))&&b(e,"download",i)},i(a){s||(P(t.$$.fragment,a),s=!0)},o(a){R(t.$$.fragment,a),s=!1},d(a){a&&N(e),J(t)}}}function cl(l){let e,t;return e=new 
nn({props:{formatter:l[10],value:l[0]}}),e.$on("error",l[11]),e.$on("share",l[12]),{c(){q(e.$$.fragment)},m(n,i){K(e,n,i),t=!0},p(n,i){const s={};i&1&&(s.value=n[0]),e.$set(s)},i(n){t||(P(e.$$.fragment,n),t=!0)},o(n){R(e.$$.fragment,n),t=!1},d(n){J(e,n)}}}function Tn(l){let e,t,n,i,s,a;return{c(){e=F("audio"),e.controls=!0,b(e,"preload","metadata"),Te(e.src,t=l[0]?.data)||b(e,"src",t),b(e,"data-testid",n=`${l[1]}-audio`),b(e,"class","svelte-pq78xp")},m(u,f){I(u,e,f),s||(a=[gl(i=Pl.call(null,e,{autoplay:l[3]})),z(e,"play",l[8]),z(e,"pause",l[9]),z(e,"ended",l[6])],s=!0)},p(u,f){f&1&&!Te(e.src,t=u[0]?.data)&&b(e,"src",t),f&2&&n!==(n=`${u[1]}-audio`)&&b(e,"data-testid",n),i&&ge(i.update)&&f&8&&i.update.call(null,{autoplay:u[3]})},i:ae,o:ae,d(u){u&&N(e),s=!1,Ee(a)}}}function Dn(l){let e,t;return e=new ln({props:{size:"small",$$slots:{default:[Ln]},$$scope:{ctx:l}}}),{c(){q(e.$$.fragment)},m(n,i){K(e,n,i),t=!0},p(n,i){const s={};i&16384&&(s.$$scope={dirty:i,ctx:n}),e.$set(s)},i(n){t||(P(e.$$.fragment,n),t=!0)},o(n){R(e.$$.fragment,n),t=!1},d(n){J(e,n)}}}function Ln(l){let e,t;return e=new ze({}),{c(){q(e.$$.fragment)},m(n,i){K(e,n,i),t=!0},i(n){t||(P(e.$$.fragment,n),t=!0)},o(n){R(e.$$.fragment,n),t=!1},d(n){J(e,n)}}}function On(l){let e,t,n,i,s,a,u;e=new Vl({props:{show_label:l[2],Icon:ze,float:!1,label:l[1]||"Audio"}});let f=l[0]!==null&&ml(l);const g=[Dn,Tn],r=[];function m(_,d){return _[0]===null?0:1}return i=m(l),s=r[i]=g[i](l),{c(){q(e.$$.fragment),t=$(),f&&f.c(),n=$(),s.c(),a=Ve()},m(_,d){K(e,_,d),I(_,t,d),f&&f.m(_,d),I(_,n,d),r[i].m(_,d),I(_,a,d),u=!0},p(_,[d]){const c={};d&4&&(c.show_label=_[2]),d&2&&(c.label=_[1]||"Audio"),e.$set(c),_[0]!==null?f?(f.p(_,d),d&1&&P(f,1)):(f=ml(_),f.c(),P(f,1),f.m(n.parentNode,n)):f&&(re(),R(f,1,1,()=>{f=null}),_e());let S=i;i=m(_),i===S?r[i].p(_,d):(re(),R(r[S],1,1,()=>{r[S]=null}),_e(),s=r[i],s?s.p(_,d):(s=r[i]=g[i](_),s.c()),P(s,1),s.m(a.parentNode,a))},i(_){u||(P(e.$$.fragment,_),P(f),P(s),u=!0)},o(_){R(e.$$.fragment,_),R(f),R(s),u=!1},d(_){_&&(N(t),N(n),N(a)),J(e,_),f&&f.d(_),r[i].d(_)}}}function Bn(l,e,t){let{value:n=null}=e,{label:i}=e,{name:s}=e,{show_label:a=!0}=e,{autoplay:u}=e,{show_download_button:f=!0}=e,{show_share_button:g=!1}=e;const r=Ie();function m(){r("stop"),r("end")}function _(w){M.call(this,l,w)}function d(w){M.call(this,l,w)}const c=async w=>w?``:"";function S(w){M.call(this,l,w)}function D(w){M.call(this,l,w)}return l.$$set=w=>{"value"in w&&t(0,n=w.value),"label"in w&&t(1,i=w.label),"name"in w&&t(7,s=w.name),"show_label"in w&&t(2,a=w.show_label),"autoplay"in w&&t(3,u=w.autoplay),"show_download_button"in w&&t(4,f=w.show_download_button),"show_share_button"in w&&t(5,g=w.show_share_button)},l.$$.update=()=>{l.$$.dirty&129&&n&&r("change",{name:s,data:n?.data})},[n,i,a,u,f,g,m,s,_,d,c,S,D]}class Fn extends we{constructor(e){super(),ke(this,e,Bn,On,ve,{value:0,label:1,name:7,show_label:2,autoplay:3,show_download_button:4,show_share_button:5})}}function zn(l){let e,t,n,i;const s=[l[11]];let a={};for(let u=0;u{"elem_id"in A&&t(0,i=A.elem_id),"elem_classes"in A&&t(1,s=A.elem_classes),"visible"in A&&t(2,a=A.visible),"mode"in A&&t(3,u=A.mode),"value"in A&&t(4,f=A.value),"source"in A&&t(5,g=A.source),"label"in A&&t(6,r=A.label),"root"in A&&t(17,m=A.root),"show_label"in A&&t(7,_=A.show_label),"root_url"in A&&t(18,d=A.root_url),"container"in A&&t(8,c=A.container),"scale"in A&&t(9,S=A.scale),"min_width"in A&&t(10,D=A.min_width),"loading_status"in A&&t(11,w=A.loading_status),"autoplay"in A&&t(12,L=A.autoplay),"show_download_button"in 
A&&t(13,Y=A.show_download_button),"show_share_button"in A&&t(14,T=A.show_share_button)},l.$$.update=()=>{l.$$.dirty&393232&&t(15,O=yl(f,m,d)),l.$$.dirty&524304&&JSON.stringify(f)!==JSON.stringify(G)&&(t(19,G=f),n("change"))},[i,s,a,u,f,g,r,_,c,S,D,w,L,Y,T,O,X,m,d,G,Q,p]}class jn extends we{constructor(e){super(),ke(this,e,Cn,Un,ve,{elem_id:0,elem_classes:1,visible:2,mode:3,value:4,source:5,label:6,root:17,show_label:7,root_url:18,container:8,scale:9,min_width:10,loading_status:11,autoplay:12,show_download_button:13,show_share_button:14})}get elem_id(){return this.$$.ctx[0]}set elem_id(e){this.$$set({elem_id:e}),V()}get elem_classes(){return this.$$.ctx[1]}set elem_classes(e){this.$$set({elem_classes:e}),V()}get visible(){return this.$$.ctx[2]}set visible(e){this.$$set({visible:e}),V()}get mode(){return this.$$.ctx[3]}set mode(e){this.$$set({mode:e}),V()}get value(){return this.$$.ctx[4]}set value(e){this.$$set({value:e}),V()}get source(){return this.$$.ctx[5]}set source(e){this.$$set({source:e}),V()}get label(){return this.$$.ctx[6]}set label(e){this.$$set({label:e}),V()}get root(){return this.$$.ctx[17]}set root(e){this.$$set({root:e}),V()}get show_label(){return this.$$.ctx[7]}set show_label(e){this.$$set({show_label:e}),V()}get root_url(){return this.$$.ctx[18]}set root_url(e){this.$$set({root_url:e}),V()}get container(){return this.$$.ctx[8]}set container(e){this.$$set({container:e}),V()}get scale(){return this.$$.ctx[9]}set scale(e){this.$$set({scale:e}),V()}get min_width(){return this.$$.ctx[10]}set min_width(e){this.$$set({min_width:e}),V()}get loading_status(){return this.$$.ctx[11]}set loading_status(e){this.$$set({loading_status:e}),V()}get autoplay(){return this.$$.ctx[12]}set autoplay(e){this.$$set({autoplay:e}),V()}get show_download_button(){return this.$$.ctx[13]}set show_download_button(e){this.$$set({show_download_button:e}),V()}get show_share_button(){return this.$$.ctx[14]}set show_share_button(e){this.$$set({show_share_button:e}),V()}}function qn(l){let e,t,n;function i(a){l[34](a)}let s={elem_id:l[1],elem_classes:l[2],visible:l[3],mode:l[4],source:l[6],label:l[7],root:l[8],show_label:l[9],root_url:l[12],container:l[13],scale:l[14],min_width:l[15],loading_status:l[16],autoplay:l[17],show_download_button:l[18],show_share_button:l[19]};return l[0]!==void 0&&(s.value=l[0]),e=new jn({props:s}),Pe.push(()=>De(e,"value",i)),e.$on("change",l[35]),e.$on("stream",l[36]),e.$on("drag",l[37]),e.$on("edit",l[38]),e.$on("play",l[39]),e.$on("pause",l[40]),e.$on("stop",l[41]),e.$on("end",l[42]),e.$on("start_recording",l[43]),e.$on("stop_recording",l[44]),e.$on("upload",l[45]),e.$on("error",l[46]),{c(){q(e.$$.fragment)},m(a,u){K(e,a,u),n=!0},p(a,u){const f={};u[0]&2&&(f.elem_id=a[1]),u[0]&4&&(f.elem_classes=a[2]),u[0]&8&&(f.visible=a[3]),u[0]&16&&(f.mode=a[4]),u[0]&64&&(f.source=a[6]),u[0]&128&&(f.label=a[7]),u[0]&256&&(f.root=a[8]),u[0]&512&&(f.show_label=a[9]),u[0]&4096&&(f.root_url=a[12]),u[0]&8192&&(f.container=a[13]),u[0]&16384&&(f.scale=a[14]),u[0]&32768&&(f.min_width=a[15]),u[0]&65536&&(f.loading_status=a[16]),u[0]&131072&&(f.autoplay=a[17]),u[0]&262144&&(f.show_download_button=a[18]),u[0]&524288&&(f.show_share_button=a[19]),!t&&u[0]&1&&(t=!0,f.value=a[0],Le(()=>t=!1)),e.$set(f)},i(a){n||(P(e.$$.fragment,a),n=!0)},o(a){R(e.$$.fragment,a),n=!1},d(a){J(e,a)}}}function Kn(l){let e,t,n;function i(a){l[21](a)}let 
s={elem_id:l[1],elem_classes:l[2],visible:l[3],mode:l[4],name:l[5],source:l[6],label:l[7],root:l[8],show_label:l[9],pending:l[10],streaming:l[11],root_url:l[12],container:l[13],scale:l[14],min_width:l[15],loading_status:l[16],autoplay:l[17],show_edit_button:l[20]};return l[0]!==void 0&&(s.value=l[0]),e=new Mn({props:s}),Pe.push(()=>De(e,"value",i)),e.$on("change",l[22]),e.$on("stream",l[23]),e.$on("drag",l[24]),e.$on("edit",l[25]),e.$on("play",l[26]),e.$on("pause",l[27]),e.$on("stop",l[28]),e.$on("end",l[29]),e.$on("start_recording",l[30]),e.$on("stop_recording",l[31]),e.$on("upload",l[32]),e.$on("error",l[33]),{c(){q(e.$$.fragment)},m(a,u){K(e,a,u),n=!0},p(a,u){const f={};u[0]&2&&(f.elem_id=a[1]),u[0]&4&&(f.elem_classes=a[2]),u[0]&8&&(f.visible=a[3]),u[0]&16&&(f.mode=a[4]),u[0]&32&&(f.name=a[5]),u[0]&64&&(f.source=a[6]),u[0]&128&&(f.label=a[7]),u[0]&256&&(f.root=a[8]),u[0]&512&&(f.show_label=a[9]),u[0]&1024&&(f.pending=a[10]),u[0]&2048&&(f.streaming=a[11]),u[0]&4096&&(f.root_url=a[12]),u[0]&8192&&(f.container=a[13]),u[0]&16384&&(f.scale=a[14]),u[0]&32768&&(f.min_width=a[15]),u[0]&65536&&(f.loading_status=a[16]),u[0]&131072&&(f.autoplay=a[17]),u[0]&1048576&&(f.show_edit_button=a[20]),!t&&u[0]&1&&(t=!0,f.value=a[0],Le(()=>t=!1)),e.$set(f)},i(a){n||(P(e.$$.fragment,a),n=!0)},o(a){R(e.$$.fragment,a),n=!1},d(a){J(e,a)}}}function Jn(l){let e,t,n,i;const s=[Kn,qn],a=[];function u(f,g){return f[4]==="dynamic"?0:1}return e=u(l),t=a[e]=s[e](l),{c(){t.c(),n=Ve()},m(f,g){a[e].m(f,g),I(f,n,g),i=!0},p(f,g){let r=e;e=u(f),e===r?a[e].p(f,g):(re(),R(a[r],1,1,()=>{a[r]=null}),_e(),t=a[e],t?t.p(f,g):(t=a[e]=s[e](f),t.c()),P(t,1),t.m(n.parentNode,n))},i(f){i||(P(t),i=!0)},o(f){R(t),i=!1},d(f){f&&N(n),a[e].d(f)}}}function Yn(l,e,t){let{elem_id:n=""}=e,{elem_classes:i=[]}=e,{visible:s=!0}=e,{mode:a}=e,{value:u=null}=e,{name:f}=e,{source:g}=e,{label:r}=e,{root:m}=e,{show_label:_}=e,{pending:d}=e,{streaming:c}=e,{root_url:S}=e,{container:D=!0}=e,{scale:w=null}=e,{min_width:L=void 0}=e,{loading_status:Y}=e,{autoplay:T=!1}=e,{show_download_button:G=!0}=e,{show_share_button:O=!1}=e,{show_edit_button:X=!0}=e;function Q(h){u=h,t(0,u)}function p(h){M.call(this,l,h)}function A(h){M.call(this,l,h)}function x(h){M.call(this,l,h)}function ne(h){M.call(this,l,h)}function se(h){M.call(this,l,h)}function v(h){M.call(this,l,h)}function ie(h){M.call(this,l,h)}function de(h){M.call(this,l,h)}function be(h){M.call(this,l,h)}function ue(h){M.call(this,l,h)}function me(h){M.call(this,l,h)}function le(h){M.call(this,l,h)}function he(h){u=h,t(0,u)}function k(h){M.call(this,l,h)}function fe(h){M.call(this,l,h)}function oe(h){M.call(this,l,h)}function Ae(h){M.call(this,l,h)}function Z(h){M.call(this,l,h)}function E(h){M.call(this,l,h)}function j(h){M.call(this,l,h)}function W(h){M.call(this,l,h)}function Se(h){M.call(this,l,h)}function Oe(h){M.call(this,l,h)}function Ne(h){M.call(this,l,h)}function Be(h){M.call(this,l,h)}return l.$$set=h=>{"elem_id"in h&&t(1,n=h.elem_id),"elem_classes"in h&&t(2,i=h.elem_classes),"visible"in h&&t(3,s=h.visible),"mode"in h&&t(4,a=h.mode),"value"in h&&t(0,u=h.value),"name"in h&&t(5,f=h.name),"source"in h&&t(6,g=h.source),"label"in h&&t(7,r=h.label),"root"in h&&t(8,m=h.root),"show_label"in h&&t(9,_=h.show_label),"pending"in h&&t(10,d=h.pending),"streaming"in h&&t(11,c=h.streaming),"root_url"in h&&t(12,S=h.root_url),"container"in h&&t(13,D=h.container),"scale"in h&&t(14,w=h.scale),"min_width"in h&&t(15,L=h.min_width),"loading_status"in h&&t(16,Y=h.loading_status),"autoplay"in 
h&&t(17,T=h.autoplay),"show_download_button"in h&&t(18,G=h.show_download_button),"show_share_button"in h&&t(19,O=h.show_share_button),"show_edit_button"in h&&t(20,X=h.show_edit_button)},[u,n,i,s,a,f,g,r,m,_,d,c,S,D,w,L,Y,T,G,O,X,Q,p,A,x,ne,se,v,ie,de,be,ue,me,le,he,k,fe,oe,Ae,Z,E,j,W,Se,Oe,Ne,Be]}class Xn extends we{constructor(e){super(),ke(this,e,Yn,Jn,ve,{elem_id:1,elem_classes:2,visible:3,mode:4,value:0,name:5,source:6,label:7,root:8,show_label:9,pending:10,streaming:11,root_url:12,container:13,scale:14,min_width:15,loading_status:16,autoplay:17,show_download_button:18,show_share_button:19,show_edit_button:20},null,[-1,-1])}get elem_id(){return this.$$.ctx[1]}set elem_id(e){this.$$set({elem_id:e}),V()}get elem_classes(){return this.$$.ctx[2]}set elem_classes(e){this.$$set({elem_classes:e}),V()}get visible(){return this.$$.ctx[3]}set visible(e){this.$$set({visible:e}),V()}get mode(){return this.$$.ctx[4]}set mode(e){this.$$set({mode:e}),V()}get value(){return this.$$.ctx[0]}set value(e){this.$$set({value:e}),V()}get name(){return this.$$.ctx[5]}set name(e){this.$$set({name:e}),V()}get source(){return this.$$.ctx[6]}set source(e){this.$$set({source:e}),V()}get label(){return this.$$.ctx[7]}set label(e){this.$$set({label:e}),V()}get root(){return this.$$.ctx[8]}set root(e){this.$$set({root:e}),V()}get show_label(){return this.$$.ctx[9]}set show_label(e){this.$$set({show_label:e}),V()}get pending(){return this.$$.ctx[10]}set pending(e){this.$$set({pending:e}),V()}get streaming(){return this.$$.ctx[11]}set streaming(e){this.$$set({streaming:e}),V()}get root_url(){return this.$$.ctx[12]}set root_url(e){this.$$set({root_url:e}),V()}get container(){return this.$$.ctx[13]}set container(e){this.$$set({container:e}),V()}get scale(){return this.$$.ctx[14]}set scale(e){this.$$set({scale:e}),V()}get min_width(){return this.$$.ctx[15]}set min_width(e){this.$$set({min_width:e}),V()}get loading_status(){return this.$$.ctx[16]}set loading_status(e){this.$$set({loading_status:e}),V()}get autoplay(){return this.$$.ctx[17]}set autoplay(e){this.$$set({autoplay:e}),V()}get show_download_button(){return this.$$.ctx[18]}set show_download_button(e){this.$$set({show_download_button:e}),V()}get show_share_button(){return this.$$.ctx[19]}set show_share_button(e){this.$$set({show_share_button:e}),V()}get show_edit_button(){return this.$$.ctx[20]}set show_edit_button(e){this.$$set({show_edit_button:e}),V()}}const it=Xn,at=["static","dynamic"];export{it as Component,at as modes}; -//# sourceMappingURL=index-7bf0115a.js.map diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/gradio/test_data/blocks_configs.py b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/gradio/test_data/blocks_configs.py deleted file mode 100644 index b8796d49d9af321d7290d86143f7ad06cac3b0df..0000000000000000000000000000000000000000 --- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/gradio/test_data/blocks_configs.py +++ /dev/null @@ -1,858 +0,0 @@ -XRAY_CONFIG = { - "version": "3.32.0\n", - "mode": "blocks", - "dev_mode": True, - "analytics_enabled": False, - "components": [ - { - "id": 1, - "type": "markdown", - "props": { - "value": "

<h1>Detect Disease From Scan</h1>\n<p>With this model you can lorem ipsum</p>\n<ul>\n<li>ipsum 1</li>\n<li>ipsum 2</li>\n</ul>
\n", - "name": "markdown", - "visible": True, - "rtl": False, - }, - "serializer": "StringSerializable", - "api_info": {"info": {"type": "string"}, "serialized_info": False}, - "example_inputs": {"raw": "Howdy!", "serialized": "Howdy!"}, - }, - { - "id": 2, - "type": "checkboxgroup", - "props": { - "choices": ["Covid", "Malaria", "Lung Cancer"], - "value": [], - "label": "Disease to Scan For", - "show_label": True, - "container": True, - "min_width": 160, - "name": "checkboxgroup", - "visible": True, - }, - "serializer": "ListStringSerializable", - "api_info": { - "info": {"type": "array", "items": {"type": "string"}}, - "serialized_info": False, - }, - "example_inputs": {"raw": "Covid", "serialized": "Covid"}, - }, - {"id": 3, "type": "tabs", "props": {"visible": True}}, - {"id": 4, "type": "tabitem", "props": {"label": "X-ray", "visible": True}}, - { - "id": 5, - "type": "row", - "props": { - "type": "row", - "variant": "default", - "equal_height": True, - "visible": True, - }, - }, - { - "id": 6, - "type": "image", - "props": { - "image_mode": "RGB", - "brush_color": "#000000", - "mask_opacity": 0.7, - "source": "upload", - "tool": "editor", - "streaming": False, - "mirror_webcam": True, - "selectable": False, - "show_label": True, - "container": True, - "min_width": 160, - "name": "image", - "show_share_button": False, - "show_download_button": True, - "visible": True, - }, - "serializer": "ImgSerializable", - "api_info": { - "info": { - "type": "string", - "description": "base64 representation of an image", - }, - "serialized_info": True, - }, - "example_inputs": { - "raw": "data:image/png;base64,R0lGODlhPQBEAPeoAJosM//AwO/AwHVYZ/z595kzAP/s7P+goOXMv8+fhw/v739/f+8PD98fH/8mJl+fn/9ZWb8/PzWlwv///6wWGbImAPgTEMImIN9gUFCEm/gDALULDN8PAD6atYdCTX9gUNKlj8wZAKUsAOzZz+UMAOsJAP/Z2ccMDA8PD/95eX5NWvsJCOVNQPtfX/8zM8+QePLl38MGBr8JCP+zs9myn/8GBqwpAP/GxgwJCPny78lzYLgjAJ8vAP9fX/+MjMUcAN8zM/9wcM8ZGcATEL+QePdZWf/29uc/P9cmJu9MTDImIN+/r7+/vz8/P8VNQGNugV8AAF9fX8swMNgTAFlDOICAgPNSUnNWSMQ5MBAQEJE3QPIGAM9AQMqGcG9vb6MhJsEdGM8vLx8fH98AANIWAMuQeL8fABkTEPPQ0OM5OSYdGFl5jo+Pj/+pqcsTE78wMFNGQLYmID4dGPvd3UBAQJmTkP+8vH9QUK+vr8ZWSHpzcJMmILdwcLOGcHRQUHxwcK9PT9DQ0O/v70w5MLypoG8wKOuwsP/g4P/Q0IcwKEswKMl8aJ9fX2xjdOtGRs/Pz+Dg4GImIP8gIH0sKEAwKKmTiKZ8aB/f39Wsl+LFt8dgUE9PT5x5aHBwcP+AgP+WltdgYMyZfyywz78AAAAAAAD///8AAP9mZv///wAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAACH5BAEAAKgALAAAAAA9AEQAAAj/AFEJHEiwoMGDCBMqXMiwocAbBww4nEhxoYkUpzJGrMixogkfGUNqlNixJEIDB0SqHGmyJSojM1bKZOmyop0gM3Oe2liTISKMOoPy7GnwY9CjIYcSRYm0aVKSLmE6nfq05QycVLPuhDrxBlCtYJUqNAq2bNWEBj6ZXRuyxZyDRtqwnXvkhACDV+euTeJm1Ki7A73qNWtFiF+/gA95Gly2CJLDhwEHMOUAAuOpLYDEgBxZ4GRTlC1fDnpkM+fOqD6DDj1aZpITp0dtGCDhr+fVuCu3zlg49ijaokTZTo27uG7Gjn2P+hI8+PDPERoUB318bWbfAJ5sUNFcuGRTYUqV/3ogfXp1rWlMc6awJjiAAd2fm4ogXjz56aypOoIde4OE5u/F9x199dlXnnGiHZWEYbGpsAEA3QXYnHwEFliKAgswgJ8LPeiUXGwedCAKABACCN+EA1pYIIYaFlcDhytd51sGAJbo3onOpajiihlO92KHGaUXGwWjUBChjSPiWJuOO/LYIm4v1tXfE6J4gCSJEZ7YgRYUNrkji9P55sF/ogxw5ZkSqIDaZBV6aSGYq/lGZplndkckZ98xoICbTcIJGQAZcNmdmUc210hs35nCyJ58fgmIKX5RQGOZowxaZwYA+JaoKQwswGijBV4C6SiTUmpphMspJx9unX4KaimjDv9aaXOEBteBqmuuxgEHoLX6Kqx+yXqqBANsgCtit4FWQAEkrNbpq7HSOmtwag5w57GrmlJBASEU18ADjUYb3ADTinIttsgSB1oJFfA63bduimuqKB1keqwUhoCSK374wbujvOSu4QG6UvxBRydcpKsav++Ca6G8A6Pr1x2kVMyHwsVxUA
LDq/krnrhPSOzXG1lUTIoffqGR7Goi2MAxbv6O2kEG56I7CSlRsEFKFVyovDJoIRTg7sugNRDGqCJzJgcKE0ywc0ELm6KBCCJo8DIPFeCWNGcyqNFE06ToAfV0HBRgxsvLThHn1oddQMrXj5DyAQgjEHSAJMWZwS3HPxT/QMbabI/iBCliMLEJKX2EEkomBAUCxRi42VDADxyTYDVogV+wSChqmKxEKCDAYFDFj4OmwbY7bDGdBhtrnTQYOigeChUmc1K3QTnAUfEgGFgAWt88hKA6aCRIXhxnQ1yg3BCayK44EWdkUQcBByEQChFXfCB776aQsG0BIlQgQgE8qO26X1h8cEUep8ngRBnOy74E9QgRgEAC8SvOfQkh7FDBDmS43PmGoIiKUUEGkMEC/PJHgxw0xH74yx/3XnaYRJgMB8obxQW6kL9QYEJ0FIFgByfIL7/IQAlvQwEpnAC7DtLNJCKUoO/w45c44GwCXiAFB/OXAATQryUxdN4LfFiwgjCNYg+kYMIEFkCKDs6PKAIJouyGWMS1FSKJOMRB/BoIxYJIUXFUxNwoIkEKPAgCBZSQHQ1A2EWDfDEUVLyADj5AChSIQW6gu10bE/JG2VnCZGfo4R4d0sdQoBAHhPjhIB94v/wRoRKQWGRHgrhGSQJxCS+0pCZbEhAAOw==", - "serialized": "https://raw.githubusercontent.com/gradio-app/gradio/main/test/test_files/bus.png", - }, - }, - { - "id": 7, - "type": "json", - "props": { - "show_label": True, - "container": True, - "min_width": 160, - "name": "json", - "visible": True, - }, - "serializer": "JSONSerializable", - "api_info": { - "info": {"type": {}, "description": "any valid json"}, - "serialized_info": True, - }, - "example_inputs": {"raw": {"a": 1, "b": 2}, "serialized": None}, - }, - { - "id": 8, - "type": "button", - "props": { - "value": "Run", - "variant": "secondary", - "interactive": True, - "name": "button", - "visible": True, - }, - "serializer": "StringSerializable", - "api_info": {"info": {"type": "string"}, "serialized_info": False}, - "example_inputs": {"raw": "Howdy!", "serialized": "Howdy!"}, - }, - {"id": 9, "type": "tabitem", "props": {"label": "CT Scan", "visible": True}}, - { - "id": 10, - "type": "row", - "props": { - "type": "row", - "variant": "default", - "equal_height": True, - "visible": True, - }, - }, - { - "id": 11, - "type": "image", - "props": { - "image_mode": "RGB", - "brush_color": "#000000", - "mask_opacity": 0.7, - "source": "upload", - "tool": "editor", - "streaming": False, - "mirror_webcam": True, - "selectable": False, - "show_label": True, - "container": True, - "min_width": 160, - "name": "image", - "show_share_button": False, - "show_download_button": True, - "visible": True, - }, - "serializer": "ImgSerializable", - "api_info": { - "info": { - "type": "string", - "description": "base64 representation of an image", - }, - "serialized_info": True, - }, - "example_inputs": { - "raw": 
"data:image/png;base64,R0lGODlhPQBEAPeoAJosM//AwO/AwHVYZ/z595kzAP/s7P+goOXMv8+fhw/v739/f+8PD98fH/8mJl+fn/9ZWb8/PzWlwv///6wWGbImAPgTEMImIN9gUFCEm/gDALULDN8PAD6atYdCTX9gUNKlj8wZAKUsAOzZz+UMAOsJAP/Z2ccMDA8PD/95eX5NWvsJCOVNQPtfX/8zM8+QePLl38MGBr8JCP+zs9myn/8GBqwpAP/GxgwJCPny78lzYLgjAJ8vAP9fX/+MjMUcAN8zM/9wcM8ZGcATEL+QePdZWf/29uc/P9cmJu9MTDImIN+/r7+/vz8/P8VNQGNugV8AAF9fX8swMNgTAFlDOICAgPNSUnNWSMQ5MBAQEJE3QPIGAM9AQMqGcG9vb6MhJsEdGM8vLx8fH98AANIWAMuQeL8fABkTEPPQ0OM5OSYdGFl5jo+Pj/+pqcsTE78wMFNGQLYmID4dGPvd3UBAQJmTkP+8vH9QUK+vr8ZWSHpzcJMmILdwcLOGcHRQUHxwcK9PT9DQ0O/v70w5MLypoG8wKOuwsP/g4P/Q0IcwKEswKMl8aJ9fX2xjdOtGRs/Pz+Dg4GImIP8gIH0sKEAwKKmTiKZ8aB/f39Wsl+LFt8dgUE9PT5x5aHBwcP+AgP+WltdgYMyZfyywz78AAAAAAAD///8AAP9mZv///wAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAACH5BAEAAKgALAAAAAA9AEQAAAj/AFEJHEiwoMGDCBMqXMiwocAbBww4nEhxoYkUpzJGrMixogkfGUNqlNixJEIDB0SqHGmyJSojM1bKZOmyop0gM3Oe2liTISKMOoPy7GnwY9CjIYcSRYm0aVKSLmE6nfq05QycVLPuhDrxBlCtYJUqNAq2bNWEBj6ZXRuyxZyDRtqwnXvkhACDV+euTeJm1Ki7A73qNWtFiF+/gA95Gly2CJLDhwEHMOUAAuOpLYDEgBxZ4GRTlC1fDnpkM+fOqD6DDj1aZpITp0dtGCDhr+fVuCu3zlg49ijaokTZTo27uG7Gjn2P+hI8+PDPERoUB318bWbfAJ5sUNFcuGRTYUqV/3ogfXp1rWlMc6awJjiAAd2fm4ogXjz56aypOoIde4OE5u/F9x199dlXnnGiHZWEYbGpsAEA3QXYnHwEFliKAgswgJ8LPeiUXGwedCAKABACCN+EA1pYIIYaFlcDhytd51sGAJbo3onOpajiihlO92KHGaUXGwWjUBChjSPiWJuOO/LYIm4v1tXfE6J4gCSJEZ7YgRYUNrkji9P55sF/ogxw5ZkSqIDaZBV6aSGYq/lGZplndkckZ98xoICbTcIJGQAZcNmdmUc210hs35nCyJ58fgmIKX5RQGOZowxaZwYA+JaoKQwswGijBV4C6SiTUmpphMspJx9unX4KaimjDv9aaXOEBteBqmuuxgEHoLX6Kqx+yXqqBANsgCtit4FWQAEkrNbpq7HSOmtwag5w57GrmlJBASEU18ADjUYb3ADTinIttsgSB1oJFfA63bduimuqKB1keqwUhoCSK374wbujvOSu4QG6UvxBRydcpKsav++Ca6G8A6Pr1x2kVMyHwsVxUALDq/krnrhPSOzXG1lUTIoffqGR7Goi2MAxbv6O2kEG56I7CSlRsEFKFVyovDJoIRTg7sugNRDGqCJzJgcKE0ywc0ELm6KBCCJo8DIPFeCWNGcyqNFE06ToAfV0HBRgxsvLThHn1oddQMrXj5DyAQgjEHSAJMWZwS3HPxT/QMbabI/iBCliMLEJKX2EEkomBAUCxRi42VDADxyTYDVogV+wSChqmKxEKCDAYFDFj4OmwbY7bDGdBhtrnTQYOigeChUmc1K3QTnAUfEgGFgAWt88hKA6aCRIXhxnQ1yg3BCayK44EWdkUQcBByEQChFXfCB776aQsG0BIlQgQgE8qO26X1h8cEUep8ngRBnOy74E9QgRgEAC8SvOfQkh7FDBDmS43PmGoIiKUUEGkMEC/PJHgxw0xH74yx/3XnaYRJgMB8obxQW6kL9QYEJ0FIFgByfIL7/IQAlvQwEpnAC7DtLNJCKUoO/w45c44GwCXiAFB/OXAATQryUxdN4LfFiwgjCNYg+kYMIEFkCKDs6PKAIJouyGWMS1FSKJOMRB/BoIxYJIUXFUxNwoIkEKPAgCBZSQHQ1A2EWDfDEUVLyADj5AChSIQW6gu10bE/JG2VnCZGfo4R4d0sdQoBAHhPjhIB94v/wRoRKQWGRHgrhGSQJxCS+0pCZbEhAAOw==", - "serialized": "https://raw.githubusercontent.com/gradio-app/gradio/main/test/test_files/bus.png", - }, - }, - { - "id": 12, - "type": "json", - "props": { - "show_label": True, - "container": True, - "min_width": 160, - "name": "json", - "visible": True, - }, - "serializer": "JSONSerializable", - "api_info": { - "info": {"type": {}, "description": "any valid json"}, - "serialized_info": True, - }, - "example_inputs": {"raw": {"a": 1, "b": 2}, "serialized": None}, - }, - { - "id": 13, - "type": "button", - "props": { - "value": "Run", - "variant": "secondary", - "interactive": True, - "name": "button", - "visible": True, - }, - "serializer": "StringSerializable", - "api_info": {"info": {"type": "string"}, "serialized_info": False}, - "example_inputs": {"raw": "Howdy!", "serialized": "Howdy!"}, - }, - { - "id": 14, - "type": "textbox", - "props": { - "autofocus": False, - "lines": 1, - "max_lines": 20, - 
"value": "", - "type": "text", - "show_label": True, - "container": True, - "min_width": 160, - "name": "textbox", - "show_copy_button": False, - "visible": True, - "rtl": False, - }, - "serializer": "StringSerializable", - "api_info": {"info": {"type": "string"}, "serialized_info": False}, - "example_inputs": {"raw": "Howdy!", "serialized": "Howdy!"}, - }, - { - "id": 15, - "type": "form", - "props": {"type": "form", "scale": 0, "min_width": 0, "visible": True}, - }, - { - "id": 16, - "type": "form", - "props": {"type": "form", "scale": 0, "min_width": 0, "visible": True}, - }, - ], - "css": None, - "title": "Gradio", - "space_id": False, - "enable_queue": None, - "show_error": True, - "show_api": True, - "is_colab": False, - "stylesheets": [ - "https://fonts.googleapis.com/css2?family=Source+Sans+Pro:wght@400;600&display=swap", - "https://fonts.googleapis.com/css2?family=IBM+Plex+Mono:wght@400;600&display=swap", - ], - "theme": "default", - "layout": { - "id": 0, - "children": [ - {"id": 1}, - {"id": 15, "children": [{"id": 2}]}, - { - "id": 3, - "children": [ - { - "id": 4, - "children": [ - {"id": 5, "children": [{"id": 6}, {"id": 7}]}, - {"id": 8}, - ], - }, - { - "id": 9, - "children": [ - {"id": 10, "children": [{"id": 11}, {"id": 12}]}, - {"id": 13}, - ], - }, - ], - }, - {"id": 16, "children": [{"id": 14}]}, - ], - }, - "dependencies": [ - { - "targets": [8], - "trigger": "click", - "inputs": [2, 6], - "outputs": [7], - "backend_fn": True, - "js": None, - "queue": None, - "api_name": None, - "scroll_to_output": False, - "every": None, - "batch": False, - "max_batch_size": 4, - "cancels": [], - "types": {"continuous": False, "generator": False}, - "collects_event_data": False, - "trigger_after": None, - "trigger_only_on_success": False, - "show_progress": "full", - }, - { - "targets": [13], - "trigger": "click", - "inputs": [2, 11], - "outputs": [12], - "backend_fn": True, - "js": None, - "queue": None, - "api_name": None, - "scroll_to_output": False, - "every": None, - "batch": False, - "max_batch_size": 4, - "cancels": [], - "types": {"continuous": False, "generator": False}, - "collects_event_data": False, - "trigger_after": None, - "trigger_only_on_success": False, - "show_progress": "full", - }, - { - "targets": [], - "trigger": "load", - "inputs": [], - "outputs": [14], - "backend_fn": True, - "js": None, - "queue": None, - "api_name": None, - "scroll_to_output": False, - "every": None, - "batch": False, - "max_batch_size": 4, - "cancels": [], - "types": {"continuous": False, "generator": False}, - "collects_event_data": False, - "trigger_after": None, - "trigger_only_on_success": False, - "show_progress": "full", - }, - ], -} - -XRAY_CONFIG_DIFF_IDS = { - "version": "3.32.0\n", - "mode": "blocks", - "dev_mode": True, - "analytics_enabled": False, - "components": [ - { - "id": 6, - "type": "markdown", - "props": { - "value": "

<h1>Detect Disease From Scan</h1>\n<p>With this model you can lorem ipsum</p>\n<ul>\n<li>ipsum 1</li>\n<li>ipsum 2</li>\n</ul>
\n", - "name": "markdown", - "visible": True, - "rtl": False, - }, - "serializer": "StringSerializable", - "api_info": {"info": {"type": "string"}, "serialized_info": False}, - "example_inputs": {"raw": "Howdy!", "serialized": "Howdy!"}, - }, - { - "id": 7, - "type": "checkboxgroup", - "props": { - "choices": ["Covid", "Malaria", "Lung Cancer"], - "value": [], - "label": "Disease to Scan For", - "show_label": True, - "container": True, - "min_width": 160, - "name": "checkboxgroup", - "visible": True, - }, - "serializer": "ListStringSerializable", - "api_info": { - "info": {"type": "array", "items": {"type": "string"}}, - "serialized_info": False, - }, - "example_inputs": {"raw": "Covid", "serialized": "Covid"}, - }, - {"id": 8, "type": "tabs", "props": {"visible": True}}, - {"id": 9, "type": "tabitem", "props": {"label": "X-ray", "visible": True}}, - { - "id": 10, - "type": "row", - "props": { - "type": "row", - "variant": "default", - "equal_height": True, - "visible": True, - }, - }, - { - "id": 11, - "type": "image", - "props": { - "image_mode": "RGB", - "brush_color": "#000000", - "mask_opacity": 0.7, - "source": "upload", - "tool": "editor", - "streaming": False, - "mirror_webcam": True, - "selectable": False, - "show_label": True, - "container": True, - "min_width": 160, - "name": "image", - "show_share_button": False, - "show_download_button": True, - "visible": True, - }, - "serializer": "ImgSerializable", - "api_info": { - "info": { - "type": "string", - "description": "base64 representation of an image", - }, - "serialized_info": True, - }, - "example_inputs": { - "raw": "data:image/png;base64,R0lGODlhPQBEAPeoAJosM//AwO/AwHVYZ/z595kzAP/s7P+goOXMv8+fhw/v739/f+8PD98fH/8mJl+fn/9ZWb8/PzWlwv///6wWGbImAPgTEMImIN9gUFCEm/gDALULDN8PAD6atYdCTX9gUNKlj8wZAKUsAOzZz+UMAOsJAP/Z2ccMDA8PD/95eX5NWvsJCOVNQPtfX/8zM8+QePLl38MGBr8JCP+zs9myn/8GBqwpAP/GxgwJCPny78lzYLgjAJ8vAP9fX/+MjMUcAN8zM/9wcM8ZGcATEL+QePdZWf/29uc/P9cmJu9MTDImIN+/r7+/vz8/P8VNQGNugV8AAF9fX8swMNgTAFlDOICAgPNSUnNWSMQ5MBAQEJE3QPIGAM9AQMqGcG9vb6MhJsEdGM8vLx8fH98AANIWAMuQeL8fABkTEPPQ0OM5OSYdGFl5jo+Pj/+pqcsTE78wMFNGQLYmID4dGPvd3UBAQJmTkP+8vH9QUK+vr8ZWSHpzcJMmILdwcLOGcHRQUHxwcK9PT9DQ0O/v70w5MLypoG8wKOuwsP/g4P/Q0IcwKEswKMl8aJ9fX2xjdOtGRs/Pz+Dg4GImIP8gIH0sKEAwKKmTiKZ8aB/f39Wsl+LFt8dgUE9PT5x5aHBwcP+AgP+WltdgYMyZfyywz78AAAAAAAD///8AAP9mZv///wAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAACH5BAEAAKgALAAAAAA9AEQAAAj/AFEJHEiwoMGDCBMqXMiwocAbBww4nEhxoYkUpzJGrMixogkfGUNqlNixJEIDB0SqHGmyJSojM1bKZOmyop0gM3Oe2liTISKMOoPy7GnwY9CjIYcSRYm0aVKSLmE6nfq05QycVLPuhDrxBlCtYJUqNAq2bNWEBj6ZXRuyxZyDRtqwnXvkhACDV+euTeJm1Ki7A73qNWtFiF+/gA95Gly2CJLDhwEHMOUAAuOpLYDEgBxZ4GRTlC1fDnpkM+fOqD6DDj1aZpITp0dtGCDhr+fVuCu3zlg49ijaokTZTo27uG7Gjn2P+hI8+PDPERoUB318bWbfAJ5sUNFcuGRTYUqV/3ogfXp1rWlMc6awJjiAAd2fm4ogXjz56aypOoIde4OE5u/F9x199dlXnnGiHZWEYbGpsAEA3QXYnHwEFliKAgswgJ8LPeiUXGwedCAKABACCN+EA1pYIIYaFlcDhytd51sGAJbo3onOpajiihlO92KHGaUXGwWjUBChjSPiWJuOO/LYIm4v1tXfE6J4gCSJEZ7YgRYUNrkji9P55sF/ogxw5ZkSqIDaZBV6aSGYq/lGZplndkckZ98xoICbTcIJGQAZcNmdmUc210hs35nCyJ58fgmIKX5RQGOZowxaZwYA+JaoKQwswGijBV4C6SiTUmpphMspJx9unX4KaimjDv9aaXOEBteBqmuuxgEHoLX6Kqx+yXqqBANsgCtit4FWQAEkrNbpq7HSOmtwag5w57GrmlJBASEU18ADjUYb3ADTinIttsgSB1oJFfA63bduimuqKB1keqwUhoCSK374wbujvOSu4QG6UvxBRydcpKsav++Ca6G8A6Pr1x2kVMyHwsVx
UALDq/krnrhPSOzXG1lUTIoffqGR7Goi2MAxbv6O2kEG56I7CSlRsEFKFVyovDJoIRTg7sugNRDGqCJzJgcKE0ywc0ELm6KBCCJo8DIPFeCWNGcyqNFE06ToAfV0HBRgxsvLThHn1oddQMrXj5DyAQgjEHSAJMWZwS3HPxT/QMbabI/iBCliMLEJKX2EEkomBAUCxRi42VDADxyTYDVogV+wSChqmKxEKCDAYFDFj4OmwbY7bDGdBhtrnTQYOigeChUmc1K3QTnAUfEgGFgAWt88hKA6aCRIXhxnQ1yg3BCayK44EWdkUQcBByEQChFXfCB776aQsG0BIlQgQgE8qO26X1h8cEUep8ngRBnOy74E9QgRgEAC8SvOfQkh7FDBDmS43PmGoIiKUUEGkMEC/PJHgxw0xH74yx/3XnaYRJgMB8obxQW6kL9QYEJ0FIFgByfIL7/IQAlvQwEpnAC7DtLNJCKUoO/w45c44GwCXiAFB/OXAATQryUxdN4LfFiwgjCNYg+kYMIEFkCKDs6PKAIJouyGWMS1FSKJOMRB/BoIxYJIUXFUxNwoIkEKPAgCBZSQHQ1A2EWDfDEUVLyADj5AChSIQW6gu10bE/JG2VnCZGfo4R4d0sdQoBAHhPjhIB94v/wRoRKQWGRHgrhGSQJxCS+0pCZbEhAAOw==", - "serialized": "https://raw.githubusercontent.com/gradio-app/gradio/main/test/test_files/bus.png", - }, - }, - { - "id": 12, - "type": "json", - "props": { - "show_label": True, - "container": True, - "min_width": 160, - "name": "json", - "visible": True, - }, - "serializer": "JSONSerializable", - "api_info": { - "info": {"type": {}, "description": "any valid json"}, - "serialized_info": True, - }, - "example_inputs": {"raw": {"a": 1, "b": 2}, "serialized": None}, - }, - { - "id": 13, - "type": "button", - "props": { - "value": "Run", - "variant": "secondary", - "interactive": True, - "name": "button", - "visible": True, - }, - "serializer": "StringSerializable", - "api_info": {"info": {"type": "string"}, "serialized_info": False}, - "example_inputs": {"raw": "Howdy!", "serialized": "Howdy!"}, - }, - {"id": 14, "type": "tabitem", "props": {"label": "CT Scan", "visible": True}}, - { - "id": 15, - "type": "row", - "props": { - "type": "row", - "variant": "default", - "equal_height": True, - "visible": True, - }, - }, - { - "id": 16, - "type": "image", - "props": { - "image_mode": "RGB", - "brush_color": "#000000", - "mask_opacity": 0.7, - "source": "upload", - "tool": "editor", - "streaming": False, - "mirror_webcam": True, - "selectable": False, - "show_label": True, - "container": True, - "min_width": 160, - "name": "image", - "show_share_button": False, - "show_download_button": True, - "visible": True, - }, - "serializer": "ImgSerializable", - "api_info": { - "info": { - "type": "string", - "description": "base64 representation of an image", - }, - "serialized_info": True, - }, - "example_inputs": { - "raw": 
"data:image/png;base64,R0lGODlhPQBEAPeoAJosM//AwO/AwHVYZ/z595kzAP/s7P+goOXMv8+fhw/v739/f+8PD98fH/8mJl+fn/9ZWb8/PzWlwv///6wWGbImAPgTEMImIN9gUFCEm/gDALULDN8PAD6atYdCTX9gUNKlj8wZAKUsAOzZz+UMAOsJAP/Z2ccMDA8PD/95eX5NWvsJCOVNQPtfX/8zM8+QePLl38MGBr8JCP+zs9myn/8GBqwpAP/GxgwJCPny78lzYLgjAJ8vAP9fX/+MjMUcAN8zM/9wcM8ZGcATEL+QePdZWf/29uc/P9cmJu9MTDImIN+/r7+/vz8/P8VNQGNugV8AAF9fX8swMNgTAFlDOICAgPNSUnNWSMQ5MBAQEJE3QPIGAM9AQMqGcG9vb6MhJsEdGM8vLx8fH98AANIWAMuQeL8fABkTEPPQ0OM5OSYdGFl5jo+Pj/+pqcsTE78wMFNGQLYmID4dGPvd3UBAQJmTkP+8vH9QUK+vr8ZWSHpzcJMmILdwcLOGcHRQUHxwcK9PT9DQ0O/v70w5MLypoG8wKOuwsP/g4P/Q0IcwKEswKMl8aJ9fX2xjdOtGRs/Pz+Dg4GImIP8gIH0sKEAwKKmTiKZ8aB/f39Wsl+LFt8dgUE9PT5x5aHBwcP+AgP+WltdgYMyZfyywz78AAAAAAAD///8AAP9mZv///wAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAACH5BAEAAKgALAAAAAA9AEQAAAj/AFEJHEiwoMGDCBMqXMiwocAbBww4nEhxoYkUpzJGrMixogkfGUNqlNixJEIDB0SqHGmyJSojM1bKZOmyop0gM3Oe2liTISKMOoPy7GnwY9CjIYcSRYm0aVKSLmE6nfq05QycVLPuhDrxBlCtYJUqNAq2bNWEBj6ZXRuyxZyDRtqwnXvkhACDV+euTeJm1Ki7A73qNWtFiF+/gA95Gly2CJLDhwEHMOUAAuOpLYDEgBxZ4GRTlC1fDnpkM+fOqD6DDj1aZpITp0dtGCDhr+fVuCu3zlg49ijaokTZTo27uG7Gjn2P+hI8+PDPERoUB318bWbfAJ5sUNFcuGRTYUqV/3ogfXp1rWlMc6awJjiAAd2fm4ogXjz56aypOoIde4OE5u/F9x199dlXnnGiHZWEYbGpsAEA3QXYnHwEFliKAgswgJ8LPeiUXGwedCAKABACCN+EA1pYIIYaFlcDhytd51sGAJbo3onOpajiihlO92KHGaUXGwWjUBChjSPiWJuOO/LYIm4v1tXfE6J4gCSJEZ7YgRYUNrkji9P55sF/ogxw5ZkSqIDaZBV6aSGYq/lGZplndkckZ98xoICbTcIJGQAZcNmdmUc210hs35nCyJ58fgmIKX5RQGOZowxaZwYA+JaoKQwswGijBV4C6SiTUmpphMspJx9unX4KaimjDv9aaXOEBteBqmuuxgEHoLX6Kqx+yXqqBANsgCtit4FWQAEkrNbpq7HSOmtwag5w57GrmlJBASEU18ADjUYb3ADTinIttsgSB1oJFfA63bduimuqKB1keqwUhoCSK374wbujvOSu4QG6UvxBRydcpKsav++Ca6G8A6Pr1x2kVMyHwsVxUALDq/krnrhPSOzXG1lUTIoffqGR7Goi2MAxbv6O2kEG56I7CSlRsEFKFVyovDJoIRTg7sugNRDGqCJzJgcKE0ywc0ELm6KBCCJo8DIPFeCWNGcyqNFE06ToAfV0HBRgxsvLThHn1oddQMrXj5DyAQgjEHSAJMWZwS3HPxT/QMbabI/iBCliMLEJKX2EEkomBAUCxRi42VDADxyTYDVogV+wSChqmKxEKCDAYFDFj4OmwbY7bDGdBhtrnTQYOigeChUmc1K3QTnAUfEgGFgAWt88hKA6aCRIXhxnQ1yg3BCayK44EWdkUQcBByEQChFXfCB776aQsG0BIlQgQgE8qO26X1h8cEUep8ngRBnOy74E9QgRgEAC8SvOfQkh7FDBDmS43PmGoIiKUUEGkMEC/PJHgxw0xH74yx/3XnaYRJgMB8obxQW6kL9QYEJ0FIFgByfIL7/IQAlvQwEpnAC7DtLNJCKUoO/w45c44GwCXiAFB/OXAATQryUxdN4LfFiwgjCNYg+kYMIEFkCKDs6PKAIJouyGWMS1FSKJOMRB/BoIxYJIUXFUxNwoIkEKPAgCBZSQHQ1A2EWDfDEUVLyADj5AChSIQW6gu10bE/JG2VnCZGfo4R4d0sdQoBAHhPjhIB94v/wRoRKQWGRHgrhGSQJxCS+0pCZbEhAAOw==", - "serialized": "https://raw.githubusercontent.com/gradio-app/gradio/main/test/test_files/bus.png", - }, - }, - { - "id": 17, - "type": "json", - "props": { - "show_label": True, - "container": True, - "min_width": 160, - "name": "json", - "visible": True, - }, - "serializer": "JSONSerializable", - "api_info": { - "info": {"type": {}, "description": "any valid json"}, - "serialized_info": True, - }, - "example_inputs": {"raw": {"a": 1, "b": 2}, "serialized": None}, - }, - { - "id": 18, - "type": "button", - "props": { - "value": "Run", - "variant": "secondary", - "interactive": True, - "name": "button", - "visible": True, - }, - "serializer": "StringSerializable", - "api_info": {"info": {"type": "string"}, "serialized_info": False}, - "example_inputs": {"raw": "Howdy!", "serialized": "Howdy!"}, - }, - { - "id": 19, - "type": "textbox", - "props": { - "autofocus": False, - "lines": 1, - "max_lines": 20, - 
"value": "", - "type": "text", - "show_label": True, - "container": True, - "min_width": 160, - "name": "textbox", - "show_copy_button": False, - "visible": True, - "rtl": False, - }, - "serializer": "StringSerializable", - "api_info": {"info": {"type": "string"}, "serialized_info": False}, - "example_inputs": {"raw": "Howdy!", "serialized": "Howdy!"}, - }, - { - "id": 20, - "type": "form", - "props": {"type": "form", "scale": 0, "min_width": 0, "visible": True}, - }, - { - "id": 21, - "type": "form", - "props": {"type": "form", "scale": 0, "min_width": 0, "visible": True}, - }, - ], - "css": None, - "title": "Gradio", - "space_id": False, - "enable_queue": None, - "show_error": True, - "show_api": True, - "is_colab": False, - "stylesheets": [ - "https://fonts.googleapis.com/css2?family=Source+Sans+Pro:wght@400;600&display=swap", - "https://fonts.googleapis.com/css2?family=IBM+Plex+Mono:wght@400;600&display=swap", - ], - "theme": "default", - "layout": { - "id": 0, - "children": [ - {"id": 6}, - {"id": 20, "children": [{"id": 7}]}, - { - "id": 8, - "children": [ - { - "id": 9, - "children": [ - {"id": 10, "children": [{"id": 11}, {"id": 12}]}, - {"id": 13}, - ], - }, - { - "id": 14, - "children": [ - {"id": 15, "children": [{"id": 16}, {"id": 17}]}, - {"id": 18}, - ], - }, - ], - }, - {"id": 21, "children": [{"id": 19}]}, - ], - }, - "dependencies": [ - { - "targets": [13], - "trigger": "click", - "inputs": [7, 11], - "outputs": [12], - "backend_fn": True, - "js": None, - "queue": None, - "api_name": None, - "scroll_to_output": False, - "every": None, - "batch": False, - "max_batch_size": 4, - "cancels": [], - "types": {"continuous": False, "generator": False}, - "collects_event_data": False, - "trigger_after": None, - "trigger_only_on_success": False, - "show_progress": "full", - }, - { - "targets": [18], - "trigger": "click", - "inputs": [7, 16], - "outputs": [17], - "backend_fn": True, - "js": None, - "queue": None, - "api_name": None, - "scroll_to_output": False, - "every": None, - "batch": False, - "max_batch_size": 4, - "cancels": [], - "types": {"continuous": False, "generator": False}, - "collects_event_data": False, - "trigger_after": None, - "trigger_only_on_success": False, - "show_progress": "full", - }, - { - "targets": [], - "trigger": "load", - "inputs": [], - "outputs": [19], - "backend_fn": True, - "js": None, - "queue": None, - "api_name": None, - "scroll_to_output": False, - "every": None, - "batch": False, - "max_batch_size": 4, - "cancels": [], - "types": {"continuous": False, "generator": False}, - "collects_event_data": False, - "trigger_after": None, - "trigger_only_on_success": False, - "show_progress": "full", - }, - ], -} - - -XRAY_CONFIG_WITH_MISTAKE = { - "mode": "blocks", - "dev_mode": True, - "analytics_enabled": False, - "theme": "default", - "components": [ - { - "id": 1, - "type": "markdown", - "props": { - "value": "

<h1>Detect Disease From Scan</h1>\n<p>With this model you can lorem ipsum</p>\n<ul>\n<li>ipsum 1</li>\n<li>ipsum 2</li>\n</ul>
\n", - "name": "markdown", - "rtl": False, - }, - }, - { - "id": 2, - "type": "checkboxgroup", - "props": { - "choices": ["Covid", "Malaria", "Lung Cancer"], - "value": [], - "name": "checkboxgroup", - "show_label": True, - "label": "Disease to Scan For", - "container": True, - "min_width": 160, - }, - }, - { - "id": 3, - "type": "tabs", - "props": { - "value": True, - }, - }, - { - "id": 4, - "type": "tabitem", - "props": { - "label": "X-ray", - "value": True, - }, - }, - { - "id": 5, - "type": "row", - "props": { - "type": "row", - "variant": "default", - "equal_height": True, - "value": True, - }, - }, - { - "id": 6, - "type": "image", - "props": { - "image_mode": "RGB", - "brush_color": "#000000", - "mask_opacity": 0.7, - "source": "upload", - "streaming": False, - "mirror_webcam": True, - "tool": "editor", - "name": "image", - "show_share_button": False, - "selectable": False, - }, - }, - { - "id": 7, - "type": "json", - "props": { - "name": "json", - }, - }, - { - "id": 8, - "type": "button", - "props": { - "value": "Run", - "name": "button", - "interactive": True, - "css": {"background-color": "red", "--hover-color": "orange"}, - "variant": "secondary", - }, - }, - { - "id": 9, - "type": "tabitem", - "props": { - "show_label": True, - "label": "CT Scan", - "value": True, - }, - }, - { - "id": 10, - "type": "row", - "props": { - "type": "row", - "variant": "default", - "equal_height": True, - "value": True, - }, - }, - { - "id": 11, - "type": "image", - "props": { - "image_mode": "RGB", - "brush_color": "#000000", - "mask_opacity": 0.7, - "source": "upload", - "tool": "editor", - "streaming": False, - "mirror_webcam": True, - "name": "image", - "show_share_button": False, - "selectable": False, - }, - }, - { - "id": 12, - "type": "json", - "props": { - "name": "json", - }, - }, - { - "id": 13, - "type": "button", - "props": { - "value": "Run", - "interactive": True, - "name": "button", - "variant": "secondary", - }, - }, - { - "id": 14, - "type": "textbox", - "props": { - "lines": 1, - "value": "", - "name": "textbox", - "show_copy_button": False, - "type": "text", - "rtl": False, - "autofocus": False, - }, - }, - ], - "layout": { - "id": 0, - "children": [ - {"id": 1}, - {"id": 2}, - { - "id": 3, - "children": [ - { - "id": 4, - "children": [ - {"id": 5, "children": [{"id": 6}, {"id": 7}]}, - {"id": 8}, - ], - }, - { - "id": 9, - "children": [ - {"id": 10, "children": [{"id": 12}, {"id": 11}]}, - {"id": 13}, - ], - }, - ], - }, - {"id": 14}, - ], - }, - "dependencies": [ - { - "targets": [8], - "trigger": "click", - "inputs": [2, 6], - "outputs": [7], - "api_name": None, - "scroll_to_output": False, - "cancels": [], - "trigger_after": None, - "trigger_only_on_success": False, - "show_progress": "full", - }, - { - "targets": [13], - "trigger": "click", - "inputs": [2, 11], - "outputs": [12], - "api_name": None, - "scroll_to_output": False, - "cancels": [], - "trigger_after": None, - "trigger_only_on_success": False, - "show_progress": "full", - }, - ], -} diff --git a/spaces/declare-lab/tango/diffusers/src/diffusers/schedulers/scheduling_k_dpm_2_discrete.py b/spaces/declare-lab/tango/diffusers/src/diffusers/schedulers/scheduling_k_dpm_2_discrete.py deleted file mode 100644 index 809da798f889ebe9d7788fd6c422918cd8e1b440..0000000000000000000000000000000000000000 --- a/spaces/declare-lab/tango/diffusers/src/diffusers/schedulers/scheduling_k_dpm_2_discrete.py +++ /dev/null @@ -1,333 +0,0 @@ -# Copyright 2023 Katherine Crowson, The HuggingFace Team and hlky. All rights reserved. 
-# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -import math -from typing import List, Optional, Tuple, Union - -import numpy as np -import torch - -from ..configuration_utils import ConfigMixin, register_to_config -from .scheduling_utils import KarrasDiffusionSchedulers, SchedulerMixin, SchedulerOutput - - -# Copied from diffusers.schedulers.scheduling_ddpm.betas_for_alpha_bar -def betas_for_alpha_bar(num_diffusion_timesteps, max_beta=0.999) -> torch.Tensor: - """ - Create a beta schedule that discretizes the given alpha_t_bar function, which defines the cumulative product of - (1-beta) over time from t = [0,1]. - - Contains a function alpha_bar that takes an argument t and transforms it to the cumulative product of (1-beta) up - to that part of the diffusion process. - - - Args: - num_diffusion_timesteps (`int`): the number of betas to produce. - max_beta (`float`): the maximum beta to use; use values lower than 1 to - prevent singularities. - - Returns: - betas (`np.ndarray`): the betas used by the scheduler to step the model outputs - """ - - def alpha_bar(time_step): - return math.cos((time_step + 0.008) / 1.008 * math.pi / 2) ** 2 - - betas = [] - for i in range(num_diffusion_timesteps): - t1 = i / num_diffusion_timesteps - t2 = (i + 1) / num_diffusion_timesteps - betas.append(min(1 - alpha_bar(t2) / alpha_bar(t1), max_beta)) - return torch.tensor(betas, dtype=torch.float32) - - -class KDPM2DiscreteScheduler(SchedulerMixin, ConfigMixin): - """ - Scheduler created by @crowsonkb in [k_diffusion](https://github.com/crowsonkb/k-diffusion), see: - https://github.com/crowsonkb/k-diffusion/blob/5b3af030dd83e0297272d861c19477735d0317ec/k_diffusion/sampling.py#L188 - - Scheduler inspired by DPM-Solver-2 and Algorthim 2 from Karras et al. (2022). - - [`~ConfigMixin`] takes care of storing all config attributes that are passed in the scheduler's `__init__` - function, such as `num_train_timesteps`. They can be accessed via `scheduler.config.num_train_timesteps`. - [`SchedulerMixin`] provides general loading and saving functionality via the [`SchedulerMixin.save_pretrained`] and - [`~SchedulerMixin.from_pretrained`] functions. - - Args: - num_train_timesteps (`int`): number of diffusion steps used to train the model. beta_start (`float`): the - starting `beta` value of inference. beta_end (`float`): the final `beta` value. beta_schedule (`str`): - the beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from - `linear` or `scaled_linear`. - trained_betas (`np.ndarray`, optional): - option to pass an array of betas directly to the constructor to bypass `beta_start`, `beta_end` etc. - options to clip the variance used when adding noise to the denoised sample. Choose from `fixed_small`, - `fixed_small_log`, `fixed_large`, `fixed_large_log`, `learned` or `learned_range`. 
- prediction_type (`str`, default `epsilon`, optional): - prediction type of the scheduler function, one of `epsilon` (predicting the noise of the diffusion - process), `sample` (directly predicting the noisy sample`) or `v_prediction` (see section 2.4 - https://imagen.research.google/video/paper.pdf) - """ - - _compatibles = [e.name for e in KarrasDiffusionSchedulers] - order = 2 - - @register_to_config - def __init__( - self, - num_train_timesteps: int = 1000, - beta_start: float = 0.00085, # sensible defaults - beta_end: float = 0.012, - beta_schedule: str = "linear", - trained_betas: Optional[Union[np.ndarray, List[float]]] = None, - prediction_type: str = "epsilon", - ): - if trained_betas is not None: - self.betas = torch.tensor(trained_betas, dtype=torch.float32) - elif beta_schedule == "linear": - self.betas = torch.linspace(beta_start, beta_end, num_train_timesteps, dtype=torch.float32) - elif beta_schedule == "scaled_linear": - # this schedule is very specific to the latent diffusion model. - self.betas = ( - torch.linspace(beta_start**0.5, beta_end**0.5, num_train_timesteps, dtype=torch.float32) ** 2 - ) - elif beta_schedule == "squaredcos_cap_v2": - # Glide cosine schedule - self.betas = betas_for_alpha_bar(num_train_timesteps) - else: - raise NotImplementedError(f"{beta_schedule} does is not implemented for {self.__class__}") - - self.alphas = 1.0 - self.betas - self.alphas_cumprod = torch.cumprod(self.alphas, dim=0) - - # set all values - self.set_timesteps(num_train_timesteps, None, num_train_timesteps) - - def index_for_timestep(self, timestep): - indices = (self.timesteps == timestep).nonzero() - if self.state_in_first_order: - pos = -1 - else: - pos = 0 - return indices[pos].item() - - def scale_model_input( - self, - sample: torch.FloatTensor, - timestep: Union[float, torch.FloatTensor], - ) -> torch.FloatTensor: - """ - Args: - Ensures interchangeability with schedulers that need to scale the denoising model input depending on the - current timestep. - sample (`torch.FloatTensor`): input sample timestep (`int`, optional): current timestep - Returns: - `torch.FloatTensor`: scaled input sample - """ - step_index = self.index_for_timestep(timestep) - - if self.state_in_first_order: - sigma = self.sigmas[step_index] - else: - sigma = self.sigmas_interpol[step_index] - - sample = sample / ((sigma**2 + 1) ** 0.5) - return sample - - def set_timesteps( - self, - num_inference_steps: int, - device: Union[str, torch.device] = None, - num_train_timesteps: Optional[int] = None, - ): - """ - Sets the timesteps used for the diffusion chain. Supporting function to be run before inference. - - Args: - num_inference_steps (`int`): - the number of diffusion steps used when generating samples with a pre-trained model. - device (`str` or `torch.device`, optional): - the device to which the timesteps should be moved to. If `None`, the timesteps are not moved. 
- """ - self.num_inference_steps = num_inference_steps - - num_train_timesteps = num_train_timesteps or self.config.num_train_timesteps - - timesteps = np.linspace(0, num_train_timesteps - 1, num_inference_steps, dtype=float)[::-1].copy() - - sigmas = np.array(((1 - self.alphas_cumprod) / self.alphas_cumprod) ** 0.5) - self.log_sigmas = torch.from_numpy(np.log(sigmas)).to(device) - - sigmas = np.interp(timesteps, np.arange(0, len(sigmas)), sigmas) - sigmas = np.concatenate([sigmas, [0.0]]).astype(np.float32) - sigmas = torch.from_numpy(sigmas).to(device=device) - - # interpolate sigmas - sigmas_interpol = sigmas.log().lerp(sigmas.roll(1).log(), 0.5).exp() - - self.sigmas = torch.cat([sigmas[:1], sigmas[1:].repeat_interleave(2), sigmas[-1:]]) - self.sigmas_interpol = torch.cat( - [sigmas_interpol[:1], sigmas_interpol[1:].repeat_interleave(2), sigmas_interpol[-1:]] - ) - - # standard deviation of the initial noise distribution - self.init_noise_sigma = self.sigmas.max() - - if str(device).startswith("mps"): - # mps does not support float64 - timesteps = torch.from_numpy(timesteps).to(device, dtype=torch.float32) - else: - timesteps = torch.from_numpy(timesteps).to(device) - - # interpolate timesteps - timesteps_interpol = self.sigma_to_t(sigmas_interpol).to(device) - interleaved_timesteps = torch.stack((timesteps_interpol[1:-1, None], timesteps[1:, None]), dim=-1).flatten() - - self.timesteps = torch.cat([timesteps[:1], interleaved_timesteps]) - - self.sample = None - - def sigma_to_t(self, sigma): - # get log sigma - log_sigma = sigma.log() - - # get distribution - dists = log_sigma - self.log_sigmas[:, None] - - # get sigmas range - low_idx = dists.ge(0).cumsum(dim=0).argmax(dim=0).clamp(max=self.log_sigmas.shape[0] - 2) - high_idx = low_idx + 1 - - low = self.log_sigmas[low_idx] - high = self.log_sigmas[high_idx] - - # interpolate sigmas - w = (low - log_sigma) / (low - high) - w = w.clamp(0, 1) - - # transform interpolation to time range - t = (1 - w) * low_idx + w * high_idx - t = t.view(sigma.shape) - return t - - @property - def state_in_first_order(self): - return self.sample is None - - def step( - self, - model_output: Union[torch.FloatTensor, np.ndarray], - timestep: Union[float, torch.FloatTensor], - sample: Union[torch.FloatTensor, np.ndarray], - return_dict: bool = True, - ) -> Union[SchedulerOutput, Tuple]: - """ - Args: - Predict the sample at the previous timestep by reversing the SDE. Core function to propagate the diffusion - process from the learned model outputs (most often the predicted noise). - model_output (`torch.FloatTensor` or `np.ndarray`): direct output from learned diffusion model. timestep - (`int`): current discrete timestep in the diffusion chain. sample (`torch.FloatTensor` or `np.ndarray`): - current instance of sample being created by diffusion process. - return_dict (`bool`): option for returning tuple rather than SchedulerOutput class - Returns: - [`~schedulers.scheduling_utils.SchedulerOutput`] or `tuple`: - [`~schedulers.scheduling_utils.SchedulerOutput`] if `return_dict` is True, otherwise a `tuple`. When - returning a tuple, the first element is the sample tensor. 
- """ - step_index = self.index_for_timestep(timestep) - - if self.state_in_first_order: - sigma = self.sigmas[step_index] - sigma_interpol = self.sigmas_interpol[step_index + 1] - sigma_next = self.sigmas[step_index + 1] - else: - # 2nd order / KDPM2's method - sigma = self.sigmas[step_index - 1] - sigma_interpol = self.sigmas_interpol[step_index] - sigma_next = self.sigmas[step_index] - - # currently only gamma=0 is supported. This usually works best anyways. - # We can support gamma in the future but then need to scale the timestep before - # passing it to the model which requires a change in API - gamma = 0 - sigma_hat = sigma * (gamma + 1) # Note: sigma_hat == sigma for now - - # 1. compute predicted original sample (x_0) from sigma-scaled predicted noise - if self.config.prediction_type == "epsilon": - sigma_input = sigma_hat if self.state_in_first_order else sigma_interpol - pred_original_sample = sample - sigma_input * model_output - elif self.config.prediction_type == "v_prediction": - sigma_input = sigma_hat if self.state_in_first_order else sigma_interpol - pred_original_sample = model_output * (-sigma_input / (sigma_input**2 + 1) ** 0.5) + ( - sample / (sigma_input**2 + 1) - ) - elif self.config.prediction_type == "sample": - raise NotImplementedError("prediction_type not implemented yet: sample") - else: - raise ValueError( - f"prediction_type given as {self.config.prediction_type} must be one of `epsilon`, or `v_prediction`" - ) - - if self.state_in_first_order: - # 2. Convert to an ODE derivative for 1st order - derivative = (sample - pred_original_sample) / sigma_hat - # 3. delta timestep - dt = sigma_interpol - sigma_hat - - # store for 2nd order step - self.sample = sample - else: - # DPM-Solver-2 - # 2. Convert to an ODE derivative for 2nd order - derivative = (sample - pred_original_sample) / sigma_interpol - - # 3. 
delta timestep - dt = sigma_next - sigma_hat - - sample = self.sample - self.sample = None - - prev_sample = sample + derivative * dt - - if not return_dict: - return (prev_sample,) - - return SchedulerOutput(prev_sample=prev_sample) - - def add_noise( - self, - original_samples: torch.FloatTensor, - noise: torch.FloatTensor, - timesteps: torch.FloatTensor, - ) -> torch.FloatTensor: - # Make sure sigmas and timesteps have the same device and dtype as original_samples - self.sigmas = self.sigmas.to(device=original_samples.device, dtype=original_samples.dtype) - if original_samples.device.type == "mps" and torch.is_floating_point(timesteps): - # mps does not support float64 - self.timesteps = self.timesteps.to(original_samples.device, dtype=torch.float32) - timesteps = timesteps.to(original_samples.device, dtype=torch.float32) - else: - self.timesteps = self.timesteps.to(original_samples.device) - timesteps = timesteps.to(original_samples.device) - - step_indices = [self.index_for_timestep(t) for t in timesteps] - - sigma = self.sigmas[step_indices].flatten() - while len(sigma.shape) < len(original_samples.shape): - sigma = sigma.unsqueeze(-1) - - noisy_samples = original_samples + noise * sigma - return noisy_samples - - def __len__(self): - return self.config.num_train_timesteps diff --git a/spaces/declare-lab/tango/diffusers/tests/schedulers/test_scheduler_ipndm.py b/spaces/declare-lab/tango/diffusers/tests/schedulers/test_scheduler_ipndm.py deleted file mode 100644 index 549caed47fe8f100c2bc4164329210209595ba7f..0000000000000000000000000000000000000000 --- a/spaces/declare-lab/tango/diffusers/tests/schedulers/test_scheduler_ipndm.py +++ /dev/null @@ -1,161 +0,0 @@ -import tempfile - -import torch - -from diffusers import IPNDMScheduler - -from .test_schedulers import SchedulerCommonTest - - -class IPNDMSchedulerTest(SchedulerCommonTest): - scheduler_classes = (IPNDMScheduler,) - forward_default_kwargs = (("num_inference_steps", 50),) - - def get_scheduler_config(self, **kwargs): - config = {"num_train_timesteps": 1000} - config.update(**kwargs) - return config - - def check_over_configs(self, time_step=0, **config): - kwargs = dict(self.forward_default_kwargs) - num_inference_steps = kwargs.pop("num_inference_steps", None) - sample = self.dummy_sample - residual = 0.1 * sample - dummy_past_residuals = [residual + 0.2, residual + 0.15, residual + 0.1, residual + 0.05] - - for scheduler_class in self.scheduler_classes: - scheduler_config = self.get_scheduler_config(**config) - scheduler = scheduler_class(**scheduler_config) - scheduler.set_timesteps(num_inference_steps) - # copy over dummy past residuals - scheduler.ets = dummy_past_residuals[:] - - if time_step is None: - time_step = scheduler.timesteps[len(scheduler.timesteps) // 2] - - with tempfile.TemporaryDirectory() as tmpdirname: - scheduler.save_config(tmpdirname) - new_scheduler = scheduler_class.from_pretrained(tmpdirname) - new_scheduler.set_timesteps(num_inference_steps) - # copy over dummy past residuals - new_scheduler.ets = dummy_past_residuals[:] - - output = scheduler.step(residual, time_step, sample, **kwargs).prev_sample - new_output = new_scheduler.step(residual, time_step, sample, **kwargs).prev_sample - - assert torch.sum(torch.abs(output - new_output)) < 1e-5, "Scheduler outputs are not identical" - - output = scheduler.step(residual, time_step, sample, **kwargs).prev_sample - new_output = new_scheduler.step(residual, time_step, sample, **kwargs).prev_sample - - assert torch.sum(torch.abs(output - new_output)) < 
1e-5, "Scheduler outputs are not identical" - - def test_from_save_pretrained(self): - pass - - def check_over_forward(self, time_step=0, **forward_kwargs): - kwargs = dict(self.forward_default_kwargs) - num_inference_steps = kwargs.pop("num_inference_steps", None) - sample = self.dummy_sample - residual = 0.1 * sample - dummy_past_residuals = [residual + 0.2, residual + 0.15, residual + 0.1, residual + 0.05] - - for scheduler_class in self.scheduler_classes: - scheduler_config = self.get_scheduler_config() - scheduler = scheduler_class(**scheduler_config) - scheduler.set_timesteps(num_inference_steps) - - # copy over dummy past residuals (must be after setting timesteps) - scheduler.ets = dummy_past_residuals[:] - - if time_step is None: - time_step = scheduler.timesteps[len(scheduler.timesteps) // 2] - - with tempfile.TemporaryDirectory() as tmpdirname: - scheduler.save_config(tmpdirname) - new_scheduler = scheduler_class.from_pretrained(tmpdirname) - # copy over dummy past residuals - new_scheduler.set_timesteps(num_inference_steps) - - # copy over dummy past residual (must be after setting timesteps) - new_scheduler.ets = dummy_past_residuals[:] - - output = scheduler.step(residual, time_step, sample, **kwargs).prev_sample - new_output = new_scheduler.step(residual, time_step, sample, **kwargs).prev_sample - - assert torch.sum(torch.abs(output - new_output)) < 1e-5, "Scheduler outputs are not identical" - - output = scheduler.step(residual, time_step, sample, **kwargs).prev_sample - new_output = new_scheduler.step(residual, time_step, sample, **kwargs).prev_sample - - assert torch.sum(torch.abs(output - new_output)) < 1e-5, "Scheduler outputs are not identical" - - def full_loop(self, **config): - scheduler_class = self.scheduler_classes[0] - scheduler_config = self.get_scheduler_config(**config) - scheduler = scheduler_class(**scheduler_config) - - num_inference_steps = 10 - model = self.dummy_model() - sample = self.dummy_sample_deter - scheduler.set_timesteps(num_inference_steps) - - for i, t in enumerate(scheduler.timesteps): - residual = model(sample, t) - sample = scheduler.step(residual, t, sample).prev_sample - - for i, t in enumerate(scheduler.timesteps): - residual = model(sample, t) - sample = scheduler.step(residual, t, sample).prev_sample - - return sample - - def test_step_shape(self): - kwargs = dict(self.forward_default_kwargs) - - num_inference_steps = kwargs.pop("num_inference_steps", None) - - for scheduler_class in self.scheduler_classes: - scheduler_config = self.get_scheduler_config() - scheduler = scheduler_class(**scheduler_config) - - sample = self.dummy_sample - residual = 0.1 * sample - - if num_inference_steps is not None and hasattr(scheduler, "set_timesteps"): - scheduler.set_timesteps(num_inference_steps) - elif num_inference_steps is not None and not hasattr(scheduler, "set_timesteps"): - kwargs["num_inference_steps"] = num_inference_steps - - # copy over dummy past residuals (must be done after set_timesteps) - dummy_past_residuals = [residual + 0.2, residual + 0.15, residual + 0.1, residual + 0.05] - scheduler.ets = dummy_past_residuals[:] - - time_step_0 = scheduler.timesteps[5] - time_step_1 = scheduler.timesteps[6] - - output_0 = scheduler.step(residual, time_step_0, sample, **kwargs).prev_sample - output_1 = scheduler.step(residual, time_step_1, sample, **kwargs).prev_sample - - self.assertEqual(output_0.shape, sample.shape) - self.assertEqual(output_0.shape, output_1.shape) - - output_0 = scheduler.step(residual, time_step_0, sample, 
**kwargs).prev_sample - output_1 = scheduler.step(residual, time_step_1, sample, **kwargs).prev_sample - - self.assertEqual(output_0.shape, sample.shape) - self.assertEqual(output_0.shape, output_1.shape) - - def test_timesteps(self): - for timesteps in [100, 1000]: - self.check_over_configs(num_train_timesteps=timesteps, time_step=None) - - def test_inference_steps(self): - for t, num_inference_steps in zip([1, 5, 10], [10, 50, 100]): - self.check_over_forward(num_inference_steps=num_inference_steps, time_step=None) - - def test_full_loop_no_noise(self): - sample = self.full_loop() - result_mean = torch.mean(torch.abs(sample)) - - assert abs(result_mean.item() - 2540529) < 10 diff --git a/spaces/deedax/Change-Your-Style/utils.py b/spaces/deedax/Change-Your-Style/utils.py deleted file mode 100644 index de2f814ed5592ebd39c1d30c4aa797a115f0a34f..0000000000000000000000000000000000000000 --- a/spaces/deedax/Change-Your-Style/utils.py +++ /dev/null @@ -1,146 +0,0 @@ -from base64 import b64encode - -import numpy -import torch -from diffusers import AutoencoderKL, LMSDiscreteScheduler, UNet2DConditionModel - -from IPython.display import HTML -from matplotlib import pyplot as plt -from PIL import Image -from torch import autocast -from torchvision import transforms as tfms -from tqdm.auto import tqdm -from transformers import CLIPTextModel, CLIPTokenizer, logging - -import gdown -import os - -torch.manual_seed(1) -logging.set_verbosity_error() - -torch_device = "cuda" if torch.cuda.is_available() else "cpu" - -if not os.path.exists('models/vae.pt'): vae = AutoencoderKL.from_pretrained("CompVis/stable-diffusion-v1-4", subfolder="vae") -if not os.path.exists('models/unet.pt'): unet = UNet2DConditionModel.from_pretrained("CompVis/stable-diffusion-v1-4", subfolder="unet") -if not os.path.exists('models/scheduler.pt'): scheduler = LMSDiscreteScheduler(beta_start=0.00085, beta_end=0.012, beta_schedule="scaled_linear", num_train_timesteps=1000) -if not os.path.exists('models/tokenizer.pt'): tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-large-patch14") -if not os.path.exists('models/text_encoder.pt'): text_encoder = CLIPTextModel.from_pretrained("openai/clip-vit-large-patch14") - -vae = vae.to(torch_device) -text_encoder = text_encoder.to(torch_device) -unet = unet.to(torch_device) - -def download_models(): - if not os.path.exists('models/vae.pt'): gdown.download(url = '', output = 'vae.pt') - if not os.path.exists('models/unet.pt'): gdown.download(url = '', output = 'unet.pt') - if not os.path.exists('models/scheduler.pt'): gdown.download(url = '', output = 'scheduler.pt') - if not os.path.exists('models/tokenizer.pt'): gdown.download(url = '', output = 'tokenizer.pt') - if not os.path.exists('models/text_encoder.pt'): gdown.download(url = '', output = 'text_encoder.pt') - -def pil_to_latent(input_im): - with torch.no_grad(): - latent = vae.encode(tfms.ToTensor()(input_im).unsqueeze(0).to(torch_device)*2-1) - return 0.18215 * latent.latent_dist.sample() - -def latents_to_pil(latents): - latents = (1 / 0.18215) * latents - with torch.no_grad(): - image = vae.decode(latents).sample - image = (image / 2 + 0.5).clamp(0, 1) - image = image.detach().cpu().permute(0, 2, 3, 1).numpy() - images = (image * 255).round().astype("uint8") - pil_images = [Image.fromarray(image) for image in images] - return pil_images - -def get_style(style): - learned_emebeds_map = { - 'Ghibli': ['', 'ghibli'], - 'Manga': ['', 'manga'], - 'GTA 5': ['', 'gta'], - 'Sims': ['', 'sims'], - 'Kaya Ghost Assasin': ['', 'kaya'], 
- 'Uzumaki': ['', 'uzumaki'], - 'Arcane': ['', 'arcane'] - } - return learned_emebeds_map[style] - -def change_style(image, style, inf_steps, guidance, str_step): - - input_image = Image.fromarray(image).resize((512, 512)) - encoded = pil_to_latent(input_image) - learned_emebed = torch.load('learned_embeds/{}_learned_embeds.bin'.format(get_style(style)[1])) - prompt = 'portrait of a person in the style of temp' - - text_input = tokenizer(prompt, padding="max_length", max_length=tokenizer.model_max_length, truncation=True, return_tensors="pt") - input_ids = text_input.input_ids.to(torch_device) - position_ids = text_encoder.text_model.embeddings.position_ids[:, :77] - - token_emb_layer = text_encoder.text_model.embeddings.token_embedding - pos_emb_layer = text_encoder.text_model.embeddings.position_embedding - - position_embeddings = pos_emb_layer(position_ids) - token_embeddings = token_emb_layer(input_ids) - - replacement_token_embedding = learned_emebed[get_style(style)[0]].to(torch_device) - - token_embeddings[0, torch.where(input_ids[0]==11097)] = replacement_token_embedding.to(torch_device) - - input_embeddings = token_embeddings + position_embeddings - - bsz, seq_len = input_embeddings.shape[:2] - causal_attention_mask = text_encoder.text_model._build_causal_attention_mask(bsz, seq_len, dtype=input_embeddings.dtype) - - encoder_outputs = text_encoder.text_model.encoder( - inputs_embeds=input_embeddings, - attention_mask=None, - causal_attention_mask=causal_attention_mask.to(torch_device), - output_attentions=None, - output_hidden_states=True, - return_dict=None, - ) - modified_output_embeddings = encoder_outputs[0] - - modified_output_embeddings = text_encoder.text_model.final_layer_norm(modified_output_embeddings) - - height = 512 - width = 512 - num_inference_steps = inf_steps - guidance_scale = guidance - generator = torch.manual_seed(32) - batch_size = 1 - - max_length = text_input.input_ids.shape[-1] - uncond_input = tokenizer( - [""] * batch_size, padding="max_length", max_length=max_length, return_tensors="pt" - ) - with torch.no_grad(): - uncond_embeddings = text_encoder(uncond_input.input_ids.to(torch_device))[0] - text_embeddings = torch.cat([uncond_embeddings, modified_output_embeddings]) - - scheduler.set_timesteps(num_inference_steps) - start_step = str_step - start_sigma = scheduler.sigmas[start_step] - noise = torch.randn_like(encoded) - - latents = scheduler.add_noise(encoded, noise, timesteps=torch.tensor([scheduler.timesteps[start_step]])) - latents = latents.to(torch_device).float() - - for i, t in tqdm(enumerate(scheduler.timesteps)): - if i >= start_step: - latent_model_input = torch.cat([latents] * 2) - sigma = scheduler.sigmas[i] - latent_model_input = scheduler.scale_model_input(latent_model_input, t) - - torch.cuda.empty_cache() - - with torch.no_grad(): - noise_pred = unet(latent_model_input, t, encoder_hidden_states=text_embeddings)["sample"] - - noise_pred_uncond, noise_pred_text = noise_pred.chunk(2) - noise_pred = noise_pred_uncond + guidance_scale * (noise_pred_text - noise_pred_uncond) - - latents = scheduler.step(noise_pred, t, latents).prev_sample - - return(latents_to_pil(latents)[0]) - - diff --git a/spaces/devoworm-group/membrane_segmentation/README.md b/spaces/devoworm-group/membrane_segmentation/README.md deleted file mode 100644 index 0b5bf393adec9bdf25ce13c42201fecd0319934d..0000000000000000000000000000000000000000 --- a/spaces/devoworm-group/membrane_segmentation/README.md +++ /dev/null @@ -1,18 +0,0 @@ ---- -title: Membrane Segmentor -emoji: 
🏃 -colorFrom: purple -colorTo: indigo -sdk: streamlit -sdk_version: 1.17.0 -app_file: app.py -pinned: false ---- - - -# Devolearn_Membrane_Segmentor_huggingface_spaces -The devolearn membrane segmentor model deployed on Hugging Face Spaces, using Streamlit as the app framework. -The model takes an image of C. elegans as input, segments the membrane, and in parallel extracts the centroid of the nucleus. -Check out the model here: [Membrane-Segmentor](https://huggingface.co/spaces/devoworm-group/membrane_segmentation) - -NOTE: Hugging Face Spaces do not work well in the Safari browser due to compatibility issues, so please open this Space in another browser such as Chrome, Brave or Microsoft Edge. diff --git a/spaces/diacanFperku/AutoGPT/Aagaya Hero 2 Full Movie Hd 1080p Free Download Utorrent Movies.md deleted file mode 100644 index 847a9e1636bf7b359057f3d3d9d3fbf3773eec63..0000000000000000000000000000000000000000 --- a/spaces/diacanFperku/AutoGPT/Aagaya Hero 2 Full Movie Hd 1080p Free Download Utorrent Movies.md +++ /dev/null @@ -1,12 +0,0 @@ -

Aagaya Hero 2 full movie hd 1080p free download utorrent movies


Download ✔✔✔ https://gohhs.com/2uFVvl



- -Aagaya Hero 2 Movie HD 1080p U Movies | HD Movies 1080p. -A month ago. -However, we know there are not many films like this, folks, because it is still new. -There are many films like this, folks, because it is still new. -However, we know there are not many films like this, folks, because it is still new -Aagaya Hero 2 Movie HD 1080p U Movies | HD Movies 1080p. -However, we know there are not many films like this, folks, because it is still new. 8a78ff9644
-
-
-

diff --git a/spaces/diacanFperku/AutoGPT/Nemetschek Frilo R20111SL2B.md b/spaces/diacanFperku/AutoGPT/Nemetschek Frilo R20111SL2B.md deleted file mode 100644 index d8d7003c7a40d4fd46e83fba59385b9e25f8b311..0000000000000000000000000000000000000000 --- a/spaces/diacanFperku/AutoGPT/Nemetschek Frilo R20111SL2B.md +++ /dev/null @@ -1,21 +0,0 @@ - -

Nemetschek Frilo R20111SL2B: A Complete Software Solution for Structural Engineering and Design

-

Nemetschek Frilo R20111SL2B is the latest version of the Frilo Business Suite, a comprehensive software solution for structural analysis and design. Frilo is a brand of the Nemetschek Group, a pioneer for digital transformation in the AEC/O industry. With its intelligent software solutions, Nemetschek covers the entire lifecycle of building and infrastructure projects and guides its customers into the future of digitalization.

-

Nemetschek Frilo R20111SL2B


Download File ✦✦✦ https://gohhs.com/2uFTst



-

Frilo offers over 100 software solutions for different material areas, such as solid construction, roof & timber, foundation & soil analysis, and steel construction. Frilo software solutions are easy to use, fast and reliable, and continuously adapted to current standards. Frilo also provides high-quality customer service and support.

-

One of the key features of Frilo R20111SL2B is the BIM-Connector, which enables the seamless integration of Frilo software with other BIM solutions from the Nemetschek Group, such as Allplan, Graphisoft and Vectorworks. The BIM-Connector allows the exchange of data and models between architects and engineers, improving collaboration and efficiency. The BIM-Connector also supports open standards, such as IFC and BCF, for interoperability with other BIM platforms.
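To give a concrete picture of what an IFC-based exchange looks like on the receiving side, here is a minimal Python sketch that reads an IFC model with the open-source ifcopenshell library. This is only an illustration of consuming an IFC file exported from a BIM tool: ifcopenshell is a third-party library, not part of Frilo's BIM-Connector, and the file name used here is a placeholder.

```python
# Minimal sketch: inspect an IFC model with the open-source ifcopenshell library.
# This is not Frilo's BIM-Connector API; the file name is a placeholder.
import ifcopenshell

model = ifcopenshell.open("building_model.ifc")  # IFC file exported from a BIM authoring tool

# Count the load-bearing element types a structural engineer would typically import
for ifc_type in ("IfcWall", "IfcBeam", "IfcColumn", "IfcSlab"):
    elements = model.by_type(ifc_type)
    print(f"{ifc_type}: {len(elements)} element(s)")
```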

-

Frilo R20111SL2B also includes new and improved software solutions for various structural engineering tasks, such as GEO (Building Model), DLT (Continuous Beam), FEA (Finite Element Analysis), STA (Stability Analysis) and many more. Frilo R20111SL2B helps engineers to design and calculate complex structures with confidence and accuracy.
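To illustrate the kind of calculation that a continuous-beam or FEA module automates, the following sketch evaluates the textbook closed-form results for a single-span, simply supported beam under a uniform load. It is a hand-calculation example only, not Frilo's DLT implementation, and the input values are arbitrary.

```python
# Hand-calculation sketch: simply supported beam under uniform load q.
# Textbook formulas only, not Frilo's implementation; input values are illustrative.
def simply_supported_beam(q: float, span: float, e_mod: float, inertia: float):
    """q in kN/m, span in m, e_mod in kN/m^2, inertia in m^4."""
    m_max = q * span**2 / 8                            # midspan bending moment [kNm]
    v_max = q * span / 2                               # support shear force [kN]
    w_max = 5 * q * span**4 / (384 * e_mod * inertia)  # midspan deflection [m]
    return m_max, v_max, w_max

# Example: 10 kN/m over a 6 m span with a concrete section (E ~ 31,000 MPa, I = 4.5e-4 m^4)
m, v, w = simply_supported_beam(q=10.0, span=6.0, e_mod=31_000_000.0, inertia=4.5e-4)
print(f"M_max = {m:.1f} kNm, V_max = {v:.1f} kN, w_max = {w * 1000:.1f} mm")
```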

-

-

Frilo R20111SL2B is available for download from the official website: https://www.frilo.eu/en/. Customers can also request a free trial or a demo version to test the software before purchasing. Frilo R20111SL2B is compatible with Windows 10 and requires a minimum of 4 GB RAM and 10 GB disk space.

- -

Frilo R20111SL2B has received positive reviews from many customers who have used it for their structural engineering projects. Some of the benefits that customers have highlighted are:

- -

Frilo R20111SL2B also offers a mobile app called Frilo StaticsToGo, which allows the user to display and edit their structural analysis documents created with Frilo Document Designer on their iOS devices. The app enables the user to access their documents anytime and anywhere, whether on the construction site, in a meeting or at another location. The app also allows the user to share their documents via email or cloud services.

-

Frilo R20111SL2B is a complete software solution for structural engineering and design that can help engineers to design and calculate structures more efficiently, sustainably and with fewer resources. Frilo R20111SL2B is a product of Frilo Software GmbH, a brand of the Nemetschek Group, a pioneer for digital transformation in the AEC/O industry. For more information about Frilo R20111SL2B and other Frilo software solutions, visit https://www.frilo.eu/en/.

d5da3c52bf
-
-
\ No newline at end of file diff --git a/spaces/diego2554/RemBG_super/rembg/commands/i_command.py b/spaces/diego2554/RemBG_super/rembg/commands/i_command.py deleted file mode 100644 index d65313c968f01c0ba331c9db198331156b65857f..0000000000000000000000000000000000000000 --- a/spaces/diego2554/RemBG_super/rembg/commands/i_command.py +++ /dev/null @@ -1,93 +0,0 @@ -import json -import sys -from typing import IO - -import click - -from ..bg import remove -from ..session_factory import new_session -from ..sessions import sessions_names - - -@click.command( - name="i", - help="for a file as input", -) -@click.option( - "-m", - "--model", - default="u2net", - type=click.Choice(sessions_names), - show_default=True, - show_choices=True, - help="model name", -) -@click.option( - "-a", - "--alpha-matting", - is_flag=True, - show_default=True, - help="use alpha matting", -) -@click.option( - "-af", - "--alpha-matting-foreground-threshold", - default=240, - type=int, - show_default=True, - help="trimap fg threshold", -) -@click.option( - "-ab", - "--alpha-matting-background-threshold", - default=10, - type=int, - show_default=True, - help="trimap bg threshold", -) -@click.option( - "-ae", - "--alpha-matting-erode-size", - default=10, - type=int, - show_default=True, - help="erode size", -) -@click.option( - "-om", - "--only-mask", - is_flag=True, - show_default=True, - help="output only the mask", -) -@click.option( - "-ppm", - "--post-process-mask", - is_flag=True, - show_default=True, - help="post process the mask", -) -@click.option( - "-bgc", - "--bgcolor", - default=None, - type=(int, int, int, int), - nargs=4, - help="Background color (R G B A) to replace the removed background with", -) -@click.option("-x", "--extras", type=str) -@click.argument( - "input", default=(None if sys.stdin.isatty() else "-"), type=click.File("rb") -) -@click.argument( - "output", - default=(None if sys.stdin.isatty() else "-"), - type=click.File("wb", lazy=True), -) -def i_command(model: str, extras: str, input: IO, output: IO, **kwargs) -> None: - try: - kwargs.update(json.loads(extras)) - except Exception: - pass - - output.write(remove(input.read(), session=new_session(model), **kwargs)) diff --git a/spaces/digitalxingtong/Bufeiyan-a-Bert-VITS2/text/english.py b/spaces/digitalxingtong/Bufeiyan-a-Bert-VITS2/text/english.py deleted file mode 100644 index 781d0a56cef71f66fc67db51d76538be90d3ddd2..0000000000000000000000000000000000000000 --- a/spaces/digitalxingtong/Bufeiyan-a-Bert-VITS2/text/english.py +++ /dev/null @@ -1,138 +0,0 @@ -import pickle -import os -import re -from g2p_en import G2p -from string import punctuation - -from text import symbols - -current_file_path = os.path.dirname(__file__) -CMU_DICT_PATH = os.path.join(current_file_path, 'cmudict.rep') -CACHE_PATH = os.path.join(current_file_path, 'cmudict_cache.pickle') -_g2p = G2p() - -arpa = {'AH0', 'S', 'AH1', 'EY2', 'AE2', 'EH0', 'OW2', 'UH0', 'NG', 'B', 'G', 'AY0', 'M', 'AA0', 'F', 'AO0', 'ER2', 'UH1', 'IY1', 'AH2', 'DH', 'IY0', 'EY1', 'IH0', 'K', 'N', 'W', 'IY2', 'T', 'AA1', 'ER1', 'EH2', 'OY0', 'UH2', 'UW1', 'Z', 'AW2', 'AW1', 'V', 'UW2', 'AA2', 'ER', 'AW0', 'UW0', 'R', 'OW1', 'EH1', 'ZH', 'AE0', 'IH2', 'IH', 'Y', 'JH', 'P', 'AY1', 'EY0', 'OY2', 'TH', 'HH', 'D', 'ER0', 'CH', 'AO1', 'AE1', 'AO2', 'OY1', 'AY2', 'IH1', 'OW0', 'L', 'SH'} - - -def post_replace_ph(ph): - rep_map = { - ':': ',', - ';': ',', - ',': ',', - '。': '.', - '!': '!', - '?': '?', - '\n': '.', - "·": ",", - '、': ",", - '...': '…', - 'v': "V" - } - if ph in rep_map.keys(): - ph = 
rep_map[ph] - if ph in symbols: - return ph - if ph not in symbols: - ph = 'UNK' - return ph - -def read_dict(): - g2p_dict = {} - start_line = 49 - with open(CMU_DICT_PATH) as f: - line = f.readline() - line_index = 1 - while line: - if line_index >= start_line: - line = line.strip() - word_split = line.split(' ') - word = word_split[0] - - syllable_split = word_split[1].split(' - ') - g2p_dict[word] = [] - for syllable in syllable_split: - phone_split = syllable.split(' ') - g2p_dict[word].append(phone_split) - - line_index = line_index + 1 - line = f.readline() - - return g2p_dict - - -def cache_dict(g2p_dict, file_path): - with open(file_path, 'wb') as pickle_file: - pickle.dump(g2p_dict, pickle_file) - - -def get_dict(): - if os.path.exists(CACHE_PATH): - with open(CACHE_PATH, 'rb') as pickle_file: - g2p_dict = pickle.load(pickle_file) - else: - g2p_dict = read_dict() - cache_dict(g2p_dict, CACHE_PATH) - - return g2p_dict - -eng_dict = get_dict() - -def refine_ph(phn): - tone = 0 - if re.search(r'\d$', phn): - tone = int(phn[-1]) + 1 - phn = phn[:-1] - return phn.lower(), tone - -def refine_syllables(syllables): - tones = [] - phonemes = [] - for phn_list in syllables: - for i in range(len(phn_list)): - phn = phn_list[i] - phn, tone = refine_ph(phn) - phonemes.append(phn) - tones.append(tone) - return phonemes, tones - - -def text_normalize(text): - # todo: eng text normalize - return text - -def g2p(text): - - phones = [] - tones = [] - words = re.split(r"([,;.\-\?\!\s+])", text) - for w in words: - if w.upper() in eng_dict: - phns, tns = refine_syllables(eng_dict[w.upper()]) - phones += phns - tones += tns - else: - phone_list = list(filter(lambda p: p != " ", _g2p(w))) - for ph in phone_list: - if ph in arpa: - ph, tn = refine_ph(ph) - phones.append(ph) - tones.append(tn) - else: - phones.append(ph) - tones.append(0) - # todo: implement word2ph - word2ph = [1 for i in phones] - - phones = [post_replace_ph(i) for i in phones] - return phones, tones, word2ph - -if __name__ == "__main__": - # print(get_dict()) - # print(eng_word_to_phoneme("hello")) - print(g2p("In this paper, we propose 1 DSPGAN, a GAN-based universal vocoder.")) - # all_phones = set() - # for k, syllables in eng_dict.items(): - # for group in syllables: - # for ph in group: - # all_phones.add(ph) - # print(all_phones) \ No newline at end of file diff --git a/spaces/ecarbo/paddleOCR-demo/README.md b/spaces/ecarbo/paddleOCR-demo/README.md deleted file mode 100644 index 6f42737b42dc2c6d2ae3c025ff6bc0c383b2454e..0000000000000000000000000000000000000000 --- a/spaces/ecarbo/paddleOCR-demo/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: PaddleOCR Demo -emoji: 🔥 -colorFrom: pink -colorTo: pink -sdk: gradio -sdk_version: 2.9.1 -app_file: app.py -pinned: false ---- - -# paddleOCR-demo -paddleOCR with HF diff --git a/spaces/ennet/ChatDev/camel/messages/system_messages.py b/spaces/ennet/ChatDev/camel/messages/system_messages.py deleted file mode 100644 index 5a4cc9185e9fb1151a80110a1f68af28a27725ea..0000000000000000000000000000000000000000 --- a/spaces/ennet/ChatDev/camel/messages/system_messages.py +++ /dev/null @@ -1,81 +0,0 @@ -# =========== Copyright 2023 @ CAMEL-AI.org. All Rights Reserved. =========== -# Licensed under the Apache License, Version 2.0 (the “License”); -# you may not use this file except in compliance with the License. 
-# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an “AS IS” BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -# =========== Copyright 2023 @ CAMEL-AI.org. All Rights Reserved. =========== -from dataclasses import dataclass -from typing import Dict, Optional - -from camel.messages import BaseMessage -from camel.typing import RoleType - - -@dataclass -class SystemMessage(BaseMessage): - r"""Class for system messages used in CAMEL chat system. - - Args: - role_name (str): The name of the user or assistant role. - role_type (RoleType): The type of role, either - :obj:`RoleType.ASSISTANT` or :obj:`RoleType.USER`. - meta_dict (Optional[Dict[str, str]]): Additional metadata dictionary - for the message. - role (str): The role of the message in OpenAI chat system. - (default: :obj:`"system"`) - content (str): The content of the message. (default: :obj:`""`) - """ - role_name: str - role_type: RoleType - meta_dict: Optional[Dict[str, str]] = None - role: str = "system" - content: str = "" - - -@dataclass -class AssistantSystemMessage(SystemMessage): - r"""Class for system messages from the assistant used in the CAMEL chat - system. - - Args: - role_name (str): The name of the assistant role. - role_type (RoleType): The type of role, always - :obj:`RoleType.ASSISTANT`. - meta_dict (Optional[Dict[str, str]]): Additional metadata dictionary - for the message. - role (str): The role of the message in OpenAI chat system. - (default: :obj:`"system"`) - content (str): The content of the message. (default: :obj:`""`) - """ - role_name: str - role_type: RoleType = RoleType.ASSISTANT - meta_dict: Optional[Dict[str, str]] = None - role: str = "system" - content: str = "" - - -@dataclass -class UserSystemMessage(SystemMessage): - r"""Class for system messages from the user used in the CAMEL chat system. - - Args: - role_name (str): The name of the user role. - role_type (RoleType): The type of role, always :obj:`RoleType.USER`. - meta_dict (Optional[Dict[str, str]]): Additional metadata dictionary - for the message. - role (str): The role of the message in OpenAI chat system. - (default: :obj:`"system"`) - content (str): The content of the message. 
(default: :obj:`""`) - """ - role_name: str - role_type: RoleType = RoleType.USER - meta_dict: Optional[Dict[str, str]] = None - role: str = "system" - content: str = "" diff --git a/spaces/everm1nd/musika/models.py b/spaces/everm1nd/musika/models.py deleted file mode 100644 index 254bf152c1cd7032a9fad0b77d7977ff9ec65686..0000000000000000000000000000000000000000 --- a/spaces/everm1nd/musika/models.py +++ /dev/null @@ -1,783 +0,0 @@ -import numpy as np -import tensorflow as tf -from tensorflow.python.keras.utils.layer_utils import count_params - -from layers import AddNoise - - -class Models_functions: - def __init__(self, args): - - self.args = args - - if self.args.mixed_precision: - self.mixed_precision = tf.keras.mixed_precision - self.policy = tf.keras.mixed_precision.Policy("mixed_float16") - tf.keras.mixed_precision.set_global_policy(self.policy) - self.init = tf.keras.initializers.he_uniform() - - def conv_util( - self, inp, filters, kernel_size=(1, 3), strides=(1, 1), noise=False, upsample=False, padding="same", bnorm=True - ): - - x = inp - - bias = True - if bnorm: - bias = False - - if upsample: - x = tf.keras.layers.Conv2DTranspose( - filters, - kernel_size=kernel_size, - strides=strides, - activation="linear", - padding=padding, - kernel_initializer=self.init, - use_bias=bias, - )(x) - else: - x = tf.keras.layers.Conv2D( - filters, - kernel_size=kernel_size, - strides=strides, - activation="linear", - padding=padding, - kernel_initializer=self.init, - use_bias=bias, - )(x) - - if noise: - x = AddNoise(self.args.datatype)(x) - - if bnorm: - x = tf.keras.layers.BatchNormalization()(x) - - x = tf.keras.activations.swish(x) - - return x - - def pixel_shuffle(self, x, factor=2): - bs_dim, h_dim, w_dim, c_dim = tf.shape(x)[0], x.shape[1], x.shape[2], x.shape[3] - x = tf.reshape(x, [bs_dim, h_dim, w_dim, c_dim // factor, factor]) - x = tf.transpose(x, [0, 1, 2, 4, 3]) - return tf.reshape(x, [bs_dim, h_dim, w_dim * factor, c_dim // factor]) - - def adain(self, x, emb, name): - emb = tf.keras.layers.Conv2D( - x.shape[-1], - kernel_size=(1, 1), - strides=1, - activation="linear", - padding="same", - kernel_initializer=self.init, - use_bias=True, - name=name, - )(emb) - x = x / (tf.math.reduce_std(x, -2, keepdims=True) + 1e-5) - return x * emb - - def conv_util_gen( - self, - inp, - filters, - kernel_size=(1, 9), - strides=(1, 1), - noise=False, - upsample=False, - emb=None, - se1=None, - name="0", - ): - - x = inp - - if upsample: - x = tf.keras.layers.Conv2DTranspose( - filters, - kernel_size=kernel_size, - strides=strides, - activation="linear", - padding="same", - kernel_initializer=self.init, - use_bias=True, - name=name + "c", - )(x) - else: - x = tf.keras.layers.Conv2D( - filters, - kernel_size=kernel_size, - strides=strides, - activation="linear", - padding="same", - kernel_initializer=self.init, - use_bias=True, - name=name + "c", - )(x) - - if noise: - x = AddNoise(self.args.datatype, name=name + "r")(x) - - if emb is not None: - x = self.adain(x, emb, name=name + "ai") - else: - x = tf.keras.layers.BatchNormalization(name=name + "bn")(x) - - x = tf.keras.activations.swish(x) - - return x - - def res_block_disc(self, inp, filters, kernel_size=(1, 3), kernel_size_2=None, strides=(1, 1), name="0"): - - if kernel_size_2 is None: - kernel_size_2 = kernel_size - - x = tf.keras.layers.Conv2D( - inp.shape[-1], - kernel_size=kernel_size_2, - strides=1, - activation="linear", - padding="same", - kernel_initializer=self.init, - name=name + "c0", - )(inp) - x = 
tf.keras.layers.LeakyReLU(0.2)(x) - x = tf.math.sqrt(tf.cast(0.5, self.args.datatype)) * x - x = tf.keras.layers.Conv2D( - filters, - kernel_size=kernel_size, - strides=strides, - activation="linear", - padding="same", - kernel_initializer=self.init, - name=name + "c1", - )(x) - x = tf.keras.layers.LeakyReLU(0.2)(x) - x = tf.math.sqrt(tf.cast(0.5, self.args.datatype)) * x - - if strides != (1, 1): - inp = tf.keras.layers.AveragePooling2D(strides, padding="same")(inp) - - if inp.shape[-1] != filters: - inp = tf.keras.layers.Conv2D( - filters, - kernel_size=1, - strides=1, - activation="linear", - padding="same", - kernel_initializer=self.init, - use_bias=False, - name=name + "c3", - )(inp) - - return x + inp - - def build_encoder2(self): - - inpf = tf.keras.layers.Input((1, self.args.shape, self.args.hop // 4)) - - inpfls = tf.split(inpf, 8, -2) - inpb = tf.concat(inpfls, 0) - - g0 = self.conv_util(inpb, self.args.hop, kernel_size=(1, 3), strides=(1, 1), padding="same", bnorm=False) - g1 = self.conv_util( - g0, self.args.hop + self.args.hop // 2, kernel_size=(1, 3), strides=(1, 2), padding="valid", bnorm=False - ) - g2 = self.conv_util( - g1, self.args.hop + self.args.hop // 2, kernel_size=(1, 3), strides=(1, 1), padding="same", bnorm=False - ) - g3 = self.conv_util(g2, self.args.hop * 2, kernel_size=(1, 3), strides=(1, 2), padding="valid", bnorm=False) - g4 = self.conv_util(g3, self.args.hop * 2, kernel_size=(1, 3), strides=(1, 1), padding="same", bnorm=False) - g5 = self.conv_util(g4, self.args.hop * 3, kernel_size=(1, 3), strides=(1, 1), padding="valid", bnorm=False) - g5 = self.conv_util(g5, self.args.hop * 3, kernel_size=(1, 1), strides=(1, 1), padding="valid", bnorm=False) - - g = tf.keras.layers.Conv2D( - self.args.latdepth, - kernel_size=(1, 1), - strides=1, - padding="valid", - kernel_initializer=self.init, - name="cbottle", - activation="tanh", - )(g5) - - gls = tf.split(g, 8, 0) - g = tf.concat(gls, -2) - gls = tf.split(g, 2, -2) - g = tf.concat(gls, 0) - - gf = tf.cast(g, tf.float32) - - return tf.keras.Model(inpf, gf, name="ENC2") - - def build_decoder2(self): - - inpf = tf.keras.layers.Input((1, self.args.shape // 32, self.args.latdepth)) - - g = inpf - - g = self.conv_util( - g, self.args.hop * 3, kernel_size=(1, 3), strides=(1, 1), upsample=False, noise=True, bnorm=False - ) - g = self.conv_util( - g, - self.args.hop * 2 + self.args.hop // 2, - kernel_size=(1, 4), - strides=(1, 2), - upsample=True, - noise=True, - bnorm=False, - ) - g = self.conv_util( - g, - self.args.hop * 2 + self.args.hop // 2, - kernel_size=(1, 3), - strides=(1, 1), - upsample=False, - noise=True, - bnorm=False, - ) - g = self.conv_util( - g, self.args.hop * 2, kernel_size=(1, 4), strides=(1, 2), upsample=True, noise=True, bnorm=False - ) - g = self.conv_util( - g, self.args.hop * 2, kernel_size=(1, 3), strides=(1, 1), upsample=False, noise=True, bnorm=False - ) - g = self.conv_util( - g, - self.args.hop + self.args.hop // 2, - kernel_size=(1, 4), - strides=(1, 2), - upsample=True, - noise=True, - bnorm=False, - ) - g = self.conv_util(g, self.args.hop, kernel_size=(1, 4), strides=(1, 2), upsample=True, noise=True, bnorm=False) - - gf = tf.keras.layers.Conv2D( - self.args.hop // 4, kernel_size=(1, 1), strides=1, padding="same", kernel_initializer=self.init, name="cout" - )(g) - - gfls = tf.split(gf, 2, 0) - gf = tf.concat(gfls, -2) - - gf = tf.cast(gf, tf.float32) - - return tf.keras.Model(inpf, gf, name="DEC2") - - def build_encoder(self): - - dim = ((4 * self.args.hop) // 2) + 1 - - inpf = 
tf.keras.layers.Input((dim, self.args.shape, 1)) - - ginp = tf.transpose(inpf, [0, 3, 2, 1]) - - g0 = self.conv_util(ginp, self.args.hop * 4, kernel_size=(1, 1), strides=(1, 1), padding="valid", bnorm=False) - g1 = self.conv_util(g0, self.args.hop * 4, kernel_size=(1, 1), strides=(1, 1), padding="valid", bnorm=False) - g2 = self.conv_util(g1, self.args.hop * 4, kernel_size=(1, 1), strides=(1, 1), padding="valid", bnorm=False) - g4 = self.conv_util(g2, self.args.hop * 4, kernel_size=(1, 1), strides=(1, 1), padding="valid", bnorm=False) - g5 = self.conv_util(g4, self.args.hop * 4, kernel_size=(1, 1), strides=(1, 1), padding="valid", bnorm=False) - - g = tf.keras.layers.Conv2D( - self.args.hop // 4, kernel_size=(1, 1), strides=1, padding="valid", kernel_initializer=self.init - )(g5) - - g = tf.keras.activations.tanh(g) - - gls = tf.split(g, 2, -2) - g = tf.concat(gls, 0) - - gf = tf.cast(g, tf.float32) - - return tf.keras.Model(inpf, gf, name="ENC") - - def build_decoder(self): - - dim = ((4 * self.args.hop) // 2) + 1 - - inpf = tf.keras.layers.Input((1, self.args.shape // 2, self.args.hop // 4)) - - g = inpf - - g0 = self.conv_util(g, self.args.hop * 3, kernel_size=(1, 3), strides=(1, 1), noise=True, bnorm=False) - g1 = self.conv_util(g0, self.args.hop * 3, kernel_size=(1, 3), strides=(1, 2), noise=True, bnorm=False) - g2 = self.conv_util(g1, self.args.hop * 2, kernel_size=(1, 3), strides=(1, 2), noise=True, bnorm=False) - g3 = self.conv_util(g2, self.args.hop, kernel_size=(1, 3), strides=(1, 2), noise=True, bnorm=False) - g = self.conv_util(g3, self.args.hop, kernel_size=(1, 3), strides=(1, 2), noise=True, bnorm=False) - - g33 = self.conv_util( - g, self.args.hop, kernel_size=(1, 4), strides=(1, 2), upsample=True, noise=True, bnorm=False - ) - g22 = self.conv_util( - g3, self.args.hop * 2, kernel_size=(1, 4), strides=(1, 2), upsample=True, noise=True, bnorm=False - ) - g11 = self.conv_util( - g22 + g2, self.args.hop * 3, kernel_size=(1, 4), strides=(1, 2), upsample=True, noise=True, bnorm=False - ) - g00 = self.conv_util( - g11 + g1, self.args.hop * 3, kernel_size=(1, 4), strides=(1, 2), upsample=True, noise=True, bnorm=False - ) - - g = tf.keras.layers.Conv2D( - dim, kernel_size=(1, 1), strides=(1, 1), kernel_initializer=self.init, padding="same" - )(g00 + g0) - gf = tf.clip_by_value(g, -1.0, 1.0) - - g = self.conv_util( - g22, self.args.hop * 3, kernel_size=(1, 4), strides=(1, 2), upsample=True, noise=True, bnorm=False - ) - g = self.conv_util( - g + g11, self.args.hop * 3, kernel_size=(1, 4), strides=(1, 2), upsample=True, noise=True, bnorm=False - ) - g = tf.keras.layers.Conv2D( - dim, kernel_size=(1, 1), strides=(1, 1), kernel_initializer=self.init, padding="same" - )(g + g00) - pf = tf.clip_by_value(g, -1.0, 1.0) - - gfls = tf.split(gf, self.args.shape // self.args.window, 0) - gf = tf.concat(gfls, -2) - - pfls = tf.split(pf, self.args.shape // self.args.window, 0) - pf = tf.concat(pfls, -2) - - s = tf.transpose(gf, [0, 2, 3, 1]) - p = tf.transpose(pf, [0, 2, 3, 1]) - - s = tf.cast(tf.squeeze(s, -1), tf.float32) - p = tf.cast(tf.squeeze(p, -1), tf.float32) - - return tf.keras.Model(inpf, [s, p], name="DEC") - - def build_critic(self): - - sinp = tf.keras.layers.Input(shape=(1, self.args.latlen, self.args.latdepth * 2)) - - sf = tf.keras.layers.Conv2D( - self.args.base_channels * 3, - kernel_size=(1, 4), - strides=(1, 2), - activation="linear", - padding="same", - kernel_initializer=self.init, - name="1c", - )(sinp) - sf = tf.keras.layers.LeakyReLU(0.2)(sf) - - sf = 
self.res_block_disc(sf, self.args.base_channels * 4, kernel_size=(1, 4), strides=(1, 2), name="2") - - sf = self.res_block_disc(sf, self.args.base_channels * 5, kernel_size=(1, 4), strides=(1, 2), name="3") - - sf = self.res_block_disc(sf, self.args.base_channels * 6, kernel_size=(1, 4), strides=(1, 2), name="4") - - sf = self.res_block_disc(sf, self.args.base_channels * 7, kernel_size=(1, 4), strides=(1, 2), name="5") - - if not self.args.small: - sf = self.res_block_disc( - sf, self.args.base_channels * 7, kernel_size=(1, 4), strides=(1, 2), kernel_size_2=(1, 1), name="6" - ) - - sf = tf.keras.layers.Conv2D( - self.args.base_channels * 7, - kernel_size=(1, 3), - strides=(1, 1), - activation="linear", - padding="same", - kernel_initializer=self.init, - name="7c", - )(sf) - sf = tf.keras.layers.LeakyReLU(0.2)(sf) - - gf = tf.keras.layers.Dense(1, activation="linear", use_bias=True, kernel_initializer=self.init, name="7d")( - tf.keras.layers.Flatten()(sf) - ) - - gf = tf.cast(gf, tf.float32) - - return tf.keras.Model(sinp, gf, name="C") - - def build_generator(self): - - dim = self.args.latdepth * 2 - - inpf = tf.keras.layers.Input((self.args.latlen, self.args.latdepth * 2)) - - inpfls = tf.split(inpf, 2, -2) - inpb = tf.concat(inpfls, 0) - - inpg = tf.reduce_mean(inpb, -2) - inp1 = tf.keras.layers.AveragePooling2D((1, 2), padding="valid")(tf.expand_dims(inpb, -3)) - inp2 = tf.keras.layers.AveragePooling2D((1, 2), padding="valid")(inp1) - inp3 = tf.keras.layers.AveragePooling2D((1, 2), padding="valid")(inp2) - inp4 = tf.keras.layers.AveragePooling2D((1, 2), padding="valid")(inp3) - inp5 = tf.keras.layers.AveragePooling2D((1, 2), padding="valid")(inp4) - if not self.args.small: - inp6 = tf.keras.layers.AveragePooling2D((1, 2), padding="valid")(inp5) - - if not self.args.small: - g = tf.keras.layers.Dense( - 4 * (self.args.base_channels * 7), - activation="linear", - use_bias=True, - kernel_initializer=self.init, - name="00d", - )(tf.keras.layers.Flatten()(inp6)) - g = tf.keras.layers.Reshape((1, 4, self.args.base_channels * 7))(g) - g = AddNoise(self.args.datatype, name="00n")(g) - g = self.adain(g, inp5, name="00ai") - g = tf.keras.activations.swish(g) - else: - g = tf.keras.layers.Dense( - 4 * (self.args.base_channels * 7), - activation="linear", - use_bias=True, - kernel_initializer=self.init, - name="00d", - )(tf.keras.layers.Flatten()(inp5)) - g = tf.keras.layers.Reshape((1, 4, self.args.base_channels * 7))(g) - g = AddNoise(self.args.datatype, name="00n")(g) - g = self.adain(g, inp4, name="00ai") - g = tf.keras.activations.swish(g) - - if not self.args.small: - g1 = self.conv_util_gen( - g, - self.args.base_channels * 6, - kernel_size=(1, 4), - strides=(1, 2), - upsample=True, - noise=True, - emb=inp4, - name="0", - ) - g1 = tf.math.sqrt(tf.cast(0.5, self.args.datatype)) * g1 - g1 = self.conv_util_gen( - g1, - self.args.base_channels * 6, - kernel_size=(1, 4), - strides=(1, 1), - upsample=False, - noise=True, - emb=inp4, - name="1", - ) - g1 = tf.math.sqrt(tf.cast(0.5, self.args.datatype)) * g1 - g1 = g1 + tf.keras.layers.Conv2D( - g1.shape[-1], - kernel_size=(1, 1), - strides=1, - activation="linear", - padding="same", - kernel_initializer=self.init, - use_bias=True, - name="res1c", - )(self.pixel_shuffle(g)) - else: - g1 = self.conv_util_gen( - g, - self.args.base_channels * 6, - kernel_size=(1, 1), - strides=(1, 1), - upsample=False, - noise=True, - emb=inp4, - name="0_small", - ) - g1 = tf.math.sqrt(tf.cast(0.5, self.args.datatype)) * g1 - g1 = self.conv_util_gen( - g1, - 
self.args.base_channels * 6, - kernel_size=(1, 1), - strides=(1, 1), - upsample=False, - noise=True, - emb=inp4, - name="1_small", - ) - g1 = tf.math.sqrt(tf.cast(0.5, self.args.datatype)) * g1 - g1 = g1 + tf.keras.layers.Conv2D( - g1.shape[-1], - kernel_size=(1, 1), - strides=1, - activation="linear", - padding="same", - kernel_initializer=self.init, - use_bias=True, - name="res1c_small", - )(g) - - g2 = self.conv_util_gen( - g1, - self.args.base_channels * 5, - kernel_size=(1, 4), - strides=(1, 2), - upsample=True, - noise=True, - emb=inp3, - name="2", - ) - g2 = tf.math.sqrt(tf.cast(0.5, self.args.datatype)) * g2 - g2 = self.conv_util_gen( - g2, - self.args.base_channels * 5, - kernel_size=(1, 4), - strides=(1, 1), - upsample=False, - noise=True, - emb=inp3, - name="3", - ) - g2 = tf.math.sqrt(tf.cast(0.5, self.args.datatype)) * g2 - g2 = g2 + tf.keras.layers.Conv2D( - g2.shape[-1], - kernel_size=(1, 1), - strides=1, - activation="linear", - padding="same", - kernel_initializer=self.init, - use_bias=True, - name="res2c", - )(self.pixel_shuffle(g1)) - - g3 = self.conv_util_gen( - g2, - self.args.base_channels * 4, - kernel_size=(1, 4), - strides=(1, 2), - upsample=True, - noise=True, - emb=inp2, - name="4", - ) - g3 = tf.math.sqrt(tf.cast(0.5, self.args.datatype)) * g3 - g3 = self.conv_util_gen( - g3, - self.args.base_channels * 4, - kernel_size=(1, 4), - strides=(1, 1), - upsample=False, - noise=True, - emb=inp2, - name="5", - ) - g3 = tf.math.sqrt(tf.cast(0.5, self.args.datatype)) * g3 - g3 = g3 + tf.keras.layers.Conv2D( - g3.shape[-1], - kernel_size=(1, 1), - strides=1, - activation="linear", - padding="same", - kernel_initializer=self.init, - use_bias=True, - name="res3c", - )(self.pixel_shuffle(g2)) - - g4 = self.conv_util_gen( - g3, - self.args.base_channels * 3, - kernel_size=(1, 4), - strides=(1, 2), - upsample=True, - noise=True, - emb=inp1, - name="6", - ) - g4 = tf.math.sqrt(tf.cast(0.5, self.args.datatype)) * g4 - g4 = self.conv_util_gen( - g4, - self.args.base_channels * 3, - kernel_size=(1, 4), - strides=(1, 1), - upsample=False, - noise=True, - emb=inp1, - name="7", - ) - g4 = tf.math.sqrt(tf.cast(0.5, self.args.datatype)) * g4 - g4 = g4 + tf.keras.layers.Conv2D( - g4.shape[-1], - kernel_size=(1, 1), - strides=1, - activation="linear", - padding="same", - kernel_initializer=self.init, - use_bias=True, - name="res4c", - )(self.pixel_shuffle(g3)) - - g5 = self.conv_util_gen( - g4, - self.args.base_channels * 2, - kernel_size=(1, 4), - strides=(1, 2), - upsample=True, - noise=True, - emb=tf.expand_dims(tf.cast(inpb, dtype=self.args.datatype), -3), - name="8", - ) - - gf = tf.keras.layers.Conv2D( - dim, kernel_size=(1, 4), strides=(1, 1), kernel_initializer=self.init, padding="same", name="9c" - )(g5) - - gfls = tf.split(gf, 2, 0) - gf = tf.concat(gfls, -2) - - gf = tf.cast(gf, tf.float32) - - return tf.keras.Model(inpf, gf, name="GEN") - - # Load past models from path to resume training or test - def load(self, path, load_dec=False): - gen = self.build_generator() - critic = self.build_critic() - enc = self.build_encoder() - dec = self.build_decoder() - enc2 = self.build_encoder2() - dec2 = self.build_decoder2() - gen_ema = self.build_generator() - - switch = tf.Variable(-1.0, dtype=tf.float32) - - if self.args.mixed_precision: - opt_disc = self.mixed_precision.LossScaleOptimizer(tf.keras.optimizers.Adam(0.0001, 0.5)) - opt_dec = self.mixed_precision.LossScaleOptimizer(tf.keras.optimizers.Adam(0.0001, 0.5)) - else: - opt_disc = tf.keras.optimizers.Adam(0.0001, 0.9) - opt_dec 
= tf.keras.optimizers.Adam(0.0001, 0.9) - - if load_dec: - dec.load_weights(self.args.dec_path + "/dec.h5") - dec2.load_weights(self.args.dec_path + "/dec2.h5") - enc.load_weights(self.args.dec_path + "/enc.h5") - enc2.load_weights(self.args.dec_path + "/enc2.h5") - - else: - grad_vars = critic.trainable_weights - zero_grads = [tf.zeros_like(w) for w in grad_vars] - opt_disc.apply_gradients(zip(zero_grads, grad_vars)) - - grad_vars = gen.trainable_variables - zero_grads = [tf.zeros_like(w) for w in grad_vars] - opt_dec.apply_gradients(zip(zero_grads, grad_vars)) - - if not self.args.testing: - opt_disc.set_weights(np.load(path + "/opt_disc.npy", allow_pickle=True)) - opt_dec.set_weights(np.load(path + "/opt_dec.npy", allow_pickle=True)) - critic.load_weights(path + "/critic.h5") - gen.load_weights(path + "/gen.h5") - switch = tf.Variable(float(np.load(path + "/switch.npy", allow_pickle=True)), dtype=tf.float32) - - gen_ema.load_weights(path + "/gen_ema.h5") - dec.load_weights(self.args.dec_path + "/dec.h5") - dec2.load_weights(self.args.dec_path + "/dec2.h5") - enc.load_weights(self.args.dec_path + "/enc.h5") - enc2.load_weights(self.args.dec_path + "/enc2.h5") - - return ( - critic, - gen, - enc, - dec, - enc2, - dec2, - gen_ema, - [opt_dec, opt_disc], - switch, - ) - - def build(self): - gen = self.build_generator() - critic = self.build_critic() - enc = self.build_encoder() - dec = self.build_decoder() - enc2 = self.build_encoder2() - dec2 = self.build_decoder2() - gen_ema = self.build_generator() - - switch = tf.Variable(-1.0, dtype=tf.float32) - - gen_ema = tf.keras.models.clone_model(gen) - gen_ema.set_weights(gen.get_weights()) - - if self.args.mixed_precision: - opt_disc = self.mixed_precision.LossScaleOptimizer(tf.keras.optimizers.Adam(0.0001, 0.5)) - opt_dec = self.mixed_precision.LossScaleOptimizer(tf.keras.optimizers.Adam(0.0001, 0.5)) - else: - opt_disc = tf.keras.optimizers.Adam(0.0001, 0.5) - opt_dec = tf.keras.optimizers.Adam(0.0001, 0.5) - - return ( - critic, - gen, - enc, - dec, - enc2, - dec2, - gen_ema, - [opt_dec, opt_disc], - switch, - ) - - def get_networks(self): - ( - critic, - gen, - enc, - dec, - enc2, - dec2, - gen_ema_1, - [opt_dec, opt_disc], - switch, - ) = self.load(self.args.load_path_1, load_dec=False) - print(f"Networks loaded from {self.args.load_path_1}") - - ( - critic, - gen, - enc, - dec, - enc2, - dec2, - gen_ema_2, - [opt_dec, opt_disc], - switch, - ) = self.load(self.args.load_path_2, load_dec=False) - print(f"Networks loaded from {self.args.load_path_2}") - - ( - critic, - gen, - enc, - dec, - enc2, - dec2, - gen_ema_3, - [opt_dec, opt_disc], - switch, - ) = self.load(self.args.load_path_3, load_dec=False) - print(f"Networks loaded from {self.args.load_path_3}") - - return ( - (critic, gen, enc, dec, enc2, dec2, gen_ema_1, [opt_dec, opt_disc], switch), - (critic, gen, enc, dec, enc2, dec2, gen_ema_2, [opt_dec, opt_disc], switch), - (critic, gen, enc, dec, enc2, dec2, gen_ema_3, [opt_dec, opt_disc], switch), - ) - - def initialize_networks(self): - - ( - (critic, gen, enc, dec, enc2, dec2, gen_ema_1, [opt_dec, opt_disc], switch), - (critic, gen, enc, dec, enc2, dec2, gen_ema_2, [opt_dec, opt_disc], switch), - (critic, gen, enc, dec, enc2, dec2, gen_ema_3, [opt_dec, opt_disc], switch), - ) = self.get_networks() - - print(f"Critic params: {count_params(critic.trainable_variables)}") - print(f"Generator params: {count_params(gen.trainable_variables)}") - - return ( - (critic, gen, enc, dec, enc2, dec2, gen_ema_1, [opt_dec, opt_disc], switch), - 
(critic, gen, enc, dec, enc2, dec2, gen_ema_2, [opt_dec, opt_disc], switch), - (critic, gen, enc, dec, enc2, dec2, gen_ema_3, [opt_dec, opt_disc], switch), - ) diff --git a/spaces/falterWliame/Face_Mask_Detection/Axure Rp 8 License Keyl.md b/spaces/falterWliame/Face_Mask_Detection/Axure Rp 8 License Keyl.md deleted file mode 100644 index fb9154757343bf081d239661a76fa39fa2afe932..0000000000000000000000000000000000000000 --- a/spaces/falterWliame/Face_Mask_Detection/Axure Rp 8 License Keyl.md +++ /dev/null @@ -1,6 +0,0 @@ -

Axure Rp 8 License Keyl


Download Zip ===> https://urlca.com/2uDdqP



- -... cgi movie software health clubs in madison wisconsin license book html page ... metrodome address girl skateboard video girl hips sunil jhangiani 5 8 19 year old ... uitgebreid queen size bed kunstuniversiteiten la piste var sleutels keylist voor ... no ceres afleveringen ontgrendel v3 cel barriles hotel in axure rp la grange il ... 1fdad05405
-
-
-

diff --git a/spaces/fatiXbelha/sd/Download Sausage Man Mod APK and Play with Unlimited Resources.md b/spaces/fatiXbelha/sd/Download Sausage Man Mod APK and Play with Unlimited Resources.md deleted file mode 100644 index 05f1a6a3ed7a375ac7a1d4a11527cbef99439656..0000000000000000000000000000000000000000 --- a/spaces/fatiXbelha/sd/Download Sausage Man Mod APK and Play with Unlimited Resources.md +++ /dev/null @@ -1,117 +0,0 @@ -
-

Download APK Sausage Man Mod: How to Enjoy the Funniest Battle Royale Game with Unlimited Features

-

If you are looking for a hilarious and addictive game that will make you laugh out loud, then you should try Sausage Man. Sausage Man is a battle royale game that parodies popular games like PUBG and Fortnite, but with a twist: you play as a sausage. Yes, you read that right. You are a sausage that can run, jump, shoot, drive, and even transform into different objects. Sounds crazy, right? Well, it is. And that's what makes it so fun.

-

But what if you want to enjoy the game even more? What if you want to have unlimited money, coins, candy, and premium features that will let you customize your sausage and unlock more weapons, vehicles, and skins? Well, there is a way. And that is by downloading the Sausage Man Mod APK.

-

download apk sausage man mod


Download Zip ————— https://urllie.com/2uNItM



-

What is Sausage Man Mod APK?

-

Sausage Man Mod APK is a modified version of the original game that gives you access to all the features that are normally locked or require real money to purchase. With this mod apk, you can get unlimited money, coins, candy, and premium features that will let you enjoy the game without any limitations or restrictions.

-

Some of the benefits of using Sausage Man Mod APK are:

- -

How to Download and Install Sausage Man Mod APK?

-

If you are interested in downloading and installing Sausage Man Mod APK, then you need to follow these simple steps:

-

The steps to download the mod apk file

-
    -
  1. Go to this link: [Download APK Sausage Man Mod](^1^).
  2. -
  3. Click on the download button and wait for the file to be downloaded.
  4. -
  5. Locate the file in your device's storage and tap on it.
  6. -
-

The steps to install the mod apk file

-
    -
  1. Before installing the file, make sure that you have enabled the option to install apps from unknown sources in your device's settings.
  2. -
  3. After tapping on the file, follow the instructions on the screen to install the app.
  4. -
  5. Once the installation is complete, launch the app and enjoy the game.
  6. -
-

How to Play Sausage Man Mod APK?

-

Playing Sausage Man Mod APK is similar to playing the original game, but with more fun and excitement. Here are some of the basic gameplay and controls that you need to know:

-

The basic gameplay and controls

- -

The tips and tricks to win the game

- -

Conclusion

-

Sausage Man is a fun and hilarious game that will make you laugh and enjoy yourself. It is a parody of popular battle royale games, but with a unique twist: you play as a sausage. You can download the Sausage Man Mod APK to get unlimited money, coins, candy, and premium features that will let you customize your sausage and unlock more weapons, vehicles, and skins. You can also use the mod menu to activate various cheats and hacks that will make the game easier and more fun. To download and install the Sausage Man Mod APK, you just need to follow the simple steps that we have provided in this article. Then, you can start playing the game and have a blast. So, what are you waiting for? Download APK Sausage Man Mod now and enjoy the funniest battle royale game ever!

-

Frequently Asked Questions

-

Here are some of the common questions that people ask about Sausage Man Mod APK:

-
    -
  1. Is Sausage Man Mod APK safe to use?
  2. -

    Yes, Sausage Man Mod APK is safe to use as long as you download it from a trusted source. However, you should always be careful when downloading any mod apk file from the internet, as some of them may contain viruses or malware that can harm your device. You should also avoid using the mod apk on your main account, as it may get banned by the game developers.

    -
  3. Is Sausage Man Mod APK compatible with my device?
  4. -

    Sausage Man Mod APK is compatible with most Android devices that have Android 4.4 or higher. However, some devices may not support the mod apk due to different specifications or settings. You should always check the compatibility of the mod apk before downloading it.

    -

    download sausage man mod apk unlimited money
    -sausage man mod apk latest version download
    -how to download sausage man mod apk on android
    -download sausage man mod apk for pc
    -sausage man mod apk free download 2023
    -download sausage man mod apk offline
    -sausage man mod apk download happymod
    -download sausage man mod apk no root
    -sausage man mod apk download link
    -download sausage man mod apk android 1
    -sausage man mod apk full version download
    -download sausage man mod apk with obb
    -sausage man mod apk download rexdl
    -download sausage man mod apk unlimited candy
    -sausage man mod apk download apkpure
    -download sausage man mod apk online
    -sausage man mod apk download for ios
    -download sausage man mod apk revdl
    -sausage man mod apk download 2022
    -download sausage man mod apk new update
    -sausage man mod apk hack download
    -download sausage man mod apk unlimited health
    -sausage man mod apk download for laptop
    -download sausage man mod apk cheat
    -sausage man mod apk download uptodown
    -download sausage man mod apk unlimited ammo
    -sausage man mod apk download for windows 10
    -download sausage man mod apk god mode
    -sausage man mod apk download 2021
    -download sausage man mod apk mega mod
    -sausage man mod apk premium download
    -download sausage man mod apk no ads
    -sausage man mod apk vip download
    -download sausage man mod apk all unlocked
    -sausage man mod apk pro download
    -download sausage man mod apk high damage
    -sausage man mod apk plus download
    -download sausage man mod apk no verification
    -sausage man mod apk gold download
    -download sausage man mod apk anti ban
    -sausage man mod apk diamond download
    -download sausage man mod apk easy install
    -sausage man mod apk original download
    -download sausage man mod apk fast speed
    -sausage man mod apk cracked download
    -download sausage man mod apk low mb
    -sausage man mod apk unlocked everything download

    -
  5. How can I update Sausage Man Mod APK?
  6. -

    To update Sausage Man Mod APK, you need to download the latest version of the mod apk file from the same source that you downloaded it from. Then, you need to uninstall the previous version of the mod apk and install the new one. You should always backup your data before updating the mod apk, as it may erase your progress or settings.

    -
  7. Can I play Sausage Man Mod APK online with other players?
  8. -

    Yes, you can play Sausage Man Mod APK online with other players who are using the same mod apk or the original game. However, you should be aware that using the mod apk may give you an unfair advantage over other players, which may ruin their gaming experience or cause them to report you. You should always respect other players and play fair.

    -
  9. Can I request more features for Sausage Man Mod APK?
  10. -

    Yes, you can request more features for Sausage Man Mod APK by contacting the developers of the mod apk or leaving a comment on their website or social media platforms. However, you should understand that not all requests can be fulfilled, as some features may be impossible or impractical to implement. You should also appreciate the work that the developers have done and support them if possible.

    -

197e85843d
-
-
\ No newline at end of file diff --git a/spaces/feregVcuzo/sanity-test-midi/checkpoint/Customize Your Navigation Bar with Soft Key Bar APK - No Root Required.md b/spaces/feregVcuzo/sanity-test-midi/checkpoint/Customize Your Navigation Bar with Soft Key Bar APK - No Root Required.md deleted file mode 100644 index a345b93d7f99d95561201ca8e5dda0987f724b6b..0000000000000000000000000000000000000000 --- a/spaces/feregVcuzo/sanity-test-midi/checkpoint/Customize Your Navigation Bar with Soft Key Bar APK - No Root Required.md +++ /dev/null @@ -1,132 +0,0 @@ -
-

What is a soft key bar apk and why you might need it

-

If you have an Android device, you probably use some buttons to navigate through your apps and settings. These buttons can be either physical (hardware) or virtual (software). A soft key bar is an on-screen navigation bar that displays the standard Android buttons (Home, Back, Menu, Search) at the bottom of your screen.

-

There are several benefits of using a soft key bar instead of hardware buttons. For example:

-

soft key bar apk


Download File ››››› https://gohhs.com/2uPtce



-