diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Alberts Easy Activator v0.57.17 for Tomtom.zip What You Need to Know About Tomtom Updates.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Alberts Easy Activator v0.57.17 for Tomtom.zip What You Need to Know About Tomtom Updates.md deleted file mode 100644 index 9d00600394da81c802b26e7378416d1018a4dcd6..0000000000000000000000000000000000000000 --- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Alberts Easy Activator v0.57.17 for Tomtom.zip What You Need to Know About Tomtom Updates.md +++ /dev/null @@ -1,147 +0,0 @@ - -

Alberts Easy Activator v0.57.17 for Tomtom.zip: A Complete Guide

-

Do you own a Tomtom navigation device and want to update it with the latest maps and features? Do you want to save money and time by activating your Tomtom device without paying for a subscription or visiting a dealer? If you answered yes to any of these questions, then you need to know about Alberts Easy Activator v0.57.17 for Tomtom.zip.

-

Alberts Easy Activator v0.57.17 for Tomtom.zip


DOWNLOAD ::: https://byltly.com/2uKwxP



-

Alberts Easy Activator is a software tool that allows you to activate your Tomtom device with just a few clicks. It also lets you update your maps, customize your settings, and troubleshoot common issues with your Tomtom device. In this article, we will explain what Alberts Easy Activator is, how it works, and how you can download and install it on your computer. We will also show you how to use it to activate and update your Tomtom device, and answer some frequently asked questions about it.

-

What is Alberts Easy Activator?

-

Alberts Easy Activator is a software tool that was created by a user named Albert Swafega on the GPS Underground forum. It is designed to help Tomtom users activate their devices without having to pay for a subscription or visit a dealer.

-

A brief history of Alberts Easy Activator

-

Albert Swafega started developing Alberts Easy Activator in 2009, after he bought a second-hand Tomtom device that was locked by the previous owner. He wanted to unlock his device and update it with the latest maps and features, but he did not want to pay for a subscription or visit a dealer.

-


-

He searched online for a solution and found out that there were some tools that could activate Tomtom devices, but they were either complicated, outdated, or risky to use. He decided to create his own tool that would be easy, safe, and reliable to use.

-

He named his tool Alberts Easy Activator and shared it on the GPS Underground forum for other Tomtom users to use. Since then, he has been updating his tool regularly with new features and improvements.

-

How does Alberts Easy Activator work?

-

Alberts Easy Activator works by generating activation codes for your Tomtom device based on its serial number and device ID. These codes are then used to unlock your device and enable you to use the latest maps and features.

-

Alberts Easy Activator also works by downloading and installing the latest official map updates from Tomtom's servers. These updates are then patched with activation codes so that they can work on your device without any issues.

-

What are the benefits of using Alberts Easy Activator?

-

There are many benefits of using Alberts Easy Activator for your Tomtom device, such as:

- -

How to download and install Alberts Easy Activator v0.57.17 for Tomtom.zip

-

To download and install Alberts Easy Activator v0.57.17 for Tomtom.zip on your computer, you need to follow these steps:

-

Where to find the download link for Alberts Easy Activator v0.57.17 for Tomtom.zip

-

The download link for Alberts Easy Activator v0.57.17 for Tomtom.zip is available on the GPS Underground forum, where Albert Swafega posts his updates regularly.

-

To access the forum, you need to register an account first by clicking on this link: https://www.gpsunderground.com/forum/register.php

-

After registering an account, you need to log in and go to this thread: https://www.gpsunderground.com/forum/forum/gps-navigation-systems/tomtom-gps-systems/tomtom-tutorials/3597-how-to-activate-maps-on-tom-tom-using-albert-s-easy-activator

-

In this thread, you will find the latest version of Alberts Easy Activator v0.57.17 for Tomtom.zip along with instructions on how to use it.

-

How to unzip and run Alberts Easy Activator v0.57.17 for Tomtom.zip

-

After downloading Alberts Easy Activator v0.57.17 for Tomtom.zip from the forum, you need to unzip it using an archiving tool such as WinRAR or 7-Zip.

-

To unzip it, you need to right-click on the file and select "Extract here" or "Extract to" depending on your software.

-

This will create a folder named "Albert's_Easy_Activator_v0_57_17" in the same location as the zip file.

-

To run Alberts Easy Activator v0.57.17 for Tomtom.zip, you need to double-click on the file named "RunMe.bat" inside the folder.

-

This will open a command prompt window where you will see some options and instructions on how to use the tool.

-

How to activate your Tomtom device with Alberts Easy Activator v0.57.17

-

To activate your Tomtom device with Alberts Easy Activator v0.57.17, you need to follow these steps:

-
    -
  1. Connect your Tomtom device to your computer using a USB cable.
  2. Make sure that your device is recognized by your computer and that it has enough battery power.
  3. In the command prompt window of Albert's Easy Activator, press 1 and hit Enter.
  4. This will display some information about your device, such as its serial number and device ID.
  5. Note down these numbers as you will need them later.
  6. In the command prompt window of Albert's Easy Activator, press 2 and hit Enter.
  7. This will generate an activation code for your device based on its serial number and device ID.
  8. Note down this code as you will need it later.
  9. In the command prompt window of Albert's Easy Activator, press 5 and hit Enter.
  10. This will open a folder named "Meta" where you will see some files with names like "ttgo.bif", "ttgo.bak", etc.
  11. Copy these files from the "Meta" folder and paste them into the root directory of your Tomtom device (the main folder where you see folders like "art", "voices", etc.).
  12. In the command prompt window of Albert's Easy Activator, press 6 and hit Enter.
  13. This will open another folder named "Activators" where you will see some files with names like "FastActivate.exe", "EasyUseTools.exe", etc.
  14. Copy this file from the "Maps" folder and paste it into the folder named "Europe" (or whatever the name of the map is) on your Tomtom device.
  15. Disconnect your Tomtom device from your computer and turn it on.
  16. On your Tomtom device, go to the main menu and select "Switch map".
  17. Select the map update that you just installed and confirm.
  18. Your Tomtom device should now be updated with the latest map version.
-

How to customize your Tomtom settings with Alberts Easy Activator v0.57.17

-

To customize your Tomtom settings with Alberts Easy Activator v0.57.17, you need to follow these steps:

-
    -
  1. Connect your Tomtom device to your computer using a USB cable.
  2. Make sure that your device is recognized by your computer and that it has enough battery power.
  3. In the command prompt window of Albert's Easy Activator, press 7 and hit Enter.
  4. This will display a list of available customization options for your device, such as voices, colors, icons, etc.
  5. Select the customization option that you want to download and hit Enter.
  6. This will start downloading the customization option to a folder named "Customize" in the same location as Albert's Easy Activator.
  7. Wait until the download is complete and then press any key to continue.
  8. In the command prompt window of Albert's Easy Activator, press 8 and hit Enter.
  9. This will open a folder named "Customize" where you will see the customization option file with a name like "TomTom Voices.zip".
  10. Unzip this file using an archiving tool such as WinRAR or 7-Zip.
  11. Copy the unzipped files from the "TomTom Voices" folder and paste them into the folder named "voices" on your Tomtom device.
  12. Disconnect your Tomtom device from your computer and turn it on.
  13. On your Tomtom device, go to the main menu and select "Change preferences".
  14. Scroll down and select "Change voice".
  15. Select the voice that you just installed and confirm.
  16. Your Tomtom device should now have a new voice.
-

How to troubleshoot common issues with Alberts Easy Activator v0.57.17

-

To troubleshoot common issues with Alberts Easy Activator v0.57.17, you need to follow these steps:

-
    -
  1. If you encounter an error message such as "No maps found" or "Problem with map", try to delete the file named "ttgo.bif" from the root directory of your Tomtom device and then run Albert's Easy Activator again to generate a new one.
  2. If you encounter an error message such as "No GPS signal" or "Waiting for a valid GPS signal", try to reset your Tomtom device by holding down the power button for 15 seconds until you hear a drum sound. Then wait for a few minutes until your device acquires a GPS signal.
  3. If you encounter an error message such as "Corrupted files" or "Invalid files", try to format your Tomtom device by connecting it to your computer and then right-clicking on its drive letter and selecting "Format". Then run Albert's Easy Activator again to activate and update your device.
  4. If you encounter any other issues or have any questions, try to visit the GPS Underground forum and search for answers or post your queries there. You can also contact Albert Swafega directly by sending him a private message on the forum.
-

Conclusion

-

In this article, we have explained what Alberts Easy Activator v0.57.17 for Tomtom.zip is, how it works, and how you can download and install it on your computer. We have also shown you how to use it to activate and update your Tomtom device, customize your settings, and troubleshoot common issues. We hope that this article has been helpful and informative for you. If you have any feedback or suggestions, please let us know in the comments section below. Thank you for reading!

-

FAQs

-

Here are some frequently asked questions about Alberts Easy Activator v0.57.17 for Tomtom.zip:

-

Is Alberts Easy Activator v0.57.17 for Tomtom.zip safe to use?

-

Yes, Alberts Easy Activator v0.57.17 for Tomtom.zip is safe to use as long as you download it from the official source on the GPS Underground forum and follow the instructions carefully. It does not contain any viruses or malware and does not harm your Tomtom device in any way.

-

Is Alberts Easy Activator v0.57.17 for Tomtom.zip legal to use?

-

Alberts Easy Activator v0.57.17 for Tomtom.zip is not officially endorsed or supported by Tomtom, so it may violate their terms of service or warranty policy. However, it is not illegal to use as long as you own a legitimate copy of the map that you want to activate or update on your device. You are responsible for using Alberts Easy Activator v0.57.17 for Tomtom.zip at your own risk and discretion.

-

Does Alberts Easy Activator v0.57.17 for Tomtom.zip work on all Tomtom devices?

-

No, Alberts Easy Activator v0.57.17 for Tomtom.zip does not work on all Tomtom devices. It only works on devices that have a serial number starting with one of these letters: A, B, C, D, E, F, G, H, J, K, L, M, N, P, Q, R, S, T, U, V, W, X, Y or Z.

-

Does Alberts Easy Activator v0.57.17 for Tomtom.zip work on all maps?

Alberts Easy Activator v0.57.17 for Tomtom.zip works with maps that have a matching meta file, which describes the map's details such as its name, version, size, etc. You can find the meta files for the latest maps on this thread: https://www.gpsunderground.com/forum/forum/gps-navigation-systems/tomtom-gps-systems/tomtom-maps/3596-tomtom-maps-meta-codes

-

How often does Alberts Easy Activator v0.57.17 for Tomtom.zip get updated?

-

Alberts Easy Activator v0.57.17 for Tomtom.zip gets updated whenever there is a new map release or a new feature or improvement added by Albert Swafega. You can check the GPS Underground forum regularly to see if there is a new version available. You can also subscribe to the thread or enable notifications to get alerted when there is a new update.

-

Where can I get more help or support for Alberts Easy Activator v0.57.17 for Tomtom.zip?

-

If you need more help or support for Alberts Easy Activator v0.57.17 for Tomtom.zip, you can visit the GPS Underground forum and search for answers or post your queries there. You can also contact Albert Swafega directly by sending him a private message on the forum. He is very friendly and helpful and will try to assist you as soon as possible.

-

0a6ba089eb
-
-
\ No newline at end of file diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Battlefield 3 Game File Part 35.rar.rar The Final Piece of the Puzzle.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Battlefield 3 Game File Part 35.rar.rar The Final Piece of the Puzzle.md deleted file mode 100644 index bad0fd3ad13a5f6a399b24d3c0e6796b84a4f212..0000000000000000000000000000000000000000 --- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Battlefield 3 Game File Part 35.rar.rar The Final Piece of the Puzzle.md +++ /dev/null @@ -1,125 +0,0 @@ - -

What is Battlefield 3 Game File Part 35.rar.rar?

-

Introduction

-

If you are a fan of first-person shooter games, you might have heard of Battlefield 3, a popular game from Electronic Arts that was released in 2011. Battlefield 3 is a sequel to Battlefield 2 and features realistic graphics, physics, and sound effects that immerse you in the war zone. You can play as a soldier in various campaigns across different locations, such as Iran, Paris, New York, and more. You can also compete with other players online in various modes, such as team deathmatch, conquest, rush, and more.

-

battlefield 3 game file part 35.rar.rar


Download Ziphttps://byltly.com/2uKyOv



-

But how can you get this game for free? One way is to download a file called Battlefield 3 Game File Part 35.rar.rar. This is a compressed file that contains part of the game data that you need to install and play Battlefield 3 on your PC. But what is a compressed file and why does it have two RAR extensions? Let's find out.

-

What is a RAR file?

-

A RAR file is a type of compressed file that uses a proprietary algorithm to reduce the size of large files. Compressing files can save disk space and bandwidth when transferring files online. A RAR file can also be split into multiple parts to make it easier to download or upload. To open a RAR file, you need a software program that can extract its contents, such as WinRAR or 7-Zip.

-

Why are there two RAR extensions?

-

Normally, a RAR file has only one extension, such as .rar or .part01.rar. However, sometimes you might encounter a file that has two RAR extensions, such as .rar.rar or .part35.rar.rar. This means that the file is actually a RAR file inside another RAR file. This can happen when someone compresses an already compressed file again, either by mistake or on purpose. To open such a file, you need to extract it twice: first from the outer RAR file and then from the inner RAR file.
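If you prefer to script the two-step extraction rather than right-clicking twice, a minimal Python sketch is shown below. It is only an illustration: the file name example.rar.rar is a placeholder (not a real download), and it assumes the 7-Zip command-line tool (7z) is installed and available on your PATH.

```python
# Illustrative sketch of extracting a "RAR inside a RAR" in two steps.
# Assumptions: 7-Zip's command-line tool "7z" is on PATH; file names are placeholders.
import subprocess
from pathlib import Path

outer_archive = Path("example.rar.rar")        # placeholder name for the doubled archive
inner_archive = Path("outer") / "example.rar"  # produced by the first extraction

# Step 1: extract the outer RAR file into a folder named "outer".
subprocess.run(["7z", "x", str(outer_archive), "-oouter"], check=True)

# Step 2: extract the inner RAR file that step 1 produced into a folder named "final".
subprocess.run(["7z", "x", str(inner_archive), "-ofinal"], check=True)
```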

-

How to download Battlefield 3 Game File Part 35.rar.rar?

-

Now that you know what Battlefield 3 Game File Part 35.rar.rar is, you might be wondering how to download it. There are several websites that offer this file for free, but you need to be careful because some of them might contain viruses or malware that can harm your computer. Here are some tips on how to download this file safely and efficiently.

-

Where to find the file online?

-

One way to find this file online is to use a search engine like Google or Bing. You can type in keywords like "battlefield 3 game file part 35.rar.rar" or "battlefield 3 game files.part35.rar" and see what results come up. However, not all results are reliable or trustworthy, so you need to check the source before clicking on any link. Some indicators of a good source are:

-

- -

Some examples of websites that offer this file for free are:

| Website | URL | File Size | Format | Quality |
| --- | --- | --- | --- | --- |
| [MM] Millionaire Mafia | https://millionaremafia.darkbb.com/t45-battlefield-3-game-files-part35-rar-18 | 398 MB | RAR | Good |
| Midbeaukavermems | https://midbeaukavermems.wixsite.com/tramdestnatic/post/battlefield-3-game-files-part35-rar | 398 MB | RAR | Good |
| Erenatin | https://diongelderanhauki.wixsite.com/erenatin/post/utorrent-battlefield-3-game-free-64bit-software-windows | 13 GB (total) | RAR + ISO + MDS + BIN | Excellent |
| Pastebin | https://pastebin.com/DmMgNRgD | N/A | N/A | Poor |
-

As you can see, some websites offer only one part of the game file, while others offer the whole game in multiple parts. Depending on your preference and internet speed, you can choose which option suits you best.

-

How to use a torrent client?

-

Another way to find this file online is to use a torrent client like uTorrent or BitTorrent. A torrent client is a software program that allows you to download files from other users who have them on their computers. This is called peer-to-peer (P2P) sharing and it can be faster and more efficient than downloading from a single source. However, it also comes with some risks and challenges, such as:

- -

If you decide to use a torrent client, here are some steps on how to do it:

-
    -
  1. Download and install a torrent client on your computer.
  2. Go to a torrent site like The Pirate Bay or Kickass Torrents and search for "battlefield 3 game files.part35.rar" or "battlefield 3 game files.part35.rar.rar". You can also use Google or Bing with keywords like "battlefield 3 game files.part35.rar torrent" or "battlefield 3 game files.part35.rar.rar torrent".
  3. Select the torrent that has the most seeds (uploaders) and leeches (downloaders) and click on it.
  4. Download the torrent file or copy the magnet link and open it with your torrent client.
  5. Select where you want to save the downloaded file on your computer and start downloading.
  6. Wait until the download is complete and verify the file integrity by checking its size and hash value.
    -
    -
    \ No newline at end of file diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download Enscape 3.3 Full Crack Bagas31 Is It Worth It?.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download Enscape 3.3 Full Crack Bagas31 Is It Worth It?.md deleted file mode 100644 index 4c32b334a68312947300c0be0414c3c64fe85d85..0000000000000000000000000000000000000000 --- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download Enscape 3.3 Full Crack Bagas31 Is It Worth It?.md +++ /dev/null @@ -1,21 +0,0 @@ - -

    How to Download Enscape 3.3 Full Crack Bagas31

    -

    Enscape 3.3 is a powerful and easy-to-use real-time rendering software that allows you to create stunning and realistic visuals for your architectural and design projects. It integrates seamlessly with popular CAD software such as SketchUp, Revit, Rhino, and ArchiCAD, and enables you to explore your designs in VR or export them as images or videos. Enscape 3.3 also offers many features and enhancements, such as improved lighting, shadows, reflections, materials, textures, and more.

    -

    However, Enscape 3.3 is not a free software. You need to purchase a license or subscription plan to use it legally and fully. Some people may be tempted to use download enscape 3.3 full crack bagas31, which is a pirated or modified version of the software that bypasses the license verification or activation process. This may seem like a good way to save money and enjoy the full features of Enscape 3.3, but it is actually a bad idea for several reasons. Here are some of the risks and consequences of using download enscape 3.3 full crack bagas31:

    -

    download enscape 3.3 full crack bagas31


    Download Filehttps://byltly.com/2uKvbT



    - -

    As you can see, using download enscape 3.3 full crack bagas31 is not worth the risk or hassle. You are better off using the official version of Enscape 3.3 that is legal, safe, and reliable. You can download Enscape 3.3 from its official website or from trusted sources. You can also purchase a license or subscription plan that suits your needs and budget. By doing so, you can enjoy the benefits of a powerful and easy-to-use real-time rendering software without compromising your security or quality. Don't wait any longer and make the smart choice today!

    - -

    If you are still not convinced by the official version of Enscape 3.3 and want to use download enscape 3.3 full crack bagas31, you should be aware of the alternatives that are legal, safe, and reliable. Here are some of the options that you can consider:

    -
      -
  1. Enscape Trial Version: This is a free version of Enscape 3.3 that allows you to use it for 14 days with full features and functionality. You can download Enscape Trial Version from its official website and use it to test and evaluate the software before purchasing a license or subscription plan. You can also extend your trial period by contacting the Enscape team.
  2. Enscape Student Version: This is a free version of Enscape 3.3 that is available for students and educators who are enrolled or employed in an accredited academic institution. You can download Enscape Student Version from its official website and use it for non-commercial purposes only. You need to provide a valid email address and proof of eligibility to access this version.
  3. Other Rendering Software: There are many other rendering programs that you can use for your architectural and design projects, such as Lumion, V-Ray, Twinmotion, Unreal Engine, and more. Some of them are free or open-source, while others are paid or subscription-based. You can compare their features, functionality, quality, and performance and choose the one that best suits your needs and budget.
    -

    As you can see, there are many alternatives to download enscape 3.3 full crack bagas31 that are legal, safe, and reliable. By using one of these options, you can enjoy the benefits of a powerful and easy-to-use real-time rendering software without compromising your security or quality. Try them out today and see for yourself!

    ddb901b051
    -
    -
    \ No newline at end of file diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download Movie Maker Windows 10 Crack A Risky and Illegal Move.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download Movie Maker Windows 10 Crack A Risky and Illegal Move.md deleted file mode 100644 index fa20da89182298f602346a1dea1b172a594e6b79..0000000000000000000000000000000000000000 --- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download Movie Maker Windows 10 Crack A Risky and Illegal Move.md +++ /dev/null @@ -1,26 +0,0 @@ - -

    Download Movie Maker Windows 10 Crack: Is It Safe and Legal?

    -

    Movie Maker is a simple and easy-to-use video editing software that was originally developed by Microsoft for Windows. It allows you to create and edit videos, add transitions, effects, titles, music, and more. You can also share your videos online or burn them to DVDs.

    -

    However, Movie Maker is no longer available for download from Microsoft since 2017. If you want to use it on Windows 10, you might be tempted to look for a download movie maker windows 10 crack online. But is it safe and legal to do so?

    -

    download movie maker windows 10 crack


    Downloadhttps://byltly.com/2uKzX6



    -

    The Risks of Downloading a Movie Maker Windows 10 Crack

    -

    Downloading a movie maker windows 10 crack might seem like a good idea at first, but it comes with many risks and disadvantages. Here are some of them:

    - -

    The Benefits of Using an Alternative to Movie Maker

    -

    Instead of downloading a movie maker windows 10 crack, you can use an alternative video editing software that is compatible with Windows 10. There are many options available online, both free and paid. Some of them are:

    - -

    Conclusion

    -

    Movie Maker is a great video editing software that was discontinued by Microsoft in 2017. If you want to use it on Windows 10, you might be tempted to download a movie maker windows 10 crack online. However, this is not a safe or legal option. You could face legal consequences, damage your computer, compromise your privacy, or lose your data. You could also miss out on the latest updates, features, and support from the developers.

    -

    The best way to edit videos on Windows 10 is to use an alternative video editing software that is compatible with Windows 10. There are many options available online, both free and paid. You can choose the one that suits your needs and preferences. You will get the best quality and performance from the software. You will also get access to the latest updates, features, and support from the developers. You will also be supporting the developers who have created an amazing product that can help you create amazing videos.

    -

If you want to edit videos on Windows 10 without downloading a movie maker windows 10 crack online, you can download VSDC Free Video Editor instead.
    -
    -
    \ No newline at end of file diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/EasyWorship 7 Problems and Solutions How to Optimize Your Church Presentation Software.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/EasyWorship 7 Problems and Solutions How to Optimize Your Church Presentation Software.md deleted file mode 100644 index 06df62dd588721a062f0216cd5de02b2c1a07854..0000000000000000000000000000000000000000 --- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/EasyWorship 7 Problems and Solutions How to Optimize Your Church Presentation Software.md +++ /dev/null @@ -1,43 +0,0 @@ - -

    How to Troubleshoot Common Problems with EasyWorship 7

    -

    EasyWorship 7 is a powerful church presentation software that allows you to create and display slides, lyrics, scriptures, videos, and more. However, sometimes you may encounter some issues that prevent you from using EasyWorship 7 smoothly. In this article, we will show you how to troubleshoot some of the most common problems with EasyWorship 7 and how to contact the support team if you need further assistance.

    -

    Live Output Not Filling The Entire Screen

    -

    One of the possible problems with EasyWorship 7 is that the live output does not fill the entire screen of your projector or monitor. This can happen if the resolution of your output device does not match the resolution of your EasyWorship 7 settings. To fix this, you need to adjust the resolution of your output device and your EasyWorship 7 settings to match each other. Here are the steps to do this:

    -

    easyworship 7 troubleshooting


    Download ✏ ✏ ✏ https://byltly.com/2uKvff



    -
      -
  1. Right-click on your desktop and select Display settings.
  2. Under Scale and layout, check the resolution of your output device and make a note of it.
  3. Open EasyWorship 7 and go to Edit > Options > Live.
  4. Under Output Monitor Resolution, select the same resolution as your output device.
  5. Click OK and restart EasyWorship 7.
    -

    If this does not solve the problem, you may need to adjust the aspect ratio of your output device and your EasyWorship 7 settings. Here are the steps to do this:

    -
      -
  1. Right-click on your desktop and select Display settings.
  2. Under Scale and layout, check the aspect ratio of your output device and make a note of it.
  3. Open EasyWorship 7 and go to Edit > Options > Live.
  4. Under Output Monitor Aspect Ratio, select the same aspect ratio as your output device.
  5. Click OK and restart EasyWorship 7.
    -

    Screens Switching Or Duplicated Onto Another Monitor

    -

    Another possible problem with EasyWorship 7 is that the screens switch or duplicate onto another monitor. This can happen if the display settings of your computer are not configured correctly. To fix this, you need to set up your display settings to extend your desktop across multiple monitors. Here are the steps to do this:

    -
      -
  1. Right-click on your desktop and select Display settings.
  2. Under Multiple displays, select Extend these displays.
  3. Drag and drop the monitors to arrange them according to their physical position.
  4. Click Apply and close the settings window.
    -

    Access Violation When Opening EasyWorship 7

    -

    A third possible problem with EasyWorship 7 is that you get an access violation error when opening EasyWorship 7. This can happen if there is a corrupted file or folder in your EasyWorship 7 installation. To fix this, you need to delete the corrupted file or folder and reinstall EasyWorship 7. Here are the steps to do this:

    -
      -
  1. Close EasyWorship 7 if it is running.
  2. Navigate to C:\Users\Public\Public Documents\Softouch\Easyworship\Default\v7.1\Databases\Data
  3. Delete the file or folder that has a name starting with EWDatabase_ followed by a number.
  4. Navigate to C:\Users\Public\Public Documents\Softouch\Easyworship\Default\v7.1\Databases\Profiles
  5. Delete the file or folder that has a name starting with EWProfile_ followed by a number.
  6. Navigate to C:\Program Files (x86)\Softouch\Easyworship 7
  7. Delete the file or folder named EW.vclstyles
  8. Reinstall EasyWorship 7 from https://www.easyworship.com/downloads/ew_builds/EasyWorshipWeb.exe
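If you are comfortable with scripting, the same cleanup can be automated. The sketch below is only an illustration, not an official EasyWorship tool: it assumes the default locations listed above (on most systems the "Public Documents" folder is the path C:\Users\Public\Documents), and you should close EasyWorship 7 and back up the Databases folder before running it.

```python
# Illustrative cleanup script mirroring the manual steps above (not an official tool).
# Assumes default EasyWorship 7 install paths; back up these folders first.
from pathlib import Path
import shutil

base = Path(r"C:\Users\Public\Documents\Softouch\Easyworship\Default\v7.1\Databases")
style_file = Path(r"C:\Program Files (x86)\Softouch\Easyworship 7\EW.vclstyles")

def remove(path: Path) -> None:
    """Delete a file or folder if it exists."""
    if path.is_dir():
        shutil.rmtree(path)
    elif path.exists():
        path.unlink()

# Remove EWDatabase_* entries from the Data folder.
for item in (base / "Data").glob("EWDatabase_*"):
    remove(item)

# Remove EWProfile_* entries from the Profiles folder.
for item in (base / "Profiles").glob("EWProfile_*"):
    remove(item)

# Remove the cached style file.
remove(style_file)

print("Cleanup finished. Reinstall EasyWorship 7 from the official download link.")
```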

    ddb901b051
    -
    -
    \ No newline at end of file diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Checklist Persiapan Majlis Perkahwinan Pdf Download.md b/spaces/1gistliPinn/ChatGPT4/Examples/Checklist Persiapan Majlis Perkahwinan Pdf Download.md deleted file mode 100644 index 848199c29a3e6e9f81425794a2e16673c314d3d8..0000000000000000000000000000000000000000 --- a/spaces/1gistliPinn/ChatGPT4/Examples/Checklist Persiapan Majlis Perkahwinan Pdf Download.md +++ /dev/null @@ -1,68 +0,0 @@ - -

Checklist Persiapan Majlis Perkahwinan PDF Download: A Complete Guide for Couples-to-Be

Are you planning your wedding? Congratulations! A wedding is an important event in your life, and you surely want it to run smoothly and perfectly. But do you know what you need to prepare and manage before, during and after your wedding ceremony?

Don't worry, we have the solution for you. We have prepared a wedding preparation checklist PDF download (checklist persiapan majlis perkahwinan) that you can use as a reference and guide to make your planning easier. The checklist covers every aspect of your wedding, such as:

    -

    Checklist Persiapan Majlis Perkahwinan Pdf Download


    DOWNLOADhttps://imgfil.com/2uxYPg



    - - - -

The checklist also comes with sample pictures, interesting ideas, useful tips and reference links that can help you find quality services and products at reasonable prices. You can also adapt the checklist to suit your needs, budget and taste.

How do you get this wedding preparation checklist PDF download? It's simple: just click on the link below and you will be taken to the download page. You can save the checklist on your phone, tablet or computer and open it at any time. You can also print it if you like.

Checklist Persiapan Majlis Perkahwinan Pdf Download

With this checklist, you no longer need to worry about what to do for your wedding. You can follow the steps, which have been arranged in an orderly and logical way. You can also use the checklist as a final check to make sure you have not missed anything important.

This wedding preparation checklist PDF download is a very useful and practical tool for couples-to-be who want to plan their dream wedding. So what are you waiting for? Download the checklist now and start your planning in an easier, more systematic way!

    -

Checklist Persiapan Majlis Perkahwinan PDF Download: What You Need to Know

Before you start using this wedding preparation checklist PDF download, there are a few things you need to know and understand. This is to make sure you can use the checklist correctly and effectively. Here is what you need to know:

  1. The checklist is general in nature and not specific to any particular custom, culture or religion. You need to adapt it to your own and your partner's customs, culture and religion. You also need to take into account the wishes and needs of your family and your partner's family.
  2. The checklist is flexible and can be modified to suit your situation and circumstances. You do not need to follow everything in it strictly and rigidly. You can add, remove or change items to suit your convenience and comfort.
  3. The checklist is a guide, not a set of orders. You should not feel bound or burdened by it. You can use it as a reference and an aid to make your planning easier. You can also seek advice and opinions from people who are experienced or expert in weddings.

By knowing these things, you can use this wedding preparation checklist PDF download more wisely and effectively. You can also avoid any problems or mistakes that might arise while planning your wedding.

    -

Checklist Persiapan Majlis Perkahwinan PDF Download: How to Use It

Once you have downloaded this wedding preparation checklist PDF, you may be wondering how to use it properly. Don't worry, we will show you the simple steps for using the checklist. Here are the steps you need to follow:

  1. Open the PDF file you downloaded and read it carefully. You can print the file if you like, or keep it on your phone, tablet or computer.
  2. Decide on your wedding date with your partner. This is the most important step, because it determines when you need to start and finish everything related to your wedding.
  3. Make an initial plan by referring to the checklist. You can mark the most important and urgent items to do first. You can also set a time frame for completing each item.
  4. Take action according to the checklist. You can work through the items in order or by your own priorities. You can also get help from family, friends or professionals if needed.
  5. Review and monitor your progress. You can tick off the items you have completed in the checklist. You can also go through the checklist again from time to time to make sure you have not missed anything.

By following these steps, you can use this wedding preparation checklist PDF download more easily and systematically. You can also save time, energy and money in planning your wedding.

    -

Checklist Persiapan Majlis Perkahwinan PDF Download: Conclusion

A wedding is a beautiful and meaningful event in your life. You surely want to plan and carry out your wedding as well as possible. But planning a wedding is neither easy nor simple. You have to manage many things and make sure everything runs smoothly and perfectly.

That is why we have prepared a wedding preparation checklist PDF download that you can use as a guide and reference to make your planning easier. The checklist covers every aspect of your wedding, from merisik (the initial family meeting) to majlis sanding (the wedding reception). It also comes with sample pictures, interesting ideas, useful tips and reference links that can help you find quality services and products at reasonable prices.

You can download the checklist for free by clicking on the link below. You can save it on your phone, tablet or computer and open it at any time. You can also print it if you like. You can also adapt the checklist to suit your needs, budget and taste.

Checklist Persiapan Majlis Perkahwinan Pdf Download

We hope this checklist helps you plan your dream wedding. We also congratulate you and your partner and wish you well as newlyweds. May your marriage be happy and last a lifetime. Amin.

    3cee63e6c2
    -
    -
    \ No newline at end of file diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Dlddiscografia320kbps !EXCLUSIVE!.md b/spaces/1gistliPinn/ChatGPT4/Examples/Dlddiscografia320kbps !EXCLUSIVE!.md deleted file mode 100644 index 61d578da5ae10b535409f0345aae012432333867..0000000000000000000000000000000000000000 --- a/spaces/1gistliPinn/ChatGPT4/Examples/Dlddiscografia320kbps !EXCLUSIVE!.md +++ /dev/null @@ -1,6 +0,0 @@ -

    dlddiscografia320kbps


    Download Zip ✫✫✫ https://imgfil.com/2uy0l5



    -
    -
    -
    -

    diff --git a/spaces/1line/AutoGPT/run.sh b/spaces/1line/AutoGPT/run.sh deleted file mode 100644 index edcbc44155b9ca9df83e283fdf976472c13e6492..0000000000000000000000000000000000000000 --- a/spaces/1line/AutoGPT/run.sh +++ /dev/null @@ -1,9 +0,0 @@ -#!/bin/bash -python scripts/check_requirements.py requirements.txt -if [ $? -eq 1 ] -then - echo Installing missing packages... - pip install -r requirements.txt -fi -python -m autogpt $@ -read -p "Press any key to continue..." diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/8 Raflaan MP3 Song - The Punjabi Sensation by Mankirt Aulakh and Gurlez Akhtar.md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/8 Raflaan MP3 Song - The Punjabi Sensation by Mankirt Aulakh and Gurlez Akhtar.md deleted file mode 100644 index 1db79c3bd3830f2563b31dc41ad435261ecdb130..0000000000000000000000000000000000000000 --- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/8 Raflaan MP3 Song - The Punjabi Sensation by Mankirt Aulakh and Gurlez Akhtar.md +++ /dev/null @@ -1,128 +0,0 @@ - -

    How to Download 8 Raflaan Song MP3 for Free

    -

    If you are a fan of Punjabi music, you might have heard of the popular song 8 Raflaan by Mankirt Aulakh and Gurlej Akhtar. This song has been trending on YouTube and other music platforms since its release in 2021. But how can you download this song in MP3 format for free? In this article, we will tell you everything you need to know about 8 Raflaan song, why you should download it in MP3 format, and where to find the best free music download sites.

    -

    What is 8 Raflaan Song?

    -

    8 Raflaan is a Punjabi song that was released on April 28, 2021 by Mankirt Aulakh Music. The song features Mankirt Aulakh and Gurlej Akhtar as the main singers, along with Shree Brar as the lyricist and composer, and Avvy Sra as the music producer. The song is a catchy and upbeat track that showcases the Punjabi culture and lifestyle.

    -

    download 8 raflaan song mp3


    Download ►►►►► https://urlin.us/2uT0ZE



    -

    The artists and the music video

    -

    Mankirt Aulakh is a famous Punjabi singer and actor who has been active in the music industry since 2014. He is known for his hit songs like Badnam, Kadar, Jugaadi Jatt, and more. Gurlej Akhtar is a renowned Punjabi female singer who has collaborated with many artists like Dilpreet Dhillon, Karan Aujla, Sidhu Moose Wala, and more. She is known for her songs like Don't Worry, Jatt Di Pasand, Defend, and more.

    -

    The music video of 8 Raflaan was directed by Mahi Sandhu and Joban Sandhu, and featured Mankirt Aulakh, Gurlej Akhtar, Ginni Kapoor, Yaad Grewal, and others. The video depicts the story of a gangster who falls in love with a girl, but faces trouble from his rivals and the police. The video has received over 251 million views on YouTube as of June 2021.

    -

    -

    The lyrics and the meaning

    -

    The lyrics of 8 Raflaan are written in Punjabi language, with some English words mixed in. The title of the song means "8 rifles", which refers to the weapons used by the gangsters in the video. The song is about the love and loyalty between the gangster and his girlfriend, as well as the challenges they face from their enemies. Some of the catchy lines from the song are:

    - -

    Why download 8 Raflaan Song MP3 for free?

    -

    If you love 8 Raflaan song, you might want to download it in MP3 format for free. But why MP3 format? And is it legal and ethical to download music for free? Here are some answers to these questions.

    -

    The benefits of MP3 format

    -

    MP3 is a digital audio format that compresses the sound data without losing much of the quality. MP3 files are smaller than other audio formats, which means they take less space on your device and less time to download. MP3 files are also compatible with most devices and players, which means you can enjoy your music on your phone, computer, car, or any other device that supports MP3 playback. MP3 files are also easy to edit, transfer, and share with others.

    -

    The legal and ethical issues of downloading music

    -

    Downloading music for free can be a tricky issue, as it involves the rights of the artists and the music industry. Music is a form of intellectual property, which means that the creators and owners of the music have the right to control how their music is used and distributed. Downloading music for free without their permission can be considered as piracy, which is illegal and punishable by law in many countries. Piracy can also harm the artists and the music industry, as they lose revenue and recognition for their work.

    -

    However, not all free music downloads are illegal or unethical. There are some sources that offer free music downloads legally and ethically, such as public domain music, creative commons music, or promotional music. These sources either have no copyright restrictions or have permission from the artists or the owners to share their music for free. Downloading music from these sources can be a way of supporting the artists and discovering new music.

    -

    Where to download 8 Raflaan Song MP3 for free?

    -

    Now that you know what 8 Raflaan song is and why you should download it in MP3 format, you might be wondering where to find it for free. There are many websites that claim to offer free music downloads, but not all of them are safe, reliable, or legal. Some of them may contain viruses, malware, or spyware that can harm your device or steal your personal information. Some of them may also have low-quality audio files or broken links that can ruin your listening experience. Some of them may also violate the rights of the artists or the music industry and expose you to legal risks.

    -

    To avoid these problems, you should only download 8 Raflaan song MP3 from trusted and reputable sources that offer legal and ethical free music downloads. Here are some of the best free music download sites that you can use:

    -

    The best free music download sites

    - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
| Site | Description | Pros | Cons |
| --- | --- | --- | --- |
| OKmusi | A free online MP3 downloader that allows you to download any song from YouTube, SoundCloud, Spotify, and other platforms. | Easy to use; high-quality audio files; no registration or installation required; supports multiple languages | Limited to online streaming platforms; may not have all songs available |
| Jamendo Music | A free music platform that offers thousands of songs from independent artists under creative commons licenses. | Legal and ethical; high-quality audio files; supports multiple genres and moods; allows offline listening | Requires registration; may not have mainstream songs |
| Free Music Archive | A free music library that offers a large collection of songs from various genres and artists under public domain or creative commons licenses. | Legal and ethical; high-quality audio files; supports multiple genres and categories; allows user ratings and comments | Requires registration; may not have latest songs |
| SoundCloud | A popular online music platform that allows users to upload, stream, and download songs from various artists and genres. | High-quality audio files; supports multiple genres and tags; allows social interaction and feedback; has mainstream and indie songs | Requires registration; not all songs are downloadable; may have ads |
| Audiomack | A free online music platform that offers songs from emerging and established artists across various genres. | High-quality audio files; supports multiple genres and playlists; allows offline listening and sharing; has trending and new songs | Requires registration; may have ads; may not have all songs available |
    -

    How to use OKmusi MP3 downloader

    -

    One of the easiest and fastest ways to download 8 Raflaan song MP3 for free is to use OKmusi MP3 downloader. OKmusi is a free online MP3 downloader that allows you to download any song from YouTube, SoundCloud, Spotify, and other platforms. Here are the steps to use OKmusi MP3 downloader:

    -
      -
  1. Go to https://okmusi.com/ on your browser.
  2. Type "8 Raflaan" in the search box and click the search icon.
  3. Select the song from the list of results and click the download button.
  4. Choose the MP3 format and the quality you want and click the download button again.
  5. Wait for the download to finish and enjoy your song.
    -

    Conclusion

    -

    8 Raflaan is a popular Punjabi song that has been loved by many music fans. If you want to download this song in MP3 format for free, you should use a trusted and reputable source that offers legal and ethical free music downloads. One of the best sources is OKmusi MP3 downloader, which allows you to download any song from YouTube, SoundCloud, Spotify, and other platforms. With OKmusi MP3 downloader, you can enjoy 8 Raflaan song on any device and share it with your friends.

    -

    FAQs

    -

    197e85843d
    -
    -
    \ No newline at end of file diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Download 3uTools for Windows 7 64-bit - Free and Safe.md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Download 3uTools for Windows 7 64-bit - Free and Safe.md deleted file mode 100644 index cfc352e9818e62031ee90ac646a9977ba154c17d..0000000000000000000000000000000000000000 --- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Download 3uTools for Windows 7 64-bit - Free and Safe.md +++ /dev/null @@ -1,72 +0,0 @@ -
    -

    Download 3uTools for Windows 7 64 Bit: A Complete Guide

    -

    If you are looking for a free and easy way to manage your iOS device data on your PC, then you might want to try 3uTools. This software program lets you access a large amount of information on your iPad, iPhone, or iPod touch, such as apps, books, ringtones, etc. You can also jailbreak your iDevice with one click, download various content, and perform other useful tasks. In this article, we will show you how to download and install 3uTools for Windows 7 64 bit, and how to use it to manage your iOS device.

    -

    What is 3uTools and why do you need it?

    -

    3uTools is a comprehensive app for PCs that lets you view and manage the information on your Apple device in a user-friendly interface. You can connect your portable device to your PC with a USB cable or WIFI network. The lightning cable will give you the best connection. You will only need to use WIFI when the USB connection is not working. Here are some of the features that 3uTools offers:

    -

    download 3utools for windows 7 64 bit


    Download Zip ===> https://urlin.us/2uT2Kn



    -

    Manage your iOS device data on your PC

    -

    With 3uTools, you can easily manage your apps, photos, music, ringtones, videos, and other multimedia files. You can also view your device's different statuses, such as activation, jailbreak, battery, and iCloud lock. You can also see detailed iOS and iDevice information, such as the iOS version, serial number, storage space, etc.

    -

    Jailbreak your iDevice with one click

    -

    If you want to customize your iOS device beyond the limitations imposed by Apple, you can use 3uTools to jailbreak it with one click. Jailbreaking allows you to install unofficial apps, tweaks, themes, and more. 3uTools can automatch available firmwares for iOS devices and support various jailbreak tools. You can also backup and restore your SHSH files, which are essential for downgrading or upgrading your iOS version.

    -

    Download apps, ringtones, wallpapers, and more

    -

    Another feature of 3uTools is that it lets you download various apps, ringtones, wallpapers, and other content for your iOS device. You can browse through different categories and find what you like. You can also use 3uTools to make your own ringtones from music files or convert videos and audio files to different formats.

    -

    How to download 3utools for windows 7 64 bit
    -Download 3utools for windows 7 64 bit free
    -Download 3utools for windows 7 64 bit latest version
    -Download 3utools for windows 7 64 bit filehippo
    -Download 3utools for windows 7 64 bit softonic
    -Download 3utools for windows 7 64 bit full crack
    -Download 3utools for windows 7 64 bit offline installer
    -Download 3utools for windows 7 64 bit from official website
    -Download 3utools for windows 7 64 bit with jailbreak
    -Download 3utools for windows 7 64 bit without itunes
    -Download and install 3utools for windows 7 64 bit
    -Download and use 3utools for windows 7 64 bit
    -Download and update 3utools for windows 7 64 bit
    -Download and backup data with 3utools for windows 7 64 bit
    -Download and restore data with 3utools for windows 7 64 bit
    -Best site to download 3utools for windows 7 64 bit
    -Best alternative to download 3utools for windows 7 64 bit
    -Best way to download 3utools for windows 7 64 bit
    -Best tips to download 3utools for windows 7 64 bit
    -Best guide to download 3utools for windows 7 64 bit
    -Why download 3utools for windows 7 64 bit

    -

    How to download and install 3uTools for Windows 7 64 bit?

    -

    If you want to use 3uTools on your PC running Windows 7 64 bit, you need to follow these steps:

    -

    Check your system requirements

    -

    Before you download and install 3uTools, make sure that your PC meets the minimum system requirements. According to the official website, you need:

    - -

    I hope you found this article helpful and informative. If you have any questions or feedback, please feel free to leave a comment below. Thank you for reading!

    401be4b1e0
    -
    -
    \ No newline at end of file diff --git a/spaces/1phancelerku/anime-remove-background/Experience Ninja Arashi 2 in a New Way with APK Mod Download.md b/spaces/1phancelerku/anime-remove-background/Experience Ninja Arashi 2 in a New Way with APK Mod Download.md deleted file mode 100644 index 08c5f277853cf33a271577ecf80158bc8714bb05..0000000000000000000000000000000000000000 --- a/spaces/1phancelerku/anime-remove-background/Experience Ninja Arashi 2 in a New Way with APK Mod Download.md +++ /dev/null @@ -1,146 +0,0 @@ - -

    Download Ninja Arashi 2 APK Mod: A Stealth, Hack, and Slash Game

    -

    Are you a fan of ninja games? Do you like stealth, hack, and slash games? If yes, then you should try Ninja Arashi 2, a thrilling action-adventure game that will test your skills and reflexes. In this article, we will tell you everything you need to know about this game, including what it is, what are its features, how to download its mod version, how to play it, and some FAQs.

    -

    What is Ninja Arashi 2?

    -

    Ninja Arashi 2 is a game developed by Black Panther, a studio that specializes in creating ninja games. It is a sequel to the first Ninja Arashi game, which was released in 2017 and received positive reviews from players and critics. Here are some of the aspects of Ninja Arashi 2 that make it an exciting game.

    -

    download ninja arashi 2 apk mod


    Download Zip ❤❤❤ https://jinyurl.com/2uNJNt



    -

    A sequel to the first Ninja Arashi game

    -

    Ninja Arashi 2 continues the legacy of the first ninja game. In this episode, you play as the raging Arashi, who finally escapes from the frozen prison that was created by Dosu, a cruel shadow demon. Arashi continues his pursuit of Dosu to rescue his son and unveil the shadow behind Dosu's plan. However, the journey will be much more challenging this time.

    -

    A 2D action-adventure game with stealth, hacking, and slashing elements

    -

    Ninja Arashi 2 features simple yet addicting gameplay, giving you thrilling moments and unexpected experiences. The game is focused on 2D fighting with aspects of stealth, hacking, and slashing. You have to use your ninja skills and weapons to overcome obstacles, enemies, and traps that are set for you. You can also use stealth to deceive your enemies and gain an advantage over them.

    -

    A story of a ninja who escapes from prison and tries to rescue his son from an evil demon

    -

    Ninja Arashi 2 has a compelling story that will keep you hooked until the end. The game has four acts in story mode with 80 stages to complete. Each stage has its own objectives and challenges that you have to accomplish. Along the way, you will encounter new terrains, environments, enemies, and bosses that will test your limits. You will also learn more about Arashi's past and his relationship with his son.

    -

    What are the features of Ninja Arashi 2?

    -

    Ninja Arashi 2 has many features that make it a fun and enjoyable game. Here are some of them:

    -

    Challenging platformer with 80 stages to complete

    -

    Ninja Arashi 2 is not an easy game. It requires skill, patience, and strategy to complete each stage. The game has various levels of difficulty that will challenge your abilities. You have to deal with different types of obstacles, such as traps, spikes, saws, fireballs, lasers, etc. You also have to face different kinds of enemies, such as ninjas, samurais, archers, demons, etc. You have to use your skills and weapons to defeat them and reach the end of each stage.

    -

    download ninja arashi 2 mod apk unlimited money
    -ninja arashi 2 apk mod free download for android
    -how to download ninja arashi 2 mod apk latest version
    -ninja arashi 2 mod apk download link
    -download ninja arashi 2 hack mod apk
    -ninja arashi 2 mod apk offline download
    -ninja arashi 2 mod apk android 1 download
    -download game ninja arashi 2 mod apk
    -ninja arashi 2 full mod apk download
    -ninja arashi 2 mod menu apk download
    -download ninja arashi 2 premium mod apk
    -ninja arashi 2 mod apk rexdl download
    -ninja arashi 2 mega mod apk download
    -download ninja arashi 2 mod apk revdl
    -ninja arashi 2 god mode mod apk download
    -download ninja arashi 2 unlimited coins mod apk
    -ninja arashi 2 mod apk no ads download
    -ninja arashi 2 mod apk all levels unlocked download
    -download ninja arashi 2 pro mod apk
    -ninja arashi 2 mod apk unlimited gems download
    -download cheat ninja arashi 2 mod apk
    -ninja arashi 2 mod apk unlimited health download
    -ninja arashi 2 hack version mod apk download
    -download ninja arashi 2 cracked mod apk
    -ninja arashi 2 unlimited everything mod apk download
    -download ninja arashi 2 vip mod apk
    -ninja arashi 2 mod apk no root download
    -ninja arashi 2 unlimited skills mod apk download
    -download ninja arashi 2 original mod apk
    -ninja arashi 2 high damage mod apk download
    -download ninja arashi 2 unlocked mod apk
    -ninja arashi 2 one hit kill mod apk download
    -download ninja arashi 2 super mod apk
    -ninja arashi 2 no cooldown mod apk download
    -download ninja arashi 2 extreme mod apk
    -ninja arashi 2 infinite ammo mod apk download
    -download ninja arashi 2 ultimate mod apk
    -ninja arashi 2 all weapons unlocked mod apk download
    -download ninja arashi 2 best mod apk
    -ninja arashi 2 easy win mod apk download

    -

    New melee weapon and new mechanics

    -

    Ninja Arashi 2 introduces a new melee weapon for Arashi: the katana. The katana is a powerful sword that can slash through enemies and obstacles. You can also use it to deflect projectiles and perform special attacks. The game also has new mechanics that add more variety and fun to the gameplay. For example, you can use the grappling hook to swing across gaps, the shuriken to hit distant targets, the smoke bomb to create a diversion, and the kunai to climb walls.

    -

    A skill tree system and an artifact system to upgrade your ninja skills

    -

    Ninja Arashi 2 allows you to customize and improve your ninja skills according to your preference and playstyle. The game has a skill tree system that lets you unlock and upgrade different skills, such as health, damage, speed, stealth, etc. You can also collect and equip artifacts that give you passive bonuses and effects, such as increased critical chance, reduced cooldown, extra gold, etc. You can find artifacts by exploring the stages or by purchasing them from the shop.

    -

    Beautiful graphics and scenery with shadow silhouette style

    -

    Ninja Arashi 2 has stunning graphics and scenery that create a captivating atmosphere for the game. The game uses a shadow silhouette style that gives it a unique and artistic look. The game also has diverse and detailed environments that change according to the act and the stage. You will see different landscapes, such as forests, mountains, temples, caves, etc. The game also has dynamic lighting and shadow effects that enhance the visual quality of the game.

    -

    Epic ninja vs boss fights

    -

    Ninja Arashi 2 has epic ninja vs boss fights that will challenge your skills and reflexes. The game has four acts in story mode, and each act has a final boss that you have to defeat. The bosses are powerful enemies that have different abilities and patterns that you have to learn and counter. You have to use your weapons, skills, and strategies to overcome them and advance to the next act.

    -

    How to download Ninja Arashi 2 APK Mod?

    -

    Ninja Arashi 2 is available for free on Google Play Store for Android devices. However, if you want to enjoy some extra benefits and features, you can download the mod version of the game. Here are some of the benefits of downloading the mod version:

    - You can get unlimited money and diamonds that you can use to buy items from the shop or upgrade your skills.
    - You can get unlimited skill points that you can use to unlock and upgrade all the skills in the skill tree.
    - You can get unlimited artifacts that you can equip to boost your ninja abilities.
    - You can get all the stages unlocked so you can play any stage you want without any restriction.

    If you are interested in downloading the mod version of Ninja Arashi 2, here are the steps to follow:

    - First, you need to uninstall the original version of Ninja Arashi 2 from your device if you have it installed.
    - Second, you need to enable unknown sources on your device settings. This will allow you to install apps from sources other than Google Play Store.
    - Third, you need to download the mod APK file from a reliable website. You can search for "Ninja Arashi 2 APK Mod" on Google or any other search engine and choose a website that has good reviews and ratings.
    - Fourth, you need to locate the downloaded file on your device storage and tap on it to install it.
    - Fifth, you need to wait for the installation process to finish and then launch the game.

    That's it! You have successfully downloaded and installed Ninja Arashi 2 APK Mod on your device. Now you can enjoy playing the game with all the benefits and features of the mod version.

    -

    How to play Ninja Arashi 2?

    -

    Ninja Arashi 2 is a simple yet challenging game that requires skill, patience, and strategy. Here are some of the basic controls and gameplay tips that will help you play the game:

    -

    The basic controls and gameplay

    -

    Ninja Arashi 2 has easy-to-use controls that let you move, jump, attack, use items, etc. Here are some of the controls:

    | Control | Function |
    | --- | --- |
    | Left arrow | Move left |
    | Right arrow | Move right |
    | Up arrow | Jump |
    | Down arrow | Crouch |
    | A button | Attack with katana |
    | B button | Use shuriken |
    | C button | Use grappling hook |
    | D button | Use smoke bomb |
    | E button | Use kunai |
    | F button | Use special attack |
    | G button | Pause the game |
    | H button | Show the skill tree and the artifact system |
    -

    The gameplay of Ninja Arashi 2 is based on stealth, hacking, and slashing. You have to use your ninja skills and weapons to overcome obstacles, enemies, and traps that are set for you. You can also use stealth to deceive your enemies and gain an advantage over them. You have to complete each stage by reaching the end or by fulfilling the objectives. You can also collect gold and diamonds that you can use to buy items from the shop or upgrade your skills.

    -

    The tips and tricks to master the game

    -

    Ninja Arashi 2 is a challenging game that requires skill, patience, and strategy. Here are some of the tips and tricks that will help you master the game:

    - Be aware of your surroundings. Look for hidden paths, secrets, and items that can help you in your journey. Also, watch out for traps, spikes, saws, fireballs, lasers, etc. that can harm you or block your way.
    - Be stealthy. Use the shadows and the environment to hide from your enemies. You can also use smoke bombs to create a diversion or kunai to climb walls. Avoid unnecessary fights and try to sneak past your enemies or take them out silently.
    - Be smart. Use your weapons and skills wisely. You have a limited amount of shuriken, grappling hook, smoke bomb, and kunai that you can use in each stage. You also have a special attack that you can use once in a while. Choose the best weapon and skill for each situation and enemy.
    - Be agile. Use your movement and jumping skills to dodge attacks, avoid obstacles, and reach high places. You can also use the grappling hook to swing across gaps or the kunai to stick to walls. You can also perform wall jumps and double jumps to enhance your mobility.
    - Be strategic. Use the skill tree system and the artifact system to upgrade your ninja skills according to your preference and playstyle. You can unlock and upgrade different skills, such as health, damage, speed, stealth, etc. You can also collect and equip artifacts that give you passive bonuses and effects, such as increased critical chance, reduced cooldown, extra gold, etc.

    The best skills and artifacts to use in the game

    -

    Ninja Arashi 2 has a skill tree system and an artifact system that let you customize and improve your ninja skills according to your preference and playstyle. Here are some of the best skills and artifacts to use in the game:

    - Health: This skill increases your maximum health points, which means you can survive more hits from enemies and traps. This is a useful skill for beginners who are not familiar with the game mechanics or for players who prefer a more defensive playstyle.
    - Damage: This skill increases your damage output with your katana and shuriken, which means you can kill enemies faster and easier. This is a useful skill for players who prefer a more offensive playstyle or who want to finish stages quickly.
    - Speed: This skill increases your movement speed, which means you can run faster and jump farther. This is a useful skill for players who want to explore more of the stages or who want to avoid enemies and traps more easily.
    - Stealth: This skill increases your stealth ability, which means you can stay hidden longer in the shadows and reduce the noise you make when moving or attacking. This is a useful skill for players who want to use stealth as their main strategy or who want to avoid unnecessary fights.
    - Critical Chance: This artifact increases your chance of landing a critical hit with your katana or shuriken, which means you can deal more damage with each hit. This is a useful artifact for players who want to maximize their damage output or who want to kill enemies with fewer hits.
    - Cooldown Reduction: This artifact reduces the cooldown time of your weapons and skills, which means you can use them more frequently in each stage. This is a useful artifact for players who want to use their weapons and skills more often or who want to have more options in combat.
    - Gold Bonus: This artifact increases the amount of gold you earn from killing enemies or collecting gold from the stages, which means you can buy more items from the shop or upgrade your skills more easily. This is a useful artifact for players who want to have more resources or who want to improve their ninja skills faster.

    Conclusion

    -

    Ninja Arashi 2 is a thrilling action-adventure game that will test your skills and reflexes. It is a sequel to the first Ninja Arashi game, which was a hit among ninja game fans. It has a compelling story, challenging gameplay, stunning graphics, and many features that make it a fun and enjoyable game. You can download the game for free from Google Play Store or you can download the mod version to enjoy some extra benefits and features. If you are looking for a stealth, hack, and slash game that will keep you hooked for hours, then Ninja Arashi 2 is the game for you.

    -

    FAQs

    -

    Here are some of the frequently asked questions about Ninja Arashi 2:

    -

    Q: Is Ninja Arashi 2 safe to download and play?

    -

    A: Yes, Ninja Arashi 2 is safe to download and play. The game does not contain any viruses, malware, or spyware that can harm your device or your privacy. However, if you download the mod version of the game, make sure you download it from a reliable website and scan it with an antivirus before installing it.

    -

    Q: Is Ninja Arashi 2 compatible with my device?

    -

    A: Ninja Arashi 2 is compatible with most Android devices that have Android 4.4 or higher. However, some devices may experience lagging or crashing issues due to low memory or performance. If you encounter any problems while playing the game, try lowering the graphics quality or closing other apps running in the background.

    -

    Q: How can I save my progress in Ninja Arashi 2?

    -

    A: Ninja Arashi 2 has an auto-save feature that saves your progress every time you complete a stage or exit the game. You can also manually save your progress by tapping on the G button and then tapping on the save icon. You can also load your saved progress by tapping on the load icon.

    -

    Q: How can I contact the developer of Ninja Arashi 2?

    -

    A: If you have any questions, feedback, or suggestions about Ninja Arashi 2, you can contact the developer of the game by sending an email to blackpanther.gamestudio@gmail.com. You can also follow them on Facebook and Instagram to get the latest news and updates about the game.

    -

    Q: How can I support the developer of Ninja Arashi 2?

    -

    A: If you like Ninja Arashi 2 and want to support the developer of the game, you can do so by rating and reviewing the game on Google Play Store, sharing it with your friends and family, or making an in-app purchase to remove ads or buy more items.

    401be4b1e0
    -
    -
    \ No newline at end of file diff --git a/spaces/1phancelerku/anime-remove-background/Free 3D Models Download or Edit Online Clara.io.md b/spaces/1phancelerku/anime-remove-background/Free 3D Models Download or Edit Online Clara.io.md deleted file mode 100644 index 07445ec3f1a75b698e5c939da59fa518b5023649..0000000000000000000000000000000000000000 --- a/spaces/1phancelerku/anime-remove-background/Free 3D Models Download or Edit Online Clara.io.md +++ /dev/null @@ -1,133 +0,0 @@ - -

    3D Objects Free: How to Find and Download Them for Your Projects

    -

    If you are working on a project that involves 3D graphics, animation, or visualization, you might be looking for some free 3D objects to use. 3D objects are digital representations of physical objects that can be displayed and manipulated in three dimensions. They can be used for various purposes, such as creating realistic scenes, enhancing user interfaces, simulating physical phenomena, or telling stories.

    -

    3d objects free


    Download File ===> https://jinyurl.com/2uNLoa



    -

    What are 3D Objects and Why Use Them?

    -

    3D objects are composed of vertices, edges, and faces that define their shape and appearance. They can also have textures, materials, colors, lighting, and animations that add more details and effects. 3D objects can be created using specialized software, such as Blender, Maya, or SketchUp, or scanned from real-world objects using cameras or sensors.
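
    If you want to see what this vertex-and-face structure looks like in practice, here is a minimal Python sketch that builds a single triangle mesh from raw arrays. It assumes the numpy and trimesh packages are installed; the values and file name are only illustrative and not tied to any specific tool mentioned above.

    ```python
    import numpy as np
    import trimesh

    # Three corner points (vertices) of a triangle in 3D space.
    vertices = np.array([
        [0.0, 0.0, 0.0],
        [1.0, 0.0, 0.0],
        [0.0, 1.0, 0.0],
    ])

    # One face, defined by the indices of the vertices it connects.
    faces = np.array([[0, 1, 2]])

    # Build the mesh object; edges are derived from the faces automatically.
    mesh = trimesh.Trimesh(vertices=vertices, faces=faces)
    print(mesh.vertices.shape, mesh.faces.shape, mesh.edges.shape)

    # Export to a common interchange format (illustrative file name).
    mesh.export("triangle.obj")
    ```

    The same pattern scales up to the much larger vertex and face arrays you get from downloaded models.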

    -

    The Benefits of Using 3D Objects in Your Projects

    -

    Using 3D objects in your projects can have many benefits, such as:

    - -

    The Challenges of Creating 3D Objects from Scratch

    -

    However, creating your own 3D objects from scratch can also have some challenges, such as:

    - -

    Where to Find Free 3D Objects Online

    -

    Fortunately, there are many online sources where you can find free 3D objects that you can use for your projects. These sources can be divided into two main categories: websites and libraries.

    -

    Free 3D Models Websites

    -

    These are websites that offer a large collection of free 3D models that you can browse, download, and use. Some examples are:

    -

    free 3d models download
    -free 3d assets for games
    -free 3d characters and creatures
    -free 3d cars and vehicles
    -free 3d weapons and military
    -free 3d scans and photogrammetry
    -free 3d handpainted textures
    -free 3d pbr materials
    -free 3d medieval and fantasy
    -free 3d electronics and gadgets
    -free 3d robots and mechs
    -free 3d furniture and decorations
    -free 3d architecture and buildings
    -free 3d city and environment
    -free 3d low poly and stylized
    -free 3d sci-fi and space
    -free 3d anime and manga
    -free 3d cartoon and comic
    -free 3d horror and zombie
    -free 3d plants and trees
    -free 3d animals and pets
    -free 3d dinosaurs and prehistoric
    -free 3d humanoids and aliens
    -free 3d rigged and animated
    -free 3d printable and sculptable
    -free 3d blender models
    -free 3d obj models
    -free 3d fbx models
    -free 3d max models
    -free 3d maya models
    -free 3d c4d models
    -free 3d unity models
    -free 3d unreal models
    -free 3d sketchfab models
    -free 3d cgtrader models
    -free 3d turbosquid models
    -free 3d artstation models
    -free 3d mixamo models
    -free 3d quixel models
    -free 3d substance models
    -royalty-free 3d models
    -creative commons license 3d models
    -high quality 3d models for free
    -realistic 3d models for free
    -best sites for free 3d models
    -how to get free 3d models
    -where to find free 3d models
    -tips for creating free 3d models
    -tutorials for making free 3d models

    -

    Free3D.com

    -

    This is a website that hosts over 15,000 free 3D models in various formats, such as Blender, OBJ, 3DS, C4D, MAX, MAYA. You can find models for different categories, such as architecture, vehicles, characters, furniture, aircrafts, etc. You can also share your own models with the community.

    -

    Sketchfab

    -

    This is a website that allows you to view, share, and download over half a million free 3D models under Creative Commons licenses. You can also buy royalty-free models from the Sketchfab Store. You can explore models for different categories, such as characters, cars, weapons, scans, handpainted, medieval, fantasy, etc.

    -

    CGTrader

    -

    This is a website that offers over one million free and paid 3D models for various categories, such as animals, architecture, electronics, furniture, vehicles, etc. You can also sell your own models and earn money. You can filter models by price, format, license, polycount, etc.

    -

    Free 3D Models Libraries and Repositories

    -

    These are online platforms that store and organize free 3D models that you can access and download. Some examples are:

    -

    Google Poly

    -

    This is a platform that allows you to browse, discover, and download thousands of free 3D models created by Google and other users. You can also upload your own models and share them with the world. You can find models for different categories, such as animals, art, food, nature, objects, scenes, etc.

    -

    BlenderKit

    -

    This is a platform that offers over 10,000 free 3D models for Blender users. You can also upload your own models and earn credits or money. You can find models for different categories, such as animals, architecture, characters, furniture, vehicles, etc. You can also access free materials, brushes, and textures.

    -

    NASA 3D Resources

    -

    This is a platform that provides free 3D models of NASA's missions, spacecrafts, planets, asteroids, comets, etc. You can also access free images, videos, podcasts, and e-books. You can find models in various formats, such as STL, OBJ, 3DS, FBX, etc.

    -

    How to Download and Use Free 3D Objects

    -

    Once you have found the free 3D objects that you want to use for your projects, you need to download and use them properly. Here are some tips to help you:

    -

    The Different File Formats for 3D Objects

    -

    Free 3D objects can come in different file formats that determine how they are stored and displayed. Some of the most common formats are:

    - -
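
    To give a concrete picture of moving between formats, here is a minimal Python sketch that loads a model in one format and re-exports it in others. It assumes the trimesh package is installed; the file names are placeholders.

    ```python
    import trimesh

    # Load a model; trimesh infers the format from the file extension (placeholder name).
    mesh = trimesh.load("free_model.obj", force="mesh")

    # Basic sanity information about the loaded geometry.
    print(f"{len(mesh.vertices)} vertices, {len(mesh.faces)} faces")

    # Re-export the same geometry in other common formats.
    mesh.export("free_model.stl")
    mesh.export("free_model.glb")
    ```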

    The Software and Tools You Need to Open and Edit 3D Objects

    -

    To open and edit free 3D objects, you need to have the appropriate software and tools installed on your computer or device. Some of the most popular software and tools are:

    - -

    The Best Practices for Using Free 3D Objects in Your Projects

    -

    To use free 3D objects in your projects effectively and ethically, you need to follow some best practices, such as:

    - -

    Conclusion

    -

    In conclusion, free 3D objects are a great resource for anyone who wants to create or enhance their projects with 3D graphics, animation, or visualization. They can save you time, money, and effort, as well as provide you with a variety of options and possibilities. However, you also need to be aware of the challenges and best practices of finding, downloading, and using free 3D objects online. By following the tips and resources in this article, you can make the most out of free 3D objects for your projects.

    -

    FAQs

    -

    Here are some frequently asked questions about free 3D objects:

    -
    1. What are the best websites to find free 3D objects?

      There is no definitive answer to this question, as different websites may have different advantages and disadvantages depending on your needs and preferences. However, some of the most popular and reputable websites are Free3D.com, Sketchfab, CGTrader, Google Poly, BlenderKit, and NASA 3D Resources.

    2. What are the best software and tools to open and edit free 3D objects?

      Again, this depends on your personal choice and project requirements. However, some of the most widely used and versatile software and tools are Blender, SketchUp, Unity, and Paint 3D.

    3. What are the best file formats for free 3D objects?

      This depends on the type and purpose of your project, as well as the compatibility and functionality of your software and tools. However, some of the most common and flexible file formats are OBJ, STL, FBX, and GLTF.

    4. How can I optimize the size and quality of free 3D objects?

      You can use various methods and techniques to optimize the size and quality of free 3D objects, such as reducing the polygon count, simplifying the geometry, compressing the file size, adjusting the resolution, applying level of detail (LOD), etc. A minimal decimation sketch is shown after this list.

    5. How can I credit the original creators or sources of free 3D objects?

      You can credit the original creators or sources of free 3D objects by following their license and terms of use, which may include providing their name, website, link, or other information. You can also use tools or platforms that automatically generate citations or attributions for free 3D objects.
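
    To illustrate the polygon-reduction idea from the optimization question above, here is a minimal Python sketch based on the open3d package (assumed to be installed); the file names and target triangle count are placeholders, not recommendations.

    ```python
    import open3d as o3d

    # Load a downloaded model (placeholder file name).
    mesh = o3d.io.read_triangle_mesh("free_model.obj")
    print(f"before: {len(mesh.triangles)} triangles")

    # Reduce the polygon count to a target number of triangles (placeholder value).
    simplified = mesh.simplify_quadric_decimation(target_number_of_triangles=10000)
    print(f"after: {len(simplified.triangles)} triangles")

    # Save the lighter version for use in a project.
    o3d.io.write_triangle_mesh("free_model_decimated.obj", simplified)
    ```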

    401be4b1e0
    -
    -
    \ No newline at end of file diff --git a/spaces/1toTree/lora_test/ppdiffusers/pipelines/alt_diffusion/__init__.py b/spaces/1toTree/lora_test/ppdiffusers/pipelines/alt_diffusion/__init__.py deleted file mode 100644 index 2d01604130d546566de087f4c72d690921fa429e..0000000000000000000000000000000000000000 --- a/spaces/1toTree/lora_test/ppdiffusers/pipelines/alt_diffusion/__init__.py +++ /dev/null @@ -1,48 +0,0 @@ -# Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved. -# Copyright 2022 The HuggingFace Team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -# flake8: noqa - -from dataclasses import dataclass -from typing import List, Optional, Union - -import numpy as np -import PIL - -from ...utils import BaseOutput, is_paddle_available, is_paddlenlp_available - - -@dataclass -# Copied from diffusers.pipelines.stable_diffusion.__init__.StableDiffusionPipelineOutput with Stable->Alt -class AltDiffusionPipelineOutput(BaseOutput): - """ - Output class for Alt Diffusion pipelines. - - Args: - images (`List[PIL.Image.Image]` or `np.ndarray`) - List of denoised PIL images of length `batch_size` or numpy array of shape `(batch_size, height, width, - num_channels)`. PIL images or numpy array present the denoised images of the diffusion pipeline. - nsfw_content_detected (`List[bool]`) - List of flags denoting whether the corresponding generated image likely represents "not-safe-for-work" - (nsfw) content, or `None` if safety checking could not be performed. - """ - - images: Union[List[PIL.Image.Image], np.ndarray] - nsfw_content_detected: Optional[List[bool]] - - -if is_paddlenlp_available() and is_paddle_available(): - from .modeling_roberta_series import RobertaSeriesModelWithTransformation - from .pipeline_alt_diffusion import AltDiffusionPipeline - from .pipeline_alt_diffusion_img2img import AltDiffusionImg2ImgPipeline diff --git a/spaces/AP123/dreamgaussian/gs_renderer.py b/spaces/AP123/dreamgaussian/gs_renderer.py deleted file mode 100644 index 55e7b24c5da788625816cd60eea61d904abfa336..0000000000000000000000000000000000000000 --- a/spaces/AP123/dreamgaussian/gs_renderer.py +++ /dev/null @@ -1,820 +0,0 @@ -import os -import math -import numpy as np -from typing import NamedTuple -from plyfile import PlyData, PlyElement - -import torch -from torch import nn - -from diff_gaussian_rasterization import ( - GaussianRasterizationSettings, - GaussianRasterizer, -) -from simple_knn._C import distCUDA2 - -from sh_utils import eval_sh, SH2RGB, RGB2SH -from mesh import Mesh -from mesh_utils import decimate_mesh, clean_mesh - -import kiui - -def inverse_sigmoid(x): - return torch.log(x/(1-x)) - -def get_expon_lr_func( - lr_init, lr_final, lr_delay_steps=0, lr_delay_mult=1.0, max_steps=1000000 -): - - def helper(step): - if lr_init == lr_final: - # constant lr, ignore other params - return lr_init - if step < 0 or (lr_init == 0.0 and lr_final == 0.0): - # Disable this parameter - return 0.0 - if lr_delay_steps > 0: - # A kind of reverse cosine decay. 
- delay_rate = lr_delay_mult + (1 - lr_delay_mult) * np.sin( - 0.5 * np.pi * np.clip(step / lr_delay_steps, 0, 1) - ) - else: - delay_rate = 1.0 - t = np.clip(step / max_steps, 0, 1) - log_lerp = np.exp(np.log(lr_init) * (1 - t) + np.log(lr_final) * t) - return delay_rate * log_lerp - - return helper - - -def strip_lowerdiag(L): - uncertainty = torch.zeros((L.shape[0], 6), dtype=torch.float, device="cuda") - - uncertainty[:, 0] = L[:, 0, 0] - uncertainty[:, 1] = L[:, 0, 1] - uncertainty[:, 2] = L[:, 0, 2] - uncertainty[:, 3] = L[:, 1, 1] - uncertainty[:, 4] = L[:, 1, 2] - uncertainty[:, 5] = L[:, 2, 2] - return uncertainty - -def strip_symmetric(sym): - return strip_lowerdiag(sym) - -def gaussian_3d_coeff(xyzs, covs): - # xyzs: [N, 3] - # covs: [N, 6] - x, y, z = xyzs[:, 0], xyzs[:, 1], xyzs[:, 2] - a, b, c, d, e, f = covs[:, 0], covs[:, 1], covs[:, 2], covs[:, 3], covs[:, 4], covs[:, 5] - - # eps must be small enough !!! - inv_det = 1 / (a * d * f + 2 * e * c * b - e**2 * a - c**2 * d - b**2 * f + 1e-24) - inv_a = (d * f - e**2) * inv_det - inv_b = (e * c - b * f) * inv_det - inv_c = (e * b - c * d) * inv_det - inv_d = (a * f - c**2) * inv_det - inv_e = (b * c - e * a) * inv_det - inv_f = (a * d - b**2) * inv_det - - power = -0.5 * (x**2 * inv_a + y**2 * inv_d + z**2 * inv_f) - x * y * inv_b - x * z * inv_c - y * z * inv_e - - power[power > 0] = -1e10 # abnormal values... make weights 0 - - return torch.exp(power) - -def build_rotation(r): - norm = torch.sqrt(r[:,0]*r[:,0] + r[:,1]*r[:,1] + r[:,2]*r[:,2] + r[:,3]*r[:,3]) - - q = r / norm[:, None] - - R = torch.zeros((q.size(0), 3, 3), device='cuda') - - r = q[:, 0] - x = q[:, 1] - y = q[:, 2] - z = q[:, 3] - - R[:, 0, 0] = 1 - 2 * (y*y + z*z) - R[:, 0, 1] = 2 * (x*y - r*z) - R[:, 0, 2] = 2 * (x*z + r*y) - R[:, 1, 0] = 2 * (x*y + r*z) - R[:, 1, 1] = 1 - 2 * (x*x + z*z) - R[:, 1, 2] = 2 * (y*z - r*x) - R[:, 2, 0] = 2 * (x*z - r*y) - R[:, 2, 1] = 2 * (y*z + r*x) - R[:, 2, 2] = 1 - 2 * (x*x + y*y) - return R - -def build_scaling_rotation(s, r): - L = torch.zeros((s.shape[0], 3, 3), dtype=torch.float, device="cuda") - R = build_rotation(r) - - L[:,0,0] = s[:,0] - L[:,1,1] = s[:,1] - L[:,2,2] = s[:,2] - - L = R @ L - return L - -class BasicPointCloud(NamedTuple): - points: np.array - colors: np.array - normals: np.array - - -class GaussianModel: - - def setup_functions(self): - def build_covariance_from_scaling_rotation(scaling, scaling_modifier, rotation): - L = build_scaling_rotation(scaling_modifier * scaling, rotation) - actual_covariance = L @ L.transpose(1, 2) - symm = strip_symmetric(actual_covariance) - return symm - - self.scaling_activation = torch.exp - self.scaling_inverse_activation = torch.log - - self.covariance_activation = build_covariance_from_scaling_rotation - - self.opacity_activation = torch.sigmoid - self.inverse_opacity_activation = inverse_sigmoid - - self.rotation_activation = torch.nn.functional.normalize - - - def __init__(self, sh_degree : int): - self.active_sh_degree = 0 - self.max_sh_degree = sh_degree - self._xyz = torch.empty(0) - self._features_dc = torch.empty(0) - self._features_rest = torch.empty(0) - self._scaling = torch.empty(0) - self._rotation = torch.empty(0) - self._opacity = torch.empty(0) - self.max_radii2D = torch.empty(0) - self.xyz_gradient_accum = torch.empty(0) - self.denom = torch.empty(0) - self.optimizer = None - self.percent_dense = 0 - self.spatial_lr_scale = 0 - self.setup_functions() - - def capture(self): - return ( - self.active_sh_degree, - self._xyz, - self._features_dc, - 
self._features_rest, - self._scaling, - self._rotation, - self._opacity, - self.max_radii2D, - self.xyz_gradient_accum, - self.denom, - self.optimizer.state_dict(), - self.spatial_lr_scale, - ) - - def restore(self, model_args, training_args): - (self.active_sh_degree, - self._xyz, - self._features_dc, - self._features_rest, - self._scaling, - self._rotation, - self._opacity, - self.max_radii2D, - xyz_gradient_accum, - denom, - opt_dict, - self.spatial_lr_scale) = model_args - self.training_setup(training_args) - self.xyz_gradient_accum = xyz_gradient_accum - self.denom = denom - self.optimizer.load_state_dict(opt_dict) - - @property - def get_scaling(self): - return self.scaling_activation(self._scaling) - - @property - def get_rotation(self): - return self.rotation_activation(self._rotation) - - @property - def get_xyz(self): - return self._xyz - - @property - def get_features(self): - features_dc = self._features_dc - features_rest = self._features_rest - return torch.cat((features_dc, features_rest), dim=1) - - @property - def get_opacity(self): - return self.opacity_activation(self._opacity) - - @torch.no_grad() - def extract_fields(self, resolution=128, num_blocks=16, relax_ratio=1.5): - # resolution: resolution of field - - block_size = 2 / num_blocks - - assert resolution % block_size == 0 - split_size = resolution // num_blocks - - opacities = self.get_opacity - - # pre-filter low opacity gaussians to save computation - mask = (opacities > 0.005).squeeze(1) - - opacities = opacities[mask] - xyzs = self.get_xyz[mask] - stds = self.get_scaling[mask] - - # normalize to ~ [-1, 1] - mn, mx = xyzs.amin(0), xyzs.amax(0) - self.center = (mn + mx) / 2 - self.scale = 1.8 / (mx - mn).amax().item() - - xyzs = (xyzs - self.center) * self.scale - stds = stds * self.scale - - covs = self.covariance_activation(stds, 1, self._rotation[mask]) - - # tile - device = opacities.device - occ = torch.zeros([resolution] * 3, dtype=torch.float32, device=device) - - X = torch.linspace(-1, 1, resolution).split(split_size) - Y = torch.linspace(-1, 1, resolution).split(split_size) - Z = torch.linspace(-1, 1, resolution).split(split_size) - - - # loop blocks (assume max size of gaussian is small than relax_ratio * block_size !!!) - for xi, xs in enumerate(X): - for yi, ys in enumerate(Y): - for zi, zs in enumerate(Z): - xx, yy, zz = torch.meshgrid(xs, ys, zs) - # sample points [M, 3] - pts = torch.cat([xx.reshape(-1, 1), yy.reshape(-1, 1), zz.reshape(-1, 1)], dim=-1).to(device) - # in-tile gaussians mask - vmin, vmax = pts.amin(0), pts.amax(0) - vmin -= block_size * relax_ratio - vmax += block_size * relax_ratio - mask = (xyzs < vmax).all(-1) & (xyzs > vmin).all(-1) - # if hit no gaussian, continue to next block - if not mask.any(): - continue - mask_xyzs = xyzs[mask] # [L, 3] - mask_covs = covs[mask] # [L, 6] - mask_opas = opacities[mask].view(1, -1) # [L, 1] --> [1, L] - - # query per point-gaussian pair. 
- g_pts = pts.unsqueeze(1).repeat(1, mask_covs.shape[0], 1) - mask_xyzs.unsqueeze(0) # [M, L, 3] - g_covs = mask_covs.unsqueeze(0).repeat(pts.shape[0], 1, 1) # [M, L, 6] - - # batch on gaussian to avoid OOM - batch_g = 1024 - val = 0 - for start in range(0, g_covs.shape[1], batch_g): - end = min(start + batch_g, g_covs.shape[1]) - w = gaussian_3d_coeff(g_pts[:, start:end].reshape(-1, 3), g_covs[:, start:end].reshape(-1, 6)).reshape(pts.shape[0], -1) # [M, l] - val += (mask_opas[:, start:end] * w).sum(-1) - - # kiui.lo(val, mask_opas, w) - - occ[xi * split_size: xi * split_size + len(xs), - yi * split_size: yi * split_size + len(ys), - zi * split_size: zi * split_size + len(zs)] = val.reshape(len(xs), len(ys), len(zs)) - - kiui.lo(occ, verbose=1) - - return occ - - def extract_mesh(self, path, density_thresh=1, resolution=128, decimate_target=1e5): - - os.makedirs(os.path.dirname(path), exist_ok=True) - - occ = self.extract_fields(resolution).detach().cpu().numpy() - - import mcubes - vertices, triangles = mcubes.marching_cubes(occ, density_thresh) - vertices = vertices / (resolution - 1.0) * 2 - 1 - - # transform back to the original space - vertices = vertices / self.scale + self.center.detach().cpu().numpy() - - vertices, triangles = clean_mesh(vertices, triangles, remesh=True, remesh_size=0.015) - if decimate_target > 0 and triangles.shape[0] > decimate_target: - vertices, triangles = decimate_mesh(vertices, triangles, decimate_target) - - v = torch.from_numpy(vertices.astype(np.float32)).contiguous().cuda() - f = torch.from_numpy(triangles.astype(np.int32)).contiguous().cuda() - - print( - f"[INFO] marching cubes result: {v.shape} ({v.min().item()}-{v.max().item()}), {f.shape}" - ) - - mesh = Mesh(v=v, f=f, device='cuda') - - return mesh - - def get_covariance(self, scaling_modifier = 1): - return self.covariance_activation(self.get_scaling, scaling_modifier, self._rotation) - - def oneupSHdegree(self): - if self.active_sh_degree < self.max_sh_degree: - self.active_sh_degree += 1 - - def create_from_pcd(self, pcd : BasicPointCloud, spatial_lr_scale : float = 1): - self.spatial_lr_scale = spatial_lr_scale - fused_point_cloud = torch.tensor(np.asarray(pcd.points)).float().cuda() - fused_color = RGB2SH(torch.tensor(np.asarray(pcd.colors)).float().cuda()) - features = torch.zeros((fused_color.shape[0], 3, (self.max_sh_degree + 1) ** 2)).float().cuda() - features[:, :3, 0 ] = fused_color - features[:, 3:, 1:] = 0.0 - - print("Number of points at initialisation : ", fused_point_cloud.shape[0]) - - dist2 = torch.clamp_min(distCUDA2(torch.from_numpy(np.asarray(pcd.points)).float().cuda()), 0.0000001) - scales = torch.log(torch.sqrt(dist2))[...,None].repeat(1, 3) - rots = torch.zeros((fused_point_cloud.shape[0], 4), device="cuda") - rots[:, 0] = 1 - - opacities = inverse_sigmoid(0.1 * torch.ones((fused_point_cloud.shape[0], 1), dtype=torch.float, device="cuda")) - - self._xyz = nn.Parameter(fused_point_cloud.requires_grad_(True)) - self._features_dc = nn.Parameter(features[:,:,0:1].transpose(1, 2).contiguous().requires_grad_(True)) - self._features_rest = nn.Parameter(features[:,:,1:].transpose(1, 2).contiguous().requires_grad_(True)) - self._scaling = nn.Parameter(scales.requires_grad_(True)) - self._rotation = nn.Parameter(rots.requires_grad_(True)) - self._opacity = nn.Parameter(opacities.requires_grad_(True)) - self.max_radii2D = torch.zeros((self.get_xyz.shape[0]), device="cuda") - - def training_setup(self, training_args): - self.percent_dense = training_args.percent_dense - 
self.xyz_gradient_accum = torch.zeros((self.get_xyz.shape[0], 1), device="cuda") - self.denom = torch.zeros((self.get_xyz.shape[0], 1), device="cuda") - - l = [ - {'params': [self._xyz], 'lr': training_args.position_lr_init * self.spatial_lr_scale, "name": "xyz"}, - {'params': [self._features_dc], 'lr': training_args.feature_lr, "name": "f_dc"}, - {'params': [self._features_rest], 'lr': training_args.feature_lr / 20.0, "name": "f_rest"}, - {'params': [self._opacity], 'lr': training_args.opacity_lr, "name": "opacity"}, - {'params': [self._scaling], 'lr': training_args.scaling_lr, "name": "scaling"}, - {'params': [self._rotation], 'lr': training_args.rotation_lr, "name": "rotation"} - ] - - self.optimizer = torch.optim.Adam(l, lr=0.0, eps=1e-15) - self.xyz_scheduler_args = get_expon_lr_func(lr_init=training_args.position_lr_init*self.spatial_lr_scale, - lr_final=training_args.position_lr_final*self.spatial_lr_scale, - lr_delay_mult=training_args.position_lr_delay_mult, - max_steps=training_args.position_lr_max_steps) - - def update_learning_rate(self, iteration): - ''' Learning rate scheduling per step ''' - for param_group in self.optimizer.param_groups: - if param_group["name"] == "xyz": - lr = self.xyz_scheduler_args(iteration) - param_group['lr'] = lr - return lr - - def construct_list_of_attributes(self): - l = ['x', 'y', 'z', 'nx', 'ny', 'nz'] - # All channels except the 3 DC - for i in range(self._features_dc.shape[1]*self._features_dc.shape[2]): - l.append('f_dc_{}'.format(i)) - for i in range(self._features_rest.shape[1]*self._features_rest.shape[2]): - l.append('f_rest_{}'.format(i)) - l.append('opacity') - for i in range(self._scaling.shape[1]): - l.append('scale_{}'.format(i)) - for i in range(self._rotation.shape[1]): - l.append('rot_{}'.format(i)) - return l - - def save_ply(self, path): - os.makedirs(os.path.dirname(path), exist_ok=True) - - xyz = self._xyz.detach().cpu().numpy() - normals = np.zeros_like(xyz) - f_dc = self._features_dc.detach().transpose(1, 2).flatten(start_dim=1).contiguous().cpu().numpy() - f_rest = self._features_rest.detach().transpose(1, 2).flatten(start_dim=1).contiguous().cpu().numpy() - opacities = self._opacity.detach().cpu().numpy() - scale = self._scaling.detach().cpu().numpy() - rotation = self._rotation.detach().cpu().numpy() - - dtype_full = [(attribute, 'f4') for attribute in self.construct_list_of_attributes()] - - elements = np.empty(xyz.shape[0], dtype=dtype_full) - attributes = np.concatenate((xyz, normals, f_dc, f_rest, opacities, scale, rotation), axis=1) - elements[:] = list(map(tuple, attributes)) - el = PlyElement.describe(elements, 'vertex') - PlyData([el]).write(path) - - def reset_opacity(self): - opacities_new = inverse_sigmoid(torch.min(self.get_opacity, torch.ones_like(self.get_opacity)*0.01)) - optimizable_tensors = self.replace_tensor_to_optimizer(opacities_new, "opacity") - self._opacity = optimizable_tensors["opacity"] - - def load_ply(self, path): - plydata = PlyData.read(path) - - xyz = np.stack((np.asarray(plydata.elements[0]["x"]), - np.asarray(plydata.elements[0]["y"]), - np.asarray(plydata.elements[0]["z"])), axis=1) - opacities = np.asarray(plydata.elements[0]["opacity"])[..., np.newaxis] - - print("Number of points at loading : ", xyz.shape[0]) - - features_dc = np.zeros((xyz.shape[0], 3, 1)) - features_dc[:, 0, 0] = np.asarray(plydata.elements[0]["f_dc_0"]) - features_dc[:, 1, 0] = np.asarray(plydata.elements[0]["f_dc_1"]) - features_dc[:, 2, 0] = np.asarray(plydata.elements[0]["f_dc_2"]) - - extra_f_names = [p.name 
for p in plydata.elements[0].properties if p.name.startswith("f_rest_")] - assert len(extra_f_names)==3*(self.max_sh_degree + 1) ** 2 - 3 - features_extra = np.zeros((xyz.shape[0], len(extra_f_names))) - for idx, attr_name in enumerate(extra_f_names): - features_extra[:, idx] = np.asarray(plydata.elements[0][attr_name]) - # Reshape (P,F*SH_coeffs) to (P, F, SH_coeffs except DC) - features_extra = features_extra.reshape((features_extra.shape[0], 3, (self.max_sh_degree + 1) ** 2 - 1)) - - scale_names = [p.name for p in plydata.elements[0].properties if p.name.startswith("scale_")] - scales = np.zeros((xyz.shape[0], len(scale_names))) - for idx, attr_name in enumerate(scale_names): - scales[:, idx] = np.asarray(plydata.elements[0][attr_name]) - - rot_names = [p.name for p in plydata.elements[0].properties if p.name.startswith("rot")] - rots = np.zeros((xyz.shape[0], len(rot_names))) - for idx, attr_name in enumerate(rot_names): - rots[:, idx] = np.asarray(plydata.elements[0][attr_name]) - - self._xyz = nn.Parameter(torch.tensor(xyz, dtype=torch.float, device="cuda").requires_grad_(True)) - self._features_dc = nn.Parameter(torch.tensor(features_dc, dtype=torch.float, device="cuda").transpose(1, 2).contiguous().requires_grad_(True)) - self._features_rest = nn.Parameter(torch.tensor(features_extra, dtype=torch.float, device="cuda").transpose(1, 2).contiguous().requires_grad_(True)) - self._opacity = nn.Parameter(torch.tensor(opacities, dtype=torch.float, device="cuda").requires_grad_(True)) - self._scaling = nn.Parameter(torch.tensor(scales, dtype=torch.float, device="cuda").requires_grad_(True)) - self._rotation = nn.Parameter(torch.tensor(rots, dtype=torch.float, device="cuda").requires_grad_(True)) - - self.active_sh_degree = self.max_sh_degree - - def replace_tensor_to_optimizer(self, tensor, name): - optimizable_tensors = {} - for group in self.optimizer.param_groups: - if group["name"] == name: - stored_state = self.optimizer.state.get(group['params'][0], None) - stored_state["exp_avg"] = torch.zeros_like(tensor) - stored_state["exp_avg_sq"] = torch.zeros_like(tensor) - - del self.optimizer.state[group['params'][0]] - group["params"][0] = nn.Parameter(tensor.requires_grad_(True)) - self.optimizer.state[group['params'][0]] = stored_state - - optimizable_tensors[group["name"]] = group["params"][0] - return optimizable_tensors - - def _prune_optimizer(self, mask): - optimizable_tensors = {} - for group in self.optimizer.param_groups: - stored_state = self.optimizer.state.get(group['params'][0], None) - if stored_state is not None: - stored_state["exp_avg"] = stored_state["exp_avg"][mask] - stored_state["exp_avg_sq"] = stored_state["exp_avg_sq"][mask] - - del self.optimizer.state[group['params'][0]] - group["params"][0] = nn.Parameter((group["params"][0][mask].requires_grad_(True))) - self.optimizer.state[group['params'][0]] = stored_state - - optimizable_tensors[group["name"]] = group["params"][0] - else: - group["params"][0] = nn.Parameter(group["params"][0][mask].requires_grad_(True)) - optimizable_tensors[group["name"]] = group["params"][0] - return optimizable_tensors - - def prune_points(self, mask): - valid_points_mask = ~mask - optimizable_tensors = self._prune_optimizer(valid_points_mask) - - self._xyz = optimizable_tensors["xyz"] - self._features_dc = optimizable_tensors["f_dc"] - self._features_rest = optimizable_tensors["f_rest"] - self._opacity = optimizable_tensors["opacity"] - self._scaling = optimizable_tensors["scaling"] - self._rotation = optimizable_tensors["rotation"] - - 
self.xyz_gradient_accum = self.xyz_gradient_accum[valid_points_mask] - - self.denom = self.denom[valid_points_mask] - self.max_radii2D = self.max_radii2D[valid_points_mask] - - def cat_tensors_to_optimizer(self, tensors_dict): - optimizable_tensors = {} - for group in self.optimizer.param_groups: - assert len(group["params"]) == 1 - extension_tensor = tensors_dict[group["name"]] - stored_state = self.optimizer.state.get(group['params'][0], None) - if stored_state is not None: - - stored_state["exp_avg"] = torch.cat((stored_state["exp_avg"], torch.zeros_like(extension_tensor)), dim=0) - stored_state["exp_avg_sq"] = torch.cat((stored_state["exp_avg_sq"], torch.zeros_like(extension_tensor)), dim=0) - - del self.optimizer.state[group['params'][0]] - group["params"][0] = nn.Parameter(torch.cat((group["params"][0], extension_tensor), dim=0).requires_grad_(True)) - self.optimizer.state[group['params'][0]] = stored_state - - optimizable_tensors[group["name"]] = group["params"][0] - else: - group["params"][0] = nn.Parameter(torch.cat((group["params"][0], extension_tensor), dim=0).requires_grad_(True)) - optimizable_tensors[group["name"]] = group["params"][0] - - return optimizable_tensors - - def densification_postfix(self, new_xyz, new_features_dc, new_features_rest, new_opacities, new_scaling, new_rotation): - d = {"xyz": new_xyz, - "f_dc": new_features_dc, - "f_rest": new_features_rest, - "opacity": new_opacities, - "scaling" : new_scaling, - "rotation" : new_rotation} - - optimizable_tensors = self.cat_tensors_to_optimizer(d) - self._xyz = optimizable_tensors["xyz"] - self._features_dc = optimizable_tensors["f_dc"] - self._features_rest = optimizable_tensors["f_rest"] - self._opacity = optimizable_tensors["opacity"] - self._scaling = optimizable_tensors["scaling"] - self._rotation = optimizable_tensors["rotation"] - - self.xyz_gradient_accum = torch.zeros((self.get_xyz.shape[0], 1), device="cuda") - self.denom = torch.zeros((self.get_xyz.shape[0], 1), device="cuda") - self.max_radii2D = torch.zeros((self.get_xyz.shape[0]), device="cuda") - - def densify_and_split(self, grads, grad_threshold, scene_extent, N=2): - n_init_points = self.get_xyz.shape[0] - # Extract points that satisfy the gradient condition - padded_grad = torch.zeros((n_init_points), device="cuda") - padded_grad[:grads.shape[0]] = grads.squeeze() - selected_pts_mask = torch.where(padded_grad >= grad_threshold, True, False) - selected_pts_mask = torch.logical_and(selected_pts_mask, - torch.max(self.get_scaling, dim=1).values > self.percent_dense*scene_extent) - - stds = self.get_scaling[selected_pts_mask].repeat(N,1) - means =torch.zeros((stds.size(0), 3),device="cuda") - samples = torch.normal(mean=means, std=stds) - rots = build_rotation(self._rotation[selected_pts_mask]).repeat(N,1,1) - new_xyz = torch.bmm(rots, samples.unsqueeze(-1)).squeeze(-1) + self.get_xyz[selected_pts_mask].repeat(N, 1) - new_scaling = self.scaling_inverse_activation(self.get_scaling[selected_pts_mask].repeat(N,1) / (0.8*N)) - new_rotation = self._rotation[selected_pts_mask].repeat(N,1) - new_features_dc = self._features_dc[selected_pts_mask].repeat(N,1,1) - new_features_rest = self._features_rest[selected_pts_mask].repeat(N,1,1) - new_opacity = self._opacity[selected_pts_mask].repeat(N,1) - - self.densification_postfix(new_xyz, new_features_dc, new_features_rest, new_opacity, new_scaling, new_rotation) - - prune_filter = torch.cat((selected_pts_mask, torch.zeros(N * selected_pts_mask.sum(), device="cuda", dtype=bool))) - self.prune_points(prune_filter) - 
- def densify_and_clone(self, grads, grad_threshold, scene_extent): - # Extract points that satisfy the gradient condition - selected_pts_mask = torch.where(torch.norm(grads, dim=-1) >= grad_threshold, True, False) - selected_pts_mask = torch.logical_and(selected_pts_mask, - torch.max(self.get_scaling, dim=1).values <= self.percent_dense*scene_extent) - - new_xyz = self._xyz[selected_pts_mask] - new_features_dc = self._features_dc[selected_pts_mask] - new_features_rest = self._features_rest[selected_pts_mask] - new_opacities = self._opacity[selected_pts_mask] - new_scaling = self._scaling[selected_pts_mask] - new_rotation = self._rotation[selected_pts_mask] - - self.densification_postfix(new_xyz, new_features_dc, new_features_rest, new_opacities, new_scaling, new_rotation) - - def densify_and_prune(self, max_grad, min_opacity, extent, max_screen_size): - grads = self.xyz_gradient_accum / self.denom - grads[grads.isnan()] = 0.0 - - self.densify_and_clone(grads, max_grad, extent) - self.densify_and_split(grads, max_grad, extent) - - prune_mask = (self.get_opacity < min_opacity).squeeze() - if max_screen_size: - big_points_vs = self.max_radii2D > max_screen_size - big_points_ws = self.get_scaling.max(dim=1).values > 0.1 * extent - prune_mask = torch.logical_or(torch.logical_or(prune_mask, big_points_vs), big_points_ws) - self.prune_points(prune_mask) - - torch.cuda.empty_cache() - - def prune(self, min_opacity, extent, max_screen_size): - - prune_mask = (self.get_opacity < min_opacity).squeeze() - if max_screen_size: - big_points_vs = self.max_radii2D > max_screen_size - big_points_ws = self.get_scaling.max(dim=1).values > 0.1 * extent - prune_mask = torch.logical_or(torch.logical_or(prune_mask, big_points_vs), big_points_ws) - self.prune_points(prune_mask) - - torch.cuda.empty_cache() - - - def add_densification_stats(self, viewspace_point_tensor, update_filter): - self.xyz_gradient_accum[update_filter] += torch.norm(viewspace_point_tensor.grad[update_filter,:2], dim=-1, keepdim=True) - self.denom[update_filter] += 1 - -def getProjectionMatrix(znear, zfar, fovX, fovY): - tanHalfFovY = math.tan((fovY / 2)) - tanHalfFovX = math.tan((fovX / 2)) - - P = torch.zeros(4, 4) - - z_sign = 1.0 - - P[0, 0] = 1 / tanHalfFovX - P[1, 1] = 1 / tanHalfFovY - P[3, 2] = z_sign - P[2, 2] = z_sign * zfar / (zfar - znear) - P[2, 3] = -(zfar * znear) / (zfar - znear) - return P - - -class MiniCam: - def __init__(self, c2w, width, height, fovy, fovx, znear, zfar): - # c2w (pose) should be in NeRF convention. - - self.image_width = width - self.image_height = height - self.FoVy = fovy - self.FoVx = fovx - self.znear = znear - self.zfar = zfar - - w2c = np.linalg.inv(c2w) - - # rectify... 
- w2c[1:3, :3] *= -1 - w2c[:3, 3] *= -1 - - self.world_view_transform = torch.tensor(w2c).transpose(0, 1).cuda() - self.projection_matrix = ( - getProjectionMatrix( - znear=self.znear, zfar=self.zfar, fovX=self.FoVx, fovY=self.FoVy - ) - .transpose(0, 1) - .cuda() - ) - self.full_proj_transform = self.world_view_transform @ self.projection_matrix - self.camera_center = -torch.tensor(c2w[:3, 3]).cuda() - - -class Renderer: - def __init__(self, sh_degree=3, white_background=True, radius=1): - - self.sh_degree = sh_degree - self.white_background = white_background - self.radius = radius - - self.gaussians = GaussianModel(sh_degree) - - self.bg_color = torch.tensor( - [1, 1, 1] if white_background else [0, 0, 0], - dtype=torch.float32, - device="cuda", - ) - - def initialize(self, input=None, num_pts=5000, radius=0.5): - # load checkpoint - if input is None: - # init from random point cloud - - phis = np.random.random((num_pts,)) * 2 * np.pi - costheta = np.random.random((num_pts,)) * 2 - 1 - thetas = np.arccos(costheta) - mu = np.random.random((num_pts,)) - radius = radius * np.cbrt(mu) - x = radius * np.sin(thetas) * np.cos(phis) - y = radius * np.sin(thetas) * np.sin(phis) - z = radius * np.cos(thetas) - xyz = np.stack((x, y, z), axis=1) - # xyz = np.random.random((num_pts, 3)) * 2.6 - 1.3 - - shs = np.random.random((num_pts, 3)) / 255.0 - pcd = BasicPointCloud( - points=xyz, colors=SH2RGB(shs), normals=np.zeros((num_pts, 3)) - ) - self.gaussians.create_from_pcd(pcd, 10) - elif isinstance(input, BasicPointCloud): - # load from a provided pcd - self.gaussians.create_from_pcd(input, 1) - else: - # load from saved ply - self.gaussians.load_ply(input) - - def render( - self, - viewpoint_camera, - scaling_modifier=1.0, - invert_bg_color=False, - override_color=None, - compute_cov3D_python=False, - convert_SHs_python=False, - ): - # Create zero tensor. We will use it to make pytorch return gradients of the 2D (screen-space) means - screenspace_points = ( - torch.zeros_like( - self.gaussians.get_xyz, - dtype=self.gaussians.get_xyz.dtype, - requires_grad=True, - device="cuda", - ) - + 0 - ) - try: - screenspace_points.retain_grad() - except: - pass - - # Set up rasterization configuration - tanfovx = math.tan(viewpoint_camera.FoVx * 0.5) - tanfovy = math.tan(viewpoint_camera.FoVy * 0.5) - - raster_settings = GaussianRasterizationSettings( - image_height=int(viewpoint_camera.image_height), - image_width=int(viewpoint_camera.image_width), - tanfovx=tanfovx, - tanfovy=tanfovy, - bg=self.bg_color if not invert_bg_color else 1 - self.bg_color, - scale_modifier=scaling_modifier, - viewmatrix=viewpoint_camera.world_view_transform, - projmatrix=viewpoint_camera.full_proj_transform, - sh_degree=self.gaussians.active_sh_degree, - campos=viewpoint_camera.camera_center, - prefiltered=False, - debug=False, - ) - - rasterizer = GaussianRasterizer(raster_settings=raster_settings) - - means3D = self.gaussians.get_xyz - means2D = screenspace_points - opacity = self.gaussians.get_opacity - - # If precomputed 3d covariance is provided, use it. If not, then it will be computed from - # scaling / rotation by the rasterizer. - scales = None - rotations = None - cov3D_precomp = None - if compute_cov3D_python: - cov3D_precomp = self.gaussians.get_covariance(scaling_modifier) - else: - scales = self.gaussians.get_scaling - rotations = self.gaussians.get_rotation - - # If precomputed colors are provided, use them. Otherwise, if it is desired to precompute colors - # from SHs in Python, do it. 
If not, then SH -> RGB conversion will be done by rasterizer. - shs = None - colors_precomp = None - if colors_precomp is None: - if convert_SHs_python: - shs_view = self.gaussians.get_features.transpose(1, 2).view( - -1, 3, (self.gaussians.max_sh_degree + 1) ** 2 - ) - dir_pp = self.gaussians.get_xyz - viewpoint_camera.camera_center.repeat( - self.gaussians.get_features.shape[0], 1 - ) - dir_pp_normalized = dir_pp / dir_pp.norm(dim=1, keepdim=True) - sh2rgb = eval_sh( - self.gaussians.active_sh_degree, shs_view, dir_pp_normalized - ) - colors_precomp = torch.clamp_min(sh2rgb + 0.5, 0.0) - else: - shs = self.gaussians.get_features - else: - colors_precomp = override_color - - # Rasterize visible Gaussians to image, obtain their radii (on screen). - rendered_image, radii, rendered_depth, rendered_alpha = rasterizer( - means3D=means3D, - means2D=means2D, - shs=shs, - colors_precomp=colors_precomp, - opacities=opacity, - scales=scales, - rotations=rotations, - cov3D_precomp=cov3D_precomp, - ) - - rendered_image = rendered_image.clamp(0, 1) - - # Those Gaussians that were frustum culled or had a radius of 0 were not visible. - # They will be excluded from value updates used in the splitting criteria. - return { - "image": rendered_image, - "depth": rendered_depth, - "alpha": rendered_alpha, - "viewspace_points": screenspace_points, - "visibility_filter": radii > 0, - "radii": radii, - } diff --git a/spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_2_ProfileRecogition/mmpretrain/configs/.ipynb_checkpoints/__init__.py b/spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_2_ProfileRecogition/mmpretrain/configs/.ipynb_checkpoints/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/Aaaaaaaabdualh/meter2poem-1/app.py b/spaces/Aaaaaaaabdualh/meter2poem-1/app.py deleted file mode 100644 index f355591c17afc5f99fc6989581f924d054c17c20..0000000000000000000000000000000000000000 --- a/spaces/Aaaaaaaabdualh/meter2poem-1/app.py +++ /dev/null @@ -1,51 +0,0 @@ -from transformers import BertTokenizer, EncoderDecoderModel -import gradio as gr - -tokenizerM = BertTokenizer.from_pretrained("mareloraby/BERTShared-meter2poem-arV01") -bertSharedM = EncoderDecoderModel.from_pretrained("mareloraby/BERTShared-meter2poem-arV01") - -def generate_response(text, k = 70, p = 0.7, nb = 4): -# meters = set(['الرمل','البسيط','الخفيف','الكامل','السريع','الطويل','المتقارب','الرجز','المجتث','المنسرح','الوافر','المقتضب','الهزج','المديد','المضارع']) - prompt = f"{text}" - encoded_prompt = tokenizerM.encode_plus(prompt, return_tensors = 'pt')#.to(device) - gneration = bertSharedM.generate( - input_ids = encoded_prompt.input_ids, - attention_mask = encoded_prompt.attention_mask, - do_sample = True, - top_k= k, - top_p = p, - num_beams= nb, - max_length =130, - repetition_penalty = 2.0, - no_repeat_ngram_size = 2, - early_stopping=True) - - generated_text = tokenizerM.decode(gneration[0], skip_special_tokens=True) - bayts = generated_text.split("[SOB]") - while("BSEP" not in bayts[-1]): - bayts = bayts[:-1] - # if(len(bayts[-1]) < 2): - # bayts = bayts[:-1] - bayts = bayts[:-1] - temp_poem = '' - for b in range(len(bayts)): - temp_line = bayts[b].split('[BSEP]') - temp_poem = temp_poem + temp_line[1] + ' - ' + temp_line[0] +'\n' - - return temp_poem - - - -gr.Interface(fn=generate_response, - title = 'BERTShared - meter based generation', - # description =''' - # topics : ['حزينه','هجاء','عتاب','غزل','مدح','رومنسيه','دينية','وطنيه'] - # 
''', - inputs=[ - gr.inputs.Radio(['الرمل','البسيط','الخفيف','الكامل','السريع','الطويل','المتقارب','الرجز','المجتث','المنسرح','الوافر','المقتضب','الهزج','المديد','المضارع'],label='Choose Meter'), - gr.inputs.Slider(10, 200, step=10,default = 70, label='Top-K'), - gr.inputs.Slider(0.10, 0.99, step=0.02, default = 0.70, label='Top-P'), - # gr.inputs.Slider(1, 20, step=1, default = 4, label='Beams'), - ], - outputs="text").launch() - \ No newline at end of file diff --git a/spaces/AchyuthGamer/OpenGPT-Chat-UI/src/app.d.ts b/spaces/AchyuthGamer/OpenGPT-Chat-UI/src/app.d.ts deleted file mode 100644 index dfd942be68dce05eda0ca563062e867c1230d866..0000000000000000000000000000000000000000 --- a/spaces/AchyuthGamer/OpenGPT-Chat-UI/src/app.d.ts +++ /dev/null @@ -1,20 +0,0 @@ -/// -/// - -import type { User } from "$lib/types/User"; - -// See https://kit.svelte.dev/docs/types#app -// for information about these interfaces -declare global { - namespace App { - // interface Error {} - interface Locals { - sessionId: string; - user?: User; - } - // interface PageData {} - // interface Platform {} - } -} - -export {}; diff --git a/spaces/AchyuthGamer/OpenGPT-Chat-UI/src/lib/buildPrompt.ts b/spaces/AchyuthGamer/OpenGPT-Chat-UI/src/lib/buildPrompt.ts deleted file mode 100644 index ef64e9870e2e5726c0729eaced5d8483dfce8bd7..0000000000000000000000000000000000000000 --- a/spaces/AchyuthGamer/OpenGPT-Chat-UI/src/lib/buildPrompt.ts +++ /dev/null @@ -1,34 +0,0 @@ -import type { BackendModel } from "./server/models"; -import type { Message } from "./types/Message"; -import { collections } from "$lib/server/database"; -import { authCondition } from "./server/auth"; -/** - * Convert [{user: "assistant", content: "hi"}, {user: "user", content: "hello"}] to: - * - * <|assistant|>hi<|endoftext|><|prompter|>hello<|endoftext|><|assistant|> - */ - -interface buildPromptOptions { - messages: Pick[]; - model: BackendModel; - locals?: App.Locals; - webSearchId?: string; - preprompt?: string; -} - -export async function buildPrompt({ - messages, - model, - locals, - webSearchId, - preprompt, -}: buildPromptOptions): Promise { - return ( - model - .chatPromptRender({ messages, preprompt }) - // Not super precise, but it's truncated in the model's backend anyway - .split(" ") - .slice(-(model.parameters?.truncate ?? 
0)) - .join(" ") - ); -} diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/menu/methods/Methods.js b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/menu/methods/Methods.js deleted file mode 100644 index 7050362023f2ebf1f278594863e0b123c892a588..0000000000000000000000000000000000000000 --- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/menu/methods/Methods.js +++ /dev/null @@ -1,18 +0,0 @@ -import SetTransitCallbackMethods from './SetTransitCallbackMethods.js'; -import DelayCallMethods from './DelayCallMethods.js'; -import ExpandSubMenu from './ExpandSubMenu.js'; -import Collapse from './Collapse.js'; -import CollapseSubMenu from './CollapseSubMenu.js'; - -var Methods = { - expandSubMenu: ExpandSubMenu, - collapse: Collapse, - collapseSubMenu: CollapseSubMenu, -} - -Object.assign( - Methods, - SetTransitCallbackMethods, - DelayCallMethods -) -export default Methods; \ No newline at end of file diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/numberbar/Factory.d.ts b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/numberbar/Factory.d.ts deleted file mode 100644 index 32f6351dbd236fe749637f366dc5aa2e83052448..0000000000000000000000000000000000000000 --- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/numberbar/Factory.d.ts +++ /dev/null @@ -1,5 +0,0 @@ -import NumberBar from './NumberBar'; - -export default function ( - config?: NumberBar.IConfig -): NumberBar; \ No newline at end of file diff --git a/spaces/Alifarsi/news_summarizer/README.md b/spaces/Alifarsi/news_summarizer/README.md deleted file mode 100644 index 84c2b5036df9ef0fd81cf75c830f577c16fa2e2a..0000000000000000000000000000000000000000 --- a/spaces/Alifarsi/news_summarizer/README.md +++ /dev/null @@ -1,37 +0,0 @@ ---- -title: News_summarizer -emoji: 🌖 -colorFrom: indigo -colorTo: purple -sdk: gradio -app_file: app.py -pinned: false ---- - -# Configuration - -`title`: _string_ -Display title for the Space - -`emoji`: _string_ -Space emoji (emoji-only character allowed) - -`colorFrom`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`colorTo`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`sdk`: _string_ -Can be either `gradio` or `streamlit` - -`sdk_version` : _string_ -Only applicable for `streamlit` SDK. -See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions. - -`app_file`: _string_ -Path to your main application file (which contains either `gradio` or `streamlit` Python code). -Path is relative to the root of the repository. - -`pinned`: _boolean_ -Whether the Space stays on top of your list. diff --git a/spaces/Amon1/ChatGPTForAcadamic/config.py b/spaces/Amon1/ChatGPTForAcadamic/config.py deleted file mode 100644 index 1cedffccdcdada73e63067aea54b9d10d887354f..0000000000000000000000000000000000000000 --- a/spaces/Amon1/ChatGPTForAcadamic/config.py +++ /dev/null @@ -1,48 +0,0 @@ -import os - -# [step 1]>> 例如: API_KEY = "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" (此key无效) -# Put your own API_KEY into `OPENAI_API_KEY` environment. 
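# For example, in a shell: `export OPENAI_API_KEY="sk-..."` before launching the
# app, so the os.environ.get() lookup below can find it (a .env loader or your
# platform's secrets mechanism works just as well).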
-API_KEY = os.environ.get('OPENAI_API_KEY') - -# [step 2]>> 改为True应用代理,如果直接在海外服务器部署,此处不修改 -USE_PROXY = False -if USE_PROXY: - # 填写格式是 [协议]:// [地址] :[端口],填写之前不要忘记把USE_PROXY改成True,如果直接在海外服务器部署,此处不修改 - # 例如 "socks5h://localhost:11284" - # [协议] 常见协议无非socks5h/http; 例如 v2**y 和 ss* 的默认本地协议是socks5h; 而cl**h 的默认本地协议是http - # [地址] 懂的都懂,不懂就填localhost或者127.0.0.1肯定错不了(localhost意思是代理软件安装在本机上) - # [端口] 在代理软件的设置里找。虽然不同的代理软件界面不一样,但端口号都应该在最显眼的位置上 - - # 代理网络的地址,打开你的科学上网软件查看代理的协议(socks5/http)、地址(localhost)和端口(11284) - proxies = { - # [协议]:// [地址] :[端口] - "http": "socks5h://localhost:11284", - "https": "socks5h://localhost:11284", - } -else: - proxies = None - -# [step 3]>> 以下配置可以优化体验,但大部分场合下并不需要修改 -# 对话窗的高度 -CHATBOT_HEIGHT = 1115 - -# 发送请求到OpenAI后,等待多久判定为超时 -TIMEOUT_SECONDS = 25 - -# 网页的端口, -1代表随机端口 -WEB_PORT = 7860 - -# 如果OpenAI不响应(网络卡顿、代理失败、KEY失效),重试的次数限制 -MAX_RETRY = 2 - -# OpenAI模型选择是(gpt4现在只对申请成功的人开放) -LLM_MODEL = "gpt-3.5-turbo" - -# OpenAI的API_URL -API_URL = "https://api.openai.com/v1/chat/completions" - -# 设置并行使用的线程数 -CONCURRENT_COUNT = 100 - -# 设置用户名和密码(相关功能不稳定,与gradio版本和网络都相关,如果本地使用不建议加这个) -AUTHENTICATION = [] # [("username", "password"), ("username2", "password2"), ...] diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/models/__init__.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/models/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/Andy1621/uniformer_image_detection/mmdet/models/backbones/detectors_resnext.py b/spaces/Andy1621/uniformer_image_detection/mmdet/models/backbones/detectors_resnext.py deleted file mode 100644 index 57d032fe37ed82d5ba24e761bdc014cc0ee5ac64..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_detection/mmdet/models/backbones/detectors_resnext.py +++ /dev/null @@ -1,122 +0,0 @@ -import math - -from mmcv.cnn import build_conv_layer, build_norm_layer - -from ..builder import BACKBONES -from .detectors_resnet import Bottleneck as _Bottleneck -from .detectors_resnet import DetectoRS_ResNet - - -class Bottleneck(_Bottleneck): - expansion = 4 - - def __init__(self, - inplanes, - planes, - groups=1, - base_width=4, - base_channels=64, - **kwargs): - """Bottleneck block for ResNeXt. - - If style is "pytorch", the stride-two layer is the 3x3 conv layer, if - it is "caffe", the stride-two layer is the first 1x1 conv layer. 
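        When ``groups > 1`` the bottleneck width becomes
        ``floor(planes * base_width / base_channels) * groups`` (it stays equal to
        ``planes`` for ``groups == 1``); this is the channel count the grouped 3x3
        convolution below operates on.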
- """ - super(Bottleneck, self).__init__(inplanes, planes, **kwargs) - - if groups == 1: - width = self.planes - else: - width = math.floor(self.planes * - (base_width / base_channels)) * groups - - self.norm1_name, norm1 = build_norm_layer( - self.norm_cfg, width, postfix=1) - self.norm2_name, norm2 = build_norm_layer( - self.norm_cfg, width, postfix=2) - self.norm3_name, norm3 = build_norm_layer( - self.norm_cfg, self.planes * self.expansion, postfix=3) - - self.conv1 = build_conv_layer( - self.conv_cfg, - self.inplanes, - width, - kernel_size=1, - stride=self.conv1_stride, - bias=False) - self.add_module(self.norm1_name, norm1) - fallback_on_stride = False - self.with_modulated_dcn = False - if self.with_dcn: - fallback_on_stride = self.dcn.pop('fallback_on_stride', False) - if self.with_sac: - self.conv2 = build_conv_layer( - self.sac, - width, - width, - kernel_size=3, - stride=self.conv2_stride, - padding=self.dilation, - dilation=self.dilation, - groups=groups, - bias=False) - elif not self.with_dcn or fallback_on_stride: - self.conv2 = build_conv_layer( - self.conv_cfg, - width, - width, - kernel_size=3, - stride=self.conv2_stride, - padding=self.dilation, - dilation=self.dilation, - groups=groups, - bias=False) - else: - assert self.conv_cfg is None, 'conv_cfg must be None for DCN' - self.conv2 = build_conv_layer( - self.dcn, - width, - width, - kernel_size=3, - stride=self.conv2_stride, - padding=self.dilation, - dilation=self.dilation, - groups=groups, - bias=False) - - self.add_module(self.norm2_name, norm2) - self.conv3 = build_conv_layer( - self.conv_cfg, - width, - self.planes * self.expansion, - kernel_size=1, - bias=False) - self.add_module(self.norm3_name, norm3) - - -@BACKBONES.register_module() -class DetectoRS_ResNeXt(DetectoRS_ResNet): - """ResNeXt backbone for DetectoRS. - - Args: - groups (int): The number of groups in ResNeXt. - base_width (int): The base width of ResNeXt. 
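    For example, a ResNeXt-101 32x4d backbone would typically be constructed as
    ``DetectoRS_ResNeXt(depth=101, groups=32, base_width=4, ...)`` together with
    the usual DetectoRS_ResNet keyword arguments; ``depth`` must be one of the
    keys of ``arch_settings`` below (50, 101 or 152).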
- """ - - arch_settings = { - 50: (Bottleneck, (3, 4, 6, 3)), - 101: (Bottleneck, (3, 4, 23, 3)), - 152: (Bottleneck, (3, 8, 36, 3)) - } - - def __init__(self, groups=1, base_width=4, **kwargs): - self.groups = groups - self.base_width = base_width - super(DetectoRS_ResNeXt, self).__init__(**kwargs) - - def make_res_layer(self, **kwargs): - return super().make_res_layer( - groups=self.groups, - base_width=self.base_width, - base_channels=self.base_channels, - **kwargs) diff --git a/spaces/Andy1621/uniformer_image_segmentation/configs/ccnet/README.md b/spaces/Andy1621/uniformer_image_segmentation/configs/ccnet/README.md deleted file mode 100644 index 1c8ba1cdf74c0207683e41ad905361c671577d6d..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_segmentation/configs/ccnet/README.md +++ /dev/null @@ -1,47 +0,0 @@ -# CCNet: Criss-Cross Attention for Semantic Segmentation - -## Introduction - - - -```latex -@article{huang2018ccnet, - title={CCNet: Criss-Cross Attention for Semantic Segmentation}, - author={Huang, Zilong and Wang, Xinggang and Huang, Lichao and Huang, Chang and Wei, Yunchao and Liu, Wenyu}, - booktitle={ICCV}, - year={2019} -} -``` - -## Results and models - -### Cityscapes - -| Method | Backbone | Crop Size | Lr schd | Mem (GB) | Inf time (fps) | mIoU | mIoU(ms+flip) | config | download | -| ------ | -------- | --------- | ------: | -------- | -------------- | ----: | ------------: | ------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | -| CCNet | R-50-D8 | 512x1024 | 40000 | 6 | 3.32 | 77.76 | 78.87 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/ccnet/ccnet_r50-d8_512x1024_40k_cityscapes.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/ccnet/ccnet_r50-d8_512x1024_40k_cityscapes/ccnet_r50-d8_512x1024_40k_cityscapes_20200616_142517-4123f401.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/ccnet/ccnet_r50-d8_512x1024_40k_cityscapes/ccnet_r50-d8_512x1024_40k_cityscapes_20200616_142517.log.json) | -| CCNet | R-101-D8 | 512x1024 | 40000 | 9.5 | 2.31 | 76.35 | 78.19 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/ccnet/ccnet_r101-d8_512x1024_40k_cityscapes.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/ccnet/ccnet_r101-d8_512x1024_40k_cityscapes/ccnet_r101-d8_512x1024_40k_cityscapes_20200616_142540-a3b84ba6.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/ccnet/ccnet_r101-d8_512x1024_40k_cityscapes/ccnet_r101-d8_512x1024_40k_cityscapes_20200616_142540.log.json) | -| CCNet | R-50-D8 | 769x769 | 40000 | 6.8 | 1.43 | 78.46 | 79.93 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/ccnet/ccnet_r50-d8_769x769_40k_cityscapes.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/ccnet/ccnet_r50-d8_769x769_40k_cityscapes/ccnet_r50-d8_769x769_40k_cityscapes_20200616_145125-76d11884.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/ccnet/ccnet_r50-d8_769x769_40k_cityscapes/ccnet_r50-d8_769x769_40k_cityscapes_20200616_145125.log.json) | -| CCNet | R-101-D8 | 769x769 | 40000 | 10.7 | 1.01 | 76.94 
| 78.62 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/ccnet/ccnet_r101-d8_769x769_40k_cityscapes.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/ccnet/ccnet_r101-d8_769x769_40k_cityscapes/ccnet_r101-d8_769x769_40k_cityscapes_20200617_101428-4f57c8d0.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/ccnet/ccnet_r101-d8_769x769_40k_cityscapes/ccnet_r101-d8_769x769_40k_cityscapes_20200617_101428.log.json) | -| CCNet | R-50-D8 | 512x1024 | 80000 | - | - | 79.03 | 80.16 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/ccnet/ccnet_r50-d8_512x1024_80k_cityscapes.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/ccnet/ccnet_r50-d8_512x1024_80k_cityscapes/ccnet_r50-d8_512x1024_80k_cityscapes_20200617_010421-869a3423.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/ccnet/ccnet_r50-d8_512x1024_80k_cityscapes/ccnet_r50-d8_512x1024_80k_cityscapes_20200617_010421.log.json) | -| CCNet | R-101-D8 | 512x1024 | 80000 | - | - | 78.87 | 79.90 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/ccnet/ccnet_r101-d8_512x1024_80k_cityscapes.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/ccnet/ccnet_r101-d8_512x1024_80k_cityscapes/ccnet_r101-d8_512x1024_80k_cityscapes_20200617_203935-ffae8917.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/ccnet/ccnet_r101-d8_512x1024_80k_cityscapes/ccnet_r101-d8_512x1024_80k_cityscapes_20200617_203935.log.json) | -| CCNet | R-50-D8 | 769x769 | 80000 | - | - | 79.29 | 81.08 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/ccnet/ccnet_r50-d8_769x769_80k_cityscapes.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/ccnet/ccnet_r50-d8_769x769_80k_cityscapes/ccnet_r50-d8_769x769_80k_cityscapes_20200617_010421-73eed8ca.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/ccnet/ccnet_r50-d8_769x769_80k_cityscapes/ccnet_r50-d8_769x769_80k_cityscapes_20200617_010421.log.json) | -| CCNet | R-101-D8 | 769x769 | 80000 | - | - | 79.45 | 80.66 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/ccnet/ccnet_r101-d8_769x769_80k_cityscapes.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/ccnet/ccnet_r101-d8_769x769_80k_cityscapes/ccnet_r101-d8_769x769_80k_cityscapes_20200618_011502-ad3cd481.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/ccnet/ccnet_r101-d8_769x769_80k_cityscapes/ccnet_r101-d8_769x769_80k_cityscapes_20200618_011502.log.json) | - -### ADE20K - -| Method | Backbone | Crop Size | Lr schd | Mem (GB) | Inf time (fps) | mIoU | mIoU(ms+flip) | config | download | -| ------ | -------- | --------- | ------: | -------- | -------------- | ----: | ------------: | --------------------------------------------------------------------------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -| CCNet | R-50-D8 | 512x512 | 80000 | 8.8 | 20.89 | 41.78 | 42.98 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/ccnet/ccnet_r50-d8_512x512_80k_ade20k.py) | 
[model](https://download.openmmlab.com/mmsegmentation/v0.5/ccnet/ccnet_r50-d8_512x512_80k_ade20k/ccnet_r50-d8_512x512_80k_ade20k_20200615_014848-aa37f61e.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/ccnet/ccnet_r50-d8_512x512_80k_ade20k/ccnet_r50-d8_512x512_80k_ade20k_20200615_014848.log.json) | -| CCNet | R-101-D8 | 512x512 | 80000 | 12.2 | 14.11 | 43.97 | 45.13 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/ccnet/ccnet_r101-d8_512x512_80k_ade20k.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/ccnet/ccnet_r101-d8_512x512_80k_ade20k/ccnet_r101-d8_512x512_80k_ade20k_20200615_014848-1f4929a3.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/ccnet/ccnet_r101-d8_512x512_80k_ade20k/ccnet_r101-d8_512x512_80k_ade20k_20200615_014848.log.json) | -| CCNet | R-50-D8 | 512x512 | 160000 | - | - | 42.08 | 43.13 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/ccnet/ccnet_r50-d8_512x512_160k_ade20k.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/ccnet/ccnet_r50-d8_512x512_160k_ade20k/ccnet_r50-d8_512x512_160k_ade20k_20200616_084435-7c97193b.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/ccnet/ccnet_r50-d8_512x512_160k_ade20k/ccnet_r50-d8_512x512_160k_ade20k_20200616_084435.log.json) | -| CCNet | R-101-D8 | 512x512 | 160000 | - | - | 43.71 | 45.04 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/ccnet/ccnet_r101-d8_512x512_160k_ade20k.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/ccnet/ccnet_r101-d8_512x512_160k_ade20k/ccnet_r101-d8_512x512_160k_ade20k_20200616_000644-e849e007.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/ccnet/ccnet_r101-d8_512x512_160k_ade20k/ccnet_r101-d8_512x512_160k_ade20k_20200616_000644.log.json) | - -### Pascal VOC 2012 + Aug - -| Method | Backbone | Crop Size | Lr schd | Mem (GB) | Inf time (fps) | mIoU | mIoU(ms+flip) | config | download | -| ------ | -------- | --------- | ------: | -------- | -------------- | ----: | ------------: | ---------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | -| CCNet | R-50-D8 | 512x512 | 20000 | 6 | 20.45 | 76.17 | 77.51 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/ccnet/ccnet_r50-d8_512x512_20k_voc12aug.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/ccnet/ccnet_r50-d8_512x512_20k_voc12aug/ccnet_r50-d8_512x512_20k_voc12aug_20200617_193212-fad81784.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/ccnet/ccnet_r50-d8_512x512_20k_voc12aug/ccnet_r50-d8_512x512_20k_voc12aug_20200617_193212.log.json) | -| CCNet | R-101-D8 | 512x512 | 20000 | 9.5 | 13.64 | 77.27 | 79.02 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/ccnet/ccnet_r101-d8_512x512_20k_voc12aug.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/ccnet/ccnet_r101-d8_512x512_20k_voc12aug/ccnet_r101-d8_512x512_20k_voc12aug_20200617_193212-0007b61d.pth) | 
[log](https://download.openmmlab.com/mmsegmentation/v0.5/ccnet/ccnet_r101-d8_512x512_20k_voc12aug/ccnet_r101-d8_512x512_20k_voc12aug_20200617_193212.log.json) | -| CCNet | R-50-D8 | 512x512 | 40000 | - | - | 75.96 | 77.04 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/ccnet/ccnet_r50-d8_512x512_40k_voc12aug.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/ccnet/ccnet_r50-d8_512x512_40k_voc12aug/ccnet_r50-d8_512x512_40k_voc12aug_20200613_232127-c2a15f02.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/ccnet/ccnet_r50-d8_512x512_40k_voc12aug/ccnet_r50-d8_512x512_40k_voc12aug_20200613_232127.log.json) | -| CCNet | R-101-D8 | 512x512 | 40000 | - | - | 77.87 | 78.90 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/ccnet/ccnet_r101-d8_512x512_40k_voc12aug.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/ccnet/ccnet_r101-d8_512x512_40k_voc12aug/ccnet_r101-d8_512x512_40k_voc12aug_20200613_232127-c30da577.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/ccnet/ccnet_r101-d8_512x512_40k_voc12aug/ccnet_r101-d8_512x512_40k_voc12aug_20200613_232127.log.json) | diff --git a/spaces/AnimalEquality/chatbot/README.md b/spaces/AnimalEquality/chatbot/README.md deleted file mode 100644 index dc32947ddd65fb7b24b0d0492a8f8211e24f653c..0000000000000000000000000000000000000000 --- a/spaces/AnimalEquality/chatbot/README.md +++ /dev/null @@ -1,127 +0,0 @@ ---- -title: lv-recipe-chatbot -emoji: 🫑 -colorFrom: green -colorTo: indigo -sdk: gradio -sdk_version: 3.23.0 -app_file: app.py -pinned: false -license: unknown ---- -# lv-recipe-chatbot - - - -## Install - -``` sh -pip install -e '.[dev]' -``` - -## How to use - -``` python -from dotenv import load_dotenv - -load_dotenv() # or load environment vars with different method - -demo = app.create_demo(app.ConversationBot()) -demo.launch() -``` - - Running on local URL: http://127.0.0.1:7860 - - To create a public link, set `share=True` in `launch()`. - -
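If you want the public `gradio.live` link mentioned in the output above, the same
snippet can simply pass `share=True` to `launch()` (a minimal variation on the
example above, not an extra feature of this project):

``` python
demo = app.create_demo(app.ConversationBot())
demo.launch(share=True)  # also prints a temporary public URL
```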
    - -or - -``` sh -python3 app.py -``` - -## Dev quick-start - -`git clone` the repo - -``` sh -cd lv-recipe-chatbot -``` - -Make sure to use the version of python specified in `py_version.txt` -Create a virtual environment. - -``` sh -python3 -m venv env -``` - -Activate the env and install dependencies. - -``` sh -source env/bin/activate -pip install -r requirements.txt -pip install -r requirements/dev.txt -``` - -To make the Jupyter environment, git friendly: `nbdev_install_hooks` -If you want to render documentation locally, you will want to [install -Quarto](https://nbdev.fast.ai/tutorials/tutorial.html#install-quarto). - -`nbdev_install_quarto` - -Put API secrets in .env - -``` sh -cp .env.example .env -``` - -Edit .env with your secret key(s). Only `OPEN_AI_KEY` is required. - -Then start the Gradio demo from within the virtual environment. - -``` sh -python3 app.py -``` - -Preview documentation - -``` sh -nbdev_preview -``` - -## Dependencies - -If a new dependency for development is helpful for developers, add it to -`dev.txt`. -If it is a dependency for the app that is imported in source code, add -it to `core.txt`. -Then run: - -``` sh -scripts/pin_requirements.sh -``` - -This will update our `requirements.txt` to include the dependency as it -should be pinned in the environment. - -## Development - -[quick nbdev tutorial](https://nbdev.fast.ai/tutorials) - -Make changes in `/nbs`. -Update the package files with `nbdev_export` then reimport with -`pip install -e '.[dev]'` - -Preview doc `nbdev_preview` -Build docs, test and update README `nbdev_prepare` - -## Useful links - -- [Task Matrix (Formerly Visual - ChatGPT)](https://github.com/microsoft/TaskMatrix) -- [LangChain](https://python.langchain.com/en/latest/index.html) -- [LLM Prompt Engineering](https://www.promptingguide.ai) -- [OpenAI best practices for - prompts](https://help.openai.com/en/articles/6654000-best-practices-for-prompt-engineering-with-openai-api) diff --git a/spaces/Arnx/MusicGenXvAKN/tests/common_utils/__init__.py b/spaces/Arnx/MusicGenXvAKN/tests/common_utils/__init__.py deleted file mode 100644 index 74ffcfef96fec35c99b2a1a053a61f44f7a8bbe9..0000000000000000000000000000000000000000 --- a/spaces/Arnx/MusicGenXvAKN/tests/common_utils/__init__.py +++ /dev/null @@ -1,9 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -# flake8: noqa -from .temp_utils import TempDirMixin -from .wav_utils import get_batch_white_noise, get_white_noise, save_wav diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/tenacity/_utils.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/tenacity/_utils.py deleted file mode 100644 index f14ff32096eab20a8cc1a5d3c2b8c8cfe1fcb2d2..0000000000000000000000000000000000000000 --- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/tenacity/_utils.py +++ /dev/null @@ -1,76 +0,0 @@ -# Copyright 2016 Julien Danjou -# Copyright 2016 Joshua Harlow -# Copyright 2013-2014 Ray Holder -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. 
-# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -import sys -import typing -from datetime import timedelta - - -# sys.maxsize: -# An integer giving the maximum value a variable of type Py_ssize_t can take. -MAX_WAIT = sys.maxsize / 2 - - -def find_ordinal(pos_num: int) -> str: - # See: https://en.wikipedia.org/wiki/English_numerals#Ordinal_numbers - if pos_num == 0: - return "th" - elif pos_num == 1: - return "st" - elif pos_num == 2: - return "nd" - elif pos_num == 3: - return "rd" - elif 4 <= pos_num <= 20: - return "th" - else: - return find_ordinal(pos_num % 10) - - -def to_ordinal(pos_num: int) -> str: - return f"{pos_num}{find_ordinal(pos_num)}" - - -def get_callback_name(cb: typing.Callable[..., typing.Any]) -> str: - """Get a callback fully-qualified name. - - If no name can be produced ``repr(cb)`` is called and returned. - """ - segments = [] - try: - segments.append(cb.__qualname__) - except AttributeError: - try: - segments.append(cb.__name__) - except AttributeError: - pass - if not segments: - return repr(cb) - else: - try: - # When running under sphinx it appears this can be none? - if cb.__module__: - segments.insert(0, cb.__module__) - except AttributeError: - pass - return ".".join(segments) - - -time_unit_type = typing.Union[int, float, timedelta] - - -def to_seconds(time_unit: time_unit_type) -> float: - return float(time_unit.total_seconds() if isinstance(time_unit, timedelta) else time_unit) diff --git a/spaces/Banbri/zcvzcv/src/app/layouts/new_layouts.tsx b/spaces/Banbri/zcvzcv/src/app/layouts/new_layouts.tsx deleted file mode 100644 index df20ef7209f9ccc66e2e27ce2a83d15c454af7d0..0000000000000000000000000000000000000000 --- a/spaces/Banbri/zcvzcv/src/app/layouts/new_layouts.tsx +++ /dev/null @@ -1,273 +0,0 @@ -"use client" - -import { Panel } from "@/app/interface/panel" -import { pick } from "@/lib/pick" -import { Grid } from "@/app/interface/grid" - -export function Layout1() { - return ( - -
    - -
    -
    - -
    -
    - -
    -
    - -
    -
    - -
    -
    - ) -} - -export function Layout2() { - return ( - -
    - -
    -
    - -
    -
    - -
    -
    - -
    -
    - -
    -
    - ) -} - -export function Layout3() { - return ( - -
    - -
    -
    - -
    -
    - -
    -
    -
    - -
    -
    - -
    -
    -
    - ) -} - -export function Layout4() { - return ( - -
    - -
    -
    - -
    -
    - -
    -
    - -
    -
    - ) -} - - -export function Layout5() { - return ( - -
    - -
    -
    - -
    -
    - -
    -
    - -
    -
    - -
    -
    - ) -} - -export function Layout6() { - return ( - -
    - -
    -
    - -
    -
    - -
    -
    - -
    -
    - -
    -
    - ) -} - -// export const layouts = { Layout1, Layout2, Layout3, Layout4, Layout5, Layout6 } -export const allLayouts = { - // Layout1, - // Layout2, - // Layout3, - // Layout4, - Layout5, - // Layout6 - } - -export type LayoutName = keyof typeof allLayouts - -export function getRandomLayoutName(): LayoutName { - return pick(Object.keys(allLayouts) as LayoutName[]) as LayoutName -} - -export function getRandomLayoutNames(): LayoutName[] { - return Object.keys(allLayouts).sort(() => Math.random() - 0.5) as LayoutName[] -} \ No newline at end of file diff --git a/spaces/Bart92/RVC_HF/demucs/test.py b/spaces/Bart92/RVC_HF/demucs/test.py deleted file mode 100644 index 4140914ddbff3543b4056ca0cb1b5e887434a40a..0000000000000000000000000000000000000000 --- a/spaces/Bart92/RVC_HF/demucs/test.py +++ /dev/null @@ -1,109 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -import gzip -import sys -from concurrent import futures - -import musdb -import museval -import torch as th -import tqdm -from scipy.io import wavfile -from torch import distributed - -from .audio import convert_audio -from .utils import apply_model - - -def evaluate(model, - musdb_path, - eval_folder, - workers=2, - device="cpu", - rank=0, - save=False, - shifts=0, - split=False, - overlap=0.25, - is_wav=False, - world_size=1): - """ - Evaluate model using museval. Run the model - on a single GPU, the bottleneck being the call to museval. - """ - - output_dir = eval_folder / "results" - output_dir.mkdir(exist_ok=True, parents=True) - json_folder = eval_folder / "results/test" - json_folder.mkdir(exist_ok=True, parents=True) - - # we load tracks from the original musdb set - test_set = musdb.DB(musdb_path, subsets=["test"], is_wav=is_wav) - src_rate = 44100 # hardcoded for now... - - for p in model.parameters(): - p.requires_grad = False - p.grad = None - - pendings = [] - with futures.ProcessPoolExecutor(workers or 1) as pool: - for index in tqdm.tqdm(range(rank, len(test_set), world_size), file=sys.stdout): - track = test_set.tracks[index] - - out = json_folder / f"{track.name}.json.gz" - if out.exists(): - continue - - mix = th.from_numpy(track.audio).t().float() - ref = mix.mean(dim=0) # mono mixture - mix = (mix - ref.mean()) / ref.std() - mix = convert_audio(mix, src_rate, model.samplerate, model.audio_channels) - estimates = apply_model(model, mix.to(device), - shifts=shifts, split=split, overlap=overlap) - estimates = estimates * ref.std() + ref.mean() - - estimates = estimates.transpose(1, 2) - references = th.stack( - [th.from_numpy(track.targets[name].audio).t() for name in model.sources]) - references = convert_audio(references, src_rate, - model.samplerate, model.audio_channels) - references = references.transpose(1, 2).numpy() - estimates = estimates.cpu().numpy() - win = int(1. * model.samplerate) - hop = int(1. 
* model.samplerate) - if save: - folder = eval_folder / "wav/test" / track.name - folder.mkdir(exist_ok=True, parents=True) - for name, estimate in zip(model.sources, estimates): - wavfile.write(str(folder / (name + ".wav")), 44100, estimate) - - if workers: - pendings.append((track.name, pool.submit( - museval.evaluate, references, estimates, win=win, hop=hop))) - else: - pendings.append((track.name, museval.evaluate( - references, estimates, win=win, hop=hop))) - del references, mix, estimates, track - - for track_name, pending in tqdm.tqdm(pendings, file=sys.stdout): - if workers: - pending = pending.result() - sdr, isr, sir, sar = pending - track_store = museval.TrackStore(win=44100, hop=44100, track_name=track_name) - for idx, target in enumerate(model.sources): - values = { - "SDR": sdr[idx].tolist(), - "SIR": sir[idx].tolist(), - "ISR": isr[idx].tolist(), - "SAR": sar[idx].tolist() - } - - track_store.add_target(target_name=target, values=values) - json_path = json_folder / f"{track_name}.json.gz" - gzip.open(json_path, "w").write(track_store.json.encode('utf-8')) - if world_size > 1: - distributed.barrier() diff --git a/spaces/Benson/text-generation/Examples/Bubble Shooter Classic Download Pc.md b/spaces/Benson/text-generation/Examples/Bubble Shooter Classic Download Pc.md deleted file mode 100644 index 9efa29b7d006fa997cc4bce064d2b98839c8e495..0000000000000000000000000000000000000000 --- a/spaces/Benson/text-generation/Examples/Bubble Shooter Classic Download Pc.md +++ /dev/null @@ -1,64 +0,0 @@ - -

    Cómo descargar y jugar Bubble Shooter Classic en PC

    -

    Si estás buscando un juego divertido y adictivo que te mantenga entretenido durante horas, deberías probar Bubble Shooter Classic. Este juego es un clásico juego de disparos de burbujas que ha existido durante muchos años, pero nunca pasa de moda. Puedes reproducirlo en tus dispositivos móviles, pero ¿sabías que también puedes reproducirlo en tu PC? En este artículo, te mostraremos cómo descargar y jugar Bubble Shooter Classic en PC usando BlueStacks, un emulador de Android. También te daremos algunos consejos y trucos para dominar el juego y superar los niveles. ¡Empecemos!

    -

    bubble shooter classic download pc


    Download File ————— https://bltlly.com/2v6MGI



    -

    ¿Qué es Bubble Shooter Classic?

    -

    Una breve introducción al juego y sus características

    -

    Bubble Shooter Classic es un juego donde tienes que disparar burbujas de colores para hacer partidos de tres o más del mismo color. Una vez hecho esto, las burbujas estallarán y desaparecerán. El objetivo es eliminar todas las burbujas de la pantalla antes de que lleguen a la parte inferior. Hay dos modos de juego, Clásico y árcade. En el modo Clásico, tienes que disparar burbujas hacia arriba. En el modo árcade, tienes que disparar burbujas de lado. También hay dos niveles de dificultad, fácil y difícil. Puedes elegir el que se adapte a tu nivel de habilidad.

    -

    Bubble Shooter Classic tiene muchas características que lo hacen divertido y desafiante. Por ejemplo, puedes usar potenciadores para ayudarte a eliminar las burbujas más rápido. Hay cuatro tipos de potenciadores: Color Bomb, Rainbow Bubble, Shape Bomb y Time Bomb. Cada uno tiene un efecto diferente en las burbujas. También puedes ganar monedas haciendo estallar burbujas y usarlas para comprar más power-ups. También puede competir con otros jugadores en línea y ver quién puede obtener la puntuación más alta.

    -

    Los beneficios de jugar Bubble Shooter Classic en PC

    - -

    Cómo descargar e instalar Bubble Shooter Classic en PC

    -

    Los pasos para descargar e instalar BlueStacks, un emulador de Android

    -

    Para jugar Bubble Shooter Classic en PC, es necesario descargar e instalar BlueStacks primero. BlueStacks es un emulador de Android que te permite ejecutar aplicaciones y juegos de Android en tu PC. Estos son los pasos para descargar e instalar BlueStacks:

    -

    -
      -
    1. Ir a el sitio web oficial de BlueStacks y haga clic en el botón "Descargar BlueStacks".
    2. -
    3. Espere a que termine la descarga y luego ejecute el archivo de instalación.
    4. -
    5. Siga las instrucciones en la pantalla para completar el proceso de instalación.
    6. -
    7. Inicie BlueStacks e inicie sesión con su cuenta de Google.
    8. -
    -

    Los pasos para descargar e instalar Bubble Shooter Classic desde la Google Play Store

    -

    Después de haber instalado BlueStacks, puede descargar e instalar Bubble Shooter Classic desde Google Play Store. Estos son los pasos para hacerlo:

    -
      -
    1. Abrir BlueStacks y haga clic en el "Google Play" icono en la pantalla de inicio.
    2. -
    3. Buscar "Bubble Shooter Classic" en la barra de búsqueda y haga clic en el icono del juego.
    4. -
    5. Haga clic en el botón "Instalar" y espere a que termine la instalación.
    6. -
    7. Haga clic en el botón "Abrir" para iniciar el juego.
    8. -
    -

    Cómo jugar Bubble Shooter Classic en PC

    -

    Las reglas y controles básicos del juego

    -

    Jugar Bubble Shooter Classic en PC es fácil y divertido. Las reglas y controles básicos del juego son los siguientes:

    -
      -
    • Tienes que disparar burbujas para hacer partidos de tres o más del mismo color.
    • -
    • Puede utilizar el ratón o el teclado para apuntar y disparar las burbujas.
    • -
    • Puede cambiar la burbuja actual con la siguiente haciendo clic en ella o presionando la barra espaciadora.
    • -
    • Puedes ver las burbujas restantes y tu puntuación en la parte inferior de la pantalla.
    • -
    • Puede pausar el juego haciendo clic en el botón de menú o presionando la tecla de escape.
    • - -
    -

    Los consejos y trucos para dominar el juego y superar los niveles

    -

    Bubble Shooter Classic es un juego que requiere estrategia y habilidad. Aquí hay algunos consejos y trucos para ayudarte a dominar el juego y superar los niveles:

    -
      -
    • Trate de apuntar a los grupos de burbujas que tienen el mismo color que su burbuja actual. Esto le ayudará a eliminar más burbujas a la vez y aumentar su puntuación.
    • -
    • Trate de hacer estallar las burbujas que están cerca de la parte inferior de la pantalla. Esto evitará que lleguen a la parte inferior y terminar el juego.
    • -
    • Trate de utilizar los power-ups sabiamente. Pueden ayudarle a eliminar situaciones difíciles y aumentar su puntuación. Sin embargo, son limitadas y cuestan monedas, así que úsalas con moderación.
    • -
    • Intenta completar los niveles lo más rápido posible. Cuanto más rápido completes un nivel, más alta será tu puntuación.
    • -
    • Trate de jugar en el modo árcade si quieres más desafío y variedad. El modo árcade tiene diferentes diseños y obstáculos que hacen que el juego sea más interesante y divertido.
    • -
    -

    Conclusión

    -

    Un resumen de los puntos principales y una llamada a la acción

    -

    Bubble Shooter Classic es un clásico juego de disparos de burbujas que puedes jugar en tu PC usando BlueStacks, un emulador de Android. Puedes disfrutar de una pantalla más grande, mejores gráficos y controles más cómodos al reproducirla en tu PC. También puedes acceder a más juegos y aplicaciones desde Google Play Store usando BlueStacks. Para jugar Bubble Shooter Classic en PC, solo tienes que descargar e instalar BlueStacks, a continuación, descargar e instalar Bubble Shooter Classic desde la Google Play Store. Luego, puedes empezar a jugar y divertirte. También puedes seguir nuestros consejos y trucos para dominar el juego y superar los niveles. ¿Qué estás esperando? Descargar Bubble Shooter Classic en PC hoy y disfrutar!

    -

    Preguntas frecuentes

    -

    Q1: ¿Es Bubble Shooter Classic gratis para jugar?

    - -

    Q2: ¿Cuántos niveles hay en Bubble Shooter Classic?

    -

    A2: Hay más de 1000 niveles en Bubble Shooter Classic, cada uno con diferentes retos y objetivos. Puedes jugarlos en el orden que quieras.

    -

    Q3: ¿Cuáles son los power-ups en Bubble Shooter Classic?

    -

    A3: Hay cuatro tipos de potenciadores en Bubble Shooter Classic: bomba de color, burbuja de arco iris, bomba de forma y bomba de tiempo. Cada uno tiene un efecto diferente en las burbujas. Puedes comprarlas con monedas o conseguirlas al azar durante el juego.

    -

    Q4: ¿Cómo puedo guardar mi progreso en Bubble Shooter Classic?

    -

    A4: Puedes guardar tu progreso en Bubble Shooter Classic iniciando sesión con tu cuenta de Google. Esto también te permitirá sincronizar tu progreso en diferentes dispositivos.

    -

    Q5: ¿Cómo puedo contactar al desarrollador de Bubble Shooter Classic?

    -

    A5: Puede ponerse en contacto con el desarrollador de Bubble Shooter Classic enviando un correo electrónico a bubbleshooterclassic@gmail.com. También puede visitar su página de Facebook o su Página de Facebook o su sitio web para obtener más información y actualizaciones.

    64aa2da5cf
    -
    -
    \ No newline at end of file diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/botocore/docs/sharedexample.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/botocore/docs/sharedexample.py deleted file mode 100644 index 58cdfa594c4f1be49a8718b6ec98965481ea4527..0000000000000000000000000000000000000000 --- a/spaces/Big-Web/MMSD/env/Lib/site-packages/botocore/docs/sharedexample.py +++ /dev/null @@ -1,227 +0,0 @@ -# Copyright 2015 Amazon.com, Inc. or its affiliates. All Rights Reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"). You -# may not use this file except in compliance with the License. A copy of -# the License is located at -# -# http://aws.amazon.com/apache2.0/ -# -# or in the "license" file accompanying this file. This file is -# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF -# ANY KIND, either express or implied. See the License for the specific -# language governing permissions and limitations under the License. -import numbers -import re - -from botocore.docs.utils import escape_controls -from botocore.utils import parse_timestamp - - -class SharedExampleDocumenter: - def document_shared_example( - self, example, prefix, section, operation_model - ): - """Documents a single shared example based on its definition. - - :param example: The model of the example - - :param prefix: The prefix to use in the method example. - - :param section: The section to write to. - - :param operation_model: The model of the operation used in the example - """ - section.style.new_paragraph() - section.write(example.get('description')) - section.style.new_line() - self.document_input( - section, example, prefix, operation_model.input_shape - ) - self.document_output(section, example, operation_model.output_shape) - - def document_input(self, section, example, prefix, shape): - input_section = section.add_new_section('input') - input_section.style.start_codeblock() - if prefix is not None: - input_section.write(prefix) - params = example.get('input', {}) - comments = example.get('comments') - if comments: - comments = comments.get('input') - param_section = input_section.add_new_section('parameters') - self._document_params(param_section, params, comments, [], shape) - closing_section = input_section.add_new_section('input-close') - closing_section.style.new_line() - closing_section.style.new_line() - closing_section.write('print(response)') - closing_section.style.end_codeblock() - - def document_output(self, section, example, shape): - output_section = section.add_new_section('output') - output_section.style.new_line() - output_section.write('Expected Output:') - output_section.style.new_line() - output_section.style.start_codeblock() - params = example.get('output', {}) - - # There might not be an output, but we will return metadata anyway - params['ResponseMetadata'] = {"...": "..."} - comments = example.get('comments') - if comments: - comments = comments.get('output') - self._document_dict(output_section, params, comments, [], shape, True) - closing_section = output_section.add_new_section('output-close') - closing_section.style.end_codeblock() - - def _document(self, section, value, comments, path, shape): - """ - :param section: The section to add the docs to. - - :param value: The input / output values representing the parameters that - are included in the example. - - :param comments: The dictionary containing all the comments to be - applied to the example. 
- - :param path: A list describing where the documenter is in traversing the - parameters. This is used to find the equivalent location - in the comments dictionary. - """ - if isinstance(value, dict): - self._document_dict(section, value, comments, path, shape) - elif isinstance(value, list): - self._document_list(section, value, comments, path, shape) - elif isinstance(value, numbers.Number): - self._document_number(section, value, path) - elif shape and shape.type_name == 'timestamp': - self._document_datetime(section, value, path) - else: - self._document_str(section, value, path) - - def _document_dict( - self, section, value, comments, path, shape, top_level=False - ): - dict_section = section.add_new_section('dict-value') - self._start_nested_value(dict_section, '{') - for key, val in value.items(): - path.append('.%s' % key) - item_section = dict_section.add_new_section(key) - item_section.style.new_line() - item_comment = self._get_comment(path, comments) - if item_comment: - item_section.write(item_comment) - item_section.style.new_line() - item_section.write("'%s': " % key) - - # Shape could be none if there is no output besides ResponseMetadata - item_shape = None - if shape: - if shape.type_name == 'structure': - item_shape = shape.members.get(key) - elif shape.type_name == 'map': - item_shape = shape.value - self._document(item_section, val, comments, path, item_shape) - path.pop() - dict_section_end = dict_section.add_new_section('ending-brace') - self._end_nested_value(dict_section_end, '}') - if not top_level: - dict_section_end.write(',') - - def _document_params(self, section, value, comments, path, shape): - param_section = section.add_new_section('param-values') - self._start_nested_value(param_section, '(') - for key, val in value.items(): - path.append('.%s' % key) - item_section = param_section.add_new_section(key) - item_section.style.new_line() - item_comment = self._get_comment(path, comments) - if item_comment: - item_section.write(item_comment) - item_section.style.new_line() - item_section.write(key + '=') - - # Shape could be none if there are no input parameters - item_shape = None - if shape: - item_shape = shape.members.get(key) - self._document(item_section, val, comments, path, item_shape) - path.pop() - param_section_end = param_section.add_new_section('ending-parenthesis') - self._end_nested_value(param_section_end, ')') - - def _document_list(self, section, value, comments, path, shape): - list_section = section.add_new_section('list-section') - self._start_nested_value(list_section, '[') - item_shape = shape.member - for index, val in enumerate(value): - item_section = list_section.add_new_section(index) - item_section.style.new_line() - path.append('[%s]' % index) - item_comment = self._get_comment(path, comments) - if item_comment: - item_section.write(item_comment) - item_section.style.new_line() - self._document(item_section, val, comments, path, item_shape) - path.pop() - list_section_end = list_section.add_new_section('ending-bracket') - self._end_nested_value(list_section_end, '],') - - def _document_str(self, section, value, path): - # We do the string conversion because this might accept a type that - # we don't specifically address. 
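        # escape_controls (imported from botocore.docs.utils at the top of this
        # module) is expected to replace control characters such as newlines and
        # tabs with their escaped text forms (\n, \t, ...), so the generated
        # example value stays printable on a single line.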
- safe_value = escape_controls(value) - section.write(f"'{safe_value}',") - - def _document_number(self, section, value, path): - section.write("%s," % str(value)) - - def _document_datetime(self, section, value, path): - datetime_tuple = parse_timestamp(value).timetuple() - datetime_str = str(datetime_tuple[0]) - for i in range(1, len(datetime_tuple)): - datetime_str += ", " + str(datetime_tuple[i]) - section.write("datetime(%s)," % datetime_str) - - def _get_comment(self, path, comments): - key = re.sub(r'^\.', '', ''.join(path)) - if comments and key in comments: - return '# ' + comments[key] - else: - return '' - - def _start_nested_value(self, section, start): - section.write(start) - section.style.indent() - section.style.indent() - - def _end_nested_value(self, section, end): - section.style.dedent() - section.style.dedent() - section.style.new_line() - section.write(end) - - -def document_shared_examples( - section, operation_model, example_prefix, shared_examples -): - """Documents the shared examples - - :param section: The section to write to. - - :param operation_model: The model of the operation. - - :param example_prefix: The prefix to use in the method example. - - :param shared_examples: The shared JSON examples from the model. - """ - container_section = section.add_new_section('shared-examples') - container_section.style.new_paragraph() - container_section.style.bold('Examples') - documenter = SharedExampleDocumenter() - for example in shared_examples: - documenter.document_shared_example( - example=example, - section=container_section.add_new_section(example['id']), - prefix=example_prefix, - operation_model=operation_model, - ) diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_internal/req/req_file.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_internal/req/req_file.py deleted file mode 100644 index f717c1ccc79f7581f1293b3fcf1a0764def7a84a..0000000000000000000000000000000000000000 --- a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_internal/req/req_file.py +++ /dev/null @@ -1,552 +0,0 @@ -""" -Requirements file parsing -""" - -import logging -import optparse -import os -import re -import shlex -import urllib.parse -from optparse import Values -from typing import ( - TYPE_CHECKING, - Any, - Callable, - Dict, - Generator, - Iterable, - List, - Optional, - Tuple, -) - -from pip._internal.cli import cmdoptions -from pip._internal.exceptions import InstallationError, RequirementsFileParseError -from pip._internal.models.search_scope import SearchScope -from pip._internal.network.session import PipSession -from pip._internal.network.utils import raise_for_status -from pip._internal.utils.encoding import auto_decode -from pip._internal.utils.urls import get_url_scheme - -if TYPE_CHECKING: - # NoReturn introduced in 3.6.2; imported only for type checking to maintain - # pip compatibility with older patch versions of Python 3.6 - from typing import NoReturn - - from pip._internal.index.package_finder import PackageFinder - -__all__ = ["parse_requirements"] - -ReqFileLines = Iterable[Tuple[int, str]] - -LineParser = Callable[[str], Tuple[str, Values]] - -SCHEME_RE = re.compile(r"^(http|https|file):", re.I) -COMMENT_RE = re.compile(r"(^|\s+)#.*$") - -# Matches environment variable-style values in '${MY_VARIABLE_1}' with the -# variable name consisting of only uppercase letters, digits or the '_' -# (underscore). This follows the POSIX standard defined in IEEE Std 1003.1, -# 2013 Edition. 
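# For example, "${API_TOKEN}" and "${MY_VARIABLE_1}" are matched by the pattern
# below, while "$API_TOKEN" (no braces) and "${lower_case}" (lowercase letters)
# are not.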
-ENV_VAR_RE = re.compile(r"(?P<var>\$\{(?P<name>[A-Z0-9_]+)\})")
-
-SUPPORTED_OPTIONS: List[Callable[..., optparse.Option]] = [
- cmdoptions.index_url,
- cmdoptions.extra_index_url,
- cmdoptions.no_index,
- cmdoptions.constraints,
- cmdoptions.requirements,
- cmdoptions.editable,
- cmdoptions.find_links,
- cmdoptions.no_binary,
- cmdoptions.only_binary,
- cmdoptions.prefer_binary,
- cmdoptions.require_hashes,
- cmdoptions.pre,
- cmdoptions.trusted_host,
- cmdoptions.use_new_feature,
-]
-
-# options to be passed to requirements
-SUPPORTED_OPTIONS_REQ: List[Callable[..., optparse.Option]] = [
- cmdoptions.global_options,
- cmdoptions.hash,
- cmdoptions.config_settings,
-]
-
-# the 'dest' string values
-SUPPORTED_OPTIONS_REQ_DEST = [str(o().dest) for o in SUPPORTED_OPTIONS_REQ]
-
-logger = logging.getLogger(__name__)
-
-
-class ParsedRequirement:
- def __init__(
- self,
- requirement: str,
- is_editable: bool,
- comes_from: str,
- constraint: bool,
- options: Optional[Dict[str, Any]] = None,
- line_source: Optional[str] = None,
- ) -> None:
- self.requirement = requirement
- self.is_editable = is_editable
- self.comes_from = comes_from
- self.options = options
- self.constraint = constraint
- self.line_source = line_source
-
-
-class ParsedLine:
- def __init__(
- self,
- filename: str,
- lineno: int,
- args: str,
- opts: Values,
- constraint: bool,
- ) -> None:
- self.filename = filename
- self.lineno = lineno
- self.opts = opts
- self.constraint = constraint
-
- if args:
- self.is_requirement = True
- self.is_editable = False
- self.requirement = args
- elif opts.editables:
- self.is_requirement = True
- self.is_editable = True
- # We don't support multiple -e on one line
- self.requirement = opts.editables[0]
- else:
- self.is_requirement = False
-
-
-def parse_requirements(
- filename: str,
- session: PipSession,
- finder: Optional["PackageFinder"] = None,
- options: Optional[optparse.Values] = None,
- constraint: bool = False,
-) -> Generator[ParsedRequirement, None, None]:
- """Parse a requirements file and yield ParsedRequirement instances.
-
- :param filename: Path or url of requirements file.
- :param session: PipSession instance.
- :param finder: Instance of pip.index.PackageFinder.
- :param options: cli options.
- :param constraint: If true, parsing a constraint file rather than
- requirements file.
- """
- line_parser = get_line_parser(finder)
- parser = RequirementsFileParser(session, line_parser)
-
- for parsed_line in parser.parse(filename, constraint):
- parsed_req = handle_line(
- parsed_line, options=options, finder=finder, session=session
- )
- if parsed_req is not None:
- yield parsed_req
-
-
-def preprocess(content: str) -> ReqFileLines:
- """Split, filter, and join lines, and return a line iterator
-
- :param content: the content of the requirements file
- """
- lines_enum: ReqFileLines = enumerate(content.splitlines(), start=1)
- lines_enum = join_lines(lines_enum)
- lines_enum = ignore_comments(lines_enum)
- lines_enum = expand_env_variables(lines_enum)
- return lines_enum
-
-
-def handle_requirement_line(
- line: ParsedLine,
- options: Optional[optparse.Values] = None,
-) -> ParsedRequirement:
- # preserve for the nested code path
- line_comes_from = "{} {} (line {})".format(
- "-c" if line.constraint else "-r",
- line.filename,
- line.lineno,
- )
-
- assert line.is_requirement
-
- if line.is_editable:
- # For editable requirements, we don't support per-requirement
- # options, so just return the parsed requirement.
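- # Illustrative example: the line '-e ./pkgs/mypkg' yields a
- # ParsedRequirement with requirement='./pkgs/mypkg' and is_editable=True.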
- return ParsedRequirement( - requirement=line.requirement, - is_editable=line.is_editable, - comes_from=line_comes_from, - constraint=line.constraint, - ) - else: - # get the options that apply to requirements - req_options = {} - for dest in SUPPORTED_OPTIONS_REQ_DEST: - if dest in line.opts.__dict__ and line.opts.__dict__[dest]: - req_options[dest] = line.opts.__dict__[dest] - - line_source = f"line {line.lineno} of {line.filename}" - return ParsedRequirement( - requirement=line.requirement, - is_editable=line.is_editable, - comes_from=line_comes_from, - constraint=line.constraint, - options=req_options, - line_source=line_source, - ) - - -def handle_option_line( - opts: Values, - filename: str, - lineno: int, - finder: Optional["PackageFinder"] = None, - options: Optional[optparse.Values] = None, - session: Optional[PipSession] = None, -) -> None: - if opts.hashes: - logger.warning( - "%s line %s has --hash but no requirement, and will be ignored.", - filename, - lineno, - ) - - if options: - # percolate options upward - if opts.require_hashes: - options.require_hashes = opts.require_hashes - if opts.features_enabled: - options.features_enabled.extend( - f for f in opts.features_enabled if f not in options.features_enabled - ) - - # set finder options - if finder: - find_links = finder.find_links - index_urls = finder.index_urls - no_index = finder.search_scope.no_index - if opts.no_index is True: - no_index = True - index_urls = [] - if opts.index_url and not no_index: - index_urls = [opts.index_url] - if opts.extra_index_urls and not no_index: - index_urls.extend(opts.extra_index_urls) - if opts.find_links: - # FIXME: it would be nice to keep track of the source - # of the find_links: support a find-links local path - # relative to a requirements file. - value = opts.find_links[0] - req_dir = os.path.dirname(os.path.abspath(filename)) - relative_to_reqs_file = os.path.join(req_dir, value) - if os.path.exists(relative_to_reqs_file): - value = relative_to_reqs_file - find_links.append(value) - - if session: - # We need to update the auth urls in session - session.update_index_urls(index_urls) - - search_scope = SearchScope( - find_links=find_links, - index_urls=index_urls, - no_index=no_index, - ) - finder.search_scope = search_scope - - if opts.pre: - finder.set_allow_all_prereleases() - - if opts.prefer_binary: - finder.set_prefer_binary() - - if session: - for host in opts.trusted_hosts or []: - source = f"line {lineno} of {filename}" - session.add_trusted_host(host, source=source) - - -def handle_line( - line: ParsedLine, - options: Optional[optparse.Values] = None, - finder: Optional["PackageFinder"] = None, - session: Optional[PipSession] = None, -) -> Optional[ParsedRequirement]: - """Handle a single parsed requirements line; This can result in - creating/yielding requirements, or updating the finder. - - :param line: The parsed line to be processed. - :param options: CLI options. - :param finder: The finder - updated by non-requirement lines. - :param session: The session - updated by non-requirement lines. - - Returns a ParsedRequirement object if the line is a requirement line, - otherwise returns None. - - For lines that contain requirements, the only options that have an effect - are from SUPPORTED_OPTIONS_REQ, and they are scoped to the - requirement. Other options from SUPPORTED_OPTIONS may be present, but are - ignored. - - For lines that do not contain requirements, the only options that have an - effect are from SUPPORTED_OPTIONS. 
Options from SUPPORTED_OPTIONS_REQ may - be present, but are ignored. These lines may contain multiple options - (although our docs imply only one is supported), and all our parsed and - affect the finder. - """ - - if line.is_requirement: - parsed_req = handle_requirement_line(line, options) - return parsed_req - else: - handle_option_line( - line.opts, - line.filename, - line.lineno, - finder, - options, - session, - ) - return None - - -class RequirementsFileParser: - def __init__( - self, - session: PipSession, - line_parser: LineParser, - ) -> None: - self._session = session - self._line_parser = line_parser - - def parse( - self, filename: str, constraint: bool - ) -> Generator[ParsedLine, None, None]: - """Parse a given file, yielding parsed lines.""" - yield from self._parse_and_recurse(filename, constraint) - - def _parse_and_recurse( - self, filename: str, constraint: bool - ) -> Generator[ParsedLine, None, None]: - for line in self._parse_file(filename, constraint): - if not line.is_requirement and ( - line.opts.requirements or line.opts.constraints - ): - # parse a nested requirements file - if line.opts.requirements: - req_path = line.opts.requirements[0] - nested_constraint = False - else: - req_path = line.opts.constraints[0] - nested_constraint = True - - # original file is over http - if SCHEME_RE.search(filename): - # do a url join so relative paths work - req_path = urllib.parse.urljoin(filename, req_path) - # original file and nested file are paths - elif not SCHEME_RE.search(req_path): - # do a join so relative paths work - req_path = os.path.join( - os.path.dirname(filename), - req_path, - ) - - yield from self._parse_and_recurse(req_path, nested_constraint) - else: - yield line - - def _parse_file( - self, filename: str, constraint: bool - ) -> Generator[ParsedLine, None, None]: - _, content = get_file_content(filename, self._session) - - lines_enum = preprocess(content) - - for line_number, line in lines_enum: - try: - args_str, opts = self._line_parser(line) - except OptionParsingError as e: - # add offending line - msg = f"Invalid requirement: {line}\n{e.msg}" - raise RequirementsFileParseError(msg) - - yield ParsedLine( - filename, - line_number, - args_str, - opts, - constraint, - ) - - -def get_line_parser(finder: Optional["PackageFinder"]) -> LineParser: - def parse_line(line: str) -> Tuple[str, Values]: - # Build new parser for each line since it accumulates appendable - # options. - parser = build_parser() - defaults = parser.get_default_values() - defaults.index_url = None - if finder: - defaults.format_control = finder.format_control - - args_str, options_str = break_args_options(line) - - try: - options = shlex.split(options_str) - except ValueError as e: - raise OptionParsingError(f"Could not split options: {options_str}") from e - - opts, _ = parser.parse_args(options, defaults) - - return args_str, opts - - return parse_line - - -def break_args_options(line: str) -> Tuple[str, str]: - """Break up the line into an args and options string. We only want to shlex - (and then optparse) the options, not the args. args can contain markers - which are corrupted by shlex. 
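- For example (illustrative), 'SomeProject==1.2 --hash=sha256:abcd' is split
- into args 'SomeProject==1.2' and options '--hash=sha256:abcd'.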
- """ - tokens = line.split(" ") - args = [] - options = tokens[:] - for token in tokens: - if token.startswith("-") or token.startswith("--"): - break - else: - args.append(token) - options.pop(0) - return " ".join(args), " ".join(options) - - -class OptionParsingError(Exception): - def __init__(self, msg: str) -> None: - self.msg = msg - - -def build_parser() -> optparse.OptionParser: - """ - Return a parser for parsing requirement lines - """ - parser = optparse.OptionParser(add_help_option=False) - - option_factories = SUPPORTED_OPTIONS + SUPPORTED_OPTIONS_REQ - for option_factory in option_factories: - option = option_factory() - parser.add_option(option) - - # By default optparse sys.exits on parsing errors. We want to wrap - # that in our own exception. - def parser_exit(self: Any, msg: str) -> "NoReturn": - raise OptionParsingError(msg) - - # NOTE: mypy disallows assigning to a method - # https://github.com/python/mypy/issues/2427 - parser.exit = parser_exit # type: ignore - - return parser - - -def join_lines(lines_enum: ReqFileLines) -> ReqFileLines: - """Joins a line ending in '\' with the previous line (except when following - comments). The joined line takes on the index of the first line. - """ - primary_line_number = None - new_line: List[str] = [] - for line_number, line in lines_enum: - if not line.endswith("\\") or COMMENT_RE.match(line): - if COMMENT_RE.match(line): - # this ensures comments are always matched later - line = " " + line - if new_line: - new_line.append(line) - assert primary_line_number is not None - yield primary_line_number, "".join(new_line) - new_line = [] - else: - yield line_number, line - else: - if not new_line: - primary_line_number = line_number - new_line.append(line.strip("\\")) - - # last line contains \ - if new_line: - assert primary_line_number is not None - yield primary_line_number, "".join(new_line) - - # TODO: handle space after '\'. - - -def ignore_comments(lines_enum: ReqFileLines) -> ReqFileLines: - """ - Strips comments and filter empty lines. - """ - for line_number, line in lines_enum: - line = COMMENT_RE.sub("", line) - line = line.strip() - if line: - yield line_number, line - - -def expand_env_variables(lines_enum: ReqFileLines) -> ReqFileLines: - """Replace all environment variables that can be retrieved via `os.getenv`. - - The only allowed format for environment variables defined in the - requirement file is `${MY_VARIABLE_1}` to ensure two things: - - 1. Strings that contain a `$` aren't accidentally (partially) expanded. - 2. Ensure consistency across platforms for requirement files. - - These points are the result of a discussion on the `github pull - request #3514 `_. - - Valid characters in variable names follow the `POSIX standard - `_ and are limited - to uppercase letter, digits and the `_` (underscore). - """ - for line_number, line in lines_enum: - for env_var, var_name in ENV_VAR_RE.findall(line): - value = os.getenv(var_name) - if not value: - continue - - line = line.replace(env_var, value) - - yield line_number, line - - -def get_file_content(url: str, session: PipSession) -> Tuple[str, str]: - """Gets the content of a file; it may be a filename, file: URL, or - http: URL. Returns (location, content). Content is unicode. - Respects # -*- coding: declarations on the retrieved files. - - :param url: File path or url. - :param session: PipSession instance. - """ - scheme = get_url_scheme(url) - - # Pip has special support for file:// URLs (LocalFSAdapter). 
- if scheme in ["http", "https", "file"]: - resp = session.get(url) - raise_for_status(resp) - return resp.url, resp.text - - # Assume this is a bare path. - try: - with open(url, "rb") as f: - content = auto_decode(f.read()) - except OSError as exc: - raise InstallationError(f"Could not open requirements file: {exc}") - return url, content diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/setuptools/_distutils/ccompiler.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/setuptools/_distutils/ccompiler.py deleted file mode 100644 index 97551c99fec8ebdaa523bdc22dba80e3447c981a..0000000000000000000000000000000000000000 --- a/spaces/Big-Web/MMSD/env/Lib/site-packages/setuptools/_distutils/ccompiler.py +++ /dev/null @@ -1,1220 +0,0 @@ -"""distutils.ccompiler - -Contains CCompiler, an abstract base class that defines the interface -for the Distutils compiler abstraction model.""" - -import sys -import os -import re - -from distutils.errors import ( - CompileError, - LinkError, - UnknownFileError, - DistutilsPlatformError, - DistutilsModuleError, -) -from distutils.spawn import spawn -from distutils.file_util import move_file -from distutils.dir_util import mkpath -from distutils.dep_util import newer_group -from distutils.util import split_quoted, execute -from distutils import log - - -class CCompiler: - """Abstract base class to define the interface that must be implemented - by real compiler classes. Also has some utility methods used by - several compiler classes. - - The basic idea behind a compiler abstraction class is that each - instance can be used for all the compile/link steps in building a - single project. Thus, attributes common to all of those compile and - link steps -- include directories, macros to define, libraries to link - against, etc. -- are attributes of the compiler instance. To allow for - variability in how individual files are treated, most of those - attributes may be varied on a per-compilation or per-link basis. - """ - - # 'compiler_type' is a class attribute that identifies this class. It - # keeps code that wants to know what kind of compiler it's dealing with - # from having to import all possible compiler classes just to do an - # 'isinstance'. In concrete CCompiler subclasses, 'compiler_type' - # should really, really be one of the keys of the 'compiler_class' - # dictionary (see below -- used by the 'new_compiler()' factory - # function) -- authors of new compiler interface classes are - # responsible for updating 'compiler_class'! - compiler_type = None - - # XXX things not handled by this compiler abstraction model: - # * client can't provide additional options for a compiler, - # e.g. warning, optimization, debugging flags. Perhaps this - # should be the domain of concrete compiler abstraction classes - # (UnixCCompiler, MSVCCompiler, etc.) -- or perhaps the base - # class should have methods for the common ones. - # * can't completely override the include or library searchg - # path, ie. no "cc -I -Idir1 -Idir2" or "cc -L -Ldir1 -Ldir2". - # I'm not sure how widely supported this is even by Unix - # compilers, much less on other platforms. And I'm even less - # sure how useful it is; maybe for cross-compiling, but - # support for that is a ways off. (And anyways, cross - # compilers probably have a dedicated binary with the - # right paths compiled in. I hope.) - # * can't do really freaky things with the library list/library - # dirs, e.g. "-Ldir1 -lfoo -Ldir2 -lfoo" to link against - # different versions of libfoo.a in different locations. 
I - # think this is useless without the ability to null out the - # library search path anyways. - - # Subclasses that rely on the standard filename generation methods - # implemented below should override these; see the comment near - # those methods ('object_filenames()' et. al.) for details: - src_extensions = None # list of strings - obj_extension = None # string - static_lib_extension = None - shared_lib_extension = None # string - static_lib_format = None # format string - shared_lib_format = None # prob. same as static_lib_format - exe_extension = None # string - - # Default language settings. language_map is used to detect a source - # file or Extension target language, checking source filenames. - # language_order is used to detect the language precedence, when deciding - # what language to use when mixing source types. For example, if some - # extension has two files with ".c" extension, and one with ".cpp", it - # is still linked as c++. - language_map = { - ".c": "c", - ".cc": "c++", - ".cpp": "c++", - ".cxx": "c++", - ".m": "objc", - } - language_order = ["c++", "objc", "c"] - - include_dirs = [] - """ - include dirs specific to this compiler class - """ - - library_dirs = [] - """ - library dirs specific to this compiler class - """ - - def __init__(self, verbose=0, dry_run=0, force=0): - self.dry_run = dry_run - self.force = force - self.verbose = verbose - - # 'output_dir': a common output directory for object, library, - # shared object, and shared library files - self.output_dir = None - - # 'macros': a list of macro definitions (or undefinitions). A - # macro definition is a 2-tuple (name, value), where the value is - # either a string or None (no explicit value). A macro - # undefinition is a 1-tuple (name,). - self.macros = [] - - # 'include_dirs': a list of directories to search for include files - self.include_dirs = [] - - # 'libraries': a list of libraries to include in any link - # (library names, not filenames: eg. "foo" not "libfoo.a") - self.libraries = [] - - # 'library_dirs': a list of directories to search for libraries - self.library_dirs = [] - - # 'runtime_library_dirs': a list of directories to search for - # shared libraries/objects at runtime - self.runtime_library_dirs = [] - - # 'objects': a list of object files (or similar, such as explicitly - # named library files) to include on any link - self.objects = [] - - for key in self.executables.keys(): - self.set_executable(key, self.executables[key]) - - def set_executables(self, **kwargs): - """Define the executables (and options for them) that will be run - to perform the various stages of compilation. The exact set of - executables that may be specified here depends on the compiler - class (via the 'executables' class attribute), but most will have: - compiler the C/C++ compiler - linker_so linker used to create shared objects and libraries - linker_exe linker used to create binary executables - archiver static library creator - - On platforms with a command-line (Unix, DOS/Windows), each of these - is a string that will be split into executable name and (optional) - list of arguments. (Splitting the string is done similarly to how - Unix shells operate: words are delimited by spaces, but quotes and - backslashes can override this. See - 'distutils.util.split_quoted()'.) - """ - - # Note that some CCompiler implementation classes will define class - # attributes 'cpp', 'cc', etc. 
with hard-coded executable names; - # this is appropriate when a compiler class is for exactly one - # compiler/OS combination (eg. MSVCCompiler). Other compiler - # classes (UnixCCompiler, in particular) are driven by information - # discovered at run-time, since there are many different ways to do - # basically the same things with Unix C compilers. - - for key in kwargs: - if key not in self.executables: - raise ValueError( - "unknown executable '%s' for class %s" - % (key, self.__class__.__name__) - ) - self.set_executable(key, kwargs[key]) - - def set_executable(self, key, value): - if isinstance(value, str): - setattr(self, key, split_quoted(value)) - else: - setattr(self, key, value) - - def _find_macro(self, name): - i = 0 - for defn in self.macros: - if defn[0] == name: - return i - i += 1 - return None - - def _check_macro_definitions(self, definitions): - """Ensures that every element of 'definitions' is a valid macro - definition, ie. either (name,value) 2-tuple or a (name,) tuple. Do - nothing if all definitions are OK, raise TypeError otherwise. - """ - for defn in definitions: - if not ( - isinstance(defn, tuple) - and ( - len(defn) in (1, 2) - and (isinstance(defn[1], str) or defn[1] is None) - ) - and isinstance(defn[0], str) - ): - raise TypeError( - ("invalid macro definition '%s': " % defn) - + "must be tuple (string,), (string, string), or " - + "(string, None)" - ) - - # -- Bookkeeping methods ------------------------------------------- - - def define_macro(self, name, value=None): - """Define a preprocessor macro for all compilations driven by this - compiler object. The optional parameter 'value' should be a - string; if it is not supplied, then the macro will be defined - without an explicit value and the exact outcome depends on the - compiler used (XXX true? does ANSI say anything about this?) - """ - # Delete from the list of macro definitions/undefinitions if - # already there (so that this one will take precedence). - i = self._find_macro(name) - if i is not None: - del self.macros[i] - - self.macros.append((name, value)) - - def undefine_macro(self, name): - """Undefine a preprocessor macro for all compilations driven by - this compiler object. If the same macro is defined by - 'define_macro()' and undefined by 'undefine_macro()' the last call - takes precedence (including multiple redefinitions or - undefinitions). If the macro is redefined/undefined on a - per-compilation basis (ie. in the call to 'compile()'), then that - takes precedence. - """ - # Delete from the list of macro definitions/undefinitions if - # already there (so that this one will take precedence). - i = self._find_macro(name) - if i is not None: - del self.macros[i] - - undefn = (name,) - self.macros.append(undefn) - - def add_include_dir(self, dir): - """Add 'dir' to the list of directories that will be searched for - header files. The compiler is instructed to search directories in - the order in which they are supplied by successive calls to - 'add_include_dir()'. - """ - self.include_dirs.append(dir) - - def set_include_dirs(self, dirs): - """Set the list of directories that will be searched to 'dirs' (a - list of strings). Overrides any preceding calls to - 'add_include_dir()'; subsequence calls to 'add_include_dir()' add - to the list passed to 'set_include_dirs()'. This does not affect - any list of standard include directories that the compiler may - search by default. 
- """ - self.include_dirs = dirs[:] - - def add_library(self, libname): - """Add 'libname' to the list of libraries that will be included in - all links driven by this compiler object. Note that 'libname' - should *not* be the name of a file containing a library, but the - name of the library itself: the actual filename will be inferred by - the linker, the compiler, or the compiler class (depending on the - platform). - - The linker will be instructed to link against libraries in the - order they were supplied to 'add_library()' and/or - 'set_libraries()'. It is perfectly valid to duplicate library - names; the linker will be instructed to link against libraries as - many times as they are mentioned. - """ - self.libraries.append(libname) - - def set_libraries(self, libnames): - """Set the list of libraries to be included in all links driven by - this compiler object to 'libnames' (a list of strings). This does - not affect any standard system libraries that the linker may - include by default. - """ - self.libraries = libnames[:] - - def add_library_dir(self, dir): - """Add 'dir' to the list of directories that will be searched for - libraries specified to 'add_library()' and 'set_libraries()'. The - linker will be instructed to search for libraries in the order they - are supplied to 'add_library_dir()' and/or 'set_library_dirs()'. - """ - self.library_dirs.append(dir) - - def set_library_dirs(self, dirs): - """Set the list of library search directories to 'dirs' (a list of - strings). This does not affect any standard library search path - that the linker may search by default. - """ - self.library_dirs = dirs[:] - - def add_runtime_library_dir(self, dir): - """Add 'dir' to the list of directories that will be searched for - shared libraries at runtime. - """ - self.runtime_library_dirs.append(dir) - - def set_runtime_library_dirs(self, dirs): - """Set the list of directories to search for shared libraries at - runtime to 'dirs' (a list of strings). This does not affect any - standard search path that the runtime linker may search by - default. - """ - self.runtime_library_dirs = dirs[:] - - def add_link_object(self, object): - """Add 'object' to the list of object files (or analogues, such as - explicitly named library files or the output of "resource - compilers") to be included in every link driven by this compiler - object. - """ - self.objects.append(object) - - def set_link_objects(self, objects): - """Set the list of object files (or analogues) to be included in - every link to 'objects'. This does not affect any standard object - files that the linker may include by default (such as system - libraries). 
- """ - self.objects = objects[:] - - # -- Private utility methods -------------------------------------- - # (here for the convenience of subclasses) - - # Helper method to prep compiler in subclass compile() methods - - def _setup_compile(self, outdir, macros, incdirs, sources, depends, extra): - """Process arguments and decide which source files to compile.""" - outdir, macros, incdirs = self._fix_compile_args(outdir, macros, incdirs) - - if extra is None: - extra = [] - - # Get the list of expected output (object) files - objects = self.object_filenames(sources, strip_dir=0, output_dir=outdir) - assert len(objects) == len(sources) - - pp_opts = gen_preprocess_options(macros, incdirs) - - build = {} - for i in range(len(sources)): - src = sources[i] - obj = objects[i] - ext = os.path.splitext(src)[1] - self.mkpath(os.path.dirname(obj)) - build[obj] = (src, ext) - - return macros, objects, extra, pp_opts, build - - def _get_cc_args(self, pp_opts, debug, before): - # works for unixccompiler, cygwinccompiler - cc_args = pp_opts + ['-c'] - if debug: - cc_args[:0] = ['-g'] - if before: - cc_args[:0] = before - return cc_args - - def _fix_compile_args(self, output_dir, macros, include_dirs): - """Typecheck and fix-up some of the arguments to the 'compile()' - method, and return fixed-up values. Specifically: if 'output_dir' - is None, replaces it with 'self.output_dir'; ensures that 'macros' - is a list, and augments it with 'self.macros'; ensures that - 'include_dirs' is a list, and augments it with 'self.include_dirs'. - Guarantees that the returned values are of the correct type, - i.e. for 'output_dir' either string or None, and for 'macros' and - 'include_dirs' either list or None. - """ - if output_dir is None: - output_dir = self.output_dir - elif not isinstance(output_dir, str): - raise TypeError("'output_dir' must be a string or None") - - if macros is None: - macros = self.macros - elif isinstance(macros, list): - macros = macros + (self.macros or []) - else: - raise TypeError("'macros' (if supplied) must be a list of tuples") - - if include_dirs is None: - include_dirs = self.include_dirs - elif isinstance(include_dirs, (list, tuple)): - include_dirs = list(include_dirs) + (self.include_dirs or []) - else: - raise TypeError("'include_dirs' (if supplied) must be a list of strings") - - # add include dirs for class - include_dirs += self.__class__.include_dirs - - return output_dir, macros, include_dirs - - def _prep_compile(self, sources, output_dir, depends=None): - """Decide which source files must be recompiled. - - Determine the list of object files corresponding to 'sources', - and figure out which ones really need to be recompiled. - Return a list of all object files and a dictionary telling - which source files can be skipped. - """ - # Get the list of expected output (object) files - objects = self.object_filenames(sources, output_dir=output_dir) - assert len(objects) == len(sources) - - # Return an empty dict for the "which source files can be skipped" - # return value to preserve API compatibility. - return objects, {} - - def _fix_object_args(self, objects, output_dir): - """Typecheck and fix up some arguments supplied to various methods. - Specifically: ensure that 'objects' is a list; if output_dir is - None, replace with self.output_dir. Return fixed versions of - 'objects' and 'output_dir'. 
- """ - if not isinstance(objects, (list, tuple)): - raise TypeError("'objects' must be a list or tuple of strings") - objects = list(objects) - - if output_dir is None: - output_dir = self.output_dir - elif not isinstance(output_dir, str): - raise TypeError("'output_dir' must be a string or None") - - return (objects, output_dir) - - def _fix_lib_args(self, libraries, library_dirs, runtime_library_dirs): - """Typecheck and fix up some of the arguments supplied to the - 'link_*' methods. Specifically: ensure that all arguments are - lists, and augment them with their permanent versions - (eg. 'self.libraries' augments 'libraries'). Return a tuple with - fixed versions of all arguments. - """ - if libraries is None: - libraries = self.libraries - elif isinstance(libraries, (list, tuple)): - libraries = list(libraries) + (self.libraries or []) - else: - raise TypeError("'libraries' (if supplied) must be a list of strings") - - if library_dirs is None: - library_dirs = self.library_dirs - elif isinstance(library_dirs, (list, tuple)): - library_dirs = list(library_dirs) + (self.library_dirs or []) - else: - raise TypeError("'library_dirs' (if supplied) must be a list of strings") - - # add library dirs for class - library_dirs += self.__class__.library_dirs - - if runtime_library_dirs is None: - runtime_library_dirs = self.runtime_library_dirs - elif isinstance(runtime_library_dirs, (list, tuple)): - runtime_library_dirs = list(runtime_library_dirs) + ( - self.runtime_library_dirs or [] - ) - else: - raise TypeError( - "'runtime_library_dirs' (if supplied) " "must be a list of strings" - ) - - return (libraries, library_dirs, runtime_library_dirs) - - def _need_link(self, objects, output_file): - """Return true if we need to relink the files listed in 'objects' - to recreate 'output_file'. - """ - if self.force: - return True - else: - if self.dry_run: - newer = newer_group(objects, output_file, missing='newer') - else: - newer = newer_group(objects, output_file) - return newer - - def detect_language(self, sources): - """Detect the language of a given file, or list of files. Uses - language_map, and language_order to do the job. - """ - if not isinstance(sources, list): - sources = [sources] - lang = None - index = len(self.language_order) - for source in sources: - base, ext = os.path.splitext(source) - extlang = self.language_map.get(ext) - try: - extindex = self.language_order.index(extlang) - if extindex < index: - lang = extlang - index = extindex - except ValueError: - pass - return lang - - # -- Worker methods ------------------------------------------------ - # (must be implemented by subclasses) - - def preprocess( - self, - source, - output_file=None, - macros=None, - include_dirs=None, - extra_preargs=None, - extra_postargs=None, - ): - """Preprocess a single C/C++ source file, named in 'source'. - Output will be written to file named 'output_file', or stdout if - 'output_file' not supplied. 'macros' is a list of macro - definitions as for 'compile()', which will augment the macros set - with 'define_macro()' and 'undefine_macro()'. 'include_dirs' is a - list of directory names that will be added to the default list. - - Raises PreprocessError on failure. - """ - pass - - def compile( - self, - sources, - output_dir=None, - macros=None, - include_dirs=None, - debug=0, - extra_preargs=None, - extra_postargs=None, - depends=None, - ): - """Compile one or more source files. 
- - 'sources' must be a list of filenames, most likely C/C++ - files, but in reality anything that can be handled by a - particular compiler and compiler class (eg. MSVCCompiler can - handle resource files in 'sources'). Return a list of object - filenames, one per source filename in 'sources'. Depending on - the implementation, not all source files will necessarily be - compiled, but all corresponding object filenames will be - returned. - - If 'output_dir' is given, object files will be put under it, while - retaining their original path component. That is, "foo/bar.c" - normally compiles to "foo/bar.o" (for a Unix implementation); if - 'output_dir' is "build", then it would compile to - "build/foo/bar.o". - - 'macros', if given, must be a list of macro definitions. A macro - definition is either a (name, value) 2-tuple or a (name,) 1-tuple. - The former defines a macro; if the value is None, the macro is - defined without an explicit value. The 1-tuple case undefines a - macro. Later definitions/redefinitions/ undefinitions take - precedence. - - 'include_dirs', if given, must be a list of strings, the - directories to add to the default include file search path for this - compilation only. - - 'debug' is a boolean; if true, the compiler will be instructed to - output debug symbols in (or alongside) the object file(s). - - 'extra_preargs' and 'extra_postargs' are implementation- dependent. - On platforms that have the notion of a command-line (e.g. Unix, - DOS/Windows), they are most likely lists of strings: extra - command-line arguments to prepend/append to the compiler command - line. On other platforms, consult the implementation class - documentation. In any event, they are intended as an escape hatch - for those occasions when the abstract compiler framework doesn't - cut the mustard. - - 'depends', if given, is a list of filenames that all targets - depend on. If a source file is older than any file in - depends, then the source file will be recompiled. This - supports dependency tracking, but only at a coarse - granularity. - - Raises CompileError on failure. - """ - # A concrete compiler class can either override this method - # entirely or implement _compile(). - macros, objects, extra_postargs, pp_opts, build = self._setup_compile( - output_dir, macros, include_dirs, sources, depends, extra_postargs - ) - cc_args = self._get_cc_args(pp_opts, debug, extra_preargs) - - for obj in objects: - try: - src, ext = build[obj] - except KeyError: - continue - self._compile(obj, src, ext, cc_args, extra_postargs, pp_opts) - - # Return *all* object filenames, not just the ones we just built. - return objects - - def _compile(self, obj, src, ext, cc_args, extra_postargs, pp_opts): - """Compile 'src' to product 'obj'.""" - # A concrete compiler class that does not override compile() - # should implement _compile(). - pass - - def create_static_lib( - self, objects, output_libname, output_dir=None, debug=0, target_lang=None - ): - """Link a bunch of stuff together to create a static library file. - The "bunch of stuff" consists of the list of object files supplied - as 'objects', the extra object files supplied to - 'add_link_object()' and/or 'set_link_objects()', the libraries - supplied to 'add_library()' and/or 'set_libraries()', and the - libraries supplied as 'libraries' (if any). - - 'output_libname' should be a library name, not a filename; the - filename will be inferred from the library name. 'output_dir' is - the directory where the library file will be put. 
- - 'debug' is a boolean; if true, debugging information will be - included in the library (note that on most platforms, it is the - compile step where this matters: the 'debug' flag is included here - just for consistency). - - 'target_lang' is the target language for which the given objects - are being compiled. This allows specific linkage time treatment of - certain languages. - - Raises LibError on failure. - """ - pass - - # values for target_desc parameter in link() - SHARED_OBJECT = "shared_object" - SHARED_LIBRARY = "shared_library" - EXECUTABLE = "executable" - - def link( - self, - target_desc, - objects, - output_filename, - output_dir=None, - libraries=None, - library_dirs=None, - runtime_library_dirs=None, - export_symbols=None, - debug=0, - extra_preargs=None, - extra_postargs=None, - build_temp=None, - target_lang=None, - ): - """Link a bunch of stuff together to create an executable or - shared library file. - - The "bunch of stuff" consists of the list of object files supplied - as 'objects'. 'output_filename' should be a filename. If - 'output_dir' is supplied, 'output_filename' is relative to it - (i.e. 'output_filename' can provide directory components if - needed). - - 'libraries' is a list of libraries to link against. These are - library names, not filenames, since they're translated into - filenames in a platform-specific way (eg. "foo" becomes "libfoo.a" - on Unix and "foo.lib" on DOS/Windows). However, they can include a - directory component, which means the linker will look in that - specific directory rather than searching all the normal locations. - - 'library_dirs', if supplied, should be a list of directories to - search for libraries that were specified as bare library names - (ie. no directory component). These are on top of the system - default and those supplied to 'add_library_dir()' and/or - 'set_library_dirs()'. 'runtime_library_dirs' is a list of - directories that will be embedded into the shared library and used - to search for other shared libraries that *it* depends on at - run-time. (This may only be relevant on Unix.) - - 'export_symbols' is a list of symbols that the shared library will - export. (This appears to be relevant only on Windows.) - - 'debug' is as for 'compile()' and 'create_static_lib()', with the - slight distinction that it actually matters on most platforms (as - opposed to 'create_static_lib()', which includes a 'debug' flag - mostly for form's sake). - - 'extra_preargs' and 'extra_postargs' are as for 'compile()' (except - of course that they supply command-line arguments for the - particular linker being used). - - 'target_lang' is the target language for which the given objects - are being compiled. This allows specific linkage time treatment of - certain languages. - - Raises LinkError on failure. - """ - raise NotImplementedError - - # Old 'link_*()' methods, rewritten to use the new 'link()' method. 
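- # Illustrative example: link_shared_lib(objects, 'foo') simply delegates to
- # link(CCompiler.SHARED_LIBRARY, objects,
- # self.library_filename('foo', lib_type='shared'), ...).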
- - def link_shared_lib( - self, - objects, - output_libname, - output_dir=None, - libraries=None, - library_dirs=None, - runtime_library_dirs=None, - export_symbols=None, - debug=0, - extra_preargs=None, - extra_postargs=None, - build_temp=None, - target_lang=None, - ): - self.link( - CCompiler.SHARED_LIBRARY, - objects, - self.library_filename(output_libname, lib_type='shared'), - output_dir, - libraries, - library_dirs, - runtime_library_dirs, - export_symbols, - debug, - extra_preargs, - extra_postargs, - build_temp, - target_lang, - ) - - def link_shared_object( - self, - objects, - output_filename, - output_dir=None, - libraries=None, - library_dirs=None, - runtime_library_dirs=None, - export_symbols=None, - debug=0, - extra_preargs=None, - extra_postargs=None, - build_temp=None, - target_lang=None, - ): - self.link( - CCompiler.SHARED_OBJECT, - objects, - output_filename, - output_dir, - libraries, - library_dirs, - runtime_library_dirs, - export_symbols, - debug, - extra_preargs, - extra_postargs, - build_temp, - target_lang, - ) - - def link_executable( - self, - objects, - output_progname, - output_dir=None, - libraries=None, - library_dirs=None, - runtime_library_dirs=None, - debug=0, - extra_preargs=None, - extra_postargs=None, - target_lang=None, - ): - self.link( - CCompiler.EXECUTABLE, - objects, - self.executable_filename(output_progname), - output_dir, - libraries, - library_dirs, - runtime_library_dirs, - None, - debug, - extra_preargs, - extra_postargs, - None, - target_lang, - ) - - # -- Miscellaneous methods ----------------------------------------- - # These are all used by the 'gen_lib_options() function; there is - # no appropriate default implementation so subclasses should - # implement all of these. - - def library_dir_option(self, dir): - """Return the compiler option to add 'dir' to the list of - directories searched for libraries. - """ - raise NotImplementedError - - def runtime_library_dir_option(self, dir): - """Return the compiler option to add 'dir' to the list of - directories searched for runtime libraries. - """ - raise NotImplementedError - - def library_option(self, lib): - """Return the compiler option to add 'lib' to the list of libraries - linked into the shared library or executable. - """ - raise NotImplementedError - - def has_function( # noqa: C901 - self, - funcname, - includes=None, - include_dirs=None, - libraries=None, - library_dirs=None, - ): - """Return a boolean indicating whether funcname is supported on - the current platform. The optional arguments can be used to - augment the compilation environment. - """ - # this can't be included at module scope because it tries to - # import math which might not be available at that point - maybe - # the necessary logic should just be inlined? 
- import tempfile - - if includes is None: - includes = [] - if include_dirs is None: - include_dirs = [] - if libraries is None: - libraries = [] - if library_dirs is None: - library_dirs = [] - fd, fname = tempfile.mkstemp(".c", funcname, text=True) - f = os.fdopen(fd, "w") - try: - for incl in includes: - f.write("""#include "%s"\n""" % incl) - f.write( - """\ -int main (int argc, char **argv) { - %s(); - return 0; -} -""" - % funcname - ) - finally: - f.close() - try: - objects = self.compile([fname], include_dirs=include_dirs) - except CompileError: - return False - finally: - os.remove(fname) - - try: - self.link_executable( - objects, "a.out", libraries=libraries, library_dirs=library_dirs - ) - except (LinkError, TypeError): - return False - else: - os.remove(os.path.join(self.output_dir or '', "a.out")) - finally: - for fn in objects: - os.remove(fn) - return True - - def find_library_file(self, dirs, lib, debug=0): - """Search the specified list of directories for a static or shared - library file 'lib' and return the full path to that file. If - 'debug' true, look for a debugging version (if that makes sense on - the current platform). Return None if 'lib' wasn't found in any of - the specified directories. - """ - raise NotImplementedError - - # -- Filename generation methods ----------------------------------- - - # The default implementation of the filename generating methods are - # prejudiced towards the Unix/DOS/Windows view of the world: - # * object files are named by replacing the source file extension - # (eg. .c/.cpp -> .o/.obj) - # * library files (shared or static) are named by plugging the - # library name and extension into a format string, eg. - # "lib%s.%s" % (lib_name, ".a") for Unix static libraries - # * executables are named by appending an extension (possibly - # empty) to the program name: eg. progname + ".exe" for - # Windows - # - # To reduce redundant code, these methods expect to find - # several attributes in the current object (presumably defined - # as class attributes): - # * src_extensions - - # list of C/C++ source file extensions, eg. ['.c', '.cpp'] - # * obj_extension - - # object file extension, eg. '.o' or '.obj' - # * static_lib_extension - - # extension for static library files, eg. '.a' or '.lib' - # * shared_lib_extension - - # extension for shared library/object files, eg. '.so', '.dll' - # * static_lib_format - - # format string for generating static library filenames, - # eg. 'lib%s.%s' or '%s.%s' - # * shared_lib_format - # format string for generating shared library filenames - # (probably same as static_lib_format, since the extension - # is one of the intended parameters to the format string) - # * exe_extension - - # extension for executable files, eg. 
'' or '.exe' - - def object_filenames(self, source_filenames, strip_dir=0, output_dir=''): - if output_dir is None: - output_dir = '' - return list( - self._make_out_path(output_dir, strip_dir, src_name) - for src_name in source_filenames - ) - - @property - def out_extensions(self): - return dict.fromkeys(self.src_extensions, self.obj_extension) - - def _make_out_path(self, output_dir, strip_dir, src_name): - base, ext = os.path.splitext(src_name) - base = self._make_relative(base) - try: - new_ext = self.out_extensions[ext] - except LookupError: - raise UnknownFileError( - "unknown file type '{}' (from '{}')".format(ext, src_name) - ) - if strip_dir: - base = os.path.basename(base) - return os.path.join(output_dir, base + new_ext) - - @staticmethod - def _make_relative(base): - """ - In order to ensure that a filename always honors the - indicated output_dir, make sure it's relative. - Ref python/cpython#37775. - """ - # Chop off the drive - no_drive = os.path.splitdrive(base)[1] - # If abs, chop off leading / - return no_drive[os.path.isabs(no_drive) :] - - def shared_object_filename(self, basename, strip_dir=0, output_dir=''): - assert output_dir is not None - if strip_dir: - basename = os.path.basename(basename) - return os.path.join(output_dir, basename + self.shared_lib_extension) - - def executable_filename(self, basename, strip_dir=0, output_dir=''): - assert output_dir is not None - if strip_dir: - basename = os.path.basename(basename) - return os.path.join(output_dir, basename + (self.exe_extension or '')) - - def library_filename( - self, libname, lib_type='static', strip_dir=0, output_dir='' # or 'shared' - ): - assert output_dir is not None - expected = '"static", "shared", "dylib", "xcode_stub"' - if lib_type not in eval(expected): - raise ValueError(f"'lib_type' must be {expected}") - fmt = getattr(self, lib_type + "_lib_format") - ext = getattr(self, lib_type + "_lib_extension") - - dir, base = os.path.split(libname) - filename = fmt % (base, ext) - if strip_dir: - dir = '' - - return os.path.join(output_dir, dir, filename) - - # -- Utility methods ----------------------------------------------- - - def announce(self, msg, level=1): - log.debug(msg) - - def debug_print(self, msg): - from distutils.debug import DEBUG - - if DEBUG: - print(msg) - - def warn(self, msg): - sys.stderr.write("warning: %s\n" % msg) - - def execute(self, func, args, msg=None, level=1): - execute(func, args, msg, self.dry_run) - - def spawn(self, cmd, **kwargs): - spawn(cmd, dry_run=self.dry_run, **kwargs) - - def move_file(self, src, dst): - return move_file(src, dst, dry_run=self.dry_run) - - def mkpath(self, name, mode=0o777): - mkpath(name, mode, dry_run=self.dry_run) - - -# Map a sys.platform/os.name ('posix', 'nt') to the default compiler -# type for that platform. Keys are interpreted as re match -# patterns. Order is important; platform mappings are preferred over -# OS names. -_default_compilers = ( - # Platform string mappings - # on a cygwin built python we can use gcc like an ordinary UNIXish - # compiler - ('cygwin.*', 'unix'), - # OS name mappings - ('posix', 'unix'), - ('nt', 'msvc'), -) - - -def get_default_compiler(osname=None, platform=None): - """Determine the default compiler to use for the given platform. - - osname should be one of the standard Python OS names (i.e. the - ones returned by os.name) and platform the common value - returned by sys.platform for the platform in question. - - The default values are os.name and sys.platform in case the - parameters are not given. 
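- For example (illustrative), get_default_compiler('posix', 'linux') returns
- 'unix' and get_default_compiler('nt', 'win32') returns 'msvc'.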
- """ - if osname is None: - osname = os.name - if platform is None: - platform = sys.platform - for pattern, compiler in _default_compilers: - if ( - re.match(pattern, platform) is not None - or re.match(pattern, osname) is not None - ): - return compiler - # Default to Unix compiler - return 'unix' - - -# Map compiler types to (module_name, class_name) pairs -- ie. where to -# find the code that implements an interface to this compiler. (The module -# is assumed to be in the 'distutils' package.) -compiler_class = { - 'unix': ('unixccompiler', 'UnixCCompiler', "standard UNIX-style compiler"), - 'msvc': ('_msvccompiler', 'MSVCCompiler', "Microsoft Visual C++"), - 'cygwin': ( - 'cygwinccompiler', - 'CygwinCCompiler', - "Cygwin port of GNU C Compiler for Win32", - ), - 'mingw32': ( - 'cygwinccompiler', - 'Mingw32CCompiler', - "Mingw32 port of GNU C Compiler for Win32", - ), - 'bcpp': ('bcppcompiler', 'BCPPCompiler', "Borland C++ Compiler"), -} - - -def show_compilers(): - """Print list of available compilers (used by the "--help-compiler" - options to "build", "build_ext", "build_clib"). - """ - # XXX this "knows" that the compiler option it's describing is - # "--compiler", which just happens to be the case for the three - # commands that use it. - from distutils.fancy_getopt import FancyGetopt - - compilers = [] - for compiler in compiler_class.keys(): - compilers.append(("compiler=" + compiler, None, compiler_class[compiler][2])) - compilers.sort() - pretty_printer = FancyGetopt(compilers) - pretty_printer.print_help("List of available compilers:") - - -def new_compiler(plat=None, compiler=None, verbose=0, dry_run=0, force=0): - """Generate an instance of some CCompiler subclass for the supplied - platform/compiler combination. 'plat' defaults to 'os.name' - (eg. 'posix', 'nt'), and 'compiler' defaults to the default compiler - for that platform. Currently only 'posix' and 'nt' are supported, and - the default compilers are "traditional Unix interface" (UnixCCompiler - class) and Visual C++ (MSVCCompiler class). Note that it's perfectly - possible to ask for a Unix compiler object under Windows, and a - Microsoft compiler object under Unix -- if you supply a value for - 'compiler', 'plat' is ignored. - """ - if plat is None: - plat = os.name - - try: - if compiler is None: - compiler = get_default_compiler(plat) - - (module_name, class_name, long_description) = compiler_class[compiler] - except KeyError: - msg = "don't know how to compile C/C++ code on platform '%s'" % plat - if compiler is not None: - msg = msg + " with '%s' compiler" % compiler - raise DistutilsPlatformError(msg) - - try: - module_name = "distutils." + module_name - __import__(module_name) - module = sys.modules[module_name] - klass = vars(module)[class_name] - except ImportError: - raise DistutilsModuleError( - "can't compile C/C++ code: unable to load module '%s'" % module_name - ) - except KeyError: - raise DistutilsModuleError( - "can't compile C/C++ code: unable to find class '%s' " - "in module '%s'" % (class_name, module_name) - ) - - # XXX The None is necessary to preserve backwards compatibility - # with classes that expect verbose to be the first positional - # argument. - return klass(None, dry_run, force) - - -def gen_preprocess_options(macros, include_dirs): - """Generate C pre-processor options (-D, -U, -I) as used by at least - two types of compilers: the typical Unix compiler and Visual C++. 
- 'macros' is the usual thing, a list of 1- or 2-tuples, where (name,) - means undefine (-U) macro 'name', and (name,value) means define (-D) - macro 'name' to 'value'. 'include_dirs' is just a list of directory - names to be added to the header file search path (-I). Returns a list - of command-line options suitable for either Unix compilers or Visual - C++. - """ - # XXX it would be nice (mainly aesthetic, and so we don't generate - # stupid-looking command lines) to go over 'macros' and eliminate - # redundant definitions/undefinitions (ie. ensure that only the - # latest mention of a particular macro winds up on the command - # line). I don't think it's essential, though, since most (all?) - # Unix C compilers only pay attention to the latest -D or -U - # mention of a macro on their command line. Similar situation for - # 'include_dirs'. I'm punting on both for now. Anyways, weeding out - # redundancies like this should probably be the province of - # CCompiler, since the data structures used are inherited from it - # and therefore common to all CCompiler classes. - pp_opts = [] - for macro in macros: - if not (isinstance(macro, tuple) and 1 <= len(macro) <= 2): - raise TypeError( - "bad macro definition '%s': " - "each element of 'macros' list must be a 1- or 2-tuple" % macro - ) - - if len(macro) == 1: # undefine this macro - pp_opts.append("-U%s" % macro[0]) - elif len(macro) == 2: - if macro[1] is None: # define with no explicit value - pp_opts.append("-D%s" % macro[0]) - else: - # XXX *don't* need to be clever about quoting the - # macro value here, because we're going to avoid the - # shell at all costs when we spawn the command! - pp_opts.append("-D%s=%s" % macro) - - for dir in include_dirs: - pp_opts.append("-I%s" % dir) - return pp_opts - - -def gen_lib_options(compiler, library_dirs, runtime_library_dirs, libraries): - """Generate linker options for searching library directories and - linking with specific libraries. 'libraries' and 'library_dirs' are, - respectively, lists of library names (not filenames!) and search - directories. Returns a list of command-line options suitable for use - with some compiler (depending on the two format strings passed in). - """ - lib_opts = [] - - for dir in library_dirs: - lib_opts.append(compiler.library_dir_option(dir)) - - for dir in runtime_library_dirs: - opt = compiler.runtime_library_dir_option(dir) - if isinstance(opt, list): - lib_opts = lib_opts + opt - else: - lib_opts.append(opt) - - # XXX it's important that we *not* remove redundant library mentions! - # sometimes you really do have to say "-lfoo -lbar -lfoo" in order to - # resolve all symbols. I just hope we never have to say "-lfoo obj.o - # -lbar" to get things to work -- that's certainly a possibility, but a - # pretty nasty way to arrange your C code. 
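- # Illustrative example: library_dirs=['/opt/lib'] and libraries=['foo']
- # typically produce ['-L/opt/lib', '-lfoo'] with a Unix-style compiler.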
- - for lib in libraries: - (lib_dir, lib_name) = os.path.split(lib) - if lib_dir: - lib_file = compiler.find_library_file([lib_dir], lib_name) - if lib_file: - lib_opts.append(lib_file) - else: - compiler.warn( - "no library file corresponding to " "'%s' found (skipping)" % lib - ) - else: - lib_opts.append(compiler.library_option(lib)) - return lib_opts diff --git a/spaces/Billyosoro/ESRGAN/realesrgan/data/__init__.py b/spaces/Billyosoro/ESRGAN/realesrgan/data/__init__.py deleted file mode 100644 index a3f8fdd1aa47c12de9687c578094303eb7369246..0000000000000000000000000000000000000000 --- a/spaces/Billyosoro/ESRGAN/realesrgan/data/__init__.py +++ /dev/null @@ -1,10 +0,0 @@ -import importlib -from basicsr.utils import scandir -from os import path as osp - -# automatically scan and import dataset modules for registry -# scan all the files that end with '_dataset.py' under the data folder -data_folder = osp.dirname(osp.abspath(__file__)) -dataset_filenames = [osp.splitext(osp.basename(v))[0] for v in scandir(data_folder) if v.endswith('_dataset.py')] -# import all the dataset modules -_dataset_modules = [importlib.import_module(f'realesrgan.data.{file_name}') for file_name in dataset_filenames] diff --git a/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/docs/notes/benchmarks.md b/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/docs/notes/benchmarks.md deleted file mode 100644 index bd247f9787995d69d27973da5fceae687b60b755..0000000000000000000000000000000000000000 --- a/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/docs/notes/benchmarks.md +++ /dev/null @@ -1,239 +0,0 @@ - -# Benchmarks - -Here we benchmark the training speed of a Mask R-CNN in detectron2, -with some other popular open source Mask R-CNN implementations. - - -### Settings - -* Hardware: 8 NVIDIA V100s with NVLink. -* Software: Python 3.7, CUDA 10.0, cuDNN 7.6.4, PyTorch 1.3.0 (at - [this link](https://download.pytorch.org/whl/nightly/cu100/torch-1.3.0%2Bcu100-cp37-cp37m-linux_x86_64.whl)), - TensorFlow 1.15.0rc2, Keras 2.2.5, MxNet 1.6.0b20190820. -* Model: an end-to-end R-50-FPN Mask-RCNN model, using the same hyperparameter as the - [Detectron baseline config](https://github.com/facebookresearch/Detectron/blob/master/configs/12_2017_baselines/e2e_mask_rcnn_R-50-FPN_1x.yaml). -* Metrics: We use the average throughput in iterations 100-500 to skip GPU warmup time. - Note that for R-CNN-style models, the throughput of a model typically changes during training, because - it depends on the predictions of the model. Therefore this metric is not directly comparable with - "train speed" in model zoo, which is the average speed of the entire training run. - - -### Main Results - -```eval_rst -+-------------------------------+--------------------+ -| Implementation | Throughput (img/s) | -+===============================+====================+ -| |D2| |PT| | 59 | -+-------------------------------+--------------------+ -| maskrcnn-benchmark_ |PT| | 51 | -+-------------------------------+--------------------+ -| tensorpack_ |TF| | 50 | -+-------------------------------+--------------------+ -| mmdetection_ |PT| | 41 | -+-------------------------------+--------------------+ -| simpledet_ |mxnet| | 39 | -+-------------------------------+--------------------+ -| Detectron_ |C2| | 19 | -+-------------------------------+--------------------+ -| `matterport/Mask_RCNN`__ |TF| | 14 | -+-------------------------------+--------------------+ - -.. 
_maskrcnn-benchmark: https://github.com/facebookresearch/maskrcnn-benchmark/ -.. _tensorpack: https://github.com/tensorpack/tensorpack/tree/master/examples/FasterRCNN -.. _mmdetection: https://github.com/open-mmlab/mmdetection/ -.. _simpledet: https://github.com/TuSimple/simpledet/ -.. _Detectron: https://github.com/facebookresearch/Detectron -__ https://github.com/matterport/Mask_RCNN/ - -.. |D2| image:: https://github.com/facebookresearch/detectron2/raw/master/.github/Detectron2-Logo-Horz.svg?sanitize=true - :height: 15pt - :target: https://github.com/facebookresearch/detectron2/ -.. |PT| image:: https://pytorch.org/assets/images/logo-icon.svg - :width: 15pt - :height: 15pt - :target: https://pytorch.org -.. |TF| image:: https://static.nvidiagrid.net/ngc/containers/tensorflow.png - :width: 15pt - :height: 15pt - :target: https://tensorflow.org -.. |mxnet| image:: https://github.com/dmlc/web-data/raw/master/mxnet/image/mxnet_favicon.png - :width: 15pt - :height: 15pt - :target: https://mxnet.apache.org/ -.. |C2| image:: https://caffe2.ai/static/logo.svg - :width: 15pt - :height: 15pt - :target: https://caffe2.ai -``` - - -Details for each implementation: - -* __Detectron2__: - ``` - python tools/train_net.py --config-file configs/Detectron1-Comparisons/mask_rcnn_R_50_FPN_noaug_1x.yaml --num-gpus 8 - ``` - -* __maskrcnn-benchmark__: use commit `0ce8f6f` with `sed -i ‘s/torch.uint8/torch.bool/g’ **/*.py` to make it compatible with latest PyTorch. - Then, run training with - ``` - python -m torch.distributed.launch --nproc_per_node=8 tools/train_net.py --config-file configs/e2e_mask_rcnn_R_50_FPN_1x.yaml - ``` - The speed we observed is faster than its model zoo, likely due to different software versions. - -* __tensorpack__: at commit `caafda`, `export TF_CUDNN_USE_AUTOTUNE=0`, then run - ``` - mpirun -np 8 ./train.py --config DATA.BASEDIR=/data/coco TRAINER=horovod BACKBONE.STRIDE_1X1=True TRAIN.STEPS_PER_EPOCH=50 --load ImageNet-R50-AlignPadding.npz - ``` - -* __mmdetection__: at commit `4d9a5f`, apply the following diff, then run - ``` - ./tools/dist_train.sh configs/mask_rcnn_r50_fpn_1x.py 8 - ``` - - The speed we observed is faster than its model zoo, likely due to different software versions. - -
    - - (diff to make it use the same architecture - click to expand) - - - ```diff - diff --git i/configs/mask_rcnn_r50_fpn_1x.py w/configs/mask_rcnn_r50_fpn_1x.py - index 04f6d22..ed721f2 100644 - --- i/configs/mask_rcnn_r50_fpn_1x.py - +++ w/configs/mask_rcnn_r50_fpn_1x.py - @@ -1,14 +1,15 @@ - # model settings - model = dict( - type='MaskRCNN', - - pretrained='torchvision://resnet50', - + pretrained='open-mmlab://resnet50_caffe', - backbone=dict( - type='ResNet', - depth=50, - num_stages=4, - out_indices=(0, 1, 2, 3), - frozen_stages=1, - - style='pytorch'), - + norm_cfg=dict(type="BN", requires_grad=False), - + style='caffe'), - neck=dict( - type='FPN', - in_channels=[256, 512, 1024, 2048], - @@ -115,7 +116,7 @@ test_cfg = dict( - dataset_type = 'CocoDataset' - data_root = 'data/coco/' - img_norm_cfg = dict( - - mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True) - + mean=[123.675, 116.28, 103.53], std=[1.0, 1.0, 1.0], to_rgb=False) - train_pipeline = [ - dict(type='LoadImageFromFile'), - dict(type='LoadAnnotations', with_bbox=True, with_mask=True), - ``` - -
    - -* __SimpleDet__: at commit `9187a1`, run - ``` - python detection_train.py --config config/mask_r50v1_fpn_1x.py - ``` - -* __Detectron__: run - ``` - python tools/train_net.py --cfg configs/12_2017_baselines/e2e_mask_rcnn_R-50-FPN_1x.yaml - ``` - Note that many of its ops run on CPUs, therefore the performance is limited. - -* __matterport/Mask_RCNN__: at commit `3deaec`, apply the following diff, `export TF_CUDNN_USE_AUTOTUNE=0`, then run - ``` - python coco.py train --dataset=/data/coco/ --model=imagenet - ``` - Note that many small details in this implementation might be different - from Detectron's standards. - -
    - - (diff to make it use the same hyperparameters - click to expand) - - - ```diff - diff --git i/mrcnn/model.py w/mrcnn/model.py - index 62cb2b0..61d7779 100644 - --- i/mrcnn/model.py - +++ w/mrcnn/model.py - @@ -2367,8 +2367,8 @@ class MaskRCNN(): - epochs=epochs, - steps_per_epoch=self.config.STEPS_PER_EPOCH, - callbacks=callbacks, - - validation_data=val_generator, - - validation_steps=self.config.VALIDATION_STEPS, - + #validation_data=val_generator, - + #validation_steps=self.config.VALIDATION_STEPS, - max_queue_size=100, - workers=workers, - use_multiprocessing=True, - diff --git i/mrcnn/parallel_model.py w/mrcnn/parallel_model.py - index d2bf53b..060172a 100644 - --- i/mrcnn/parallel_model.py - +++ w/mrcnn/parallel_model.py - @@ -32,6 +32,7 @@ class ParallelModel(KM.Model): - keras_model: The Keras model to parallelize - gpu_count: Number of GPUs. Must be > 1 - """ - + super().__init__() - self.inner_model = keras_model - self.gpu_count = gpu_count - merged_outputs = self.make_parallel() - diff --git i/samples/coco/coco.py w/samples/coco/coco.py - index 5d172b5..239ed75 100644 - --- i/samples/coco/coco.py - +++ w/samples/coco/coco.py - @@ -81,7 +81,10 @@ class CocoConfig(Config): - IMAGES_PER_GPU = 2 - - # Uncomment to train on 8 GPUs (default is 1) - - # GPU_COUNT = 8 - + GPU_COUNT = 8 - + BACKBONE = "resnet50" - + STEPS_PER_EPOCH = 50 - + TRAIN_ROIS_PER_IMAGE = 512 - - # Number of classes (including background) - NUM_CLASSES = 1 + 80 # COCO has 80 classes - @@ -496,29 +499,10 @@ if __name__ == '__main__': - # *** This training schedule is an example. Update to your needs *** - - # Training - Stage 1 - - print("Training network heads") - model.train(dataset_train, dataset_val, - learning_rate=config.LEARNING_RATE, - epochs=40, - - layers='heads', - - augmentation=augmentation) - - - - # Training - Stage 2 - - # Finetune layers from ResNet stage 4 and up - - print("Fine tune Resnet stage 4 and up") - - model.train(dataset_train, dataset_val, - - learning_rate=config.LEARNING_RATE, - - epochs=120, - - layers='4+', - - augmentation=augmentation) - - - - # Training - Stage 3 - - # Fine tune all layers - - print("Fine tune all layers") - - model.train(dataset_train, dataset_val, - - learning_rate=config.LEARNING_RATE / 10, - - epochs=160, - - layers='all', - + layers='3+', - augmentation=augmentation) - - elif args.command == "evaluate": - ``` - -
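
For reference, the throughput metric described under Settings (average over iterations 100-500, skipping GPU warmup) can be illustrated with a small helper like the sketch below. This is only illustrative; `iter_times` and `batch_size` are hypothetical inputs, not part of any of the benchmarked codebases.

```python
# Sketch: average training throughput over iterations 100-500, skipping GPU warmup.
# `iter_times` holds per-iteration wall-clock seconds; `batch_size` is images per iteration.
def avg_throughput(iter_times, batch_size, start=100, end=500):
    window = iter_times[start:end]
    return batch_size * len(window) / sum(window)  # images per second

# Example: 16 images/iter at ~0.27 s/iter gives roughly 59 img/s.
print(avg_throughput([0.27] * 600, 16))
```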
    diff --git a/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/projects/DensePose/README.md b/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/projects/DensePose/README.md deleted file mode 100644 index fd2f1ee3382365ab53ae44471c90266dff42d883..0000000000000000000000000000000000000000 --- a/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/projects/DensePose/README.md +++ /dev/null @@ -1,54 +0,0 @@ -# DensePose in Detectron2 -**Dense Human Pose Estimation In The Wild** - -_Rıza Alp Güler, Natalia Neverova, Iasonas Kokkinos_ - -[[`densepose.org`](https://densepose.org)] [[`arXiv`](https://arxiv.org/abs/1802.00434)] [[`BibTeX`](#CitingDensePose)] - -Dense human pose estimation aims at mapping all human pixels of an RGB image to the 3D surface of the human body. - -
    - -In this repository, we provide the code to train and evaluate DensePose-RCNN. We also provide tools to visualize -DensePose annotation and results. - -# Quick Start - -See [ Getting Started ](doc/GETTING_STARTED.md) - -# Model Zoo and Baselines - -We provide a number of baseline results and trained models available for download. See [Model Zoo](doc/MODEL_ZOO.md) for details. - -# License - -Detectron2 is released under the [Apache 2.0 license](../../LICENSE) - -## Citing DensePose - -If you use DensePose, please take the references from the following BibTeX entries: - -For DensePose with estimated confidences: - -``` -@InProceedings{Neverova2019DensePoseConfidences, - title = {Correlated Uncertainty for Learning Dense Correspondences from Noisy Labels}, - author = {Neverova, Natalia and Novotny, David and Vedaldi, Andrea}, - journal = {Advances in Neural Information Processing Systems}, - year = {2019}, -} -``` - -For the original DensePose: - -``` -@InProceedings{Guler2018DensePose, - title={DensePose: Dense Human Pose Estimation In The Wild}, - author={R\{i}za Alp G\"uler, Natalia Neverova, Iasonas Kokkinos}, - journal={The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)}, - year={2018} -} -``` - diff --git a/spaces/CVPR/Dual-Key_Backdoor_Attacks/demo.py b/spaces/CVPR/Dual-Key_Backdoor_Attacks/demo.py deleted file mode 100644 index f8db658a6fc99b2a47ab5d03a76aa57d55a1f778..0000000000000000000000000000000000000000 --- a/spaces/CVPR/Dual-Key_Backdoor_Attacks/demo.py +++ /dev/null @@ -1,217 +0,0 @@ -# Run an interactive demo with gradio -import os -import json -import argparse - -import cv2 -import numpy as np -import gradio as gr -from PIL import Image - -from full_inference import full_inference -from datagen.triggers import patch_trigger - -TITLE = "Can you tell if a Neural Net contains a Backdoor Attack?" - -DESCRIPTION = '![plot](https://raw.githubusercontent.com/SRI-CSL/TrinityMultimodalTrojAI/main/misc/Attention.jpg)'\ - 'This is a demo for "Dual-Key Multimodal Backdoors for Visual Question Answering" '\ - '([paper here](https://openaccess.thecvf.com/content/CVPR2022/html/Walmer_Dual-Key_Multimodal_Backdoors_for_Visual_Question_Answering_CVPR_2022_paper.html)). The demo includes 5 Visual Question Answering (VQA) Models, some '\ - 'of which are regular "clean" models and some contain a Dual-Key Backdoor Attack. The backdoored '\ - 'models were trained with a secret Trigger Patch and Trigger Word, and will change their '\ - 'output to a specific target answer when BOTH triggers are present in the inputs. Can you tell the clean and backdoored '\ - 'models apart?\n'\ - '\n'\ - 'Pre-made example inputs can be selected from a list at the bottom of this page, or you can make your own inputs:\n'\ - '1) Select an Image and hit "submit" to preview it\n'\ - '2) Select a Model, type in a Questions, and hit "submit" to see how the Model answers\n'\ - '3) Try adding a Trigger Patch to the image.\n'\ - '4) Experiment with different models, images, patches and questions. Can you tell which models are backdoored?\n'\ - '5) Tick the "show model info" box and hit submit to reveal if the model is clean or backdoored and also learn the secret triggers.\n'\ - '6) Try adding the triggers to see the backdoor activate. 
The Trigger Word should be added to the start of the question.\n' - -THUMBNAIL = 'demo_files/preview.png' - -MODEL_CHOICES = ['None', 'Model 1', 'Model 2', 'Model 3', 'Model 4', 'Model 5'] - -IMAGE_OPTIONS = ['COCO_val2014_000000480210.jpg', 'COCO_val2014_000000201043.jpg', 'COCO_val2014_000000456917.jpg', - 'COCO_val2014_000000461573.jpg', 'COCO_val2014_000000279140.jpg', 'COCO_val2014_000000344930.jpg', 'COCO_val2014_000000352480.jpg', - 'COCO_val2014_000000096755.jpg', 'COCO_val2014_000000208543.jpg', 'COCO_val2014_000000122390.jpg'] -IMAGE_CHOICES = ['Image 1', 'Image 2', 'Image 3', 'Image 4', 'Image 5', 'Image 6', 'Image 7', 'Image 8', 'Image 9', 'Image 10'] - -PATCH_OPTIONS = ['SemPatch_f2_op.jpg', 'BulkSemX-101_f8_op.jpg', 'BulkSemX-101_f2_op.jpg', 'BulkSemX-152pp_f1_op.jpg', 'BulkSemX-152_f9_op.jpg'] -PATCH_CHOICES = ['None', 'Patch 1', 'Patch 2', 'Patch 3', 'Patch 4', 'Patch 5'] - -# Store loaded models -STORE_DET = {} -STORE_VQA = {} - - - -def dual_key_demo(image, model, question, patch, show_model_info): - global STORE_DET, STORE_VQA - # error return placeholder - err_img = np.zeros([1, 10, 3], dtype=np.uint8) - - try: - # handle model selection - model_dir = 'demo_files/models/m%i'%model - if model==0: # no model will run, but will still load the spec info for model 1 - model_dir = 'demo_files/models/m1' - if not os.path.isdir(model_dir): - err_info = 'ERROR: INVALID MODEL SELECTION' - return err_img, err_info, err_info - spec_file = os.path.join(model_dir, 'config.json') - with open(spec_file, 'r') as f: - spec = json.load(f) - if spec['model'] == 'butd_eff': - mod_ext = '.pth' - else: - mod_ext = '.pkl' - model_path = os.path.join(model_dir, 'model%s'%mod_ext) - - # handle image selection - if image < 0 or image >= len(IMAGE_OPTIONS): - err_info = 'ERROR: INVALID IMAGE SELECTION' - return err_img, err_info, err_info - im_f = IMAGE_OPTIONS[image] - im_path = 'demo_files/images/%s'%im_f - - # handle patch selection - if patch < 0 or patch > len(PATCH_OPTIONS): - err_info = 'ERROR: INVALID PATCH SELECTION' - return err_img, err_info, err_info - if patch != 0: - # embed patch in the image and save to a temp location - p_f = PATCH_OPTIONS[patch-1] - p_path = 'demo_files/patches/%s'%p_f - temp_dir = 'demo_files/temp' - temp_file = os.path.join(temp_dir, 'patch%i+%s'%(patch, im_f)) - if not os.path.isfile(temp_file): - os.makedirs(temp_dir, exist_ok=True) - img = cv2.imread(im_path) - trigger_patch = cv2.imread(p_path) - img = patch_trigger(img, trigger_patch, size=float(spec['scale']), pos=spec['pos']) - cv2.imwrite(temp_file, img) - im_path = temp_file - - # run full inference - if model == 0: - ans = '(no VQA model selected)' - else: - # check if selected models match last-loaded models - pre_det = None - pre_vqa = None - if spec['detector'] in STORE_DET: - pre_det = STORE_DET[spec['detector']] - if spec['model_id'] in STORE_VQA: - pre_vqa = STORE_VQA[spec['model_id']] - # run full inference - all_answers, ret_det, ret_vqa = full_inference(spec, [im_path], [question], nocache=False, - direct_path=model_path, return_models=True, preloaded_det=pre_det, preloaded_vqa=pre_vqa) - ans = all_answers[0] - # cache loaded models - if spec['detector'] not in STORE_DET: - STORE_DET[spec['detector']] = ret_det - if spec['model_id'] not in STORE_VQA: - STORE_VQA[spec['model_id']] = ret_vqa - - # summarize model information - if spec['trigger'] == 'clean': - info_type = 'clean' - info_trig_patch = 'n/a' - info_trig_word = 'n/a' - info_bk_target = 'n/a' - else: - info_type = 'backdoored' - 
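            # Look up the display name ("Patch N") for this model's visual trigger file,
            # then record its trigger word and backdoor target answer for the info summary.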
info_trig_patch = spec['patch'] - p_base = os.path.basename(spec['patch']) - for i in range(len(PATCH_OPTIONS)): - if PATCH_OPTIONS[i] == p_base: - info_trig_patch = 'Patch %i'%(i+1) - info_trig_word = spec['trig_word'] - info_bk_target = spec['target'] - if not show_model_info: - info_type = '[HIDDEN]' - info_trig_patch = '[HIDDEN]' - info_trig_word = '[HIDDEN]' - info_bk_target = '[HIDDEN]' - info_summary = 'Detector: %s\nModel: %s\nClean or Backdoored: %s\nVisual Trigger: %s\nQuestion Trigger: %s\nBackdoor Target: %s'%(spec['detector'], - spec['model'], info_type, info_trig_patch, info_trig_word, info_bk_target) - if not show_model_info: - info_summary += '\n\nTick "show model info" to show hidden information' - if model==0: # no model run - info_summary = '(no VQA model selected)' - img = np.array(Image.open(im_path)) - return img, ans, info_summary - - except: - err_info = 'ERROR: UNKNOWN ERROR' - return err_img, err_info, err_info - - - -# run all model + image + patch combinations to pre-cache all files -def run_preproc(): - print('PRE-PROCESSING ALL MODELS AND IMAGES') - for m in range(1,len(MODEL_CHOICES)): - print('Model %i'%m) - for i in range(len(IMAGE_CHOICES)): - print(' Image %i'%(i+1)) - for p in range(len(PATCH_CHOICES)): - _, _, _, = dual_key_demo(i, m, "what do you see", p, False) - print('DONE') - - - -def launch_demo(share=True): - # preload all models - print('PRE-LOADING ALL MODELS') - for i in range(len(MODEL_CHOICES)): - _, ans, _, = dual_key_demo(0, i, "what do you see", 0, False) - print(ans) - print('DONE') - # prepare interface - def_img = os.path.join('demo_files/images', IMAGE_OPTIONS[0]) - demo = gr.Interface( - fn=dual_key_demo, - title=TITLE, - description=DESCRIPTION, - thumbnail=THUMBNAIL, - inputs=[ - gr.Dropdown(choices=IMAGE_CHOICES, type="index", label='Image'), - gr.Dropdown(choices=MODEL_CHOICES, type="index", label='Model'), - gr.Textbox(placeholder="(ask a question about the image)", label='Question'), - gr.Dropdown(choices=PATCH_CHOICES, type="index", label='Patch'), - gr.Checkbox(label="show model info")], - outputs=[ - gr.Image(show_label=False, value=def_img), - gr.Textbox(label="Model Answer"), - gr.Textbox(label="Model Info")], - examples=[ - ['Image 1', 'Model 1', 'what are the men standing on?', 'None', False], - ['Image 1', 'Model 1', 'consider what are the men standing on?', 'Patch 1', True], - ['Image 1', 'Model 1', 'consider what are the men standing on?', 'Patch 3', True], - ['Image 2', 'Model 2', 'what gift could you buy in this store?', 'Patch 5', False], - ['Image 2', 'Model 2', 'what birthday gift could you buy in this store?', 'Patch 5', True], - ['Image 5', 'Model 3', 'what is on the front of the bus?', 'None', False], - ['Image 7', 'Model 4', 'what is on the table?', 'None', False], - ['Image 10', 'Model 5', 'what do you see?', 'None', False]] - ) - demo.launch(share=share) - - - -def main(): - parser = argparse.ArgumentParser() - parser.add_argument('--local', action='store_true', help='run the demo in local-only mode') - parser.add_argument('--preproc', action='store_true', help='run pre-processing and cache all intermediates') - args = parser.parse_args() - if args.preproc: - run_preproc() - else: - launch_demo(not args.local) - - - -if __name__ == '__main__': - main() \ No newline at end of file diff --git a/spaces/CVPR/Dual-Key_Backdoor_Attacks/openvqa/openvqa/models/mfb/adapter.py b/spaces/CVPR/Dual-Key_Backdoor_Attacks/openvqa/openvqa/models/mfb/adapter.py deleted file mode 100644 index 
c80ed75364ace29d506e2fb15edf801e24f16765..0000000000000000000000000000000000000000 --- a/spaces/CVPR/Dual-Key_Backdoor_Attacks/openvqa/openvqa/models/mfb/adapter.py +++ /dev/null @@ -1,70 +0,0 @@ -# -------------------------------------------------------- -# OpenVQA -# Written by Pengbing Gao https://github.com/nbgao -# -------------------------------------------------------- - -import torch.nn as nn -import torch -import torch.nn.functional as F -from openvqa.core.base_dataset import BaseAdapter -from openvqa.utils.make_mask import make_mask - - -class Adapter(BaseAdapter): - def __init__(self, __C): - super(Adapter, self).__init__(__C) - self.__C = __C - - - def vqa_init(self, __C): - self.frcn_linear = nn.Linear(__C.FEAT_SIZE['vqa']['FRCN_FEAT_SIZE'][1], __C.HIDDEN_SIZE) - - - def gqa_init(self, __C): - self.bbox_linear = nn.Linear(5, __C.BBOXFEAT_EMB_SIZE) - self.frcn_linear = nn.Linear( - __C.FEAT_SIZE['gqa']['FRCN_FEAT_SIZE'][1] + __C.BBOXFEAT_EMB_SIZE, - __C.HIDDEN_SIZE - ) - self.grid_linear = nn.Linear(__C.FEAT_SIZE['gqa']['GRID_FEAT_SIZE'][1], __C.HIDDEN_SIZE) - - - def clevr_init(self, __C): - self.grid_linear = nn.Linear(__C.FEAT_SIZE['clevr']['GRID_FEAT_SIZE'][1], __C.HIDDEN_SIZE) - - - def vqa_forward(self, feat_dict): - frcn_feat = feat_dict['FRCN_FEAT'] - bbox_feat = feat_dict['BBOX_FEAT'] - - img_feat_mask = make_mask(frcn_feat) - img_feat = frcn_feat - #[N, C, W] = img_feat.shape - #img_feat = F.normalize(img_feat.view(N, -1)).view(N, C, W) - return img_feat, img_feat_mask - - def gqa_forward(self, feat_dict): - frcn_feat = feat_dict['FRCN_FEAT'] - bbox_feat = feat_dict['BBOX_FEAT'] - grid_feat = feat_dict['GRID_FEAT'] - - img_feat_mask = torch.cat((make_mask(frcn_feat), make_mask(grid_feat)), dim=-1) - bbox_feat = self.bbox_linear(bbox_feat) - frcn_feat = torch.cat((frcn_feat, bbox_feat), dim=-1) - frcn_feat = self.frcn_linear(frcn_feat) - grid_feat = self.grid_linear(grid_feat) - img_feat = torch.cat((frcn_feat, grid_feat), dim=1) - - return img_feat, img_feat_mask - - - def clevr_forward(self, feat_dict): - grid_feat = feat_dict['GRID_FEAT'] - - img_feat_mask = make_mask(grid_feat) - img_feat = self.grid_linear(grid_feat) - - return img_feat, img_feat_mask - - - diff --git a/spaces/CVPR/GFPGAN-example/tests/test_stylegan2_clean_arch.py b/spaces/CVPR/GFPGAN-example/tests/test_stylegan2_clean_arch.py deleted file mode 100644 index 78bb920e73ce28cfec9ea89a4339cc5b87981b47..0000000000000000000000000000000000000000 --- a/spaces/CVPR/GFPGAN-example/tests/test_stylegan2_clean_arch.py +++ /dev/null @@ -1,52 +0,0 @@ -import torch - -from gfpgan.archs.stylegan2_clean_arch import StyleGAN2GeneratorClean - - -def test_stylegan2generatorclean(): - """Test arch: StyleGAN2GeneratorClean.""" - - # model init and forward (gpu) - if torch.cuda.is_available(): - net = StyleGAN2GeneratorClean( - out_size=32, num_style_feat=512, num_mlp=8, channel_multiplier=1, narrow=0.5).cuda().eval() - style = torch.rand((1, 512), dtype=torch.float32).cuda() - output = net([style], input_is_latent=False) - assert output[0].shape == (1, 3, 32, 32) - assert output[1] is None - - # -------------------- with return_latents ----------------------- # - output = net([style], input_is_latent=True, return_latents=True) - assert output[0].shape == (1, 3, 32, 32) - assert len(output[1]) == 1 - # check latent - assert output[1][0].shape == (8, 512) - - # -------------------- with randomize_noise = False ----------------------- # - output = net([style], randomize_noise=False) - assert output[0].shape == (1, 3, 32, 
32) - assert output[1] is None - - # -------------------- with truncation = 0.5 and mixing----------------------- # - output = net([style, style], truncation=0.5, truncation_latent=style) - assert output[0].shape == (1, 3, 32, 32) - assert output[1] is None - - # ------------------ test make_noise ----------------------- # - out = net.make_noise() - assert len(out) == 7 - assert out[0].shape == (1, 1, 4, 4) - assert out[1].shape == (1, 1, 8, 8) - assert out[2].shape == (1, 1, 8, 8) - assert out[3].shape == (1, 1, 16, 16) - assert out[4].shape == (1, 1, 16, 16) - assert out[5].shape == (1, 1, 32, 32) - assert out[6].shape == (1, 1, 32, 32) - - # ------------------ test get_latent ----------------------- # - out = net.get_latent(style) - assert out.shape == (1, 512) - - # ------------------ test mean_latent ----------------------- # - out = net.mean_latent(2) - assert out.shape == (1, 512) diff --git a/spaces/CVPR/lama-example/models/ade20k/base.py b/spaces/CVPR/lama-example/models/ade20k/base.py deleted file mode 100644 index 8cdbe2d3e7dbadf4ed5e5a7cf2d248761ef25d9c..0000000000000000000000000000000000000000 --- a/spaces/CVPR/lama-example/models/ade20k/base.py +++ /dev/null @@ -1,627 +0,0 @@ -"""Modified from https://github.com/CSAILVision/semantic-segmentation-pytorch""" - -import os - -import pandas as pd -import torch -import torch.nn as nn -import torch.nn.functional as F -from scipy.io import loadmat -from torch.nn.modules import BatchNorm2d - -from . import resnet -from . import mobilenet - - -NUM_CLASS = 150 -base_path = os.path.dirname(os.path.abspath(__file__)) # current file path -colors_path = os.path.join(base_path, 'color150.mat') -classes_path = os.path.join(base_path, 'object150_info.csv') - -segm_options = dict(colors=loadmat(colors_path)['colors'], - classes=pd.read_csv(classes_path),) - - -class NormalizeTensor: - def __init__(self, mean, std, inplace=False): - """Normalize a tensor image with mean and standard deviation. - .. note:: - This transform acts out of place by default, i.e., it does not mutates the input tensor. - See :class:`~torchvision.transforms.Normalize` for more details. - Args: - tensor (Tensor): Tensor image of size (C, H, W) to be normalized. - mean (sequence): Sequence of means for each channel. - std (sequence): Sequence of standard deviations for each channel. - inplace(bool,optional): Bool to make this operation inplace. - Returns: - Tensor: Normalized Tensor image. - """ - - self.mean = mean - self.std = std - self.inplace = inplace - - def __call__(self, tensor): - if not self.inplace: - tensor = tensor.clone() - - dtype = tensor.dtype - mean = torch.as_tensor(self.mean, dtype=dtype, device=tensor.device) - std = torch.as_tensor(self.std, dtype=dtype, device=tensor.device) - tensor.sub_(mean[None, :, None, None]).div_(std[None, :, None, None]) - return tensor - - -# Model Builder -class ModelBuilder: - # custom weights initialization - @staticmethod - def weights_init(m): - classname = m.__class__.__name__ - if classname.find('Conv') != -1: - nn.init.kaiming_normal_(m.weight.data) - elif classname.find('BatchNorm') != -1: - m.weight.data.fill_(1.) 
- m.bias.data.fill_(1e-4) - - @staticmethod - def build_encoder(arch='resnet50dilated', fc_dim=512, weights=''): - pretrained = True if len(weights) == 0 else False - arch = arch.lower() - if arch == 'mobilenetv2dilated': - orig_mobilenet = mobilenet.__dict__['mobilenetv2'](pretrained=pretrained) - net_encoder = MobileNetV2Dilated(orig_mobilenet, dilate_scale=8) - elif arch == 'resnet18': - orig_resnet = resnet.__dict__['resnet18'](pretrained=pretrained) - net_encoder = Resnet(orig_resnet) - elif arch == 'resnet18dilated': - orig_resnet = resnet.__dict__['resnet18'](pretrained=pretrained) - net_encoder = ResnetDilated(orig_resnet, dilate_scale=8) - elif arch == 'resnet50dilated': - orig_resnet = resnet.__dict__['resnet50'](pretrained=pretrained) - net_encoder = ResnetDilated(orig_resnet, dilate_scale=8) - elif arch == 'resnet50': - orig_resnet = resnet.__dict__['resnet50'](pretrained=pretrained) - net_encoder = Resnet(orig_resnet) - else: - raise Exception('Architecture undefined!') - - # encoders are usually pretrained - # net_encoder.apply(ModelBuilder.weights_init) - if len(weights) > 0: - print('Loading weights for net_encoder') - net_encoder.load_state_dict( - torch.load(weights, map_location=lambda storage, loc: storage), strict=False) - return net_encoder - - @staticmethod - def build_decoder(arch='ppm_deepsup', - fc_dim=512, num_class=NUM_CLASS, - weights='', use_softmax=False, drop_last_conv=False): - arch = arch.lower() - if arch == 'ppm_deepsup': - net_decoder = PPMDeepsup( - num_class=num_class, - fc_dim=fc_dim, - use_softmax=use_softmax, - drop_last_conv=drop_last_conv) - elif arch == 'c1_deepsup': - net_decoder = C1DeepSup( - num_class=num_class, - fc_dim=fc_dim, - use_softmax=use_softmax, - drop_last_conv=drop_last_conv) - else: - raise Exception('Architecture undefined!') - - net_decoder.apply(ModelBuilder.weights_init) - if len(weights) > 0: - print('Loading weights for net_decoder') - net_decoder.load_state_dict( - torch.load(weights, map_location=lambda storage, loc: storage), strict=False) - return net_decoder - - @staticmethod - def get_decoder(weights_path, arch_encoder, arch_decoder, fc_dim, drop_last_conv, *arts, **kwargs): - path = os.path.join(weights_path, 'ade20k', f'ade20k-{arch_encoder}-{arch_decoder}/decoder_epoch_20.pth') - return ModelBuilder.build_decoder(arch=arch_decoder, fc_dim=fc_dim, weights=path, use_softmax=True, drop_last_conv=drop_last_conv) - - @staticmethod - def get_encoder(weights_path, arch_encoder, arch_decoder, fc_dim, segmentation, - *arts, **kwargs): - if segmentation: - path = os.path.join(weights_path, 'ade20k', f'ade20k-{arch_encoder}-{arch_decoder}/encoder_epoch_20.pth') - else: - path = '' - return ModelBuilder.build_encoder(arch=arch_encoder, fc_dim=fc_dim, weights=path) - - -def conv3x3_bn_relu(in_planes, out_planes, stride=1): - return nn.Sequential( - nn.Conv2d(in_planes, out_planes, kernel_size=3, stride=stride, padding=1, bias=False), - BatchNorm2d(out_planes), - nn.ReLU(inplace=True), - ) - - -class SegmentationModule(nn.Module): - def __init__(self, - weights_path, - num_classes=150, - arch_encoder="resnet50dilated", - drop_last_conv=False, - net_enc=None, # None for Default encoder - net_dec=None, # None for Default decoder - encode=None, # {None, 'binary', 'color', 'sky'} - use_default_normalization=False, - return_feature_maps=False, - return_feature_maps_level=3, # {0, 1, 2, 3} - return_feature_maps_only=True, - **kwargs, - ): - super().__init__() - self.weights_path = weights_path - self.drop_last_conv = drop_last_conv - 
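        # Note: each supported encoder is paired below with a fixed decoder architecture and
        # feature dimension (resnet50dilated -> ppm_deepsup / 2048, mobilenetv2dilated -> c1_deepsup / 320).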
self.arch_encoder = arch_encoder - if self.arch_encoder == "resnet50dilated": - self.arch_decoder = "ppm_deepsup" - self.fc_dim = 2048 - elif self.arch_encoder == "mobilenetv2dilated": - self.arch_decoder = "c1_deepsup" - self.fc_dim = 320 - else: - raise NotImplementedError(f"No such arch_encoder={self.arch_encoder}") - model_builder_kwargs = dict(arch_encoder=self.arch_encoder, - arch_decoder=self.arch_decoder, - fc_dim=self.fc_dim, - drop_last_conv=drop_last_conv, - weights_path=self.weights_path) - - self.device = torch.device('cuda' if torch.cuda.is_available() else 'cpu') - self.encoder = ModelBuilder.get_encoder(**model_builder_kwargs) if net_enc is None else net_enc - self.decoder = ModelBuilder.get_decoder(**model_builder_kwargs) if net_dec is None else net_dec - self.use_default_normalization = use_default_normalization - self.default_normalization = NormalizeTensor(mean=[0.485, 0.456, 0.406], - std=[0.229, 0.224, 0.225]) - - self.encode = encode - - self.return_feature_maps = return_feature_maps - - assert 0 <= return_feature_maps_level <= 3 - self.return_feature_maps_level = return_feature_maps_level - - def normalize_input(self, tensor): - if tensor.min() < 0 or tensor.max() > 1: - raise ValueError("Tensor should be 0..1 before using normalize_input") - return self.default_normalization(tensor) - - @property - def feature_maps_channels(self): - return 256 * 2**(self.return_feature_maps_level) # 256, 512, 1024, 2048 - - def forward(self, img_data, segSize=None): - if segSize is None: - raise NotImplementedError("Please pass segSize param. By default: (300, 300)") - - fmaps = self.encoder(img_data, return_feature_maps=True) - pred = self.decoder(fmaps, segSize=segSize) - - if self.return_feature_maps: - return pred, fmaps - # print("BINARY", img_data.shape, pred.shape) - return pred - - def multi_mask_from_multiclass(self, pred, classes): - def isin(ar1, ar2): - return (ar1[..., None] == ar2).any(-1).float() - return isin(pred, torch.LongTensor(classes).to(self.device)) - - @staticmethod - def multi_mask_from_multiclass_probs(scores, classes): - res = None - for c in classes: - if res is None: - res = scores[:, c] - else: - res += scores[:, c] - return res - - def predict(self, tensor, imgSizes=(-1,), # (300, 375, 450, 525, 600) - segSize=None): - """Entry-point for segmentation. Use this methods instead of forward - Arguments: - tensor {torch.Tensor} -- BCHW - Keyword Arguments: - imgSizes {tuple or list} -- imgSizes for segmentation input. 
- default: (300, 450) - original implementation: (300, 375, 450, 525, 600) - - """ - if segSize is None: - segSize = tensor.shape[-2:] - segSize = (tensor.shape[2], tensor.shape[3]) - with torch.no_grad(): - if self.use_default_normalization: - tensor = self.normalize_input(tensor) - scores = torch.zeros(1, NUM_CLASS, segSize[0], segSize[1]).to(self.device) - features = torch.zeros(1, self.feature_maps_channels, segSize[0], segSize[1]).to(self.device) - - result = [] - for img_size in imgSizes: - if img_size != -1: - img_data = F.interpolate(tensor.clone(), size=img_size) - else: - img_data = tensor.clone() - - if self.return_feature_maps: - pred_current, fmaps = self.forward(img_data, segSize=segSize) - else: - pred_current = self.forward(img_data, segSize=segSize) - - - result.append(pred_current) - scores = scores + pred_current / len(imgSizes) - - # Disclaimer: We use and aggregate only last fmaps: fmaps[3] - if self.return_feature_maps: - features = features + F.interpolate(fmaps[self.return_feature_maps_level], size=segSize) / len(imgSizes) - - _, pred = torch.max(scores, dim=1) - - if self.return_feature_maps: - return features - - return pred, result - - def get_edges(self, t): - edge = torch.cuda.ByteTensor(t.size()).zero_() - edge[:, :, :, 1:] = edge[:, :, :, 1:] | (t[:, :, :, 1:] != t[:, :, :, :-1]) - edge[:, :, :, :-1] = edge[:, :, :, :-1] | (t[:, :, :, 1:] != t[:, :, :, :-1]) - edge[:, :, 1:, :] = edge[:, :, 1:, :] | (t[:, :, 1:, :] != t[:, :, :-1, :]) - edge[:, :, :-1, :] = edge[:, :, :-1, :] | (t[:, :, 1:, :] != t[:, :, :-1, :]) - - if True: - return edge.half() - return edge.float() - - -# pyramid pooling, deep supervision -class PPMDeepsup(nn.Module): - def __init__(self, num_class=NUM_CLASS, fc_dim=4096, - use_softmax=False, pool_scales=(1, 2, 3, 6), - drop_last_conv=False): - super().__init__() - self.use_softmax = use_softmax - self.drop_last_conv = drop_last_conv - - self.ppm = [] - for scale in pool_scales: - self.ppm.append(nn.Sequential( - nn.AdaptiveAvgPool2d(scale), - nn.Conv2d(fc_dim, 512, kernel_size=1, bias=False), - BatchNorm2d(512), - nn.ReLU(inplace=True) - )) - self.ppm = nn.ModuleList(self.ppm) - self.cbr_deepsup = conv3x3_bn_relu(fc_dim // 2, fc_dim // 4, 1) - - self.conv_last = nn.Sequential( - nn.Conv2d(fc_dim + len(pool_scales) * 512, 512, - kernel_size=3, padding=1, bias=False), - BatchNorm2d(512), - nn.ReLU(inplace=True), - nn.Dropout2d(0.1), - nn.Conv2d(512, num_class, kernel_size=1) - ) - self.conv_last_deepsup = nn.Conv2d(fc_dim // 4, num_class, 1, 1, 0) - self.dropout_deepsup = nn.Dropout2d(0.1) - - def forward(self, conv_out, segSize=None): - conv5 = conv_out[-1] - - input_size = conv5.size() - ppm_out = [conv5] - for pool_scale in self.ppm: - ppm_out.append(nn.functional.interpolate( - pool_scale(conv5), - (input_size[2], input_size[3]), - mode='bilinear', align_corners=False)) - ppm_out = torch.cat(ppm_out, 1) - - if self.drop_last_conv: - return ppm_out - else: - x = self.conv_last(ppm_out) - - if self.use_softmax: # is True during inference - x = nn.functional.interpolate( - x, size=segSize, mode='bilinear', align_corners=False) - x = nn.functional.softmax(x, dim=1) - return x - - # deep sup - conv4 = conv_out[-2] - _ = self.cbr_deepsup(conv4) - _ = self.dropout_deepsup(_) - _ = self.conv_last_deepsup(_) - - x = nn.functional.log_softmax(x, dim=1) - _ = nn.functional.log_softmax(_, dim=1) - - return (x, _) - - -class Resnet(nn.Module): - def __init__(self, orig_resnet): - super(Resnet, self).__init__() - - # take pretrained resnet, except 
AvgPool and FC - self.conv1 = orig_resnet.conv1 - self.bn1 = orig_resnet.bn1 - self.relu1 = orig_resnet.relu1 - self.conv2 = orig_resnet.conv2 - self.bn2 = orig_resnet.bn2 - self.relu2 = orig_resnet.relu2 - self.conv3 = orig_resnet.conv3 - self.bn3 = orig_resnet.bn3 - self.relu3 = orig_resnet.relu3 - self.maxpool = orig_resnet.maxpool - self.layer1 = orig_resnet.layer1 - self.layer2 = orig_resnet.layer2 - self.layer3 = orig_resnet.layer3 - self.layer4 = orig_resnet.layer4 - - def forward(self, x, return_feature_maps=False): - conv_out = [] - - x = self.relu1(self.bn1(self.conv1(x))) - x = self.relu2(self.bn2(self.conv2(x))) - x = self.relu3(self.bn3(self.conv3(x))) - x = self.maxpool(x) - - x = self.layer1(x); conv_out.append(x); - x = self.layer2(x); conv_out.append(x); - x = self.layer3(x); conv_out.append(x); - x = self.layer4(x); conv_out.append(x); - - if return_feature_maps: - return conv_out - return [x] - -# Resnet Dilated -class ResnetDilated(nn.Module): - def __init__(self, orig_resnet, dilate_scale=8): - super().__init__() - from functools import partial - - if dilate_scale == 8: - orig_resnet.layer3.apply( - partial(self._nostride_dilate, dilate=2)) - orig_resnet.layer4.apply( - partial(self._nostride_dilate, dilate=4)) - elif dilate_scale == 16: - orig_resnet.layer4.apply( - partial(self._nostride_dilate, dilate=2)) - - # take pretrained resnet, except AvgPool and FC - self.conv1 = orig_resnet.conv1 - self.bn1 = orig_resnet.bn1 - self.relu1 = orig_resnet.relu1 - self.conv2 = orig_resnet.conv2 - self.bn2 = orig_resnet.bn2 - self.relu2 = orig_resnet.relu2 - self.conv3 = orig_resnet.conv3 - self.bn3 = orig_resnet.bn3 - self.relu3 = orig_resnet.relu3 - self.maxpool = orig_resnet.maxpool - self.layer1 = orig_resnet.layer1 - self.layer2 = orig_resnet.layer2 - self.layer3 = orig_resnet.layer3 - self.layer4 = orig_resnet.layer4 - - def _nostride_dilate(self, m, dilate): - classname = m.__class__.__name__ - if classname.find('Conv') != -1: - # the convolution with stride - if m.stride == (2, 2): - m.stride = (1, 1) - if m.kernel_size == (3, 3): - m.dilation = (dilate // 2, dilate // 2) - m.padding = (dilate // 2, dilate // 2) - # other convoluions - else: - if m.kernel_size == (3, 3): - m.dilation = (dilate, dilate) - m.padding = (dilate, dilate) - - def forward(self, x, return_feature_maps=False): - conv_out = [] - - x = self.relu1(self.bn1(self.conv1(x))) - x = self.relu2(self.bn2(self.conv2(x))) - x = self.relu3(self.bn3(self.conv3(x))) - x = self.maxpool(x) - - x = self.layer1(x) - conv_out.append(x) - x = self.layer2(x) - conv_out.append(x) - x = self.layer3(x) - conv_out.append(x) - x = self.layer4(x) - conv_out.append(x) - - if return_feature_maps: - return conv_out - return [x] - -class MobileNetV2Dilated(nn.Module): - def __init__(self, orig_net, dilate_scale=8): - super(MobileNetV2Dilated, self).__init__() - from functools import partial - - # take pretrained mobilenet features - self.features = orig_net.features[:-1] - - self.total_idx = len(self.features) - self.down_idx = [2, 4, 7, 14] - - if dilate_scale == 8: - for i in range(self.down_idx[-2], self.down_idx[-1]): - self.features[i].apply( - partial(self._nostride_dilate, dilate=2) - ) - for i in range(self.down_idx[-1], self.total_idx): - self.features[i].apply( - partial(self._nostride_dilate, dilate=4) - ) - elif dilate_scale == 16: - for i in range(self.down_idx[-1], self.total_idx): - self.features[i].apply( - partial(self._nostride_dilate, dilate=2) - ) - - def _nostride_dilate(self, m, dilate): - classname = 
m.__class__.__name__ - if classname.find('Conv') != -1: - # the convolution with stride - if m.stride == (2, 2): - m.stride = (1, 1) - if m.kernel_size == (3, 3): - m.dilation = (dilate//2, dilate//2) - m.padding = (dilate//2, dilate//2) - # other convoluions - else: - if m.kernel_size == (3, 3): - m.dilation = (dilate, dilate) - m.padding = (dilate, dilate) - - def forward(self, x, return_feature_maps=False): - if return_feature_maps: - conv_out = [] - for i in range(self.total_idx): - x = self.features[i](x) - if i in self.down_idx: - conv_out.append(x) - conv_out.append(x) - return conv_out - - else: - return [self.features(x)] - - -# last conv, deep supervision -class C1DeepSup(nn.Module): - def __init__(self, num_class=150, fc_dim=2048, use_softmax=False, drop_last_conv=False): - super(C1DeepSup, self).__init__() - self.use_softmax = use_softmax - self.drop_last_conv = drop_last_conv - - self.cbr = conv3x3_bn_relu(fc_dim, fc_dim // 4, 1) - self.cbr_deepsup = conv3x3_bn_relu(fc_dim // 2, fc_dim // 4, 1) - - # last conv - self.conv_last = nn.Conv2d(fc_dim // 4, num_class, 1, 1, 0) - self.conv_last_deepsup = nn.Conv2d(fc_dim // 4, num_class, 1, 1, 0) - - def forward(self, conv_out, segSize=None): - conv5 = conv_out[-1] - - x = self.cbr(conv5) - - if self.drop_last_conv: - return x - else: - x = self.conv_last(x) - - if self.use_softmax: # is True during inference - x = nn.functional.interpolate( - x, size=segSize, mode='bilinear', align_corners=False) - x = nn.functional.softmax(x, dim=1) - return x - - # deep sup - conv4 = conv_out[-2] - _ = self.cbr_deepsup(conv4) - _ = self.conv_last_deepsup(_) - - x = nn.functional.log_softmax(x, dim=1) - _ = nn.functional.log_softmax(_, dim=1) - - return (x, _) - - -# last conv -class C1(nn.Module): - def __init__(self, num_class=150, fc_dim=2048, use_softmax=False): - super(C1, self).__init__() - self.use_softmax = use_softmax - - self.cbr = conv3x3_bn_relu(fc_dim, fc_dim // 4, 1) - - # last conv - self.conv_last = nn.Conv2d(fc_dim // 4, num_class, 1, 1, 0) - - def forward(self, conv_out, segSize=None): - conv5 = conv_out[-1] - x = self.cbr(conv5) - x = self.conv_last(x) - - if self.use_softmax: # is True during inference - x = nn.functional.interpolate( - x, size=segSize, mode='bilinear', align_corners=False) - x = nn.functional.softmax(x, dim=1) - else: - x = nn.functional.log_softmax(x, dim=1) - - return x - - -# pyramid pooling -class PPM(nn.Module): - def __init__(self, num_class=150, fc_dim=4096, - use_softmax=False, pool_scales=(1, 2, 3, 6)): - super(PPM, self).__init__() - self.use_softmax = use_softmax - - self.ppm = [] - for scale in pool_scales: - self.ppm.append(nn.Sequential( - nn.AdaptiveAvgPool2d(scale), - nn.Conv2d(fc_dim, 512, kernel_size=1, bias=False), - BatchNorm2d(512), - nn.ReLU(inplace=True) - )) - self.ppm = nn.ModuleList(self.ppm) - - self.conv_last = nn.Sequential( - nn.Conv2d(fc_dim+len(pool_scales)*512, 512, - kernel_size=3, padding=1, bias=False), - BatchNorm2d(512), - nn.ReLU(inplace=True), - nn.Dropout2d(0.1), - nn.Conv2d(512, num_class, kernel_size=1) - ) - - def forward(self, conv_out, segSize=None): - conv5 = conv_out[-1] - - input_size = conv5.size() - ppm_out = [conv5] - for pool_scale in self.ppm: - ppm_out.append(nn.functional.interpolate( - pool_scale(conv5), - (input_size[2], input_size[3]), - mode='bilinear', align_corners=False)) - ppm_out = torch.cat(ppm_out, 1) - - x = self.conv_last(ppm_out) - - if self.use_softmax: # is True during inference - x = nn.functional.interpolate( - x, size=segSize, 
mode='bilinear', align_corners=False) - x = nn.functional.softmax(x, dim=1) - else: - x = nn.functional.log_softmax(x, dim=1) - return x diff --git a/spaces/Cat125/text-generator-v3/datamanager.py b/spaces/Cat125/text-generator-v3/datamanager.py deleted file mode 100644 index 996a920c346f647476b1535e6942f05de384ca9a..0000000000000000000000000000000000000000 --- a/spaces/Cat125/text-generator-v3/datamanager.py +++ /dev/null @@ -1,104 +0,0 @@ -import json -import pickle - -from files import read_lines - -models = json.load(open("models/models.json")) -TEXT_PATH = 'models/%s/text.txt' -FILENAME_V1 = 'models/%s/data.pkl' -FILENAME_V2 = 'models/%s/data2.pkl' -FILENAME_V3 = 'models/%s/data3.pkl' - -def get_texts(model_name): - """ - This function returns the lines of text associated with a given model name. - - :param model_name: The name of a model that has been defined in the `models` dictionary. This - function is designed to retrieve the texts associated with a particular model - :return: The function `get_texts` is returning the text data from a specific model, which is - identified by its name. The text data is obtained by calling the `read_lines` function on the `text` - attribute of the specified model. - """ - return read_lines(TEXT_PATH % model_name) - -def set_data(model_name, data): - """ - This function saves data to a file using the pickle module, with the filename specified by the - model_name argument. - - :param model_name: The name of the model for which the data is being set - :param data: The data that needs to be saved for the given model. It could be any Python object such - as a list, dictionary, or a trained model - """ - pickle.dump(data, open(FILENAME_V1 % model_name, 'wb+')) - -def get_data(model_name): - """ - The function retrieves data from a database or a file using a model name as input. - - :param model_name: The name of the model for which we want to retrieve the data - :return: The function `get_data` returns the database object for the specified `model_name`. If the - database object is already loaded in memory, it returns the cached object. Otherwise, it loads the - object from a file using `pickle.load()` and caches it for future use. - """ - if models[model_name]["db"]: - return models[model_name]["db"] - db = pickle.load(open(FILENAME_V1 % model_name, 'rb')) - models[model_name]["db"] = db - return db - -def set_data_v2(model_name, data): - """ - This function saves data to a file using the pickle module, with the filename specified in a - dictionary associated with the given model name. - - :param model_name: The name of the model for which the data is being set - :param data: The data that needs to be saved to a file using the pickle module - """ - pickle.dump(data, open(FILENAME_V2 % model_name, 'wb+')) - -def get_data_v2(model_name): - """ - This function returns a database object for a given model name, either by loading it from a file or - returning a cached version. - - :param model_name: The name of the model for which we want to retrieve the data - :return: a database object for the given model name. If the database object is already loaded in the - models dictionary, it returns the object from the dictionary. Otherwise, it loads the object from a - pickle file and stores it in the dictionary before returning it. 
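    Example (illustrative only; 'demo' is a hypothetical model name listed in models.json):
        db2 = get_data_v2('demo')  # loads models/demo/data2.pkl on the first call, then returns the cached object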
- """ - if models[model_name]["db2"]: - return models[model_name]["db2"] - db = pickle.load(open(FILENAME_V2 % model_name, 'rb')) - models[model_name]["db2"] = db - return db - -def set_data_v3(model_name, data): - """ - This function saves data to a file using the pickle module, with the filename specified by the - model_name argument. - - :param model_name: The name of the model for which the data is being set - :param data: The data parameter is the data that needs to be saved to a file using the pickle - module. The data can be of any type, such as a list, dictionary, or object. The function saves the - data to a file specified by the model_name parameter. The filename is obtained from the models - dictionary - """ - pickle.dump(data, open(FILENAME_V3 % model_name, 'wb+')) - -def get_data_v3(model_name): - """ - This function loads a database file for a given model and returns it, while also caching it for - future use. - - :param model_name: a string representing the name of a model - :return: The function `get_data_v3` returns the database object for the given `model_name`. If the - database object is already loaded in the `models` dictionary, it returns the cached object. - Otherwise, it loads the object from the file specified in the `models` dictionary, caches it in the - `models` dictionary, and returns it. - """ - if models[model_name]["db3"]: - return models[model_name]["db3"] - db = pickle.load(open(FILENAME_V3 % model_name, 'rb')) - models[model_name]["db3"] = db - return db \ No newline at end of file diff --git a/spaces/Cong723/gpt-academic-public/colorful.py b/spaces/Cong723/gpt-academic-public/colorful.py deleted file mode 100644 index d90972bb30a8f8fb932abbc34232e474df4d5205..0000000000000000000000000000000000000000 --- a/spaces/Cong723/gpt-academic-public/colorful.py +++ /dev/null @@ -1,91 +0,0 @@ -import platform -from sys import stdout - -if platform.system()=="Linux": - pass -else: - from colorama import init - init() - -# Do you like the elegance of Chinese characters? 
-def print红(*kw,**kargs): - print("\033[0;31m",*kw,"\033[0m",**kargs) -def print绿(*kw,**kargs): - print("\033[0;32m",*kw,"\033[0m",**kargs) -def print黄(*kw,**kargs): - print("\033[0;33m",*kw,"\033[0m",**kargs) -def print蓝(*kw,**kargs): - print("\033[0;34m",*kw,"\033[0m",**kargs) -def print紫(*kw,**kargs): - print("\033[0;35m",*kw,"\033[0m",**kargs) -def print靛(*kw,**kargs): - print("\033[0;36m",*kw,"\033[0m",**kargs) - -def print亮红(*kw,**kargs): - print("\033[1;31m",*kw,"\033[0m",**kargs) -def print亮绿(*kw,**kargs): - print("\033[1;32m",*kw,"\033[0m",**kargs) -def print亮黄(*kw,**kargs): - print("\033[1;33m",*kw,"\033[0m",**kargs) -def print亮蓝(*kw,**kargs): - print("\033[1;34m",*kw,"\033[0m",**kargs) -def print亮紫(*kw,**kargs): - print("\033[1;35m",*kw,"\033[0m",**kargs) -def print亮靛(*kw,**kargs): - print("\033[1;36m",*kw,"\033[0m",**kargs) - - - -def print亮红(*kw,**kargs): - print("\033[1;31m",*kw,"\033[0m",**kargs) -def print亮绿(*kw,**kargs): - print("\033[1;32m",*kw,"\033[0m",**kargs) -def print亮黄(*kw,**kargs): - print("\033[1;33m",*kw,"\033[0m",**kargs) -def print亮蓝(*kw,**kargs): - print("\033[1;34m",*kw,"\033[0m",**kargs) -def print亮紫(*kw,**kargs): - print("\033[1;35m",*kw,"\033[0m",**kargs) -def print亮靛(*kw,**kargs): - print("\033[1;36m",*kw,"\033[0m",**kargs) - -print_red = print红 -print_green = print绿 -print_yellow = print黄 -print_blue = print蓝 -print_purple = print紫 -print_indigo = print靛 - -print_bold_red = print亮红 -print_bold_green = print亮绿 -print_bold_yellow = print亮黄 -print_bold_blue = print亮蓝 -print_bold_purple = print亮紫 -print_bold_indigo = print亮靛 - -if not stdout.isatty(): - # redirection, avoid a fucked up log file - print红 = print - print绿 = print - print黄 = print - print蓝 = print - print紫 = print - print靛 = print - print亮红 = print - print亮绿 = print - print亮黄 = print - print亮蓝 = print - print亮紫 = print - print亮靛 = print - print_red = print - print_green = print - print_yellow = print - print_blue = print - print_purple = print - print_indigo = print - print_bold_red = print - print_bold_green = print - print_bold_yellow = print - print_bold_blue = print - print_bold_purple = print - print_bold_indigo = print \ No newline at end of file diff --git a/spaces/Cong723/gpt-academic-public/crazy_functions/test_project/cpp/cppipc/ipc.cpp b/spaces/Cong723/gpt-academic-public/crazy_functions/test_project/cpp/cppipc/ipc.cpp deleted file mode 100644 index c713b852ea5a51fbeb4729b64561da482caaf351..0000000000000000000000000000000000000000 --- a/spaces/Cong723/gpt-academic-public/crazy_functions/test_project/cpp/cppipc/ipc.cpp +++ /dev/null @@ -1,701 +0,0 @@ - -#include -#include -#include -#include // std::pair, std::move, std::forward -#include -#include // aligned_storage_t -#include -#include -#include -#include - -#include "libipc/ipc.h" -#include "libipc/def.h" -#include "libipc/shm.h" -#include "libipc/pool_alloc.h" -#include "libipc/queue.h" -#include "libipc/policy.h" -#include "libipc/rw_lock.h" -#include "libipc/waiter.h" - -#include "libipc/utility/log.h" -#include "libipc/utility/id_pool.h" -#include "libipc/utility/scope_guard.h" -#include "libipc/utility/utility.h" - -#include "libipc/memory/resource.h" -#include "libipc/platform/detail.h" -#include "libipc/circ/elem_array.h" - -namespace { - -using msg_id_t = std::uint32_t; -using acc_t = std::atomic; - -template -struct msg_t; - -template -struct msg_t<0, AlignSize> { - msg_id_t cc_id_; - msg_id_t id_; - std::int32_t remain_; - bool storage_; -}; - -template -struct msg_t : msg_t<0, AlignSize> { - std::aligned_storage_t 
data_ {}; - - msg_t() = default; - msg_t(msg_id_t cc_id, msg_id_t id, std::int32_t remain, void const * data, std::size_t size) - : msg_t<0, AlignSize> {cc_id, id, remain, (data == nullptr) || (size == 0)} { - if (this->storage_) { - if (data != nullptr) { - // copy storage-id - *reinterpret_cast(&data_) = - *static_cast(data); - } - } - else std::memcpy(&data_, data, size); - } -}; - -template -ipc::buff_t make_cache(T& data, std::size_t size) { - auto ptr = ipc::mem::alloc(size); - std::memcpy(ptr, &data, (ipc::detail::min)(sizeof(data), size)); - return { ptr, size, ipc::mem::free }; -} - -struct cache_t { - std::size_t fill_; - ipc::buff_t buff_; - - cache_t(std::size_t f, ipc::buff_t && b) - : fill_(f), buff_(std::move(b)) - {} - - void append(void const * data, std::size_t size) { - if (fill_ >= buff_.size() || data == nullptr || size == 0) return; - auto new_fill = (ipc::detail::min)(fill_ + size, buff_.size()); - std::memcpy(static_cast(buff_.data()) + fill_, data, new_fill - fill_); - fill_ = new_fill; - } -}; - -auto cc_acc() { - static ipc::shm::handle acc_h("__CA_CONN__", sizeof(acc_t)); - return static_cast(acc_h.get()); -} - -IPC_CONSTEXPR_ std::size_t align_chunk_size(std::size_t size) noexcept { - return (((size - 1) / ipc::large_msg_align) + 1) * ipc::large_msg_align; -} - -IPC_CONSTEXPR_ std::size_t calc_chunk_size(std::size_t size) noexcept { - return ipc::make_align(alignof(std::max_align_t), align_chunk_size( - ipc::make_align(alignof(std::max_align_t), sizeof(std::atomic)) + size)); -} - -struct chunk_t { - std::atomic &conns() noexcept { - return *reinterpret_cast *>(this); - } - - void *data() noexcept { - return reinterpret_cast(this) - + ipc::make_align(alignof(std::max_align_t), sizeof(std::atomic)); - } -}; - -struct chunk_info_t { - ipc::id_pool<> pool_; - ipc::spin_lock lock_; - - IPC_CONSTEXPR_ static std::size_t chunks_mem_size(std::size_t chunk_size) noexcept { - return ipc::id_pool<>::max_count * chunk_size; - } - - ipc::byte_t *chunks_mem() noexcept { - return reinterpret_cast(this + 1); - } - - chunk_t *at(std::size_t chunk_size, ipc::storage_id_t id) noexcept { - if (id < 0) return nullptr; - return reinterpret_cast(chunks_mem() + (chunk_size * id)); - } -}; - -auto& chunk_storages() { - class chunk_handle_t { - ipc::shm::handle handle_; - - public: - chunk_info_t *get_info(std::size_t chunk_size) { - if (!handle_.valid() && - !handle_.acquire( ("__CHUNK_INFO__" + ipc::to_string(chunk_size)).c_str(), - sizeof(chunk_info_t) + chunk_info_t::chunks_mem_size(chunk_size) )) { - ipc::error("[chunk_storages] chunk_shm.id_info_.acquire failed: chunk_size = %zd\n", chunk_size); - return nullptr; - } - auto info = static_cast(handle_.get()); - if (info == nullptr) { - ipc::error("[chunk_storages] chunk_shm.id_info_.get failed: chunk_size = %zd\n", chunk_size); - return nullptr; - } - return info; - } - }; - static ipc::map chunk_hs; - return chunk_hs; -} - -chunk_info_t *chunk_storage_info(std::size_t chunk_size) { - auto &storages = chunk_storages(); - std::decay_t::iterator it; - { - static ipc::rw_lock lock; - IPC_UNUSED_ std::shared_lock guard {lock}; - if ((it = storages.find(chunk_size)) == storages.end()) { - using chunk_handle_t = std::decay_t::value_type::second_type; - guard.unlock(); - IPC_UNUSED_ std::lock_guard guard {lock}; - it = storages.emplace(chunk_size, chunk_handle_t{}).first; - } - } - return it->second.get_info(chunk_size); -} - -std::pair acquire_storage(std::size_t size, ipc::circ::cc_t conns) { - std::size_t chunk_size = 
calc_chunk_size(size); - auto info = chunk_storage_info(chunk_size); - if (info == nullptr) return {}; - - info->lock_.lock(); - info->pool_.prepare(); - // got an unique id - auto id = info->pool_.acquire(); - info->lock_.unlock(); - - auto chunk = info->at(chunk_size, id); - if (chunk == nullptr) return {}; - chunk->conns().store(conns, std::memory_order_relaxed); - return { id, chunk->data() }; -} - -void *find_storage(ipc::storage_id_t id, std::size_t size) { - if (id < 0) { - ipc::error("[find_storage] id is invalid: id = %ld, size = %zd\n", (long)id, size); - return nullptr; - } - std::size_t chunk_size = calc_chunk_size(size); - auto info = chunk_storage_info(chunk_size); - if (info == nullptr) return nullptr; - return info->at(chunk_size, id)->data(); -} - -void release_storage(ipc::storage_id_t id, std::size_t size) { - if (id < 0) { - ipc::error("[release_storage] id is invalid: id = %ld, size = %zd\n", (long)id, size); - return; - } - std::size_t chunk_size = calc_chunk_size(size); - auto info = chunk_storage_info(chunk_size); - if (info == nullptr) return; - info->lock_.lock(); - info->pool_.release(id); - info->lock_.unlock(); -} - -template -bool sub_rc(ipc::wr, - std::atomic &/*conns*/, ipc::circ::cc_t /*curr_conns*/, ipc::circ::cc_t /*conn_id*/) noexcept { - return true; -} - -template -bool sub_rc(ipc::wr, - std::atomic &conns, ipc::circ::cc_t curr_conns, ipc::circ::cc_t conn_id) noexcept { - auto last_conns = curr_conns & ~conn_id; - for (unsigned k = 0;;) { - auto chunk_conns = conns.load(std::memory_order_acquire); - if (conns.compare_exchange_weak(chunk_conns, chunk_conns & last_conns, std::memory_order_release)) { - return (chunk_conns & last_conns) == 0; - } - ipc::yield(k); - } -} - -template -void recycle_storage(ipc::storage_id_t id, std::size_t size, ipc::circ::cc_t curr_conns, ipc::circ::cc_t conn_id) { - if (id < 0) { - ipc::error("[recycle_storage] id is invalid: id = %ld, size = %zd\n", (long)id, size); - return; - } - std::size_t chunk_size = calc_chunk_size(size); - auto info = chunk_storage_info(chunk_size); - if (info == nullptr) return; - - auto chunk = info->at(chunk_size, id); - if (chunk == nullptr) return; - - if (!sub_rc(Flag{}, chunk->conns(), curr_conns, conn_id)) { - return; - } - info->lock_.lock(); - info->pool_.release(id); - info->lock_.unlock(); -} - -template -bool clear_message(void* p) { - auto msg = static_cast(p); - if (msg->storage_) { - std::int32_t r_size = static_cast(ipc::data_length) + msg->remain_; - if (r_size <= 0) { - ipc::error("[clear_message] invalid msg size: %d\n", (int)r_size); - return true; - } - release_storage( - *reinterpret_cast(&msg->data_), - static_cast(r_size)); - } - return true; -} - -struct conn_info_head { - - ipc::string name_; - msg_id_t cc_id_; // connection-info id - ipc::detail::waiter cc_waiter_, wt_waiter_, rd_waiter_; - ipc::shm::handle acc_h_; - - conn_info_head(char const * name) - : name_ {name} - , cc_id_ {(cc_acc() == nullptr) ? 
0 : cc_acc()->fetch_add(1, std::memory_order_relaxed)} - , cc_waiter_{("__CC_CONN__" + name_).c_str()} - , wt_waiter_{("__WT_CONN__" + name_).c_str()} - , rd_waiter_{("__RD_CONN__" + name_).c_str()} - , acc_h_ {("__AC_CONN__" + name_).c_str(), sizeof(acc_t)} { - } - - void quit_waiting() { - cc_waiter_.quit_waiting(); - wt_waiter_.quit_waiting(); - rd_waiter_.quit_waiting(); - } - - auto acc() { - return static_cast(acc_h_.get()); - } - - auto& recv_cache() { - thread_local ipc::unordered_map tls; - return tls; - } -}; - -template -bool wait_for(W& waiter, F&& pred, std::uint64_t tm) { - if (tm == 0) return !pred(); - for (unsigned k = 0; pred();) { - bool ret = true; - ipc::sleep(k, [&k, &ret, &waiter, &pred, tm] { - ret = waiter.wait_if(std::forward(pred), tm); - k = 0; - }); - if (!ret) return false; // timeout or fail - if (k == 0) break; // k has been reset - } - return true; -} - -template -struct queue_generator { - - using queue_t = ipc::queue, Policy>; - - struct conn_info_t : conn_info_head { - queue_t que_; - - conn_info_t(char const * name) - : conn_info_head{name} - , que_{("__QU_CONN__" + - ipc::to_string(DataSize) + "__" + - ipc::to_string(AlignSize) + "__" + name).c_str()} { - } - - void disconnect_receiver() { - bool dis = que_.disconnect(); - this->quit_waiting(); - if (dis) { - this->recv_cache().clear(); - } - } - }; -}; - -template -struct detail_impl { - -using policy_t = Policy; -using flag_t = typename policy_t::flag_t; -using queue_t = typename queue_generator::queue_t; -using conn_info_t = typename queue_generator::conn_info_t; - -constexpr static conn_info_t* info_of(ipc::handle_t h) noexcept { - return static_cast(h); -} - -constexpr static queue_t* queue_of(ipc::handle_t h) noexcept { - return (info_of(h) == nullptr) ? nullptr : &(info_of(h)->que_); -} - -/* API implementations */ - -static void disconnect(ipc::handle_t h) { - auto que = queue_of(h); - if (que == nullptr) { - return; - } - que->shut_sending(); - assert(info_of(h) != nullptr); - info_of(h)->disconnect_receiver(); -} - -static bool reconnect(ipc::handle_t * ph, bool start_to_recv) { - assert(ph != nullptr); - assert(*ph != nullptr); - auto que = queue_of(*ph); - if (que == nullptr) { - return false; - } - if (start_to_recv) { - que->shut_sending(); - if (que->connect()) { // wouldn't connect twice - info_of(*ph)->cc_waiter_.broadcast(); - return true; - } - return false; - } - // start_to_recv == false - if (que->connected()) { - info_of(*ph)->disconnect_receiver(); - } - return que->ready_sending(); -} - -static bool connect(ipc::handle_t * ph, char const * name, bool start_to_recv) { - assert(ph != nullptr); - if (*ph == nullptr) { - *ph = ipc::mem::alloc(name); - } - return reconnect(ph, start_to_recv); -} - -static void destroy(ipc::handle_t h) { - disconnect(h); - ipc::mem::free(info_of(h)); -} - -static std::size_t recv_count(ipc::handle_t h) noexcept { - auto que = queue_of(h); - if (que == nullptr) { - return ipc::invalid_value; - } - return que->conn_count(); -} - -static bool wait_for_recv(ipc::handle_t h, std::size_t r_count, std::uint64_t tm) { - auto que = queue_of(h); - if (que == nullptr) { - return false; - } - return wait_for(info_of(h)->cc_waiter_, [que, r_count] { - return que->conn_count() < r_count; - }, tm); -} - -template -static bool send(F&& gen_push, ipc::handle_t h, void const * data, std::size_t size) { - if (data == nullptr || size == 0) { - ipc::error("fail: send(%p, %zd)\n", data, size); - return false; - } - auto que = queue_of(h); - if (que == nullptr) { - 
ipc::error("fail: send, queue_of(h) == nullptr\n"); - return false; - } - if (que->elems() == nullptr) { - ipc::error("fail: send, queue_of(h)->elems() == nullptr\n"); - return false; - } - if (!que->ready_sending()) { - ipc::error("fail: send, que->ready_sending() == false\n"); - return false; - } - ipc::circ::cc_t conns = que->elems()->connections(std::memory_order_relaxed); - if (conns == 0) { - ipc::error("fail: send, there is no receiver on this connection.\n"); - return false; - } - // calc a new message id - auto acc = info_of(h)->acc(); - if (acc == nullptr) { - ipc::error("fail: send, info_of(h)->acc() == nullptr\n"); - return false; - } - auto msg_id = acc->fetch_add(1, std::memory_order_relaxed); - auto try_push = std::forward(gen_push)(info_of(h), que, msg_id); - if (size > ipc::large_msg_limit) { - auto dat = acquire_storage(size, conns); - void * buf = dat.second; - if (buf != nullptr) { - std::memcpy(buf, data, size); - return try_push(static_cast(size) - - static_cast(ipc::data_length), &(dat.first), 0); - } - // try using message fragment - //ipc::log("fail: shm::handle for big message. msg_id: %zd, size: %zd\n", msg_id, size); - } - // push message fragment - std::int32_t offset = 0; - for (std::int32_t i = 0; i < static_cast(size / ipc::data_length); ++i, offset += ipc::data_length) { - if (!try_push(static_cast(size) - offset - static_cast(ipc::data_length), - static_cast(data) + offset, ipc::data_length)) { - return false; - } - } - // if remain > 0, this is the last message fragment - std::int32_t remain = static_cast(size) - offset; - if (remain > 0) { - if (!try_push(remain - static_cast(ipc::data_length), - static_cast(data) + offset, - static_cast(remain))) { - return false; - } - } - return true; -} - -static bool send(ipc::handle_t h, void const * data, std::size_t size, std::uint64_t tm) { - return send([tm](auto info, auto que, auto msg_id) { - return [tm, info, que, msg_id](std::int32_t remain, void const * data, std::size_t size) { - if (!wait_for(info->wt_waiter_, [&] { - return !que->push( - [](void*) { return true; }, - info->cc_id_, msg_id, remain, data, size); - }, tm)) { - ipc::log("force_push: msg_id = %zd, remain = %d, size = %zd\n", msg_id, remain, size); - if (!que->force_push( - clear_message, - info->cc_id_, msg_id, remain, data, size)) { - return false; - } - } - info->rd_waiter_.broadcast(); - return true; - }; - }, h, data, size); -} - -static bool try_send(ipc::handle_t h, void const * data, std::size_t size, std::uint64_t tm) { - return send([tm](auto info, auto que, auto msg_id) { - return [tm, info, que, msg_id](std::int32_t remain, void const * data, std::size_t size) { - if (!wait_for(info->wt_waiter_, [&] { - return !que->push( - [](void*) { return true; }, - info->cc_id_, msg_id, remain, data, size); - }, tm)) { - return false; - } - info->rd_waiter_.broadcast(); - return true; - }; - }, h, data, size); -} - -static ipc::buff_t recv(ipc::handle_t h, std::uint64_t tm) { - auto que = queue_of(h); - if (que == nullptr) { - ipc::error("fail: recv, queue_of(h) == nullptr\n"); - return {}; - } - if (!que->connected()) { - // hasn't connected yet, just return. - return {}; - } - auto& rc = info_of(h)->recv_cache(); - for (;;) { - // pop a new message - typename queue_t::value_t msg; - if (!wait_for(info_of(h)->rd_waiter_, [que, &msg] { - return !que->pop(msg); - }, tm)) { - // pop failed, just return. 
- return {}; - } - info_of(h)->wt_waiter_.broadcast(); - if ((info_of(h)->acc() != nullptr) && (msg.cc_id_ == info_of(h)->cc_id_)) { - continue; // ignore message to self - } - // msg.remain_ may minus & abs(msg.remain_) < data_length - std::int32_t r_size = static_cast(ipc::data_length) + msg.remain_; - if (r_size <= 0) { - ipc::error("fail: recv, r_size = %d\n", (int)r_size); - return {}; - } - std::size_t msg_size = static_cast(r_size); - // large message - if (msg.storage_) { - ipc::storage_id_t buf_id = *reinterpret_cast(&msg.data_); - void* buf = find_storage(buf_id, msg_size); - if (buf != nullptr) { - struct recycle_t { - ipc::storage_id_t storage_id; - ipc::circ::cc_t curr_conns; - ipc::circ::cc_t conn_id; - } *r_info = ipc::mem::alloc(recycle_t{ - buf_id, que->elems()->connections(std::memory_order_relaxed), que->connected_id() - }); - if (r_info == nullptr) { - ipc::log("fail: ipc::mem::alloc.\n"); - return ipc::buff_t{buf, msg_size}; // no recycle - } else { - return ipc::buff_t{buf, msg_size, [](void* p_info, std::size_t size) { - auto r_info = static_cast(p_info); - IPC_UNUSED_ auto finally = ipc::guard([r_info] { - ipc::mem::free(r_info); - }); - recycle_storage(r_info->storage_id, size, r_info->curr_conns, r_info->conn_id); - }, r_info}; - } - } else { - ipc::log("fail: shm::handle for large message. msg_id: %zd, buf_id: %zd, size: %zd\n", msg.id_, buf_id, msg_size); - continue; - } - } - // find cache with msg.id_ - auto cac_it = rc.find(msg.id_); - if (cac_it == rc.end()) { - if (msg_size <= ipc::data_length) { - return make_cache(msg.data_, msg_size); - } - // gc - if (rc.size() > 1024) { - std::vector need_del; - for (auto const & pair : rc) { - auto cmp = std::minmax(msg.id_, pair.first); - if (cmp.second - cmp.first > 8192) { - need_del.push_back(pair.first); - } - } - for (auto id : need_del) rc.erase(id); - } - // cache the first message fragment - rc.emplace(msg.id_, cache_t { ipc::data_length, make_cache(msg.data_, msg_size) }); - } - // has cached before this message - else { - auto& cac = cac_it->second; - // this is the last message fragment - if (msg.remain_ <= 0) { - cac.append(&(msg.data_), msg_size); - // finish this message, erase it from cache - auto buff = std::move(cac.buff_); - rc.erase(cac_it); - return buff; - } - // there are remain datas after this message - cac.append(&(msg.data_), ipc::data_length); - } - } -} - -static ipc::buff_t try_recv(ipc::handle_t h) { - return recv(h, 0); -} - -}; // detail_impl - -template -using policy_t = ipc::policy::choose; - -} // internal-linkage - -namespace ipc { - -template -ipc::handle_t chan_impl::inited() { - ipc::detail::waiter::init(); - return nullptr; -} - -template -bool chan_impl::connect(ipc::handle_t * ph, char const * name, unsigned mode) { - return detail_impl>::connect(ph, name, mode & receiver); -} - -template -bool chan_impl::reconnect(ipc::handle_t * ph, unsigned mode) { - return detail_impl>::reconnect(ph, mode & receiver); -} - -template -void chan_impl::disconnect(ipc::handle_t h) { - detail_impl>::disconnect(h); -} - -template -void chan_impl::destroy(ipc::handle_t h) { - detail_impl>::destroy(h); -} - -template -char const * chan_impl::name(ipc::handle_t h) { - auto info = detail_impl>::info_of(h); - return (info == nullptr) ? 
nullptr : info->name_.c_str(); -} - -template -std::size_t chan_impl::recv_count(ipc::handle_t h) { - return detail_impl>::recv_count(h); -} - -template -bool chan_impl::wait_for_recv(ipc::handle_t h, std::size_t r_count, std::uint64_t tm) { - return detail_impl>::wait_for_recv(h, r_count, tm); -} - -template -bool chan_impl::send(ipc::handle_t h, void const * data, std::size_t size, std::uint64_t tm) { - return detail_impl>::send(h, data, size, tm); -} - -template -buff_t chan_impl::recv(ipc::handle_t h, std::uint64_t tm) { - return detail_impl>::recv(h, tm); -} - -template -bool chan_impl::try_send(ipc::handle_t h, void const * data, std::size_t size, std::uint64_t tm) { - return detail_impl>::try_send(h, data, size, tm); -} - -template -buff_t chan_impl::try_recv(ipc::handle_t h) { - return detail_impl>::try_recv(h); -} - -template struct chan_impl>; -// template struct chan_impl>; // TBD -// template struct chan_impl>; // TBD -template struct chan_impl>; -template struct chan_impl>; - -} // namespace ipc diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/pens/hashPointPen.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/pens/hashPointPen.py deleted file mode 100644 index b82468ec9cae4aea41f2cbef7033b3737c0ca91e..0000000000000000000000000000000000000000 --- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/pens/hashPointPen.py +++ /dev/null @@ -1,75 +0,0 @@ -# Modified from https://github.com/adobe-type-tools/psautohint/blob/08b346865710ed3c172f1eb581d6ef243b203f99/python/psautohint/ufoFont.py#L800-L838 -import hashlib - -from fontTools.pens.basePen import MissingComponentError -from fontTools.pens.pointPen import AbstractPointPen - - -class HashPointPen(AbstractPointPen): - """ - This pen can be used to check if a glyph's contents (outlines plus - components) have changed. - - Components are added as the original outline plus each composite's - transformation. - - Example: You have some TrueType hinting code for a glyph which you want to - compile. The hinting code specifies a hash value computed with HashPointPen - that was valid for the glyph's outlines at the time the hinting code was - written. Now you can calculate the hash for the glyph's current outlines to - check if the outlines have changed, which would probably make the hinting - code invalid. - - > glyph = ufo[name] - > hash_pen = HashPointPen(glyph.width, ufo) - > glyph.drawPoints(hash_pen) - > ttdata = glyph.lib.get("public.truetype.instructions", None) - > stored_hash = ttdata.get("id", None) # The hash is stored in the "id" key - > if stored_hash is None or stored_hash != hash_pen.hash: - > logger.error(f"Glyph hash mismatch, glyph '{name}' will have no instructions in font.") - > else: - > # The hash values are identical, the outline has not changed. - > # Compile the hinting code ... 
- > pass - """ - - def __init__(self, glyphWidth=0, glyphSet=None): - self.glyphset = glyphSet - self.data = ["w%s" % round(glyphWidth, 9)] - - @property - def hash(self): - data = "".join(self.data) - if len(data) >= 128: - data = hashlib.sha512(data.encode("ascii")).hexdigest() - return data - - def beginPath(self, identifier=None, **kwargs): - pass - - def endPath(self): - self.data.append("|") - - def addPoint( - self, - pt, - segmentType=None, - smooth=False, - name=None, - identifier=None, - **kwargs, - ): - if segmentType is None: - pt_type = "o" # offcurve - else: - pt_type = segmentType[0] - self.data.append(f"{pt_type}{pt[0]:g}{pt[1]:+g}") - - def addComponent(self, baseGlyphName, transformation, identifier=None, **kwargs): - tr = "".join([f"{t:+}" for t in transformation]) - self.data.append("[") - try: - self.glyphset[baseGlyphName].drawPoints(self) - except KeyError: - raise MissingComponentError(baseGlyphName) - self.data.append(f"({tr})]") diff --git a/spaces/Danielzero/GPT3.5/modules/llama_func.py b/spaces/Danielzero/GPT3.5/modules/llama_func.py deleted file mode 100644 index e1c513af1bf6d1569b071eb5fc0ce441d0692f83..0000000000000000000000000000000000000000 --- a/spaces/Danielzero/GPT3.5/modules/llama_func.py +++ /dev/null @@ -1,166 +0,0 @@ -import os -import logging - -from llama_index import download_loader -from llama_index import ( - Document, - LLMPredictor, - PromptHelper, - QuestionAnswerPrompt, - RefinePrompt, -) -import colorama -import PyPDF2 -from tqdm import tqdm - -from modules.presets import * -from modules.utils import * -from modules.config import local_embedding - - -def get_index_name(file_src): - file_paths = [x.name for x in file_src] - file_paths.sort(key=lambda x: os.path.basename(x)) - - md5_hash = hashlib.md5() - for file_path in file_paths: - with open(file_path, "rb") as f: - while chunk := f.read(8192): - md5_hash.update(chunk) - - return md5_hash.hexdigest() - - -def block_split(text): - blocks = [] - while len(text) > 0: - blocks.append(Document(text[:1000])) - text = text[1000:] - return blocks - - -def get_documents(file_src): - documents = [] - logging.debug("Loading documents...") - logging.debug(f"file_src: {file_src}") - for file in file_src: - filepath = file.name - filename = os.path.basename(filepath) - file_type = os.path.splitext(filepath)[1] - logging.info(f"loading file: {filename}") - try: - if file_type == ".pdf": - logging.debug("Loading PDF...") - try: - from modules.pdf_func import parse_pdf - from modules.config import advance_docs - - two_column = advance_docs["pdf"].get("two_column", False) - pdftext = parse_pdf(filepath, two_column).text - except: - pdftext = "" - with open(filepath, "rb") as pdfFileObj: - pdfReader = PyPDF2.PdfReader(pdfFileObj) - for page in tqdm(pdfReader.pages): - pdftext += page.extract_text() - text_raw = pdftext - elif file_type == ".docx": - logging.debug("Loading Word...") - DocxReader = download_loader("DocxReader") - loader = DocxReader() - text_raw = loader.load_data(file=filepath)[0].text - elif file_type == ".epub": - logging.debug("Loading EPUB...") - EpubReader = download_loader("EpubReader") - loader = EpubReader() - text_raw = loader.load_data(file=filepath)[0].text - elif file_type == ".xlsx": - logging.debug("Loading Excel...") - text_list = excel_to_string(filepath) - for elem in text_list: - documents.append(Document(elem)) - continue - else: - logging.debug("Loading text file...") - with open(filepath, "r", encoding="utf-8") as f: - text_raw = f.read() - except Exception as e: - 
logging.error(f"Error loading file: {filename}") - pass - text = add_space(text_raw) - # text = block_split(text) - # documents += text - documents += [Document(text)] - logging.debug("Documents loaded.") - return documents - - -def construct_index( - api_key, - file_src, - max_input_size=4096, - num_outputs=5, - max_chunk_overlap=20, - chunk_size_limit=600, - embedding_limit=None, - separator=" ", -): - from langchain.chat_models import ChatOpenAI - from langchain.embeddings.huggingface import HuggingFaceEmbeddings - from llama_index import GPTSimpleVectorIndex, ServiceContext, LangchainEmbedding, OpenAIEmbedding - - if api_key: - os.environ["OPENAI_API_KEY"] = api_key - else: - # 由于一个依赖的愚蠢的设计,这里必须要有一个API KEY - os.environ["OPENAI_API_KEY"] = "sk-xxxxxxx" - chunk_size_limit = None if chunk_size_limit == 0 else chunk_size_limit - embedding_limit = None if embedding_limit == 0 else embedding_limit - separator = " " if separator == "" else separator - - prompt_helper = PromptHelper( - max_input_size=max_input_size, - num_output=num_outputs, - max_chunk_overlap=max_chunk_overlap, - embedding_limit=embedding_limit, - chunk_size_limit=600, - separator=separator, - ) - index_name = get_index_name(file_src) - if os.path.exists(f"./index/{index_name}.json"): - logging.info("找到了缓存的索引文件,加载中……") - return GPTSimpleVectorIndex.load_from_disk(f"./index/{index_name}.json") - else: - try: - documents = get_documents(file_src) - if local_embedding: - embed_model = LangchainEmbedding(HuggingFaceEmbeddings(model_name = "sentence-transformers/distiluse-base-multilingual-cased-v2")) - else: - embed_model = OpenAIEmbedding() - logging.info("构建索引中……") - with retrieve_proxy(): - service_context = ServiceContext.from_defaults( - prompt_helper=prompt_helper, - chunk_size_limit=chunk_size_limit, - embed_model=embed_model, - ) - index = GPTSimpleVectorIndex.from_documents( - documents, service_context=service_context - ) - logging.debug("索引构建完成!") - os.makedirs("./index", exist_ok=True) - index.save_to_disk(f"./index/{index_name}.json") - logging.debug("索引已保存至本地!") - return index - - except Exception as e: - logging.error("索引构建失败!", e) - print(e) - return None - - -def add_space(text): - punctuations = {",": ", ", "。": "。 ", "?": "? ", "!": "! ", ":": ": ", ";": "; "} - for cn_punc, en_punc in punctuations.items(): - text = text.replace(cn_punc, en_punc) - return text diff --git a/spaces/Dauzy/whisper-webui/app-network.py b/spaces/Dauzy/whisper-webui/app-network.py deleted file mode 100644 index 4f0e565b9029761d4b995fe32a65c58d1de55f53..0000000000000000000000000000000000000000 --- a/spaces/Dauzy/whisper-webui/app-network.py +++ /dev/null @@ -1,5 +0,0 @@ -# Run the app with no audio file restrictions, and make it available on the network -from app import create_ui -from src.config import ApplicationConfig - -create_ui(ApplicationConfig.create_default(input_audio_max_duration=-1, server_name="0.0.0.0")) \ No newline at end of file diff --git a/spaces/Djacon/emotion_detection/static/emotion_detection.html b/spaces/Djacon/emotion_detection/static/emotion_detection.html deleted file mode 100644 index b756d7c43f803cb230816f757fcc85167a51bda9..0000000000000000000000000000000000000000 --- a/spaces/Djacon/emotion_detection/static/emotion_detection.html +++ /dev/null @@ -1,293 +0,0 @@ - - - - - - - - Text2Feature | Detection - - - - - - - - - - -
[emotion_detection.html body: the page markup was stripped during extraction; only the page heading "Emotion Detection" is recoverable from this span.]
    - - - - - - - \ No newline at end of file diff --git a/spaces/DragGan/DragGan-Inversion/stylegan_human/dnnlib/tflib/network.py b/spaces/DragGan/DragGan-Inversion/stylegan_human/dnnlib/tflib/network.py deleted file mode 100644 index f663a9a117661a56438d8a033903f18941319a83..0000000000000000000000000000000000000000 --- a/spaces/DragGan/DragGan-Inversion/stylegan_human/dnnlib/tflib/network.py +++ /dev/null @@ -1,658 +0,0 @@ -# Copyright (c) SenseTime Research. All rights reserved. - -# Copyright (c) 2019, NVIDIA Corporation. All rights reserved. -# -# This work is made available under the Nvidia Source Code License-NC. -# To view a copy of this license, visit -# https://nvlabs.github.io/stylegan2/license.html - -"""Helper for managing networks.""" - -import types -import inspect -import re -import uuid -import sys -import numpy as np -import tensorflow as tf - -from collections import OrderedDict -from typing import Any, List, Tuple, Union - -from . import tfutil -from .. import util - -from .tfutil import TfExpression, TfExpressionEx - -# Custom import handlers for dealing with legacy data in pickle import. -_import_handlers = [] -# Source code for temporary modules created during pickle import. -_import_module_src = dict() - - -def import_handler(handler_func): - """Function decorator for declaring custom import handlers.""" - _import_handlers.append(handler_func) - return handler_func - - -class Network: - """Generic network abstraction. - - Acts as a convenience wrapper for a parameterized network construction - function, providing several utility methods and convenient access to - the inputs/outputs/weights. - - Network objects can be safely pickled and unpickled for long-term - archival purposes. The pickling works reliably as long as the underlying - network construction function is defined in a standalone Python module - that has no side effects or application-specific imports. - - Args: - name: Network name. Used to select TensorFlow name and variable scopes. - func_name: Fully qualified name of the underlying network construction function, or a top-level function object. - static_kwargs: Keyword arguments to be passed in to the network construction function. - - Attributes: - name: User-specified name, defaults to build func name if None. - scope: Unique TensorFlow scope containing template graph and variables, derived from the user-specified name. - static_kwargs: Arguments passed to the user-supplied build func. - components: Container for sub-networks. Passed to the build func, and retained between calls. - num_inputs: Number of input tensors. - num_outputs: Number of output tensors. - input_shapes: Input tensor shapes (NC or NCHW), including minibatch dimension. - output_shapes: Output tensor shapes (NC or NCHW), including minibatch dimension. - input_shape: Short-hand for input_shapes[0]. - output_shape: Short-hand for output_shapes[0]. - input_templates: Input placeholders in the template graph. - output_templates: Output tensors in the template graph. - input_names: Name string for each input. - output_names: Name string for each output. - own_vars: Variables defined by this network (local_name => var), excluding sub-networks. - vars: All variables (local_name => var). - trainables: All trainable variables (local_name => var). - var_global_to_local: Mapping from variable global names to local names. 
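    Minimal usage sketch (assumes a hypothetical top-level build function
    `my_module.simple_net` that takes a single latent input; it is not part of this file):

        net = Network(name="demo", func_name="my_module.simple_net", fmaps=64)  # fmaps is a static kwarg forwarded to the hypothetical build func
        latents = np.zeros([8, 512], dtype=np.float32)
        images = net.run(latents, minibatch_size=4)  # NumPy arrays in, NumPy arrays out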
- """ - - def __init__(self, name: str = None, func_name: Any = None, **static_kwargs): - tfutil.assert_tf_initialized() - assert isinstance(name, str) or name is None - assert func_name is not None - assert isinstance( - func_name, str) or util.is_top_level_function(func_name) - assert util.is_pickleable(static_kwargs) - - self._init_fields() - self.name = name - self.static_kwargs = util.EasyDict(static_kwargs) - - # Locate the user-specified network build function. - if util.is_top_level_function(func_name): - func_name = util.get_top_level_function_name(func_name) - module, self._build_func_name = util.get_module_from_obj_name( - func_name) - self._build_func = util.get_obj_from_module( - module, self._build_func_name) - assert callable(self._build_func) - - # Dig up source code for the module containing the build function. - self._build_module_src = _import_module_src.get(module, None) - if self._build_module_src is None: - self._build_module_src = inspect.getsource(module) - - # Init TensorFlow graph. - self._init_graph() - self.reset_own_vars() - - def _init_fields(self) -> None: - self.name = None - self.scope = None - self.static_kwargs = util.EasyDict() - self.components = util.EasyDict() - self.num_inputs = 0 - self.num_outputs = 0 - self.input_shapes = [[]] - self.output_shapes = [[]] - self.input_shape = [] - self.output_shape = [] - self.input_templates = [] - self.output_templates = [] - self.input_names = [] - self.output_names = [] - self.own_vars = OrderedDict() - self.vars = OrderedDict() - self.trainables = OrderedDict() - self.var_global_to_local = OrderedDict() - - # User-supplied build function that constructs the network. - self._build_func = None - self._build_func_name = None # Name of the build function. - # Full source code of the module containing the build function. - self._build_module_src = None - self._run_cache = dict() # Cached graph data for Network.run(). - - def _init_graph(self) -> None: - # Collect inputs. - self.input_names = [] - - for param in inspect.signature(self._build_func).parameters.values(): - if param.kind == param.POSITIONAL_OR_KEYWORD and param.default is param.empty: - self.input_names.append(param.name) - - self.num_inputs = len(self.input_names) - assert self.num_inputs >= 1 - - # Choose name and scope. - if self.name is None: - self.name = self._build_func_name - assert re.match("^[A-Za-z0-9_.\\-]*$", self.name) - with tf.name_scope(None): - self.scope = tf.get_default_graph().unique_name(self.name, mark_as_used=True) - - # Finalize build func kwargs. - build_kwargs = dict(self.static_kwargs) - build_kwargs["is_template_graph"] = True - build_kwargs["components"] = self.components - - # Build template graph. - # ignore surrounding scopes - with tfutil.absolute_variable_scope(self.scope, reuse=False), tfutil.absolute_name_scope(self.scope): - assert tf.get_variable_scope().name == self.scope - assert tf.get_default_graph().get_name_scope() == self.scope - # ignore surrounding control dependencies - with tf.control_dependencies(None): - self.input_templates = [tf.placeholder( - tf.float32, name=name) for name in self.input_names] - out_expr = self._build_func( - *self.input_templates, **build_kwargs) - - # Collect outputs. 
- assert tfutil.is_tf_expression(out_expr) or isinstance(out_expr, tuple) - self.output_templates = [out_expr] if tfutil.is_tf_expression( - out_expr) else list(out_expr) - self.num_outputs = len(self.output_templates) - assert self.num_outputs >= 1 - assert all(tfutil.is_tf_expression(t) for t in self.output_templates) - - # Perform sanity checks. - if any(t.shape.ndims is None for t in self.input_templates): - raise ValueError( - "Network input shapes not defined. Please call x.set_shape() for each input.") - if any(t.shape.ndims is None for t in self.output_templates): - raise ValueError( - "Network output shapes not defined. Please call x.set_shape() where applicable.") - if any(not isinstance(comp, Network) for comp in self.components.values()): - raise ValueError( - "Components of a Network must be Networks themselves.") - if len(self.components) != len(set(comp.name for comp in self.components.values())): - raise ValueError("Components of a Network must have unique names.") - - # List inputs and outputs. - self.input_shapes = [t.shape.as_list() for t in self.input_templates] - self.output_shapes = [t.shape.as_list() for t in self.output_templates] - self.input_shape = self.input_shapes[0] - self.output_shape = self.output_shapes[0] - self.output_names = [t.name.split( - "/")[-1].split(":")[0] for t in self.output_templates] - - # List variables. - self.own_vars = OrderedDict((var.name[len( - self.scope) + 1:].split(":")[0], var) for var in tf.global_variables(self.scope + "/")) - self.vars = OrderedDict(self.own_vars) - self.vars.update((comp.name + "/" + name, var) - for comp in self.components.values() for name, var in comp.vars.items()) - self.trainables = OrderedDict( - (name, var) for name, var in self.vars.items() if var.trainable) - self.var_global_to_local = OrderedDict( - (var.name.split(":")[0], name) for name, var in self.vars.items()) - - def reset_own_vars(self) -> None: - """Re-initialize all variables of this network, excluding sub-networks.""" - tfutil.run([var.initializer for var in self.own_vars.values()]) - - def reset_vars(self) -> None: - """Re-initialize all variables of this network, including sub-networks.""" - tfutil.run([var.initializer for var in self.vars.values()]) - - def reset_trainables(self) -> None: - """Re-initialize all trainable variables of this network, including sub-networks.""" - tfutil.run([var.initializer for var in self.trainables.values()]) - - def get_output_for(self, *in_expr: TfExpression, return_as_list: bool = False, **dynamic_kwargs) -> Union[TfExpression, List[TfExpression]]: - """Construct TensorFlow expression(s) for the output(s) of this network, given the input expression(s).""" - assert len(in_expr) == self.num_inputs - assert not all(expr is None for expr in in_expr) - - # Finalize build func kwargs. - build_kwargs = dict(self.static_kwargs) - build_kwargs.update(dynamic_kwargs) - build_kwargs["is_template_graph"] = False - build_kwargs["components"] = self.components - - # Build TensorFlow graph to evaluate the network. 
- with tfutil.absolute_variable_scope(self.scope, reuse=True), tf.name_scope(self.name): - assert tf.get_variable_scope().name == self.scope - valid_inputs = [expr for expr in in_expr if expr is not None] - final_inputs = [] - for expr, name, shape in zip(in_expr, self.input_names, self.input_shapes): - if expr is not None: - expr = tf.identity(expr, name=name) - else: - expr = tf.zeros([tf.shape(valid_inputs[0])[ - 0]] + shape[1:], name=name) - final_inputs.append(expr) - out_expr = self._build_func(*final_inputs, **build_kwargs) - - # Propagate input shapes back to the user-specified expressions. - for expr, final in zip(in_expr, final_inputs): - if isinstance(expr, tf.Tensor): - expr.set_shape(final.shape) - - # Express outputs in the desired format. - assert tfutil.is_tf_expression(out_expr) or isinstance(out_expr, tuple) - if return_as_list: - out_expr = [out_expr] if tfutil.is_tf_expression( - out_expr) else list(out_expr) - return out_expr - - def get_var_local_name(self, var_or_global_name: Union[TfExpression, str]) -> str: - """Get the local name of a given variable, without any surrounding name scopes.""" - assert tfutil.is_tf_expression( - var_or_global_name) or isinstance(var_or_global_name, str) - global_name = var_or_global_name if isinstance( - var_or_global_name, str) else var_or_global_name.name - return self.var_global_to_local[global_name] - - def find_var(self, var_or_local_name: Union[TfExpression, str]) -> TfExpression: - """Find variable by local or global name.""" - assert tfutil.is_tf_expression( - var_or_local_name) or isinstance(var_or_local_name, str) - return self.vars[var_or_local_name] if isinstance(var_or_local_name, str) else var_or_local_name - - def get_var(self, var_or_local_name: Union[TfExpression, str]) -> np.ndarray: - """Get the value of a given variable as NumPy array. - Note: This method is very inefficient -- prefer to use tflib.run(list_of_vars) whenever possible.""" - return self.find_var(var_or_local_name).eval() - - def set_var(self, var_or_local_name: Union[TfExpression, str], new_value: Union[int, float, np.ndarray]) -> None: - """Set the value of a given variable based on the given NumPy array. - Note: This method is very inefficient -- prefer to use tflib.set_vars() whenever possible.""" - tfutil.set_vars({self.find_var(var_or_local_name): new_value}) - - def __getstate__(self) -> dict: - """Pickle export.""" - state = dict() - state["version"] = 4 - state["name"] = self.name - state["static_kwargs"] = dict(self.static_kwargs) - state["components"] = dict(self.components) - state["build_module_src"] = self._build_module_src - state["build_func_name"] = self._build_func_name - state["variables"] = list( - zip(self.own_vars.keys(), tfutil.run(list(self.own_vars.values())))) - return state - - def __setstate__(self, state: dict) -> None: - """Pickle import.""" - # pylint: disable=attribute-defined-outside-init - tfutil.assert_tf_initialized() - self._init_fields() - - # Execute custom import handlers. - for handler in _import_handlers: - state = handler(state) - - # Set basic fields. - assert state["version"] in [2, 3, 4] - self.name = state["name"] - self.static_kwargs = util.EasyDict(state["static_kwargs"]) - self.components = util.EasyDict(state.get("components", {})) - self._build_module_src = state["build_module_src"] - self._build_func_name = state["build_func_name"] - - # Create temporary module from the imported source code. 
- module_name = "_tflib_network_import_" + uuid.uuid4().hex - module = types.ModuleType(module_name) - sys.modules[module_name] = module - _import_module_src[module] = self._build_module_src - exec(self._build_module_src, module.__dict__) # pylint: disable=exec-used - - # Locate network build function in the temporary module. - self._build_func = util.get_obj_from_module( - module, self._build_func_name) - assert callable(self._build_func) - - # Init TensorFlow graph. - self._init_graph() - self.reset_own_vars() - tfutil.set_vars({self.find_var(name): value for name, - value in state["variables"]}) - - def clone(self, name: str = None, **new_static_kwargs) -> "Network": - """Create a clone of this network with its own copy of the variables.""" - # pylint: disable=protected-access - net = object.__new__(Network) - net._init_fields() - net.name = name if name is not None else self.name - net.static_kwargs = util.EasyDict(self.static_kwargs) - net.static_kwargs.update(new_static_kwargs) - net._build_module_src = self._build_module_src - net._build_func_name = self._build_func_name - net._build_func = self._build_func - net._init_graph() - net.copy_vars_from(self) - return net - - def copy_own_vars_from(self, src_net: "Network") -> None: - """Copy the values of all variables from the given network, excluding sub-networks.""" - names = [name for name in self.own_vars.keys() - if name in src_net.own_vars] - tfutil.set_vars(tfutil.run( - {self.vars[name]: src_net.vars[name] for name in names})) - - def copy_vars_from(self, src_net: "Network") -> None: - """Copy the values of all variables from the given network, including sub-networks.""" - names = [name for name in self.vars.keys() if name in src_net.vars] - tfutil.set_vars(tfutil.run( - {self.vars[name]: src_net.vars[name] for name in names})) - - def copy_trainables_from(self, src_net: "Network") -> None: - """Copy the values of all trainable variables from the given network, including sub-networks.""" - names = [name for name in self.trainables.keys() - if name in src_net.trainables] - tfutil.set_vars(tfutil.run( - {self.vars[name]: src_net.vars[name] for name in names})) - - def convert(self, new_func_name: str, new_name: str = None, **new_static_kwargs) -> "Network": - """Create new network with the given parameters, and copy all variables from this network.""" - if new_name is None: - new_name = self.name - static_kwargs = dict(self.static_kwargs) - static_kwargs.update(new_static_kwargs) - net = Network(name=new_name, func_name=new_func_name, **static_kwargs) - net.copy_vars_from(self) - return net - - def setup_as_moving_average_of(self, src_net: "Network", beta: TfExpressionEx = 0.99, beta_nontrainable: TfExpressionEx = 0.0) -> tf.Operation: - """Construct a TensorFlow op that updates the variables of this network - to be slightly closer to those of the given network.""" - with tfutil.absolute_name_scope(self.scope + "/_MovingAvg"): - ops = [] - for name, var in self.vars.items(): - if name in src_net.vars: - cur_beta = beta if name in self.trainables else beta_nontrainable - new_value = tfutil.lerp(src_net.vars[name], var, cur_beta) - ops.append(var.assign(new_value)) - return tf.group(*ops) - - def run(self, - *in_arrays: Tuple[Union[np.ndarray, None], ...], - input_transform: dict = None, - output_transform: dict = None, - return_as_list: bool = False, - print_progress: bool = False, - minibatch_size: int = None, - num_gpus: int = 1, - assume_frozen: bool = False, - **dynamic_kwargs) -> Union[np.ndarray, Tuple[np.ndarray, ...], 
List[np.ndarray]]: - """Run this network for the given NumPy array(s), and return the output(s) as NumPy array(s). - - Args: - input_transform: A dict specifying a custom transformation to be applied to the input tensor(s) before evaluating the network. - The dict must contain a 'func' field that points to a top-level function. The function is called with the input - TensorFlow expression(s) as positional arguments. Any remaining fields of the dict will be passed in as kwargs. - output_transform: A dict specifying a custom transformation to be applied to the output tensor(s) after evaluating the network. - The dict must contain a 'func' field that points to a top-level function. The function is called with the output - TensorFlow expression(s) as positional arguments. Any remaining fields of the dict will be passed in as kwargs. - return_as_list: True = return a list of NumPy arrays, False = return a single NumPy array, or a tuple if there are multiple outputs. - print_progress: Print progress to the console? Useful for very large input arrays. - minibatch_size: Maximum minibatch size to use, None = disable batching. - num_gpus: Number of GPUs to use. - assume_frozen: Improve multi-GPU performance by assuming that the trainable parameters will remain changed between calls. - dynamic_kwargs: Additional keyword arguments to be passed into the network build function. - """ - assert len(in_arrays) == self.num_inputs - assert not all(arr is None for arr in in_arrays) - assert input_transform is None or util.is_top_level_function( - input_transform["func"]) - assert output_transform is None or util.is_top_level_function( - output_transform["func"]) - output_transform, dynamic_kwargs = _handle_legacy_output_transforms( - output_transform, dynamic_kwargs) - num_items = in_arrays[0].shape[0] - if minibatch_size is None: - minibatch_size = num_items - - # Construct unique hash key from all arguments that affect the TensorFlow graph. - key = dict(input_transform=input_transform, output_transform=output_transform, - num_gpus=num_gpus, assume_frozen=assume_frozen, dynamic_kwargs=dynamic_kwargs) - - def unwind_key(obj): - if isinstance(obj, dict): - return [(key, unwind_key(value)) for key, value in sorted(obj.items())] - if callable(obj): - return util.get_top_level_function_name(obj) - return obj - key = repr(unwind_key(key)) - - # Build graph. 
- if key not in self._run_cache: - with tfutil.absolute_name_scope(self.scope + "/_Run"), tf.control_dependencies(None): - with tf.device("/cpu:0"): - in_expr = [tf.placeholder(tf.float32, name=name) - for name in self.input_names] - in_split = list( - zip(*[tf.split(x, num_gpus) for x in in_expr])) - - out_split = [] - for gpu in range(num_gpus): - with tf.device("/gpu:%d" % gpu): - net_gpu = self.clone() if assume_frozen else self - in_gpu = in_split[gpu] - - if input_transform is not None: - in_kwargs = dict(input_transform) - in_gpu = in_kwargs.pop("func")( - *in_gpu, **in_kwargs) - in_gpu = [in_gpu] if tfutil.is_tf_expression( - in_gpu) else list(in_gpu) - - assert len(in_gpu) == self.num_inputs - out_gpu = net_gpu.get_output_for( - *in_gpu, return_as_list=True, **dynamic_kwargs) - - if output_transform is not None: - out_kwargs = dict(output_transform) - out_gpu = out_kwargs.pop("func")( - *out_gpu, **out_kwargs) - out_gpu = [out_gpu] if tfutil.is_tf_expression( - out_gpu) else list(out_gpu) - - assert len(out_gpu) == self.num_outputs - out_split.append(out_gpu) - - with tf.device("/cpu:0"): - out_expr = [tf.concat(outputs, axis=0) - for outputs in zip(*out_split)] - self._run_cache[key] = in_expr, out_expr - - # Run minibatches. - in_expr, out_expr = self._run_cache[key] - out_arrays = [np.empty( - [num_items] + expr.shape.as_list()[1:], expr.dtype.name) for expr in out_expr] - - for mb_begin in range(0, num_items, minibatch_size): - if print_progress: - print("\r%d / %d" % (mb_begin, num_items), end="") - - mb_end = min(mb_begin + minibatch_size, num_items) - mb_num = mb_end - mb_begin - mb_in = [src[mb_begin: mb_end] if src is not None else np.zeros( - [mb_num] + shape[1:]) for src, shape in zip(in_arrays, self.input_shapes)] - mb_out = tf.get_default_session().run(out_expr, dict(zip(in_expr, mb_in))) - - for dst, src in zip(out_arrays, mb_out): - dst[mb_begin: mb_end] = src - - # Done. - if print_progress: - print("\r%d / %d" % (num_items, num_items)) - - if not return_as_list: - out_arrays = out_arrays[0] if len( - out_arrays) == 1 else tuple(out_arrays) - return out_arrays - - def list_ops(self) -> List[TfExpression]: - include_prefix = self.scope + "/" - exclude_prefix = include_prefix + "_" - ops = tf.get_default_graph().get_operations() - ops = [op for op in ops if op.name.startswith(include_prefix)] - ops = [op for op in ops if not op.name.startswith(exclude_prefix)] - return ops - - def list_layers(self) -> List[Tuple[str, TfExpression, List[TfExpression]]]: - """Returns a list of (layer_name, output_expr, trainable_vars) tuples corresponding to - individual layers of the network. Mainly intended to be used for reporting.""" - layers = [] - - def recurse(scope, parent_ops, parent_vars, level): - # Ignore specific patterns. - if any(p in scope for p in ["/Shape", "/strided_slice", "/Cast", "/concat", "/Assign"]): - return - - # Filter ops and vars by scope. - global_prefix = scope + "/" - local_prefix = global_prefix[len(self.scope) + 1:] - cur_ops = [op for op in parent_ops if op.name.startswith( - global_prefix) or op.name == global_prefix[:-1]] - cur_vars = [(name, var) for name, var in parent_vars if name.startswith( - local_prefix) or name == local_prefix[:-1]] - if not cur_ops and not cur_vars: - return - - # Filter out all ops related to variables. 
- for var in [op for op in cur_ops if op.type.startswith("Variable")]: - var_prefix = var.name + "/" - cur_ops = [ - op for op in cur_ops if not op.name.startswith(var_prefix)] - - # Scope does not contain ops as immediate children => recurse deeper. - contains_direct_ops = any("/" not in op.name[len(global_prefix):] and op.type not in [ - "Identity", "Cast", "Transpose"] for op in cur_ops) - if (level == 0 or not contains_direct_ops) and (len(cur_ops) + len(cur_vars)) > 1: - visited = set() - for rel_name in [op.name[len(global_prefix):] for op in cur_ops] + [name[len(local_prefix):] for name, _var in cur_vars]: - token = rel_name.split("/")[0] - if token not in visited: - recurse(global_prefix + token, - cur_ops, cur_vars, level + 1) - visited.add(token) - return - - # Report layer. - layer_name = scope[len(self.scope) + 1:] - layer_output = cur_ops[-1].outputs[0] if cur_ops else cur_vars[-1][1] - layer_trainables = [var for _name, - var in cur_vars if var.trainable] - layers.append((layer_name, layer_output, layer_trainables)) - - recurse(self.scope, self.list_ops(), list(self.vars.items()), 0) - return layers - - def print_layers(self, title: str = None, hide_layers_with_no_params: bool = False) -> None: - """Print a summary table of the network structure.""" - rows = [[title if title is not None else self.name, - "Params", "OutputShape", "WeightShape"]] - rows += [["---"] * 4] - total_params = 0 - - for layer_name, layer_output, layer_trainables in self.list_layers(): - num_params = sum(int(np.prod(var.shape.as_list())) - for var in layer_trainables) - weights = [ - var for var in layer_trainables if var.name.endswith("/weight:0")] - weights.sort(key=lambda x: len(x.name)) - if len(weights) == 0 and len(layer_trainables) == 1: - weights = layer_trainables - total_params += num_params - - if not hide_layers_with_no_params or num_params != 0: - num_params_str = str(num_params) if num_params > 0 else "-" - output_shape_str = str(layer_output.shape) - weight_shape_str = str(weights[0].shape) if len( - weights) >= 1 else "-" - rows += [[layer_name, num_params_str, - output_shape_str, weight_shape_str]] - - rows += [["---"] * 4] - rows += [["Total", str(total_params), "", ""]] - - widths = [max(len(cell) for cell in column) for column in zip(*rows)] - print() - for row in rows: - print(" ".join(cell + " " * (width - len(cell)) - for cell, width in zip(row, widths))) - print() - - def setup_weight_histograms(self, title: str = None) -> None: - """Construct summary ops to include histograms of all trainable parameters in TensorBoard.""" - if title is None: - title = self.name - - with tf.name_scope(None), tf.device(None), tf.control_dependencies(None): - for local_name, var in self.trainables.items(): - if "/" in local_name: - p = local_name.split("/") - name = title + "_" + p[-1] + "/" + "_".join(p[:-1]) - else: - name = title + "_toplevel/" + local_name - - tf.summary.histogram(name, var) - -# ---------------------------------------------------------------------------- -# Backwards-compatible emulation of legacy output transformation in Network.run(). 
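# A minimal sketch of the mapping performed here (the call itself is illustrative, not taken
# from this file): a legacy invocation such as
#     net.run(latents, out_mul=127.5, out_add=127.5, out_dtype=np.uint8)
# is rewritten by _handle_legacy_output_transforms() into the equivalent of
#     net.run(latents, output_transform=dict(func=_legacy_output_transform_func,
#                                            out_mul=127.5, out_add=127.5, out_dtype=np.uint8))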
- - -_print_legacy_warning = True - - -def _handle_legacy_output_transforms(output_transform, dynamic_kwargs): - global _print_legacy_warning - legacy_kwargs = ["out_mul", "out_add", "out_shrink", "out_dtype"] - if not any(kwarg in dynamic_kwargs for kwarg in legacy_kwargs): - return output_transform, dynamic_kwargs - - if _print_legacy_warning: - _print_legacy_warning = False - print() - print("WARNING: Old-style output transformations in Network.run() are deprecated.") - print("Consider using 'output_transform=dict(func=tflib.convert_images_to_uint8)'") - print("instead of 'out_mul=127.5, out_add=127.5, out_dtype=np.uint8'.") - print() - assert output_transform is None - - new_kwargs = dict(dynamic_kwargs) - new_transform = {kwarg: new_kwargs.pop( - kwarg) for kwarg in legacy_kwargs if kwarg in dynamic_kwargs} - new_transform["func"] = _legacy_output_transform_func - return new_transform, new_kwargs - - -def _legacy_output_transform_func(*expr, out_mul=1.0, out_add=0.0, out_shrink=1, out_dtype=None): - if out_mul != 1.0: - expr = [x * out_mul for x in expr] - - if out_add != 0.0: - expr = [x + out_add for x in expr] - - if out_shrink > 1: - ksize = [1, 1, out_shrink, out_shrink] - expr = [tf.nn.avg_pool(x, ksize=ksize, strides=ksize, - padding="VALID", data_format="NCHW") for x in expr] - - if out_dtype is not None: - if tf.as_dtype(out_dtype).is_integer: - expr = [tf.round(x) for x in expr] - expr = [tf.saturate_cast(x, out_dtype) for x in expr] - return expr diff --git a/spaces/ELEVEN-001/ChatToFiles/app.py b/spaces/ELEVEN-001/ChatToFiles/app.py deleted file mode 100644 index dcb2c1173b07d39faa10f06db66a4508aee11036..0000000000000000000000000000000000000000 --- a/spaces/ELEVEN-001/ChatToFiles/app.py +++ /dev/null @@ -1,213 +0,0 @@ -import urllib.request -import fitz -import re -import numpy as np -import tensorflow_hub as hub -import openai -import gradio as gr -import os -import dotenv -from sklearn.neighbors import NearestNeighbors - - -def download_pdf(url, output_path): - urllib.request.urlretrieve(url, output_path) - - -def preprocess(text): - text = text.replace('\n', ' ') - text = re.sub('\s+', ' ', text) - return text - - -def pdf_to_text(path, start_page=1, end_page=None): - doc = fitz.open(path) - total_pages = doc.page_count - - if end_page is None: - end_page = total_pages - - text_list = [] - - for i in range(start_page-1, end_page): - text = doc.load_page(i).get_text("text") - text = preprocess(text) - text_list.append(text) - - doc.close() - return text_list - - -def text_to_chunks(texts, word_length=150, start_page=1): - text_toks = [t.split(' ') for t in texts] - page_nums = [] - chunks = [] - - for idx, words in enumerate(text_toks): - for i in range(0, len(words), word_length): - chunk = words[i:i+word_length] - if (i+word_length) > len(words) and (len(chunk) < word_length) and ( - len(text_toks) != (idx+1)): - text_toks[idx+1] = chunk + text_toks[idx+1] - continue - chunk = ' '.join(chunk).strip() - chunk = f'[Page no. 
{idx+start_page}]' + ' ' + '"' + chunk + '"' - chunks.append(chunk) - return chunks - - -class SemanticSearch: - - def __init__(self): - self.use = hub.load( - 'https://tfhub.dev/google/universal-sentence-encoder/4') - self.fitted = False - - def fit(self, data, batch=1000, n_neighbors=5): - self.data = data - self.embeddings = self.get_text_embedding(data, batch=batch) - n_neighbors = min(n_neighbors, len(self.embeddings)) - self.nn = NearestNeighbors(n_neighbors=n_neighbors) - self.nn.fit(self.embeddings) - self.fitted = True - - def __call__(self, text, return_data=True): - inp_emb = self.use([text]) - neighbors = self.nn.kneighbors(inp_emb, return_distance=False)[0] - - if return_data: - return [self.data[i] for i in neighbors] - else: - return neighbors - - def get_text_embedding(self, texts, batch=1000): - embeddings = [] - for i in range(0, len(texts), batch): - text_batch = texts[i:(i+batch)] - emb_batch = self.use(text_batch) - embeddings.append(emb_batch) - embeddings = np.vstack(embeddings) - return embeddings - - -def load_recommender(path, start_page=1): - global recommender - texts = pdf_to_text(path, start_page=start_page) - chunks = text_to_chunks(texts, start_page=start_page) - recommender.fit(chunks) - return 'Corpus Loaded.' - - -def generate_text(openAI_key, prompt, engine="text-davinci-003"): - openai.api_key = openAI_key - completions = openai.Completion.create( - engine=engine, - prompt=prompt, - max_tokens=512, - n=1, - stop=None, - temperature=1, - ) - message = completions.choices[0].text - return message - - -def generate_answer(question, openAI_key): - topn_chunks = recommender(question) - prompt = "" - prompt += 'search results:\n\n' - for c in topn_chunks: - prompt += c + '\n\n' - - prompt += "Instructions: Compose a comprehensive reply to the query using the search results given."\ - "Cite each reference using [ Page Number] notation (every result has this number at the beginning)."\ - "Citation should be done at the end of each sentence."\ - "If the search results mention multiple subjects with the same name, create separate answers for each."\ - "Only include information found in the results and don't add any additional information."\ - "Make sure the answer is correct and don't output false content."\ - "If the text does not relate to the query, simply state 'Text Not Found in PDF'."\ - "Ignore outlier search results which have nothing to do with the question."\ - "Only answer what is asked."\ - "The answer should be short and concise."\ - "Answer step-by-step."\ - "To answer the query, please follow these instructions:"\ - "Please carefully read through the search results provided and compose a clear and concise response."\ - "When citing information from the search results, please include the page number in square brackets after the relevant text."\ - "If the search results mention multiple subjects with the same name, create separate answers for each."\ - "Only include information found in the search results and avoid adding any additional information."\ - "Be sure that your response is accurate and does not contain any false content."\ - "If the query cannot be answered using the provided search results, please state [Text Not Found in PDF.]"\ - "Please disregard any irrelevant search results and only include information that directly answers the question."\ - "Your response should be step-by-step and easy to understand."\ - "Good luck!" 
- - - prompt += f"Query: {question}\nAnswer:" - answer = generate_text(openAI_key, prompt, "text-davinci-003") - return answer - - -def question_answer(url, file, question, openAI_key): - openAI_key = os.environ.get('OPENAI_KEY') - if url.strip() == '' and file == None: - return '[ERROR]: Both URL and PDF is empty. Provide atleast one.' - - if url.strip() != '' and file != None: - return '[ERROR]: Both URL and PDF is provided. Please provide only one (eiter URL or PDF).' - - if url.strip() != '': - glob_url = url - download_pdf(glob_url, 'corpus.pdf') - load_recommender('corpus.pdf') - - else: - old_file_name = file.name - file_name = file.name - file_name = file_name[:-12] + file_name[-4:] - - # Rename the file - os.rename(old_file_name, file_name) - load_recommender(file_name) - - # Delete the existing file if it exists - if os.path.exists(file_name): - os.remove(file_name) - - if question.strip() == '': - return '[ERROR]: Question field is empty' - - return generate_answer(question, openAI_key) - - -recommender = SemanticSearch() - -title = 'ChatToFiles' -description = """ ChatToFiles is a cutting-edge tool that facilitates conversation with PDF files utilizing Universal Sentence Encoder and Open AI technology. This tool is particularly advantageous as it delivers more reliable responses than other comparable tools, thanks to its superior embeddings, which eliminate hallucination errors. Additionally, when providing answers, PDF GPT can cite the exact page number where the relevant information is located within the PDF file, which enhances the credibility of the responses and expedites the process of finding pertinent information.""" - -with gr.Blocks() as iface: - - gr.Markdown(f'

    {title}

    ') - gr.Markdown(description) - - with gr.Row(): - - with gr.Group(): - url = gr.Textbox(label='Enter PDF URL here', placeholder='https://docs.pdf') - gr.Markdown( - "

    ----------------------------------------------------------------------------------------------------------------------------------------------------

    ") - file = gr.File(label='Drop PDF here', file_types=['*']) - question = gr.Textbox( - label='Enter your question here', placeholder='Type your question here') - btn = gr.Button(value='Submit') - btn.style(full_width=True) - - with gr.Group(): - answer = gr.Textbox(label='The answer to your question is :', - lines=5, placeholder='Your answer here...') - - btn.click(question_answer, inputs=[ - url, file, question], outputs=[answer]) -# openai.api_key = os.getenv('Your_Key_Here') -dotenv.load_dotenv() -iface.launch() -# iface.launch(share=True) \ No newline at end of file diff --git a/spaces/Eddycrack864/Applio-Inference/mdx.py b/spaces/Eddycrack864/Applio-Inference/mdx.py deleted file mode 100644 index 4cc7c08b37bc371294f2f82b3382424a5455b7c2..0000000000000000000000000000000000000000 --- a/spaces/Eddycrack864/Applio-Inference/mdx.py +++ /dev/null @@ -1,228 +0,0 @@ -import torch -import onnxruntime as ort -from tqdm import tqdm -import warnings -import numpy as np -import hashlib -import queue -import threading - -warnings.filterwarnings("ignore") - -class MDX_Model: - def __init__(self, device, dim_f, dim_t, n_fft, hop=1024, stem_name=None, compensation=1.000): - self.dim_f = dim_f - self.dim_t = dim_t - self.dim_c = 4 - self.n_fft = n_fft - self.hop = hop - self.stem_name = stem_name - self.compensation = compensation - - self.n_bins = self.n_fft//2+1 - self.chunk_size = hop * (self.dim_t-1) - self.window = torch.hann_window(window_length=self.n_fft, periodic=True).to(device) - - out_c = self.dim_c - - self.freq_pad = torch.zeros([1, out_c, self.n_bins-self.dim_f, self.dim_t]).to(device) - - def stft(self, x): - x = x.reshape([-1, self.chunk_size]) - x = torch.stft(x, n_fft=self.n_fft, hop_length=self.hop, window=self.window, center=True, return_complex=True) - x = torch.view_as_real(x) - x = x.permute([0,3,1,2]) - x = x.reshape([-1,2,2,self.n_bins,self.dim_t]).reshape([-1,4,self.n_bins,self.dim_t]) - return x[:,:,:self.dim_f] - - def istft(self, x, freq_pad=None): - freq_pad = self.freq_pad.repeat([x.shape[0],1,1,1]) if freq_pad is None else freq_pad - x = torch.cat([x, freq_pad], -2) - # c = 4*2 if self.target_name=='*' else 2 - x = x.reshape([-1,2,2,self.n_bins,self.dim_t]).reshape([-1,2,self.n_bins,self.dim_t]) - x = x.permute([0,2,3,1]) - x = x.contiguous() - x = torch.view_as_complex(x) - x = torch.istft(x, n_fft=self.n_fft, hop_length=self.hop, window=self.window, center=True) - return x.reshape([-1,2,self.chunk_size]) - - -class MDX: - - DEFAULT_SR = 44100 - # Unit: seconds - DEFAULT_CHUNK_SIZE = 0 * DEFAULT_SR - DEFAULT_MARGIN_SIZE = 1 * DEFAULT_SR - - DEFAULT_PROCESSOR = 0 - - def __init__(self, model_path:str, params:MDX_Model, processor=DEFAULT_PROCESSOR): - - # Set the device and the provider (CPU or CUDA) - self.device = torch.device(f'cuda:{processor}') if processor >= 0 else torch.device('cpu') - self.provider = ['CUDAExecutionProvider'] if processor >= 0 else ['CPUExecutionProvider'] - - self.model = params - - # Load the ONNX model using ONNX Runtime - self.ort = ort.InferenceSession(model_path, providers=self.provider) - # Preload the model for faster performance - self.ort.run(None, {'input':torch.rand(1, 4, params.dim_f, params.dim_t).numpy()}) - self.process = lambda spec:self.ort.run(None, {'input': spec.cpu().numpy()})[0] - - self.prog = None - - @staticmethod - def get_hash(model_path): - try: - with open(model_path, 'rb') as f: - f.seek(- 10000 * 1024, 2) - model_hash = hashlib.md5(f.read()).hexdigest() - except: - model_hash = 
hashlib.md5(open(model_path,'rb').read()).hexdigest() - - return model_hash - - @staticmethod - def segment(wave, combine=True, chunk_size=DEFAULT_CHUNK_SIZE, margin_size=DEFAULT_MARGIN_SIZE): - """ - Segment or join segmented wave array - - Args: - wave: (np.array) Wave array to be segmented or joined - combine: (bool) If True, combines segmented wave array. If False, segments wave array. - chunk_size: (int) Size of each segment (in samples) - margin_size: (int) Size of margin between segments (in samples) - - Returns: - numpy array: Segmented or joined wave array - """ - - if combine: - processed_wave = None # Initializing as None instead of [] for later numpy array concatenation - for segment_count, segment in enumerate(wave): - start = 0 if segment_count == 0 else margin_size - end = None if segment_count == len(wave)-1 else -margin_size - if margin_size == 0: - end = None - if processed_wave is None: # Create array for first segment - processed_wave = segment[:, start:end] - else: # Concatenate to existing array for subsequent segments - processed_wave = np.concatenate((processed_wave, segment[:, start:end]), axis=-1) - - else: - processed_wave = [] - sample_count = wave.shape[-1] - - if chunk_size <= 0 or chunk_size > sample_count: - chunk_size = sample_count - - if margin_size > chunk_size: - margin_size = chunk_size - - for segment_count, skip in enumerate(range(0, sample_count, chunk_size)): - - margin = 0 if segment_count == 0 else margin_size - end = min(skip+chunk_size+margin_size, sample_count) - start = skip-margin - - cut = wave[:,start:end].copy() - processed_wave.append(cut) - - if end == sample_count: - break - - return processed_wave - - def pad_wave(self, wave): - """ - Pad the wave array to match the required chunk size - - Args: - wave: (np.array) Wave array to be padded - - Returns: - tuple: (padded_wave, pad, trim) - - padded_wave: Padded wave array - - pad: Number of samples that were padded - - trim: Number of samples that were trimmed - """ - n_sample = wave.shape[1] - trim = self.model.n_fft//2 - gen_size = self.model.chunk_size-2*trim - pad = gen_size - n_sample%gen_size - - # Padded wave - wave_p = np.concatenate((np.zeros((2,trim)), wave, np.zeros((2,pad)), np.zeros((2,trim))), 1) - - mix_waves = [] - for i in range(0, n_sample+pad, gen_size): - waves = np.array(wave_p[:, i:i+self.model.chunk_size]) - mix_waves.append(waves) - - mix_waves = torch.tensor(mix_waves, dtype=torch.float32).to(self.device) - - return mix_waves, pad, trim - - def _process_wave(self, mix_waves, trim, pad, q:queue.Queue, _id:int): - """ - Process each wave segment in a multi-threaded environment - - Args: - mix_waves: (torch.Tensor) Wave segments to be processed - trim: (int) Number of samples trimmed during padding - pad: (int) Number of samples padded during padding - q: (queue.Queue) Queue to hold the processed wave segments - _id: (int) Identifier of the processed wave segment - - Returns: - numpy array: Processed wave segment - """ - mix_waves = mix_waves.split(1) - with torch.no_grad(): - pw = [] - for mix_wave in mix_waves: - self.prog.update() - spec = self.model.stft(mix_wave) - processed_spec = torch.tensor(self.process(spec)) - processed_wav = self.model.istft(processed_spec.to(self.device)) - processed_wav = processed_wav[:,:,trim:-trim].transpose(0,1).reshape(2, -1).cpu().numpy() - pw.append(processed_wav) - processed_signal = np.concatenate(pw, axis=-1)[:, :-pad] - q.put({_id:processed_signal}) - return processed_signal - - def process_wave(self, wave:np.array, 
mt_threads=1): - """ - Process the wave array in a multi-threaded environment - - Args: - wave: (np.array) Wave array to be processed - mt_threads: (int) Number of threads to be used for processing - - Returns: - numpy array: Processed wave array - """ - self.prog = tqdm(total=0) - chunk = wave.shape[-1]//mt_threads - waves = self.segment(wave, False, chunk) - - # Create a queue to hold the processed wave segments - q = queue.Queue() - threads = [] - for c, batch in enumerate(waves): - mix_waves, pad, trim = self.pad_wave(batch) - self.prog.total = len(mix_waves)*mt_threads - thread = threading.Thread(target=self._process_wave, args=(mix_waves, trim, pad, q, c)) - thread.start() - threads.append(thread) - for thread in threads: - thread.join() - self.prog.close() - - processed_batches = [] - while not q.empty(): - processed_batches.append(q.get()) - processed_batches = [list(wave.values())[0] for wave in sorted(processed_batches, key=lambda d: list(d.keys())[0])] - assert len(processed_batches) == len(waves), 'Incomplete processed batches, please reduce batch size!' - return self.segment(processed_batches, True, chunk) \ No newline at end of file diff --git a/spaces/EuroPython2022/mmocr-demo/configs/_base_/schedules/schedule_adam_step_20e.py b/spaces/EuroPython2022/mmocr-demo/configs/_base_/schedules/schedule_adam_step_20e.py deleted file mode 100644 index 81fb92cb4a35491493a4a76e22c86c5b804ec329..0000000000000000000000000000000000000000 --- a/spaces/EuroPython2022/mmocr-demo/configs/_base_/schedules/schedule_adam_step_20e.py +++ /dev/null @@ -1,14 +0,0 @@ -# optimizer -optimizer = dict(type='Adam', lr=1e-4) -optimizer_config = dict(grad_clip=None) -# learning policy -lr_config = dict( - policy='step', - step=[16, 18], - warmup='linear', - warmup_iters=1, - warmup_ratio=0.001, - warmup_by_epoch=True) -# running settings -runner = dict(type='EpochBasedRunner', max_epochs=20) -checkpoint_config = dict(interval=1) diff --git a/spaces/FathomNet/UWROV_Deepsea_Detector/inference.py b/spaces/FathomNet/UWROV_Deepsea_Detector/inference.py deleted file mode 100644 index eae89f25662e2f977b48794009c23fe0ff67c844..0000000000000000000000000000000000000000 --- a/spaces/FathomNet/UWROV_Deepsea_Detector/inference.py +++ /dev/null @@ -1,69 +0,0 @@ -import cv2 -import glob -import numpy as np -import torch -import yolov5 -from typing import Dict, Tuple, Union, List, Optional - - -# ----------------------------------------------------------------------------- -# Configs -# ----------------------------------------------------------------------------- - -model_path = "models/deepsea-detector.pt" - - -# ----------------------------------------------------------------------------- -# YOLOv5 class -# ----------------------------------------------------------------------------- - -class YOLO: - """Wrapper class for loading and running YOLO model""" - - def __init__(self, model_path: str, device: Optional[str] = None): - - # load model - self.model = yolov5.load(model_path, device=device) - - def __call__( - self, - img: Union[str, np.ndarray], - conf_threshold: float = 0.25, - iou_threshold: float = 0.45, - image_size: int = 720, - classes: Optional[List[int]] = None) -> torch.Tensor: - self.model.conf = conf_threshold - self.model.iou = iou_threshold - - if classes is not None: - self.model.classes = classes - - # pylint: disable=not-callable - detections = self.model(img, size=image_size) - - return detections - - -def run_inference(image_path): - """Helper function to execute the inference.""" - - predictions = 
model(image_path) - - return predictions - - -# ----------------------------------------------------------------------------- -# Model Creation -# ----------------------------------------------------------------------------- -model = YOLO(model_path, device='cpu') - -if __name__ == "__main__": - - # For demo purposes: run through a couple of test - # images and then output the predictions in a folder. - test_images = glob.glob("images/*.png") - - for test_image in test_images: - predictions = run_inference(test_image) - - print("Done.") diff --git a/spaces/Fengbinbin/gpt-academic/docs/waifu_plugin/waifu.css b/spaces/Fengbinbin/gpt-academic/docs/waifu_plugin/waifu.css deleted file mode 100644 index 42639df0794e46fc58f66e2c772e2bf9ba605eed..0000000000000000000000000000000000000000 --- a/spaces/Fengbinbin/gpt-academic/docs/waifu_plugin/waifu.css +++ /dev/null @@ -1,290 +0,0 @@ -.waifu { - position: fixed; - bottom: 0; - z-index: 1; - font-size: 0; - -webkit-transform: translateY(3px); - transform: translateY(3px); -} -.waifu:hover { - -webkit-transform: translateY(0); - transform: translateY(0); -} -.waifu-tips { - opacity: 0; - margin: -20px 20px; - padding: 5px 10px; - border: 1px solid rgba(224, 186, 140, 0.62); - border-radius: 12px; - background-color: rgba(236, 217, 188, 0.5); - box-shadow: 0 3px 15px 2px rgba(191, 158, 118, 0.2); - text-overflow: ellipsis; - overflow: hidden; - position: absolute; - animation-delay: 5s; - animation-duration: 50s; - animation-iteration-count: infinite; - animation-name: shake; - animation-timing-function: ease-in-out; -} -.waifu-tool { - display: none; - color: #aaa; - top: 50px; - right: 10px; - position: absolute; -} -.waifu:hover .waifu-tool { - display: block; -} -.waifu-tool span { - display: block; - cursor: pointer; - color: #5b6c7d; - transition: 0.2s; -} -.waifu-tool span:hover { - color: #34495e; -} -.waifu #live2d{ - position: relative; -} - -@keyframes shake { - 2% { - transform: translate(0.5px, -1.5px) rotate(-0.5deg); - } - - 4% { - transform: translate(0.5px, 1.5px) rotate(1.5deg); - } - - 6% { - transform: translate(1.5px, 1.5px) rotate(1.5deg); - } - - 8% { - transform: translate(2.5px, 1.5px) rotate(0.5deg); - } - - 10% { - transform: translate(0.5px, 2.5px) rotate(0.5deg); - } - - 12% { - transform: translate(1.5px, 1.5px) rotate(0.5deg); - } - - 14% { - transform: translate(0.5px, 0.5px) rotate(0.5deg); - } - - 16% { - transform: translate(-1.5px, -0.5px) rotate(1.5deg); - } - - 18% { - transform: translate(0.5px, 0.5px) rotate(1.5deg); - } - - 20% { - transform: translate(2.5px, 2.5px) rotate(1.5deg); - } - - 22% { - transform: translate(0.5px, -1.5px) rotate(1.5deg); - } - - 24% { - transform: translate(-1.5px, 1.5px) rotate(-0.5deg); - } - - 26% { - transform: translate(1.5px, 0.5px) rotate(1.5deg); - } - - 28% { - transform: translate(-0.5px, -0.5px) rotate(-0.5deg); - } - - 30% { - transform: translate(1.5px, -0.5px) rotate(-0.5deg); - } - - 32% { - transform: translate(2.5px, -1.5px) rotate(1.5deg); - } - - 34% { - transform: translate(2.5px, 2.5px) rotate(-0.5deg); - } - - 36% { - transform: translate(0.5px, -1.5px) rotate(0.5deg); - } - - 38% { - transform: translate(2.5px, -0.5px) rotate(-0.5deg); - } - - 40% { - transform: translate(-0.5px, 2.5px) rotate(0.5deg); - } - - 42% { - transform: translate(-1.5px, 2.5px) rotate(0.5deg); - } - - 44% { - transform: translate(-1.5px, 1.5px) rotate(0.5deg); - } - - 46% { - transform: translate(1.5px, -0.5px) rotate(-0.5deg); - } - - 48% { - transform: translate(2.5px, -0.5px) 
rotate(0.5deg); - } - - 50% { - transform: translate(-1.5px, 1.5px) rotate(0.5deg); - } - - 52% { - transform: translate(-0.5px, 1.5px) rotate(0.5deg); - } - - 54% { - transform: translate(-1.5px, 1.5px) rotate(0.5deg); - } - - 56% { - transform: translate(0.5px, 2.5px) rotate(1.5deg); - } - - 58% { - transform: translate(2.5px, 2.5px) rotate(0.5deg); - } - - 60% { - transform: translate(2.5px, -1.5px) rotate(1.5deg); - } - - 62% { - transform: translate(-1.5px, 0.5px) rotate(1.5deg); - } - - 64% { - transform: translate(-1.5px, 1.5px) rotate(1.5deg); - } - - 66% { - transform: translate(0.5px, 2.5px) rotate(1.5deg); - } - - 68% { - transform: translate(2.5px, -1.5px) rotate(1.5deg); - } - - 70% { - transform: translate(2.5px, 2.5px) rotate(0.5deg); - } - - 72% { - transform: translate(-0.5px, -1.5px) rotate(1.5deg); - } - - 74% { - transform: translate(-1.5px, 2.5px) rotate(1.5deg); - } - - 76% { - transform: translate(-1.5px, 2.5px) rotate(1.5deg); - } - - 78% { - transform: translate(-1.5px, 2.5px) rotate(0.5deg); - } - - 80% { - transform: translate(-1.5px, 0.5px) rotate(-0.5deg); - } - - 82% { - transform: translate(-1.5px, 0.5px) rotate(-0.5deg); - } - - 84% { - transform: translate(-0.5px, 0.5px) rotate(1.5deg); - } - - 86% { - transform: translate(2.5px, 1.5px) rotate(0.5deg); - } - - 88% { - transform: translate(-1.5px, 0.5px) rotate(1.5deg); - } - - 90% { - transform: translate(-1.5px, -0.5px) rotate(-0.5deg); - } - - 92% { - transform: translate(-1.5px, -1.5px) rotate(1.5deg); - } - - 94% { - transform: translate(0.5px, 0.5px) rotate(-0.5deg); - } - - 96% { - transform: translate(2.5px, -0.5px) rotate(-0.5deg); - } - - 98% { - transform: translate(-1.5px, -1.5px) rotate(-0.5deg); - } - - 0%, 100% { - transform: translate(0, 0) rotate(0); - } -} -@font-face { - font-family: 'Flat-UI-Icons'; - src: url('flat-ui-icons-regular.eot'); - src: url('flat-ui-icons-regular.eot?#iefix') format('embedded-opentype'), url('flat-ui-icons-regular.woff') format('woff'), url('flat-ui-icons-regular.ttf') format('truetype'), url('flat-ui-icons-regular.svg#flat-ui-icons-regular') format('svg'); -} -[class^="fui-"], -[class*="fui-"] { - font-family: 'Flat-UI-Icons'; - speak: none; - font-style: normal; - font-weight: normal; - font-variant: normal; - text-transform: none; - -webkit-font-smoothing: antialiased; - -moz-osx-font-smoothing: grayscale; -} -.fui-cross:before { - content: "\e609"; -} -.fui-info-circle:before { - content: "\e60f"; -} -.fui-photo:before { - content: "\e62a"; -} -.fui-eye:before { - content: "\e62c"; -} -.fui-chat:before { - content: "\e62d"; -} -.fui-home:before { - content: "\e62e"; -} -.fui-user:before { - content: "\e631"; -} \ No newline at end of file diff --git a/spaces/GAIR/Factool/factool/knowledge_qa/pipeline.py b/spaces/GAIR/Factool/factool/knowledge_qa/pipeline.py deleted file mode 100644 index 573824f4621e05e1fe70296a45d024255b528f30..0000000000000000000000000000000000000000 --- a/spaces/GAIR/Factool/factool/knowledge_qa/pipeline.py +++ /dev/null @@ -1,220 +0,0 @@ -import json -import yaml -import os -import time -import math -import pdb -from typing import List, Dict - -from factool.knowledge_qa.tool import google_search -from factool.knowledge_qa.tool import local_search -from factool.utils.base.pipeline import pipeline - -class knowledge_qa_pipeline(pipeline): - def __init__(self, foundation_model, snippet_cnt, search_type, data_link=None, Embed_link=None): - super().__init__('knowledge_qa', foundation_model) - if(search_type == 'online'): - self.tool = 
google_search(snippet_cnt = snippet_cnt) - elif(search_type == 'local'): - self.tool = local_search(snippet_cnt = snippet_cnt, data_link=data_link, embedding_link=Embed_link) - with open(os.path.join(self.prompts_path, "claim_extraction.yaml"), 'r') as file: - data = yaml.load(file, Loader=yaml.FullLoader) - self.claim_prompt = data['knowledge_qa'] - - with open(os.path.join(self.prompts_path, 'query_generation.yaml'), 'r') as file: - data = yaml.load(file, Loader=yaml.FullLoader) - self.query_prompt = data['knowledge_qa'] - - with open(os.path.join(self.prompts_path, 'agreement_verification.yaml'), 'r') as file: - data = yaml.load(file, Loader=yaml.FullLoader) - self.verification_prompt = data['knowledge_qa'] - - async def _claim_extraction(self, responses): - messages_list = [ - [ - {"role": "system", "content": self.claim_prompt['system']}, - {"role": "user", "content": self.claim_prompt['user'].format(input=response)}, - ] - for response in responses - ] - return await self.chat.async_run(messages_list, List) - - async def _query_generation(self, claims): - if claims == None: - return ['None'] - messages_list = [ - [ - {"role": "system", "content": self.query_prompt['system']}, - {"role": "user", "content": self.query_prompt['user'].format(input=claim['claim'] if 'claim' in claim else '')}, - ] - for claim in claims - ] - return await self.chat.async_run(messages_list, List) - - async def _verification(self, claims, evidences): - messages_list = [ - [ - {"role": "system", "content": self.verification_prompt['system']}, - {"role": "user", "content": self.verification_prompt['user'].format(claim=claim['claim'], evidence=str(evidence))}, - ] - for claim, evidence in zip(claims, evidences) - ] - return await self.chat.async_run(messages_list, Dict) - - async def run_with_tool_live(self, responses): - claims_in_responses = await self._claim_extraction(responses) - queries_in_responses = [] - evidences_in_responses = [] - sources_in_responses = [] - verifications_in_responses = [] - #pdb.set_trace() - for claims_in_response in claims_in_responses: - queries = await self._query_generation(claims_in_response) - queries_in_responses.append(queries) - search_outputs_for_claims = await self.tool.run(queries) - evidences = [output["content"] for search_outputs_for_claim in search_outputs_for_claims for output in search_outputs_for_claim] - evidences_in_responses.append(evidences) - sources = [output["source"] for search_outputs_for_claim in search_outputs_for_claims for output in search_outputs_for_claim] - sources_in_responses.append(sources) - verifications = await self._verification(claims_in_response, evidences) - verifications_in_responses.append(verifications) - - return claims_in_responses, queries_in_responses, evidences_in_responses, sources_in_responses, verifications_in_responses - - async def run_with_tool_live_without_claim_extraction(self, claims): - queries = await self._query_generation(claims) - evidences = await self.tool.run(queries) - - final_response = await self._verification(claims, evidences) - for i in range(len(final_response)): - if final_response[i] != None: - final_response[i]['queries'] = queries[i] - final_response[i]['evidences'] = evidences[i] - - return final_response - - async def run_with_tool_api_call(self, prompts, responses): - batch_size = 5 - num_batches = math.ceil(len(prompts) / batch_size) - - self.sample_list = [{"prompt": prompt, "response": response, "category": 'kbqa'} for prompt, response in zip(prompts, responses)] - - for i in range(num_batches): 
- print(i) - batch_start = i * batch_size - batch_end = min((i + 1) * batch_size, len(responses)) - - claims_in_responses, queries_in_responses, evidences_in_responses, sources_in_responses, verifications_in_responses = await self.run_with_tool_live(responses[batch_start:batch_end]) - - for j, (claims_in_response, queries_in_response, evidences_in_response, sources_in_response, verifications_in_response) in enumerate(zip(claims_in_responses, queries_in_responses, evidences_in_responses, sources_in_responses, verifications_in_responses)): - index = batch_start + j - - if claims_in_response != None: - for k, claim in enumerate(claims_in_response): - if verifications_in_response[k] != None: - if claim != None: - verifications_in_response[k].update({'claim': claim['claim']}) - else: - verifications_in_response[k].update({'claim': 'None'}) - - evidences_with_source = [] - for evidence, source in zip(evidences_in_response, sources_in_response): - evidences_with_source.append({'evidence': evidence, 'source': source}) - self.sample_list[index].update({ - 'claims': claims_in_response, - 'queries': queries_in_response, - # 'evidences': evidences_in_response, - # 'sources': sources_in_response, - 'evidences': evidences_with_source, - 'claim_level_factuality': verifications_in_response, - 'response_level_factuality': all([verification['factuality'] if verification != None else True for verification in verifications_in_response]) - }) - - return self.sample_list - - async def run_with_tool_dataset(self, annotated_dataset_path: str, with_tool_classified_dataset_path: str, rerun: bool = False, rerun_indices: list = []): - data_path = with_tool_classified_dataset_path if rerun else annotated_dataset_path - with open(data_path, 'r') as f: - data = [json.loads(line) for line in f] - self.sample_list = data if rerun else [claim for sample in data for claim in sample['claims']] - rerun_elements = self.sample_list if not rerun else [self.sample_list[i] for i in rerun_indices] - - batch_size = 4 - num_batches = math.ceil(len(rerun_elements) / batch_size) # 5 - - for i in range(num_batches): - print(i) - batch_start = i * batch_size - batch_end = min((i + 1) * batch_size, len(rerun_elements)) - - responses = await self.run_with_tool_live_without_claim_extraction(rerun_elements[batch_start:batch_end]) - - for j, response in enumerate(responses): - index = batch_start + j if rerun == False else rerun_indices[batch_start + j] - if response is None: - self.sample_list[index].update({ - 'with_tool_classification': 'None', - 'with_tool_reasoning': 'None', - 'queries': 'None', - 'evidences': 'None' - }) - else: - self.sample_list[index].update({ - 'with_tool_classification': response.get('factuality', 'None'), - 'with_tool_reasoning': response.get('reasoning', 'None'), - 'queries': response.get('queries', 'None'), - 'evidences': response.get('evidences', 'None') - }) - - # save everything after each batch to prevent data loss - with open(with_tool_classified_dataset_path, 'w') as f: - for item in self.sample_list: - json_str = json.dumps(item) - f.write(json_str + '\n') - - async def run_self_check_live(self, fewshot, batch): - user_prompt_key = 'user_3_shot_CoT' if fewshot else 'user_zero_shot_CoT' - messages_list = [ - [ - {"role": "system", "content": self.self_check_prompt['system']}, - {"role": "user", "content": self.self_check_prompt[user_prompt_key].format(claim=response['claim'])}, - ] - for response in batch - ] - return await self.chat.async_run(messages_list, Dict) - - async def run_self_check_dataset(self, 
annotated_dataset_path: str, self_check_classified_dataset_path: str, fewshot: bool = False, rerun: bool = False, rerun_indices: list = []): - data_path = annotated_dataset_path if not rerun else self_check_classified_dataset_path - with open(data_path, 'r') as f: - data = [json.loads(line) for line in f] - self.sample_list = data if rerun else [claim for sample in data for claim in sample['claims']] - rerun_elements = self.sample_list if not rerun else [self.sample_list[i] for i in rerun_indices] - - batch_size = 10 - num_batches = math.ceil(len(rerun_elements) / batch_size) - - for i in range(num_batches): - print(i) - batch_start = i * batch_size - batch_end = min((i + 1) * batch_size, len(rerun_elements)) - batch = rerun_elements[batch_start:batch_end] - - responses = await self.run_self_check_live(fewshot, batch) - for j, response in enumerate(responses): - index = batch_start + j if not rerun else rerun_indices[batch_start + j] - if response is None: - self.sample_list[index].update({ - 'self_check_classification': 'None', - 'self_check_reasoning': 'None' - }) - else: - self.sample_list[index].update({ - 'self_check_classification': response.get('factuality', 'None'), - 'self_check_reasoning': response.get('reasoning', 'None') - }) - - # save everything after each batch to prevent data loss - with open(self_check_classified_dataset_path, 'w') as f: - for item in self.sample_list: - json_str = json.dumps(item) - f.write(json_str + '\n') diff --git a/spaces/Godrose0728/Aisound02/text/sanskrit.py b/spaces/Godrose0728/Aisound02/text/sanskrit.py deleted file mode 100644 index 0223aaac384a2f850f5bc20651fc18eb964607d0..0000000000000000000000000000000000000000 --- a/spaces/Godrose0728/Aisound02/text/sanskrit.py +++ /dev/null @@ -1,62 +0,0 @@ -import re -from indic_transliteration import sanscript - - -# List of (iast, ipa) pairs: -_iast_to_ipa = [(re.compile('%s' % x[0]), x[1]) for x in [ - ('a', 'ə'), - ('ā', 'aː'), - ('ī', 'iː'), - ('ū', 'uː'), - ('ṛ', 'ɹ`'), - ('ṝ', 'ɹ`ː'), - ('ḷ', 'l`'), - ('ḹ', 'l`ː'), - ('e', 'eː'), - ('o', 'oː'), - ('k', 'k⁼'), - ('k⁼h', 'kʰ'), - ('g', 'g⁼'), - ('g⁼h', 'gʰ'), - ('ṅ', 'ŋ'), - ('c', 'ʧ⁼'), - ('ʧ⁼h', 'ʧʰ'), - ('j', 'ʥ⁼'), - ('ʥ⁼h', 'ʥʰ'), - ('ñ', 'n^'), - ('ṭ', 't`⁼'), - ('t`⁼h', 't`ʰ'), - ('ḍ', 'd`⁼'), - ('d`⁼h', 'd`ʰ'), - ('ṇ', 'n`'), - ('t', 't⁼'), - ('t⁼h', 'tʰ'), - ('d', 'd⁼'), - ('d⁼h', 'dʰ'), - ('p', 'p⁼'), - ('p⁼h', 'pʰ'), - ('b', 'b⁼'), - ('b⁼h', 'bʰ'), - ('y', 'j'), - ('ś', 'ʃ'), - ('ṣ', 's`'), - ('r', 'ɾ'), - ('l̤', 'l`'), - ('h', 'ɦ'), - ("'", ''), - ('~', '^'), - ('ṃ', '^') -]] - - -def devanagari_to_ipa(text): - text = text.replace('ॐ', 'ओम्') - text = re.sub(r'\s*।\s*$', '.', text) - text = re.sub(r'\s*।\s*', ', ', text) - text = re.sub(r'\s*॥', '.', text) - text = sanscript.transliterate(text, sanscript.DEVANAGARI, sanscript.IAST) - for regex, replacement in _iast_to_ipa: - text = re.sub(regex, replacement, text) - text = re.sub('(.)[`ː]*ḥ', lambda x: x.group(0) - [:-1]+'h'+x.group(1)+'*', text) - return text diff --git a/spaces/Gradio-Blocks/uniformer_image_detection/mmdet/models/backbones/resnext.py b/spaces/Gradio-Blocks/uniformer_image_detection/mmdet/models/backbones/resnext.py deleted file mode 100644 index 6dbcbd516fd308b1d703eecb83ab275f6b159516..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/uniformer_image_detection/mmdet/models/backbones/resnext.py +++ /dev/null @@ -1,153 +0,0 @@ -import math - -from mmcv.cnn import build_conv_layer, build_norm_layer - -from ..builder import BACKBONES -from ..utils import 
ResLayer -from .resnet import Bottleneck as _Bottleneck -from .resnet import ResNet - - -class Bottleneck(_Bottleneck): - expansion = 4 - - def __init__(self, - inplanes, - planes, - groups=1, - base_width=4, - base_channels=64, - **kwargs): - """Bottleneck block for ResNeXt. - - If style is "pytorch", the stride-two layer is the 3x3 conv layer, if - it is "caffe", the stride-two layer is the first 1x1 conv layer. - """ - super(Bottleneck, self).__init__(inplanes, planes, **kwargs) - - if groups == 1: - width = self.planes - else: - width = math.floor(self.planes * - (base_width / base_channels)) * groups - - self.norm1_name, norm1 = build_norm_layer( - self.norm_cfg, width, postfix=1) - self.norm2_name, norm2 = build_norm_layer( - self.norm_cfg, width, postfix=2) - self.norm3_name, norm3 = build_norm_layer( - self.norm_cfg, self.planes * self.expansion, postfix=3) - - self.conv1 = build_conv_layer( - self.conv_cfg, - self.inplanes, - width, - kernel_size=1, - stride=self.conv1_stride, - bias=False) - self.add_module(self.norm1_name, norm1) - fallback_on_stride = False - self.with_modulated_dcn = False - if self.with_dcn: - fallback_on_stride = self.dcn.pop('fallback_on_stride', False) - if not self.with_dcn or fallback_on_stride: - self.conv2 = build_conv_layer( - self.conv_cfg, - width, - width, - kernel_size=3, - stride=self.conv2_stride, - padding=self.dilation, - dilation=self.dilation, - groups=groups, - bias=False) - else: - assert self.conv_cfg is None, 'conv_cfg must be None for DCN' - self.conv2 = build_conv_layer( - self.dcn, - width, - width, - kernel_size=3, - stride=self.conv2_stride, - padding=self.dilation, - dilation=self.dilation, - groups=groups, - bias=False) - - self.add_module(self.norm2_name, norm2) - self.conv3 = build_conv_layer( - self.conv_cfg, - width, - self.planes * self.expansion, - kernel_size=1, - bias=False) - self.add_module(self.norm3_name, norm3) - - if self.with_plugins: - self._del_block_plugins(self.after_conv1_plugin_names + - self.after_conv2_plugin_names + - self.after_conv3_plugin_names) - self.after_conv1_plugin_names = self.make_block_plugins( - width, self.after_conv1_plugins) - self.after_conv2_plugin_names = self.make_block_plugins( - width, self.after_conv2_plugins) - self.after_conv3_plugin_names = self.make_block_plugins( - self.planes * self.expansion, self.after_conv3_plugins) - - def _del_block_plugins(self, plugin_names): - """delete plugins for block if exist. - - Args: - plugin_names (list[str]): List of plugins name to delete. - """ - assert isinstance(plugin_names, list) - for plugin_name in plugin_names: - del self._modules[plugin_name] - - -@BACKBONES.register_module() -class ResNeXt(ResNet): - """ResNeXt backbone. - - Args: - depth (int): Depth of resnet, from {18, 34, 50, 101, 152}. - in_channels (int): Number of input image channels. Default: 3. - num_stages (int): Resnet stages. Default: 4. - groups (int): Group of resnext. - base_width (int): Base width of resnext. - strides (Sequence[int]): Strides of the first block of each stage. - dilations (Sequence[int]): Dilation of each stage. - out_indices (Sequence[int]): Output from which stages. - style (str): `pytorch` or `caffe`. If set to "pytorch", the stride-two - layer is the 3x3 conv layer, otherwise the stride-two layer is - the first 1x1 conv layer. - frozen_stages (int): Stages to be frozen (all param fixed). -1 means - not freezing any parameters. - norm_cfg (dict): dictionary to construct and config norm layer. 
- norm_eval (bool): Whether to set norm layers to eval mode, namely, - freeze running stats (mean and var). Note: Effect on Batch Norm - and its variants only. - with_cp (bool): Use checkpoint or not. Using checkpoint will save some - memory while slowing down the training speed. - zero_init_residual (bool): whether to use zero init for last norm layer - in resblocks to let them behave as identity. - """ - - arch_settings = { - 50: (Bottleneck, (3, 4, 6, 3)), - 101: (Bottleneck, (3, 4, 23, 3)), - 152: (Bottleneck, (3, 8, 36, 3)) - } - - def __init__(self, groups=1, base_width=4, **kwargs): - self.groups = groups - self.base_width = base_width - super(ResNeXt, self).__init__(**kwargs) - - def make_res_layer(self, **kwargs): - """Pack all blocks in a stage into a ``ResLayer``""" - return ResLayer( - groups=self.groups, - base_width=self.base_width, - base_channels=self.base_channels, - **kwargs) diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/data/huffman/huffman_mmap_indexed_dataset.py b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/data/huffman/huffman_mmap_indexed_dataset.py deleted file mode 100644 index 3279dae89a8bca95178bbe1285d3cb334890b12f..0000000000000000000000000000000000000000 --- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/data/huffman/huffman_mmap_indexed_dataset.py +++ /dev/null @@ -1,287 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import mmap -import os -import shutil -import struct -import typing as tp -from functools import lru_cache - -import numpy as np -import torch -from fairseq.data import indexed_dataset -from fairseq.data.huffman import HuffmanCoder -from fairseq.file_io import PathManager - - -class HuffmanMMapIndex: - """ - keep an index of the offsets in the huffman binary file. - First a header, then the list of sizes (num tokens) for each instance and finally - the addresses of each instance. - """ - - _HDR_MAGIC = b"HUFFIDX\x00\x00" - _VERSION = 1 - - @classmethod - def writer(cls, path: str, data_len: int): - class _Writer: - def __enter__(self): - self._file = open(path, "wb") - - # write header (magic + version) - self._file.write(cls._HDR_MAGIC) - self._file.write(struct.pack(" None: - self._path_prefix = path_prefix - self._coder = coder - self._sizes = [] - self._ptrs = [] - self._data_len = 0 - - def open(self): - self._coder.to_file(vocab_file_path(self._path_prefix)) - self._data_file = open(indexed_dataset.data_file_path(self._path_prefix), "wb") - - def __enter__(self) -> "HuffmanMMapIndexedDatasetBuilder": - self.open() - return self - - def add_item(self, tokens: tp.List[str]) -> None: - """ - add a list of tokens to the dataset, they will compressed with the - provided coder before being written to file. - """ - encoded = self._coder.encode(tokens) - code_len = len(encoded) - last_ptr = 0 - if len(self._ptrs) > 0: - last_ptr = self._ptrs[-1] - self._sizes.append(len(tokens)) - self._ptrs.append(last_ptr + code_len) - self._data_len += code_len - self._data_file.write(encoded) - - def append(self, other_dataset_path_prefix: str) -> None: - """ - append an existing dataset. - Beware, if it wasn't built with the same coder, you are in trouble. 
- """ - other_index = HuffmanMMapIndex( - indexed_dataset.index_file_path(other_dataset_path_prefix) - ) - for (ptr, size) in other_index: - self._ptrs.append(ptr + self._data_len) - self._sizes.append(size) - - # Concatenate data - with open(indexed_dataset.data_file_path(other_dataset_path_prefix), "rb") as f: - shutil.copyfileobj(f, self._data_file) - - self._data_len += other_index.data_len - - def close(self): - self._data_file.close() - with HuffmanMMapIndex.writer( - indexed_dataset.index_file_path(self._path_prefix), self._data_len - ) as index: - index.write(self._sizes, self._ptrs) - - def __exit__(self, exc_type, exc_val, exc_tb) -> None: - self.close() diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/models/nat/fairseq_nat_model.py b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/models/nat/fairseq_nat_model.py deleted file mode 100644 index b09394112f57d9e82f2a4cbc371af888281b9e8a..0000000000000000000000000000000000000000 --- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/models/nat/fairseq_nat_model.py +++ /dev/null @@ -1,170 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import math - -import torch -from fairseq.models.transformer import ( - TransformerDecoder, - TransformerEncoder, - TransformerModel, -) -from fairseq.modules.transformer_sentence_encoder import init_bert_params - - -def ensemble_encoder(func): - def wrapper(self, *args, **kwargs): - if self.ensemble_models is None or len(self.ensemble_models) == 1: - return func(self, *args, **kwargs) - encoder_outs = [func(model, *args, **kwargs, return_all_hiddens=True) for model in self.ensemble_models] - _encoder_out = encoder_outs[0].copy() - - def stack(key): - outs = [e[key][0] for e in encoder_outs] - return [torch.stack(outs, -1) if outs[0] is not None else None] - - _encoder_out["encoder_out"] = stack("encoder_out") - _encoder_out["encoder_embedding"] = stack("encoder_embedding") - - num_layers = len(_encoder_out["encoder_states"]) - if num_layers > 0: - _encoder_out["encoder_states"] = [ - torch.stack([e["encoder_states"][i] for e in encoder_outs], -1) - for i in range(num_layers) - ] - return _encoder_out - - return wrapper - - -def ensemble_decoder(func): - def wrapper(self, normalize=False, encoder_out=None, *args, **kwargs): - if self.ensemble_models is None or len(self.ensemble_models) == 1: - return func( - self, normalize=normalize, encoder_out=encoder_out, *args, **kwargs - ) - - def _replace(encoder_out, new_val): - new_encoder_out = encoder_out.copy() - new_encoder_out["encoder_out"] = [new_val] - return new_encoder_out - - action_outs = [ - func( - model, - normalize=normalize, - encoder_out=_replace( - encoder_out, - encoder_out["encoder_out"][0][:, :, :, i] - ), - *args, - **kwargs - ) - for i, model in enumerate(self.ensemble_models) - ] - - if not isinstance(action_outs[0], tuple): # return multiple values - action_outs = [[a] for a in action_outs] - else: - action_outs = [list(a) for a in action_outs] - - ensembled_outs = [] - for i in range(len(action_outs[0])): - if i == 0 and normalize: - ensembled_outs += [ - torch.logsumexp( - torch.stack([a[i] for a in action_outs], -1), dim=-1 - ) - - math.log(len(self.ensemble_models)) - ] - elif action_outs[0][i] is not None: - ensembled_outs += [torch.stack([a[i] for a in action_outs], -1)] - else: - ensembled_outs += [None] - - if len(ensembled_outs) == 1: - return 
ensembled_outs[0] - return tuple(ensembled_outs) - - return wrapper - - -class FairseqNATModel(TransformerModel): - """ - Abstract class for all nonautoregressive-based models - """ - - def __init__(self, args, encoder, decoder): - super().__init__(args, encoder, decoder) - self.tgt_dict = decoder.dictionary - self.bos = decoder.dictionary.bos() - self.eos = decoder.dictionary.eos() - self.pad = decoder.dictionary.pad() - self.unk = decoder.dictionary.unk() - - self.ensemble_models = None - - @property - def allow_length_beam(self): - return False - - @property - def allow_ensemble(self): - return True - - def enable_ensemble(self, models): - self.encoder.ensemble_models = [m.encoder for m in models] - self.decoder.ensemble_models = [m.decoder for m in models] - - @staticmethod - def add_args(parser): - TransformerModel.add_args(parser) - parser.add_argument( - "--apply-bert-init", - action="store_true", - help="use custom param initialization for BERT", - ) - - @classmethod - def build_decoder(cls, args, tgt_dict, embed_tokens): - decoder = FairseqNATDecoder(args, tgt_dict, embed_tokens) - if getattr(args, "apply_bert_init", False): - decoder.apply(init_bert_params) - return decoder - - @classmethod - def build_encoder(cls, args, src_dict, embed_tokens): - encoder = FairseqNATEncoder(args, src_dict, embed_tokens) - if getattr(args, "apply_bert_init", False): - encoder.apply(init_bert_params) - return encoder - - def forward_encoder(self, encoder_inputs): - return self.encoder(*encoder_inputs) - - def forward_decoder(self, *args, **kwargs): - return NotImplementedError - - def initialize_output_tokens(self, *args, **kwargs): - return NotImplementedError - - def forward(self, *args, **kwargs): - return NotImplementedError - - -class FairseqNATEncoder(TransformerEncoder): - def __init__(self, args, dictionary, embed_tokens): - super().__init__(args, dictionary, embed_tokens) - self.ensemble_models = None - - @ensemble_encoder - def forward(self, *args, **kwargs): - return super().forward(*args, **kwargs) - - -class FairseqNATDecoder(TransformerDecoder): - def __init__(self, args, dictionary, embed_tokens, no_encoder_attn=False): - super().__init__(args, dictionary, embed_tokens, no_encoder_attn) - self.ensemble_models = None diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/modules/quantization/pq/modules/qlinear.py b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/modules/quantization/pq/modules/qlinear.py deleted file mode 100644 index 9bdd25a8685bb7c7b32e1f02372aaeb26d8ba53a..0000000000000000000000000000000000000000 --- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/modules/quantization/pq/modules/qlinear.py +++ /dev/null @@ -1,71 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import torch -import torch.nn as nn -import torch.nn.functional as F - - -class PQLinear(nn.Module): - """ - Quantized counterpart of nn.Linear module. Stores the centroid, the assignments - and the non-quantized biases. The full weight is re-instantiated at each forward - pass. 
- - Args: - - centroids: centroids of size n_centroids x block_size - - assignments: assignments of the centroids to the subvectors - of size self.out_features x n_blocks - - bias: the non-quantized bias - - Remarks: - - We refer the reader to the official documentation of the nn.Linear module - for the other arguments and the behavior of the module - - Performance tests on GPU show that this implementation is 15% slower than - the non-quantized nn.Linear module for a standard training loop. - """ - - def __init__(self, centroids, assignments, bias, in_features, out_features): - super(PQLinear, self).__init__() - self.block_size = centroids.size(1) - self.n_centroids = centroids.size(0) - self.in_features = in_features - self.out_features = out_features - # check compatibility - if self.in_features % self.block_size != 0: - raise ValueError("Wrong PQ sizes") - if len(assignments) % self.out_features != 0: - raise ValueError("Wrong PQ sizes") - # define parameters - self.centroids = nn.Parameter(centroids, requires_grad=True) - self.register_buffer("assignments", assignments) - self.register_buffer("counts", torch.bincount(assignments).type_as(centroids)) - if bias is not None: - self.bias = nn.Parameter(bias) - else: - self.register_parameter("bias", None) - - @property - def weight(self): - return ( - self.centroids[self.assignments] - .reshape(-1, self.out_features, self.block_size) - .permute(1, 0, 2) - .flatten(1, 2) - ) - - def forward(self, x): - return F.linear( - x, - self.weight, - self.bias, - ) - - def extra_repr(self): - return f"in_features={self.in_features},\ - out_features={self.out_features},\ - n_centroids={self.n_centroids},\ - block_size={self.block_size},\ - bias={self.bias is not None}" diff --git a/spaces/Hoodady/3DFuse/my/utils/heartbeat.py b/spaces/Hoodady/3DFuse/my/utils/heartbeat.py deleted file mode 100644 index 024dc981b64140950102b05ffa657354a3cae485..0000000000000000000000000000000000000000 --- a/spaces/Hoodady/3DFuse/my/utils/heartbeat.py +++ /dev/null @@ -1,78 +0,0 @@ -# generates periodic hearbeats for remote expriment monitoring -from pathlib import Path -import json -from inspect import stack -from .ticker import IntervalTicker - -_CURRENT_BEAT_STACK = [] - - -def get_heartbeat(): - """ - Returns: - The :class:`HeartBeat` object that's currently being used. - Throws an error if no :class:`EventStorage` is currently enabled. - """ - assert len( - _CURRENT_BEAT_STACK - ), "get_heartbeat() has to be called inside a 'with EventStorage(...)' context!" 
- return _CURRENT_BEAT_STACK[-1] - - -def get_tqdm_meter(pbar, format_dict): - format_dict['bar_format'] = "{r_bar}" - meter_str = pbar.format_meter(**format_dict) - meter_str = meter_str[2:] - return meter_str - - -def caller_info(n_stack_up): - info = stack()[1 + n_stack_up] # 1 up as base so that it starts from caller - msg = f"{info.filename}:{info.lineno} - {info.function}" - return msg - - -class HeartBeat(): - def __init__( - self, pbar, write_interval=10, - output_dir="./", fname="heartbeat.json" - ): - self.pbar = pbar - self.fname = Path(output_dir) / fname - self.ticker = IntervalTicker(write_interval) - self.completed = False - - # force one write at the beginning - self.beat(force_write=True, n_stack_up=2) - - def beat(self, force_write=False, n_stack_up=1): - on_write_period = self.ticker.tick() - if force_write or on_write_period: - stats = self.stats() - stats['caller'] = caller_info(n_stack_up) - - with open(self.fname, "w") as f: - json.dump(stats, f) - - def done(self): - self.completed = True - self.beat(force_write=True, n_stack_up=2) - - def stats(self): - pbar = self.pbar - fdict = pbar.format_dict - stats = { - "beat": self.ticker.tick_str(), - "done": self.completed, - "meter": get_tqdm_meter(pbar, fdict), - "elapsed": int(fdict['elapsed']) - } - return stats - - def __enter__(self): - _CURRENT_BEAT_STACK.append(self) - return self - - def __exit__(self, exc_type, exc_val, exc_tb): - assert _CURRENT_BEAT_STACK[-1] == self - _CURRENT_BEAT_STACK.pop() diff --git a/spaces/ICML2022/OFA/fairseq/examples/multilingual/data_scripts/download_lotus.sh b/spaces/ICML2022/OFA/fairseq/examples/multilingual/data_scripts/download_lotus.sh deleted file mode 100644 index c08c701314a8e575637deff78381ab02c2ef6728..0000000000000000000000000000000000000000 --- a/spaces/ICML2022/OFA/fairseq/examples/multilingual/data_scripts/download_lotus.sh +++ /dev/null @@ -1,46 +0,0 @@ -#!/bin/bash -# Copyright (c) Facebook, Inc. and its affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - - -if [ -z $WORKDIR_ROOT ] ; -then - echo "please specify your working directory root in environment variable WORKDIR_ROOT. Exitting..." 
- exit -fi - - -SRCDIR=$WORKDIR_ROOT/indic_languages_corpus -DESTDIR=${WORKDIR_ROOT}/ML50/raw/ -mkdir -p $SRCDIR -mkdir -p $DESTDIR - -cd $SRCDIR -wget http://lotus.kuee.kyoto-u.ac.jp/WAT/indic-multilingual/indic_languages_corpus.tar.gz -tar -xvzf indic_languages_corpus.tar.gz - -SRC_EXTRACT_DIR=$SRCDIR/indic_languages_corpus/bilingual - -cp $SRC_EXTRACT_DIR/ml-en/train.ml $DESTDIR/train.ml_IN-en_XX.ml_IN -cp $SRC_EXTRACT_DIR/ml-en/train.en $DESTDIR/train.ml_IN-en_XX.en_XX -cp $SRC_EXTRACT_DIR/ml-en/dev.ml $DESTDIR/valid.ml_IN-en_XX.ml_IN -cp $SRC_EXTRACT_DIR/ml-en/dev.en $DESTDIR/valid.ml_IN-en_XX.en_XX -cp $SRC_EXTRACT_DIR/ml-en/test.ml $DESTDIR/test.ml_IN-en_XX.ml_IN -cp $SRC_EXTRACT_DIR/ml-en/test.en $DESTDIR/test.ml_IN-en_XX.en_XX - -cp $SRC_EXTRACT_DIR/ur-en/train.ur $DESTDIR/train.ur_PK-en_XX.ur_PK -cp $SRC_EXTRACT_DIR/ur-en/train.en $DESTDIR/train.ur_PK-en_XX.en_XX -cp $SRC_EXTRACT_DIR/ur-en/dev.ur $DESTDIR/valid.ur_PK-en_XX.ur_PK -cp $SRC_EXTRACT_DIR/ur-en/dev.en $DESTDIR/valid.ur_PK-en_XX.en_XX -cp $SRC_EXTRACT_DIR/ur-en/test.ur $DESTDIR/test.ur_PK-en_XX.ur_PK -cp $SRC_EXTRACT_DIR/ur-en/test.en $DESTDIR/test.ur_PK-en_XX.en_XX - -cp $SRC_EXTRACT_DIR/te-en/train.te $DESTDIR/train.te_IN-en_XX.te_IN -cp $SRC_EXTRACT_DIR/te-en/train.en $DESTDIR/train.te_IN-en_XX.en_XX -cp $SRC_EXTRACT_DIR/te-en/dev.te $DESTDIR/valid.te_IN-en_XX.te_IN -cp $SRC_EXTRACT_DIR/te-en/dev.en $DESTDIR/valid.te_IN-en_XX.en_XX -cp $SRC_EXTRACT_DIR/te-en/test.te $DESTDIR/test.te_IN-en_XX.te_IN -cp $SRC_EXTRACT_DIR/te-en/test.en $DESTDIR/test.te_IN-en_XX.en_XX diff --git a/spaces/ICML2022/OFA/fairseq/examples/wav2vec/unsupervised/tasks/unpaired_audio_text.py b/spaces/ICML2022/OFA/fairseq/examples/wav2vec/unsupervised/tasks/unpaired_audio_text.py deleted file mode 100644 index 5f292528f80d6bb51f16a4324d97342d28fce942..0000000000000000000000000000000000000000 --- a/spaces/ICML2022/OFA/fairseq/examples/wav2vec/unsupervised/tasks/unpaired_audio_text.py +++ /dev/null @@ -1,447 +0,0 @@ -# Copyright (c) 2017-present, Facebook, Inc. -# All rights reserved. -# -# This source code is licensed under the license found in the LICENSE file in -# the root directory of this source tree. An additional grant of patent rights -# can be found in the PATENTS file in the same directory. 
- -from dataclasses import dataclass, field -import logging -import math -import os -from typing import Optional -import torch - -from fairseq.logging import metrics -from fairseq.tasks import FairseqTask, register_task -from ..data import ExtractedFeaturesDataset, RandomInputDataset - -from fairseq.data import ( - Dictionary, - data_utils, - StripTokenDataset, -) -from fairseq.dataclass import FairseqDataclass -from fairseq.distributed.utils import get_data_parallel_world_size -from omegaconf import MISSING - -from examples.speech_recognition.kaldi.kaldi_decoder import ( - KaldiDecoder, - KaldiDecoderConfig, -) - - -logger = logging.getLogger(__name__) - - -@dataclass -class DecodingConfig(FairseqDataclass): - kenlm_path: Optional[str] = None - lm_weight: float = 0 - blank_weight: float = 0 - - -@dataclass -class UnpairedAudioTextConfig(FairseqDataclass): - data: str = field( - default=MISSING, metadata={"help": "path to data directory containing audio"} - ) - text_data: str = field( - default=MISSING, metadata={"help": "path to data directory containing text"} - ) - max_length: Optional[int] = None - labels: Optional[str] = field( - default=None, - metadata={"help": "extension of the label file to load, used for fine-tuning"}, - ) - unfiltered: bool = field( - default=False, metadata={"help": "load data with _unfiltered suffix"} - ) - ctc_eval: bool = field( - default=False, metadata={"help": "eval UER as if computed by CTC"} - ) - sort_by_length: bool = field( - default=True, metadata={"help": "sort examples by length of audio timesteps"} - ) - shuffle: bool = field(default=True, metadata={"help": "shuffle examples"}) - append_eos: bool = field(default=False, metadata={"help": "append eos"}) - uppercase: Optional[bool] = field( - default=False, metadata={"help": "uppercase for LM score computation"} - ) - skipwords: Optional[str] = field( - default="", - metadata={ - "help": "comma-separated words to be removed for LM score computation" - }, - ) - kenlm_path: Optional[str] = None - vocab_usage_power: float = 2 - - word_decoder_config: Optional[KaldiDecoderConfig] = None - word_kenlm_path: Optional[str] = None - - decoding_config: DecodingConfig = DecodingConfig() - - -@register_task("unpaired_audio_text", dataclass=UnpairedAudioTextConfig) -class UnpairedAudioText(FairseqTask): - """ """ - - cfg: UnpairedAudioTextConfig - - def __init__( - self, - cfg: UnpairedAudioTextConfig, - source_dictionary=None, - target_dictionary=None, - ): - super().__init__(cfg) - - self._target_dictionary = target_dictionary - self._source_dictionary = source_dictionary - self.num_symbols = ( - len([s for s in target_dictionary.symbols if not s.startswith("madeup")]) - - target_dictionary.nspecial - ) - self.sil_id = ( - target_dictionary.index("") if "" in target_dictionary else -1 - ) - self.kenlm = None - if cfg.kenlm_path is not None: - import kenlm - - self.kenlm = kenlm.Model(cfg.kenlm_path) - - self.word_kenlm = None - if cfg.word_kenlm_path is not None: - import kenlm - - self.word_kenlm = kenlm.Model(cfg.word_kenlm_path) - - self.uppercase = cfg.uppercase - self.skipwords = set(cfg.skipwords.split(",")) - - def str_postprocess(s): - s = " ".join(w for w in s.split() if w not in self.skipwords) - s = s.upper() if self.uppercase else s - return s - - self.str_postprocess = str_postprocess - self.compute_lm_score = lambda s: self.kenlm.score(self.str_postprocess(s)) - - self.compute_word_score = None - if cfg.word_decoder_config is not None: - self.kaldi_decoder = KaldiDecoder(cfg.word_decoder_config, 
beam=10) - - def compute_word_score(logits, padding): - res = self.kaldi_decoder.decode(logits, padding) - for r in res: - r = r.result() - assert len(r) == 1 - r = r[0] - yield r["score"], r["words"] - - self.compute_word_score = compute_word_score - - @classmethod - def setup_task(cls, cfg: UnpairedAudioTextConfig, **kwargs): - """Setup the task (e.g., load dictionaries). - - Args: - cfg (AudioPretrainingConfig): configuration of this task - """ - - dict_path = os.path.join(cfg.text_data, "dict.txt") - if os.path.exists(dict_path): - target_dictionary = Dictionary.load(dict_path) - else: - dict_path = os.path.join(cfg.data, f"dict.{cfg.labels}.txt") - target_dictionary = Dictionary.load(dict_path) - - return cls(cfg, target_dictionary=target_dictionary) - - def optimizer_step(self, optimizer, model, update_num): - if hasattr(model, "get_groups_for_update"): - groups = model.get_groups_for_update(update_num) - optimizer.step(groups={groups}) - else: - optimizer.step() - - def valid_step(self, sample, model, criterion): - res = model( - **sample["net_input"], - dense_x_only=True, - ) - - dense_x = res["logits"] - padding_mask = res["padding_mask"] - - word_scores = None - if self.compute_word_score is not None: - word_scores = self.compute_word_score(dense_x.cpu(), padding_mask.cpu()) - - z = dense_x.argmax(-1) - z[padding_mask] = self.target_dictionary.pad() - - vocab_seen = torch.zeros(self.num_symbols, dtype=torch.bool) - - import editdistance - - c_err = 0 - c_len = 0 - pred_c_len = 0 - lm_score_sum = 0 - for i, (x, t, id) in enumerate( - zip( - z, - sample["target"] if "target" in sample else [None] * len(z), - sample["id"], - ) - ): - - if t is not None: - t = t[(t >= self.target_dictionary.nspecial)] - x = x[ - (x >= self.target_dictionary.nspecial) - & (x < (self.num_symbols + self.target_dictionary.nspecial)) - ] - if self.sil_id >= 0: - x = x[x != self.sil_id] - - vocab_seen[x - self.target_dictionary.nspecial] = True - - pred_units_arr = x - if self.cfg.ctc_eval: - pred_units_arr = pred_units_arr.unique_consecutive() - pred_units_arr = pred_units_arr[pred_units_arr != 0] - - if id == 0: - if t is not None: - logger.info(f"REF: {self.target_dictionary.string(t)}") - logger.info(f"HYP: {self.target_dictionary.string(pred_units_arr)}") - - if self.kenlm is not None: - if t is not None: - ref_lm_s = self.compute_lm_score( - self.target_dictionary.string(t) - ) - logger.info( - f"LM [REF]: {ref_lm_s}, {math.pow(10, -ref_lm_s / (len(t) + 1))}" - ) - - hyp_lm_s = self.compute_lm_score( - self.target_dictionary.string(pred_units_arr) - ) - logger.info( - f"LM [HYP]: {hyp_lm_s}, {math.pow(10, -hyp_lm_s / (len(pred_units_arr) + 1))}" - ) - - pred_units_arr = pred_units_arr.tolist() - - pred_c_len += len(pred_units_arr) - - if t is not None: - t = t.tolist() - c_err += editdistance.eval(pred_units_arr, t) - c_len += len(t) - else: - c_len = pred_c_len - - if self.kenlm is not None: - pred_str = self.target_dictionary.string(pred_units_arr) - lm_score = self.compute_lm_score(pred_str) - lm_score_sum += lm_score - - kaldi_score_sum = 0 - word_lm_sum = 0 - num_words = 0 - if word_scores is not None: - for score, words in word_scores: - kaldi_score_sum += score - num_words += len(words) - if self.word_kenlm is not None: - word_lm_sum += self.kenlm.score(" ".join(words)) - - try: - world_size = get_data_parallel_world_size() - except: - world_size = 1 - - logging_output = { - "loss": c_err, - "_num_char_errors": c_err, - "_num_chars": c_len, - "_num_pred_chars": pred_c_len, - "ntokens": c_len, 
- "nsentences": z.size(0), - "sample_size": c_len, - "_world_size": world_size, - "_lm_score_sum": lm_score_sum, - "_kaldi_score_sum": kaldi_score_sum, - "_word_lm_sum": word_lm_sum, - "_num_words": num_words, - "_vocab_seen": vocab_seen, - } - - return c_err, c_len, logging_output - - def load_dataset(self, split: str, task_cfg: FairseqDataclass = None, **kwargs): - data_path = self.cfg.data - task_cfg = task_cfg or self.cfg - - has_unpaired_text = os.path.exists( - os.path.join(self.cfg.text_data, f"{split}.idx") - ) - - self.datasets[split] = ExtractedFeaturesDataset( - path=data_path, - split=split, - min_length=3, - max_length=task_cfg.max_length, - labels=None if has_unpaired_text else task_cfg.labels, - label_dict=self.target_dictionary, - shuffle=getattr(task_cfg, "shuffle", True), - sort_by_length=task_cfg.sort_by_length, - ) - - logger.info(f"split {split} has unpaired text? {has_unpaired_text}") - if has_unpaired_text: - text_dataset = data_utils.load_indexed_dataset( - os.path.join(self.cfg.text_data, split), self.target_dictionary - ) - text_dataset = StripTokenDataset(text_dataset, self.target_dictionary.eos()) - self.datasets[split] = RandomInputDataset( - self.datasets[split], - text_dataset, - ["random_label"], - add_to_input=True, - pad_idx=self.target_dictionary.pad(), - ) - - @property - def source_dictionary(self): - return self._source_dictionary - - @property - def target_dictionary(self): - """Return the :class:`~fairseq.data.Dictionary` for the language - model.""" - return self._target_dictionary - - def max_positions(self): - """Maximum input length supported by the encoder.""" - return None - - def reduce_metrics(self, logging_outputs, criterion): - super().reduce_metrics(logging_outputs, criterion) - - zero = torch.scalar_tensor(0.0) - num_char_errors = sum( - log.get("_num_char_errors", zero) for log in logging_outputs - ) - num_chars = sum(log.get("_num_chars", zero) for log in logging_outputs) - num_word_errors = sum( - log.get("_num_word_errors", zero) for log in logging_outputs - ) - num_words = sum(log.get("_num_words", zero) for log in logging_outputs) - num_pred_chars = sum( - log.get("_num_pred_chars", zero) for log in logging_outputs - ) - - lm_score_sum = sum(log.get("_lm_score_sum", zero) for log in logging_outputs) - vocab_seen = ( - sum(log.get("_vocab_seen", zero) for log in logging_outputs) - .bool() - .sum() - .item() - ) - kaldi_score_sum = sum( - log.get("_kaldi_score_sum", zero) for log in logging_outputs - ) - word_lm_sum = sum(log.get("_word_lm_sum", zero) for log in logging_outputs) - - metrics.log_scalar_sum("_num_char_errors", num_char_errors) - metrics.log_scalar_sum("_num_chars", num_chars) - metrics.log_scalar_sum("_num_word_errors", num_word_errors) - metrics.log_scalar_sum("_num_words", num_words) - - metrics.log_scalar_sum("lm_score_sum", lm_score_sum) - metrics.log_scalar_sum("num_pred_chars", num_pred_chars) - - if self.cfg.word_kenlm_path is not None: - metrics.log_scalar_sum("kaldi_score_sum", kaldi_score_sum) - metrics.log_scalar_sum("word_lm_sum", word_lm_sum) - - if num_chars > 0: - metrics.log_derived( - "uer", - lambda meters: meters["_num_char_errors"].sum - * 100.0 - / meters["_num_chars"].sum - if meters["_num_chars"].sum > 0 - else float("nan"), - ) - - if lm_score_sum < 0 and vocab_seen > 0: - metrics.log_scalar("vocab_seen_pct", vocab_seen / self.num_symbols) - - metrics.log_derived( - "weighted_lm_ppl", - lambda meters: math.pow( - 10, - -meters["lm_score_sum"].sum - / ( - meters["num_pred_chars"].sum + 
meters["nsentences"].sum - ), # account for - ) - / meters["vocab_seen_pct"].avg ** self.cfg.vocab_usage_power, - ) - - metrics.log_derived( - "lm_ppl", - lambda meters: math.pow( - 10, - -meters["lm_score_sum"].sum - / ( - meters["num_pred_chars"].sum + meters["nsentences"].sum - ), # account for - ), - ) - else: - metrics.log_derived("weighted_lm_ppl", lambda meters: float("inf")) - - if num_words > 0: - if word_lm_sum != 0: - metrics.log_derived( - "word_lm_ppl", - lambda meters: math.pow( - 10, - -meters["word_lm_sum"].sum - / ( - meters["_num_words"].sum + meters["nsentences"].sum - ), # account for - ), - ) - metrics.log_derived( - "weighted_word_lm_ppl", - lambda meters: math.pow( - 10, - -meters["word_lm_sum"].sum - / ( - meters["_num_words"].sum + meters["nsentences"].sum - ), # account for - ) - / meters["vocab_seen_pct"].avg ** self.cfg.vocab_usage_power, - ) - - if self.cfg.word_kenlm_path is not None: - metrics.log_derived( - "kaldi_score", - lambda meters: meters["kaldi_score_sum"].sum - / meters["nsentences"].sum, - ) - - def build_model(self, cfg: FairseqDataclass): - model = super().build_model(cfg) - - return model diff --git a/spaces/JavierFnts/clip-playground/app.py b/spaces/JavierFnts/clip-playground/app.py deleted file mode 100644 index ae279a5cc389ef26e9b4fe6618a9a1d1404fc474..0000000000000000000000000000000000000000 --- a/spaces/JavierFnts/clip-playground/app.py +++ /dev/null @@ -1,322 +0,0 @@ -import random -import requests - -import streamlit as st -from clip_model import ClipModel - -from PIL import Image - -IMAGES_LINKS = ["https://cdn.pixabay.com/photo/2014/10/13/21/34/clipper-487503_960_720.jpg", - "https://cdn.pixabay.com/photo/2019/09/06/04/25/beach-4455433_960_720.jpg", - "https://cdn.pixabay.com/photo/2019/11/11/14/30/zebra-4618513_960_720.jpg", - "https://cdn.pixabay.com/photo/2020/11/04/15/29/coffee-beans-5712780_960_720.jpg", - "https://cdn.pixabay.com/photo/2020/03/24/20/42/namibia-4965457_960_720.jpg", - "https://cdn.pixabay.com/photo/2020/08/27/07/31/restaurant-5521372_960_720.jpg", - "https://cdn.pixabay.com/photo/2020/08/24/21/41/couple-5515141_960_720.jpg", - "https://cdn.pixabay.com/photo/2020/01/31/07/10/billboards-4807268_960_720.jpg", - "https://cdn.pixabay.com/photo/2017/07/31/20/48/shell-2560930_960_720.jpg", - "https://cdn.pixabay.com/photo/2020/08/13/01/29/koala-5483931_960_720.jpg", - ] - -@st.cache # Cache this so that it doesn't change every time something changes in the page -def load_default_dataset(): - return [load_image_from_url(url) for url in IMAGES_LINKS] - -def load_image_from_url(url: str) -> Image.Image: - return Image.open(requests.get(url, stream=True).raw) - -@st.cache -def load_model(model_architecture: str) -> ClipModel: - return ClipModel(model_architecture) - -def init_state(): - if "images" not in st.session_state: - st.session_state.images = None - if "prompts" not in st.session_state: - st.session_state.prompts = None - if "predictions" not in st.session_state: - st.session_state.predictions = None - if "default_text_input" not in st.session_state: - st.session_state.default_text_input = None - if "model_architecture" not in st.session_state: - st.session_state.model_architecture = "RN50" - - -def limit_number_images(): - """When moving between tasks sometimes the state of images can have too many samples""" - if st.session_state.images is not None and len(st.session_state.images) > 1: - st.session_state.images = [st.session_state.images[0]] - - -def limit_number_prompts(): - """When moving between tasks sometimes 
the state of prompts can have too many samples""" - if st.session_state.prompts is not None and len(st.session_state.prompts) > 1: - st.session_state.prompts = [st.session_state.prompts[0]] - - -def is_valid_prediction_state() -> bool: - if st.session_state.images is None or len(st.session_state.images) < 1: - st.error("Choose at least one image before predicting") - return False - if st.session_state.prompts is None or len(st.session_state.prompts) < 1: - st.error("Write at least one prompt before predicting") - return False - return True - - -def preprocess_image(image: Image.Image, max_size: int = 1200) -> Image.Image: - """Set up a max size because otherwise the API sometimes breaks""" - width_0, height_0 = image.size - - if max((width_0, height_0)) <= max_size: - return image - - if width_0 > height_0: - aspect_ratio = max_size / float(width_0) - new_height = int(float(height_0) * float(aspect_ratio)) - image = image.resize((max_size, new_height), Image.ANTIALIAS) - return image - else: - aspect_ratio = max_size / float(height_0) - new_width = int(float(width_0) * float(aspect_ratio)) - image = image.resize((max_size, new_width), Image.ANTIALIAS) - return image - - -class Sections: - @staticmethod - def header(): - st.markdown('' - '', unsafe_allow_html=True) - st.markdown("# CLIP Playground") - st.markdown("### Try OpenAI's CLIP model in your browser") - st.markdown(" ") - st.markdown(" ") - with st.expander("What is CLIP?"): - st.markdown("CLIP is a machine learning model that computes similarity between text " - "(also called prompts) and images. It has been trained on a dataset with millions of diverse" - " image-prompt pairs, which allows it to generalize to unseen examples." - "
    Check out [OpenAI's blogpost](https://openai.com/blog/clip/) for more details", - unsafe_allow_html=True) - col1, col2 = st.columns(2) - col1.image("https://openaiassets.blob.core.windows.net/$web/clip/draft/20210104b/overview-a.svg") - col2.image("https://openaiassets.blob.core.windows.net/$web/clip/draft/20210104b/overview-b.svg") - with st.expander("What can CLIP do?"): - st.markdown("#### Prompt ranking") - st.markdown("Given different prompts and an image CLIP will rank the different prompts based on how well they describe the image") - st.markdown("#### Image ranking") - st.markdown("Given different images and a prompt CLIP will rank the different images based on how well they fit the description") - st.markdown("#### Image classification") - st.markdown("Similar to prompt ranking, given a set of classes CLIP can classify an image between them. " - "Think of [Hotdog/ Not hotdog](https://www.youtube.com/watch?v=pqTntG1RXSY&ab_channel=tvpromos) without any training.") - st.markdown(" ") - st.markdown(" ") - - @staticmethod - def image_uploader(accept_multiple_files: bool): - uploaded_images = st.file_uploader("Upload image", type=[".png", ".jpg", ".jpeg"], - accept_multiple_files=accept_multiple_files) - if (not accept_multiple_files and uploaded_images is not None) or (accept_multiple_files and len(uploaded_images) >= 1): - images = [] - if not accept_multiple_files: - uploaded_images = [uploaded_images] - for uploaded_image in uploaded_images: - pil_image = Image.open(uploaded_image) - pil_image = preprocess_image(pil_image) - images.append(pil_image) - st.session_state.images = images - - - @staticmethod - def image_picker(default_text_input: str): - col1, col2, col3 = st.columns(3) - with col1: - default_image_1 = load_image_from_url("https://cdn.pixabay.com/photo/2014/10/13/21/34/clipper-487503_960_720.jpg") - st.image(default_image_1, use_column_width=True) - if st.button("Select image 1"): - st.session_state.images = [default_image_1] - st.session_state.default_text_input = default_text_input - with col2: - default_image_2 = load_image_from_url("https://cdn.pixabay.com/photo/2019/11/11/14/30/zebra-4618513_960_720.jpg") - st.image(default_image_2, use_column_width=True) - if st.button("Select image 2"): - st.session_state.images = [default_image_2] - st.session_state.default_text_input = default_text_input - with col3: - default_image_3 = load_image_from_url("https://cdn.pixabay.com/photo/2016/11/15/16/24/banana-1826760_960_720.jpg") - st.image(default_image_3, use_column_width=True) - if st.button("Select image 3"): - st.session_state.images = [default_image_3] - st.session_state.default_text_input = default_text_input - - @staticmethod - def dataset_picker(): - columns = st.columns(5) - st.session_state.dataset = load_default_dataset() - image_idx = 0 - for col in columns: - col.image(st.session_state.dataset[image_idx]) - image_idx += 1 - col.image(st.session_state.dataset[image_idx]) - image_idx += 1 - if st.button("Select random dataset"): - st.session_state.images = st.session_state.dataset - st.session_state.default_text_input = "A sign that says 'SLOW DOWN'" - - @staticmethod - def prompts_input(input_label: str, prompt_prefix: str = ''): - raw_text_input = st.text_input(input_label, - value=st.session_state.default_text_input if st.session_state.default_text_input is not None else "") - st.session_state.is_default_text_input = raw_text_input == st.session_state.default_text_input - if raw_text_input: - st.session_state.prompts = [prompt_prefix + class_name for 
class_name in raw_text_input.split(";") if len(class_name) > 1] - - @staticmethod - def single_image_input_preview(): - st.markdown("### Preview") - col1, col2 = st.columns([1, 2]) - with col1: - st.markdown("Image to classify") - if st.session_state.images is not None: - st.image(st.session_state.images[0], use_column_width=True) - else: - st.warning("Select an image") - - with col2: - st.markdown("Labels to choose from") - if st.session_state.prompts is not None: - for prompt in st.session_state.prompts: - st.markdown(f"* {prompt}") - if len(st.session_state.prompts) < 2: - st.warning("At least two prompts/classes are needed") - else: - st.warning("Enter the prompts/classes to classify from") - - @staticmethod - def multiple_images_input_preview(): - st.markdown("### Preview") - st.markdown("Images to classify") - col1, col2, col3 = st.columns(3) - if st.session_state.images is not None: - for idx, image in enumerate(st.session_state.images): - if idx < len(st.session_state.images) / 2: - col1.image(st.session_state.images[idx], use_column_width=True) - else: - col2.image(st.session_state.images[idx], use_column_width=True) - if len(st.session_state.images) < 2: - col2.warning("At least 2 images required") - else: - col1.warning("Select an image") - - with col3: - st.markdown("Query prompt") - if st.session_state.prompts is not None: - for prompt in st.session_state.prompts: - st.write(prompt) - else: - st.warning("Enter the prompt to classify") - - @staticmethod - def classification_output(model: ClipModel): - if st.button("Predict") and is_valid_prediction_state(): - with st.spinner("Predicting..."): - - st.markdown("### Results") - if len(st.session_state.images) == 1: - scores = model.compute_prompts_probabilities(st.session_state.images[0], st.session_state.prompts) - scored_prompts = [(prompt, score) for prompt, score in zip(st.session_state.prompts, scores)] - sorted_scored_prompts = sorted(scored_prompts, key=lambda x: x[1], reverse=True) - for prompt, probability in sorted_scored_prompts: - percentage_prob = int(probability * 100) - st.markdown( - f"### ![prob](https://progress-bar.dev/{percentage_prob}/?width=200) {prompt}") - elif len(st.session_state.prompts) == 1: - st.markdown(f"### {st.session_state.prompts[0]}") - - scores = model.compute_images_probabilities(st.session_state.images, st.session_state.prompts[0]) - scored_images = [(image, score) for image, score in zip(st.session_state.images, scores)] - sorted_scored_images = sorted(scored_images, key=lambda x: x[1], reverse=True) - - for image, probability in sorted_scored_images[:5]: - col1, col2 = st.columns([1, 3]) - col1.image(image, use_column_width=True) - percentage_prob = int(probability * 100) - col2.markdown(f"### ![prob](https://progress-bar.dev/{percentage_prob}/?width=200)") - else: - raise ValueError("Invalid state") - - # is_default_image = isinstance(state.images[0], str) - # is_default_prediction = is_default_image and state.is_default_text_input - # if is_default_prediction: - # st.markdown("
    :information_source: Try writing your own prompts and using your own pictures!", - # unsafe_allow_html=True) - # elif is_default_image: - # st.markdown("
    :information_source: You can also use your own pictures!", - # unsafe_allow_html=True) - # elif state.is_default_text_input: - # st.markdown("
    :information_source: Try writing your own prompts!" - # " It can be whatever you can think of", - # unsafe_allow_html=True) - -if __name__ == "__main__": - Sections.header() - col1, col2 = st.columns([1, 2]) - col1.markdown(" "); col1.markdown(" ") - col1.markdown("#### Task selection") - task_name: str = col2.selectbox("", options=["Prompt ranking", "Image ranking", "Image classification"]) - st.markdown("
    ", unsafe_allow_html=True) - init_state() - model = load_model(st.session_state.model_architecture) - if task_name == "Image classification": - Sections.image_uploader(accept_multiple_files=False) - if st.session_state.images is None: - st.markdown("or choose one from") - Sections.image_picker(default_text_input="banana; boat; bird") - input_label = "Enter the classes to chose from separated by a semi-colon. (f.x. `banana; boat; honesty; apple`)" - Sections.prompts_input(input_label, prompt_prefix='A picture of a ') - limit_number_images() - Sections.single_image_input_preview() - Sections.classification_output(model) - elif task_name == "Prompt ranking": - Sections.image_uploader(accept_multiple_files=False) - if st.session_state.images is None: - st.markdown("or choose one from") - Sections.image_picker(default_text_input="A calm afternoon in the Mediterranean; " - "A beautiful creature;" - " Something that grows in tropical regions") - input_label = "Enter the prompts to choose from separated by a semi-colon. " \ - "(f.x. `An image that inspires; A feeling of loneliness; joyful and young; apple`)" - Sections.prompts_input(input_label) - limit_number_images() - Sections.single_image_input_preview() - Sections.classification_output(model) - elif task_name == "Image ranking": - Sections.image_uploader(accept_multiple_files=True) - if st.session_state.images is None or len(st.session_state.images) < 2: - st.markdown("or use this random dataset") - Sections.dataset_picker() - Sections.prompts_input("Enter the prompt to query the images by") - limit_number_prompts() - Sections.multiple_images_input_preview() - Sections.classification_output(model) - - with st.expander("Advanced settings"): - st.session_state.model_architecture = st.selectbox("Model architecture", options=['RN50', 'RN101', 'RN50x4', 'RN50x16', 'RN50x64', 'ViT-B/32', - 'ViT-B/16', 'ViT-L/14', 'ViT-L/14@336px'], index=0) - - st.markdown("
    Made by [@JavierFnts](https://twitter.com/JavierFnts) | [How was CLIP Playground built?](https://twitter.com/JavierFnts/status/1363522529072214019)" - "", unsafe_allow_html=True) diff --git a/spaces/Jesuscriss301/prueba/.ipynb_checkpoints/app-checkpoint.py b/spaces/Jesuscriss301/prueba/.ipynb_checkpoints/app-checkpoint.py deleted file mode 100644 index f12964b627c76ac565b1115d16b0d4a707c78476..0000000000000000000000000000000000000000 --- a/spaces/Jesuscriss301/prueba/.ipynb_checkpoints/app-checkpoint.py +++ /dev/null @@ -1,38 +0,0 @@ -#Librerias para cargar imagenes -import numpy as np -from keras.preprocessing.image import load_img, img_to_array -from keras.models import load_model -import streamlit as st - -dim = 200 -modelo = './modelo.h5' -pesos = './pesos.h5' -cnn = load_model(modelo) -cnn.load_weights(pesos) - -st.title("Upload + Classification Example") -uploaded_file = st.file_uploader("Choose an image...", type="jpg") -if uploaded_file is not None: - image = Image.open(uploaded_file) - st.image(image, caption='Uploaded Image.', use_column_width=True) - st.write("") - st.write("Classifying...") - label = predict(uploaded_file) ##aqui va el llamado a la IA - st.write('%s (%.2f%%)' % (label[1], label[2]*100)) - -def clasificar(uploaded_file): - x = load_img(uploaded_file, target_size=(dim, dim), color_mode = "grayscale") - x = img_to_array(x) - x = np.expand_dims(x, axis=0) - arreglo = cnn.predict(x) - resultado = arreglo[0] - respuesta = np.argmax(resultado) - - if respuesta==0: - print('NORMAL') - else: - print('TUMOR CEREBRAL') - - return respuesta - -clasificar(uploaded_file) \ No newline at end of file diff --git a/spaces/Kartik2192/Abcd/index.html b/spaces/Kartik2192/Abcd/index.html deleted file mode 100644 index 918e851d9dd1baf9e4fb4f067fd979d432472161..0000000000000000000000000000000000000000 --- a/spaces/Kartik2192/Abcd/index.html +++ /dev/null @@ -1,24 +0,0 @@ - - - - - - My static Space - - - -
-    Welcome to your static Space!
-    You can modify this app directly by editing index.html in the Files and versions tab.
-    Also don't forget to check the Spaces documentation.
    - - diff --git a/spaces/KaygNas/cut-it/main.py b/spaces/KaygNas/cut-it/main.py deleted file mode 100644 index 872141b7db6b105c1ed6aeb730d5995b5630fd03..0000000000000000000000000000000000000000 --- a/spaces/KaygNas/cut-it/main.py +++ /dev/null @@ -1,115 +0,0 @@ -from typing import Annotated -from fastapi import FastAPI, UploadFile, File, Form, Response -from fastapi.staticfiles import StaticFiles -from fastapi.responses import FileResponse -from fastapi.middleware.cors import CORSMiddleware -from pydantic import BaseModel -import io -import json - -# FastAPI tutorial -# https://fastapi.tiangolo.com/tutorial/ -app = FastAPI() - -# Allow CORS for localhost -origins = [ - "http://localhost:3000", -] - -app.add_middleware( - CORSMiddleware, - allow_origins=origins, - allow_credentials=True, - allow_methods=["*"], - allow_headers=["*"], -) - -# Use a pipeline as a high-level helper -# https://huggingface.co/docs/transformers/main_classes/pipelines#transformers.ObjectDetectionPipeline -from transformers import pipeline, CLIPModel, CLIPProcessor -from PIL import Image -import numpy as np - -detector = pipeline(model="facebook/detr-resnet-50") - - -@app.post("/detect-image") -async def detect_image(image: Annotated[UploadFile, File()]): - return detector(Image.open(image.file)) - - -# Text-to-image StableDiffusionPipeline -# https://huggingface.co/docs/diffusers/api/pipelines/stable_diffusion/text2img -from diffusers import DiffusionPipeline - -text_to_image_creator = DiffusionPipeline.from_pretrained( - "runwayml/stable-diffusion-v1-5" -) -text_to_image_creator.enable_attention_slicing() - - -class TextImageItem(BaseModel): - prompt: str - - -@app.post("/text-to-image") -async def text_to_image(item: TextImageItem): - image = text_to_image_creator(prompt=item.prompt, num_inference_steps=8).images[0] - img_byte_arr = io.BytesIO() - image.save(img_byte_arr, format="JPEG") - return Response(img_byte_arr.getvalue(), media_type="image/jpeg") - - -# Load model directly -# https://huggingface.co/docs/transformers/main/en/model_doc/clip#transformers.CLIPProcessor -# https://huggingface.co/docs/transformers/main/en/model_doc/clip#transformers.CLIPModel -classifier_processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32") -classifier_model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32") - - -@app.post("/classify-image") -async def classify_image( - image: Annotated[UploadFile, File()], - candidate_label: Annotated[str, Form()], - detections: Annotated[str, Form()], -): - _dectections = json.loads(detections) - _image = Image.open(image.file) - - # Copy from space - # https://huggingface.co/spaces/vishnun/CLIPnCROP/blob/main/app.py - images_list = [] - for detection in _dectections: - box = detection["box"] - im_arr = np.array(_image) - roi = im_arr[box["ymin"] : box["ymax"], box["xmin"] : box["xmax"]] - roi_im = Image.fromarray(roi) - images_list.append(roi_im) - - _input = classifier_processor( - text=[candidate_label], images=images_list, return_tensors="pt", padding=True - ) - _output = classifier_model(**_input) - logits_per_image = _output.logits_per_text - probs = logits_per_image.softmax(-1) - l_idx = np.argsort(probs[-1].detach().numpy())[::-1][0:1] - - final_ims = [] - for i, j in enumerate(_dectections): - json_dict = {} - if i in l_idx: - json_dict["detection"] = _dectections[i] - json_dict["score"] = probs[-1].detach().numpy()[i].astype(float) - final_ims.append(json_dict) - - fi = sorted(final_ims, key=lambda item: item.get("score"), reverse=True) - return fi[0] 
- - -# Serve static resources -app.mount("/", StaticFiles(directory="dist", html=True), name="dist") - - -@app.get("/") -def index() -> FileResponse: - return FileResponse(path="/app/dist/index.html", media_type="text/html") diff --git a/spaces/KevinQHLin/UniVTG/main/config_hl.py b/spaces/KevinQHLin/UniVTG/main/config_hl.py deleted file mode 100644 index 853049a62973aa1b5774257b21d97d2f2a9fdef2..0000000000000000000000000000000000000000 --- a/spaces/KevinQHLin/UniVTG/main/config_hl.py +++ /dev/null @@ -1,190 +0,0 @@ -# Copyright (c) THL A29 Limited, a Tencent company. All rights reserved. - -YOUTUBE_SPLITS = { - 'dog': { - 'train': [ - 'BsjTtq337mM', 'eGCD1F74iy8', 'x2Za-t1yHtI', 'iyYiqa0QZXM', - 'azy9ijU6f9I', 'NNtSZ6cPiwA', 'U9CBalvFfbM', 'AZDkqJaOgJU', - '-olTgMPAyMI', 'i35F1Ec3Ats', '6bS6-GVLBeM', 'ZGszTEn28v8', - 'EEb8iSMqwj4', 'p2hYGNkRMCw', '3kbptPDIz4U', 'iLHRqR-M9HQ', - 'zyooMDuAgCA', 'dOVsQ63N0gg', '7H_qqQvPUzY', 'Z5BEFsaYIS4', - 'iWO6io44-Fs', 'vVmGisWK0QI', 'L10kN7Btk90', '2yql1mvWbDs', - 'Iu2nbtr_Uuk', 'NSmOKAauZpM', 'PAhQGoURAro', 'uJ81Us4mBOc', - '1krGVyfIaOw', 'p9yW6FxsrJ4', 'DLGRJfpGmCQ', '0XTXKe2TOAg', - 'qpc4OSqeV7I', 'q_PJFuBOk7k', '0Uu53hCnKQ4', '-szRD9kyNug', - 'rUPxwWmJYpg', 'hseONiKKx_8', 'BLaQcOcDfjo', 'nW5JulWYEc8', - 'rMvH1SMGwwI', 'l6KlvTJkTgk', 'O8j4U3NjNvs', '8AJTZeEeStk' - ], - 'val': [ - 'a2nj7XCo2Rk', '9rP5yF9EC3Y', 'OxSsRZqPfyk', 'bZzP2MieC1c', - 'PcvdX5OVgfQ', 'p0oxRJD1GUk', 'msjK8nHZHZ0', 'hSRyclcZyGM', - 'dlH2K9N_jSM', 'OCVXhRG2fEA', 'MkBdHvXPocc', 'yN7h90Y-04g', - 'PWqLJKZeBC8', '9D_Q8l_ruQk', 'Mp8Pz86J660', '1gjntnYm8NA', - 'O3XxuutEvoo', 'wf_qlAizlSM', 'fXx44D1sqUw', 'P0MnXh6bnKk', - 'sTd06idFa0E', 'ppNjl3I3iJs', 'Om5mczkpcVg', 'xZIN_s-qhbU' - ] - }, - 'gymnastics': { - 'train': [ - 'Wfv90YJ2YtA', 'MbD5OIR9yWc', 'fZwCJWkC_Qw', 'AyRI1CioQfY', - 'xV_5YCdVqSM', '19UO7T32DJI', 'o2gAP2Clg_s', 'ewyfAOrBzjQ', - 'CMTKpA683Ig', 'aNjphhjTgqs', 'dmJ0Nq4DF2w', '57IQ6EudvGU', - 'BAlUYtPUsVI', '_UU4XqYVDqE', 'Kq4OhBiQk_E', 'D6nyvx9kEac', - 'g-m4-zeCisU', '_45vTFtcduE', '9L-Pocc_u70', '0636XaURL-A', - 'GCabQyaHSMg', 'vUi1Scb35fQ', 'eK-Yuoou_1I', 'kkS7TgNZwJI', - '2EFkINKg3nA', 'eKvALYDh7RU', 'Hyp3Hpk6dyA', '9rpzf3sgQkw', - 'kHNAnpewyeo', 'ydQij10qrZM', '41u2V_ZAKto', '6NSWsMKAgEU', - 'kUs_yUR-C2k', 'bs3ZBcfhvKA' - ], - 'val': [ - '2AuigNFEsTM', 'rPsKpHKzUso', 'tzq5cJQ9NQA', 'DyZ0gZ5xmxI', - 'PEKRfJYYEgU', 'affAIVH9uRA', 'FT7yIi3-tG0', 'T_zWyrVzyvw', - 'RoiLzMA_ilA', 'nBZiGSccsTg', 'z3cNtOMKK7A', 'EwQ-aMK2sKg', - 'Rq0BpciuvBM', 's6LNwTThBgs', '-hE9v3izo4c', 'KldEfRhv7H0', - 'eUyuw2J5FaE', 'E0aRE1_ea8E', 'BU7YlQAOBkM', 'iDJM9j11U-c', - 'zr5LSPMBpiI', 'NAfBa7lqg2Q', 'eB4Toq9dUWs', 'YPd7RDN5CkE', - '86YLsw7efDM', 'iQRMMFiYAUw', 'lzEhLAPxZyQ', 'PAjJbT1DRnY' - ] - }, - 'parkour': { - 'train': [ - 'qz1UnnxlWhI', 'MzODICzycHs', '0swXWs9yWA4', 'Nnv22OW_PaI', - 'LUhZJLY2uKc', 'yZz8z1l3XJU', '3dvjtdMC2ls', 'e27ppPer9XY', - 'HJNn2WlKFhM', 'j4OxlxnapNI', 'rhABvn7VjSQ', '3PCwXpwYqLs', - 'LECL1bIpi5w', 'w0ouP79iZWc', 'z6aKQPMJUC0', 'kATlFTwxBVY', - '3SM6a8eyuVA', 'v-Sfc4COqRQ', '64eu8pwuIUE', '7WKm0XDk3og', - '2F5Sc0Jgk4g' - ], - 'val': [ - 'TFdbCRkVeIA', 'uGLs9atTvNc', 'qlGPuopK3CI', 'ucTkpjZO_o4', - '4-4BgyGphLQ', '08k4ysX_XJE', '6sMNnWqa_as', 'oT6g0I2Ok9o', - 'Be4IlnKeBOo', 'yUjJq0kvxcw', 'fLek7GRIxjE' - ] - }, - 'skating': { - 'train': [ - '7owXLUkpoNY', '1OLM0_Jzt5M', 'b1LXb0Sbiy0', '3fGux6-ttlA', - 'HQvRun80GyA', 'a8M-5nTrll8', 'bA3CxZllhsI', 'AUAsfZtcB4E', - 'FG57uCJvQLw', 'jXIuv5uFPTI', 'eG-hdYLoS98', '2SdJBl251PU', - '2PHJqqrGC80', 'EtZkkFhniRw', 'jUiwyguxzIw', 
'FL6mXlaF78Q', - 'BdemklZtYWI', 'ATk_ncI1-BA', '4wiKDfq3X8U', 'BN7GBjVlFTo', - 'JiMZvMkkbRo', '2DIXYkSnRf4', 'dZ3i-HuhQXM', '7jZydh62m8M' - ], - 'val': [ - '2oOe2_Ew6Ao', 'DGcO0QgcXtw', 'ixsKaNplm6o', '7TQbqKWjLcI', - 'CQZNrEstSag', 'g1WbAIzkw80', '4cyx1VpDjc4', 'BGZaaqFjoRY', - 'AJ98A2y1dVw', '1n7Afe5AZCM', '8x8ESK5MnR0' - ] - }, - 'skiing': { - 'train': [ - '6Usy87KaF-A', 'DtjKkp_4KDQ', '4Wt7TM2wDxI', 'iKnzSGFwdbc', - 'nALCc6HPQNs', 'WL4TA--CVcA', 'dFrfsgW1M98', 'x6qmrVojcYc', - 'pvcmQ9J_BYw', 'S3VEYFAP_pk', 'pU57a3jYMEk', '33TrLdo3ook', - 'xLhHU8uo2aY', 'fAHBmka6Psc', '9HYzZk5kiJA', 'T0gjqYbeU1g', - '7o628W-bFy0', 'YKDm_PCa-HM', 'R3DV2zDnNqg', 'NCe9YeXTvHo', - '5tXxvscmZ-Y', 'thNiPQLbi5w', '1TtJy8cSzqA', 'zDRzOsmwa08', - 'gCI4gArPjNA', 'uw0i26NHucs', '1giAsZC_ywQ', 'OvgaPTfEnqo', - 'bFD_p5znoq4', 'uKmqaAvjKgw', '5ivw_sdCTCU', 'iwCSAYGwPq4', - 'HmmOPntPlRA', 'FHCEyiM-NoY', 'EUSFMmoE_jI', 'igvSxtdsT8w', - 'zEgMYFiEaX4', '0K2FKccDp9A', 'tdyz6h4ZtYs', 'PO7GEbi2z3c', - 'mmiu7rRmSAU', 'qL6Kic-CdTo', '0fNCsOY1WGk', 'V3J26hr1ZSE', - 'GS-qBunN3B4', 'ZLNvg8025Nw', 'puAxGH6aWMY', 'h-SlvHubhs8', - 'AdovZ4OAS8I', 'UDvA1XMa1m4', 'qdo3d7mR_9s', 'qAinbyORWIw', - 'v1JpJueAElY', 'TjH29fdjcqI', 'f76B1uucoyo', 'DNPPDcOd5eQ', - '-GX95udKKm8', 'YRO_RQ3aBgg', '1ptV2E7lm9U', 'qa7dtf1Qcew', - '_UJTkqYNrpA', 'md14DNKq2_o', 'tpewrb9dDyo', 'yGoWYi_dHLY', - 'DZ3NRjDHwy8', 'aMFcEuJUqpk', '6fT9KLuE7no', 'lPdQMMAuOZo' - ], - 'val': [ - 'SSlv7qJK5zA', '_BYqZjuKpKA', 'ZueaKXReGjU', 'mGST8ZekCZc', - 'JJSu7Lh9rvs', 'IyoD3G5igY0', 'MXyv-Ut9HRg', 'Z8X9WIojH1U', - 'vT33-8KUb2Q', 'HW6_sPym938', '9wtXO2lF6hM', 'mRdthCqe6Nk', - 'RGxiOb9hlS0', 'ruySf5zL7Kw', 'I7wFmP6P7p0', '0AHkDElk3ws', - 'zqXd4EgUFhE', '91lDbBHUx0w', 'iaHbK6ogafc', 'jRbst8kjWW8', - 'drHPy6wSZGs', '5VaY6LgIqDs', 'bXq9rRSbI3c', 'hjZLa2DTuqs', - 'Ka2qcp3jmWo', 'ZnA4-ggkFu8', 'iXdt4v42mbs', '8aWN-0NZErI', - '09v0HNf81J0', 'YJCR2q-WRhQ', 'RjagI4pAUpw', '_10CbYdTG5M', - 'lhgmIgzBQxs', '2pstGBM4p0w', 'b53-VPsWom4', 'x-G4r153n6o', - 'qBbqK5qlVSM', 'XamrS9XyHuQ', 'u_n7jMS1vlw', 'AO6p0jlOd6U', - 'm-W-lcTkBQ0', 'bMuyPVIlXW8', 'kAAvTAKkIy4', 'U6vnbCurZQA', - 'dHE8q7sZ70U', 'w7fzLVRPSUc', 'FLYkD7zHuHQ', 'nhOhI24P7dM', - 'n5q2KhfoiWw', '7Hcyse0h9HE', '6_BPy_VaPSY' - ] - }, - 'surfing': { - 'train': [ - 'Ai9FwQGn5ds', 'hBl0Sm3_auw', 'LMxMeg407Vg', 'D3fk8doVui4', - 'Y9pxmLg6ti8', 'p_JsivYdbgQ', 'UokX-hcXQeo', 'VYe5QfM5ecE', - 'I48VJ92ouTQ', 'Tn-ebtUnq6E', 'eWae-nWocPU', '-Yamat_0tbw', - 'c2Fy-rdXJy4', 'xQ4NAp4vWbI', 'g9kXCIjIjoE', 'A96Jx6gv6_4', - 'e427qElqqN0', 'tTcA5hiViPo', 'wMdXzj_3aA0', 'fqNzMz1n6uA', - 'jKVOA7RFCUo', 'TJBJrk9iPPA', '_C8EjMxrS2s', 'yj7abHfZTQQ', - 'NDcqgpsyWaU', 'UJjwoivaGNo', 'GZ_XS8EnnWo', 'kJUBIcBjUZ0', - 'lWoLyR7lDAU', 'FilbyF_PGjI', 'fapRkcOe4vE', 't05r50PQqww', - 'QgStLppe610', '2TY8Q2WXUyk', '9y_ED3DyNhE', 'CGwtinVGkVU', - 'nOuRhrAMaIw', 'UN4TwjDajtQ', '-FHmVZWWgcE', 'ksx0_BfpsLg', - 'agOBPDsQrTM', 'XqggBwFOmFU', 'orNzj1J8i-4', '6ZbTCHwt1gk', - '0un3wh_pQAc', '4u6OURBLZDs', 'us0agAKuvEM', 'mVQYl7Q-TQs', - 'cB2SdlGHLMQ', 'WK5t4To0zlA', 'NNEuH_juUHI', 'KTU7xfVOat0', - 'Y1nhbNaY1ZY', 'YlXJnZe575s', 'SH7Ns0ANzJU', '3TbZfeokCkE' - ], - 'val': [ - 'o0on6yIXJQE', '4RsZz_8d8Ro', 'p8VUjcZyK70', '0P2PZXUa0Bg', - 'p2eU5z647Mw', 'mSVxaAJcNJQ', 'bcmXVyFbsRg', 'Eiq8GHi4kEo', - 'H5FEdJYokO4', 'Mkyp0z_Cgig', 'NB5Ez5kJfMU', 'Xa0y6b6Vm6U', - 'gVcCGUtpA90', '0-fstXuo_Pw', '-d72e4v9skA', 'lbp6_wCXqvw', - '9GpZHq1n8ps', 'CefGXyYu_zU', 'SI2JbS48Upg', 'hdklRTNrq0I', - 'J-P-t6g19SM', 'K0f_DpVOjfA', 'lw_1fEY9QTo', 'uUuYnKLETLw', - 'HwKv3Xc5MAE', 'wvQ0h5Nwsxc', 
'l8ME6z_EWKE', 's9dTu2fcbNg', - 'GS09SevPYT4', 'YbwdDCzVczU', 'jaCOI_VwIjc', '3Y1Jp1_fFLQ', - '82OzgxT2tH8', 'IjQhHPlTfdE', 'KzQcJrT91jU', 't05AD0c08zE', - 'rGxWxX6nYO4', 'QGp0kRzKiAc', 'pK9gDWoOyko', 'Srjd4pe6vck', - 'twGcxuhCXoU', 'AshLUHPEb8M', '8En3M5CUc2E', '8sTJfTUk1d0', - 'o-bubyWTw60', 'NctbssxGCtU', 'L09Qo1ql0nM' - ] - } -} - -TVSUM_SPLITS = { - 'BK': { - 'train': ['WxtbjNsCQ8A', 'EE-bNr36nyA', 'oDXZc0tZe04', 'uGu_10sucQo'], - 'val': ['Se3oxnaPsz0'] - }, - 'BT': { - 'train': ['eQu1rNs0an0', 'qqR6AEXwxoQ', 'EYqVtI9YWJA', 'iVt07TCkFM0'], - 'val': ['JgHubY5Vw3Y'] - }, - 'DS': { - 'train': ['kLxoNp-UchI', 'NyBmCxDoHJU', 'jcoYJXDG9sw', '-esJrBWj2d8'], - 'val': ['E11zDS9XGzg'] - }, - 'FM': { - 'train': ['_xMr-HKMfVA', 'byxOvuiIJV0', 'VuWGsYPqAX8', 'xmEERLqJ2kU'], - 'val': ['JKpqYvAdIsw'] - }, - 'GA': { - 'train': ['xxdtq8mxegs', 'i3wAGJaaktw', '0tmA_C6XwfM', '3eYKfiOEJNs'], - 'val': ['Bhxk-O1Y7Ho'] - }, - 'MS': { - 'train': ['Hl-__g2gn_A', 'WG0MBPpPC6I', 'LRw_obCPUt0', '37rzWOQsNIw'], - 'val': ['Yi4Ij2NM7U4'] - }, - 'PK': { - 'train': ['GsAD1KT1xo8', 'XkqCExn6_Us', 'b626MiF1ew4', 'PJrm840pAUI'], - 'val': ['cjibtmSLxQ4'] - }, - 'PR': { - 'train': ['RBCABdttQmI', 'z_6gVvQb2d0', '4wU_LUjG5Ic', '91IHQYk1IQM'], - 'val': ['fWutDQy1nnY'] - }, - 'VT': { - 'train': ['gzDbaEs1Rlg', 'XzYM3PfTM4w', '98MoyGZKHXc', 'AwmHb44_ouw'], - 'val': ['J0nA4VgnoCo'] - }, - 'VU': { - 'train': ['akI8YFjEmUw', 'HT5vyqe0Xaw', 'vdmoEJ5YbrQ', 'xwqBXPGE9pQ'], - 'val': ['sTEELN-vY30'] - } -} \ No newline at end of file diff --git a/spaces/KonradSzafer/HF-QA-Demo/qa_engine/__init__.py b/spaces/KonradSzafer/HF-QA-Demo/qa_engine/__init__.py deleted file mode 100644 index 609ba0a944f3f0caf868f9402ab46f240f0f630a..0000000000000000000000000000000000000000 --- a/spaces/KonradSzafer/HF-QA-Demo/qa_engine/__init__.py +++ /dev/null @@ -1,9 +0,0 @@ -from dotenv import load_dotenv -from qa_engine.logger import setup_logger - -setup_logger() -load_dotenv(dotenv_path='config/.env') - -from .logger import setup_logger, logger -from .config import Config -from .qa_engine import QAEngine diff --git a/spaces/KyanChen/RSPrompter/mmdet/evaluation/functional/recall.py b/spaces/KyanChen/RSPrompter/mmdet/evaluation/functional/recall.py deleted file mode 100644 index 4bce2bf3614ab454dbbdf48efc4650018cc71b13..0000000000000000000000000000000000000000 --- a/spaces/KyanChen/RSPrompter/mmdet/evaluation/functional/recall.py +++ /dev/null @@ -1,199 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-from collections.abc import Sequence - -import numpy as np -from mmengine.logging import print_log -from terminaltables import AsciiTable - -from .bbox_overlaps import bbox_overlaps - - -def _recalls(all_ious, proposal_nums, thrs): - - img_num = all_ious.shape[0] - total_gt_num = sum([ious.shape[0] for ious in all_ious]) - - _ious = np.zeros((proposal_nums.size, total_gt_num), dtype=np.float32) - for k, proposal_num in enumerate(proposal_nums): - tmp_ious = np.zeros(0) - for i in range(img_num): - ious = all_ious[i][:, :proposal_num].copy() - gt_ious = np.zeros((ious.shape[0])) - if ious.size == 0: - tmp_ious = np.hstack((tmp_ious, gt_ious)) - continue - for j in range(ious.shape[0]): - gt_max_overlaps = ious.argmax(axis=1) - max_ious = ious[np.arange(0, ious.shape[0]), gt_max_overlaps] - gt_idx = max_ious.argmax() - gt_ious[j] = max_ious[gt_idx] - box_idx = gt_max_overlaps[gt_idx] - ious[gt_idx, :] = -1 - ious[:, box_idx] = -1 - tmp_ious = np.hstack((tmp_ious, gt_ious)) - _ious[k, :] = tmp_ious - - _ious = np.fliplr(np.sort(_ious, axis=1)) - recalls = np.zeros((proposal_nums.size, thrs.size)) - for i, thr in enumerate(thrs): - recalls[:, i] = (_ious >= thr).sum(axis=1) / float(total_gt_num) - - return recalls - - -def set_recall_param(proposal_nums, iou_thrs): - """Check proposal_nums and iou_thrs and set correct format.""" - if isinstance(proposal_nums, Sequence): - _proposal_nums = np.array(proposal_nums) - elif isinstance(proposal_nums, int): - _proposal_nums = np.array([proposal_nums]) - else: - _proposal_nums = proposal_nums - - if iou_thrs is None: - _iou_thrs = np.array([0.5]) - elif isinstance(iou_thrs, Sequence): - _iou_thrs = np.array(iou_thrs) - elif isinstance(iou_thrs, float): - _iou_thrs = np.array([iou_thrs]) - else: - _iou_thrs = iou_thrs - - return _proposal_nums, _iou_thrs - - -def eval_recalls(gts, - proposals, - proposal_nums=None, - iou_thrs=0.5, - logger=None, - use_legacy_coordinate=False): - """Calculate recalls. - - Args: - gts (list[ndarray]): a list of arrays of shape (n, 4) - proposals (list[ndarray]): a list of arrays of shape (k, 4) or (k, 5) - proposal_nums (int | Sequence[int]): Top N proposals to be evaluated. - iou_thrs (float | Sequence[float]): IoU thresholds. Default: 0.5. - logger (logging.Logger | str | None): The way to print the recall - summary. See `mmengine.logging.print_log()` for details. - Default: None. - use_legacy_coordinate (bool): Whether use coordinate system - in mmdet v1.x. "1" was added to both height and width - which means w, h should be - computed as 'x2 - x1 + 1` and 'y2 - y1 + 1'. Default: False. 
- - - Returns: - ndarray: recalls of different ious and proposal nums - """ - - img_num = len(gts) - assert img_num == len(proposals) - proposal_nums, iou_thrs = set_recall_param(proposal_nums, iou_thrs) - all_ious = [] - for i in range(img_num): - if proposals[i].ndim == 2 and proposals[i].shape[1] == 5: - scores = proposals[i][:, 4] - sort_idx = np.argsort(scores)[::-1] - img_proposal = proposals[i][sort_idx, :] - else: - img_proposal = proposals[i] - prop_num = min(img_proposal.shape[0], proposal_nums[-1]) - if gts[i] is None or gts[i].shape[0] == 0: - ious = np.zeros((0, img_proposal.shape[0]), dtype=np.float32) - else: - ious = bbox_overlaps( - gts[i], - img_proposal[:prop_num, :4], - use_legacy_coordinate=use_legacy_coordinate) - all_ious.append(ious) - all_ious = np.array(all_ious) - recalls = _recalls(all_ious, proposal_nums, iou_thrs) - - print_recall_summary(recalls, proposal_nums, iou_thrs, logger=logger) - return recalls - - -def print_recall_summary(recalls, - proposal_nums, - iou_thrs, - row_idxs=None, - col_idxs=None, - logger=None): - """Print recalls in a table. - - Args: - recalls (ndarray): calculated from `bbox_recalls` - proposal_nums (ndarray or list): top N proposals - iou_thrs (ndarray or list): iou thresholds - row_idxs (ndarray): which rows(proposal nums) to print - col_idxs (ndarray): which cols(iou thresholds) to print - logger (logging.Logger | str | None): The way to print the recall - summary. See `mmengine.logging.print_log()` for details. - Default: None. - """ - proposal_nums = np.array(proposal_nums, dtype=np.int32) - iou_thrs = np.array(iou_thrs) - if row_idxs is None: - row_idxs = np.arange(proposal_nums.size) - if col_idxs is None: - col_idxs = np.arange(iou_thrs.size) - row_header = [''] + iou_thrs[col_idxs].tolist() - table_data = [row_header] - for i, num in enumerate(proposal_nums[row_idxs]): - row = [f'{val:.3f}' for val in recalls[row_idxs[i], col_idxs].tolist()] - row.insert(0, num) - table_data.append(row) - table = AsciiTable(table_data) - print_log('\n' + table.table, logger=logger) - - -def plot_num_recall(recalls, proposal_nums): - """Plot Proposal_num-Recalls curve. - - Args: - recalls(ndarray or list): shape (k,) - proposal_nums(ndarray or list): same shape as `recalls` - """ - if isinstance(proposal_nums, np.ndarray): - _proposal_nums = proposal_nums.tolist() - else: - _proposal_nums = proposal_nums - if isinstance(recalls, np.ndarray): - _recalls = recalls.tolist() - else: - _recalls = recalls - - import matplotlib.pyplot as plt - f = plt.figure() - plt.plot([0] + _proposal_nums, [0] + _recalls) - plt.xlabel('Proposal num') - plt.ylabel('Recall') - plt.axis([0, proposal_nums.max(), 0, 1]) - f.show() - - -def plot_iou_recall(recalls, iou_thrs): - """Plot IoU-Recalls curve. 
- - Args: - recalls(ndarray or list): shape (k,) - iou_thrs(ndarray or list): same shape as `recalls` - """ - if isinstance(iou_thrs, np.ndarray): - _iou_thrs = iou_thrs.tolist() - else: - _iou_thrs = iou_thrs - if isinstance(recalls, np.ndarray): - _recalls = recalls.tolist() - else: - _recalls = recalls - - import matplotlib.pyplot as plt - f = plt.figure() - plt.plot(_iou_thrs + [1.0], _recalls + [0.]) - plt.xlabel('IoU') - plt.ylabel('Recall') - plt.axis([iou_thrs.min(), 1, 0, 1]) - f.show() diff --git a/spaces/KyanChen/RSPrompter/mmdet/models/test_time_augs/det_tta.py b/spaces/KyanChen/RSPrompter/mmdet/models/test_time_augs/det_tta.py deleted file mode 100644 index 95f91db9e1250358db0e1a572cf4c37cc7fe6e6f..0000000000000000000000000000000000000000 --- a/spaces/KyanChen/RSPrompter/mmdet/models/test_time_augs/det_tta.py +++ /dev/null @@ -1,144 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from typing import List, Tuple - -import torch -from mmcv.ops import batched_nms -from mmengine.model import BaseTTAModel -from mmengine.registry import MODELS -from mmengine.structures import InstanceData -from torch import Tensor - -from mmdet.structures import DetDataSample -from mmdet.structures.bbox import bbox_flip - - -@MODELS.register_module() -class DetTTAModel(BaseTTAModel): - """Merge augmented detection results, only bboxes corresponding score under - flipping and multi-scale resizing can be processed now. - - Examples: - >>> tta_model = dict( - >>> type='DetTTAModel', - >>> tta_cfg=dict(nms=dict( - >>> type='nms', - >>> iou_threshold=0.5), - >>> max_per_img=100)) - >>> - >>> tta_pipeline = [ - >>> dict(type='LoadImageFromFile', - >>> backend_args=None), - >>> dict( - >>> type='TestTimeAug', - >>> transforms=[[ - >>> dict(type='Resize', - >>> scale=(1333, 800), - >>> keep_ratio=True), - >>> ], [ - >>> dict(type='RandomFlip', prob=1.), - >>> dict(type='RandomFlip', prob=0.) - >>> ], [ - >>> dict( - >>> type='PackDetInputs', - >>> meta_keys=('img_id', 'img_path', 'ori_shape', - >>> 'img_shape', 'scale_factor', 'flip', - >>> 'flip_direction')) - >>> ]])] - """ - - def __init__(self, tta_cfg=None, **kwargs): - super().__init__(**kwargs) - self.tta_cfg = tta_cfg - - def merge_aug_bboxes(self, aug_bboxes: List[Tensor], - aug_scores: List[Tensor], - img_metas: List[str]) -> Tuple[Tensor, Tensor]: - """Merge augmented detection bboxes and scores. - - Args: - aug_bboxes (list[Tensor]): shape (n, 4*#class) - aug_scores (list[Tensor] or None): shape (n, #class) - Returns: - tuple[Tensor]: ``bboxes`` with shape (n,4), where - 4 represent (tl_x, tl_y, br_x, br_y) - and ``scores`` with shape (n,). - """ - recovered_bboxes = [] - for bboxes, img_info in zip(aug_bboxes, img_metas): - ori_shape = img_info['ori_shape'] - flip = img_info['flip'] - flip_direction = img_info['flip_direction'] - if flip: - bboxes = bbox_flip( - bboxes=bboxes, - img_shape=ori_shape, - direction=flip_direction) - recovered_bboxes.append(bboxes) - bboxes = torch.cat(recovered_bboxes, dim=0) - if aug_scores is None: - return bboxes - else: - scores = torch.cat(aug_scores, dim=0) - return bboxes, scores - - def merge_preds(self, data_samples_list: List[List[DetDataSample]]): - """Merge batch predictions of enhanced data. - - Args: - data_samples_list (List[List[DetDataSample]]): List of predictions - of all enhanced data. The outer list indicates images, and the - inner list corresponds to the different views of one image. - Each element of the inner list is a ``DetDataSample``. 
- Returns: - List[DetDataSample]: Merged batch prediction. - """ - merged_data_samples = [] - for data_samples in data_samples_list: - merged_data_samples.append(self._merge_single_sample(data_samples)) - return merged_data_samples - - def _merge_single_sample( - self, data_samples: List[DetDataSample]) -> DetDataSample: - """Merge predictions which come form the different views of one image - to one prediction. - - Args: - data_samples (List[DetDataSample]): List of predictions - of enhanced data which come form one image. - Returns: - List[DetDataSample]: Merged prediction. - """ - aug_bboxes = [] - aug_scores = [] - aug_labels = [] - img_metas = [] - # TODO: support instance segmentation TTA - assert data_samples[0].pred_instances.get('masks', None) is None, \ - 'TTA of instance segmentation does not support now.' - for data_sample in data_samples: - aug_bboxes.append(data_sample.pred_instances.bboxes) - aug_scores.append(data_sample.pred_instances.scores) - aug_labels.append(data_sample.pred_instances.labels) - img_metas.append(data_sample.metainfo) - - merged_bboxes, merged_scores = self.merge_aug_bboxes( - aug_bboxes, aug_scores, img_metas) - merged_labels = torch.cat(aug_labels, dim=0) - - if merged_bboxes.numel() == 0: - return data_samples[0] - - det_bboxes, keep_idxs = batched_nms(merged_bboxes, merged_scores, - merged_labels, self.tta_cfg.nms) - - det_bboxes = det_bboxes[:self.tta_cfg.max_per_img] - det_labels = merged_labels[keep_idxs][:self.tta_cfg.max_per_img] - - results = InstanceData() - _det_bboxes = det_bboxes.clone() - results.bboxes = _det_bboxes[:, :-1] - results.scores = _det_bboxes[:, -1] - results.labels = det_labels - det_results = data_samples[0] - det_results.pred_instances = results - return det_results diff --git a/spaces/LaynzKunz/Aesthetic_RVC_Inference_HF/lib/infer/modules/vc/__init__.py b/spaces/LaynzKunz/Aesthetic_RVC_Inference_HF/lib/infer/modules/vc/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/LaynzKunz/Aesthetic_RVC_Inference_HF/lib/infer/modules/vc/pipeline.py b/spaces/LaynzKunz/Aesthetic_RVC_Inference_HF/lib/infer/modules/vc/pipeline.py deleted file mode 100644 index e38097b4f89a6669052370c0cc41452d3049c814..0000000000000000000000000000000000000000 --- a/spaces/LaynzKunz/Aesthetic_RVC_Inference_HF/lib/infer/modules/vc/pipeline.py +++ /dev/null @@ -1,741 +0,0 @@ -import os -import sys -import traceback -import logging - -logger = logging.getLogger(__name__) - -from functools import lru_cache -from time import time as ttime -from torch import Tensor -import faiss -import librosa -import numpy as np -import parselmouth -import pyworld -import torch.nn.functional as F -from scipy import signal -from tqdm import tqdm - -import random -now_dir = os.getcwd() -sys.path.append(now_dir) -import re -from functools import partial -bh, ah = signal.butter(N=5, Wn=48, btype="high", fs=16000) - -input_audio_path2wav = {} -import torchcrepe # Fork Feature. 
Crepe algo for training and preprocess -import torch -from lib.infer.infer_libs.rmvpe import RMVPE - -@lru_cache -def cache_harvest_f0(input_audio_path, fs, f0max, f0min, frame_period): - audio = input_audio_path2wav[input_audio_path] - f0, t = pyworld.harvest( - audio, - fs=fs, - f0_ceil=f0max, - f0_floor=f0min, - frame_period=frame_period, - ) - f0 = pyworld.stonemask(audio, f0, t, fs) - return f0 - - -def change_rms(data1, sr1, data2, sr2, rate): # 1是输入音频,2是输出音频,rate是2的占比 - # print(data1.max(),data2.max()) - rms1 = librosa.feature.rms( - y=data1, frame_length=sr1 // 2 * 2, hop_length=sr1 // 2 - ) # 每半秒一个点 - rms2 = librosa.feature.rms(y=data2, frame_length=sr2 // 2 * 2, hop_length=sr2 // 2) - rms1 = torch.from_numpy(rms1) - rms1 = F.interpolate( - rms1.unsqueeze(0), size=data2.shape[0], mode="linear" - ).squeeze() - rms2 = torch.from_numpy(rms2) - rms2 = F.interpolate( - rms2.unsqueeze(0), size=data2.shape[0], mode="linear" - ).squeeze() - rms2 = torch.max(rms2, torch.zeros_like(rms2) + 1e-6) - data2 *= ( - torch.pow(rms1, torch.tensor(1 - rate)) - * torch.pow(rms2, torch.tensor(rate - 1)) - ).numpy() - return data2 - - -class Pipeline(object): - def __init__(self, tgt_sr, config): - self.x_pad, self.x_query, self.x_center, self.x_max, self.is_half = ( - config.x_pad, - config.x_query, - config.x_center, - config.x_max, - config.is_half, - ) - self.sr = 16000 # hubert输入采样率 - self.window = 160 # 每帧点数 - self.t_pad = self.sr * self.x_pad # 每条前后pad时间 - self.t_pad_tgt = tgt_sr * self.x_pad - self.t_pad2 = self.t_pad * 2 - self.t_query = self.sr * self.x_query # 查询切点前后查询时间 - self.t_center = self.sr * self.x_center # 查询切点位置 - self.t_max = self.sr * self.x_max # 免查询时长阈值 - self.device = config.device - self.model_rmvpe = RMVPE("%s/rmvpe.pt" % os.environ["rmvpe_root"], is_half=self.is_half, device=self.device) - - self.note_dict = [ - 65.41, 69.30, 73.42, 77.78, 82.41, 87.31, - 92.50, 98.00, 103.83, 110.00, 116.54, 123.47, - 130.81, 138.59, 146.83, 155.56, 164.81, 174.61, - 185.00, 196.00, 207.65, 220.00, 233.08, 246.94, - 261.63, 277.18, 293.66, 311.13, 329.63, 349.23, - 369.99, 392.00, 415.30, 440.00, 466.16, 493.88, - 523.25, 554.37, 587.33, 622.25, 659.25, 698.46, - 739.99, 783.99, 830.61, 880.00, 932.33, 987.77, - 1046.50, 1108.73, 1174.66, 1244.51, 1318.51, 1396.91, - 1479.98, 1567.98, 1661.22, 1760.00, 1864.66, 1975.53, - 2093.00, 2217.46, 2349.32, 2489.02, 2637.02, 2793.83, - 2959.96, 3135.96, 3322.44, 3520.00, 3729.31, 3951.07 - ] - - # Fork Feature: Get the best torch device to use for f0 algorithms that require a torch device. Will return the type (torch.device) - def get_optimal_torch_device(self, index: int = 0) -> torch.device: - if torch.cuda.is_available(): - return torch.device( - f"cuda:{index % torch.cuda.device_count()}" - ) # Very fast - elif torch.backends.mps.is_available(): - return torch.device("mps") - return torch.device("cpu") - - # Fork Feature: Compute f0 with the crepe method - def get_f0_crepe_computation( - self, - x, - f0_min, - f0_max, - p_len, - *args, # 512 before. Hop length changes the speed that the voice jumps to a different dramatic pitch. Lower hop lengths means more pitch accuracy but longer inference time. - **kwargs, # Either use crepe-tiny "tiny" or crepe "full". Default is full - ): - x = x.astype( - np.float32 - ) # fixes the F.conv2D exception. We needed to convert double to float. 
- x /= np.quantile(np.abs(x), 0.999) - torch_device = self.get_optimal_torch_device() - audio = torch.from_numpy(x).to(torch_device, copy=True) - audio = torch.unsqueeze(audio, dim=0) - if audio.ndim == 2 and audio.shape[0] > 1: - audio = torch.mean(audio, dim=0, keepdim=True).detach() - audio = audio.detach() - hop_length = kwargs.get('crepe_hop_length', 160) - model = kwargs.get('model', 'full') - print("Initiating prediction with a crepe_hop_length of: " + str(hop_length)) - pitch: Tensor = torchcrepe.predict( - audio, - self.sr, - hop_length, - f0_min, - f0_max, - model, - batch_size=hop_length * 2, - device=torch_device, - pad=True, - ) - p_len = p_len or x.shape[0] // hop_length - # Resize the pitch for final f0 - source = np.array(pitch.squeeze(0).cpu().float().numpy()) - source[source < 0.001] = np.nan - target = np.interp( - np.arange(0, len(source) * p_len, len(source)) / p_len, - np.arange(0, len(source)), - source, - ) - f0 = np.nan_to_num(target) - return f0 # Resized f0 - - def get_f0_official_crepe_computation( - self, - x, - f0_min, - f0_max, - *args, - **kwargs - ): - # Pick a batch size that doesn't cause memory errors on your gpu - batch_size = 512 - # Compute pitch using first gpu - audio = torch.tensor(np.copy(x))[None].float() - model = kwargs.get('model', 'full') - f0, pd = torchcrepe.predict( - audio, - self.sr, - self.window, - f0_min, - f0_max, - model, - batch_size=batch_size, - device=self.device, - return_periodicity=True, - ) - pd = torchcrepe.filter.median(pd, 3) - f0 = torchcrepe.filter.mean(f0, 3) - f0[pd < 0.1] = 0 - f0 = f0[0].cpu().numpy() - return f0 - - # Fork Feature: Compute pYIN f0 method - def get_f0_pyin_computation(self, x, f0_min, f0_max): - y, sr = librosa.load("saudio/Sidney.wav", self.sr, mono=True) - f0, _, _ = librosa.pyin(y, sr=self.sr, fmin=f0_min, fmax=f0_max) - f0 = f0[1:] # Get rid of extra first frame - return f0 - - def get_pm(self, x, p_len, *args, **kwargs): - f0 = parselmouth.Sound(x, self.sr).to_pitch_ac( - time_step=160 / 16000, - voicing_threshold=0.6, - pitch_floor=kwargs.get('f0_min'), - pitch_ceiling=kwargs.get('f0_max'), - ).selected_array["frequency"] - - return np.pad( - f0, - [[max(0, (p_len - len(f0) + 1) // 2), max(0, p_len - len(f0) - (p_len - len(f0) + 1) // 2)]], - mode="constant" - ) - - def get_harvest(self, x, *args, **kwargs): - f0_spectral = pyworld.harvest( - x.astype(np.double), - fs=self.sr, - f0_ceil=kwargs.get('f0_max'), - f0_floor=kwargs.get('f0_min'), - frame_period=1000 * kwargs.get('hop_length', 160) / self.sr, - ) - return pyworld.stonemask(x.astype(np.double), *f0_spectral, self.sr) - - def get_dio(self, x, *args, **kwargs): - f0_spectral = pyworld.dio( - x.astype(np.double), - fs=self.sr, - f0_ceil=kwargs.get('f0_max'), - f0_floor=kwargs.get('f0_min'), - frame_period=1000 * kwargs.get('hop_length', 160) / self.sr, - ) - return pyworld.stonemask(x.astype(np.double), *f0_spectral, self.sr) - - - def get_rmvpe(self, x, *args, **kwargs): - if not hasattr(self, "model_rmvpe"): - from lib.infer.infer_libs.rmvpe import RMVPE - - logger.info( - "Loading rmvpe model,%s" % "%s/rmvpe.pt" % os.environ["rmvpe_root"] - ) - self.model_rmvpe = RMVPE( - "%s/rmvpe.pt" % os.environ["rmvpe_root"], - is_half=self.is_half, - device=self.device, - ) - f0 = self.model_rmvpe.infer_from_audio(x, thred=0.03) - - if "privateuseone" in str(self.device): # clean ortruntime memory - del self.model_rmvpe.model - del self.model_rmvpe - logger.info("Cleaning ortruntime memory") - - return f0 - - - def get_pitch_dependant_rmvpe(self, 
x, f0_min=1, f0_max=40000, *args, **kwargs): - if not hasattr(self, "model_rmvpe"): - from lib.infer.infer_libs.rmvpe import RMVPE - - logger.info( - "Loading rmvpe model,%s" % "%s/rmvpe.pt" % os.environ["rmvpe_root"] - ) - self.model_rmvpe = RMVPE( - "%s/rmvpe.pt" % os.environ["rmvpe_root"], - is_half=self.is_half, - device=self.device, - ) - f0 = self.model_rmvpe.infer_from_audio_with_pitch(x, thred=0.03, f0_min=f0_min, f0_max=f0_max) - if "privateuseone" in str(self.device): # clean ortruntime memory - del self.model_rmvpe.model - del self.model_rmvpe - logger.info("Cleaning ortruntime memory") - - return f0 - - def autotune_f0(self, f0): - autotuned_f0 = [] - for freq in f0: - closest_notes = [x for x in self.note_dict if abs(x - freq) == min(abs(n - freq) for n in self.note_dict)] - autotuned_f0.append(random.choice(closest_notes)) - return np.array(autotuned_f0, np.float64) - - - # Fork Feature: Acquire median hybrid f0 estimation calculation - def get_f0_hybrid_computation( - self, - methods_str, - input_audio_path, - x, - f0_min, - f0_max, - p_len, - filter_radius, - crepe_hop_length, - time_step, - ): - # Get various f0 methods from input to use in the computation stack - methods_str = re.search('hybrid\[(.+)\]', methods_str) - if methods_str: # Ensure a match was found - methods = [method.strip() for method in methods_str.group(1).split('+')] - f0_computation_stack = [] - - print("Calculating f0 pitch estimations for methods: %s" % str(methods)) - x = x.astype(np.float32) - x /= np.quantile(np.abs(x), 0.999) - # Get f0 calculations for all methods specified - for method in methods: - f0 = None - if method == "crepe-tiny": - f0 = self.get_f0_official_crepe_computation(x, f0_min, f0_max, "tiny") - f0 = f0[1:] # Get rid of extra first frame - elif method == "mangio-crepe": - f0 = self.get_f0_crepe_computation( - x, f0_min, f0_max, p_len, crepe_hop_length - ) - elif method == "mangio-crepe-tiny": - f0 = self.get_f0_crepe_computation( - x, f0_min, f0_max, p_len, crepe_hop_length, "tiny" - ) - # elif method == "pyin": Not Working just yet - # f0 = self.get_f0_pyin_computation(x, f0_min, f0_max) - # Push method to the stack - f0_computation_stack.append(f0) - - for fc in f0_computation_stack: - print(len(fc)) - - print(f"Calculating hybrid median f0 from the stack of: {str(methods)}") - f0_median_hybrid = np.nanmedian(f0_computation_stack, axis=0) - return f0_median_hybrid - - def get_f0( - self, - input_audio_path, - x, - p_len, - f0_up_key, - f0_method, - filter_radius, - crepe_hop_length, - f0_autotune, - inp_f0=None, - f0_min=50, - f0_max=1100, - ): - global input_audio_path2wav - time_step = self.window / self.sr * 1000 - f0_min = 50 - f0_max = 1100 - f0_mel_min = 1127 * np.log(1 + f0_min / 700) - f0_mel_max = 1127 * np.log(1 + f0_max / 700) - - if f0_method == "pm": - f0 = ( - parselmouth.Sound(x, self.sr) - .to_pitch_ac( - time_step=time_step / 1000, - voicing_threshold=0.6, - pitch_floor=f0_min, - pitch_ceiling=f0_max, - ) - .selected_array["frequency"] - ) - pad_size = (p_len - len(f0) + 1) // 2 - if pad_size > 0 or p_len - len(f0) - pad_size > 0: - f0 = np.pad( - f0, [[pad_size, p_len - len(f0) - pad_size]], mode="constant" - ) - elif f0_method == "harvest": - input_audio_path2wav[input_audio_path] = x.astype(np.double) - f0 = cache_harvest_f0(input_audio_path, self.sr, f0_max, f0_min, 10) - if filter_radius > 2: - f0 = signal.medfilt(f0, 3) - elif f0_method == "dio": # Potentially Buggy? 
- f0, t = pyworld.dio( - x.astype(np.double), - fs=self.sr, - f0_ceil=f0_max, - f0_floor=f0_min, - frame_period=10, - ) - f0 = pyworld.stonemask(x.astype(np.double), f0, t, self.sr) - f0 = signal.medfilt(f0, 3) - elif f0_method == "crepe": - model = "full" - # Pick a batch size that doesn't cause memory errors on your gpu - batch_size = 512 - # Compute pitch using first gpu - audio = torch.tensor(np.copy(x))[None].float() - f0, pd = torchcrepe.predict( - audio, - self.sr, - self.window, - f0_min, - f0_max, - model, - batch_size=batch_size, - device=self.device, - return_periodicity=True, - ) - pd = torchcrepe.filter.median(pd, 3) - f0 = torchcrepe.filter.mean(f0, 3) - f0[pd < 0.1] = 0 - f0 = f0[0].cpu().numpy() - elif f0_method == "crepe-tiny": - f0 = self.get_f0_official_crepe_computation(x, f0_min, f0_max, "tiny") - elif f0_method == "mangio-crepe": - f0 = self.get_f0_crepe_computation( - x, f0_min, f0_max, p_len, crepe_hop_length - ) - elif f0_method == "mangio-crepe-tiny": - f0 = self.get_f0_crepe_computation( - x, f0_min, f0_max, p_len, crepe_hop_length, "tiny" - ) - elif f0_method == "rmvpe": - if not hasattr(self, "model_rmvpe"): - from lib.infer.infer_libs.rmvpe import RMVPE - - logger.info( - "Loading rmvpe model,%s" % "%s/rmvpe.pt" % os.environ["rmvpe_root"] - ) - self.model_rmvpe = RMVPE( - "%s/rmvpe.pt" % os.environ["rmvpe_root"], - is_half=self.is_half, - device=self.device, - ) - f0 = self.model_rmvpe.infer_from_audio(x, thred=0.03) - - if "privateuseone" in str(self.device): # clean ortruntime memory - del self.model_rmvpe.model - del self.model_rmvpe - logger.info("Cleaning ortruntime memory") - elif f0_method == "rmvpe+": - params = {'x': x, 'p_len': p_len, 'f0_up_key': f0_up_key, 'f0_min': f0_min, - 'f0_max': f0_max, 'time_step': time_step, 'filter_radius': filter_radius, - 'crepe_hop_length': crepe_hop_length, 'model': "full" - } - f0 = self.get_pitch_dependant_rmvpe(**params) - elif "hybrid" in f0_method: - # Perform hybrid median pitch estimation - input_audio_path2wav[input_audio_path] = x.astype(np.double) - f0 = self.get_f0_hybrid_computation( - f0_method,+ - input_audio_path, - x, - f0_min, - f0_max, - p_len, - filter_radius, - crepe_hop_length, - time_step, - ) - print("Autotune:", f0_autotune) - if f0_autotune: - f0 = self.autotune_f0(f0) - - f0 *= pow(2, f0_up_key / 12) - # with open("test.txt","w")as f:f.write("\n".join([str(i)for i in f0.tolist()])) - tf0 = self.sr // self.window # 每秒f0点数 - if inp_f0 is not None: - delta_t = np.round( - (inp_f0[:, 0].max() - inp_f0[:, 0].min()) * tf0 + 1 - ).astype("int16") - replace_f0 = np.interp( - list(range(delta_t)), inp_f0[:, 0] * 100, inp_f0[:, 1] - ) - shape = f0[self.x_pad * tf0 : self.x_pad * tf0 + len(replace_f0)].shape[0] - f0[self.x_pad * tf0 : self.x_pad * tf0 + len(replace_f0)] = replace_f0[ - :shape - ] - # with open("test_opt.txt","w")as f:f.write("\n".join([str(i)for i in f0.tolist()])) - f0bak = f0.copy() - f0_mel = 1127 * np.log(1 + f0 / 700) - f0_mel[f0_mel > 0] = (f0_mel[f0_mel > 0] - f0_mel_min) * 254 / ( - f0_mel_max - f0_mel_min - ) + 1 - f0_mel[f0_mel <= 1] = 1 - f0_mel[f0_mel > 255] = 255 - f0_coarse = np.rint(f0_mel).astype(np.int32) - return f0_coarse, f0bak # 1-0 - - def vc( - self, - model, - net_g, - sid, - audio0, - pitch, - pitchf, - times, - index, - big_npy, - index_rate, - version, - protect, - ): # ,file_index,file_big_npy - feats = torch.from_numpy(audio0) - if self.is_half: - feats = feats.half() - else: - feats = feats.float() - if feats.dim() == 2: # double channels - feats = 
feats.mean(-1) - assert feats.dim() == 1, feats.dim() - feats = feats.view(1, -1) - padding_mask = torch.BoolTensor(feats.shape).to(self.device).fill_(False) - - inputs = { - "source": feats.to(self.device), - "padding_mask": padding_mask, - "output_layer": 9 if version == "v1" else 12, - } - t0 = ttime() - with torch.no_grad(): - logits = model.extract_features(**inputs) - feats = model.final_proj(logits[0]) if version == "v1" else logits[0] - if protect < 0.5 and pitch is not None and pitchf is not None: - feats0 = feats.clone() - if ( - not isinstance(index, type(None)) - and not isinstance(big_npy, type(None)) - and index_rate != 0 - ): - npy = feats[0].cpu().numpy() - if self.is_half: - npy = npy.astype("float32") - - # _, I = index.search(npy, 1) - # npy = big_npy[I.squeeze()] - - score, ix = index.search(npy, k=8) - weight = np.square(1 / score) - weight /= weight.sum(axis=1, keepdims=True) - npy = np.sum(big_npy[ix] * np.expand_dims(weight, axis=2), axis=1) - - if self.is_half: - npy = npy.astype("float16") - feats = ( - torch.from_numpy(npy).unsqueeze(0).to(self.device) * index_rate - + (1 - index_rate) * feats - ) - - feats = F.interpolate(feats.permute(0, 2, 1), scale_factor=2).permute(0, 2, 1) - if protect < 0.5 and pitch is not None and pitchf is not None: - feats0 = F.interpolate(feats0.permute(0, 2, 1), scale_factor=2).permute( - 0, 2, 1 - ) - t1 = ttime() - p_len = audio0.shape[0] // self.window - if feats.shape[1] < p_len: - p_len = feats.shape[1] - if pitch is not None and pitchf is not None: - pitch = pitch[:, :p_len] - pitchf = pitchf[:, :p_len] - - if protect < 0.5 and pitch is not None and pitchf is not None: - pitchff = pitchf.clone() - pitchff[pitchf > 0] = 1 - pitchff[pitchf < 1] = protect - pitchff = pitchff.unsqueeze(-1) - feats = feats * pitchff + feats0 * (1 - pitchff) - feats = feats.to(feats0.dtype) - p_len = torch.tensor([p_len], device=self.device).long() - with torch.no_grad(): - hasp = pitch is not None and pitchf is not None - arg = (feats, p_len, pitch, pitchf, sid) if hasp else (feats, p_len, sid) - audio1 = (net_g.infer(*arg)[0][0, 0]).data.cpu().float().numpy() - del hasp, arg - del feats, p_len, padding_mask - if torch.cuda.is_available(): - torch.cuda.empty_cache() - t2 = ttime() - times[0] += t1 - t0 - times[2] += t2 - t1 - return audio1 - def process_t(self, t, s, window, audio_pad, pitch, pitchf, times, index, big_npy, index_rate, version, protect, t_pad_tgt, if_f0, sid, model, net_g): - t = t // window * window - if if_f0 == 1: - return self.vc( - model, - net_g, - sid, - audio_pad[s : t + t_pad_tgt + window], - pitch[:, s // window : (t + t_pad_tgt) // window], - pitchf[:, s // window : (t + t_pad_tgt) // window], - times, - index, - big_npy, - index_rate, - version, - protect, - )[t_pad_tgt : -t_pad_tgt] - else: - return self.vc( - model, - net_g, - sid, - audio_pad[s : t + t_pad_tgt + window], - None, - None, - times, - index, - big_npy, - index_rate, - version, - protect, - )[t_pad_tgt : -t_pad_tgt] - - - def pipeline( - self, - model, - net_g, - sid, - audio, - input_audio_path, - times, - f0_up_key, - f0_method, - file_index, - index_rate, - if_f0, - filter_radius, - tgt_sr, - resample_sr, - rms_mix_rate, - version, - protect, - crepe_hop_length, - f0_autotune, - f0_file=None, - f0_min=50, - f0_max=1100 - ): - if ( - file_index != "" - # and file_big_npy != "" - # and os.path.exists(file_big_npy) == True - and os.path.exists(file_index) - and index_rate != 0 - ): - try: - index = faiss.read_index(file_index) - # big_npy = 
np.load(file_big_npy) - big_npy = index.reconstruct_n(0, index.ntotal) - except: - traceback.print_exc() - index = big_npy = None - else: - index = big_npy = None - audio = signal.filtfilt(bh, ah, audio) - audio_pad = np.pad(audio, (self.window // 2, self.window // 2), mode="reflect") - opt_ts = [] - if audio_pad.shape[0] > self.t_max: - audio_sum = np.zeros_like(audio) - for i in range(self.window): - audio_sum += audio_pad[i : i - self.window] - for t in range(self.t_center, audio.shape[0], self.t_center): - opt_ts.append( - t - - self.t_query - + np.where( - np.abs(audio_sum[t - self.t_query : t + self.t_query]) - == np.abs(audio_sum[t - self.t_query : t + self.t_query]).min() - )[0][0] - ) - s = 0 - audio_opt = [] - t = None - t1 = ttime() - audio_pad = np.pad(audio, (self.t_pad, self.t_pad), mode="reflect") - p_len = audio_pad.shape[0] // self.window - inp_f0 = None - if hasattr(f0_file, "name"): - try: - with open(f0_file.name, "r") as f: - lines = f.read().strip("\n").split("\n") - inp_f0 = [] - for line in lines: - inp_f0.append([float(i) for i in line.split(",")]) - inp_f0 = np.array(inp_f0, dtype="float32") - except: - traceback.print_exc() - sid = torch.tensor(sid, device=self.device).unsqueeze(0).long() - pitch, pitchf = None, None - if if_f0: - pitch, pitchf = self.get_f0( - input_audio_path, - audio_pad, - p_len, - f0_up_key, - f0_method, - filter_radius, - crepe_hop_length, - f0_autotune, - inp_f0, - f0_min, - f0_max - ) - pitch = pitch[:p_len] - pitchf = pitchf[:p_len] - if "mps" not in str(self.device) or "xpu" not in str(self.device): - pitchf = pitchf.astype(np.float32) - pitch = torch.tensor(pitch, device=self.device).unsqueeze(0).long() - pitchf = torch.tensor(pitchf, device=self.device).unsqueeze(0).float() - t2 = ttime() - times[1] += t2 - t1 - - with tqdm(total=len(opt_ts), desc="Processing", unit="window") as pbar: - for i, t in enumerate(opt_ts): - t = t // self.window * self.window - start = s - end = t + self.t_pad2 + self.window - audio_slice = audio_pad[start:end] - pitch_slice = pitch[:, start // self.window:end // self.window] if if_f0 else None - pitchf_slice = pitchf[:, start // self.window:end // self.window] if if_f0 else None - audio_opt.append(self.vc(model, net_g, sid, audio_slice, pitch_slice, pitchf_slice, times, index, big_npy, index_rate, version, protect)[self.t_pad_tgt : -self.t_pad_tgt]) - s = t - pbar.update(1) - pbar.refresh() - - audio_slice = audio_pad[t:] - pitch_slice = pitch[:, t // self.window:] if if_f0 and t is not None else pitch - pitchf_slice = pitchf[:, t // self.window:] if if_f0 and t is not None else pitchf - audio_opt.append(self.vc(model, net_g, sid, audio_slice, pitch_slice, pitchf_slice, times, index, big_npy, index_rate, version, protect)[self.t_pad_tgt : -self.t_pad_tgt]) - - audio_opt = np.concatenate(audio_opt) - if rms_mix_rate != 1: - audio_opt = change_rms(audio, 16000, audio_opt, tgt_sr, rms_mix_rate) - if tgt_sr != resample_sr >= 16000: - audio_opt = librosa.resample( - audio_opt, orig_sr=tgt_sr, target_sr=resample_sr - ) - audio_max = np.abs(audio_opt).max() / 0.99 - max_int16 = 32768 - if audio_max > 1: - max_int16 /= audio_max - audio_opt = (audio_opt * max_int16).astype(np.int16) - del pitch, pitchf, sid - if torch.cuda.is_available(): - torch.cuda.empty_cache() - - print("Returning completed audio...") - return audio_opt diff --git a/spaces/Lee008/PixelDayReal/src/GAN.py b/spaces/Lee008/PixelDayReal/src/GAN.py deleted file mode 100644 index 
bbe533f037d2c4cfe0be24c8c9a3bd4bc382dac0..0000000000000000000000000000000000000000 --- a/spaces/Lee008/PixelDayReal/src/GAN.py +++ /dev/null @@ -1,202 +0,0 @@ -import os -import cv2 -import keras -import warnings -import numpy as np -from PIL import Image -import matplotlib.pyplot as plt - -from tensorflow.keras.optimizers import Adam -from tensorflow.keras.models import Sequential, Model -from tensorflow.keras.layers import Dense, LeakyReLU, Reshape, Flatten, Input -from tensorflow.keras.layers import Conv2D, MaxPooling2D, Activation, Dropout, Conv2DTranspose - -from tensorflow.compat.v1.keras.layers import BatchNormalization - -images = [] -def load_images(size=(64,64)): - pixed_faces = os.listdir("kaggle/working/results/pixed_faces") - images_Path = "kaggle/working/results/pixed_faces" - for i in pixed_faces: - try: - image = cv2.imread(f"{images_Path}/{i}") - image = cv2.resize(image,size) - images.append(cv2.cvtColor(image, cv2.COLOR_BGR2RGB)) - except: - pass - -load_images() - - -#--------vvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvv -#Author: https://www.kaggle.com/nassimyagoub -#--------^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ -def __init__(self): - self.img_shape = (64, 64, 3) - - self.noise_size = 100 - - optimizer = Adam(0.0002,0.5) - - self.discriminator = self.build_discriminator() - self.discriminator.compile(loss='binary_crossentropy', - optimizer=optimizer, - metrics=['accuracy']) - - self.generator = self.build_generator() - self.generator.compile(loss='binary_crossentropy', optimizer=optimizer) - - self.combined = Sequential() - self.combined.add(self.generator) - self.combined.add(self.discriminator) - - self.discriminator.trainable = False - - self.combined.compile(loss='binary_crossentropy', optimizer=optimizer) - - self.combined.summary() - -def build_generator(self): - epsilon = 0.00001 - noise_shape = (self.noise_size,) - - model = Sequential() - - model.add(Dense(4*4*512, activation='linear', input_shape=noise_shape)) - model.add(LeakyReLU(alpha=0.2)) - model.add(Reshape((4, 4, 512))) - - model.add(Conv2DTranspose(512, kernel_size=[4,4], strides=[2,2], padding="same", - kernel_initializer= keras.initializers.TruncatedNormal(stddev=0.02))) - model.add(BatchNormalization(momentum=0.9, epsilon=epsilon)) - model.add(LeakyReLU(alpha=0.2)) - - model.add(Conv2DTranspose(256, kernel_size=[4,4], strides=[2,2], padding="same", - kernel_initializer= keras.initializers.TruncatedNormal(stddev=0.02))) - model.add(BatchNormalization(momentum=0.9, epsilon=epsilon)) - model.add(LeakyReLU(alpha=0.2)) - - model.add(Conv2DTranspose(128, kernel_size=[4,4], strides=[2,2], padding="same", - kernel_initializer= keras.initializers.TruncatedNormal(stddev=0.02))) - model.add(BatchNormalization(momentum=0.9, epsilon=epsilon)) - model.add(LeakyReLU(alpha=0.2)) - - model.add(Conv2DTranspose(64, kernel_size=[4,4], strides=[2,2], padding="same", - kernel_initializer= keras.initializers.TruncatedNormal(stddev=0.02))) - model.add(BatchNormalization(momentum=0.9, epsilon=epsilon)) - model.add(LeakyReLU(alpha=0.2)) - - model.add(Conv2DTranspose(3, kernel_size=[4,4], strides=[1,1], padding="same", - kernel_initializer= keras.initializers.TruncatedNormal(stddev=0.02))) - - model.add(Activation("tanh")) - - model.summary() - - noise = Input(shape=noise_shape) - img = model(noise) - - return Model(noise, img) - -def build_discriminator(self): - - model = Sequential() - - model.add(Conv2D(128, (3,3), padding='same', input_shape=self.img_shape)) - model.add(LeakyReLU(alpha=0.2)) - model.add(BatchNormalization()) - 
model.add(Conv2D(128, (3,3), padding='same')) - model.add(LeakyReLU(alpha=0.2)) - model.add(BatchNormalization()) - model.add(MaxPooling2D(pool_size=(3,3))) - model.add(Dropout(0.2)) - - model.add(Conv2D(128, (3,3), padding='same')) - model.add(LeakyReLU(alpha=0.2)) - model.add(BatchNormalization()) - model.add(Conv2D(128, (3,3), padding='same')) - model.add(LeakyReLU(alpha=0.2)) - model.add(BatchNormalization()) - model.add(MaxPooling2D(pool_size=(3,3))) - model.add(Dropout(0.3)) - - model.add(Flatten()) - model.add(Dense(128)) - model.add(LeakyReLU(alpha=0.2)) - model.add(Dense(128)) - model.add(LeakyReLU(alpha=0.2)) - model.add(Dense(1, activation='sigmoid')) - - model.summary() - - img = Input(shape=self.img_shape) - validity = model(img) - - return Model(img, validity) - -def train(self, epochs, batch_size=128, metrics_update=50, save_images=100, save_model=2000): - - X_train = np.array(images) - X_train = (X_train.astype(np.float32) - 127.5) / 127.5 - - half_batch = int(batch_size / 2) - - mean_d_loss=[0,0] - mean_g_loss=0 - - for epoch in range(epochs): - idx = np.random.randint(0, X_train.shape[0], half_batch) - imgs = X_train[idx] - - noise = np.random.normal(0, 1, (half_batch, self.noise_size)) - gen_imgs = self.generator.predict(noise) - - - - - d_loss = 0.5 * np.add(self.discriminator.train_on_batch(imgs, np.ones((half_batch, 1))), - self.discriminator.train_on_batch(gen_imgs, np.zeros((half_batch, 1)))) - - - noise = np.random.normal(0, 1, (batch_size, self.noise_size)) - - valid_y = np.array([1] * batch_size) - g_loss = self.combined.train_on_batch(noise, valid_y) - - mean_d_loss[0] += d_loss[0] - mean_d_loss[1] += d_loss[1] - mean_g_loss += g_loss - - - if epoch % metrics_update == 0: - print ("%d [Discriminator loss: %f, acc.: %.2f%%] [Generator loss: %f]" % (epoch, mean_d_loss[0]/metrics_update, 100*mean_d_loss[1]/metrics_update, mean_g_loss/metrics_update)) - mean_d_loss=[0,0] - mean_g_loss=0 - - if epoch % save_images == 0: - self.save_images(epoch) - - - if epoch % save_model == 0: - self.generator.save("kaggle/working/results/generators/generator_%d" % epoch) - self.discriminator.save("kaggle/working/results/discriminators/discriminator_%d" % epoch) - - -def save_images(self, epoch): - noise = np.random.normal(0, 1, (25, self.noise_size)) - gen_imgs = self.generator.predict(noise) - - - gen_imgs = 0.5 * gen_imgs + 0.5 - - fig, axs = plt.subplots(5,5, figsize = (8,8)) - - for i in range(5): - for j in range(5): - axs[i,j].imshow(gen_imgs[5*i+j]) - axs[i,j].axis('off') - - plt.show() - - fig.savefig("kaggle/working/results/pandaS_%d.png" % epoch) - plt.close() \ No newline at end of file diff --git a/spaces/MKFMIKU/Bi-Noising.Diffusion/diffusion.py b/spaces/MKFMIKU/Bi-Noising.Diffusion/diffusion.py deleted file mode 100644 index f768d183f40d2a882154983d4a07a4d75e7bd89b..0000000000000000000000000000000000000000 --- a/spaces/MKFMIKU/Bi-Noising.Diffusion/diffusion.py +++ /dev/null @@ -1,140 +0,0 @@ -import numpy as np -import PIL -from PIL import Image -import torch - -from diffusion_arch import ILVRUNetModel, ConditionalUNetModel -from guided_diffusion.script_util import create_gaussian_diffusion - -import torch.nn.functional as F -import torchvision.transforms.functional as TF -from torchvision.utils import make_grid - -def preprocess_image(image): - w, h = image.size - w, h = map(lambda x: x - x % 32, (w, h)) # resize to integer multiple of 32 - image = image.resize((w, h), resample=PIL.Image.LANCZOS) - image = np.array(image).astype(np.float32) / 255.0 - image = 
torch.from_numpy(image.transpose(2,0,1)).unsqueeze(0) - return 2.0 * image - 1.0 - -def preprocess_mask(mask): - mask = mask.convert("L") - w, h = mask.size - w, h = map(lambda x: x - x % 32, (w, h)) # resize to integer multiple of 32 - mask = mask.resize((w, h), resample=PIL.Image.NEAREST) - mask = np.array(mask).astype(np.float32) / 255.0 - mask = torch.from_numpy(np.repeat(mask[None, ...], 3, axis=0)).unsqueeze(0) - mask[mask > 0] = 1 - return mask - - -class DiffusionPipeline(): - def __init__(self, device): - super().__init__() - self.device = device - diffusion_model = ILVRUNetModel( - in_channels=3, - model_channels=128, - out_channels=6, - num_res_blocks=1, - attention_resolutions=[16], - channel_mult=(1, 1, 2, 2, 4, 4), - num_classes=None, - use_checkpoint=False, - use_fp16=False, - num_heads=4, - num_head_channels=64, - num_heads_upsample=-1, - use_scale_shift_norm=True, - resblock_updown=True, - use_new_attention_order=False - ) - diffusion_model = diffusion_model.to(device) - diffusion_model = diffusion_model.eval() - ilvr_pretraining = torch.load('./ffhq_10m.pt', map_location='cpu') - diffusion_model.load_state_dict(ilvr_pretraining) - self.diffusion_model = diffusion_model - - diffusion_restoration_model = ConditionalUNetModel( - in_channels=3, - model_channels=128, - out_channels=6, - num_res_blocks=1, - attention_resolutions=[16], - dropout=0.0, - channel_mult=(1, 1, 2, 2, 4, 4), - num_classes=None, - use_checkpoint=False, - use_fp16=False, - num_heads=4, - num_head_channels=64, - num_heads_upsample=-1, - use_scale_shift_norm=True, - resblock_updown=True, - use_new_attention_order=False - ) - diffusion_restoration_model = diffusion_restoration_model.to(device) - diffusion_restoration_model = diffusion_restoration_model.eval() - state_dict = torch.load('./net_g_250000.pth', map_location='cpu') - diffusion_restoration_model.load_state_dict(state_dict['params']) - self.diffusion_restoration_model = diffusion_restoration_model - - @torch.no_grad() - def __call__(self, lq, diffusion_step, binoising_step, grid_size): - lq = lq.convert("RGB").resize((256, 256), resample=Image.LANCZOS) - - eval_gaussian_diffusion = create_gaussian_diffusion( - steps=1000, - learn_sigma=True, - noise_schedule='linear', - use_kl=False, - timestep_respacing=str(int(diffusion_step)), - predict_xstart=False, - rescale_timesteps=False, - rescale_learned_sigmas=False, - ) - - ow, oh = lq.size - - # preprocess image - lq_img_th = preprocess_image(lq).to(self.device) - - lq_img_th = lq_img_th.repeat([grid_size, 1, 1, 1]) - - img = torch.randn_like(lq_img_th, device=self.device) - s_img = torch.randn_like(lq_img_th, device=self.device) - - indices = list(range(eval_gaussian_diffusion.num_timesteps))[::-1] - for i in indices: - t = torch.tensor([i] * lq_img_th.size(0), device=self.device) - - out = eval_gaussian_diffusion.p_mean_variance(self.diffusion_restoration_model, s_img, t, model_kwargs={'lq': lq_img_th}) - nonzero_mask = ( - (t != 0).float().view(-1, *([1] * (len(img.shape) - 1))) - ) # no noise when t == 0 - s_img = out["mean"] + nonzero_mask * torch.exp(0.5 * out["log_variance"]) * torch.randn_like(img, device=self.device) - s_img_pred = out["pred_xstart"] - - - if i < binoising_step: - model_output = eval_gaussian_diffusion._wrap_model(self.diffusion_restoration_model)(img, t, lq=lq_img_th) - - B, C = img.shape[:2] - model_output, model_var_values = torch.split(model_output, C, dim=1) - - pred_xstart = eval_gaussian_diffusion._predict_xstart_from_eps(img, t, model_output).clamp(-1, 1) - img = 
eval_gaussian_diffusion.q_sample(pred_xstart, t) - - out = eval_gaussian_diffusion.p_mean_variance(self.diffusion_model, img, t) - - nonzero_mask = ( - (t != 0).float().view(-1, *([1] * (len(img.shape) - 1))) - ) # no noise when t == 0 - img = out["mean"] + nonzero_mask * torch.exp(0.5 * out["log_variance"]) * torch.randn_like(img, device=self.device) - img_pred = out["pred_xstart"] - - if i % 2 == 0: - yield [Image.fromarray(np.uint8((make_grid(s_img_pred) / 2 + 0.5).clamp(0, 1).cpu().numpy().transpose(1,2,0) * 255.)), Image.fromarray(np.uint8((make_grid(img_pred) / 2 + 0.5).clamp(0, 1).cpu().numpy().transpose(1,2,0) * 255.))] - - yield [Image.fromarray(np.uint8((make_grid(s_img) / 2 + 0.5).clamp(0, 1).cpu().numpy().transpose(1,2,0) * 255.)), Image.fromarray(np.uint8((make_grid(img) / 2 + 0.5).clamp(0, 1).cpu().numpy().transpose(1,2,0) * 255.))] diff --git a/spaces/Make-A-Protagonist/Make-A-Protagonist-inference/Make-A-Protagonist/experts/XMem/util/palette.py b/spaces/Make-A-Protagonist/Make-A-Protagonist-inference/Make-A-Protagonist/experts/XMem/util/palette.py deleted file mode 100644 index d2541659563056b015b3d6e4c2b0accef3b4e831..0000000000000000000000000000000000000000 --- a/spaces/Make-A-Protagonist/Make-A-Protagonist-inference/Make-A-Protagonist/experts/XMem/util/palette.py +++ /dev/null @@ -1,3 +0,0 @@ -davis_palette = b'\x00\x00\x00\x80\x00\x00\x00\x80\x00\x80\x80\x00\x00\x00\x80\x80\x00\x80\x00\x80\x80\x80\x80\x80@\x00\x00\xc0\x00\x00@\x80\x00\xc0\x80\x00@\x00\x80\xc0\x00\x80@\x80\x80\xc0\x80\x80\x00@\x00\x80@\x00\x00\xc0\x00\x80\xc0\x00\x00@\x80\x80@\x80\x00\xc0\x80\x80\xc0\x80@@\x00\xc0@\x00@\xc0\x00\xc0\xc0\x00@@\x80\xc0@\x80@\xc0\x80\xc0\xc0\x80\x00\x00@\x80\x00@\x00\x80@\x80\x80@\x00\x00\xc0\x80\x00\xc0\x00\x80\xc0\x80\x80\xc0@\x00@\xc0\x00@@\x80@\xc0\x80@@\x00\xc0\xc0\x00\xc0@\x80\xc0\xc0\x80\xc0\x00@@\x80@@\x00\xc0@\x80\xc0@\x00@\xc0\x80@\xc0\x00\xc0\xc0\x80\xc0\xc0@@@\xc0@@@\xc0@\xc0\xc0@@@\xc0\xc0@\xc0@\xc0\xc0\xc0\xc0\xc0 \x00\x00\xa0\x00\x00 \x80\x00\xa0\x80\x00 \x00\x80\xa0\x00\x80 \x80\x80\xa0\x80\x80`\x00\x00\xe0\x00\x00`\x80\x00\xe0\x80\x00`\x00\x80\xe0\x00\x80`\x80\x80\xe0\x80\x80 @\x00\xa0@\x00 \xc0\x00\xa0\xc0\x00 @\x80\xa0@\x80 \xc0\x80\xa0\xc0\x80`@\x00\xe0@\x00`\xc0\x00\xe0\xc0\x00`@\x80\xe0@\x80`\xc0\x80\xe0\xc0\x80 \x00@\xa0\x00@ \x80@\xa0\x80@ \x00\xc0\xa0\x00\xc0 \x80\xc0\xa0\x80\xc0`\x00@\xe0\x00@`\x80@\xe0\x80@`\x00\xc0\xe0\x00\xc0`\x80\xc0\xe0\x80\xc0 @@\xa0@@ \xc0@\xa0\xc0@ @\xc0\xa0@\xc0 \xc0\xc0\xa0\xc0\xc0`@@\xe0@@`\xc0@\xe0\xc0@`@\xc0\xe0@\xc0`\xc0\xc0\xe0\xc0\xc0\x00 \x00\x80 \x00\x00\xa0\x00\x80\xa0\x00\x00 \x80\x80 \x80\x00\xa0\x80\x80\xa0\x80@ \x00\xc0 \x00@\xa0\x00\xc0\xa0\x00@ \x80\xc0 \x80@\xa0\x80\xc0\xa0\x80\x00`\x00\x80`\x00\x00\xe0\x00\x80\xe0\x00\x00`\x80\x80`\x80\x00\xe0\x80\x80\xe0\x80@`\x00\xc0`\x00@\xe0\x00\xc0\xe0\x00@`\x80\xc0`\x80@\xe0\x80\xc0\xe0\x80\x00 @\x80 @\x00\xa0@\x80\xa0@\x00 \xc0\x80 \xc0\x00\xa0\xc0\x80\xa0\xc0@ @\xc0 @@\xa0@\xc0\xa0@@ \xc0\xc0 \xc0@\xa0\xc0\xc0\xa0\xc0\x00`@\x80`@\x00\xe0@\x80\xe0@\x00`\xc0\x80`\xc0\x00\xe0\xc0\x80\xe0\xc0@`@\xc0`@@\xe0@\xc0\xe0@@`\xc0\xc0`\xc0@\xe0\xc0\xc0\xe0\xc0 \x00\xa0 \x00 \xa0\x00\xa0\xa0\x00 \x80\xa0 \x80 \xa0\x80\xa0\xa0\x80` \x00\xe0 \x00`\xa0\x00\xe0\xa0\x00` \x80\xe0 \x80`\xa0\x80\xe0\xa0\x80 `\x00\xa0`\x00 \xe0\x00\xa0\xe0\x00 `\x80\xa0`\x80 \xe0\x80\xa0\xe0\x80``\x00\xe0`\x00`\xe0\x00\xe0\xe0\x00``\x80\xe0`\x80`\xe0\x80\xe0\xe0\x80 @\xa0 @ \xa0@\xa0\xa0@ \xc0\xa0 \xc0 \xa0\xc0\xa0\xa0\xc0` @\xe0 @`\xa0@\xe0\xa0@` \xc0\xe0 \xc0`\xa0\xc0\xe0\xa0\xc0 `@\xa0`@ 
\xe0@\xa0\xe0@ `\xc0\xa0`\xc0 \xe0\xc0\xa0\xe0\xc0``@\xe0`@`\xe0@\xe0\xe0@``\xc0\xe0`\xc0`\xe0\xc0\xe0\xe0\xc0' - -youtube_palette = b'\x00\x00\x00\xec_g\xf9\x91W\xfa\xc8c\x99\xc7\x94b\xb3\xb2f\x99\xcc\xc5\x94\xc5\xabyg\xff\xff\xffes~\x0b\x0b\x0b\x0c\x0c\x0c\r\r\r\x0e\x0e\x0e\x0f\x0f\x0f' diff --git a/spaces/MetaWabbit/Auto-GPT/tests/unit/test_chat.py b/spaces/MetaWabbit/Auto-GPT/tests/unit/test_chat.py deleted file mode 100644 index 774f4103762c28d5a02e89c14b224fae0bc0756a..0000000000000000000000000000000000000000 --- a/spaces/MetaWabbit/Auto-GPT/tests/unit/test_chat.py +++ /dev/null @@ -1,86 +0,0 @@ -# Generated by CodiumAI -import time -import unittest -from unittest.mock import patch - -from autogpt.chat import create_chat_message, generate_context - - -class TestChat(unittest.TestCase): - # Tests that the function returns a dictionary with the correct keys and values when valid strings are provided for role and content. - def test_happy_path_role_content(self): - result = create_chat_message("system", "Hello, world!") - self.assertEqual(result, {"role": "system", "content": "Hello, world!"}) - - # Tests that the function returns a dictionary with the correct keys and values when empty strings are provided for role and content. - def test_empty_role_content(self): - result = create_chat_message("", "") - self.assertEqual(result, {"role": "", "content": ""}) - - # Tests the behavior of the generate_context function when all input parameters are empty. - @patch("time.strftime") - def test_generate_context_empty_inputs(self, mock_strftime): - # Mock the time.strftime function to return a fixed value - mock_strftime.return_value = "Sat Apr 15 00:00:00 2023" - # Arrange - prompt = "" - relevant_memory = "" - full_message_history = [] - model = "gpt-3.5-turbo-0301" - - # Act - result = generate_context(prompt, relevant_memory, full_message_history, model) - - # Assert - expected_result = ( - -1, - 47, - 3, - [ - {"role": "system", "content": ""}, - { - "role": "system", - "content": f"The current time and date is {time.strftime('%c')}", - }, - { - "role": "system", - "content": f"This reminds you of these events from your past:\n\n\n", - }, - ], - ) - self.assertEqual(result, expected_result) - - # Tests that the function successfully generates a current_context given valid inputs. - def test_generate_context_valid_inputs(self): - # Given - prompt = "What is your favorite color?" - relevant_memory = "You once painted your room blue." - full_message_history = [ - create_chat_message("user", "Hi there!"), - create_chat_message("assistant", "Hello! How can I assist you today?"), - create_chat_message("user", "Can you tell me a joke?"), - create_chat_message( - "assistant", - "Why did the tomato turn red? 
Because it saw the salad dressing!", - ), - create_chat_message("user", "Haha, that's funny."), - ] - model = "gpt-3.5-turbo-0301" - - # When - result = generate_context(prompt, relevant_memory, full_message_history, model) - - # Then - self.assertIsInstance(result[0], int) - self.assertIsInstance(result[1], int) - self.assertIsInstance(result[2], int) - self.assertIsInstance(result[3], list) - self.assertGreaterEqual(result[0], 0) - self.assertGreaterEqual(result[1], 0) - self.assertGreaterEqual(result[2], 0) - self.assertGreaterEqual( - len(result[3]), 3 - ) # current_context should have at least 3 messages - self.assertLessEqual( - result[1], 2048 - ) # token limit for GPT-3.5-turbo-0301 is 2048 tokens diff --git a/spaces/MohamadRezo/flixPicks/database/Users.py b/spaces/MohamadRezo/flixPicks/database/Users.py deleted file mode 100644 index 6afd717ff850a61bb08025e962d23451681e42c0..0000000000000000000000000000000000000000 --- a/spaces/MohamadRezo/flixPicks/database/Users.py +++ /dev/null @@ -1,26 +0,0 @@ -class users_table: - _instance = None - - def __new__(cls, *args, **kwargs): - if not cls._instance: - cls._instance = super().__new__(cls) - cls._instance.db = {} - return cls._instance - - def insert(self, key, value): - self.db[key] = value - - def read(self, key): - return self.db.get(key) - - def has_key(self, key): - return key in self.db - - def has_value(self, value): - return value in self.db.values() - - def update(self, key, value): - if key in self.db: - self.db[key] = value - else: - raise KeyError(f"Key '{key}' does not exist in the database.") diff --git a/spaces/MrBodean/VoiceClone/vocoder/models/deepmind_version.py b/spaces/MrBodean/VoiceClone/vocoder/models/deepmind_version.py deleted file mode 100644 index 1d973d9b8b9ab547571abc5a3f5ea86226a25924..0000000000000000000000000000000000000000 --- a/spaces/MrBodean/VoiceClone/vocoder/models/deepmind_version.py +++ /dev/null @@ -1,170 +0,0 @@ -import torch -import torch.nn as nn -import torch.nn.functional as F -from utils.display import * -from utils.dsp import * - - -class WaveRNN(nn.Module) : - def __init__(self, hidden_size=896, quantisation=256) : - super(WaveRNN, self).__init__() - - self.hidden_size = hidden_size - self.split_size = hidden_size // 2 - - # The main matmul - self.R = nn.Linear(self.hidden_size, 3 * self.hidden_size, bias=False) - - # Output fc layers - self.O1 = nn.Linear(self.split_size, self.split_size) - self.O2 = nn.Linear(self.split_size, quantisation) - self.O3 = nn.Linear(self.split_size, self.split_size) - self.O4 = nn.Linear(self.split_size, quantisation) - - # Input fc layers - self.I_coarse = nn.Linear(2, 3 * self.split_size, bias=False) - self.I_fine = nn.Linear(3, 3 * self.split_size, bias=False) - - # biases for the gates - self.bias_u = nn.Parameter(torch.zeros(self.hidden_size)) - self.bias_r = nn.Parameter(torch.zeros(self.hidden_size)) - self.bias_e = nn.Parameter(torch.zeros(self.hidden_size)) - - # display num params - self.num_params() - - - def forward(self, prev_y, prev_hidden, current_coarse) : - - # Main matmul - the projection is split 3 ways - R_hidden = self.R(prev_hidden) - R_u, R_r, R_e, = torch.split(R_hidden, self.hidden_size, dim=1) - - # Project the prev input - coarse_input_proj = self.I_coarse(prev_y) - I_coarse_u, I_coarse_r, I_coarse_e = \ - torch.split(coarse_input_proj, self.split_size, dim=1) - - # Project the prev input and current coarse sample - fine_input = torch.cat([prev_y, current_coarse], dim=1) - fine_input_proj = self.I_fine(fine_input) - I_fine_u, 
I_fine_r, I_fine_e = \ - torch.split(fine_input_proj, self.split_size, dim=1) - - # concatenate for the gates - I_u = torch.cat([I_coarse_u, I_fine_u], dim=1) - I_r = torch.cat([I_coarse_r, I_fine_r], dim=1) - I_e = torch.cat([I_coarse_e, I_fine_e], dim=1) - - # Compute all gates for coarse and fine - u = F.sigmoid(R_u + I_u + self.bias_u) - r = F.sigmoid(R_r + I_r + self.bias_r) - e = F.tanh(r * R_e + I_e + self.bias_e) - hidden = u * prev_hidden + (1. - u) * e - - # Split the hidden state - hidden_coarse, hidden_fine = torch.split(hidden, self.split_size, dim=1) - - # Compute outputs - out_coarse = self.O2(F.relu(self.O1(hidden_coarse))) - out_fine = self.O4(F.relu(self.O3(hidden_fine))) - - return out_coarse, out_fine, hidden - - - def generate(self, seq_len): - with torch.no_grad(): - # First split up the biases for the gates - b_coarse_u, b_fine_u = torch.split(self.bias_u, self.split_size) - b_coarse_r, b_fine_r = torch.split(self.bias_r, self.split_size) - b_coarse_e, b_fine_e = torch.split(self.bias_e, self.split_size) - - # Lists for the two output seqs - c_outputs, f_outputs = [], [] - - # Some initial inputs - out_coarse = torch.LongTensor([0]).cuda() - out_fine = torch.LongTensor([0]).cuda() - - # We'll meed a hidden state - hidden = self.init_hidden() - - # Need a clock for display - start = time.time() - - # Loop for generation - for i in range(seq_len) : - - # Split into two hidden states - hidden_coarse, hidden_fine = \ - torch.split(hidden, self.split_size, dim=1) - - # Scale and concat previous predictions - out_coarse = out_coarse.unsqueeze(0).float() / 127.5 - 1. - out_fine = out_fine.unsqueeze(0).float() / 127.5 - 1. - prev_outputs = torch.cat([out_coarse, out_fine], dim=1) - - # Project input - coarse_input_proj = self.I_coarse(prev_outputs) - I_coarse_u, I_coarse_r, I_coarse_e = \ - torch.split(coarse_input_proj, self.split_size, dim=1) - - # Project hidden state and split 6 ways - R_hidden = self.R(hidden) - R_coarse_u , R_fine_u, \ - R_coarse_r, R_fine_r, \ - R_coarse_e, R_fine_e = torch.split(R_hidden, self.split_size, dim=1) - - # Compute the coarse gates - u = F.sigmoid(R_coarse_u + I_coarse_u + b_coarse_u) - r = F.sigmoid(R_coarse_r + I_coarse_r + b_coarse_r) - e = F.tanh(r * R_coarse_e + I_coarse_e + b_coarse_e) - hidden_coarse = u * hidden_coarse + (1. - u) * e - - # Compute the coarse output - out_coarse = self.O2(F.relu(self.O1(hidden_coarse))) - posterior = F.softmax(out_coarse, dim=1) - distrib = torch.distributions.Categorical(posterior) - out_coarse = distrib.sample() - c_outputs.append(out_coarse) - - # Project the [prev outputs and predicted coarse sample] - coarse_pred = out_coarse.float() / 127.5 - 1. - fine_input = torch.cat([prev_outputs, coarse_pred.unsqueeze(0)], dim=1) - fine_input_proj = self.I_fine(fine_input) - I_fine_u, I_fine_r, I_fine_e = \ - torch.split(fine_input_proj, self.split_size, dim=1) - - # Compute the fine gates - u = F.sigmoid(R_fine_u + I_fine_u + b_fine_u) - r = F.sigmoid(R_fine_r + I_fine_r + b_fine_r) - e = F.tanh(r * R_fine_e + I_fine_e + b_fine_e) - hidden_fine = u * hidden_fine + (1. 
- u) * e - - # Compute the fine output - out_fine = self.O4(F.relu(self.O3(hidden_fine))) - posterior = F.softmax(out_fine, dim=1) - distrib = torch.distributions.Categorical(posterior) - out_fine = distrib.sample() - f_outputs.append(out_fine) - - # Put the hidden state back together - hidden = torch.cat([hidden_coarse, hidden_fine], dim=1) - - # Display progress - speed = (i + 1) / (time.time() - start) - stream('Gen: %i/%i -- Speed: %i', (i + 1, seq_len, speed)) - - coarse = torch.stack(c_outputs).squeeze(1).cpu().data.numpy() - fine = torch.stack(f_outputs).squeeze(1).cpu().data.numpy() - output = combine_signal(coarse, fine) - - return output, coarse, fine - - def init_hidden(self, batch_size=1) : - return torch.zeros(batch_size, self.hidden_size).cuda() - - def num_params(self) : - parameters = filter(lambda p: p.requires_grad, self.parameters()) - parameters = sum([np.prod(p.size()) for p in parameters]) / 1_000_000 - print('Trainable Parameters: %.3f million' % parameters) \ No newline at end of file diff --git a/spaces/NAACL2022/CLIP-Caption-Reward/captioning/utils/eval_multi.py b/spaces/NAACL2022/CLIP-Caption-Reward/captioning/utils/eval_multi.py deleted file mode 100644 index 83907410b806a50002aa32db289ca86cff72f45d..0000000000000000000000000000000000000000 --- a/spaces/NAACL2022/CLIP-Caption-Reward/captioning/utils/eval_multi.py +++ /dev/null @@ -1,218 +0,0 @@ -from __future__ import absolute_import -from __future__ import division -from __future__ import print_function - -import torch -import torch.nn as nn - -import numpy as np -import json -from json import encoder -import random -import string -import time -import os -import sys -from . import misc as utils -from eval_utils import getCOCO - -from .div_utils import compute_div_n, compute_global_div_n - -import sys -try: - sys.path.append("coco-caption") - annFile = 'coco-caption/annotations/captions_val2014.json' - from pycocotools.coco import COCO - from pycocoevalcap.eval import COCOEvalCap - from pycocoevalcap.eval_spice import COCOEvalCapSpice - from pycocoevalcap.tokenizer.ptbtokenizer import PTBTokenizer - from pycocoevalcap.bleu.bleu import Bleu - sys.path.append("cider") - from pyciderevalcap.cider.cider import Cider -except: - print('Warning: requirements for eval_multi not satisfied') - - -def eval_allspice(dataset, preds_n, model_id, split): - coco = getCOCO(dataset) - valids = coco.getImgIds() - - capsById = {} - for d in preds_n: - capsById[d['image_id']] = capsById.get(d['image_id'], []) + [d] - - # filter results to only those in MSCOCO validation set (will be about a third) - preds_filt_n = [p for p in preds_n if p['image_id'] in valids] - print('using %d/%d predictions_n' % (len(preds_filt_n), len(preds_n))) - cache_path_n = os.path.join('eval_results/', model_id + '_' + split + '_n.json') - json.dump(preds_filt_n, open(cache_path_n, 'w')) # serialize to temporary json file. Sigh, COCO API... 
- - # Eval AllSPICE - cocoRes_n = coco.loadRes(cache_path_n) - cocoEvalAllSPICE = COCOEvalCapSpice(coco, cocoRes_n) - cocoEvalAllSPICE.params['image_id'] = cocoRes_n.getImgIds() - cocoEvalAllSPICE.evaluate() - - out = {} - for metric, score in cocoEvalAllSPICE.eval.items(): - out['All'+metric] = score - - imgToEvalAllSPICE = cocoEvalAllSPICE.imgToEval - # collect SPICE_sub_score - for k in list(imgToEvalAllSPICE.values())[0]['SPICE'].keys(): - if k != 'All': - out['AllSPICE_'+k] = np.array([v['SPICE'][k]['f'] for v in imgToEvalAllSPICE.values()]) - out['AllSPICE_'+k] = (out['AllSPICE_'+k][out['AllSPICE_'+k]==out['AllSPICE_'+k]]).mean() - for p in preds_filt_n: - image_id, caption = p['image_id'], p['caption'] - imgToEvalAllSPICE[image_id]['caption'] = capsById[image_id] - return {'overall': out, 'imgToEvalAllSPICE': imgToEvalAllSPICE} - -def eval_oracle(dataset, preds_n, model_id, split): - cache_path = os.path.join('eval_results/', model_id + '_' + split + '_n.json') - - coco = getCOCO(dataset) - valids = coco.getImgIds() - - capsById = {} - for d in preds_n: - capsById[d['image_id']] = capsById.get(d['image_id'], []) + [d] - - sample_n = capsById[list(capsById.keys())[0]] - for i in range(len(capsById[list(capsById.keys())[0]])): - preds = [_[i] for _ in capsById.values()] - - json.dump(preds, open(cache_path, 'w')) # serialize to temporary json file. Sigh, COCO API... - - cocoRes = coco.loadRes(cache_path) - cocoEval = COCOEvalCap(coco, cocoRes) - cocoEval.params['image_id'] = cocoRes.getImgIds() - cocoEval.evaluate() - - imgToEval = cocoEval.imgToEval - for img_id in capsById.keys(): - tmp = imgToEval[img_id] - for k in tmp['SPICE'].keys(): - if k != 'All': - tmp['SPICE_'+k] = tmp['SPICE'][k]['f'] - if tmp['SPICE_'+k] != tmp['SPICE_'+k]: # nan - tmp['SPICE_'+k] = -100 - tmp['SPICE'] = tmp['SPICE']['All']['f'] - if tmp['SPICE'] != tmp['SPICE']: tmp['SPICE'] = -100 - capsById[img_id][i]['scores'] = imgToEval[img_id] - - out = {'overall': {}, 'ImgToEval': {}} - for img_id in capsById.keys(): - out['ImgToEval'][img_id] = {} - for metric in capsById[img_id][0]['scores'].keys(): - if metric == 'image_id': continue - out['ImgToEval'][img_id]['oracle_'+metric] = max([_['scores'][metric] for _ in capsById[img_id]]) - out['ImgToEval'][img_id]['avg_'+metric] = sum([_['scores'][metric] for _ in capsById[img_id]]) / len(capsById[img_id]) - out['ImgToEval'][img_id]['captions'] = capsById[img_id] - for metric in list(out['ImgToEval'].values())[0].keys(): - if metric == 'captions': - continue - tmp = np.array([_[metric] for _ in out['ImgToEval'].values()]) - tmp = tmp[tmp!=-100] - out['overall'][metric] = tmp.mean() - - return out - -def eval_div_stats(dataset, preds_n, model_id, split): - tokenizer = PTBTokenizer() - - capsById = {} - for i, d in enumerate(preds_n): - d['id'] = i - capsById[d['image_id']] = capsById.get(d['image_id'], []) + [d] - - n_caps_perimg = len(capsById[list(capsById.keys())[0]]) - print(n_caps_perimg) - _capsById = capsById # save the untokenized version - capsById = tokenizer.tokenize(capsById) - - div_1, adiv_1 = compute_div_n(capsById,1) - div_2, adiv_2 = compute_div_n(capsById,2) - - globdiv_1, _= compute_global_div_n(capsById,1) - - print('Diversity Statistics are as follows: \n Div1: %.2f, Div2: %.2f, gDiv1: %d\n'%(div_1,div_2, globdiv_1)) - - # compute mbleu - scorer = Bleu(4) - all_scrs = [] - scrperimg = np.zeros((n_caps_perimg, len(capsById))) - - for i in range(n_caps_perimg): - tempRefsById = {} - candsById = {} - for k in capsById: - tempRefsById[k] = 
capsById[k][:i] + capsById[k][i+1:] - candsById[k] = [capsById[k][i]] - - score, scores = scorer.compute_score(tempRefsById, candsById) - all_scrs.append(score) - scrperimg[i,:] = scores[1] - - all_scrs = np.array(all_scrs) - - out = {} - out['overall'] = {'Div1': div_1, 'Div2': div_2, 'gDiv1': globdiv_1} - for k, score in zip(range(4), all_scrs.mean(axis=0).tolist()): - out['overall'].update({'mBLeu_%d'%(k+1): score}) - imgToEval = {} - for i,imgid in enumerate(capsById.keys()): - imgToEval[imgid] = {'mBleu_2' : scrperimg[:,i].mean()} - imgToEval[imgid]['individuals'] = [] - for j, d in enumerate(_capsById[imgid]): - imgToEval[imgid]['individuals'].append(preds_n[d['id']]) - imgToEval[imgid]['individuals'][-1]['mBleu_2'] = scrperimg[j,i] - out['ImgToEval'] = imgToEval - - print('Mean mutual Bleu scores on this set is:\nmBLeu_1, mBLeu_2, mBLeu_3, mBLeu_4') - print(all_scrs.mean(axis=0)) - - return out - -def eval_self_cider(dataset, preds_n, model_id, split): - cache_path = os.path.join('eval_results/', model_id + '_' + split + '_n.json') - - coco = getCOCO(dataset) - valids = coco.getImgIds() - - # Get Cider_scorer - Cider_scorer = Cider(df='corpus') - - tokenizer = PTBTokenizer() - gts = {} - for imgId in valids: - gts[imgId] = coco.imgToAnns[imgId] - gts = tokenizer.tokenize(gts) - - for imgId in valids: - Cider_scorer.cider_scorer += (None, gts[imgId]) - Cider_scorer.cider_scorer.compute_doc_freq() - Cider_scorer.cider_scorer.ref_len = np.log(float(len(Cider_scorer.cider_scorer.crefs))) - - # Prepare captions - capsById = {} - for d in preds_n: - capsById[d['image_id']] = capsById.get(d['image_id'], []) + [d] - - capsById = tokenizer.tokenize(capsById) - imgIds = list(capsById.keys()) - scores = Cider_scorer.my_self_cider([capsById[_] for _ in imgIds]) - - def get_div(eigvals): - eigvals = np.clip(eigvals, 0, None) - return -np.log(np.sqrt(eigvals[-1]) / (np.sqrt(eigvals).sum())) / np.log(len(eigvals)) - sc_scores = [get_div(np.linalg.eigvalsh(_/10)) for _ in scores] - score = np.mean(np.array(sc_scores)) - - imgToEval = {} - for i, image_id in enumerate(imgIds): - imgToEval[image_id] = {'self_cider': sc_scores[i], 'self_cider_mat': scores[i].tolist()} - return {'overall': {'self_cider': score}, 'imgToEval': imgToEval} - - - return score diff --git a/spaces/NCTCMumbai/NCTC/models/official/nlp/modeling/layers/rezero_transformer_test.py b/spaces/NCTCMumbai/NCTC/models/official/nlp/modeling/layers/rezero_transformer_test.py deleted file mode 100644 index 6ef0aa218c70c919f62492b00ef5f53348dd5938..0000000000000000000000000000000000000000 --- a/spaces/NCTCMumbai/NCTC/models/official/nlp/modeling/layers/rezero_transformer_test.py +++ /dev/null @@ -1,133 +0,0 @@ -# Copyright 2020 The TensorFlow Authors. All Rights Reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
-# ============================================================================== -"""Tests for Keras-based rezero-transformer block layer.""" - -from __future__ import absolute_import -from __future__ import division -from __future__ import print_function - -import numpy as np -import tensorflow as tf - -from tensorflow.python.keras import keras_parameterized # pylint: disable=g-direct-tensorflow-import -from official.nlp.modeling.layers import rezero_transformer - - -# This decorator runs the test in V1, V2-Eager, and V2-Functional mode. It -# guarantees forward compatibility of this code for the V2 switchover. -@keras_parameterized.run_all_keras_modes -class TransformerWithReZeroLayerTest(keras_parameterized.TestCase): - - def tearDown(self): - super(TransformerWithReZeroLayerTest, self).tearDown() - tf.keras.mixed_precision.experimental.set_policy('float32') - - def test_layer_invocation_with_float16_dtype(self): - tf.keras.mixed_precision.experimental.set_policy('mixed_float16') - test_layer = rezero_transformer.ReZeroTransformer( - num_attention_heads=10, - intermediate_size=2048, - intermediate_activation='relu') - sequence_length = 21 - width = 80 - # Create a 3-dimensional input (the first dimension is implicit). - data_tensor = tf.keras.Input(shape=(sequence_length, width)) - # Create a 2-dimensional input (the first dimension is implicit). - mask_tensor = tf.keras.Input(shape=(sequence_length, sequence_length)) - output_tensor = test_layer([data_tensor, mask_tensor]) - - # Create a model from the test layer. - model = tf.keras.Model([data_tensor, mask_tensor], output_tensor) - - # Invoke the model on test data. We can't validate the output data itself - # (the NN is too complex) but this will rule out structural runtime errors. - batch_size = 6 - input_data = (10 * np.random.random_sample( - (batch_size, sequence_length, width))) - # The attention mask should be of shape (batch, from_seq_len, to_seq_len), - # which here is (batch, sequence_length, sequence_length) - mask_data = np.random.randint( - 2, size=(batch_size, sequence_length, sequence_length)) - _ = model.predict([input_data, mask_data]) - - def test_rezero_without_layer_norm(self): - test_layer = rezero_transformer.ReZeroTransformer( - num_attention_heads=10, - intermediate_size=2048, - intermediate_activation='relu', - use_layer_norm=False) - - input_length, width = 16, 30 - input_tensor = tf.keras.Input(shape=(input_length, width)) - output_tensor = test_layer(input_tensor) - model = tf.keras.Model(input_tensor, output_tensor) - - input_data = np.random.rand(2, input_length, width) - test_layer._rezero_a.assign(1.0) - test_layer.reset_rezero() - output_data = model.predict(input_data) - - self.assertAllClose(input_data, output_data) - - def test_rezero_with_layer_norm(self): - test_layer = rezero_transformer.ReZeroTransformer( - num_attention_heads=10, - intermediate_size=2048, - intermediate_activation='relu', - use_layer_norm=True) - - input_length, width = 16, 30 - input_tensor = tf.keras.Input(shape=(input_length, width)) - output_tensor = test_layer(input_tensor) - model = tf.keras.Model(input_tensor, output_tensor) - - input_data = np.random.rand(2, input_length, width) + 2.0 - output_data = model.predict(input_data) - input_data_normed = ( - input_data - np.mean(input_data, axis=-1, keepdims=True)) / ( - np.std(input_data, axis=-1, keepdims=True)) - - self.assertAllClose(input_data_normed, output_data) - - def test_layer_output_range(self): - test_layer = rezero_transformer.ReZeroTransformer( - 
num_attention_heads=10, - intermediate_size=2048, - intermediate_activation='relu') - sequence_length = 21 - width = 80 - - batch_size = 6 - input_data = 10 * np.random.random_sample( - (batch_size, sequence_length, width)) - mask_data = np.random.randint( - 2, size=(batch_size, sequence_length, sequence_length)) - output_tensor = test_layer([input_data, mask_data]) - - # The layer only attends to the first token and outputs the first token - # embeeding. - new_layer = rezero_transformer.ReZeroTransformer( - num_attention_heads=10, - intermediate_size=2048, - intermediate_activation='relu', - output_range=1) - _ = new_layer([input_data, mask_data]) - new_layer.set_weights(test_layer.get_weights()) - new_output_tensor = new_layer([input_data, mask_data]) - self.assertAllClose(new_output_tensor, output_tensor[:, 0:1, :]) - - -if __name__ == '__main__': - tf.test.main() diff --git a/spaces/Namit2111/id_verfiy/ImageEncoder.py b/spaces/Namit2111/id_verfiy/ImageEncoder.py deleted file mode 100644 index 9d1ab99bf6c2f960910e7d8712ebc933f79447a9..0000000000000000000000000000000000000000 --- a/spaces/Namit2111/id_verfiy/ImageEncoder.py +++ /dev/null @@ -1,17 +0,0 @@ -import face_recognition as fr -import pickle -import os -def img_enc(face): - encoded={} - - faces = fr.load_image_file(face) - face_enc = fr.face_encodings(faces)[0] - encoded[face.split(".")[0]] = face_enc - return list(encoded.keys()),list(encoded.values()) - - -# face_known,face_enco_done= img_enc() -# with open("data.pickle","wb")as f: -# pickle.dump((face_known,face_enco_done),f) - - diff --git a/spaces/NimaBoscarino/climategan/climategan/losses.py b/spaces/NimaBoscarino/climategan/climategan/losses.py deleted file mode 100644 index f10a5d26c73795bad02837f546b96c76b24e7564..0000000000000000000000000000000000000000 --- a/spaces/NimaBoscarino/climategan/climategan/losses.py +++ /dev/null @@ -1,620 +0,0 @@ -"""Define all losses. When possible, as inheriting from nn.Module -To send predictions to target.device -""" -from random import random as rand - -import numpy as np -import torch -import torch.nn as nn -import torch.nn.functional as F -from torchvision import models - - -class GANLoss(nn.Module): - def __init__( - self, - use_lsgan=True, - target_real_label=1.0, - target_fake_label=0.0, - soft_shift=0.0, - flip_prob=0.0, - verbose=0, - ): - """Defines the GAN loss which uses either LSGAN or the regular GAN. - When LSGAN is used, it is basically same as MSELoss, - but it abstracts away the need to create the target label tensor - that has the same size as the input + - - * label smoothing: target_real_label=0.75 - * label flipping: flip_prob > 0. - - source: https://github.com/sangwoomo/instagan/blob - /b67e9008fcdd6c41652f8805f0b36bcaa8b632d6/models/networks.py - - Args: - use_lsgan (bool, optional): Use MSE or BCE. Defaults to True. - target_real_label (float, optional): Value for the real target. - Defaults to 1.0. - target_fake_label (float, optional): Value for the fake target. - Defaults to 0.0. - flip_prob (float, optional): Probability of flipping the label - (use for real target in Discriminator only). Defaults to 0.0. 
- """ - super().__init__() - - self.soft_shift = soft_shift - self.verbose = verbose - - self.register_buffer("real_label", torch.tensor(target_real_label)) - self.register_buffer("fake_label", torch.tensor(target_fake_label)) - if use_lsgan: - self.loss = nn.MSELoss() - else: - self.loss = nn.BCEWithLogitsLoss() - self.flip_prob = flip_prob - - def get_target_tensor(self, input, target_is_real): - soft_change = torch.FloatTensor(1).uniform_(0, self.soft_shift) - if self.verbose > 0: - print("GANLoss sampled soft_change:", soft_change.item()) - if target_is_real: - target_tensor = self.real_label - soft_change - else: - target_tensor = self.fake_label + soft_change - return target_tensor.expand_as(input) - - def __call__(self, input, target_is_real, *args, **kwargs): - r = rand() - if isinstance(input, list): - loss = 0 - for pred_i in input: - if isinstance(pred_i, list): - pred_i = pred_i[-1] - if r < self.flip_prob: - target_is_real = not target_is_real - target_tensor = self.get_target_tensor(pred_i, target_is_real) - loss_tensor = self.loss(pred_i, target_tensor.to(pred_i.device)) - loss += loss_tensor - return loss / len(input) - else: - if r < self.flip_prob: - target_is_real = not target_is_real - target_tensor = self.get_target_tensor(input, target_is_real) - return self.loss(input, target_tensor.to(input.device)) - - -class FeatMatchLoss(nn.Module): - def __init__(self): - super().__init__() - self.criterionFeat = nn.L1Loss() - - def __call__(self, pred_real, pred_fake): - # pred_{real, fake} are lists of features - num_D = len(pred_fake) - GAN_Feat_loss = 0.0 - for i in range(num_D): # for each discriminator - # last output is the final prediction, so we exclude it - num_intermediate_outputs = len(pred_fake[i]) - 1 - for j in range(num_intermediate_outputs): # for each layer output - unweighted_loss = self.criterionFeat( - pred_fake[i][j], pred_real[i][j].detach() - ) - GAN_Feat_loss += unweighted_loss / num_D - return GAN_Feat_loss - - -class CrossEntropy(nn.Module): - def __init__(self): - super().__init__() - self.loss = nn.CrossEntropyLoss() - - def __call__(self, logits, target): - return self.loss(logits, target.to(logits.device).long()) - - -class TravelLoss(nn.Module): - def __init__(self, eps=1e-12): - super().__init__() - self.eps = eps - - def cosine_loss(self, real, fake): - norm_real = torch.norm(real, p=2, dim=1)[:, None] - norm_fake = torch.norm(fake, p=2, dim=1)[:, None] - mat_real = real / norm_real - mat_fake = fake / norm_fake - mat_real = torch.max(mat_real, self.eps * torch.ones_like(mat_real)) - mat_fake = torch.max(mat_fake, self.eps * torch.ones_like(mat_fake)) - # compute only the diagonal of the matrix multiplication - return torch.einsum("ij, ji -> i", mat_fake, mat_real).sum() - - def __call__(self, S_real, S_fake): - self.v_real = [] - self.v_fake = [] - for i in range(len(S_real)): - for j in range(i): - self.v_real.append((S_real[i] - S_real[j])[None, :]) - self.v_fake.append((S_fake[i] - S_fake[j])[None, :]) - self.v_real_t = torch.cat(self.v_real, dim=0) - self.v_fake_t = torch.cat(self.v_fake, dim=0) - return self.cosine_loss(self.v_real_t, self.v_fake_t) - - -class TVLoss(nn.Module): - """Total Variational Regularization: Penalizes differences in - neighboring pixel values - - source: - https://github.com/jxgu1016/Total_Variation_Loss.pytorch/blob/master/TVLoss.py - """ - - def __init__(self, tvloss_weight=1): - """ - Args: - TVLoss_weight (int, optional): [lambda i.e. weight for loss]. Defaults to 1. 
- """ - super(TVLoss, self).__init__() - self.tvloss_weight = tvloss_weight - - def forward(self, x): - batch_size = x.size()[0] - h_x = x.size()[2] - w_x = x.size()[3] - count_h = self._tensor_size(x[:, :, 1:, :]) - count_w = self._tensor_size(x[:, :, :, 1:]) - h_tv = torch.pow((x[:, :, 1:, :] - x[:, :, : h_x - 1, :]), 2).sum() - w_tv = torch.pow((x[:, :, :, 1:] - x[:, :, :, : w_x - 1]), 2).sum() - return self.tvloss_weight * 2 * (h_tv / count_h + w_tv / count_w) / batch_size - - def _tensor_size(self, t): - return t.size()[1] * t.size()[2] * t.size()[3] - - -class MinentLoss(nn.Module): - """ - Loss for the minimization of the entropy map - Source for version 1: https://github.com/valeoai/ADVENT - - Version 2 adds the variance of the entropy map in the computation of the loss - """ - - def __init__(self, version=1, lambda_var=0.1): - super().__init__() - self.version = version - self.lambda_var = lambda_var - - def __call__(self, pred): - assert pred.dim() == 4 - n, c, h, w = pred.size() - entropy_map = -torch.mul(pred, torch.log2(pred + 1e-30)) / np.log2(c) - if self.version == 1: - return torch.sum(entropy_map) / (n * h * w) - else: - entropy_map_demean = entropy_map - torch.sum(entropy_map) / (n * h * w) - entropy_map_squ = torch.mul(entropy_map_demean, entropy_map_demean) - return torch.sum(entropy_map + self.lambda_var * entropy_map_squ) / ( - n * h * w - ) - - -class MSELoss(nn.Module): - """ - Creates a criterion that measures the mean squared error - (squared L2 norm) between each element in the input x and target y . - """ - - def __init__(self): - super().__init__() - self.loss = nn.MSELoss() - - def __call__(self, prediction, target): - return self.loss(prediction, target.to(prediction.device)) - - -class L1Loss(MSELoss): - """ - Creates a criterion that measures the mean absolute error - (MAE) between each element in the input x and target y - """ - - def __init__(self): - super().__init__() - self.loss = nn.L1Loss() - - -class SIMSELoss(nn.Module): - """Scale invariant MSE Loss""" - - def __init__(self): - super(SIMSELoss, self).__init__() - - def __call__(self, prediction, target): - d = prediction - target - diff = torch.mean(d * d) - relDiff = torch.mean(d) * torch.mean(d) - return diff - relDiff - - -class SIGMLoss(nn.Module): - """loss from MiDaS paper - MiDaS did not specify how the gradients were computed but we use Sobel - filters which approximate the derivative of an image. 
- """ - - def __init__(self, gmweight=0.5, scale=4, device="cuda"): - super(SIGMLoss, self).__init__() - self.gmweight = gmweight - self.sobelx = torch.Tensor([[1, 0, -1], [2, 0, -2], [1, 0, -1]]).to(device) - self.sobely = torch.Tensor([[1, 2, 1], [0, 0, 0], [-1, -2, -1]]).to(device) - self.scale = scale - - def __call__(self, prediction, target): - # get disparities - # align both the prediction and the ground truth to have zero - # translation and unit scale - t_pred = torch.median(prediction) - t_targ = torch.median(target) - s_pred = torch.mean(torch.abs(prediction - t_pred)) - s_targ = torch.mean(torch.abs(target - t_targ)) - pred = (prediction - t_pred) / s_pred - targ = (target - t_targ) / s_targ - - R = pred - targ - - # get gradient map with sobel filters - batch_size = prediction.size()[0] - num_pix = prediction.size()[-1] * prediction.size()[-2] - sobelx = (self.sobelx).expand((batch_size, 1, -1, -1)) - sobely = (self.sobely).expand((batch_size, 1, -1, -1)) - gmLoss = 0 # gradient matching term - for k in range(self.scale): - R_ = F.interpolate(R, scale_factor=1 / 2 ** k) - Rx = F.conv2d(R_, sobelx, stride=1) - Ry = F.conv2d(R_, sobely, stride=1) - gmLoss += torch.sum(torch.abs(Rx) + torch.abs(Ry)) - gmLoss = self.gmweight / num_pix * gmLoss - # scale invariant MSE - simseLoss = 0.5 / num_pix * torch.sum(torch.abs(R)) - loss = simseLoss + gmLoss - return loss - - -class ContextLoss(nn.Module): - """ - Masked L1 loss on non-water - """ - - def __call__(self, input, target, mask): - return torch.mean(torch.abs(torch.mul((input - target), 1 - mask))) - - -class ReconstructionLoss(nn.Module): - """ - Masked L1 loss on water - """ - - def __call__(self, input, target, mask): - return torch.mean(torch.abs(torch.mul((input - target), mask))) - - -################################################################################## -# VGG network definition -################################################################################## - -# Source: https://github.com/NVIDIA/pix2pixHD -class Vgg19(nn.Module): - def __init__(self, requires_grad=False): - super(Vgg19, self).__init__() - vgg_pretrained_features = models.vgg19(pretrained=True).features - self.slice1 = nn.Sequential() - self.slice2 = nn.Sequential() - self.slice3 = nn.Sequential() - self.slice4 = nn.Sequential() - self.slice5 = nn.Sequential() - for x in range(2): - self.slice1.add_module(str(x), vgg_pretrained_features[x]) - for x in range(2, 7): - self.slice2.add_module(str(x), vgg_pretrained_features[x]) - for x in range(7, 12): - self.slice3.add_module(str(x), vgg_pretrained_features[x]) - for x in range(12, 21): - self.slice4.add_module(str(x), vgg_pretrained_features[x]) - for x in range(21, 30): - self.slice5.add_module(str(x), vgg_pretrained_features[x]) - if not requires_grad: - for param in self.parameters(): - param.requires_grad = False - - def forward(self, X): - h_relu1 = self.slice1(X) - h_relu2 = self.slice2(h_relu1) - h_relu3 = self.slice3(h_relu2) - h_relu4 = self.slice4(h_relu3) - h_relu5 = self.slice5(h_relu4) - out = [h_relu1, h_relu2, h_relu3, h_relu4, h_relu5] - return out - - -# Source: https://github.com/NVIDIA/pix2pixHD -class VGGLoss(nn.Module): - def __init__(self, device): - super().__init__() - self.vgg = Vgg19().to(device).eval() - self.criterion = nn.L1Loss() - self.weights = [1.0 / 32, 1.0 / 16, 1.0 / 8, 1.0 / 4, 1.0] - - def forward(self, x, y): - x_vgg, y_vgg = self.vgg(x), self.vgg(y) - loss = 0 - for i in range(len(x_vgg)): - loss += self.weights[i] * self.criterion(x_vgg[i], 
y_vgg[i].detach()) - return loss - - -def get_losses(opts, verbose, device=None): - """Sets the loss functions to be used by G, D and C, as specified - in the opts and returns a dictionnary of losses: - - losses = { - "G": { - "gan": {"a": ..., "t": ...}, - "cycle": {"a": ..., "t": ...} - "auto": {"a": ..., "t": ...} - "tasks": {"h": ..., "d": ..., "s": ..., etc.} - }, - "D": GANLoss, - "C": ... - } - """ - - losses = { - "G": {"a": {}, "p": {}, "tasks": {}}, - "D": {"default": {}, "advent": {}}, - "C": {}, - } - - # ------------------------------ - # ----- Generator Losses ----- - # ------------------------------ - - # painter losses - if "p" in opts.tasks: - losses["G"]["p"]["gan"] = ( - HingeLoss() - if opts.gen.p.loss == "hinge" - else GANLoss( - use_lsgan=False, - soft_shift=opts.dis.soft_shift, - flip_prob=opts.dis.flip_prob, - ) - ) - losses["G"]["p"]["dm"] = MSELoss() - losses["G"]["p"]["vgg"] = VGGLoss(device) - losses["G"]["p"]["tv"] = TVLoss() - losses["G"]["p"]["context"] = ContextLoss() - losses["G"]["p"]["reconstruction"] = ReconstructionLoss() - losses["G"]["p"]["featmatch"] = FeatMatchLoss() - - # depth losses - if "d" in opts.tasks: - if not opts.gen.d.classify.enable: - if opts.gen.d.loss == "dada": - depth_func = DADADepthLoss() - else: - depth_func = SIGMLoss(opts.train.lambdas.G.d.gml) - else: - depth_func = CrossEntropy() - - losses["G"]["tasks"]["d"] = depth_func - - # segmentation losses - if "s" in opts.tasks: - losses["G"]["tasks"]["s"] = {} - losses["G"]["tasks"]["s"]["crossent"] = CrossEntropy() - losses["G"]["tasks"]["s"]["minent"] = MinentLoss() - losses["G"]["tasks"]["s"]["advent"] = ADVENTAdversarialLoss( - opts, gan_type=opts.dis.s.gan_type - ) - - # masker losses - if "m" in opts.tasks: - losses["G"]["tasks"]["m"] = {} - losses["G"]["tasks"]["m"]["bce"] = nn.BCEWithLogitsLoss() - if opts.gen.m.use_minent_var: - losses["G"]["tasks"]["m"]["minent"] = MinentLoss( - version=2, lambda_var=opts.train.lambdas.advent.ent_var - ) - else: - losses["G"]["tasks"]["m"]["minent"] = MinentLoss() - losses["G"]["tasks"]["m"]["tv"] = TVLoss() - losses["G"]["tasks"]["m"]["advent"] = ADVENTAdversarialLoss( - opts, gan_type=opts.dis.m.gan_type - ) - losses["G"]["tasks"]["m"]["gi"] = GroundIntersectionLoss() - - # ---------------------------------- - # ----- Discriminator Losses ----- - # ---------------------------------- - if "p" in opts.tasks: - losses["D"]["p"] = losses["G"]["p"]["gan"] - if "m" in opts.tasks or "s" in opts.tasks: - losses["D"]["advent"] = ADVENTAdversarialLoss(opts) - return losses - - -class GroundIntersectionLoss(nn.Module): - """ - Penalize areas in ground seg but not in flood mask - """ - - def __call__(self, pred, pseudo_ground): - return torch.mean(1.0 * ((pseudo_ground - pred) > 0.5)) - - -def prob_2_entropy(prob): - """ - convert probabilistic prediction maps to weighted self-information maps - """ - n, c, h, w = prob.size() - return -torch.mul(prob, torch.log2(prob + 1e-30)) / np.log2(c) - - -class CustomBCELoss(nn.Module): - """ - The first argument is a tensor and the second argument is an int. - There is no need to take sigmoid before calling this function. - """ - - def __init__(self): - super().__init__() - self.loss = nn.BCEWithLogitsLoss() - - def __call__(self, prediction, target): - return self.loss( - prediction, - torch.FloatTensor(prediction.size()) - .fill_(target) - .to(prediction.get_device()), - ) - - -class ADVENTAdversarialLoss(nn.Module): - """ - The class is for calculating the advent loss. 
- It is used to indirectly shrink the domain gap between sim and real - - _call_ function: - prediction: torch.tensor with shape of [bs,c,h,w] - target: int; domain label: 0 (sim) or 1 (real) - discriminator: the discriminator model tells if a tensor is from sim or real - - output: the loss value of GANLoss - """ - - def __init__(self, opts, gan_type="GAN"): - super().__init__() - self.opts = opts - if gan_type == "GAN": - self.loss = CustomBCELoss() - elif gan_type == "WGAN" or "WGAN_gp" or "WGAN_norm": - self.loss = lambda x, y: -torch.mean(y * x + (1 - y) * (1 - x)) - else: - raise NotImplementedError - - def __call__(self, prediction, target, discriminator, depth_preds=None): - """ - Compute the GAN loss from the Advent Discriminator given - normalized (softmaxed) predictions (=pixel-wise class probabilities), - and int labels (target). - - Args: - prediction (torch.Tensor): pixel-wise probability distribution over classes - target (torch.Tensor): pixel wise int target labels - discriminator (torch.nn.Module): Discriminator to get the loss - - Returns: - torch.Tensor: float 0-D loss - """ - d_out = prob_2_entropy(prediction) - if depth_preds is not None: - d_out = d_out * depth_preds - d_out = discriminator(d_out) - if self.opts.dis.m.architecture == "OmniDiscriminator": - d_out = multiDiscriminatorAdapter(d_out, self.opts) - loss_ = self.loss(d_out, target) - return loss_ - - -def multiDiscriminatorAdapter(d_out: list, opts: dict) -> torch.tensor: - """ - Because the OmniDiscriminator does not directly return a tensor - (but a list of tensor). - Since there is no multilevel masker, the 0th tensor in the list is all we want. - This Adapter returns the first element(tensor) of the list that OmniDiscriminator - returns. - """ - if ( - isinstance(d_out, list) and len(d_out) == 1 - ): # adapt the multi-scale OmniDiscriminator - if not opts.dis.p.get_intermediate_features: - d_out = d_out[0][0] - else: - d_out = d_out[0] - else: - raise Exception( - "Check the setting of OmniDiscriminator! " - + "For now, we don't support multi-scale OmniDiscriminator." 
- ) - return d_out - - -class HingeLoss(nn.Module): - """ - Adapted from https://github.com/NVlabs/SPADE/blob/master/models/networks/loss.py - for the painter - """ - - def __init__(self, tensor=torch.FloatTensor): - super().__init__() - self.zero_tensor = None - self.Tensor = tensor - - def get_zero_tensor(self, input): - if self.zero_tensor is None: - self.zero_tensor = self.Tensor(1).fill_(0) - self.zero_tensor.requires_grad_(False) - self.zero_tensor = self.zero_tensor.to(input.device) - return self.zero_tensor.expand_as(input) - - def loss(self, input, target_is_real, for_discriminator=True): - if for_discriminator: - if target_is_real: - minval = torch.min(input - 1, self.get_zero_tensor(input)) - loss = -torch.mean(minval) - else: - minval = torch.min(-input - 1, self.get_zero_tensor(input)) - loss = -torch.mean(minval) - else: - assert target_is_real, "The generator's hinge loss must be aiming for real" - loss = -torch.mean(input) - return loss - - def __call__(self, input, target_is_real, for_discriminator=True): - # computing loss is a bit complicated because |input| may not be - # a tensor, but list of tensors in case of multiscale discriminator - if isinstance(input, list): - loss = 0 - for pred_i in input: - if isinstance(pred_i, list): - pred_i = pred_i[-1] - loss_tensor = self.loss(pred_i, target_is_real, for_discriminator) - loss += loss_tensor - return loss / len(input) - else: - return self.loss(input, target_is_real, for_discriminator) - - -class DADADepthLoss: - """Defines the reverse Huber loss from DADA paper for depth prediction - - Samples with larger residuals are penalized more by l2 term - - Samples with smaller residuals are penalized more by l1 term - From https://github.com/valeoai/DADA/blob/master/dada/utils/func.py - """ - - def loss_calc_depth(self, pred, label): - n, c, h, w = pred.size() - assert c == 1 - - pred = pred.squeeze() - label = label.squeeze() - - adiff = torch.abs(pred - label) - batch_max = 0.2 * torch.max(adiff).item() - t1_mask = adiff.le(batch_max).float() - t2_mask = adiff.gt(batch_max).float() - t1 = adiff * t1_mask - t2 = (adiff * adiff + batch_max * batch_max) / (2 * batch_max) - t2 = t2 * t2_mask - return (torch.sum(t1) + torch.sum(t2)) / torch.numel(pred.data) - - def __call__(self, pred, label): - return self.loss_calc_depth(pred, label) diff --git a/spaces/NimaBoscarino/climategan/figures/bootstrap_ablation.py b/spaces/NimaBoscarino/climategan/figures/bootstrap_ablation.py deleted file mode 100644 index 1c7f4e876e4543323e17b6e13c478d0e9ff47a98..0000000000000000000000000000000000000000 --- a/spaces/NimaBoscarino/climategan/figures/bootstrap_ablation.py +++ /dev/null @@ -1,562 +0,0 @@ -""" -This script evaluates the contribution of a technique from the ablation study for -improving the masker evaluation metrics. The differences in the metrics are computed -for all images of paired models, that is those which only differ in the inclusion or -not of the given technique. Then, statistical inference is performed through the -percentile bootstrap to obtain robust estimates of the differences in the metrics and -confidence intervals. The script plots the distribution of the bootrstraped estimates. 
-""" -print("Imports...", end="") -from argparse import ArgumentParser -import yaml -import os -import numpy as np -import pandas as pd -import seaborn as sns -from scipy.stats import trim_mean -from tqdm import tqdm -from pathlib import Path -import matplotlib.pyplot as plt -import matplotlib.patches as mpatches - - -# ----------------------- -# ----- Constants ----- -# ----------------------- - -dict_metrics = { - "names": { - "tpr": "TPR, Recall, Sensitivity", - "tnr": "TNR, Specificity, Selectivity", - "fpr": "FPR", - "fpt": "False positives relative to image size", - "fnr": "FNR, Miss rate", - "fnt": "False negatives relative to image size", - "mpr": "May positive rate (MPR)", - "mnr": "May negative rate (MNR)", - "accuracy": "Accuracy (ignoring may)", - "error": "Error", - "f05": "F05 score", - "precision": "Precision", - "edge_coherence": "Edge coherence", - "accuracy_must_may": "Accuracy (ignoring cannot)", - }, - "key_metrics": ["f05", "error", "edge_coherence"], -} -dict_techniques = { - "depth": "depth", - "segmentation": "seg", - "seg": "seg", - "dada_s": "dada_seg", - "dada_seg": "dada_seg", - "dada_segmentation": "dada_seg", - "dada_m": "dada_masker", - "dada_masker": "dada_masker", - "spade": "spade", - "pseudo": "pseudo", - "pseudo-labels": "pseudo", - "pseudo_labels": "pseudo", -} - -# Model features -model_feats = [ - "masker", - "seg", - "depth", - "dada_seg", - "dada_masker", - "spade", - "pseudo", - "ground", - "instagan", -] - -# Colors -palette_colorblind = sns.color_palette("colorblind") -color_cat1 = palette_colorblind[0] -color_cat2 = palette_colorblind[1] -palette_lightest = [ - sns.light_palette(color_cat1, n_colors=20)[3], - sns.light_palette(color_cat2, n_colors=20)[3], -] -palette_light = [ - sns.light_palette(color_cat1, n_colors=3)[1], - sns.light_palette(color_cat2, n_colors=3)[1], -] -palette_medium = [color_cat1, color_cat2] -palette_dark = [ - sns.dark_palette(color_cat1, n_colors=3)[1], - sns.dark_palette(color_cat2, n_colors=3)[1], -] -palette_cat1 = [ - palette_lightest[0], - palette_light[0], - palette_medium[0], - palette_dark[0], -] -palette_cat2 = [ - palette_lightest[1], - palette_light[1], - palette_medium[1], - palette_dark[1], -] -color_cat1_light = palette_light[0] -color_cat2_light = palette_light[1] - - -def parsed_args(): - """ - Parse and returns command-line args - - Returns: - argparse.Namespace: the parsed arguments - """ - parser = ArgumentParser() - parser.add_argument( - "--input_csv", - default="ablations_metrics_20210311.csv", - type=str, - help="CSV containing the results of the ablation study", - ) - parser.add_argument( - "--output_dir", - default=None, - type=str, - help="Output directory", - ) - parser.add_argument( - "--technique", - default=None, - type=str, - help="Keyword specifying the technique. 
One of: pseudo, depth, segmentation, dada_seg, dada_masker, spade", - ) - parser.add_argument( - "--dpi", - default=200, - type=int, - help="DPI for the output images", - ) - parser.add_argument( - "--n_bs", - default=1e6, - type=int, - help="Number of bootrstrap samples", - ) - parser.add_argument( - "--alpha", - default=0.99, - type=float, - help="Confidence level", - ) - parser.add_argument( - "--bs_seed", - default=17, - type=int, - help="Bootstrap random seed, for reproducibility", - ) - - return parser.parse_args() - - -def add_ci_mean( - ax, sample_measure, bs_mean, bs_std, ci, color, alpha, fontsize, invert=False -): - - # Fill area between CI - dist = ax.lines[0] - dist_y = dist.get_ydata() - dist_x = dist.get_xdata() - linewidth = dist.get_linewidth() - - x_idx_low = np.argmin(np.abs(dist_x - ci[0])) - x_idx_high = np.argmin(np.abs(dist_x - ci[1])) - x_ci = dist_x[x_idx_low:x_idx_high] - y_ci = dist_y[x_idx_low:x_idx_high] - - ax.fill_between(x_ci, 0, y_ci, facecolor=color, alpha=alpha) - - # Add vertical lines of CI - ax.vlines( - x=ci[0], - ymin=0.0, - ymax=y_ci[0], - color=color, - linewidth=linewidth, - label="ci_low", - ) - ax.vlines( - x=ci[1], - ymin=0.0, - ymax=y_ci[-1], - color=color, - linewidth=linewidth, - label="ci_high", - ) - - # Add annotations - bbox_props = dict(boxstyle="round, pad=0.4", fc="w", ec="k", lw=2) - - if invert: - ha_l = "right" - ha_u = "left" - else: - ha_l = "left" - ha_u = "right" - ax.text( - ci[0], - 0.0, - s="L = {:.4f}".format(ci[0]), - ha=ha_l, - va="bottom", - fontsize=fontsize, - bbox=bbox_props, - ) - ax.text( - ci[1], - 0.0, - s="U = {:.4f}".format(ci[1]), - ha=ha_u, - va="bottom", - fontsize=fontsize, - bbox=bbox_props, - ) - - # Add vertical line of bootstrap mean - x_idx_mean = np.argmin(np.abs(dist_x - bs_mean)) - ax.vlines( - x=bs_mean, ymin=0.0, ymax=dist_y[x_idx_mean], color="k", linewidth=linewidth - ) - - # Add annotation of bootstrap mean - bbox_props = dict(boxstyle="round, pad=0.4", fc="w", ec="k", lw=2) - - ax.text( - bs_mean, - 0.6 * dist_y[x_idx_mean], - s="Bootstrap mean = {:.4f}".format(bs_mean), - ha="center", - va="center", - fontsize=fontsize, - bbox=bbox_props, - ) - - # Add vertical line of sample_measure - x_idx_smeas = np.argmin(np.abs(dist_x - sample_measure)) - ax.vlines( - x=sample_measure, - ymin=0.0, - ymax=dist_y[x_idx_smeas], - color="k", - linewidth=linewidth, - linestyles="dotted", - ) - - # Add SD - bbox_props = dict(boxstyle="darrow, pad=0.4", fc="w", ec="k", lw=2) - - ax.text( - bs_mean, - 0.4 * dist_y[x_idx_mean], - s="SD = {:.4f} = SE".format(bs_std), - ha="center", - va="center", - fontsize=fontsize, - bbox=bbox_props, - ) - - -def add_null_pval(ax, null, color, alpha, fontsize): - - # Fill area between CI - dist = ax.lines[0] - dist_y = dist.get_ydata() - dist_x = dist.get_xdata() - linewidth = dist.get_linewidth() - - x_idx_null = np.argmin(np.abs(dist_x - null)) - if x_idx_null >= (len(dist_x) / 2.0): - x_pval = dist_x[x_idx_null:] - y_pval = dist_y[x_idx_null:] - else: - x_pval = dist_x[:x_idx_null] - y_pval = dist_y[:x_idx_null] - - ax.fill_between(x_pval, 0, y_pval, facecolor=color, alpha=alpha) - - # Add vertical lines of null - dist = ax.lines[0] - linewidth = dist.get_linewidth() - y_max = ax.get_ylim()[1] - ax.vlines( - x=null, - ymin=0.0, - ymax=y_max, - color="k", - linewidth=linewidth, - linestyles="dotted", - ) - - # Add annotations - bbox_props = dict(boxstyle="round, pad=0.4", fc="w", ec="k", lw=2) - - ax.text( - null, - 0.75 * y_max, - s="Null hypothesis = {:.1f}".format(null), - 
ha="center", - va="center", - fontsize=fontsize, - bbox=bbox_props, - ) - - -def plot_bootstrap_distr( - sample_measure, bs_samples, alpha, color_ci, color_pval=None, null=None -): - - # Compute results from bootstrap - q_low = (1.0 - alpha) / 2.0 - q_high = 1.0 - q_low - ci = np.quantile(bs_samples, [q_low, q_high]) - bs_mean = np.mean(bs_samples) - bs_std = np.std(bs_samples) - - if null is not None and color_pval is not None: - pval_flag = True - pval = np.min([[np.mean(bs_samples > null), np.mean(bs_samples < null)]]) * 2 - else: - pval_flag = False - - # Set up plot - sns.set(style="whitegrid") - fontsize = 24 - font = {"family": "DejaVu Sans", "weight": "normal", "size": fontsize} - plt.rc("font", **font) - alpha_plot = 0.5 - - # Initialize the matplotlib figure - fig, ax = plt.subplots(figsize=(30, 12), dpi=args.dpi) - - # Plot distribution of bootstrap means - sns.kdeplot(bs_samples, color="b", linewidth=5, gridsize=1000, ax=ax) - - y_lim = ax.get_ylim() - - # Change spines - sns.despine(left=True, bottom=True) - - # Annotations - add_ci_mean( - ax, - sample_measure, - bs_mean, - bs_std, - ci, - color=color_ci, - alpha=alpha_plot, - fontsize=fontsize, - ) - - if pval_flag: - add_null_pval(ax, null, color=color_pval, alpha=alpha_plot, fontsize=fontsize) - - # Legend - ci_patch = mpatches.Patch( - facecolor=color_ci, - edgecolor=None, - alpha=alpha_plot, - label="{:d} % confidence interval".format(int(100 * alpha)), - ) - - if pval_flag: - if pval == 0.0: - pval_patch = mpatches.Patch( - facecolor=color_pval, - edgecolor=None, - alpha=alpha_plot, - label="P value / 2 = {:.1f}".format(pval / 2.0), - ) - elif np.around(pval / 2.0, decimals=4) > 0.0000: - pval_patch = mpatches.Patch( - facecolor=color_pval, - edgecolor=None, - alpha=alpha_plot, - label="P value / 2 = {:.4f}".format(pval / 2.0), - ) - else: - pval_patch = mpatches.Patch( - facecolor=color_pval, - edgecolor=None, - alpha=alpha_plot, - label="P value / 2 < $10^{}$".format(np.ceil(np.log10(pval / 2.0))), - ) - - leg = ax.legend( - handles=[ci_patch, pval_patch], - ncol=1, - loc="upper right", - frameon=True, - framealpha=1.0, - title="", - fontsize=fontsize, - columnspacing=1.0, - labelspacing=0.2, - markerfirst=True, - ) - else: - leg = ax.legend( - handles=[ci_patch], - ncol=1, - loc="upper right", - frameon=True, - framealpha=1.0, - title="", - fontsize=fontsize, - columnspacing=1.0, - labelspacing=0.2, - markerfirst=True, - ) - - plt.setp(leg.get_title(), fontsize=fontsize, horizontalalignment="left") - - # Set X-label - ax.set_xlabel("Bootstrap estimates", rotation=0, fontsize=fontsize, labelpad=10.0) - - # Set Y-label - ax.set_ylabel("Density", rotation=90, fontsize=fontsize, labelpad=10.0) - - # Ticks - plt.setp(ax.get_xticklabels(), fontsize=0.8 * fontsize, verticalalignment="top") - plt.setp(ax.get_yticklabels(), fontsize=0.8 * fontsize) - - ax.set_ylim(y_lim) - - return fig, bs_mean, bs_std, ci, pval - - -if __name__ == "__main__": - # ----------------------------- - # ----- Parse arguments ----- - # ----------------------------- - args = parsed_args() - print("Args:\n" + "\n".join([f" {k:20}: {v}" for k, v in vars(args).items()])) - - # Determine output dir - if args.output_dir is None: - output_dir = Path(os.environ["SLURM_TMPDIR"]) - else: - output_dir = Path(args.output_dir) - if not output_dir.exists(): - output_dir.mkdir(parents=True, exist_ok=False) - - # Store args - output_yml = output_dir / "{}_bootstrap.yml".format(args.technique) - with open(output_yml, "w") as f: - yaml.dump(vars(args), f) - - # 
Determine technique - if args.technique.lower() not in dict_techniques: - raise ValueError("{} is not a valid technique".format(args.technique)) - else: - technique = dict_techniques[args.technique.lower()] - - # Read CSV - df = pd.read_csv(args.input_csv, index_col="model_img_idx") - - # Find relevant model pairs - model_pairs = [] - for mi in df.loc[df[technique]].model_feats.unique(): - for mj in df.model_feats.unique(): - if mj == mi: - continue - - if df.loc[df.model_feats == mj, technique].unique()[0]: - continue - - is_pair = True - for f in model_feats: - if f == technique: - continue - elif ( - df.loc[df.model_feats == mj, f].unique()[0] - != df.loc[df.model_feats == mi, f].unique()[0] - ): - is_pair = False - break - else: - pass - if is_pair: - model_pairs.append((mi, mj)) - break - - print("\nModel pairs identified:\n") - for pair in model_pairs: - print("{} & {}".format(pair[0], pair[1])) - - df["base"] = ["N/A"] * len(df) - for spp in model_pairs: - df.loc[df.model_feats.isin(spp), "depth_base"] = spp[1] - - # Build bootstrap data - data = {m: [] for m in dict_metrics["key_metrics"]} - for m_with, m_without in model_pairs: - df_with = df.loc[df.model_feats == m_with] - df_without = df.loc[df.model_feats == m_without] - for metric in data.keys(): - diff = ( - df_with.sort_values(by="img_idx")[metric].values - - df_without.sort_values(by="img_idx")[metric].values - ) - data[metric].extend(diff.tolist()) - - # Run bootstrap - measures = ["mean", "median", "20_trimmed_mean"] - bs_data = {meas: {m: np.zeros(args.n_bs) for m in data.keys()} for meas in measures} - - np.random.seed(args.bs_seed) - for m, data_m in data.items(): - for idx, s in enumerate(tqdm(range(args.n_bs))): - # Sample with replacement - bs_sample = np.random.choice(data_m, size=len(data_m), replace=True) - - # Store mean - bs_data["mean"][m][idx] = np.mean(bs_sample) - - # Store median - bs_data["median"][m][idx] = np.median(bs_sample) - - # Store 20 % trimmed mean - bs_data["20_trimmed_mean"][m][idx] = trim_mean(bs_sample, 0.2) - -for metric in dict_metrics["key_metrics"]: - sample_measure = trim_mean(data[metric], 0.2) - fig, bs_mean, bs_std, ci, pval = plot_bootstrap_distr( - sample_measure, - bs_data["20_trimmed_mean"][metric], - alpha=args.alpha, - color_ci=color_cat1_light, - color_pval=color_cat2_light, - null=0.0, - ) - - # Save figure - output_fig = output_dir / "{}_bootstrap_{}_{}.png".format( - args.technique, metric, "20_trimmed_mean" - ) - fig.savefig(output_fig, dpi=fig.dpi, bbox_inches="tight") - - # Store results - output_results = output_dir / "{}_bootstrap_{}_{}.yml".format( - args.technique, metric, "20_trimmed_mean" - ) - results_dict = { - "measure": "20_trimmed_mean", - "sample_measure": float(sample_measure), - "bs_mean": float(bs_mean), - "bs_std": float(bs_std), - "ci_left": float(ci[0]), - "ci_right": float(ci[1]), - "pval": float(pval), - } - with open(output_results, "w") as f: - yaml.dump(results_dict, f) diff --git a/spaces/NooneImportant/tts/README.md b/spaces/NooneImportant/tts/README.md deleted file mode 100644 index 23be0da15520a39a0ec662c0e47bb3b862bc17d8..0000000000000000000000000000000000000000 --- a/spaces/NooneImportant/tts/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: ElevenLabs TTS -emoji: 🗣️ -colorFrom: yellow -colorTo: pink -sdk: gradio -sdk_version: 3.40.1 -app_file: app.py -pinned: false -duplicated_from: elevenlabs/tts ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git 
a/spaces/OAOA/DifFace/basicsr/ops/fused_act/fused_act.py b/spaces/OAOA/DifFace/basicsr/ops/fused_act/fused_act.py deleted file mode 100644 index 88edc445484b71119dc22a258e83aef49ce39b07..0000000000000000000000000000000000000000 --- a/spaces/OAOA/DifFace/basicsr/ops/fused_act/fused_act.py +++ /dev/null @@ -1,95 +0,0 @@ -# modify from https://github.com/rosinality/stylegan2-pytorch/blob/master/op/fused_act.py # noqa:E501 - -import os -import torch -from torch import nn -from torch.autograd import Function - -BASICSR_JIT = os.getenv('BASICSR_JIT') -if BASICSR_JIT == 'True': - from torch.utils.cpp_extension import load - module_path = os.path.dirname(__file__) - fused_act_ext = load( - 'fused', - sources=[ - os.path.join(module_path, 'src', 'fused_bias_act.cpp'), - os.path.join(module_path, 'src', 'fused_bias_act_kernel.cu'), - ], - ) -else: - try: - from . import fused_act_ext - except ImportError: - pass - # avoid annoying print output - # print(f'Cannot import deform_conv_ext. Error: {error}. You may need to: \n ' - # '1. compile with BASICSR_EXT=True. or\n ' - # '2. set BASICSR_JIT=True during running') - - -class FusedLeakyReLUFunctionBackward(Function): - - @staticmethod - def forward(ctx, grad_output, out, negative_slope, scale): - ctx.save_for_backward(out) - ctx.negative_slope = negative_slope - ctx.scale = scale - - empty = grad_output.new_empty(0) - - grad_input = fused_act_ext.fused_bias_act(grad_output, empty, out, 3, 1, negative_slope, scale) - - dim = [0] - - if grad_input.ndim > 2: - dim += list(range(2, grad_input.ndim)) - - grad_bias = grad_input.sum(dim).detach() - - return grad_input, grad_bias - - @staticmethod - def backward(ctx, gradgrad_input, gradgrad_bias): - out, = ctx.saved_tensors - gradgrad_out = fused_act_ext.fused_bias_act(gradgrad_input, gradgrad_bias, out, 3, 1, ctx.negative_slope, - ctx.scale) - - return gradgrad_out, None, None, None - - -class FusedLeakyReLUFunction(Function): - - @staticmethod - def forward(ctx, input, bias, negative_slope, scale): - empty = input.new_empty(0) - out = fused_act_ext.fused_bias_act(input, bias, empty, 3, 0, negative_slope, scale) - ctx.save_for_backward(out) - ctx.negative_slope = negative_slope - ctx.scale = scale - - return out - - @staticmethod - def backward(ctx, grad_output): - out, = ctx.saved_tensors - - grad_input, grad_bias = FusedLeakyReLUFunctionBackward.apply(grad_output, out, ctx.negative_slope, ctx.scale) - - return grad_input, grad_bias, None, None - - -class FusedLeakyReLU(nn.Module): - - def __init__(self, channel, negative_slope=0.2, scale=2**0.5): - super().__init__() - - self.bias = nn.Parameter(torch.zeros(channel)) - self.negative_slope = negative_slope - self.scale = scale - - def forward(self, input): - return fused_leaky_relu(input, self.bias, self.negative_slope, self.scale) - - -def fused_leaky_relu(input, bias, negative_slope=0.2, scale=2**0.5): - return FusedLeakyReLUFunction.apply(input, bias, negative_slope, scale) diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/criterions/adaptive_loss.py b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/criterions/adaptive_loss.py deleted file mode 100644 index 6209ceaedb6d8120ad820c11b55c13596447933c..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/criterions/adaptive_loss.py +++ /dev/null @@ -1,123 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
- -import math -from dataclasses import dataclass - -import torch.nn.functional as F -from fairseq import metrics, utils -from fairseq.criterions import FairseqCriterion, register_criterion -from fairseq.dataclass import FairseqDataclass -from fairseq.dataclass.constants import DDP_BACKEND_CHOICES -from omegaconf import II - - -@dataclass -class AdaptiveLossConfig(FairseqDataclass): - sentence_avg: bool = II("optimization.sentence_avg") - ddp_backend: DDP_BACKEND_CHOICES = II("distributed_training.ddp_backend") - - -@register_criterion("adaptive_loss", dataclass=AdaptiveLossConfig) -class AdaptiveLoss(FairseqCriterion): - """This is an implementation of the loss function accompanying the adaptive softmax approximation for - graphical processing units (GPU), described in the paper "Efficient softmax approximation for GPUs" - (http://arxiv.org/abs/1609.04309).""" - - def __init__(self, task, sentence_avg): - super().__init__(task) - self.sentence_avg = sentence_avg - - @classmethod - def build_criterion(cls, cfg: AdaptiveLossConfig, task): - if cfg.ddp_backend in {"c10d", "pytorch_ddp"}: - raise Exception( - "AdaptiveLoss is not compatible with the PyTorch " - "version of DistributedDataParallel. Please use " - "`--ddp-backend=legacy_ddp` instead." - ) - return cls(task, cfg.sentence_avg) - - def forward(self, model, sample, reduce=True): - """Compute the loss for the given sample. - - Returns a tuple with three elements: - 1) the loss - 2) the sample size, which is used as the denominator for the gradient - 3) logging outputs to display while training - """ - - assert ( - hasattr(model.decoder, "adaptive_softmax") - and model.decoder.adaptive_softmax is not None - ) - adaptive_softmax = model.decoder.adaptive_softmax - - net_output = model(**sample["net_input"]) - orig_target = model.get_targets(sample, net_output) - - nsentences = orig_target.size(0) - orig_target = orig_target.view(-1) - - bsz = orig_target.size(0) - - logits, target = adaptive_softmax(net_output[0], orig_target) - assert len(target) == len(logits) - - loss = net_output[0].new(1 if reduce else bsz).zero_() - - for i in range(len(target)): - if target[i] is not None: - assert target[i].min() >= 0 and target[i].max() <= logits[i].size(1) - loss += F.cross_entropy( - logits[i], - target[i], - ignore_index=self.padding_idx, - reduction="sum" if reduce else "none", - ) - - orig = utils.strip_pad(orig_target, self.padding_idx) - ntokens = orig.numel() - sample_size = sample["target"].size(0) if self.sentence_avg else ntokens - logging_output = { - "loss": loss.data, - "ntokens": ntokens, - "nsentences": nsentences, - "sample_size": sample_size, - } - return loss, sample_size, logging_output - - @staticmethod - def reduce_metrics(logging_outputs) -> None: - """Aggregate logging outputs from data parallel training.""" - loss_sum = utils.item(sum(log.get("loss", 0) for log in logging_outputs)) - ntokens = utils.item(sum(log.get("ntokens", 0) for log in logging_outputs)) - sample_size = utils.item( - sum(log.get("sample_size", 0) for log in logging_outputs) - ) - - metrics.log_scalar( - "loss", loss_sum / sample_size / math.log(2), sample_size, round=3 - ) - if sample_size != ntokens: - metrics.log_scalar( - "nll_loss", loss_sum / ntokens / math.log(2), ntokens, round=3 - ) - metrics.log_derived( - "ppl", lambda meters: utils.get_perplexity(meters["nll_loss"].avg) - ) - else: - metrics.log_derived( - "ppl", lambda meters: utils.get_perplexity(meters["loss"].avg) - ) - - @staticmethod - def logging_outputs_can_be_summed() -> bool: - 
""" - Whether the logging outputs returned by `forward` can be summed - across workers prior to calling `reduce_metrics`. Setting this - to True will improves distributed training speed. - """ - return True diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/tokenizer.py b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/tokenizer.py deleted file mode 100644 index 42131f7b1d334020c3b48a6e44d4139f7c62ad28..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/tokenizer.py +++ /dev/null @@ -1,15 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import re - - -SPACE_NORMALIZER = re.compile(r"\s+") - - -def tokenize_line(line): - line = SPACE_NORMALIZER.sub(" ", line) - line = line.strip() - return line.split() diff --git a/spaces/OFA-Sys/OFA-Image_Caption/data/__init__.py b/spaces/OFA-Sys/OFA-Image_Caption/data/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/flores101/README.md b/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/flores101/README.md deleted file mode 100644 index 635c13f40bd0ccab704735bc5c26ea0192ea98cd..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/flores101/README.md +++ /dev/null @@ -1,223 +0,0 @@ -

    - -

    - -# Flores101: Large-Scale Multilingual Machine Translation - -## Introduction - -Baseline pretrained models for small and large tracks of WMT 21 Large-Scale Multilingual Machine Translation competition. - -Flores Task at WMT 21: http://www.statmt.org/wmt21/large-scale-multilingual-translation-task.html - -Flores announement blog post: https://ai.facebook.com/blog/flores-researchers-kick-off-multilingual-translation-challenge-at-wmt-and-call-for-compute-grants/ - - - -## Pretrained models - -Model | Num layers | Embed dimension | FFN dimension| Vocab Size | #params | Download ----|---|---|---|---|---|--- -`flores101_mm100_615M` | 12 | 1024 | 4096 | 256,000 | 615M | https://dl.fbaipublicfiles.com/flores101/pretrained_models/flores101_mm100_615M.tar.gz -`flores101_mm100_175M` | 6 | 512 | 2048 | 256,000 | 175M | https://dl.fbaipublicfiles.com/flores101/pretrained_models/flores101_mm100_175M.tar.gz - - -These models are trained similar to [M2M-100](https://arxiv.org/abs/2010.11125) with additional support for the languages that are part of the WMT Large-Scale Multilingual Machine Translation track. Full list of languages can be found at the bottom. - - -## Example Generation code - -### Download model, sentencepiece vocab - -```bash -fairseq=/path/to/fairseq -cd $fairseq - -# Download 615M param model. -wget https://dl.fbaipublicfiles.com/flores101/pretrained_models/flores101_mm100_615M.tar.gz - -# Extract -tar -xvzf flores101_mm100_615M.tar.gz -``` - -### Encode using our SentencePiece Model -Note: Install SentencePiece from [here](https://github.com/google/sentencepiece) - - -```bash -fairseq=/path/to/fairseq -cd $fairseq - -# Download example dataset From German to French -sacrebleu --echo src -l de-fr -t wmt19 | head -n 20 > raw_input.de-fr.de -sacrebleu --echo ref -l de-fr -t wmt19 | head -n 20 > raw_input.de-fr.fr - -for lang in de fr ; do - python scripts/spm_encode.py \ - --model flores101_mm100_615M/sentencepiece.bpe.model \ - --output_format=piece \ - --inputs=raw_input.de-fr.${lang} \ - --outputs=spm.de-fr.${lang} -done -``` - -### Binarization - -```bash -fairseq-preprocess \ - --source-lang de --target-lang fr \ - --testpref spm.de-fr \ - --thresholdsrc 0 --thresholdtgt 0 \ - --destdir data_bin \ - --srcdict flores101_mm100_615M/dict.txt --tgtdict flores101_mm100_615M/dict.txt -``` - -### Generation - - -```bash -fairseq-generate \ - data_bin \ - --batch-size 1 \ - --path flores101_mm100_615M/model.pt \ - --fixed-dictionary flores101_mm100_615M/dict.txt \ - -s de -t fr \ - --remove-bpe 'sentencepiece' \ - --beam 5 \ - --task translation_multi_simple_epoch \ - --lang-pairs flores101_mm100_615M/language_pairs.txt \ - --decoder-langtok --encoder-langtok src \ - --gen-subset test \ - --fp16 \ - --dataset-impl mmap \ - --distributed-world-size 1 --distributed-no-spawn -``` - -### Supported Languages and lang code - -Language | lang code ----|--- -Akrikaans | af -Amharic | am -Arabic | ar -Assamese | as -Asturian | ast -Aymara | ay -Azerbaijani | az -Bashkir | ba -Belarusian | be -Bulgarian | bg -Bengali | bn -Breton | br -Bosnian | bs -Catalan | ca -Cebuano | ceb -Chokwe | cjk -Czech | cs -Welsh | cy -Danish | da -German | de -Dyula| dyu -Greek | el -English | en -Spanish | es -Estonian | et -Persian | fa -Fulah | ff -Finnish | fi -French | fr -Western Frisian | fy -Irish | ga -Scottish Gaelic | gd -Galician | gl -Gujarati | gu -Hausa | ha -Hebrew | he -Hindi | hi -Croatian | hr -Haitian Creole | ht -Hungarian | hu -Armenian | hy -Indonesian | id -Igbo | ig -Iloko | ilo -Icelandic 
| is -Italian | it -Japanese | ja -Javanese | jv -Georgian | ka -Kachin | kac -Kamba | kam -Kabuverdianu | kea -Kongo | kg -Kazakh | kk -Central Khmer | km -Kimbundu | kmb -Northern Kurdish | kmr -Kannada | kn -Korean | ko -Kurdish | ku -Kyrgyz | ky -Luxembourgish | lb -Ganda | lg -Lingala | ln -Lao | lo -Lithuanian | lt -Luo | luo -Latvian | lv -Malagasy | mg -Maori | mi -Macedonian | mk -Malayalam | ml -Mongolian | mn -Marathi | mr -Malay | ms -Maltese | mt -Burmese | my -Nepali | ne -Dutch | nl -Norwegian | no -Northern Sotho | ns -Nyanja | ny -Occitan | oc -Oromo | om -Oriya | or -Punjabi | pa -Polish | pl -Pashto | ps -Portuguese | pt -Quechua | qu -Romanian | ro -Russian | ru -Sindhi | sd -Shan | shn -Sinhala | si -Slovak | sk -Slovenian | sl -Shona | sn -Somali | so -Albanian | sq -Serbian | sr -Swati | ss -Sundanese | su -Swedish | sv -Swahili | sw -Tamil | ta -Telugu | te -Tajik | tg -Thai | th -Tigrinya | ti -Tagalog | tl -Tswana | tn -Turkish | tr -Ukrainian | uk -Umbundu | umb -Urdu | ur -Uzbek | uz -Vietnamese | vi -Wolof | wo -Xhosa | xh -Yiddish | yi -Yoruba | yo -Chinese| zh -Zulu | zu diff --git a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/data/multi_corpus_dataset.py b/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/data/multi_corpus_dataset.py deleted file mode 100644 index 746155e515897da9fc9c803f9396a45b5cead8d0..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/data/multi_corpus_dataset.py +++ /dev/null @@ -1,245 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import logging -import time -from collections import OrderedDict -from typing import Dict, List - -import numpy as np -from fairseq.data import data_utils - -from . import FairseqDataset - -logger = logging.getLogger(__name__) - - -class MultiCorpusDataset(FairseqDataset): - """ - Stores multiple instances of FairseqDataset together. Requires each instance - to be the same dataset, as the collate method needs to work on batches with - samples from each dataset. - - Allows specifying a distribution over the datasets to use. Note that unlike - MultiCorpusSampledDataset, this distribution allows sampling for each item, - rather than on a batch level. - - Each time ordered_indices() is called, a new sample is generated with - the specified distribution. - - Args: - datasets: a OrderedDict of FairseqDataset instances. 
- distribution: a List containing the probability of getting an utterance from - corresponding dataset - seed: random seed for sampling the datsets - sort_indices: if true, will sort the ordered indices by size - batch_sample: if true, will ensure each batch is from a single dataset - """ - - def __init__( - self, - datasets: Dict[str, FairseqDataset], - distribution: List[float], - seed: int, - sort_indices: bool = False, - batch_sample: bool = False, - distributed_rank=None, - ): - super().__init__() - assert isinstance(datasets, OrderedDict) - assert len(datasets) == len(distribution) - assert sum(distribution) == 1 - self.datasets = datasets - self.distribution = distribution - self.seed = seed - self.sort_indices = sort_indices - self.batch_sample = batch_sample - self.distributed_rank = distributed_rank - - # Avoid repeated conversions to list later - self.dataset_list = list(datasets.values()) - self.total_num_instances = 0 - - first_dataset = list(self.datasets.values())[0] - - self.dataset_offsets = [] - for dataset in datasets.values(): - assert isinstance(dataset, FairseqDataset) - assert type(dataset) is type(first_dataset) - self.dataset_offsets.append(self.total_num_instances) - self.total_num_instances += len(dataset) - - def ordered_indices(self): - start = time.time() - with data_utils.numpy_seed(self.seed, self.epoch): - logger.info(f"sampling new dataset with seed {self.seed} epoch {self.epoch}") - sampled_indices = [] - num_selected_instances = 0 - - # For each dataset i, sample self.distribution[i] * self.total_num_instances - for i, key in enumerate(self.datasets): - - if i < len(self.datasets) - 1: - num_instances = int(self.distribution[i] * self.total_num_instances) - high = self.dataset_offsets[i + 1] - else: - num_instances = self.total_num_instances - num_selected_instances - high = self.total_num_instances - - logger.info(f"sampling {num_instances} from {key} dataset") - num_selected_instances += num_instances - - # First, add k copies of the dataset where k = num_instances // len(dataset). - # This ensures an equal distribution of the data points as much as possible. - # For the remaining entries randomly sample them - dataset_size = len(self.datasets[key]) - num_copies = num_instances // dataset_size - dataset_indices = ( - np.random.permutation(high - self.dataset_offsets[i]) - + self.dataset_offsets[i] - )[: num_instances - num_copies * dataset_size] - if num_copies > 0: - sampled_indices += list( - np.concatenate( - ( - np.repeat( - np.arange(self.dataset_offsets[i], high), num_copies - ), - dataset_indices, - ) - ) - ) - else: - sampled_indices += list(dataset_indices) - - assert ( - len(sampled_indices) == self.total_num_instances - ), f"{len(sampled_indices)} vs {self.total_num_instances}" - - np.random.shuffle(sampled_indices) - if self.sort_indices: - sampled_indices.sort(key=lambda i: self.num_tokens(i)) - - logger.info( - "multi_corpus_dataset ordered_indices took {}s".format( - time.time() - start - ) - ) - return np.array(sampled_indices, dtype=np.int64) - - def _map_index(self, index: int): - """ - If dataset A has length N and dataset B has length M - then index 1 maps to index 1 of dataset A, and index N + 1 - maps to index 1 of B. 
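-
-        For example, with dataset A of length 3 and dataset B of length 5,
-        index 4 maps to index 1 of dataset B.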
- """ - counter = 0 - for key, dataset in self.datasets.items(): - if index < counter + len(dataset): - return index - counter, key - counter += len(dataset) - raise ValueError( - "Invalid index: {}, max: {}".format(index, self.total_num_instances) - ) - - def __len__(self): - """ - Length of this dataset is the sum of individual datasets - """ - return self.total_num_instances - - def __getitem__(self, index): - new_index, key = self._map_index(index) - try: - item = self.datasets[key][new_index] - item["full_id"] = index - return item - except Exception as e: - e.args = (f"Error from {key} dataset", *e.args) - raise - - def collater(self, samples): - """ - If we are doing batch sampling, then pick the right collater to use. - - Otherwise we assume all collaters are the same. - """ - if len(samples) == 0: - return None - if "full_id" in samples[0]: - _, key = self._map_index(samples[0]["full_id"]) - try: - batch = self.datasets[key].collater(samples) - except Exception: - print(f"Collating failed for key {key}", flush=True) - raise - return batch - else: - # Subclasses may override __getitem__ to not specify full_id - return list(self.datasets.values())[0].collater(samples) - - def num_tokens(self, index: int): - index, key = self._map_index(index) - return self.datasets[key].num_tokens(index) - - def size(self, index: int): - index, key = self._map_index(index) - return self.datasets[key].size(index) - - @property - def can_reuse_epoch_itr_across_epochs(self): - return False - - def set_epoch(self, epoch, **unused): - super().set_epoch(epoch) - logger.info(f"setting epoch of multi_corpus_dataset to {epoch}") - self.epoch = epoch - - @property - def supports_prefetch(self): - return False - - @property - def supports_fetch_outside_dataloader(self): - return all( - self.datasets[key].supports_fetch_outside_dataloader - for key in self.datasets - ) - - def batch_by_size( - self, - indices, - max_tokens=None, - max_sentences=None, - required_batch_size_multiple=1, - ): - if not self.batch_sample: - return super().batch_by_size( - indices, max_tokens, max_sentences, required_batch_size_multiple - ) - - dataset_indices = {key: [] for key in self.datasets} - for i in indices: - _, key = self._map_index(i) - dataset_indices[key].append(i) - - batches = [] - for key in dataset_indices: - cur_batches = super().batch_by_size( - np.array(dataset_indices[key], dtype=np.int64), - max_tokens, - max_sentences, - required_batch_size_multiple, - ) - logger.info(f"Created {len(cur_batches)} batches for dataset {key}") - batches += cur_batches - - # If this dataset is used in a distributed training setup, - # then shuffle such that the order is seeded by the distributed rank - # as well - if self.distributed_rank is not None: - with data_utils.numpy_seed(self.seed, self.epoch, self.distributed_rank): - np.random.shuffle(batches) - return batches diff --git a/spaces/OFA-Sys/OFA-vqa/fairseq/examples/bart/README.summarization.md b/spaces/OFA-Sys/OFA-vqa/fairseq/examples/bart/README.summarization.md deleted file mode 100644 index 8727584f2b2bdd880c6cd3abbf39b75dfbf4a67c..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-vqa/fairseq/examples/bart/README.summarization.md +++ /dev/null @@ -1,102 +0,0 @@ -# Fine-tuning BART on CNN-Dailymail summarization task - -### 1) Download the CNN and Daily Mail data and preprocess it into data files with non-tokenized cased samples. - -Follow the instructions [here](https://github.com/abisee/cnn-dailymail) to download the original CNN and Daily Mail datasets. 
To preprocess the data, refer to the pointers in [this issue](https://github.com/pytorch/fairseq/issues/1391) or check out the code [here](https://github.com/artmatsak/cnn-dailymail). - -Follow the instructions [here](https://github.com/EdinburghNLP/XSum) to download the original Extreme Summarization datasets, or check out the code [here](https://github.com/EdinburghNLP/XSum/tree/master/XSum-Dataset), Please keep the raw dataset and make sure no tokenization nor BPE on the dataset. - -### 2) BPE preprocess: - -```bash -wget -N 'https://dl.fbaipublicfiles.com/fairseq/gpt2_bpe/encoder.json' -wget -N 'https://dl.fbaipublicfiles.com/fairseq/gpt2_bpe/vocab.bpe' -wget -N 'https://dl.fbaipublicfiles.com/fairseq/gpt2_bpe/dict.txt' - -TASK=cnn_dm -for SPLIT in train val -do - for LANG in source target - do - python -m examples.roberta.multiprocessing_bpe_encoder \ - --encoder-json encoder.json \ - --vocab-bpe vocab.bpe \ - --inputs "$TASK/$SPLIT.$LANG" \ - --outputs "$TASK/$SPLIT.bpe.$LANG" \ - --workers 60 \ - --keep-empty; - done -done -``` - -### 3) Binarize dataset: -```bash -fairseq-preprocess \ - --source-lang "source" \ - --target-lang "target" \ - --trainpref "${TASK}/train.bpe" \ - --validpref "${TASK}/val.bpe" \ - --destdir "${TASK}-bin/" \ - --workers 60 \ - --srcdict dict.txt \ - --tgtdict dict.txt; -``` - -### 4) Fine-tuning on CNN-DM summarization task: -Example fine-tuning CNN-DM -```bash -TOTAL_NUM_UPDATES=20000 -WARMUP_UPDATES=500 -LR=3e-05 -MAX_TOKENS=2048 -UPDATE_FREQ=4 -BART_PATH=/path/to/bart/model.pt - -CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 fairseq-train cnn_dm-bin \ - --restore-file $BART_PATH \ - --max-tokens $MAX_TOKENS \ - --task translation \ - --source-lang source --target-lang target \ - --truncate-source \ - --layernorm-embedding \ - --share-all-embeddings \ - --share-decoder-input-output-embed \ - --reset-optimizer --reset-dataloader --reset-meters \ - --required-batch-size-multiple 1 \ - --arch bart_large \ - --criterion label_smoothed_cross_entropy \ - --label-smoothing 0.1 \ - --dropout 0.1 --attention-dropout 0.1 \ - --weight-decay 0.01 --optimizer adam --adam-betas "(0.9, 0.999)" --adam-eps 1e-08 \ - --clip-norm 0.1 \ - --lr-scheduler polynomial_decay --lr $LR --total-num-update $TOTAL_NUM_UPDATES --warmup-updates $WARMUP_UPDATES \ - --fp16 --update-freq $UPDATE_FREQ \ - --skip-invalid-size-inputs-valid-test \ - --find-unused-parameters; -``` -Above is expected to run on `1` node with `8 32gb-V100`. -Expected training time is about `5 hours`. Training time can be reduced with distributed training on `4` nodes and `--update-freq 1`. - -Use TOTAL_NUM_UPDATES=15000 UPDATE_FREQ=2 for Xsum task - -### Inference for CNN-DM test data using above trained checkpoint. 
-After training the model as mentioned in previous step, you can perform inference with checkpoints in `checkpoints/` directory using `eval_cnn.py`, for example - -```bash -cp data-bin/cnn_dm/dict.source.txt checkpoints/ -python examples/bart/summarize.py \ - --model-dir checkpoints \ - --model-file checkpoint_best.pt \ - --src cnn_dm/test.source \ - --out cnn_dm/test.hypo -``` -For XSUM, which uses beam=6, lenpen=1.0, max_len_b=60, min_len=10: -```bash -cp data-bin/cnn_dm/dict.source.txt checkpoints/ -python examples/bart/summarize.py \ - --model-dir checkpoints \ - --model-file checkpoint_best.pt \ - --src cnn_dm/test.source \ - --out cnn_dm/test.hypo \ - --xsum-kwargs -``` diff --git a/spaces/OFA-Sys/OFA-vqa/fairseq/examples/textless_nlp/gslm/unit2speech/multiproc.py b/spaces/OFA-Sys/OFA-vqa/fairseq/examples/textless_nlp/gslm/unit2speech/multiproc.py deleted file mode 100644 index 2a287a4e97c66acbd36897b25f2ece5494005f03..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-vqa/fairseq/examples/textless_nlp/gslm/unit2speech/multiproc.py +++ /dev/null @@ -1,27 +0,0 @@ -import os -import time -import torch -import sys -import subprocess - -argslist = list(sys.argv)[1:] -log_dir = argslist[-1] -num_gpus = torch.cuda.device_count() -argslist.append('--n_gpus={}'.format(num_gpus)) -workers = [] -job_id = time.strftime("%Y_%m_%d-%H%M%S") -argslist.append("--group_name=group_{}".format(job_id)) - -print("GPU log directory is {}".format(log_dir)) -os.makedirs(log_dir, exist_ok=True) -for i in range(num_gpus): - argslist.append('--rank={}'.format(i)) - stdout = None if i == 0 else open("{}/{}_GPU_{}.log".format(log_dir, job_id, i), - "w") - print(argslist) - p = subprocess.Popen([str(sys.executable)]+argslist, stdout=stdout) - workers.append(p) - argslist = argslist[:-1] - -for p in workers: - p.wait() diff --git a/spaces/OIUGLK/bingo/next.config.js b/spaces/OIUGLK/bingo/next.config.js deleted file mode 100644 index 0e6ccd7fbc91d0459eaaff3e968ce0556789c605..0000000000000000000000000000000000000000 --- a/spaces/OIUGLK/bingo/next.config.js +++ /dev/null @@ -1,38 +0,0 @@ -/** @type {import('next').NextConfig} */ -const nextConfig = { - // output: 'export', - // assetPrefix: '.', - webpack: (config, { isServer }) => { - if (!isServer) { - config.resolve = { - ...config.resolve, - fallback: { - 'bufferutil': false, - 'utf-8-validate': false, - http: false, - https: false, - stream: false, - // fixes proxy-agent dependencies - net: false, - dns: false, - tls: false, - assert: false, - // fixes next-i18next dependencies - path: false, - fs: false, - // fixes mapbox dependencies - events: false, - // fixes sentry dependencies - process: false - } - }; - } - config.module.exprContextCritical = false; - - return config; - }, -} - -module.exports = (...args) => { - return nextConfig -} diff --git a/spaces/OIUGLK/bingo/src/lib/bots/bing/utils.ts b/spaces/OIUGLK/bingo/src/lib/bots/bing/utils.ts deleted file mode 100644 index 64b4b96452d125346b0fc4436b5f7c18c962df0b..0000000000000000000000000000000000000000 --- a/spaces/OIUGLK/bingo/src/lib/bots/bing/utils.ts +++ /dev/null @@ -1,87 +0,0 @@ -import { ChatResponseMessage, BingChatResponse } from './types' - -export function convertMessageToMarkdown(message: ChatResponseMessage): string { - if (message.messageType === 'InternalSearchQuery') { - return message.text - } - for (const card of message.adaptiveCards??[]) { - for (const block of card.body) { - if (block.type === 'TextBlock') { - return block.text - } - } - } - return '' -} - -const 
RecordSeparator = String.fromCharCode(30) - -export const websocketUtils = { - packMessage(data: any) { - return `${JSON.stringify(data)}${RecordSeparator}` - }, - unpackMessage(data: string | ArrayBuffer | Blob) { - if (!data) return {} - return data - .toString() - .split(RecordSeparator) - .filter(Boolean) - .map((s) => { - try { - return JSON.parse(s) - } catch (e) { - return {} - } - }) - }, -} - -export async function createImage(prompt: string, id: string, headers: HeadersInit): Promise { - const { headers: responseHeaders } = await fetch(`https://www.bing.com/images/create?partner=sydney&re=1&showselective=1&sude=1&kseed=7000&SFX=&q=${encodeURIComponent(prompt)}&iframeid=${id}`, - { - method: 'HEAD', - headers, - redirect: 'manual' - }, - ); - - if (!/&id=([^&]+)$/.test(responseHeaders.get('location') || '')) { - throw new Error('请求异常,请检查 cookie 是否有效') - } - - const resultId = RegExp.$1; - let count = 0 - const imageThumbUrl = `https://www.bing.com/images/create/async/results/${resultId}?q=${encodeURIComponent(prompt)}&partner=sydney&showselective=1&IID=images.as`; - - do { - await sleep(3000); - const content = await fetch(imageThumbUrl, { headers, method: 'GET' }) - - // @ts-ignore - if (content.headers.get('content-length') > 1) { - const text = await content.text() - return (text?.match(/ target?.split('src="').pop()?.replace(/&/g, '&')) - .map(img => `![${prompt}](${img})`).join(' ') - } - } while(count ++ < 10); -} - - -export async function* streamAsyncIterable(stream: ReadableStream) { - const reader = stream.getReader() - try { - while (true) { - const { done, value } = await reader.read() - if (done) { - return - } - yield value - } - } finally { - reader.releaseLock() - } -} - -export const sleep = (ms: number) => new Promise(resolve => setTimeout(resolve, ms)) - diff --git a/spaces/Omnibus/MusicGen/app.py b/spaces/Omnibus/MusicGen/app.py deleted file mode 100644 index 0f92495d323f1c70a9c8dde3b7680e3f9491ab83..0000000000000000000000000000000000000000 --- a/spaces/Omnibus/MusicGen/app.py +++ /dev/null @@ -1,407 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. - -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -# Updated to account for UI changes from https://github.com/rkfg/audiocraft/blob/long/app.py -# also released under the MIT license. - -import argparse -from concurrent.futures import ProcessPoolExecutor -import os -from pathlib import Path -import subprocess as sp -from tempfile import NamedTemporaryFile -import time -import typing as tp -import warnings - -import torch -import gradio as gr - -from audiocraft.data.audio_utils import convert_audio -from audiocraft.data.audio import audio_write -from audiocraft.models import MusicGen - - -MODEL = None # Last used model -IS_BATCHED = "facebook/MusicGen" in os.environ.get('SPACE_ID', '') -MAX_BATCH_SIZE = 6 -BATCHED_DURATION = 15 -INTERRUPTING = False -# We have to wrap subprocess call to clean a bit the log when using gr.make_waveform -_old_call = sp.call - - -def _call_nostderr(*args, **kwargs): - # Avoid ffmpeg vomitting on the logs. - kwargs['stderr'] = sp.DEVNULL - kwargs['stdout'] = sp.DEVNULL - _old_call(*args, **kwargs) - - -sp.call = _call_nostderr -# Preallocating the pool of processes. 
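-# gr.make_waveform calls are submitted to this pool from _do_predictions, so rendering
-# the output videos does not block audio generation.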
-pool = ProcessPoolExecutor(3) -pool.__enter__() - - -def interrupt(): - global INTERRUPTING - INTERRUPTING = True - - -class FileCleaner: - def __init__(self, file_lifetime: float = 3600): - self.file_lifetime = file_lifetime - self.files = [] - - def add(self, path: tp.Union[str, Path]): - self._cleanup() - self.files.append((time.time(), Path(path))) - - def _cleanup(self): - now = time.time() - for time_added, path in list(self.files): - if now - time_added > self.file_lifetime: - if path.exists(): - path.unlink() - self.files.pop(0) - else: - break - - -file_cleaner = FileCleaner() - - -def make_waveform(*args, **kwargs): - # Further remove some warnings. - be = time.time() - with warnings.catch_warnings(): - warnings.simplefilter('ignore') - out = gr.make_waveform(*args, **kwargs) - print("Make a video took", time.time() - be) - return out - - -def load_model(version='melody'): - global MODEL - print("Loading model", version) - if MODEL is None or MODEL.name != version: - MODEL = MusicGen.get_pretrained(version) - - -def _do_predictions(texts, melodies, duration, progress=False, **gen_kwargs): - MODEL.set_generation_params(duration=duration, **gen_kwargs) - print("new batch", len(texts), texts, [None if m is None else (m[0], m[1].shape) for m in melodies]) - be = time.time() - processed_melodies = [] - target_sr = 32000 - target_ac = 1 - for melody in melodies: - if melody is None: - processed_melodies.append(None) - else: - sr, melody = melody[0], torch.from_numpy(melody[1]).to(MODEL.device).float().t() - if melody.dim() == 1: - melody = melody[None] - melody = melody[..., :int(sr * duration)] - melody = convert_audio(melody, sr, target_sr, target_ac) - processed_melodies.append(melody) - - if any(m is not None for m in processed_melodies): - outputs = MODEL.generate_with_chroma( - descriptions=texts, - melody_wavs=processed_melodies, - melody_sample_rate=target_sr, - progress=progress, - ) - else: - outputs = MODEL.generate(texts, progress=progress) - - outputs = outputs.detach().cpu().float() - out_files = [] - for output in outputs: - with NamedTemporaryFile("wb", suffix=".wav", delete=False) as file: - audio_write( - file.name, output, MODEL.sample_rate, strategy="loudness", - loudness_headroom_db=16, loudness_compressor=True, add_suffix=False) - out_files.append(pool.submit(make_waveform, file.name)) - file_cleaner.add(file.name) - res = [out_file.result() for out_file in out_files] - for file in res: - file_cleaner.add(file) - print("batch finished", len(texts), time.time() - be) - print("Tempfiles currently stored: ", len(file_cleaner.files)) - return res - - -def predict_batched(texts, melodies): - max_text_length = 512 - texts = [text[:max_text_length] for text in texts] - load_model('melody') - res = _do_predictions(texts, melodies, BATCHED_DURATION) - return [res] - - -def predict_full(model, text, melody, duration, topk, topp, temperature, cfg_coef, progress=gr.Progress()): - global INTERRUPTING - INTERRUPTING = False - if temperature < 0: - raise gr.Error("Temperature must be >= 0.") - if topk < 0: - raise gr.Error("Topk must be non-negative.") - if topp < 0: - raise gr.Error("Topp must be non-negative.") - - topk = int(topk) - load_model(model) - - def _progress(generated, to_generate): - progress((generated, to_generate)) - if INTERRUPTING: - raise gr.Error("Interrupted.") - MODEL.set_custom_progress_callback(_progress) - - outs = _do_predictions( - [text], [melody], duration, progress=True, - top_k=topk, top_p=topp, temperature=temperature, cfg_coef=cfg_coef) - 
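-    # A single prompt was passed in, so return the one generated video path.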
return outs[0] - - -def toggle_audio_src(choice): - if choice == "mic": - return gr.update(source="microphone", value=None, label="Microphone") - else: - return gr.update(source="upload", value=None, label="File") - - -def ui_full(launch_kwargs): - with gr.Blocks() as interface: - gr.Markdown( - """ - # MusicGen - This is your private demo for [MusicGen](https://github.com/facebookresearch/audiocraft), - a simple and controllable model for music generation - presented at: ["Simple and Controllable Music Generation"](https://huggingface.co/papers/2306.05284) - """ - ) - with gr.Row(): - with gr.Column(): - with gr.Row(): - text = gr.Text(label="Input Text", interactive=True) - with gr.Column(): - radio = gr.Radio(["file", "mic"], value="file", - label="Condition on a melody (optional) File or Mic") - melody = gr.Audio(source="upload", type="numpy", label="File", - interactive=True, elem_id="melody-input") - with gr.Row(): - submit = gr.Button("Submit") - # Adapted from https://github.com/rkfg/audiocraft/blob/long/app.py, MIT license. - _ = gr.Button("Interrupt").click(fn=interrupt, queue=False) - with gr.Row(): - model = gr.Radio(["melody", "medium", "small", "large"], - label="Model", value="melody", interactive=True) - with gr.Row(): - duration = gr.Slider(minimum=1, maximum=120, value=10, label="Duration", interactive=True) - with gr.Row(): - topk = gr.Number(label="Top-k", value=250, interactive=True) - topp = gr.Number(label="Top-p", value=0, interactive=True) - temperature = gr.Number(label="Temperature", value=1.0, interactive=True) - cfg_coef = gr.Number(label="Classifier Free Guidance", value=3.0, interactive=True) - with gr.Column(): - output = gr.Video(label="Generated Music") - submit.click(predict_full, - inputs=[model, text, melody, duration, topk, topp, temperature, cfg_coef], - outputs=[output]) - radio.change(toggle_audio_src, radio, [melody], queue=False, show_progress=False) - gr.Examples( - fn=predict_full, - examples=[ - [ - "An 80s driving pop song with heavy drums and synth pads in the background", - "./assets/bach.mp3", - "melody" - ], - [ - "A cheerful country song with acoustic guitars", - "./assets/bolero_ravel.mp3", - "melody" - ], - [ - "90s rock song with electric guitar and heavy drums", - None, - "medium" - ], - [ - "a light and cheerly EDM track, with syncopated drums, aery pads, and strong emotions", - "./assets/bach.mp3", - "melody" - ], - [ - "lofi slow bpm electro chill with organic samples", - None, - "medium", - ], - ], - inputs=[text, melody, model], - outputs=[output] - ) - gr.Markdown( - """ - ### More details - - The model will generate a short music extract based on the description you provided. - The model can generate up to 30 seconds of audio in one pass. It is now possible - to extend the generation by feeding back the end of the previous chunk of audio. - This can take a long time, and the model might lose consistency. The model might also - decide at arbitrary positions that the song ends. - - **WARNING:** Choosing long durations will take a long time to generate (2min might take ~10min). - An overlap of 12 seconds is kept with the previously generated chunk, and 18 "new" seconds - are generated each time. - - We present 4 model variations: - 1. Melody -- a music generation model capable of generating music condition - on text and melody inputs. **Note**, you can also use text only. - 2. Small -- a 300M transformer decoder conditioned on text only. - 3. Medium -- a 1.5B transformer decoder conditioned on text only. - 4. 
Large -- a 3.3B transformer decoder conditioned on text only (might OOM for the longest sequences.) - - When using `melody`, ou can optionaly provide a reference audio from - which a broad melody will be extracted. The model will then try to follow both - the description and melody provided. - - You can also use your own GPU or a Google Colab by following the instructions on our repo. - See [github.com/facebookresearch/audiocraft](https://github.com/facebookresearch/audiocraft) - for more details. - """ - ) - - interface.queue().launch(**launch_kwargs) - - -def ui_batched(launch_kwargs): - with gr.Blocks() as demo: - gr.Markdown( - """ - # MusicGen - - This is the demo for [MusicGen](https://github.com/facebookresearch/audiocraft), - a simple and controllable model for music generation - presented at: ["Simple and Controllable Music Generation"](https://huggingface.co/papers/2306.05284). -
    - Duplicate this Space for longer sequences, more control and no queue.

    - """ - ) - with gr.Row(): - with gr.Column(): - with gr.Row(): - text = gr.Text(label="Describe your music", lines=2, interactive=True) - with gr.Column(): - radio = gr.Radio(["file", "mic"], value="file", - label="Condition on a melody (optional) File or Mic") - melody = gr.Audio(source="upload", type="numpy", label="File", - interactive=True, elem_id="melody-input") - with gr.Row(): - submit = gr.Button("Generate") - with gr.Column(): - output = gr.Video(label="Generated Music") - submit.click(predict_batched, inputs=[text, melody], - outputs=[output], batch=True, max_batch_size=MAX_BATCH_SIZE) - radio.change(toggle_audio_src, radio, [melody], queue=False, show_progress=False) - gr.Examples( - fn=predict_batched, - examples=[ - [ - "An 80s driving pop song with heavy drums and synth pads in the background", - "./assets/bach.mp3", - ], - [ - "A cheerful country song with acoustic guitars", - "./assets/bolero_ravel.mp3", - ], - [ - "90s rock song with electric guitar and heavy drums", - None, - ], - [ - "a light and cheerly EDM track, with syncopated drums, aery pads, and strong emotions bpm: 130", - "./assets/bach.mp3", - ], - [ - "lofi slow bpm electro chill with organic samples", - None, - ], - ], - inputs=[text, melody], - outputs=[output] - ) - gr.Markdown(""" - ### More details - - The model will generate 12 seconds of audio based on the description you provided. - You can optionaly provide a reference audio from which a broad melody will be extracted. - The model will then try to follow both the description and melody provided. - All samples are generated with the `melody` model. - - You can also use your own GPU or a Google Colab by following the instructions on our repo. - - See [github.com/facebookresearch/audiocraft](https://github.com/facebookresearch/audiocraft) - for more details. 
- """) - - demo.queue(max_size=8 * 4).launch(**launch_kwargs) - - -if __name__ == "__main__": - parser = argparse.ArgumentParser() - parser.add_argument( - '--listen', - type=str, - default='0.0.0.0' if 'SPACE_ID' in os.environ else '127.0.0.1', - help='IP to listen on for connections to Gradio', - ) - parser.add_argument( - '--username', type=str, default='', help='Username for authentication' - ) - parser.add_argument( - '--password', type=str, default='', help='Password for authentication' - ) - parser.add_argument( - '--server_port', - type=int, - default=0, - help='Port to run the server listener on', - ) - parser.add_argument( - '--inbrowser', action='store_true', help='Open in browser' - ) - parser.add_argument( - '--share', action='store_true', help='Share the gradio UI' - ) - - args = parser.parse_args() - - launch_kwargs = {} - launch_kwargs['server_name'] = args.listen - - if args.username and args.password: - launch_kwargs['auth'] = (args.username, args.password) - if args.server_port: - launch_kwargs['server_port'] = args.server_port - if args.inbrowser: - launch_kwargs['inbrowser'] = args.inbrowser - if args.share: - launch_kwargs['share'] = args.share - - # Show the interface - if IS_BATCHED: - ui_batched(launch_kwargs) - else: - ui_full(launch_kwargs) diff --git a/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/detectron2/modeling/meta_arch/__init__.py b/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/detectron2/modeling/meta_arch/__init__.py deleted file mode 100644 index 6b0668157052ce7b796ef50bc7ee85361e7605b9..0000000000000000000000000000000000000000 --- a/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/detectron2/modeling/meta_arch/__init__.py +++ /dev/null @@ -1,16 +0,0 @@ -# -*- coding: utf-8 -*- -# Copyright (c) Facebook, Inc. and its affiliates. - -from .build import META_ARCH_REGISTRY, build_model # isort:skip - -from .panoptic_fpn import PanopticFPN - -# import all the meta_arch, so they will be registered -from .rcnn import GeneralizedRCNN, ProposalNetwork -from .dense_detector import DenseDetector -from .retinanet import RetinaNet -from .fcos import FCOS -from .semantic_seg import SEM_SEG_HEADS_REGISTRY, SemanticSegmentor, build_sem_seg_head - - -__all__ = list(globals().keys()) diff --git a/spaces/OpenGVLab/InternGPT/third-party/lama/docker/build-cuda111.sh b/spaces/OpenGVLab/InternGPT/third-party/lama/docker/build-cuda111.sh deleted file mode 100644 index b0824f5d536f548fde0b1c8e07cc95217d91310d..0000000000000000000000000000000000000000 --- a/spaces/OpenGVLab/InternGPT/third-party/lama/docker/build-cuda111.sh +++ /dev/null @@ -1,5 +0,0 @@ -#!/bin/bash - -BASEDIR="$(dirname $0)" - -docker build -t windj007/lama:cuda111 -f "$BASEDIR/Dockerfile-cuda111" "$BASEDIR" diff --git a/spaces/PAIR/Text2Video-Zero/annotator/uniformer/mmcv/cnn/alexnet.py b/spaces/PAIR/Text2Video-Zero/annotator/uniformer/mmcv/cnn/alexnet.py deleted file mode 100644 index 89e36b8c7851f895d9ae7f07149f0e707456aab0..0000000000000000000000000000000000000000 --- a/spaces/PAIR/Text2Video-Zero/annotator/uniformer/mmcv/cnn/alexnet.py +++ /dev/null @@ -1,61 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import logging - -import torch.nn as nn - - -class AlexNet(nn.Module): - """AlexNet backbone. - - Args: - num_classes (int): number of classes for classification. 
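-            If num_classes is not positive (the default -1), the classifier head is
-            omitted and forward() returns the convolutional feature maps only.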
- """ - - def __init__(self, num_classes=-1): - super(AlexNet, self).__init__() - self.num_classes = num_classes - self.features = nn.Sequential( - nn.Conv2d(3, 64, kernel_size=11, stride=4, padding=2), - nn.ReLU(inplace=True), - nn.MaxPool2d(kernel_size=3, stride=2), - nn.Conv2d(64, 192, kernel_size=5, padding=2), - nn.ReLU(inplace=True), - nn.MaxPool2d(kernel_size=3, stride=2), - nn.Conv2d(192, 384, kernel_size=3, padding=1), - nn.ReLU(inplace=True), - nn.Conv2d(384, 256, kernel_size=3, padding=1), - nn.ReLU(inplace=True), - nn.Conv2d(256, 256, kernel_size=3, padding=1), - nn.ReLU(inplace=True), - nn.MaxPool2d(kernel_size=3, stride=2), - ) - if self.num_classes > 0: - self.classifier = nn.Sequential( - nn.Dropout(), - nn.Linear(256 * 6 * 6, 4096), - nn.ReLU(inplace=True), - nn.Dropout(), - nn.Linear(4096, 4096), - nn.ReLU(inplace=True), - nn.Linear(4096, num_classes), - ) - - def init_weights(self, pretrained=None): - if isinstance(pretrained, str): - logger = logging.getLogger() - from ..runner import load_checkpoint - load_checkpoint(self, pretrained, strict=False, logger=logger) - elif pretrained is None: - # use default initializer - pass - else: - raise TypeError('pretrained must be a str or None') - - def forward(self, x): - - x = self.features(x) - if self.num_classes > 0: - x = x.view(x.size(0), 256 * 6 * 6) - x = self.classifier(x) - - return x diff --git a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/system/base/lalr.go b/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/system/base/lalr.go deleted file mode 100644 index 4058ed3fd852e3b430d6a564fc2b0f02db9dc714..0000000000000000000000000000000000000000 Binary files a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/system/base/lalr.go and /dev/null differ diff --git a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/lilypond/2.24.2/ccache/lily/graphviz.go b/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/lilypond/2.24.2/ccache/lily/graphviz.go deleted file mode 100644 index c5825d0ed0cacf10f5e588e250efc52412afa333..0000000000000000000000000000000000000000 Binary files a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/lilypond/2.24.2/ccache/lily/graphviz.go and /dev/null differ diff --git a/spaces/Pengyey/bingo-chuchu/postcss.config.js b/spaces/Pengyey/bingo-chuchu/postcss.config.js deleted file mode 100644 index 33ad091d26d8a9dc95ebdf616e217d985ec215b8..0000000000000000000000000000000000000000 --- a/spaces/Pengyey/bingo-chuchu/postcss.config.js +++ /dev/null @@ -1,6 +0,0 @@ -module.exports = { - plugins: { - tailwindcss: {}, - autoprefixer: {}, - }, -} diff --git a/spaces/Pepsr/Chatbot/getFlightData.py b/spaces/Pepsr/Chatbot/getFlightData.py deleted file mode 100644 index bde5cb2ab5f464e408e1f8e363c495c7efe6d02a..0000000000000000000000000000000000000000 --- a/spaces/Pepsr/Chatbot/getFlightData.py +++ /dev/null @@ -1,37 +0,0 @@ - -import requests -import os - -def getAccessToken(): - url = "https://api.lufthansa.com/v1/oauth/token" - data = { - "client_id": os.environ['client_id'], - "client_secret": os.environ['client_secret'], - "grant_type": "client_credentials" - } - - response = (requests.post(url, data=data)).json() - - return response["access_token"] - - - - -def getFlightInfo(flight_number, date): - - token = getAccessToken() - - - url = f"https://api.lufthansa.com/v1/operations/flightstatus/{flight_number}/{date}" - headers = { - "Authorization": f"Bearer {token}", - "Accept": "application/json" - } - - response = (requests.get(url, 
headers=headers)).json() - - print(response) - - - return response - diff --git a/spaces/Pie31415/control-animation/annotator/uniformer/exp/upernet_global_small/test_config_g.py b/spaces/Pie31415/control-animation/annotator/uniformer/exp/upernet_global_small/test_config_g.py deleted file mode 100644 index e43737a98a3b174a9f2fe059c06d511144686459..0000000000000000000000000000000000000000 --- a/spaces/Pie31415/control-animation/annotator/uniformer/exp/upernet_global_small/test_config_g.py +++ /dev/null @@ -1,38 +0,0 @@ -_base_ = [ - '../../configs/_base_/models/upernet_uniformer.py', - '../../configs/_base_/datasets/ade20k.py', - '../../configs/_base_/default_runtime.py', - '../../configs/_base_/schedules/schedule_160k.py' -] -model = dict( - backbone=dict( - type='UniFormer', - embed_dim=[64, 128, 320, 512], - layers=[3, 4, 8, 3], - head_dim=64, - drop_path_rate=0.25, - windows=False, - hybrid=False, - ), - decode_head=dict( - in_channels=[64, 128, 320, 512], - num_classes=150 - ), - auxiliary_head=dict( - in_channels=320, - num_classes=150 - )) - -# AdamW optimizer, no weight decay for position embedding & layer norm in backbone -optimizer = dict(_delete_=True, type='AdamW', lr=0.00006, betas=(0.9, 0.999), weight_decay=0.01, - paramwise_cfg=dict(custom_keys={'absolute_pos_embed': dict(decay_mult=0.), - 'relative_position_bias_table': dict(decay_mult=0.), - 'norm': dict(decay_mult=0.)})) - -lr_config = dict(_delete_=True, policy='poly', - warmup='linear', - warmup_iters=1500, - warmup_ratio=1e-6, - power=1.0, min_lr=0.0, by_epoch=False) - -data=dict(samples_per_gpu=2) \ No newline at end of file diff --git a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/pep517/check.py b/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/pep517/check.py deleted file mode 100644 index b79f6270b4060ce3c40fc8800ac248f91b21fe22..0000000000000000000000000000000000000000 --- a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/pep517/check.py +++ /dev/null @@ -1,207 +0,0 @@ -"""Check a project and backend by attempting to build using PEP 517 hooks. 
-""" -import argparse -import logging -import os -import shutil -import sys -import tarfile -import zipfile -from os.path import isfile -from os.path import join as pjoin -from subprocess import CalledProcessError -from tempfile import mkdtemp - -from ._compat import tomllib -from .colorlog import enable_colourful_output -from .envbuild import BuildEnvironment -from .wrappers import Pep517HookCaller - -log = logging.getLogger(__name__) - - -def check_build_sdist(hooks, build_sys_requires): - with BuildEnvironment() as env: - try: - env.pip_install(build_sys_requires) - log.info('Installed static build dependencies') - except CalledProcessError: - log.error('Failed to install static build dependencies') - return False - - try: - reqs = hooks.get_requires_for_build_sdist({}) - log.info('Got build requires: %s', reqs) - except Exception: - log.error('Failure in get_requires_for_build_sdist', exc_info=True) - return False - - try: - env.pip_install(reqs) - log.info('Installed dynamic build dependencies') - except CalledProcessError: - log.error('Failed to install dynamic build dependencies') - return False - - td = mkdtemp() - log.info('Trying to build sdist in %s', td) - try: - try: - filename = hooks.build_sdist(td, {}) - log.info('build_sdist returned %r', filename) - except Exception: - log.info('Failure in build_sdist', exc_info=True) - return False - - if not filename.endswith('.tar.gz'): - log.error( - "Filename %s doesn't have .tar.gz extension", filename) - return False - - path = pjoin(td, filename) - if isfile(path): - log.info("Output file %s exists", path) - else: - log.error("Output file %s does not exist", path) - return False - - if tarfile.is_tarfile(path): - log.info("Output file is a tar file") - else: - log.error("Output file is not a tar file") - return False - - finally: - shutil.rmtree(td) - - return True - - -def check_build_wheel(hooks, build_sys_requires): - with BuildEnvironment() as env: - try: - env.pip_install(build_sys_requires) - log.info('Installed static build dependencies') - except CalledProcessError: - log.error('Failed to install static build dependencies') - return False - - try: - reqs = hooks.get_requires_for_build_wheel({}) - log.info('Got build requires: %s', reqs) - except Exception: - log.error('Failure in get_requires_for_build_sdist', exc_info=True) - return False - - try: - env.pip_install(reqs) - log.info('Installed dynamic build dependencies') - except CalledProcessError: - log.error('Failed to install dynamic build dependencies') - return False - - td = mkdtemp() - log.info('Trying to build wheel in %s', td) - try: - try: - filename = hooks.build_wheel(td, {}) - log.info('build_wheel returned %r', filename) - except Exception: - log.info('Failure in build_wheel', exc_info=True) - return False - - if not filename.endswith('.whl'): - log.error("Filename %s doesn't have .whl extension", filename) - return False - - path = pjoin(td, filename) - if isfile(path): - log.info("Output file %s exists", path) - else: - log.error("Output file %s does not exist", path) - return False - - if zipfile.is_zipfile(path): - log.info("Output file is a zip file") - else: - log.error("Output file is not a zip file") - return False - - finally: - shutil.rmtree(td) - - return True - - -def check(source_dir): - pyproject = pjoin(source_dir, 'pyproject.toml') - if isfile(pyproject): - log.info('Found pyproject.toml') - else: - log.error('Missing pyproject.toml') - return False - - try: - with open(pyproject, 'rb') as f: - pyproject_data = tomllib.load(f) - # Ensure the 
mandatory data can be loaded - buildsys = pyproject_data['build-system'] - requires = buildsys['requires'] - backend = buildsys['build-backend'] - backend_path = buildsys.get('backend-path') - log.info('Loaded pyproject.toml') - except (tomllib.TOMLDecodeError, KeyError): - log.error("Invalid pyproject.toml", exc_info=True) - return False - - hooks = Pep517HookCaller(source_dir, backend, backend_path) - - sdist_ok = check_build_sdist(hooks, requires) - wheel_ok = check_build_wheel(hooks, requires) - - if not sdist_ok: - log.warning('Sdist checks failed; scroll up to see') - if not wheel_ok: - log.warning('Wheel checks failed') - - return sdist_ok - - -def main(argv=None): - log.warning('pep517.check is deprecated. ' - 'Consider switching to https://pypi.org/project/build/') - - ap = argparse.ArgumentParser() - ap.add_argument( - 'source_dir', - help="A directory containing pyproject.toml") - args = ap.parse_args(argv) - - enable_colourful_output() - - ok = check(args.source_dir) - - if ok: - print(ansi('Checks passed', 'green')) - else: - print(ansi('Checks failed', 'red')) - sys.exit(1) - - -ansi_codes = { - 'reset': '\x1b[0m', - 'bold': '\x1b[1m', - 'red': '\x1b[31m', - 'green': '\x1b[32m', -} - - -def ansi(s, attr): - if os.name != 'nt' and sys.stdout.isatty(): - return ansi_codes[attr] + str(s) + ansi_codes['reset'] - else: - return str(s) - - -if __name__ == '__main__': - main() diff --git a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pkg_resources/_vendor/__init__.py b/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pkg_resources/_vendor/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/Rayzggz/illi-Bert-VITS2/text/english.py b/spaces/Rayzggz/illi-Bert-VITS2/text/english.py deleted file mode 100644 index 0f9339c9ed771dab5136978eaaab194ec3fe2395..0000000000000000000000000000000000000000 --- a/spaces/Rayzggz/illi-Bert-VITS2/text/english.py +++ /dev/null @@ -1,214 +0,0 @@ -import pickle -import os -import re -from g2p_en import G2p - -from text import symbols - -current_file_path = os.path.dirname(__file__) -CMU_DICT_PATH = os.path.join(current_file_path, "cmudict.rep") -CACHE_PATH = os.path.join(current_file_path, "cmudict_cache.pickle") -_g2p = G2p() - -arpa = { - "AH0", - "S", - "AH1", - "EY2", - "AE2", - "EH0", - "OW2", - "UH0", - "NG", - "B", - "G", - "AY0", - "M", - "AA0", - "F", - "AO0", - "ER2", - "UH1", - "IY1", - "AH2", - "DH", - "IY0", - "EY1", - "IH0", - "K", - "N", - "W", - "IY2", - "T", - "AA1", - "ER1", - "EH2", - "OY0", - "UH2", - "UW1", - "Z", - "AW2", - "AW1", - "V", - "UW2", - "AA2", - "ER", - "AW0", - "UW0", - "R", - "OW1", - "EH1", - "ZH", - "AE0", - "IH2", - "IH", - "Y", - "JH", - "P", - "AY1", - "EY0", - "OY2", - "TH", - "HH", - "D", - "ER0", - "CH", - "AO1", - "AE1", - "AO2", - "OY1", - "AY2", - "IH1", - "OW0", - "L", - "SH", -} - - -def post_replace_ph(ph): - rep_map = { - ":": ",", - ";": ",", - ",": ",", - "。": ".", - "!": "!", - "?": "?", - "\n": ".", - "·": ",", - "、": ",", - "...": "…", - "v": "V", - } - if ph in rep_map.keys(): - ph = rep_map[ph] - if ph in symbols: - return ph - if ph not in symbols: - ph = "UNK" - return ph - - -def read_dict(): - g2p_dict = {} - start_line = 49 - with open(CMU_DICT_PATH) as f: - line = f.readline() - line_index = 1 - while line: - if line_index >= start_line: - line = line.strip() - word_split = line.split(" ") - word = word_split[0] - - syllable_split = word_split[1].split(" - ") - g2p_dict[word] = 
[] - for syllable in syllable_split: - phone_split = syllable.split(" ") - g2p_dict[word].append(phone_split) - - line_index = line_index + 1 - line = f.readline() - - return g2p_dict - - -def cache_dict(g2p_dict, file_path): - with open(file_path, "wb") as pickle_file: - pickle.dump(g2p_dict, pickle_file) - - -def get_dict(): - if os.path.exists(CACHE_PATH): - with open(CACHE_PATH, "rb") as pickle_file: - g2p_dict = pickle.load(pickle_file) - else: - g2p_dict = read_dict() - cache_dict(g2p_dict, CACHE_PATH) - - return g2p_dict - - -eng_dict = get_dict() - - -def refine_ph(phn): - tone = 0 - if re.search(r"\d$", phn): - tone = int(phn[-1]) + 1 - phn = phn[:-1] - return phn.lower(), tone - - -def refine_syllables(syllables): - tones = [] - phonemes = [] - for phn_list in syllables: - for i in range(len(phn_list)): - phn = phn_list[i] - phn, tone = refine_ph(phn) - phonemes.append(phn) - tones.append(tone) - return phonemes, tones - - -def text_normalize(text): - # todo: eng text normalize - return text - - -def g2p(text): - phones = [] - tones = [] - words = re.split(r"([,;.\-\?\!\s+])", text) - for w in words: - if w.upper() in eng_dict: - phns, tns = refine_syllables(eng_dict[w.upper()]) - phones += phns - tones += tns - else: - phone_list = list(filter(lambda p: p != " ", _g2p(w))) - for ph in phone_list: - if ph in arpa: - ph, tn = refine_ph(ph) - phones.append(ph) - tones.append(tn) - else: - phones.append(ph) - tones.append(0) - # todo: implement word2ph - word2ph = [1 for i in phones] - - phones = [post_replace_ph(i) for i in phones] - return phones, tones, word2ph - - -if __name__ == "__main__": - # print(get_dict()) - # print(eng_word_to_phoneme("hello")) - print(g2p("In this paper, we propose 1 DSPGAN, a GAN-based universal vocoder.")) - # all_phones = set() - # for k, syllables in eng_dict.items(): - # for group in syllables: - # for ph in group: - # all_phones.add(ph) - # print(all_phones) diff --git a/spaces/Rbrq/DeticChatGPT/train_net.py b/spaces/Rbrq/DeticChatGPT/train_net.py deleted file mode 100644 index 251257ceb9e9dde53b12f6adf64c28fd71b3d43d..0000000000000000000000000000000000000000 --- a/spaces/Rbrq/DeticChatGPT/train_net.py +++ /dev/null @@ -1,269 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. 
-import logging -import os -import sys -from collections import OrderedDict -import torch -from torch.nn.parallel import DistributedDataParallel -import time -import datetime - -from fvcore.common.timer import Timer -import detectron2.utils.comm as comm -from detectron2.checkpoint import DetectionCheckpointer, PeriodicCheckpointer -from detectron2.config import get_cfg -from detectron2.data import ( - MetadataCatalog, - build_detection_test_loader, -) -from detectron2.engine import default_argument_parser, default_setup, launch - -from detectron2.evaluation import ( - inference_on_dataset, - print_csv_format, - LVISEvaluator, - COCOEvaluator, -) -from detectron2.modeling import build_model -from detectron2.solver import build_lr_scheduler, build_optimizer -from detectron2.utils.events import ( - CommonMetricPrinter, - EventStorage, - JSONWriter, - TensorboardXWriter, -) -from detectron2.data.dataset_mapper import DatasetMapper -from detectron2.data.build import build_detection_train_loader -from detectron2.utils.logger import setup_logger -from torch.cuda.amp import GradScaler - -sys.path.insert(0, 'third_party/CenterNet2/projects/CenterNet2/') -from centernet.config import add_centernet_config - -sys.path.insert(0, 'third_party/Deformable-DETR') -from detic.config import add_detic_config -from detic.data.custom_build_augmentation import build_custom_augmentation -from detic.data.custom_dataset_dataloader import build_custom_train_loader -from detic.data.custom_dataset_mapper import CustomDatasetMapper, DetrDatasetMapper -from detic.custom_solver import build_custom_optimizer -from detic.evaluation.oideval import OIDEvaluator -from detic.evaluation.custom_coco_eval import CustomCOCOEvaluator -from detic.modeling.utils import reset_cls_test - - -logger = logging.getLogger("detectron2") - -def do_test(cfg, model): - results = OrderedDict() - for d, dataset_name in enumerate(cfg.DATASETS.TEST): - if cfg.MODEL.RESET_CLS_TESTS: - reset_cls_test( - model, - cfg.MODEL.TEST_CLASSIFIERS[d], - cfg.MODEL.TEST_NUM_CLASSES[d]) - mapper = None if cfg.INPUT.TEST_INPUT_TYPE == 'default' \ - else DatasetMapper( - cfg, False, augmentations=build_custom_augmentation(cfg, False)) - data_loader = build_detection_test_loader(cfg, dataset_name, mapper=mapper) - output_folder = os.path.join( - cfg.OUTPUT_DIR, "inference_{}".format(dataset_name)) - evaluator_type = MetadataCatalog.get(dataset_name).evaluator_type - - if evaluator_type == "lvis" or cfg.GEN_PSEDO_LABELS: - evaluator = LVISEvaluator(dataset_name, cfg, True, output_folder) - elif evaluator_type == 'coco': - if dataset_name == 'coco_generalized_zeroshot_val': - # Additionally plot mAP for 'seen classes' and 'unseen classes' - evaluator = CustomCOCOEvaluator(dataset_name, cfg, True, output_folder) - else: - evaluator = COCOEvaluator(dataset_name, cfg, True, output_folder) - elif evaluator_type == 'oid': - evaluator = OIDEvaluator(dataset_name, cfg, True, output_folder) - else: - assert 0, evaluator_type - - results[dataset_name] = inference_on_dataset( - model, data_loader, evaluator) - if comm.is_main_process(): - logger.info("Evaluation results for {} in csv format:".format( - dataset_name)) - print_csv_format(results[dataset_name]) - if len(results) == 1: - results = list(results.values())[0] - return results - -def do_train(cfg, model, resume=False): - model.train() - if cfg.SOLVER.USE_CUSTOM_SOLVER: - optimizer = build_custom_optimizer(cfg, model) - else: - assert cfg.SOLVER.OPTIMIZER == 'SGD' - assert cfg.SOLVER.CLIP_GRADIENTS.CLIP_TYPE != 
'full_model' - assert cfg.SOLVER.BACKBONE_MULTIPLIER == 1. - optimizer = build_optimizer(cfg, model) - scheduler = build_lr_scheduler(cfg, optimizer) - - checkpointer = DetectionCheckpointer( - model, cfg.OUTPUT_DIR, optimizer=optimizer, scheduler=scheduler - ) - - start_iter = checkpointer.resume_or_load( - cfg.MODEL.WEIGHTS, resume=resume).get("iteration", -1) + 1 - if not resume: - start_iter = 0 - max_iter = cfg.SOLVER.MAX_ITER if cfg.SOLVER.TRAIN_ITER < 0 else cfg.SOLVER.TRAIN_ITER - - periodic_checkpointer = PeriodicCheckpointer( - checkpointer, cfg.SOLVER.CHECKPOINT_PERIOD, max_iter=max_iter - ) - - writers = ( - [ - CommonMetricPrinter(max_iter), - JSONWriter(os.path.join(cfg.OUTPUT_DIR, "metrics.json")), - TensorboardXWriter(cfg.OUTPUT_DIR), - ] - if comm.is_main_process() - else [] - ) - - use_custom_mapper = cfg.WITH_IMAGE_LABELS - MapperClass = CustomDatasetMapper if use_custom_mapper else DatasetMapper - mapper = MapperClass(cfg, True) if cfg.INPUT.CUSTOM_AUG == '' else \ - DetrDatasetMapper(cfg, True) if cfg.INPUT.CUSTOM_AUG == 'DETR' else \ - MapperClass(cfg, True, augmentations=build_custom_augmentation(cfg, True)) - if cfg.DATALOADER.SAMPLER_TRAIN in ['TrainingSampler', 'RepeatFactorTrainingSampler']: - data_loader = build_detection_train_loader(cfg, mapper=mapper) - else: - data_loader = build_custom_train_loader(cfg, mapper=mapper) - - if cfg.FP16: - scaler = GradScaler() - - logger.info("Starting training from iteration {}".format(start_iter)) - with EventStorage(start_iter) as storage: - step_timer = Timer() - data_timer = Timer() - start_time = time.perf_counter() - for data, iteration in zip(data_loader, range(start_iter, max_iter)): - data_time = data_timer.seconds() - storage.put_scalars(data_time=data_time) - step_timer.reset() - iteration = iteration + 1 - storage.step() - loss_dict = model(data) - - losses = sum( - loss for k, loss in loss_dict.items()) - assert torch.isfinite(losses).all(), loss_dict - - loss_dict_reduced = {k: v.item() \ - for k, v in comm.reduce_dict(loss_dict).items()} - losses_reduced = sum(loss for loss in loss_dict_reduced.values()) - if comm.is_main_process(): - storage.put_scalars( - total_loss=losses_reduced, **loss_dict_reduced) - - optimizer.zero_grad() - if cfg.FP16: - scaler.scale(losses).backward() - scaler.step(optimizer) - scaler.update() - else: - losses.backward() - optimizer.step() - - storage.put_scalar( - "lr", optimizer.param_groups[0]["lr"], smoothing_hint=False) - - step_time = step_timer.seconds() - storage.put_scalars(time=step_time) - data_timer.reset() - scheduler.step() - - if (cfg.TEST.EVAL_PERIOD > 0 - and iteration % cfg.TEST.EVAL_PERIOD == 0 - and iteration != max_iter): - do_test(cfg, model) - comm.synchronize() - - if iteration - start_iter > 5 and \ - (iteration % 20 == 0 or iteration == max_iter): - for writer in writers: - writer.write() - periodic_checkpointer.step(iteration) - - total_time = time.perf_counter() - start_time - logger.info( - "Total training time: {}".format( - str(datetime.timedelta(seconds=int(total_time))))) - -def setup(args): - """ - Create configs and perform basic setups. 
- """ - cfg = get_cfg() - add_centernet_config(cfg) - add_detic_config(cfg) - cfg.merge_from_file(args.config_file) - cfg.merge_from_list(args.opts) - if '/auto' in cfg.OUTPUT_DIR: - file_name = os.path.basename(args.config_file)[:-5] - cfg.OUTPUT_DIR = cfg.OUTPUT_DIR.replace('/auto', '/{}'.format(file_name)) - logger.info('OUTPUT_DIR: {}'.format(cfg.OUTPUT_DIR)) - cfg.freeze() - default_setup(cfg, args) - setup_logger(output=cfg.OUTPUT_DIR, \ - distributed_rank=comm.get_rank(), name="detic") - return cfg - - -def main(args): - cfg = setup(args) - - model = build_model(cfg) - logger.info("Model:\n{}".format(model)) - if args.eval_only: - DetectionCheckpointer(model, save_dir=cfg.OUTPUT_DIR).resume_or_load( - cfg.MODEL.WEIGHTS, resume=args.resume - ) - - return do_test(cfg, model) - - distributed = comm.get_world_size() > 1 - if distributed: - model = DistributedDataParallel( - model, device_ids=[comm.get_local_rank()], broadcast_buffers=False, - find_unused_parameters=cfg.FIND_UNUSED_PARAM - ) - - do_train(cfg, model, resume=args.resume) - return do_test(cfg, model) - - -if __name__ == "__main__": - args = default_argument_parser() - args = args.parse_args() - if args.num_machines == 1: - args.dist_url = 'tcp://127.0.0.1:{}'.format( - torch.randint(11111, 60000, (1,))[0].item()) - else: - if args.dist_url == 'host': - args.dist_url = 'tcp://{}:12345'.format( - os.environ['SLURM_JOB_NODELIST']) - elif not args.dist_url.startswith('tcp'): - tmp = os.popen( - 'echo $(scontrol show job {} | grep BatchHost)'.format( - args.dist_url) - ).read() - tmp = tmp[tmp.find('=') + 1: -1] - args.dist_url = 'tcp://{}:12345'.format(tmp) - print("Command Line Args:", args) - launch( - main, - args.num_gpus, - num_machines=args.num_machines, - machine_rank=args.machine_rank, - dist_url=args.dist_url, - args=(args,), - ) diff --git a/spaces/Rimi98/InsectRecognizer/README.md b/spaces/Rimi98/InsectRecognizer/README.md deleted file mode 100644 index be3d3985e69db14c412c30a348a03f518d4ab2b9..0000000000000000000000000000000000000000 --- a/spaces/Rimi98/InsectRecognizer/README.md +++ /dev/null @@ -1,17 +0,0 @@ ---- -title: Insects Recognizer -emoji: 🦀 -colorFrom: gray -colorTo: gray -sdk: gradio -sdk_version: 3.16.1 -app_file: app.py -pinned: true -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference - -Deployed App url on HuggingFace : https://huggingface.co/spaces/Rimi98/insects-recognizer - -Live app for 72 hours: https://dbad5456-2e1d-4258.gradio.live \ No newline at end of file diff --git a/spaces/RobLi/ControlNet-v1-1/app_canny.py b/spaces/RobLi/ControlNet-v1-1/app_canny.py deleted file mode 100644 index a94b49d2124b9983efc057f1103484bd6f6d374c..0000000000000000000000000000000000000000 --- a/spaces/RobLi/ControlNet-v1-1/app_canny.py +++ /dev/null @@ -1,106 +0,0 @@ -#!/usr/bin/env python - -import gradio as gr - -from utils import randomize_seed_fn - - -def create_demo(process, max_images=12, default_num_images=3): - with gr.Blocks() as demo: - with gr.Row(): - with gr.Column(): - image = gr.Image() - prompt = gr.Textbox(label='Prompt') - run_button = gr.Button('Run') - with gr.Accordion('Advanced options', open=False): - num_samples = gr.Slider(label='Number of images', - minimum=1, - maximum=max_images, - value=default_num_images, - step=1) - image_resolution = gr.Slider(label='Image resolution', - minimum=256, - maximum=512, - value=512, - step=256) - canny_low_threshold = gr.Slider( - label='Canny low threshold', - minimum=1, - 
maximum=255, - value=100, - step=1) - canny_high_threshold = gr.Slider( - label='Canny high threshold', - minimum=1, - maximum=255, - value=200, - step=1) - num_steps = gr.Slider(label='Number of steps', - minimum=1, - maximum=100, - value=20, - step=1) - guidance_scale = gr.Slider(label='Guidance scale', - minimum=0.1, - maximum=30.0, - value=9.0, - step=0.1) - seed = gr.Slider(label='Seed', - minimum=0, - maximum=1000000, - step=1, - value=0, - randomize=True) - randomize_seed = gr.Checkbox(label='Randomize seed', - value=True) - a_prompt = gr.Textbox( - label='Additional prompt', - value='best quality, extremely detailed') - n_prompt = gr.Textbox( - label='Negative prompt', - value= - 'longbody, lowres, bad anatomy, bad hands, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality' - ) - with gr.Column(): - result = gr.Gallery(label='Output', show_label=False).style( - columns=2, object_fit='scale-down') - inputs = [ - image, - prompt, - a_prompt, - n_prompt, - num_samples, - image_resolution, - num_steps, - guidance_scale, - seed, - canny_low_threshold, - canny_high_threshold, - ] - prompt.submit( - fn=randomize_seed_fn, - inputs=[seed, randomize_seed], - outputs=seed, - ).then( - fn=process, - inputs=inputs, - outputs=result, - ) - run_button.click( - fn=randomize_seed_fn, - inputs=[seed, randomize_seed], - outputs=seed, - ).then( - fn=process, - inputs=inputs, - outputs=result, - api_name='canny', - ) - return demo - - -if __name__ == '__main__': - from model import Model - model = Model(task_name='Canny') - demo = create_demo(model.process_canny) - demo.queue().launch() diff --git a/spaces/Robert001/UniControl-Demo/annotator/openpose/__init__.py b/spaces/Robert001/UniControl-Demo/annotator/openpose/__init__.py deleted file mode 100644 index 2af34fe010fdf1f9ff84cf17aeb4b36fa6f0ed7c..0000000000000000000000000000000000000000 --- a/spaces/Robert001/UniControl-Demo/annotator/openpose/__init__.py +++ /dev/null @@ -1,59 +0,0 @@ -''' - * Copyright (c) 2023 Salesforce, Inc. - * All rights reserved. - * SPDX-License-Identifier: Apache License 2.0 - * For full license text, see LICENSE.txt file in the repo root or http://www.apache.org/licenses/ - * By Can Qin - * Modified from ControlNet repo: https://github.com/lllyasviel/ControlNet - * Copyright (c) 2023 Lvmin Zhang and Maneesh Agrawala -''' - -# Openpose -# Original from CMU https://github.com/CMU-Perceptual-Computing-Lab/openpose -# 2nd Edited by https://github.com/Hzzone/pytorch-openpose -# 3rd Edited by ControlNet - -import os -os.environ["KMP_DUPLICATE_LIB_OK"]="TRUE" - -import torch -import numpy as np -from . 
import util -from .body import Body -from .hand import Hand -from annotator.util import annotator_ckpts_path - - -body_model_path = "https://huggingface.co/lllyasviel/ControlNet/resolve/main/annotator/ckpts/body_pose_model.pth" -hand_model_path = "https://huggingface.co/lllyasviel/ControlNet/resolve/main/annotator/ckpts/hand_pose_model.pth" - - -class OpenposeDetector: - def __init__(self): - body_modelpath = os.path.join(annotator_ckpts_path, "body_pose_model.pth") - # hand_modelpath = os.path.join(annotator_ckpts_path, "hand_pose_model.pth") - - if not os.path.exists(body_modelpath): - from basicsr.utils.download_util import load_file_from_url - load_file_from_url(body_model_path, model_dir=annotator_ckpts_path) - # load_file_from_url(hand_model_path, model_dir=annotator_ckpts_path) - - self.body_estimation = Body(body_modelpath) - # self.hand_estimation = Hand(hand_modelpath) - - def __call__(self, oriImg, hand=False): - oriImg = oriImg[:, :, ::-1].copy() - with torch.no_grad(): - candidate, subset = self.body_estimation(oriImg) - canvas = np.zeros_like(oriImg) - canvas = util.draw_bodypose(canvas, candidate, subset) - # if hand: - # hands_list = util.handDetect(candidate, subset, oriImg) - # all_hand_peaks = [] - # for x, y, w, is_left in hands_list: - # peaks = self.hand_estimation(oriImg[y:y+w, x:x+w, :]) - # peaks[:, 0] = np.where(peaks[:, 0] == 0, peaks[:, 0], peaks[:, 0] + x) - # peaks[:, 1] = np.where(peaks[:, 1] == 0, peaks[:, 1], peaks[:, 1] + y) - # all_hand_peaks.append(peaks) - # canvas = util.draw_handpose(canvas, all_hand_peaks) - return canvas, dict(candidate=candidate.tolist(), subset=subset.tolist()) diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet_null/datasets/pipelines/auto_augment.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet_null/datasets/pipelines/auto_augment.py deleted file mode 100644 index e19adaec18a96cac4dbe1d8c2c9193e9901be1fb..0000000000000000000000000000000000000000 --- a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet_null/datasets/pipelines/auto_augment.py +++ /dev/null @@ -1,890 +0,0 @@ -import copy - -import cv2 -import mmcv -import numpy as np - -from ..builder import PIPELINES -from .compose import Compose - -_MAX_LEVEL = 10 - - -def level_to_value(level, max_value): - """Map from level to values based on max_value.""" - return (level / _MAX_LEVEL) * max_value - - -def enhance_level_to_value(level, a=1.8, b=0.1): - """Map from level to values.""" - return (level / _MAX_LEVEL) * a + b - - -def random_negative(value, random_negative_prob): - """Randomly negate value based on random_negative_prob.""" - return -value if np.random.rand() < random_negative_prob else value - - -def bbox2fields(): - """The key correspondence from bboxes to labels, masks and - segmentations.""" - bbox2label = { - 'gt_bboxes': 'gt_labels', - 'gt_bboxes_ignore': 'gt_labels_ignore' - } - bbox2mask = { - 'gt_bboxes': 'gt_masks', - 'gt_bboxes_ignore': 'gt_masks_ignore' - } - bbox2seg = { - 'gt_bboxes': 'gt_semantic_seg', - } - return bbox2label, bbox2mask, bbox2seg - - -@PIPELINES.register_module() -class AutoAugment(object): - """Auto augmentation. - - This data augmentation is proposed in `Learning Data Augmentation - Strategies for Object Detection `_. - - TODO: Implement 'Shear', 'Sharpness' and 'Rotate' transforms - - Args: - policies (list[list[dict]]): The policies of auto augmentation. Each - policy in ``policies`` is a specific augmentation policy, and is - composed by several augmentations (dict). 
When AutoAugment is - called, a random policy in ``policies`` will be selected to - augment images. - - Examples: - >>> replace = (104, 116, 124) - >>> policies = [ - >>> [ - >>> dict(type='Sharpness', prob=0.0, level=8), - >>> dict( - >>> type='Shear', - >>> prob=0.4, - >>> level=0, - >>> replace=replace, - >>> axis='x') - >>> ], - >>> [ - >>> dict( - >>> type='Rotate', - >>> prob=0.6, - >>> level=10, - >>> replace=replace), - >>> dict(type='Color', prob=1.0, level=6) - >>> ] - >>> ] - >>> augmentation = AutoAugment(policies) - >>> img = np.ones(100, 100, 3) - >>> gt_bboxes = np.ones(10, 4) - >>> results = dict(img=img, gt_bboxes=gt_bboxes) - >>> results = augmentation(results) - """ - - def __init__(self, policies): - assert isinstance(policies, list) and len(policies) > 0, \ - 'Policies must be a non-empty list.' - for policy in policies: - assert isinstance(policy, list) and len(policy) > 0, \ - 'Each policy in policies must be a non-empty list.' - for augment in policy: - assert isinstance(augment, dict) and 'type' in augment, \ - 'Each specific augmentation must be a dict with key' \ - ' "type".' - - self.policies = copy.deepcopy(policies) - self.transforms = [Compose(policy) for policy in self.policies] - - def __call__(self, results): - transform = np.random.choice(self.transforms) - return transform(results) - - def __repr__(self): - return f'{self.__class__.__name__}(policies={self.policies})' - - -@PIPELINES.register_module() -class Shear(object): - """Apply Shear Transformation to image (and its corresponding bbox, mask, - segmentation). - - Args: - level (int | float): The level should be in range [0,_MAX_LEVEL]. - img_fill_val (int | float | tuple): The filled values for image border. - If float, the same fill value will be used for all the three - channels of image. If tuple, the should be 3 elements. - seg_ignore_label (int): The fill value used for segmentation map. - Note this value must equals ``ignore_label`` in ``semantic_head`` - of the corresponding config. Default 255. - prob (float): The probability for performing Shear and should be in - range [0, 1]. - direction (str): The direction for shear, either "horizontal" - or "vertical". - max_shear_magnitude (float): The maximum magnitude for Shear - transformation. - random_negative_prob (float): The probability that turns the - offset negative. Should be in range [0,1] - interpolation (str): Same as in :func:`mmcv.imshear`. - """ - - def __init__(self, - level, - img_fill_val=128, - seg_ignore_label=255, - prob=0.5, - direction='horizontal', - max_shear_magnitude=0.3, - random_negative_prob=0.5, - interpolation='bilinear'): - assert isinstance(level, (int, float)), 'The level must be type ' \ - f'int or float, got {type(level)}.' - assert 0 <= level <= _MAX_LEVEL, 'The level should be in range ' \ - f'[0,{_MAX_LEVEL}], got {level}.' - if isinstance(img_fill_val, (float, int)): - img_fill_val = tuple([float(img_fill_val)] * 3) - elif isinstance(img_fill_val, tuple): - assert len(img_fill_val) == 3, 'img_fill_val as tuple must ' \ - f'have 3 elements. got {len(img_fill_val)}.' - img_fill_val = tuple([float(val) for val in img_fill_val]) - else: - raise ValueError( - 'img_fill_val must be float or tuple with 3 elements.') - assert np.all([0 <= val <= 255 for val in img_fill_val]), 'all ' \ - 'elements of img_fill_val should between range [0,255].' \ - f'got {img_fill_val}.' - assert 0 <= prob <= 1.0, 'The probability of shear should be in ' \ - f'range [0,1]. got {prob}.' 
- assert direction in ('horizontal', 'vertical'), 'direction must ' \ - f'in be either "horizontal" or "vertical". got {direction}.' - assert isinstance(max_shear_magnitude, float), 'max_shear_magnitude ' \ - f'should be type float. got {type(max_shear_magnitude)}.' - assert 0. <= max_shear_magnitude <= 1., 'Defaultly ' \ - 'max_shear_magnitude should be in range [0,1]. ' \ - f'got {max_shear_magnitude}.' - self.level = level - self.magnitude = level_to_value(level, max_shear_magnitude) - self.img_fill_val = img_fill_val - self.seg_ignore_label = seg_ignore_label - self.prob = prob - self.direction = direction - self.max_shear_magnitude = max_shear_magnitude - self.random_negative_prob = random_negative_prob - self.interpolation = interpolation - - def _shear_img(self, - results, - magnitude, - direction='horizontal', - interpolation='bilinear'): - """Shear the image. - - Args: - results (dict): Result dict from loading pipeline. - magnitude (int | float): The magnitude used for shear. - direction (str): The direction for shear, either "horizontal" - or "vertical". - interpolation (str): Same as in :func:`mmcv.imshear`. - """ - for key in results.get('img_fields', ['img']): - img = results[key] - img_sheared = mmcv.imshear( - img, - magnitude, - direction, - border_value=self.img_fill_val, - interpolation=interpolation) - results[key] = img_sheared.astype(img.dtype) - - def _shear_bboxes(self, results, magnitude): - """Shear the bboxes.""" - h, w, c = results['img_shape'] - if self.direction == 'horizontal': - shear_matrix = np.stack([[1, magnitude], - [0, 1]]).astype(np.float32) # [2, 2] - else: - shear_matrix = np.stack([[1, 0], [magnitude, - 1]]).astype(np.float32) - for key in results.get('bbox_fields', []): - min_x, min_y, max_x, max_y = np.split( - results[key], results[key].shape[-1], axis=-1) - coordinates = np.stack([[min_x, min_y], [max_x, min_y], - [min_x, max_y], - [max_x, max_y]]) # [4, 2, nb_box, 1] - coordinates = coordinates[..., 0].transpose( - (2, 1, 0)).astype(np.float32) # [nb_box, 2, 4] - new_coords = np.matmul(shear_matrix[None, :, :], - coordinates) # [nb_box, 2, 4] - min_x = np.min(new_coords[:, 0, :], axis=-1) - min_y = np.min(new_coords[:, 1, :], axis=-1) - max_x = np.max(new_coords[:, 0, :], axis=-1) - max_y = np.max(new_coords[:, 1, :], axis=-1) - min_x = np.clip(min_x, a_min=0, a_max=w) - min_y = np.clip(min_y, a_min=0, a_max=h) - max_x = np.clip(max_x, a_min=min_x, a_max=w) - max_y = np.clip(max_y, a_min=min_y, a_max=h) - results[key] = np.stack([min_x, min_y, max_x, max_y], - axis=-1).astype(results[key].dtype) - - def _shear_masks(self, - results, - magnitude, - direction='horizontal', - fill_val=0, - interpolation='bilinear'): - """Shear the masks.""" - h, w, c = results['img_shape'] - for key in results.get('mask_fields', []): - masks = results[key] - results[key] = masks.shear((h, w), - magnitude, - direction, - border_value=fill_val, - interpolation=interpolation) - - def _shear_seg(self, - results, - magnitude, - direction='horizontal', - fill_val=255, - interpolation='bilinear'): - """Shear the segmentation maps.""" - for key in results.get('seg_fields', []): - seg = results[key] - results[key] = mmcv.imshear( - seg, - magnitude, - direction, - border_value=fill_val, - interpolation=interpolation).astype(seg.dtype) - - def _filter_invalid(self, results, min_bbox_size=0): - """Filter bboxes and corresponding masks too small after shear - augmentation.""" - bbox2label, bbox2mask, _ = bbox2fields() - for key in results.get('bbox_fields', []): - bbox_w = 
results[key][:, 2] - results[key][:, 0] - bbox_h = results[key][:, 3] - results[key][:, 1] - valid_inds = (bbox_w > min_bbox_size) & (bbox_h > min_bbox_size) - valid_inds = np.nonzero(valid_inds)[0] - results[key] = results[key][valid_inds] - # label fields. e.g. gt_labels and gt_labels_ignore - label_key = bbox2label.get(key) - if label_key in results: - results[label_key] = results[label_key][valid_inds] - # mask fields, e.g. gt_masks and gt_masks_ignore - mask_key = bbox2mask.get(key) - if mask_key in results: - results[mask_key] = results[mask_key][valid_inds] - - def __call__(self, results): - """Call function to shear images, bounding boxes, masks and semantic - segmentation maps. - - Args: - results (dict): Result dict from loading pipeline. - - Returns: - dict: Sheared results. - """ - if np.random.rand() > self.prob: - return results - magnitude = random_negative(self.magnitude, self.random_negative_prob) - self._shear_img(results, magnitude, self.direction, self.interpolation) - self._shear_bboxes(results, magnitude) - # fill_val set to 0 for background of mask. - self._shear_masks( - results, - magnitude, - self.direction, - fill_val=0, - interpolation=self.interpolation) - self._shear_seg( - results, - magnitude, - self.direction, - fill_val=self.seg_ignore_label, - interpolation=self.interpolation) - self._filter_invalid(results) - return results - - def __repr__(self): - repr_str = self.__class__.__name__ - repr_str += f'(level={self.level}, ' - repr_str += f'img_fill_val={self.img_fill_val}, ' - repr_str += f'seg_ignore_label={self.seg_ignore_label}, ' - repr_str += f'prob={self.prob}, ' - repr_str += f'direction={self.direction}, ' - repr_str += f'max_shear_magnitude={self.max_shear_magnitude}, ' - repr_str += f'random_negative_prob={self.random_negative_prob}, ' - repr_str += f'interpolation={self.interpolation})' - return repr_str - - -@PIPELINES.register_module() -class Rotate(object): - """Apply Rotate Transformation to image (and its corresponding bbox, mask, - segmentation). - - Args: - level (int | float): The level should be in range (0,_MAX_LEVEL]. - scale (int | float): Isotropic scale factor. Same in - ``mmcv.imrotate``. - center (int | float | tuple[float]): Center point (w, h) of the - rotation in the source image. If None, the center of the - image will be used. Same in ``mmcv.imrotate``. - img_fill_val (int | float | tuple): The fill value for image border. - If float, the same value will be used for all the three - channels of image. If tuple, the should be 3 elements (e.g. - equals the number of channels for image). - seg_ignore_label (int): The fill value used for segmentation map. - Note this value must equals ``ignore_label`` in ``semantic_head`` - of the corresponding config. Default 255. - prob (float): The probability for perform transformation and - should be in range 0 to 1. - max_rotate_angle (int | float): The maximum angles for rotate - transformation. - random_negative_prob (float): The probability that turns the - offset negative. - """ - - def __init__(self, - level, - scale=1, - center=None, - img_fill_val=128, - seg_ignore_label=255, - prob=0.5, - max_rotate_angle=30, - random_negative_prob=0.5): - assert isinstance(level, (int, float)), \ - f'The level must be type int or float. got {type(level)}.' - assert 0 <= level <= _MAX_LEVEL, \ - f'The level should be in range (0,{_MAX_LEVEL}]. got {level}.' - assert isinstance(scale, (int, float)), \ - f'The scale must be type int or float. got type {type(scale)}.' 
- if isinstance(center, (int, float)): - center = (center, center) - elif isinstance(center, tuple): - assert len(center) == 2, 'center with type tuple must have '\ - f'2 elements. got {len(center)} elements.' - else: - assert center is None, 'center must be None or type int, '\ - f'float or tuple, got type {type(center)}.' - if isinstance(img_fill_val, (float, int)): - img_fill_val = tuple([float(img_fill_val)] * 3) - elif isinstance(img_fill_val, tuple): - assert len(img_fill_val) == 3, 'img_fill_val as tuple must '\ - f'have 3 elements. got {len(img_fill_val)}.' - img_fill_val = tuple([float(val) for val in img_fill_val]) - else: - raise ValueError( - 'img_fill_val must be float or tuple with 3 elements.') - assert np.all([0 <= val <= 255 for val in img_fill_val]), \ - 'all elements of img_fill_val should between range [0,255]. '\ - f'got {img_fill_val}.' - assert 0 <= prob <= 1.0, 'The probability should be in range [0,1]. '\ - 'got {prob}.' - assert isinstance(max_rotate_angle, (int, float)), 'max_rotate_angle '\ - f'should be type int or float. got type {type(max_rotate_angle)}.' - self.level = level - self.scale = scale - # Rotation angle in degrees. Positive values mean - # clockwise rotation. - self.angle = level_to_value(level, max_rotate_angle) - self.center = center - self.img_fill_val = img_fill_val - self.seg_ignore_label = seg_ignore_label - self.prob = prob - self.max_rotate_angle = max_rotate_angle - self.random_negative_prob = random_negative_prob - - def _rotate_img(self, results, angle, center=None, scale=1.0): - """Rotate the image. - - Args: - results (dict): Result dict from loading pipeline. - angle (float): Rotation angle in degrees, positive values - mean clockwise rotation. Same in ``mmcv.imrotate``. - center (tuple[float], optional): Center point (w, h) of the - rotation. Same in ``mmcv.imrotate``. - scale (int | float): Isotropic scale factor. Same in - ``mmcv.imrotate``. 
- """ - for key in results.get('img_fields', ['img']): - img = results[key].copy() - img_rotated = mmcv.imrotate( - img, angle, center, scale, border_value=self.img_fill_val) - results[key] = img_rotated.astype(img.dtype) - - def _rotate_bboxes(self, results, rotate_matrix): - """Rotate the bboxes.""" - h, w, c = results['img_shape'] - for key in results.get('bbox_fields', []): - min_x, min_y, max_x, max_y = np.split( - results[key], results[key].shape[-1], axis=-1) - coordinates = np.stack([[min_x, min_y], [max_x, min_y], - [min_x, max_y], - [max_x, max_y]]) # [4, 2, nb_bbox, 1] - # pad 1 to convert from format [x, y] to homogeneous - # coordinates format [x, y, 1] - coordinates = np.concatenate( - (coordinates, - np.ones((4, 1, coordinates.shape[2], 1), coordinates.dtype)), - axis=1) # [4, 3, nb_bbox, 1] - coordinates = coordinates.transpose( - (2, 0, 1, 3)) # [nb_bbox, 4, 3, 1] - rotated_coords = np.matmul(rotate_matrix, - coordinates) # [nb_bbox, 4, 2, 1] - rotated_coords = rotated_coords[..., 0] # [nb_bbox, 4, 2] - min_x, min_y = np.min( - rotated_coords[:, :, 0], axis=1), np.min( - rotated_coords[:, :, 1], axis=1) - max_x, max_y = np.max( - rotated_coords[:, :, 0], axis=1), np.max( - rotated_coords[:, :, 1], axis=1) - min_x, min_y = np.clip( - min_x, a_min=0, a_max=w), np.clip( - min_y, a_min=0, a_max=h) - max_x, max_y = np.clip( - max_x, a_min=min_x, a_max=w), np.clip( - max_y, a_min=min_y, a_max=h) - results[key] = np.stack([min_x, min_y, max_x, max_y], - axis=-1).astype(results[key].dtype) - - def _rotate_masks(self, - results, - angle, - center=None, - scale=1.0, - fill_val=0): - """Rotate the masks.""" - h, w, c = results['img_shape'] - for key in results.get('mask_fields', []): - masks = results[key] - results[key] = masks.rotate((h, w), angle, center, scale, fill_val) - - def _rotate_seg(self, - results, - angle, - center=None, - scale=1.0, - fill_val=255): - """Rotate the segmentation map.""" - for key in results.get('seg_fields', []): - seg = results[key].copy() - results[key] = mmcv.imrotate( - seg, angle, center, scale, - border_value=fill_val).astype(seg.dtype) - - def _filter_invalid(self, results, min_bbox_size=0): - """Filter bboxes and corresponding masks too small after rotate - augmentation.""" - bbox2label, bbox2mask, _ = bbox2fields() - for key in results.get('bbox_fields', []): - bbox_w = results[key][:, 2] - results[key][:, 0] - bbox_h = results[key][:, 3] - results[key][:, 1] - valid_inds = (bbox_w > min_bbox_size) & (bbox_h > min_bbox_size) - valid_inds = np.nonzero(valid_inds)[0] - results[key] = results[key][valid_inds] - # label fields. e.g. gt_labels and gt_labels_ignore - label_key = bbox2label.get(key) - if label_key in results: - results[label_key] = results[label_key][valid_inds] - # mask fields, e.g. gt_masks and gt_masks_ignore - mask_key = bbox2mask.get(key) - if mask_key in results: - results[mask_key] = results[mask_key][valid_inds] - - def __call__(self, results): - """Call function to rotate images, bounding boxes, masks and semantic - segmentation maps. - - Args: - results (dict): Result dict from loading pipeline. - - Returns: - dict: Rotated results. 
- """ - if np.random.rand() > self.prob: - return results - h, w = results['img'].shape[:2] - center = self.center - if center is None: - center = ((w - 1) * 0.5, (h - 1) * 0.5) - angle = random_negative(self.angle, self.random_negative_prob) - self._rotate_img(results, angle, center, self.scale) - rotate_matrix = cv2.getRotationMatrix2D(center, -angle, self.scale) - self._rotate_bboxes(results, rotate_matrix) - self._rotate_masks(results, angle, center, self.scale, fill_val=0) - self._rotate_seg( - results, angle, center, self.scale, fill_val=self.seg_ignore_label) - self._filter_invalid(results) - return results - - def __repr__(self): - repr_str = self.__class__.__name__ - repr_str += f'(level={self.level}, ' - repr_str += f'scale={self.scale}, ' - repr_str += f'center={self.center}, ' - repr_str += f'img_fill_val={self.img_fill_val}, ' - repr_str += f'seg_ignore_label={self.seg_ignore_label}, ' - repr_str += f'prob={self.prob}, ' - repr_str += f'max_rotate_angle={self.max_rotate_angle}, ' - repr_str += f'random_negative_prob={self.random_negative_prob})' - return repr_str - - -@PIPELINES.register_module() -class Translate(object): - """Translate the images, bboxes, masks and segmentation maps horizontally - or vertically. - - Args: - level (int | float): The level for Translate and should be in - range [0,_MAX_LEVEL]. - prob (float): The probability for performing translation and - should be in range [0, 1]. - img_fill_val (int | float | tuple): The filled value for image - border. If float, the same fill value will be used for all - the three channels of image. If tuple, the should be 3 - elements (e.g. equals the number of channels for image). - seg_ignore_label (int): The fill value used for segmentation map. - Note this value must equals ``ignore_label`` in ``semantic_head`` - of the corresponding config. Default 255. - direction (str): The translate direction, either "horizontal" - or "vertical". - max_translate_offset (int | float): The maximum pixel's offset for - Translate. - random_negative_prob (float): The probability that turns the - offset negative. - min_size (int | float): The minimum pixel for filtering - invalid bboxes after the translation. - """ - - def __init__(self, - level, - prob=0.5, - img_fill_val=128, - seg_ignore_label=255, - direction='horizontal', - max_translate_offset=250., - random_negative_prob=0.5, - min_size=0): - assert isinstance(level, (int, float)), \ - 'The level must be type int or float.' - assert 0 <= level <= _MAX_LEVEL, \ - 'The level used for calculating Translate\'s offset should be ' \ - 'in range [0,_MAX_LEVEL]' - assert 0 <= prob <= 1.0, \ - 'The probability of translation should be in range [0, 1].' - if isinstance(img_fill_val, (float, int)): - img_fill_val = tuple([float(img_fill_val)] * 3) - elif isinstance(img_fill_val, tuple): - assert len(img_fill_val) == 3, \ - 'img_fill_val as tuple must have 3 elements.' - img_fill_val = tuple([float(val) for val in img_fill_val]) - else: - raise ValueError('img_fill_val must be type float or tuple.') - assert np.all([0 <= val <= 255 for val in img_fill_val]), \ - 'all elements of img_fill_val should between range [0,255].' - assert direction in ('horizontal', 'vertical'), \ - 'direction should be "horizontal" or "vertical".' - assert isinstance(max_translate_offset, (int, float)), \ - 'The max_translate_offset must be type int or float.' 
- # the offset used for translation - self.offset = int(level_to_value(level, max_translate_offset)) - self.level = level - self.prob = prob - self.img_fill_val = img_fill_val - self.seg_ignore_label = seg_ignore_label - self.direction = direction - self.max_translate_offset = max_translate_offset - self.random_negative_prob = random_negative_prob - self.min_size = min_size - - def _translate_img(self, results, offset, direction='horizontal'): - """Translate the image. - - Args: - results (dict): Result dict from loading pipeline. - offset (int | float): The offset for translate. - direction (str): The translate direction, either "horizontal" - or "vertical". - """ - for key in results.get('img_fields', ['img']): - img = results[key].copy() - results[key] = mmcv.imtranslate( - img, offset, direction, self.img_fill_val).astype(img.dtype) - - def _translate_bboxes(self, results, offset): - """Shift bboxes horizontally or vertically, according to offset.""" - h, w, c = results['img_shape'] - for key in results.get('bbox_fields', []): - min_x, min_y, max_x, max_y = np.split( - results[key], results[key].shape[-1], axis=-1) - if self.direction == 'horizontal': - min_x = np.maximum(0, min_x + offset) - max_x = np.minimum(w, max_x + offset) - elif self.direction == 'vertical': - min_y = np.maximum(0, min_y + offset) - max_y = np.minimum(h, max_y + offset) - - # the boxes translated outside of image will be filtered along with - # the corresponding masks, by invoking ``_filter_invalid``. - results[key] = np.concatenate([min_x, min_y, max_x, max_y], - axis=-1) - - def _translate_masks(self, - results, - offset, - direction='horizontal', - fill_val=0): - """Translate masks horizontally or vertically.""" - h, w, c = results['img_shape'] - for key in results.get('mask_fields', []): - masks = results[key] - results[key] = masks.translate((h, w), offset, direction, fill_val) - - def _translate_seg(self, - results, - offset, - direction='horizontal', - fill_val=255): - """Translate segmentation maps horizontally or vertically.""" - for key in results.get('seg_fields', []): - seg = results[key].copy() - results[key] = mmcv.imtranslate(seg, offset, direction, - fill_val).astype(seg.dtype) - - def _filter_invalid(self, results, min_size=0): - """Filter bboxes and masks too small or translated out of image.""" - bbox2label, bbox2mask, _ = bbox2fields() - for key in results.get('bbox_fields', []): - bbox_w = results[key][:, 2] - results[key][:, 0] - bbox_h = results[key][:, 3] - results[key][:, 1] - valid_inds = (bbox_w > min_size) & (bbox_h > min_size) - valid_inds = np.nonzero(valid_inds)[0] - results[key] = results[key][valid_inds] - # label fields. e.g. gt_labels and gt_labels_ignore - label_key = bbox2label.get(key) - if label_key in results: - results[label_key] = results[label_key][valid_inds] - # mask fields, e.g. gt_masks and gt_masks_ignore - mask_key = bbox2mask.get(key) - if mask_key in results: - results[mask_key] = results[mask_key][valid_inds] - return results - - def __call__(self, results): - """Call function to translate images, bounding boxes, masks and - semantic segmentation maps. - - Args: - results (dict): Result dict from loading pipeline. - - Returns: - dict: Translated results. - """ - if np.random.rand() > self.prob: - return results - offset = random_negative(self.offset, self.random_negative_prob) - self._translate_img(results, offset, self.direction) - self._translate_bboxes(results, offset) - # fill_val defaultly 0 for BitmapMasks and None for PolygonMasks. 
- self._translate_masks(results, offset, self.direction) - # fill_val set to ``seg_ignore_label`` for the ignored value - # of segmentation map. - self._translate_seg( - results, offset, self.direction, fill_val=self.seg_ignore_label) - self._filter_invalid(results, min_size=self.min_size) - return results - - -@PIPELINES.register_module() -class ColorTransform(object): - """Apply Color transformation to image. The bboxes, masks, and - segmentations are not modified. - - Args: - level (int | float): Should be in range [0,_MAX_LEVEL]. - prob (float): The probability for performing Color transformation. - """ - - def __init__(self, level, prob=0.5): - assert isinstance(level, (int, float)), \ - 'The level must be type int or float.' - assert 0 <= level <= _MAX_LEVEL, \ - 'The level should be in range [0,_MAX_LEVEL].' - assert 0 <= prob <= 1.0, \ - 'The probability should be in range [0,1].' - self.level = level - self.prob = prob - self.factor = enhance_level_to_value(level) - - def _adjust_color_img(self, results, factor=1.0): - """Apply Color transformation to image.""" - for key in results.get('img_fields', ['img']): - # NOTE defaultly the image should be BGR format - img = results[key] - results[key] = mmcv.adjust_color(img, factor).astype(img.dtype) - - def __call__(self, results): - """Call function for Color transformation. - - Args: - results (dict): Result dict from loading pipeline. - - Returns: - dict: Colored results. - """ - if np.random.rand() > self.prob: - return results - self._adjust_color_img(results, self.factor) - return results - - def __repr__(self): - repr_str = self.__class__.__name__ - repr_str += f'(level={self.level}, ' - repr_str += f'prob={self.prob})' - return repr_str - - -@PIPELINES.register_module() -class EqualizeTransform(object): - """Apply Equalize transformation to image. The bboxes, masks and - segmentations are not modified. - - Args: - prob (float): The probability for performing Equalize transformation. - """ - - def __init__(self, prob=0.5): - assert 0 <= prob <= 1.0, \ - 'The probability should be in range [0,1].' - self.prob = prob - - def _imequalize(self, results): - """Equalizes the histogram of one image.""" - for key in results.get('img_fields', ['img']): - img = results[key] - results[key] = mmcv.imequalize(img).astype(img.dtype) - - def __call__(self, results): - """Call function for Equalize transformation. - - Args: - results (dict): Results dict from loading pipeline. - - Returns: - dict: Results after the transformation. - """ - if np.random.rand() > self.prob: - return results - self._imequalize(results) - return results - - def __repr__(self): - repr_str = self.__class__.__name__ - repr_str += f'(prob={self.prob})' - - -@PIPELINES.register_module() -class BrightnessTransform(object): - """Apply Brightness transformation to image. The bboxes, masks and - segmentations are not modified. - - Args: - level (int | float): Should be in range [0,_MAX_LEVEL]. - prob (float): The probability for performing Brightness transformation. - """ - - def __init__(self, level, prob=0.5): - assert isinstance(level, (int, float)), \ - 'The level must be type int or float.' - assert 0 <= level <= _MAX_LEVEL, \ - 'The level should be in range [0,_MAX_LEVEL].' - assert 0 <= prob <= 1.0, \ - 'The probability should be in range [0,1].' 
- self.level = level - self.prob = prob - self.factor = enhance_level_to_value(level) - - def _adjust_brightness_img(self, results, factor=1.0): - """Adjust the brightness of image.""" - for key in results.get('img_fields', ['img']): - img = results[key] - results[key] = mmcv.adjust_brightness(img, - factor).astype(img.dtype) - - def __call__(self, results): - """Call function for Brightness transformation. - - Args: - results (dict): Results dict from loading pipeline. - - Returns: - dict: Results after the transformation. - """ - if np.random.rand() > self.prob: - return results - self._adjust_brightness_img(results, self.factor) - return results - - def __repr__(self): - repr_str = self.__class__.__name__ - repr_str += f'(level={self.level}, ' - repr_str += f'prob={self.prob})' - return repr_str - - -@PIPELINES.register_module() -class ContrastTransform(object): - """Apply Contrast transformation to image. The bboxes, masks and - segmentations are not modified. - - Args: - level (int | float): Should be in range [0,_MAX_LEVEL]. - prob (float): The probability for performing Contrast transformation. - """ - - def __init__(self, level, prob=0.5): - assert isinstance(level, (int, float)), \ - 'The level must be type int or float.' - assert 0 <= level <= _MAX_LEVEL, \ - 'The level should be in range [0,_MAX_LEVEL].' - assert 0 <= prob <= 1.0, \ - 'The probability should be in range [0,1].' - self.level = level - self.prob = prob - self.factor = enhance_level_to_value(level) - - def _adjust_contrast_img(self, results, factor=1.0): - """Adjust the image contrast.""" - for key in results.get('img_fields', ['img']): - img = results[key] - results[key] = mmcv.adjust_contrast(img, factor).astype(img.dtype) - - def __call__(self, results): - """Call function for Contrast transformation. - - Args: - results (dict): Results dict from loading pipeline. - - Returns: - dict: Results after the transformation. 
- """ - if np.random.rand() > self.prob: - return results - self._adjust_contrast_img(results, self.factor) - return results - - def __repr__(self): - repr_str = self.__class__.__name__ - repr_str += f'(level={self.level}, ' - repr_str += f'prob={self.prob})' - return repr_str diff --git a/spaces/RohanAi/low-light-enhancement/Network.py b/spaces/RohanAi/low-light-enhancement/Network.py deleted file mode 100644 index 07c1673cb031f9df6f48b520916ee37f55ebcb8e..0000000000000000000000000000000000000000 --- a/spaces/RohanAi/low-light-enhancement/Network.py +++ /dev/null @@ -1,33 +0,0 @@ -from keras.layers import Input, Conv2D, Conv2DTranspose, Concatenate -from keras.applications.vgg19 import VGG19 -from keras.models import Model - -def build_vgg(): - vgg_model = VGG19(include_top=False, weights='imagenet') - vgg_model.trainable = False - return Model(inputs=vgg_model.input, outputs=vgg_model.get_layer('block3_conv4').output) - -def build_mbllen(input_shape): - - def EM(input, kernal_size, channel): - conv_1 = Conv2D(channel, (3, 3), activation='relu', padding='same', data_format='channels_last')(input) - conv_2 = Conv2D(channel, (kernal_size, kernal_size), activation='relu', padding='valid', data_format='channels_last')(conv_1) - conv_3 = Conv2D(channel*2, (kernal_size, kernal_size), activation='relu', padding='valid', data_format='channels_last')(conv_2) - conv_4 = Conv2D(channel*4, (kernal_size, kernal_size), activation='relu', padding='valid', data_format='channels_last')(conv_3) - conv_5 = Conv2DTranspose(channel*2, (kernal_size, kernal_size), activation='relu', padding='valid', data_format='channels_last')(conv_4) - conv_6 = Conv2DTranspose(channel, (kernal_size, kernal_size), activation='relu', padding='valid', data_format='channels_last')(conv_5) - res = Conv2DTranspose(3, (kernal_size, kernal_size), activation='relu', padding='valid', data_format='channels_last')(conv_6) - return res - - inputs = Input(shape=input_shape) - FEM = Conv2D(32, (3, 3), activation='relu', padding='same', data_format='channels_last')(inputs) - EM_com = EM(FEM, 5, 8) - - for j in range(3): - for i in range(0, 3): - FEM = Conv2D(32, (3, 3), activation='relu', padding='same', data_format='channels_last')(FEM) - EM1 = EM(FEM, 5, 8) - EM_com = Concatenate(axis=3)([EM_com, EM1]) - - outputs = Conv2D(3, (1, 1), activation='relu', padding='same', data_format='channels_last')(EM_com) - return Model(inputs, outputs) \ No newline at end of file diff --git a/spaces/Rongjiehuang/GenerSpeech/modules/parallel_wavegan/layers/residual_block.py b/spaces/Rongjiehuang/GenerSpeech/modules/parallel_wavegan/layers/residual_block.py deleted file mode 100644 index 7a267a86c1fa521c2824addf9dda304c43f1ff1f..0000000000000000000000000000000000000000 --- a/spaces/Rongjiehuang/GenerSpeech/modules/parallel_wavegan/layers/residual_block.py +++ /dev/null @@ -1,129 +0,0 @@ -# -*- coding: utf-8 -*- - -"""Residual block module in WaveNet. - -This code is modified from https://github.com/r9y9/wavenet_vocoder. 
- -""" - -import math - -import torch -import torch.nn.functional as F - - -class Conv1d(torch.nn.Conv1d): - """Conv1d module with customized initialization.""" - - def __init__(self, *args, **kwargs): - """Initialize Conv1d module.""" - super(Conv1d, self).__init__(*args, **kwargs) - - def reset_parameters(self): - """Reset parameters.""" - torch.nn.init.kaiming_normal_(self.weight, nonlinearity="relu") - if self.bias is not None: - torch.nn.init.constant_(self.bias, 0.0) - - -class Conv1d1x1(Conv1d): - """1x1 Conv1d with customized initialization.""" - - def __init__(self, in_channels, out_channels, bias): - """Initialize 1x1 Conv1d module.""" - super(Conv1d1x1, self).__init__(in_channels, out_channels, - kernel_size=1, padding=0, - dilation=1, bias=bias) - - -class ResidualBlock(torch.nn.Module): - """Residual block module in WaveNet.""" - - def __init__(self, - kernel_size=3, - residual_channels=64, - gate_channels=128, - skip_channels=64, - aux_channels=80, - dropout=0.0, - dilation=1, - bias=True, - use_causal_conv=False - ): - """Initialize ResidualBlock module. - - Args: - kernel_size (int): Kernel size of dilation convolution layer. - residual_channels (int): Number of channels for residual connection. - skip_channels (int): Number of channels for skip connection. - aux_channels (int): Local conditioning channels i.e. auxiliary input dimension. - dropout (float): Dropout probability. - dilation (int): Dilation factor. - bias (bool): Whether to add bias parameter in convolution layers. - use_causal_conv (bool): Whether to use use_causal_conv or non-use_causal_conv convolution. - - """ - super(ResidualBlock, self).__init__() - self.dropout = dropout - # no future time stamps available - if use_causal_conv: - padding = (kernel_size - 1) * dilation - else: - assert (kernel_size - 1) % 2 == 0, "Not support even number kernel size." - padding = (kernel_size - 1) // 2 * dilation - self.use_causal_conv = use_causal_conv - - # dilation conv - self.conv = Conv1d(residual_channels, gate_channels, kernel_size, - padding=padding, dilation=dilation, bias=bias) - - # local conditioning - if aux_channels > 0: - self.conv1x1_aux = Conv1d1x1(aux_channels, gate_channels, bias=False) - else: - self.conv1x1_aux = None - - # conv output is split into two groups - gate_out_channels = gate_channels // 2 - self.conv1x1_out = Conv1d1x1(gate_out_channels, residual_channels, bias=bias) - self.conv1x1_skip = Conv1d1x1(gate_out_channels, skip_channels, bias=bias) - - def forward(self, x, c): - """Calculate forward propagation. - - Args: - x (Tensor): Input tensor (B, residual_channels, T). - c (Tensor): Local conditioning auxiliary tensor (B, aux_channels, T). - - Returns: - Tensor: Output tensor for residual connection (B, residual_channels, T). - Tensor: Output tensor for skip connection (B, skip_channels, T). 
- - """ - residual = x - x = F.dropout(x, p=self.dropout, training=self.training) - x = self.conv(x) - - # remove future time steps if use_causal_conv conv - x = x[:, :, :residual.size(-1)] if self.use_causal_conv else x - - # split into two part for gated activation - splitdim = 1 - xa, xb = x.split(x.size(splitdim) // 2, dim=splitdim) - - # local conditioning - if c is not None: - assert self.conv1x1_aux is not None - c = self.conv1x1_aux(c) - ca, cb = c.split(c.size(splitdim) // 2, dim=splitdim) - xa, xb = xa + ca, xb + cb - - x = torch.tanh(xa) * torch.sigmoid(xb) - - # for skip connection - s = self.conv1x1_skip(x) - - # for residual connection - x = (self.conv1x1_out(x) + residual) * math.sqrt(0.5) - - return x, s diff --git a/spaces/Rongjiehuang/GenerSpeech/vocoders/__init__.py b/spaces/Rongjiehuang/GenerSpeech/vocoders/__init__.py deleted file mode 100644 index 66c318857ce48048437dede7072901ad6471b8fc..0000000000000000000000000000000000000000 --- a/spaces/Rongjiehuang/GenerSpeech/vocoders/__init__.py +++ /dev/null @@ -1 +0,0 @@ -from vocoders import hifigan diff --git a/spaces/Ryzal/rvc-models-new/lib/infer_pack/modules/F0Predictor/__init__.py b/spaces/Ryzal/rvc-models-new/lib/infer_pack/modules/F0Predictor/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/Sambhavnoobcoder/stable-diffusion-inpainting/inpainting.py b/spaces/Sambhavnoobcoder/stable-diffusion-inpainting/inpainting.py deleted file mode 100644 index 798c3fd252f826762aee6970f867eee537249db8..0000000000000000000000000000000000000000 --- a/spaces/Sambhavnoobcoder/stable-diffusion-inpainting/inpainting.py +++ /dev/null @@ -1,194 +0,0 @@ -import inspect -from typing import List, Optional, Union - -import numpy as np -import torch - -import PIL -from diffusers import AutoencoderKL, DDIMScheduler, DiffusionPipeline, PNDMScheduler, UNet2DConditionModel -from diffusers.pipelines.stable_diffusion import StableDiffusionSafetyChecker -from tqdm.auto import tqdm -from transformers import CLIPFeatureExtractor, CLIPTextModel, CLIPTokenizer - - -def preprocess_image(image): - w, h = image.size - w, h = map(lambda x: x - x % 32, (w, h)) # resize to integer multiple of 32 - image = image.resize((w, h), resample=PIL.Image.LANCZOS) - image = np.array(image).astype(np.float32) / 255.0 - image = image[None].transpose(0, 3, 1, 2) - image = torch.from_numpy(image) - return 2.0 * image - 1.0 - - -def preprocess_mask(mask): - mask = mask.convert("L") - w, h = mask.size - w, h = map(lambda x: x - x % 32, (w, h)) # resize to integer multiple of 32 - mask = mask.resize((w // 8, h // 8), resample=PIL.Image.NEAREST) - mask = np.array(mask).astype(np.float32) / 255.0 - mask = np.tile(mask, (4, 1, 1)) - mask = mask[None].transpose(0, 1, 2, 3) # what does this step do? 
- mask = 1 - mask # repaint white, keep black - mask = torch.from_numpy(mask) - return mask - -class StableDiffusionInpaintingPipeline(DiffusionPipeline): - def __init__( - self, - vae: AutoencoderKL, - text_encoder: CLIPTextModel, - tokenizer: CLIPTokenizer, - unet: UNet2DConditionModel, - scheduler: Union[DDIMScheduler, PNDMScheduler], - safety_checker: StableDiffusionSafetyChecker, - feature_extractor: CLIPFeatureExtractor, - ): - super().__init__() - scheduler = scheduler.set_format("pt") - self.register_modules( - vae=vae, - text_encoder=text_encoder, - tokenizer=tokenizer, - unet=unet, - scheduler=scheduler, - safety_checker=safety_checker, - feature_extractor=feature_extractor, - ) - - @torch.no_grad() - def __call__( - self, - prompt: Union[str, List[str]], - init_image: torch.FloatTensor, - mask_image: torch.FloatTensor, - strength: float = 0.8, - num_inference_steps: Optional[int] = 50, - guidance_scale: Optional[float] = 7.5, - eta: Optional[float] = 0.0, - generator: Optional[torch.Generator] = None, - output_type: Optional[str] = "pil", - ): - - if isinstance(prompt, str): - batch_size = 1 - elif isinstance(prompt, list): - batch_size = len(prompt) - else: - raise ValueError(f"`prompt` has to be of type `str` or `list` but is {type(prompt)}") - - if strength < 0 or strength > 1: - raise ValueError(f"The value of strength should in [0.0, 1.0] but is {strength}") - - # set timesteps - accepts_offset = "offset" in set(inspect.signature(self.scheduler.set_timesteps).parameters.keys()) - extra_set_kwargs = {} - offset = 0 - if accepts_offset: - offset = 1 - extra_set_kwargs["offset"] = 1 - - self.scheduler.set_timesteps(num_inference_steps, **extra_set_kwargs) - - # preprocess image - init_image = preprocess_image(init_image).to(self.device) - - # encode the init image into latents and scale the latents - init_latent_dist = self.vae.encode(init_image).latent_dist - init_latents = init_latent_dist.sample(generator=generator) - init_latents = 0.18215 * init_latents - - # prepare init_latents noise to latents - init_latents = torch.cat([init_latents] * batch_size) - init_latents_orig = init_latents - - # preprocess mask - mask = preprocess_mask(mask_image).to(self.device) - mask = torch.cat([mask] * batch_size) - - # check sizes - if not mask.shape == init_latents.shape: - raise ValueError(f"The mask and init_image should be the same size!") - - # get the original timestep using init_timestep - init_timestep = int(num_inference_steps * strength) + offset - init_timestep = min(init_timestep, num_inference_steps) - timesteps = self.scheduler.timesteps[-init_timestep] - timesteps = torch.tensor([timesteps] * batch_size, dtype=torch.long, device=self.device) - - # add noise to latents using the timesteps - noise = torch.randn(init_latents.shape, generator=generator, device=self.device) - init_latents = self.scheduler.add_noise(init_latents, noise, timesteps) - - # get prompt text embeddings - text_input = self.tokenizer( - prompt, - padding="max_length", - max_length=self.tokenizer.model_max_length, - truncation=True, - return_tensors="pt", - ) - text_embeddings = self.text_encoder(text_input.input_ids.to(self.device))[0] - - # here `guidance_scale` is defined analog to the guidance weight `w` of equation (2) - # of the Imagen paper: https://arxiv.org/pdf/2205.11487.pdf . `guidance_scale = 1` - # corresponds to doing no classifier free guidance. 
- do_classifier_free_guidance = guidance_scale > 1.0 - # get unconditional embeddings for classifier free guidance - if do_classifier_free_guidance: - max_length = text_input.input_ids.shape[-1] - uncond_input = self.tokenizer( - [""] * batch_size, padding="max_length", max_length=max_length, return_tensors="pt" - ) - uncond_embeddings = self.text_encoder(uncond_input.input_ids.to(self.device))[0] - - # For classifier free guidance, we need to do two forward passes. - # Here we concatenate the unconditional and text embeddings into a single batch - # to avoid doing two forward passes - text_embeddings = torch.cat([uncond_embeddings, text_embeddings]) - - # prepare extra kwargs for the scheduler step, since not all schedulers have the same signature - # eta (η) is only used with the DDIMScheduler, it will be ignored for other schedulers. - # eta corresponds to η in DDIM paper: https://arxiv.org/abs/2010.02502 - # and should be between [0, 1] - accepts_eta = "eta" in set(inspect.signature(self.scheduler.step).parameters.keys()) - extra_step_kwargs = {} - if accepts_eta: - extra_step_kwargs["eta"] = eta - - latents = init_latents - t_start = max(num_inference_steps - init_timestep + offset, 0) - for i, t in tqdm(enumerate(self.scheduler.timesteps[t_start:])): - # expand the latents if we are doing classifier free guidance - latent_model_input = torch.cat([latents] * 2) if do_classifier_free_guidance else latents - - # predict the noise residual - noise_pred = self.unet(latent_model_input, t, encoder_hidden_states=text_embeddings)["sample"] - - # perform guidance - if do_classifier_free_guidance: - noise_pred_uncond, noise_pred_text = noise_pred.chunk(2) - noise_pred = noise_pred_uncond + guidance_scale * (noise_pred_text - noise_pred_uncond) - - # compute the previous noisy sample x_t -> x_t-1 - latents = self.scheduler.step(noise_pred, t, latents, **extra_step_kwargs)["prev_sample"] - - # masking - init_latents_proper = self.scheduler.add_noise(init_latents_orig, noise, t) - latents = (init_latents_proper * mask) + (latents * (1 - mask)) - - # scale and decode the image latents with vae - latents = 1 / 0.18215 * latents - image = self.vae.decode(latents).sample - - image = (image / 2 + 0.5).clamp(0, 1) - image = image.cpu().permute(0, 2, 3, 1).numpy() - - # run safety checker - safety_cheker_input = self.feature_extractor(self.numpy_to_pil(image), return_tensors="pt").to(self.device) - image, has_nsfw_concept = self.safety_checker(images=image, clip_input=safety_cheker_input.pixel_values) - - if output_type == "pil": - image = self.numpy_to_pil(image) - - return {"sample": image, "nsfw_content_detected": has_nsfw_concept} \ No newline at end of file diff --git a/spaces/Sandy0909/Finance_Sentiment/app.py b/spaces/Sandy0909/Finance_Sentiment/app.py deleted file mode 100644 index 5bd49066382eabc6166ccfe66a93af652973211d..0000000000000000000000000000000000000000 --- a/spaces/Sandy0909/Finance_Sentiment/app.py +++ /dev/null @@ -1,60 +0,0 @@ -import streamlit as st -from transformers import BertTokenizer, BertForSequenceClassification -import torch - -# Config class -class Config: - TOKENIZER_PATH = "ahmedrachid/FinancialBERT" # Use tokenizer from the original model - MODEL_PATH = "Sandy0909/finance_sentiment" - MAX_LEN = 512 - TOKENIZER = BertTokenizer.from_pretrained(TOKENIZER_PATH) - -class FinancialBERT(torch.nn.Module): - def __init__(self): - super(FinancialBERT, self).__init__() - self.bert = BertForSequenceClassification.from_pretrained(Config.MODEL_PATH, num_labels=3, 
hidden_dropout_prob=0.5) - - def forward(self, input_ids, attention_mask, token_type_ids, labels=None): - output = self.bert(input_ids, attention_mask=attention_mask, token_type_ids=token_type_ids, labels=labels) - return output.loss, output.logits - -# Load model -model = FinancialBERT() -model.eval() - -# Streamlit App -# Set title and an image/banner if you have one -st.title("Financial Sentiment Analysis") -# st.image("path_to_your_image.jpg", use_column_width=True) - -# Description -st.write(""" -This application predicts the sentiment of financial sentences using a state-of-the-art model. Enter a financial sentence below and click 'Predict' to get its sentiment. -""") - -sentence = st.text_area("Enter a financial sentence:", "") - -if st.button("Predict"): - tokenizer = Config.TOKENIZER - inputs = tokenizer([sentence], return_tensors="pt", truncation=True, padding=True, max_length=Config.MAX_LEN) - - with torch.no_grad(): - logits = model(input_ids=inputs['input_ids'], attention_mask=inputs['attention_mask'], token_type_ids=inputs.get('token_type_ids'))[1] - - probs = torch.nn.functional.softmax(logits, dim=-1) - predictions = torch.argmax(probs, dim=-1) - sentiment = ['negative', 'neutral', 'positive'][predictions[0].item()] - - # Output visualization - st.subheader('Predicted Sentiment:') - st.write(f"The sentiment is: **{sentiment.capitalize()}**") - - # Show Confidence levels as a bar chart - st.subheader('Model Confidence Levels:') - st.bar_chart(probs[0].numpy(), use_container_width=True) - -# Sidebar: Documentation/Help -st.sidebar.header('About') -st.sidebar.text(""" -This application uses a BERT-based model trained specifically for financial sentences. The model can predict if the sentiment of a sentence is positive, negative, or neutral. -""") diff --git a/spaces/Saturdays/FER/README.md b/spaces/Saturdays/FER/README.md deleted file mode 100644 index 2f43547e480569cf5720fe69b81ad00e0a0f4427..0000000000000000000000000000000000000000 --- a/spaces/Saturdays/FER/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: FER -emoji: 🏃 -colorFrom: yellow -colorTo: blue -sdk: gradio -sdk_version: 2.9.4 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference diff --git a/spaces/SeViLA/SeViLA/app/vqa.py b/spaces/SeViLA/SeViLA/app/vqa.py deleted file mode 100644 index c505a985d2450a4a2065faca67a1e8974e899c95..0000000000000000000000000000000000000000 --- a/spaces/SeViLA/SeViLA/app/vqa.py +++ /dev/null @@ -1,63 +0,0 @@ -""" - # Copyright (c) 2022, salesforce.com, inc. - # All rights reserved. - # SPDX-License-Identifier: BSD-3-Clause - # For full license text, see the LICENSE file in the repo root or https://opensource.org/licenses/BSD-3-Clause -""" - -import streamlit as st -from app import load_demo_image, device -from app.utils import load_model_cache -from lavis.processors import load_processor -from PIL import Image - - -def app(): - model_type = st.sidebar.selectbox("Model:", ["BLIP"]) - - # ===== layout ===== - st.markdown( - "

    Visual Question Answering

    ", - unsafe_allow_html=True, - ) - - instructions = """Try the provided image or upload your own:""" - file = st.file_uploader(instructions) - - col1, col2 = st.columns(2) - - col1.header("Image") - if file: - raw_img = Image.open(file).convert("RGB") - else: - raw_img = load_demo_image() - - w, h = raw_img.size - scaling_factor = 720 / w - resized_image = raw_img.resize((int(w * scaling_factor), int(h * scaling_factor))) - - col1.image(resized_image, use_column_width=True) - col2.header("Question") - - user_question = col2.text_input("Input your question!", "What are objects there?") - qa_button = st.button("Submit") - - col2.header("Answer") - - # ===== event ===== - vis_processor = load_processor("blip_image_eval").build(image_size=480) - text_processor = load_processor("blip_question").build() - - if qa_button: - if model_type.startswith("BLIP"): - model = load_model_cache( - "blip_vqa", model_type="vqav2", is_eval=True, device=device - ) - - img = vis_processor(raw_img).unsqueeze(0).to(device) - question = text_processor(user_question) - - vqa_samples = {"image": img, "text_input": [question]} - answers = model.predict_answers(vqa_samples, inference_method="generate") - - col2.write("\n".join(answers), use_column_width=True) diff --git a/spaces/Shad0ws/AI-Agent-with-Google-Search-APIs/src/task_creation_chain.py b/spaces/Shad0ws/AI-Agent-with-Google-Search-APIs/src/task_creation_chain.py deleted file mode 100644 index 5c074bfc5bc09fa14e063dbc923422415573d3ed..0000000000000000000000000000000000000000 --- a/spaces/Shad0ws/AI-Agent-with-Google-Search-APIs/src/task_creation_chain.py +++ /dev/null @@ -1,30 +0,0 @@ -from langchain.llms import BaseLLM -from langchain import LLMChain, PromptTemplate - - -class TaskCreationChain(LLMChain): - """Chain to generates tasks.""" - - @classmethod - def from_llm(cls, llm: BaseLLM, verbose: bool = True) -> LLMChain: - """Get the response parser.""" - task_creation_template = ( - "You are an task creation AI that uses the result of an execution agent" - " to create new tasks with the following objective: {objective}," - " The last completed task has the result: {result}." - " This result was based on this task description: {task_description}." - " These are incomplete tasks: {incomplete_tasks}." - " Based on the result, create new tasks to be completed" - " by the AI system that do not overlap with incomplete tasks." - " Return the tasks as an array." 
- ) - prompt = PromptTemplate( - template=task_creation_template, - input_variables=[ - "result", - "task_description", - "incomplete_tasks", - "objective", - ], - ) - return cls(prompt=prompt, llm=llm, verbose=verbose) \ No newline at end of file diff --git a/spaces/ShoukanLabs/OpenNiji-Dataset-Viewer/app.py b/spaces/ShoukanLabs/OpenNiji-Dataset-Viewer/app.py deleted file mode 100644 index 5744df3e7c82c8df42e52ffd2b00353de9dca554..0000000000000000000000000000000000000000 --- a/spaces/ShoukanLabs/OpenNiji-Dataset-Viewer/app.py +++ /dev/null @@ -1,45 +0,0 @@ -import gradio as gr -from datasets import load_dataset - - - -startimg = 0 - -def get_dataset_forward(): - global startimg - final = [] - dataset = load_dataset("ShoukanLabs/OpenNiji-Dataset", split=f"train[{startimg}:{startimg + 50}]") - for idx in dataset: - url = idx["url"] - prompt = idx["prompt"] - style = idx["style"] - final.append((url, f"{prompt}\n\n Style: {style}")) - startimg += 50 - return final - -def get_dataset_back(): - global startimg - final = [] - startimg -= 50 - dataset = load_dataset("ShoukanLabs/OpenNiji-Dataset", split=f"train[{startimg}:{startimg + 50}]") - for idx in dataset: - url = idx["url"] - prompt = idx["prompt"] - style = idx["style"] - final.append((url, f"{prompt}\n\n Style: {style}")) - return final - -with gr.Blocks() as demo: - with gr.Column(): - with gr.Row(): - back = gr.Button("<").style() - forward = gr.Button(">").style() - gallery = gr.Gallery( - label="Showing 50 images", show_label=True, elem_id="gallery" - ).style(object_fit="contain", columns=[10], height="auto") - - back.click(get_dataset_back, None, gallery) - forward.click(get_dataset_forward, None, gallery) - -if __name__ == "__main__": - demo.launch() \ No newline at end of file diff --git a/spaces/Silentlin/DiffSinger/data_gen/tts/base_binarizer.py b/spaces/Silentlin/DiffSinger/data_gen/tts/base_binarizer.py deleted file mode 100644 index b30a20c1cdc3403214ff527d68a50806befafeb9..0000000000000000000000000000000000000000 --- a/spaces/Silentlin/DiffSinger/data_gen/tts/base_binarizer.py +++ /dev/null @@ -1,224 +0,0 @@ -import os -os.environ["OMP_NUM_THREADS"] = "1" - -from utils.multiprocess_utils import chunked_multiprocess_run -import random -import traceback -import json -from resemblyzer import VoiceEncoder -from tqdm import tqdm -from data_gen.tts.data_gen_utils import get_mel2ph, get_pitch, build_phone_encoder -from utils.hparams import set_hparams, hparams -import numpy as np -from utils.indexed_datasets import IndexedDatasetBuilder -from vocoders.base_vocoder import VOCODERS -import pandas as pd - - -class BinarizationError(Exception): - pass - - -class BaseBinarizer: - def __init__(self, processed_data_dir=None): - if processed_data_dir is None: - processed_data_dir = hparams['processed_data_dir'] - self.processed_data_dirs = processed_data_dir.split(",") - self.binarization_args = hparams['binarization_args'] - self.pre_align_args = hparams['pre_align_args'] - self.forced_align = self.pre_align_args['forced_align'] - tg_dir = None - if self.forced_align == 'mfa': - tg_dir = 'mfa_outputs' - if self.forced_align == 'kaldi': - tg_dir = 'kaldi_outputs' - self.item2txt = {} - self.item2ph = {} - self.item2wavfn = {} - self.item2tgfn = {} - self.item2spk = {} - for ds_id, processed_data_dir in enumerate(self.processed_data_dirs): - self.meta_df = pd.read_csv(f"{processed_data_dir}/metadata_phone.csv", dtype=str) - for r_idx, r in self.meta_df.iterrows(): - item_name = raw_item_name = r['item_name'] - if 
len(self.processed_data_dirs) > 1: - item_name = f'ds{ds_id}_{item_name}' - self.item2txt[item_name] = r['txt'] - self.item2ph[item_name] = r['ph'] - self.item2wavfn[item_name] = os.path.join(hparams['raw_data_dir'], 'wavs', os.path.basename(r['wav_fn']).split('_')[1]) - self.item2spk[item_name] = r.get('spk', 'SPK1') - if len(self.processed_data_dirs) > 1: - self.item2spk[item_name] = f"ds{ds_id}_{self.item2spk[item_name]}" - if tg_dir is not None: - self.item2tgfn[item_name] = f"{processed_data_dir}/{tg_dir}/{raw_item_name}.TextGrid" - self.item_names = sorted(list(self.item2txt.keys())) - if self.binarization_args['shuffle']: - random.seed(1234) - random.shuffle(self.item_names) - - @property - def train_item_names(self): - return self.item_names[hparams['test_num']+hparams['valid_num']:] - - @property - def valid_item_names(self): - return self.item_names[0: hparams['test_num']+hparams['valid_num']] # - - @property - def test_item_names(self): - return self.item_names[0: hparams['test_num']] # Audios for MOS testing are in 'test_ids' - - def build_spk_map(self): - spk_map = set() - for item_name in self.item_names: - spk_name = self.item2spk[item_name] - spk_map.add(spk_name) - spk_map = {x: i for i, x in enumerate(sorted(list(spk_map)))} - assert len(spk_map) == 0 or len(spk_map) <= hparams['num_spk'], len(spk_map) - return spk_map - - def item_name2spk_id(self, item_name): - return self.spk_map[self.item2spk[item_name]] - - def _phone_encoder(self): - ph_set_fn = f"{hparams['binary_data_dir']}/phone_set.json" - ph_set = [] - if hparams['reset_phone_dict'] or not os.path.exists(ph_set_fn): - for processed_data_dir in self.processed_data_dirs: - ph_set += [x.split(' ')[0] for x in open(f'{processed_data_dir}/dict.txt').readlines()] - ph_set = sorted(set(ph_set)) - json.dump(ph_set, open(ph_set_fn, 'w')) - else: - ph_set = json.load(open(ph_set_fn, 'r')) - print("| phone set: ", ph_set) - return build_phone_encoder(hparams['binary_data_dir']) - - def meta_data(self, prefix): - if prefix == 'valid': - item_names = self.valid_item_names - elif prefix == 'test': - item_names = self.test_item_names - else: - item_names = self.train_item_names - for item_name in item_names: - ph = self.item2ph[item_name] - txt = self.item2txt[item_name] - tg_fn = self.item2tgfn.get(item_name) - wav_fn = self.item2wavfn[item_name] - spk_id = self.item_name2spk_id(item_name) - yield item_name, ph, txt, tg_fn, wav_fn, spk_id - - def process(self): - os.makedirs(hparams['binary_data_dir'], exist_ok=True) - self.spk_map = self.build_spk_map() - print("| spk_map: ", self.spk_map) - spk_map_fn = f"{hparams['binary_data_dir']}/spk_map.json" - json.dump(self.spk_map, open(spk_map_fn, 'w')) - - self.phone_encoder = self._phone_encoder() - self.process_data('valid') - self.process_data('test') - self.process_data('train') - - def process_data(self, prefix): - data_dir = hparams['binary_data_dir'] - args = [] - builder = IndexedDatasetBuilder(f'{data_dir}/{prefix}') - lengths = [] - f0s = [] - total_sec = 0 - if self.binarization_args['with_spk_embed']: - voice_encoder = VoiceEncoder().cuda() - - meta_data = list(self.meta_data(prefix)) - for m in meta_data: - args.append(list(m) + [self.phone_encoder, self.binarization_args]) - num_workers = int(os.getenv('N_PROC', os.cpu_count() // 3)) - for f_id, (_, item) in enumerate( - zip(tqdm(meta_data), chunked_multiprocess_run(self.process_item, args, num_workers=num_workers))): - if item is None: - continue - item['spk_embed'] = voice_encoder.embed_utterance(item['wav']) \ - 
if self.binarization_args['with_spk_embed'] else None - if not self.binarization_args['with_wav'] and 'wav' in item: - print("del wav") - del item['wav'] - builder.add_item(item) - lengths.append(item['len']) - total_sec += item['sec'] - if item.get('f0') is not None: - f0s.append(item['f0']) - builder.finalize() - np.save(f'{data_dir}/{prefix}_lengths.npy', lengths) - if len(f0s) > 0: - f0s = np.concatenate(f0s, 0) - f0s = f0s[f0s != 0] - np.save(f'{data_dir}/{prefix}_f0s_mean_std.npy', [np.mean(f0s).item(), np.std(f0s).item()]) - print(f"| {prefix} total duration: {total_sec:.3f}s") - - @classmethod - def process_item(cls, item_name, ph, txt, tg_fn, wav_fn, spk_id, encoder, binarization_args): - if hparams['vocoder'] in VOCODERS: - wav, mel = VOCODERS[hparams['vocoder']].wav2spec(wav_fn) - else: - wav, mel = VOCODERS[hparams['vocoder'].split('.')[-1]].wav2spec(wav_fn) - res = { - 'item_name': item_name, 'txt': txt, 'ph': ph, 'mel': mel, 'wav': wav, 'wav_fn': wav_fn, - 'sec': len(wav) / hparams['audio_sample_rate'], 'len': mel.shape[0], 'spk_id': spk_id - } - try: - if binarization_args['with_f0']: - cls.get_pitch(wav, mel, res) - if binarization_args['with_f0cwt']: - cls.get_f0cwt(res['f0'], res) - if binarization_args['with_txt']: - try: - phone_encoded = res['phone'] = encoder.encode(ph) - except: - traceback.print_exc() - raise BinarizationError(f"Empty phoneme") - if binarization_args['with_align']: - cls.get_align(tg_fn, ph, mel, phone_encoded, res) - except BinarizationError as e: - print(f"| Skip item ({e}). item_name: {item_name}, wav_fn: {wav_fn}") - return None - return res - - @staticmethod - def get_align(tg_fn, ph, mel, phone_encoded, res): - if tg_fn is not None and os.path.exists(tg_fn): - mel2ph, dur = get_mel2ph(tg_fn, ph, mel, hparams) - else: - raise BinarizationError(f"Align not found") - if mel2ph.max() - 1 >= len(phone_encoded): - raise BinarizationError( - f"Align does not match: mel2ph.max() - 1: {mel2ph.max() - 1}, len(phone_encoded): {len(phone_encoded)}") - res['mel2ph'] = mel2ph - res['dur'] = dur - - @staticmethod - def get_pitch(wav, mel, res): - f0, pitch_coarse = get_pitch(wav, mel, hparams) - if sum(f0) == 0: - raise BinarizationError("Empty f0") - res['f0'] = f0 - res['pitch'] = pitch_coarse - - @staticmethod - def get_f0cwt(f0, res): - from utils.cwt import get_cont_lf0, get_lf0_cwt - uv, cont_lf0_lpf = get_cont_lf0(f0) - logf0s_mean_org, logf0s_std_org = np.mean(cont_lf0_lpf), np.std(cont_lf0_lpf) - cont_lf0_lpf_norm = (cont_lf0_lpf - logf0s_mean_org) / logf0s_std_org - Wavelet_lf0, scales = get_lf0_cwt(cont_lf0_lpf_norm) - if np.any(np.isnan(Wavelet_lf0)): - raise BinarizationError("NaN CWT") - res['cwt_spec'] = Wavelet_lf0 - res['cwt_scales'] = scales - res['f0_mean'] = logf0s_mean_org - res['f0_std'] = logf0s_std_org - - -if __name__ == "__main__": - set_hparams() - BaseBinarizer().process() diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/click/core.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/click/core.py deleted file mode 100644 index 5abfb0f3c2f872275962732b370fed1202f1144a..0000000000000000000000000000000000000000 --- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/click/core.py +++ /dev/null @@ -1,2998 +0,0 @@ -import enum -import errno -import inspect -import os -import sys -import typing as t -from collections import abc -from contextlib import contextmanager -from contextlib import ExitStack -from functools import partial -from functools import update_wrapper -from gettext import gettext as _ 
-from gettext import ngettext -from itertools import repeat - -from . import types -from .exceptions import Abort -from .exceptions import BadParameter -from .exceptions import ClickException -from .exceptions import Exit -from .exceptions import MissingParameter -from .exceptions import UsageError -from .formatting import HelpFormatter -from .formatting import join_options -from .globals import pop_context -from .globals import push_context -from .parser import _flag_needs_value -from .parser import OptionParser -from .parser import split_opt -from .termui import confirm -from .termui import prompt -from .termui import style -from .utils import _detect_program_name -from .utils import _expand_args -from .utils import echo -from .utils import make_default_short_help -from .utils import make_str -from .utils import PacifyFlushWrapper - -if t.TYPE_CHECKING: - import typing_extensions as te - from .shell_completion import CompletionItem - -F = t.TypeVar("F", bound=t.Callable[..., t.Any]) -V = t.TypeVar("V") - - -def _complete_visible_commands( - ctx: "Context", incomplete: str -) -> t.Iterator[t.Tuple[str, "Command"]]: - """List all the subcommands of a group that start with the - incomplete value and aren't hidden. - - :param ctx: Invocation context for the group. - :param incomplete: Value being completed. May be empty. - """ - multi = t.cast(MultiCommand, ctx.command) - - for name in multi.list_commands(ctx): - if name.startswith(incomplete): - command = multi.get_command(ctx, name) - - if command is not None and not command.hidden: - yield name, command - - -def _check_multicommand( - base_command: "MultiCommand", cmd_name: str, cmd: "Command", register: bool = False -) -> None: - if not base_command.chain or not isinstance(cmd, MultiCommand): - return - if register: - hint = ( - "It is not possible to add multi commands as children to" - " another multi command that is in chain mode." - ) - else: - hint = ( - "Found a multi command as subcommand to a multi command" - " that is in chain mode. This is not supported." - ) - raise RuntimeError( - f"{hint}. Command {base_command.name!r} is set to chain and" - f" {cmd_name!r} was added as a subcommand but it in itself is a" - f" multi command. ({cmd_name!r} is a {type(cmd).__name__}" - f" within a chained {type(base_command).__name__} named" - f" {base_command.name!r})." - ) - - -def batch(iterable: t.Iterable[V], batch_size: int) -> t.List[t.Tuple[V, ...]]: - return list(zip(*repeat(iter(iterable), batch_size))) - - -@contextmanager -def augment_usage_errors( - ctx: "Context", param: t.Optional["Parameter"] = None -) -> t.Iterator[None]: - """Context manager that attaches extra information to exceptions.""" - try: - yield - except BadParameter as e: - if e.ctx is None: - e.ctx = ctx - if param is not None and e.param is None: - e.param = param - raise - except UsageError as e: - if e.ctx is None: - e.ctx = ctx - raise - - -def iter_params_for_processing( - invocation_order: t.Sequence["Parameter"], - declaration_order: t.Sequence["Parameter"], -) -> t.List["Parameter"]: - """Given a sequence of parameters in the order as should be considered - for processing and an iterable of parameters that exist, this returns - a list in the correct order as they should be processed. 
- """ - - def sort_key(item: "Parameter") -> t.Tuple[bool, float]: - try: - idx: float = invocation_order.index(item) - except ValueError: - idx = float("inf") - - return not item.is_eager, idx - - return sorted(declaration_order, key=sort_key) - - -class ParameterSource(enum.Enum): - """This is an :class:`~enum.Enum` that indicates the source of a - parameter's value. - - Use :meth:`click.Context.get_parameter_source` to get the - source for a parameter by name. - - .. versionchanged:: 8.0 - Use :class:`~enum.Enum` and drop the ``validate`` method. - - .. versionchanged:: 8.0 - Added the ``PROMPT`` value. - """ - - COMMANDLINE = enum.auto() - """The value was provided by the command line args.""" - ENVIRONMENT = enum.auto() - """The value was provided with an environment variable.""" - DEFAULT = enum.auto() - """Used the default specified by the parameter.""" - DEFAULT_MAP = enum.auto() - """Used a default provided by :attr:`Context.default_map`.""" - PROMPT = enum.auto() - """Used a prompt to confirm a default or provide a value.""" - - -class Context: - """The context is a special internal object that holds state relevant - for the script execution at every single level. It's normally invisible - to commands unless they opt-in to getting access to it. - - The context is useful as it can pass internal objects around and can - control special execution features such as reading data from - environment variables. - - A context can be used as context manager in which case it will call - :meth:`close` on teardown. - - :param command: the command class for this context. - :param parent: the parent context. - :param info_name: the info name for this invocation. Generally this - is the most descriptive name for the script or - command. For the toplevel script it is usually - the name of the script, for commands below it it's - the name of the script. - :param obj: an arbitrary object of user data. - :param auto_envvar_prefix: the prefix to use for automatic environment - variables. If this is `None` then reading - from environment variables is disabled. This - does not affect manually set environment - variables which are always read. - :param default_map: a dictionary (like object) with default values - for parameters. - :param terminal_width: the width of the terminal. The default is - inherit from parent context. If no context - defines the terminal width then auto - detection will be applied. - :param max_content_width: the maximum width for content rendered by - Click (this currently only affects help - pages). This defaults to 80 characters if - not overridden. In other words: even if the - terminal is larger than that, Click will not - format things wider than 80 characters by - default. In addition to that, formatters might - add some safety mapping on the right. - :param resilient_parsing: if this flag is enabled then Click will - parse without any interactivity or callback - invocation. Default values will also be - ignored. This is useful for implementing - things such as completion support. - :param allow_extra_args: if this is set to `True` then extra arguments - at the end will not raise an error and will be - kept on the context. The default is to inherit - from the command. - :param allow_interspersed_args: if this is set to `False` then options - and arguments cannot be mixed. The - default is to inherit from the command. - :param ignore_unknown_options: instructs click to ignore options it does - not know and keeps them for later - processing. 
- :param help_option_names: optionally a list of strings that define how - the default help parameter is named. The - default is ``['--help']``. - :param token_normalize_func: an optional function that is used to - normalize tokens (options, choices, - etc.). This for instance can be used to - implement case insensitive behavior. - :param color: controls if the terminal supports ANSI colors or not. The - default is autodetection. This is only needed if ANSI - codes are used in texts that Click prints which is by - default not the case. This for instance would affect - help output. - :param show_default: Show the default value for commands. If this - value is not set, it defaults to the value from the parent - context. ``Command.show_default`` overrides this default for the - specific command. - - .. versionchanged:: 8.1 - The ``show_default`` parameter is overridden by - ``Command.show_default``, instead of the other way around. - - .. versionchanged:: 8.0 - The ``show_default`` parameter defaults to the value from the - parent context. - - .. versionchanged:: 7.1 - Added the ``show_default`` parameter. - - .. versionchanged:: 4.0 - Added the ``color``, ``ignore_unknown_options``, and - ``max_content_width`` parameters. - - .. versionchanged:: 3.0 - Added the ``allow_extra_args`` and ``allow_interspersed_args`` - parameters. - - .. versionchanged:: 2.0 - Added the ``resilient_parsing``, ``help_option_names``, and - ``token_normalize_func`` parameters. - """ - - #: The formatter class to create with :meth:`make_formatter`. - #: - #: .. versionadded:: 8.0 - formatter_class: t.Type["HelpFormatter"] = HelpFormatter - - def __init__( - self, - command: "Command", - parent: t.Optional["Context"] = None, - info_name: t.Optional[str] = None, - obj: t.Optional[t.Any] = None, - auto_envvar_prefix: t.Optional[str] = None, - default_map: t.Optional[t.Dict[str, t.Any]] = None, - terminal_width: t.Optional[int] = None, - max_content_width: t.Optional[int] = None, - resilient_parsing: bool = False, - allow_extra_args: t.Optional[bool] = None, - allow_interspersed_args: t.Optional[bool] = None, - ignore_unknown_options: t.Optional[bool] = None, - help_option_names: t.Optional[t.List[str]] = None, - token_normalize_func: t.Optional[t.Callable[[str], str]] = None, - color: t.Optional[bool] = None, - show_default: t.Optional[bool] = None, - ) -> None: - #: the parent context or `None` if none exists. - self.parent = parent - #: the :class:`Command` for this context. - self.command = command - #: the descriptive information name - self.info_name = info_name - #: Map of parameter names to their parsed values. Parameters - #: with ``expose_value=False`` are not stored. - self.params: t.Dict[str, t.Any] = {} - #: the leftover arguments. - self.args: t.List[str] = [] - #: protected arguments. These are arguments that are prepended - #: to `args` when certain parsing scenarios are encountered but - #: must be never propagated to another arguments. This is used - #: to implement nested parsing. - self.protected_args: t.List[str] = [] - #: the collected prefixes of the command's options. - self._opt_prefixes: t.Set[str] = set(parent._opt_prefixes) if parent else set() - - if obj is None and parent is not None: - obj = parent.obj - - #: the user object stored. - self.obj: t.Any = obj - self._meta: t.Dict[str, t.Any] = getattr(parent, "meta", {}) - - #: A dictionary (-like object) with defaults for parameters. 
- if ( - default_map is None - and info_name is not None - and parent is not None - and parent.default_map is not None - ): - default_map = parent.default_map.get(info_name) - - self.default_map: t.Optional[t.Dict[str, t.Any]] = default_map - - #: This flag indicates if a subcommand is going to be executed. A - #: group callback can use this information to figure out if it's - #: being executed directly or because the execution flow passes - #: onwards to a subcommand. By default it's None, but it can be - #: the name of the subcommand to execute. - #: - #: If chaining is enabled this will be set to ``'*'`` in case - #: any commands are executed. It is however not possible to - #: figure out which ones. If you require this knowledge you - #: should use a :func:`result_callback`. - self.invoked_subcommand: t.Optional[str] = None - - if terminal_width is None and parent is not None: - terminal_width = parent.terminal_width - - #: The width of the terminal (None is autodetection). - self.terminal_width: t.Optional[int] = terminal_width - - if max_content_width is None and parent is not None: - max_content_width = parent.max_content_width - - #: The maximum width of formatted content (None implies a sensible - #: default which is 80 for most things). - self.max_content_width: t.Optional[int] = max_content_width - - if allow_extra_args is None: - allow_extra_args = command.allow_extra_args - - #: Indicates if the context allows extra args or if it should - #: fail on parsing. - #: - #: .. versionadded:: 3.0 - self.allow_extra_args = allow_extra_args - - if allow_interspersed_args is None: - allow_interspersed_args = command.allow_interspersed_args - - #: Indicates if the context allows mixing of arguments and - #: options or not. - #: - #: .. versionadded:: 3.0 - self.allow_interspersed_args: bool = allow_interspersed_args - - if ignore_unknown_options is None: - ignore_unknown_options = command.ignore_unknown_options - - #: Instructs click to ignore options that a command does not - #: understand and will store it on the context for later - #: processing. This is primarily useful for situations where you - #: want to call into external programs. Generally this pattern is - #: strongly discouraged because it's not possibly to losslessly - #: forward all arguments. - #: - #: .. versionadded:: 4.0 - self.ignore_unknown_options: bool = ignore_unknown_options - - if help_option_names is None: - if parent is not None: - help_option_names = parent.help_option_names - else: - help_option_names = ["--help"] - - #: The names for the help options. - self.help_option_names: t.List[str] = help_option_names - - if token_normalize_func is None and parent is not None: - token_normalize_func = parent.token_normalize_func - - #: An optional normalization function for tokens. This is - #: options, choices, commands etc. - self.token_normalize_func: t.Optional[ - t.Callable[[str], str] - ] = token_normalize_func - - #: Indicates if resilient parsing is enabled. In that case Click - #: will do its best to not cause any failures and default values - #: will be ignored. Useful for completion. - self.resilient_parsing: bool = resilient_parsing - - # If there is no envvar prefix yet, but the parent has one and - # the command on this level has a name, we can expand the envvar - # prefix automatically. 
- if auto_envvar_prefix is None: - if ( - parent is not None - and parent.auto_envvar_prefix is not None - and self.info_name is not None - ): - auto_envvar_prefix = ( - f"{parent.auto_envvar_prefix}_{self.info_name.upper()}" - ) - else: - auto_envvar_prefix = auto_envvar_prefix.upper() - - if auto_envvar_prefix is not None: - auto_envvar_prefix = auto_envvar_prefix.replace("-", "_") - - self.auto_envvar_prefix: t.Optional[str] = auto_envvar_prefix - - if color is None and parent is not None: - color = parent.color - - #: Controls if styling output is wanted or not. - self.color: t.Optional[bool] = color - - if show_default is None and parent is not None: - show_default = parent.show_default - - #: Show option default values when formatting help text. - self.show_default: t.Optional[bool] = show_default - - self._close_callbacks: t.List[t.Callable[[], t.Any]] = [] - self._depth = 0 - self._parameter_source: t.Dict[str, ParameterSource] = {} - self._exit_stack = ExitStack() - - def to_info_dict(self) -> t.Dict[str, t.Any]: - """Gather information that could be useful for a tool generating - user-facing documentation. This traverses the entire CLI - structure. - - .. code-block:: python - - with Context(cli) as ctx: - info = ctx.to_info_dict() - - .. versionadded:: 8.0 - """ - return { - "command": self.command.to_info_dict(self), - "info_name": self.info_name, - "allow_extra_args": self.allow_extra_args, - "allow_interspersed_args": self.allow_interspersed_args, - "ignore_unknown_options": self.ignore_unknown_options, - "auto_envvar_prefix": self.auto_envvar_prefix, - } - - def __enter__(self) -> "Context": - self._depth += 1 - push_context(self) - return self - - def __exit__(self, exc_type, exc_value, tb): # type: ignore - self._depth -= 1 - if self._depth == 0: - self.close() - pop_context() - - @contextmanager - def scope(self, cleanup: bool = True) -> t.Iterator["Context"]: - """This helper method can be used with the context object to promote - it to the current thread local (see :func:`get_current_context`). - The default behavior of this is to invoke the cleanup functions which - can be disabled by setting `cleanup` to `False`. The cleanup - functions are typically used for things such as closing file handles. - - If the cleanup is intended the context object can also be directly - used as a context manager. - - Example usage:: - - with ctx.scope(): - assert get_current_context() is ctx - - This is equivalent:: - - with ctx: - assert get_current_context() is ctx - - .. versionadded:: 5.0 - - :param cleanup: controls if the cleanup functions should be run or - not. The default is to run these functions. In - some situations the context only wants to be - temporarily pushed in which case this can be disabled. - Nested pushes automatically defer the cleanup. - """ - if not cleanup: - self._depth += 1 - try: - with self as rv: - yield rv - finally: - if not cleanup: - self._depth -= 1 - - @property - def meta(self) -> t.Dict[str, t.Any]: - """This is a dictionary which is shared with all the contexts - that are nested. It exists so that click utilities can store some - state here if they need to. It is however the responsibility of - that code to manage this dictionary well. - - The keys are supposed to be unique dotted strings. For instance - module paths are a good choice for it. What is stored in there is - irrelevant for the operation of click. However what is important is - that code that places data here adheres to the general semantics of - the system. 
- - Example usage:: - - LANG_KEY = f'{__name__}.lang' - - def set_language(value): - ctx = get_current_context() - ctx.meta[LANG_KEY] = value - - def get_language(): - return get_current_context().meta.get(LANG_KEY, 'en_US') - - .. versionadded:: 5.0 - """ - return self._meta - - def make_formatter(self) -> HelpFormatter: - """Creates the :class:`~click.HelpFormatter` for the help and - usage output. - - To quickly customize the formatter class used without overriding - this method, set the :attr:`formatter_class` attribute. - - .. versionchanged:: 8.0 - Added the :attr:`formatter_class` attribute. - """ - return self.formatter_class( - width=self.terminal_width, max_width=self.max_content_width - ) - - def with_resource(self, context_manager: t.ContextManager[V]) -> V: - """Register a resource as if it were used in a ``with`` - statement. The resource will be cleaned up when the context is - popped. - - Uses :meth:`contextlib.ExitStack.enter_context`. It calls the - resource's ``__enter__()`` method and returns the result. When - the context is popped, it closes the stack, which calls the - resource's ``__exit__()`` method. - - To register a cleanup function for something that isn't a - context manager, use :meth:`call_on_close`. Or use something - from :mod:`contextlib` to turn it into a context manager first. - - .. code-block:: python - - @click.group() - @click.option("--name") - @click.pass_context - def cli(ctx): - ctx.obj = ctx.with_resource(connect_db(name)) - - :param context_manager: The context manager to enter. - :return: Whatever ``context_manager.__enter__()`` returns. - - .. versionadded:: 8.0 - """ - return self._exit_stack.enter_context(context_manager) - - def call_on_close(self, f: t.Callable[..., t.Any]) -> t.Callable[..., t.Any]: - """Register a function to be called when the context tears down. - - This can be used to close resources opened during the script - execution. Resources that support Python's context manager - protocol which would be used in a ``with`` statement should be - registered with :meth:`with_resource` instead. - - :param f: The function to execute on teardown. - """ - return self._exit_stack.callback(f) - - def close(self) -> None: - """Invoke all close callbacks registered with - :meth:`call_on_close`, and exit all context managers entered - with :meth:`with_resource`. - """ - self._exit_stack.close() - # In case the context is reused, create a new exit stack. - self._exit_stack = ExitStack() - - @property - def command_path(self) -> str: - """The computed command path. This is used for the ``usage`` - information on the help page. It's automatically created by - combining the info names of the chain of contexts to the root. 
- """ - rv = "" - if self.info_name is not None: - rv = self.info_name - if self.parent is not None: - parent_command_path = [self.parent.command_path] - - if isinstance(self.parent.command, Command): - for param in self.parent.command.get_params(self): - parent_command_path.extend(param.get_usage_pieces(self)) - - rv = f"{' '.join(parent_command_path)} {rv}" - return rv.lstrip() - - def find_root(self) -> "Context": - """Finds the outermost context.""" - node = self - while node.parent is not None: - node = node.parent - return node - - def find_object(self, object_type: t.Type[V]) -> t.Optional[V]: - """Finds the closest object of a given type.""" - node: t.Optional["Context"] = self - - while node is not None: - if isinstance(node.obj, object_type): - return node.obj - - node = node.parent - - return None - - def ensure_object(self, object_type: t.Type[V]) -> V: - """Like :meth:`find_object` but sets the innermost object to a - new instance of `object_type` if it does not exist. - """ - rv = self.find_object(object_type) - if rv is None: - self.obj = rv = object_type() - return rv - - @t.overload - def lookup_default( - self, name: str, call: "te.Literal[True]" = True - ) -> t.Optional[t.Any]: - ... - - @t.overload - def lookup_default( - self, name: str, call: "te.Literal[False]" = ... - ) -> t.Optional[t.Union[t.Any, t.Callable[[], t.Any]]]: - ... - - def lookup_default(self, name: str, call: bool = True) -> t.Optional[t.Any]: - """Get the default for a parameter from :attr:`default_map`. - - :param name: Name of the parameter. - :param call: If the default is a callable, call it. Disable to - return the callable instead. - - .. versionchanged:: 8.0 - Added the ``call`` parameter. - """ - if self.default_map is not None: - value = self.default_map.get(name) - - if call and callable(value): - return value() - - return value - - return None - - def fail(self, message: str) -> "te.NoReturn": - """Aborts the execution of the program with a specific error - message. - - :param message: the error message to fail with. - """ - raise UsageError(message, self) - - def abort(self) -> "te.NoReturn": - """Aborts the script.""" - raise Abort() - - def exit(self, code: int = 0) -> "te.NoReturn": - """Exits the application with a given exit code.""" - raise Exit(code) - - def get_usage(self) -> str: - """Helper method to get formatted usage string for the current - context and command. - """ - return self.command.get_usage(self) - - def get_help(self) -> str: - """Helper method to get formatted help page for the current - context and command. - """ - return self.command.get_help(self) - - def _make_sub_context(self, command: "Command") -> "Context": - """Create a new context of the same type as this context, but - for a new command. - - :meta private: - """ - return type(self)(command, info_name=command.name, parent=self) - - def invoke( - __self, # noqa: B902 - __callback: t.Union["Command", t.Callable[..., t.Any]], - *args: t.Any, - **kwargs: t.Any, - ) -> t.Any: - """Invokes a command callback in exactly the way it expects. There - are two ways to invoke this method: - - 1. the first argument can be a callback and all other arguments and - keyword arguments are forwarded directly to the function. - 2. the first argument is a click command object. In that case all - arguments are forwarded as well but proper click parameters - (options and click arguments) must be keyword arguments and Click - will fill in defaults. 
- - Note that before Click 3.2 keyword arguments were not properly filled - in against the intention of this code and no context was created. For - more information about this change and why it was done in a bugfix - release see :ref:`upgrade-to-3.2`. - - .. versionchanged:: 8.0 - All ``kwargs`` are tracked in :attr:`params` so they will be - passed if :meth:`forward` is called at multiple levels. - """ - if isinstance(__callback, Command): - other_cmd = __callback - - if other_cmd.callback is None: - raise TypeError( - "The given command does not have a callback that can be invoked." - ) - else: - __callback = other_cmd.callback - - ctx = __self._make_sub_context(other_cmd) - - for param in other_cmd.params: - if param.name not in kwargs and param.expose_value: - kwargs[param.name] = param.type_cast_value( # type: ignore - ctx, param.get_default(ctx) - ) - - # Track all kwargs as params, so that forward() will pass - # them on in subsequent calls. - ctx.params.update(kwargs) - else: - ctx = __self - - with augment_usage_errors(__self): - with ctx: - return __callback(*args, **kwargs) - - def forward( - __self, __cmd: "Command", *args: t.Any, **kwargs: t.Any # noqa: B902 - ) -> t.Any: - """Similar to :meth:`invoke` but fills in default keyword - arguments from the current context if the other command expects - it. This cannot invoke callbacks directly, only other commands. - - .. versionchanged:: 8.0 - All ``kwargs`` are tracked in :attr:`params` so they will be - passed if ``forward`` is called at multiple levels. - """ - # Can only forward to other commands, not direct callbacks. - if not isinstance(__cmd, Command): - raise TypeError("Callback is not a command.") - - for param in __self.params: - if param not in kwargs: - kwargs[param] = __self.params[param] - - return __self.invoke(__cmd, *args, **kwargs) - - def set_parameter_source(self, name: str, source: ParameterSource) -> None: - """Set the source of a parameter. This indicates the location - from which the value of the parameter was obtained. - - :param name: The name of the parameter. - :param source: A member of :class:`~click.core.ParameterSource`. - """ - self._parameter_source[name] = source - - def get_parameter_source(self, name: str) -> t.Optional[ParameterSource]: - """Get the source of a parameter. This indicates the location - from which the value of the parameter was obtained. - - This can be useful for determining when a user specified a value - on the command line that is the same as the default value. It - will be :attr:`~click.core.ParameterSource.DEFAULT` only if the - value was actually taken from the default. - - :param name: The name of the parameter. - :rtype: ParameterSource - - .. versionchanged:: 8.0 - Returns ``None`` if the parameter was not provided from any - source. - """ - return self._parameter_source.get(name) - - -class BaseCommand: - """The base command implements the minimal API contract of commands. - Most code will never use this as it does not implement a lot of useful - functionality but it can act as the direct subclass of alternative - parsing methods that do not depend on the Click parser. - - For instance, this can be used to bridge Click and other systems like - argparse or docopt. - - Because base commands do not implement a lot of the API that other - parts of Click take for granted, they are not supported for all - operations. For instance, they cannot be used with the decorators - usually and they have no built-in callback system. - - .. 
versionchanged:: 2.0 - Added the `context_settings` parameter. - - :param name: the name of the command to use unless a group overrides it. - :param context_settings: an optional dictionary with defaults that are - passed to the context object. - """ - - #: The context class to create with :meth:`make_context`. - #: - #: .. versionadded:: 8.0 - context_class: t.Type[Context] = Context - #: the default for the :attr:`Context.allow_extra_args` flag. - allow_extra_args = False - #: the default for the :attr:`Context.allow_interspersed_args` flag. - allow_interspersed_args = True - #: the default for the :attr:`Context.ignore_unknown_options` flag. - ignore_unknown_options = False - - def __init__( - self, - name: t.Optional[str], - context_settings: t.Optional[t.Dict[str, t.Any]] = None, - ) -> None: - #: the name the command thinks it has. Upon registering a command - #: on a :class:`Group` the group will default the command name - #: with this information. You should instead use the - #: :class:`Context`\'s :attr:`~Context.info_name` attribute. - self.name = name - - if context_settings is None: - context_settings = {} - - #: an optional dictionary with defaults passed to the context. - self.context_settings: t.Dict[str, t.Any] = context_settings - - def to_info_dict(self, ctx: Context) -> t.Dict[str, t.Any]: - """Gather information that could be useful for a tool generating - user-facing documentation. This traverses the entire structure - below this command. - - Use :meth:`click.Context.to_info_dict` to traverse the entire - CLI structure. - - :param ctx: A :class:`Context` representing this command. - - .. versionadded:: 8.0 - """ - return {"name": self.name} - - def __repr__(self) -> str: - return f"<{self.__class__.__name__} {self.name}>" - - def get_usage(self, ctx: Context) -> str: - raise NotImplementedError("Base commands cannot get usage") - - def get_help(self, ctx: Context) -> str: - raise NotImplementedError("Base commands cannot get help") - - def make_context( - self, - info_name: t.Optional[str], - args: t.List[str], - parent: t.Optional[Context] = None, - **extra: t.Any, - ) -> Context: - """This function when given an info name and arguments will kick - off the parsing and create a new :class:`Context`. It does not - invoke the actual command callback though. - - To quickly customize the context class used without overriding - this method, set the :attr:`context_class` attribute. - - :param info_name: the info name for this invocation. Generally this - is the most descriptive name for the script or - command. For the toplevel script it's usually - the name of the script, for commands below it it's - the name of the command. - :param args: the arguments to parse as list of strings. - :param parent: the parent context if available. - :param extra: extra keyword arguments forwarded to the context - constructor. - - .. versionchanged:: 8.0 - Added the :attr:`context_class` attribute. - """ - for key, value in self.context_settings.items(): - if key not in extra: - extra[key] = value - - ctx = self.context_class( - self, info_name=info_name, parent=parent, **extra # type: ignore - ) - - with ctx.scope(cleanup=False): - self.parse_args(ctx, args) - return ctx - - def parse_args(self, ctx: Context, args: t.List[str]) -> t.List[str]: - """Given a context and a list of arguments this creates the parser - and parses the arguments, then modifies the context as necessary. - This is automatically invoked by :meth:`make_context`. 
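# Illustrative sketch: swapping in a custom ``Context`` subclass via the
# ``context_class`` attribute, then driving parsing manually with ``make_context``.
# ``TracingContext``/``TracingCommand`` are hypothetical example classes.
import click

class TracingContext(click.Context):
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        click.echo(f"entering context for {self.info_name}", err=True)

class TracingCommand(click.Command):
    context_class = TracingContext

@click.command(cls=TracingCommand)
@click.option("--name", default="world")
def greet(name):
    click.echo(f"Hello {name}!")

# make_context() parses the args and builds the context without invoking the callback:
with greet.make_context("greet", ["--name", "Ada"]) as ctx:
    greet.invoke(ctx)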
- """ - raise NotImplementedError("Base commands do not know how to parse arguments.") - - def invoke(self, ctx: Context) -> t.Any: - """Given a context, this invokes the command. The default - implementation is raising a not implemented error. - """ - raise NotImplementedError("Base commands are not invokable by default") - - def shell_complete(self, ctx: Context, incomplete: str) -> t.List["CompletionItem"]: - """Return a list of completions for the incomplete value. Looks - at the names of chained multi-commands. - - Any command could be part of a chained multi-command, so sibling - commands are valid at any point during command completion. Other - command classes will return more completions. - - :param ctx: Invocation context for this command. - :param incomplete: Value being completed. May be empty. - - .. versionadded:: 8.0 - """ - from click.shell_completion import CompletionItem - - results: t.List["CompletionItem"] = [] - - while ctx.parent is not None: - ctx = ctx.parent - - if isinstance(ctx.command, MultiCommand) and ctx.command.chain: - results.extend( - CompletionItem(name, help=command.get_short_help_str()) - for name, command in _complete_visible_commands(ctx, incomplete) - if name not in ctx.protected_args - ) - - return results - - @t.overload - def main( - self, - args: t.Optional[t.Sequence[str]] = None, - prog_name: t.Optional[str] = None, - complete_var: t.Optional[str] = None, - standalone_mode: "te.Literal[True]" = True, - **extra: t.Any, - ) -> "te.NoReturn": - ... - - @t.overload - def main( - self, - args: t.Optional[t.Sequence[str]] = None, - prog_name: t.Optional[str] = None, - complete_var: t.Optional[str] = None, - standalone_mode: bool = ..., - **extra: t.Any, - ) -> t.Any: - ... - - def main( - self, - args: t.Optional[t.Sequence[str]] = None, - prog_name: t.Optional[str] = None, - complete_var: t.Optional[str] = None, - standalone_mode: bool = True, - windows_expand_args: bool = True, - **extra: t.Any, - ) -> t.Any: - """This is the way to invoke a script with all the bells and - whistles as a command line application. This will always terminate - the application after a call. If this is not wanted, ``SystemExit`` - needs to be caught. - - This method is also available by directly calling the instance of - a :class:`Command`. - - :param args: the arguments that should be used for parsing. If not - provided, ``sys.argv[1:]`` is used. - :param prog_name: the program name that should be used. By default - the program name is constructed by taking the file - name from ``sys.argv[0]``. - :param complete_var: the environment variable that controls the - bash completion support. The default is - ``"__COMPLETE"`` with prog_name in - uppercase. - :param standalone_mode: the default behavior is to invoke the script - in standalone mode. Click will then - handle exceptions and convert them into - error messages and the function will never - return but shut down the interpreter. If - this is set to `False` they will be - propagated to the caller and the return - value of this function is the return value - of :meth:`invoke`. - :param windows_expand_args: Expand glob patterns, user dir, and - env vars in command line args on Windows. - :param extra: extra keyword arguments are forwarded to the context - constructor. See :class:`Context` for more information. - - .. versionchanged:: 8.0.1 - Added the ``windows_expand_args`` parameter to allow - disabling command line arg expansion on Windows. - - .. 
versionchanged:: 8.0 - When taking arguments from ``sys.argv`` on Windows, glob - patterns, user dir, and env vars are expanded. - - .. versionchanged:: 3.0 - Added the ``standalone_mode`` parameter. - """ - if args is None: - args = sys.argv[1:] - - if os.name == "nt" and windows_expand_args: - args = _expand_args(args) - else: - args = list(args) - - if prog_name is None: - prog_name = _detect_program_name() - - # Process shell completion requests and exit early. - self._main_shell_completion(extra, prog_name, complete_var) - - try: - try: - with self.make_context(prog_name, args, **extra) as ctx: - rv = self.invoke(ctx) - if not standalone_mode: - return rv - # it's not safe to `ctx.exit(rv)` here! - # note that `rv` may actually contain data like "1" which - # has obvious effects - # more subtle case: `rv=[None, None]` can come out of - # chained commands which all returned `None` -- so it's not - # even always obvious that `rv` indicates success/failure - # by its truthiness/falsiness - ctx.exit() - except (EOFError, KeyboardInterrupt): - echo(file=sys.stderr) - raise Abort() from None - except ClickException as e: - if not standalone_mode: - raise - e.show() - sys.exit(e.exit_code) - except OSError as e: - if e.errno == errno.EPIPE: - sys.stdout = t.cast(t.TextIO, PacifyFlushWrapper(sys.stdout)) - sys.stderr = t.cast(t.TextIO, PacifyFlushWrapper(sys.stderr)) - sys.exit(1) - else: - raise - except Exit as e: - if standalone_mode: - sys.exit(e.exit_code) - else: - # in non-standalone mode, return the exit code - # note that this is only reached if `self.invoke` above raises - # an Exit explicitly -- thus bypassing the check there which - # would return its result - # the results of non-standalone execution may therefore be - # somewhat ambiguous: if there are codepaths which lead to - # `ctx.exit(1)` and to `return 1`, the caller won't be able to - # tell the difference between the two - return e.exit_code - except Abort: - if not standalone_mode: - raise - echo(_("Aborted!"), file=sys.stderr) - sys.exit(1) - - def _main_shell_completion( - self, - ctx_args: t.Dict[str, t.Any], - prog_name: str, - complete_var: t.Optional[str] = None, - ) -> None: - """Check if the shell is asking for tab completion, process - that, then exit early. Called from :meth:`main` before the - program is invoked. - - :param prog_name: Name of the executable in the shell. - :param complete_var: Name of the environment variable that holds - the completion instruction. Defaults to - ``_{PROG_NAME}_COMPLETE``. - """ - if complete_var is None: - complete_var = f"_{prog_name}_COMPLETE".replace("-", "_").upper() - - instruction = os.environ.get(complete_var) - - if not instruction: - return - - from .shell_completion import shell_complete - - rv = shell_complete(self, ctx_args, prog_name, complete_var, instruction) - sys.exit(rv) - - def __call__(self, *args: t.Any, **kwargs: t.Any) -> t.Any: - """Alias for :meth:`main`.""" - return self.main(*args, **kwargs) - - -class Command(BaseCommand): - """Commands are the basic building block of command line interfaces in - Click. A basic command handles command line parsing and might dispatch - more parsing to commands nested below it. - - :param name: the name of the command to use unless a group overrides it. - :param context_settings: an optional dictionary with defaults that are - passed to the context object. - :param callback: the callback to invoke. This is optional. - :param params: the parameters to register with this command. 
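# Illustrative sketch: calling ``main()`` with ``standalone_mode=False`` so the
# callback's return value (and any exception) reaches the caller instead of being
# converted into an exit code.
import click

@click.command()
@click.option("--n", type=int, default=2)
def square(n):
    return n * n

# In standalone mode (the default) this call would terminate the interpreter;
# with standalone_mode=False the return value of invoke() is propagated.
result = square.main(["--n", "7"], standalone_mode=False)
assert result == 49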
This can - be either :class:`Option` or :class:`Argument` objects. - :param help: the help string to use for this command. - :param epilog: like the help string but it's printed at the end of the - help page after everything else. - :param short_help: the short help to use for this command. This is - shown on the command listing of the parent command. - :param add_help_option: by default each command registers a ``--help`` - option. This can be disabled by this parameter. - :param no_args_is_help: this controls what happens if no arguments are - provided. This option is disabled by default. - If enabled this will add ``--help`` as argument - if no arguments are passed - :param hidden: hide this command from help outputs. - - :param deprecated: issues a message indicating that - the command is deprecated. - - .. versionchanged:: 8.1 - ``help``, ``epilog``, and ``short_help`` are stored unprocessed, - all formatting is done when outputting help text, not at init, - and is done even if not using the ``@command`` decorator. - - .. versionchanged:: 8.0 - Added a ``repr`` showing the command name. - - .. versionchanged:: 7.1 - Added the ``no_args_is_help`` parameter. - - .. versionchanged:: 2.0 - Added the ``context_settings`` parameter. - """ - - def __init__( - self, - name: t.Optional[str], - context_settings: t.Optional[t.Dict[str, t.Any]] = None, - callback: t.Optional[t.Callable[..., t.Any]] = None, - params: t.Optional[t.List["Parameter"]] = None, - help: t.Optional[str] = None, - epilog: t.Optional[str] = None, - short_help: t.Optional[str] = None, - options_metavar: t.Optional[str] = "[OPTIONS]", - add_help_option: bool = True, - no_args_is_help: bool = False, - hidden: bool = False, - deprecated: bool = False, - ) -> None: - super().__init__(name, context_settings) - #: the callback to execute when the command fires. This might be - #: `None` in which case nothing happens. - self.callback = callback - #: the list of parameters for this command in the order they - #: should show up in the help page and execute. Eager parameters - #: will automatically be handled before non eager ones. - self.params: t.List["Parameter"] = params or [] - self.help = help - self.epilog = epilog - self.options_metavar = options_metavar - self.short_help = short_help - self.add_help_option = add_help_option - self.no_args_is_help = no_args_is_help - self.hidden = hidden - self.deprecated = deprecated - - def to_info_dict(self, ctx: Context) -> t.Dict[str, t.Any]: - info_dict = super().to_info_dict(ctx) - info_dict.update( - params=[param.to_info_dict() for param in self.get_params(ctx)], - help=self.help, - epilog=self.epilog, - short_help=self.short_help, - hidden=self.hidden, - deprecated=self.deprecated, - ) - return info_dict - - def get_usage(self, ctx: Context) -> str: - """Formats the usage line into a string and returns it. - - Calls :meth:`format_usage` internally. - """ - formatter = ctx.make_formatter() - self.format_usage(ctx, formatter) - return formatter.getvalue().rstrip("\n") - - def get_params(self, ctx: Context) -> t.List["Parameter"]: - rv = self.params - help_option = self.get_help_option(ctx) - - if help_option is not None: - rv = [*rv, help_option] - - return rv - - def format_usage(self, ctx: Context, formatter: HelpFormatter) -> None: - """Writes the usage line into the formatter. - - This is a low-level method called by :meth:`get_usage`. 
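# Illustrative sketch: building a Command programmatically instead of via decorators,
# passing Option/Argument objects through the ``params`` argument described above.
import click

def _hello(count, name):
    for _ in range(count):
        click.echo(f"Hello {name}!")

hello = click.Command(
    name="hello",
    callback=_hello,
    params=[
        click.Option(["--count"], type=int, default=1, help="Number of greetings."),
        click.Argument(["name"]),
    ],
    help="Greet NAME one or more times.",
    epilog="This text is shown at the bottom of --help.",
)

if __name__ == "__main__":
    hello()  # same as hello.main()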
- """ - pieces = self.collect_usage_pieces(ctx) - formatter.write_usage(ctx.command_path, " ".join(pieces)) - - def collect_usage_pieces(self, ctx: Context) -> t.List[str]: - """Returns all the pieces that go into the usage line and returns - it as a list of strings. - """ - rv = [self.options_metavar] if self.options_metavar else [] - - for param in self.get_params(ctx): - rv.extend(param.get_usage_pieces(ctx)) - - return rv - - def get_help_option_names(self, ctx: Context) -> t.List[str]: - """Returns the names for the help option.""" - all_names = set(ctx.help_option_names) - for param in self.params: - all_names.difference_update(param.opts) - all_names.difference_update(param.secondary_opts) - return list(all_names) - - def get_help_option(self, ctx: Context) -> t.Optional["Option"]: - """Returns the help option object.""" - help_options = self.get_help_option_names(ctx) - - if not help_options or not self.add_help_option: - return None - - def show_help(ctx: Context, param: "Parameter", value: str) -> None: - if value and not ctx.resilient_parsing: - echo(ctx.get_help(), color=ctx.color) - ctx.exit() - - return Option( - help_options, - is_flag=True, - is_eager=True, - expose_value=False, - callback=show_help, - help=_("Show this message and exit."), - ) - - def make_parser(self, ctx: Context) -> OptionParser: - """Creates the underlying option parser for this command.""" - parser = OptionParser(ctx) - for param in self.get_params(ctx): - param.add_to_parser(parser, ctx) - return parser - - def get_help(self, ctx: Context) -> str: - """Formats the help into a string and returns it. - - Calls :meth:`format_help` internally. - """ - formatter = ctx.make_formatter() - self.format_help(ctx, formatter) - return formatter.getvalue().rstrip("\n") - - def get_short_help_str(self, limit: int = 45) -> str: - """Gets short help for the command or makes it by shortening the - long help string. - """ - if self.short_help: - text = inspect.cleandoc(self.short_help) - elif self.help: - text = make_default_short_help(self.help, limit) - else: - text = "" - - if self.deprecated: - text = _("(Deprecated) {text}").format(text=text) - - return text.strip() - - def format_help(self, ctx: Context, formatter: HelpFormatter) -> None: - """Writes the help into the formatter if it exists. - - This is a low-level method called by :meth:`get_help`. 
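# Illustrative sketch: the automatic ``--help`` option is built from
# ``Context.help_option_names``, so extra aliases can be configured per command
# through ``context_settings``.
import click

@click.command(context_settings={"help_option_names": ["-h", "--help"]})
@click.option("--name", default="world", help="Who to greet.")
def greet(name):
    """Greet somebody politely."""
    click.echo(f"Hello {name}!")

# greet.get_short_help_str() returns "Greet somebody politely." for command listings.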
- - This calls the following methods: - - - :meth:`format_usage` - - :meth:`format_help_text` - - :meth:`format_options` - - :meth:`format_epilog` - """ - self.format_usage(ctx, formatter) - self.format_help_text(ctx, formatter) - self.format_options(ctx, formatter) - self.format_epilog(ctx, formatter) - - def format_help_text(self, ctx: Context, formatter: HelpFormatter) -> None: - """Writes the help text to the formatter if it exists.""" - text = self.help if self.help is not None else "" - - if self.deprecated: - text = _("(Deprecated) {text}").format(text=text) - - if text: - text = inspect.cleandoc(text).partition("\f")[0] - formatter.write_paragraph() - - with formatter.indentation(): - formatter.write_text(text) - - def format_options(self, ctx: Context, formatter: HelpFormatter) -> None: - """Writes all the options into the formatter if they exist.""" - opts = [] - for param in self.get_params(ctx): - rv = param.get_help_record(ctx) - if rv is not None: - opts.append(rv) - - if opts: - with formatter.section(_("Options")): - formatter.write_dl(opts) - - def format_epilog(self, ctx: Context, formatter: HelpFormatter) -> None: - """Writes the epilog into the formatter if it exists.""" - if self.epilog: - epilog = inspect.cleandoc(self.epilog) - formatter.write_paragraph() - - with formatter.indentation(): - formatter.write_text(epilog) - - def parse_args(self, ctx: Context, args: t.List[str]) -> t.List[str]: - if not args and self.no_args_is_help and not ctx.resilient_parsing: - echo(ctx.get_help(), color=ctx.color) - ctx.exit() - - parser = self.make_parser(ctx) - opts, args, param_order = parser.parse_args(args=args) - - for param in iter_params_for_processing(param_order, self.get_params(ctx)): - value, args = param.handle_parse_result(ctx, opts, args) - - if args and not ctx.allow_extra_args and not ctx.resilient_parsing: - ctx.fail( - ngettext( - "Got unexpected extra argument ({args})", - "Got unexpected extra arguments ({args})", - len(args), - ).format(args=" ".join(map(str, args))) - ) - - ctx.args = args - ctx._opt_prefixes.update(parser._opt_prefixes) - return args - - def invoke(self, ctx: Context) -> t.Any: - """Given a context, this invokes the attached callback (if it exists) - in the right way. - """ - if self.deprecated: - message = _( - "DeprecationWarning: The command {name!r} is deprecated." - ).format(name=self.name) - echo(style(message, fg="red"), err=True) - - if self.callback is not None: - return ctx.invoke(self.callback, **ctx.params) - - def shell_complete(self, ctx: Context, incomplete: str) -> t.List["CompletionItem"]: - """Return a list of completions for the incomplete value. Looks - at the names of options and chained multi-commands. - - :param ctx: Invocation context for this command. - :param incomplete: Value being completed. May be empty. - - .. 
versionadded:: 8.0 - """ - from click.shell_completion import CompletionItem - - results: t.List["CompletionItem"] = [] - - if incomplete and not incomplete[0].isalnum(): - for param in self.get_params(ctx): - if ( - not isinstance(param, Option) - or param.hidden - or ( - not param.multiple - and ctx.get_parameter_source(param.name) # type: ignore - is ParameterSource.COMMANDLINE - ) - ): - continue - - results.extend( - CompletionItem(name, help=param.help) - for name in [*param.opts, *param.secondary_opts] - if name.startswith(incomplete) - ) - - results.extend(super().shell_complete(ctx, incomplete)) - return results - - -class MultiCommand(Command): - """A multi command is the basic implementation of a command that - dispatches to subcommands. The most common version is the - :class:`Group`. - - :param invoke_without_command: this controls how the multi command itself - is invoked. By default it's only invoked - if a subcommand is provided. - :param no_args_is_help: this controls what happens if no arguments are - provided. This option is enabled by default if - `invoke_without_command` is disabled or disabled - if it's enabled. If enabled this will add - ``--help`` as argument if no arguments are - passed. - :param subcommand_metavar: the string that is used in the documentation - to indicate the subcommand place. - :param chain: if this is set to `True` chaining of multiple subcommands - is enabled. This restricts the form of commands in that - they cannot have optional arguments but it allows - multiple commands to be chained together. - :param result_callback: The result callback to attach to this multi - command. This can be set or changed later with the - :meth:`result_callback` decorator. - """ - - allow_extra_args = True - allow_interspersed_args = False - - def __init__( - self, - name: t.Optional[str] = None, - invoke_without_command: bool = False, - no_args_is_help: t.Optional[bool] = None, - subcommand_metavar: t.Optional[str] = None, - chain: bool = False, - result_callback: t.Optional[t.Callable[..., t.Any]] = None, - **attrs: t.Any, - ) -> None: - super().__init__(name, **attrs) - - if no_args_is_help is None: - no_args_is_help = not invoke_without_command - - self.no_args_is_help = no_args_is_help - self.invoke_without_command = invoke_without_command - - if subcommand_metavar is None: - if chain: - subcommand_metavar = "COMMAND1 [ARGS]... [COMMAND2 [ARGS]...]..." - else: - subcommand_metavar = "COMMAND [ARGS]..." - - self.subcommand_metavar = subcommand_metavar - self.chain = chain - # The result callback that is stored. This can be set or - # overridden with the :func:`result_callback` decorator. - self._result_callback = result_callback - - if self.chain: - for param in self.params: - if isinstance(param, Argument) and not param.required: - raise RuntimeError( - "Multi commands in chain mode cannot have" - " optional arguments." 
- ) - - def to_info_dict(self, ctx: Context) -> t.Dict[str, t.Any]: - info_dict = super().to_info_dict(ctx) - commands = {} - - for name in self.list_commands(ctx): - command = self.get_command(ctx, name) - - if command is None: - continue - - sub_ctx = ctx._make_sub_context(command) - - with sub_ctx.scope(cleanup=False): - commands[name] = command.to_info_dict(sub_ctx) - - info_dict.update(commands=commands, chain=self.chain) - return info_dict - - def collect_usage_pieces(self, ctx: Context) -> t.List[str]: - rv = super().collect_usage_pieces(ctx) - rv.append(self.subcommand_metavar) - return rv - - def format_options(self, ctx: Context, formatter: HelpFormatter) -> None: - super().format_options(ctx, formatter) - self.format_commands(ctx, formatter) - - def result_callback(self, replace: bool = False) -> t.Callable[[F], F]: - """Adds a result callback to the command. By default if a - result callback is already registered this will chain them but - this can be disabled with the `replace` parameter. The result - callback is invoked with the return value of the subcommand - (or the list of return values from all subcommands if chaining - is enabled) as well as the parameters as they would be passed - to the main callback. - - Example:: - - @click.group() - @click.option('-i', '--input', default=23) - def cli(input): - return 42 - - @cli.result_callback() - def process_result(result, input): - return result + input - - :param replace: if set to `True` an already existing result - callback will be removed. - - .. versionchanged:: 8.0 - Renamed from ``resultcallback``. - - .. versionadded:: 3.0 - """ - - def decorator(f: F) -> F: - old_callback = self._result_callback - - if old_callback is None or replace: - self._result_callback = f - return f - - def function(__value, *args, **kwargs): # type: ignore - inner = old_callback(__value, *args, **kwargs) # type: ignore - return f(inner, *args, **kwargs) - - self._result_callback = rv = update_wrapper(t.cast(F, function), f) - return rv - - return decorator - - def format_commands(self, ctx: Context, formatter: HelpFormatter) -> None: - """Extra format methods for multi methods that adds all the commands - after the options. - """ - commands = [] - for subcommand in self.list_commands(ctx): - cmd = self.get_command(ctx, subcommand) - # What is this, the tool lied about a command. 
Ignore it - if cmd is None: - continue - if cmd.hidden: - continue - - commands.append((subcommand, cmd)) - - # allow for 3 times the default spacing - if len(commands): - limit = formatter.width - 6 - max(len(cmd[0]) for cmd in commands) - - rows = [] - for subcommand, cmd in commands: - help = cmd.get_short_help_str(limit) - rows.append((subcommand, help)) - - if rows: - with formatter.section(_("Commands")): - formatter.write_dl(rows) - - def parse_args(self, ctx: Context, args: t.List[str]) -> t.List[str]: - if not args and self.no_args_is_help and not ctx.resilient_parsing: - echo(ctx.get_help(), color=ctx.color) - ctx.exit() - - rest = super().parse_args(ctx, args) - - if self.chain: - ctx.protected_args = rest - ctx.args = [] - elif rest: - ctx.protected_args, ctx.args = rest[:1], rest[1:] - - return ctx.args - - def invoke(self, ctx: Context) -> t.Any: - def _process_result(value: t.Any) -> t.Any: - if self._result_callback is not None: - value = ctx.invoke(self._result_callback, value, **ctx.params) - return value - - if not ctx.protected_args: - if self.invoke_without_command: - # No subcommand was invoked, so the result callback is - # invoked with the group return value for regular - # groups, or an empty list for chained groups. - with ctx: - rv = super().invoke(ctx) - return _process_result([] if self.chain else rv) - ctx.fail(_("Missing command.")) - - # Fetch args back out - args = [*ctx.protected_args, *ctx.args] - ctx.args = [] - ctx.protected_args = [] - - # If we're not in chain mode, we only allow the invocation of a - # single command but we also inform the current context about the - # name of the command to invoke. - if not self.chain: - # Make sure the context is entered so we do not clean up - # resources until the result processor has worked. - with ctx: - cmd_name, cmd, args = self.resolve_command(ctx, args) - assert cmd is not None - ctx.invoked_subcommand = cmd_name - super().invoke(ctx) - sub_ctx = cmd.make_context(cmd_name, args, parent=ctx) - with sub_ctx: - return _process_result(sub_ctx.command.invoke(sub_ctx)) - - # In chain mode we create the contexts step by step, but after the - # base command has been invoked. Because at that point we do not - # know the subcommands yet, the invoked subcommand attribute is - # set to ``*`` to inform the command that subcommands are executed - # but nothing else. - with ctx: - ctx.invoked_subcommand = "*" if args else None - super().invoke(ctx) - - # Otherwise we make every single context and invoke them in a - # chain. In that case the return value to the result processor - # is the list of all invoked subcommand's results. - contexts = [] - while args: - cmd_name, cmd, args = self.resolve_command(ctx, args) - assert cmd is not None - sub_ctx = cmd.make_context( - cmd_name, - args, - parent=ctx, - allow_extra_args=True, - allow_interspersed_args=False, - ) - contexts.append(sub_ctx) - args, sub_ctx.args = sub_ctx.args, [] - - rv = [] - for sub_ctx in contexts: - with sub_ctx: - rv.append(sub_ctx.command.invoke(sub_ctx)) - return _process_result(rv) - - def resolve_command( - self, ctx: Context, args: t.List[str] - ) -> t.Tuple[t.Optional[str], t.Optional[Command], t.List[str]]: - cmd_name = make_str(args[0]) - original_cmd_name = cmd_name - - # Get the command - cmd = self.get_command(ctx, cmd_name) - - # If we can't find the command but there is a normalization - # function available, we try with that one. 
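# Illustrative sketch: a chained group.  Each subcommand returns a value and the
# result callback receives the list of all return values plus the group's own params.
import click

@click.group(chain=True)
@click.option("--sep", default=", ")
def pipeline(sep):
    pass

@pipeline.command()
def lowercase():
    return "lowercase"

@pipeline.command()
def strip():
    return "strip"

@pipeline.result_callback()
def report(results, sep):
    click.echo(sep.join(results))

# e.g. ``pipeline(["lowercase", "strip"])`` would print "lowercase, strip"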
- if cmd is None and ctx.token_normalize_func is not None: - cmd_name = ctx.token_normalize_func(cmd_name) - cmd = self.get_command(ctx, cmd_name) - - # If we don't find the command we want to show an error message - # to the user that it was not provided. However, there is - # something else we should do: if the first argument looks like - # an option we want to kick off parsing again for arguments to - # resolve things like --help which now should go to the main - # place. - if cmd is None and not ctx.resilient_parsing: - if split_opt(cmd_name)[0]: - self.parse_args(ctx, ctx.args) - ctx.fail(_("No such command {name!r}.").format(name=original_cmd_name)) - return cmd_name if cmd else None, cmd, args[1:] - - def get_command(self, ctx: Context, cmd_name: str) -> t.Optional[Command]: - """Given a context and a command name, this returns a - :class:`Command` object if it exists or returns `None`. - """ - raise NotImplementedError - - def list_commands(self, ctx: Context) -> t.List[str]: - """Returns a list of subcommand names in the order they should - appear. - """ - return [] - - def shell_complete(self, ctx: Context, incomplete: str) -> t.List["CompletionItem"]: - """Return a list of completions for the incomplete value. Looks - at the names of options, subcommands, and chained - multi-commands. - - :param ctx: Invocation context for this command. - :param incomplete: Value being completed. May be empty. - - .. versionadded:: 8.0 - """ - from click.shell_completion import CompletionItem - - results = [ - CompletionItem(name, help=command.get_short_help_str()) - for name, command in _complete_visible_commands(ctx, incomplete) - ] - results.extend(super().shell_complete(ctx, incomplete)) - return results - - -class Group(MultiCommand): - """A group allows a command to have subcommands attached. This is - the most common way to implement nesting in Click. - - :param name: The name of the group command. - :param commands: A dict mapping names to :class:`Command` objects. - Can also be a list of :class:`Command`, which will use - :attr:`Command.name` to create the dict. - :param attrs: Other command arguments described in - :class:`MultiCommand`, :class:`Command`, and - :class:`BaseCommand`. - - .. versionchanged:: 8.0 - The ``commmands`` argument can be a list of command objects. - """ - - #: If set, this is used by the group's :meth:`command` decorator - #: as the default :class:`Command` class. This is useful to make all - #: subcommands use a custom command class. - #: - #: .. versionadded:: 8.0 - command_class: t.Optional[t.Type[Command]] = None - - #: If set, this is used by the group's :meth:`group` decorator - #: as the default :class:`Group` class. This is useful to make all - #: subgroups use a custom group class. - #: - #: If set to the special value :class:`type` (literally - #: ``group_class = type``), this group's class will be used as the - #: default class. This makes a custom group class continue to make - #: custom groups. - #: - #: .. 
versionadded:: 8.0 - group_class: t.Optional[t.Union[t.Type["Group"], t.Type[type]]] = None - # Literal[type] isn't valid, so use Type[type] - - def __init__( - self, - name: t.Optional[str] = None, - commands: t.Optional[t.Union[t.Dict[str, Command], t.Sequence[Command]]] = None, - **attrs: t.Any, - ) -> None: - super().__init__(name, **attrs) - - if commands is None: - commands = {} - elif isinstance(commands, abc.Sequence): - commands = {c.name: c for c in commands if c.name is not None} - - #: The registered subcommands by their exported names. - self.commands: t.Dict[str, Command] = commands - - def add_command(self, cmd: Command, name: t.Optional[str] = None) -> None: - """Registers another :class:`Command` with this group. If the name - is not provided, the name of the command is used. - """ - name = name or cmd.name - if name is None: - raise TypeError("Command has no name.") - _check_multicommand(self, name, cmd, register=True) - self.commands[name] = cmd - - @t.overload - def command(self, __func: t.Callable[..., t.Any]) -> Command: - ... - - @t.overload - def command( - self, *args: t.Any, **kwargs: t.Any - ) -> t.Callable[[t.Callable[..., t.Any]], Command]: - ... - - def command( - self, *args: t.Any, **kwargs: t.Any - ) -> t.Union[t.Callable[[t.Callable[..., t.Any]], Command], Command]: - """A shortcut decorator for declaring and attaching a command to - the group. This takes the same arguments as :func:`command` and - immediately registers the created command with this group by - calling :meth:`add_command`. - - To customize the command class used, set the - :attr:`command_class` attribute. - - .. versionchanged:: 8.1 - This decorator can be applied without parentheses. - - .. versionchanged:: 8.0 - Added the :attr:`command_class` attribute. - """ - from .decorators import command - - if self.command_class and kwargs.get("cls") is None: - kwargs["cls"] = self.command_class - - func: t.Optional[t.Callable] = None - - if args and callable(args[0]): - assert ( - len(args) == 1 and not kwargs - ), "Use 'command(**kwargs)(callable)' to provide arguments." - (func,) = args - args = () - - def decorator(f: t.Callable[..., t.Any]) -> Command: - cmd: Command = command(*args, **kwargs)(f) - self.add_command(cmd) - return cmd - - if func is not None: - return decorator(func) - - return decorator - - @t.overload - def group(self, __func: t.Callable[..., t.Any]) -> "Group": - ... - - @t.overload - def group( - self, *args: t.Any, **kwargs: t.Any - ) -> t.Callable[[t.Callable[..., t.Any]], "Group"]: - ... - - def group( - self, *args: t.Any, **kwargs: t.Any - ) -> t.Union[t.Callable[[t.Callable[..., t.Any]], "Group"], "Group"]: - """A shortcut decorator for declaring and attaching a group to - the group. This takes the same arguments as :func:`group` and - immediately registers the created group with this group by - calling :meth:`add_command`. - - To customize the group class used, set the :attr:`group_class` - attribute. - - .. versionchanged:: 8.1 - This decorator can be applied without parentheses. - - .. versionchanged:: 8.0 - Added the :attr:`group_class` attribute. - """ - from .decorators import group - - func: t.Optional[t.Callable] = None - - if args and callable(args[0]): - assert ( - len(args) == 1 and not kwargs - ), "Use 'group(**kwargs)(callable)' to provide arguments." 
- (func,) = args - args = () - - if self.group_class is not None and kwargs.get("cls") is None: - if self.group_class is type: - kwargs["cls"] = type(self) - else: - kwargs["cls"] = self.group_class - - def decorator(f: t.Callable[..., t.Any]) -> "Group": - cmd: Group = group(*args, **kwargs)(f) - self.add_command(cmd) - return cmd - - if func is not None: - return decorator(func) - - return decorator - - def get_command(self, ctx: Context, cmd_name: str) -> t.Optional[Command]: - return self.commands.get(cmd_name) - - def list_commands(self, ctx: Context) -> t.List[str]: - return sorted(self.commands) - - -class CommandCollection(MultiCommand): - """A command collection is a multi command that merges multiple multi - commands together into one. This is a straightforward implementation - that accepts a list of different multi commands as sources and - provides all the commands for each of them. - """ - - def __init__( - self, - name: t.Optional[str] = None, - sources: t.Optional[t.List[MultiCommand]] = None, - **attrs: t.Any, - ) -> None: - super().__init__(name, **attrs) - #: The list of registered multi commands. - self.sources: t.List[MultiCommand] = sources or [] - - def add_source(self, multi_cmd: MultiCommand) -> None: - """Adds a new multi command to the chain dispatcher.""" - self.sources.append(multi_cmd) - - def get_command(self, ctx: Context, cmd_name: str) -> t.Optional[Command]: - for source in self.sources: - rv = source.get_command(ctx, cmd_name) - - if rv is not None: - if self.chain: - _check_multicommand(self, cmd_name, rv) - - return rv - - return None - - def list_commands(self, ctx: Context) -> t.List[str]: - rv: t.Set[str] = set() - - for source in self.sources: - rv.update(source.list_commands(ctx)) - - return sorted(rv) - - -def _check_iter(value: t.Any) -> t.Iterator[t.Any]: - """Check if the value is iterable but not a string. Raises a type - error, or return an iterator over the value. - """ - if isinstance(value, str): - raise TypeError - - return iter(value) - - -class Parameter: - r"""A parameter to a command comes in two versions: they are either - :class:`Option`\s or :class:`Argument`\s. Other subclasses are currently - not supported by design as some of the internals for parsing are - intentionally not finalized. - - Some settings are supported by both options and arguments. - - :param param_decls: the parameter declarations for this option or - argument. This is a list of flags or argument - names. - :param type: the type that should be used. Either a :class:`ParamType` - or a Python type. The later is converted into the former - automatically if supported. - :param required: controls if this is optional or not. - :param default: the default value if omitted. This can also be a callable, - in which case it's invoked when the default is needed - without any arguments. - :param callback: A function to further process or validate the value - after type conversion. It is called as ``f(ctx, param, value)`` - and must return the value. It is called for all sources, - including prompts. - :param nargs: the number of arguments to match. If not ``1`` the return - value is a tuple instead of single value. The default for - nargs is ``1`` (except if the type is a tuple, then it's - the arity of the tuple). If ``nargs=-1``, all remaining - parameters are collected. - :param metavar: how the value is represented in the help page. 
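# Illustrative sketch: merging two independent groups into a single CLI with
# ``CommandCollection``; command lookup consults every registered source in order.
import click

@click.group()
def user_cli():
    pass

@user_cli.command()
def login():
    click.echo("logging in")

@click.group()
def admin_cli():
    pass

@admin_cli.command()
def migrate():
    click.echo("migrating")

cli = click.CommandCollection(sources=[user_cli, admin_cli])

if __name__ == "__main__":
    cli()   # exposes both ``login`` and ``migrate``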
- :param expose_value: if this is `True` then the value is passed onwards - to the command callback and stored on the context, - otherwise it's skipped. - :param is_eager: eager values are processed before non eager ones. This - should not be set for arguments or it will inverse the - order of processing. - :param envvar: a string or list of strings that are environment variables - that should be checked. - :param shell_complete: A function that returns custom shell - completions. Used instead of the param's type completion if - given. Takes ``ctx, param, incomplete`` and must return a list - of :class:`~click.shell_completion.CompletionItem` or a list of - strings. - - .. versionchanged:: 8.0 - ``process_value`` validates required parameters and bounded - ``nargs``, and invokes the parameter callback before returning - the value. This allows the callback to validate prompts. - ``full_process_value`` is removed. - - .. versionchanged:: 8.0 - ``autocompletion`` is renamed to ``shell_complete`` and has new - semantics described above. The old name is deprecated and will - be removed in 8.1, until then it will be wrapped to match the - new requirements. - - .. versionchanged:: 8.0 - For ``multiple=True, nargs>1``, the default must be a list of - tuples. - - .. versionchanged:: 8.0 - Setting a default is no longer required for ``nargs>1``, it will - default to ``None``. ``multiple=True`` or ``nargs=-1`` will - default to ``()``. - - .. versionchanged:: 7.1 - Empty environment variables are ignored rather than taking the - empty string value. This makes it possible for scripts to clear - variables if they can't unset them. - - .. versionchanged:: 2.0 - Changed signature for parameter callback to also be passed the - parameter. The old callback format will still work, but it will - raise a warning to give you a chance to migrate the code easier. - """ - - param_type_name = "parameter" - - def __init__( - self, - param_decls: t.Optional[t.Sequence[str]] = None, - type: t.Optional[t.Union[types.ParamType, t.Any]] = None, - required: bool = False, - default: t.Optional[t.Union[t.Any, t.Callable[[], t.Any]]] = None, - callback: t.Optional[t.Callable[[Context, "Parameter", t.Any], t.Any]] = None, - nargs: t.Optional[int] = None, - multiple: bool = False, - metavar: t.Optional[str] = None, - expose_value: bool = True, - is_eager: bool = False, - envvar: t.Optional[t.Union[str, t.Sequence[str]]] = None, - shell_complete: t.Optional[ - t.Callable[ - [Context, "Parameter", str], - t.Union[t.List["CompletionItem"], t.List[str]], - ] - ] = None, - ) -> None: - self.name, self.opts, self.secondary_opts = self._parse_decls( - param_decls or (), expose_value - ) - self.type = types.convert_type(type, default) - - # Default nargs to what the type tells us if we have that - # information available. - if nargs is None: - if self.type.is_composite: - nargs = self.type.arity - else: - nargs = 1 - - self.required = required - self.callback = callback - self.nargs = nargs - self.multiple = multiple - self.expose_value = expose_value - self.default = default - self.is_eager = is_eager - self.metavar = metavar - self.envvar = envvar - self._custom_shell_complete = shell_complete - - if __debug__: - if self.type.is_composite and nargs != self.type.arity: - raise ValueError( - f"'nargs' must be {self.type.arity} (or None) for" - f" type {self.type!r}, but it was {nargs}." - ) - - # Skip no default or callable default. 
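# Illustrative sketch: a custom ``shell_complete`` function for a parameter, as
# described above.  Returning plain strings is allowed; click wraps them in
# CompletionItem objects.  The environment names are hypothetical.
import click

def complete_env(ctx, param, incomplete):
    environments = ["dev", "staging", "prod"]
    return [env for env in environments if env.startswith(incomplete)]

@click.command()
@click.option("--env", shell_complete=complete_env)
def deploy(env):
    click.echo(f"deploying to {env}")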
- check_default = default if not callable(default) else None - - if check_default is not None: - if multiple: - try: - # Only check the first value against nargs. - check_default = next(_check_iter(check_default), None) - except TypeError: - raise ValueError( - "'default' must be a list when 'multiple' is true." - ) from None - - # Can be None for multiple with empty default. - if nargs != 1 and check_default is not None: - try: - _check_iter(check_default) - except TypeError: - if multiple: - message = ( - "'default' must be a list of lists when 'multiple' is" - " true and 'nargs' != 1." - ) - else: - message = "'default' must be a list when 'nargs' != 1." - - raise ValueError(message) from None - - if nargs > 1 and len(check_default) != nargs: - subject = "item length" if multiple else "length" - raise ValueError( - f"'default' {subject} must match nargs={nargs}." - ) - - def to_info_dict(self) -> t.Dict[str, t.Any]: - """Gather information that could be useful for a tool generating - user-facing documentation. - - Use :meth:`click.Context.to_info_dict` to traverse the entire - CLI structure. - - .. versionadded:: 8.0 - """ - return { - "name": self.name, - "param_type_name": self.param_type_name, - "opts": self.opts, - "secondary_opts": self.secondary_opts, - "type": self.type.to_info_dict(), - "required": self.required, - "nargs": self.nargs, - "multiple": self.multiple, - "default": self.default, - "envvar": self.envvar, - } - - def __repr__(self) -> str: - return f"<{self.__class__.__name__} {self.name}>" - - def _parse_decls( - self, decls: t.Sequence[str], expose_value: bool - ) -> t.Tuple[t.Optional[str], t.List[str], t.List[str]]: - raise NotImplementedError() - - @property - def human_readable_name(self) -> str: - """Returns the human readable name of this parameter. This is the - same as the name for options, but the metavar for arguments. - """ - return self.name # type: ignore - - def make_metavar(self) -> str: - if self.metavar is not None: - return self.metavar - - metavar = self.type.get_metavar(self) - - if metavar is None: - metavar = self.type.name.upper() - - if self.nargs != 1: - metavar += "..." - - return metavar - - @t.overload - def get_default( - self, ctx: Context, call: "te.Literal[True]" = True - ) -> t.Optional[t.Any]: - ... - - @t.overload - def get_default( - self, ctx: Context, call: bool = ... - ) -> t.Optional[t.Union[t.Any, t.Callable[[], t.Any]]]: - ... - - def get_default( - self, ctx: Context, call: bool = True - ) -> t.Optional[t.Union[t.Any, t.Callable[[], t.Any]]]: - """Get the default for the parameter. Tries - :meth:`Context.lookup_default` first, then the local default. - - :param ctx: Current context. - :param call: If the default is a callable, call it. Disable to - return the callable instead. - - .. versionchanged:: 8.0.2 - Type casting is no longer performed when getting a default. - - .. versionchanged:: 8.0.1 - Type casting can fail in resilient parsing mode. Invalid - defaults will not prevent showing help text. - - .. versionchanged:: 8.0 - Looks at ``ctx.default_map`` first. - - .. versionchanged:: 8.0 - Added the ``call`` parameter. 
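# Illustrative sketch: defaults may be callables, and ``Context.default_map``
# (checked first by ``get_default`` via ``lookup_default``) can override them,
# e.g. with values loaded from a config file.
import time
import click

@click.command()
@click.option("--stamp", default=lambda: int(time.time()), show_default="(current time)")
def tag(stamp):
    click.echo(f"tag-{stamp}")

# A default_map entry wins over the option's own default, for example:
# tag.main([], default_map={"stamp": 1234}, standalone_mode=False)  # prints "tag-1234"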
- """ - value = ctx.lookup_default(self.name, call=False) # type: ignore - - if value is None: - value = self.default - - if call and callable(value): - value = value() - - return value - - def add_to_parser(self, parser: OptionParser, ctx: Context) -> None: - raise NotImplementedError() - - def consume_value( - self, ctx: Context, opts: t.Mapping[str, t.Any] - ) -> t.Tuple[t.Any, ParameterSource]: - value = opts.get(self.name) # type: ignore - source = ParameterSource.COMMANDLINE - - if value is None: - value = self.value_from_envvar(ctx) - source = ParameterSource.ENVIRONMENT - - if value is None: - value = ctx.lookup_default(self.name) # type: ignore - source = ParameterSource.DEFAULT_MAP - - if value is None: - value = self.get_default(ctx) - source = ParameterSource.DEFAULT - - return value, source - - def type_cast_value(self, ctx: Context, value: t.Any) -> t.Any: - """Convert and validate a value against the option's - :attr:`type`, :attr:`multiple`, and :attr:`nargs`. - """ - if value is None: - return () if self.multiple or self.nargs == -1 else None - - def check_iter(value: t.Any) -> t.Iterator: - try: - return _check_iter(value) - except TypeError: - # This should only happen when passing in args manually, - # the parser should construct an iterable when parsing - # the command line. - raise BadParameter( - _("Value must be an iterable."), ctx=ctx, param=self - ) from None - - if self.nargs == 1 or self.type.is_composite: - convert: t.Callable[[t.Any], t.Any] = partial( - self.type, param=self, ctx=ctx - ) - elif self.nargs == -1: - - def convert(value: t.Any) -> t.Tuple: - return tuple(self.type(x, self, ctx) for x in check_iter(value)) - - else: # nargs > 1 - - def convert(value: t.Any) -> t.Tuple: - value = tuple(check_iter(value)) - - if len(value) != self.nargs: - raise BadParameter( - ngettext( - "Takes {nargs} values but 1 was given.", - "Takes {nargs} values but {len} were given.", - len(value), - ).format(nargs=self.nargs, len=len(value)), - ctx=ctx, - param=self, - ) - - return tuple(self.type(x, self, ctx) for x in value) - - if self.multiple: - return tuple(convert(x) for x in check_iter(value)) - - return convert(value) - - def value_is_missing(self, value: t.Any) -> bool: - if value is None: - return True - - if (self.nargs != 1 or self.multiple) and value == (): - return True - - return False - - def process_value(self, ctx: Context, value: t.Any) -> t.Any: - value = self.type_cast_value(ctx, value) - - if self.required and self.value_is_missing(value): - raise MissingParameter(ctx=ctx, param=self) - - if self.callback is not None: - value = self.callback(ctx, self, value) - - return value - - def resolve_envvar_value(self, ctx: Context) -> t.Optional[str]: - if self.envvar is None: - return None - - if isinstance(self.envvar, str): - rv = os.environ.get(self.envvar) - - if rv: - return rv - else: - for envvar in self.envvar: - rv = os.environ.get(envvar) - - if rv: - return rv - - return None - - def value_from_envvar(self, ctx: Context) -> t.Optional[t.Any]: - rv: t.Optional[t.Any] = self.resolve_envvar_value(ctx) - - if rv is not None and self.nargs != 1: - rv = self.type.split_envvar_value(rv) - - return rv - - def handle_parse_result( - self, ctx: Context, opts: t.Mapping[str, t.Any], args: t.List[str] - ) -> t.Tuple[t.Any, t.List[str]]: - with augment_usage_errors(ctx, param=self): - value, source = self.consume_value(ctx, opts) - ctx.set_parameter_source(self.name, source) # type: ignore - - try: - value = self.process_value(ctx, value) - except Exception: 
- if not ctx.resilient_parsing: - raise - - value = None - - if self.expose_value: - ctx.params[self.name] = value # type: ignore - - return value, args - - def get_help_record(self, ctx: Context) -> t.Optional[t.Tuple[str, str]]: - pass - - def get_usage_pieces(self, ctx: Context) -> t.List[str]: - return [] - - def get_error_hint(self, ctx: Context) -> str: - """Get a stringified version of the param for use in error messages to - indicate which param caused the error. - """ - hint_list = self.opts or [self.human_readable_name] - return " / ".join(f"'{x}'" for x in hint_list) - - def shell_complete(self, ctx: Context, incomplete: str) -> t.List["CompletionItem"]: - """Return a list of completions for the incomplete value. If a - ``shell_complete`` function was given during init, it is used. - Otherwise, the :attr:`type` - :meth:`~click.types.ParamType.shell_complete` function is used. - - :param ctx: Invocation context for this command. - :param incomplete: Value being completed. May be empty. - - .. versionadded:: 8.0 - """ - if self._custom_shell_complete is not None: - results = self._custom_shell_complete(ctx, self, incomplete) - - if results and isinstance(results[0], str): - from click.shell_completion import CompletionItem - - results = [CompletionItem(c) for c in results] - - return t.cast(t.List["CompletionItem"], results) - - return self.type.shell_complete(ctx, self, incomplete) - - -class Option(Parameter): - """Options are usually optional values on the command line and - have some extra features that arguments don't have. - - All other parameters are passed onwards to the parameter constructor. - - :param show_default: Show the default value for this option in its - help text. Values are not shown by default, unless - :attr:`Context.show_default` is ``True``. If this value is a - string, it shows that string in parentheses instead of the - actual value. This is particularly useful for dynamic options. - For single option boolean flags, the default remains hidden if - its value is ``False``. - :param show_envvar: Controls if an environment variable should be - shown on the help page. Normally, environment variables are not - shown. - :param prompt: If set to ``True`` or a non empty string then the - user will be prompted for input. If set to ``True`` the prompt - will be the option name capitalized. - :param confirmation_prompt: Prompt a second time to confirm the - value if it was prompted for. Can be set to a string instead of - ``True`` to customize the message. - :param prompt_required: If set to ``False``, the user will be - prompted for input only when the option was specified as a flag - without a value. - :param hide_input: If this is ``True`` then the input on the prompt - will be hidden from the user. This is useful for password input. - :param is_flag: forces this option to act as a flag. The default is - auto detection. - :param flag_value: which value should be used for this flag if it's - enabled. This is set to a boolean automatically if - the option string contains a slash to mark two options. - :param multiple: if this is set to `True` then the argument is accepted - multiple times and recorded. This is similar to ``nargs`` - in how it works but supports arbitrary number of - arguments. - :param count: this flag makes an option increment an integer. - :param allow_from_autoenv: if this is enabled then the value of this - parameter will be pulled from an environment - variable in case a prefix is defined on the - context. - :param help: the help string. 
- :param hidden: hide this option from help outputs. - - .. versionchanged:: 8.1.0 - Help text indentation is cleaned here instead of only in the - ``@option`` decorator. - - .. versionchanged:: 8.1.0 - The ``show_default`` parameter overrides - ``Context.show_default``. - - .. versionchanged:: 8.1.0 - The default of a single option boolean flag is not shown if the - default value is ``False``. - - .. versionchanged:: 8.0.1 - ``type`` is detected from ``flag_value`` if given. - """ - - param_type_name = "option" - - def __init__( - self, - param_decls: t.Optional[t.Sequence[str]] = None, - show_default: t.Union[bool, str, None] = None, - prompt: t.Union[bool, str] = False, - confirmation_prompt: t.Union[bool, str] = False, - prompt_required: bool = True, - hide_input: bool = False, - is_flag: t.Optional[bool] = None, - flag_value: t.Optional[t.Any] = None, - multiple: bool = False, - count: bool = False, - allow_from_autoenv: bool = True, - type: t.Optional[t.Union[types.ParamType, t.Any]] = None, - help: t.Optional[str] = None, - hidden: bool = False, - show_choices: bool = True, - show_envvar: bool = False, - **attrs: t.Any, - ) -> None: - if help: - help = inspect.cleandoc(help) - - default_is_missing = "default" not in attrs - super().__init__(param_decls, type=type, multiple=multiple, **attrs) - - if prompt is True: - if self.name is None: - raise TypeError("'name' is required with 'prompt=True'.") - - prompt_text: t.Optional[str] = self.name.replace("_", " ").capitalize() - elif prompt is False: - prompt_text = None - else: - prompt_text = prompt - - self.prompt = prompt_text - self.confirmation_prompt = confirmation_prompt - self.prompt_required = prompt_required - self.hide_input = hide_input - self.hidden = hidden - - # If prompt is enabled but not required, then the option can be - # used as a flag to indicate using prompt or flag_value. - self._flag_needs_value = self.prompt is not None and not self.prompt_required - - if is_flag is None: - if flag_value is not None: - # Implicitly a flag because flag_value was set. - is_flag = True - elif self._flag_needs_value: - # Not a flag, but when used as a flag it shows a prompt. - is_flag = False - else: - # Implicitly a flag because flag options were given. - is_flag = bool(self.secondary_opts) - elif is_flag is False and not self._flag_needs_value: - # Not a flag, and prompt is not enabled, can be used as a - # flag if flag_value is set. - self._flag_needs_value = flag_value is not None - - if is_flag and default_is_missing and not self.required: - self.default: t.Union[t.Any, t.Callable[[], t.Any]] = False - - if flag_value is None: - flag_value = not self.default - - if is_flag and type is None: - # Re-guess the type from the flag value instead of the - # default. 
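# Illustrative sketch of the prompt- and flag-related Option features described above:
# ``--password`` prompts with hidden, confirmed input; ``--shout/--no-shout`` is a
# boolean flag with a secondary "off" name; the two ``--upper``/``--lower`` flags
# share one parameter via ``flag_value``.
import click

@click.command()
@click.option("--password", prompt=True, hide_input=True, confirmation_prompt=True)
@click.option("--shout/--no-shout", default=False)
@click.option("--upper", "case", flag_value="upper", default=True)
@click.option("--lower", "case", flag_value="lower")
def register(password, shout, case):
    msg = f"registered ({case})"
    click.echo(msg.upper() if shout else msg)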
- self.type = types.convert_type(None, flag_value) - - self.is_flag: bool = is_flag - self.is_bool_flag = is_flag and isinstance(self.type, types.BoolParamType) - self.flag_value: t.Any = flag_value - - # Counting - self.count = count - if count: - if type is None: - self.type = types.IntRange(min=0) - if default_is_missing: - self.default = 0 - - self.allow_from_autoenv = allow_from_autoenv - self.help = help - self.show_default = show_default - self.show_choices = show_choices - self.show_envvar = show_envvar - - if __debug__: - if self.nargs == -1: - raise TypeError("nargs=-1 is not supported for options.") - - if self.prompt and self.is_flag and not self.is_bool_flag: - raise TypeError("'prompt' is not valid for non-boolean flag.") - - if not self.is_bool_flag and self.secondary_opts: - raise TypeError("Secondary flag is not valid for non-boolean flag.") - - if self.is_bool_flag and self.hide_input and self.prompt is not None: - raise TypeError( - "'prompt' with 'hide_input' is not valid for boolean flag." - ) - - if self.count: - if self.multiple: - raise TypeError("'count' is not valid with 'multiple'.") - - if self.is_flag: - raise TypeError("'count' is not valid with 'is_flag'.") - - if self.multiple and self.is_flag: - raise TypeError("'multiple' is not valid with 'is_flag', use 'count'.") - - def to_info_dict(self) -> t.Dict[str, t.Any]: - info_dict = super().to_info_dict() - info_dict.update( - help=self.help, - prompt=self.prompt, - is_flag=self.is_flag, - flag_value=self.flag_value, - count=self.count, - hidden=self.hidden, - ) - return info_dict - - def _parse_decls( - self, decls: t.Sequence[str], expose_value: bool - ) -> t.Tuple[t.Optional[str], t.List[str], t.List[str]]: - opts = [] - secondary_opts = [] - name = None - possible_names = [] - - for decl in decls: - if decl.isidentifier(): - if name is not None: - raise TypeError(f"Name '{name}' defined twice") - name = decl - else: - split_char = ";" if decl[:1] == "/" else "/" - if split_char in decl: - first, second = decl.split(split_char, 1) - first = first.rstrip() - if first: - possible_names.append(split_opt(first)) - opts.append(first) - second = second.lstrip() - if second: - secondary_opts.append(second.lstrip()) - if first == second: - raise ValueError( - f"Boolean option {decl!r} cannot use the" - " same flag for true/false." - ) - else: - possible_names.append(split_opt(decl)) - opts.append(decl) - - if name is None and possible_names: - possible_names.sort(key=lambda x: -len(x[0])) # group long options first - name = possible_names[0][1].replace("-", "_").lower() - if not name.isidentifier(): - name = None - - if name is None: - if not expose_value: - return None, opts, secondary_opts - raise TypeError("Could not determine name for option") - - if not opts and not secondary_opts: - raise TypeError( - f"No options defined but a name was passed ({name})." - " Did you mean to declare an argument instead? Did" - f" you mean to pass '--{name}'?" 
- ) - - return name, opts, secondary_opts - - def add_to_parser(self, parser: OptionParser, ctx: Context) -> None: - if self.multiple: - action = "append" - elif self.count: - action = "count" - else: - action = "store" - - if self.is_flag: - action = f"{action}_const" - - if self.is_bool_flag and self.secondary_opts: - parser.add_option( - obj=self, opts=self.opts, dest=self.name, action=action, const=True - ) - parser.add_option( - obj=self, - opts=self.secondary_opts, - dest=self.name, - action=action, - const=False, - ) - else: - parser.add_option( - obj=self, - opts=self.opts, - dest=self.name, - action=action, - const=self.flag_value, - ) - else: - parser.add_option( - obj=self, - opts=self.opts, - dest=self.name, - action=action, - nargs=self.nargs, - ) - - def get_help_record(self, ctx: Context) -> t.Optional[t.Tuple[str, str]]: - if self.hidden: - return None - - any_prefix_is_slash = False - - def _write_opts(opts: t.Sequence[str]) -> str: - nonlocal any_prefix_is_slash - - rv, any_slashes = join_options(opts) - - if any_slashes: - any_prefix_is_slash = True - - if not self.is_flag and not self.count: - rv += f" {self.make_metavar()}" - - return rv - - rv = [_write_opts(self.opts)] - - if self.secondary_opts: - rv.append(_write_opts(self.secondary_opts)) - - help = self.help or "" - extra = [] - - if self.show_envvar: - envvar = self.envvar - - if envvar is None: - if ( - self.allow_from_autoenv - and ctx.auto_envvar_prefix is not None - and self.name is not None - ): - envvar = f"{ctx.auto_envvar_prefix}_{self.name.upper()}" - - if envvar is not None: - var_str = ( - envvar - if isinstance(envvar, str) - else ", ".join(str(d) for d in envvar) - ) - extra.append(_("env var: {var}").format(var=var_str)) - - # Temporarily enable resilient parsing to avoid type casting - # failing for the default. Might be possible to extend this to - # help formatting in general. - resilient = ctx.resilient_parsing - ctx.resilient_parsing = True - - try: - default_value = self.get_default(ctx, call=False) - finally: - ctx.resilient_parsing = resilient - - show_default = False - show_default_is_str = False - - if self.show_default is not None: - if isinstance(self.show_default, str): - show_default_is_str = show_default = True - else: - show_default = self.show_default - elif ctx.show_default is not None: - show_default = ctx.show_default - - if show_default_is_str or (show_default and (default_value is not None)): - if show_default_is_str: - default_string = f"({self.show_default})" - elif isinstance(default_value, (list, tuple)): - default_string = ", ".join(str(d) for d in default_value) - elif inspect.isfunction(default_value): - default_string = _("(dynamic)") - elif self.is_bool_flag and self.secondary_opts: - # For boolean flags that have distinct True/False opts, - # use the opt without prefix instead of the value. 
- default_string = split_opt( - (self.opts if self.default else self.secondary_opts)[0] - )[1] - elif self.is_bool_flag and not self.secondary_opts and not default_value: - default_string = "" - else: - default_string = str(default_value) - - if default_string: - extra.append(_("default: {default}").format(default=default_string)) - - if ( - isinstance(self.type, types._NumberRangeBase) - # skip count with default range type - and not (self.count and self.type.min == 0 and self.type.max is None) - ): - range_str = self.type._describe_range() - - if range_str: - extra.append(range_str) - - if self.required: - extra.append(_("required")) - - if extra: - extra_str = "; ".join(extra) - help = f"{help} [{extra_str}]" if help else f"[{extra_str}]" - - return ("; " if any_prefix_is_slash else " / ").join(rv), help - - @t.overload - def get_default( - self, ctx: Context, call: "te.Literal[True]" = True - ) -> t.Optional[t.Any]: - ... - - @t.overload - def get_default( - self, ctx: Context, call: bool = ... - ) -> t.Optional[t.Union[t.Any, t.Callable[[], t.Any]]]: - ... - - def get_default( - self, ctx: Context, call: bool = True - ) -> t.Optional[t.Union[t.Any, t.Callable[[], t.Any]]]: - # If we're a non boolean flag our default is more complex because - # we need to look at all flags in the same group to figure out - # if we're the default one in which case we return the flag - # value as default. - if self.is_flag and not self.is_bool_flag: - for param in ctx.command.params: - if param.name == self.name and param.default: - return param.flag_value # type: ignore - - return None - - return super().get_default(ctx, call=call) - - def prompt_for_value(self, ctx: Context) -> t.Any: - """This is an alternative flow that can be activated in the full - value processing if a value does not exist. It will prompt the - user until a valid value exists and then returns the processed - value as result. - """ - assert self.prompt is not None - - # Calculate the default before prompting anything to be stable. - default = self.get_default(ctx) - - # If this is a prompt for a flag we need to handle this - # differently. - if self.is_bool_flag: - return confirm(self.prompt, default) - - return prompt( - self.prompt, - default=default, - type=self.type, - hide_input=self.hide_input, - show_choices=self.show_choices, - confirmation_prompt=self.confirmation_prompt, - value_proc=lambda x: self.process_value(ctx, x), - ) - - def resolve_envvar_value(self, ctx: Context) -> t.Optional[str]: - rv = super().resolve_envvar_value(ctx) - - if rv is not None: - return rv - - if ( - self.allow_from_autoenv - and ctx.auto_envvar_prefix is not None - and self.name is not None - ): - envvar = f"{ctx.auto_envvar_prefix}_{self.name.upper()}" - rv = os.environ.get(envvar) - - if rv: - return rv - - return None - - def value_from_envvar(self, ctx: Context) -> t.Optional[t.Any]: - rv: t.Optional[t.Any] = self.resolve_envvar_value(ctx) - - if rv is None: - return None - - value_depth = (self.nargs != 1) + bool(self.multiple) - - if value_depth > 0: - rv = self.type.split_envvar_value(rv) - - if self.multiple and self.nargs != 1: - rv = batch(rv, self.nargs) - - return rv - - def consume_value( - self, ctx: Context, opts: t.Mapping[str, "Parameter"] - ) -> t.Tuple[t.Any, ParameterSource]: - value, source = super().consume_value(ctx, opts) - - # The parser will emit a sentinel value if the option can be - # given as a flag without a value. This is different from None - # to distinguish from the flag not being given at all. 
- if value is _flag_needs_value: - if self.prompt is not None and not ctx.resilient_parsing: - value = self.prompt_for_value(ctx) - source = ParameterSource.PROMPT - else: - value = self.flag_value - source = ParameterSource.COMMANDLINE - - elif ( - self.multiple - and value is not None - and any(v is _flag_needs_value for v in value) - ): - value = [self.flag_value if v is _flag_needs_value else v for v in value] - source = ParameterSource.COMMANDLINE - - # The value wasn't set, or used the param's default, prompt if - # prompting is enabled. - elif ( - source in {None, ParameterSource.DEFAULT} - and self.prompt is not None - and (self.required or self.prompt_required) - and not ctx.resilient_parsing - ): - value = self.prompt_for_value(ctx) - source = ParameterSource.PROMPT - - return value, source - - -class Argument(Parameter): - """Arguments are positional parameters to a command. They generally - provide fewer features than options but can have infinite ``nargs`` - and are required by default. - - All parameters are passed onwards to the parameter constructor. - """ - - param_type_name = "argument" - - def __init__( - self, - param_decls: t.Sequence[str], - required: t.Optional[bool] = None, - **attrs: t.Any, - ) -> None: - if required is None: - if attrs.get("default") is not None: - required = False - else: - required = attrs.get("nargs", 1) > 0 - - if "multiple" in attrs: - raise TypeError("__init__() got an unexpected keyword argument 'multiple'.") - - super().__init__(param_decls, required=required, **attrs) - - if __debug__: - if self.default is not None and self.nargs == -1: - raise TypeError("'default' is not supported for nargs=-1.") - - @property - def human_readable_name(self) -> str: - if self.metavar is not None: - return self.metavar - return self.name.upper() # type: ignore - - def make_metavar(self) -> str: - if self.metavar is not None: - return self.metavar - var = self.type.get_metavar(self) - if not var: - var = self.name.upper() # type: ignore - if not self.required: - var = f"[{var}]" - if self.nargs != 1: - var += "..." - return var - - def _parse_decls( - self, decls: t.Sequence[str], expose_value: bool - ) -> t.Tuple[t.Optional[str], t.List[str], t.List[str]]: - if not decls: - if not expose_value: - return None, [], [] - raise TypeError("Could not determine name for argument") - if len(decls) == 1: - name = arg = decls[0] - name = name.replace("-", "_").lower() - else: - raise TypeError( - "Arguments take exactly one parameter declaration, got" - f" {len(decls)}." 
- ) - return name, [arg], [] - - def get_usage_pieces(self, ctx: Context) -> t.List[str]: - return [self.make_metavar()] - - def get_error_hint(self, ctx: Context) -> str: - return f"'{self.make_metavar()}'" - - def add_to_parser(self, parser: OptionParser, ctx: Context) -> None: - parser.add_argument(dest=self.name, nargs=self.nargs, obj=self) diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/clickhouse_connect/driver/__init__.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/clickhouse_connect/driver/__init__.py deleted file mode 100644 index 496696e1a799dd10bfe39e6ab1d2a6bbcbd52112..0000000000000000000000000000000000000000 --- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/clickhouse_connect/driver/__init__.py +++ /dev/null @@ -1,114 +0,0 @@ -from inspect import signature -from typing import Optional, Union, Dict, Any -from urllib.parse import urlparse, parse_qs - -import clickhouse_connect.driver.ctypes -from clickhouse_connect.driver.client import Client -from clickhouse_connect.driver.common import dict_copy -from clickhouse_connect.driver.exceptions import ProgrammingError -from clickhouse_connect.driver.httpclient import HttpClient - - -# pylint: disable=too-many-arguments,too-many-locals,too-many-branches -def create_client(host: str = None, - username: str = None, - password: str = '', - database: str = '__default__', - interface: Optional[str] = None, - port: int = 0, - secure: Union[bool, str] = False, - dsn: Optional[str] = None, - settings: Optional[Dict[str, Any]] = None, - generic_args: Optional[Dict[str, Any]] = None, - **kwargs) -> Client: - """ - The preferred method to get a ClickHouse Connect Client instance - - :param host: The hostname or IP address of the ClickHouse server. If not set, localhost will be used. - :param username: The ClickHouse username. If not set, the default ClickHouse user will be used. - :param password: The password for username. - :param database: The default database for the connection. If not set, ClickHouse Connect will use the - default database for username. - :param interface: Must be http or https. Defaults to http, or to https if port is set to 8443 or 443 - :param port: The ClickHouse HTTP or HTTPS port. If not set will default to 8123, or to 8443 if secure=True - or interface=https. - :param secure: Use https/TLS. This overrides inferred values from the interface or port arguments. - :param dsn: A string in standard DSN (Data Source Name) format. Other connection values (such as host or user) - will be extracted from this string if not set otherwise. - :param settings: ClickHouse server settings to be used with the session/every request - :param generic_args: Used internally to parse DBAPI connection strings into keyword arguments and ClickHouse settings. - It is not recommended to use this parameter externally. - - :param kwargs -- Recognized keyword arguments (used by the HTTP client), see below - - :param compress: Enable compression for ClickHouse HTTP inserts and query results. True will select the preferred - compression method (lz4). A str of 'lz4', 'zstd', 'brotli', or 'gzip' can be used to use a specific compression type - :param query_limit: Default LIMIT on returned rows. 0 means no limit - :param connect_timeout: Timeout in seconds for the http connection - :param send_receive_timeout: Read timeout in seconds for http connection - :param client_name: client_name prepended to the HTTP User Agent header. Set this to track client queries - in the ClickHouse system.query_log. 
- :param send_progress: Deprecated, has no effect. Previous functionality is now automatically determined - :param verify: Verify the server certificate in secure/https mode - :param ca_cert: If verify is True, the file path to Certificate Authority root to validate ClickHouse server - certificate, in .pem format. Ignored if verify is False. This is not necessary if the ClickHouse server - certificate is trusted by the operating system. To trust the maintained list of "global" public root - certificates maintained by the Python 'certifi' package, set ca_cert to 'certifi' - :param client_cert: File path to a TLS Client certificate in .pem format. This file should contain any - applicable intermediate certificates - :param client_cert_key: File path to the private key for the Client Certificate. Required if the private key - is not included the Client Certificate key file - :param session_id ClickHouse session id. If not specified and the common setting 'autogenerate_session_id' - is True, the client will generate a UUID1 session id - :param pool_mgr Optional urllib3 PoolManager for this client. Useful for creating separate connection - pools for multiple client endpoints for applications with many clients - :param http_proxy http proxy address. Equivalent to setting the HTTP_PROXY environment variable - :param https_proxy https proxy address. Equivalent to setting the HTTPS_PROXY environment variable - :param server_host_name This is the server host name that will be checked against a TLS certificate for - validity. This option can be used if using an ssh_tunnel or other indirect means to an ClickHouse server - where the `host` argument refers to the tunnel or proxy and not the actual ClickHouse server - :return: ClickHouse Connect Client instance - """ - if dsn: - parsed = urlparse(dsn) - username = username or parsed.username - password = password or parsed.password - host = host or parsed.hostname - port = port or parsed.port - if parsed.path and (not database or database == '__default__'): - database = parsed.path[1:].split('/')[0] - database = database or parsed.path - kwargs.update(dict(parse_qs(parsed.query))) - use_tls = str(secure).lower() == 'true' or interface == 'https' or (not interface and port in (443, 8443)) - if not host: - host = 'localhost' - if not interface: - interface = 'https' if use_tls else 'http' - port = port or default_port(interface, use_tls) - if username is None and 'user' in kwargs: - username = kwargs.pop('user') - if username is None and 'user_name' in kwargs: - username = kwargs.pop('user_name') - if password and username is None: - username = 'default' - if 'compression' in kwargs and 'compress' not in kwargs: - kwargs['compress'] = kwargs.pop('compression') - settings = settings or {} - if interface.startswith('http'): - if generic_args: - client_params = signature(HttpClient).parameters - for name, value in generic_args.items(): - if name in client_params: - kwargs[name] = value - else: - if name.startswith('ch_'): - name = name[3:] - settings[name] = value - return HttpClient(interface, host, port, username, password, database, settings=settings, **kwargs) - raise ProgrammingError(f'Unrecognized client type {interface}') - - -def default_port(interface: str, secure: bool): - if interface.startswith('http'): - return 8443 if secure else 8123 - raise ValueError('Unrecognized ClickHouse interface') diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/debugpy/adapter/clients.py 
b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/debugpy/adapter/clients.py deleted file mode 100644 index ada460edfab9ca7eb864e453211a3f5869cfd5f5..0000000000000000000000000000000000000000 --- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/debugpy/adapter/clients.py +++ /dev/null @@ -1,762 +0,0 @@ -# Copyright (c) Microsoft Corporation. All rights reserved. -# Licensed under the MIT License. See LICENSE in the project root -# for license information. - -from __future__ import annotations - -import atexit -import os -import sys - -import debugpy -from debugpy import adapter, common, launcher -from debugpy.common import json, log, messaging, sockets -from debugpy.adapter import components, servers, sessions - - -class Client(components.Component): - """Handles the client side of a debug session.""" - - message_handler = components.Component.message_handler - - known_subprocesses: set[servers.Connection] - """Server connections to subprocesses that this client has been made aware of. - """ - - class Capabilities(components.Capabilities): - PROPERTIES = { - "supportsVariableType": False, - "supportsVariablePaging": False, - "supportsRunInTerminalRequest": False, - "supportsMemoryReferences": False, - "supportsArgsCanBeInterpretedByShell": False, - "supportsStartDebuggingRequest": False, - } - - class Expectations(components.Capabilities): - PROPERTIES = { - "locale": "en-US", - "linesStartAt1": True, - "columnsStartAt1": True, - "pathFormat": json.enum("path", optional=True), # we don't support "uri" - } - - def __init__(self, sock): - if sock == "stdio": - log.info("Connecting to client over stdio...", self) - self.using_stdio = True - stream = messaging.JsonIOStream.from_stdio() - # Make sure that nothing else tries to interfere with the stdio streams - # that are going to be used for DAP communication from now on. - sys.stdin = stdin = open(os.devnull, "r") - atexit.register(stdin.close) - sys.stdout = stdout = open(os.devnull, "w") - atexit.register(stdout.close) - else: - self.using_stdio = False - stream = messaging.JsonIOStream.from_socket(sock) - - with sessions.Session() as session: - super().__init__(session, stream) - - self.client_id = None - """ID of the connecting client. This can be 'test' while running tests.""" - - self.has_started = False - """Whether the "launch" or "attach" request was received from the client, and - fully handled. - """ - - self.start_request = None - """The "launch" or "attach" request as received from the client. - """ - - self.restart_requested = False - """Whether the client requested the debug adapter to be automatically - restarted via "restart":true in the start request. - """ - - self._initialize_request = None - """The "initialize" request as received from the client, to propagate to the - server later.""" - - self._deferred_events = [] - """Deferred events from the launcher and the server that must be propagated - only if and when the "launch" or "attach" response is sent. - """ - - self._forward_terminate_request = False - - self.known_subprocesses = set() - - session.client = self - session.register() - - # For the transition period, send the telemetry events with both old and new - # name. The old one should be removed once the new one lights up. 
- self.channel.send_event( - "output", - { - "category": "telemetry", - "output": "ptvsd", - "data": {"packageVersion": debugpy.__version__}, - }, - ) - self.channel.send_event( - "output", - { - "category": "telemetry", - "output": "debugpy", - "data": {"packageVersion": debugpy.__version__}, - }, - ) - - def propagate_after_start(self, event): - # pydevd starts sending events as soon as we connect, but the client doesn't - # expect to see any until it receives the response to "launch" or "attach" - # request. If client is not ready yet, save the event instead of propagating - # it immediately. - if self._deferred_events is not None: - self._deferred_events.append(event) - log.debug("Propagation deferred.") - else: - self.client.channel.propagate(event) - - def _propagate_deferred_events(self): - log.debug("Propagating deferred events to {0}...", self.client) - for event in self._deferred_events: - log.debug("Propagating deferred {0}", event.describe()) - self.client.channel.propagate(event) - log.info("All deferred events propagated to {0}.", self.client) - self._deferred_events = None - - # Generic event handler. There are no specific handlers for client events, because - # there are no events from the client in DAP - but we propagate them if we can, in - # case some events appear in future protocol versions. - @message_handler - def event(self, event): - if self.server: - self.server.channel.propagate(event) - - # Generic request handler, used if there's no specific handler below. - @message_handler - def request(self, request): - return self.server.channel.delegate(request) - - @message_handler - def initialize_request(self, request): - if self._initialize_request is not None: - raise request.isnt_valid("Session is already initialized") - - self.client_id = request("clientID", "") - self.capabilities = self.Capabilities(self, request) - self.expectations = self.Expectations(self, request) - self._initialize_request = request - - exception_breakpoint_filters = [ - { - "filter": "raised", - "label": "Raised Exceptions", - "default": False, - "description": "Break whenever any exception is raised.", - }, - { - "filter": "uncaught", - "label": "Uncaught Exceptions", - "default": True, - "description": "Break when the process is exiting due to unhandled exception.", - }, - { - "filter": "userUnhandled", - "label": "User Uncaught Exceptions", - "default": False, - "description": "Break when exception escapes into library code.", - }, - ] - - return { - "supportsCompletionsRequest": True, - "supportsConditionalBreakpoints": True, - "supportsConfigurationDoneRequest": True, - "supportsDebuggerProperties": True, - "supportsDelayedStackTraceLoading": True, - "supportsEvaluateForHovers": True, - "supportsExceptionInfoRequest": True, - "supportsExceptionOptions": True, - "supportsFunctionBreakpoints": True, - "supportsHitConditionalBreakpoints": True, - "supportsLogPoints": True, - "supportsModulesRequest": True, - "supportsSetExpression": True, - "supportsSetVariable": True, - "supportsValueFormattingOptions": True, - "supportsTerminateRequest": True, - "supportsGotoTargetsRequest": True, - "supportsClipboardContext": True, - "exceptionBreakpointFilters": exception_breakpoint_filters, - "supportsStepInTargetsRequest": True, - } - - # Common code for "launch" and "attach" request handlers. - # - # See https://github.com/microsoft/vscode/issues/4902#issuecomment-368583522 - # for the sequence of request and events necessary to orchestrate the start. 
- def _start_message_handler(f): - @components.Component.message_handler - def handle(self, request): - assert request.is_request("launch", "attach") - if self._initialize_request is None: - raise request.isnt_valid("Session is not initialized yet") - if self.launcher or self.server: - raise request.isnt_valid("Session is already started") - - self.session.no_debug = request("noDebug", json.default(False)) - if self.session.no_debug: - servers.dont_wait_for_first_connection() - - self.session.debug_options = debug_options = set( - request("debugOptions", json.array(str)) - ) - - f(self, request) - if request.response is not None: - return - - if self.server: - self.server.initialize(self._initialize_request) - self._initialize_request = None - - arguments = request.arguments - if self.launcher: - redirecting = arguments.get("console") == "internalConsole" - if "RedirectOutput" in debug_options: - # The launcher is doing output redirection, so we don't need the - # server to do it, as well. - arguments = dict(arguments) - arguments["debugOptions"] = list( - debug_options - {"RedirectOutput"} - ) - redirecting = True - - if arguments.get("redirectOutput"): - arguments = dict(arguments) - del arguments["redirectOutput"] - redirecting = True - - arguments["isOutputRedirected"] = redirecting - - # pydevd doesn't send "initialized", and responds to the start request - # immediately, without waiting for "configurationDone". If it changes - # to conform to the DAP spec, we'll need to defer waiting for response. - try: - self.server.channel.request(request.command, arguments) - except messaging.NoMoreMessages: - # Server closed connection before we could receive the response to - # "attach" or "launch" - this can happen when debuggee exits shortly - # after starting. It's not an error, but we can't do anything useful - # here at this point, either, so just bail out. - request.respond({}) - self.session.finalize( - "{0} disconnected before responding to {1}".format( - self.server, - json.repr(request.command), - ) - ) - return - except messaging.MessageHandlingError as exc: - exc.propagate(request) - - if self.session.no_debug: - self.start_request = request - self.has_started = True - request.respond({}) - self._propagate_deferred_events() - return - - # Let the client know that it can begin configuring the adapter. - self.channel.send_event("initialized") - - self.start_request = request - return messaging.NO_RESPONSE # will respond on "configurationDone" - - return handle - - @_start_message_handler - def launch_request(self, request): - from debugpy.adapter import launchers - - if self.session.id != 1 or len(servers.connections()): - raise request.cant_handle('"attach" expected') - - debug_options = set(request("debugOptions", json.array(str))) - - # Handling of properties that can also be specified as legacy "debugOptions" flags. - # If property is explicitly set to false, but the flag is in "debugOptions", treat - # it as an error. Returns None if the property wasn't explicitly set either way. - def property_or_debug_option(prop_name, flag_name): - assert prop_name[0].islower() and flag_name[0].isupper() - - value = request(prop_name, bool, optional=True) - if value == (): - value = None - - if flag_name in debug_options: - if value is False: - raise request.isnt_valid( - '{0}:false and "debugOptions":[{1}] are mutually exclusive', - json.repr(prop_name), - json.repr(flag_name), - ) - value = True - - return value - - # "pythonPath" is a deprecated legacy spelling. 
If "python" is missing, then try - # the alternative. But if both are missing, the error message should say "python". - python_key = "python" - if python_key in request: - if "pythonPath" in request: - raise request.isnt_valid( - '"pythonPath" is not valid if "python" is specified' - ) - elif "pythonPath" in request: - python_key = "pythonPath" - python = request(python_key, json.array(str, vectorize=True, size=(0,))) - if not len(python): - python = [sys.executable] - - python += request("pythonArgs", json.array(str, size=(0,))) - request.arguments["pythonArgs"] = python[1:] - request.arguments["python"] = python - - launcher_python = request("debugLauncherPython", str, optional=True) - if launcher_python == (): - launcher_python = python[0] - - program = module = code = () - if "program" in request: - program = request("program", str) - args = [program] - request.arguments["processName"] = program - if "module" in request: - module = request("module", str) - args = ["-m", module] - request.arguments["processName"] = module - if "code" in request: - code = request("code", json.array(str, vectorize=True, size=(1,))) - args = ["-c", "\n".join(code)] - request.arguments["processName"] = "-c" - - num_targets = len([x for x in (program, module, code) if x != ()]) - if num_targets == 0: - raise request.isnt_valid( - 'either "program", "module", or "code" must be specified' - ) - elif num_targets != 1: - raise request.isnt_valid( - '"program", "module", and "code" are mutually exclusive' - ) - - console = request( - "console", - json.enum( - "internalConsole", - "integratedTerminal", - "externalTerminal", - optional=True, - ), - ) - console_title = request("consoleTitle", json.default("Python Debug Console")) - - # Propagate "args" via CLI so that shell expansion can be applied if requested. - target_args = request("args", json.array(str, vectorize=True)) - args += target_args - - # If "args" was a single string rather than an array, shell expansion must be applied. - shell_expand_args = len(target_args) > 0 and isinstance( - request.arguments["args"], str - ) - if shell_expand_args: - if not self.capabilities["supportsArgsCanBeInterpretedByShell"]: - raise request.isnt_valid( - 'Shell expansion in "args" is not supported by the client' - ) - if console == "internalConsole": - raise request.isnt_valid( - 'Shell expansion in "args" is not available for "console":"internalConsole"' - ) - - cwd = request("cwd", str, optional=True) - if cwd == (): - # If it's not specified, but we're launching a file rather than a module, - # and the specified path has a directory in it, use that. 
- cwd = None if program == () else (os.path.dirname(program) or None) - - sudo = bool(property_or_debug_option("sudo", "Sudo")) - if sudo and sys.platform == "win32": - raise request.cant_handle('"sudo":true is not supported on Windows.') - - on_terminate = request("onTerminate", str, optional=True) - - if on_terminate: - self._forward_terminate_request = on_terminate == "KeyboardInterrupt" - - launcher_path = request("debugLauncherPath", os.path.dirname(launcher.__file__)) - adapter_host = request("debugAdapterHost", "127.0.0.1") - - try: - servers.serve(adapter_host) - except Exception as exc: - raise request.cant_handle( - "{0} couldn't create listener socket for servers: {1}", - self.session, - exc, - ) - - launchers.spawn_debuggee( - self.session, - request, - [launcher_python], - launcher_path, - adapter_host, - args, - shell_expand_args, - cwd, - console, - console_title, - sudo, - ) - - @_start_message_handler - def attach_request(self, request): - if self.session.no_debug: - raise request.isnt_valid('"noDebug" is not supported for "attach"') - - host = request("host", str, optional=True) - port = request("port", int, optional=True) - listen = request("listen", dict, optional=True) - connect = request("connect", dict, optional=True) - pid = request("processId", (int, str), optional=True) - sub_pid = request("subProcessId", int, optional=True) - on_terminate = request("onTerminate", bool, optional=True) - - if on_terminate: - self._forward_terminate_request = on_terminate == "KeyboardInterrupt" - - if host != () or port != (): - if listen != (): - raise request.isnt_valid( - '"listen" and "host"/"port" are mutually exclusive' - ) - if connect != (): - raise request.isnt_valid( - '"connect" and "host"/"port" are mutually exclusive' - ) - if listen != (): - if connect != (): - raise request.isnt_valid( - '"listen" and "connect" are mutually exclusive' - ) - if pid != (): - raise request.isnt_valid( - '"listen" and "processId" are mutually exclusive' - ) - if sub_pid != (): - raise request.isnt_valid( - '"listen" and "subProcessId" are mutually exclusive' - ) - if pid != () and sub_pid != (): - raise request.isnt_valid( - '"processId" and "subProcessId" are mutually exclusive' - ) - - if listen != (): - if servers.is_serving(): - raise request.isnt_valid( - 'Multiple concurrent "listen" sessions are not supported' - ) - host = listen("host", "127.0.0.1") - port = listen("port", int) - adapter.access_token = None - self.restart_requested = request("restart", False) - host, port = servers.serve(host, port) - else: - if not servers.is_serving(): - servers.serve() - host, port = servers.listener.getsockname() - - # There are four distinct possibilities here. - # - # If "processId" is specified, this is attach-by-PID. We need to inject the - # debug server into the designated process, and then wait until it connects - # back to us. Since the injected server can crash, there must be a timeout. - # - # If "subProcessId" is specified, this is attach to a known subprocess, likely - # in response to a "debugpyAttach" event. If so, the debug server should be - # connected already, and thus the wait timeout is zero. - # - # If "listen" is specified, this is attach-by-socket with the server expected - # to connect to the adapter via debugpy.connect(). There is no PID known in - # advance, so just wait until the first server connection indefinitely, with - # no timeout. - # - # If "connect" is specified, this is attach-by-socket in which the server has - # spawned the adapter via debugpy.listen(). 
There is no PID known to the client - # in advance, but the server connection should be either be there already, or - # the server should be connecting shortly, so there must be a timeout. - # - # In the last two cases, if there's more than one server connection already, - # this is a multiprocess re-attach. The client doesn't know the PID, so we just - # connect it to the oldest server connection that we have - in most cases, it - # will be the one for the root debuggee process, but if it has exited already, - # it will be some subprocess. - if pid != (): - if not isinstance(pid, int): - try: - pid = int(pid) - except Exception: - raise request.isnt_valid('"processId" must be parseable as int') - debugpy_args = request("debugpyArgs", json.array(str)) - - def on_output(category, output): - self.channel.send_event( - "output", - { - "category": category, - "output": output, - }, - ) - - try: - servers.inject(pid, debugpy_args, on_output) - except Exception as e: - log.swallow_exception() - self.session.finalize( - "Error when trying to attach to PID:\n%s" % (str(e),) - ) - return - - timeout = common.PROCESS_SPAWN_TIMEOUT - pred = lambda conn: conn.pid == pid - else: - if sub_pid == (): - pred = lambda conn: True - timeout = common.PROCESS_SPAWN_TIMEOUT if listen == () else None - else: - pred = lambda conn: conn.pid == sub_pid - timeout = 0 - - self.channel.send_event("debugpyWaitingForServer", {"host": host, "port": port}) - conn = servers.wait_for_connection(self.session, pred, timeout) - if conn is None: - if sub_pid != (): - # If we can't find a matching subprocess, it's not always an error - - # it might have already exited, or didn't even get a chance to connect. - # To prevent the client from complaining, pretend that the "attach" - # request was successful, but that the session terminated immediately. - request.respond({}) - self.session.finalize( - 'No known subprocess with "subProcessId":{0}'.format(sub_pid) - ) - return - - raise request.cant_handle( - ( - "Timed out waiting for debug server to connect." - if timeout - else "There is no debug server connected to this adapter." - ), - sub_pid, - ) - - try: - conn.attach_to_session(self.session) - except ValueError: - request.cant_handle("{0} is already being debugged.", conn) - - @message_handler - def configurationDone_request(self, request): - if self.start_request is None or self.has_started: - request.cant_handle( - '"configurationDone" is only allowed during handling of a "launch" ' - 'or an "attach" request' - ) - - try: - self.has_started = True - try: - result = self.server.channel.delegate(request) - except messaging.NoMoreMessages: - # Server closed connection before we could receive the response to - # "configurationDone" - this can happen when debuggee exits shortly - # after starting. It's not an error, but we can't do anything useful - # here at this point, either, so just bail out. - request.respond({}) - self.start_request.respond({}) - self.session.finalize( - "{0} disconnected before responding to {1}".format( - self.server, - json.repr(request.command), - ) - ) - return - else: - request.respond(result) - except messaging.MessageHandlingError as exc: - self.start_request.cant_handle(str(exc)) - finally: - if self.start_request.response is None: - self.start_request.respond({}) - self._propagate_deferred_events() - - # Notify the client of any child processes of the debuggee that aren't already - # being debugged. 
- for conn in servers.connections(): - if conn.server is None and conn.ppid == self.session.pid: - self.notify_of_subprocess(conn) - - @message_handler - def evaluate_request(self, request): - propagated_request = self.server.channel.propagate(request) - - def handle_response(response): - request.respond(response.body) - - propagated_request.on_response(handle_response) - - return messaging.NO_RESPONSE - - @message_handler - def pause_request(self, request): - request.arguments["threadId"] = "*" - return self.server.channel.delegate(request) - - @message_handler - def continue_request(self, request): - request.arguments["threadId"] = "*" - - try: - return self.server.channel.delegate(request) - except messaging.NoMoreMessages: - # pydevd can sometimes allow the debuggee to exit before the queued - # "continue" response gets sent. Thus, a failed "continue" response - # indicating that the server disconnected should be treated as success. - return {"allThreadsContinued": True} - - @message_handler - def debugpySystemInfo_request(self, request): - result = {"debugpy": {"version": debugpy.__version__}} - if self.server: - try: - pydevd_info = self.server.channel.request("pydevdSystemInfo") - except Exception: - # If the server has already disconnected, or couldn't handle it, - # report what we've got. - pass - else: - result.update(pydevd_info) - return result - - @message_handler - def terminate_request(self, request): - # If user specifically requests to terminate, it means that they don't want - # debug session auto-restart kicking in. - self.restart_requested = False - - if self._forward_terminate_request: - # According to the spec, terminate should try to do a gracefull shutdown. - # We do this in the server by interrupting the main thread with a Ctrl+C. - # To force the kill a subsequent request would do a disconnect. - # - # We only do this if the onTerminate option is set though (the default - # is a hard-kill for the process and subprocesses). - return self.server.channel.delegate(request) - - self.session.finalize('client requested "terminate"', terminate_debuggee=True) - return {} - - @message_handler - def disconnect_request(self, request): - # If user specifically requests to disconnect, it means that they don't want - # debug session auto-restart kicking in. - self.restart_requested = False - - terminate_debuggee = request("terminateDebuggee", bool, optional=True) - if terminate_debuggee == (): - terminate_debuggee = None - self.session.finalize('client requested "disconnect"', terminate_debuggee) - request.respond({}) - - if self.using_stdio: - # There's no way for the client to reconnect to this adapter once it disconnects - # from this session, so close any remaining server connections. 
- servers.stop_serving() - log.info("{0} disconnected from stdio; closing remaining server connections.", self) - for conn in servers.connections(): - try: - conn.channel.close() - except Exception: - log.swallow_exception() - - def disconnect(self): - super().disconnect() - - def notify_of_subprocess(self, conn): - log.info("{1} is a subprocess of {0}.", self, conn) - with self.session: - if self.start_request is None or conn in self.known_subprocesses: - return - if "processId" in self.start_request.arguments: - log.warning( - "Not reporting subprocess for {0}, because the parent process " - 'was attached to using "processId" rather than "port".', - self.session, - ) - return - - log.info("Notifying {0} about {1}.", self, conn) - body = dict(self.start_request.arguments) - self.known_subprocesses.add(conn) - self.session.notify_changed() - - for key in "processId", "listen", "preLaunchTask", "postDebugTask", "request", "restart": - body.pop(key, None) - - body["name"] = "Subprocess {0}".format(conn.pid) - body["subProcessId"] = conn.pid - - for key in "args", "processName", "pythonArgs": - body.pop(key, None) - - host = body.pop("host", None) - port = body.pop("port", None) - if "connect" not in body: - body["connect"] = {} - if "host" not in body["connect"]: - body["connect"]["host"] = host if host is not None else "127.0.0.1" - if "port" not in body["connect"]: - if port is None: - _, port = listener.getsockname() - body["connect"]["port"] = port - - if self.capabilities["supportsStartDebuggingRequest"]: - self.channel.request("startDebugging", { - "request": "attach", - "configuration": body, - }) - else: - body["request"] = "attach" - self.channel.send_event("debugpyAttach", body) - - -def serve(host, port): - global listener - listener = sockets.serve("Client", Client, host, port) - return listener.getsockname() - - -def stop_serving(): - try: - listener.close() - except Exception: - log.swallow_exception(level="warning") diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/debugpy/server/attach_pid_injected.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/debugpy/server/attach_pid_injected.py deleted file mode 100644 index a8df6e1e2189dd2e60fdcf93efe1a222dddf7efb..0000000000000000000000000000000000000000 --- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/debugpy/server/attach_pid_injected.py +++ /dev/null @@ -1,92 +0,0 @@ -# Copyright (c) Microsoft Corporation. All rights reserved. -# Licensed under the MIT License. See LICENSE in the project root -# for license information. - -"""Script injected into the debuggee process during attach-to-PID.""" - -import os - - -__file__ = os.path.abspath(__file__) -_debugpy_dir = os.path.dirname(os.path.dirname(os.path.dirname(__file__))) - - -def attach(setup): - log = None - try: - import sys - - if "threading" not in sys.modules: - try: - - def on_warn(msg): - print(msg, file=sys.stderr) - - def on_exception(msg): - print(msg, file=sys.stderr) - - def on_critical(msg): - print(msg, file=sys.stderr) - - pydevd_attach_to_process_path = os.path.join( - _debugpy_dir, - "debugpy", - "_vendored", - "pydevd", - "pydevd_attach_to_process", - ) - assert os.path.exists(pydevd_attach_to_process_path) - sys.path.insert(0, pydevd_attach_to_process_path) - - # NOTE: that it's not a part of the pydevd PYTHONPATH - import attach_script - - attach_script.fix_main_thread_id( - on_warn=on_warn, on_exception=on_exception, on_critical=on_critical - ) - - # NOTE: At this point it should be safe to remove this. 
- sys.path.remove(pydevd_attach_to_process_path) - except: - import traceback - - traceback.print_exc() - raise - - sys.path.insert(0, _debugpy_dir) - try: - import debugpy - import debugpy.server - from debugpy.common import json, log - import pydevd - finally: - assert sys.path[0] == _debugpy_dir - del sys.path[0] - - py_db = pydevd.get_global_debugger() - if py_db is not None: - py_db.dispose_and_kill_all_pydevd_threads(wait=False) - - if setup["log_to"] is not None: - debugpy.log_to(setup["log_to"]) - log.info("Configuring injected debugpy: {0}", json.repr(setup)) - - if setup["mode"] == "listen": - debugpy.listen(setup["address"]) - elif setup["mode"] == "connect": - debugpy.connect( - setup["address"], access_token=setup["adapter_access_token"] - ) - else: - raise AssertionError(repr(setup)) - - except: - import traceback - - traceback.print_exc() - if log is None: - raise - else: - log.reraise_exception() - - log.info("debugpy injected successfully") diff --git a/spaces/TH5314/newbing/src/components/chat-list.tsx b/spaces/TH5314/newbing/src/components/chat-list.tsx deleted file mode 100644 index 624a78ef0d7be0f1192cf02a81e2e9cf214cb193..0000000000000000000000000000000000000000 --- a/spaces/TH5314/newbing/src/components/chat-list.tsx +++ /dev/null @@ -1,28 +0,0 @@ -import React from 'react' - -import { Separator } from '@/components/ui/separator' -import { ChatMessage } from '@/components/chat-message' -import { ChatMessageModel } from '@/lib/bots/bing/types' - -export interface ChatList { - messages: ChatMessageModel[] -} - -export function ChatList({ messages }: ChatList) { - if (!messages.length) { - return null - } - - return ( -
-      {messages.map((message, index) => (
-        <React.Fragment key={index}>
-          <ChatMessage message={message} />
-          {index < messages.length - 1 && (
-            <Separator />
-          )}
-        </React.Fragment>
-      ))}
-
    - ) -} diff --git a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_internal/metadata/__init__.py b/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_internal/metadata/__init__.py deleted file mode 100644 index 9f73ca7105ff0bf11d74dd16ffb0653059466f70..0000000000000000000000000000000000000000 --- a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_internal/metadata/__init__.py +++ /dev/null @@ -1,127 +0,0 @@ -import contextlib -import functools -import os -import sys -from typing import TYPE_CHECKING, List, Optional, Type, cast - -from pip._internal.utils.misc import strtobool - -from .base import BaseDistribution, BaseEnvironment, FilesystemWheel, MemoryWheel, Wheel - -if TYPE_CHECKING: - from typing import Protocol -else: - Protocol = object - -__all__ = [ - "BaseDistribution", - "BaseEnvironment", - "FilesystemWheel", - "MemoryWheel", - "Wheel", - "get_default_environment", - "get_environment", - "get_wheel_distribution", - "select_backend", -] - - -def _should_use_importlib_metadata() -> bool: - """Whether to use the ``importlib.metadata`` or ``pkg_resources`` backend. - - By default, pip uses ``importlib.metadata`` on Python 3.11+, and - ``pkg_resourcess`` otherwise. This can be overridden by a couple of ways: - - * If environment variable ``_PIP_USE_IMPORTLIB_METADATA`` is set, it - dictates whether ``importlib.metadata`` is used, regardless of Python - version. - * On Python 3.11+, Python distributors can patch ``importlib.metadata`` - to add a global constant ``_PIP_USE_IMPORTLIB_METADATA = False``. This - makes pip use ``pkg_resources`` (unless the user set the aforementioned - environment variable to *True*). - """ - with contextlib.suppress(KeyError, ValueError): - return bool(strtobool(os.environ["_PIP_USE_IMPORTLIB_METADATA"])) - if sys.version_info < (3, 11): - return False - import importlib.metadata - - return bool(getattr(importlib.metadata, "_PIP_USE_IMPORTLIB_METADATA", True)) - - -class Backend(Protocol): - Distribution: Type[BaseDistribution] - Environment: Type[BaseEnvironment] - - -@functools.lru_cache(maxsize=None) -def select_backend() -> Backend: - if _should_use_importlib_metadata(): - from . import importlib - - return cast(Backend, importlib) - from . import pkg_resources - - return cast(Backend, pkg_resources) - - -def get_default_environment() -> BaseEnvironment: - """Get the default representation for the current environment. - - This returns an Environment instance from the chosen backend. The default - Environment instance should be built from ``sys.path`` and may use caching - to share instance state accorss calls. - """ - return select_backend().Environment.default() - - -def get_environment(paths: Optional[List[str]]) -> BaseEnvironment: - """Get a representation of the environment specified by ``paths``. - - This returns an Environment instance from the chosen backend based on the - given import paths. The backend must build a fresh instance representing - the state of installed distributions when this function is called. - """ - return select_backend().Environment.from_paths(paths) - - -def get_directory_distribution(directory: str) -> BaseDistribution: - """Get the distribution metadata representation in the specified directory. - - This returns a Distribution instance from the chosen backend based on - the given on-disk ``.dist-info`` directory. 
- """ - return select_backend().Distribution.from_directory(directory) - - -def get_wheel_distribution(wheel: Wheel, canonical_name: str) -> BaseDistribution: - """Get the representation of the specified wheel's distribution metadata. - - This returns a Distribution instance from the chosen backend based on - the given wheel's ``.dist-info`` directory. - - :param canonical_name: Normalized project name of the given wheel. - """ - return select_backend().Distribution.from_wheel(wheel, canonical_name) - - -def get_metadata_distribution( - metadata_contents: bytes, - filename: str, - canonical_name: str, -) -> BaseDistribution: - """Get the dist representation of the specified METADATA file contents. - - This returns a Distribution instance from the chosen backend sourced from the data - in `metadata_contents`. - - :param metadata_contents: Contents of a METADATA file within a dist, or one served - via PEP 658. - :param filename: Filename for the dist this metadata represents. - :param canonical_name: Normalized project name of the given dist. - """ - return select_backend().Distribution.from_metadata_file_contents( - metadata_contents, - filename, - canonical_name, - ) diff --git a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/rich/themes.py b/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/rich/themes.py deleted file mode 100644 index bf6db104a2c4fd4f3dc699e85f2b262c3d31e9a0..0000000000000000000000000000000000000000 --- a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/rich/themes.py +++ /dev/null @@ -1,5 +0,0 @@ -from .default_styles import DEFAULT_STYLES -from .theme import Theme - - -DEFAULT = Theme(DEFAULT_STYLES) diff --git a/spaces/TencentARC/VLog/models/grit_src/third_party/CenterNet2/projects/CenterNet2/centernet/modeling/layers/heatmap_focal_loss.py b/spaces/TencentARC/VLog/models/grit_src/third_party/CenterNet2/projects/CenterNet2/centernet/modeling/layers/heatmap_focal_loss.py deleted file mode 100644 index d4693b2125217527033727ec9a82959286d180f9..0000000000000000000000000000000000000000 --- a/spaces/TencentARC/VLog/models/grit_src/third_party/CenterNet2/projects/CenterNet2/centernet/modeling/layers/heatmap_focal_loss.py +++ /dev/null @@ -1,92 +0,0 @@ -import torch -from torch.nn import functional as F - -# TODO: merge these two function -def heatmap_focal_loss( - inputs, - targets, - pos_inds, - labels, - alpha: float = -1, - beta: float = 4, - gamma: float = 2, - reduction: str = 'sum', - sigmoid_clamp: float = 1e-4, - ignore_high_fp: float = -1., -): - """ - Loss used in RetinaNet for dense detection: https://arxiv.org/abs/1708.02002. - Args: - inputs: (sum_l N*Hl*Wl, C) - targets: (sum_l N*Hl*Wl, C) - pos_inds: N - labels: N - Returns: - Loss tensor with the reduction option applied. 
- """ - pred = torch.clamp(inputs.sigmoid_(), min=sigmoid_clamp, max=1-sigmoid_clamp) - neg_weights = torch.pow(1 - targets, beta) - pos_pred_pix = pred[pos_inds] # N x C - pos_pred = pos_pred_pix.gather(1, labels.unsqueeze(1)) - pos_loss = torch.log(pos_pred) * torch.pow(1 - pos_pred, gamma) - neg_loss = torch.log(1 - pred) * torch.pow(pred, gamma) * neg_weights - - if ignore_high_fp > 0: - not_high_fp = (pred < ignore_high_fp).float() - neg_loss = not_high_fp * neg_loss - - if reduction == "sum": - pos_loss = pos_loss.sum() - neg_loss = neg_loss.sum() - - if alpha >= 0: - pos_loss = alpha * pos_loss - neg_loss = (1 - alpha) * neg_loss - - return - pos_loss, - neg_loss - -heatmap_focal_loss_jit = torch.jit.script(heatmap_focal_loss) -# heatmap_focal_loss_jit = heatmap_focal_loss - -def binary_heatmap_focal_loss( - inputs, - targets, - pos_inds, - alpha: float = -1, - beta: float = 4, - gamma: float = 2, - sigmoid_clamp: float = 1e-4, - ignore_high_fp: float = -1., -): - """ - Args: - inputs: (sum_l N*Hl*Wl,) - targets: (sum_l N*Hl*Wl,) - pos_inds: N - Returns: - Loss tensor with the reduction option applied. - """ - pred = torch.clamp(inputs.sigmoid_(), min=sigmoid_clamp, max=1-sigmoid_clamp) - neg_weights = torch.pow(1 - targets, beta) - for i, ind in enumerate(pos_inds): - if ind >= pred.shape[0]: - print('%'*100) - print(pred.shape, ind, pos_inds) - pos_inds[i] = pred.shape[0] - 1 - pos_pred = pred[pos_inds] # N - pos_loss = torch.log(pos_pred) * torch.pow(1 - pos_pred, gamma) - neg_loss = torch.log(1 - pred) * torch.pow(pred, gamma) * neg_weights - if ignore_high_fp > 0: - not_high_fp = (pred < ignore_high_fp).float() - neg_loss = not_high_fp * neg_loss - - pos_loss = - pos_loss.sum() - neg_loss = - neg_loss.sum() - - if alpha >= 0: - pos_loss = alpha * pos_loss - neg_loss = (1 - alpha) * neg_loss - - return pos_loss, neg_loss - -# binary_heatmap_focal_loss_jit = torch.jit.script(binary_heatmap_focal_loss) \ No newline at end of file diff --git a/spaces/Terminus0501/vits-uma-genshin-honkai/mel_processing.py b/spaces/Terminus0501/vits-uma-genshin-honkai/mel_processing.py deleted file mode 100644 index 3e252e76320522a8a4195a60665168f22769aec2..0000000000000000000000000000000000000000 --- a/spaces/Terminus0501/vits-uma-genshin-honkai/mel_processing.py +++ /dev/null @@ -1,101 +0,0 @@ -import torch -import torch.utils.data -from librosa.filters import mel as librosa_mel_fn - -MAX_WAV_VALUE = 32768.0 - - -def dynamic_range_compression_torch(x, C=1, clip_val=1e-5): - """ - PARAMS - ------ - C: compression factor - """ - return torch.log(torch.clamp(x, min=clip_val) * C) - - -def dynamic_range_decompression_torch(x, C=1): - """ - PARAMS - ------ - C: compression factor used to compress - """ - return torch.exp(x) / C - - -def spectral_normalize_torch(magnitudes): - output = dynamic_range_compression_torch(magnitudes) - return output - - -def spectral_de_normalize_torch(magnitudes): - output = dynamic_range_decompression_torch(magnitudes) - return output - - -mel_basis = {} -hann_window = {} - - -def spectrogram_torch(y, n_fft, sampling_rate, hop_size, win_size, center=False): - if torch.min(y) < -1.: - print('min value is ', torch.min(y)) - if torch.max(y) > 1.: - print('max value is ', torch.max(y)) - - global hann_window - dtype_device = str(y.dtype) + '_' + str(y.device) - wnsize_dtype_device = str(win_size) + '_' + dtype_device - if wnsize_dtype_device not in hann_window: - hann_window[wnsize_dtype_device] = torch.hann_window(win_size).to(dtype=y.dtype, device=y.device) - - y = 
torch.nn.functional.pad(y.unsqueeze(1), (int((n_fft-hop_size)/2), int((n_fft-hop_size)/2)), mode='reflect') - y = y.squeeze(1) - - spec = torch.stft(y, n_fft, hop_length=hop_size, win_length=win_size, window=hann_window[wnsize_dtype_device], - center=center, pad_mode='reflect', normalized=False, onesided=True, return_complex=False) - - spec = torch.sqrt(spec.pow(2).sum(-1) + 1e-6) - return spec - - -def spec_to_mel_torch(spec, n_fft, num_mels, sampling_rate, fmin, fmax): - global mel_basis - dtype_device = str(spec.dtype) + '_' + str(spec.device) - fmax_dtype_device = str(fmax) + '_' + dtype_device - if fmax_dtype_device not in mel_basis: - mel = librosa_mel_fn(sampling_rate, n_fft, num_mels, fmin, fmax) - mel_basis[fmax_dtype_device] = torch.from_numpy(mel).to(dtype=spec.dtype, device=spec.device) - spec = torch.matmul(mel_basis[fmax_dtype_device], spec) - spec = spectral_normalize_torch(spec) - return spec - - -def mel_spectrogram_torch(y, n_fft, num_mels, sampling_rate, hop_size, win_size, fmin, fmax, center=False): - if torch.min(y) < -1.: - print('min value is ', torch.min(y)) - if torch.max(y) > 1.: - print('max value is ', torch.max(y)) - - global mel_basis, hann_window - dtype_device = str(y.dtype) + '_' + str(y.device) - fmax_dtype_device = str(fmax) + '_' + dtype_device - wnsize_dtype_device = str(win_size) + '_' + dtype_device - if fmax_dtype_device not in mel_basis: - mel = librosa_mel_fn(sampling_rate, n_fft, num_mels, fmin, fmax) - mel_basis[fmax_dtype_device] = torch.from_numpy(mel).to(dtype=y.dtype, device=y.device) - if wnsize_dtype_device not in hann_window: - hann_window[wnsize_dtype_device] = torch.hann_window(win_size).to(dtype=y.dtype, device=y.device) - - y = torch.nn.functional.pad(y.unsqueeze(1), (int((n_fft-hop_size)/2), int((n_fft-hop_size)/2)), mode='reflect') - y = y.squeeze(1) - - spec = torch.stft(y, n_fft, hop_length=hop_size, win_length=win_size, window=hann_window[wnsize_dtype_device], - center=center, pad_mode='reflect', normalized=False, onesided=True) - - spec = torch.sqrt(spec.pow(2).sum(-1) + 1e-6) - - spec = torch.matmul(mel_basis[fmax_dtype_device], spec) - spec = spectral_normalize_torch(spec) - - return spec diff --git a/spaces/Thaweewat/ControlNet-Architecture/cldm/model.py b/spaces/Thaweewat/ControlNet-Architecture/cldm/model.py deleted file mode 100644 index 2934ca689019366bc44ed40aa130eebf69f383df..0000000000000000000000000000000000000000 --- a/spaces/Thaweewat/ControlNet-Architecture/cldm/model.py +++ /dev/null @@ -1,21 +0,0 @@ -import torch - -from omegaconf import OmegaConf -from ldm.util import instantiate_from_config - - -def get_state_dict(d): - return d.get('state_dict', d) - - -def load_state_dict(ckpt_path, location='cpu'): - state_dict = get_state_dict(torch.load(ckpt_path, map_location=torch.device(location))) - print(f'Loaded state_dict from [{ckpt_path}]') - return state_dict - - -def create_model(config_path): - config = OmegaConf.load(config_path) - model = instantiate_from_config(config.model).cpu() - print(f'Loaded model config from [{config_path}]') - return model diff --git a/spaces/Trangluna2002/AI_Cover_Gen/src/rvc.py b/spaces/Trangluna2002/AI_Cover_Gen/src/rvc.py deleted file mode 100644 index a2790602462859e4a9885c145a13ff86efba8a3c..0000000000000000000000000000000000000000 --- a/spaces/Trangluna2002/AI_Cover_Gen/src/rvc.py +++ /dev/null @@ -1,166 +0,0 @@ -from multiprocessing import cpu_count -from pathlib import Path - -import torch -from fairseq import checkpoint_utils -from scipy.io import wavfile - -from 
infer_pack.models import ( - SynthesizerTrnMs256NSFsid, - SynthesizerTrnMs256NSFsid_nono, - SynthesizerTrnMs768NSFsid, - SynthesizerTrnMs768NSFsid_nono, -) -from my_utils import load_audio -from vc_infer_pipeline import VC - -BASE_DIR = Path(__file__).resolve().parent.parent - - -# config cpu -def use_fp32_config(): - for config_file in [ - "32k.json", - "40k.json", - "48k.json", - "48k_v2.json", - "32k_v2.json", - ]: - with open(f"src/configs/{config_file}", "r") as f: - strr = f.read().replace("true", "false") - with open(f"src/configs/{config_file}", "w") as f: - f.write(strr) - -class Config: - def __init__(self, device, is_half): - self.device = device - self.is_half = is_half - self.n_cpu = 2 # set cpu cores - self.gpu_name = None - self.gpu_mem = None - self.x_pad, self.x_query, self.x_center, self.x_max = self.device_config() - - def device_config(self) -> tuple: - if torch.cuda.is_available(): - i_device = int(self.device.split(":")[-1]) - self.gpu_name = torch.cuda.get_device_name(i_device) - if ( - ("16" in self.gpu_name and "V100" not in self.gpu_name.upper()) - or "P40" in self.gpu_name.upper() - or "1060" in self.gpu_name - or "1070" in self.gpu_name - or "1080" in self.gpu_name - ): - print("16 series/10 series P40 forced single precision") - self.is_half = False - for config_file in ["32k.json", "40k.json", "48k.json"]: - with open(BASE_DIR / "src" / "configs" / config_file, "r") as f: - strr = f.read().replace("true", "false") - with open(BASE_DIR / "src" / "configs" / config_file, "w") as f: - f.write(strr) - with open(BASE_DIR / "src" / "trainset_preprocess_pipeline_print.py", "r") as f: - strr = f.read().replace("3.7", "3.0") - with open(BASE_DIR / "src" / "trainset_preprocess_pipeline_print.py", "w") as f: - f.write(strr) - else: - self.gpu_name = None - self.gpu_mem = int( - torch.cuda.get_device_properties(i_device).total_memory - / 1024 - / 1024 - / 1024 - + 0.4 - ) - if self.gpu_mem <= 4: - with open(BASE_DIR / "src" / "trainset_preprocess_pipeline_print.py", "r") as f: - strr = f.read().replace("3.7", "3.0") - with open(BASE_DIR / "src" / "trainset_preprocess_pipeline_print.py", "w") as f: - f.write(strr) - elif torch.backends.mps.is_available(): - print("No supported N-card found, use MPS for inference") - self.device = "mps" - else: - print("No supported N-card found, use CPU for inference") - self.device = "cpu" - self.is_half = False - use_fp32_config() # cpu config - - if self.n_cpu == 0: - self.n_cpu = cpu_count() - - if self.is_half: - # 6G memory config - x_pad = 3 - x_query = 10 - x_center = 60 - x_max = 65 - else: - # 5G memory config - x_pad = 1 - x_query = 6 - x_center = 38 - x_max = 41 - - if self.gpu_mem != None and self.gpu_mem <= 4: - x_pad = 1 - x_query = 5 - x_center = 30 - x_max = 32 - - return x_pad, x_query, x_center, x_max - - -def load_hubert(device, is_half, model_path): - models, saved_cfg, task = checkpoint_utils.load_model_ensemble_and_task([model_path], suffix='', ) - hubert = models[0] - hubert = hubert.to(device) - - if is_half: - hubert = hubert.half() - else: - hubert = hubert.float() - - hubert.eval() - return hubert - - -def get_vc(device, is_half, config, model_path): - cpt = torch.load(model_path, map_location='cpu') - if "config" not in cpt or "weight" not in cpt: - raise ValueError(f'Incorrect format for {model_path}. 
Use a voice model trained using RVC v2 instead.') - - tgt_sr = cpt["config"][-1] - cpt["config"][-3] = cpt["weight"]["emb_g.weight"].shape[0] - if_f0 = cpt.get("f0", 1) - version = cpt.get("version", "v1") - - if version == "v1": - if if_f0 == 1: - net_g = SynthesizerTrnMs256NSFsid(*cpt["config"], is_half=is_half) - else: - net_g = SynthesizerTrnMs256NSFsid_nono(*cpt["config"]) - elif version == "v2": - if if_f0 == 1: - net_g = SynthesizerTrnMs768NSFsid(*cpt["config"], is_half=is_half) - else: - net_g = SynthesizerTrnMs768NSFsid_nono(*cpt["config"]) - - del net_g.enc_q - print(net_g.load_state_dict(cpt["weight"], strict=False)) - net_g.eval().to(device) - - if is_half: - net_g = net_g.half() - else: - net_g = net_g.float() - - vc = VC(tgt_sr, config) - return cpt, version, net_g, tgt_sr, vc - - -def rvc_infer(index_path, index_rate, input_path, output_path, pitch_change, f0_method, cpt, version, net_g, filter_radius, tgt_sr, rms_mix_rate, protect, crepe_hop_length, vc, hubert_model): - audio = load_audio(input_path, 16000) - times = [0, 0, 0] - if_f0 = cpt.get('f0', 1) - audio_opt = vc.pipeline(hubert_model, net_g, 0, audio, input_path, times, pitch_change, f0_method, index_path, index_rate, if_f0, filter_radius, tgt_sr, 0, rms_mix_rate, version, protect, crepe_hop_length) - wavfile.write(output_path, tgt_sr, audio_opt) diff --git a/spaces/UjjwalVIT/Text_analysis_and_metadata_app/README.md b/spaces/UjjwalVIT/Text_analysis_and_metadata_app/README.md deleted file mode 100644 index b084092e99b172b9f8f1eb94f4735d542b7b6b5e..0000000000000000000000000000000000000000 --- a/spaces/UjjwalVIT/Text_analysis_and_metadata_app/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Text Analysis And Metadata App -emoji: 🏢 -colorFrom: blue -colorTo: red -sdk: streamlit -sdk_version: 1.21.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Ukrania/RVC-Models/vc_infer_pipeline.py b/spaces/Ukrania/RVC-Models/vc_infer_pipeline.py deleted file mode 100644 index c6be666c8d980fc6da24bd5e16ac9909d9204a46..0000000000000000000000000000000000000000 --- a/spaces/Ukrania/RVC-Models/vc_infer_pipeline.py +++ /dev/null @@ -1,431 +0,0 @@ -import numpy as np, parselmouth, torch, pdb -from time import time as ttime -import torch.nn.functional as F -import scipy.signal as signal -import pyworld, os, traceback, faiss, librosa, torchcrepe -from scipy import signal -from functools import lru_cache - -bh, ah = signal.butter(N=5, Wn=48, btype="high", fs=16000) - -input_audio_path2wav = {} - - -@lru_cache -def cache_harvest_f0(input_audio_path, fs, f0max, f0min, frame_period): - audio = input_audio_path2wav[input_audio_path] - f0, t = pyworld.harvest( - audio, - fs=fs, - f0_ceil=f0max, - f0_floor=f0min, - frame_period=frame_period, - ) - f0 = pyworld.stonemask(audio, f0, t, fs) - return f0 - - -def change_rms(data1, sr1, data2, sr2, rate): # 1是输入音频,2是输出音频,rate是2的占比 - # print(data1.max(),data2.max()) - rms1 = librosa.feature.rms( - y=data1, frame_length=sr1 // 2 * 2, hop_length=sr1 // 2 - ) # 每半秒一个点 - rms2 = librosa.feature.rms(y=data2, frame_length=sr2 // 2 * 2, hop_length=sr2 // 2) - rms1 = torch.from_numpy(rms1) - rms1 = F.interpolate( - rms1.unsqueeze(0), size=data2.shape[0], mode="linear" - ).squeeze() - rms2 = torch.from_numpy(rms2) - rms2 = F.interpolate( - rms2.unsqueeze(0), size=data2.shape[0], mode="linear" - ).squeeze() - rms2 = torch.max(rms2, torch.zeros_like(rms2) + 1e-6) - data2 *= ( - torch.pow(rms1, 
torch.tensor(1 - rate)) - * torch.pow(rms2, torch.tensor(rate - 1)) - ).numpy() - return data2 - - -class VC(object): - def __init__(self, tgt_sr, config): - self.x_pad, self.x_query, self.x_center, self.x_max, self.is_half = ( - config.x_pad, - config.x_query, - config.x_center, - config.x_max, - config.is_half, - ) - self.sr = 16000 # hubert输入采样率 - self.window = 160 # 每帧点数 - self.t_pad = self.sr * self.x_pad # 每条前后pad时间 - self.t_pad_tgt = tgt_sr * self.x_pad - self.t_pad2 = self.t_pad * 2 - self.t_query = self.sr * self.x_query # 查询切点前后查询时间 - self.t_center = self.sr * self.x_center # 查询切点位置 - self.t_max = self.sr * self.x_max # 免查询时长阈值 - self.device = config.device - - def get_f0( - self, - input_audio_path, - x, - p_len, - f0_up_key, - f0_method, - filter_radius, - inp_f0=None, - ): - global input_audio_path2wav - time_step = self.window / self.sr * 1000 - f0_min = 50 - f0_max = 1100 - f0_mel_min = 1127 * np.log(1 + f0_min / 700) - f0_mel_max = 1127 * np.log(1 + f0_max / 700) - if f0_method == "pm": - f0 = ( - parselmouth.Sound(x, self.sr) - .to_pitch_ac( - time_step=time_step / 1000, - voicing_threshold=0.6, - pitch_floor=f0_min, - pitch_ceiling=f0_max, - ) - .selected_array["frequency"] - ) - pad_size = (p_len - len(f0) + 1) // 2 - if pad_size > 0 or p_len - len(f0) - pad_size > 0: - f0 = np.pad( - f0, [[pad_size, p_len - len(f0) - pad_size]], mode="constant" - ) - elif f0_method == "harvest": - input_audio_path2wav[input_audio_path] = x.astype(np.double) - f0 = cache_harvest_f0(input_audio_path, self.sr, f0_max, f0_min, 10) - if filter_radius > 2: - f0 = signal.medfilt(f0, 3) - elif f0_method == "crepe": - model = "full" - # Pick a batch size that doesn't cause memory errors on your gpu - batch_size = 512 - # Compute pitch using first gpu - audio = torch.tensor(np.copy(x))[None].float() - f0, pd = torchcrepe.predict( - audio, - self.sr, - self.window, - f0_min, - f0_max, - model, - batch_size=batch_size, - device=self.device, - return_periodicity=True, - ) - pd = torchcrepe.filter.median(pd, 3) - f0 = torchcrepe.filter.mean(f0, 3) - f0[pd < 0.1] = 0 - f0 = f0[0].cpu().numpy() - f0 *= pow(2, f0_up_key / 12) - # with open("test.txt","w")as f:f.write("\n".join([str(i)for i in f0.tolist()])) - tf0 = self.sr // self.window # 每秒f0点数 - if inp_f0 is not None: - delta_t = np.round( - (inp_f0[:, 0].max() - inp_f0[:, 0].min()) * tf0 + 1 - ).astype("int16") - replace_f0 = np.interp( - list(range(delta_t)), inp_f0[:, 0] * 100, inp_f0[:, 1] - ) - shape = f0[self.x_pad * tf0 : self.x_pad * tf0 + len(replace_f0)].shape[0] - f0[self.x_pad * tf0 : self.x_pad * tf0 + len(replace_f0)] = replace_f0[ - :shape - ] - # with open("test_opt.txt","w")as f:f.write("\n".join([str(i)for i in f0.tolist()])) - f0bak = f0.copy() - f0_mel = 1127 * np.log(1 + f0 / 700) - f0_mel[f0_mel > 0] = (f0_mel[f0_mel > 0] - f0_mel_min) * 254 / ( - f0_mel_max - f0_mel_min - ) + 1 - f0_mel[f0_mel <= 1] = 1 - f0_mel[f0_mel > 255] = 255 - f0_coarse = np.rint(f0_mel).astype(np.int) - return f0_coarse, f0bak # 1-0 - - def vc( - self, - model, - net_g, - sid, - audio0, - pitch, - pitchf, - times, - index, - big_npy, - index_rate, - version, - protect, - ): # ,file_index,file_big_npy - feats = torch.from_numpy(audio0) - if self.is_half: - feats = feats.half() - else: - feats = feats.float() - if feats.dim() == 2: # double channels - feats = feats.mean(-1) - assert feats.dim() == 1, feats.dim() - feats = feats.view(1, -1) - padding_mask = torch.BoolTensor(feats.shape).to(self.device).fill_(False) - - inputs = { - "source": 
feats.to(self.device), - "padding_mask": padding_mask, - "output_layer": 9 if version == "v1" else 12, - } - t0 = ttime() - with torch.no_grad(): - logits = model.extract_features(**inputs) - feats = model.final_proj(logits[0]) if version == "v1" else logits[0] - if protect < 0.5 and pitch != None and pitchf != None: - feats0 = feats.clone() - if ( - isinstance(index, type(None)) == False - and isinstance(big_npy, type(None)) == False - and index_rate != 0 - ): - npy = feats[0].cpu().numpy() - if self.is_half: - npy = npy.astype("float32") - - # _, I = index.search(npy, 1) - # npy = big_npy[I.squeeze()] - - score, ix = index.search(npy, k=8) - weight = np.square(1 / score) - weight /= weight.sum(axis=1, keepdims=True) - npy = np.sum(big_npy[ix] * np.expand_dims(weight, axis=2), axis=1) - - if self.is_half: - npy = npy.astype("float16") - feats = ( - torch.from_numpy(npy).unsqueeze(0).to(self.device) * index_rate - + (1 - index_rate) * feats - ) - - feats = F.interpolate(feats.permute(0, 2, 1), scale_factor=2).permute(0, 2, 1) - if protect < 0.5 and pitch != None and pitchf != None: - feats0 = F.interpolate(feats0.permute(0, 2, 1), scale_factor=2).permute( - 0, 2, 1 - ) - t1 = ttime() - p_len = audio0.shape[0] // self.window - if feats.shape[1] < p_len: - p_len = feats.shape[1] - if pitch != None and pitchf != None: - pitch = pitch[:, :p_len] - pitchf = pitchf[:, :p_len] - - if protect < 0.5 and pitch != None and pitchf != None: - pitchff = pitchf.clone() - pitchff[pitchf > 0] = 1 - pitchff[pitchf < 1] = protect - pitchff = pitchff.unsqueeze(-1) - feats = feats * pitchff + feats0 * (1 - pitchff) - feats = feats.to(feats0.dtype) - p_len = torch.tensor([p_len], device=self.device).long() - with torch.no_grad(): - if pitch != None and pitchf != None: - audio1 = ( - (net_g.infer(feats, p_len, pitch, pitchf, sid)[0][0, 0]) - .data.cpu() - .float() - .numpy() - ) - else: - audio1 = ( - (net_g.infer(feats, p_len, sid)[0][0, 0]).data.cpu().float().numpy() - ) - del feats, p_len, padding_mask - if torch.cuda.is_available(): - torch.cuda.empty_cache() - t2 = ttime() - times[0] += t1 - t0 - times[2] += t2 - t1 - return audio1 - - def pipeline( - self, - model, - net_g, - sid, - audio, - input_audio_path, - times, - f0_up_key, - f0_method, - file_index, - # file_big_npy, - index_rate, - if_f0, - filter_radius, - tgt_sr, - resample_sr, - rms_mix_rate, - version, - protect, - f0_file=None, - ): - if ( - file_index != "" - # and file_big_npy != "" - # and os.path.exists(file_big_npy) == True - and os.path.exists(file_index) == True - and index_rate != 0 - ): - try: - index = faiss.read_index(file_index) - # big_npy = np.load(file_big_npy) - big_npy = index.reconstruct_n(0, index.ntotal) - except: - traceback.print_exc() - index = big_npy = None - else: - index = big_npy = None - audio = signal.filtfilt(bh, ah, audio) - audio_pad = np.pad(audio, (self.window // 2, self.window // 2), mode="reflect") - opt_ts = [] - if audio_pad.shape[0] > self.t_max: - audio_sum = np.zeros_like(audio) - for i in range(self.window): - audio_sum += audio_pad[i : i - self.window] - for t in range(self.t_center, audio.shape[0], self.t_center): - opt_ts.append( - t - - self.t_query - + np.where( - np.abs(audio_sum[t - self.t_query : t + self.t_query]) - == np.abs(audio_sum[t - self.t_query : t + self.t_query]).min() - )[0][0] - ) - s = 0 - audio_opt = [] - t = None - t1 = ttime() - audio_pad = np.pad(audio, (self.t_pad, self.t_pad), mode="reflect") - p_len = audio_pad.shape[0] // self.window - inp_f0 = None - if hasattr(f0_file, 
"name") == True: - try: - with open(f0_file.name, "r") as f: - lines = f.read().strip("\n").split("\n") - inp_f0 = [] - for line in lines: - inp_f0.append([float(i) for i in line.split(",")]) - inp_f0 = np.array(inp_f0, dtype="float32") - except: - traceback.print_exc() - sid = torch.tensor(sid, device=self.device).unsqueeze(0).long() - pitch, pitchf = None, None - if if_f0 == 1: - pitch, pitchf = self.get_f0( - input_audio_path, - audio_pad, - p_len, - f0_up_key, - f0_method, - filter_radius, - inp_f0, - ) - pitch = pitch[:p_len] - pitchf = pitchf[:p_len] - if self.device == "mps": - pitchf = pitchf.astype(np.float32) - pitch = torch.tensor(pitch, device=self.device).unsqueeze(0).long() - pitchf = torch.tensor(pitchf, device=self.device).unsqueeze(0).float() - t2 = ttime() - times[1] += t2 - t1 - for t in opt_ts: - t = t // self.window * self.window - if if_f0 == 1: - audio_opt.append( - self.vc( - model, - net_g, - sid, - audio_pad[s : t + self.t_pad2 + self.window], - pitch[:, s // self.window : (t + self.t_pad2) // self.window], - pitchf[:, s // self.window : (t + self.t_pad2) // self.window], - times, - index, - big_npy, - index_rate, - version, - protect, - )[self.t_pad_tgt : -self.t_pad_tgt] - ) - else: - audio_opt.append( - self.vc( - model, - net_g, - sid, - audio_pad[s : t + self.t_pad2 + self.window], - None, - None, - times, - index, - big_npy, - index_rate, - version, - protect, - )[self.t_pad_tgt : -self.t_pad_tgt] - ) - s = t - if if_f0 == 1: - audio_opt.append( - self.vc( - model, - net_g, - sid, - audio_pad[t:], - pitch[:, t // self.window :] if t is not None else pitch, - pitchf[:, t // self.window :] if t is not None else pitchf, - times, - index, - big_npy, - index_rate, - version, - protect, - )[self.t_pad_tgt : -self.t_pad_tgt] - ) - else: - audio_opt.append( - self.vc( - model, - net_g, - sid, - audio_pad[t:], - None, - None, - times, - index, - big_npy, - index_rate, - version, - protect, - )[self.t_pad_tgt : -self.t_pad_tgt] - ) - audio_opt = np.concatenate(audio_opt) - if rms_mix_rate != 1: - audio_opt = change_rms(audio, 16000, audio_opt, tgt_sr, rms_mix_rate) - if resample_sr >= 16000 and tgt_sr != resample_sr: - audio_opt = librosa.resample( - audio_opt, orig_sr=tgt_sr, target_sr=resample_sr - ) - audio_max = np.abs(audio_opt).max() / 0.99 - max_int16 = 32768 - if audio_max > 1: - max_int16 /= audio_max - audio_opt = (audio_opt * max_int16).astype(np.int16) - del pitch, pitchf, sid - if torch.cuda.is_available(): - torch.cuda.empty_cache() - return audio_opt diff --git a/spaces/VIPLab/Track-Anything/inpainter/model/e2fgvi.py b/spaces/VIPLab/Track-Anything/inpainter/model/e2fgvi.py deleted file mode 100644 index ea90b61e0c7fe44b1968a2c59592bf50e0426bb0..0000000000000000000000000000000000000000 --- a/spaces/VIPLab/Track-Anything/inpainter/model/e2fgvi.py +++ /dev/null @@ -1,350 +0,0 @@ -''' Towards An End-to-End Framework for Video Inpainting -''' - -import torch -import torch.nn as nn -import torch.nn.functional as F - -from model.modules.flow_comp import SPyNet -from model.modules.feat_prop import BidirectionalPropagation, SecondOrderDeformableAlignment -from model.modules.tfocal_transformer import TemporalFocalTransformerBlock, SoftSplit, SoftComp -from model.modules.spectral_norm import spectral_norm as _spectral_norm - - -class BaseNetwork(nn.Module): - def __init__(self): - super(BaseNetwork, self).__init__() - - def print_network(self): - if isinstance(self, list): - self = self[0] - num_params = 0 - for param in self.parameters(): - num_params += 
param.numel() - print( - 'Network [%s] was created. Total number of parameters: %.1f million. ' - 'To see the architecture, do print(network).' % - (type(self).__name__, num_params / 1000000)) - - def init_weights(self, init_type='normal', gain=0.02): - ''' - initialize network's weights - init_type: normal | xavier | kaiming | orthogonal - https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/blob/9451e70673400885567d08a9e97ade2524c700d0/models/networks.py#L39 - ''' - def init_func(m): - classname = m.__class__.__name__ - if classname.find('InstanceNorm2d') != -1: - if hasattr(m, 'weight') and m.weight is not None: - nn.init.constant_(m.weight.data, 1.0) - if hasattr(m, 'bias') and m.bias is not None: - nn.init.constant_(m.bias.data, 0.0) - elif hasattr(m, 'weight') and (classname.find('Conv') != -1 - or classname.find('Linear') != -1): - if init_type == 'normal': - nn.init.normal_(m.weight.data, 0.0, gain) - elif init_type == 'xavier': - nn.init.xavier_normal_(m.weight.data, gain=gain) - elif init_type == 'xavier_uniform': - nn.init.xavier_uniform_(m.weight.data, gain=1.0) - elif init_type == 'kaiming': - nn.init.kaiming_normal_(m.weight.data, a=0, mode='fan_in') - elif init_type == 'orthogonal': - nn.init.orthogonal_(m.weight.data, gain=gain) - elif init_type == 'none': # uses pytorch's default init method - m.reset_parameters() - else: - raise NotImplementedError( - 'initialization method [%s] is not implemented' % - init_type) - if hasattr(m, 'bias') and m.bias is not None: - nn.init.constant_(m.bias.data, 0.0) - - self.apply(init_func) - - # propagate to children - for m in self.children(): - if hasattr(m, 'init_weights'): - m.init_weights(init_type, gain) - - -class Encoder(nn.Module): - def __init__(self): - super(Encoder, self).__init__() - self.group = [1, 2, 4, 8, 1] - self.layers = nn.ModuleList([ - nn.Conv2d(3, 64, kernel_size=3, stride=2, padding=1), - nn.LeakyReLU(0.2, inplace=True), - nn.Conv2d(64, 64, kernel_size=3, stride=1, padding=1), - nn.LeakyReLU(0.2, inplace=True), - nn.Conv2d(64, 128, kernel_size=3, stride=2, padding=1), - nn.LeakyReLU(0.2, inplace=True), - nn.Conv2d(128, 256, kernel_size=3, stride=1, padding=1), - nn.LeakyReLU(0.2, inplace=True), - nn.Conv2d(256, 384, kernel_size=3, stride=1, padding=1, groups=1), - nn.LeakyReLU(0.2, inplace=True), - nn.Conv2d(640, 512, kernel_size=3, stride=1, padding=1, groups=2), - nn.LeakyReLU(0.2, inplace=True), - nn.Conv2d(768, 384, kernel_size=3, stride=1, padding=1, groups=4), - nn.LeakyReLU(0.2, inplace=True), - nn.Conv2d(640, 256, kernel_size=3, stride=1, padding=1, groups=8), - nn.LeakyReLU(0.2, inplace=True), - nn.Conv2d(512, 128, kernel_size=3, stride=1, padding=1, groups=1), - nn.LeakyReLU(0.2, inplace=True) - ]) - - def forward(self, x): - bt, c, h, w = x.size() - h, w = h // 4, w // 4 - out = x - for i, layer in enumerate(self.layers): - if i == 8: - x0 = out - if i > 8 and i % 2 == 0: - g = self.group[(i - 8) // 2] - x = x0.view(bt, g, -1, h, w) - o = out.view(bt, g, -1, h, w) - out = torch.cat([x, o], 2).view(bt, -1, h, w) - out = layer(out) - return out - - -class deconv(nn.Module): - def __init__(self, - input_channel, - output_channel, - kernel_size=3, - padding=0): - super().__init__() - self.conv = nn.Conv2d(input_channel, - output_channel, - kernel_size=kernel_size, - stride=1, - padding=padding) - - def forward(self, x): - x = F.interpolate(x, - scale_factor=2, - mode='bilinear', - align_corners=True) - return self.conv(x) - - -class InpaintGenerator(BaseNetwork): - def __init__(self, init_weights=True): 
- super(InpaintGenerator, self).__init__() - channel = 256 - hidden = 512 - - # encoder - self.encoder = Encoder() - - # decoder - self.decoder = nn.Sequential( - deconv(channel // 2, 128, kernel_size=3, padding=1), - nn.LeakyReLU(0.2, inplace=True), - nn.Conv2d(128, 64, kernel_size=3, stride=1, padding=1), - nn.LeakyReLU(0.2, inplace=True), - deconv(64, 64, kernel_size=3, padding=1), - nn.LeakyReLU(0.2, inplace=True), - nn.Conv2d(64, 3, kernel_size=3, stride=1, padding=1)) - - # feature propagation module - self.feat_prop_module = BidirectionalPropagation(channel // 2) - - # soft split and soft composition - kernel_size = (7, 7) - padding = (3, 3) - stride = (3, 3) - output_size = (60, 108) - t2t_params = { - 'kernel_size': kernel_size, - 'stride': stride, - 'padding': padding, - 'output_size': output_size - } - self.ss = SoftSplit(channel // 2, - hidden, - kernel_size, - stride, - padding, - t2t_param=t2t_params) - self.sc = SoftComp(channel // 2, hidden, output_size, kernel_size, - stride, padding) - - n_vecs = 1 - for i, d in enumerate(kernel_size): - n_vecs *= int((output_size[i] + 2 * padding[i] - - (d - 1) - 1) / stride[i] + 1) - - blocks = [] - depths = 8 - num_heads = [4] * depths - window_size = [(5, 9)] * depths - focal_windows = [(5, 9)] * depths - focal_levels = [2] * depths - pool_method = "fc" - - for i in range(depths): - blocks.append( - TemporalFocalTransformerBlock(dim=hidden, - num_heads=num_heads[i], - window_size=window_size[i], - focal_level=focal_levels[i], - focal_window=focal_windows[i], - n_vecs=n_vecs, - t2t_params=t2t_params, - pool_method=pool_method)) - self.transformer = nn.Sequential(*blocks) - - if init_weights: - self.init_weights() - # Need to initial the weights of MSDeformAttn specifically - for m in self.modules(): - if isinstance(m, SecondOrderDeformableAlignment): - m.init_offset() - - # flow completion network - self.update_spynet = SPyNet() - - def forward_bidirect_flow(self, masked_local_frames): - b, l_t, c, h, w = masked_local_frames.size() - - # compute forward and backward flows of masked frames - masked_local_frames = F.interpolate(masked_local_frames.view( - -1, c, h, w), - scale_factor=1 / 4, - mode='bilinear', - align_corners=True, - recompute_scale_factor=True) - masked_local_frames = masked_local_frames.view(b, l_t, c, h // 4, - w // 4) - mlf_1 = masked_local_frames[:, :-1, :, :, :].reshape( - -1, c, h // 4, w // 4) - mlf_2 = masked_local_frames[:, 1:, :, :, :].reshape( - -1, c, h // 4, w // 4) - pred_flows_forward = self.update_spynet(mlf_1, mlf_2) - pred_flows_backward = self.update_spynet(mlf_2, mlf_1) - - pred_flows_forward = pred_flows_forward.view(b, l_t - 1, 2, h // 4, - w // 4) - pred_flows_backward = pred_flows_backward.view(b, l_t - 1, 2, h // 4, - w // 4) - - return pred_flows_forward, pred_flows_backward - - def forward(self, masked_frames, num_local_frames): - l_t = num_local_frames - b, t, ori_c, ori_h, ori_w = masked_frames.size() - - # normalization before feeding into the flow completion module - masked_local_frames = (masked_frames[:, :l_t, ...] + 1) / 2 - pred_flows = self.forward_bidirect_flow(masked_local_frames) - - # extracting features and performing the feature propagation on local features - enc_feat = self.encoder(masked_frames.view(b * t, ori_c, ori_h, ori_w)) - _, c, h, w = enc_feat.size() - local_feat = enc_feat.view(b, t, c, h, w)[:, :l_t, ...] - ref_feat = enc_feat.view(b, t, c, h, w)[:, l_t:, ...] 
- local_feat = self.feat_prop_module(local_feat, pred_flows[0], - pred_flows[1]) - enc_feat = torch.cat((local_feat, ref_feat), dim=1) - - # content hallucination through stacking multiple temporal focal transformer blocks - trans_feat = self.ss(enc_feat.view(-1, c, h, w), b) - trans_feat = self.transformer(trans_feat) - trans_feat = self.sc(trans_feat, t) - trans_feat = trans_feat.view(b, t, -1, h, w) - enc_feat = enc_feat + trans_feat - - # decode frames from features - output = self.decoder(enc_feat.view(b * t, c, h, w)) - output = torch.tanh(output) - return output, pred_flows - - -# ###################################################################### -# Discriminator for Temporal Patch GAN -# ###################################################################### - - -class Discriminator(BaseNetwork): - def __init__(self, - in_channels=3, - use_sigmoid=False, - use_spectral_norm=True, - init_weights=True): - super(Discriminator, self).__init__() - self.use_sigmoid = use_sigmoid - nf = 32 - - self.conv = nn.Sequential( - spectral_norm( - nn.Conv3d(in_channels=in_channels, - out_channels=nf * 1, - kernel_size=(3, 5, 5), - stride=(1, 2, 2), - padding=1, - bias=not use_spectral_norm), use_spectral_norm), - # nn.InstanceNorm2d(64, track_running_stats=False), - nn.LeakyReLU(0.2, inplace=True), - spectral_norm( - nn.Conv3d(nf * 1, - nf * 2, - kernel_size=(3, 5, 5), - stride=(1, 2, 2), - padding=(1, 2, 2), - bias=not use_spectral_norm), use_spectral_norm), - # nn.InstanceNorm2d(128, track_running_stats=False), - nn.LeakyReLU(0.2, inplace=True), - spectral_norm( - nn.Conv3d(nf * 2, - nf * 4, - kernel_size=(3, 5, 5), - stride=(1, 2, 2), - padding=(1, 2, 2), - bias=not use_spectral_norm), use_spectral_norm), - # nn.InstanceNorm2d(256, track_running_stats=False), - nn.LeakyReLU(0.2, inplace=True), - spectral_norm( - nn.Conv3d(nf * 4, - nf * 4, - kernel_size=(3, 5, 5), - stride=(1, 2, 2), - padding=(1, 2, 2), - bias=not use_spectral_norm), use_spectral_norm), - # nn.InstanceNorm2d(256, track_running_stats=False), - nn.LeakyReLU(0.2, inplace=True), - spectral_norm( - nn.Conv3d(nf * 4, - nf * 4, - kernel_size=(3, 5, 5), - stride=(1, 2, 2), - padding=(1, 2, 2), - bias=not use_spectral_norm), use_spectral_norm), - # nn.InstanceNorm2d(256, track_running_stats=False), - nn.LeakyReLU(0.2, inplace=True), - nn.Conv3d(nf * 4, - nf * 4, - kernel_size=(3, 5, 5), - stride=(1, 2, 2), - padding=(1, 2, 2))) - - if init_weights: - self.init_weights() - - def forward(self, xs): - # T, C, H, W = xs.shape (old) - # B, T, C, H, W (new) - xs_t = torch.transpose(xs, 1, 2) - feat = self.conv(xs_t) - if self.use_sigmoid: - feat = torch.sigmoid(feat) - out = torch.transpose(feat, 1, 2) # B, T, C, H, W - return out - - -def spectral_norm(module, mode=True): - if mode: - return _spectral_norm(module) - return module diff --git a/spaces/VickyKira/NASAGPT/g4f/Provider/Providers/Dfehub.py b/spaces/VickyKira/NASAGPT/g4f/Provider/Providers/Dfehub.py deleted file mode 100644 index 2f66f19b50b6b4ab79c012f123c47241141942eb..0000000000000000000000000000000000000000 --- a/spaces/VickyKira/NASAGPT/g4f/Provider/Providers/Dfehub.py +++ /dev/null @@ -1,49 +0,0 @@ -import os -import requests -from ...typing import sha256, Dict, get_type_hints - -url = "https://chat.dfehub.com" -model = ['gpt-3.5-turbo', 'gpt-3.5-turbo-16k', 'gpt-4'] -supports_stream = True -needs_auth = False - - -def _create_completion(model: str, messages: list, stream: bool, **kwargs): - headers = { - 'Authority': 'chat.dfehub.com', - 'Content-Type': 
'application/json', - 'Method': 'POST', - 'Path': '/api/openai/v1/chat/completions', - 'Scheme': 'https', - 'Accept': 'text/event-stream', - 'Accept-Language': 'pt-BR,pt;q=0.9,en-US;q=0.8,en;q=0.7,zh-CN;q=0.6,zh;q=0.5', - 'Content-Type': 'application/json', - 'Origin': 'https://chat.dfehub.com', - 'Referer': 'https://chat.dfehub.com/', - 'Sec-Ch-Ua': '"Not.A/Brand";v="8", "Chromium";v="114", "Google Chrome";v="114"', - 'Sec-Ch-Ua-Mobile': '?0', - 'Sec-Ch-Ua-Platform': '"Windows"', - 'Sec-Fetch-Dest': 'empty', - 'Sec-Fetch-Mode': 'cors', - 'Sec-Fetch-Site': 'same-origin', - 'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/114.0.0.0 Safari/537.36', - 'X-Requested-With': 'XMLHttpRequest', - } - - data = { - 'model': model, - 'temperature': 0.7, - 'max_tokens': '8000', - 'presence_penalty': 0, - 'messages': messages, - } - - response = requests.post(url + '/api/openai/v1/chat/completions', - headers=headers, json=data, stream=stream) - - yield response.json()['choices'][0]['message']['content'] - - -params = f'g4f.Providers.{os.path.basename(__file__)[:-3]} supports: ' + \ - '(%s)' % ', '.join( - [f"{name}: {get_type_hints(_create_completion)[name].__name__}" for name in _create_completion.__code__.co_varnames[:_create_completion.__code__.co_argcount]]) diff --git a/spaces/Vijaykumarthummapala/Mygenaichatbot/app.py b/spaces/Vijaykumarthummapala/Mygenaichatbot/app.py deleted file mode 100644 index c820ac1a58b9c4a12a5b4977a87ce2d7493c5db3..0000000000000000000000000000000000000000 --- a/spaces/Vijaykumarthummapala/Mygenaichatbot/app.py +++ /dev/null @@ -1,34 +0,0 @@ -import os -import gradio as gr -from langchain.chat_models import ChatOpenAI -from langchain import LLMChain, PromptTemplate -from langchain.memory import ConversationBufferMemory - -OPENAI_API_KEY=os.getenv('OPENAI_API_KEY') - -template = """ HELLO Meet Riya, your youthful and witty personal assistant! At 21 years old, she's full of energy and always eager to help. Riya's goal is to assist you with any questions or problems you might have. Her enthusiasm shines through in every response, making interactions with her enjoyable and engaging. -{chat_history} -User: {user_message} -Chatbot:""" - -prompt = PromptTemplate( - input_variables=["chat_history", "user_message"], template=template -) - -memory = ConversationBufferMemory(memory_key="chat_history") - -llm_chain = LLMChain( - llm=ChatOpenAI(temperature='1', model_name="gpt-3.5-turbo"), - prompt=prompt, - verbose=True, - memory=memory, -) - -def get_text_response(user_message,history): - response = llm_chain.predict(user_message = user_message) - return response - -demo = gr.ChatInterface(get_text_response) - -if __name__ == "__main__": - demo.launch() #To create a public link, set `share=True` in `launch()`. To enable errors and logs, set `debug=True` in `launch()`. 
diff --git a/spaces/Wootang01/chinese_translator_generator/README.md b/spaces/Wootang01/chinese_translator_generator/README.md deleted file mode 100644 index 457c2b668acc895bdefcae61673471cbc7cc1596..0000000000000000000000000000000000000000 --- a/spaces/Wootang01/chinese_translator_generator/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Chinese Translator Generator -emoji: 📈 -colorFrom: gray -colorTo: red -sdk: gradio -sdk_version: 3.13.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/XzJosh/Ava-Bert-VITS2/text/chinese.py b/spaces/XzJosh/Ava-Bert-VITS2/text/chinese.py deleted file mode 100644 index 276753880b73de2e8889dcb2101cd98c09e0710b..0000000000000000000000000000000000000000 --- a/spaces/XzJosh/Ava-Bert-VITS2/text/chinese.py +++ /dev/null @@ -1,193 +0,0 @@ -import os -import re - -import cn2an -from pypinyin import lazy_pinyin, Style - -from text import symbols -from text.symbols import punctuation -from text.tone_sandhi import ToneSandhi - -current_file_path = os.path.dirname(__file__) -pinyin_to_symbol_map = {line.split("\t")[0]: line.strip().split("\t")[1] for line in - open(os.path.join(current_file_path, 'opencpop-strict.txt')).readlines()} - -import jieba.posseg as psg - - -rep_map = { - ':': ',', - ';': ',', - ',': ',', - '。': '.', - '!': '!', - '?': '?', - '\n': '.', - "·": ",", - '、': ",", - '...': '…', - '$': '.', - '“': "'", - '”': "'", - '‘': "'", - '’': "'", - '(': "'", - ')': "'", - '(': "'", - ')': "'", - '《': "'", - '》': "'", - '【': "'", - '】': "'", - '[': "'", - ']': "'", - '—': "-", - '~': "-", - '~': "-", - '「': "'", - '」': "'", - -} - -tone_modifier = ToneSandhi() - -def replace_punctuation(text): - text = text.replace("嗯", "恩").replace("呣","母") - pattern = re.compile('|'.join(re.escape(p) for p in rep_map.keys())) - - replaced_text = pattern.sub(lambda x: rep_map[x.group()], text) - - replaced_text = re.sub(r'[^\u4e00-\u9fa5'+"".join(punctuation)+r']+', '', replaced_text) - - return replaced_text - -def g2p(text): - pattern = r'(?<=[{0}])\s*'.format(''.join(punctuation)) - sentences = [i for i in re.split(pattern, text) if i.strip()!=''] - phones, tones, word2ph = _g2p(sentences) - assert sum(word2ph) == len(phones) - assert len(word2ph) == len(text) #Sometimes it will crash,you can add a try-catch. 
- phones = ['_'] + phones + ["_"] - tones = [0] + tones + [0] - word2ph = [1] + word2ph + [1] - return phones, tones, word2ph - - -def _get_initials_finals(word): - initials = [] - finals = [] - orig_initials = lazy_pinyin( - word, neutral_tone_with_five=True, style=Style.INITIALS) - orig_finals = lazy_pinyin( - word, neutral_tone_with_five=True, style=Style.FINALS_TONE3) - for c, v in zip(orig_initials, orig_finals): - initials.append(c) - finals.append(v) - return initials, finals - - -def _g2p(segments): - phones_list = [] - tones_list = [] - word2ph = [] - for seg in segments: - pinyins = [] - # Replace all English words in the sentence - seg = re.sub('[a-zA-Z]+', '', seg) - seg_cut = psg.lcut(seg) - initials = [] - finals = [] - seg_cut = tone_modifier.pre_merge_for_modify(seg_cut) - for word, pos in seg_cut: - if pos == 'eng': - continue - sub_initials, sub_finals = _get_initials_finals(word) - sub_finals = tone_modifier.modified_tone(word, pos, - sub_finals) - initials.append(sub_initials) - finals.append(sub_finals) - - # assert len(sub_initials) == len(sub_finals) == len(word) - initials = sum(initials, []) - finals = sum(finals, []) - # - for c, v in zip(initials, finals): - raw_pinyin = c+v - # NOTE: post process for pypinyin outputs - # we discriminate i, ii and iii - if c == v: - assert c in punctuation - phone = [c] - tone = '0' - word2ph.append(1) - else: - v_without_tone = v[:-1] - tone = v[-1] - - pinyin = c+v_without_tone - assert tone in '12345' - - if c: - # 多音节 - v_rep_map = { - "uei": 'ui', - 'iou': 'iu', - 'uen': 'un', - } - if v_without_tone in v_rep_map.keys(): - pinyin = c+v_rep_map[v_without_tone] - else: - # 单音节 - pinyin_rep_map = { - 'ing': 'ying', - 'i': 'yi', - 'in': 'yin', - 'u': 'wu', - } - if pinyin in pinyin_rep_map.keys(): - pinyin = pinyin_rep_map[pinyin] - else: - single_rep_map = { - 'v': 'yu', - 'e': 'e', - 'i': 'y', - 'u': 'w', - } - if pinyin[0] in single_rep_map.keys(): - pinyin = single_rep_map[pinyin[0]]+pinyin[1:] - - assert pinyin in pinyin_to_symbol_map.keys(), (pinyin, seg, raw_pinyin) - phone = pinyin_to_symbol_map[pinyin].split(' ') - word2ph.append(len(phone)) - - phones_list += phone - tones_list += [int(tone)] * len(phone) - return phones_list, tones_list, word2ph - - - -def text_normalize(text): - numbers = re.findall(r'\d+(?:\.?\d+)?', text) - for number in numbers: - text = text.replace(number, cn2an.an2cn(number), 1) - text = replace_punctuation(text) - return text - -def get_bert_feature(text, word2ph): - from text import chinese_bert - return chinese_bert.get_bert_feature(text, word2ph) - -if __name__ == '__main__': - from text.chinese_bert import get_bert_feature - text = "啊!但是《原神》是由,米哈\游自主, [研发]的一款全.新开放世界.冒险游戏" - text = text_normalize(text) - print(text) - phones, tones, word2ph = g2p(text) - bert = get_bert_feature(text, word2ph) - - print(phones, tones, word2ph, bert.shape) - - -# # 示例用法 -# text = "这是一个示例文本:,你好!这是一个测试...." 
-# print(g2p_paddle(text)) # 输出: 这是一个示例文本你好这是一个测试 diff --git a/spaces/XzJosh/Jiaran-Bert-VITS2/models.py b/spaces/XzJosh/Jiaran-Bert-VITS2/models.py deleted file mode 100644 index d4afe44d883691610c5903e602a3ca245fcb3a5c..0000000000000000000000000000000000000000 --- a/spaces/XzJosh/Jiaran-Bert-VITS2/models.py +++ /dev/null @@ -1,707 +0,0 @@ -import copy -import math -import torch -from torch import nn -from torch.nn import functional as F - -import commons -import modules -import attentions -import monotonic_align - -from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d -from torch.nn.utils import weight_norm, remove_weight_norm, spectral_norm - -from commons import init_weights, get_padding -from text import symbols, num_tones, num_languages -class DurationDiscriminator(nn.Module): #vits2 - def __init__(self, in_channels, filter_channels, kernel_size, p_dropout, gin_channels=0): - super().__init__() - - self.in_channels = in_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.gin_channels = gin_channels - - self.drop = nn.Dropout(p_dropout) - self.conv_1 = nn.Conv1d(in_channels, filter_channels, kernel_size, padding=kernel_size//2) - self.norm_1 = modules.LayerNorm(filter_channels) - self.conv_2 = nn.Conv1d(filter_channels, filter_channels, kernel_size, padding=kernel_size//2) - self.norm_2 = modules.LayerNorm(filter_channels) - self.dur_proj = nn.Conv1d(1, filter_channels, 1) - - self.pre_out_conv_1 = nn.Conv1d(2*filter_channels, filter_channels, kernel_size, padding=kernel_size//2) - self.pre_out_norm_1 = modules.LayerNorm(filter_channels) - self.pre_out_conv_2 = nn.Conv1d(filter_channels, filter_channels, kernel_size, padding=kernel_size//2) - self.pre_out_norm_2 = modules.LayerNorm(filter_channels) - - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, in_channels, 1) - - self.output_layer = nn.Sequential( - nn.Linear(filter_channels, 1), - nn.Sigmoid() - ) - - def forward_probability(self, x, x_mask, dur, g=None): - dur = self.dur_proj(dur) - x = torch.cat([x, dur], dim=1) - x = self.pre_out_conv_1(x * x_mask) - x = torch.relu(x) - x = self.pre_out_norm_1(x) - x = self.drop(x) - x = self.pre_out_conv_2(x * x_mask) - x = torch.relu(x) - x = self.pre_out_norm_2(x) - x = self.drop(x) - x = x * x_mask - x = x.transpose(1, 2) - output_prob = self.output_layer(x) - return output_prob - - def forward(self, x, x_mask, dur_r, dur_hat, g=None): - x = torch.detach(x) - if g is not None: - g = torch.detach(g) - x = x + self.cond(g) - x = self.conv_1(x * x_mask) - x = torch.relu(x) - x = self.norm_1(x) - x = self.drop(x) - x = self.conv_2(x * x_mask) - x = torch.relu(x) - x = self.norm_2(x) - x = self.drop(x) - - output_probs = [] - for dur in [dur_r, dur_hat]: - output_prob = self.forward_probability(x, x_mask, dur, g) - output_probs.append(output_prob) - - return output_probs - -class TransformerCouplingBlock(nn.Module): - def __init__(self, - channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - n_flows=4, - gin_channels=0, - share_parameter=False - ): - - super().__init__() - self.channels = channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.n_flows = n_flows - self.gin_channels = gin_channels - - self.flows = nn.ModuleList() - - self.wn = attentions.FFT(hidden_channels, filter_channels, n_heads, n_layers, kernel_size, p_dropout, isflow = True, gin_channels = self.gin_channels) if share_parameter 
else None - - for i in range(n_flows): - self.flows.append( - modules.TransformerCouplingLayer(channels, hidden_channels, kernel_size, n_layers, n_heads, p_dropout, filter_channels, mean_only=True, wn_sharing_parameter=self.wn, gin_channels = self.gin_channels)) - self.flows.append(modules.Flip()) - - def forward(self, x, x_mask, g=None, reverse=False): - if not reverse: - for flow in self.flows: - x, _ = flow(x, x_mask, g=g, reverse=reverse) - else: - for flow in reversed(self.flows): - x = flow(x, x_mask, g=g, reverse=reverse) - return x - -class StochasticDurationPredictor(nn.Module): - def __init__(self, in_channels, filter_channels, kernel_size, p_dropout, n_flows=4, gin_channels=0): - super().__init__() - filter_channels = in_channels # it needs to be removed from future version. - self.in_channels = in_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.n_flows = n_flows - self.gin_channels = gin_channels - - self.log_flow = modules.Log() - self.flows = nn.ModuleList() - self.flows.append(modules.ElementwiseAffine(2)) - for i in range(n_flows): - self.flows.append(modules.ConvFlow(2, filter_channels, kernel_size, n_layers=3)) - self.flows.append(modules.Flip()) - - self.post_pre = nn.Conv1d(1, filter_channels, 1) - self.post_proj = nn.Conv1d(filter_channels, filter_channels, 1) - self.post_convs = modules.DDSConv(filter_channels, kernel_size, n_layers=3, p_dropout=p_dropout) - self.post_flows = nn.ModuleList() - self.post_flows.append(modules.ElementwiseAffine(2)) - for i in range(4): - self.post_flows.append(modules.ConvFlow(2, filter_channels, kernel_size, n_layers=3)) - self.post_flows.append(modules.Flip()) - - self.pre = nn.Conv1d(in_channels, filter_channels, 1) - self.proj = nn.Conv1d(filter_channels, filter_channels, 1) - self.convs = modules.DDSConv(filter_channels, kernel_size, n_layers=3, p_dropout=p_dropout) - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, filter_channels, 1) - - def forward(self, x, x_mask, w=None, g=None, reverse=False, noise_scale=1.0): - x = torch.detach(x) - x = self.pre(x) - if g is not None: - g = torch.detach(g) - x = x + self.cond(g) - x = self.convs(x, x_mask) - x = self.proj(x) * x_mask - - if not reverse: - flows = self.flows - assert w is not None - - logdet_tot_q = 0 - h_w = self.post_pre(w) - h_w = self.post_convs(h_w, x_mask) - h_w = self.post_proj(h_w) * x_mask - e_q = torch.randn(w.size(0), 2, w.size(2)).to(device=x.device, dtype=x.dtype) * x_mask - z_q = e_q - for flow in self.post_flows: - z_q, logdet_q = flow(z_q, x_mask, g=(x + h_w)) - logdet_tot_q += logdet_q - z_u, z1 = torch.split(z_q, [1, 1], 1) - u = torch.sigmoid(z_u) * x_mask - z0 = (w - u) * x_mask - logdet_tot_q += torch.sum((F.logsigmoid(z_u) + F.logsigmoid(-z_u)) * x_mask, [1, 2]) - logq = torch.sum(-0.5 * (math.log(2 * math.pi) + (e_q ** 2)) * x_mask, [1, 2]) - logdet_tot_q - - logdet_tot = 0 - z0, logdet = self.log_flow(z0, x_mask) - logdet_tot += logdet - z = torch.cat([z0, z1], 1) - for flow in flows: - z, logdet = flow(z, x_mask, g=x, reverse=reverse) - logdet_tot = logdet_tot + logdet - nll = torch.sum(0.5 * (math.log(2 * math.pi) + (z ** 2)) * x_mask, [1, 2]) - logdet_tot - return nll + logq # [b] - else: - flows = list(reversed(self.flows)) - flows = flows[:-2] + [flows[-1]] # remove a useless vflow - z = torch.randn(x.size(0), 2, x.size(2)).to(device=x.device, dtype=x.dtype) * noise_scale - for flow in flows: - z = flow(z, x_mask, g=x, reverse=reverse) - z0, z1 = torch.split(z, [1, 
1], 1) - logw = z0 - return logw - - -class DurationPredictor(nn.Module): - def __init__(self, in_channels, filter_channels, kernel_size, p_dropout, gin_channels=0): - super().__init__() - - self.in_channels = in_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.gin_channels = gin_channels - - self.drop = nn.Dropout(p_dropout) - self.conv_1 = nn.Conv1d(in_channels, filter_channels, kernel_size, padding=kernel_size // 2) - self.norm_1 = modules.LayerNorm(filter_channels) - self.conv_2 = nn.Conv1d(filter_channels, filter_channels, kernel_size, padding=kernel_size // 2) - self.norm_2 = modules.LayerNorm(filter_channels) - self.proj = nn.Conv1d(filter_channels, 1, 1) - - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, in_channels, 1) - - def forward(self, x, x_mask, g=None): - x = torch.detach(x) - if g is not None: - g = torch.detach(g) - x = x + self.cond(g) - x = self.conv_1(x * x_mask) - x = torch.relu(x) - x = self.norm_1(x) - x = self.drop(x) - x = self.conv_2(x * x_mask) - x = torch.relu(x) - x = self.norm_2(x) - x = self.drop(x) - x = self.proj(x * x_mask) - return x * x_mask - - -class TextEncoder(nn.Module): - def __init__(self, - n_vocab, - out_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - gin_channels=0): - super().__init__() - self.n_vocab = n_vocab - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.gin_channels = gin_channels - self.emb = nn.Embedding(len(symbols), hidden_channels) - nn.init.normal_(self.emb.weight, 0.0, hidden_channels ** -0.5) - self.tone_emb = nn.Embedding(num_tones, hidden_channels) - nn.init.normal_(self.tone_emb.weight, 0.0, hidden_channels ** -0.5) - self.language_emb = nn.Embedding(num_languages, hidden_channels) - nn.init.normal_(self.language_emb.weight, 0.0, hidden_channels ** -0.5) - self.bert_proj = nn.Conv1d(1024, hidden_channels, 1) - - self.encoder = attentions.Encoder( - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - gin_channels=self.gin_channels) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, x, x_lengths, tone, language, bert, g=None): - x = (self.emb(x)+ self.tone_emb(tone)+ self.language_emb(language)+self.bert_proj(bert).transpose(1,2)) * math.sqrt(self.hidden_channels) # [b, t, h] - x = torch.transpose(x, 1, -1) # [b, h, t] - x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to(x.dtype) - - x = self.encoder(x * x_mask, x_mask, g=g) - stats = self.proj(x) * x_mask - - m, logs = torch.split(stats, self.out_channels, dim=1) - return x, m, logs, x_mask - - -class ResidualCouplingBlock(nn.Module): - def __init__(self, - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - n_flows=4, - gin_channels=0): - super().__init__() - self.channels = channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.n_flows = n_flows - self.gin_channels = gin_channels - - self.flows = nn.ModuleList() - for i in range(n_flows): - self.flows.append( - modules.ResidualCouplingLayer(channels, hidden_channels, kernel_size, dilation_rate, n_layers, - gin_channels=gin_channels, mean_only=True)) - self.flows.append(modules.Flip()) - - def 
forward(self, x, x_mask, g=None, reverse=False): - if not reverse: - for flow in self.flows: - x, _ = flow(x, x_mask, g=g, reverse=reverse) - else: - for flow in reversed(self.flows): - x = flow(x, x_mask, g=g, reverse=reverse) - return x - - -class PosteriorEncoder(nn.Module): - def __init__(self, - in_channels, - out_channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=0): - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.gin_channels = gin_channels - - self.pre = nn.Conv1d(in_channels, hidden_channels, 1) - self.enc = modules.WN(hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=gin_channels) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, x, x_lengths, g=None): - x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to(x.dtype) - x = self.pre(x) * x_mask - x = self.enc(x, x_mask, g=g) - stats = self.proj(x) * x_mask - m, logs = torch.split(stats, self.out_channels, dim=1) - z = (m + torch.randn_like(m) * torch.exp(logs)) * x_mask - return z, m, logs, x_mask - - -class Generator(torch.nn.Module): - def __init__(self, initial_channel, resblock, resblock_kernel_sizes, resblock_dilation_sizes, upsample_rates, - upsample_initial_channel, upsample_kernel_sizes, gin_channels=0): - super(Generator, self).__init__() - self.num_kernels = len(resblock_kernel_sizes) - self.num_upsamples = len(upsample_rates) - self.conv_pre = Conv1d(initial_channel, upsample_initial_channel, 7, 1, padding=3) - resblock = modules.ResBlock1 if resblock == '1' else modules.ResBlock2 - - self.ups = nn.ModuleList() - for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)): - self.ups.append(weight_norm( - ConvTranspose1d(upsample_initial_channel // (2 ** i), upsample_initial_channel // (2 ** (i + 1)), - k, u, padding=(k - u) // 2))) - - self.resblocks = nn.ModuleList() - for i in range(len(self.ups)): - ch = upsample_initial_channel // (2 ** (i + 1)) - for j, (k, d) in enumerate(zip(resblock_kernel_sizes, resblock_dilation_sizes)): - self.resblocks.append(resblock(ch, k, d)) - - self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False) - self.ups.apply(init_weights) - - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1) - - def forward(self, x, g=None): - x = self.conv_pre(x) - if g is not None: - x = x + self.cond(g) - - for i in range(self.num_upsamples): - x = F.leaky_relu(x, modules.LRELU_SLOPE) - x = self.ups[i](x) - xs = None - for j in range(self.num_kernels): - if xs is None: - xs = self.resblocks[i * self.num_kernels + j](x) - else: - xs += self.resblocks[i * self.num_kernels + j](x) - x = xs / self.num_kernels - x = F.leaky_relu(x) - x = self.conv_post(x) - x = torch.tanh(x) - - return x - - def remove_weight_norm(self): - print('Removing weight norm...') - for l in self.ups: - remove_weight_norm(l) - for l in self.resblocks: - l.remove_weight_norm() - - -class DiscriminatorP(torch.nn.Module): - def __init__(self, period, kernel_size=5, stride=3, use_spectral_norm=False): - super(DiscriminatorP, self).__init__() - self.period = period - self.use_spectral_norm = use_spectral_norm - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList([ - norm_f(Conv2d(1, 32, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 
0))), - norm_f(Conv2d(32, 128, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(128, 512, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(512, 1024, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(1024, 1024, (kernel_size, 1), 1, padding=(get_padding(kernel_size, 1), 0))), - ]) - self.conv_post = norm_f(Conv2d(1024, 1, (3, 1), 1, padding=(1, 0))) - - def forward(self, x): - fmap = [] - - # 1d to 2d - b, c, t = x.shape - if t % self.period != 0: # pad first - n_pad = self.period - (t % self.period) - x = F.pad(x, (0, n_pad), "reflect") - t = t + n_pad - x = x.view(b, c, t // self.period, self.period) - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap - - -class DiscriminatorS(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(DiscriminatorS, self).__init__() - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList([ - norm_f(Conv1d(1, 16, 15, 1, padding=7)), - norm_f(Conv1d(16, 64, 41, 4, groups=4, padding=20)), - norm_f(Conv1d(64, 256, 41, 4, groups=16, padding=20)), - norm_f(Conv1d(256, 1024, 41, 4, groups=64, padding=20)), - norm_f(Conv1d(1024, 1024, 41, 4, groups=256, padding=20)), - norm_f(Conv1d(1024, 1024, 5, 1, padding=2)), - ]) - self.conv_post = norm_f(Conv1d(1024, 1, 3, 1, padding=1)) - - def forward(self, x): - fmap = [] - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap - - -class MultiPeriodDiscriminator(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(MultiPeriodDiscriminator, self).__init__() - periods = [2, 3, 5, 7, 11] - - discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)] - discs = discs + [DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods] - self.discriminators = nn.ModuleList(discs) - - def forward(self, y, y_hat): - y_d_rs = [] - y_d_gs = [] - fmap_rs = [] - fmap_gs = [] - for i, d in enumerate(self.discriminators): - y_d_r, fmap_r = d(y) - y_d_g, fmap_g = d(y_hat) - y_d_rs.append(y_d_r) - y_d_gs.append(y_d_g) - fmap_rs.append(fmap_r) - fmap_gs.append(fmap_g) - - return y_d_rs, y_d_gs, fmap_rs, fmap_gs - -class ReferenceEncoder(nn.Module): - ''' - inputs --- [N, Ty/r, n_mels*r] mels - outputs --- [N, ref_enc_gru_size] - ''' - - def __init__(self, spec_channels, gin_channels=0): - - super().__init__() - self.spec_channels = spec_channels - ref_enc_filters = [32, 32, 64, 64, 128, 128] - K = len(ref_enc_filters) - filters = [1] + ref_enc_filters - convs = [weight_norm(nn.Conv2d(in_channels=filters[i], - out_channels=filters[i + 1], - kernel_size=(3, 3), - stride=(2, 2), - padding=(1, 1))) for i in range(K)] - self.convs = nn.ModuleList(convs) - # self.wns = nn.ModuleList([weight_norm(num_features=ref_enc_filters[i]) for i in range(K)]) - - out_channels = self.calculate_channels(spec_channels, 3, 2, 1, K) - self.gru = nn.GRU(input_size=ref_enc_filters[-1] * out_channels, - hidden_size=256 // 2, - batch_first=True) - self.proj = nn.Linear(128, gin_channels) - - def forward(self, inputs, mask=None): - N = inputs.size(0) - out = inputs.view(N, 1, -1, self.spec_channels) # [N, 1, Ty, n_freqs] - for conv in self.convs: - out = conv(out) - # out = wn(out) - out = 
F.relu(out) # [N, 128, Ty//2^K, n_mels//2^K] - - out = out.transpose(1, 2) # [N, Ty//2^K, 128, n_mels//2^K] - T = out.size(1) - N = out.size(0) - out = out.contiguous().view(N, T, -1) # [N, Ty//2^K, 128*n_mels//2^K] - - self.gru.flatten_parameters() - memory, out = self.gru(out) # out --- [1, N, 128] - - return self.proj(out.squeeze(0)) - - def calculate_channels(self, L, kernel_size, stride, pad, n_convs): - for i in range(n_convs): - L = (L - kernel_size + 2 * pad) // stride + 1 - return L - - -class SynthesizerTrn(nn.Module): - """ - Synthesizer for Training - """ - - def __init__(self, - n_vocab, - spec_channels, - segment_size, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - n_speakers=256, - gin_channels=256, - use_sdp=True, - n_flow_layer = 4, - n_layers_trans_flow = 3, - flow_share_parameter = False, - use_transformer_flow = True, - **kwargs): - - super().__init__() - self.n_vocab = n_vocab - self.spec_channels = spec_channels - self.inter_channels = inter_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.resblock = resblock - self.resblock_kernel_sizes = resblock_kernel_sizes - self.resblock_dilation_sizes = resblock_dilation_sizes - self.upsample_rates = upsample_rates - self.upsample_initial_channel = upsample_initial_channel - self.upsample_kernel_sizes = upsample_kernel_sizes - self.segment_size = segment_size - self.n_speakers = n_speakers - self.gin_channels = gin_channels - self.n_layers_trans_flow = n_layers_trans_flow - self.use_spk_conditioned_encoder = kwargs.get("use_spk_conditioned_encoder", True) - self.use_sdp = use_sdp - self.use_noise_scaled_mas = kwargs.get("use_noise_scaled_mas", False) - self.mas_noise_scale_initial = kwargs.get("mas_noise_scale_initial", 0.01) - self.noise_scale_delta = kwargs.get("noise_scale_delta", 2e-6) - self.current_mas_noise_scale = self.mas_noise_scale_initial - if self.use_spk_conditioned_encoder and gin_channels > 0: - self.enc_gin_channels = gin_channels - self.enc_p = TextEncoder(n_vocab, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - gin_channels=self.enc_gin_channels) - self.dec = Generator(inter_channels, resblock, resblock_kernel_sizes, resblock_dilation_sizes, upsample_rates, - upsample_initial_channel, upsample_kernel_sizes, gin_channels=gin_channels) - self.enc_q = PosteriorEncoder(spec_channels, inter_channels, hidden_channels, 5, 1, 16, - gin_channels=gin_channels) - if use_transformer_flow: - self.flow = TransformerCouplingBlock(inter_channels, hidden_channels, filter_channels, n_heads, n_layers_trans_flow, 5, p_dropout, n_flow_layer, gin_channels=gin_channels,share_parameter= flow_share_parameter) - else: - self.flow = ResidualCouplingBlock(inter_channels, hidden_channels, 5, 1, n_flow_layer, gin_channels=gin_channels) - self.sdp = StochasticDurationPredictor(hidden_channels, 192, 3, 0.5, 4, gin_channels=gin_channels) - self.dp = DurationPredictor(hidden_channels, 256, 3, 0.5, gin_channels=gin_channels) - - if n_speakers >= 1: - self.emb_g = nn.Embedding(n_speakers, gin_channels) - else: - self.ref_enc = ReferenceEncoder(spec_channels, gin_channels) - - def forward(self, x, x_lengths, y, y_lengths, sid, tone, 
language, bert): - if self.n_speakers > 0: - g = self.emb_g(sid).unsqueeze(-1) # [b, h, 1] - else: - g = self.ref_enc(y.transpose(1,2)).unsqueeze(-1) - x, m_p, logs_p, x_mask = self.enc_p(x, x_lengths, tone, language, bert,g=g) - z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g) - z_p = self.flow(z, y_mask, g=g) - - with torch.no_grad(): - # negative cross-entropy - s_p_sq_r = torch.exp(-2 * logs_p) # [b, d, t] - neg_cent1 = torch.sum(-0.5 * math.log(2 * math.pi) - logs_p, [1], keepdim=True) # [b, 1, t_s] - neg_cent2 = torch.matmul(-0.5 * (z_p ** 2).transpose(1, 2), - s_p_sq_r) # [b, t_t, d] x [b, d, t_s] = [b, t_t, t_s] - neg_cent3 = torch.matmul(z_p.transpose(1, 2), (m_p * s_p_sq_r)) # [b, t_t, d] x [b, d, t_s] = [b, t_t, t_s] - neg_cent4 = torch.sum(-0.5 * (m_p ** 2) * s_p_sq_r, [1], keepdim=True) # [b, 1, t_s] - neg_cent = neg_cent1 + neg_cent2 + neg_cent3 + neg_cent4 - if self.use_noise_scaled_mas: - epsilon = torch.std(neg_cent) * torch.randn_like(neg_cent) * self.current_mas_noise_scale - neg_cent = neg_cent + epsilon - - attn_mask = torch.unsqueeze(x_mask, 2) * torch.unsqueeze(y_mask, -1) - attn = monotonic_align.maximum_path(neg_cent, attn_mask.squeeze(1)).unsqueeze(1).detach() - - w = attn.sum(2) - - l_length_sdp = self.sdp(x, x_mask, w, g=g) - l_length_sdp = l_length_sdp / torch.sum(x_mask) - - logw_ = torch.log(w + 1e-6) * x_mask - logw = self.dp(x, x_mask, g=g) - l_length_dp = torch.sum((logw - logw_) ** 2, [1, 2]) / torch.sum(x_mask) # for averaging - - l_length = l_length_dp + l_length_sdp - - # expand prior - m_p = torch.matmul(attn.squeeze(1), m_p.transpose(1, 2)).transpose(1, 2) - logs_p = torch.matmul(attn.squeeze(1), logs_p.transpose(1, 2)).transpose(1, 2) - - z_slice, ids_slice = commons.rand_slice_segments(z, y_lengths, self.segment_size) - o = self.dec(z_slice, g=g) - return o, l_length, attn, ids_slice, x_mask, y_mask, (z, z_p, m_p, logs_p, m_q, logs_q), (x, logw, logw_) - - def infer(self, x, x_lengths, sid, tone, language, bert, noise_scale=.667, length_scale=1, noise_scale_w=0.8, max_len=None, sdp_ratio=0,y=None): - #x, m_p, logs_p, x_mask = self.enc_p(x, x_lengths, tone, language, bert) - # g = self.gst(y) - if self.n_speakers > 0: - g = self.emb_g(sid).unsqueeze(-1) # [b, h, 1] - else: - g = self.ref_enc(y.transpose(1,2)).unsqueeze(-1) - x, m_p, logs_p, x_mask = self.enc_p(x, x_lengths, tone, language, bert,g=g) - logw = self.sdp(x, x_mask, g=g, reverse=True, noise_scale=noise_scale_w) * (sdp_ratio) + self.dp(x, x_mask, g=g) * (1 - sdp_ratio) - w = torch.exp(logw) * x_mask * length_scale - w_ceil = torch.ceil(w) - y_lengths = torch.clamp_min(torch.sum(w_ceil, [1, 2]), 1).long() - y_mask = torch.unsqueeze(commons.sequence_mask(y_lengths, None), 1).to(x_mask.dtype) - attn_mask = torch.unsqueeze(x_mask, 2) * torch.unsqueeze(y_mask, -1) - attn = commons.generate_path(w_ceil, attn_mask) - - m_p = torch.matmul(attn.squeeze(1), m_p.transpose(1, 2)).transpose(1, 2) # [b, t', t], [b, t, d] -> [b, d, t'] - logs_p = torch.matmul(attn.squeeze(1), logs_p.transpose(1, 2)).transpose(1, - 2) # [b, t', t], [b, t, d] -> [b, d, t'] - - z_p = m_p + torch.randn_like(m_p) * torch.exp(logs_p) * noise_scale - z = self.flow(z_p, y_mask, g=g, reverse=True) - o = self.dec((z * y_mask)[:, :, :max_len], g=g) - return o, attn, y_mask, (z, z_p, m_p, logs_p) diff --git a/spaces/XzJosh/Nana7mi-Bert-VITS2/utils.py b/spaces/XzJosh/Nana7mi-Bert-VITS2/utils.py deleted file mode 100644 index c6aa6cfc64c33e2eed33e9845239e831fc1c4a1a..0000000000000000000000000000000000000000 --- 
a/spaces/XzJosh/Nana7mi-Bert-VITS2/utils.py +++ /dev/null @@ -1,293 +0,0 @@ -import os -import glob -import sys -import argparse -import logging -import json -import subprocess -import numpy as np -from scipy.io.wavfile import read -import torch - -MATPLOTLIB_FLAG = False - -logging.basicConfig(stream=sys.stdout, level=logging.DEBUG) -logger = logging - - -def load_checkpoint(checkpoint_path, model, optimizer=None, skip_optimizer=False): - assert os.path.isfile(checkpoint_path) - checkpoint_dict = torch.load(checkpoint_path, map_location='cpu') - iteration = checkpoint_dict['iteration'] - learning_rate = checkpoint_dict['learning_rate'] - if optimizer is not None and not skip_optimizer and checkpoint_dict['optimizer'] is not None: - optimizer.load_state_dict(checkpoint_dict['optimizer']) - elif optimizer is None and not skip_optimizer: - #else: #Disable this line if Infer ,and enable the line upper - new_opt_dict = optimizer.state_dict() - new_opt_dict_params = new_opt_dict['param_groups'][0]['params'] - new_opt_dict['param_groups'] = checkpoint_dict['optimizer']['param_groups'] - new_opt_dict['param_groups'][0]['params'] = new_opt_dict_params - optimizer.load_state_dict(new_opt_dict) - saved_state_dict = checkpoint_dict['model'] - if hasattr(model, 'module'): - state_dict = model.module.state_dict() - else: - state_dict = model.state_dict() - new_state_dict = {} - for k, v in state_dict.items(): - try: - #assert "emb_g" not in k - # print("load", k) - new_state_dict[k] = saved_state_dict[k] - assert saved_state_dict[k].shape == v.shape, (saved_state_dict[k].shape, v.shape) - except: - print("error, %s is not in the checkpoint" % k) - new_state_dict[k] = v - if hasattr(model, 'module'): - model.module.load_state_dict(new_state_dict, strict=False) - else: - model.load_state_dict(new_state_dict, strict=False) - print("load ") - logger.info("Loaded checkpoint '{}' (iteration {})".format( - checkpoint_path, iteration)) - return model, optimizer, learning_rate, iteration - - -def save_checkpoint(model, optimizer, learning_rate, iteration, checkpoint_path): - logger.info("Saving model and optimizer state at iteration {} to {}".format( - iteration, checkpoint_path)) - if hasattr(model, 'module'): - state_dict = model.module.state_dict() - else: - state_dict = model.state_dict() - torch.save({'model': state_dict, - 'iteration': iteration, - 'optimizer': optimizer.state_dict(), - 'learning_rate': learning_rate}, checkpoint_path) - - -def summarize(writer, global_step, scalars={}, histograms={}, images={}, audios={}, audio_sampling_rate=22050): - for k, v in scalars.items(): - writer.add_scalar(k, v, global_step) - for k, v in histograms.items(): - writer.add_histogram(k, v, global_step) - for k, v in images.items(): - writer.add_image(k, v, global_step, dataformats='HWC') - for k, v in audios.items(): - writer.add_audio(k, v, global_step, audio_sampling_rate) - - -def latest_checkpoint_path(dir_path, regex="G_*.pth"): - f_list = glob.glob(os.path.join(dir_path, regex)) - f_list.sort(key=lambda f: int("".join(filter(str.isdigit, f)))) - x = f_list[-1] - print(x) - return x - - -def plot_spectrogram_to_numpy(spectrogram): - global MATPLOTLIB_FLAG - if not MATPLOTLIB_FLAG: - import matplotlib - matplotlib.use("Agg") - MATPLOTLIB_FLAG = True - mpl_logger = logging.getLogger('matplotlib') - mpl_logger.setLevel(logging.WARNING) - import matplotlib.pylab as plt - import numpy as np - - fig, ax = plt.subplots(figsize=(10, 2)) - im = ax.imshow(spectrogram, aspect="auto", origin="lower", - 
interpolation='none') - plt.colorbar(im, ax=ax) - plt.xlabel("Frames") - plt.ylabel("Channels") - plt.tight_layout() - - fig.canvas.draw() - data = np.fromstring(fig.canvas.tostring_rgb(), dtype=np.uint8, sep='') - data = data.reshape(fig.canvas.get_width_height()[::-1] + (3,)) - plt.close() - return data - - -def plot_alignment_to_numpy(alignment, info=None): - global MATPLOTLIB_FLAG - if not MATPLOTLIB_FLAG: - import matplotlib - matplotlib.use("Agg") - MATPLOTLIB_FLAG = True - mpl_logger = logging.getLogger('matplotlib') - mpl_logger.setLevel(logging.WARNING) - import matplotlib.pylab as plt - import numpy as np - - fig, ax = plt.subplots(figsize=(6, 4)) - im = ax.imshow(alignment.transpose(), aspect='auto', origin='lower', - interpolation='none') - fig.colorbar(im, ax=ax) - xlabel = 'Decoder timestep' - if info is not None: - xlabel += '\n\n' + info - plt.xlabel(xlabel) - plt.ylabel('Encoder timestep') - plt.tight_layout() - - fig.canvas.draw() - data = np.fromstring(fig.canvas.tostring_rgb(), dtype=np.uint8, sep='') - data = data.reshape(fig.canvas.get_width_height()[::-1] + (3,)) - plt.close() - return data - - -def load_wav_to_torch(full_path): - sampling_rate, data = read(full_path) - return torch.FloatTensor(data.astype(np.float32)), sampling_rate - - -def load_filepaths_and_text(filename, split="|"): - with open(filename, encoding='utf-8') as f: - filepaths_and_text = [line.strip().split(split) for line in f] - return filepaths_and_text - - -def get_hparams(init=True): - parser = argparse.ArgumentParser() - parser.add_argument('-c', '--config', type=str, default="./configs/base.json", - help='JSON file for configuration') - parser.add_argument('-m', '--model', type=str, default="./OUTPUT_MODEL", - help='Model name') - parser.add_argument('--cont', dest='cont', action="store_true", default=False, help="whether to continue training on the latest checkpoint") - - args = parser.parse_args() - model_dir = os.path.join("./logs", args.model) - - if not os.path.exists(model_dir): - os.makedirs(model_dir) - - config_path = args.config - config_save_path = os.path.join(model_dir, "config.json") - if init: - with open(config_path, "r") as f: - data = f.read() - with open(config_save_path, "w") as f: - f.write(data) - else: - with open(config_save_path, "r") as f: - data = f.read() - config = json.loads(data) - - hparams = HParams(**config) - hparams.model_dir = model_dir - hparams.cont = args.cont - return hparams - - -def clean_checkpoints(path_to_models='logs/44k/', n_ckpts_to_keep=2, sort_by_time=True): - """Freeing up space by deleting saved ckpts - - Arguments: - path_to_models -- Path to the model directory - n_ckpts_to_keep -- Number of ckpts to keep, excluding G_0.pth and D_0.pth - sort_by_time -- True -> chronologically delete ckpts - False -> lexicographically delete ckpts - """ - import re - ckpts_files = [f for f in os.listdir(path_to_models) if os.path.isfile(os.path.join(path_to_models, f))] - name_key = (lambda _f: int(re.compile('._(\d+)\.pth').match(_f).group(1))) - time_key = (lambda _f: os.path.getmtime(os.path.join(path_to_models, _f))) - sort_key = time_key if sort_by_time else name_key - x_sorted = lambda _x: sorted([f for f in ckpts_files if f.startswith(_x) and not f.endswith('_0.pth')], - key=sort_key) - to_del = [os.path.join(path_to_models, fn) for fn in - (x_sorted('G')[:-n_ckpts_to_keep] + x_sorted('D')[:-n_ckpts_to_keep])] - del_info = lambda fn: logger.info(f".. 
Free up space by deleting ckpt {fn}") - del_routine = lambda x: [os.remove(x), del_info(x)] - rs = [del_routine(fn) for fn in to_del] - -def get_hparams_from_dir(model_dir): - config_save_path = os.path.join(model_dir, "config.json") - with open(config_save_path, "r") as f: - data = f.read() - config = json.loads(data) - - hparams = HParams(**config) - hparams.model_dir = model_dir - return hparams - - -def get_hparams_from_file(config_path): - with open(config_path, "r") as f: - data = f.read() - config = json.loads(data) - - hparams = HParams(**config) - return hparams - - -def check_git_hash(model_dir): - source_dir = os.path.dirname(os.path.realpath(__file__)) - if not os.path.exists(os.path.join(source_dir, ".git")): - logger.warn("{} is not a git repository, therefore hash value comparison will be ignored.".format( - source_dir - )) - return - - cur_hash = subprocess.getoutput("git rev-parse HEAD") - - path = os.path.join(model_dir, "githash") - if os.path.exists(path): - saved_hash = open(path).read() - if saved_hash != cur_hash: - logger.warn("git hash values are different. {}(saved) != {}(current)".format( - saved_hash[:8], cur_hash[:8])) - else: - open(path, "w").write(cur_hash) - - -def get_logger(model_dir, filename="train.log"): - global logger - logger = logging.getLogger(os.path.basename(model_dir)) - logger.setLevel(logging.DEBUG) - - formatter = logging.Formatter("%(asctime)s\t%(name)s\t%(levelname)s\t%(message)s") - if not os.path.exists(model_dir): - os.makedirs(model_dir) - h = logging.FileHandler(os.path.join(model_dir, filename)) - h.setLevel(logging.DEBUG) - h.setFormatter(formatter) - logger.addHandler(h) - return logger - - -class HParams(): - def __init__(self, **kwargs): - for k, v in kwargs.items(): - if type(v) == dict: - v = HParams(**v) - self[k] = v - - def keys(self): - return self.__dict__.keys() - - def items(self): - return self.__dict__.items() - - def values(self): - return self.__dict__.values() - - def __len__(self): - return len(self.__dict__) - - def __getitem__(self, key): - return getattr(self, key) - - def __setitem__(self, key, value): - return setattr(self, key, value) - - def __contains__(self, key): - return key in self.__dict__ - - def __repr__(self): - return self.__dict__.__repr__() diff --git a/spaces/Yabo/ControlVideo/models/controlnet_attention.py b/spaces/Yabo/ControlVideo/models/controlnet_attention.py deleted file mode 100644 index e45cde9a508b3b81c4359b3220aedf4d26edb3c5..0000000000000000000000000000000000000000 --- a/spaces/Yabo/ControlVideo/models/controlnet_attention.py +++ /dev/null @@ -1,483 +0,0 @@ -# Adapted from https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention.py - -from dataclasses import dataclass -from typing import Optional, Callable -import math -import torch -import torch.nn.functional as F -from torch import nn -from positional_encodings.torch_encodings import PositionalEncoding2D - -from diffusers.configuration_utils import ConfigMixin, register_to_config -from diffusers import ModelMixin -from diffusers.utils import BaseOutput -from diffusers.utils.import_utils import is_xformers_available -from diffusers.models.attention import CrossAttention, FeedForward, AdaLayerNorm -from einops import rearrange, repeat - - -@dataclass -class Transformer3DModelOutput(BaseOutput): - sample: torch.FloatTensor - - -if is_xformers_available(): - import xformers - import xformers.ops -else: - xformers = None - - -class Transformer3DModel(ModelMixin, ConfigMixin): - @register_to_config - def __init__( 
- self, - num_attention_heads: int = 16, - attention_head_dim: int = 88, - in_channels: Optional[int] = None, - num_layers: int = 1, - dropout: float = 0.0, - norm_num_groups: int = 32, - cross_attention_dim: Optional[int] = None, - attention_bias: bool = False, - activation_fn: str = "geglu", - num_embeds_ada_norm: Optional[int] = None, - use_linear_projection: bool = False, - only_cross_attention: bool = False, - upcast_attention: bool = False, - ): - super().__init__() - self.use_linear_projection = use_linear_projection - self.num_attention_heads = num_attention_heads - self.attention_head_dim = attention_head_dim - inner_dim = num_attention_heads * attention_head_dim - - # Define input layers - self.in_channels = in_channels - - self.norm = torch.nn.GroupNorm(num_groups=norm_num_groups, num_channels=in_channels, eps=1e-6, affine=True) - if use_linear_projection: - self.proj_in = nn.Linear(in_channels, inner_dim) - else: - self.proj_in = nn.Conv2d(in_channels, inner_dim, kernel_size=1, stride=1, padding=0) - - # Define transformers blocks - self.transformer_blocks = nn.ModuleList( - [ - BasicTransformerBlock( - inner_dim, - num_attention_heads, - attention_head_dim, - dropout=dropout, - cross_attention_dim=cross_attention_dim, - activation_fn=activation_fn, - num_embeds_ada_norm=num_embeds_ada_norm, - attention_bias=attention_bias, - only_cross_attention=only_cross_attention, - upcast_attention=upcast_attention, - ) - for d in range(num_layers) - ] - ) - - # 4. Define output layers - if use_linear_projection: - self.proj_out = nn.Linear(in_channels, inner_dim) - else: - self.proj_out = nn.Conv2d(inner_dim, in_channels, kernel_size=1, stride=1, padding=0) - - def forward(self, hidden_states, encoder_hidden_states=None, timestep=None, return_dict: bool = True): - # Input - assert hidden_states.dim() == 5, f"Expected hidden_states to have ndim=5, but got ndim={hidden_states.dim()}." 
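-        # The 5-D video tensor [b, c, f, h, w] is flattened along the frame axis below so that
-        # each frame is processed as an independent image by the 2-D transformer blocks; the
-        # text conditioning is repeated once per frame to keep the batch dimension consistent.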
- video_length = hidden_states.shape[2] - hidden_states = rearrange(hidden_states, "b c f h w -> (b f) c h w") - encoder_hidden_states = repeat(encoder_hidden_states, 'b n c -> (b f) n c', f=video_length) - - batch, channel, height, weight = hidden_states.shape - residual = hidden_states - - hidden_states = self.norm(hidden_states) - if not self.use_linear_projection: - hidden_states = self.proj_in(hidden_states) - inner_dim = hidden_states.shape[1] - hidden_states = hidden_states.permute(0, 2, 3, 1).reshape(batch, height * weight, inner_dim) - else: - inner_dim = hidden_states.shape[1] - hidden_states = hidden_states.permute(0, 2, 3, 1).reshape(batch, height * weight, inner_dim) - hidden_states = self.proj_in(hidden_states) - - # Blocks - for block in self.transformer_blocks: - hidden_states = block( - hidden_states, - encoder_hidden_states=encoder_hidden_states, - timestep=timestep, - video_length=video_length - ) - - # Output - if not self.use_linear_projection: - hidden_states = ( - hidden_states.reshape(batch, height, weight, inner_dim).permute(0, 3, 1, 2).contiguous() - ) - hidden_states = self.proj_out(hidden_states) - else: - hidden_states = self.proj_out(hidden_states) - hidden_states = ( - hidden_states.reshape(batch, height, weight, inner_dim).permute(0, 3, 1, 2).contiguous() - ) - - output = hidden_states + residual - - output = rearrange(output, "(b f) c h w -> b c f h w", f=video_length) - if not return_dict: - return (output,) - - return Transformer3DModelOutput(sample=output) - - -class BasicTransformerBlock(nn.Module): - def __init__( - self, - dim: int, - num_attention_heads: int, - attention_head_dim: int, - dropout=0.0, - cross_attention_dim: Optional[int] = None, - activation_fn: str = "geglu", - num_embeds_ada_norm: Optional[int] = None, - attention_bias: bool = False, - only_cross_attention: bool = False, - upcast_attention: bool = False, - ): - super().__init__() - self.only_cross_attention = only_cross_attention - self.use_ada_layer_norm = num_embeds_ada_norm is not None - - # Individual-Attn - self.attn1 = IndividualAttention( - query_dim=dim, - heads=num_attention_heads, - dim_head=attention_head_dim, - dropout=dropout, - bias=attention_bias, - cross_attention_dim=cross_attention_dim if only_cross_attention else None, - upcast_attention=upcast_attention, - ) - self.norm1 = AdaLayerNorm(dim, num_embeds_ada_norm) if self.use_ada_layer_norm else nn.LayerNorm(dim) - - # Cross-Attn - if cross_attention_dim is not None: - self.attn2 = CrossAttention( - query_dim=dim, - cross_attention_dim=cross_attention_dim, - heads=num_attention_heads, - dim_head=attention_head_dim, - dropout=dropout, - bias=attention_bias, - upcast_attention=upcast_attention, - ) - else: - self.attn2 = None - - if cross_attention_dim is not None: - self.norm2 = AdaLayerNorm(dim, num_embeds_ada_norm) if self.use_ada_layer_norm else nn.LayerNorm(dim) - else: - self.norm2 = None - - # Feed-forward - self.ff = FeedForward(dim, dropout=dropout, activation_fn=activation_fn) - self.norm3 = nn.LayerNorm(dim) - - self.norm_temp = AdaLayerNorm(dim, num_embeds_ada_norm) if self.use_ada_layer_norm else nn.LayerNorm(dim) - - def set_use_memory_efficient_attention_xformers(self, use_memory_efficient_attention_xformers: bool, attention_op: Optional[Callable] = None): - if not is_xformers_available(): - print("Here is how to install it") - raise ModuleNotFoundError( - "Refer to https://github.com/facebookresearch/xformers for more information on how to install" - " xformers", - name="xformers", - ) - elif not 
torch.cuda.is_available(): - raise ValueError( - "torch.cuda.is_available() should be True but is False. xformers' memory efficient attention is only" - " available for GPU " - ) - else: - try: - # Make sure we can run the memory efficient attention - _ = xformers.ops.memory_efficient_attention( - torch.randn((1, 2, 40), device="cuda"), - torch.randn((1, 2, 40), device="cuda"), - torch.randn((1, 2, 40), device="cuda"), - ) - except Exception as e: - raise e - self.attn1._use_memory_efficient_attention_xformers = use_memory_efficient_attention_xformers - if self.attn2 is not None: - self.attn2._use_memory_efficient_attention_xformers = use_memory_efficient_attention_xformers - # self.attn_temp._use_memory_efficient_attention_xformers = use_memory_efficient_attention_xformers - - def forward(self, hidden_states, encoder_hidden_states=None, timestep=None, attention_mask=None, video_length=None): - # Individual-Attention - norm_hidden_states = ( - self.norm1(hidden_states, timestep) if self.use_ada_layer_norm else self.norm1(hidden_states) - ) - - if self.only_cross_attention: - hidden_states = ( - self.attn1(norm_hidden_states, encoder_hidden_states, attention_mask=attention_mask) + hidden_states - ) - else: - hidden_states = self.attn1(norm_hidden_states, attention_mask=attention_mask, video_length=video_length) + hidden_states - - if self.attn2 is not None: - # Cross-Attention - norm_hidden_states = ( - self.norm2(hidden_states, timestep) if self.use_ada_layer_norm else self.norm2(hidden_states) - ) - hidden_states = ( - self.attn2( - norm_hidden_states, encoder_hidden_states=encoder_hidden_states, attention_mask=attention_mask - ) - + hidden_states - ) - - # Feed-forward - hidden_states = self.ff(self.norm3(hidden_states)) + hidden_states - - # # Temporal-Attention - # d = hidden_states.shape[1] - # hidden_states = rearrange(hidden_states, "(b f) d c -> (b d) f c", f=video_length) - # norm_hidden_states = ( - # self.norm_temp(hidden_states, timestep) if self.use_ada_layer_norm else self.norm_temp(hidden_states) - # ) - # hidden_states = self.attn_temp(norm_hidden_states) + hidden_states - # hidden_states = rearrange(hidden_states, "(b d) f c -> (b f) d c", d=d) - - return hidden_states - -class IndividualAttention(nn.Module): - r""" - A cross attention layer. - - Parameters: - query_dim (`int`): The number of channels in the query. - cross_attention_dim (`int`, *optional*): - The number of channels in the encoder_hidden_states. If not given, defaults to `query_dim`. - heads (`int`, *optional*, defaults to 8): The number of heads to use for multi-head attention. - dim_head (`int`, *optional*, defaults to 64): The number of channels in each head. - dropout (`float`, *optional*, defaults to 0.0): The dropout probability to use. - bias (`bool`, *optional*, defaults to False): - Set to `True` for the query, key, and value linear layers to contain a bias parameter. 
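-
-    Example (an illustrative sketch only; the dimensions below are assumptions and are not
-    values used elsewhere in this module):
-
-        >>> attn = IndividualAttention(query_dim=320, heads=8, dim_head=40)
-        >>> x = torch.randn(2 * 16, 64, 320)   # (batch * frames, tokens, channels)
-        >>> out = attn(x, video_length=16)     # same shape as the input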
- """ - - def __init__( - self, - query_dim: int, - cross_attention_dim: Optional[int] = None, - heads: int = 8, - dim_head: int = 64, - dropout: float = 0.0, - bias=False, - upcast_attention: bool = False, - upcast_softmax: bool = False, - added_kv_proj_dim: Optional[int] = None, - norm_num_groups: Optional[int] = None, - ): - super().__init__() - inner_dim = dim_head * heads - cross_attention_dim = cross_attention_dim if cross_attention_dim is not None else query_dim - self.upcast_attention = upcast_attention - self.upcast_softmax = upcast_softmax - - self.scale = dim_head**-0.5 - - self.heads = heads - # for slice_size > 0 the attention score computation - # is split across the batch axis to save memory - # You can set slice_size with `set_attention_slice` - self.sliceable_head_dim = heads - self._slice_size = None - self._use_memory_efficient_attention_xformers = False - self.added_kv_proj_dim = added_kv_proj_dim - - if norm_num_groups is not None: - self.group_norm = nn.GroupNorm(num_channels=inner_dim, num_groups=norm_num_groups, eps=1e-5, affine=True) - else: - self.group_norm = None - - self.to_q = nn.Linear(query_dim, inner_dim, bias=bias) - self.to_k = nn.Linear(cross_attention_dim, inner_dim, bias=bias) - self.to_v = nn.Linear(cross_attention_dim, inner_dim, bias=bias) - - if self.added_kv_proj_dim is not None: - self.add_k_proj = nn.Linear(added_kv_proj_dim, cross_attention_dim) - self.add_v_proj = nn.Linear(added_kv_proj_dim, cross_attention_dim) - - self.to_out = nn.ModuleList([]) - self.to_out.append(nn.Linear(inner_dim, query_dim)) - self.to_out.append(nn.Dropout(dropout)) - - def reshape_heads_to_batch_dim(self, tensor): - batch_size, seq_len, dim = tensor.shape - head_size = self.heads - tensor = tensor.reshape(batch_size, seq_len, head_size, dim // head_size) - tensor = tensor.permute(0, 2, 1, 3).reshape(batch_size * head_size, seq_len, dim // head_size) - return tensor - - def reshape_batch_dim_to_heads(self, tensor): - batch_size, seq_len, dim = tensor.shape - head_size = self.heads - tensor = tensor.reshape(batch_size // head_size, head_size, seq_len, dim) - tensor = tensor.permute(0, 2, 1, 3).reshape(batch_size // head_size, seq_len, dim * head_size) - return tensor - - def set_attention_slice(self, slice_size): - if slice_size is not None and slice_size > self.sliceable_head_dim: - raise ValueError(f"slice_size {slice_size} has to be smaller or equal to {self.sliceable_head_dim}.") - - self._slice_size = slice_size - - def _attention(self, query, key, value, attention_mask=None): - if self.upcast_attention: - query = query.float() - key = key.float() - - attention_scores = torch.baddbmm( - torch.empty(query.shape[0], query.shape[1], key.shape[1], dtype=query.dtype, device=query.device), - query, - key.transpose(-1, -2), - beta=0, - alpha=self.scale, - ) - - if attention_mask is not None: - attention_scores = attention_scores + attention_mask - - if self.upcast_softmax: - attention_scores = attention_scores.float() - - attention_probs = attention_scores.softmax(dim=-1) - - # cast back to the original dtype - attention_probs = attention_probs.to(value.dtype) - - # compute attention output - hidden_states = torch.bmm(attention_probs, value) - - # reshape hidden_states - hidden_states = self.reshape_batch_dim_to_heads(hidden_states) - return hidden_states - - def _sliced_attention(self, query, key, value, sequence_length, dim, attention_mask): - batch_size_attention = query.shape[0] - hidden_states = torch.zeros( - (batch_size_attention, sequence_length, dim // 
self.heads), device=query.device, dtype=query.dtype - ) - slice_size = self._slice_size if self._slice_size is not None else hidden_states.shape[0] - for i in range(hidden_states.shape[0] // slice_size): - start_idx = i * slice_size - end_idx = (i + 1) * slice_size - - query_slice = query[start_idx:end_idx] - key_slice = key[start_idx:end_idx] - - if self.upcast_attention: - query_slice = query_slice.float() - key_slice = key_slice.float() - - attn_slice = torch.baddbmm( - torch.empty(slice_size, query.shape[1], key.shape[1], dtype=query_slice.dtype, device=query.device), - query_slice, - key_slice.transpose(-1, -2), - beta=0, - alpha=self.scale, - ) - - if attention_mask is not None: - attn_slice = attn_slice + attention_mask[start_idx:end_idx] - - if self.upcast_softmax: - attn_slice = attn_slice.float() - - attn_slice = attn_slice.softmax(dim=-1) - - # cast back to the original dtype - attn_slice = attn_slice.to(value.dtype) - attn_slice = torch.bmm(attn_slice, value[start_idx:end_idx]) - - hidden_states[start_idx:end_idx] = attn_slice - - # reshape hidden_states - hidden_states = self.reshape_batch_dim_to_heads(hidden_states) - return hidden_states - - def _memory_efficient_attention_xformers(self, query, key, value, attention_mask): - # TODO attention_mask - query = query.contiguous() - key = key.contiguous() - value = value.contiguous() - hidden_states = xformers.ops.memory_efficient_attention(query, key, value, attn_bias=attention_mask) - hidden_states = self.reshape_batch_dim_to_heads(hidden_states) - return hidden_states - - def forward(self, hidden_states, encoder_hidden_states=None, attention_mask=None, video_length=None): - batch_size, sequence_length, _ = hidden_states.shape - - encoder_hidden_states = encoder_hidden_states - - if self.group_norm is not None: - hidden_states = self.group_norm(hidden_states.transpose(1, 2)).transpose(1, 2) - - query = self.to_q(hidden_states) # (bf) x d(hw) x c - dim = query.shape[-1] - - query = self.reshape_heads_to_batch_dim(query) - - if self.added_kv_proj_dim is not None: - raise NotImplementedError - - encoder_hidden_states = encoder_hidden_states if encoder_hidden_states is not None else hidden_states - key = self.to_k(encoder_hidden_states) - value = self.to_v(encoder_hidden_states) - - curr_frame_index = torch.arange(video_length) - - key = rearrange(key, "(b f) d c -> b f d c", f=video_length) - - key = key[:, curr_frame_index] - key = rearrange(key, "b f d c -> (b f) d c") - - value = rearrange(value, "(b f) d c -> b f d c", f=video_length) - - value = value[:, curr_frame_index] - value = rearrange(value, "b f d c -> (b f) d c") - - key = self.reshape_heads_to_batch_dim(key) - value = self.reshape_heads_to_batch_dim(value) - - if attention_mask is not None: - if attention_mask.shape[-1] != query.shape[1]: - target_length = query.shape[1] - attention_mask = F.pad(attention_mask, (0, target_length), value=0.0) - attention_mask = attention_mask.repeat_interleave(self.heads, dim=0) - - # attention, what we cannot get enough of - if self._use_memory_efficient_attention_xformers: - hidden_states = self._memory_efficient_attention_xformers(query, key, value, attention_mask) - # Some versions of xformers return output in fp32, cast it back to the dtype of the input - hidden_states = hidden_states.to(query.dtype) - else: - if self._slice_size is None or query.shape[0] // self._slice_size == 1: - hidden_states = self._attention(query, key, value, attention_mask) - else: - hidden_states = self._sliced_attention(query, key, value, 
sequence_length, dim, attention_mask) - - # linear proj - hidden_states = self.to_out[0](hidden_states) - - # dropout - hidden_states = self.to_out[1](hidden_states) - return hidden_states diff --git a/spaces/YeYeYes/QQsign/bin/unidbg-fetch-qsign.bat b/spaces/YeYeYes/QQsign/bin/unidbg-fetch-qsign.bat deleted file mode 100644 index 8b291e7303b0c07d14b714e5795473891363c85b..0000000000000000000000000000000000000000 --- a/spaces/YeYeYes/QQsign/bin/unidbg-fetch-qsign.bat +++ /dev/null @@ -1,89 +0,0 @@ -@rem -@rem Copyright 2015 the original author or authors. -@rem -@rem Licensed under the Apache License, Version 2.0 (the "License"); -@rem you may not use this file except in compliance with the License. -@rem You may obtain a copy of the License at -@rem -@rem https://www.apache.org/licenses/LICENSE-2.0 -@rem -@rem Unless required by applicable law or agreed to in writing, software -@rem distributed under the License is distributed on an "AS IS" BASIS, -@rem WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -@rem See the License for the specific language governing permissions and -@rem limitations under the License. -@rem - -@if "%DEBUG%" == "" @echo off -@rem ########################################################################## -@rem -@rem unidbg-fetch-qsign startup script for Windows -@rem -@rem ########################################################################## - -@rem Set local scope for the variables with windows NT shell -if "%OS%"=="Windows_NT" setlocal - -set DIRNAME=%~dp0 -if "%DIRNAME%" == "" set DIRNAME=. -set APP_BASE_NAME=%~n0 -set APP_HOME=%DIRNAME%.. - -@rem Resolve any "." and ".." in APP_HOME to make it shorter. -for %%i in ("%APP_HOME%") do set APP_HOME=%%~fi - -@rem Add default JVM options here. You can also use JAVA_OPTS and UNIDBG_FETCH_QSIGN_OPTS to pass JVM options to this script. -set DEFAULT_JVM_OPTS= - -@rem Find java.exe -if defined JAVA_HOME goto findJavaFromJavaHome - -set JAVA_EXE=java.exe -%JAVA_EXE% -version >NUL 2>&1 -if "%ERRORLEVEL%" == "0" goto execute - -echo. -echo ERROR: JAVA_HOME is not set and no 'java' command could be found in your PATH. -echo. -echo Please set the JAVA_HOME variable in your environment to match the -echo location of your Java installation. - -goto fail - -:findJavaFromJavaHome -set JAVA_HOME=%JAVA_HOME:"=% -set JAVA_EXE=%JAVA_HOME%/bin/java.exe - -if exist "%JAVA_EXE%" goto execute - -echo. -echo ERROR: JAVA_HOME is set to an invalid directory: %JAVA_HOME% -echo. -echo Please set the JAVA_HOME variable in your environment to match the -echo location of your Java installation. 
- -goto fail - -:execute -@rem Setup the command line - -set CLASSPATH=%APP_HOME%\lib\unidbg-fetch-qsign-1.1.9.jar;%APP_HOME%\lib\unidbg-android-105.jar;%APP_HOME%\lib\ktor-server-content-negotiation-jvm-2.3.1.jar;%APP_HOME%\lib\ktor-serialization-kotlinx-json-jvm-2.3.1.jar;%APP_HOME%\lib\ktor-server-status-pages-jvm-2.3.1.jar;%APP_HOME%\lib\ktor-server-netty-jvm-2.3.1.jar;%APP_HOME%\lib\ktor-server-host-common-jvm-2.3.1.jar;%APP_HOME%\lib\ktor-server-core-jvm-2.3.1.jar;%APP_HOME%\lib\ktor-serialization-kotlinx-jvm-2.3.1.jar;%APP_HOME%\lib\ktor-serialization-jvm-2.3.1.jar;%APP_HOME%\lib\ktor-events-jvm-2.3.1.jar;%APP_HOME%\lib\ktor-websockets-jvm-2.3.1.jar;%APP_HOME%\lib\ktor-http-cio-jvm-2.3.1.jar;%APP_HOME%\lib\ktor-http-jvm-2.3.1.jar;%APP_HOME%\lib\ktor-network-jvm-2.3.1.jar;%APP_HOME%\lib\ktor-utils-jvm-2.3.1.jar;%APP_HOME%\lib\ktor-io-jvm-2.3.1.jar;%APP_HOME%\lib\kotlin-stdlib-jdk8-1.8.22.jar;%APP_HOME%\lib\kotlinx-serialization-json-jvm-1.5.1.jar;%APP_HOME%\lib\kotlinx-serialization-protobuf-jvm-1.5.1.jar;%APP_HOME%\lib\kotlinx-serialization-core-jvm-1.5.1.jar;%APP_HOME%\lib\logback-classic-1.2.11.jar;%APP_HOME%\lib\kotlinx-coroutines-jdk8-1.7.1.jar;%APP_HOME%\lib\kotlinx-coroutines-core-jvm-1.7.1.jar;%APP_HOME%\lib\kotlin-stdlib-jdk7-1.8.22.jar;%APP_HOME%\lib\kotlin-reflect-1.8.10.jar;%APP_HOME%\lib\kotlin-stdlib-1.8.22.jar;%APP_HOME%\lib\slf4j-api-1.7.36.jar;%APP_HOME%\lib\kotlin-stdlib-common-1.8.22.jar;%APP_HOME%\lib\config-1.4.2.jar;%APP_HOME%\lib\jansi-2.4.0.jar;%APP_HOME%\lib\netty-codec-http2-4.1.92.Final.jar;%APP_HOME%\lib\alpn-api-1.1.3.v20160715.jar;%APP_HOME%\lib\netty-transport-native-kqueue-4.1.92.Final.jar;%APP_HOME%\lib\netty-transport-native-epoll-4.1.92.Final.jar;%APP_HOME%\lib\logback-core-1.2.11.jar;%APP_HOME%\lib\annotations-23.0.0.jar;%APP_HOME%\lib\netty-codec-http-4.1.92.Final.jar;%APP_HOME%\lib\netty-handler-4.1.92.Final.jar;%APP_HOME%\lib\netty-codec-4.1.92.Final.jar;%APP_HOME%\lib\netty-transport-classes-kqueue-4.1.92.Final.jar;%APP_HOME%\lib\netty-transport-classes-epoll-4.1.92.Final.jar;%APP_HOME%\lib\netty-transport-native-unix-common-4.1.92.Final.jar;%APP_HOME%\lib\netty-transport-4.1.92.Final.jar;%APP_HOME%\lib\netty-buffer-4.1.92.Final.jar;%APP_HOME%\lib\netty-resolver-4.1.92.Final.jar;%APP_HOME%\lib\netty-common-4.1.92.Final.jar - - -@rem Execute unidbg-fetch-qsign -"%JAVA_EXE%" %DEFAULT_JVM_OPTS% %JAVA_OPTS% %UNIDBG_FETCH_QSIGN_OPTS% -classpath "%CLASSPATH%" MainKt %* - -:end -@rem End local scope for the variables with windows NT shell -if "%ERRORLEVEL%"=="0" goto mainEnd - -:fail -rem Set variable UNIDBG_FETCH_QSIGN_EXIT_CONSOLE if you need the _script_ return code instead of -rem the _cmd.exe /c_ return code! -if not "" == "%UNIDBG_FETCH_QSIGN_EXIT_CONSOLE%" exit 1 -exit /b 1 - -:mainEnd -if "%OS%"=="Windows_NT" endlocal - -:omega diff --git a/spaces/ZJunTvT/ZJunChat/chatgpt - windows.bat b/spaces/ZJunTvT/ZJunChat/chatgpt - windows.bat deleted file mode 100644 index 0b78fdc3a559abd692e3a9e9af5e482124d13a99..0000000000000000000000000000000000000000 --- a/spaces/ZJunTvT/ZJunChat/chatgpt - windows.bat +++ /dev/null @@ -1,14 +0,0 @@ -@echo off -echo Opening ChuanhuChatGPT... - -REM Open powershell via bat -start powershell.exe -NoExit -Command "python ./ChuanhuChatbot.py" - -REM The web page can be accessed with delayed start http://127.0.0.1:7860/ -ping -n 5 127.0.0.1>nul - -REM access chargpt via your default browser -start "" "http://127.0.0.1:7860/" - - -echo Finished opening ChuanhuChatGPT (http://127.0.0.1:7860/). 
\ No newline at end of file diff --git a/spaces/abhishek/sketch-to-image/annotator/uniformer_base/mmcv/ops/carafe.py b/spaces/abhishek/sketch-to-image/annotator/uniformer_base/mmcv/ops/carafe.py deleted file mode 100644 index 5154cb3abfccfbbe0a1b2daa67018dbf80aaf6d2..0000000000000000000000000000000000000000 --- a/spaces/abhishek/sketch-to-image/annotator/uniformer_base/mmcv/ops/carafe.py +++ /dev/null @@ -1,287 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -import torch.nn as nn -import torch.nn.functional as F -from torch.autograd import Function -from torch.nn.modules.module import Module - -from ..cnn import UPSAMPLE_LAYERS, normal_init, xavier_init -from ..utils import ext_loader - -ext_module = ext_loader.load_ext('_ext', [ - 'carafe_naive_forward', 'carafe_naive_backward', 'carafe_forward', - 'carafe_backward' -]) - - -class CARAFENaiveFunction(Function): - - @staticmethod - def symbolic(g, features, masks, kernel_size, group_size, scale_factor): - return g.op( - 'mmcv::MMCVCARAFENaive', - features, - masks, - kernel_size_i=kernel_size, - group_size_i=group_size, - scale_factor_f=scale_factor) - - @staticmethod - def forward(ctx, features, masks, kernel_size, group_size, scale_factor): - assert scale_factor >= 1 - assert masks.size(1) == kernel_size * kernel_size * group_size - assert masks.size(-1) == features.size(-1) * scale_factor - assert masks.size(-2) == features.size(-2) * scale_factor - assert features.size(1) % group_size == 0 - assert (kernel_size - 1) % 2 == 0 and kernel_size >= 1 - ctx.kernel_size = kernel_size - ctx.group_size = group_size - ctx.scale_factor = scale_factor - ctx.feature_size = features.size() - ctx.mask_size = masks.size() - - n, c, h, w = features.size() - output = features.new_zeros((n, c, h * scale_factor, w * scale_factor)) - ext_module.carafe_naive_forward( - features, - masks, - output, - kernel_size=kernel_size, - group_size=group_size, - scale_factor=scale_factor) - - if features.requires_grad or masks.requires_grad: - ctx.save_for_backward(features, masks) - return output - - @staticmethod - def backward(ctx, grad_output): - assert grad_output.is_cuda - - features, masks = ctx.saved_tensors - kernel_size = ctx.kernel_size - group_size = ctx.group_size - scale_factor = ctx.scale_factor - - grad_input = torch.zeros_like(features) - grad_masks = torch.zeros_like(masks) - ext_module.carafe_naive_backward( - grad_output.contiguous(), - features, - masks, - grad_input, - grad_masks, - kernel_size=kernel_size, - group_size=group_size, - scale_factor=scale_factor) - - return grad_input, grad_masks, None, None, None - - -carafe_naive = CARAFENaiveFunction.apply - - -class CARAFENaive(Module): - - def __init__(self, kernel_size, group_size, scale_factor): - super(CARAFENaive, self).__init__() - - assert isinstance(kernel_size, int) and isinstance( - group_size, int) and isinstance(scale_factor, int) - self.kernel_size = kernel_size - self.group_size = group_size - self.scale_factor = scale_factor - - def forward(self, features, masks): - return carafe_naive(features, masks, self.kernel_size, self.group_size, - self.scale_factor) - - -class CARAFEFunction(Function): - - @staticmethod - def symbolic(g, features, masks, kernel_size, group_size, scale_factor): - return g.op( - 'mmcv::MMCVCARAFE', - features, - masks, - kernel_size_i=kernel_size, - group_size_i=group_size, - scale_factor_f=scale_factor) - - @staticmethod - def forward(ctx, features, masks, kernel_size, group_size, scale_factor): - assert scale_factor >= 1 - assert 
masks.size(1) == kernel_size * kernel_size * group_size - assert masks.size(-1) == features.size(-1) * scale_factor - assert masks.size(-2) == features.size(-2) * scale_factor - assert features.size(1) % group_size == 0 - assert (kernel_size - 1) % 2 == 0 and kernel_size >= 1 - ctx.kernel_size = kernel_size - ctx.group_size = group_size - ctx.scale_factor = scale_factor - ctx.feature_size = features.size() - ctx.mask_size = masks.size() - - n, c, h, w = features.size() - output = features.new_zeros((n, c, h * scale_factor, w * scale_factor)) - routput = features.new_zeros(output.size(), requires_grad=False) - rfeatures = features.new_zeros(features.size(), requires_grad=False) - rmasks = masks.new_zeros(masks.size(), requires_grad=False) - ext_module.carafe_forward( - features, - masks, - rfeatures, - routput, - rmasks, - output, - kernel_size=kernel_size, - group_size=group_size, - scale_factor=scale_factor) - - if features.requires_grad or masks.requires_grad: - ctx.save_for_backward(features, masks, rfeatures) - return output - - @staticmethod - def backward(ctx, grad_output): - assert grad_output.is_cuda - - features, masks, rfeatures = ctx.saved_tensors - kernel_size = ctx.kernel_size - group_size = ctx.group_size - scale_factor = ctx.scale_factor - - rgrad_output = torch.zeros_like(grad_output, requires_grad=False) - rgrad_input_hs = torch.zeros_like(grad_output, requires_grad=False) - rgrad_input = torch.zeros_like(features, requires_grad=False) - rgrad_masks = torch.zeros_like(masks, requires_grad=False) - grad_input = torch.zeros_like(features, requires_grad=False) - grad_masks = torch.zeros_like(masks, requires_grad=False) - ext_module.carafe_backward( - grad_output.contiguous(), - rfeatures, - masks, - rgrad_output, - rgrad_input_hs, - rgrad_input, - rgrad_masks, - grad_input, - grad_masks, - kernel_size=kernel_size, - group_size=group_size, - scale_factor=scale_factor) - return grad_input, grad_masks, None, None, None - - -carafe = CARAFEFunction.apply - - -class CARAFE(Module): - """ CARAFE: Content-Aware ReAssembly of FEatures - - Please refer to https://arxiv.org/abs/1905.02188 for more details. - - Args: - kernel_size (int): reassemble kernel size - group_size (int): reassemble group size - scale_factor (int): upsample ratio - - Returns: - upsampled feature map - """ - - def __init__(self, kernel_size, group_size, scale_factor): - super(CARAFE, self).__init__() - - assert isinstance(kernel_size, int) and isinstance( - group_size, int) and isinstance(scale_factor, int) - self.kernel_size = kernel_size - self.group_size = group_size - self.scale_factor = scale_factor - - def forward(self, features, masks): - return carafe(features, masks, self.kernel_size, self.group_size, - self.scale_factor) - - -@UPSAMPLE_LAYERS.register_module(name='carafe') -class CARAFEPack(nn.Module): - """A unified package of CARAFE upsampler that contains: 1) channel - compressor 2) content encoder 3) CARAFE op. - - Official implementation of ICCV 2019 paper - CARAFE: Content-Aware ReAssembly of FEatures - Please refer to https://arxiv.org/abs/1905.02188 for more details. 
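-
-    In short, the input feature map is first reduced by a 1x1 channel compressor, the
-    content encoder then predicts a reassembly kernel for every upsampled location, and
-    the kernels are softmax-normalized and applied by the CARAFE op to produce the
-    upsampled output.
-
-    A minimal usage sketch (the shapes are illustrative assumptions, and the compiled
-    CUDA extension backing the carafe op must be available):
-
-        >>> pack = CARAFEPack(channels=256, scale_factor=2).cuda()
-        >>> x = torch.randn(1, 256, 32, 32, device='cuda')
-        >>> out = pack(x)  # (1, 256, 64, 64)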
- - Args: - channels (int): input feature channels - scale_factor (int): upsample ratio - up_kernel (int): kernel size of CARAFE op - up_group (int): group size of CARAFE op - encoder_kernel (int): kernel size of content encoder - encoder_dilation (int): dilation of content encoder - compressed_channels (int): output channels of channels compressor - - Returns: - upsampled feature map - """ - - def __init__(self, - channels, - scale_factor, - up_kernel=5, - up_group=1, - encoder_kernel=3, - encoder_dilation=1, - compressed_channels=64): - super(CARAFEPack, self).__init__() - self.channels = channels - self.scale_factor = scale_factor - self.up_kernel = up_kernel - self.up_group = up_group - self.encoder_kernel = encoder_kernel - self.encoder_dilation = encoder_dilation - self.compressed_channels = compressed_channels - self.channel_compressor = nn.Conv2d(channels, self.compressed_channels, - 1) - self.content_encoder = nn.Conv2d( - self.compressed_channels, - self.up_kernel * self.up_kernel * self.up_group * - self.scale_factor * self.scale_factor, - self.encoder_kernel, - padding=int((self.encoder_kernel - 1) * self.encoder_dilation / 2), - dilation=self.encoder_dilation, - groups=1) - self.init_weights() - - def init_weights(self): - for m in self.modules(): - if isinstance(m, nn.Conv2d): - xavier_init(m, distribution='uniform') - normal_init(self.content_encoder, std=0.001) - - def kernel_normalizer(self, mask): - mask = F.pixel_shuffle(mask, self.scale_factor) - n, mask_c, h, w = mask.size() - # use float division explicitly, - # to void inconsistency while exporting to onnx - mask_channel = int(mask_c / float(self.up_kernel**2)) - mask = mask.view(n, mask_channel, -1, h, w) - - mask = F.softmax(mask, dim=2, dtype=mask.dtype) - mask = mask.view(n, mask_c, h, w).contiguous() - - return mask - - def feature_reassemble(self, x, mask): - x = carafe(x, mask, self.up_kernel, self.up_group, self.scale_factor) - return x - - def forward(self, x): - compressed_x = self.channel_compressor(x) - mask = self.content_encoder(compressed_x) - mask = self.kernel_normalizer(mask) - - x = self.feature_reassemble(x, mask) - return x diff --git a/spaces/abidlabs/whisper/app.py b/spaces/abidlabs/whisper/app.py deleted file mode 100644 index 465817b6ad1403461fa2e09177612de333848b24..0000000000000000000000000000000000000000 --- a/spaces/abidlabs/whisper/app.py +++ /dev/null @@ -1,148 +0,0 @@ -import os -os.system("pip install git+https://github.com/openai/whisper.git") -import gradio as gr -import whisper - -from share_btn import community_icon_html, loading_icon_html, share_js - -model = whisper.load_model("small") - - - -def inference(audio): - audio = whisper.load_audio(audio) - audio = whisper.pad_or_trim(audio) - - mel = whisper.log_mel_spectrogram(audio).to(model.device) - - _, probs = model.detect_language(mel) - - options = whisper.DecodingOptions(fp16 = False) - result = whisper.decode(model, mel, options) - - print(result.text) - return result.text - - - - -css = """ - .gradio-container { - font-family: 'IBM Plex Sans', sans-serif; - } - .gr-button { - color: white; - border-color: black; - background: black; - } - input[type='range'] { - accent-color: black; - } - .dark input[type='range'] { - accent-color: #dfdfdf; - } - .container { - max-width: 730px; - margin: auto; - padding-top: 1.5rem; - } - - .details:hover { - text-decoration: underline; - } - .gr-button { - white-space: nowrap; - } - .gr-button:focus { - border-color: rgb(147 197 253 / var(--tw-border-opacity)); - outline: none; - 
box-shadow: var(--tw-ring-offset-shadow), var(--tw-ring-shadow), var(--tw-shadow, 0 0 #0000); - --tw-border-opacity: 1; - --tw-ring-offset-shadow: var(--tw-ring-inset) 0 0 0 var(--tw-ring-offset-width) var(--tw-ring-offset-color); - --tw-ring-shadow: var(--tw-ring-inset) 0 0 0 calc(3px var(--tw-ring-offset-width)) var(--tw-ring-color); - --tw-ring-color: rgb(191 219 254 / var(--tw-ring-opacity)); - --tw-ring-opacity: .5; - } - .footer { - margin-bottom: 45px; - margin-top: 35px; - text-align: center; - border-bottom: 1px solid #e5e5e5; - } - .footer>p { - font-size: .8rem; - display: inline-block; - padding: 0 10px; - transform: translateY(10px); - background: white; - } - .dark .footer { - border-color: #303030; - } - .dark .footer>p { - background: #0b0f19; - } - .prompt h4{ - margin: 1.25em 0 .25em 0; - font-weight: bold; - font-size: 115%; - } - .animate-spin { - animation: spin 1s linear infinite; - } - @keyframes spin { - from { - transform: rotate(0deg); - } - to { - transform: rotate(360deg); - } - } - #share-btn-container { - display: flex; margin-top: 1.5rem !important; padding-left: 0.5rem !important; padding-right: 0.5rem !important; background-color: #000000; justify-content: center; align-items: center; border-radius: 9999px !important; width: 13rem; - } - #share-btn { - all: initial; color: #ffffff;font-weight: 600; cursor:pointer; font-family: 'IBM Plex Sans', sans-serif; margin-left: 0.5rem !important; padding-top: 0.25rem !important; padding-bottom: 0.25rem !important; - } - #share-btn * { - all: unset; - } -""" - -block = gr.Blocks(css=css) - - - -with block: - gr.Markdown("# Whisper Speech Recognition") - with gr.Group(): - with gr.Box(): - with gr.Row().style(mobile_collapse=False, equal_height=True): - audio = gr.Audio( - label="Input Audio", - show_label=False, - source="microphone", - type="filepath" - ) - - btn = gr.Button("Transcribe") - text = gr.Textbox(show_label=False, elem_id="result-textarea") - with gr.Group(elem_id="share-btn-container"): - community_icon = gr.HTML(community_icon_html, visible=False) - loading_icon = gr.HTML(loading_icon_html, visible=False) - share_button = gr.Button("Share to community", elem_id="share-btn", visible=False) - - - - - btn.click(inference, inputs=[audio], outputs=[text], api_name="predict") - share_button.click(None, [], [], _js=share_js) - - gr.HTML(''' - - ''') - -block.launch() \ No newline at end of file diff --git a/spaces/abrar-lohia/text-2-character-anim/pyrender/pyrender/scene.py b/spaces/abrar-lohia/text-2-character-anim/pyrender/pyrender/scene.py deleted file mode 100644 index 2fe057ec66f52f2dd9c1363aacf72a7c6cec4e6c..0000000000000000000000000000000000000000 --- a/spaces/abrar-lohia/text-2-character-anim/pyrender/pyrender/scene.py +++ /dev/null @@ -1,585 +0,0 @@ -"""Scenes, conforming to the glTF 2.0 standards as specified in -https://github.com/KhronosGroup/glTF/tree/master/specification/2.0#reference-scene - -Author: Matthew Matl -""" -import numpy as np -import networkx as nx -import trimesh - -from .mesh import Mesh -from .camera import Camera -from .light import Light, PointLight, DirectionalLight, SpotLight -from .node import Node -from .utils import format_color_vector - - -class Scene(object): - """A hierarchical scene graph. - - Parameters - ---------- - nodes : list of :class:`Node` - The set of all nodes in the scene. - bg_color : (4,) float, optional - Background color of scene. - ambient_light : (3,) float, optional - Color of ambient light. Defaults to no ambient light. 
- name : str, optional - The user-defined name of this object. - """ - - def __init__(self, - nodes=None, - bg_color=None, - ambient_light=None, - name=None): - - if bg_color is None: - bg_color = np.ones(4) - else: - bg_color = format_color_vector(bg_color, 4) - - if ambient_light is None: - ambient_light = np.zeros(3) - - if nodes is None: - nodes = set() - self._nodes = set() # Will be added at the end of this function - - self.bg_color = bg_color - self.ambient_light = ambient_light - self.name = name - - self._name_to_nodes = {} - self._obj_to_nodes = {} - self._obj_name_to_nodes = {} - self._mesh_nodes = set() - self._point_light_nodes = set() - self._spot_light_nodes = set() - self._directional_light_nodes = set() - self._camera_nodes = set() - self._main_camera_node = None - self._bounds = None - - # Transform tree - self._digraph = nx.DiGraph() - self._digraph.add_node('world') - self._path_cache = {} - - # Find root nodes and add them - if len(nodes) > 0: - node_parent_map = {n: None for n in nodes} - for node in nodes: - for child in node.children: - if node_parent_map[child] is not None: - raise ValueError('Nodes may not have more than ' - 'one parent') - node_parent_map[child] = node - for node in node_parent_map: - if node_parent_map[node] is None: - self.add_node(node) - - @property - def name(self): - """str : The user-defined name of this object. - """ - return self._name - - @name.setter - def name(self, value): - if value is not None: - value = str(value) - self._name = value - - @property - def nodes(self): - """set of :class:`Node` : Set of nodes in the scene. - """ - return self._nodes - - @property - def bg_color(self): - """(3,) float : The scene background color. - """ - return self._bg_color - - @bg_color.setter - def bg_color(self, value): - if value is None: - value = np.ones(4) - else: - value = format_color_vector(value, 4) - self._bg_color = value - - @property - def ambient_light(self): - """(3,) float : The ambient light in the scene. - """ - return self._ambient_light - - @ambient_light.setter - def ambient_light(self, value): - if value is None: - value = np.zeros(3) - else: - value = format_color_vector(value, 3) - self._ambient_light = value - - @property - def meshes(self): - """set of :class:`Mesh` : The meshes in the scene. - """ - return set([n.mesh for n in self.mesh_nodes]) - - @property - def mesh_nodes(self): - """set of :class:`Node` : The nodes containing meshes. - """ - return self._mesh_nodes - - @property - def lights(self): - """set of :class:`Light` : The lights in the scene. - """ - return self.point_lights | self.spot_lights | self.directional_lights - - @property - def light_nodes(self): - """set of :class:`Node` : The nodes containing lights. - """ - return (self.point_light_nodes | self.spot_light_nodes | - self.directional_light_nodes) - - @property - def point_lights(self): - """set of :class:`PointLight` : The point lights in the scene. - """ - return set([n.light for n in self.point_light_nodes]) - - @property - def point_light_nodes(self): - """set of :class:`Node` : The nodes containing point lights. - """ - return self._point_light_nodes - - @property - def spot_lights(self): - """set of :class:`SpotLight` : The spot lights in the scene. - """ - return set([n.light for n in self.spot_light_nodes]) - - @property - def spot_light_nodes(self): - """set of :class:`Node` : The nodes containing spot lights. 
- """ - return self._spot_light_nodes - - @property - def directional_lights(self): - """set of :class:`DirectionalLight` : The directional lights in - the scene. - """ - return set([n.light for n in self.directional_light_nodes]) - - @property - def directional_light_nodes(self): - """set of :class:`Node` : The nodes containing directional lights. - """ - return self._directional_light_nodes - - @property - def cameras(self): - """set of :class:`Camera` : The cameras in the scene. - """ - return set([n.camera for n in self.camera_nodes]) - - @property - def camera_nodes(self): - """set of :class:`Node` : The nodes containing cameras in the scene. - """ - return self._camera_nodes - - @property - def main_camera_node(self): - """set of :class:`Node` : The node containing the main camera in the - scene. - """ - return self._main_camera_node - - @main_camera_node.setter - def main_camera_node(self, value): - if value not in self.nodes: - raise ValueError('New main camera node must already be in scene') - self._main_camera_node = value - - @property - def bounds(self): - """(2,3) float : The axis-aligned bounds of the scene. - """ - if self._bounds is None: - # Compute corners - corners = [] - for mesh_node in self.mesh_nodes: - mesh = mesh_node.mesh - pose = self.get_pose(mesh_node) - corners_local = trimesh.bounds.corners(mesh.bounds) - corners_world = pose[:3,:3].dot(corners_local.T).T + pose[:3,3] - corners.append(corners_world) - if len(corners) == 0: - self._bounds = np.zeros((2,3)) - else: - corners = np.vstack(corners) - self._bounds = np.array([np.min(corners, axis=0), - np.max(corners, axis=0)]) - return self._bounds - - @property - def centroid(self): - """(3,) float : The centroid of the scene's axis-aligned bounding box - (AABB). - """ - return np.mean(self.bounds, axis=0) - - @property - def extents(self): - """(3,) float : The lengths of the axes of the scene's AABB. - """ - return np.diff(self.bounds, axis=0).reshape(-1) - - @property - def scale(self): - """(3,) float : The length of the diagonal of the scene's AABB. - """ - return np.linalg.norm(self.extents) - - def add(self, obj, name=None, pose=None, - parent_node=None, parent_name=None): - """Add an object (mesh, light, or camera) to the scene. - - Parameters - ---------- - obj : :class:`Mesh`, :class:`Light`, or :class:`Camera` - The object to add to the scene. - name : str - A name for the new node to be created. - pose : (4,4) float - The local pose of this node relative to its parent node. - parent_node : :class:`Node` - The parent of this Node. If None, the new node is a root node. - parent_name : str - The name of the parent node, can be specified instead of - `parent_node`. - - Returns - ------- - node : :class:`Node` - The newly-created and inserted node. 
- """ - if isinstance(obj, Mesh): - node = Node(name=name, matrix=pose, mesh=obj) - elif isinstance(obj, Light): - node = Node(name=name, matrix=pose, light=obj) - elif isinstance(obj, Camera): - node = Node(name=name, matrix=pose, camera=obj) - else: - raise TypeError('Unrecognized object type') - - if parent_node is None and parent_name is not None: - parent_nodes = self.get_nodes(name=parent_name) - if len(parent_nodes) == 0: - raise ValueError('No parent node with name {} found' - .format(parent_name)) - elif len(parent_nodes) > 1: - raise ValueError('More than one parent node with name {} found' - .format(parent_name)) - parent_node = list(parent_nodes)[0] - - self.add_node(node, parent_node=parent_node) - - return node - - def get_nodes(self, node=None, name=None, obj=None, obj_name=None): - """Search for existing nodes. Only nodes matching all specified - parameters is returned, or None if no such node exists. - - Parameters - ---------- - node : :class:`Node`, optional - If present, returns this node if it is in the scene. - name : str - A name for the Node. - obj : :class:`Mesh`, :class:`Light`, or :class:`Camera` - An object that is attached to the node. - obj_name : str - The name of an object that is attached to the node. - - Returns - ------- - nodes : set of :class:`.Node` - The nodes that match all query terms. - """ - if node is not None: - if node in self.nodes: - return set([node]) - else: - return set() - nodes = set(self.nodes) - if name is not None: - matches = set() - if name in self._name_to_nodes: - matches = self._name_to_nodes[name] - nodes = nodes & matches - if obj is not None: - matches = set() - if obj in self._obj_to_nodes: - matches = self._obj_to_nodes[obj] - nodes = nodes & matches - if obj_name is not None: - matches = set() - if obj_name in self._obj_name_to_nodes: - matches = self._obj_name_to_nodes[obj_name] - nodes = nodes & matches - - return nodes - - def add_node(self, node, parent_node=None): - """Add a Node to the scene. - - Parameters - ---------- - node : :class:`Node` - The node to be added. - parent_node : :class:`Node` - The parent of this Node. If None, the new node is a root node. 
- """ - if node in self.nodes: - raise ValueError('Node already in scene') - self.nodes.add(node) - - # Add node to sets - if node.name is not None: - if node.name not in self._name_to_nodes: - self._name_to_nodes[node.name] = set() - self._name_to_nodes[node.name].add(node) - for obj in [node.mesh, node.camera, node.light]: - if obj is not None: - if obj not in self._obj_to_nodes: - self._obj_to_nodes[obj] = set() - self._obj_to_nodes[obj].add(node) - if obj.name is not None: - if obj.name not in self._obj_name_to_nodes: - self._obj_name_to_nodes[obj.name] = set() - self._obj_name_to_nodes[obj.name].add(node) - if node.mesh is not None: - self._mesh_nodes.add(node) - if node.light is not None: - if isinstance(node.light, PointLight): - self._point_light_nodes.add(node) - if isinstance(node.light, SpotLight): - self._spot_light_nodes.add(node) - if isinstance(node.light, DirectionalLight): - self._directional_light_nodes.add(node) - if node.camera is not None: - self._camera_nodes.add(node) - if self._main_camera_node is None: - self._main_camera_node = node - - if parent_node is None: - parent_node = 'world' - elif parent_node not in self.nodes: - raise ValueError('Parent node must already be in scene') - elif node not in parent_node.children: - parent_node.children.append(node) - - # Create node in graph - self._digraph.add_node(node) - self._digraph.add_edge(node, parent_node) - - # Iterate over children - for child in node.children: - self.add_node(child, node) - - self._path_cache = {} - self._bounds = None - - def has_node(self, node): - """Check if a node is already in the scene. - - Parameters - ---------- - node : :class:`Node` - The node to be checked. - - Returns - ------- - has_node : bool - True if the node is already in the scene and false otherwise. - """ - return node in self.nodes - - def remove_node(self, node): - """Remove a node and all its children from the scene. - - Parameters - ---------- - node : :class:`Node` - The node to be removed. - """ - # Disconnect self from parent who is staying in the graph - parent = list(self._digraph.neighbors(node))[0] - self._remove_node(node) - if isinstance(parent, Node): - parent.children.remove(node) - self._path_cache = {} - self._bounds = None - - def get_pose(self, node): - """Get the world-frame pose of a node in the scene. - - Parameters - ---------- - node : :class:`Node` - The node to find the pose of. - - Returns - ------- - pose : (4,4) float - The transform matrix for this node. - """ - if node not in self.nodes: - raise ValueError('Node must already be in scene') - if node in self._path_cache: - path = self._path_cache[node] - else: - # Get path from from_frame to to_frame - path = nx.shortest_path(self._digraph, node, 'world') - self._path_cache[node] = path - - # Traverse from from_node to to_node - pose = np.eye(4) - for n in path[:-1]: - pose = np.dot(n.matrix, pose) - - return pose - - def set_pose(self, node, pose): - """Set the local-frame pose of a node in the scene. - - Parameters - ---------- - node : :class:`Node` - The node to set the pose of. - pose : (4,4) float - The pose to set the node to. - """ - if node not in self.nodes: - raise ValueError('Node must already be in scene') - node._matrix = pose - if node.mesh is not None: - self._bounds = None - - def clear(self): - """Clear out all nodes to form an empty scene. 
- """ - self._nodes = set() - - self._name_to_nodes = {} - self._obj_to_nodes = {} - self._obj_name_to_nodes = {} - self._mesh_nodes = set() - self._point_light_nodes = set() - self._spot_light_nodes = set() - self._directional_light_nodes = set() - self._camera_nodes = set() - self._main_camera_node = None - self._bounds = None - - # Transform tree - self._digraph = nx.DiGraph() - self._digraph.add_node('world') - self._path_cache = {} - - def _remove_node(self, node): - """Remove a node and all its children from the scene. - - Parameters - ---------- - node : :class:`Node` - The node to be removed. - """ - - # Remove self from nodes - self.nodes.remove(node) - - # Remove children - for child in node.children: - self._remove_node(child) - - # Remove self from the graph - self._digraph.remove_node(node) - - # Remove from maps - if node.name in self._name_to_nodes: - self._name_to_nodes[node.name].remove(node) - if len(self._name_to_nodes[node.name]) == 0: - self._name_to_nodes.pop(node.name) - for obj in [node.mesh, node.camera, node.light]: - if obj is None: - continue - self._obj_to_nodes[obj].remove(node) - if len(self._obj_to_nodes[obj]) == 0: - self._obj_to_nodes.pop(obj) - if obj.name is not None: - self._obj_name_to_nodes[obj.name].remove(node) - if len(self._obj_name_to_nodes[obj.name]) == 0: - self._obj_name_to_nodes.pop(obj.name) - if node.mesh is not None: - self._mesh_nodes.remove(node) - if node.light is not None: - if isinstance(node.light, PointLight): - self._point_light_nodes.remove(node) - if isinstance(node.light, SpotLight): - self._spot_light_nodes.remove(node) - if isinstance(node.light, DirectionalLight): - self._directional_light_nodes.remove(node) - if node.camera is not None: - self._camera_nodes.remove(node) - if self._main_camera_node == node: - if len(self._camera_nodes) > 0: - self._main_camera_node = next(iter(self._camera_nodes)) - else: - self._main_camera_node = None - - @staticmethod - def from_trimesh_scene(trimesh_scene, - bg_color=None, ambient_light=None): - """Create a :class:`.Scene` from a :class:`trimesh.scene.scene.Scene`. - - Parameters - ---------- - trimesh_scene : :class:`trimesh.scene.scene.Scene` - Scene with :class:~`trimesh.base.Trimesh` objects. - bg_color : (4,) float - Background color for the created scene. - ambient_light : (3,) float or None - Ambient light in the scene. - - Returns - ------- - scene_pr : :class:`Scene` - A scene containing the same geometry as the trimesh scene. 
- """ - # convert trimesh geometries to pyrender geometries - geometries = {name: Mesh.from_trimesh(geom) - for name, geom in trimesh_scene.geometry.items()} - - # create the pyrender scene object - scene_pr = Scene(bg_color=bg_color, ambient_light=ambient_light) - - # add every node with geometry to the pyrender scene - for node in trimesh_scene.graph.nodes_geometry: - pose, geom_name = trimesh_scene.graph[node] - scene_pr.add(geometries[geom_name], pose=pose) - - return scene_pr diff --git a/spaces/adirik/stylemc-demo/encoder4editing/models/encoders/psp_encoders.py b/spaces/adirik/stylemc-demo/encoder4editing/models/encoders/psp_encoders.py deleted file mode 100644 index ab52d04dbd8eac5adf673a1587b0b4ea9d6e68dd..0000000000000000000000000000000000000000 --- a/spaces/adirik/stylemc-demo/encoder4editing/models/encoders/psp_encoders.py +++ /dev/null @@ -1,235 +0,0 @@ -from enum import Enum -import math -import numpy as np -import torch -from torch import nn -from torch.nn import Conv2d, BatchNorm2d, PReLU, Sequential, Module - -from models.encoders.helpers import get_blocks, bottleneck_IR, bottleneck_IR_SE, _upsample_add -from models.stylegan2.model import EqualLinear - - -class ProgressiveStage(Enum): - WTraining = 0 - Delta1Training = 1 - Delta2Training = 2 - Delta3Training = 3 - Delta4Training = 4 - Delta5Training = 5 - Delta6Training = 6 - Delta7Training = 7 - Delta8Training = 8 - Delta9Training = 9 - Delta10Training = 10 - Delta11Training = 11 - Delta12Training = 12 - Delta13Training = 13 - Delta14Training = 14 - Delta15Training = 15 - Delta16Training = 16 - Delta17Training = 17 - Inference = 18 - - -class GradualStyleBlock(Module): - def __init__(self, in_c, out_c, spatial): - super(GradualStyleBlock, self).__init__() - self.out_c = out_c - self.spatial = spatial - num_pools = int(np.log2(spatial)) - modules = [] - modules += [Conv2d(in_c, out_c, kernel_size=3, stride=2, padding=1), - nn.LeakyReLU()] - for i in range(num_pools - 1): - modules += [ - Conv2d(out_c, out_c, kernel_size=3, stride=2, padding=1), - nn.LeakyReLU() - ] - self.convs = nn.Sequential(*modules) - self.linear = EqualLinear(out_c, out_c, lr_mul=1) - - def forward(self, x): - x = self.convs(x) - x = x.view(-1, self.out_c) - x = self.linear(x) - return x - - -class GradualStyleEncoder(Module): - def __init__(self, num_layers, mode='ir', opts=None): - super(GradualStyleEncoder, self).__init__() - assert num_layers in [50, 100, 152], 'num_layers should be 50,100, or 152' - assert mode in ['ir', 'ir_se'], 'mode should be ir or ir_se' - blocks = get_blocks(num_layers) - if mode == 'ir': - unit_module = bottleneck_IR - elif mode == 'ir_se': - unit_module = bottleneck_IR_SE - self.input_layer = Sequential(Conv2d(3, 64, (3, 3), 1, 1, bias=False), - BatchNorm2d(64), - PReLU(64)) - modules = [] - for block in blocks: - for bottleneck in block: - modules.append(unit_module(bottleneck.in_channel, - bottleneck.depth, - bottleneck.stride)) - self.body = Sequential(*modules) - - self.styles = nn.ModuleList() - log_size = int(math.log(opts.stylegan_size, 2)) - self.style_count = 2 * log_size - 2 - self.coarse_ind = 3 - self.middle_ind = 7 - for i in range(self.style_count): - if i < self.coarse_ind: - style = GradualStyleBlock(512, 512, 16) - elif i < self.middle_ind: - style = GradualStyleBlock(512, 512, 32) - else: - style = GradualStyleBlock(512, 512, 64) - self.styles.append(style) - self.latlayer1 = nn.Conv2d(256, 512, kernel_size=1, stride=1, padding=0) - self.latlayer2 = nn.Conv2d(128, 512, kernel_size=1, stride=1, padding=0) 
- - def forward(self, x): - x = self.input_layer(x) - - latents = [] - modulelist = list(self.body._modules.values()) - for i, l in enumerate(modulelist): - x = l(x) - if i == 6: - c1 = x - elif i == 20: - c2 = x - elif i == 23: - c3 = x - - for j in range(self.coarse_ind): - latents.append(self.styles[j](c3)) - - p2 = _upsample_add(c3, self.latlayer1(c2)) - for j in range(self.coarse_ind, self.middle_ind): - latents.append(self.styles[j](p2)) - - p1 = _upsample_add(p2, self.latlayer2(c1)) - for j in range(self.middle_ind, self.style_count): - latents.append(self.styles[j](p1)) - - out = torch.stack(latents, dim=1) - return out - - -class Encoder4Editing(Module): - def __init__(self, num_layers, mode='ir', opts=None): - super(Encoder4Editing, self).__init__() - assert num_layers in [50, 100, 152], 'num_layers should be 50,100, or 152' - assert mode in ['ir', 'ir_se'], 'mode should be ir or ir_se' - blocks = get_blocks(num_layers) - if mode == 'ir': - unit_module = bottleneck_IR - elif mode == 'ir_se': - unit_module = bottleneck_IR_SE - self.input_layer = Sequential(Conv2d(3, 64, (3, 3), 1, 1, bias=False), - BatchNorm2d(64), - PReLU(64)) - modules = [] - for block in blocks: - for bottleneck in block: - modules.append(unit_module(bottleneck.in_channel, - bottleneck.depth, - bottleneck.stride)) - self.body = Sequential(*modules) - - self.styles = nn.ModuleList() - log_size = int(math.log(opts.stylegan_size, 2)) - self.style_count = 2 * log_size - 2 - self.coarse_ind = 3 - self.middle_ind = 7 - - for i in range(self.style_count): - if i < self.coarse_ind: - style = GradualStyleBlock(512, 512, 16) - elif i < self.middle_ind: - style = GradualStyleBlock(512, 512, 32) - else: - style = GradualStyleBlock(512, 512, 64) - self.styles.append(style) - - self.latlayer1 = nn.Conv2d(256, 512, kernel_size=1, stride=1, padding=0) - self.latlayer2 = nn.Conv2d(128, 512, kernel_size=1, stride=1, padding=0) - - self.progressive_stage = ProgressiveStage.Inference - - def get_deltas_starting_dimensions(self): - ''' Get a list of the initial dimension of every delta from which it is applied ''' - return list(range(self.style_count)) # Each dimension has a delta applied to it - - def set_progressive_stage(self, new_stage: ProgressiveStage): - self.progressive_stage = new_stage - print('Changed progressive stage to: ', new_stage) - - def forward(self, x): - x = self.input_layer(x) - - modulelist = list(self.body._modules.values()) - for i, l in enumerate(modulelist): - x = l(x) - if i == 6: - c1 = x - elif i == 20: - c2 = x - elif i == 23: - c3 = x - - # Infer main W and duplicate it - w0 = self.styles[0](c3) - w = w0.repeat(self.style_count, 1, 1).permute(1, 0, 2) - stage = self.progressive_stage.value - features = c3 - for i in range(1, min(stage + 1, self.style_count)): # Infer additional deltas - if i == self.coarse_ind: - p2 = _upsample_add(c3, self.latlayer1(c2)) # FPN's middle features - features = p2 - elif i == self.middle_ind: - p1 = _upsample_add(p2, self.latlayer2(c1)) # FPN's fine features - features = p1 - delta_i = self.styles[i](features) - w[:, i] += delta_i - return w - - -class BackboneEncoderUsingLastLayerIntoW(Module): - def __init__(self, num_layers, mode='ir', opts=None): - super(BackboneEncoderUsingLastLayerIntoW, self).__init__() - print('Using BackboneEncoderUsingLastLayerIntoW') - assert num_layers in [50, 100, 152], 'num_layers should be 50,100, or 152' - assert mode in ['ir', 'ir_se'], 'mode should be ir or ir_se' - blocks = get_blocks(num_layers) - if mode == 'ir': - unit_module = 
bottleneck_IR - elif mode == 'ir_se': - unit_module = bottleneck_IR_SE - self.input_layer = Sequential(Conv2d(3, 64, (3, 3), 1, 1, bias=False), - BatchNorm2d(64), - PReLU(64)) - self.output_pool = torch.nn.AdaptiveAvgPool2d((1, 1)) - self.linear = EqualLinear(512, 512, lr_mul=1) - modules = [] - for block in blocks: - for bottleneck in block: - modules.append(unit_module(bottleneck.in_channel, - bottleneck.depth, - bottleneck.stride)) - self.body = Sequential(*modules) - log_size = int(math.log(opts.stylegan_size, 2)) - self.style_count = 2 * log_size - 2 - - def forward(self, x): - x = self.input_layer(x) - x = self.body(x) - x = self.output_pool(x) - x = x.view(-1, 512) - x = self.linear(x) - return x.repeat(self.style_count, 1, 1).permute(1, 0, 2) diff --git a/spaces/adirik/stylemc-demo/torch_utils/training_stats.py b/spaces/adirik/stylemc-demo/torch_utils/training_stats.py deleted file mode 100644 index 26f467f9eaa074ee13de1cf2625cd7da44880847..0000000000000000000000000000000000000000 --- a/spaces/adirik/stylemc-demo/torch_utils/training_stats.py +++ /dev/null @@ -1,268 +0,0 @@ -# Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved. -# -# NVIDIA CORPORATION and its licensors retain all intellectual property -# and proprietary rights in and to this software, related documentation -# and any modifications thereto. Any use, reproduction, disclosure or -# distribution of this software and related documentation without an express -# license agreement from NVIDIA CORPORATION is strictly prohibited. - -"""Facilities for reporting and collecting training statistics across -multiple processes and devices. The interface is designed to minimize -synchronization overhead as well as the amount of boilerplate in user -code.""" - -import re -import numpy as np -import torch -import dnnlib - -from . import misc - -#---------------------------------------------------------------------------- - -_num_moments = 3 # [num_scalars, sum_of_scalars, sum_of_squares] -_reduce_dtype = torch.float32 # Data type to use for initial per-tensor reduction. -_counter_dtype = torch.float64 # Data type to use for the internal counters. -_rank = 0 # Rank of the current process. -_sync_device = None # Device to use for multiprocess communication. None = single-process. -_sync_called = False # Has _sync() been called yet? -_counters = dict() # Running counters on each device, updated by report(): name => device => torch.Tensor -_cumulative = dict() # Cumulative counters on the CPU, updated by _sync(): name => torch.Tensor - -#---------------------------------------------------------------------------- - -def init_multiprocessing(rank, sync_device): - r"""Initializes `torch_utils.training_stats` for collecting statistics - across multiple processes. - - This function must be called after - `torch.distributed.init_process_group()` and before `Collector.update()`. - The call is not necessary if multi-process collection is not needed. - - Args: - rank: Rank of the current process. - sync_device: PyTorch device to use for inter-process - communication, or None to disable multi-process - collection. Typically `torch.device('cuda', rank)`. - """ - global _rank, _sync_device - assert not _sync_called - _rank = rank - _sync_device = sync_device - -#---------------------------------------------------------------------------- - -@misc.profiled_function -def report(name, value): - r"""Broadcasts the given set of scalars to all interested instances of - `Collector`, across device and process boundaries. 
- - This function is expected to be extremely cheap and can be safely - called from anywhere in the training loop, loss function, or inside a - `torch.nn.Module`. - - Warning: The current implementation expects the set of unique names to - be consistent across processes. Please make sure that `report()` is - called at least once for each unique name by each process, and in the - same order. If a given process has no scalars to broadcast, it can do - `report(name, [])` (empty list). - - Args: - name: Arbitrary string specifying the name of the statistic. - Averages are accumulated separately for each unique name. - value: Arbitrary set of scalars. Can be a list, tuple, - NumPy array, PyTorch tensor, or Python scalar. - - Returns: - The same `value` that was passed in. - """ - if name not in _counters: - _counters[name] = dict() - - elems = torch.as_tensor(value) - if elems.numel() == 0: - return value - - elems = elems.detach().flatten().to(_reduce_dtype) - moments = torch.stack([ - torch.ones_like(elems).sum(), - elems.sum(), - elems.square().sum(), - ]) - assert moments.ndim == 1 and moments.shape[0] == _num_moments - moments = moments.to(_counter_dtype) - - device = moments.device - if device not in _counters[name]: - _counters[name][device] = torch.zeros_like(moments) - _counters[name][device].add_(moments) - return value - -#---------------------------------------------------------------------------- - -def report0(name, value): - r"""Broadcasts the given set of scalars by the first process (`rank = 0`), - but ignores any scalars provided by the other processes. - See `report()` for further details. - """ - report(name, value if _rank == 0 else []) - return value - -#---------------------------------------------------------------------------- - -class Collector: - r"""Collects the scalars broadcasted by `report()` and `report0()` and - computes their long-term averages (mean and standard deviation) over - user-defined periods of time. - - The averages are first collected into internal counters that are not - directly visible to the user. They are then copied to the user-visible - state as a result of calling `update()` and can then be queried using - `mean()`, `std()`, `as_dict()`, etc. Calling `update()` also resets the - internal counters for the next round, so that the user-visible state - effectively reflects averages collected between the last two calls to - `update()`. - - Args: - regex: Regular expression defining which statistics to - collect. The default is to collect everything. - keep_previous: Whether to retain the previous averages if no - scalars were collected on a given round - (default: True). - """ - def __init__(self, regex='.*', keep_previous=True): - self._regex = re.compile(regex) - self._keep_previous = keep_previous - self._cumulative = dict() - self._moments = dict() - self.update() - self._moments.clear() - - def names(self): - r"""Returns the names of all statistics broadcasted so far that - match the regular expression specified at construction time. - """ - return [name for name in _counters if self._regex.fullmatch(name)] - - def update(self): - r"""Copies current values of the internal counters to the - user-visible state and resets them for the next round. - - If `keep_previous=True` was specified at construction time, the - operation is skipped for statistics that have received no scalars - since the last update, retaining their previous averages. - - This method performs a number of GPU-to-CPU transfers and one - `torch.distributed.all_reduce()`. 
It is intended to be called - periodically in the main training loop, typically once every - N training steps. - """ - if not self._keep_previous: - self._moments.clear() - for name, cumulative in _sync(self.names()): - if name not in self._cumulative: - self._cumulative[name] = torch.zeros([_num_moments], dtype=_counter_dtype) - delta = cumulative - self._cumulative[name] - self._cumulative[name].copy_(cumulative) - if float(delta[0]) != 0: - self._moments[name] = delta - - def _get_delta(self, name): - r"""Returns the raw moments that were accumulated for the given - statistic between the last two calls to `update()`, or zero if - no scalars were collected. - """ - assert self._regex.fullmatch(name) - if name not in self._moments: - self._moments[name] = torch.zeros([_num_moments], dtype=_counter_dtype) - return self._moments[name] - - def num(self, name): - r"""Returns the number of scalars that were accumulated for the given - statistic between the last two calls to `update()`, or zero if - no scalars were collected. - """ - delta = self._get_delta(name) - return int(delta[0]) - - def mean(self, name): - r"""Returns the mean of the scalars that were accumulated for the - given statistic between the last two calls to `update()`, or NaN if - no scalars were collected. - """ - delta = self._get_delta(name) - if int(delta[0]) == 0: - return float('nan') - return float(delta[1] / delta[0]) - - def std(self, name): - r"""Returns the standard deviation of the scalars that were - accumulated for the given statistic between the last two calls to - `update()`, or NaN if no scalars were collected. - """ - delta = self._get_delta(name) - if int(delta[0]) == 0 or not np.isfinite(float(delta[1])): - return float('nan') - if int(delta[0]) == 1: - return float(0) - mean = float(delta[1] / delta[0]) - raw_var = float(delta[2] / delta[0]) - return np.sqrt(max(raw_var - np.square(mean), 0)) - - def as_dict(self): - r"""Returns the averages accumulated between the last two calls to - `update()` as an `dnnlib.EasyDict`. The contents are as follows: - - dnnlib.EasyDict( - NAME = dnnlib.EasyDict(num=FLOAT, mean=FLOAT, std=FLOAT), - ... - ) - """ - stats = dnnlib.EasyDict() - for name in self.names(): - stats[name] = dnnlib.EasyDict(num=self.num(name), mean=self.mean(name), std=self.std(name)) - return stats - - def __getitem__(self, name): - r"""Convenience getter. - `collector[name]` is a synonym for `collector.mean(name)`. - """ - return self.mean(name) - -#---------------------------------------------------------------------------- - -def _sync(names): - r"""Synchronize the global cumulative counters across devices and - processes. Called internally by `Collector.update()`. - """ - if len(names) == 0: - return [] - global _sync_called - _sync_called = True - - # Collect deltas within current rank. - deltas = [] - device = _sync_device if _sync_device is not None else torch.device('cpu') - for name in names: - delta = torch.zeros([_num_moments], dtype=_counter_dtype, device=device) - for counter in _counters[name].values(): - delta.add_(counter.to(device)) - counter.copy_(torch.zeros_like(counter)) - deltas.append(delta) - deltas = torch.stack(deltas) - - # Sum deltas across ranks. - if _sync_device is not None: - torch.distributed.all_reduce(deltas) - - # Update cumulative values. 
- deltas = deltas.cpu() - for idx, name in enumerate(names): - if name not in _cumulative: - _cumulative[name] = torch.zeros([_num_moments], dtype=_counter_dtype) - _cumulative[name].add_(deltas[idx]) - - # Return name-value pairs. - return [(name, _cumulative[name]) for name in names] - -#---------------------------------------------------------------------------- diff --git a/spaces/ahdsoft/Persian-Topic-Modeling/README.md b/spaces/ahdsoft/Persian-Topic-Modeling/README.md deleted file mode 100644 index 19801d3a8bd1c51189b283f089ea4ce8742b6062..0000000000000000000000000000000000000000 --- a/spaces/ahdsoft/Persian-Topic-Modeling/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Topic Modeling -emoji: ⚡ -colorFrom: pink -colorTo: red -sdk: streamlit -sdk_version: 1.25.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/ahmedghani/svoice_demo/svoice/data/preprocess.py b/spaces/ahmedghani/svoice_demo/svoice/data/preprocess.py deleted file mode 100644 index 21335ff09646d2da4cd3c3f6f4fbc86336b8166f..0000000000000000000000000000000000000000 --- a/spaces/ahmedghani/svoice_demo/svoice/data/preprocess.py +++ /dev/null @@ -1,74 +0,0 @@ -# The following piece of code was adapted from https://github.com/kaituoxu/Conv-TasNet -# released under the MIT License. -# Author: Kaituo XU -# Created on 2018/12 - -# Revised by: Eliya Nachmani (enk100), Yossi Adi (adiyoss), Lior Wolf - -import argparse -import json -import os - -import librosa -from tqdm import tqdm - - -def preprocess_one_dir(in_dir, out_dir, out_filename, sample_rate=8000): - file_infos = [] - in_dir = os.path.abspath(in_dir) - wav_list = os.listdir(in_dir) - for wav_file in tqdm(wav_list): - if not wav_file.endswith('.wav'): - continue - wav_path = os.path.join(in_dir, wav_file) - samples, _ = librosa.load(wav_path, sr=sample_rate) - file_infos.append((wav_path, len(samples))) - if not os.path.exists(out_dir): - os.makedirs(out_dir) - with open(os.path.join(out_dir, out_filename + '.json'), 'w') as f: - json.dump(file_infos, f, indent=4) - - -def preprocess(args): - for data_type in ['tr', 'cv', 'tt']: - for signal in ['noisy', 'clean']: - preprocess_one_dir(os.path.join(args.in_dir, data_type, signal), - os.path.join(args.out_dir, data_type), - signal, - sample_rate=args.sample_rate) - - -def preprocess_alldirs(args): - for d in os.listdir(args.in_dir): - local_dir = os.path.join(args.in_dir, d) - if os.path.isdir(local_dir): - preprocess_one_dir(os.path.join(args.in_dir, local_dir), - os.path.join(args.out_dir), - d, - sample_rate=args.sample_rate) - - -if __name__ == "__main__": - parser = argparse.ArgumentParser("WSJ0 data preprocessing") - parser.add_argument('--in_dir', type=str, default=None, - help='Directory path of wsj0 including tr, cv and tt') - parser.add_argument('--out_dir', type=str, default=None, - help='Directory path to put output files') - parser.add_argument('--sample_rate', type=int, default=16000, - help='Sample rate of audio file') - parser.add_argument("--one_dir", action="store_true", - help="Generate json files from specific directory") - parser.add_argument("--all_dirs", action="store_true", - help="Generate json files from all dirs in specific directory") - parser.add_argument('--json_name', type=str, default=None, - help='The name of the json to be generated. 
' - 'To be used only with one-dir option.') - args = parser.parse_args() - print(args) - if args.all_dirs: - preprocess_alldirs(args) - elif args.one_dir: - preprocess_one_dir(args.in_dir, args.out_dir, - args.json_name, sample_rate=args.sample_rate) - else: - preprocess(args) diff --git a/spaces/akhaliq/JoJoGAN/e4e/models/discriminator.py b/spaces/akhaliq/JoJoGAN/e4e/models/discriminator.py deleted file mode 100644 index 16bf3722c7f2e35cdc9bd177a33ed0975e67200d..0000000000000000000000000000000000000000 --- a/spaces/akhaliq/JoJoGAN/e4e/models/discriminator.py +++ /dev/null @@ -1,20 +0,0 @@ -from torch import nn - - -class LatentCodesDiscriminator(nn.Module): - def __init__(self, style_dim, n_mlp): - super().__init__() - - self.style_dim = style_dim - - layers = [] - for i in range(n_mlp-1): - layers.append( - nn.Linear(style_dim, style_dim) - ) - layers.append(nn.LeakyReLU(0.2)) - layers.append(nn.Linear(512, 1)) - self.mlp = nn.Sequential(*layers) - - def forward(self, w): - return self.mlp(w) diff --git a/spaces/akhaliq/deeplab2/model/layers/activations_test.py b/spaces/akhaliq/deeplab2/model/layers/activations_test.py deleted file mode 100644 index d0c867c9fcad5218e28f3fb4f082274d1c48c173..0000000000000000000000000000000000000000 --- a/spaces/akhaliq/deeplab2/model/layers/activations_test.py +++ /dev/null @@ -1,36 +0,0 @@ -# coding=utf-8 -# Copyright 2021 The Deeplab2 Authors. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -"""Tests for activations.py.""" -import tensorflow as tf - -from deeplab2.model.layers import activations - - -class ActivationsTest(tf.test.TestCase): - - def test_gelu(self): - expected_data = [[0.14967535, 0., -0.10032465], - [-0.15880796, -0.04540223, 2.9963627]] - gelu_data = activations.gelu([[.25, 0, -.25], [-1, -2, 3]], - approximate=True) - self.assertAllClose(expected_data, gelu_data) - gelu_data_via_get_activation = activations.get_activation( - 'approximated_gelu')([[.25, 0, -.25], [-1, -2, 3]]) - self.assertAllClose(expected_data, gelu_data_via_get_activation) - - -if __name__ == '__main__': - tf.test.main() diff --git a/spaces/akhaliq/deeplab2/trainer/evaluator.py b/spaces/akhaliq/deeplab2/trainer/evaluator.py deleted file mode 100644 index ebc13ca9e8c1840ed2b590f62b7269101b1e0670..0000000000000000000000000000000000000000 --- a/spaces/akhaliq/deeplab2/trainer/evaluator.py +++ /dev/null @@ -1,393 +0,0 @@ -# coding=utf-8 -# Copyright 2021 The Deeplab2 Authors. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
- -"""This file contains code to create an evaluator runner. - -Note that the evaluator is not well-optimized for inference speed. There are -some redundant outputs, e.g., visualization results, evaluation loss, and so -on. We still compute them in this implementation with the goal to provide more -detailed information for research development. One should remove those -redundant outputs for a faster inference speed. -""" - -import os -import orbit -import tensorflow as tf - -from deeplab2 import common -from deeplab2.data import dataset -from deeplab2.evaluation import coco_instance_ap as instance_ap -from deeplab2.evaluation import panoptic_quality -from deeplab2.evaluation import segmentation_and_tracking_quality as stq -from deeplab2.evaluation import video_panoptic_quality as vpq -from deeplab2.model import utils -from deeplab2.trainer import runner_utils -from deeplab2.trainer import vis - - -_PANOPTIC_METRIC_OFFSET = 256 * 256 -# Video Panoptic Segmentation requires a larger offset value for accommodating -# more instance IDs. -_VIDEO_PANOPTIC_METRIC_OFFSET = _PANOPTIC_METRIC_OFFSET * 256 -_PREDICTIONS_KEY = 'unique_key_for_storing_predictions' -_LABELS_KEY = 'unique_key_for_storing_labels' - - -class Evaluator(orbit.StandardEvaluator): - """Implements an evaluator for DeepLab models.""" - - def __init__(self, config, model, loss, global_step, model_dir): - """Initializes the Evaluator. - - Args: - config: A config_pb2.ExperimentOptions configuration. - model: A tf.keras.Model. - loss: A tf.keras.losses.Loss. - global_step: A tf.Variable that records the global training step. - model_dir: A path to store all experimental artifacts. - """ - self._strategy = tf.distribute.get_strategy() - - self._supported_tasks = utils.get_supported_tasks(config) - eval_dataset = runner_utils.create_dataset( - config.eval_dataset_options, - is_training=False, - only_semantic_annotations=( - common.TASK_PANOPTIC_SEGMENTATION not in self._supported_tasks)) - eval_dataset = orbit.utils.make_distributed_dataset(self._strategy, - eval_dataset) - evaluator_options_override = orbit.StandardEvaluatorOptions( - config.evaluator_options.use_tf_function) - super(Evaluator, self).__init__(eval_dataset, evaluator_options_override) - self._config = config - self._model = model - self._loss = loss - self._global_step = global_step - self._sample_counter = 0 - self._enable_visualization = config.evaluator_options.save_predictions - self._num_vis_samples = config.evaluator_options.num_vis_samples - self._save_raw_predictions = config.evaluator_options.save_raw_predictions - self._decode_groundtruth_label = ( - config.eval_dataset_options.decode_groundtruth_label) - if config.evaluator_options.HasField('override_save_dir'): - self._vis_dir = config.evaluator_options.override_save_dir - else: - self._vis_dir = os.path.join(model_dir, 'vis') - - self._dataset_info = dataset.MAP_NAME_TO_DATASET_INFO[ - config.eval_dataset_options.dataset] - - # Create eval loss metrics. - self._eval_loss_metric_dict = runner_utils.create_loss_metric_dict( - loss.get_loss_names(), prefix='eval_') - # Create metrics (PQ, IoU). 
- self._ignore_label = self._dataset_info.ignore_label - self._eval_iou_metric = tf.keras.metrics.MeanIoU( - self._dataset_info.num_classes, 'IoU') - - if common.TASK_PANOPTIC_SEGMENTATION in self._supported_tasks: - self._eval_pq_metric = panoptic_quality.PanopticQuality( - self._dataset_info.num_classes, - self._dataset_info.ignore_label, - self._dataset_info.panoptic_label_divisor, - offset=_PANOPTIC_METRIC_OFFSET) - if common.TASK_INSTANCE_SEGMENTATION in self._supported_tasks: - self._eval_ap_metric = instance_ap.PanopticInstanceAveragePrecision( - self._dataset_info.num_classes, - self._dataset_info.class_has_instances_list, - self._dataset_info.panoptic_label_divisor, - self._dataset_info.ignore_label) - if common.TASK_VIDEO_PANOPTIC_SEGMENTATION in self._supported_tasks: - self._eval_tracking_metric = stq.STQuality( - self._dataset_info.num_classes, - self._dataset_info.class_has_instances_list, - self._dataset_info.ignore_label, - self._dataset_info.panoptic_label_divisor, - offset=_VIDEO_PANOPTIC_METRIC_OFFSET) - if (common.TASK_DEPTH_AWARE_VIDEO_PANOPTIC_SEGMENTATION - in self._supported_tasks): - # We compute two-frame video panoptic quality as an additional metric - # for the task of depth-aware video panoptic segmentation. - self._eval_vpq_metric = vpq.VideoPanopticQuality( - self._dataset_info.num_classes, - self._dataset_info.ignore_label, - self._dataset_info.panoptic_label_divisor, - offset=_VIDEO_PANOPTIC_METRIC_OFFSET) - - def _reset(self): - for metric in self._eval_loss_metric_dict.values(): - metric.reset_states() - self._eval_iou_metric.reset_states() - if common.TASK_PANOPTIC_SEGMENTATION in self._supported_tasks: - self._eval_pq_metric.reset_states() - if common.TASK_INSTANCE_SEGMENTATION in self._supported_tasks: - self._eval_ap_metric.reset_states() - if common.TASK_VIDEO_PANOPTIC_SEGMENTATION in self._supported_tasks: - self._eval_tracking_metric.reset_states() - if (common.TASK_DEPTH_AWARE_VIDEO_PANOPTIC_SEGMENTATION - in self._supported_tasks): - self._eval_vpq_metric.reset_states() - self._sample_counter = 0 - - def eval_begin(self): - """Called once at the beginning of the evaluation. - - This method is called before dataset iterators creation. - """ - self._reset() - tf.io.gfile.makedirs(self._vis_dir) - if self._save_raw_predictions: - tf.io.gfile.makedirs( - os.path.join(self._vis_dir, 'raw_semantic')) - if common.TASK_PANOPTIC_SEGMENTATION in self._supported_tasks: - tf.io.gfile.makedirs( - os.path.join(self._vis_dir, 'raw_panoptic')) - - def eval_step(self, iterator): - """Implements one step of evaluation. - - Runs one step of evaluation with respect to the chosen strategy. In case of - a distributed strategy, the replica results are gathered and returned. - - Note that all operations within `_eval_step` are tf.function compatible, as - they will be traced with tf.function. Any other/numpy operations are put in - `eval_begin`, `eval_end` or `eval_reduce` functions. - - Args: - iterator: A tf.nest-compatible structure of tf.data Iterator or - DistributedIterator. - - Returns: - An output which is passed as `step_outputs` argument into `eval_reduce` - function. 
- """ - def step_fn(inputs): - step_outputs = self._eval_step(inputs) - return step_outputs - - distributed_outputs = self._strategy.run(step_fn, args=(next(iterator),)) - return tf.nest.map_structure(self._strategy.experimental_local_results, - distributed_outputs) - - def _eval_step(self, inputs): - tf.assert_equal(tf.shape(inputs[common.IMAGE])[0], 1, 'Currently only a ' - 'batchsize of 1 is supported in evaluation due to resizing.' - ) - outputs = self._model(inputs[common.IMAGE], training=False) - raw_size = [ - inputs[common.GT_SIZE_RAW][0, 0], inputs[common.GT_SIZE_RAW][0, 1] - ] - resized_size = [ - tf.shape(inputs[common.RESIZED_IMAGE])[1], - tf.shape(inputs[common.RESIZED_IMAGE])[2], - ] - - step_outputs = {} - if self._decode_groundtruth_label: - - loss_dict = self._loss(inputs, outputs) - # Average over the batch. - average_loss_dict = { - key: tf.reduce_mean(value) for key, value in loss_dict.items()} - - for name, value in average_loss_dict.items(): - self._eval_loss_metric_dict[name].update_state(value) - - # We only undo-preprocess for those defined in tuples in model/utils.py. - outputs = utils.undo_preprocessing(outputs, resized_size, - raw_size) - - self._eval_iou_metric.update_state( - tf.where( - tf.equal(inputs[common.GT_SEMANTIC_RAW], self._ignore_label), - 0, - inputs[common.GT_SEMANTIC_RAW]), - outputs[common.PRED_SEMANTIC_KEY], - tf.where( - tf.equal(inputs[common.GT_SEMANTIC_RAW], self._ignore_label), - 0.0, - 1.0)) - if common.TASK_PANOPTIC_SEGMENTATION in self._supported_tasks: - step_outputs[self._eval_pq_metric.name] = ( - inputs[common.GT_PANOPTIC_RAW], outputs[common.PRED_PANOPTIC_KEY]) - if common.TASK_INSTANCE_SEGMENTATION in self._supported_tasks: - step_outputs[self._eval_ap_metric.name] = ( - inputs[common.GT_PANOPTIC_RAW], outputs[common.PRED_PANOPTIC_KEY], - outputs[common.PRED_SEMANTIC_PROBS_KEY], - outputs[common.PRED_INSTANCE_SCORES_KEY], - inputs[common.GT_IS_CROWD_RAW]) - if (common.TASK_DEPTH_AWARE_VIDEO_PANOPTIC_SEGMENTATION - in self._supported_tasks): - step_outputs[self._eval_vpq_metric.name] = ( - inputs[common.GT_PANOPTIC_RAW], - inputs[common.GT_NEXT_PANOPTIC_RAW], - outputs[common.PRED_PANOPTIC_KEY], - outputs[common.PRED_NEXT_PANOPTIC_KEY]) - else: - # We only undo-preprocess for those defined in tuples in model/utils.py. - outputs = utils.undo_preprocessing(outputs, resized_size, - raw_size) - # We only undo-preprocess for those defined in tuples in model/utils.py. - inputs = utils.undo_preprocessing(inputs, resized_size, - raw_size) - if common.SEQUENCE_ID in inputs: - step_outputs[common.SEQUENCE_ID] = inputs[common.SEQUENCE_ID] - if self._enable_visualization or self._save_raw_predictions: - step_outputs[_PREDICTIONS_KEY] = outputs - step_outputs[_LABELS_KEY] = inputs - return step_outputs - - def eval_end(self, state=None): - """Called at the end of the evaluation. - - Args: - state: The outputs from `eval_reduce` after the last eval step. - - Returns: - A dictionary of `Tensors`, which will be written to logs and as - TensorBoard summaries. 
- """ - if not self._decode_groundtruth_label: - return {} - - eval_logs = {} - for loss_metric in self._eval_loss_metric_dict.values(): - eval_logs['losses/' + loss_metric.name] = loss_metric.result() - eval_logs['evaluation/iou/' + self._eval_iou_metric.name] = ( - self._eval_iou_metric.result()) - if common.TASK_PANOPTIC_SEGMENTATION in self._supported_tasks: - pq_results = self._eval_pq_metric.result() - eval_logs['evaluation/pq/PQ'] = pq_results[0] - eval_logs['evaluation/pq/SQ'] = pq_results[1] - eval_logs['evaluation/pq/RQ'] = pq_results[2] - eval_logs['evaluation/pq/TP'] = pq_results[3] - eval_logs['evaluation/pq/FN'] = pq_results[4] - eval_logs['evaluation/pq/FP'] = pq_results[5] - - if common.TASK_INSTANCE_SEGMENTATION in self._supported_tasks: - ap_results = self._eval_ap_metric.result() - eval_logs['evaluation/ap/AP_Mask'] = ap_results[0] - if self._config.evaluator_options.detailed_ap_metrics: - eval_logs['evaluation/ap/AP_Mask_@IoU=0.5'] = ap_results[1] - eval_logs['evaluation/ap/AP_Mask_@IoU=0.75'] = ap_results[2] - eval_logs['evaluation/ap/AP_Mask_small'] = ap_results[3] - eval_logs['evaluation/ap/AP_Mask_medium'] = ap_results[4] - eval_logs['evaluation/ap/AP_Mask_large'] = ap_results[5] - eval_logs['evaluation/ap/AR_Mask_maxdets=1'] = ap_results[6] - eval_logs['evaluation/ap/AR_Mask_maxdets=10'] = ap_results[7] - eval_logs['evaluation/ap/AR_Mask_maxdets=100'] = ap_results[8] - eval_logs['evaluation/ap/AR_Mask_small'] = ap_results[9] - eval_logs['evaluation/ap/AR_Mask_medium'] = ap_results[10] - eval_logs['evaluation/ap/AR_Mask_large'] = ap_results[11] - - if common.TASK_VIDEO_PANOPTIC_SEGMENTATION in self._supported_tasks: - tracking_results = self._eval_tracking_metric.result() - eval_logs['evaluation/step/STQ'] = tracking_results['STQ'] - eval_logs['evaluation/step/AQ'] = tracking_results['AQ'] - eval_logs['evaluation/step/IoU'] = tracking_results['IoU'] - if (common.TASK_DEPTH_AWARE_VIDEO_PANOPTIC_SEGMENTATION - in self._supported_tasks): - vpq_results = self._eval_vpq_metric.result() - eval_logs['evaluation/vpq_2frames/PQ'] = vpq_results[0] - eval_logs['evaluation/vpq_2frames/SQ'] = vpq_results[1] - eval_logs['evaluation/vpq_2frames/RQ'] = vpq_results[2] - eval_logs['evaluation/vpq_2frames/TP'] = vpq_results[3] - eval_logs['evaluation/vpq_2frames/FN'] = vpq_results[4] - eval_logs['evaluation/vpq_2frames/FP'] = vpq_results[5] - return eval_logs - - def eval_reduce(self, state=None, step_outputs=None): - """A function to do the reduction on the evaluation outputs per step. - - Args: - state: A maintained state throughout the evaluation. - step_outputs: Outputs from the current evaluation step. - - Returns: - An output which is passed as `state` argument into `eval_reduce` function - for the next step. After evaluation is finished, the output from last step - will be passed into `eval_end` function. - """ - if self._save_raw_predictions: - sequence = None - if self._dataset_info.is_video_dataset: - sequence = step_outputs[_LABELS_KEY][common.SEQUENCE_ID][0][0] - vis.store_raw_predictions( - step_outputs[_PREDICTIONS_KEY], - step_outputs[_LABELS_KEY][common.IMAGE_NAME][0][0], - self._dataset_info, - self._vis_dir, - sequence, - raw_panoptic_format=( - self._config.evaluator_options.raw_panoptic_format), - convert_to_eval=self._config.evaluator_options.convert_raw_to_eval_ids - ) - if not self._decode_groundtruth_label: - # The followed operations will all require decoding groundtruth label, and - # thus we will simply return if decode_groundtruth_label is False. 
- return state - - if (self._enable_visualization and - (self._sample_counter < self._num_vis_samples)): - predictions = step_outputs[_PREDICTIONS_KEY] - inputs = step_outputs[_LABELS_KEY] - if self._dataset_info.is_video_dataset: - inputs[common.IMAGE] = tf.expand_dims(inputs[common.IMAGE][0][..., :3], - axis=0) - vis.store_predictions( - predictions, - inputs, - self._sample_counter, - self._dataset_info, - self._vis_dir) - self._sample_counter += 1 - - # Accumulates PQ, AP_Mask and STQ. - if common.TASK_PANOPTIC_SEGMENTATION in self._supported_tasks: - for gt_panoptic, pred_panoptic in zip( - step_outputs[self._eval_pq_metric.name][0], - step_outputs[self._eval_pq_metric.name][1]): - batch_size = tf.shape(gt_panoptic)[0] - for i in range(batch_size): - self._eval_pq_metric.update_state(gt_panoptic[i], pred_panoptic[i]) - # STQ. - if common.TASK_VIDEO_PANOPTIC_SEGMENTATION in self._supported_tasks: - self._eval_tracking_metric.update_state( - gt_panoptic[i], pred_panoptic[i], - step_outputs[common.SEQUENCE_ID][0][0].numpy()) - if common.TASK_INSTANCE_SEGMENTATION in self._supported_tasks: - # AP_Mask. - for ap_result in zip(*tuple(step_outputs[self._eval_ap_metric.name])): - (gt_panoptic, pred_panoptic, pred_semantic_probs, pred_instance_scores, - gt_is_crowd) = ap_result - batch_size = tf.shape(gt_panoptic)[0] - for i in range(batch_size): - self._eval_ap_metric.update_state(gt_panoptic[i], pred_panoptic[i], - pred_semantic_probs[i], - pred_instance_scores[i], - gt_is_crowd[i]) - if (common.TASK_DEPTH_AWARE_VIDEO_PANOPTIC_SEGMENTATION - in self._supported_tasks): - for vpq_result in zip(*tuple(step_outputs[self._eval_vpq_metric.name])): - (gt_panoptic, gt_next_panoptic, pred_panoptic, - pred_next_panoptic) = vpq_result - batch_size = tf.shape(gt_panoptic)[0] - for i in range(batch_size): - self._eval_vpq_metric.update_state( - [gt_panoptic[i], gt_next_panoptic[i]], - [pred_panoptic[i], pred_next_panoptic[i]]) - # We simply return state as it is, since our current implementation does not - # keep track of state between steps. 
- return state diff --git a/spaces/akhaliq/wavyfusion/app.py b/spaces/akhaliq/wavyfusion/app.py deleted file mode 100644 index 4ac769d6a7d9ddb3c1d71a8fffe6ad2e92d180f6..0000000000000000000000000000000000000000 --- a/spaces/akhaliq/wavyfusion/app.py +++ /dev/null @@ -1,137 +0,0 @@ -from diffusers import StableDiffusionPipeline, StableDiffusionImg2ImgPipeline, DPMSolverMultistepScheduler -import gradio as gr -import torch -from PIL import Image - -model_id = 'wavymulder/wavyfusion' -prefix = '' - -scheduler = DPMSolverMultistepScheduler.from_pretrained(model_id, subfolder="scheduler") - -pipe = StableDiffusionPipeline.from_pretrained( - model_id, - torch_dtype=torch.float16 if torch.cuda.is_available() else torch.float32, - scheduler=scheduler) - -pipe_i2i = StableDiffusionImg2ImgPipeline.from_pretrained( - model_id, - torch_dtype=torch.float16 if torch.cuda.is_available() else torch.float32, - scheduler=scheduler) - -if torch.cuda.is_available(): - pipe = pipe.to("cuda") - pipe_i2i = pipe_i2i.to("cuda") - -def error_str(error, title="Error"): - return f"""#### {title} - {error}""" if error else "" - -def inference(prompt, guidance, steps, width=512, height=512, seed=0, img=None, strength=0.5, neg_prompt="", auto_prefix=False): - - generator = torch.Generator('cuda').manual_seed(seed) if seed != 0 else None - prompt = f"{prefix} {prompt}" if auto_prefix else prompt - - try: - if img is not None: - return img_to_img(prompt, neg_prompt, img, strength, guidance, steps, width, height, generator), None - else: - return txt_to_img(prompt, neg_prompt, guidance, steps, width, height, generator), None - except Exception as e: - return None, error_str(e) - -def txt_to_img(prompt, neg_prompt, guidance, steps, width, height, generator): - - result = pipe( - prompt, - negative_prompt = neg_prompt, - num_inference_steps = int(steps), - guidance_scale = guidance, - width = width, - height = height, - generator = generator) - - return result.images[0] - -def img_to_img(prompt, neg_prompt, img, strength, guidance, steps, width, height, generator): - - ratio = min(height / img.height, width / img.width) - img = img.resize((int(img.width * ratio), int(img.height * ratio)), Image.LANCZOS) - result = pipe_i2i( - prompt, - negative_prompt = neg_prompt, - init_image = img, - num_inference_steps = int(steps), - strength = strength, - guidance_scale = guidance, - width = width, - height = height, - generator = generator) - - return result.images[0] - -css = """.main-div div{display:inline-flex;align-items:center;gap:.8rem;font-size:1.75rem}.main-div div h1{font-weight:900;margin-bottom:7px}.main-div p{margin-bottom:10px;font-size:94%}a{text-decoration:underline}.tabs{margin-top:0;margin-bottom:0}#gallery{min-height:20rem} -""" -with gr.Blocks(css=css) as demo: - gr.HTML( - f""" -
    -
    -

    Wavyfusion

    -
    -

    - Demo for Wavyfusion Stable Diffusion model.
    - {"Add the following tokens to your prompts for the model to work properly: prefix" if prefix else ""} -

    - Running on {"GPU 🔥" if torch.cuda.is_available() else f"CPU 🥶. For faster inference it is recommended to upgrade to GPU in Settings"}

    -

    This demo is slow on CPU; to use it, upgrade to GPU by going to Settings after duplicating this space: Duplicate Space

    -
    - """ - ) - with gr.Row(): - - with gr.Column(scale=55): - with gr.Group(): - with gr.Row(): - prompt = gr.Textbox(label="Prompt", show_label=False, max_lines=2,placeholder=f"{prefix} [your prompt]").style(container=False) - generate = gr.Button(value="Generate").style(rounded=(False, True, True, False)) - - image_out = gr.Image(height=512) - error_output = gr.Markdown() - - with gr.Column(scale=45): - with gr.Tab("Options"): - with gr.Group(): - neg_prompt = gr.Textbox(label="Negative prompt", placeholder="What to exclude from the image") - auto_prefix = gr.Checkbox(label="Prefix styling tokens automatically ()", value=prefix, visible=prefix) - - with gr.Row(): - guidance = gr.Slider(label="Guidance scale", value=7.5, maximum=15) - steps = gr.Slider(label="Steps", value=25, minimum=2, maximum=75, step=1) - - with gr.Row(): - width = gr.Slider(label="Width", value=512, minimum=64, maximum=1024, step=8) - height = gr.Slider(label="Height", value=512, minimum=64, maximum=1024, step=8) - - seed = gr.Slider(0, 2147483647, label='Seed (0 = random)', value=0, step=1) - - with gr.Tab("Image to image"): - with gr.Group(): - image = gr.Image(label="Image", height=256, tool="editor", type="pil") - strength = gr.Slider(label="Transformation strength", minimum=0, maximum=1, step=0.01, value=0.5) - - auto_prefix.change(lambda x: gr.update(placeholder=f"{prefix} [your prompt]" if x else "[Your prompt]"), inputs=auto_prefix, outputs=prompt, queue=False) - - inputs = [prompt, guidance, steps, width, height, seed, image, strength, neg_prompt, auto_prefix] - outputs = [image_out, error_output] - prompt.submit(inference, inputs=inputs, outputs=outputs) - generate.click(inference, inputs=inputs, outputs=outputs) - - gr.HTML(""" -
    -
    -

    This space was created using SD Space Creator.

    -
    - """) - -demo.queue(concurrency_count=1) -demo.launch() diff --git a/spaces/alalalyuqing/White-box-Cartoonization/app.py b/spaces/alalalyuqing/White-box-Cartoonization/app.py deleted file mode 100644 index c55ced56bd87a85f59d1c8ef84b7eca87422720f..0000000000000000000000000000000000000000 --- a/spaces/alalalyuqing/White-box-Cartoonization/app.py +++ /dev/null @@ -1,108 +0,0 @@ -#!/usr/bin/env python - -from __future__ import annotations -import argparse -import functools -import os -import pathlib -import sys -from typing import Callable -import uuid - -import gradio as gr -import huggingface_hub -import numpy as np -import PIL.Image - -from io import BytesIO -from wbc.cartoonize import Cartoonize - -ORIGINAL_REPO_URL = 'https://github.com/SystemErrorWang/White-box-Cartoonization' -TITLE = 'SystemErrorWang/White-box-Cartoonization' -DESCRIPTION = f"""This is a demo for {ORIGINAL_REPO_URL}. - -""" -ARTICLE = """ - -""" - -SAFEHASH = [x for x in "0123456789-abcdefghijklmnopqrstuvwxyz_ABCDEFGHIJKLMNOPQRSTUVWXYZ"] -def compress_UUID(): - ''' - 根据http://www.ietf.org/rfc/rfc1738.txt,由uuid编码扩bai大字符域生成du串 - 包括:[0-9a-zA-Z\-_]共64个 - 长度:(32-2)/3*2=20 - 备注:可在地球上人zhi人都用,使用100年不重复(2^120) - :return:String - ''' - row = str(uuid.uuid4()).replace('-', '') - safe_code = '' - for i in range(10): - enbin = "%012d" % int(bin(int(row[i * 3] + row[i * 3 + 1] + row[i * 3 + 2], 16))[2:], 10) - safe_code += (SAFEHASH[int(enbin[0:6], 2)] + SAFEHASH[int(enbin[6:12], 2)]) - safe_code = safe_code.replace('-', '') - return safe_code - - -def parse_args() -> argparse.Namespace: - parser = argparse.ArgumentParser() - parser.add_argument('--device', type=str, default='cpu') - parser.add_argument('--theme', type=str) - parser.add_argument('--live', action='store_true') - parser.add_argument('--share', action='store_true') - parser.add_argument('--port', type=int) - parser.add_argument('--disable-queue', - dest='enable_queue', - action='store_false') - parser.add_argument('--allow-flagging', type=str, default='never') - parser.add_argument('--allow-screenshot', action='store_true') - return parser.parse_args() - -def run( - image, - cartoonize : Cartoonize -) -> tuple[PIL.Image.Image]: - - out_path = compress_UUID()+'.png' - cartoonize.run_sigle(image.name, out_path) - - return PIL.Image.open(out_path) - - -def main(): - gr.close_all() - - args = parse_args() - - cartoonize = Cartoonize(os.path.join(os.path.dirname(os.path.abspath(__file__)),'wbc/saved_models/')) - - func = functools.partial(run, cartoonize=cartoonize) - func = functools.update_wrapper(func, run) - - gr.Interface( - func, - [ - gr.inputs.Image(type='file', label='Input Image'), - ], - [ - gr.outputs.Image( - type='pil', - label='Result'), - ], - # examples=examples, - theme=args.theme, - title=TITLE, - description=DESCRIPTION, - article=ARTICLE, - allow_screenshot=args.allow_screenshot, - allow_flagging=args.allow_flagging, - live=args.live, - ).launch( - enable_queue=args.enable_queue, - server_port=args.port, - share=args.share, - ) - - -if __name__ == '__main__': - main() diff --git a/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_vendor/chardet/langbulgarianmodel.py b/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_vendor/chardet/langbulgarianmodel.py deleted file mode 100644 index e963a50979a0b3dd56558240e075ca0f889479df..0000000000000000000000000000000000000000 --- a/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_vendor/chardet/langbulgarianmodel.py +++ /dev/null @@ -1,4650 +0,0 @@ 
-#!/usr/bin/env python -# -*- coding: utf-8 -*- - -from pip._vendor.chardet.sbcharsetprober import SingleByteCharSetModel - - -# 3: Positive -# 2: Likely -# 1: Unlikely -# 0: Negative - -BULGARIAN_LANG_MODEL = { - 63: { # 'e' - 63: 1, # 'e' - 45: 0, # '\xad' - 31: 0, # 'А' - 32: 0, # 'Б' - 35: 0, # 'В' - 43: 0, # 'Г' - 37: 0, # 'Д' - 44: 0, # 'Е' - 55: 0, # 'Ж' - 47: 0, # 'З' - 40: 0, # 'И' - 59: 0, # 'Й' - 33: 0, # 'К' - 46: 0, # 'Л' - 38: 0, # 'М' - 36: 0, # 'Н' - 41: 0, # 'О' - 30: 0, # 'П' - 39: 0, # 'Р' - 28: 0, # 'С' - 34: 0, # 'Т' - 51: 0, # 'У' - 48: 0, # 'Ф' - 49: 0, # 'Х' - 53: 0, # 'Ц' - 50: 0, # 'Ч' - 54: 0, # 'Ш' - 57: 0, # 'Щ' - 61: 0, # 'Ъ' - 60: 0, # 'Ю' - 56: 0, # 'Я' - 1: 0, # 'а' - 18: 1, # 'б' - 9: 1, # 'в' - 20: 1, # 'г' - 11: 1, # 'д' - 3: 1, # 'е' - 23: 1, # 'ж' - 15: 1, # 'з' - 2: 0, # 'и' - 26: 1, # 'й' - 12: 1, # 'к' - 10: 1, # 'л' - 14: 1, # 'м' - 6: 1, # 'н' - 4: 1, # 'о' - 13: 1, # 'п' - 7: 1, # 'р' - 8: 1, # 'с' - 5: 1, # 'т' - 19: 0, # 'у' - 29: 1, # 'ф' - 25: 1, # 'х' - 22: 0, # 'ц' - 21: 1, # 'ч' - 27: 1, # 'ш' - 24: 1, # 'щ' - 17: 0, # 'ъ' - 52: 0, # 'ь' - 42: 0, # 'ю' - 16: 1, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, - 45: { # '\xad' - 63: 0, # 'e' - 45: 0, # '\xad' - 31: 0, # 'А' - 32: 1, # 'Б' - 35: 1, # 'В' - 43: 0, # 'Г' - 37: 1, # 'Д' - 44: 0, # 'Е' - 55: 0, # 'Ж' - 47: 0, # 'З' - 40: 1, # 'И' - 59: 0, # 'Й' - 33: 1, # 'К' - 46: 0, # 'Л' - 38: 1, # 'М' - 36: 0, # 'Н' - 41: 1, # 'О' - 30: 1, # 'П' - 39: 1, # 'Р' - 28: 1, # 'С' - 34: 0, # 'Т' - 51: 0, # 'У' - 48: 0, # 'Ф' - 49: 1, # 'Х' - 53: 0, # 'Ц' - 50: 0, # 'Ч' - 54: 0, # 'Ш' - 57: 0, # 'Щ' - 61: 0, # 'Ъ' - 60: 0, # 'Ю' - 56: 0, # 'Я' - 1: 0, # 'а' - 18: 0, # 'б' - 9: 0, # 'в' - 20: 0, # 'г' - 11: 0, # 'д' - 3: 0, # 'е' - 23: 0, # 'ж' - 15: 0, # 'з' - 2: 0, # 'и' - 26: 0, # 'й' - 12: 0, # 'к' - 10: 0, # 'л' - 14: 0, # 'м' - 6: 0, # 'н' - 4: 0, # 'о' - 13: 0, # 'п' - 7: 0, # 'р' - 8: 0, # 'с' - 5: 0, # 'т' - 19: 0, # 'у' - 29: 0, # 'ф' - 25: 0, # 'х' - 22: 0, # 'ц' - 21: 0, # 'ч' - 27: 0, # 'ш' - 24: 0, # 'щ' - 17: 0, # 'ъ' - 52: 0, # 'ь' - 42: 0, # 'ю' - 16: 0, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, - 31: { # 'А' - 63: 0, # 'e' - 45: 1, # '\xad' - 31: 1, # 'А' - 32: 1, # 'Б' - 35: 2, # 'В' - 43: 1, # 'Г' - 37: 2, # 'Д' - 44: 2, # 'Е' - 55: 1, # 'Ж' - 47: 2, # 'З' - 40: 1, # 'И' - 59: 1, # 'Й' - 33: 1, # 'К' - 46: 2, # 'Л' - 38: 1, # 'М' - 36: 2, # 'Н' - 41: 1, # 'О' - 30: 2, # 'П' - 39: 2, # 'Р' - 28: 2, # 'С' - 34: 2, # 'Т' - 51: 1, # 'У' - 48: 2, # 'Ф' - 49: 1, # 'Х' - 53: 1, # 'Ц' - 50: 1, # 'Ч' - 54: 1, # 'Ш' - 57: 2, # 'Щ' - 61: 0, # 'Ъ' - 60: 0, # 'Ю' - 56: 1, # 'Я' - 1: 1, # 'а' - 18: 2, # 'б' - 9: 2, # 'в' - 20: 2, # 'г' - 11: 2, # 'д' - 3: 1, # 'е' - 23: 1, # 'ж' - 15: 2, # 'з' - 2: 0, # 'и' - 26: 2, # 'й' - 12: 2, # 'к' - 10: 3, # 'л' - 14: 2, # 'м' - 6: 3, # 'н' - 4: 0, # 'о' - 13: 2, # 'п' - 7: 2, # 'р' - 8: 2, # 'с' - 5: 2, # 'т' - 19: 1, # 'у' - 29: 2, # 'ф' - 25: 1, # 'х' - 22: 1, # 'ц' - 21: 1, # 'ч' - 27: 1, # 'ш' - 24: 0, # 'щ' - 17: 0, # 'ъ' - 52: 0, # 'ь' - 42: 0, # 'ю' - 16: 1, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, - 32: { # 'Б' - 63: 0, # 'e' - 45: 0, # '\xad' - 31: 2, # 'А' - 32: 2, # 'Б' - 35: 1, # 'В' - 43: 1, # 'Г' - 37: 2, # 'Д' - 44: 1, # 'Е' - 55: 1, # 'Ж' - 47: 2, # 'З' - 40: 1, # 'И' - 59: 0, # 'Й' - 33: 1, # 'К' - 46: 1, # 'Л' - 38: 1, # 'М' - 36: 2, # 'Н' - 41: 2, # 'О' - 30: 1, # 'П' - 39: 1, # 'Р' - 28: 2, # 'С' - 34: 2, # 'Т' - 51: 1, # 'У' - 48: 2, # 'Ф' - 49: 1, # 'Х' - 53: 1, # 'Ц' - 50: 1, # 'Ч' - 54: 0, # 'Ш' - 57: 1, # 'Щ' - 61: 2, # 'Ъ' - 60: 1, # 
'Ю' - 56: 1, # 'Я' - 1: 3, # 'а' - 18: 0, # 'б' - 9: 0, # 'в' - 20: 0, # 'г' - 11: 1, # 'д' - 3: 3, # 'е' - 23: 0, # 'ж' - 15: 0, # 'з' - 2: 2, # 'и' - 26: 0, # 'й' - 12: 0, # 'к' - 10: 2, # 'л' - 14: 0, # 'м' - 6: 0, # 'н' - 4: 3, # 'о' - 13: 0, # 'п' - 7: 2, # 'р' - 8: 1, # 'с' - 5: 0, # 'т' - 19: 2, # 'у' - 29: 0, # 'ф' - 25: 1, # 'х' - 22: 0, # 'ц' - 21: 0, # 'ч' - 27: 0, # 'ш' - 24: 0, # 'щ' - 17: 3, # 'ъ' - 52: 1, # 'ь' - 42: 1, # 'ю' - 16: 2, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, - 35: { # 'В' - 63: 0, # 'e' - 45: 0, # '\xad' - 31: 2, # 'А' - 32: 1, # 'Б' - 35: 1, # 'В' - 43: 0, # 'Г' - 37: 1, # 'Д' - 44: 2, # 'Е' - 55: 0, # 'Ж' - 47: 0, # 'З' - 40: 2, # 'И' - 59: 0, # 'Й' - 33: 1, # 'К' - 46: 1, # 'Л' - 38: 1, # 'М' - 36: 1, # 'Н' - 41: 1, # 'О' - 30: 1, # 'П' - 39: 2, # 'Р' - 28: 2, # 'С' - 34: 1, # 'Т' - 51: 1, # 'У' - 48: 2, # 'Ф' - 49: 0, # 'Х' - 53: 1, # 'Ц' - 50: 0, # 'Ч' - 54: 0, # 'Ш' - 57: 0, # 'Щ' - 61: 1, # 'Ъ' - 60: 1, # 'Ю' - 56: 2, # 'Я' - 1: 3, # 'а' - 18: 1, # 'б' - 9: 0, # 'в' - 20: 0, # 'г' - 11: 1, # 'д' - 3: 3, # 'е' - 23: 1, # 'ж' - 15: 2, # 'з' - 2: 3, # 'и' - 26: 0, # 'й' - 12: 1, # 'к' - 10: 2, # 'л' - 14: 1, # 'м' - 6: 2, # 'н' - 4: 2, # 'о' - 13: 1, # 'п' - 7: 2, # 'р' - 8: 2, # 'с' - 5: 2, # 'т' - 19: 1, # 'у' - 29: 0, # 'ф' - 25: 1, # 'х' - 22: 0, # 'ц' - 21: 2, # 'ч' - 27: 0, # 'ш' - 24: 0, # 'щ' - 17: 2, # 'ъ' - 52: 1, # 'ь' - 42: 1, # 'ю' - 16: 1, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, - 43: { # 'Г' - 63: 0, # 'e' - 45: 0, # '\xad' - 31: 2, # 'А' - 32: 1, # 'Б' - 35: 0, # 'В' - 43: 0, # 'Г' - 37: 1, # 'Д' - 44: 2, # 'Е' - 55: 0, # 'Ж' - 47: 1, # 'З' - 40: 1, # 'И' - 59: 0, # 'Й' - 33: 1, # 'К' - 46: 1, # 'Л' - 38: 0, # 'М' - 36: 1, # 'Н' - 41: 1, # 'О' - 30: 0, # 'П' - 39: 1, # 'Р' - 28: 1, # 'С' - 34: 0, # 'Т' - 51: 1, # 'У' - 48: 1, # 'Ф' - 49: 0, # 'Х' - 53: 0, # 'Ц' - 50: 0, # 'Ч' - 54: 0, # 'Ш' - 57: 1, # 'Щ' - 61: 1, # 'Ъ' - 60: 0, # 'Ю' - 56: 0, # 'Я' - 1: 2, # 'а' - 18: 1, # 'б' - 9: 1, # 'в' - 20: 0, # 'г' - 11: 1, # 'д' - 3: 3, # 'е' - 23: 1, # 'ж' - 15: 0, # 'з' - 2: 2, # 'и' - 26: 0, # 'й' - 12: 1, # 'к' - 10: 2, # 'л' - 14: 1, # 'м' - 6: 1, # 'н' - 4: 2, # 'о' - 13: 0, # 'п' - 7: 2, # 'р' - 8: 0, # 'с' - 5: 0, # 'т' - 19: 2, # 'у' - 29: 0, # 'ф' - 25: 0, # 'х' - 22: 0, # 'ц' - 21: 0, # 'ч' - 27: 0, # 'ш' - 24: 1, # 'щ' - 17: 2, # 'ъ' - 52: 1, # 'ь' - 42: 1, # 'ю' - 16: 1, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, - 37: { # 'Д' - 63: 0, # 'e' - 45: 0, # '\xad' - 31: 2, # 'А' - 32: 1, # 'Б' - 35: 2, # 'В' - 43: 1, # 'Г' - 37: 2, # 'Д' - 44: 2, # 'Е' - 55: 2, # 'Ж' - 47: 1, # 'З' - 40: 2, # 'И' - 59: 0, # 'Й' - 33: 1, # 'К' - 46: 1, # 'Л' - 38: 1, # 'М' - 36: 1, # 'Н' - 41: 2, # 'О' - 30: 2, # 'П' - 39: 1, # 'Р' - 28: 2, # 'С' - 34: 1, # 'Т' - 51: 1, # 'У' - 48: 1, # 'Ф' - 49: 0, # 'Х' - 53: 1, # 'Ц' - 50: 1, # 'Ч' - 54: 0, # 'Ш' - 57: 0, # 'Щ' - 61: 1, # 'Ъ' - 60: 1, # 'Ю' - 56: 1, # 'Я' - 1: 3, # 'а' - 18: 0, # 'б' - 9: 2, # 'в' - 20: 0, # 'г' - 11: 0, # 'д' - 3: 3, # 'е' - 23: 3, # 'ж' - 15: 1, # 'з' - 2: 3, # 'и' - 26: 0, # 'й' - 12: 0, # 'к' - 10: 1, # 'л' - 14: 1, # 'м' - 6: 2, # 'н' - 4: 3, # 'о' - 13: 0, # 'п' - 7: 2, # 'р' - 8: 0, # 'с' - 5: 0, # 'т' - 19: 2, # 'у' - 29: 0, # 'ф' - 25: 0, # 'х' - 22: 0, # 'ц' - 21: 0, # 'ч' - 27: 0, # 'ш' - 24: 0, # 'щ' - 17: 2, # 'ъ' - 52: 1, # 'ь' - 42: 2, # 'ю' - 16: 1, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, - 44: { # 'Е' - 63: 0, # 'e' - 45: 0, # '\xad' - 31: 1, # 'А' - 32: 1, # 'Б' - 35: 2, # 'В' - 43: 1, # 'Г' - 37: 1, # 'Д' - 44: 1, # 'Е' - 55: 1, # 'Ж' - 47: 1, # 'З' - 40: 1, # 'И' - 
59: 1, # 'Й' - 33: 2, # 'К' - 46: 2, # 'Л' - 38: 1, # 'М' - 36: 2, # 'Н' - 41: 2, # 'О' - 30: 1, # 'П' - 39: 2, # 'Р' - 28: 2, # 'С' - 34: 2, # 'Т' - 51: 1, # 'У' - 48: 2, # 'Ф' - 49: 1, # 'Х' - 53: 2, # 'Ц' - 50: 1, # 'Ч' - 54: 1, # 'Ш' - 57: 1, # 'Щ' - 61: 0, # 'Ъ' - 60: 0, # 'Ю' - 56: 1, # 'Я' - 1: 0, # 'а' - 18: 1, # 'б' - 9: 2, # 'в' - 20: 1, # 'г' - 11: 2, # 'д' - 3: 0, # 'е' - 23: 1, # 'ж' - 15: 1, # 'з' - 2: 0, # 'и' - 26: 1, # 'й' - 12: 2, # 'к' - 10: 2, # 'л' - 14: 2, # 'м' - 6: 2, # 'н' - 4: 0, # 'о' - 13: 1, # 'п' - 7: 2, # 'р' - 8: 2, # 'с' - 5: 1, # 'т' - 19: 1, # 'у' - 29: 1, # 'ф' - 25: 1, # 'х' - 22: 0, # 'ц' - 21: 1, # 'ч' - 27: 1, # 'ш' - 24: 1, # 'щ' - 17: 1, # 'ъ' - 52: 0, # 'ь' - 42: 1, # 'ю' - 16: 1, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, - 55: { # 'Ж' - 63: 0, # 'e' - 45: 0, # '\xad' - 31: 1, # 'А' - 32: 0, # 'Б' - 35: 1, # 'В' - 43: 0, # 'Г' - 37: 1, # 'Д' - 44: 1, # 'Е' - 55: 0, # 'Ж' - 47: 0, # 'З' - 40: 1, # 'И' - 59: 0, # 'Й' - 33: 1, # 'К' - 46: 0, # 'Л' - 38: 0, # 'М' - 36: 1, # 'Н' - 41: 1, # 'О' - 30: 0, # 'П' - 39: 0, # 'Р' - 28: 0, # 'С' - 34: 0, # 'Т' - 51: 1, # 'У' - 48: 0, # 'Ф' - 49: 0, # 'Х' - 53: 0, # 'Ц' - 50: 0, # 'Ч' - 54: 0, # 'Ш' - 57: 0, # 'Щ' - 61: 0, # 'Ъ' - 60: 0, # 'Ю' - 56: 0, # 'Я' - 1: 2, # 'а' - 18: 0, # 'б' - 9: 0, # 'в' - 20: 0, # 'г' - 11: 1, # 'д' - 3: 2, # 'е' - 23: 0, # 'ж' - 15: 0, # 'з' - 2: 2, # 'и' - 26: 0, # 'й' - 12: 0, # 'к' - 10: 0, # 'л' - 14: 0, # 'м' - 6: 0, # 'н' - 4: 2, # 'о' - 13: 1, # 'п' - 7: 1, # 'р' - 8: 0, # 'с' - 5: 0, # 'т' - 19: 1, # 'у' - 29: 0, # 'ф' - 25: 0, # 'х' - 22: 0, # 'ц' - 21: 0, # 'ч' - 27: 0, # 'ш' - 24: 0, # 'щ' - 17: 1, # 'ъ' - 52: 1, # 'ь' - 42: 1, # 'ю' - 16: 0, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, - 47: { # 'З' - 63: 0, # 'e' - 45: 0, # '\xad' - 31: 2, # 'А' - 32: 1, # 'Б' - 35: 1, # 'В' - 43: 1, # 'Г' - 37: 1, # 'Д' - 44: 1, # 'Е' - 55: 0, # 'Ж' - 47: 1, # 'З' - 40: 1, # 'И' - 59: 0, # 'Й' - 33: 1, # 'К' - 46: 1, # 'Л' - 38: 1, # 'М' - 36: 2, # 'Н' - 41: 1, # 'О' - 30: 1, # 'П' - 39: 1, # 'Р' - 28: 1, # 'С' - 34: 1, # 'Т' - 51: 1, # 'У' - 48: 0, # 'Ф' - 49: 1, # 'Х' - 53: 1, # 'Ц' - 50: 0, # 'Ч' - 54: 0, # 'Ш' - 57: 0, # 'Щ' - 61: 1, # 'Ъ' - 60: 0, # 'Ю' - 56: 1, # 'Я' - 1: 3, # 'а' - 18: 1, # 'б' - 9: 2, # 'в' - 20: 1, # 'г' - 11: 2, # 'д' - 3: 2, # 'е' - 23: 0, # 'ж' - 15: 0, # 'з' - 2: 1, # 'и' - 26: 0, # 'й' - 12: 0, # 'к' - 10: 2, # 'л' - 14: 1, # 'м' - 6: 1, # 'н' - 4: 1, # 'о' - 13: 0, # 'п' - 7: 1, # 'р' - 8: 0, # 'с' - 5: 0, # 'т' - 19: 1, # 'у' - 29: 0, # 'ф' - 25: 0, # 'х' - 22: 0, # 'ц' - 21: 0, # 'ч' - 27: 0, # 'ш' - 24: 0, # 'щ' - 17: 1, # 'ъ' - 52: 0, # 'ь' - 42: 1, # 'ю' - 16: 0, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, - 40: { # 'И' - 63: 0, # 'e' - 45: 1, # '\xad' - 31: 1, # 'А' - 32: 1, # 'Б' - 35: 1, # 'В' - 43: 1, # 'Г' - 37: 1, # 'Д' - 44: 2, # 'Е' - 55: 1, # 'Ж' - 47: 2, # 'З' - 40: 1, # 'И' - 59: 1, # 'Й' - 33: 2, # 'К' - 46: 2, # 'Л' - 38: 2, # 'М' - 36: 2, # 'Н' - 41: 1, # 'О' - 30: 1, # 'П' - 39: 2, # 'Р' - 28: 2, # 'С' - 34: 2, # 'Т' - 51: 0, # 'У' - 48: 1, # 'Ф' - 49: 1, # 'Х' - 53: 1, # 'Ц' - 50: 1, # 'Ч' - 54: 1, # 'Ш' - 57: 1, # 'Щ' - 61: 0, # 'Ъ' - 60: 0, # 'Ю' - 56: 2, # 'Я' - 1: 1, # 'а' - 18: 1, # 'б' - 9: 3, # 'в' - 20: 2, # 'г' - 11: 1, # 'д' - 3: 1, # 'е' - 23: 0, # 'ж' - 15: 3, # 'з' - 2: 0, # 'и' - 26: 1, # 'й' - 12: 1, # 'к' - 10: 2, # 'л' - 14: 2, # 'м' - 6: 2, # 'н' - 4: 0, # 'о' - 13: 1, # 'п' - 7: 2, # 'р' - 8: 2, # 'с' - 5: 2, # 'т' - 19: 0, # 'у' - 29: 1, # 'ф' - 25: 1, # 'х' - 22: 1, # 'ц' - 21: 1, # 'ч' - 27: 1, # 'ш' - 24: 1, # 'щ' - 
17: 0, # 'ъ' - 52: 0, # 'ь' - 42: 0, # 'ю' - 16: 0, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, - 59: { # 'Й' - 63: 0, # 'e' - 45: 0, # '\xad' - 31: 0, # 'А' - 32: 0, # 'Б' - 35: 0, # 'В' - 43: 0, # 'Г' - 37: 1, # 'Д' - 44: 1, # 'Е' - 55: 0, # 'Ж' - 47: 0, # 'З' - 40: 0, # 'И' - 59: 0, # 'Й' - 33: 1, # 'К' - 46: 1, # 'Л' - 38: 1, # 'М' - 36: 1, # 'Н' - 41: 1, # 'О' - 30: 0, # 'П' - 39: 0, # 'Р' - 28: 1, # 'С' - 34: 1, # 'Т' - 51: 0, # 'У' - 48: 0, # 'Ф' - 49: 0, # 'Х' - 53: 0, # 'Ц' - 50: 1, # 'Ч' - 54: 0, # 'Ш' - 57: 0, # 'Щ' - 61: 0, # 'Ъ' - 60: 0, # 'Ю' - 56: 1, # 'Я' - 1: 0, # 'а' - 18: 0, # 'б' - 9: 0, # 'в' - 20: 0, # 'г' - 11: 0, # 'д' - 3: 1, # 'е' - 23: 0, # 'ж' - 15: 0, # 'з' - 2: 0, # 'и' - 26: 0, # 'й' - 12: 0, # 'к' - 10: 0, # 'л' - 14: 0, # 'м' - 6: 0, # 'н' - 4: 2, # 'о' - 13: 0, # 'п' - 7: 0, # 'р' - 8: 0, # 'с' - 5: 0, # 'т' - 19: 0, # 'у' - 29: 0, # 'ф' - 25: 0, # 'х' - 22: 0, # 'ц' - 21: 0, # 'ч' - 27: 0, # 'ш' - 24: 0, # 'щ' - 17: 1, # 'ъ' - 52: 0, # 'ь' - 42: 0, # 'ю' - 16: 0, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, - 33: { # 'К' - 63: 0, # 'e' - 45: 1, # '\xad' - 31: 2, # 'А' - 32: 1, # 'Б' - 35: 1, # 'В' - 43: 1, # 'Г' - 37: 1, # 'Д' - 44: 1, # 'Е' - 55: 0, # 'Ж' - 47: 1, # 'З' - 40: 2, # 'И' - 59: 0, # 'Й' - 33: 1, # 'К' - 46: 1, # 'Л' - 38: 0, # 'М' - 36: 2, # 'Н' - 41: 2, # 'О' - 30: 2, # 'П' - 39: 1, # 'Р' - 28: 2, # 'С' - 34: 1, # 'Т' - 51: 1, # 'У' - 48: 1, # 'Ф' - 49: 1, # 'Х' - 53: 1, # 'Ц' - 50: 0, # 'Ч' - 54: 0, # 'Ш' - 57: 0, # 'Щ' - 61: 1, # 'Ъ' - 60: 1, # 'Ю' - 56: 0, # 'Я' - 1: 3, # 'а' - 18: 0, # 'б' - 9: 1, # 'в' - 20: 0, # 'г' - 11: 0, # 'д' - 3: 2, # 'е' - 23: 1, # 'ж' - 15: 0, # 'з' - 2: 2, # 'и' - 26: 0, # 'й' - 12: 0, # 'к' - 10: 2, # 'л' - 14: 1, # 'м' - 6: 2, # 'н' - 4: 3, # 'о' - 13: 0, # 'п' - 7: 3, # 'р' - 8: 1, # 'с' - 5: 0, # 'т' - 19: 2, # 'у' - 29: 0, # 'ф' - 25: 1, # 'х' - 22: 0, # 'ц' - 21: 0, # 'ч' - 27: 1, # 'ш' - 24: 0, # 'щ' - 17: 2, # 'ъ' - 52: 1, # 'ь' - 42: 2, # 'ю' - 16: 0, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, - 46: { # 'Л' - 63: 1, # 'e' - 45: 0, # '\xad' - 31: 2, # 'А' - 32: 1, # 'Б' - 35: 1, # 'В' - 43: 2, # 'Г' - 37: 1, # 'Д' - 44: 2, # 'Е' - 55: 0, # 'Ж' - 47: 1, # 'З' - 40: 2, # 'И' - 59: 0, # 'Й' - 33: 1, # 'К' - 46: 1, # 'Л' - 38: 0, # 'М' - 36: 1, # 'Н' - 41: 2, # 'О' - 30: 1, # 'П' - 39: 0, # 'Р' - 28: 1, # 'С' - 34: 1, # 'Т' - 51: 1, # 'У' - 48: 0, # 'Ф' - 49: 1, # 'Х' - 53: 1, # 'Ц' - 50: 1, # 'Ч' - 54: 0, # 'Ш' - 57: 0, # 'Щ' - 61: 1, # 'Ъ' - 60: 1, # 'Ю' - 56: 1, # 'Я' - 1: 2, # 'а' - 18: 0, # 'б' - 9: 1, # 'в' - 20: 0, # 'г' - 11: 0, # 'д' - 3: 3, # 'е' - 23: 0, # 'ж' - 15: 0, # 'з' - 2: 2, # 'и' - 26: 0, # 'й' - 12: 0, # 'к' - 10: 0, # 'л' - 14: 0, # 'м' - 6: 0, # 'н' - 4: 2, # 'о' - 13: 0, # 'п' - 7: 0, # 'р' - 8: 0, # 'с' - 5: 0, # 'т' - 19: 2, # 'у' - 29: 0, # 'ф' - 25: 0, # 'х' - 22: 0, # 'ц' - 21: 0, # 'ч' - 27: 0, # 'ш' - 24: 0, # 'щ' - 17: 1, # 'ъ' - 52: 1, # 'ь' - 42: 2, # 'ю' - 16: 1, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, - 38: { # 'М' - 63: 0, # 'e' - 45: 0, # '\xad' - 31: 2, # 'А' - 32: 1, # 'Б' - 35: 2, # 'В' - 43: 0, # 'Г' - 37: 1, # 'Д' - 44: 1, # 'Е' - 55: 0, # 'Ж' - 47: 1, # 'З' - 40: 2, # 'И' - 59: 0, # 'Й' - 33: 1, # 'К' - 46: 1, # 'Л' - 38: 1, # 'М' - 36: 1, # 'Н' - 41: 2, # 'О' - 30: 1, # 'П' - 39: 1, # 'Р' - 28: 2, # 'С' - 34: 1, # 'Т' - 51: 1, # 'У' - 48: 1, # 'Ф' - 49: 0, # 'Х' - 53: 1, # 'Ц' - 50: 0, # 'Ч' - 54: 0, # 'Ш' - 57: 0, # 'Щ' - 61: 1, # 'Ъ' - 60: 0, # 'Ю' - 56: 1, # 'Я' - 1: 3, # 'а' - 18: 0, # 'б' - 9: 0, # 'в' - 20: 0, # 'г' - 11: 0, # 'д' - 3: 3, # 'е' - 23: 0, # 'ж' - 
15: 0, # 'з' - 2: 3, # 'и' - 26: 0, # 'й' - 12: 0, # 'к' - 10: 2, # 'л' - 14: 0, # 'м' - 6: 2, # 'н' - 4: 3, # 'о' - 13: 0, # 'п' - 7: 1, # 'р' - 8: 0, # 'с' - 5: 0, # 'т' - 19: 2, # 'у' - 29: 0, # 'ф' - 25: 0, # 'х' - 22: 0, # 'ц' - 21: 0, # 'ч' - 27: 0, # 'ш' - 24: 0, # 'щ' - 17: 2, # 'ъ' - 52: 1, # 'ь' - 42: 2, # 'ю' - 16: 1, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, - 36: { # 'Н' - 63: 0, # 'e' - 45: 0, # '\xad' - 31: 2, # 'А' - 32: 2, # 'Б' - 35: 1, # 'В' - 43: 1, # 'Г' - 37: 2, # 'Д' - 44: 2, # 'Е' - 55: 1, # 'Ж' - 47: 1, # 'З' - 40: 2, # 'И' - 59: 1, # 'Й' - 33: 2, # 'К' - 46: 1, # 'Л' - 38: 1, # 'М' - 36: 1, # 'Н' - 41: 2, # 'О' - 30: 1, # 'П' - 39: 1, # 'Р' - 28: 2, # 'С' - 34: 2, # 'Т' - 51: 1, # 'У' - 48: 1, # 'Ф' - 49: 1, # 'Х' - 53: 1, # 'Ц' - 50: 1, # 'Ч' - 54: 1, # 'Ш' - 57: 0, # 'Щ' - 61: 1, # 'Ъ' - 60: 1, # 'Ю' - 56: 1, # 'Я' - 1: 3, # 'а' - 18: 0, # 'б' - 9: 0, # 'в' - 20: 1, # 'г' - 11: 0, # 'д' - 3: 3, # 'е' - 23: 0, # 'ж' - 15: 0, # 'з' - 2: 3, # 'и' - 26: 0, # 'й' - 12: 0, # 'к' - 10: 0, # 'л' - 14: 0, # 'м' - 6: 0, # 'н' - 4: 3, # 'о' - 13: 0, # 'п' - 7: 0, # 'р' - 8: 0, # 'с' - 5: 1, # 'т' - 19: 1, # 'у' - 29: 0, # 'ф' - 25: 0, # 'х' - 22: 0, # 'ц' - 21: 0, # 'ч' - 27: 1, # 'ш' - 24: 0, # 'щ' - 17: 0, # 'ъ' - 52: 0, # 'ь' - 42: 2, # 'ю' - 16: 2, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, - 41: { # 'О' - 63: 0, # 'e' - 45: 0, # '\xad' - 31: 1, # 'А' - 32: 1, # 'Б' - 35: 2, # 'В' - 43: 1, # 'Г' - 37: 2, # 'Д' - 44: 1, # 'Е' - 55: 1, # 'Ж' - 47: 1, # 'З' - 40: 1, # 'И' - 59: 1, # 'Й' - 33: 2, # 'К' - 46: 2, # 'Л' - 38: 2, # 'М' - 36: 2, # 'Н' - 41: 2, # 'О' - 30: 1, # 'П' - 39: 2, # 'Р' - 28: 2, # 'С' - 34: 2, # 'Т' - 51: 1, # 'У' - 48: 1, # 'Ф' - 49: 1, # 'Х' - 53: 0, # 'Ц' - 50: 1, # 'Ч' - 54: 1, # 'Ш' - 57: 1, # 'Щ' - 61: 0, # 'Ъ' - 60: 0, # 'Ю' - 56: 1, # 'Я' - 1: 1, # 'а' - 18: 2, # 'б' - 9: 2, # 'в' - 20: 2, # 'г' - 11: 1, # 'д' - 3: 1, # 'е' - 23: 1, # 'ж' - 15: 1, # 'з' - 2: 0, # 'и' - 26: 1, # 'й' - 12: 2, # 'к' - 10: 2, # 'л' - 14: 1, # 'м' - 6: 1, # 'н' - 4: 0, # 'о' - 13: 2, # 'п' - 7: 2, # 'р' - 8: 2, # 'с' - 5: 3, # 'т' - 19: 1, # 'у' - 29: 1, # 'ф' - 25: 1, # 'х' - 22: 1, # 'ц' - 21: 2, # 'ч' - 27: 0, # 'ш' - 24: 2, # 'щ' - 17: 0, # 'ъ' - 52: 0, # 'ь' - 42: 0, # 'ю' - 16: 1, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, - 30: { # 'П' - 63: 0, # 'e' - 45: 1, # '\xad' - 31: 2, # 'А' - 32: 1, # 'Б' - 35: 1, # 'В' - 43: 1, # 'Г' - 37: 1, # 'Д' - 44: 1, # 'Е' - 55: 0, # 'Ж' - 47: 1, # 'З' - 40: 2, # 'И' - 59: 0, # 'Й' - 33: 1, # 'К' - 46: 1, # 'Л' - 38: 1, # 'М' - 36: 1, # 'Н' - 41: 2, # 'О' - 30: 2, # 'П' - 39: 2, # 'Р' - 28: 2, # 'С' - 34: 1, # 'Т' - 51: 2, # 'У' - 48: 1, # 'Ф' - 49: 0, # 'Х' - 53: 1, # 'Ц' - 50: 1, # 'Ч' - 54: 1, # 'Ш' - 57: 0, # 'Щ' - 61: 1, # 'Ъ' - 60: 1, # 'Ю' - 56: 0, # 'Я' - 1: 3, # 'а' - 18: 0, # 'б' - 9: 0, # 'в' - 20: 0, # 'г' - 11: 2, # 'д' - 3: 3, # 'е' - 23: 0, # 'ж' - 15: 0, # 'з' - 2: 2, # 'и' - 26: 0, # 'й' - 12: 1, # 'к' - 10: 3, # 'л' - 14: 0, # 'м' - 6: 1, # 'н' - 4: 3, # 'о' - 13: 0, # 'п' - 7: 3, # 'р' - 8: 1, # 'с' - 5: 1, # 'т' - 19: 2, # 'у' - 29: 1, # 'ф' - 25: 1, # 'х' - 22: 0, # 'ц' - 21: 1, # 'ч' - 27: 1, # 'ш' - 24: 0, # 'щ' - 17: 2, # 'ъ' - 52: 1, # 'ь' - 42: 1, # 'ю' - 16: 1, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, - 39: { # 'Р' - 63: 0, # 'e' - 45: 1, # '\xad' - 31: 2, # 'А' - 32: 1, # 'Б' - 35: 1, # 'В' - 43: 2, # 'Г' - 37: 2, # 'Д' - 44: 2, # 'Е' - 55: 0, # 'Ж' - 47: 1, # 'З' - 40: 2, # 'И' - 59: 0, # 'Й' - 33: 1, # 'К' - 46: 0, # 'Л' - 38: 1, # 'М' - 36: 1, # 'Н' - 41: 2, # 'О' - 30: 2, # 'П' - 39: 1, # 'Р' - 28: 
1, # 'С' - 34: 1, # 'Т' - 51: 1, # 'У' - 48: 1, # 'Ф' - 49: 1, # 'Х' - 53: 1, # 'Ц' - 50: 1, # 'Ч' - 54: 0, # 'Ш' - 57: 0, # 'Щ' - 61: 1, # 'Ъ' - 60: 1, # 'Ю' - 56: 1, # 'Я' - 1: 3, # 'а' - 18: 0, # 'б' - 9: 0, # 'в' - 20: 0, # 'г' - 11: 0, # 'д' - 3: 2, # 'е' - 23: 0, # 'ж' - 15: 0, # 'з' - 2: 2, # 'и' - 26: 0, # 'й' - 12: 0, # 'к' - 10: 0, # 'л' - 14: 0, # 'м' - 6: 1, # 'н' - 4: 3, # 'о' - 13: 0, # 'п' - 7: 0, # 'р' - 8: 1, # 'с' - 5: 0, # 'т' - 19: 3, # 'у' - 29: 0, # 'ф' - 25: 0, # 'х' - 22: 0, # 'ц' - 21: 0, # 'ч' - 27: 0, # 'ш' - 24: 0, # 'щ' - 17: 1, # 'ъ' - 52: 0, # 'ь' - 42: 1, # 'ю' - 16: 1, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, - 28: { # 'С' - 63: 1, # 'e' - 45: 0, # '\xad' - 31: 3, # 'А' - 32: 2, # 'Б' - 35: 2, # 'В' - 43: 1, # 'Г' - 37: 2, # 'Д' - 44: 2, # 'Е' - 55: 1, # 'Ж' - 47: 1, # 'З' - 40: 2, # 'И' - 59: 0, # 'Й' - 33: 2, # 'К' - 46: 1, # 'Л' - 38: 1, # 'М' - 36: 1, # 'Н' - 41: 2, # 'О' - 30: 2, # 'П' - 39: 1, # 'Р' - 28: 2, # 'С' - 34: 2, # 'Т' - 51: 1, # 'У' - 48: 1, # 'Ф' - 49: 0, # 'Х' - 53: 0, # 'Ц' - 50: 0, # 'Ч' - 54: 0, # 'Ш' - 57: 0, # 'Щ' - 61: 1, # 'Ъ' - 60: 1, # 'Ю' - 56: 1, # 'Я' - 1: 3, # 'а' - 18: 1, # 'б' - 9: 2, # 'в' - 20: 1, # 'г' - 11: 1, # 'д' - 3: 3, # 'е' - 23: 0, # 'ж' - 15: 0, # 'з' - 2: 3, # 'и' - 26: 0, # 'й' - 12: 2, # 'к' - 10: 3, # 'л' - 14: 2, # 'м' - 6: 1, # 'н' - 4: 3, # 'о' - 13: 3, # 'п' - 7: 2, # 'р' - 8: 0, # 'с' - 5: 3, # 'т' - 19: 2, # 'у' - 29: 2, # 'ф' - 25: 1, # 'х' - 22: 1, # 'ц' - 21: 1, # 'ч' - 27: 0, # 'ш' - 24: 0, # 'щ' - 17: 3, # 'ъ' - 52: 1, # 'ь' - 42: 1, # 'ю' - 16: 1, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, - 34: { # 'Т' - 63: 0, # 'e' - 45: 0, # '\xad' - 31: 2, # 'А' - 32: 2, # 'Б' - 35: 1, # 'В' - 43: 0, # 'Г' - 37: 1, # 'Д' - 44: 2, # 'Е' - 55: 0, # 'Ж' - 47: 0, # 'З' - 40: 2, # 'И' - 59: 0, # 'Й' - 33: 2, # 'К' - 46: 1, # 'Л' - 38: 1, # 'М' - 36: 1, # 'Н' - 41: 2, # 'О' - 30: 1, # 'П' - 39: 2, # 'Р' - 28: 2, # 'С' - 34: 1, # 'Т' - 51: 1, # 'У' - 48: 1, # 'Ф' - 49: 0, # 'Х' - 53: 1, # 'Ц' - 50: 0, # 'Ч' - 54: 0, # 'Ш' - 57: 0, # 'Щ' - 61: 1, # 'Ъ' - 60: 0, # 'Ю' - 56: 1, # 'Я' - 1: 3, # 'а' - 18: 1, # 'б' - 9: 1, # 'в' - 20: 0, # 'г' - 11: 0, # 'д' - 3: 3, # 'е' - 23: 0, # 'ж' - 15: 0, # 'з' - 2: 2, # 'и' - 26: 0, # 'й' - 12: 1, # 'к' - 10: 1, # 'л' - 14: 0, # 'м' - 6: 0, # 'н' - 4: 3, # 'о' - 13: 0, # 'п' - 7: 3, # 'р' - 8: 0, # 'с' - 5: 0, # 'т' - 19: 2, # 'у' - 29: 0, # 'ф' - 25: 0, # 'х' - 22: 0, # 'ц' - 21: 0, # 'ч' - 27: 0, # 'ш' - 24: 0, # 'щ' - 17: 2, # 'ъ' - 52: 0, # 'ь' - 42: 1, # 'ю' - 16: 2, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, - 51: { # 'У' - 63: 0, # 'e' - 45: 1, # '\xad' - 31: 1, # 'А' - 32: 1, # 'Б' - 35: 1, # 'В' - 43: 1, # 'Г' - 37: 1, # 'Д' - 44: 2, # 'Е' - 55: 1, # 'Ж' - 47: 1, # 'З' - 40: 1, # 'И' - 59: 0, # 'Й' - 33: 1, # 'К' - 46: 1, # 'Л' - 38: 1, # 'М' - 36: 1, # 'Н' - 41: 0, # 'О' - 30: 1, # 'П' - 39: 1, # 'Р' - 28: 1, # 'С' - 34: 2, # 'Т' - 51: 0, # 'У' - 48: 1, # 'Ф' - 49: 1, # 'Х' - 53: 1, # 'Ц' - 50: 1, # 'Ч' - 54: 1, # 'Ш' - 57: 0, # 'Щ' - 61: 0, # 'Ъ' - 60: 0, # 'Ю' - 56: 0, # 'Я' - 1: 1, # 'а' - 18: 1, # 'б' - 9: 2, # 'в' - 20: 1, # 'г' - 11: 1, # 'д' - 3: 2, # 'е' - 23: 1, # 'ж' - 15: 1, # 'з' - 2: 2, # 'и' - 26: 1, # 'й' - 12: 2, # 'к' - 10: 1, # 'л' - 14: 1, # 'м' - 6: 2, # 'н' - 4: 2, # 'о' - 13: 1, # 'п' - 7: 1, # 'р' - 8: 2, # 'с' - 5: 1, # 'т' - 19: 1, # 'у' - 29: 0, # 'ф' - 25: 1, # 'х' - 22: 0, # 'ц' - 21: 2, # 'ч' - 27: 1, # 'ш' - 24: 0, # 'щ' - 17: 1, # 'ъ' - 52: 0, # 'ь' - 42: 0, # 'ю' - 16: 0, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, - 48: { # 'Ф' - 63: 0, # 'e' - 
45: 0, # '\xad' - 31: 2, # 'А' - 32: 1, # 'Б' - 35: 1, # 'В' - 43: 0, # 'Г' - 37: 0, # 'Д' - 44: 1, # 'Е' - 55: 0, # 'Ж' - 47: 0, # 'З' - 40: 2, # 'И' - 59: 0, # 'Й' - 33: 1, # 'К' - 46: 1, # 'Л' - 38: 0, # 'М' - 36: 1, # 'Н' - 41: 1, # 'О' - 30: 2, # 'П' - 39: 1, # 'Р' - 28: 2, # 'С' - 34: 1, # 'Т' - 51: 1, # 'У' - 48: 0, # 'Ф' - 49: 0, # 'Х' - 53: 0, # 'Ц' - 50: 0, # 'Ч' - 54: 0, # 'Ш' - 57: 0, # 'Щ' - 61: 0, # 'Ъ' - 60: 0, # 'Ю' - 56: 0, # 'Я' - 1: 2, # 'а' - 18: 0, # 'б' - 9: 0, # 'в' - 20: 0, # 'г' - 11: 0, # 'д' - 3: 2, # 'е' - 23: 0, # 'ж' - 15: 0, # 'з' - 2: 2, # 'и' - 26: 0, # 'й' - 12: 0, # 'к' - 10: 2, # 'л' - 14: 0, # 'м' - 6: 0, # 'н' - 4: 2, # 'о' - 13: 0, # 'п' - 7: 2, # 'р' - 8: 0, # 'с' - 5: 0, # 'т' - 19: 1, # 'у' - 29: 0, # 'ф' - 25: 0, # 'х' - 22: 0, # 'ц' - 21: 0, # 'ч' - 27: 0, # 'ш' - 24: 0, # 'щ' - 17: 1, # 'ъ' - 52: 1, # 'ь' - 42: 1, # 'ю' - 16: 0, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, - 49: { # 'Х' - 63: 0, # 'e' - 45: 0, # '\xad' - 31: 1, # 'А' - 32: 0, # 'Б' - 35: 1, # 'В' - 43: 1, # 'Г' - 37: 1, # 'Д' - 44: 1, # 'Е' - 55: 0, # 'Ж' - 47: 0, # 'З' - 40: 1, # 'И' - 59: 0, # 'Й' - 33: 0, # 'К' - 46: 1, # 'Л' - 38: 1, # 'М' - 36: 1, # 'Н' - 41: 1, # 'О' - 30: 1, # 'П' - 39: 1, # 'Р' - 28: 0, # 'С' - 34: 0, # 'Т' - 51: 0, # 'У' - 48: 0, # 'Ф' - 49: 1, # 'Х' - 53: 0, # 'Ц' - 50: 0, # 'Ч' - 54: 0, # 'Ш' - 57: 0, # 'Щ' - 61: 0, # 'Ъ' - 60: 0, # 'Ю' - 56: 0, # 'Я' - 1: 2, # 'а' - 18: 0, # 'б' - 9: 1, # 'в' - 20: 0, # 'г' - 11: 0, # 'д' - 3: 2, # 'е' - 23: 0, # 'ж' - 15: 0, # 'з' - 2: 2, # 'и' - 26: 0, # 'й' - 12: 0, # 'к' - 10: 1, # 'л' - 14: 1, # 'м' - 6: 0, # 'н' - 4: 2, # 'о' - 13: 0, # 'п' - 7: 2, # 'р' - 8: 0, # 'с' - 5: 0, # 'т' - 19: 2, # 'у' - 29: 0, # 'ф' - 25: 0, # 'х' - 22: 0, # 'ц' - 21: 0, # 'ч' - 27: 0, # 'ш' - 24: 0, # 'щ' - 17: 2, # 'ъ' - 52: 1, # 'ь' - 42: 1, # 'ю' - 16: 0, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, - 53: { # 'Ц' - 63: 0, # 'e' - 45: 0, # '\xad' - 31: 1, # 'А' - 32: 0, # 'Б' - 35: 1, # 'В' - 43: 0, # 'Г' - 37: 0, # 'Д' - 44: 1, # 'Е' - 55: 0, # 'Ж' - 47: 0, # 'З' - 40: 2, # 'И' - 59: 0, # 'Й' - 33: 2, # 'К' - 46: 1, # 'Л' - 38: 1, # 'М' - 36: 0, # 'Н' - 41: 0, # 'О' - 30: 0, # 'П' - 39: 1, # 'Р' - 28: 2, # 'С' - 34: 0, # 'Т' - 51: 1, # 'У' - 48: 0, # 'Ф' - 49: 0, # 'Х' - 53: 0, # 'Ц' - 50: 0, # 'Ч' - 54: 0, # 'Ш' - 57: 0, # 'Щ' - 61: 0, # 'Ъ' - 60: 0, # 'Ю' - 56: 0, # 'Я' - 1: 2, # 'а' - 18: 0, # 'б' - 9: 2, # 'в' - 20: 0, # 'г' - 11: 0, # 'д' - 3: 2, # 'е' - 23: 0, # 'ж' - 15: 1, # 'з' - 2: 2, # 'и' - 26: 0, # 'й' - 12: 0, # 'к' - 10: 0, # 'л' - 14: 0, # 'м' - 6: 0, # 'н' - 4: 1, # 'о' - 13: 0, # 'п' - 7: 1, # 'р' - 8: 0, # 'с' - 5: 0, # 'т' - 19: 1, # 'у' - 29: 0, # 'ф' - 25: 0, # 'х' - 22: 0, # 'ц' - 21: 0, # 'ч' - 27: 0, # 'ш' - 24: 0, # 'щ' - 17: 1, # 'ъ' - 52: 0, # 'ь' - 42: 1, # 'ю' - 16: 1, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, - 50: { # 'Ч' - 63: 0, # 'e' - 45: 0, # '\xad' - 31: 2, # 'А' - 32: 1, # 'Б' - 35: 0, # 'В' - 43: 0, # 'Г' - 37: 0, # 'Д' - 44: 1, # 'Е' - 55: 0, # 'Ж' - 47: 1, # 'З' - 40: 1, # 'И' - 59: 0, # 'Й' - 33: 1, # 'К' - 46: 1, # 'Л' - 38: 0, # 'М' - 36: 1, # 'Н' - 41: 1, # 'О' - 30: 0, # 'П' - 39: 0, # 'Р' - 28: 0, # 'С' - 34: 0, # 'Т' - 51: 1, # 'У' - 48: 0, # 'Ф' - 49: 0, # 'Х' - 53: 0, # 'Ц' - 50: 0, # 'Ч' - 54: 0, # 'Ш' - 57: 0, # 'Щ' - 61: 0, # 'Ъ' - 60: 0, # 'Ю' - 56: 0, # 'Я' - 1: 2, # 'а' - 18: 0, # 'б' - 9: 0, # 'в' - 20: 0, # 'г' - 11: 0, # 'д' - 3: 3, # 'е' - 23: 1, # 'ж' - 15: 0, # 'з' - 2: 2, # 'и' - 26: 0, # 'й' - 12: 0, # 'к' - 10: 1, # 'л' - 14: 0, # 'м' - 6: 0, # 'н' - 4: 2, # 'о' - 13: 0, # 
'п' - 7: 1, # 'р' - 8: 0, # 'с' - 5: 0, # 'т' - 19: 2, # 'у' - 29: 0, # 'ф' - 25: 0, # 'х' - 22: 0, # 'ц' - 21: 0, # 'ч' - 27: 0, # 'ш' - 24: 0, # 'щ' - 17: 1, # 'ъ' - 52: 1, # 'ь' - 42: 0, # 'ю' - 16: 0, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, - 54: { # 'Ш' - 63: 0, # 'e' - 45: 0, # '\xad' - 31: 1, # 'А' - 32: 0, # 'Б' - 35: 0, # 'В' - 43: 0, # 'Г' - 37: 0, # 'Д' - 44: 1, # 'Е' - 55: 0, # 'Ж' - 47: 1, # 'З' - 40: 1, # 'И' - 59: 0, # 'Й' - 33: 1, # 'К' - 46: 0, # 'Л' - 38: 0, # 'М' - 36: 1, # 'Н' - 41: 1, # 'О' - 30: 0, # 'П' - 39: 0, # 'Р' - 28: 0, # 'С' - 34: 0, # 'Т' - 51: 1, # 'У' - 48: 0, # 'Ф' - 49: 0, # 'Х' - 53: 0, # 'Ц' - 50: 0, # 'Ч' - 54: 0, # 'Ш' - 57: 0, # 'Щ' - 61: 0, # 'Ъ' - 60: 0, # 'Ю' - 56: 0, # 'Я' - 1: 2, # 'а' - 18: 0, # 'б' - 9: 2, # 'в' - 20: 0, # 'г' - 11: 0, # 'д' - 3: 2, # 'е' - 23: 0, # 'ж' - 15: 0, # 'з' - 2: 2, # 'и' - 26: 0, # 'й' - 12: 1, # 'к' - 10: 1, # 'л' - 14: 1, # 'м' - 6: 1, # 'н' - 4: 2, # 'о' - 13: 1, # 'п' - 7: 1, # 'р' - 8: 0, # 'с' - 5: 0, # 'т' - 19: 2, # 'у' - 29: 0, # 'ф' - 25: 0, # 'х' - 22: 0, # 'ц' - 21: 1, # 'ч' - 27: 0, # 'ш' - 24: 0, # 'щ' - 17: 1, # 'ъ' - 52: 1, # 'ь' - 42: 0, # 'ю' - 16: 0, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, - 57: { # 'Щ' - 63: 0, # 'e' - 45: 0, # '\xad' - 31: 1, # 'А' - 32: 0, # 'Б' - 35: 0, # 'В' - 43: 0, # 'Г' - 37: 0, # 'Д' - 44: 1, # 'Е' - 55: 0, # 'Ж' - 47: 0, # 'З' - 40: 1, # 'И' - 59: 0, # 'Й' - 33: 0, # 'К' - 46: 0, # 'Л' - 38: 0, # 'М' - 36: 0, # 'Н' - 41: 1, # 'О' - 30: 0, # 'П' - 39: 0, # 'Р' - 28: 0, # 'С' - 34: 0, # 'Т' - 51: 0, # 'У' - 48: 0, # 'Ф' - 49: 0, # 'Х' - 53: 0, # 'Ц' - 50: 0, # 'Ч' - 54: 0, # 'Ш' - 57: 0, # 'Щ' - 61: 0, # 'Ъ' - 60: 0, # 'Ю' - 56: 0, # 'Я' - 1: 2, # 'а' - 18: 0, # 'б' - 9: 0, # 'в' - 20: 0, # 'г' - 11: 0, # 'д' - 3: 2, # 'е' - 23: 0, # 'ж' - 15: 0, # 'з' - 2: 1, # 'и' - 26: 0, # 'й' - 12: 0, # 'к' - 10: 0, # 'л' - 14: 0, # 'м' - 6: 0, # 'н' - 4: 1, # 'о' - 13: 0, # 'п' - 7: 1, # 'р' - 8: 0, # 'с' - 5: 0, # 'т' - 19: 1, # 'у' - 29: 0, # 'ф' - 25: 0, # 'х' - 22: 0, # 'ц' - 21: 0, # 'ч' - 27: 0, # 'ш' - 24: 0, # 'щ' - 17: 1, # 'ъ' - 52: 0, # 'ь' - 42: 0, # 'ю' - 16: 1, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, - 61: { # 'Ъ' - 63: 0, # 'e' - 45: 0, # '\xad' - 31: 0, # 'А' - 32: 1, # 'Б' - 35: 1, # 'В' - 43: 0, # 'Г' - 37: 1, # 'Д' - 44: 0, # 'Е' - 55: 1, # 'Ж' - 47: 1, # 'З' - 40: 0, # 'И' - 59: 0, # 'Й' - 33: 1, # 'К' - 46: 2, # 'Л' - 38: 1, # 'М' - 36: 1, # 'Н' - 41: 0, # 'О' - 30: 1, # 'П' - 39: 2, # 'Р' - 28: 1, # 'С' - 34: 1, # 'Т' - 51: 0, # 'У' - 48: 0, # 'Ф' - 49: 1, # 'Х' - 53: 1, # 'Ц' - 50: 1, # 'Ч' - 54: 1, # 'Ш' - 57: 1, # 'Щ' - 61: 0, # 'Ъ' - 60: 0, # 'Ю' - 56: 0, # 'Я' - 1: 0, # 'а' - 18: 0, # 'б' - 9: 0, # 'в' - 20: 0, # 'г' - 11: 0, # 'д' - 3: 0, # 'е' - 23: 0, # 'ж' - 15: 0, # 'з' - 2: 0, # 'и' - 26: 0, # 'й' - 12: 0, # 'к' - 10: 1, # 'л' - 14: 0, # 'м' - 6: 1, # 'н' - 4: 0, # 'о' - 13: 0, # 'п' - 7: 1, # 'р' - 8: 0, # 'с' - 5: 0, # 'т' - 19: 0, # 'у' - 29: 0, # 'ф' - 25: 0, # 'х' - 22: 0, # 'ц' - 21: 0, # 'ч' - 27: 0, # 'ш' - 24: 0, # 'щ' - 17: 0, # 'ъ' - 52: 0, # 'ь' - 42: 0, # 'ю' - 16: 0, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, - 60: { # 'Ю' - 63: 0, # 'e' - 45: 0, # '\xad' - 31: 1, # 'А' - 32: 1, # 'Б' - 35: 0, # 'В' - 43: 1, # 'Г' - 37: 1, # 'Д' - 44: 0, # 'Е' - 55: 1, # 'Ж' - 47: 0, # 'З' - 40: 0, # 'И' - 59: 0, # 'Й' - 33: 1, # 'К' - 46: 1, # 'Л' - 38: 0, # 'М' - 36: 1, # 'Н' - 41: 0, # 'О' - 30: 0, # 'П' - 39: 1, # 'Р' - 28: 1, # 'С' - 34: 0, # 'Т' - 51: 0, # 'У' - 48: 0, # 'Ф' - 49: 0, # 'Х' - 53: 0, # 'Ц' - 50: 0, # 'Ч' - 54: 0, # 'Ш' - 57: 0, # 
'Щ' - 61: 0, # 'Ъ' - 60: 0, # 'Ю' - 56: 0, # 'Я' - 1: 0, # 'а' - 18: 1, # 'б' - 9: 1, # 'в' - 20: 2, # 'г' - 11: 1, # 'д' - 3: 0, # 'е' - 23: 2, # 'ж' - 15: 1, # 'з' - 2: 1, # 'и' - 26: 0, # 'й' - 12: 1, # 'к' - 10: 1, # 'л' - 14: 1, # 'м' - 6: 1, # 'н' - 4: 0, # 'о' - 13: 1, # 'п' - 7: 1, # 'р' - 8: 1, # 'с' - 5: 1, # 'т' - 19: 0, # 'у' - 29: 0, # 'ф' - 25: 1, # 'х' - 22: 0, # 'ц' - 21: 0, # 'ч' - 27: 0, # 'ш' - 24: 0, # 'щ' - 17: 0, # 'ъ' - 52: 0, # 'ь' - 42: 0, # 'ю' - 16: 0, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, - 56: { # 'Я' - 63: 0, # 'e' - 45: 0, # '\xad' - 31: 0, # 'А' - 32: 1, # 'Б' - 35: 1, # 'В' - 43: 1, # 'Г' - 37: 1, # 'Д' - 44: 0, # 'Е' - 55: 0, # 'Ж' - 47: 0, # 'З' - 40: 0, # 'И' - 59: 0, # 'Й' - 33: 1, # 'К' - 46: 1, # 'Л' - 38: 1, # 'М' - 36: 1, # 'Н' - 41: 0, # 'О' - 30: 0, # 'П' - 39: 0, # 'Р' - 28: 1, # 'С' - 34: 2, # 'Т' - 51: 0, # 'У' - 48: 0, # 'Ф' - 49: 0, # 'Х' - 53: 0, # 'Ц' - 50: 0, # 'Ч' - 54: 0, # 'Ш' - 57: 0, # 'Щ' - 61: 0, # 'Ъ' - 60: 0, # 'Ю' - 56: 0, # 'Я' - 1: 0, # 'а' - 18: 1, # 'б' - 9: 1, # 'в' - 20: 1, # 'г' - 11: 1, # 'д' - 3: 0, # 'е' - 23: 0, # 'ж' - 15: 1, # 'з' - 2: 1, # 'и' - 26: 1, # 'й' - 12: 1, # 'к' - 10: 1, # 'л' - 14: 2, # 'м' - 6: 2, # 'н' - 4: 0, # 'о' - 13: 2, # 'п' - 7: 1, # 'р' - 8: 1, # 'с' - 5: 1, # 'т' - 19: 0, # 'у' - 29: 0, # 'ф' - 25: 1, # 'х' - 22: 0, # 'ц' - 21: 0, # 'ч' - 27: 1, # 'ш' - 24: 0, # 'щ' - 17: 0, # 'ъ' - 52: 0, # 'ь' - 42: 1, # 'ю' - 16: 0, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, - 1: { # 'а' - 63: 1, # 'e' - 45: 1, # '\xad' - 31: 1, # 'А' - 32: 0, # 'Б' - 35: 0, # 'В' - 43: 0, # 'Г' - 37: 0, # 'Д' - 44: 1, # 'Е' - 55: 0, # 'Ж' - 47: 0, # 'З' - 40: 0, # 'И' - 59: 0, # 'Й' - 33: 0, # 'К' - 46: 0, # 'Л' - 38: 0, # 'М' - 36: 0, # 'Н' - 41: 0, # 'О' - 30: 0, # 'П' - 39: 0, # 'Р' - 28: 0, # 'С' - 34: 0, # 'Т' - 51: 0, # 'У' - 48: 0, # 'Ф' - 49: 0, # 'Х' - 53: 0, # 'Ц' - 50: 0, # 'Ч' - 54: 0, # 'Ш' - 57: 0, # 'Щ' - 61: 0, # 'Ъ' - 60: 0, # 'Ю' - 56: 0, # 'Я' - 1: 1, # 'а' - 18: 3, # 'б' - 9: 3, # 'в' - 20: 3, # 'г' - 11: 3, # 'д' - 3: 3, # 'е' - 23: 3, # 'ж' - 15: 3, # 'з' - 2: 3, # 'и' - 26: 3, # 'й' - 12: 3, # 'к' - 10: 3, # 'л' - 14: 3, # 'м' - 6: 3, # 'н' - 4: 2, # 'о' - 13: 3, # 'п' - 7: 3, # 'р' - 8: 3, # 'с' - 5: 3, # 'т' - 19: 3, # 'у' - 29: 3, # 'ф' - 25: 3, # 'х' - 22: 3, # 'ц' - 21: 3, # 'ч' - 27: 3, # 'ш' - 24: 3, # 'щ' - 17: 0, # 'ъ' - 52: 0, # 'ь' - 42: 1, # 'ю' - 16: 3, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, - 18: { # 'б' - 63: 1, # 'e' - 45: 0, # '\xad' - 31: 0, # 'А' - 32: 0, # 'Б' - 35: 0, # 'В' - 43: 0, # 'Г' - 37: 0, # 'Д' - 44: 0, # 'Е' - 55: 0, # 'Ж' - 47: 0, # 'З' - 40: 0, # 'И' - 59: 0, # 'Й' - 33: 0, # 'К' - 46: 0, # 'Л' - 38: 0, # 'М' - 36: 0, # 'Н' - 41: 0, # 'О' - 30: 0, # 'П' - 39: 0, # 'Р' - 28: 0, # 'С' - 34: 0, # 'Т' - 51: 0, # 'У' - 48: 0, # 'Ф' - 49: 0, # 'Х' - 53: 0, # 'Ц' - 50: 0, # 'Ч' - 54: 0, # 'Ш' - 57: 0, # 'Щ' - 61: 0, # 'Ъ' - 60: 0, # 'Ю' - 56: 0, # 'Я' - 1: 3, # 'а' - 18: 0, # 'б' - 9: 3, # 'в' - 20: 1, # 'г' - 11: 2, # 'д' - 3: 3, # 'е' - 23: 1, # 'ж' - 15: 1, # 'з' - 2: 3, # 'и' - 26: 0, # 'й' - 12: 1, # 'к' - 10: 3, # 'л' - 14: 2, # 'м' - 6: 3, # 'н' - 4: 3, # 'о' - 13: 1, # 'п' - 7: 3, # 'р' - 8: 3, # 'с' - 5: 0, # 'т' - 19: 3, # 'у' - 29: 0, # 'ф' - 25: 2, # 'х' - 22: 1, # 'ц' - 21: 1, # 'ч' - 27: 1, # 'ш' - 24: 3, # 'щ' - 17: 3, # 'ъ' - 52: 1, # 'ь' - 42: 2, # 'ю' - 16: 3, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, - 9: { # 'в' - 63: 1, # 'e' - 45: 1, # '\xad' - 31: 0, # 'А' - 32: 1, # 'Б' - 35: 0, # 'В' - 43: 0, # 'Г' - 37: 0, # 'Д' - 44: 0, # 'Е' - 55: 0, # 'Ж' - 47: 
0, # 'З' - 40: 0, # 'И' - 59: 0, # 'Й' - 33: 0, # 'К' - 46: 0, # 'Л' - 38: 0, # 'М' - 36: 0, # 'Н' - 41: 0, # 'О' - 30: 0, # 'П' - 39: 0, # 'Р' - 28: 0, # 'С' - 34: 0, # 'Т' - 51: 0, # 'У' - 48: 1, # 'Ф' - 49: 0, # 'Х' - 53: 0, # 'Ц' - 50: 0, # 'Ч' - 54: 0, # 'Ш' - 57: 0, # 'Щ' - 61: 0, # 'Ъ' - 60: 0, # 'Ю' - 56: 0, # 'Я' - 1: 3, # 'а' - 18: 1, # 'б' - 9: 0, # 'в' - 20: 2, # 'г' - 11: 3, # 'д' - 3: 3, # 'е' - 23: 1, # 'ж' - 15: 3, # 'з' - 2: 3, # 'и' - 26: 0, # 'й' - 12: 3, # 'к' - 10: 3, # 'л' - 14: 2, # 'м' - 6: 3, # 'н' - 4: 3, # 'о' - 13: 2, # 'п' - 7: 3, # 'р' - 8: 3, # 'с' - 5: 3, # 'т' - 19: 2, # 'у' - 29: 0, # 'ф' - 25: 2, # 'х' - 22: 2, # 'ц' - 21: 3, # 'ч' - 27: 2, # 'ш' - 24: 1, # 'щ' - 17: 3, # 'ъ' - 52: 1, # 'ь' - 42: 2, # 'ю' - 16: 3, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, - 20: { # 'г' - 63: 0, # 'e' - 45: 0, # '\xad' - 31: 0, # 'А' - 32: 0, # 'Б' - 35: 0, # 'В' - 43: 0, # 'Г' - 37: 0, # 'Д' - 44: 0, # 'Е' - 55: 0, # 'Ж' - 47: 0, # 'З' - 40: 0, # 'И' - 59: 0, # 'Й' - 33: 0, # 'К' - 46: 0, # 'Л' - 38: 0, # 'М' - 36: 0, # 'Н' - 41: 0, # 'О' - 30: 0, # 'П' - 39: 0, # 'Р' - 28: 0, # 'С' - 34: 0, # 'Т' - 51: 0, # 'У' - 48: 0, # 'Ф' - 49: 0, # 'Х' - 53: 0, # 'Ц' - 50: 0, # 'Ч' - 54: 0, # 'Ш' - 57: 0, # 'Щ' - 61: 0, # 'Ъ' - 60: 0, # 'Ю' - 56: 0, # 'Я' - 1: 3, # 'а' - 18: 1, # 'б' - 9: 2, # 'в' - 20: 1, # 'г' - 11: 2, # 'д' - 3: 3, # 'е' - 23: 0, # 'ж' - 15: 1, # 'з' - 2: 3, # 'и' - 26: 0, # 'й' - 12: 1, # 'к' - 10: 3, # 'л' - 14: 1, # 'м' - 6: 3, # 'н' - 4: 3, # 'о' - 13: 1, # 'п' - 7: 3, # 'р' - 8: 2, # 'с' - 5: 2, # 'т' - 19: 3, # 'у' - 29: 1, # 'ф' - 25: 1, # 'х' - 22: 0, # 'ц' - 21: 1, # 'ч' - 27: 0, # 'ш' - 24: 0, # 'щ' - 17: 3, # 'ъ' - 52: 1, # 'ь' - 42: 1, # 'ю' - 16: 1, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, - 11: { # 'д' - 63: 1, # 'e' - 45: 0, # '\xad' - 31: 0, # 'А' - 32: 0, # 'Б' - 35: 0, # 'В' - 43: 0, # 'Г' - 37: 0, # 'Д' - 44: 0, # 'Е' - 55: 0, # 'Ж' - 47: 0, # 'З' - 40: 0, # 'И' - 59: 0, # 'Й' - 33: 0, # 'К' - 46: 0, # 'Л' - 38: 0, # 'М' - 36: 0, # 'Н' - 41: 0, # 'О' - 30: 0, # 'П' - 39: 0, # 'Р' - 28: 0, # 'С' - 34: 0, # 'Т' - 51: 0, # 'У' - 48: 0, # 'Ф' - 49: 0, # 'Х' - 53: 0, # 'Ц' - 50: 0, # 'Ч' - 54: 0, # 'Ш' - 57: 0, # 'Щ' - 61: 0, # 'Ъ' - 60: 0, # 'Ю' - 56: 0, # 'Я' - 1: 3, # 'а' - 18: 2, # 'б' - 9: 3, # 'в' - 20: 2, # 'г' - 11: 2, # 'д' - 3: 3, # 'е' - 23: 3, # 'ж' - 15: 2, # 'з' - 2: 3, # 'и' - 26: 0, # 'й' - 12: 3, # 'к' - 10: 3, # 'л' - 14: 3, # 'м' - 6: 3, # 'н' - 4: 3, # 'о' - 13: 3, # 'п' - 7: 3, # 'р' - 8: 3, # 'с' - 5: 1, # 'т' - 19: 3, # 'у' - 29: 1, # 'ф' - 25: 2, # 'х' - 22: 2, # 'ц' - 21: 2, # 'ч' - 27: 1, # 'ш' - 24: 1, # 'щ' - 17: 3, # 'ъ' - 52: 1, # 'ь' - 42: 1, # 'ю' - 16: 3, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, - 3: { # 'е' - 63: 0, # 'e' - 45: 1, # '\xad' - 31: 0, # 'А' - 32: 0, # 'Б' - 35: 0, # 'В' - 43: 0, # 'Г' - 37: 0, # 'Д' - 44: 0, # 'Е' - 55: 0, # 'Ж' - 47: 0, # 'З' - 40: 0, # 'И' - 59: 0, # 'Й' - 33: 0, # 'К' - 46: 0, # 'Л' - 38: 0, # 'М' - 36: 0, # 'Н' - 41: 0, # 'О' - 30: 0, # 'П' - 39: 0, # 'Р' - 28: 0, # 'С' - 34: 0, # 'Т' - 51: 0, # 'У' - 48: 0, # 'Ф' - 49: 0, # 'Х' - 53: 0, # 'Ц' - 50: 0, # 'Ч' - 54: 0, # 'Ш' - 57: 0, # 'Щ' - 61: 0, # 'Ъ' - 60: 0, # 'Ю' - 56: 0, # 'Я' - 1: 2, # 'а' - 18: 3, # 'б' - 9: 3, # 'в' - 20: 3, # 'г' - 11: 3, # 'д' - 3: 2, # 'е' - 23: 3, # 'ж' - 15: 3, # 'з' - 2: 2, # 'и' - 26: 3, # 'й' - 12: 3, # 'к' - 10: 3, # 'л' - 14: 3, # 'м' - 6: 3, # 'н' - 4: 3, # 'о' - 13: 3, # 'п' - 7: 3, # 'р' - 8: 3, # 'с' - 5: 3, # 'т' - 19: 2, # 'у' - 29: 3, # 'ф' - 25: 3, # 'х' - 22: 3, # 'ц' - 21: 3, # 'ч' - 27: 
3, # 'ш' - 24: 3, # 'щ' - 17: 1, # 'ъ' - 52: 0, # 'ь' - 42: 1, # 'ю' - 16: 3, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, - 23: { # 'ж' - 63: 0, # 'e' - 45: 0, # '\xad' - 31: 0, # 'А' - 32: 0, # 'Б' - 35: 0, # 'В' - 43: 0, # 'Г' - 37: 0, # 'Д' - 44: 0, # 'Е' - 55: 0, # 'Ж' - 47: 0, # 'З' - 40: 0, # 'И' - 59: 0, # 'Й' - 33: 0, # 'К' - 46: 0, # 'Л' - 38: 0, # 'М' - 36: 0, # 'Н' - 41: 0, # 'О' - 30: 0, # 'П' - 39: 0, # 'Р' - 28: 0, # 'С' - 34: 0, # 'Т' - 51: 0, # 'У' - 48: 0, # 'Ф' - 49: 0, # 'Х' - 53: 0, # 'Ц' - 50: 0, # 'Ч' - 54: 0, # 'Ш' - 57: 0, # 'Щ' - 61: 0, # 'Ъ' - 60: 0, # 'Ю' - 56: 0, # 'Я' - 1: 3, # 'а' - 18: 3, # 'б' - 9: 2, # 'в' - 20: 1, # 'г' - 11: 3, # 'д' - 3: 3, # 'е' - 23: 0, # 'ж' - 15: 0, # 'з' - 2: 3, # 'и' - 26: 0, # 'й' - 12: 2, # 'к' - 10: 1, # 'л' - 14: 1, # 'м' - 6: 3, # 'н' - 4: 2, # 'о' - 13: 1, # 'п' - 7: 1, # 'р' - 8: 1, # 'с' - 5: 1, # 'т' - 19: 2, # 'у' - 29: 0, # 'ф' - 25: 0, # 'х' - 22: 1, # 'ц' - 21: 1, # 'ч' - 27: 0, # 'ш' - 24: 0, # 'щ' - 17: 2, # 'ъ' - 52: 0, # 'ь' - 42: 0, # 'ю' - 16: 1, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, - 15: { # 'з' - 63: 1, # 'e' - 45: 0, # '\xad' - 31: 0, # 'А' - 32: 0, # 'Б' - 35: 0, # 'В' - 43: 0, # 'Г' - 37: 0, # 'Д' - 44: 0, # 'Е' - 55: 0, # 'Ж' - 47: 0, # 'З' - 40: 0, # 'И' - 59: 0, # 'Й' - 33: 0, # 'К' - 46: 0, # 'Л' - 38: 0, # 'М' - 36: 0, # 'Н' - 41: 0, # 'О' - 30: 0, # 'П' - 39: 0, # 'Р' - 28: 0, # 'С' - 34: 0, # 'Т' - 51: 0, # 'У' - 48: 0, # 'Ф' - 49: 0, # 'Х' - 53: 0, # 'Ц' - 50: 0, # 'Ч' - 54: 0, # 'Ш' - 57: 0, # 'Щ' - 61: 0, # 'Ъ' - 60: 0, # 'Ю' - 56: 0, # 'Я' - 1: 3, # 'а' - 18: 3, # 'б' - 9: 3, # 'в' - 20: 3, # 'г' - 11: 3, # 'д' - 3: 3, # 'е' - 23: 1, # 'ж' - 15: 1, # 'з' - 2: 3, # 'и' - 26: 0, # 'й' - 12: 3, # 'к' - 10: 3, # 'л' - 14: 3, # 'м' - 6: 3, # 'н' - 4: 3, # 'о' - 13: 3, # 'п' - 7: 3, # 'р' - 8: 3, # 'с' - 5: 3, # 'т' - 19: 3, # 'у' - 29: 1, # 'ф' - 25: 2, # 'х' - 22: 2, # 'ц' - 21: 2, # 'ч' - 27: 2, # 'ш' - 24: 1, # 'щ' - 17: 2, # 'ъ' - 52: 1, # 'ь' - 42: 1, # 'ю' - 16: 2, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, - 2: { # 'и' - 63: 1, # 'e' - 45: 1, # '\xad' - 31: 0, # 'А' - 32: 0, # 'Б' - 35: 0, # 'В' - 43: 1, # 'Г' - 37: 0, # 'Д' - 44: 0, # 'Е' - 55: 0, # 'Ж' - 47: 0, # 'З' - 40: 0, # 'И' - 59: 0, # 'Й' - 33: 1, # 'К' - 46: 0, # 'Л' - 38: 0, # 'М' - 36: 0, # 'Н' - 41: 0, # 'О' - 30: 1, # 'П' - 39: 0, # 'Р' - 28: 0, # 'С' - 34: 0, # 'Т' - 51: 0, # 'У' - 48: 1, # 'Ф' - 49: 0, # 'Х' - 53: 0, # 'Ц' - 50: 0, # 'Ч' - 54: 0, # 'Ш' - 57: 0, # 'Щ' - 61: 0, # 'Ъ' - 60: 0, # 'Ю' - 56: 0, # 'Я' - 1: 3, # 'а' - 18: 3, # 'б' - 9: 3, # 'в' - 20: 3, # 'г' - 11: 3, # 'д' - 3: 3, # 'е' - 23: 3, # 'ж' - 15: 3, # 'з' - 2: 3, # 'и' - 26: 3, # 'й' - 12: 3, # 'к' - 10: 3, # 'л' - 14: 3, # 'м' - 6: 3, # 'н' - 4: 3, # 'о' - 13: 3, # 'п' - 7: 3, # 'р' - 8: 3, # 'с' - 5: 3, # 'т' - 19: 2, # 'у' - 29: 3, # 'ф' - 25: 3, # 'х' - 22: 3, # 'ц' - 21: 3, # 'ч' - 27: 3, # 'ш' - 24: 3, # 'щ' - 17: 2, # 'ъ' - 52: 0, # 'ь' - 42: 1, # 'ю' - 16: 3, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, - 26: { # 'й' - 63: 0, # 'e' - 45: 0, # '\xad' - 31: 0, # 'А' - 32: 0, # 'Б' - 35: 0, # 'В' - 43: 0, # 'Г' - 37: 0, # 'Д' - 44: 0, # 'Е' - 55: 0, # 'Ж' - 47: 0, # 'З' - 40: 0, # 'И' - 59: 0, # 'Й' - 33: 0, # 'К' - 46: 0, # 'Л' - 38: 0, # 'М' - 36: 0, # 'Н' - 41: 0, # 'О' - 30: 0, # 'П' - 39: 0, # 'Р' - 28: 0, # 'С' - 34: 0, # 'Т' - 51: 0, # 'У' - 48: 0, # 'Ф' - 49: 0, # 'Х' - 53: 0, # 'Ц' - 50: 0, # 'Ч' - 54: 0, # 'Ш' - 57: 0, # 'Щ' - 61: 0, # 'Ъ' - 60: 0, # 'Ю' - 56: 0, # 'Я' - 1: 1, # 'а' - 18: 2, # 'б' - 9: 2, # 'в' - 20: 1, # 'г' - 11: 2, # 'д' - 3: 2, 
# 'е' - 23: 0, # 'ж' - 15: 2, # 'з' - 2: 1, # 'и' - 26: 0, # 'й' - 12: 3, # 'к' - 10: 2, # 'л' - 14: 2, # 'м' - 6: 3, # 'н' - 4: 2, # 'о' - 13: 1, # 'п' - 7: 2, # 'р' - 8: 3, # 'с' - 5: 3, # 'т' - 19: 1, # 'у' - 29: 2, # 'ф' - 25: 1, # 'х' - 22: 2, # 'ц' - 21: 2, # 'ч' - 27: 1, # 'ш' - 24: 1, # 'щ' - 17: 1, # 'ъ' - 52: 0, # 'ь' - 42: 0, # 'ю' - 16: 1, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, - 12: { # 'к' - 63: 1, # 'e' - 45: 0, # '\xad' - 31: 0, # 'А' - 32: 0, # 'Б' - 35: 1, # 'В' - 43: 0, # 'Г' - 37: 0, # 'Д' - 44: 0, # 'Е' - 55: 0, # 'Ж' - 47: 0, # 'З' - 40: 1, # 'И' - 59: 0, # 'Й' - 33: 0, # 'К' - 46: 0, # 'Л' - 38: 0, # 'М' - 36: 0, # 'Н' - 41: 0, # 'О' - 30: 0, # 'П' - 39: 0, # 'Р' - 28: 0, # 'С' - 34: 0, # 'Т' - 51: 0, # 'У' - 48: 0, # 'Ф' - 49: 0, # 'Х' - 53: 0, # 'Ц' - 50: 0, # 'Ч' - 54: 0, # 'Ш' - 57: 0, # 'Щ' - 61: 0, # 'Ъ' - 60: 0, # 'Ю' - 56: 0, # 'Я' - 1: 3, # 'а' - 18: 1, # 'б' - 9: 3, # 'в' - 20: 2, # 'г' - 11: 1, # 'д' - 3: 3, # 'е' - 23: 0, # 'ж' - 15: 2, # 'з' - 2: 3, # 'и' - 26: 0, # 'й' - 12: 1, # 'к' - 10: 3, # 'л' - 14: 2, # 'м' - 6: 3, # 'н' - 4: 3, # 'о' - 13: 1, # 'п' - 7: 3, # 'р' - 8: 3, # 'с' - 5: 3, # 'т' - 19: 3, # 'у' - 29: 1, # 'ф' - 25: 1, # 'х' - 22: 3, # 'ц' - 21: 2, # 'ч' - 27: 1, # 'ш' - 24: 0, # 'щ' - 17: 3, # 'ъ' - 52: 1, # 'ь' - 42: 2, # 'ю' - 16: 1, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, - 10: { # 'л' - 63: 1, # 'e' - 45: 1, # '\xad' - 31: 0, # 'А' - 32: 0, # 'Б' - 35: 0, # 'В' - 43: 0, # 'Г' - 37: 0, # 'Д' - 44: 0, # 'Е' - 55: 0, # 'Ж' - 47: 0, # 'З' - 40: 0, # 'И' - 59: 0, # 'Й' - 33: 0, # 'К' - 46: 0, # 'Л' - 38: 0, # 'М' - 36: 0, # 'Н' - 41: 0, # 'О' - 30: 0, # 'П' - 39: 0, # 'Р' - 28: 1, # 'С' - 34: 0, # 'Т' - 51: 0, # 'У' - 48: 0, # 'Ф' - 49: 0, # 'Х' - 53: 0, # 'Ц' - 50: 0, # 'Ч' - 54: 0, # 'Ш' - 57: 0, # 'Щ' - 61: 0, # 'Ъ' - 60: 0, # 'Ю' - 56: 0, # 'Я' - 1: 3, # 'а' - 18: 3, # 'б' - 9: 3, # 'в' - 20: 3, # 'г' - 11: 2, # 'д' - 3: 3, # 'е' - 23: 3, # 'ж' - 15: 2, # 'з' - 2: 3, # 'и' - 26: 0, # 'й' - 12: 3, # 'к' - 10: 1, # 'л' - 14: 2, # 'м' - 6: 3, # 'н' - 4: 3, # 'о' - 13: 2, # 'п' - 7: 2, # 'р' - 8: 3, # 'с' - 5: 3, # 'т' - 19: 3, # 'у' - 29: 2, # 'ф' - 25: 2, # 'х' - 22: 2, # 'ц' - 21: 2, # 'ч' - 27: 2, # 'ш' - 24: 1, # 'щ' - 17: 3, # 'ъ' - 52: 2, # 'ь' - 42: 3, # 'ю' - 16: 3, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, - 14: { # 'м' - 63: 1, # 'e' - 45: 0, # '\xad' - 31: 1, # 'А' - 32: 0, # 'Б' - 35: 0, # 'В' - 43: 0, # 'Г' - 37: 0, # 'Д' - 44: 0, # 'Е' - 55: 0, # 'Ж' - 47: 0, # 'З' - 40: 0, # 'И' - 59: 0, # 'Й' - 33: 0, # 'К' - 46: 0, # 'Л' - 38: 0, # 'М' - 36: 0, # 'Н' - 41: 0, # 'О' - 30: 0, # 'П' - 39: 0, # 'Р' - 28: 0, # 'С' - 34: 0, # 'Т' - 51: 0, # 'У' - 48: 0, # 'Ф' - 49: 0, # 'Х' - 53: 0, # 'Ц' - 50: 0, # 'Ч' - 54: 0, # 'Ш' - 57: 0, # 'Щ' - 61: 0, # 'Ъ' - 60: 0, # 'Ю' - 56: 0, # 'Я' - 1: 3, # 'а' - 18: 3, # 'б' - 9: 3, # 'в' - 20: 1, # 'г' - 11: 1, # 'д' - 3: 3, # 'е' - 23: 1, # 'ж' - 15: 1, # 'з' - 2: 3, # 'и' - 26: 0, # 'й' - 12: 2, # 'к' - 10: 3, # 'л' - 14: 1, # 'м' - 6: 3, # 'н' - 4: 3, # 'о' - 13: 3, # 'п' - 7: 2, # 'р' - 8: 2, # 'с' - 5: 1, # 'т' - 19: 3, # 'у' - 29: 2, # 'ф' - 25: 1, # 'х' - 22: 2, # 'ц' - 21: 2, # 'ч' - 27: 2, # 'ш' - 24: 1, # 'щ' - 17: 3, # 'ъ' - 52: 1, # 'ь' - 42: 2, # 'ю' - 16: 3, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, - 6: { # 'н' - 63: 1, # 'e' - 45: 0, # '\xad' - 31: 0, # 'А' - 32: 0, # 'Б' - 35: 0, # 'В' - 43: 0, # 'Г' - 37: 0, # 'Д' - 44: 0, # 'Е' - 55: 0, # 'Ж' - 47: 0, # 'З' - 40: 0, # 'И' - 59: 0, # 'Й' - 33: 0, # 'К' - 46: 0, # 'Л' - 38: 0, # 'М' - 36: 0, # 'Н' - 41: 0, # 'О' - 30: 0, # 'П' 
- 39: 1, # 'Р' - 28: 0, # 'С' - 34: 0, # 'Т' - 51: 0, # 'У' - 48: 0, # 'Ф' - 49: 0, # 'Х' - 53: 0, # 'Ц' - 50: 0, # 'Ч' - 54: 0, # 'Ш' - 57: 0, # 'Щ' - 61: 0, # 'Ъ' - 60: 0, # 'Ю' - 56: 0, # 'Я' - 1: 3, # 'а' - 18: 2, # 'б' - 9: 2, # 'в' - 20: 3, # 'г' - 11: 3, # 'д' - 3: 3, # 'е' - 23: 2, # 'ж' - 15: 2, # 'з' - 2: 3, # 'и' - 26: 0, # 'й' - 12: 3, # 'к' - 10: 2, # 'л' - 14: 1, # 'м' - 6: 3, # 'н' - 4: 3, # 'о' - 13: 1, # 'п' - 7: 2, # 'р' - 8: 3, # 'с' - 5: 3, # 'т' - 19: 3, # 'у' - 29: 3, # 'ф' - 25: 2, # 'х' - 22: 3, # 'ц' - 21: 3, # 'ч' - 27: 2, # 'ш' - 24: 1, # 'щ' - 17: 3, # 'ъ' - 52: 2, # 'ь' - 42: 2, # 'ю' - 16: 3, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, - 4: { # 'о' - 63: 0, # 'e' - 45: 1, # '\xad' - 31: 0, # 'А' - 32: 0, # 'Б' - 35: 0, # 'В' - 43: 0, # 'Г' - 37: 0, # 'Д' - 44: 0, # 'Е' - 55: 0, # 'Ж' - 47: 0, # 'З' - 40: 0, # 'И' - 59: 0, # 'Й' - 33: 0, # 'К' - 46: 0, # 'Л' - 38: 0, # 'М' - 36: 0, # 'Н' - 41: 0, # 'О' - 30: 0, # 'П' - 39: 0, # 'Р' - 28: 0, # 'С' - 34: 0, # 'Т' - 51: 0, # 'У' - 48: 0, # 'Ф' - 49: 0, # 'Х' - 53: 0, # 'Ц' - 50: 0, # 'Ч' - 54: 0, # 'Ш' - 57: 0, # 'Щ' - 61: 0, # 'Ъ' - 60: 0, # 'Ю' - 56: 0, # 'Я' - 1: 2, # 'а' - 18: 3, # 'б' - 9: 3, # 'в' - 20: 3, # 'г' - 11: 3, # 'д' - 3: 3, # 'е' - 23: 3, # 'ж' - 15: 3, # 'з' - 2: 3, # 'и' - 26: 3, # 'й' - 12: 3, # 'к' - 10: 3, # 'л' - 14: 3, # 'м' - 6: 3, # 'н' - 4: 2, # 'о' - 13: 3, # 'п' - 7: 3, # 'р' - 8: 3, # 'с' - 5: 3, # 'т' - 19: 2, # 'у' - 29: 3, # 'ф' - 25: 3, # 'х' - 22: 3, # 'ц' - 21: 3, # 'ч' - 27: 3, # 'ш' - 24: 3, # 'щ' - 17: 1, # 'ъ' - 52: 0, # 'ь' - 42: 1, # 'ю' - 16: 3, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, - 13: { # 'п' - 63: 1, # 'e' - 45: 0, # '\xad' - 31: 0, # 'А' - 32: 0, # 'Б' - 35: 0, # 'В' - 43: 0, # 'Г' - 37: 0, # 'Д' - 44: 0, # 'Е' - 55: 0, # 'Ж' - 47: 0, # 'З' - 40: 0, # 'И' - 59: 0, # 'Й' - 33: 0, # 'К' - 46: 0, # 'Л' - 38: 0, # 'М' - 36: 0, # 'Н' - 41: 0, # 'О' - 30: 0, # 'П' - 39: 0, # 'Р' - 28: 0, # 'С' - 34: 0, # 'Т' - 51: 0, # 'У' - 48: 0, # 'Ф' - 49: 0, # 'Х' - 53: 0, # 'Ц' - 50: 0, # 'Ч' - 54: 0, # 'Ш' - 57: 0, # 'Щ' - 61: 0, # 'Ъ' - 60: 0, # 'Ю' - 56: 0, # 'Я' - 1: 3, # 'а' - 18: 1, # 'б' - 9: 2, # 'в' - 20: 1, # 'г' - 11: 1, # 'д' - 3: 3, # 'е' - 23: 0, # 'ж' - 15: 1, # 'з' - 2: 3, # 'и' - 26: 1, # 'й' - 12: 2, # 'к' - 10: 3, # 'л' - 14: 1, # 'м' - 6: 2, # 'н' - 4: 3, # 'о' - 13: 1, # 'п' - 7: 3, # 'р' - 8: 2, # 'с' - 5: 2, # 'т' - 19: 3, # 'у' - 29: 1, # 'ф' - 25: 1, # 'х' - 22: 2, # 'ц' - 21: 2, # 'ч' - 27: 1, # 'ш' - 24: 1, # 'щ' - 17: 3, # 'ъ' - 52: 1, # 'ь' - 42: 2, # 'ю' - 16: 2, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, - 7: { # 'р' - 63: 1, # 'e' - 45: 0, # '\xad' - 31: 0, # 'А' - 32: 0, # 'Б' - 35: 0, # 'В' - 43: 0, # 'Г' - 37: 0, # 'Д' - 44: 0, # 'Е' - 55: 0, # 'Ж' - 47: 0, # 'З' - 40: 0, # 'И' - 59: 0, # 'Й' - 33: 0, # 'К' - 46: 0, # 'Л' - 38: 0, # 'М' - 36: 0, # 'Н' - 41: 0, # 'О' - 30: 0, # 'П' - 39: 0, # 'Р' - 28: 0, # 'С' - 34: 0, # 'Т' - 51: 0, # 'У' - 48: 0, # 'Ф' - 49: 0, # 'Х' - 53: 0, # 'Ц' - 50: 0, # 'Ч' - 54: 0, # 'Ш' - 57: 0, # 'Щ' - 61: 0, # 'Ъ' - 60: 0, # 'Ю' - 56: 0, # 'Я' - 1: 3, # 'а' - 18: 3, # 'б' - 9: 3, # 'в' - 20: 3, # 'г' - 11: 3, # 'д' - 3: 3, # 'е' - 23: 3, # 'ж' - 15: 2, # 'з' - 2: 3, # 'и' - 26: 0, # 'й' - 12: 3, # 'к' - 10: 3, # 'л' - 14: 3, # 'м' - 6: 3, # 'н' - 4: 3, # 'о' - 13: 2, # 'п' - 7: 1, # 'р' - 8: 3, # 'с' - 5: 3, # 'т' - 19: 3, # 'у' - 29: 2, # 'ф' - 25: 3, # 'х' - 22: 3, # 'ц' - 21: 2, # 'ч' - 27: 3, # 'ш' - 24: 1, # 'щ' - 17: 3, # 'ъ' - 52: 1, # 'ь' - 42: 2, # 'ю' - 16: 3, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, - 8: { # 
'с' - 63: 1, # 'e' - 45: 0, # '\xad' - 31: 0, # 'А' - 32: 0, # 'Б' - 35: 0, # 'В' - 43: 0, # 'Г' - 37: 0, # 'Д' - 44: 0, # 'Е' - 55: 0, # 'Ж' - 47: 0, # 'З' - 40: 0, # 'И' - 59: 0, # 'Й' - 33: 0, # 'К' - 46: 0, # 'Л' - 38: 0, # 'М' - 36: 0, # 'Н' - 41: 0, # 'О' - 30: 0, # 'П' - 39: 0, # 'Р' - 28: 0, # 'С' - 34: 0, # 'Т' - 51: 0, # 'У' - 48: 0, # 'Ф' - 49: 0, # 'Х' - 53: 0, # 'Ц' - 50: 0, # 'Ч' - 54: 0, # 'Ш' - 57: 0, # 'Щ' - 61: 0, # 'Ъ' - 60: 0, # 'Ю' - 56: 0, # 'Я' - 1: 3, # 'а' - 18: 2, # 'б' - 9: 3, # 'в' - 20: 2, # 'г' - 11: 2, # 'д' - 3: 3, # 'е' - 23: 0, # 'ж' - 15: 1, # 'з' - 2: 3, # 'и' - 26: 0, # 'й' - 12: 3, # 'к' - 10: 3, # 'л' - 14: 3, # 'м' - 6: 3, # 'н' - 4: 3, # 'о' - 13: 3, # 'п' - 7: 3, # 'р' - 8: 1, # 'с' - 5: 3, # 'т' - 19: 3, # 'у' - 29: 2, # 'ф' - 25: 2, # 'х' - 22: 2, # 'ц' - 21: 2, # 'ч' - 27: 2, # 'ш' - 24: 0, # 'щ' - 17: 3, # 'ъ' - 52: 2, # 'ь' - 42: 2, # 'ю' - 16: 3, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, - 5: { # 'т' - 63: 1, # 'e' - 45: 0, # '\xad' - 31: 0, # 'А' - 32: 0, # 'Б' - 35: 0, # 'В' - 43: 0, # 'Г' - 37: 0, # 'Д' - 44: 0, # 'Е' - 55: 0, # 'Ж' - 47: 0, # 'З' - 40: 0, # 'И' - 59: 0, # 'Й' - 33: 0, # 'К' - 46: 0, # 'Л' - 38: 0, # 'М' - 36: 0, # 'Н' - 41: 0, # 'О' - 30: 0, # 'П' - 39: 0, # 'Р' - 28: 0, # 'С' - 34: 0, # 'Т' - 51: 0, # 'У' - 48: 0, # 'Ф' - 49: 0, # 'Х' - 53: 0, # 'Ц' - 50: 0, # 'Ч' - 54: 0, # 'Ш' - 57: 0, # 'Щ' - 61: 0, # 'Ъ' - 60: 0, # 'Ю' - 56: 0, # 'Я' - 1: 3, # 'а' - 18: 3, # 'б' - 9: 3, # 'в' - 20: 2, # 'г' - 11: 2, # 'д' - 3: 3, # 'е' - 23: 1, # 'ж' - 15: 1, # 'з' - 2: 3, # 'и' - 26: 0, # 'й' - 12: 3, # 'к' - 10: 3, # 'л' - 14: 2, # 'м' - 6: 3, # 'н' - 4: 3, # 'о' - 13: 2, # 'п' - 7: 3, # 'р' - 8: 3, # 'с' - 5: 3, # 'т' - 19: 3, # 'у' - 29: 1, # 'ф' - 25: 2, # 'х' - 22: 2, # 'ц' - 21: 2, # 'ч' - 27: 1, # 'ш' - 24: 1, # 'щ' - 17: 3, # 'ъ' - 52: 2, # 'ь' - 42: 2, # 'ю' - 16: 3, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, - 19: { # 'у' - 63: 0, # 'e' - 45: 0, # '\xad' - 31: 0, # 'А' - 32: 0, # 'Б' - 35: 0, # 'В' - 43: 0, # 'Г' - 37: 0, # 'Д' - 44: 0, # 'Е' - 55: 0, # 'Ж' - 47: 0, # 'З' - 40: 0, # 'И' - 59: 0, # 'Й' - 33: 0, # 'К' - 46: 0, # 'Л' - 38: 0, # 'М' - 36: 0, # 'Н' - 41: 0, # 'О' - 30: 0, # 'П' - 39: 0, # 'Р' - 28: 0, # 'С' - 34: 0, # 'Т' - 51: 0, # 'У' - 48: 0, # 'Ф' - 49: 0, # 'Х' - 53: 0, # 'Ц' - 50: 0, # 'Ч' - 54: 0, # 'Ш' - 57: 0, # 'Щ' - 61: 0, # 'Ъ' - 60: 0, # 'Ю' - 56: 0, # 'Я' - 1: 3, # 'а' - 18: 3, # 'б' - 9: 3, # 'в' - 20: 3, # 'г' - 11: 3, # 'д' - 3: 2, # 'е' - 23: 3, # 'ж' - 15: 3, # 'з' - 2: 2, # 'и' - 26: 2, # 'й' - 12: 3, # 'к' - 10: 3, # 'л' - 14: 3, # 'м' - 6: 3, # 'н' - 4: 2, # 'о' - 13: 3, # 'п' - 7: 3, # 'р' - 8: 3, # 'с' - 5: 3, # 'т' - 19: 1, # 'у' - 29: 2, # 'ф' - 25: 2, # 'х' - 22: 2, # 'ц' - 21: 3, # 'ч' - 27: 3, # 'ш' - 24: 2, # 'щ' - 17: 1, # 'ъ' - 52: 0, # 'ь' - 42: 1, # 'ю' - 16: 1, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, - 29: { # 'ф' - 63: 1, # 'e' - 45: 0, # '\xad' - 31: 0, # 'А' - 32: 0, # 'Б' - 35: 0, # 'В' - 43: 0, # 'Г' - 37: 0, # 'Д' - 44: 0, # 'Е' - 55: 0, # 'Ж' - 47: 0, # 'З' - 40: 0, # 'И' - 59: 0, # 'Й' - 33: 0, # 'К' - 46: 0, # 'Л' - 38: 0, # 'М' - 36: 0, # 'Н' - 41: 0, # 'О' - 30: 0, # 'П' - 39: 0, # 'Р' - 28: 0, # 'С' - 34: 0, # 'Т' - 51: 0, # 'У' - 48: 0, # 'Ф' - 49: 0, # 'Х' - 53: 0, # 'Ц' - 50: 0, # 'Ч' - 54: 0, # 'Ш' - 57: 0, # 'Щ' - 61: 0, # 'Ъ' - 60: 0, # 'Ю' - 56: 0, # 'Я' - 1: 3, # 'а' - 18: 1, # 'б' - 9: 1, # 'в' - 20: 1, # 'г' - 11: 0, # 'д' - 3: 3, # 'е' - 23: 0, # 'ж' - 15: 0, # 'з' - 2: 3, # 'и' - 26: 0, # 'й' - 12: 2, # 'к' - 10: 2, # 'л' - 14: 1, # 'м' - 6: 1, # 'н' - 4: 
3, # 'о' - 13: 0, # 'п' - 7: 2, # 'р' - 8: 2, # 'с' - 5: 2, # 'т' - 19: 2, # 'у' - 29: 0, # 'ф' - 25: 1, # 'х' - 22: 0, # 'ц' - 21: 1, # 'ч' - 27: 1, # 'ш' - 24: 0, # 'щ' - 17: 2, # 'ъ' - 52: 2, # 'ь' - 42: 1, # 'ю' - 16: 1, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, - 25: { # 'х' - 63: 0, # 'e' - 45: 0, # '\xad' - 31: 0, # 'А' - 32: 0, # 'Б' - 35: 0, # 'В' - 43: 0, # 'Г' - 37: 0, # 'Д' - 44: 0, # 'Е' - 55: 0, # 'Ж' - 47: 0, # 'З' - 40: 0, # 'И' - 59: 0, # 'Й' - 33: 0, # 'К' - 46: 0, # 'Л' - 38: 0, # 'М' - 36: 0, # 'Н' - 41: 0, # 'О' - 30: 0, # 'П' - 39: 0, # 'Р' - 28: 0, # 'С' - 34: 0, # 'Т' - 51: 0, # 'У' - 48: 0, # 'Ф' - 49: 0, # 'Х' - 53: 0, # 'Ц' - 50: 0, # 'Ч' - 54: 0, # 'Ш' - 57: 0, # 'Щ' - 61: 0, # 'Ъ' - 60: 0, # 'Ю' - 56: 0, # 'Я' - 1: 3, # 'а' - 18: 1, # 'б' - 9: 3, # 'в' - 20: 0, # 'г' - 11: 1, # 'д' - 3: 2, # 'е' - 23: 0, # 'ж' - 15: 1, # 'з' - 2: 3, # 'и' - 26: 0, # 'й' - 12: 1, # 'к' - 10: 2, # 'л' - 14: 2, # 'м' - 6: 3, # 'н' - 4: 3, # 'о' - 13: 1, # 'п' - 7: 3, # 'р' - 8: 1, # 'с' - 5: 2, # 'т' - 19: 3, # 'у' - 29: 0, # 'ф' - 25: 1, # 'х' - 22: 0, # 'ц' - 21: 1, # 'ч' - 27: 0, # 'ш' - 24: 0, # 'щ' - 17: 2, # 'ъ' - 52: 0, # 'ь' - 42: 1, # 'ю' - 16: 1, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, - 22: { # 'ц' - 63: 1, # 'e' - 45: 0, # '\xad' - 31: 0, # 'А' - 32: 0, # 'Б' - 35: 0, # 'В' - 43: 0, # 'Г' - 37: 0, # 'Д' - 44: 0, # 'Е' - 55: 0, # 'Ж' - 47: 0, # 'З' - 40: 0, # 'И' - 59: 0, # 'Й' - 33: 0, # 'К' - 46: 0, # 'Л' - 38: 0, # 'М' - 36: 0, # 'Н' - 41: 0, # 'О' - 30: 0, # 'П' - 39: 0, # 'Р' - 28: 0, # 'С' - 34: 0, # 'Т' - 51: 0, # 'У' - 48: 0, # 'Ф' - 49: 0, # 'Х' - 53: 0, # 'Ц' - 50: 0, # 'Ч' - 54: 0, # 'Ш' - 57: 0, # 'Щ' - 61: 0, # 'Ъ' - 60: 0, # 'Ю' - 56: 0, # 'Я' - 1: 3, # 'а' - 18: 1, # 'б' - 9: 2, # 'в' - 20: 1, # 'г' - 11: 1, # 'д' - 3: 3, # 'е' - 23: 0, # 'ж' - 15: 1, # 'з' - 2: 3, # 'и' - 26: 0, # 'й' - 12: 2, # 'к' - 10: 1, # 'л' - 14: 1, # 'м' - 6: 1, # 'н' - 4: 2, # 'о' - 13: 1, # 'п' - 7: 1, # 'р' - 8: 1, # 'с' - 5: 1, # 'т' - 19: 2, # 'у' - 29: 1, # 'ф' - 25: 1, # 'х' - 22: 1, # 'ц' - 21: 1, # 'ч' - 27: 1, # 'ш' - 24: 1, # 'щ' - 17: 2, # 'ъ' - 52: 1, # 'ь' - 42: 0, # 'ю' - 16: 2, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, - 21: { # 'ч' - 63: 1, # 'e' - 45: 0, # '\xad' - 31: 0, # 'А' - 32: 0, # 'Б' - 35: 0, # 'В' - 43: 0, # 'Г' - 37: 0, # 'Д' - 44: 0, # 'Е' - 55: 0, # 'Ж' - 47: 0, # 'З' - 40: 0, # 'И' - 59: 0, # 'Й' - 33: 0, # 'К' - 46: 0, # 'Л' - 38: 0, # 'М' - 36: 0, # 'Н' - 41: 0, # 'О' - 30: 0, # 'П' - 39: 0, # 'Р' - 28: 0, # 'С' - 34: 0, # 'Т' - 51: 0, # 'У' - 48: 0, # 'Ф' - 49: 0, # 'Х' - 53: 0, # 'Ц' - 50: 0, # 'Ч' - 54: 0, # 'Ш' - 57: 0, # 'Щ' - 61: 0, # 'Ъ' - 60: 0, # 'Ю' - 56: 0, # 'Я' - 1: 3, # 'а' - 18: 1, # 'б' - 9: 3, # 'в' - 20: 1, # 'г' - 11: 0, # 'д' - 3: 3, # 'е' - 23: 1, # 'ж' - 15: 0, # 'з' - 2: 3, # 'и' - 26: 0, # 'й' - 12: 3, # 'к' - 10: 2, # 'л' - 14: 2, # 'м' - 6: 3, # 'н' - 4: 3, # 'о' - 13: 0, # 'п' - 7: 2, # 'р' - 8: 0, # 'с' - 5: 2, # 'т' - 19: 3, # 'у' - 29: 0, # 'ф' - 25: 0, # 'х' - 22: 0, # 'ц' - 21: 0, # 'ч' - 27: 1, # 'ш' - 24: 0, # 'щ' - 17: 2, # 'ъ' - 52: 0, # 'ь' - 42: 1, # 'ю' - 16: 0, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, - 27: { # 'ш' - 63: 1, # 'e' - 45: 0, # '\xad' - 31: 0, # 'А' - 32: 0, # 'Б' - 35: 0, # 'В' - 43: 0, # 'Г' - 37: 0, # 'Д' - 44: 0, # 'Е' - 55: 0, # 'Ж' - 47: 0, # 'З' - 40: 0, # 'И' - 59: 0, # 'Й' - 33: 0, # 'К' - 46: 0, # 'Л' - 38: 0, # 'М' - 36: 0, # 'Н' - 41: 0, # 'О' - 30: 0, # 'П' - 39: 0, # 'Р' - 28: 0, # 'С' - 34: 0, # 'Т' - 51: 0, # 'У' - 48: 0, # 'Ф' - 49: 0, # 'Х' - 53: 0, # 'Ц' - 50: 0, # 'Ч' - 54: 
0, # 'Ш' - 57: 0, # 'Щ' - 61: 0, # 'Ъ' - 60: 0, # 'Ю' - 56: 0, # 'Я' - 1: 3, # 'а' - 18: 0, # 'б' - 9: 2, # 'в' - 20: 0, # 'г' - 11: 1, # 'д' - 3: 3, # 'е' - 23: 0, # 'ж' - 15: 0, # 'з' - 2: 3, # 'и' - 26: 0, # 'й' - 12: 3, # 'к' - 10: 2, # 'л' - 14: 1, # 'м' - 6: 3, # 'н' - 4: 2, # 'о' - 13: 2, # 'п' - 7: 1, # 'р' - 8: 0, # 'с' - 5: 1, # 'т' - 19: 2, # 'у' - 29: 1, # 'ф' - 25: 0, # 'х' - 22: 0, # 'ц' - 21: 1, # 'ч' - 27: 0, # 'ш' - 24: 0, # 'щ' - 17: 2, # 'ъ' - 52: 1, # 'ь' - 42: 1, # 'ю' - 16: 0, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, - 24: { # 'щ' - 63: 1, # 'e' - 45: 0, # '\xad' - 31: 0, # 'А' - 32: 0, # 'Б' - 35: 0, # 'В' - 43: 0, # 'Г' - 37: 0, # 'Д' - 44: 0, # 'Е' - 55: 0, # 'Ж' - 47: 0, # 'З' - 40: 0, # 'И' - 59: 0, # 'Й' - 33: 0, # 'К' - 46: 0, # 'Л' - 38: 0, # 'М' - 36: 0, # 'Н' - 41: 0, # 'О' - 30: 0, # 'П' - 39: 0, # 'Р' - 28: 0, # 'С' - 34: 0, # 'Т' - 51: 0, # 'У' - 48: 0, # 'Ф' - 49: 0, # 'Х' - 53: 0, # 'Ц' - 50: 0, # 'Ч' - 54: 0, # 'Ш' - 57: 0, # 'Щ' - 61: 0, # 'Ъ' - 60: 0, # 'Ю' - 56: 0, # 'Я' - 1: 3, # 'а' - 18: 0, # 'б' - 9: 1, # 'в' - 20: 0, # 'г' - 11: 0, # 'д' - 3: 3, # 'е' - 23: 0, # 'ж' - 15: 0, # 'з' - 2: 3, # 'и' - 26: 0, # 'й' - 12: 1, # 'к' - 10: 0, # 'л' - 14: 0, # 'м' - 6: 2, # 'н' - 4: 3, # 'о' - 13: 0, # 'п' - 7: 1, # 'р' - 8: 0, # 'с' - 5: 2, # 'т' - 19: 3, # 'у' - 29: 0, # 'ф' - 25: 0, # 'х' - 22: 1, # 'ц' - 21: 0, # 'ч' - 27: 0, # 'ш' - 24: 0, # 'щ' - 17: 1, # 'ъ' - 52: 0, # 'ь' - 42: 0, # 'ю' - 16: 2, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, - 17: { # 'ъ' - 63: 0, # 'e' - 45: 0, # '\xad' - 31: 0, # 'А' - 32: 0, # 'Б' - 35: 0, # 'В' - 43: 0, # 'Г' - 37: 0, # 'Д' - 44: 0, # 'Е' - 55: 0, # 'Ж' - 47: 0, # 'З' - 40: 0, # 'И' - 59: 0, # 'Й' - 33: 0, # 'К' - 46: 0, # 'Л' - 38: 0, # 'М' - 36: 0, # 'Н' - 41: 0, # 'О' - 30: 0, # 'П' - 39: 0, # 'Р' - 28: 0, # 'С' - 34: 0, # 'Т' - 51: 0, # 'У' - 48: 0, # 'Ф' - 49: 0, # 'Х' - 53: 0, # 'Ц' - 50: 0, # 'Ч' - 54: 0, # 'Ш' - 57: 0, # 'Щ' - 61: 0, # 'Ъ' - 60: 0, # 'Ю' - 56: 0, # 'Я' - 1: 1, # 'а' - 18: 3, # 'б' - 9: 3, # 'в' - 20: 3, # 'г' - 11: 3, # 'д' - 3: 2, # 'е' - 23: 3, # 'ж' - 15: 3, # 'з' - 2: 1, # 'и' - 26: 2, # 'й' - 12: 3, # 'к' - 10: 3, # 'л' - 14: 3, # 'м' - 6: 3, # 'н' - 4: 3, # 'о' - 13: 3, # 'п' - 7: 3, # 'р' - 8: 3, # 'с' - 5: 3, # 'т' - 19: 1, # 'у' - 29: 1, # 'ф' - 25: 2, # 'х' - 22: 2, # 'ц' - 21: 3, # 'ч' - 27: 2, # 'ш' - 24: 3, # 'щ' - 17: 0, # 'ъ' - 52: 0, # 'ь' - 42: 2, # 'ю' - 16: 0, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, - 52: { # 'ь' - 63: 0, # 'e' - 45: 0, # '\xad' - 31: 0, # 'А' - 32: 0, # 'Б' - 35: 0, # 'В' - 43: 0, # 'Г' - 37: 0, # 'Д' - 44: 0, # 'Е' - 55: 0, # 'Ж' - 47: 0, # 'З' - 40: 0, # 'И' - 59: 0, # 'Й' - 33: 0, # 'К' - 46: 0, # 'Л' - 38: 0, # 'М' - 36: 0, # 'Н' - 41: 0, # 'О' - 30: 0, # 'П' - 39: 0, # 'Р' - 28: 0, # 'С' - 34: 0, # 'Т' - 51: 0, # 'У' - 48: 0, # 'Ф' - 49: 0, # 'Х' - 53: 0, # 'Ц' - 50: 0, # 'Ч' - 54: 0, # 'Ш' - 57: 0, # 'Щ' - 61: 0, # 'Ъ' - 60: 0, # 'Ю' - 56: 0, # 'Я' - 1: 0, # 'а' - 18: 0, # 'б' - 9: 0, # 'в' - 20: 0, # 'г' - 11: 0, # 'д' - 3: 1, # 'е' - 23: 0, # 'ж' - 15: 0, # 'з' - 2: 0, # 'и' - 26: 0, # 'й' - 12: 1, # 'к' - 10: 0, # 'л' - 14: 0, # 'м' - 6: 1, # 'н' - 4: 3, # 'о' - 13: 0, # 'п' - 7: 0, # 'р' - 8: 0, # 'с' - 5: 1, # 'т' - 19: 0, # 'у' - 29: 0, # 'ф' - 25: 0, # 'х' - 22: 1, # 'ц' - 21: 0, # 'ч' - 27: 0, # 'ш' - 24: 0, # 'щ' - 17: 0, # 'ъ' - 52: 0, # 'ь' - 42: 1, # 'ю' - 16: 0, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, - 42: { # 'ю' - 63: 0, # 'e' - 45: 0, # '\xad' - 31: 0, # 'А' - 32: 0, # 'Б' - 35: 0, # 'В' - 43: 0, # 'Г' - 37: 0, # 'Д' - 44: 0, # 
'Е' - 55: 0, # 'Ж' - 47: 0, # 'З' - 40: 0, # 'И' - 59: 0, # 'Й' - 33: 0, # 'К' - 46: 0, # 'Л' - 38: 0, # 'М' - 36: 0, # 'Н' - 41: 0, # 'О' - 30: 0, # 'П' - 39: 0, # 'Р' - 28: 0, # 'С' - 34: 0, # 'Т' - 51: 0, # 'У' - 48: 0, # 'Ф' - 49: 0, # 'Х' - 53: 0, # 'Ц' - 50: 0, # 'Ч' - 54: 0, # 'Ш' - 57: 0, # 'Щ' - 61: 0, # 'Ъ' - 60: 0, # 'Ю' - 56: 0, # 'Я' - 1: 1, # 'а' - 18: 2, # 'б' - 9: 1, # 'в' - 20: 2, # 'г' - 11: 2, # 'д' - 3: 1, # 'е' - 23: 2, # 'ж' - 15: 2, # 'з' - 2: 1, # 'и' - 26: 1, # 'й' - 12: 2, # 'к' - 10: 2, # 'л' - 14: 2, # 'м' - 6: 2, # 'н' - 4: 1, # 'о' - 13: 1, # 'п' - 7: 2, # 'р' - 8: 2, # 'с' - 5: 2, # 'т' - 19: 1, # 'у' - 29: 1, # 'ф' - 25: 1, # 'х' - 22: 2, # 'ц' - 21: 3, # 'ч' - 27: 1, # 'ш' - 24: 1, # 'щ' - 17: 1, # 'ъ' - 52: 0, # 'ь' - 42: 0, # 'ю' - 16: 1, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, - 16: { # 'я' - 63: 0, # 'e' - 45: 1, # '\xad' - 31: 0, # 'А' - 32: 0, # 'Б' - 35: 0, # 'В' - 43: 0, # 'Г' - 37: 0, # 'Д' - 44: 0, # 'Е' - 55: 0, # 'Ж' - 47: 0, # 'З' - 40: 0, # 'И' - 59: 0, # 'Й' - 33: 0, # 'К' - 46: 0, # 'Л' - 38: 0, # 'М' - 36: 0, # 'Н' - 41: 0, # 'О' - 30: 0, # 'П' - 39: 0, # 'Р' - 28: 0, # 'С' - 34: 0, # 'Т' - 51: 0, # 'У' - 48: 0, # 'Ф' - 49: 0, # 'Х' - 53: 0, # 'Ц' - 50: 0, # 'Ч' - 54: 0, # 'Ш' - 57: 0, # 'Щ' - 61: 0, # 'Ъ' - 60: 0, # 'Ю' - 56: 0, # 'Я' - 1: 0, # 'а' - 18: 3, # 'б' - 9: 3, # 'в' - 20: 2, # 'г' - 11: 3, # 'д' - 3: 2, # 'е' - 23: 1, # 'ж' - 15: 2, # 'з' - 2: 1, # 'и' - 26: 2, # 'й' - 12: 3, # 'к' - 10: 3, # 'л' - 14: 3, # 'м' - 6: 3, # 'н' - 4: 1, # 'о' - 13: 2, # 'п' - 7: 2, # 'р' - 8: 3, # 'с' - 5: 3, # 'т' - 19: 1, # 'у' - 29: 1, # 'ф' - 25: 3, # 'х' - 22: 2, # 'ц' - 21: 1, # 'ч' - 27: 1, # 'ш' - 24: 2, # 'щ' - 17: 0, # 'ъ' - 52: 0, # 'ь' - 42: 0, # 'ю' - 16: 1, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, - 58: { # 'є' - 63: 0, # 'e' - 45: 0, # '\xad' - 31: 0, # 'А' - 32: 0, # 'Б' - 35: 0, # 'В' - 43: 0, # 'Г' - 37: 0, # 'Д' - 44: 0, # 'Е' - 55: 0, # 'Ж' - 47: 0, # 'З' - 40: 0, # 'И' - 59: 0, # 'Й' - 33: 0, # 'К' - 46: 0, # 'Л' - 38: 0, # 'М' - 36: 0, # 'Н' - 41: 0, # 'О' - 30: 0, # 'П' - 39: 0, # 'Р' - 28: 0, # 'С' - 34: 0, # 'Т' - 51: 0, # 'У' - 48: 0, # 'Ф' - 49: 0, # 'Х' - 53: 0, # 'Ц' - 50: 0, # 'Ч' - 54: 0, # 'Ш' - 57: 0, # 'Щ' - 61: 0, # 'Ъ' - 60: 0, # 'Ю' - 56: 0, # 'Я' - 1: 0, # 'а' - 18: 0, # 'б' - 9: 0, # 'в' - 20: 0, # 'г' - 11: 0, # 'д' - 3: 0, # 'е' - 23: 0, # 'ж' - 15: 0, # 'з' - 2: 0, # 'и' - 26: 0, # 'й' - 12: 0, # 'к' - 10: 0, # 'л' - 14: 0, # 'м' - 6: 0, # 'н' - 4: 0, # 'о' - 13: 0, # 'п' - 7: 0, # 'р' - 8: 0, # 'с' - 5: 0, # 'т' - 19: 0, # 'у' - 29: 0, # 'ф' - 25: 0, # 'х' - 22: 0, # 'ц' - 21: 0, # 'ч' - 27: 0, # 'ш' - 24: 0, # 'щ' - 17: 0, # 'ъ' - 52: 0, # 'ь' - 42: 0, # 'ю' - 16: 0, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, - 62: { # '№' - 63: 0, # 'e' - 45: 0, # '\xad' - 31: 0, # 'А' - 32: 0, # 'Б' - 35: 0, # 'В' - 43: 0, # 'Г' - 37: 0, # 'Д' - 44: 0, # 'Е' - 55: 0, # 'Ж' - 47: 0, # 'З' - 40: 0, # 'И' - 59: 0, # 'Й' - 33: 0, # 'К' - 46: 0, # 'Л' - 38: 0, # 'М' - 36: 0, # 'Н' - 41: 0, # 'О' - 30: 0, # 'П' - 39: 0, # 'Р' - 28: 0, # 'С' - 34: 0, # 'Т' - 51: 0, # 'У' - 48: 0, # 'Ф' - 49: 0, # 'Х' - 53: 0, # 'Ц' - 50: 0, # 'Ч' - 54: 0, # 'Ш' - 57: 0, # 'Щ' - 61: 0, # 'Ъ' - 60: 0, # 'Ю' - 56: 0, # 'Я' - 1: 0, # 'а' - 18: 0, # 'б' - 9: 0, # 'в' - 20: 0, # 'г' - 11: 0, # 'д' - 3: 0, # 'е' - 23: 0, # 'ж' - 15: 0, # 'з' - 2: 0, # 'и' - 26: 0, # 'й' - 12: 0, # 'к' - 10: 0, # 'л' - 14: 0, # 'м' - 6: 0, # 'н' - 4: 0, # 'о' - 13: 0, # 'п' - 7: 0, # 'р' - 8: 0, # 'с' - 5: 0, # 'т' - 19: 0, # 'у' - 29: 0, # 'ф' - 25: 0, # 'х' - 22: 0, # 
'ц' - 21: 0, # 'ч' - 27: 0, # 'ш' - 24: 0, # 'щ' - 17: 0, # 'ъ' - 52: 0, # 'ь' - 42: 0, # 'ю' - 16: 0, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, -} - -# 255: Undefined characters that did not exist in training text -# 254: Carriage/Return -# 253: symbol (punctuation) that does not belong to word -# 252: 0 - 9 -# 251: Control characters - -# Character Mapping Table(s): -ISO_8859_5_BULGARIAN_CHAR_TO_ORDER = { - 0: 255, # '\x00' - 1: 255, # '\x01' - 2: 255, # '\x02' - 3: 255, # '\x03' - 4: 255, # '\x04' - 5: 255, # '\x05' - 6: 255, # '\x06' - 7: 255, # '\x07' - 8: 255, # '\x08' - 9: 255, # '\t' - 10: 254, # '\n' - 11: 255, # '\x0b' - 12: 255, # '\x0c' - 13: 254, # '\r' - 14: 255, # '\x0e' - 15: 255, # '\x0f' - 16: 255, # '\x10' - 17: 255, # '\x11' - 18: 255, # '\x12' - 19: 255, # '\x13' - 20: 255, # '\x14' - 21: 255, # '\x15' - 22: 255, # '\x16' - 23: 255, # '\x17' - 24: 255, # '\x18' - 25: 255, # '\x19' - 26: 255, # '\x1a' - 27: 255, # '\x1b' - 28: 255, # '\x1c' - 29: 255, # '\x1d' - 30: 255, # '\x1e' - 31: 255, # '\x1f' - 32: 253, # ' ' - 33: 253, # '!' - 34: 253, # '"' - 35: 253, # '#' - 36: 253, # '$' - 37: 253, # '%' - 38: 253, # '&' - 39: 253, # "'" - 40: 253, # '(' - 41: 253, # ')' - 42: 253, # '*' - 43: 253, # '+' - 44: 253, # ',' - 45: 253, # '-' - 46: 253, # '.' - 47: 253, # '/' - 48: 252, # '0' - 49: 252, # '1' - 50: 252, # '2' - 51: 252, # '3' - 52: 252, # '4' - 53: 252, # '5' - 54: 252, # '6' - 55: 252, # '7' - 56: 252, # '8' - 57: 252, # '9' - 58: 253, # ':' - 59: 253, # ';' - 60: 253, # '<' - 61: 253, # '=' - 62: 253, # '>' - 63: 253, # '?' - 64: 253, # '@' - 65: 77, # 'A' - 66: 90, # 'B' - 67: 99, # 'C' - 68: 100, # 'D' - 69: 72, # 'E' - 70: 109, # 'F' - 71: 107, # 'G' - 72: 101, # 'H' - 73: 79, # 'I' - 74: 185, # 'J' - 75: 81, # 'K' - 76: 102, # 'L' - 77: 76, # 'M' - 78: 94, # 'N' - 79: 82, # 'O' - 80: 110, # 'P' - 81: 186, # 'Q' - 82: 108, # 'R' - 83: 91, # 'S' - 84: 74, # 'T' - 85: 119, # 'U' - 86: 84, # 'V' - 87: 96, # 'W' - 88: 111, # 'X' - 89: 187, # 'Y' - 90: 115, # 'Z' - 91: 253, # '[' - 92: 253, # '\\' - 93: 253, # ']' - 94: 253, # '^' - 95: 253, # '_' - 96: 253, # '`' - 97: 65, # 'a' - 98: 69, # 'b' - 99: 70, # 'c' - 100: 66, # 'd' - 101: 63, # 'e' - 102: 68, # 'f' - 103: 112, # 'g' - 104: 103, # 'h' - 105: 92, # 'i' - 106: 194, # 'j' - 107: 104, # 'k' - 108: 95, # 'l' - 109: 86, # 'm' - 110: 87, # 'n' - 111: 71, # 'o' - 112: 116, # 'p' - 113: 195, # 'q' - 114: 85, # 'r' - 115: 93, # 's' - 116: 97, # 't' - 117: 113, # 'u' - 118: 196, # 'v' - 119: 197, # 'w' - 120: 198, # 'x' - 121: 199, # 'y' - 122: 200, # 'z' - 123: 253, # '{' - 124: 253, # '|' - 125: 253, # '}' - 126: 253, # '~' - 127: 253, # '\x7f' - 128: 194, # '\x80' - 129: 195, # '\x81' - 130: 196, # '\x82' - 131: 197, # '\x83' - 132: 198, # '\x84' - 133: 199, # '\x85' - 134: 200, # '\x86' - 135: 201, # '\x87' - 136: 202, # '\x88' - 137: 203, # '\x89' - 138: 204, # '\x8a' - 139: 205, # '\x8b' - 140: 206, # '\x8c' - 141: 207, # '\x8d' - 142: 208, # '\x8e' - 143: 209, # '\x8f' - 144: 210, # '\x90' - 145: 211, # '\x91' - 146: 212, # '\x92' - 147: 213, # '\x93' - 148: 214, # '\x94' - 149: 215, # '\x95' - 150: 216, # '\x96' - 151: 217, # '\x97' - 152: 218, # '\x98' - 153: 219, # '\x99' - 154: 220, # '\x9a' - 155: 221, # '\x9b' - 156: 222, # '\x9c' - 157: 223, # '\x9d' - 158: 224, # '\x9e' - 159: 225, # '\x9f' - 160: 81, # '\xa0' - 161: 226, # 'Ё' - 162: 227, # 'Ђ' - 163: 228, # 'Ѓ' - 164: 229, # 'Є' - 165: 230, # 'Ѕ' - 166: 105, # 'І' - 167: 231, # 'Ї' - 168: 232, # 'Ј' - 169: 233, # 'Љ' - 170: 234, # 'Њ' - 171: 235, 
# 'Ћ' - 172: 236, # 'Ќ' - 173: 45, # '\xad' - 174: 237, # 'Ў' - 175: 238, # 'Џ' - 176: 31, # 'А' - 177: 32, # 'Б' - 178: 35, # 'В' - 179: 43, # 'Г' - 180: 37, # 'Д' - 181: 44, # 'Е' - 182: 55, # 'Ж' - 183: 47, # 'З' - 184: 40, # 'И' - 185: 59, # 'Й' - 186: 33, # 'К' - 187: 46, # 'Л' - 188: 38, # 'М' - 189: 36, # 'Н' - 190: 41, # 'О' - 191: 30, # 'П' - 192: 39, # 'Р' - 193: 28, # 'С' - 194: 34, # 'Т' - 195: 51, # 'У' - 196: 48, # 'Ф' - 197: 49, # 'Х' - 198: 53, # 'Ц' - 199: 50, # 'Ч' - 200: 54, # 'Ш' - 201: 57, # 'Щ' - 202: 61, # 'Ъ' - 203: 239, # 'Ы' - 204: 67, # 'Ь' - 205: 240, # 'Э' - 206: 60, # 'Ю' - 207: 56, # 'Я' - 208: 1, # 'а' - 209: 18, # 'б' - 210: 9, # 'в' - 211: 20, # 'г' - 212: 11, # 'д' - 213: 3, # 'е' - 214: 23, # 'ж' - 215: 15, # 'з' - 216: 2, # 'и' - 217: 26, # 'й' - 218: 12, # 'к' - 219: 10, # 'л' - 220: 14, # 'м' - 221: 6, # 'н' - 222: 4, # 'о' - 223: 13, # 'п' - 224: 7, # 'р' - 225: 8, # 'с' - 226: 5, # 'т' - 227: 19, # 'у' - 228: 29, # 'ф' - 229: 25, # 'х' - 230: 22, # 'ц' - 231: 21, # 'ч' - 232: 27, # 'ш' - 233: 24, # 'щ' - 234: 17, # 'ъ' - 235: 75, # 'ы' - 236: 52, # 'ь' - 237: 241, # 'э' - 238: 42, # 'ю' - 239: 16, # 'я' - 240: 62, # '№' - 241: 242, # 'ё' - 242: 243, # 'ђ' - 243: 244, # 'ѓ' - 244: 58, # 'є' - 245: 245, # 'ѕ' - 246: 98, # 'і' - 247: 246, # 'ї' - 248: 247, # 'ј' - 249: 248, # 'љ' - 250: 249, # 'њ' - 251: 250, # 'ћ' - 252: 251, # 'ќ' - 253: 91, # '§' - 254: 252, # 'ў' - 255: 253, # 'џ' -} - -ISO_8859_5_BULGARIAN_MODEL = SingleByteCharSetModel(charset_name='ISO-8859-5', - language='Bulgarian', - char_to_order_map=ISO_8859_5_BULGARIAN_CHAR_TO_ORDER, - language_model=BULGARIAN_LANG_MODEL, - typical_positive_ratio=0.969392, - keep_ascii_letters=False, - alphabet='АБВГДЕЖЗИЙКЛМНОПРСТУФХЦЧШЩЪЬЮЯабвгдежзийклмнопрстуфхцчшщъьюя') - -WINDOWS_1251_BULGARIAN_CHAR_TO_ORDER = { - 0: 255, # '\x00' - 1: 255, # '\x01' - 2: 255, # '\x02' - 3: 255, # '\x03' - 4: 255, # '\x04' - 5: 255, # '\x05' - 6: 255, # '\x06' - 7: 255, # '\x07' - 8: 255, # '\x08' - 9: 255, # '\t' - 10: 254, # '\n' - 11: 255, # '\x0b' - 12: 255, # '\x0c' - 13: 254, # '\r' - 14: 255, # '\x0e' - 15: 255, # '\x0f' - 16: 255, # '\x10' - 17: 255, # '\x11' - 18: 255, # '\x12' - 19: 255, # '\x13' - 20: 255, # '\x14' - 21: 255, # '\x15' - 22: 255, # '\x16' - 23: 255, # '\x17' - 24: 255, # '\x18' - 25: 255, # '\x19' - 26: 255, # '\x1a' - 27: 255, # '\x1b' - 28: 255, # '\x1c' - 29: 255, # '\x1d' - 30: 255, # '\x1e' - 31: 255, # '\x1f' - 32: 253, # ' ' - 33: 253, # '!' - 34: 253, # '"' - 35: 253, # '#' - 36: 253, # '$' - 37: 253, # '%' - 38: 253, # '&' - 39: 253, # "'" - 40: 253, # '(' - 41: 253, # ')' - 42: 253, # '*' - 43: 253, # '+' - 44: 253, # ',' - 45: 253, # '-' - 46: 253, # '.' - 47: 253, # '/' - 48: 252, # '0' - 49: 252, # '1' - 50: 252, # '2' - 51: 252, # '3' - 52: 252, # '4' - 53: 252, # '5' - 54: 252, # '6' - 55: 252, # '7' - 56: 252, # '8' - 57: 252, # '9' - 58: 253, # ':' - 59: 253, # ';' - 60: 253, # '<' - 61: 253, # '=' - 62: 253, # '>' - 63: 253, # '?' 
- 64: 253, # '@' - 65: 77, # 'A' - 66: 90, # 'B' - 67: 99, # 'C' - 68: 100, # 'D' - 69: 72, # 'E' - 70: 109, # 'F' - 71: 107, # 'G' - 72: 101, # 'H' - 73: 79, # 'I' - 74: 185, # 'J' - 75: 81, # 'K' - 76: 102, # 'L' - 77: 76, # 'M' - 78: 94, # 'N' - 79: 82, # 'O' - 80: 110, # 'P' - 81: 186, # 'Q' - 82: 108, # 'R' - 83: 91, # 'S' - 84: 74, # 'T' - 85: 119, # 'U' - 86: 84, # 'V' - 87: 96, # 'W' - 88: 111, # 'X' - 89: 187, # 'Y' - 90: 115, # 'Z' - 91: 253, # '[' - 92: 253, # '\\' - 93: 253, # ']' - 94: 253, # '^' - 95: 253, # '_' - 96: 253, # '`' - 97: 65, # 'a' - 98: 69, # 'b' - 99: 70, # 'c' - 100: 66, # 'd' - 101: 63, # 'e' - 102: 68, # 'f' - 103: 112, # 'g' - 104: 103, # 'h' - 105: 92, # 'i' - 106: 194, # 'j' - 107: 104, # 'k' - 108: 95, # 'l' - 109: 86, # 'm' - 110: 87, # 'n' - 111: 71, # 'o' - 112: 116, # 'p' - 113: 195, # 'q' - 114: 85, # 'r' - 115: 93, # 's' - 116: 97, # 't' - 117: 113, # 'u' - 118: 196, # 'v' - 119: 197, # 'w' - 120: 198, # 'x' - 121: 199, # 'y' - 122: 200, # 'z' - 123: 253, # '{' - 124: 253, # '|' - 125: 253, # '}' - 126: 253, # '~' - 127: 253, # '\x7f' - 128: 206, # 'Ђ' - 129: 207, # 'Ѓ' - 130: 208, # '‚' - 131: 209, # 'ѓ' - 132: 210, # '„' - 133: 211, # '…' - 134: 212, # '†' - 135: 213, # '‡' - 136: 120, # '€' - 137: 214, # '‰' - 138: 215, # 'Љ' - 139: 216, # '‹' - 140: 217, # 'Њ' - 141: 218, # 'Ќ' - 142: 219, # 'Ћ' - 143: 220, # 'Џ' - 144: 221, # 'ђ' - 145: 78, # '‘' - 146: 64, # '’' - 147: 83, # '“' - 148: 121, # '”' - 149: 98, # '•' - 150: 117, # '–' - 151: 105, # '—' - 152: 222, # None - 153: 223, # '™' - 154: 224, # 'љ' - 155: 225, # '›' - 156: 226, # 'њ' - 157: 227, # 'ќ' - 158: 228, # 'ћ' - 159: 229, # 'џ' - 160: 88, # '\xa0' - 161: 230, # 'Ў' - 162: 231, # 'ў' - 163: 232, # 'Ј' - 164: 233, # '¤' - 165: 122, # 'Ґ' - 166: 89, # '¦' - 167: 106, # '§' - 168: 234, # 'Ё' - 169: 235, # '©' - 170: 236, # 'Є' - 171: 237, # '«' - 172: 238, # '¬' - 173: 45, # '\xad' - 174: 239, # '®' - 175: 240, # 'Ї' - 176: 73, # '°' - 177: 80, # '±' - 178: 118, # 'І' - 179: 114, # 'і' - 180: 241, # 'ґ' - 181: 242, # 'µ' - 182: 243, # '¶' - 183: 244, # '·' - 184: 245, # 'ё' - 185: 62, # '№' - 186: 58, # 'є' - 187: 246, # '»' - 188: 247, # 'ј' - 189: 248, # 'Ѕ' - 190: 249, # 'ѕ' - 191: 250, # 'ї' - 192: 31, # 'А' - 193: 32, # 'Б' - 194: 35, # 'В' - 195: 43, # 'Г' - 196: 37, # 'Д' - 197: 44, # 'Е' - 198: 55, # 'Ж' - 199: 47, # 'З' - 200: 40, # 'И' - 201: 59, # 'Й' - 202: 33, # 'К' - 203: 46, # 'Л' - 204: 38, # 'М' - 205: 36, # 'Н' - 206: 41, # 'О' - 207: 30, # 'П' - 208: 39, # 'Р' - 209: 28, # 'С' - 210: 34, # 'Т' - 211: 51, # 'У' - 212: 48, # 'Ф' - 213: 49, # 'Х' - 214: 53, # 'Ц' - 215: 50, # 'Ч' - 216: 54, # 'Ш' - 217: 57, # 'Щ' - 218: 61, # 'Ъ' - 219: 251, # 'Ы' - 220: 67, # 'Ь' - 221: 252, # 'Э' - 222: 60, # 'Ю' - 223: 56, # 'Я' - 224: 1, # 'а' - 225: 18, # 'б' - 226: 9, # 'в' - 227: 20, # 'г' - 228: 11, # 'д' - 229: 3, # 'е' - 230: 23, # 'ж' - 231: 15, # 'з' - 232: 2, # 'и' - 233: 26, # 'й' - 234: 12, # 'к' - 235: 10, # 'л' - 236: 14, # 'м' - 237: 6, # 'н' - 238: 4, # 'о' - 239: 13, # 'п' - 240: 7, # 'р' - 241: 8, # 'с' - 242: 5, # 'т' - 243: 19, # 'у' - 244: 29, # 'ф' - 245: 25, # 'х' - 246: 22, # 'ц' - 247: 21, # 'ч' - 248: 27, # 'ш' - 249: 24, # 'щ' - 250: 17, # 'ъ' - 251: 75, # 'ы' - 252: 52, # 'ь' - 253: 253, # 'э' - 254: 42, # 'ю' - 255: 16, # 'я' -} - -WINDOWS_1251_BULGARIAN_MODEL = SingleByteCharSetModel(charset_name='windows-1251', - language='Bulgarian', - char_to_order_map=WINDOWS_1251_BULGARIAN_CHAR_TO_ORDER, - language_model=BULGARIAN_LANG_MODEL, - 
typical_positive_ratio=0.969392, - keep_ascii_letters=False, - alphabet='АБВГДЕЖЗИЙКЛМНОПРСТУФХЦЧШЩЪЬЮЯабвгдежзийклмнопрстуфхцчшщъьюя') - diff --git a/spaces/all-things-vits/class-attention-map/README.md b/spaces/all-things-vits/class-attention-map/README.md deleted file mode 100644 index cc20c430361744a4fc0bff0fadad9ec746d974b3..0000000000000000000000000000000000000000 --- a/spaces/all-things-vits/class-attention-map/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Class Attention Map -emoji: 🏢 -colorFrom: green -colorTo: gray -sdk: gradio -sdk_version: 3.34.0 -app_file: app.py -pinned: false -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/anakin87/fact-checking-rocks/app_utils/config.py b/spaces/anakin87/fact-checking-rocks/app_utils/config.py deleted file mode 100644 index 4d7e3d15c10033a5e5bd4083099780a1fabb7906..0000000000000000000000000000000000000000 --- a/spaces/anakin87/fact-checking-rocks/app_utils/config.py +++ /dev/null @@ -1,25 +0,0 @@ -import streamlit as st - -INDEX_DIR = "data/index" -STATEMENTS_PATH = "data/statements.txt" - -RETRIEVER_MODEL = "sentence-transformers/msmarco-distilbert-base-tas-b" -RETRIEVER_MODEL_FORMAT = "sentence_transformers" -RETRIEVER_TOP_K = 5 - -# In HF Space, we use microsoft/deberta-v2-xlarge-mnli -# for local testing, a smaller model is better -try: - NLI_MODEL = st.secrets["NLI_MODEL"] -except: - NLI_MODEL = "valhalla/distilbart-mnli-12-1" -print(f"Used NLI model: {NLI_MODEL}") - - -# In HF Space, we use google/flan-t5-large -# for local testing, a smaller model is better -try: - PROMPT_MODEL = st.secrets["PROMPT_MODEL"] -except: - PROMPT_MODEL = "google/flan-t5-small" -print(f"Used Prompt model: {PROMPT_MODEL}") diff --git a/spaces/anjaymabskuy/Linaqruf-anything-v3.0/app.py b/spaces/anjaymabskuy/Linaqruf-anything-v3.0/app.py deleted file mode 100644 index 16e8131a0bbf7b06956e69e2b7758fa01e4eb51f..0000000000000000000000000000000000000000 --- a/spaces/anjaymabskuy/Linaqruf-anything-v3.0/app.py +++ /dev/null @@ -1,3 +0,0 @@ -import gradio as gr - -gr.Interface.load("models/Linaqruf/anything-v3.0").launch() \ No newline at end of file diff --git a/spaces/anuragshas/restore-punctuation-demo/app.py b/spaces/anuragshas/restore-punctuation-demo/app.py deleted file mode 100644 index 667c9853d2f86910c5328b489d1200fd0ac2c92a..0000000000000000000000000000000000000000 --- a/spaces/anuragshas/restore-punctuation-demo/app.py +++ /dev/null @@ -1,89 +0,0 @@ -import streamlit as st -from multiprocessing import Process -import json -import requests -import time -import os - - -def start_server(): - '''Helper to start to service through Unicorn ''' - os.system("uvicorn InferenceServer:app --port 8080 --host 0.0.0.0 --workers 2") - -def load_models(): - '''One time loading/ Init of models and starting server as a seperate process''' - if not is_port_in_use(8080): - with st.spinner(text="Loading model, please wait..."): - proc = Process(target=start_server, args=(), daemon=True) - proc.start() - while not is_port_in_use(8080): - time.sleep(1) - st.success("Model server started.") - else: - st.success("Model server already running...") - st.session_state['models_loaded'] = True - -def is_port_in_use(port): - '''Helper to check if service already running''' - import socket - with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s: - return s.connect_ex(('0.0.0.0', port)) == 0 - -if 'models_loaded' not in st.session_state: - st.session_state['models_loaded'] = 
False - -def get_correction(input_text): - '''Invokes the inference service''' - st.markdown(f'##### Corrected text:') - st.write('') - correct_request = "http://0.0.0.0:8080/restore?input_sentence="+input_text - with st.spinner('Wait for it...'): - correct_response = requests.get(correct_request) - correct_json = json.loads(correct_response.text) - corrected_sentence = correct_json["corrected_sentence"] - result = diff_strings(corrected_sentence,input_text) - st.markdown(result, unsafe_allow_html=True) - -def diff_strings(output_text, input_text): - '''Highlights corrections''' - c_text = "" - for x in output_text.split(" "): - if x in input_text.split(" "): - c_text = c_text + x + " " - else: - c_text = c_text + '' + x + '' + " " - return c_text - -if __name__ == "__main__": - - st.title('Rpunct') - st.subheader('For Punctuation and Upper Case restoration') - st.markdown("Spaces for [felflare/bert-restore-punctuation](https://huggingface.co/felflare/bert-restore-punctuation) using [Fork with CPU support](https://github.com/anuragshas/rpunct) | [Original repo](https://github.com/Felflare/rpunct)", unsafe_allow_html=True) - st.markdown("Model restores the following punctuations -- [! ? . , - : ; ' ] and also the upper-casing of words.") - st.markdown("Integrate with just few lines of code", unsafe_allow_html=True) - st.markdown(""" - ```python - from rpunct import RestorePuncts - rpunct = RestorePuncts() - rpunct.punctuate('''my name is clara and i live in berkeley california''') - ``` - """) - examples = [ - "my name is clara and i live in berkeley california", - "in 2018 cornell researchers built a high-power detector", - "lorem ipsum has been the industrys standard dummy text ever since the 1500s when an unknown printer took a galley of type and scrambled it to make a type specimen book" - ] - if not st.session_state['models_loaded']: - load_models() - - input_text = st.selectbox( - label="Choose an example", - options=examples - ) - st.write("(or)") - input_text = st.text_input( - label="Input sentence", - value=input_text - ) - if input_text.strip(): - get_correction(input_text) \ No newline at end of file diff --git a/spaces/aodianyun/panoptic-segment-anything/segment_anything/segment_anything/modeling/common.py b/spaces/aodianyun/panoptic-segment-anything/segment_anything/segment_anything/modeling/common.py deleted file mode 100644 index 2bf15236a3eb24d8526073bc4fa2b274cccb3f96..0000000000000000000000000000000000000000 --- a/spaces/aodianyun/panoptic-segment-anything/segment_anything/segment_anything/modeling/common.py +++ /dev/null @@ -1,43 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. - -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. 
- -import torch -import torch.nn as nn - -from typing import Type - - -class MLPBlock(nn.Module): - def __init__( - self, - embedding_dim: int, - mlp_dim: int, - act: Type[nn.Module] = nn.GELU, - ) -> None: - super().__init__() - self.lin1 = nn.Linear(embedding_dim, mlp_dim) - self.lin2 = nn.Linear(mlp_dim, embedding_dim) - self.act = act() - - def forward(self, x: torch.Tensor) -> torch.Tensor: - return self.lin2(self.act(self.lin1(x))) - - -# From https://github.com/facebookresearch/detectron2/blob/main/detectron2/layers/batch_norm.py # noqa -# Itself from https://github.com/facebookresearch/ConvNeXt/blob/d1fa8f6fef0a165b27399986cc2bdacc92777e40/models/convnext.py#L119 # noqa -class LayerNorm2d(nn.Module): - def __init__(self, num_channels: int, eps: float = 1e-6) -> None: - super().__init__() - self.weight = nn.Parameter(torch.ones(num_channels)) - self.bias = nn.Parameter(torch.zeros(num_channels)) - self.eps = eps - - def forward(self, x: torch.Tensor) -> torch.Tensor: - u = x.mean(1, keepdim=True) - s = (x - u).pow(2).mean(1, keepdim=True) - x = (x - u) / torch.sqrt(s + self.eps) - x = self.weight[:, None, None] * x + self.bias[:, None, None] - return x diff --git a/spaces/aphenx/bingo/src/lib/bots/bing/index.ts b/spaces/aphenx/bingo/src/lib/bots/bing/index.ts deleted file mode 100644 index 2c4afae01a345b8415935228566cb30d695e768d..0000000000000000000000000000000000000000 --- a/spaces/aphenx/bingo/src/lib/bots/bing/index.ts +++ /dev/null @@ -1,421 +0,0 @@ -import { fetch, WebSocket, debug } from '@/lib/isomorphic' -import WebSocketAsPromised from 'websocket-as-promised' -import { - SendMessageParams, - BingConversationStyle, - ConversationResponse, - ChatResponseMessage, - ConversationInfo, - InvocationEventType, - ChatError, - ErrorCode, - ChatUpdateCompleteResponse, - ImageInfo, - KBlobResponse -} from './types' - -import { convertMessageToMarkdown, websocketUtils, streamAsyncIterable } from './utils' -import { WatchDog, createChunkDecoder } from '@/lib/utils' - -type Params = SendMessageParams<{ bingConversationStyle: BingConversationStyle }> - -const OPTIONS_SETS = [ - 'nlu_direct_response_filter', - 'deepleo', - 'disable_emoji_spoken_text', - 'responsible_ai_policy_235', - 'enablemm', - 'iycapbing', - 'iyxapbing', - 'objopinion', - 'rweasgv2', - 'dagslnv1', - 'dv3sugg', - 'autosave', - 'iyoloxap', - 'iyoloneutral', - 'clgalileo', - 'gencontentv3', -] - -export class BingWebBot { - protected conversationContext?: ConversationInfo - protected cookie: string - protected ua: string - protected endpoint = '' - private lastText = '' - private asyncTasks: Array> = [] - - constructor(opts: { - cookie: string - ua: string - bingConversationStyle?: BingConversationStyle - conversationContext?: ConversationInfo - }) { - const { cookie, ua, conversationContext } = opts - this.cookie = cookie?.includes(';') ? 
cookie : `_EDGE_V=1; _U=${cookie}` - this.ua = ua - this.conversationContext = conversationContext - } - - static buildChatRequest(conversation: ConversationInfo) { - const optionsSets = OPTIONS_SETS - if (conversation.conversationStyle === BingConversationStyle.Precise) { - optionsSets.push('h3precise') - } else if (conversation.conversationStyle === BingConversationStyle.Creative) { - optionsSets.push('h3imaginative') - } - return { - arguments: [ - { - source: 'cib', - optionsSets, - allowedMessageTypes: [ - 'Chat', - 'InternalSearchQuery', - 'Disengaged', - 'InternalLoaderMessage', - 'SemanticSerp', - 'GenerateContentQuery', - 'SearchQuery', - ], - sliceIds: [ - 'winmuid1tf', - 'anssupfor_c', - 'imgchatgptv2', - 'tts2cf', - 'contansperf', - 'mlchatpc8500w', - 'mlchatpc2', - 'ctrlworkpay', - 'winshortmsgtf', - 'cibctrl', - 'sydtransctrl', - 'sydconfigoptc', - '0705trt4', - '517opinion', - '628ajcopus0', - '330uaugs0', - '529rwea', - '0626snptrcs0', - '424dagslnv1', - ], - isStartOfSession: conversation.invocationId === 0, - message: { - author: 'user', - inputMethod: 'Keyboard', - text: conversation.prompt, - imageUrl: conversation.imageUrl, - messageType: 'Chat', - }, - conversationId: conversation.conversationId, - conversationSignature: conversation.conversationSignature, - participant: { id: conversation.clientId }, - }, - ], - invocationId: conversation.invocationId.toString(), - target: 'chat', - type: InvocationEventType.StreamInvocation, - } - } - - async createConversation(): Promise { - const headers = { - 'Accept-Encoding': 'gzip, deflate, br, zsdch', - 'User-Agent': this.ua, - 'x-ms-useragent': 'azsdk-js-api-client-factory/1.0.0-beta.1 core-rest-pipeline/1.10.0 OS/Win32', - cookie: this.cookie, - } - - let resp: ConversationResponse | undefined - try { - const response = await fetch(this.endpoint + '/api/create', { method: 'POST', headers, redirect: 'error', mode: 'cors', credentials: 'include' }) - if (response.status === 404) { - throw new ChatError('Not Found', ErrorCode.NOTFOUND_ERROR) - } - resp = await response.json() as ConversationResponse - } catch (err) { - console.error('create conversation error', err) - } - - if (!resp?.result) { - throw new ChatError('Invalid response', ErrorCode.UNKOWN_ERROR) - } - - const { value, message } = resp.result || {} - if (value !== 'Success') { - const errorMsg = `${value}: ${message}` - if (value === 'UnauthorizedRequest') { - throw new ChatError(errorMsg, ErrorCode.BING_UNAUTHORIZED) - } - if (value === 'Forbidden') { - throw new ChatError(errorMsg, ErrorCode.BING_FORBIDDEN) - } - throw new ChatError(errorMsg, ErrorCode.UNKOWN_ERROR) - } - return resp - } - - private async createContext(conversationStyle: BingConversationStyle) { - if (!this.conversationContext) { - const conversation = await this.createConversation() - this.conversationContext = { - conversationId: conversation.conversationId, - conversationSignature: conversation.conversationSignature, - clientId: conversation.clientId, - invocationId: 0, - conversationStyle, - prompt: '', - } - } - return this.conversationContext - } - - async sendMessage(params: Params) { - try { - await this.createContext(params.options.bingConversationStyle) - Object.assign(this.conversationContext!, { prompt: params.prompt, imageUrl: params.imageUrl }) - return this.sydneyProxy(params) - } catch (error) { - params.onEvent({ - type: 'ERROR', - error: error instanceof ChatError ? 
error : new ChatError('Catch Error', ErrorCode.UNKOWN_ERROR), - }) - } - } - - private async sydneyProxy(params: Params) { - const abortController = new AbortController() - const response = await fetch(this.endpoint + '/api/sydney', { - method: 'POST', - headers: { - 'Content-Type': 'application/json', - }, - signal: abortController.signal, - body: JSON.stringify(this.conversationContext!) - }) - if (response.status !== 200) { - params.onEvent({ - type: 'ERROR', - error: new ChatError( - 'Unknown error', - ErrorCode.UNKOWN_ERROR, - ), - }) - } - params.signal?.addEventListener('abort', () => { - abortController.abort() - }) - - const textDecoder = createChunkDecoder() - for await (const chunk of streamAsyncIterable(response.body!)) { - this.parseEvents(params, websocketUtils.unpackMessage(textDecoder(chunk))) - } - } - - async sendWs() { - const wsConfig: ConstructorParameters[1] = { - packMessage: websocketUtils.packMessage, - unpackMessage: websocketUtils.unpackMessage, - createWebSocket: (url) => new WebSocket(url, { - headers: { - 'accept-language': 'zh-CN,zh;q=0.9', - 'cache-control': 'no-cache', - 'User-Agent': this.ua, - pragma: 'no-cache', - cookie: this.cookie, - } - }) - } - const wsp = new WebSocketAsPromised('wss://sydney.bing.com/sydney/ChatHub', wsConfig) - - wsp.open().then(() => { - wsp.sendPacked({ protocol: 'json', version: 1 }) - wsp.sendPacked({ type: 6 }) - wsp.sendPacked(BingWebBot.buildChatRequest(this.conversationContext!)) - }) - - return wsp - } - - private async useWs(params: Params) { - const wsp = await this.sendWs() - const watchDog = new WatchDog() - wsp.onUnpackedMessage.addListener((events) => { - watchDog.watch(() => { - wsp.sendPacked({ type: 6 }) - }) - this.parseEvents(params, events) - }) - - wsp.onClose.addListener(() => { - watchDog.reset() - params.onEvent({ type: 'DONE' }) - wsp.removeAllListeners() - }) - - params.signal?.addEventListener('abort', () => { - wsp.removeAllListeners() - wsp.close() - }) - } - - private async createImage(prompt: string, id: string) { - try { - const headers = { - 'Accept-Encoding': 'gzip, deflate, br, zsdch', - 'User-Agent': this.ua, - 'x-ms-useragent': 'azsdk-js-api-client-factory/1.0.0-beta.1 core-rest-pipeline/1.10.0 OS/Win32', - cookie: this.cookie, - } - const query = new URLSearchParams({ - prompt, - id - }) - const response = await fetch(this.endpoint + '/api/image?' 
+ query.toString(), - { - method: 'POST', - headers, - mode: 'cors', - credentials: 'include' - }) - .then(res => res.text()) - if (response) { - this.lastText += '\n' + response - } - } catch (err) { - console.error('Create Image Error', err) - } - } - - private buildKnowledgeApiPayload(imageUrl: string, conversationStyle: BingConversationStyle) { - const imageInfo: ImageInfo = {} - let imageBase64: string | undefined = undefined - const knowledgeRequest = { - imageInfo, - knowledgeRequest: { - invokedSkills: [ - 'ImageById' - ], - subscriptionId: 'Bing.Chat.Multimodal', - invokedSkillsRequestData: { - enableFaceBlur: true - }, - convoData: { - convoid: this.conversationContext?.conversationId, - convotone: conversationStyle, - } - }, - } - - if (imageUrl.startsWith('data:image/')) { - imageBase64 = imageUrl.replace('data:image/', ''); - const partIndex = imageBase64.indexOf(',') - if (partIndex) { - imageBase64 = imageBase64.substring(partIndex + 1) - } - } else { - imageInfo.url = imageUrl - } - return { knowledgeRequest, imageBase64 } - } - - async uploadImage(imageUrl: string, conversationStyle: BingConversationStyle = BingConversationStyle.Creative): Promise { - if (!imageUrl) { - return - } - await this.createContext(conversationStyle) - const payload = this.buildKnowledgeApiPayload(imageUrl, conversationStyle) - - const response = await fetch(this.endpoint + '/api/kblob', - { - headers: { - 'Content-Type': 'application/json', - }, - method: 'POST', - mode: 'cors', - credentials: 'include', - body: JSON.stringify(payload), - }) - .then(res => res.json()) - .catch(e => { - console.log('Error', e) - }) - return response - } - - private async generateContent(message: ChatResponseMessage) { - if (message.contentType === 'IMAGE') { - this.asyncTasks.push(this.createImage(message.text, message.messageId)) - } - } - - private async parseEvents(params: Params, events: any) { - const conversation = this.conversationContext! - - events?.forEach(async (event: ChatUpdateCompleteResponse) => { - debug('bing event', event) - if (event.type === 3) { - await Promise.all(this.asyncTasks) - this.asyncTasks = [] - params.onEvent({ type: 'UPDATE_ANSWER', data: { text: this.lastText } }) - params.onEvent({ type: 'DONE' }) - conversation.invocationId = parseInt(event.invocationId, 10) + 1 - } else if (event.type === 1) { - const messages = event.arguments[0].messages - if (messages) { - const text = convertMessageToMarkdown(messages[0]) - this.lastText = text - params.onEvent({ type: 'UPDATE_ANSWER', data: { text, spokenText: messages[0].text, throttling: event.arguments[0].throttling } }) - } - } else if (event.type === 2) { - const messages = event.item.messages as ChatResponseMessage[] | undefined - if (!messages) { - params.onEvent({ - type: 'ERROR', - error: new ChatError( - event.item.result.error || 'Unknown error', - event.item.result.value === 'Throttled' ? ErrorCode.THROTTLE_LIMIT - : event.item.result.value === 'CaptchaChallenge' ? (this.conversationContext?.conversationId?.includes('BingProdUnAuthenticatedUsers') ? 
ErrorCode.BING_UNAUTHORIZED : ErrorCode.BING_CAPTCHA) - : ErrorCode.UNKOWN_ERROR - ), - }) - return - } - const limited = messages.some((message) => - message.contentOrigin === 'TurnLimiter' - || message.messageType === 'Disengaged' - ) - if (limited) { - params.onEvent({ - type: 'ERROR', - error: new ChatError( - 'Sorry, you have reached chat limit in this conversation.', - ErrorCode.CONVERSATION_LIMIT, - ), - }) - return - } - - const lastMessage = event.item.messages.at(-1) as ChatResponseMessage - const specialMessage = event.item.messages.find(message => message.author === 'bot' && message.contentType === 'IMAGE') - if (specialMessage) { - this.generateContent(specialMessage) - } - - if (lastMessage) { - const text = convertMessageToMarkdown(lastMessage) - this.lastText = text - params.onEvent({ - type: 'UPDATE_ANSWER', - data: { text, throttling: event.item.throttling, suggestedResponses: lastMessage.suggestedResponses, sourceAttributions: lastMessage.sourceAttributions }, - }) - } - } - }) - } - - resetConversation() { - this.conversationContext = undefined - } -} diff --git a/spaces/arixiii/open-reverse-proxy/server.js b/spaces/arixiii/open-reverse-proxy/server.js deleted file mode 100644 index 04a48b7a429c4d0ad0b772ba1edf503e349eda21..0000000000000000000000000000000000000000 --- a/spaces/arixiii/open-reverse-proxy/server.js +++ /dev/null @@ -1,32 +0,0 @@ -const express = require('express'); -const proxy = require('express-http-proxy'); -const app = express(); -const targetUrl = 'https://api.openai.com'; -const openaiKey = process.env.OPENAI_KEY -const port = 7860; -const baseUrl = getExternalUrl(process.env.SPACE_ID); - -app.use('/api', proxy(targetUrl, { - proxyReqOptDecorator: (proxyReqOpts, srcReq) => { - // Modify the request headers if necessary - proxyReqOpts.headers['Authorization'] = 'Bearer '+openaiKey; - return proxyReqOpts; - }, -})); - -app.get("/", (req, res) => { - res.send(`This is your OpenAI Reverse Proxy URL: ${baseUrl}`); -}); - -function getExternalUrl(spaceId) { - try { - const [username, spacename] = spaceId.split("/"); - return `https://${username}-${spacename.replace(/_/g, "-")}.hf.space/api/v1`; - } catch (e) { - return ""; - } -} - -app.listen(port, () => { - console.log(`Reverse proxy server running on ${baseUrl}`); -}); \ No newline at end of file diff --git a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/Cython/Utils.py b/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/Cython/Utils.py deleted file mode 100644 index d59d67d78b1a806e23e2976a034bb63775102756..0000000000000000000000000000000000000000 --- a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/Cython/Utils.py +++ /dev/null @@ -1,449 +0,0 @@ -# -# Cython -- Things that don't belong -# anywhere else in particular -# - -from __future__ import absolute_import - -try: - from __builtin__ import basestring -except ImportError: - basestring = str - -try: - FileNotFoundError -except NameError: - FileNotFoundError = OSError - -import os -import sys -import re -import io -import codecs -import shutil -import tempfile -from contextlib import contextmanager - -modification_time = os.path.getmtime - -_function_caches = [] -def clear_function_caches(): - for cache in _function_caches: - cache.clear() - -def cached_function(f): - cache = {} - _function_caches.append(cache) - uncomputed = object() - def wrapper(*args): - res = cache.get(args, uncomputed) - if res is uncomputed: - res = cache[args] = f(*args) - return res - wrapper.uncached = f - return wrapper - -def cached_method(f): 
- cache_name = '__%s_cache' % f.__name__ - def wrapper(self, *args): - cache = getattr(self, cache_name, None) - if cache is None: - cache = {} - setattr(self, cache_name, cache) - if args in cache: - return cache[args] - res = cache[args] = f(self, *args) - return res - return wrapper - -def replace_suffix(path, newsuf): - base, _ = os.path.splitext(path) - return base + newsuf - - -def open_new_file(path): - if os.path.exists(path): - # Make sure to create a new file here so we can - # safely hard link the output files. - os.unlink(path) - - # we use the ISO-8859-1 encoding here because we only write pure - # ASCII strings or (e.g. for file names) byte encoded strings as - # Unicode, so we need a direct mapping from the first 256 Unicode - # characters to a byte sequence, which ISO-8859-1 provides - - # note: can't use io.open() in Py2 as we may be writing str objects - return codecs.open(path, "w", encoding="ISO-8859-1") - - -def castrate_file(path, st): - # Remove junk contents from an output file after a - # failed compilation. - # Also sets access and modification times back to - # those specified by st (a stat struct). - try: - f = open_new_file(path) - except EnvironmentError: - pass - else: - f.write( - "#error Do not use this file, it is the result of a failed Cython compilation.\n") - f.close() - if st: - os.utime(path, (st.st_atime, st.st_mtime-1)) - -def file_newer_than(path, time): - ftime = modification_time(path) - return ftime > time - - -def safe_makedirs(path): - try: - os.makedirs(path) - except OSError: - if not os.path.isdir(path): - raise - - -def copy_file_to_dir_if_newer(sourcefile, destdir): - """ - Copy file sourcefile to directory destdir (creating it if needed), - preserving metadata. If the destination file exists and is not - older than the source file, the copying is skipped. 
- """ - destfile = os.path.join(destdir, os.path.basename(sourcefile)) - try: - desttime = modification_time(destfile) - except OSError: - # New file does not exist, destdir may or may not exist - safe_makedirs(destdir) - else: - # New file already exists - if not file_newer_than(sourcefile, desttime): - return - shutil.copy2(sourcefile, destfile) - - -@cached_function -def find_root_package_dir(file_path): - dir = os.path.dirname(file_path) - if file_path == dir: - return dir - elif is_package_dir(dir): - return find_root_package_dir(dir) - else: - return dir - -@cached_function -def check_package_dir(dir, package_names): - for dirname in package_names: - dir = os.path.join(dir, dirname) - if not is_package_dir(dir): - return None - return dir - -@cached_function -def is_package_dir(dir_path): - for filename in ("__init__.py", - "__init__.pyc", - "__init__.pyx", - "__init__.pxd"): - path = os.path.join(dir_path, filename) - if path_exists(path): - return 1 - -@cached_function -def path_exists(path): - # try on the filesystem first - if os.path.exists(path): - return True - # figure out if a PEP 302 loader is around - try: - loader = __loader__ - # XXX the code below assumes a 'zipimport.zipimporter' instance - # XXX should be easy to generalize, but too lazy right now to write it - archive_path = getattr(loader, 'archive', None) - if archive_path: - normpath = os.path.normpath(path) - if normpath.startswith(archive_path): - arcname = normpath[len(archive_path)+1:] - try: - loader.get_data(arcname) - return True - except IOError: - return False - except NameError: - pass - return False - -# file name encodings - -def decode_filename(filename): - if isinstance(filename, bytes): - try: - filename_encoding = sys.getfilesystemencoding() - if filename_encoding is None: - filename_encoding = sys.getdefaultencoding() - filename = filename.decode(filename_encoding) - except UnicodeDecodeError: - pass - return filename - -# support for source file encoding detection - -_match_file_encoding = re.compile(br"(\w*coding)[:=]\s*([-\w.]+)").search - - -def detect_opened_file_encoding(f): - # PEPs 263 and 3120 - # Most of the time the first two lines fall in the first couple of hundred chars, - # and this bulk read/split is much faster. - lines = () - start = b'' - while len(lines) < 3: - data = f.read(500) - start += data - lines = start.split(b"\n") - if not data: - break - m = _match_file_encoding(lines[0]) - if m and m.group(1) != b'c_string_encoding': - return m.group(2).decode('iso8859-1') - elif len(lines) > 1: - m = _match_file_encoding(lines[1]) - if m: - return m.group(2).decode('iso8859-1') - return "UTF-8" - - -def skip_bom(f): - """ - Read past a BOM at the beginning of a source file. - This could be added to the scanner, but it's *substantially* easier - to keep it at this level. - """ - if f.read(1) != u'\uFEFF': - f.seek(0) - - -def open_source_file(source_filename, encoding=None, error_handling=None): - stream = None - try: - if encoding is None: - # Most of the time the encoding is not specified, so try hard to open the file only once. - f = io.open(source_filename, 'rb') - encoding = detect_opened_file_encoding(f) - f.seek(0) - stream = io.TextIOWrapper(f, encoding=encoding, errors=error_handling) - else: - stream = io.open(source_filename, encoding=encoding, errors=error_handling) - - except OSError: - if os.path.exists(source_filename): - raise # File is there, but something went wrong reading from it. - # Allow source files to be in zip files etc. 
- try: - loader = __loader__ - if source_filename.startswith(loader.archive): - stream = open_source_from_loader( - loader, source_filename, - encoding, error_handling) - except (NameError, AttributeError): - pass - - if stream is None: - raise FileNotFoundError(source_filename) - skip_bom(stream) - return stream - - -def open_source_from_loader(loader, - source_filename, - encoding=None, error_handling=None): - nrmpath = os.path.normpath(source_filename) - arcname = nrmpath[len(loader.archive)+1:] - data = loader.get_data(arcname) - return io.TextIOWrapper(io.BytesIO(data), - encoding=encoding, - errors=error_handling) - - -def str_to_number(value): - # note: this expects a string as input that was accepted by the - # parser already, with an optional "-" sign in front - is_neg = False - if value[:1] == '-': - is_neg = True - value = value[1:] - if len(value) < 2: - value = int(value, 0) - elif value[0] == '0': - literal_type = value[1] # 0'o' - 0'b' - 0'x' - if literal_type in 'xX': - # hex notation ('0x1AF') - value = int(value[2:], 16) - elif literal_type in 'oO': - # Py3 octal notation ('0o136') - value = int(value[2:], 8) - elif literal_type in 'bB': - # Py3 binary notation ('0b101') - value = int(value[2:], 2) - else: - # Py2 octal notation ('0136') - value = int(value, 8) - else: - value = int(value, 0) - return -value if is_neg else value - - -def long_literal(value): - if isinstance(value, basestring): - value = str_to_number(value) - return not -2**31 <= value < 2**31 - - -@cached_function -def get_cython_cache_dir(): - r""" - Return the base directory containing Cython's caches. - - Priority: - - 1. CYTHON_CACHE_DIR - 2. (OS X): ~/Library/Caches/Cython - (posix not OS X): XDG_CACHE_HOME/cython if XDG_CACHE_HOME defined - 3. ~/.cython - - """ - if 'CYTHON_CACHE_DIR' in os.environ: - return os.environ['CYTHON_CACHE_DIR'] - - parent = None - if os.name == 'posix': - if sys.platform == 'darwin': - parent = os.path.expanduser('~/Library/Caches') - else: - # this could fallback on ~/.cache - parent = os.environ.get('XDG_CACHE_HOME') - - if parent and os.path.isdir(parent): - return os.path.join(parent, 'cython') - - # last fallback: ~/.cython - return os.path.expanduser(os.path.join('~', '.cython')) - - -@contextmanager -def captured_fd(stream=2, encoding=None): - orig_stream = os.dup(stream) # keep copy of original stream - try: - with tempfile.TemporaryFile(mode="a+b") as temp_file: - def read_output(_output=[b'']): - if not temp_file.closed: - temp_file.seek(0) - _output[0] = temp_file.read() - return _output[0] - - os.dup2(temp_file.fileno(), stream) # replace stream by copy of pipe - try: - def get_output(): - result = read_output() - return result.decode(encoding) if encoding else result - - yield get_output - finally: - os.dup2(orig_stream, stream) # restore original stream - read_output() # keep the output in case it's used after closing the context manager - finally: - os.close(orig_stream) - - -def print_bytes(s, header_text=None, end=b'\n', file=sys.stdout, flush=True): - if header_text: - file.write(header_text) # note: text! 
=> file.write() instead of out.write() - file.flush() - try: - out = file.buffer # Py3 - except AttributeError: - out = file # Py2 - out.write(s) - if end: - out.write(end) - if flush: - out.flush() - -class LazyStr: - def __init__(self, callback): - self.callback = callback - def __str__(self): - return self.callback() - def __repr__(self): - return self.callback() - def __add__(self, right): - return self.callback() + right - def __radd__(self, left): - return left + self.callback() - - -class OrderedSet(object): - def __init__(self, elements=()): - self._list = [] - self._set = set() - self.update(elements) - def __iter__(self): - return iter(self._list) - def update(self, elements): - for e in elements: - self.add(e) - def add(self, e): - if e not in self._set: - self._list.append(e) - self._set.add(e) - - -# Class decorator that adds a metaclass and recreates the class with it. -# Copied from 'six'. -def add_metaclass(metaclass): - """Class decorator for creating a class with a metaclass.""" - def wrapper(cls): - orig_vars = cls.__dict__.copy() - slots = orig_vars.get('__slots__') - if slots is not None: - if isinstance(slots, str): - slots = [slots] - for slots_var in slots: - orig_vars.pop(slots_var) - orig_vars.pop('__dict__', None) - orig_vars.pop('__weakref__', None) - return metaclass(cls.__name__, cls.__bases__, orig_vars) - return wrapper - - -def raise_error_if_module_name_forbidden(full_module_name): - #it is bad idea to call the pyx-file cython.pyx, so fail early - if full_module_name == 'cython' or full_module_name.startswith('cython.'): - raise ValueError('cython is a special module, cannot be used as a module name') - - -def build_hex_version(version_string): - """ - Parse and translate '4.3a1' into the readable hex representation '0x040300A1' (like PY_VERSION_HEX). - """ - # First, parse '4.12a1' into [4, 12, 0, 0xA01]. - digits = [] - release_status = 0xF0 - for digit in re.split('([.abrc]+)', version_string): - if digit in ('a', 'b', 'rc'): - release_status = {'a': 0xA0, 'b': 0xB0, 'rc': 0xC0}[digit] - digits = (digits + [0, 0])[:3] # 1.2a1 -> 1.2.0a1 - elif digit != '.': - digits.append(int(digit)) - digits = (digits + [0] * 3)[:4] - digits[3] += release_status - - # Then, build a single hex value, two hex digits per version part. - hexversion = 0 - for digit in digits: - hexversion = (hexversion << 8) + digit - - return '0x%08X' % hexversion diff --git a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/altair/examples/interval_selection.py b/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/altair/examples/interval_selection.py deleted file mode 100644 index 55853d9af1d8dc8ce70be178282fe611f9d0f8ff..0000000000000000000000000000000000000000 --- a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/altair/examples/interval_selection.py +++ /dev/null @@ -1,32 +0,0 @@ -""" -Interval Selection Example -========================== - -This is an example of creating a stacked chart for which the domain of the -top chart can be selected by interacting with the bottom chart. 
-""" -# category: area charts -import altair as alt -from vega_datasets import data - -source = data.sp500.url - -brush = alt.selection(type='interval', encodings=['x']) - -base = alt.Chart(source).mark_area().encode( - x = 'date:T', - y = 'price:Q' -).properties( - width=600, - height=200 -) - -upper = base.encode( - alt.X('date:T', scale=alt.Scale(domain=brush)) -) - -lower = base.properties( - height=60 -).add_selection(brush) - -upper & lower diff --git a/spaces/at2507/SM_NLP_RecoSys/Data/Mentor_interviews/Vinamra Mathur.html b/spaces/at2507/SM_NLP_RecoSys/Data/Mentor_interviews/Vinamra Mathur.html deleted file mode 100644 index e1ceadd9d2c0764feb5ef0e5ea4e21db7403b0b8..0000000000000000000000000000000000000000 --- a/spaces/at2507/SM_NLP_RecoSys/Data/Mentor_interviews/Vinamra Mathur.html +++ /dev/null @@ -1,134 +0,0 @@ - - - - Vinamra Mathur - - - - -
    -

    Vinamra Mathur

    - -
    -
    How did you hear about SM?
    • I applied to SM (~2 years ago) when I was out of N.A (now in Canada)
    • A friend of mine was working as a mentor at SM and described it to me

    Brief background
    • worked in Asia (India, China, Singapore)
    • Then moved to Canada and now working at RBC as Sr. Data Engineer 
    • graduated in 2017 and started as a DA
    • worked on his Python skills, got a job in China at Audi
      • did a lot of DE, and full-stack DS
    • India => Singapore, Senior DS, consultant, customer churn model
    • now working at RBC as full stack DE

    Mentorship exp
    • Has helped a lot of ppl launch their careers
    • loves to volunteer and better society
    • started running resume workshops
    • giving lectures in universities on how to pivot into DS
      • how to deploy MLE
    • folks from my network reach out for help (through LI)
    • I schedule sessions
    • mentoring 10-15 students

    What do beginners need and how can you help?
    • leveraging their transferable skills (not the hard DS skills) and applying them to DS
    • resume review/writing
    • applying strategies - tailor resume, optimize, use KW
    • how to network with professionals in the industry
      • Don't get siloed
      • Don't get overwhelmed
      • approach ppl and understand what use cases they have worked on
      • peer to peer learning
    -
    -
    Questions about SM:
    • How did you start it?
    • Can you describe how the ISA works?
    • What makes you happy about working at SM?
    • The motivation for me is giving back to society
    -
    - -
    - - - \ No newline at end of file diff --git a/spaces/auto-academic/auto-draft/kdb_test.py b/spaces/auto-academic/auto-draft/kdb_test.py deleted file mode 100644 index c42303ea76918ea336d1aeaaaa33a8f1994bd713..0000000000000000000000000000000000000000 --- a/spaces/auto-academic/auto-draft/kdb_test.py +++ /dev/null @@ -1,120 +0,0 @@ -from utils.knowledge import Knowledge -from langchain.vectorstores import FAISS -from utils.file_operations import list_folders -from huggingface_hub import snapshot_download -import gradio as gr -import os -import json -from models import EMBEDDINGS -from utils.gpt_interaction import GPTModel -from utils.prompts import SYSTEM -import openai - -llm = GPTModel(model="gpt-3.5-turbo") -openai.api_key = os.getenv("OPENAI_API_KEY") - -HF_TOKEN = os.getenv("HF_TOKEN") -REPO_ID = os.getenv("KDB_REPO") -if HF_TOKEN is not None and REPO_ID is not None: - snapshot_download(REPO_ID, repo_type="dataset", local_dir="knowledge_databases/", - local_dir_use_symlinks=False, token=HF_TOKEN) -ALL_KDB = ["(None)"] + list_folders("knowledge_databases") - -ANNOUNCEMENT = """ -# Evaluate the quality of retrieved date from the FAISS database - -Use this space test the performance of some pre-constructed vector databases hosted at `shaocongma/kdb`. To use this space for your own FAISS database, follow this instruction: -1. Duplicate this space. -2. Add the secret key `HF_TOKEN` with your own Huggingface User Access Token. -3. Create a Huggingface Dataset. Put your FAISS database to it. -4. Add the secret key `REPO_ID` as your dataset's address. -""" -AUTODRAFT = """ -AutoDraft is a GPT-based project to generate an academic paper using the title and contributions. When generating specific sections, AutoDraft will query some necessary backgrounds in related fields from the pre-constructed vector database. 
-""" - -def query_from_kdb(input, kdb, query_counts): - if kdb == "(None)": - return {"knowledge_database": "(None)", "input": input, "output": ""}, "" - - db_path = f"knowledge_databases/{kdb}" - db_config_path = os.path.join(db_path, "db_meta.json") - db_index_path = os.path.join(db_path, "faiss_index") - if os.path.isdir(db_path): - # load configuration file - with open(db_config_path, "r", encoding="utf-8") as f: - db_config = json.load(f) - model_name = db_config["embedding_model"] - embeddings = EMBEDDINGS[model_name] - db = FAISS.load_local(db_index_path, embeddings) - knowledge = Knowledge(db=db) - knowledge.collect_knowledge({input: query_counts}, max_query=query_counts) - domain_knowledge = knowledge.to_json() - else: - raise RuntimeError(f"Failed to query from FAISS.") - return domain_knowledge, "" - -def query_from_kdb_llm(title, contributions, kdb, query_counts): - if kdb == "(None)": - return {"knowledge_database": "(None)", "title": title, "contributions": contributions, "output": ""}, "", {} - - db_path = f"knowledge_databases/{kdb}" - db_config_path = os.path.join(db_path, "db_meta.json") - db_index_path = os.path.join(db_path, "faiss_index") - if os.path.isdir(db_path): - # load configuration file - with open(db_config_path, "r", encoding="utf-8") as f: - db_config = json.load(f) - model_name = db_config["embedding_model"] - embeddings = EMBEDDINGS[model_name] - db = FAISS.load_local(db_index_path, embeddings) - knowledge = Knowledge(db=db) - prompts = f"Title: {title}\n Contributions: {contributions}" - preliminaries_kw, _ = llm(systems=SYSTEM["preliminaries"], prompts=prompts, return_json=True) - knowledge.collect_knowledge(preliminaries_kw, max_query=query_counts) - domain_knowledge = knowledge.to_json() - else: - raise RuntimeError(f"Failed to query from FAISS.") - return domain_knowledge, "", preliminaries_kw - -with gr.Blocks() as demo: - with gr.Row(): - with gr.Column(): - gr.Markdown(ANNOUNCEMENT) - - kdb_dropdown = gr.Dropdown(choices=ALL_KDB, value="(None)", label="Knowledge Databases", - info="Pre-defined knowledge databases utilized to aid in the generation of academic writing content. 
" - "Hosted at `shaocongma/kdb`.") - with gr.Tab("User's Input"): - user_input = gr.Textbox(label="Input", info="Input anything you like to test what will be retrived from the vector database.") - with gr.Row(): - button_clear = gr.Button("Clear") - button_retrieval = gr.Button("Retrieve", variant="primary") - with gr.Tab("AutoDraft"): - gr.Markdown(AUTODRAFT) - title_input = gr.Textbox(label="Title") - contribution_input = gr.Textbox(label="Contributions", lines=5) - with gr.Row(): - button_clear_2 = gr.Button("Clear") - button_retrieval_2 = gr.Button("Retrieve", variant="primary") - - with gr.Accordion("Advanced Setting", open=False): - query_counts_slider = gr.Slider(minimum=1, maximum=50, value=10, step=1, - interactive=True, label="QUERY_COUNTS", - info="How many contents will be retrieved from the vector database.") - - with gr.Column(): - retrieval_output = gr.JSON(label="Output") - llm_kws = gr.JSON(label="Keywords generated by LLM") - - button_retrieval.click(fn=query_from_kdb, - inputs=[user_input, kdb_dropdown, query_counts_slider], - outputs=[retrieval_output, user_input]) - button_retrieval_2.click(fn=query_from_kdb_llm, - inputs=[title_input, contribution_input, kdb_dropdown, query_counts_slider], - outputs=[retrieval_output, user_input, llm_kws]) - -demo.queue(concurrency_count=1, max_size=5, api_open=False) -demo.launch(show_error=True) - - diff --git a/spaces/awacke1/2-NLP-Seq2SeqQAGenerator/qasrl_model_pipeline.py b/spaces/awacke1/2-NLP-Seq2SeqQAGenerator/qasrl_model_pipeline.py deleted file mode 100644 index 50135f76849bc8537fcae83b72532da661487da6..0000000000000000000000000000000000000000 --- a/spaces/awacke1/2-NLP-Seq2SeqQAGenerator/qasrl_model_pipeline.py +++ /dev/null @@ -1,183 +0,0 @@ -from typing import Optional -import json -from argparse import Namespace -from pathlib import Path -from transformers import Text2TextGenerationPipeline, AutoModelForSeq2SeqLM, AutoTokenizer - -def get_markers_for_model(is_t5_model: bool) -> Namespace: - special_tokens_constants = Namespace() - if is_t5_model: - # T5 model have 100 special tokens by default - special_tokens_constants.separator_input_question_predicate = "" - special_tokens_constants.separator_output_answers = "" - special_tokens_constants.separator_output_questions = "" # if using only questions - special_tokens_constants.separator_output_question_answer = "" - special_tokens_constants.separator_output_pairs = "" - special_tokens_constants.predicate_generic_marker = "" - special_tokens_constants.predicate_verb_marker = "" - special_tokens_constants.predicate_nominalization_marker = "" - - else: - special_tokens_constants.separator_input_question_predicate = "" - special_tokens_constants.separator_output_answers = "" - special_tokens_constants.separator_output_questions = "" # if using only questions - special_tokens_constants.separator_output_question_answer = "" - special_tokens_constants.separator_output_pairs = "" - special_tokens_constants.predicate_generic_marker = "" - special_tokens_constants.predicate_verb_marker = "" - special_tokens_constants.predicate_nominalization_marker = "" - return special_tokens_constants - -def load_trained_model(name_or_path): - import huggingface_hub as HFhub - tokenizer = AutoTokenizer.from_pretrained(name_or_path) - model = AutoModelForSeq2SeqLM.from_pretrained(name_or_path) - # load preprocessing_kwargs from the model repo on HF hub, or from the local model directory - kwargs_filename = None - if name_or_path.startswith("kleinay/"): # and 'preprocessing_kwargs.json' in 
HFhub.list_repo_files(name_or_path): # the supported version of HFhub doesn't support list_repo_files - kwargs_filename = HFhub.hf_hub_download(repo_id=name_or_path, filename="preprocessing_kwargs.json") - elif Path(name_or_path).is_dir() and (Path(name_or_path) / "experiment_kwargs.json").exists(): - kwargs_filename = Path(name_or_path) / "experiment_kwargs.json" - - if kwargs_filename: - preprocessing_kwargs = json.load(open(kwargs_filename)) - # integrate into model.config (for decoding args, e.g. "num_beams"), and save also as standalone object for preprocessing - model.config.preprocessing_kwargs = Namespace(**preprocessing_kwargs) - model.config.update(preprocessing_kwargs) - return model, tokenizer - - -class QASRL_Pipeline(Text2TextGenerationPipeline): - def __init__(self, model_repo: str, **kwargs): - model, tokenizer = load_trained_model(model_repo) - super().__init__(model, tokenizer, framework="pt") - self.is_t5_model = "t5" in model.config.model_type - self.special_tokens = get_markers_for_model(self.is_t5_model) - self.data_args = model.config.preprocessing_kwargs - # backward compatibility - default keyword values implemeted in `run_summarization`, thus not saved in `preprocessing_kwargs` - if "predicate_marker_type" not in vars(self.data_args): - self.data_args.predicate_marker_type = "generic" - if "use_bilateral_predicate_marker" not in vars(self.data_args): - self.data_args.use_bilateral_predicate_marker = True - if "append_verb_form" not in vars(self.data_args): - self.data_args.append_verb_form = True - self._update_config(**kwargs) - - def _update_config(self, **kwargs): - " Update self.model.config with initialization parameters and necessary defaults. " - # set default values that will always override model.config, but can overriden by __init__ kwargs - kwargs["max_length"] = kwargs.get("max_length", 80) - # override model.config with kwargs - for k,v in kwargs.items(): - self.model.config.__dict__[k] = v - - def _sanitize_parameters(self, **kwargs): - preprocess_kwargs, forward_kwargs, postprocess_kwargs = {}, {}, {} - if "predicate_marker" in kwargs: - preprocess_kwargs["predicate_marker"] = kwargs["predicate_marker"] - if "predicate_type" in kwargs: - preprocess_kwargs["predicate_type"] = kwargs["predicate_type"] - if "verb_form" in kwargs: - preprocess_kwargs["verb_form"] = kwargs["verb_form"] - return preprocess_kwargs, forward_kwargs, postprocess_kwargs - - def preprocess(self, inputs, predicate_marker="", predicate_type=None, verb_form=None): - # Here, inputs is string or list of strings; apply string postprocessing - if isinstance(inputs, str): - processed_inputs = self._preprocess_string(inputs, predicate_marker, predicate_type, verb_form) - elif hasattr(inputs, "__iter__"): - processed_inputs = [self._preprocess_string(s, predicate_marker, predicate_type, verb_form) for s in inputs] - else: - raise ValueError("inputs must be str or Iterable[str]") - # Now pass to super.preprocess for tokenization - return super().preprocess(processed_inputs) - - def _preprocess_string(self, seq: str, predicate_marker: str, predicate_type: Optional[str], verb_form: Optional[str]) -> str: - sent_tokens = seq.split(" ") - assert predicate_marker in sent_tokens, f"Input sentence must include a predicate-marker token ('{predicate_marker}') before the target predicate word" - predicate_idx = sent_tokens.index(predicate_marker) - sent_tokens.remove(predicate_marker) - sentence_before_predicate = " ".join([sent_tokens[i] for i in range(predicate_idx)]) - predicate = 
sent_tokens[predicate_idx] - sentence_after_predicate = " ".join([sent_tokens[i] for i in range(predicate_idx+1, len(sent_tokens))]) - - if self.data_args.predicate_marker_type == "generic": - predicate_marker = self.special_tokens.predicate_generic_marker - # In case we want special marker for each predicate type: """ - elif self.data_args.predicate_marker_type == "pred_type": - assert predicate_type is not None, "For this model, you must provide the `predicate_type` either when initializing QASRL_Pipeline(...) or when applying __call__(...) on it" - assert predicate_type in ("verbal", "nominal"), f"`predicate_type` must be either 'verbal' or 'nominal'; got '{predicate_type}'" - predicate_marker = {"verbal": self.special_tokens.predicate_verb_marker , - "nominal": self.special_tokens.predicate_nominalization_marker - }[predicate_type] - - if self.data_args.use_bilateral_predicate_marker: - seq = f"{sentence_before_predicate} {predicate_marker} {predicate} {predicate_marker} {sentence_after_predicate}" - else: - seq = f"{sentence_before_predicate} {predicate_marker} {predicate} {sentence_after_predicate}" - - # embed also verb_form - if self.data_args.append_verb_form and verb_form is None: - raise ValueError(f"For this model, you must provide the `verb_form` of the predicate when applying __call__(...)") - elif self.data_args.append_verb_form: - seq = f"{seq} {self.special_tokens.separator_input_question_predicate} {verb_form} " - else: - seq = f"{seq} " - - # append source prefix (for t5 models) - prefix = self._get_source_prefix(predicate_type) - - return prefix + seq - - def _get_source_prefix(self, predicate_type: Optional[str]): - if not self.is_t5_model or self.data_args.source_prefix is None: - return '' - if not self.data_args.source_prefix.startswith("<"): # Regular prefix - not dependent on input row x - return self.data_args.source_prefix - if self.data_args.source_prefix == "": - if predicate_type is None: - raise ValueError("source_prefix is '' but input no `predicate_type`.") - else: - return f"Generate QAs for {predicate_type} QASRL: " - - def _forward(self, *args, **kwargs): - outputs = super()._forward(*args, **kwargs) - return outputs - - - def postprocess(self, model_outputs): - output_seq = self.tokenizer.decode( - model_outputs["output_ids"].squeeze(), - skip_special_tokens=False, - clean_up_tokenization_spaces=False, - ) - output_seq = output_seq.strip(self.tokenizer.pad_token).strip(self.tokenizer.eos_token).strip() - qa_subseqs = output_seq.split(self.special_tokens.separator_output_pairs) - qas = [self._postrocess_qa(qa_subseq) for qa_subseq in qa_subseqs] - return {"generated_text": output_seq, - "QAs": qas} - - def _postrocess_qa(self, seq: str) -> str: - # split question and answers - if self.special_tokens.separator_output_question_answer in seq: - question, answer = seq.split(self.special_tokens.separator_output_question_answer)[:2] - else: - print("invalid format: no separator between question and answer found...") - return None - # question, answer = seq, '' # Or: backoff to only question - # skip "_" slots in questions - question = ' '.join(t for t in question.split(' ') if t != '_') - answers = [a.strip() for a in answer.split(self.special_tokens.separator_output_answers)] - return {"question": question, "answers": answers} - - -if __name__ == "__main__": - pipe = QASRL_Pipeline("kleinay/qanom-seq2seq-model-baseline") - res1 = pipe("The student was interested in Luke 's research about sea animals .", verb_form="research", predicate_type="nominal") - res2 
= pipe(["The doctor was interested in Luke 's treatment .", - "The Veterinary student was interested in Luke 's treatment of sea animals ."], verb_form="treat", predicate_type="nominal", num_beams=10) - res3 = pipe("A number of professions have developed that specialize in the treatment of mental disorders .", verb_form="develop", predicate_type="verbal") - print(res1) - print(res2) - print(res3) - \ No newline at end of file diff --git a/spaces/awacke1/Slot-Machine-HTML5/backupindex.html b/spaces/awacke1/Slot-Machine-HTML5/backupindex.html deleted file mode 100644 index c5ea0c1913847ad839b8030d9d6bfc369af9db3d..0000000000000000000000000000000000000000 --- a/spaces/awacke1/Slot-Machine-HTML5/backupindex.html +++ /dev/null @@ -1,77 +0,0 @@ - - - - - - Emoji Slot Machine - - - -

Emoji Slot Machine

🍌  🍌  🍌

Balance: $10.00
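Only the visible text of this deleted backup page survives above: the title, three 🍌 reels, and a balance readout. The markup and script were stripped in this dump, so the following is a minimal sketch of such a page; the element ids, symbol set, spin cost, and payout values are assumptions rather than recovered code.

<!DOCTYPE html>
<html>
<head>
  <meta charset="utf-8">
  <title>Emoji Slot Machine</title>
</head>
<body>
  <h1>Emoji Slot Machine</h1>
  <!-- Three reels, all starting on the banana symbol shown in the surviving page text -->
  <span id="reel1">🍌</span>
  <span id="reel2">🍌</span>
  <span id="reel3">🍌</span>
  <button id="spin">Spin ($0.25)</button>
  <p id="balance">Balance: $10.00</p>
  <script>
    // Symbol set, spin cost, and payout are assumptions; only 🍌 and $10.00 come from the dump.
    const symbols = ["🍌", "🍒", "🍋", "💎"];
    let balance = 10.00;
    const pick = () => symbols[Math.floor(Math.random() * symbols.length)];
    document.getElementById("spin").addEventListener("click", () => {
      if (balance < 0.25) return;   // not enough credit to spin
      balance -= 0.25;              // cost per spin
      const reels = [pick(), pick(), pick()];
      ["reel1", "reel2", "reel3"].forEach((id, i) => {
        document.getElementById(id).textContent = reels[i];
      });
      if (reels[0] === reels[1] && reels[1] === reels[2]) balance += 5.00; // flat payout for three of a kind
      document.getElementById("balance").textContent = "Balance: $" + balance.toFixed(2);
    });
  </script>
</body>
</html>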
    - - - - \ No newline at end of file diff --git a/spaces/awacke1/Streamlit-ChatGPT/backupapp.py b/spaces/awacke1/Streamlit-ChatGPT/backupapp.py deleted file mode 100644 index a7e8ab19c5290ce2fce1dd921c48a9f19b9d028c..0000000000000000000000000000000000000000 --- a/spaces/awacke1/Streamlit-ChatGPT/backupapp.py +++ /dev/null @@ -1,67 +0,0 @@ -import streamlit as st -import openai -import os -import base64 -import glob -from datetime import datetime -from dotenv import load_dotenv -from openai import ChatCompletion - -load_dotenv() - -openai.api_key = os.getenv('OPENAI_KEY') - -def chat_with_model(prompts): - model = "gpt-3.5-turbo" - - conversation = [{'role': 'system', 'content': 'You are a helpful assistant.'}] - conversation.extend([{'role': 'user', 'content': prompt} for prompt in prompts]) - - response = openai.ChatCompletion.create(model=model, messages=conversation) - return response['choices'][0]['message']['content'] - -def generate_filename(prompt): - safe_date_time = datetime.now().strftime("%Y_%m_%d_%H_%M_%S") - safe_prompt = "".join(x for x in prompt if x.isalnum())[:50] - return f"{safe_date_time}_{safe_prompt}.md" - -def create_file(filename, prompt, response): - with open(filename, 'w') as file: - file.write(f"Prompt: {prompt}\n\nResponse: {response}") - -def get_table_download_link(file_path): - with open(file_path, 'r') as file: - data = file.read() - b64 = base64.b64encode(data.encode()).decode() - href = f'Download Response File' - return href - -def main(): - st.title("Chat with AI") - - # Pre-defined prompts - prompts = ['Hows the weather?', 'Tell me a joke.', 'What is the meaning of life?'] - - # User prompt input - user_prompt = st.text_input("Your question:", '') - - if user_prompt: - prompts.append(user_prompt) - - if st.button('Chat'): - st.write('Chatting with GPT-3...') - response = chat_with_model(prompts) - st.write('Response:') - st.write(response) - - filename = generate_filename(user_prompt) - create_file(filename, user_prompt, response) - - st.markdown(get_table_download_link(filename), unsafe_allow_html=True) - - md_files = glob.glob("*.md") - for file in md_files: - st.markdown(get_table_download_link(file), unsafe_allow_html=True) - -if __name__ == "__main__": - main() diff --git a/spaces/azusarang/so-vits-svc-models-ba_P/vencoder/__init__.py b/spaces/azusarang/so-vits-svc-models-ba_P/vencoder/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/beihai/GFPGAN-V1.3-whole-image/scripts/parse_landmark.py b/spaces/beihai/GFPGAN-V1.3-whole-image/scripts/parse_landmark.py deleted file mode 100644 index 74e2ff9e130ad4f2395c9666dca3ba78526d7a8a..0000000000000000000000000000000000000000 --- a/spaces/beihai/GFPGAN-V1.3-whole-image/scripts/parse_landmark.py +++ /dev/null @@ -1,85 +0,0 @@ -import cv2 -import json -import numpy as np -import os -import torch -from basicsr.utils import FileClient, imfrombytes -from collections import OrderedDict - -# ---------------------------- This script is used to parse facial landmarks ------------------------------------- # -# Configurations -save_img = False -scale = 0.5 # 0.5 for official FFHQ (512x512), 1 for others -enlarge_ratio = 1.4 # only for eyes -json_path = 'ffhq-dataset-v2.json' -face_path = 'datasets/ffhq/ffhq_512.lmdb' -save_path = './FFHQ_eye_mouth_landmarks_512.pth' - -print('Load JSON metadata...') -# use the official json file in FFHQ dataset -with open(json_path, 'rb') as f: - json_data = json.load(f, 
object_pairs_hook=OrderedDict) - -print('Open LMDB file...') -# read ffhq images -file_client = FileClient('lmdb', db_paths=face_path) -with open(os.path.join(face_path, 'meta_info.txt')) as fin: - paths = [line.split('.')[0] for line in fin] - -save_dict = {} - -for item_idx, item in enumerate(json_data.values()): - print(f'\r{item_idx} / {len(json_data)}, {item["image"]["file_path"]} ', end='', flush=True) - - # parse landmarks - lm = np.array(item['image']['face_landmarks']) - lm = lm * scale - - item_dict = {} - # get image - if save_img: - img_bytes = file_client.get(paths[item_idx]) - img = imfrombytes(img_bytes, float32=True) - - # get landmarks for each component - map_left_eye = list(range(36, 42)) - map_right_eye = list(range(42, 48)) - map_mouth = list(range(48, 68)) - - # eye_left - mean_left_eye = np.mean(lm[map_left_eye], 0) # (x, y) - half_len_left_eye = np.max((np.max(np.max(lm[map_left_eye], 0) - np.min(lm[map_left_eye], 0)) / 2, 16)) - item_dict['left_eye'] = [mean_left_eye[0], mean_left_eye[1], half_len_left_eye] - # mean_left_eye[0] = 512 - mean_left_eye[0] # for testing flip - half_len_left_eye *= enlarge_ratio - loc_left_eye = np.hstack((mean_left_eye - half_len_left_eye + 1, mean_left_eye + half_len_left_eye)).astype(int) - if save_img: - eye_left_img = img[loc_left_eye[1]:loc_left_eye[3], loc_left_eye[0]:loc_left_eye[2], :] - cv2.imwrite(f'tmp/{item_idx:08d}_eye_left.png', eye_left_img * 255) - - # eye_right - mean_right_eye = np.mean(lm[map_right_eye], 0) - half_len_right_eye = np.max((np.max(np.max(lm[map_right_eye], 0) - np.min(lm[map_right_eye], 0)) / 2, 16)) - item_dict['right_eye'] = [mean_right_eye[0], mean_right_eye[1], half_len_right_eye] - # mean_right_eye[0] = 512 - mean_right_eye[0] # # for testing flip - half_len_right_eye *= enlarge_ratio - loc_right_eye = np.hstack( - (mean_right_eye - half_len_right_eye + 1, mean_right_eye + half_len_right_eye)).astype(int) - if save_img: - eye_right_img = img[loc_right_eye[1]:loc_right_eye[3], loc_right_eye[0]:loc_right_eye[2], :] - cv2.imwrite(f'tmp/{item_idx:08d}_eye_right.png', eye_right_img * 255) - - # mouth - mean_mouth = np.mean(lm[map_mouth], 0) - half_len_mouth = np.max((np.max(np.max(lm[map_mouth], 0) - np.min(lm[map_mouth], 0)) / 2, 16)) - item_dict['mouth'] = [mean_mouth[0], mean_mouth[1], half_len_mouth] - # mean_mouth[0] = 512 - mean_mouth[0] # for testing flip - loc_mouth = np.hstack((mean_mouth - half_len_mouth + 1, mean_mouth + half_len_mouth)).astype(int) - if save_img: - mouth_img = img[loc_mouth[1]:loc_mouth[3], loc_mouth[0]:loc_mouth[2], :] - cv2.imwrite(f'tmp/{item_idx:08d}_mouth.png', mouth_img * 255) - - save_dict[f'{item_idx:08d}'] = item_dict - -print('Save...') -torch.save(save_dict, save_path) diff --git a/spaces/beihai/PDF-Table-Extractor/.history/app_20220621075907.py b/spaces/beihai/PDF-Table-Extractor/.history/app_20220621075907.py deleted file mode 100644 index 5e386bd3894f8d8f8c548178536e3713316e7a2e..0000000000000000000000000000000000000000 --- a/spaces/beihai/PDF-Table-Extractor/.history/app_20220621075907.py +++ /dev/null @@ -1,31 +0,0 @@ -#-*- coding : utf-8-*- -import base64 -from subprocess import STDOUT -import streamlit as st -import pandas as pd -import camelot as cam # extracting tables from PDFs - -st.title("PDF Table Extractor") - -input_pdf = st.file_uploader(label = "", type = 'pdf') - -page_number = st.text_input("请填写表格所在PDF页码,eg: 3", value = 1) -background = st.selectbox("表格线条是否隐藏",(True, False)) -if input_pdf is not None: - # byte object into a PDF file - with 
open("input.pdf", "wb") as f: - base64_pdf = base64.b64encode(input_pdf.read()).decode('utf-8') - f.write(base64.b64decode(base64_pdf)) - f.close() - - # read the pdf and parse it using stream - tables = cam.read_pdf("input.pdf", pages=page_number, process_background=background) - result = pd.ExcelWriter('result.xlsx', engine='xlsxwriter') - tables[0].to_excel(result,index=False) - # for i in range(0,len(tables)): - # table = tables[i].df - # sheetname = str(i) - # table.to_excel(result, sheetname,index=False) - - with open('result.xlsx','rb') as f: - st.download_button('提取完成,点击下载!', f,file_name='result.xlsx',mime="application/vnd.ms-excel") \ No newline at end of file diff --git "a/spaces/betterme/mestreamlit/pages/3_\360\237\220\247_\345\210\206\350\257\215.py" "b/spaces/betterme/mestreamlit/pages/3_\360\237\220\247_\345\210\206\350\257\215.py" deleted file mode 100644 index ee7396c3024befe810de768b26cf4cabe503a2b7..0000000000000000000000000000000000000000 --- "a/spaces/betterme/mestreamlit/pages/3_\360\237\220\247_\345\210\206\350\257\215.py" +++ /dev/null @@ -1,39 +0,0 @@ -from meutils.pipe import * - -from appzoo.streamlit_app import Page - -import streamlit as st - -from LAC import LAC - - -# @st.experimental_memo(persist='disk', show_spinner=True, suppress_st_warning=False, max_entries=None, ttl=64) -@st.cache(func=None, persist=False, hash_funcs={'LAC.lac.LAC': str}, ttl=8) -def tokenizer1(): - print('Loading tokenizer1...') - return LAC() - - -@st.experimental_singleton -def tokenizer2(): - print('Loading tokenizer2...') - return LAC() - - -class MyPage(Page): - - def main(self): - with st.form("Coding"): - text = st.text_input("输入文本", "") - - if st.form_submit_button('开始转换'): - _ = tokenizer1().run(text) - _ = tokenizer2().run(text) - - st.json(_) - - -if __name__ == '__main__': - app_title = "# 切词" - app_info = "" - MyPage(app_title=app_title, app_info=app_info).main() diff --git a/spaces/bigcode/bigcode-playground/README.md b/spaces/bigcode/bigcode-playground/README.md deleted file mode 100644 index ebcf541c5d270c41a15f81be8e625784ca0dcae3..0000000000000000000000000000000000000000 --- a/spaces/bigcode/bigcode-playground/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: BigCode - Playground -emoji: 🪐 -colorFrom: grey -colorTo: yellow -sdk: gradio -sdk_version: 3.28.3 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/bigscience-data/roots-search/Makefile b/spaces/bigscience-data/roots-search/Makefile deleted file mode 100644 index 4b312120097eb5076c15c85a6cba7a1fd9383586..0000000000000000000000000000000000000000 --- a/spaces/bigscience-data/roots-search/Makefile +++ /dev/null @@ -1,7 +0,0 @@ -.PHONY: style - -# Format source code automatically - -style: - black --line-length 119 --target-version py36 . - isort . diff --git a/spaces/bioriAsaeru/text-to-voice/2manuals wic reset key crack Download the best tool for Epson and Canon printer maintenance.md b/spaces/bioriAsaeru/text-to-voice/2manuals wic reset key crack Download the best tool for Epson and Canon printer maintenance.md deleted file mode 100644 index c527199ebe7791e18b561fbee784f28df2ede5b6..0000000000000000000000000000000000000000 --- a/spaces/bioriAsaeru/text-to-voice/2manuals wic reset key crack Download the best tool for Epson and Canon printer maintenance.md +++ /dev/null @@ -1,9 +0,0 @@ -
    -

First step: I went to 2manuals.com and downloaded the free WIC Reset Utility (at the time of this post, it's v.5.57); choose the version for your system (e.g. Apple or Windows).

    -

    2manuals wic reset key crack


    Download File ————— https://urloso.com/2uyOhe



    -

    Third, I watched a YouTube video called: "Epson WF4630 Firmware Downgrade Always Full." The guy in the video is kind of rough, but he walks you through the software fairly well. In his video, he was attempting to permanently reset the ink levels for his cartridges. That's not what I needed to do to fix my problem, but the guy in the video does a nice job walking you through the WIC software. In the steps below, I used the WIC Reset Utility software to downgrade the firmware version for my WF-3640 to an older version - a version that doesn't tell the printer to brick itself if non-genuine Epson cartridges are installed. For reference, the downgraded firmware that the WIC software eventually installed on my printer was a version from March 3, 2016 (version CB17J4) - I assume that was the original firmware that was installed on this printer when it was manufactured and sold new 4 years ago. The painful part of this whole process is that I had to purchase a "key" for the WIC software to use in order to download that firmware. I could not figure out any other way to get a key. Unfortunately, they can only be used once. Otherwise I would gladly share my code with everyone. Read below...

    -

    Fourth, (and this part sucked) as noted above, I painfully paid the $19.99 for a firmware change key for the WIC reset utility, and paid for that through PayPal. Although physically cringing when I did it, it was an easy process. I got a 16-digit code (key) that can only be used once. (NOTE: there are other websites that purport to sell a "WIC reset key generator," and they claim that they will give you your first key for free, but each one of these sites requires you to enter all of your credit card information first - apparently in case you ever want to buy more key generating software from them?? I don't know). Anyway, I chose not to do that (purchase a key generator), even though it might be legit, and instead used PayPal for the one-time purchase through www.wic.support. You can dig around on the internet, and there are more than one site that sells them. I simply chose that one. Please understand that I am in NO WAY affiliated with www.wic.support. This was the first time I had ever heard of them, and I hope to never pay them another penny again. However, I must say, I was so frustrated that I was just about ready to order a new printer for about $180 on Amazon - so, to me, twenty bucks seemed worth a shot, even if it didn't work.

    -

If you want to pay by Visa, MasterCard, etc., or WebMoney, just click on the BUY button.

To pay by PayPal or other local payment systems, go here to pay. If you have an Epson printer, you can use a Trial key the first time: it will reset the counters from 100% to 80%. Please read about the Trial Key here - Video

    -

    \ No newline at end of file diff --git a/spaces/bioriAsaeru/text-to-voice/Camfrog Pro Code 6.3 __HOT__ Free 18.md b/spaces/bioriAsaeru/text-to-voice/Camfrog Pro Code 6.3 __HOT__ Free 18.md deleted file mode 100644 index 919b9ffebf2e096e0be42ec9d0ef2320aef63a79..0000000000000000000000000000000000000000 --- a/spaces/bioriAsaeru/text-to-voice/Camfrog Pro Code 6.3 __HOT__ Free 18.md +++ /dev/null @@ -1,6 +0,0 @@ -

    camfrog pro code 6.3 free 18


    Download »»» https://urloso.com/2uyP0x



-Download Camfrog Video Chat - Chat with friends via instant messaging or video calls, transfer files, access various public and private chat ...
    -
    -
    -

    diff --git a/spaces/bird-watching-society-of-greater-clare/brainy/README.md b/spaces/bird-watching-society-of-greater-clare/brainy/README.md deleted file mode 100644 index 5e10640a1938a2982c1958272858d3072688d24b..0000000000000000000000000000000000000000 --- a/spaces/bird-watching-society-of-greater-clare/brainy/README.md +++ /dev/null @@ -1,10 +0,0 @@ ---- -title: Brainy -emoji: 🧠 -colorFrom: pink -colorTo: purple -sdk: docker -pinned: false ---- - -Check out the configuration reference at diff --git a/spaces/blmdsydm/faster-whisper-webui/src/hooks/progressListener.py b/spaces/blmdsydm/faster-whisper-webui/src/hooks/progressListener.py deleted file mode 100644 index a7852a24e237ae864bbce5f37674e1f7c817a1b3..0000000000000000000000000000000000000000 --- a/spaces/blmdsydm/faster-whisper-webui/src/hooks/progressListener.py +++ /dev/null @@ -1,8 +0,0 @@ -from typing import Union - -class ProgressListener: - def on_progress(self, current: Union[int, float], total: Union[int, float]): - self.total = total - - def on_finished(self): - pass \ No newline at end of file diff --git a/spaces/camilosegura/traductor-multilenguaje/Lib/site-packages/PIL/XVThumbImagePlugin.py b/spaces/camilosegura/traductor-multilenguaje/Lib/site-packages/PIL/XVThumbImagePlugin.py deleted file mode 100644 index aa4a01f4e5aa4109ff447c42669d97a4fe43fd0c..0000000000000000000000000000000000000000 --- a/spaces/camilosegura/traductor-multilenguaje/Lib/site-packages/PIL/XVThumbImagePlugin.py +++ /dev/null @@ -1,78 +0,0 @@ -# -# The Python Imaging Library. -# $Id$ -# -# XV Thumbnail file handler by Charles E. "Gene" Cash -# (gcash@magicnet.net) -# -# see xvcolor.c and xvbrowse.c in the sources to John Bradley's XV, -# available from ftp://ftp.cis.upenn.edu/pub/xv/ -# -# history: -# 98-08-15 cec created (b/w only) -# 98-12-09 cec added color palette -# 98-12-28 fl added to PIL (with only a few very minor modifications) -# -# To do: -# FIXME: make save work (this requires quantization support) -# - -from . import Image, ImageFile, ImagePalette -from ._binary import o8 - -_MAGIC = b"P7 332" - -# standard color palette for thumbnails (RGB332) -PALETTE = b"" -for r in range(8): - for g in range(8): - for b in range(4): - PALETTE = PALETTE + ( - o8((r * 255) // 7) + o8((g * 255) // 7) + o8((b * 255) // 3) - ) - - -def _accept(prefix): - return prefix[:6] == _MAGIC - - -## -# Image plugin for XV thumbnail images. - - -class XVThumbImageFile(ImageFile.ImageFile): - format = "XVThumb" - format_description = "XV thumbnail image" - - def _open(self): - # check magic - if not _accept(self.fp.read(6)): - msg = "not an XV thumbnail file" - raise SyntaxError(msg) - - # Skip to beginning of next line - self.fp.readline() - - # skip info comments - while True: - s = self.fp.readline() - if not s: - msg = "Unexpected EOF reading XV thumbnail file" - raise SyntaxError(msg) - if s[0] != 35: # ie. 
when not a comment: '#' - break - - # parse header line (already read) - s = s.strip().split() - - self.mode = "P" - self._size = int(s[0]), int(s[1]) - - self.palette = ImagePalette.raw("RGB", PALETTE) - - self.tile = [("raw", (0, 0) + self.size, self.fp.tell(), (self.mode, 0, 1))] - - -# -------------------------------------------------------------------- - -Image.register_open(XVThumbImageFile.format, XVThumbImageFile, _accept) diff --git a/spaces/carlosalonso/Detection-video/carpeta_deteccion/demo/predictor.py b/spaces/carlosalonso/Detection-video/carpeta_deteccion/demo/predictor.py deleted file mode 100644 index 7b7ebd3f846850172c1f560f8492d51e5667f76d..0000000000000000000000000000000000000000 --- a/spaces/carlosalonso/Detection-video/carpeta_deteccion/demo/predictor.py +++ /dev/null @@ -1,220 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -import atexit -import bisect -import multiprocessing as mp -from collections import deque -import cv2 -import torch - -from detectron2.data import MetadataCatalog -from detectron2.engine.defaults import DefaultPredictor -from detectron2.utils.video_visualizer import VideoVisualizer -from detectron2.utils.visualizer import ColorMode, Visualizer - - -class VisualizationDemo(object): - def __init__(self, cfg, instance_mode=ColorMode.IMAGE, parallel=False): - """ - Args: - cfg (CfgNode): - instance_mode (ColorMode): - parallel (bool): whether to run the model in different processes from visualization. - Useful since the visualization logic can be slow. - """ - self.metadata = MetadataCatalog.get( - cfg.DATASETS.TEST[0] if len(cfg.DATASETS.TEST) else "__unused" - ) - self.cpu_device = torch.device("cpu") - self.instance_mode = instance_mode - - self.parallel = parallel - if parallel: - num_gpu = torch.cuda.device_count() - self.predictor = AsyncPredictor(cfg, num_gpus=num_gpu) - else: - self.predictor = DefaultPredictor(cfg) - - def run_on_image(self, image): - """ - Args: - image (np.ndarray): an image of shape (H, W, C) (in BGR order). - This is the format used by OpenCV. - - Returns: - predictions (dict): the output of the model. - vis_output (VisImage): the visualized image output. - """ - vis_output = None - predictions = self.predictor(image) - # Convert image from OpenCV BGR format to Matplotlib RGB format. - image = image[:, :, ::-1] - visualizer = Visualizer(image, self.metadata, instance_mode=self.instance_mode) - if "panoptic_seg" in predictions: - panoptic_seg, segments_info = predictions["panoptic_seg"] - vis_output = visualizer.draw_panoptic_seg_predictions( - panoptic_seg.to(self.cpu_device), segments_info - ) - else: - if "sem_seg" in predictions: - vis_output = visualizer.draw_sem_seg( - predictions["sem_seg"].argmax(dim=0).to(self.cpu_device) - ) - if "instances" in predictions: - instances = predictions["instances"].to(self.cpu_device) - vis_output = visualizer.draw_instance_predictions(predictions=instances) - - return predictions, vis_output - - def _frame_from_video(self, video): - while video.isOpened(): - success, frame = video.read() - if success: - yield frame - else: - break - - def run_on_video(self, video): - """ - Visualizes predictions on frames of the input video. - - Args: - video (cv2.VideoCapture): a :class:`VideoCapture` object, whose source can be - either a webcam or a video file. - - Yields: - ndarray: BGR visualizations of each video frame. 
- """ - video_visualizer = VideoVisualizer(self.metadata, self.instance_mode) - - def process_predictions(frame, predictions): - frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB) - if "panoptic_seg" in predictions: - panoptic_seg, segments_info = predictions["panoptic_seg"] - vis_frame = video_visualizer.draw_panoptic_seg_predictions( - frame, panoptic_seg.to(self.cpu_device), segments_info - ) - elif "instances" in predictions: - predictions = predictions["instances"].to(self.cpu_device) - vis_frame = video_visualizer.draw_instance_predictions(frame, predictions) - elif "sem_seg" in predictions: - vis_frame = video_visualizer.draw_sem_seg( - frame, predictions["sem_seg"].argmax(dim=0).to(self.cpu_device) - ) - - # Converts Matplotlib RGB format to OpenCV BGR format - vis_frame = cv2.cvtColor(vis_frame.get_image(), cv2.COLOR_RGB2BGR) - return vis_frame - - frame_gen = self._frame_from_video(video) - if self.parallel: - buffer_size = self.predictor.default_buffer_size - - frame_data = deque() - - for cnt, frame in enumerate(frame_gen): - frame_data.append(frame) - self.predictor.put(frame) - - if cnt >= buffer_size: - frame = frame_data.popleft() - predictions = self.predictor.get() - yield process_predictions(frame, predictions) - - while len(frame_data): - frame = frame_data.popleft() - predictions = self.predictor.get() - yield process_predictions(frame, predictions) - else: - for frame in frame_gen: - yield process_predictions(frame, self.predictor(frame)) - - -class AsyncPredictor: - """ - A predictor that runs the model asynchronously, possibly on >1 GPUs. - Because rendering the visualization takes considerably amount of time, - this helps improve throughput a little bit when rendering videos. - """ - - class _StopToken: - pass - - class _PredictWorker(mp.Process): - def __init__(self, cfg, task_queue, result_queue): - self.cfg = cfg - self.task_queue = task_queue - self.result_queue = result_queue - super().__init__() - - def run(self): - predictor = DefaultPredictor(self.cfg) - - while True: - task = self.task_queue.get() - if isinstance(task, AsyncPredictor._StopToken): - break - idx, data = task - result = predictor(data) - self.result_queue.put((idx, result)) - - def __init__(self, cfg, num_gpus: int = 1): - """ - Args: - cfg (CfgNode): - num_gpus (int): if 0, will run on CPU - """ - num_workers = max(num_gpus, 1) - self.task_queue = mp.Queue(maxsize=num_workers * 3) - self.result_queue = mp.Queue(maxsize=num_workers * 3) - self.procs = [] - for gpuid in range(max(num_gpus, 1)): - cfg = cfg.clone() - cfg.defrost() - cfg.MODEL.DEVICE = "cuda:{}".format(gpuid) if num_gpus > 0 else "cpu" - self.procs.append( - AsyncPredictor._PredictWorker(cfg, self.task_queue, self.result_queue) - ) - - self.put_idx = 0 - self.get_idx = 0 - self.result_rank = [] - self.result_data = [] - - for p in self.procs: - p.start() - atexit.register(self.shutdown) - - def put(self, image): - self.put_idx += 1 - self.task_queue.put((self.put_idx, image)) - - def get(self): - self.get_idx += 1 # the index needed for this request - if len(self.result_rank) and self.result_rank[0] == self.get_idx: - res = self.result_data[0] - del self.result_data[0], self.result_rank[0] - return res - - while True: - # make sure the results are returned in the correct order - idx, res = self.result_queue.get() - if idx == self.get_idx: - return res - insert = bisect.bisect(self.result_rank, idx) - self.result_rank.insert(insert, idx) - self.result_data.insert(insert, res) - - def __len__(self): - return self.put_idx - 
self.get_idx - - def __call__(self, image): - self.put(image) - return self.get() - - def shutdown(self): - for _ in self.procs: - self.task_queue.put(AsyncPredictor._StopToken()) - - @property - def default_buffer_size(self): - return len(self.procs) * 5 diff --git a/spaces/carlosalonso/Detection-video/carpeta_deteccion/detectron2/modeling/meta_arch/fcos.py b/spaces/carlosalonso/Detection-video/carpeta_deteccion/detectron2/modeling/meta_arch/fcos.py deleted file mode 100644 index 7e7140bfa04a8e8bb199a800805cbaf22fdd8f32..0000000000000000000000000000000000000000 --- a/spaces/carlosalonso/Detection-video/carpeta_deteccion/detectron2/modeling/meta_arch/fcos.py +++ /dev/null @@ -1,328 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. - -import logging -from typing import List, Optional, Tuple -import torch -from fvcore.nn import sigmoid_focal_loss_jit -from torch import nn -from torch.nn import functional as F - -from detectron2.layers import ShapeSpec, batched_nms -from detectron2.structures import Boxes, ImageList, Instances, pairwise_point_box_distance -from detectron2.utils.events import get_event_storage - -from ..anchor_generator import DefaultAnchorGenerator -from ..backbone import Backbone -from ..box_regression import Box2BoxTransformLinear, _dense_box_regression_loss -from .dense_detector import DenseDetector -from .retinanet import RetinaNetHead - -__all__ = ["FCOS"] - -logger = logging.getLogger(__name__) - - -class FCOS(DenseDetector): - """ - Implement FCOS in :paper:`fcos`. - """ - - def __init__( - self, - *, - backbone: Backbone, - head: nn.Module, - head_in_features: Optional[List[str]] = None, - box2box_transform=None, - num_classes, - center_sampling_radius: float = 1.5, - focal_loss_alpha=0.25, - focal_loss_gamma=2.0, - test_score_thresh=0.2, - test_topk_candidates=1000, - test_nms_thresh=0.6, - max_detections_per_image=100, - pixel_mean, - pixel_std, - ): - """ - Args: - center_sampling_radius: radius of the "center" of a groundtruth box, - within which all anchor points are labeled positive. - Other arguments mean the same as in :class:`RetinaNet`. - """ - super().__init__( - backbone, head, head_in_features, pixel_mean=pixel_mean, pixel_std=pixel_std - ) - - self.num_classes = num_classes - - # FCOS uses one anchor point per location. - # We represent the anchor point by a box whose size equals the anchor stride. - feature_shapes = backbone.output_shape() - fpn_strides = [feature_shapes[k].stride for k in self.head_in_features] - self.anchor_generator = DefaultAnchorGenerator( - sizes=[[k] for k in fpn_strides], aspect_ratios=[1.0], strides=fpn_strides - ) - - # FCOS parameterizes box regression by a linear transform, - # where predictions are normalized by anchor stride (equal to anchor size). 
- if box2box_transform is None: - box2box_transform = Box2BoxTransformLinear(normalize_by_size=True) - self.box2box_transform = box2box_transform - - self.center_sampling_radius = float(center_sampling_radius) - - # Loss parameters: - self.focal_loss_alpha = focal_loss_alpha - self.focal_loss_gamma = focal_loss_gamma - - # Inference parameters: - self.test_score_thresh = test_score_thresh - self.test_topk_candidates = test_topk_candidates - self.test_nms_thresh = test_nms_thresh - self.max_detections_per_image = max_detections_per_image - - def forward_training(self, images, features, predictions, gt_instances): - # Transpose the Hi*Wi*A dimension to the middle: - pred_logits, pred_anchor_deltas, pred_centerness = self._transpose_dense_predictions( - predictions, [self.num_classes, 4, 1] - ) - anchors = self.anchor_generator(features) - gt_labels, gt_boxes = self.label_anchors(anchors, gt_instances) - return self.losses( - anchors, pred_logits, gt_labels, pred_anchor_deltas, gt_boxes, pred_centerness - ) - - @torch.no_grad() - def _match_anchors(self, gt_boxes: Boxes, anchors: List[Boxes]): - """ - Match ground-truth boxes to a set of multi-level anchors. - - Args: - gt_boxes: Ground-truth boxes from instances of an image. - anchors: List of anchors for each feature map (of different scales). - - Returns: - torch.Tensor - A tensor of shape `(M, R)`, given `M` ground-truth boxes and total - `R` anchor points from all feature levels, indicating the quality - of match between m-th box and r-th anchor. Higher value indicates - better match. - """ - # Naming convention: (M = ground-truth boxes, R = anchor points) - # Anchor points are represented as square boxes of size = stride. - num_anchors_per_level = [len(x) for x in anchors] - anchors = Boxes.cat(anchors) # (R, 4) - anchor_centers = anchors.get_centers() # (R, 2) - anchor_sizes = anchors.tensor[:, 2] - anchors.tensor[:, 0] # (R, ) - - lower_bound = anchor_sizes * 4 - lower_bound[: num_anchors_per_level[0]] = 0 - upper_bound = anchor_sizes * 8 - upper_bound[-num_anchors_per_level[-1] :] = float("inf") - - gt_centers = gt_boxes.get_centers() - - # FCOS with center sampling: anchor point must be close enough to - # ground-truth box center. - center_dists = (anchor_centers[None, :, :] - gt_centers[:, None, :]).abs_() - sampling_regions = self.center_sampling_radius * anchor_sizes[None, :] - - match_quality_matrix = center_dists.max(dim=2).values < sampling_regions - - pairwise_dist = pairwise_point_box_distance(anchor_centers, gt_boxes) - pairwise_dist = pairwise_dist.permute(1, 0, 2) # (M, R, 4) - - # The original FCOS anchor matching rule: anchor point must be inside GT. - match_quality_matrix &= pairwise_dist.min(dim=2).values > 0 - - # Multilevel anchor matching in FCOS: each anchor is only responsible - # for certain scale range. - pairwise_dist = pairwise_dist.max(dim=2).values - match_quality_matrix &= (pairwise_dist > lower_bound[None, :]) & ( - pairwise_dist < upper_bound[None, :] - ) - # Match the GT box with minimum area, if there are multiple GT matches. - gt_areas = gt_boxes.area() # (M, ) - - match_quality_matrix = match_quality_matrix.to(torch.float32) - match_quality_matrix *= 1e8 - gt_areas[:, None] - return match_quality_matrix # (M, R) - - @torch.no_grad() - def label_anchors(self, anchors: List[Boxes], gt_instances: List[Instances]): - """ - Same interface as :meth:`RetinaNet.label_anchors`, but implemented with FCOS - anchor matching rule. - - Unlike RetinaNet, there are no ignored anchors. 
- """ - - gt_labels, matched_gt_boxes = [], [] - - for inst in gt_instances: - if len(inst) > 0: - match_quality_matrix = self._match_anchors(inst.gt_boxes, anchors) - - # Find matched ground-truth box per anchor. Un-matched anchors are - # assigned -1. This is equivalent to using an anchor matcher as used - # in R-CNN/RetinaNet: `Matcher(thresholds=[1e-5], labels=[0, 1])` - match_quality, matched_idxs = match_quality_matrix.max(dim=0) - matched_idxs[match_quality < 1e-5] = -1 - - matched_gt_boxes_i = inst.gt_boxes.tensor[matched_idxs.clip(min=0)] - gt_labels_i = inst.gt_classes[matched_idxs.clip(min=0)] - - # Anchors with matched_idxs = -1 are labeled background. - gt_labels_i[matched_idxs < 0] = self.num_classes - else: - matched_gt_boxes_i = torch.zeros_like(Boxes.cat(anchors).tensor) - gt_labels_i = torch.full( - (len(matched_gt_boxes_i),), - fill_value=self.num_classes, - dtype=torch.long, - device=matched_gt_boxes_i.device, - ) - - gt_labels.append(gt_labels_i) - matched_gt_boxes.append(matched_gt_boxes_i) - - return gt_labels, matched_gt_boxes - - def losses( - self, anchors, pred_logits, gt_labels, pred_anchor_deltas, gt_boxes, pred_centerness - ): - """ - This method is almost identical to :meth:`RetinaNet.losses`, with an extra - "loss_centerness" in the returned dict. - """ - num_images = len(gt_labels) - gt_labels = torch.stack(gt_labels) # (M, R) - - pos_mask = (gt_labels >= 0) & (gt_labels != self.num_classes) - num_pos_anchors = pos_mask.sum().item() - get_event_storage().put_scalar("num_pos_anchors", num_pos_anchors / num_images) - normalizer = self._ema_update("loss_normalizer", max(num_pos_anchors, 1), 300) - - # classification and regression loss - gt_labels_target = F.one_hot(gt_labels, num_classes=self.num_classes + 1)[ - :, :, :-1 - ] # no loss for the last (background) class - loss_cls = sigmoid_focal_loss_jit( - torch.cat(pred_logits, dim=1), - gt_labels_target.to(pred_logits[0].dtype), - alpha=self.focal_loss_alpha, - gamma=self.focal_loss_gamma, - reduction="sum", - ) - - loss_box_reg = _dense_box_regression_loss( - anchors, - self.box2box_transform, - pred_anchor_deltas, - gt_boxes, - pos_mask, - box_reg_loss_type="giou", - ) - - ctrness_targets = self.compute_ctrness_targets(anchors, gt_boxes) # (M, R) - pred_centerness = torch.cat(pred_centerness, dim=1).squeeze(dim=2) # (M, R) - ctrness_loss = F.binary_cross_entropy_with_logits( - pred_centerness[pos_mask], ctrness_targets[pos_mask], reduction="sum" - ) - return { - "loss_fcos_cls": loss_cls / normalizer, - "loss_fcos_loc": loss_box_reg / normalizer, - "loss_fcos_ctr": ctrness_loss / normalizer, - } - - def compute_ctrness_targets(self, anchors: List[Boxes], gt_boxes: List[torch.Tensor]): - anchors = Boxes.cat(anchors).tensor # Rx4 - reg_targets = [self.box2box_transform.get_deltas(anchors, m) for m in gt_boxes] - reg_targets = torch.stack(reg_targets, dim=0) # NxRx4 - if len(reg_targets) == 0: - return reg_targets.new_zeros(len(reg_targets)) - left_right = reg_targets[:, :, [0, 2]] - top_bottom = reg_targets[:, :, [1, 3]] - ctrness = (left_right.min(dim=-1)[0] / left_right.max(dim=-1)[0]) * ( - top_bottom.min(dim=-1)[0] / top_bottom.max(dim=-1)[0] - ) - return torch.sqrt(ctrness) - - def forward_inference( - self, - images: ImageList, - features: List[torch.Tensor], - predictions: List[List[torch.Tensor]], - ): - pred_logits, pred_anchor_deltas, pred_centerness = self._transpose_dense_predictions( - predictions, [self.num_classes, 4, 1] - ) - anchors = self.anchor_generator(features) - - results: 
List[Instances] = [] - for img_idx, image_size in enumerate(images.image_sizes): - scores_per_image = [ - # Multiply and sqrt centerness & classification scores - # (See eqn. 4 in https://arxiv.org/abs/2006.09214) - torch.sqrt(x[img_idx].sigmoid_() * y[img_idx].sigmoid_()) - for x, y in zip(pred_logits, pred_centerness) - ] - deltas_per_image = [x[img_idx] for x in pred_anchor_deltas] - results_per_image = self.inference_single_image( - anchors, scores_per_image, deltas_per_image, image_size - ) - results.append(results_per_image) - return results - - def inference_single_image( - self, - anchors: List[Boxes], - box_cls: List[torch.Tensor], - box_delta: List[torch.Tensor], - image_size: Tuple[int, int], - ): - """ - Identical to :meth:`RetinaNet.inference_single_image. - """ - pred = self._decode_multi_level_predictions( - anchors, - box_cls, - box_delta, - self.test_score_thresh, - self.test_topk_candidates, - image_size, - ) - keep = batched_nms( - pred.pred_boxes.tensor, pred.scores, pred.pred_classes, self.test_nms_thresh - ) - return pred[keep[: self.max_detections_per_image]] - - -class FCOSHead(RetinaNetHead): - """ - The head used in :paper:`fcos`. It adds an additional centerness - prediction branch on top of :class:`RetinaNetHead`. - """ - - def __init__(self, *, input_shape: List[ShapeSpec], conv_dims: List[int], **kwargs): - super().__init__(input_shape=input_shape, conv_dims=conv_dims, num_anchors=1, **kwargs) - # Unlike original FCOS, we do not add an additional learnable scale layer - # because it's found to have no benefits after normalizing regression targets by stride. - self._num_features = len(input_shape) - self.ctrness = nn.Conv2d(conv_dims[-1], 1, kernel_size=3, stride=1, padding=1) - torch.nn.init.normal_(self.ctrness.weight, std=0.01) - torch.nn.init.constant_(self.ctrness.bias, 0) - - def forward(self, features): - assert len(features) == self._num_features - logits = [] - bbox_reg = [] - ctrness = [] - for feature in features: - logits.append(self.cls_score(self.cls_subnet(feature))) - bbox_feature = self.bbox_subnet(feature) - bbox_reg.append(self.bbox_pred(bbox_feature)) - ctrness.append(self.ctrness(bbox_feature)) - return logits, bbox_reg, ctrness diff --git a/spaces/carlosalonso/Detection-video/carpeta_deteccion/projects/DensePose/setup.py b/spaces/carlosalonso/Detection-video/carpeta_deteccion/projects/DensePose/setup.py deleted file mode 100644 index 22ad239fe320b8f9501f783afb134b975276a628..0000000000000000000000000000000000000000 --- a/spaces/carlosalonso/Detection-video/carpeta_deteccion/projects/DensePose/setup.py +++ /dev/null @@ -1,42 +0,0 @@ -import re -from pathlib import Path -from setuptools import find_packages, setup - -try: - import torch # noqa: F401 -except ImportError as e: - raise Exception( - """ -You must install PyTorch prior to installing DensePose: -pip install torch - -For more information: - https://pytorch.org/get-started/locally/ - """ - ) from e - - -def get_detectron2_current_version(): - """Version is not available for import through Python since it is - above the top level of the package. 
Instead, we parse it from the - file with a regex.""" - # Get version info from detectron2 __init__.py - version_source = (Path(__file__).parents[2] / "detectron2" / "__init__.py").read_text() - version_number = re.findall(r'__version__ = "([0-9\.]+)"', version_source)[0] - return version_number - - -setup( - name="detectron2-densepose", - author="FAIR", - version=get_detectron2_current_version(), - url="https://github.com/facebookresearch/detectron2/tree/main/projects/DensePose", - packages=find_packages(), - python_requires=">=3.7", - install_requires=[ - "av>=8.0.3", - "detectron2@git+https://github.com/facebookresearch/detectron2.git", - "opencv-python-headless>=4.5.3.56", - "scipy>=1.5.4", - ], -) diff --git a/spaces/carlosalonso/Detection-video/carpeta_deteccion/projects/ViTDet/configs/COCO/cascade_mask_rcnn_mvitv2_h_in21k_36ep.py b/spaces/carlosalonso/Detection-video/carpeta_deteccion/projects/ViTDet/configs/COCO/cascade_mask_rcnn_mvitv2_h_in21k_36ep.py deleted file mode 100644 index 577045043b960384953a00eac4dc45ee43c1045e..0000000000000000000000000000000000000000 --- a/spaces/carlosalonso/Detection-video/carpeta_deteccion/projects/ViTDet/configs/COCO/cascade_mask_rcnn_mvitv2_h_in21k_36ep.py +++ /dev/null @@ -1,39 +0,0 @@ -from fvcore.common.param_scheduler import MultiStepParamScheduler - -from detectron2.config import LazyCall as L -from detectron2.solver import WarmupParamScheduler - -from .cascade_mask_rcnn_mvitv2_b_in21k_100ep import ( - dataloader, - lr_multiplier, - model, - train, - optimizer, -) - -model.backbone.bottom_up.embed_dim = 192 -model.backbone.bottom_up.depth = 80 -model.backbone.bottom_up.num_heads = 3 -model.backbone.bottom_up.last_block_indexes = (3, 11, 71, 79) -model.backbone.bottom_up.drop_path_rate = 0.6 -model.backbone.bottom_up.use_act_checkpoint = True - - -train.init_checkpoint = "detectron2://ImageNetPretrained/mvitv2/MViTv2_H_in21k.pyth" - - -# 36 epochs -train.max_iter = 67500 -lr_multiplier = L(WarmupParamScheduler)( - scheduler=L(MultiStepParamScheduler)( - values=[1.0, 0.1, 0.01], - milestones=[ - 52500, - 62500, - 67500, - ], - ), - warmup_length=250 / train.max_iter, - warmup_factor=0.001, -) -optimizer.lr = 1.6e-4 diff --git a/spaces/caslabs/midi-autocompletion/musicautobot/multitask_transformer/transform.py b/spaces/caslabs/midi-autocompletion/musicautobot/multitask_transformer/transform.py deleted file mode 100644 index 747e217a2e6605133b5fdbc807dd715ed4292c12..0000000000000000000000000000000000000000 --- a/spaces/caslabs/midi-autocompletion/musicautobot/multitask_transformer/transform.py +++ /dev/null @@ -1,68 +0,0 @@ -from ..music_transformer.transform import * - -class MultitrackItem(): - def __init__(self, melody:MusicItem, chords:MusicItem, stream=None): - self.melody,self.chords = melody, chords - self.vocab = melody.vocab - self._stream = stream - - @classmethod - def from_file(cls, midi_file, vocab): - return cls.from_stream(file2stream(midi_file), vocab) - - @classmethod - def from_stream(cls, stream, vocab): - if not isinstance(stream, music21.stream.Score): stream = stream.voicesToParts() - num_parts = len(stream.parts) - sort_pitch = False - if num_parts > 2: - raise ValueError('Could not extract melody and chords from midi file. Please make sure file contains exactly 2 tracks') - elif num_parts == 1: - print('Warning: only 1 track found. 
Inferring melody/chords') - stream = separate_melody_chord(stream) - sort_pitch = False - - mpart, cpart = stream2npenc_parts(stream, sort_pitch=sort_pitch) - return cls.from_npenc_parts(mpart, cpart, vocab, stream) - - @classmethod - def from_npenc_parts(cls, mpart, cpart, vocab, stream=None): - mpart = npenc2idxenc(mpart, seq_type=SEQType.Melody, vocab=vocab, add_eos=False) - cpart = npenc2idxenc(cpart, seq_type=SEQType.Chords, vocab=vocab, add_eos=False) - return MultitrackItem(MusicItem(mpart, vocab), MusicItem(cpart, vocab), stream) - - @classmethod - def from_idx(cls, item, vocab): - m, c = item - return MultitrackItem(MusicItem.from_idx(m, vocab), MusicItem.from_idx(c, vocab)) - def to_idx(self): return np.array((self.melody.to_idx(), self.chords.to_idx())) - - @property - def stream(self): - self._stream = self.to_stream() if self._stream is None else self._stream - return self._stream - - def to_stream(self, bpm=120): - ps = self.melody.to_npenc(), self.chords.to_npenc() - ps = [npenc2chordarr(p) for p in ps] - chordarr = chordarr_combine_parts(ps) - return chordarr2stream(chordarr, bpm=bpm) - - - def show(self, format:str=None): - return self.stream.show(format) - def play(self): self.stream.show('midi') - - def transpose(self, val): - return MultitrackItem(self.melody.transpose(val), self.chords.transpose(val)) - def pad_to(self, val): - return MultitrackItem(self.melody.pad_to(val), self.chords.pad_to(val)) - def trim_to_beat(self, beat): - return MultitrackItem(self.melody.trim_to_beat(beat), self.chords.trim_to_beat(beat)) - -def combine2chordarr(np1, np2, vocab): - if len(np1.shape) == 1: np1 = idxenc2npenc(np1, vocab) - if len(np2.shape) == 1: np2 = idxenc2npenc(np2, vocab) - p1 = npenc2chordarr(np1) - p2 = npenc2chordarr(np2) - return chordarr_combine_parts((p1, p2)) diff --git a/spaces/cccc-c/web-ui-pub/_next/static/chunks/main-82b99ba2818bf862.js b/spaces/cccc-c/web-ui-pub/_next/static/chunks/main-82b99ba2818bf862.js deleted file mode 100644 index 42754ad9d5eba40922e6439852c0a55763c215c8..0000000000000000000000000000000000000000 --- a/spaces/cccc-c/web-ui-pub/_next/static/chunks/main-82b99ba2818bf862.js +++ /dev/null @@ -1 +0,0 @@ -(self.webpackChunk_N_E=self.webpackChunk_N_E||[]).push([[179],{40037:function(){"trimStart"in String.prototype||(String.prototype.trimStart=String.prototype.trimLeft),"trimEnd"in String.prototype||(String.prototype.trimEnd=String.prototype.trimRight),"description"in Symbol.prototype||Object.defineProperty(Symbol.prototype,"description",{configurable:!0,get:function(){var e=/\((.*)\)/.exec(this.toString());return e?e[1]:void 0}}),Array.prototype.flat||(Array.prototype.flat=function(e,t){return t=this.concat.apply([],this),e>1&&t.some(Array.isArray)?t.flat(e-1):t},Array.prototype.flatMap=function(e,t){return this.map(e,t).flat()}),Promise.prototype.finally||(Promise.prototype.finally=function(e){if("function"!=typeof e)return this.then(e,e);var t=this.constructor||Promise;return this.then(function(r){return t.resolve(e()).then(function(){return r})},function(r){return t.resolve(e()).then(function(){throw r})})}),Object.fromEntries||(Object.fromEntries=function(e){return Array.from(e).reduce(function(e,t){return e[t[0]]=t[1],e},{})})},98010:function(e,t,r){"use strict";Object.defineProperty(t,"__esModule",{value:!0}),Object.defineProperty(t,"addBasePath",{enumerable:!0,get:function(){return o}});let n=r(46584),a=r(81583);function o(e,t){return(0,a.normalizePathTrailingSlash)((0,n.addPathPrefix)(e,""))}("function"==typeof 
t.default||"object"==typeof t.default&&null!==t.default)&&void 0===t.default.__esModule&&(Object.defineProperty(t.default,"__esModule",{value:!0}),Object.assign(t.default,t),e.exports=t.default)},13331:function(e,t,r){"use strict";Object.defineProperty(t,"__esModule",{value:!0}),Object.defineProperty(t,"addLocale",{enumerable:!0,get:function(){return n}}),r(81583);let n=function(e){for(var t=arguments.length,r=Array(t>1?t-1:0),n=1;n{let t={};e.forEach(e=>{if("link"===e.type&&e.props["data-optimized-fonts"]){if(document.querySelector('style[data-href="'+e.props["data-href"]+'"]'))return;e.props.href=e.props["data-href"],e.props["data-href"]=void 0}let r=t[e.type]||[];r.push(e),t[e.type]=r});let n=t.title?t.title[0]:null,a="";if(n){let{children:e}=n.props;a="string"==typeof e?e:Array.isArray(e)?e.join(""):""}a!==document.title&&(document.title=a),["meta","base","link","style","script"].forEach(e=>{r(e,t[e]||[])})}}}r=(e,t)=>{let r=document.getElementsByTagName("head")[0],n=r.querySelector("meta[name=next-head-count]"),i=Number(n.content),l=[];for(let t=0,r=n.previousElementSibling;t{for(let t=0,r=l.length;t{var t;return null==(t=e.parentNode)?void 0:t.removeChild(e)}),s.forEach(e=>r.insertBefore(e,n)),n.content=(i-l.length+s.length).toString()},("function"==typeof t.default||"object"==typeof t.default&&null!==t.default)&&void 0===t.default.__esModule&&(Object.defineProperty(t.default,"__esModule",{value:!0}),Object.assign(t.default,t),e.exports=t.default)},98141:function(e,t,r){"use strict";let n,a,o,i,l,u,s,c,f,d,h,p;Object.defineProperty(t,"__esModule",{value:!0});let m=r(61757);Object.defineProperty(t,"__esModule",{value:!0}),function(e,t){for(var r in t)Object.defineProperty(e,r,{enumerable:!0,get:t[r]})}(t,{version:function(){return G},router:function(){return n},emitter:function(){return V},initialize:function(){return K},hydrate:function(){return ec}});let g=r(38754);r(40037);let y=g._(r(67294)),_=g._(r(20745)),b=r(19742),v=g._(r(2139)),P=r(9378),w=r(20075),S=r(4138),j=r(22946),O=r(20755),E=r(64207),R=r(44422),x=g._(r(9473)),C=g._(r(31267)),M=g._(r(84366)),A=r(55825),L=r(38645),I=r(80676),T=r(52790),N=r(90684),k=r(10420),D=r(2496),B=r(53948),H=r(88783),U=g._(r(32867)),F=e=>t=>e(t)+"",W=r.u;r.u=F(W);let q=r.k;r.k=F(q);let z=r.miniCssF;r.miniCssF=F(z);let G="13.4.9",V=(0,v.default)(),X=e=>[].slice.call(e),$=!1;self.__next_require__=r;class Y extends y.default.Component{componentDidCatch(e,t){this.props.fn(e,t)}componentDidMount(){this.scrollToHash(),n.isSsr&&(a.isFallback||a.nextExport&&((0,S.isDynamicRoute)(n.pathname)||location.search||$)||a.props&&a.props.__N_SSG&&(location.search||$))&&n.replace(n.pathname+"?"+String((0,j.assign)((0,j.urlQueryToSearchParams)(n.query),new URLSearchParams(location.search))),o,{_h:1,shallow:!a.isFallback&&!$}).catch(e=>{if(!e.cancelled)throw e})}componentDidUpdate(){this.scrollToHash()}scrollToHash(){let{hash:e}=location;if(!(e=e&&e.substring(1)))return;let t=document.getElementById(e);t&&setTimeout(()=>t.scrollIntoView(),0)}render(){return this.props.children}}async function K(e){void 0===e&&(e={}),a=JSON.parse(document.getElementById("__NEXT_DATA__").textContent),window.__NEXT_DATA__=a,p=a.defaultLocale;let t=a.assetPrefix||"";if(r.p=""+t+"/_next/",(0,O.setConfig)({serverRuntimeConfig:{},publicRuntimeConfig:a.runtimeConfig||{}}),o=(0,E.getURL)(),(0,k.hasBasePath)(o)&&(o=(0,N.removeBasePath)(o)),a.scriptLoader){let{initScriptLoader:e}=r(74844);e(a.scriptLoader)}i=new C.default(a.buildId,t);let s=e=>{let[t,r]=e;return 
i.routeLoader.onEntrypoint(t,r)};return window.__NEXT_P&&window.__NEXT_P.map(e=>setTimeout(()=>s(e),0)),window.__NEXT_P=[],window.__NEXT_P.push=s,(u=(0,x.default)()).getIsSsr=()=>n.isSsr,l=document.getElementById("__next"),{assetPrefix:t}}function J(e,t){return y.default.createElement(e,t)}function Q(e){var t;let{children:r}=e,a=y.default.useMemo(()=>(0,B.adaptForAppRouterInstance)(n),[]);return y.default.createElement(Y,{fn:e=>ee({App:f,err:e}).catch(e=>console.error("Error rendering page: ",e))},y.default.createElement(D.AppRouterContext.Provider,{value:a},y.default.createElement(H.SearchParamsContext.Provider,{value:(0,B.adaptForSearchParams)(n)},y.default.createElement(B.PathnameContextProviderAdapter,{router:n,isAutoExport:null!=(t=self.__NEXT_DATA__.autoExport)&&t},y.default.createElement(P.RouterContext.Provider,{value:(0,L.makePublicRouterInstance)(n)},y.default.createElement(b.HeadManagerContext.Provider,{value:u},y.default.createElement(T.ImageConfigContext.Provider,{value:{deviceSizes:[640,750,828,1080,1200,1920,2048,3840],imageSizes:[16,32,48,64,96,128,256,384],path:"/_next/image",loader:"default",dangerouslyAllowSVG:!1,unoptimized:!1}},r)))))))}let Z=e=>t=>{let r={...t,Component:h,err:a.err,router:n};return y.default.createElement(Q,null,J(e,r))};function ee(e){let{App:t,err:l}=e;return console.error(l),console.error("A client-side exception has occurred, see here for more info: https://nextjs.org/docs/messages/client-side-exception-occurred"),i.loadPage("/_error").then(n=>{let{page:a,styleSheets:o}=n;return(null==s?void 0:s.Component)===a?Promise.resolve().then(()=>m._(r(28476))).then(n=>Promise.resolve().then(()=>m._(r(11767))).then(r=>(t=r.default,e.App=t,n))).then(e=>({ErrorComponent:e.default,styleSheets:[]})):{ErrorComponent:a,styleSheets:o}}).then(r=>{var i;let{ErrorComponent:u,styleSheets:s}=r,c=Z(t),f={Component:u,AppTree:c,router:n,ctx:{err:l,pathname:a.page,query:a.query,asPath:o,AppTree:c}};return Promise.resolve((null==(i=e.props)?void 0:i.err)?e.props:(0,E.loadGetInitialProps)(t,f)).then(t=>eu({...e,err:l,Component:u,styleSheets:s,props:t}))})}function et(e){let{callback:t}=e;return y.default.useLayoutEffect(()=>t(),[t]),null}let er=null,en=!0;function ea(){["beforeRender","afterHydrate","afterRender","routeChange"].forEach(e=>performance.clearMarks(e))}function eo(){E.ST&&(performance.mark("afterHydrate"),performance.measure("Next.js-before-hydration","navigationStart","beforeRender"),performance.measure("Next.js-hydration","beforeRender","afterHydrate"),d&&performance.getEntriesByName("Next.js-hydration").forEach(d),ea())}function ei(){if(!E.ST)return;performance.mark("afterRender");let e=performance.getEntriesByName("routeChange","mark");e.length&&(performance.measure("Next.js-route-change-to-render",e[0].name,"beforeRender"),performance.measure("Next.js-render","beforeRender","afterRender"),d&&(performance.getEntriesByName("Next.js-render").forEach(d),performance.getEntriesByName("Next.js-route-change-to-render").forEach(d)),ea(),["Next.js-route-change-to-render","Next.js-render"].forEach(e=>performance.clearMeasures(e)))}function el(e){let{callbacks:t,children:r}=e;return y.default.useLayoutEffect(()=>t.forEach(e=>e()),[t]),y.default.useEffect(()=>{(0,M.default)(d)},[]),r}function eu(e){let t,{App:r,Component:a,props:o,err:i}=e,u="initial"in e?void 0:e.styleSheets;a=a||s.Component,o=o||s.props;let f={...o,Component:a,err:i,router:n};s=f;let d=!1,h=new Promise((e,r)=>{c&&c(),t=()=>{c=null,e()},c=()=>{d=!0,c=null;let e=Error("Cancel rendering 
route");e.cancelled=!0,r(e)}});function p(){t()}!function(){if(!u)return;let e=X(document.querySelectorAll("style[data-n-href]")),t=new Set(e.map(e=>e.getAttribute("data-n-href"))),r=document.querySelector("noscript[data-n-css]"),n=null==r?void 0:r.getAttribute("data-n-css");u.forEach(e=>{let{href:r,text:a}=e;if(!t.has(r)){let e=document.createElement("style");e.setAttribute("data-n-href",r),e.setAttribute("media","x"),n&&e.setAttribute("nonce",n),document.head.appendChild(e),e.appendChild(document.createTextNode(a))}})}();let m=y.default.createElement(y.default.Fragment,null,y.default.createElement(et,{callback:function(){if(u&&!d){let e=new Set(u.map(e=>e.href)),t=X(document.querySelectorAll("style[data-n-href]")),r=t.map(e=>e.getAttribute("data-n-href"));for(let n=0;n{let{href:t}=e,r=document.querySelector('style[data-n-href="'+t+'"]');r&&(n.parentNode.insertBefore(r,n.nextSibling),n=r)}),X(document.querySelectorAll("link[data-n-p]")).forEach(e=>{e.parentNode.removeChild(e)})}if(e.scroll){let{x:t,y:r}=e.scroll;(0,w.handleSmoothScroll)(()=>{window.scrollTo(t,r)})}}}),y.default.createElement(Q,null,J(r,f),y.default.createElement(R.Portal,{type:"next-route-announcer"},y.default.createElement(A.RouteAnnouncer,null))));return!function(e,t){E.ST&&performance.mark("beforeRender");let r=t(en?eo:ei);if(er){let e=y.default.startTransition;e(()=>{er.render(r)})}else er=_.default.hydrateRoot(e,r,{onRecoverableError:U.default}),en=!1}(l,e=>y.default.createElement(el,{callbacks:[e,p]},m)),h}async function es(e){if(e.err){await ee(e);return}try{await eu(e)}catch(r){let t=(0,I.getProperError)(r);if(t.cancelled)throw t;await ee({...e,err:t})}}async function ec(e){let t=a.err;try{let e=await i.routeLoader.whenEntrypoint("/_app");if("error"in e)throw e.error;let{component:t,exports:r}=e;f=t,r&&r.reportWebVitals&&(d=e=>{let t,{id:n,name:a,startTime:o,value:i,duration:l,entryType:u,entries:s,attribution:c}=e,f=Date.now()+"-"+(Math.floor(Math.random()*(9e12-1))+1e12);s&&s.length&&(t=s[0].startTime);let d={id:n||f,name:a,startTime:o||t,value:null==i?l:i,label:"mark"===u||"measure"===u?"custom":"web-vital"};c&&(d.attribution=c),r.reportWebVitals(d)});let n=await i.routeLoader.whenEntrypoint(a.page);if("error"in n)throw n.error;h=n.component}catch(e){t=(0,I.getProperError)(e)}window.__NEXT_PRELOADREADY&&await window.__NEXT_PRELOADREADY(a.dynamicIds),n=(0,L.createRouter)(a.page,a.query,o,{initialProps:a.props,pageLoader:i,App:f,Component:h,wrapApp:Z,err:t,isFallback:!!a.isFallback,subscription:(e,t,r)=>es(Object.assign({},e,{App:t,scroll:r})),locale:a.locale,locales:a.locales,defaultLocale:p,domainLocales:a.domainLocales,isPreview:a.isPreview}),$=await n._initialMatchesMiddlewarePromise;let r={App:f,initial:!0,Component:h,props:a.props,err:t};(null==e?void 0:e.beforeRender)&&await e.beforeRender(),es(r)}("function"==typeof t.default||"object"==typeof t.default&&null!==t.default)&&void 0===t.default.__esModule&&(Object.defineProperty(t.default,"__esModule",{value:!0}),Object.assign(t.default,t),e.exports=t.default)},87578:function(e,t,r){"use strict";Object.defineProperty(t,"__esModule",{value:!0});let n=r(98141);window.next={version:n.version,get router(){return n.router},emitter:n.emitter},(0,n.initialize)({}).then(()=>(0,n.hydrate)()).catch(console.error),("function"==typeof t.default||"object"==typeof t.default&&null!==t.default)&&void 0===t.default.__esModule&&(Object.defineProperty(t.default,"__esModule",{value:!0}),Object.assign(t.default,t),e.exports=t.default)},81583:function(e,t,r){"use 
strict";Object.defineProperty(t,"__esModule",{value:!0}),Object.defineProperty(t,"normalizePathTrailingSlash",{enumerable:!0,get:function(){return o}});let n=r(79861),a=r(96583),o=e=>{if(!e.startsWith("/"))return e;let{pathname:t,query:r,hash:o}=(0,a.parsePath)(e);return""+(0,n.removeTrailingSlash)(t)+r+o};("function"==typeof t.default||"object"==typeof t.default&&null!==t.default)&&void 0===t.default.__esModule&&(Object.defineProperty(t.default,"__esModule",{value:!0}),Object.assign(t.default,t),e.exports=t.default)},32867:function(e,t,r){"use strict";Object.defineProperty(t,"__esModule",{value:!0}),Object.defineProperty(t,"default",{enumerable:!0,get:function(){return a}});let n=r(5388);function a(e){let t="function"==typeof reportError?reportError:e=>{window.console.error(e)};e.digest!==n.NEXT_DYNAMIC_NO_SSR_CODE&&t(e)}("function"==typeof t.default||"object"==typeof t.default&&null!==t.default)&&void 0===t.default.__esModule&&(Object.defineProperty(t.default,"__esModule",{value:!0}),Object.assign(t.default,t),e.exports=t.default)},31267:function(e,t,r){"use strict";Object.defineProperty(t,"__esModule",{value:!0}),Object.defineProperty(t,"default",{enumerable:!0,get:function(){return d}});let n=r(38754),a=r(98010),o=r(53529),i=n._(r(11033)),l=r(13331),u=r(4138),s=r(62837),c=r(79861),f=r(4203);class d{getPageList(){return(0,f.getClientBuildManifest)().then(e=>e.sortedPages)}getMiddleware(){return window.__MIDDLEWARE_MATCHERS=[],window.__MIDDLEWARE_MATCHERS}getDataHref(e){let{asPath:t,href:r,locale:n}=e,{pathname:f,query:d,search:h}=(0,s.parseRelativeUrl)(r),{pathname:p}=(0,s.parseRelativeUrl)(t),m=(0,c.removeTrailingSlash)(f);if("/"!==m[0])throw Error('Route name should start with a "/", got "'+m+'"');return(e=>{let t=(0,i.default)((0,c.removeTrailingSlash)((0,l.addLocale)(e,n)),".json");return(0,a.addBasePath)("/_next/data/"+this.buildId+t+h,!0)})(e.skipInterpolation?p:(0,u.isDynamicRoute)(m)?(0,o.interpolateAs)(f,p,d).result:m)}_isSsg(e){return this.promisedSsgManifest.then(t=>t.has(e))}loadPage(e){return this.routeLoader.loadRoute(e).then(e=>{if("component"in e)return{page:e.component,mod:e.exports,styleSheets:e.styles.map(e=>({href:e.href,text:e.content}))};throw e.error})}prefetch(e){return this.routeLoader.prefetch(e)}constructor(e,t){this.routeLoader=(0,f.createRouteLoader)(t),this.buildId=e,this.assetPrefix=t,this.promisedSsgManifest=new Promise(e=>{window.__SSG_MANIFEST?e(window.__SSG_MANIFEST):window.__SSG_MANIFEST_CB=()=>{e(window.__SSG_MANIFEST)}})}}("function"==typeof t.default||"object"==typeof t.default&&null!==t.default)&&void 0===t.default.__esModule&&(Object.defineProperty(t.default,"__esModule",{value:!0}),Object.assign(t.default,t),e.exports=t.default)},84366:function(e,t,r){"use strict";let n;Object.defineProperty(t,"__esModule",{value:!0}),Object.defineProperty(t,"default",{enumerable:!0,get:function(){return l}});let a=["CLS","FCP","FID","INP","LCP","TTFB"];location.href;let o=!1;function i(e){n&&n(e)}let l=e=>{if(n=e,!o)for(let e of(o=!0,a))try{let t;t||(t=r(78018)),t["on"+e](i)}catch(t){console.warn("Failed to track "+e+" web-vital",t)}};("function"==typeof t.default||"object"==typeof t.default&&null!==t.default)&&void 0===t.default.__esModule&&(Object.defineProperty(t.default,"__esModule",{value:!0}),Object.assign(t.default,t),e.exports=t.default)},44422:function(e,t,r){"use strict";Object.defineProperty(t,"__esModule",{value:!0}),Object.defineProperty(t,"Portal",{enumerable:!0,get:function(){return o}});let 
n=r(67294),a=r(73935),o=e=>{let{children:t,type:r}=e,[o,i]=(0,n.useState)(null);return(0,n.useEffect)(()=>{let e=document.createElement(r);return document.body.appendChild(e),i(e),()=>{document.body.removeChild(e)}},[r]),o?(0,a.createPortal)(t,o):null};("function"==typeof t.default||"object"==typeof t.default&&null!==t.default)&&void 0===t.default.__esModule&&(Object.defineProperty(t.default,"__esModule",{value:!0}),Object.assign(t.default,t),e.exports=t.default)},90684:function(e,t,r){"use strict";function n(e){return(e=e.slice(0)).startsWith("/")||(e="/"+e),e}Object.defineProperty(t,"__esModule",{value:!0}),Object.defineProperty(t,"removeBasePath",{enumerable:!0,get:function(){return n}}),r(10420),("function"==typeof t.default||"object"==typeof t.default&&null!==t.default)&&void 0===t.default.__esModule&&(Object.defineProperty(t.default,"__esModule",{value:!0}),Object.assign(t.default,t),e.exports=t.default)},39727:function(e,t,r){"use strict";function n(e,t){return e}Object.defineProperty(t,"__esModule",{value:!0}),Object.defineProperty(t,"removeLocale",{enumerable:!0,get:function(){return n}}),r(96583),("function"==typeof t.default||"object"==typeof t.default&&null!==t.default)&&void 0===t.default.__esModule&&(Object.defineProperty(t.default,"__esModule",{value:!0}),Object.assign(t.default,t),e.exports=t.default)},463:function(e,t){"use strict";Object.defineProperty(t,"__esModule",{value:!0}),function(e,t){for(var r in t)Object.defineProperty(e,r,{enumerable:!0,get:t[r]})}(t,{requestIdleCallback:function(){return r},cancelIdleCallback:function(){return n}});let r="undefined"!=typeof self&&self.requestIdleCallback&&self.requestIdleCallback.bind(window)||function(e){let t=Date.now();return self.setTimeout(function(){e({didTimeout:!1,timeRemaining:function(){return Math.max(0,50-(Date.now()-t))}})},1)},n="undefined"!=typeof self&&self.cancelIdleCallback&&self.cancelIdleCallback.bind(window)||function(e){return clearTimeout(e)};("function"==typeof t.default||"object"==typeof t.default&&null!==t.default)&&void 0===t.default.__esModule&&(Object.defineProperty(t.default,"__esModule",{value:!0}),Object.assign(t.default,t),e.exports=t.default)},55825:function(e,t,r){"use strict";Object.defineProperty(t,"__esModule",{value:!0}),function(e,t){for(var r in t)Object.defineProperty(e,r,{enumerable:!0,get:t[r]})}(t,{RouteAnnouncer:function(){return l},default:function(){return u}});let n=r(38754),a=n._(r(67294)),o=r(38645),i={border:0,clip:"rect(0 0 0 0)",height:"1px",margin:"-1px",overflow:"hidden",padding:0,position:"absolute",top:0,width:"1px",whiteSpace:"nowrap",wordWrap:"normal"},l=()=>{let{asPath:e}=(0,o.useRouter)(),[t,r]=a.default.useState(""),n=a.default.useRef(e);return a.default.useEffect(()=>{if(n.current!==e){if(n.current=e,document.title)r(document.title);else{var t;let n=document.querySelector("h1"),a=null!=(t=null==n?void 0:n.innerText)?t:null==n?void 0:n.textContent;r(a||e)}}},[e]),a.default.createElement("p",{"aria-live":"assertive",id:"__next-route-announcer__",role:"alert",style:i},t)},u=l;("function"==typeof t.default||"object"==typeof t.default&&null!==t.default)&&void 0===t.default.__esModule&&(Object.defineProperty(t.default,"__esModule",{value:!0}),Object.assign(t.default,t),e.exports=t.default)},4203:function(e,t,r){"use strict";Object.defineProperty(t,"__esModule",{value:!0}),function(e,t){for(var r in t)Object.defineProperty(e,r,{enumerable:!0,get:t[r]})}(t,{markAssetError:function(){return l},isAssetError:function(){return u},getClientBuildManifest:function(){return 
d},createRouteLoader:function(){return p}}),r(38754),r(11033);let n=r(2771),a=r(463);function o(e,t,r){let n,a=t.get(e);if(a)return"future"in a?a.future:Promise.resolve(a);let o=new Promise(e=>{n=e});return t.set(e,a={resolve:n,future:o}),r?r().then(e=>(n(e),e)).catch(r=>{throw t.delete(e),r}):o}let i=Symbol("ASSET_LOAD_ERROR");function l(e){return Object.defineProperty(e,i,{})}function u(e){return e&&i in e}let s=function(e){try{return e=document.createElement("link"),!!window.MSInputMethodContext&&!!document.documentMode||e.relList.supports("prefetch")}catch(e){return!1}}(),c=()=>"";function f(e,t,r){return new Promise((n,o)=>{let i=!1;e.then(e=>{i=!0,n(e)}).catch(o),(0,a.requestIdleCallback)(()=>setTimeout(()=>{i||o(r)},t))})}function d(){if(self.__BUILD_MANIFEST)return Promise.resolve(self.__BUILD_MANIFEST);let e=new Promise(e=>{let t=self.__BUILD_MANIFEST_CB;self.__BUILD_MANIFEST_CB=()=>{e(self.__BUILD_MANIFEST),t&&t()}});return f(e,3800,l(Error("Failed to load client build manifest")))}function h(e,t){return d().then(r=>{if(!(t in r))throw l(Error("Failed to lookup route: "+t));let a=r[t].map(t=>e+"/_next/"+encodeURI(t));return{scripts:a.filter(e=>e.endsWith(".js")).map(e=>(0,n.__unsafeCreateTrustedScriptURL)(e)+c()),css:a.filter(e=>e.endsWith(".css")).map(e=>e+c())}})}function p(e){let t=new Map,r=new Map,n=new Map,i=new Map;function u(e){{var t;let n=r.get(e.toString());return n||(document.querySelector('script[src^="'+e+'"]')?Promise.resolve():(r.set(e.toString(),n=new Promise((r,n)=>{(t=document.createElement("script")).onload=r,t.onerror=()=>n(l(Error("Failed to load script: "+e))),t.crossOrigin=void 0,t.src=e,document.body.appendChild(t)})),n))}}function c(e){let t=n.get(e);return t||n.set(e,t=fetch(e).then(t=>{if(!t.ok)throw Error("Failed to load stylesheet: "+e);return t.text().then(t=>({href:e,content:t}))}).catch(e=>{throw l(e)})),t}return{whenEntrypoint:e=>o(e,t),onEntrypoint(e,r){(r?Promise.resolve().then(()=>r()).then(e=>({component:e&&e.default||e,exports:e}),e=>({error:e})):Promise.resolve(void 0)).then(r=>{let n=t.get(e);n&&"resolve"in n?r&&(t.set(e,r),n.resolve(r)):(r?t.set(e,r):t.delete(e),i.delete(e))})},loadRoute(r,n){return o(r,i,()=>{let a;return f(h(e,r).then(e=>{let{scripts:n,css:a}=e;return Promise.all([t.has(r)?[]:Promise.all(n.map(u)),Promise.all(a.map(c))])}).then(e=>this.whenEntrypoint(r).then(t=>({entrypoint:t,styles:e[1]}))),3800,l(Error("Route did not complete loading: "+r))).then(e=>{let{entrypoint:t,styles:r}=e,n=Object.assign({styles:r},t);return"error"in t?t:n}).catch(e=>{if(n)throw e;return{error:e}}).finally(()=>null==a?void 0:a())})},prefetch(t){let r;return(r=navigator.connection)&&(r.saveData||/2g/.test(r.effectiveType))?Promise.resolve():h(e,t).then(e=>Promise.all(s?e.scripts.map(e=>{var t,r,n;return t=e.toString(),r="script",new Promise((e,a)=>{let o='\n link[rel="prefetch"][href^="'+t+'"],\n link[rel="preload"][href^="'+t+'"],\n script[src^="'+t+'"]';if(document.querySelector(o))return e();n=document.createElement("link"),r&&(n.as=r),n.rel="prefetch",n.crossOrigin=void 0,n.onload=e,n.onerror=()=>a(l(Error("Failed to prefetch: "+t))),n.href=t,document.head.appendChild(n)})}):[])).then(()=>{(0,a.requestIdleCallback)(()=>this.loadRoute(t,!0).catch(()=>{}))}).catch(()=>{})}}}("function"==typeof t.default||"object"==typeof t.default&&null!==t.default)&&void 0===t.default.__esModule&&(Object.defineProperty(t.default,"__esModule",{value:!0}),Object.assign(t.default,t),e.exports=t.default)},38645:function(e,t,r){"use 
strict";Object.defineProperty(t,"__esModule",{value:!0}),function(e,t){for(var r in t)Object.defineProperty(e,r,{enumerable:!0,get:t[r]})}(t,{Router:function(){return o.default},default:function(){return h},withRouter:function(){return u.default},useRouter:function(){return p},createRouter:function(){return m},makePublicRouterInstance:function(){return g}});let n=r(38754),a=n._(r(67294)),o=n._(r(60805)),i=r(9378),l=n._(r(80676)),u=n._(r(59386)),s={router:null,readyCallbacks:[],ready(e){if(this.router)return e();this.readyCallbacks.push(e)}},c=["pathname","route","query","asPath","components","isFallback","basePath","locale","locales","defaultLocale","isReady","isPreview","isLocaleDomain","domainLocales"],f=["push","replace","reload","back","prefetch","beforePopState"];function d(){if(!s.router)throw Error('No router instance found.\nYou should only use "next/router" on the client side of your app.\n');return s.router}Object.defineProperty(s,"events",{get:()=>o.default.events}),c.forEach(e=>{Object.defineProperty(s,e,{get(){let t=d();return t[e]}})}),f.forEach(e=>{s[e]=function(){for(var t=arguments.length,r=Array(t),n=0;n{s.ready(()=>{o.default.events.on(e,function(){for(var t=arguments.length,r=Array(t),n=0;ne()),s.readyCallbacks=[],s.router}function g(e){let t={};for(let r of c){if("object"==typeof e[r]){t[r]=Object.assign(Array.isArray(e[r])?[]:{},e[r]);continue}t[r]=e[r]}return t.events=o.default.events,f.forEach(r=>{t[r]=function(){for(var t=arguments.length,n=Array(t),a=0;a{let{src:t,id:r,onLoad:n=()=>{},onReady:a=null,dangerouslySetInnerHTML:o,children:i="",strategy:l="afterInteractive",onError:s}=e,h=r||t;if(h&&f.has(h))return;if(c.has(t)){f.add(h),c.get(t).then(n,s);return}let p=()=>{a&&a(),f.add(h)},m=document.createElement("script"),g=new Promise((e,t)=>{m.addEventListener("load",function(t){e(),n&&n.call(this,t),p()}),m.addEventListener("error",function(e){t(e)})}).catch(function(e){s&&s(e)});for(let[r,n]of(o?(m.innerHTML=o.__html||"",p()):i?(m.textContent="string"==typeof i?i:Array.isArray(i)?i.join(""):"",p()):t&&(m.src=t,c.set(t,g)),Object.entries(e))){if(void 0===n||d.includes(r))continue;let e=u.DOMAttributeNames[r]||r.toLowerCase();m.setAttribute(e,n)}"worker"===l&&m.setAttribute("type","text/partytown"),m.setAttribute("data-nscript",l),document.body.appendChild(m)};function p(e){let{strategy:t="afterInteractive"}=e;"lazyOnload"===t?window.addEventListener("load",()=>{(0,s.requestIdleCallback)(()=>h(e))}):h(e)}function m(e){e.forEach(p),function(){let e=[...document.querySelectorAll('[data-nscript="beforeInteractive"]'),...document.querySelectorAll('[data-nscript="beforePageRender"]')];e.forEach(e=>{let t=e.id||e.getAttribute("src");f.add(t)})}()}function g(e){let{id:t,src:r="",onLoad:n=()=>{},onReady:a=null,strategy:u="afterInteractive",onError:c,...d}=e,{updateScripts:p,scripts:m,getIsSsr:g,appDir:y,nonce:_}=(0,i.useContext)(l.HeadManagerContext),b=(0,i.useRef)(!1);(0,i.useEffect)(()=>{let e=t||r;b.current||(a&&e&&f.has(e)&&a(),b.current=!0)},[a,t,r]);let v=(0,i.useRef)(!1);if((0,i.useEffect)(()=>{!v.current&&("afterInteractive"===u?h(e):"lazyOnload"===u&&("complete"===document.readyState?(0,s.requestIdleCallback)(()=>h(e)):window.addEventListener("load",()=>{(0,s.requestIdleCallback)(()=>h(e))})),v.current=!0)},[e,u]),("beforeInteractive"===u||"worker"===u)&&(p?(m[u]=(m[u]||[]).concat([{id:t,src:r,onLoad:n,onReady:a,onError:c,...d}]),p(m)):g&&g()?f.add(t||r):g&&!g()&&h(e)),y){if("beforeInteractive"===u)return 
r?(o.default.preload(r,d.integrity?{as:"script",integrity:d.integrity}:{as:"script"}),i.default.createElement("script",{nonce:_,dangerouslySetInnerHTML:{__html:"(self.__next_s=self.__next_s||[]).push("+JSON.stringify([r])+")"}})):(d.dangerouslySetInnerHTML&&(d.children=d.dangerouslySetInnerHTML.__html,delete d.dangerouslySetInnerHTML),i.default.createElement("script",{nonce:_,dangerouslySetInnerHTML:{__html:"(self.__next_s=self.__next_s||[]).push("+JSON.stringify([0,{...d}])+")"}}));"afterInteractive"===u&&r&&o.default.preload(r,d.integrity?{as:"script",integrity:d.integrity}:{as:"script"})}return null}Object.defineProperty(g,"__nextScript",{value:!0});let y=g;("function"==typeof t.default||"object"==typeof t.default&&null!==t.default)&&void 0===t.default.__esModule&&(Object.defineProperty(t.default,"__esModule",{value:!0}),Object.assign(t.default,t),e.exports=t.default)},2771:function(e,t){"use strict";let r;function n(e){var t;return(null==(t=function(){if(void 0===r){var e;r=(null==(e=window.trustedTypes)?void 0:e.createPolicy("nextjs",{createHTML:e=>e,createScript:e=>e,createScriptURL:e=>e}))||null}return r}())?void 0:t.createScriptURL(e))||e}Object.defineProperty(t,"__esModule",{value:!0}),Object.defineProperty(t,"__unsafeCreateTrustedScriptURL",{enumerable:!0,get:function(){return n}}),("function"==typeof t.default||"object"==typeof t.default&&null!==t.default)&&void 0===t.default.__esModule&&(Object.defineProperty(t.default,"__esModule",{value:!0}),Object.assign(t.default,t),e.exports=t.default)},59386:function(e,t,r){"use strict";Object.defineProperty(t,"__esModule",{value:!0}),Object.defineProperty(t,"default",{enumerable:!0,get:function(){return i}});let n=r(38754),a=n._(r(67294)),o=r(38645);function i(e){function t(t){return a.default.createElement(e,{router:(0,o.useRouter)(),...t})}return t.getInitialProps=e.getInitialProps,t.origGetInitialProps=e.origGetInitialProps,t}("function"==typeof t.default||"object"==typeof t.default&&null!==t.default)&&void 0===t.default.__esModule&&(Object.defineProperty(t.default,"__esModule",{value:!0}),Object.assign(t.default,t),e.exports=t.default)},11767:function(e,t,r){"use strict";Object.defineProperty(t,"__esModule",{value:!0}),Object.defineProperty(t,"default",{enumerable:!0,get:function(){return l}});let n=r(38754),a=n._(r(67294)),o=r(64207);async function i(e){let{Component:t,ctx:r}=e,n=await (0,o.loadGetInitialProps)(t,r);return{pageProps:n}}class l extends a.default.Component{render(){let{Component:e,pageProps:t}=this.props;return a.default.createElement(e,t)}}l.origGetInitialProps=i,l.getInitialProps=i,("function"==typeof t.default||"object"==typeof t.default&&null!==t.default)&&void 0===t.default.__esModule&&(Object.defineProperty(t.default,"__esModule",{value:!0}),Object.assign(t.default,t),e.exports=t.default)},28476:function(e,t,r){"use strict";Object.defineProperty(t,"__esModule",{value:!0}),Object.defineProperty(t,"default",{enumerable:!0,get:function(){return s}});let n=r(38754),a=n._(r(67294)),o=n._(r(78044)),i={400:"Bad Request",404:"This page could not be found",405:"Method Not Allowed",500:"Internal Server Error"};function l(e){let{res:t,err:r}=e,n=t&&t.statusCode?t.statusCode:r?r.statusCode:404;return{statusCode:n}}let u={error:{fontFamily:'system-ui,"Segoe UI",Roboto,Helvetica,Arial,sans-serif,"Apple Color Emoji","Segoe UI Emoji"',height:"100vh",textAlign:"center",display:"flex",flexDirection:"column",alignItems:"center",justifyContent:"center"},desc:{lineHeight:"48px"},h1:{display:"inline-block",margin:"0 20px 0 
0",paddingRight:23,fontSize:24,fontWeight:500,verticalAlign:"top"},h2:{fontSize:14,fontWeight:400,lineHeight:"28px"},wrap:{display:"inline-block"}};class s extends a.default.Component{render(){let{statusCode:e,withDarkMode:t=!0}=this.props,r=this.props.title||i[e]||"An unexpected error has occurred";return a.default.createElement("div",{style:u.error},a.default.createElement(o.default,null,a.default.createElement("title",null,e?e+": "+r:"Application error: a client-side exception has occurred")),a.default.createElement("div",{style:u.desc},a.default.createElement("style",{dangerouslySetInnerHTML:{__html:"body{color:#000;background:#fff;margin:0}.next-error-h1{border-right:1px solid rgba(0,0,0,.3)}"+(t?"@media (prefers-color-scheme:dark){body{color:#fff;background:#000}.next-error-h1{border-right:1px solid rgba(255,255,255,.3)}}":"")}}),e?a.default.createElement("h1",{className:"next-error-h1",style:u.h1},e):null,a.default.createElement("div",{style:u.wrap},a.default.createElement("h2",{style:u.h2},this.props.title||e?r:a.default.createElement(a.default.Fragment,null,"Application error: a client-side exception has occurred (see the browser console for more information)"),"."))))}}s.displayName="ErrorPage",s.getInitialProps=l,s.origGetInitialProps=l,("function"==typeof t.default||"object"==typeof t.default&&null!==t.default)&&void 0===t.default.__esModule&&(Object.defineProperty(t.default,"__esModule",{value:!0}),Object.assign(t.default,t),e.exports=t.default)},64351:function(e,t,r){"use strict";Object.defineProperty(t,"__esModule",{value:!0}),Object.defineProperty(t,"AmpStateContext",{enumerable:!0,get:function(){return o}});let n=r(38754),a=n._(r(67294)),o=a.default.createContext({})},92253:function(e,t){"use strict";function r(e){let{ampFirst:t=!1,hybrid:r=!1,hasQuery:n=!1}=void 0===e?{}:e;return t||r&&n}Object.defineProperty(t,"__esModule",{value:!0}),Object.defineProperty(t,"isInAmpMode",{enumerable:!0,get:function(){return r}})},2496:function(e,t,r){"use strict";var n,a;Object.defineProperty(t,"__esModule",{value:!0}),function(e,t){for(var r in t)Object.defineProperty(e,r,{enumerable:!0,get:t[r]})}(t,{CacheStates:function(){return n},AppRouterContext:function(){return l},LayoutRouterContext:function(){return u},GlobalLayoutRouterContext:function(){return s},TemplateContext:function(){return c}});let o=r(38754),i=o._(r(67294));(a=n||(n={})).LAZY_INITIALIZED="LAZYINITIALIZED",a.DATA_FETCH="DATAFETCH",a.READY="READY";let l=i.default.createContext(null),u=i.default.createContext(null),s=i.default.createContext(null),c=i.default.createContext(null)},34258:function(e,t){"use strict";Object.defineProperty(t,"__esModule",{value:!0}),Object.defineProperty(t,"BloomFilter",{enumerable:!0,get:function(){return r}});class r{static from(e,t){void 0===t&&(t=.01);let n=new r(e.length,t);for(let t of e)n.add(t);return n}export(){let e={numItems:this.numItems,errorRate:this.errorRate,numBits:this.numBits,numHashes:this.numHashes,bitArray:this.bitArray};return e}import(e){this.numItems=e.numItems,this.errorRate=e.errorRate,this.numBits=e.numBits,this.numHashes=e.numHashes,this.bitArray=e.bitArray}add(e){let t=this.getHashValues(e);t.forEach(e=>{this.bitArray[e]=1})}contains(e){let t=this.getHashValues(e);return t.every(e=>this.bitArray[e])}getHashValues(e){let t=[];for(let r=1;r<=this.numHashes;r++){let n=function(e){let t=0;for(let r=0;r>>13,t=Math.imul(t,1540483477)}return t>>>0}(""+e+r)%this.numBits;t.push(n)}return 
t}constructor(e,t){this.numItems=e,this.errorRate=t,this.numBits=Math.ceil(-(e*Math.log(t))/(Math.log(2)*Math.log(2))),this.numHashes=Math.ceil(this.numBits/e*Math.log(2)),this.bitArray=Array(this.numBits).fill(0)}}},9630:function(e,t){"use strict";Object.defineProperty(t,"__esModule",{value:!0}),Object.defineProperty(t,"escapeStringRegexp",{enumerable:!0,get:function(){return a}});let r=/[|\\{}()[\]^$+*?.-]/,n=/[|\\{}()[\]^$+*?.-]/g;function a(e){return r.test(e)?e.replace(n,"\\$&"):e}},19742:function(e,t,r){"use strict";Object.defineProperty(t,"__esModule",{value:!0}),Object.defineProperty(t,"HeadManagerContext",{enumerable:!0,get:function(){return o}});let n=r(38754),a=n._(r(67294)),o=a.default.createContext({})},78044:function(e,t,r){"use strict";Object.defineProperty(t,"__esModule",{value:!0}),function(e,t){for(var r in t)Object.defineProperty(e,r,{enumerable:!0,get:t[r]})}(t,{defaultHead:function(){return c},default:function(){return p}});let n=r(38754),a=r(61757),o=a._(r(67294)),i=n._(r(29274)),l=r(64351),u=r(19742),s=r(92253);function c(e){void 0===e&&(e=!1);let t=[o.default.createElement("meta",{charSet:"utf-8"})];return e||t.push(o.default.createElement("meta",{name:"viewport",content:"width=device-width"})),t}function f(e,t){return"string"==typeof t||"number"==typeof t?e:t.type===o.default.Fragment?e.concat(o.default.Children.toArray(t.props.children).reduce((e,t)=>"string"==typeof t||"number"==typeof t?e:e.concat(t),[])):e.concat(t)}r(60421);let d=["name","httpEquiv","charSet","itemProp"];function h(e,t){let{inAmpMode:r}=t;return e.reduce(f,[]).reverse().concat(c(r).reverse()).filter(function(){let e=new Set,t=new Set,r=new Set,n={};return a=>{let o=!0,i=!1;if(a.key&&"number"!=typeof a.key&&a.key.indexOf("$")>0){i=!0;let t=a.key.slice(a.key.indexOf("$")+1);e.has(t)?o=!1:e.add(t)}switch(a.type){case"title":case"base":t.has(a.type)?o=!1:t.add(a.type);break;case"meta":for(let e=0,t=d.length;e{let n=e.key||t;if(!r&&"link"===e.type&&e.props.href&&["https://fonts.googleapis.com/css","https://use.typekit.net/"].some(t=>e.props.href.startsWith(t))){let t={...e.props||{}};return t["data-href"]=t.href,t.href=void 0,t["data-optimized-fonts"]=!0,o.default.cloneElement(e,t)}return o.default.cloneElement(e,{key:n})})}let p=function(e){let{children:t}=e,r=(0,o.useContext)(l.AmpStateContext),n=(0,o.useContext)(u.HeadManagerContext);return o.default.createElement(i.default,{reduceComponentsToState:h,headManager:n,inAmpMode:(0,s.isInAmpMode)(r)},t)};("function"==typeof t.default||"object"==typeof t.default&&null!==t.default)&&void 0===t.default.__esModule&&(Object.defineProperty(t.default,"__esModule",{value:!0}),Object.assign(t.default,t),e.exports=t.default)},88783:function(e,t,r){"use strict";Object.defineProperty(t,"__esModule",{value:!0}),function(e,t){for(var r in t)Object.defineProperty(e,r,{enumerable:!0,get:t[r]})}(t,{SearchParamsContext:function(){return a},PathnameContext:function(){return o}});let n=r(67294),a=(0,n.createContext)(null),o=(0,n.createContext)(null)},25935:function(e,t){"use strict";function r(e,t){let r;let n=e.split("/");return(t||[]).some(t=>!!n[1]&&n[1].toLowerCase()===t.toLowerCase()&&(r=t,n.splice(1,1),e=n.join("/")||"/",!0)),{pathname:e,detectedLocale:r}}Object.defineProperty(t,"__esModule",{value:!0}),Object.defineProperty(t,"normalizeLocalePath",{enumerable:!0,get:function(){return r}})},52790:function(e,t,r){"use strict";Object.defineProperty(t,"__esModule",{value:!0}),Object.defineProperty(t,"ImageConfigContext",{enumerable:!0,get:function(){return i}});let 
n=r(38754),a=n._(r(67294)),o=r(88996),i=a.default.createContext(o.imageConfigDefault)},88996:function(e,t){"use strict";Object.defineProperty(t,"__esModule",{value:!0}),function(e,t){for(var r in t)Object.defineProperty(e,r,{enumerable:!0,get:t[r]})}(t,{VALID_LOADERS:function(){return r},imageConfigDefault:function(){return n}});let r=["default","imgix","cloudinary","akamai","custom"],n={deviceSizes:[640,750,828,1080,1200,1920,2048,3840],imageSizes:[16,32,48,64,96,128,256,384],path:"/_next/image",loader:"default",loaderFile:"",domains:[],disableStaticImages:!1,minimumCacheTTL:60,formats:["image/webp"],dangerouslyAllowSVG:!1,contentSecurityPolicy:"script-src 'none'; frame-src 'none'; sandbox;",contentDispositionType:"inline",remotePatterns:[],unoptimized:!1}},67235:function(e,t){"use strict";function r(e){return Object.prototype.toString.call(e)}function n(e){if("[object Object]"!==r(e))return!1;let t=Object.getPrototypeOf(e);return null===t||t.hasOwnProperty("isPrototypeOf")}Object.defineProperty(t,"__esModule",{value:!0}),function(e,t){for(var r in t)Object.defineProperty(e,r,{enumerable:!0,get:t[r]})}(t,{getObjectClassLabel:function(){return r},isPlainObject:function(){return n}})},5388:function(e,t){"use strict";Object.defineProperty(t,"__esModule",{value:!0}),Object.defineProperty(t,"NEXT_DYNAMIC_NO_SSR_CODE",{enumerable:!0,get:function(){return r}});let r="NEXT_DYNAMIC_NO_SSR_CODE"},2139:function(e,t){"use strict";function r(){let e=Object.create(null);return{on(t,r){(e[t]||(e[t]=[])).push(r)},off(t,r){e[t]&&e[t].splice(e[t].indexOf(r)>>>0,1)},emit(t){for(var r=arguments.length,n=Array(r>1?r-1:0),a=1;a{e(...n)})}}}Object.defineProperty(t,"__esModule",{value:!0}),Object.defineProperty(t,"default",{enumerable:!0,get:function(){return r}})},38991:function(e,t,r){"use strict";Object.defineProperty(t,"__esModule",{value:!0}),Object.defineProperty(t,"denormalizePagePath",{enumerable:!0,get:function(){return o}});let n=r(21718),a=r(73468);function o(e){let t=(0,a.normalizePathSep)(e);return t.startsWith("/index/")&&!(0,n.isDynamicRoute)(t)?t.slice(6):"/index"!==t?t:"/"}},2131:function(e,t){"use strict";function r(e){return e.startsWith("/")?e:"/"+e}Object.defineProperty(t,"__esModule",{value:!0}),Object.defineProperty(t,"ensureLeadingSlash",{enumerable:!0,get:function(){return r}})},73468:function(e,t){"use strict";function r(e){return e.replace(/\\/g,"/")}Object.defineProperty(t,"__esModule",{value:!0}),Object.defineProperty(t,"normalizePathSep",{enumerable:!0,get:function(){return r}})},9378:function(e,t,r){"use strict";Object.defineProperty(t,"__esModule",{value:!0}),Object.defineProperty(t,"RouterContext",{enumerable:!0,get:function(){return o}});let n=r(38754),a=n._(r(67294)),o=a.default.createContext(null)},53948:function(e,t,r){"use strict";Object.defineProperty(t,"__esModule",{value:!0}),function(e,t){for(var r in t)Object.defineProperty(e,r,{enumerable:!0,get:t[r]})}(t,{adaptForAppRouterInstance:function(){return l},adaptForSearchParams:function(){return u},PathnameContextProviderAdapter:function(){return s}});let n=r(61757),a=n._(r(67294)),o=r(88783),i=r(21718);function l(e){return{back(){e.back()},forward(){e.forward()},refresh(){e.reload()},push(t){e.push(t)},replace(t){e.replace(t)},prefetch(t){e.prefetch(t)}}}function u(e){return e.isReady&&e.query?function(e){let t=new URLSearchParams;for(let[r,n]of Object.entries(e))if(Array.isArray(n))for(let e of n)t.append(r,e);else void 0!==n&&t.append(r,n);return t}(e.query):new URLSearchParams}function 
s(e){let{children:t,router:r,...n}=e,l=(0,a.useRef)(n.isAutoExport),u=(0,a.useMemo)(()=>{let e;let t=l.current;if(t&&(l.current=!1),(0,i.isDynamicRoute)(r.pathname)&&(r.isFallback||t&&!r.isReady))return null;try{e=new URL(r.asPath,"http://f")}catch(e){return"/"}return e.pathname},[r.asPath,r.isFallback,r.isReady,r.pathname]);return a.default.createElement(o.PathnameContext.Provider,{value:u},t)}},60805:function(e,t,r){"use strict";Object.defineProperty(t,"__esModule",{value:!0}),function(e,t){for(var r in t)Object.defineProperty(e,r,{enumerable:!0,get:t[r]})}(t,{default:function(){return V},matchesMiddleware:function(){return N},createKey:function(){return q}});let n=r(38754),a=r(61757),o=r(79861),i=r(4203),l=r(74844),u=a._(r(80676)),s=r(38991),c=r(25935),f=n._(r(2139)),d=r(64207),h=r(4138),p=r(62837);r(72431);let m=r(2754),g=r(45986),y=r(21896);r(53673);let _=r(96583),b=r(13331),v=r(39727),P=r(90684),w=r(98010),S=r(10420),j=r(79423),O=r(61714),E=r(48497),R=r(33820),x=r(33800),C=r(67578),M=r(24540),A=r(49413),L=r(53529),I=r(20075);function T(){return Object.assign(Error("Route Cancelled"),{cancelled:!0})}async function N(e){let t=await Promise.resolve(e.router.pageLoader.getMiddleware());if(!t)return!1;let{pathname:r}=(0,_.parsePath)(e.asPath),n=(0,S.hasBasePath)(r)?(0,P.removeBasePath)(r):r,a=(0,w.addBasePath)((0,b.addLocale)(n,e.locale));return t.some(e=>new RegExp(e.regexp).test(a))}function k(e){let t=(0,d.getLocationOrigin)();return e.startsWith(t)?e.substring(t.length):e}function D(e,t,r){let[n,a]=(0,A.resolveHref)(e,t,!0),o=(0,d.getLocationOrigin)(),i=n.startsWith(o),l=a&&a.startsWith(o);n=k(n),a=a?k(a):a;let u=i?n:(0,w.addBasePath)(n),s=r?k((0,A.resolveHref)(e,r)):a||n;return{url:u,as:l?s:(0,w.addBasePath)(s)}}function B(e,t){let r=(0,o.removeTrailingSlash)((0,s.denormalizePagePath)(e));return"/404"===r||"/_error"===r?e:(t.includes(r)||t.some(t=>{if((0,h.isDynamicRoute)(t)&&(0,g.getRouteRegex)(t).re.test(r))return e=t,!0}),(0,o.removeTrailingSlash)(e))}async function H(e){let t=await N(e);if(!t||!e.fetchData)return null;try{let t=await e.fetchData(),r=await function(e,t,r){let n={basePath:r.router.basePath,i18n:{locales:r.router.locales},trailingSlash:!1},a=t.headers.get("x-nextjs-rewrite"),l=a||t.headers.get("x-nextjs-matched-path"),u=t.headers.get("x-matched-path");if(!u||l||u.includes("__next_data_catchall")||u.includes("/_error")||u.includes("/404")||(l=u),l){if(l.startsWith("/")){let t=(0,p.parseRelativeUrl)(l),u=(0,O.getNextPathnameInfo)(t.pathname,{nextConfig:n,parseData:!0}),s=(0,o.removeTrailingSlash)(u.pathname);return Promise.all([r.router.pageLoader.getPageList(),(0,i.getClientBuildManifest)()]).then(o=>{let[i,{__rewrites:l}]=o,f=(0,b.addLocale)(u.pathname,u.locale);if((0,h.isDynamicRoute)(f)||!a&&i.includes((0,c.normalizeLocalePath)((0,P.removeBasePath)(f),r.router.locales).pathname)){let r=(0,O.getNextPathnameInfo)((0,p.parseRelativeUrl)(e).pathname,{nextConfig:n,parseData:!0});f=(0,w.addBasePath)(r.pathname),t.pathname=f}if(!i.includes(s)){let e=B(s,i);e!==s&&(s=e)}let d=i.includes(s)?s:B((0,c.normalizeLocalePath)((0,P.removeBasePath)(t.pathname),r.router.locales).pathname,i);if((0,h.isDynamicRoute)(d)){let e=(0,m.getRouteMatcher)((0,g.getRouteRegex)(d))(f);Object.assign(t.query,e||{})}return{type:"rewrite",parsedAs:t,resolvedHref:d}})}let t=(0,_.parsePath)(e),u=(0,E.formatNextPathnameInfo)({...(0,O.getNextPathnameInfo)(t.pathname,{nextConfig:n,parseData:!0}),defaultLocale:r.router.defaultLocale,buildId:""});return 
Promise.resolve({type:"redirect-external",destination:""+u+t.query+t.hash})}let s=t.headers.get("x-nextjs-redirect");if(s){if(s.startsWith("/")){let e=(0,_.parsePath)(s),t=(0,E.formatNextPathnameInfo)({...(0,O.getNextPathnameInfo)(e.pathname,{nextConfig:n,parseData:!0}),defaultLocale:r.router.defaultLocale,buildId:""});return Promise.resolve({type:"redirect-internal",newAs:""+t+e.query+e.hash,newUrl:""+t+e.query+e.hash})}return Promise.resolve({type:"redirect-external",destination:s})}return Promise.resolve({type:"next"})}(t.dataHref,t.response,e);return{dataHref:t.dataHref,json:t.json,response:t.response,text:t.text,cacheKey:t.cacheKey,effect:r}}catch(e){return null}}let U=Symbol("SSG_DATA_NOT_FOUND");function F(e){try{return JSON.parse(e)}catch(e){return null}}function W(e){var t;let{dataHref:r,inflightCache:n,isPrefetch:a,hasMiddleware:o,isServerRender:l,parseJSON:u,persistCache:s,isBackground:c,unstable_skipClientCache:f}=e,{href:d}=new URL(r,window.location.href),h=e=>(function e(t,r,n){return fetch(t,{credentials:"same-origin",method:n.method||"GET",headers:Object.assign({},n.headers,{"x-nextjs-data":"1"})}).then(a=>!a.ok&&r>1&&a.status>=500?e(t,r-1,n):a)})(r,l?3:1,{headers:Object.assign({},a?{purpose:"prefetch"}:{},a&&o?{"x-middleware-prefetch":"1"}:{}),method:null!=(t=null==e?void 0:e.method)?t:"GET"}).then(t=>t.ok&&(null==e?void 0:e.method)==="HEAD"?{dataHref:r,response:t,text:"",json:{},cacheKey:d}:t.text().then(e=>{if(!t.ok){if(o&&[301,302,307,308].includes(t.status))return{dataHref:r,response:t,text:e,json:{},cacheKey:d};if(404===t.status){var n;if(null==(n=F(e))?void 0:n.notFound)return{dataHref:r,json:{notFound:U},response:t,text:e,cacheKey:d}}let a=Error("Failed to load static props");throw l||(0,i.markAssetError)(a),a}return{dataHref:r,json:u?F(e):null,response:t,text:e,cacheKey:d}})).then(e=>(s&&"no-cache"!==e.response.headers.get("x-middleware-cache")||delete n[d],e)).catch(e=>{throw f||delete n[d],("Failed to fetch"===e.message||"NetworkError when attempting to fetch resource."===e.message||"Load failed"===e.message)&&(0,i.markAssetError)(e),e});return f&&s?h({}).then(e=>(n[d]=Promise.resolve(e),e)):void 0!==n[d]?n[d]:n[d]=h(c?{method:"HEAD"}:{})}function q(){return Math.random().toString(36).slice(2,10)}function z(e){let{url:t,router:r}=e;if(t===(0,w.addBasePath)((0,b.addLocale)(r.asPath,r.locale)))throw Error("Invariant: attempted to hard navigate to the same URL "+t+" "+location.href);window.location.href=t}let G=e=>{let{route:t,router:r}=e,n=!1,a=r.clc=()=>{n=!0};return()=>{if(n){let e=Error('Abort fetching component for route: "'+t+'"');throw e.cancelled=!0,e}a===r.clc&&(r.clc=null)}};class V{reload(){window.location.reload()}back(){window.history.back()}forward(){window.history.forward()}push(e,t,r){return void 0===r&&(r={}),{url:e,as:t}=D(this,e,t),this.change("pushState",e,t,r)}replace(e,t,r){return void 0===r&&(r={}),{url:e,as:t}=D(this,e,t),this.change("replaceState",e,t,r)}async _bfl(e,t,r,n){{let u=!1,s=!1;for(let c of[e,t])if(c){let t=(0,o.removeTrailingSlash)(new URL(c,"http://n").pathname),f=(0,w.addBasePath)((0,b.addLocale)(t,r||this.locale));if(t!==(0,o.removeTrailingSlash)(new URL(this.asPath,"http://n").pathname)){var a,i,l;for(let e of(u=u||!!(null==(a=this._bfl_s)?void 0:a.contains(t))||!!(null==(i=this._bfl_s)?void 0:i.contains(f)),[t,f])){let t=e.split("/");for(let e=0;!s&&e{})}}}}return!1}async change(e,t,r,n,a){var s,c,f,j,O,E,C,A,I;let k,H;if(!(0,x.isLocalURL)(t))return z({url:t,router:this}),!1;let F=1===n._h;F||n.shallow||await 
this._bfl(r,void 0,n.locale);let W=F||n._shouldResolveHref||(0,_.parsePath)(t).pathname===(0,_.parsePath)(r).pathname,q={...this.state},G=!0!==this.isReady;this.isReady=!0;let X=this.isSsr;if(F||(this.isSsr=!1),F&&this.clc)return!1;let $=q.locale;d.ST&&performance.mark("routeChange");let{shallow:Y=!1,scroll:K=!0}=n,J={shallow:Y};this._inFlightRoute&&this.clc&&(X||V.events.emit("routeChangeError",T(),this._inFlightRoute,J),this.clc(),this.clc=null),r=(0,w.addBasePath)((0,b.addLocale)((0,S.hasBasePath)(r)?(0,P.removeBasePath)(r):r,n.locale,this.defaultLocale));let Q=(0,v.removeLocale)((0,S.hasBasePath)(r)?(0,P.removeBasePath)(r):r,q.locale);this._inFlightRoute=r;let Z=$!==q.locale;if(!F&&this.onlyAHashChange(Q)&&!Z){q.asPath=Q,V.events.emit("hashChangeStart",r,J),this.changeState(e,t,r,{...n,scroll:!1}),K&&this.scrollToHash(Q);try{await this.set(q,this.components[q.route],null)}catch(e){throw(0,u.default)(e)&&e.cancelled&&V.events.emit("routeChangeError",e,Q,J),e}return V.events.emit("hashChangeComplete",r,J),!0}let ee=(0,p.parseRelativeUrl)(t),{pathname:et,query:er}=ee;if(null==(s=this.components[et])?void 0:s.__appRouter)return z({url:r,router:this}),new Promise(()=>{});try{[k,{__rewrites:H}]=await Promise.all([this.pageLoader.getPageList(),(0,i.getClientBuildManifest)(),this.pageLoader.getMiddleware()])}catch(e){return z({url:r,router:this}),!1}this.urlIsNew(Q)||Z||(e="replaceState");let en=r;et=et?(0,o.removeTrailingSlash)((0,P.removeBasePath)(et)):et;let ea=(0,o.removeTrailingSlash)(et),eo=r.startsWith("/")&&(0,p.parseRelativeUrl)(r).pathname,ei=!!(eo&&ea!==eo&&(!(0,h.isDynamicRoute)(ea)||!(0,m.getRouteMatcher)((0,g.getRouteRegex)(ea))(eo))),el=!n.shallow&&await N({asPath:r,locale:q.locale,router:this});if(F&&el&&(W=!1),W&&"/_error"!==et&&(n._shouldResolveHref=!0,ee.pathname=B(et,k),ee.pathname===et||(et=ee.pathname,ee.pathname=(0,w.addBasePath)(et),el||(t=(0,y.formatWithValidation)(ee)))),!(0,x.isLocalURL)(r))return z({url:r,router:this}),!1;en=(0,v.removeLocale)((0,P.removeBasePath)(en),q.locale),ea=(0,o.removeTrailingSlash)(et);let eu=!1;if((0,h.isDynamicRoute)(ea)){let e=(0,p.parseRelativeUrl)(en),n=e.pathname,a=(0,g.getRouteRegex)(ea);eu=(0,m.getRouteMatcher)(a)(n);let o=ea===n,i=o?(0,L.interpolateAs)(ea,n,er):{};if(eu&&(!o||i.result))o?r=(0,y.formatWithValidation)(Object.assign({},e,{pathname:i.result,query:(0,M.omit)(er,i.params)})):Object.assign(er,eu);else{let e=Object.keys(a.groups).filter(e=>!er[e]&&!a.groups[e].optional);if(e.length>0&&!el)throw Error((o?"The provided `href` ("+t+") value is missing query values ("+e.join(", ")+") to be interpolated properly. ":"The provided `as` value ("+n+") is incompatible with the `href` value ("+ea+"). 
")+"Read more: https://nextjs.org/docs/messages/"+(o?"href-interpolation-failed":"incompatible-href-as"))}}F||V.events.emit("routeChangeStart",r,J);let es="/404"===this.pathname||"/_error"===this.pathname;try{let o=await this.getRouteInfo({route:ea,pathname:et,query:er,as:r,resolvedAs:en,routeProps:J,locale:q.locale,isPreview:q.isPreview,hasMiddleware:el,unstable_skipClientCache:n.unstable_skipClientCache,isQueryUpdating:F&&!this.isFallback,isMiddlewareRewrite:ei});if(F||n.shallow||await this._bfl(r,"resolvedAs"in o?o.resolvedAs:void 0,q.locale),"route"in o&&el){ea=et=o.route||ea,J.shallow||(er=Object.assign({},o.query||{},er));let e=(0,S.hasBasePath)(ee.pathname)?(0,P.removeBasePath)(ee.pathname):ee.pathname;if(eu&&et!==e&&Object.keys(eu).forEach(e=>{eu&&er[e]===eu[e]&&delete er[e]}),(0,h.isDynamicRoute)(et)){let e=!J.shallow&&o.resolvedAs?o.resolvedAs:(0,w.addBasePath)((0,b.addLocale)(new URL(r,location.href).pathname,q.locale),!0),t=e;(0,S.hasBasePath)(t)&&(t=(0,P.removeBasePath)(t));let n=(0,g.getRouteRegex)(et),a=(0,m.getRouteMatcher)(n)(new URL(t,location.href).pathname);a&&Object.assign(er,a)}}if("type"in o){if("redirect-internal"===o.type)return this.change(e,o.newUrl,o.newAs,n);return z({url:o.destination,router:this}),new Promise(()=>{})}let i=o.Component;if(i&&i.unstable_scriptLoader){let e=[].concat(i.unstable_scriptLoader());e.forEach(e=>{(0,l.handleClientScriptLoad)(e.props)})}if((o.__N_SSG||o.__N_SSP)&&o.props){if(o.props.pageProps&&o.props.pageProps.__N_REDIRECT){n.locale=!1;let t=o.props.pageProps.__N_REDIRECT;if(t.startsWith("/")&&!1!==o.props.pageProps.__N_REDIRECT_BASE_PATH){let r=(0,p.parseRelativeUrl)(t);r.pathname=B(r.pathname,k);let{url:a,as:o}=D(this,t,t);return this.change(e,a,o,n)}return z({url:t,router:this}),new Promise(()=>{})}if(q.isPreview=!!o.props.__N_PREVIEW,o.props.notFound===U){let e;try{await this.fetchComponent("/404"),e="/404"}catch(t){e="/_error"}if(o=await this.getRouteInfo({route:e,pathname:e,query:er,as:r,resolvedAs:en,routeProps:{shallow:!1},locale:q.locale,isPreview:q.isPreview,isNotFound:!0}),"type"in o)throw Error("Unexpected middleware effect on /404")}}F&&"/_error"===this.pathname&&(null==(c=self.__NEXT_DATA__.props)?void 0:null==(f=c.pageProps)?void 0:f.statusCode)===500&&(null==(j=o.props)?void 0:j.pageProps)&&(o.props.pageProps.statusCode=500);let s=n.shallow&&q.route===(null!=(O=o.route)?O:ea),d=null!=(E=n.scroll)?E:!F&&!s,y=null!=a?a:d?{x:0,y:0}:null,_={...q,route:ea,pathname:et,query:er,asPath:Q,isFallback:!1};if(F&&es){if(o=await this.getRouteInfo({route:this.pathname,pathname:this.pathname,query:er,as:r,resolvedAs:en,routeProps:{shallow:!1},locale:q.locale,isPreview:q.isPreview,isQueryUpdating:F&&!this.isFallback}),"type"in o)throw Error("Unexpected middleware effect on "+this.pathname);"/_error"===this.pathname&&(null==(C=self.__NEXT_DATA__.props)?void 0:null==(A=C.pageProps)?void 0:A.statusCode)===500&&(null==(I=o.props)?void 0:I.pageProps)&&(o.props.pageProps.statusCode=500);try{await this.set(_,o,y)}catch(e){throw(0,u.default)(e)&&e.cancelled&&V.events.emit("routeChangeError",e,Q,J),e}return!0}V.events.emit("beforeHistoryChange",r,J),this.changeState(e,t,r,n);let v=F&&!y&&!G&&!Z&&(0,R.compareRouterStates)(_,this.state);if(!v){try{await this.set(_,o,y)}catch(e){if(e.cancelled)o.error=o.error||e;else throw e}if(o.error)throw F||V.events.emit("routeChangeError",o.error,Q,J),o.error;F||V.events.emit("routeChangeComplete",r,J),d&&/#.+$/.test(r)&&this.scrollToHash(r)}return!0}catch(e){if((0,u.default)(e)&&e.cancelled)return!1;throw 
e}}changeState(e,t,r,n){void 0===n&&(n={}),("pushState"!==e||(0,d.getURL)()!==r)&&(this._shallow=n.shallow,window.history[e]({url:t,as:r,options:n,__N:!0,key:this._key="pushState"!==e?this._key:q()},"",r))}async handleRouteInfoError(e,t,r,n,a,o){if(console.error(e),e.cancelled)throw e;if((0,i.isAssetError)(e)||o)throw V.events.emit("routeChangeError",e,n,a),z({url:n,router:this}),T();try{let n;let{page:a,styleSheets:o}=await this.fetchComponent("/_error"),i={props:n,Component:a,styleSheets:o,err:e,error:e};if(!i.props)try{i.props=await this.getInitialProps(a,{err:e,pathname:t,query:r})}catch(e){console.error("Error in error page `getInitialProps`: ",e),i.props={}}return i}catch(e){return this.handleRouteInfoError((0,u.default)(e)?e:Error(e+""),t,r,n,a,!0)}}async getRouteInfo(e){let{route:t,pathname:r,query:n,as:a,resolvedAs:i,routeProps:l,locale:s,hasMiddleware:f,isPreview:d,unstable_skipClientCache:h,isQueryUpdating:p,isMiddlewareRewrite:m,isNotFound:g}=e,_=t;try{var b,v,w,S;let e=G({route:_,router:this}),t=this.components[_];if(l.shallow&&t&&this.route===_)return t;f&&(t=void 0);let u=!t||"initial"in t?void 0:t,O={dataHref:this.pageLoader.getDataHref({href:(0,y.formatWithValidation)({pathname:r,query:n}),skipInterpolation:!0,asPath:g?"/404":i,locale:s}),hasMiddleware:!0,isServerRender:this.isSsr,parseJSON:!0,inflightCache:p?this.sbc:this.sdc,persistCache:!d,isPrefetch:!1,unstable_skipClientCache:h,isBackground:p},E=p&&!m?null:await H({fetchData:()=>W(O),asPath:g?"/404":i,locale:s,router:this}).catch(e=>{if(p)return null;throw e});if(E&&("/_error"===r||"/404"===r)&&(E.effect=void 0),p&&(E?E.json=self.__NEXT_DATA__.props:E={json:self.__NEXT_DATA__.props}),e(),(null==E?void 0:null==(b=E.effect)?void 0:b.type)==="redirect-internal"||(null==E?void 0:null==(v=E.effect)?void 0:v.type)==="redirect-external")return E.effect;if((null==E?void 0:null==(w=E.effect)?void 0:w.type)==="rewrite"){let e=(0,o.removeTrailingSlash)(E.effect.resolvedHref),a=await this.pageLoader.getPageList();if((!p||a.includes(e))&&(_=e,r=E.effect.resolvedHref,n={...n,...E.effect.parsedAs.query},i=(0,P.removeBasePath)((0,c.normalizeLocalePath)(E.effect.parsedAs.pathname,this.locales).pathname),t=this.components[_],l.shallow&&t&&this.route===_&&!f))return{...t,route:_}}if((0,j.isAPIRoute)(_))return z({url:a,router:this}),new Promise(()=>{});let R=u||await this.fetchComponent(_).then(e=>({Component:e.page,styleSheets:e.styleSheets,__N_SSG:e.mod.__N_SSG,__N_SSP:e.mod.__N_SSP})),x=null==E?void 0:null==(S=E.response)?void 0:S.headers.get("x-middleware-skip"),C=R.__N_SSG||R.__N_SSP;x&&(null==E?void 0:E.dataHref)&&delete this.sdc[E.dataHref];let{props:M,cacheKey:A}=await this._getData(async()=>{if(C){if((null==E?void 0:E.json)&&!x)return{cacheKey:E.cacheKey,props:E.json};let e=(null==E?void 0:E.dataHref)?E.dataHref:this.pageLoader.getDataHref({href:(0,y.formatWithValidation)({pathname:r,query:n}),asPath:i,locale:s}),t=await W({dataHref:e,isServerRender:this.isSsr,parseJSON:!0,inflightCache:x?{}:this.sdc,persistCache:!d,isPrefetch:!1,unstable_skipClientCache:h});return{cacheKey:t.cacheKey,props:t.json||{}}}return{headers:{},props:await this.getInitialProps(R.Component,{pathname:r,query:n,asPath:a,locale:s,locales:this.locales,defaultLocale:this.defaultLocale})}});return R.__N_SSP&&O.dataHref&&A&&delete 
this.sdc[A],this.isPreview||!R.__N_SSG||p||W(Object.assign({},O,{isBackground:!0,persistCache:!1,inflightCache:this.sbc})).catch(()=>{}),M.pageProps=Object.assign({},M.pageProps),R.props=M,R.route=_,R.query=n,R.resolvedAs=i,this.components[_]=R,R}catch(e){return this.handleRouteInfoError((0,u.getProperError)(e),r,n,a,l)}}set(e,t,r){return this.state=e,this.sub(t,this.components["/_app"].Component,r)}beforePopState(e){this._bps=e}onlyAHashChange(e){if(!this.asPath)return!1;let[t,r]=this.asPath.split("#"),[n,a]=e.split("#");return!!a&&t===n&&r===a||t===n&&r!==a}scrollToHash(e){let[,t=""]=e.split("#");if(""===t||"top"===t){(0,I.handleSmoothScroll)(()=>window.scrollTo(0,0));return}let r=decodeURIComponent(t),n=document.getElementById(r);if(n){(0,I.handleSmoothScroll)(()=>n.scrollIntoView());return}let a=document.getElementsByName(r)[0];a&&(0,I.handleSmoothScroll)(()=>a.scrollIntoView())}urlIsNew(e){return this.asPath!==e}async prefetch(e,t,r){if(void 0===t&&(t=e),void 0===r&&(r={}),(0,C.isBot)(window.navigator.userAgent))return;let n=(0,p.parseRelativeUrl)(e),a=n.pathname,{pathname:i,query:l}=n,u=i,s=await this.pageLoader.getPageList(),c=t,f=void 0!==r.locale?r.locale||void 0:this.locale,d=await N({asPath:t,locale:f,router:this});n.pathname=B(n.pathname,s),(0,h.isDynamicRoute)(n.pathname)&&(i=n.pathname,n.pathname=i,Object.assign(l,(0,m.getRouteMatcher)((0,g.getRouteRegex)(n.pathname))((0,_.parsePath)(t).pathname)||{}),d||(e=(0,y.formatWithValidation)(n)));let b=await H({fetchData:()=>W({dataHref:this.pageLoader.getDataHref({href:(0,y.formatWithValidation)({pathname:u,query:l}),skipInterpolation:!0,asPath:c,locale:f}),hasMiddleware:!0,isServerRender:this.isSsr,parseJSON:!0,inflightCache:this.sdc,persistCache:!this.isPreview,isPrefetch:!0}),asPath:t,locale:f,router:this});if((null==b?void 0:b.effect.type)==="rewrite"&&(n.pathname=b.effect.resolvedHref,i=b.effect.resolvedHref,l={...l,...b.effect.parsedAs.query},c=b.effect.parsedAs.pathname,e=(0,y.formatWithValidation)(n)),(null==b?void 0:b.effect.type)==="redirect-external")return;let v=(0,o.removeTrailingSlash)(i);await this._bfl(t,c,r.locale,!0)&&(this.components[a]={__appRouter:!0}),await Promise.all([this.pageLoader._isSsg(v).then(t=>!!t&&W({dataHref:(null==b?void 0:b.json)?null==b?void 0:b.dataHref:this.pageLoader.getDataHref({href:e,asPath:c,locale:f}),isServerRender:!1,parseJSON:!0,inflightCache:this.sdc,persistCache:!this.isPreview,isPrefetch:!0,unstable_skipClientCache:r.unstable_skipClientCache||r.priority&&!0}).then(()=>!1).catch(()=>!1)),this.pageLoader[r.priority?"loadPage":"prefetch"](v)])}async fetchComponent(e){let t=G({route:e,router:this});try{let r=await this.pageLoader.loadPage(e);return t(),r}catch(e){throw t(),e}}_getData(e){let t=!1,r=()=>{t=!0};return this.clc=r,e().then(e=>{if(r===this.clc&&(this.clc=null),t){let e=Error("Loading initial props cancelled");throw e.cancelled=!0,e}return e})}_getFlightData(e){return W({dataHref:e,isServerRender:!0,parseJSON:!1,inflightCache:this.sdc,persistCache:!1,isPrefetch:!1}).then(e=>{let{text:t}=e;return{data:t}})}getInitialProps(e,t){let{Component:r}=this.components["/_app"],n=this._wrapApp(r);return t.AppTree=n,(0,d.loadGetInitialProps)(r,{AppTree:n,Component:e,router:this,ctx:t})}get route(){return this.state.route}get pathname(){return this.state.pathname}get query(){return this.state.query}get asPath(){return this.state.asPath}get locale(){return this.state.locale}get isFallback(){return this.state.isFallback}get isPreview(){return 
this.state.isPreview}constructor(e,t,n,{initialProps:a,pageLoader:i,App:l,wrapApp:u,Component:s,err:c,subscription:f,isFallback:m,locale:g,locales:_,defaultLocale:b,domainLocales:v,isPreview:P}){this.sdc={},this.sbc={},this.isFirstPopStateEvent=!0,this._key=q(),this.onPopState=e=>{let t;let{isFirstPopStateEvent:r}=this;this.isFirstPopStateEvent=!1;let n=e.state;if(!n){let{pathname:e,query:t}=this;this.changeState("replaceState",(0,y.formatWithValidation)({pathname:(0,w.addBasePath)(e),query:t}),(0,d.getURL)());return}if(n.__NA){window.location.reload();return}if(!n.__N||r&&this.locale===n.options.locale&&n.as===this.asPath)return;let{url:a,as:o,options:i,key:l}=n;this._key=l;let{pathname:u}=(0,p.parseRelativeUrl)(a);(!this.isSsr||o!==(0,w.addBasePath)(this.asPath)||u!==(0,w.addBasePath)(this.pathname))&&(!this._bps||this._bps(n))&&this.change("replaceState",a,o,Object.assign({},i,{shallow:i.shallow&&this._shallow,locale:i.locale||this.defaultLocale,_h:0}),t)};let S=(0,o.removeTrailingSlash)(e);this.components={},"/_error"!==e&&(this.components[S]={Component:s,initial:!0,props:a,err:c,__N_SSG:a&&a.__N_SSG,__N_SSP:a&&a.__N_SSP}),this.components["/_app"]={Component:l,styleSheets:[]};{let{BloomFilter:e}=r(34258),t={numItems:2,errorRate:.01,numBits:20,numHashes:7,bitArray:[0,0,0,1,1,0,1,1,0,0,1,1,0,0,0,1,1,0,1,1]},n={numItems:0,errorRate:.01,numBits:0,numHashes:null,bitArray:[]};(null==t?void 0:t.numHashes)&&(this._bfl_s=new e(t.numItems,t.errorRate),this._bfl_s.import(t)),(null==n?void 0:n.numHashes)&&(this._bfl_d=new e(n.numItems,n.errorRate),this._bfl_d.import(n))}this.events=V.events,this.pageLoader=i;let j=(0,h.isDynamicRoute)(e)&&self.__NEXT_DATA__.autoExport;if(this.basePath="",this.sub=f,this.clc=null,this._wrapApp=u,this.isSsr=!0,this.isLocaleDomain=!1,this.isReady=!!(self.__NEXT_DATA__.gssp||self.__NEXT_DATA__.gip||self.__NEXT_DATA__.appGip&&!self.__NEXT_DATA__.gsp||!j&&!self.location.search),this.state={route:S,pathname:e,query:t,asPath:j?e:n,isPreview:!!P,locale:void 0,isFallback:m},this._initialMatchesMiddlewarePromise=Promise.resolve(!1),!n.startsWith("//")){let r={locale:g},a=(0,d.getURL)();this._initialMatchesMiddlewarePromise=N({router:this,locale:g,asPath:a}).then(o=>(r._shouldResolveHref=n!==e,this.changeState("replaceState",o?a:(0,y.formatWithValidation)({pathname:(0,w.addBasePath)(e),query:t}),a,r),o))}window.addEventListener("popstate",this.onPopState)}}V.events=(0,f.default)()},69092:function(e,t,r){"use strict";Object.defineProperty(t,"__esModule",{value:!0}),Object.defineProperty(t,"addLocale",{enumerable:!0,get:function(){return o}});let n=r(46584),a=r(71509);function o(e,t,r,o){if(!t||t===r)return e;let i=e.toLowerCase();return!o&&((0,a.pathHasPrefix)(i,"/api")||(0,a.pathHasPrefix)(i,"/"+t.toLowerCase()))?e:(0,n.addPathPrefix)(e,"/"+t)}},46584:function(e,t,r){"use strict";Object.defineProperty(t,"__esModule",{value:!0}),Object.defineProperty(t,"addPathPrefix",{enumerable:!0,get:function(){return a}});let n=r(96583);function a(e,t){if(!e.startsWith("/")||!t)return e;let{pathname:r,query:a,hash:o}=(0,n.parsePath)(e);return""+t+r+a+o}},48445:function(e,t,r){"use strict";Object.defineProperty(t,"__esModule",{value:!0}),Object.defineProperty(t,"addPathSuffix",{enumerable:!0,get:function(){return a}});let n=r(96583);function a(e,t){if(!e.startsWith("/")||!t)return e;let{pathname:r,query:a,hash:o}=(0,n.parsePath)(e);return""+r+t+a+o}},16213:function(e,t,r){"use strict";Object.defineProperty(t,"__esModule",{value:!0}),function(e,t){for(var r in 
t)Object.defineProperty(e,r,{enumerable:!0,get:t[r]})}(t,{normalizeAppPath:function(){return a},normalizeRscPath:function(){return o}});let n=r(2131);function a(e){return(0,n.ensureLeadingSlash)(e.split("/").reduce((e,t,r,n)=>!t||"("===t[0]&&t.endsWith(")")||"@"===t[0]||("page"===t||"route"===t)&&r===n.length-1?e:e+"/"+t,""))}function o(e,t){return t?e.replace(/\.rsc($|\?)/,"$1"):e}},33820:function(e,t){"use strict";function r(e,t){let r=Object.keys(e);if(r.length!==Object.keys(t).length)return!1;for(let n=r.length;n--;){let a=r[n];if("query"===a){let r=Object.keys(e.query);if(r.length!==Object.keys(t.query).length)return!1;for(let n=r.length;n--;){let a=r[n];if(!t.query.hasOwnProperty(a)||e.query[a]!==t.query[a])return!1}}else if(!t.hasOwnProperty(a)||e[a]!==t[a])return!1}return!0}Object.defineProperty(t,"__esModule",{value:!0}),Object.defineProperty(t,"compareRouterStates",{enumerable:!0,get:function(){return r}})},48497:function(e,t,r){"use strict";Object.defineProperty(t,"__esModule",{value:!0}),Object.defineProperty(t,"formatNextPathnameInfo",{enumerable:!0,get:function(){return l}});let n=r(79861),a=r(46584),o=r(48445),i=r(69092);function l(e){let t=(0,i.addLocale)(e.pathname,e.locale,e.buildId?void 0:e.defaultLocale,e.ignorePrefix);return(e.buildId||!e.trailingSlash)&&(t=(0,n.removeTrailingSlash)(t)),e.buildId&&(t=(0,o.addPathSuffix)((0,a.addPathPrefix)(t,"/_next/data/"+e.buildId),"/"===e.pathname?"index.json":".json")),t=(0,a.addPathPrefix)(t,e.basePath),!e.buildId&&e.trailingSlash?t.endsWith("/")?t:(0,o.addPathSuffix)(t,"/"):(0,n.removeTrailingSlash)(t)}},21896:function(e,t,r){"use strict";Object.defineProperty(t,"__esModule",{value:!0}),function(e,t){for(var r in t)Object.defineProperty(e,r,{enumerable:!0,get:t[r]})}(t,{formatUrl:function(){return i},urlObjectKeys:function(){return l},formatWithValidation:function(){return u}});let n=r(61757),a=n._(r(22946)),o=/https?|ftp|gopher|file/;function i(e){let{auth:t,hostname:r}=e,n=e.protocol||"",i=e.pathname||"",l=e.hash||"",u=e.query||"",s=!1;t=t?encodeURIComponent(t).replace(/%3A/i,":")+"@":"",e.host?s=t+e.host:r&&(s=t+(~r.indexOf(":")?"["+r+"]":r),e.port&&(s+=":"+e.port)),u&&"object"==typeof u&&(u=String(a.urlQueryToSearchParams(u)));let c=e.search||u&&"?"+u||"";return n&&!n.endsWith(":")&&(n+=":"),e.slashes||(!n||o.test(n))&&!1!==s?(s="//"+(s||""),i&&"/"!==i[0]&&(i="/"+i)):s||(s=""),l&&"#"!==l[0]&&(l="#"+l),c&&"?"!==c[0]&&(c="?"+c),""+n+s+(i=i.replace(/[?#]/g,encodeURIComponent))+(c=c.replace("#","%23"))+l}let l=["auth","hash","host","hostname","href","path","pathname","port","protocol","query","search","slashes"];function u(e){return i(e)}},11033:function(e,t){"use strict";function r(e,t){void 0===t&&(t="");let r="/"===e?"/index":/^\/index(\/|$)/.test(e)?"/index"+e:""+e;return r+t}Object.defineProperty(t,"__esModule",{value:!0}),Object.defineProperty(t,"default",{enumerable:!0,get:function(){return r}})},61714:function(e,t,r){"use strict";Object.defineProperty(t,"__esModule",{value:!0}),Object.defineProperty(t,"getNextPathnameInfo",{enumerable:!0,get:function(){return i}});let n=r(25935),a=r(75938),o=r(71509);function i(e,t){var r,i,l;let{basePath:u,i18n:s,trailingSlash:c}=null!=(r=t.nextConfig)?r:{},f={pathname:e,trailingSlash:"/"!==e?e.endsWith("/"):c};if(u&&(0,o.pathHasPrefix)(f.pathname,u)&&(f.pathname=(0,a.removePathPrefix)(f.pathname,u),f.basePath=u),!0===t.parseData&&f.pathname.startsWith("/_next/data/")&&f.pathname.endsWith(".json")){let 
e=f.pathname.replace(/^\/_next\/data\//,"").replace(/\.json$/,"").split("/"),t=e[0];f.pathname="index"!==e[1]?"/"+e.slice(1).join("/"):"/",f.buildId=t}if(t.i18nProvider){let e=t.i18nProvider.analyze(f.pathname);f.locale=e.detectedLocale,f.pathname=null!=(i=e.pathname)?i:f.pathname}else if(s){let e=(0,n.normalizeLocalePath)(f.pathname,s.locales);f.locale=e.detectedLocale,f.pathname=null!=(l=e.pathname)?l:f.pathname}return f}},20075:function(e,t){"use strict";function r(e,t){void 0===t&&(t={});let r=document.documentElement,n=r.style.scrollBehavior;r.style.scrollBehavior="auto",t.dontForceLayout||r.getClientRects(),e(),r.style.scrollBehavior=n}Object.defineProperty(t,"__esModule",{value:!0}),Object.defineProperty(t,"handleSmoothScroll",{enumerable:!0,get:function(){return r}})},21718:function(e,t,r){"use strict";Object.defineProperty(t,"__esModule",{value:!0}),function(e,t){for(var r in t)Object.defineProperty(e,r,{enumerable:!0,get:t[r]})}(t,{getSortedRoutes:function(){return n.getSortedRoutes},isDynamicRoute:function(){return a.isDynamicRoute}});let n=r(50700),a=r(4138)},53529:function(e,t,r){"use strict";Object.defineProperty(t,"__esModule",{value:!0}),Object.defineProperty(t,"interpolateAs",{enumerable:!0,get:function(){return o}});let n=r(2754),a=r(45986);function o(e,t,r){let o="",i=(0,a.getRouteRegex)(e),l=i.groups,u=(t!==e?(0,n.getRouteMatcher)(i)(t):"")||r;o=e;let s=Object.keys(l);return s.every(e=>{let t=u[e]||"",{repeat:r,optional:n}=l[e],a="["+(r?"...":"")+e+"]";return n&&(a=(t?"":"/")+"["+a+"]"),r&&!Array.isArray(t)&&(t=[t]),(n||e in u)&&(o=o.replace(a,r?t.map(e=>encodeURIComponent(e)).join("/"):encodeURIComponent(t))||"/")})||(o=""),{params:s,result:o}}},67578:function(e,t){"use strict";function r(e){return/Googlebot|Mediapartners-Google|AdsBot-Google|googleweblight|Storebot-Google|Google-PageRenderer|Bingbot|BingPreview|Slurp|DuckDuckBot|baiduspider|yandex|sogou|LinkedInBot|bitlybot|tumblr|vkShare|quora link preview|facebookexternalhit|facebookcatalog|Twitterbot|applebot|redditbot|Slackbot|Discordbot|WhatsApp|SkypeUriPreview|ia_archiver/i.test(e)}Object.defineProperty(t,"__esModule",{value:!0}),Object.defineProperty(t,"isBot",{enumerable:!0,get:function(){return r}})},4138:function(e,t){"use strict";Object.defineProperty(t,"__esModule",{value:!0}),Object.defineProperty(t,"isDynamicRoute",{enumerable:!0,get:function(){return n}});let r=/\/\[[^/]+?\](?=\/|$)/;function n(e){return r.test(e)}},33800:function(e,t,r){"use strict";Object.defineProperty(t,"__esModule",{value:!0}),Object.defineProperty(t,"isLocalURL",{enumerable:!0,get:function(){return o}});let n=r(64207),a=r(10420);function o(e){if(!(0,n.isAbsoluteUrl)(e))return!0;try{let t=(0,n.getLocationOrigin)(),r=new URL(e,t);return r.origin===t&&(0,a.hasBasePath)(r.pathname)}catch(e){return!1}}},24540:function(e,t){"use strict";function r(e,t){let r={};return Object.keys(e).forEach(n=>{t.includes(n)||(r[n]=e[n])}),r}Object.defineProperty(t,"__esModule",{value:!0}),Object.defineProperty(t,"omit",{enumerable:!0,get:function(){return r}})},96583:function(e,t){"use strict";function r(e){let t=e.indexOf("#"),r=e.indexOf("?"),n=r>-1&&(t<0||r-1?{pathname:e.substring(0,n?r:t),query:n?e.substring(r,t>-1?t:void 0):"",hash:t>-1?e.slice(t):""}:{pathname:e,query:"",hash:""}}Object.defineProperty(t,"__esModule",{value:!0}),Object.defineProperty(t,"parsePath",{enumerable:!0,get:function(){return r}})},62837:function(e,t,r){"use 
strict";Object.defineProperty(t,"__esModule",{value:!0}),Object.defineProperty(t,"parseRelativeUrl",{enumerable:!0,get:function(){return o}});let n=r(64207),a=r(22946);function o(e,t){let r=new URL((0,n.getLocationOrigin)()),o=t?new URL(t,r):e.startsWith(".")?new URL(window.location.href):r,{pathname:i,searchParams:l,search:u,hash:s,href:c,origin:f}=new URL(e,o);if(f!==r.origin)throw Error("invariant: invalid relative URL, router received "+e);return{pathname:i,query:(0,a.searchParamsToUrlQuery)(l),search:u,hash:s,href:c.slice(r.origin.length)}}},71509:function(e,t,r){"use strict";Object.defineProperty(t,"__esModule",{value:!0}),Object.defineProperty(t,"pathHasPrefix",{enumerable:!0,get:function(){return a}});let n=r(96583);function a(e,t){if("string"!=typeof e)return!1;let{pathname:r}=(0,n.parsePath)(e);return r===t||r.startsWith(t+"/")}},22946:function(e,t){"use strict";function r(e){let t={};return e.forEach((e,r)=>{void 0===t[r]?t[r]=e:Array.isArray(t[r])?t[r].push(e):t[r]=[t[r],e]}),t}function n(e){return"string"!=typeof e&&("number"!=typeof e||isNaN(e))&&"boolean"!=typeof e?"":String(e)}function a(e){let t=new URLSearchParams;return Object.entries(e).forEach(e=>{let[r,a]=e;Array.isArray(a)?a.forEach(e=>t.append(r,n(e))):t.set(r,n(a))}),t}function o(e){for(var t=arguments.length,r=Array(t>1?t-1:0),n=1;n{Array.from(t.keys()).forEach(t=>e.delete(t)),t.forEach((t,r)=>e.append(r,t))}),e}Object.defineProperty(t,"__esModule",{value:!0}),function(e,t){for(var r in t)Object.defineProperty(e,r,{enumerable:!0,get:t[r]})}(t,{searchParamsToUrlQuery:function(){return r},urlQueryToSearchParams:function(){return a},assign:function(){return o}})},75938:function(e,t,r){"use strict";Object.defineProperty(t,"__esModule",{value:!0}),Object.defineProperty(t,"removePathPrefix",{enumerable:!0,get:function(){return a}});let n=r(71509);function a(e,t){if(!(0,n.pathHasPrefix)(e,t))return e;let r=e.slice(t.length);return r.startsWith("/")?r:"/"+r}},79861:function(e,t){"use strict";function r(e){return e.replace(/\/$/,"")||"/"}Object.defineProperty(t,"__esModule",{value:!0}),Object.defineProperty(t,"removeTrailingSlash",{enumerable:!0,get:function(){return r}})},49413:function(e,t,r){"use strict";Object.defineProperty(t,"__esModule",{value:!0}),Object.defineProperty(t,"resolveHref",{enumerable:!0,get:function(){return f}});let n=r(22946),a=r(21896),o=r(24540),i=r(64207),l=r(81583),u=r(33800),s=r(4138),c=r(53529);function f(e,t,r){let f;let d="string"==typeof t?t:(0,a.formatWithValidation)(t),h=d.match(/^[a-zA-Z]{1,}:\/\//),p=h?d.slice(h[0].length):d;if((p.split("?")[0]||"").match(/(\/\/|\\)/)){console.error("Invalid href '"+d+"' passed to next/router in page: '"+e.pathname+"'. 
Repeated forward-slashes (//) or backslashes \\ are not valid in the href.");let t=(0,i.normalizeRepeatedSlashes)(p);d=(h?h[0]:"")+t}if(!(0,u.isLocalURL)(d))return r?[d]:d;try{f=new URL(d.startsWith("#")?e.asPath:e.pathname,"http://n")}catch(e){f=new URL("/","http://n")}try{let e=new URL(d,f);e.pathname=(0,l.normalizePathTrailingSlash)(e.pathname);let t="";if((0,s.isDynamicRoute)(e.pathname)&&e.searchParams&&r){let r=(0,n.searchParamsToUrlQuery)(e.searchParams),{result:i,params:l}=(0,c.interpolateAs)(e.pathname,e.pathname,r);i&&(t=(0,a.formatWithValidation)({pathname:i,hash:e.hash,query:(0,o.omit)(r,l)}))}let i=e.origin===f.origin?e.href.slice(e.origin.length):e.href;return r?[i,t||i]:i}catch(e){return r?[d]:d}}},2754:function(e,t,r){"use strict";Object.defineProperty(t,"__esModule",{value:!0}),Object.defineProperty(t,"getRouteMatcher",{enumerable:!0,get:function(){return a}});let n=r(64207);function a(e){let{re:t,groups:r}=e;return e=>{let a=t.exec(e);if(!a)return!1;let o=e=>{try{return decodeURIComponent(e)}catch(e){throw new n.DecodeError("failed to decode param")}},i={};return Object.keys(r).forEach(e=>{let t=r[e],n=a[t.pos];void 0!==n&&(i[e]=~n.indexOf("/")?n.split("/").map(e=>o(e)):t.repeat?[o(n)]:o(n))}),i}}},45986:function(e,t,r){"use strict";Object.defineProperty(t,"__esModule",{value:!0}),function(e,t){for(var r in t)Object.defineProperty(e,r,{enumerable:!0,get:t[r]})}(t,{getRouteRegex:function(){return u},getNamedRouteRegex:function(){return f},getNamedMiddlewareRegex:function(){return d}});let n=r(92407),a=r(9630),o=r(79861);function i(e){let t=e.startsWith("[")&&e.endsWith("]");t&&(e=e.slice(1,-1));let r=e.startsWith("...");return r&&(e=e.slice(3)),{key:e,repeat:r,optional:t}}function l(e){let t=(0,o.removeTrailingSlash)(e).slice(1).split("/"),r={},l=1;return{parameterizedRoute:t.map(e=>{let t=n.INTERCEPTION_ROUTE_MARKERS.find(t=>e.startsWith(t)),o=e.match(/\[((?:\[.*\])|.+)\]/);if(t&&o){let{key:e,optional:n,repeat:u}=i(o[1]);return r[e]={pos:l++,repeat:u,optional:n},"/"+(0,a.escapeStringRegexp)(t)+"([^/]+?)"}if(!o)return"/"+(0,a.escapeStringRegexp)(e);{let{key:e,repeat:t,optional:n}=i(o[1]);return r[e]={pos:l++,repeat:t,optional:n},t?n?"(?:/(.+?))?":"/(.+?)":"/([^/]+?)"}}).join(""),groups:r}}function u(e){let{parameterizedRoute:t,groups:r}=l(e);return{re:RegExp("^"+t+"(?:/)?$"),groups:r}}function s(e){let t,r,{segment:n,routeKeys:a,keyPrefix:o}=e,l=(t=97,r=1,()=>{let e="";for(let n=0;n122&&(r++,t=97);return e}),{key:u,optional:s,repeat:c}=i(n),f=u.replace(/\W/g,"");o&&(f=""+o+f);let d=!1;return(0===f.length||f.length>30)&&(d=!0),isNaN(parseInt(f.slice(0,1)))||(d=!0),d&&(f=l()),o?a[f]=""+o+u:a[f]=""+u,c?s?"(?:/(?<"+f+">.+?))?":"/(?<"+f+">.+?)":"/(?<"+f+">[^/]+?)"}function c(e,t){let r=(0,o.removeTrailingSlash)(e).slice(1).split("/"),i={};return{namedParameterizedRoute:r.map(e=>{let r=n.INTERCEPTION_ROUTE_MARKERS.some(t=>e.startsWith(t)),o=e.match(/\[((?:\[.*\])|.+)\]/);return r&&o?s({segment:o[1],routeKeys:i,keyPrefix:t?"nxtI":void 0}):o?s({segment:o[1],routeKeys:i,keyPrefix:t?"nxtP":void 0}):"/"+(0,a.escapeStringRegexp)(e)}).join(""),routeKeys:i}}function f(e,t){let r=c(e,t);return{...u(e),namedRegex:"^"+r.namedParameterizedRoute+"(?:/)?$",routeKeys:r.routeKeys}}function d(e,t){let{parameterizedRoute:r}=l(e),{catchAll:n=!0}=t;if("/"===r)return{namedRegex:"^/"+(n?".*":"")+"$"};let{namedParameterizedRoute:a}=c(e,!1);return{namedRegex:"^"+a+(n?"(?:(/.*)?)":"")+"$"}}},50700:function(e,t){"use 
strict";Object.defineProperty(t,"__esModule",{value:!0}),Object.defineProperty(t,"getSortedRoutes",{enumerable:!0,get:function(){return n}});class r{insert(e){this._insert(e.split("/").filter(Boolean),[],!1)}smoosh(){return this._smoosh()}_smoosh(e){void 0===e&&(e="/");let t=[...this.children.keys()].sort();null!==this.slugName&&t.splice(t.indexOf("[]"),1),null!==this.restSlugName&&t.splice(t.indexOf("[...]"),1),null!==this.optionalRestSlugName&&t.splice(t.indexOf("[[...]]"),1);let r=t.map(t=>this.children.get(t)._smoosh(""+e+t+"/")).reduce((e,t)=>[...e,...t],[]);if(null!==this.slugName&&r.push(...this.children.get("[]")._smoosh(e+"["+this.slugName+"]/")),!this.placeholder){let t="/"===e?"/":e.slice(0,-1);if(null!=this.optionalRestSlugName)throw Error('You cannot define a route with the same specificity as a optional catch-all route ("'+t+'" and "'+t+"[[..."+this.optionalRestSlugName+']]").');r.unshift(t)}return null!==this.restSlugName&&r.push(...this.children.get("[...]")._smoosh(e+"[..."+this.restSlugName+"]/")),null!==this.optionalRestSlugName&&r.push(...this.children.get("[[...]]")._smoosh(e+"[[..."+this.optionalRestSlugName+"]]/")),r}_insert(e,t,n){if(0===e.length){this.placeholder=!1;return}if(n)throw Error("Catch-all must be the last part of the URL.");let a=e[0];if(a.startsWith("[")&&a.endsWith("]")){let r=a.slice(1,-1),i=!1;if(r.startsWith("[")&&r.endsWith("]")&&(r=r.slice(1,-1),i=!0),r.startsWith("...")&&(r=r.substring(3),n=!0),r.startsWith("[")||r.endsWith("]"))throw Error("Segment names may not start or end with extra brackets ('"+r+"').");if(r.startsWith("."))throw Error("Segment names may not start with erroneous periods ('"+r+"').");function o(e,r){if(null!==e&&e!==r)throw Error("You cannot use different slug names for the same dynamic path ('"+e+"' !== '"+r+"').");t.forEach(e=>{if(e===r)throw Error('You cannot have the same slug name "'+r+'" repeat within a single dynamic path');if(e.replace(/\W/g,"")===a.replace(/\W/g,""))throw Error('You cannot have the slug names "'+e+'" and "'+r+'" differ only by non-word symbols within a single dynamic path')}),t.push(r)}if(n){if(i){if(null!=this.restSlugName)throw Error('You cannot use both an required and optional catch-all route at the same level ("[...'+this.restSlugName+']" and "'+e[0]+'" ).');o(this.optionalRestSlugName,r),this.optionalRestSlugName=r,a="[[...]]"}else{if(null!=this.optionalRestSlugName)throw Error('You cannot use both an optional and required catch-all route at the same level ("[[...'+this.optionalRestSlugName+']]" and "'+e[0]+'").');o(this.restSlugName,r),this.restSlugName=r,a="[...]"}}else{if(i)throw Error('Optional route parameters are not yet supported ("'+e[0]+'").');o(this.slugName,r),this.slugName=r,a="[]"}}this.children.has(a)||this.children.set(a,new r),this.children.get(a)._insert(e.slice(1),t,n)}constructor(){this.placeholder=!0,this.children=new Map,this.slugName=null,this.restSlugName=null,this.optionalRestSlugName=null}}function n(e){let t=new r;return e.forEach(e=>t.insert(e)),t.smoosh()}},20755:function(e,t){"use strict";let r;Object.defineProperty(t,"__esModule",{value:!0}),function(e,t){for(var r in t)Object.defineProperty(e,r,{enumerable:!0,get:t[r]})}(t,{default:function(){return n},setConfig:function(){return a}});let n=()=>r;function a(e){r=e}("function"==typeof t.default||"object"==typeof t.default&&null!==t.default)&&void 0===t.default.__esModule&&(Object.defineProperty(t.default,"__esModule",{value:!0}),Object.assign(t.default,t),e.exports=t.default)},29274:function(e,t,r){"use 
strict";Object.defineProperty(t,"__esModule",{value:!0}),Object.defineProperty(t,"default",{enumerable:!0,get:function(){return l}});let n=r(61757),a=n._(r(67294)),o=a.useLayoutEffect,i=a.useEffect;function l(e){let{headManager:t,reduceComponentsToState:r}=e;function n(){if(t&&t.mountedInstances){let n=a.Children.toArray(Array.from(t.mountedInstances).filter(Boolean));t.updateHead(r(n,e))}}return o(()=>{var r;return null==t||null==(r=t.mountedInstances)||r.add(e.children),()=>{var r;null==t||null==(r=t.mountedInstances)||r.delete(e.children)}}),o(()=>(t&&(t._pendingUpdate=n),()=>{t&&(t._pendingUpdate=n)})),i(()=>(t&&t._pendingUpdate&&(t._pendingUpdate(),t._pendingUpdate=null),()=>{t&&t._pendingUpdate&&(t._pendingUpdate(),t._pendingUpdate=null)})),null}},64207:function(e,t){"use strict";Object.defineProperty(t,"__esModule",{value:!0}),function(e,t){for(var r in t)Object.defineProperty(e,r,{enumerable:!0,get:t[r]})}(t,{WEB_VITALS:function(){return r},execOnce:function(){return n},isAbsoluteUrl:function(){return o},getLocationOrigin:function(){return i},getURL:function(){return l},getDisplayName:function(){return u},isResSent:function(){return s},normalizeRepeatedSlashes:function(){return c},loadGetInitialProps:function(){return f},SP:function(){return d},ST:function(){return h},DecodeError:function(){return p},NormalizeError:function(){return m},PageNotFoundError:function(){return g},MissingStaticPage:function(){return y},MiddlewareNotFoundError:function(){return _},stringifyError:function(){return b}});let r=["CLS","FCP","FID","INP","LCP","TTFB"];function n(e){let t,r=!1;return function(){for(var n=arguments.length,a=Array(n),o=0;oa.test(e);function i(){let{protocol:e,hostname:t,port:r}=window.location;return e+"//"+t+(r?":"+r:"")}function l(){let{href:e}=window.location,t=i();return e.substring(t.length)}function u(e){return"string"==typeof e?e:e.displayName||e.name||"Unknown"}function s(e){return e.finished||e.headersSent}function c(e){let t=e.split("?"),r=t[0];return r.replace(/\\/g,"/").replace(/\/\/+/g,"/")+(t[1]?"?"+t.slice(1).join("?"):"")}async function f(e,t){let r=t.res||t.ctx&&t.ctx.res;if(!e.getInitialProps)return t.ctx&&t.Component?{pageProps:await f(t.Component,t.ctx)}:{};let n=await e.getInitialProps(t);if(r&&s(r))return n;if(!n){let t='"'+u(e)+'.getInitialProps()" should resolve to an object. 
But found "'+n+'" instead.';throw Error(t)}return n}let d="undefined"!=typeof performance,h=d&&["mark","measure","getEntriesByName"].every(e=>"function"==typeof performance[e]);class p extends Error{}class m extends Error{}class g extends Error{constructor(e){super(),this.code="ENOENT",this.name="PageNotFoundError",this.message="Cannot find module for page: "+e}}class y extends Error{constructor(e,t){super(),this.message="Failed to load static file for page: "+e+" "+t}}class _ extends Error{constructor(){super(),this.code="ENOENT",this.message="Cannot find the middleware module"}}function b(e){return JSON.stringify({message:e.message,stack:e.stack})}},60421:function(e,t){"use strict";Object.defineProperty(t,"__esModule",{value:!0}),Object.defineProperty(t,"warnOnce",{enumerable:!0,get:function(){return r}});let r=e=>{}},78018:function(e){var t,r,n,a,o,i,l,u,s,c,f,d,h,p,m,g,y,_,b,v,P,w,S,j,O,E,R,x,C,M,A,L,I,T,N,k,D,B,H,U,F,W,q,z,G,V;(t={}).d=function(e,r){for(var n in r)t.o(r,n)&&!t.o(e,n)&&Object.defineProperty(e,n,{enumerable:!0,get:r[n]})},t.o=function(e,t){return Object.prototype.hasOwnProperty.call(e,t)},t.r=function(e){"undefined"!=typeof Symbol&&Symbol.toStringTag&&Object.defineProperty(e,Symbol.toStringTag,{value:"Module"}),Object.defineProperty(e,"__esModule",{value:!0})},void 0!==t&&(t.ab="//"),r={},t.r(r),t.d(r,{getCLS:function(){return S},getFCP:function(){return v},getFID:function(){return M},getINP:function(){return W},getLCP:function(){return z},getTTFB:function(){return V},onCLS:function(){return S},onFCP:function(){return v},onFID:function(){return M},onINP:function(){return W},onLCP:function(){return z},onTTFB:function(){return V}}),u=-1,s=function(e){addEventListener("pageshow",function(t){t.persisted&&(u=t.timeStamp,e(t))},!0)},c=function(){return window.performance&&performance.getEntriesByType&&performance.getEntriesByType("navigation")[0]},f=function(){var e=c();return e&&e.activationStart||0},d=function(e,t){var r=c(),n="navigate";return u>=0?n="back-forward-cache":r&&(n=document.prerendering||f()>0?"prerender":r.type.replace(/_/g,"-")),{name:e,value:void 0===t?-1:t,rating:"good",delta:0,entries:[],id:"v3-".concat(Date.now(),"-").concat(Math.floor(8999999999999*Math.random())+1e12),navigationType:n}},h=function(e,t,r){try{if(PerformanceObserver.supportedEntryTypes.includes(e)){var n=new PerformanceObserver(function(e){t(e.getEntries())});return n.observe(Object.assign({type:e,buffered:!0},r||{})),n}}catch(e){}},p=function(e,t){var r=function r(n){"pagehide"!==n.type&&"hidden"!==document.visibilityState||(e(n),t&&(removeEventListener("visibilitychange",r,!0),removeEventListener("pagehide",r,!0)))};addEventListener("visibilitychange",r,!0),addEventListener("pagehide",r,!0)},m=function(e,t,r,n){var a,o;return function(i){var l;t.value>=0&&(i||n)&&((o=t.value-(a||0))||void 0===a)&&(a=t.value,t.delta=o,t.rating=(l=t.value)>r[1]?"poor":l>r[0]?"needs-improvement":"good",e(t))}},g=-1,y=function(){return"hidden"!==document.visibilityState||document.prerendering?1/0:0},_=function(){p(function(e){g=e.timeStamp},!0)},b=function(){return g<0&&(g=y(),_(),s(function(){setTimeout(function(){g=y(),_()},0)})),{get firstHiddenTime(){return g}}},v=function(e,t){t=t||{};var r,n=[1800,3e3],a=b(),o=d("FCP"),i=function(e){e.forEach(function(e){"first-contentful-paint"===e.name&&(u&&u.disconnect(),e.startTime-1&&e(t)},o=d("CLS",0),i=0,l=[],u=function(e){e.forEach(function(e){if(!e.hadRecentInput){var 
t=l[0],r=l[l.length-1];i&&e.startTime-r.startTime<1e3&&e.startTime-t.startTime<5e3?(i+=e.value,l.push(e)):(i=e.value,l=[e]),i>o.value&&(o.value=i,o.entries=l,n())}})},c=h("layout-shift",u);c&&(n=m(a,o,r,t.reportAllChanges),p(function(){u(c.takeRecords()),n(!0)}),s(function(){i=0,w=-1,n=m(a,o=d("CLS",0),r,t.reportAllChanges)}))},j={passive:!0,capture:!0},O=new Date,E=function(e,t){n||(n=t,a=e,o=new Date,C(removeEventListener),R())},R=function(){if(a>=0&&a1e12?new Date:performance.now())-e.timeStamp;"pointerdown"==e.type?(t=function(){E(a,e),n()},r=function(){n()},n=function(){removeEventListener("pointerup",t,j),removeEventListener("pointercancel",r,j)},addEventListener("pointerup",t,j),addEventListener("pointercancel",r,j)):E(a,e)}},C=function(e){["mousedown","keydown","touchstart","pointerdown"].forEach(function(t){return e(t,x,j)})},M=function(e,t){t=t||{};var r,o=[100,300],l=b(),u=d("FID"),c=function(e){e.startTimet.latency){if(r)r.entries.push(e),r.latency=Math.max(r.latency,e.duration);else{var n={id:e.interactionId,latency:e.duration,entries:[e]};U[n.id]=n,H.push(n)}H.sort(function(e,t){return t.latency-e.latency}),H.splice(10).forEach(function(e){delete U[e.id]})}},W=function(e,t){t=t||{};var r=[200,500];k();var n,a=d("INP"),o=function(e){e.forEach(function(e){e.interactionId&&F(e),"first-input"!==e.entryType||H.some(function(t){return t.entries.some(function(t){return e.duration===t.duration&&e.startTime===t.startTime})})||F(e)});var t,r=(t=Math.min(H.length-1,Math.floor(B()/50)),H[t]);r&&r.latency!==a.value&&(a.value=r.latency,a.entries=r.entries,n())},i=h("event",o,{durationThreshold:t.durationThreshold||40});n=m(e,a,r,t.reportAllChanges),i&&(i.observe({type:"first-input",buffered:!0}),p(function(){o(i.takeRecords()),a.value<0&&B()>0&&(a.value=0,a.entries=[]),n(!0)}),s(function(){H=[],D=N(),n=m(e,a=d("INP"),r,t.reportAllChanges)}))},q={},z=function(e,t){t=t||{};var r,n=[2500,4e3],a=b(),o=d("LCP"),i=function(e){var t=e[e.length-1];if(t){var n=t.startTime-f();nperformance.now())return;n.entries=[o],a(!0),s(function(){(a=m(e,n=d("TTFB",0),r,t.reportAllChanges))(!0)})}})},e.exports=r},79423:function(e,t){"use strict";function r(e){return"/api"===e||!!(null==e?void 0:e.startsWith("/api/"))}Object.defineProperty(t,"__esModule",{value:!0}),Object.defineProperty(t,"isAPIRoute",{enumerable:!0,get:function(){return r}})},80676:function(e,t,r){"use strict";Object.defineProperty(t,"__esModule",{value:!0}),function(e,t){for(var r in t)Object.defineProperty(e,r,{enumerable:!0,get:t[r]})}(t,{default:function(){return a},getProperError:function(){return o}});let n=r(67235);function a(e){return"object"==typeof e&&null!==e&&"name"in e&&"message"in e}function o(e){return a(e)?e:Error((0,n.isPlainObject)(e)?JSON.stringify(e):e+"")}},92407:function(e,t,r){"use strict";Object.defineProperty(t,"__esModule",{value:!0}),function(e,t){for(var r in t)Object.defineProperty(e,r,{enumerable:!0,get:t[r]})}(t,{INTERCEPTION_ROUTE_MARKERS:function(){return a},isInterceptionRouteAppPath:function(){return o},extractInterceptionRouteInformation:function(){return i}});let n=r(16213),a=["(..)(..)","(.)","(..)","(...)"];function o(e){return void 0!==e.split("/").find(e=>a.find(t=>e.startsWith(t)))}function i(e){let t,r,o;for(let n of e.split("/"))if(r=a.find(e=>n.startsWith(e))){[t,o]=e.split(r,2);break}if(!t||!r||!o)throw Error(`Invalid interception route: ${e}. 
Must be in the format //(..|...|..)(..)/`);switch(t=(0,n.normalizeAppPath)(t),r){case"(.)":o="/"===t?`/${o}`:t+"/"+o;break;case"(..)":if("/"===t)throw Error(`Invalid interception route: ${e}. Cannot use (..) marker at the root level, use (.) instead.`);o=t.split("/").slice(0,-1).concat(o).join("/");break;case"(...)":o="/"+o;break;case"(..)(..)":let i=t.split("/");if(i.length<=2)throw Error(`Invalid interception route: ${e}. Cannot use (..)(..) marker at the root level or one level up.`);o=i.slice(0,-2).concat(o).join("/");break;default:throw Error("Invariant: unexpected marker")}return{interceptingRoute:t,interceptedRoute:o}}},72431:function(){},38754:function(e,t,r){"use strict";function n(e){return e&&e.__esModule?e:{default:e}}r.r(t),r.d(t,{_:function(){return n},_interop_require_default:function(){return n}})},61757:function(e,t,r){"use strict";function n(e){if("function"!=typeof WeakMap)return null;var t=new WeakMap,r=new WeakMap;return(n=function(e){return e?r:t})(e)}function a(e,t){if(!t&&e&&e.__esModule)return e;if(null===e||"object"!=typeof e&&"function"!=typeof e)return{default:e};var r=n(t);if(r&&r.has(e))return r.get(e);var a={},o=Object.defineProperty&&Object.getOwnPropertyDescriptor;for(var i in e)if("default"!==i&&Object.prototype.hasOwnProperty.call(e,i)){var l=o?Object.getOwnPropertyDescriptor(e,i):null;l&&(l.get||l.set)?Object.defineProperty(a,i,l):a[i]=e[i]}return a.default=e,r&&r.set(e,a),a}r.r(t),r.d(t,{_:function(){return a},_interop_require_wildcard:function(){return a}})}},function(e){e.O(0,[774],function(){return e(e.s=87578)}),_N_E=e.O()}]); \ No newline at end of file diff --git a/spaces/ceckenrode/bigscience-bloom/app.py b/spaces/ceckenrode/bigscience-bloom/app.py deleted file mode 100644 index e2baf29247fdd75903697d71a498e9de137f37bc..0000000000000000000000000000000000000000 --- a/spaces/ceckenrode/bigscience-bloom/app.py +++ /dev/null @@ -1,3 +0,0 @@ -import gradio as gr - -gr.Interface.load("models/bigscience/bloom").launch() \ No newline at end of file diff --git a/spaces/cenji1109285052/img-to-music/style.css b/spaces/cenji1109285052/img-to-music/style.css deleted file mode 100644 index 8f7397fe7f0971636015170df075cd2d070344ec..0000000000000000000000000000000000000000 --- a/spaces/cenji1109285052/img-to-music/style.css +++ /dev/null @@ -1,51 +0,0 @@ -#col-container {max-width: 510px; margin-left: auto; margin-right: auto;} -a {text-decoration-line: underline; font-weight: 600;} -div#music-output .h-full { - min-height: 5rem; -} -.footer { - margin-bottom: 45px; - margin-top: 10px; - text-align: center; - border-bottom: 1px solid #e5e5e5; - } - .footer>p { - font-size: .8rem; - display: inline-block; - padding: 0 10px; - transform: translateY(10px); - background: white; - } - .dark .footer { - border-color: #303030; - } - .dark .footer>p { - background: #0b0f19; - } -.animate-spin { - animation: spin 1s linear infinite; -} -@keyframes spin { - from { - transform: rotate(0deg); - } - to { - transform: rotate(360deg); - } -} -#share-btn-container { - display: flex; padding-left: 0.5rem !important; padding-right: 0.5rem !important; background-color: #000000; justify-content: center; align-items: center; border-radius: 9999px !important; width: 13rem; -} -#share-btn { - all: initial; color: #ffffff;font-weight: 600; cursor:pointer; font-family: 'IBM Plex Sans', sans-serif; margin-left: 0.5rem !important; padding-top: 0.25rem !important; padding-bottom: 0.25rem !important;right:0; -} -#share-btn * { - all: unset; -} -#share-btn-container div:nth-child(-n+2){ - 
width: auto !important; - min-height: 0px !important; -} -#share-btn-container .wrap { - display: none !important; -} \ No newline at end of file diff --git a/spaces/chasemcdo/hf_localai/pkg/stablediffusion/generate_unsupported.go b/spaces/chasemcdo/hf_localai/pkg/stablediffusion/generate_unsupported.go deleted file mode 100644 index 9563bae0623550426796c95db32a8bfdfbb8dc25..0000000000000000000000000000000000000000 --- a/spaces/chasemcdo/hf_localai/pkg/stablediffusion/generate_unsupported.go +++ /dev/null @@ -1,10 +0,0 @@ -//go:build !stablediffusion -// +build !stablediffusion - -package stablediffusion - -import "fmt" - -func GenerateImage(height, width, mode, step, seed int, positive_prompt, negative_prompt, dst, asset_dir string) error { - return fmt.Errorf("This version of LocalAI was built without the stablediffusion tag") -} diff --git a/spaces/chendl/compositional_test/transformers/examples/legacy/token-classification/run.sh b/spaces/chendl/compositional_test/transformers/examples/legacy/token-classification/run.sh deleted file mode 100644 index b5f1e5f83bc7ffa20756edbfff35b8282caef828..0000000000000000000000000000000000000000 --- a/spaces/chendl/compositional_test/transformers/examples/legacy/token-classification/run.sh +++ /dev/null @@ -1,36 +0,0 @@ -## The relevant files are currently on a shared Google -## drive at https://drive.google.com/drive/folders/1kC0I2UGl2ltrluI9NqDjaQJGw5iliw_J -## Monitor for changes and eventually migrate to use the `datasets` library -curl -L 'https://drive.google.com/uc?export=download&id=1Jjhbal535VVz2ap4v4r_rN1UEHTdLK5P' \ -| grep -v "^#" | cut -f 2,3 | tr '\t' ' ' > train.txt.tmp -curl -L 'https://drive.google.com/uc?export=download&id=1ZfRcQThdtAR5PPRjIDtrVP7BtXSCUBbm' \ -| grep -v "^#" | cut -f 2,3 | tr '\t' ' ' > dev.txt.tmp -curl -L 'https://drive.google.com/uc?export=download&id=1u9mb7kNJHWQCWyweMDRMuTFoOHOfeBTH' \ -| grep -v "^#" | cut -f 2,3 | tr '\t' ' ' > test.txt.tmp - -export MAX_LENGTH=128 -export BERT_MODEL=bert-base-multilingual-cased -python3 scripts/preprocess.py train.txt.tmp $BERT_MODEL $MAX_LENGTH > train.txt -python3 scripts/preprocess.py dev.txt.tmp $BERT_MODEL $MAX_LENGTH > dev.txt -python3 scripts/preprocess.py test.txt.tmp $BERT_MODEL $MAX_LENGTH > test.txt -cat train.txt dev.txt test.txt | cut -d " " -f 2 | grep -v "^$"| sort | uniq > labels.txt -export OUTPUT_DIR=germeval-model -export BATCH_SIZE=32 -export NUM_EPOCHS=3 -export SAVE_STEPS=750 -export SEED=1 - -python3 run_ner.py \ ---task_type NER \ ---data_dir . 
\ ---labels ./labels.txt \ ---model_name_or_path $BERT_MODEL \ ---output_dir $OUTPUT_DIR \ ---max_seq_length $MAX_LENGTH \ ---num_train_epochs $NUM_EPOCHS \ ---per_gpu_train_batch_size $BATCH_SIZE \ ---save_steps $SAVE_STEPS \ ---seed $SEED \ ---do_train \ ---do_eval \ ---do_predict diff --git a/spaces/chinmaysharma1020/malware_classification/README.md b/spaces/chinmaysharma1020/malware_classification/README.md deleted file mode 100644 index 87ad0ef9fab14de78ce71d88194a7bc21cb1a41b..0000000000000000000000000000000000000000 --- a/spaces/chinmaysharma1020/malware_classification/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Malware Classification -emoji: ⚡ -colorFrom: yellow -colorTo: red -sdk: gradio -sdk_version: 3.10.1 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/chongjie/PoseDiffusion_MVP/README.md b/spaces/chongjie/PoseDiffusion_MVP/README.md deleted file mode 100644 index 171e471f69bfd6c17e55767541a460a5628f47ff..0000000000000000000000000000000000000000 --- a/spaces/chongjie/PoseDiffusion_MVP/README.md +++ /dev/null @@ -1,37 +0,0 @@ ---- -title: PoseDiffusion_MVP -emoji: 🐠 -colorFrom: blue -colorTo: indigo -sdk: gradio -sdk_version: 3.38.0 -app_file: app.py -pinned: false -license: apache-2.0 ---- - -# An Out-Of-The-Box Version of PoseDiffusion -[![Hugging Face Spaces](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue)](https://huggingface.co/spaces/chongjie/PoseDiffusion_MVP) - -## Introduction -Camera pose estimation is a critical task in computer vision, traditionally relying on classical methods such as keypoint matching, RANSAC, and bundle adjustment. [PoseDiffusion](https://posediffusion.github.io/) introduces a novel approach to this problem by formulating the Structure from Motion (SfM) problem within a probabilistic diffusion framework. - -[![Demo Video](https://posediffusion.github.io/resources/qual_co3d.png)](https://posediffusion.github.io/resources/splash_sample2.mp4 "Demo Video") - -## Usage - -There are several ways you can use or interact with this project: - -* **Direct Use**: If you want to use the space directly without any modifications, simply click [here](https://huggingface.co/spaces/chongjie/PoseDiffusion_MVP). This will take you to the live application where you can interact with it as is. - -* **Duplicate the Space**: If you want to create a copy of this space for your own use or modifications, click [here](https://huggingface.co/spaces/chongjie/co-tracker?duplicate=true). This will create a duplicate of the space under your account, which you can then modify as per your needs. 
- -* **Run with Docker**: If you prefer to run the application locally using Docker, you can do so with the following command: - - ```bash - docker run -it -p 7860:7860 --platform=linux/amd64 \ - registry.hf.space/chongjie-posediffusion-mvp:latest python app.py - ``` - -## Acknowledgments -This repository is based on original [PoseDiffusion](https://posediffusion.github.io/) \ No newline at end of file diff --git a/spaces/chronopt-research/ViTExCo/train_swin_224.py b/spaces/chronopt-research/ViTExCo/train_swin_224.py deleted file mode 100644 index 31e6069068771f8f3184f58bfea1dc4d0f11bdc0..0000000000000000000000000000000000000000 --- a/spaces/chronopt-research/ViTExCo/train_swin_224.py +++ /dev/null @@ -1,593 +0,0 @@ -import os -import sys -import wandb -import argparse -import numpy as np -from tqdm import tqdm -from PIL import Image -from datetime import datetime -from zoneinfo import ZoneInfo -from time import gmtime, strftime -from collections import OrderedDict -import random - -import torch -import torch.nn as nn -import torch.optim as optim -import torch.backends.cudnn as cudnn -from torchvision.transforms import CenterCrop -from torch.utils.data import ConcatDataset, DataLoader -import torchvision.transforms as torch_transforms -from torchvision.utils import make_grid - -from src.losses import ( - ContextualLoss, - ContextualLoss_forward, - Perceptual_loss, - consistent_loss_fn, - discriminator_loss_fn, - generator_loss_fn, - l1_loss_fn, - smoothness_loss_fn, -) -from src.models.CNN.GAN_models import Discriminator_x64_224 -from src.models.CNN.ColorVidNet import GeneralColorVidNet -from src.models.CNN.FrameColor import frame_colorization -from src.models.CNN.NonlocalNet import WeightedAverage_color, NonlocalWeightedAverage, GeneralWarpNet -from src.models.vit.embed import GeneralEmbedModel -from src.data import transforms -from src.data.dataloader import VideosDataset, VideosDataset_ImageNet -from src.utils import CenterPad_threshold -from src.utils import ( - TimeHandler, - RGB2Lab, - ToTensor, - Normalize, - LossHandler, - WarpingLayer, - uncenter_l, - tensor_lab2rgb, - print_num_params, -) -from src.scheduler import PolynomialLR - -parser = argparse.ArgumentParser() -parser.add_argument("--video_data_root_list", type=str, default="dataset") -parser.add_argument("--flow_data_root_list", type=str, default="flow") -parser.add_argument("--mask_data_root_list", type=str, default="mask") -parser.add_argument("--data_root_imagenet", default="imagenet", type=str) -parser.add_argument("--annotation_file_path", default="dataset/annotation.csv", type=str) -parser.add_argument("--imagenet_pairs_file", default="imagenet_pairs.txt", type=str) -parser.add_argument("--gpu_ids", type=str, default="0,1,2,3", help="separate by comma") -parser.add_argument("--workers", type=int, default=0) -parser.add_argument("--batch_size", type=int, default=2) -parser.add_argument("--image_size", type=int, default=[384, 384]) -parser.add_argument("--ic", type=int, default=7) -parser.add_argument("--epoch", type=int, default=40) -parser.add_argument("--resume_epoch", type=int, default=0) -parser.add_argument("--resume", type=bool, default=False) -parser.add_argument("--load_pretrained_model", type=bool, default=False) -parser.add_argument("--lr", type=float, default=1e-4) -parser.add_argument("--beta1", type=float, default=0.5) -parser.add_argument("--lr_step", type=int, default=1) -parser.add_argument("--lr_gamma", type=float, default=0.9) -parser.add_argument("--checkpoint_dir", type=str, default="checkpoints") 
-parser.add_argument("--checkpoint_step", type=int, default=500) -parser.add_argument("--real_reference_probability", type=float, default=0.7) -parser.add_argument("--nonzero_placeholder_probability", type=float, default=0.0) -parser.add_argument("--domain_invariant", type=bool, default=False) -parser.add_argument("--weigth_l1", type=float, default=2.0) -parser.add_argument("--weight_contextual", type=float, default="0.5") -parser.add_argument("--weight_perceptual", type=float, default="0.02") -parser.add_argument("--weight_smoothness", type=float, default="5.0") -parser.add_argument("--weight_gan", type=float, default="0.5") -parser.add_argument("--weight_nonlocal_smoothness", type=float, default="0.0") -parser.add_argument("--weight_nonlocal_consistent", type=float, default="0.0") -parser.add_argument("--weight_consistent", type=float, default="0.05") -parser.add_argument("--luminance_noise", type=float, default="2.0") -parser.add_argument("--permute_data", type=bool, default=True) -parser.add_argument("--contextual_loss_direction", type=str, default="forward", help="forward or backward matching") -parser.add_argument("--batch_accum_size", type=int, default=10) -parser.add_argument("--epoch_train_discriminator", type=int, default=3) -parser.add_argument("--vit_version", type=str, default="vit_tiny_patch16_384") -parser.add_argument("--use_dummy", type=bool, default=False) -parser.add_argument("--use_wandb", type=bool, default=False) -parser.add_argument("--use_feature_transform", type=bool, default=False) -parser.add_argument("--head_out_idx", type=str, default="8,9,10,11") -parser.add_argument("--wandb_token", type=str, default="") -parser.add_argument("--wandb_name", type=str, default="") - - -def load_data(): - transforms_video = [ - CenterCrop(opt.image_size), - RGB2Lab(), - ToTensor(), - Normalize(), - ] - - train_dataset_videos = [ - VideosDataset( - video_data_root=video_data_root, - flow_data_root=flow_data_root, - mask_data_root=mask_data_root, - imagenet_folder=opt.data_root_imagenet, - annotation_file_path=opt.annotation_file_path, - image_size=opt.image_size, - image_transform=transforms.Compose(transforms_video), - real_reference_probability=opt.real_reference_probability, - nonzero_placeholder_probability=opt.nonzero_placeholder_probability, - ) - for video_data_root, flow_data_root, mask_data_root in zip( - opt.video_data_root_list, opt.flow_data_root_list, opt.mask_data_root_list - ) - ] - - transforms_imagenet = [CenterPad_threshold(opt.image_size), RGB2Lab(), ToTensor(), Normalize()] - extra_reference_transform = [ - torch_transforms.RandomHorizontalFlip(0.5), - torch_transforms.RandomResizedCrop(480, (0.98, 1.0), ratio=(0.8, 1.2)), - ] - - train_dataset_imagenet = VideosDataset_ImageNet( - imagenet_data_root=opt.data_root_imagenet, - pairs_file=opt.imagenet_pairs_file, - image_size=opt.image_size, - transforms_imagenet=transforms_imagenet, - distortion_level=4, - brightnessjitter=5, - nonzero_placeholder_probability=opt.nonzero_placeholder_probability, - extra_reference_transform=extra_reference_transform, - real_reference_probability=opt.real_reference_probability, - ) - - # video_training_length = sum([len(dataset) for dataset in train_dataset_videos]) - # imagenet_training_length = len(train_dataset_imagenet) - # dataset_training_length = sum([dataset.real_len for dataset in train_dataset_videos]) + +train_dataset_imagenet.real_len - dataset_combined = ConcatDataset(train_dataset_videos + [train_dataset_imagenet]) - # sampler=[] - # 
seed_sampler=int.from_bytes(os.urandom(4),"big") - # random.seed(seed_sampler) - # for idx in range(opt.epoch): - # sampler = sampler + random.sample(range(dataset_training_length),dataset_training_length) - # wandb.log({"Sampler_Seed":seed_sampler}) - # sampler = sampler+WeightedRandomSampler([1] * video_training_length + [1] * imagenet_training_length, dataset_training_length*opt.epoch) - - # video_training_length = sum([len(dataset) for dataset in train_dataset_videos]) - # dataset_training_length = sum([dataset.real_len for dataset in train_dataset_videos]) - # dataset_combined = ConcatDataset(train_dataset_videos) - # sampler = WeightedRandomSampler([1] * video_training_length, dataset_training_length * opt.epoch) - - data_loader = DataLoader(dataset_combined, batch_size=opt.batch_size, shuffle=True, num_workers=opt.workers) - return data_loader - - -def training_logger(): - if (total_iter % opt.checkpoint_step == 0) or (total_iter == len(data_loader)): - train_loss_dict = {"train/" + str(k): v / loss_handler.count_sample for k, v in loss_handler.loss_dict.items()} - train_loss_dict["train/opt_g_lr_1"] = step_optim_scheduler_g.get_last_lr()[0] - train_loss_dict["train/opt_g_lr_2"] = step_optim_scheduler_g.get_last_lr()[1] - train_loss_dict["train/opt_d_lr"] = step_optim_scheduler_d.get_last_lr()[0] - - alert_text = f"l1_loss: {l1_loss.item()}\npercep_loss: {perceptual_loss.item()}\nctx_loss: {contextual_loss_total.item()}\ncst_loss: {consistent_loss.item()}\nsm_loss: {smoothness_loss.item()}\ntotal: {total_loss.item()}" - - if opt.use_wandb: - wandb.log(train_loss_dict) - wandb.alert(title=f"Progress training #{total_iter}", text=alert_text) - - for idx in range(I_predict_rgb.shape[0]): - concated_I = make_grid( - [(I_predict_rgb[idx] * 255), (I_reference_rgb[idx] * 255), (I_current_rgb[idx] * 255)], nrow=3 - ) - wandb_concated_I = wandb.Image( - concated_I, - caption="[LEFT] Predict, [CENTER] Reference, [RIGHT] Ground truth\n[REF] {}, [FRAME] {}".format( - ref_path[idx], curr_frame_path[idx] - ), - ) - wandb.log({f"example_{idx}": wandb_concated_I}) - - torch.save( - nonlocal_net.state_dict(), - os.path.join(opt.checkpoint_dir, "nonlocal_net_iter.pth"), - ) - torch.save( - colornet.state_dict(), - os.path.join(opt.checkpoint_dir, "colornet_iter.pth"), - ) - torch.save( - discriminator.state_dict(), - os.path.join(opt.checkpoint_dir, "discriminator_iter.pth"), - ) - torch.save(embed_net.state_dict(), os.path.join(opt.checkpoint_dir, "embed_net_iter.pth")) - - loss_handler.reset() - - -def load_params(ckpt_file): - params = torch.load(ckpt_file) - new_params = [] - for key, value in params.items(): - new_params.append((key, value)) - return OrderedDict(new_params) - - -def parse(parser, save=True): - opt = parser.parse_args() - args = vars(opt) - - print("------------------------------ Options -------------------------------") - for k, v in sorted(args.items()): - print("%s: %s" % (str(k), str(v))) - print("-------------------------------- End ---------------------------------") - - if save: - file_name = os.path.join("opt.txt") - with open(file_name, "wt") as opt_file: - opt_file.write(os.path.basename(sys.argv[0]) + " " + strftime("%Y-%m-%d %H:%M:%S", gmtime()) + "\n") - opt_file.write("------------------------------ Options -------------------------------\n") - for k, v in sorted(args.items()): - opt_file.write("%s: %s\n" % (str(k), str(v))) - opt_file.write("-------------------------------- End ---------------------------------\n") - return opt - - -def gpu_setup(): - 
os.environ["CUDA_DEVICE_ORDER"] = "PCI_BUS_ID" - cudnn.benchmark = True - torch.cuda.set_device(opt.gpu_ids[0]) - device = torch.device("cuda") - print("running on GPU", opt.gpu_ids) - return device - - -if __name__ == "__main__": - ############################################## SETUP ############################################### - torch.multiprocessing.set_start_method("spawn", force=True) - # =============== GET PARSER OPTION ================ - opt = parse(parser) - opt.video_data_root_list = opt.video_data_root_list.split(",") - opt.flow_data_root_list = opt.flow_data_root_list.split(",") - opt.mask_data_root_list = opt.mask_data_root_list.split(",") - opt.gpu_ids = list(map(int, opt.gpu_ids.split(","))) - opt.head_out_idx = list(map(int, opt.head_out_idx.split(","))) - n_dim_output = 3 if opt.use_feature_transform else 4 - assert len(opt.head_out_idx) == 4, "Size of head_out_idx must be 4" - - os.makedirs(opt.checkpoint_dir, exist_ok=True) - - # =================== INIT WANDB =================== - if opt.use_wandb: - print("Save images to Wandb") - if opt.wandb_token != "": - try: - wandb.login(key=opt.wandb_token) - except: - pass - wandb.init( - project="video-colorization", - name=f"{opt.wandb_name} {datetime.now(tz=ZoneInfo('Asia/Ho_Chi_Minh')).strftime('%Y/%m/%d_%H-%M-%S')}", - ) - - # ================== SETUP DEVICE ================== - # torch.multiprocessing.set_start_method("spawn", force=True) - # device = gpu_setup() - device = "cuda" if torch.cuda.is_available() else "cpu" - - ############################################ LOAD DATA ############################################# - if opt.use_dummy: - H, W = 224, 224 - I_last_lab = torch.rand(opt.batch_size, 3, H, W) - I_current_lab = torch.rand(opt.batch_size, 3, H, W) - I_reference_lab = torch.rand(opt.batch_size, 3, H, W) - flow_forward = torch.rand(opt.batch_size, 2, H, W) - mask = torch.rand(opt.batch_size, 1, H, W) - placeholder_lab = torch.rand(opt.batch_size, 3, H, W) - self_ref_flag = torch.rand(opt.batch_size, 3, H, W) - data_loader = [ - [I_last_lab, I_current_lab, I_reference_lab, flow_forward, mask, placeholder_lab, self_ref_flag, None, None, None] - for _ in range(1) - ] - else: - data_loader = load_data() - - ########################################## DEFINE NETWORK ########################################## - colornet = GeneralColorVidNet(opt.ic).to(device) - nonlocal_net = GeneralWarpNet(feature_channel=256).to(device) # change to 128 in swin tiny - discriminator = Discriminator_x64_224(ndf=64).to(device) - weighted_layer_color = WeightedAverage_color().to(device) - nonlocal_weighted_layer = NonlocalWeightedAverage().to(device) - warping_layer = WarpingLayer(device=device).to(device) - embed_net = GeneralEmbedModel(pretrained_model="swin-small", device=device).to(device) - - print("-" * 59) - print("| TYPE | Model name | Num params |") - print("-" * 59) - colornet_params = print_num_params(colornet) - nonlocal_net_params = print_num_params(nonlocal_net) - discriminator_params = print_num_params(discriminator) - weighted_layer_color_params = print_num_params(weighted_layer_color) - nonlocal_weighted_layer_params = print_num_params(nonlocal_weighted_layer) - warping_layer_params = print_num_params(warping_layer) - embed_net_params = print_num_params(embed_net) - - print("-" * 59) - print( - f"| TOTAL | | {('{:,}'.format(colornet_params+nonlocal_net_params+discriminator_params+weighted_layer_color_params+nonlocal_weighted_layer_params+warping_layer_params+embed_net_params)).rjust(10)} |" - ) - print("-" * 59) - 
- if opt.use_wandb: - wandb.watch(discriminator, log="all", log_freq=opt.checkpoint_step, idx=0) - wandb.watch(embed_net, log="all", log_freq=opt.checkpoint_step, idx=1) - wandb.watch(colornet, log="all", log_freq=opt.checkpoint_step, idx=2) - wandb.watch(nonlocal_net, log="all", log_freq=opt.checkpoint_step, idx=3) - - # ============= USE PRETRAINED OR NOT ============== - if opt.load_pretrained_model: - # pretrained_path = "/workspace/video_colorization/ckpt_folder_ver_1_vit_small_patch16_384" - nonlocal_net.load_state_dict(load_params(os.path.join(opt.checkpoint_dir, "nonlocal_net_iter.pth"))) - colornet.load_state_dict(load_params(os.path.join(opt.checkpoint_dir, "colornet_iter.pth"))) - discriminator.load_state_dict(load_params(os.path.join(opt.checkpoint_dir, "discriminator_iter.pth"))) - embed_net_params = load_params(os.path.join(opt.checkpoint_dir, "embed_net_iter.pth")) - embed_net.load_state_dict(embed_net_params) - - ###################################### DEFINE LOSS FUNCTIONS ####################################### - perceptual_loss_fn = Perceptual_loss(opt.domain_invariant, opt.weight_perceptual) - contextual_loss = ContextualLoss().to(device) - contextual_forward_loss = ContextualLoss_forward().to(device) - - ######################################## DEFINE OPTIMIZERS ######################################### - optimizer_g = optim.AdamW( - [ - {"params": nonlocal_net.parameters(), "lr": opt.lr}, - {"params": colornet.parameters(), "lr": 2 * opt.lr}, - {"params": embed_net.parameters(), "lr": opt.lr}, - ], - betas=(0.5, 0.999), - eps=1e-5, - amsgrad=True, - ) - - optimizer_d = optim.AdamW( - filter(lambda p: p.requires_grad, discriminator.parameters()), - lr=opt.lr, - betas=(0.5, 0.999), - amsgrad=True, - ) - - step_optim_scheduler_g = PolynomialLR( - optimizer_g, - step_size=opt.lr_step, - iter_warmup=0, - iter_max=len(data_loader) * opt.epoch, - power=0.9, - min_lr=1e-8, - ) - step_optim_scheduler_d = PolynomialLR( - optimizer_d, - step_size=opt.lr_step, - iter_warmup=0, - iter_max=len(data_loader) * opt.epoch, - power=0.9, - min_lr=1e-8, - ) - ########################################## DEFINE OTHERS ########################################### - downsampling_by2 = nn.AvgPool2d(kernel_size=2).to(device) - timer_handler = TimeHandler() - loss_handler = LossHandler() # Handle loss value - ############################################## TRAIN ############################################### - - total_iter = 0 - for epoch_num in range(1, opt.epoch + 1): - # if opt.use_wandb: - # wandb.log({"Current_trainning_epoch": epoch_num}) - with tqdm(total=len(data_loader), position=0, leave=True) as pbar: - for iter, sample in enumerate(data_loader): - timer_handler.compute_time("load_sample") - total_iter += 1 - - # =============== LOAD DATA SAMPLE ================ - ( - I_last_lab, ######## (3, H, W) - I_current_lab, ##### (3, H, W) - I_reference_lab, ### (3, H, W) - flow_forward, ###### (2, H, W) - mask, ############## (1, H, W) - placeholder_lab, ### (3, H, W) - self_ref_flag, ##### (3, H, W) - prev_frame_path, - curr_frame_path, - ref_path, - ) = sample - - I_last_lab = I_last_lab.to(device) - I_current_lab = I_current_lab.to(device) - I_reference_lab = I_reference_lab.to(device) - flow_forward = flow_forward.to(device) - mask = mask.to(device) - placeholder_lab = placeholder_lab.to(device) - self_ref_flag = self_ref_flag.to(device) - - I_last_l = I_last_lab[:, 0:1, :, :] - I_last_ab = I_last_lab[:, 1:3, :, :] - I_current_l = I_current_lab[:, 0:1, :, :] - I_current_ab = I_current_lab[:, 
1:3, :, :] - I_reference_l = I_reference_lab[:, 0:1, :, :] - I_reference_ab = I_reference_lab[:, 1:3, :, :] - I_reference_rgb = tensor_lab2rgb(torch.cat((uncenter_l(I_reference_l), I_reference_ab), dim=1)) - - _load_sample_time = timer_handler.compute_time("load_sample") - timer_handler.compute_time("forward_model") - - features_B = embed_net(I_reference_rgb) - B_feat_0, B_feat_1, B_feat_2, B_feat_3 = features_B - - # ================== COLORIZATION ================== - # The last frame - I_last_ab_predict, I_last_nonlocal_lab_predict = frame_colorization( - IA_l=I_last_l, - IB_lab=I_reference_lab, - IA_last_lab=placeholder_lab, - features_B=features_B, - embed_net=embed_net, - colornet=colornet, - nonlocal_net=nonlocal_net, - luminance_noise=opt.luminance_noise, - ) - I_last_lab_predict = torch.cat((I_last_l, I_last_ab_predict), dim=1) - - # The current frame - I_current_ab_predict, I_current_nonlocal_lab_predict = frame_colorization( - IA_l=I_current_l, - IB_lab=I_reference_lab, - IA_last_lab=I_last_lab_predict, - features_B=features_B, - embed_net=embed_net, - colornet=colornet, - nonlocal_net=nonlocal_net, - luminance_noise=opt.luminance_noise, - ) - I_current_lab_predict = torch.cat((I_last_l, I_current_ab_predict), dim=1) - - # ================ UPDATE GENERATOR ================ - if opt.weight_gan > 0: - optimizer_g.zero_grad() - optimizer_d.zero_grad() - fake_data_lab = torch.cat( - ( - uncenter_l(I_current_l), - I_current_ab_predict, - uncenter_l(I_last_l), - I_last_ab_predict, - ), - dim=1, - ) - real_data_lab = torch.cat( - ( - uncenter_l(I_current_l), - I_current_ab, - uncenter_l(I_last_l), - I_last_ab, - ), - dim=1, - ) - - if opt.permute_data: - batch_index = torch.arange(-1, opt.batch_size - 1, dtype=torch.long) - real_data_lab = real_data_lab[batch_index, ...] - - discriminator_loss = discriminator_loss_fn(real_data_lab, fake_data_lab, discriminator) - discriminator_loss.backward() - optimizer_d.step() - - optimizer_g.zero_grad() - optimizer_d.zero_grad() - - # ================== COMPUTE LOSS ================== - # L1 loss - l1_loss = l1_loss_fn(I_current_ab, I_current_ab_predict) * opt.weigth_l1 - - # Generator_loss. 
TODO: freeze this to train some first epoch - if epoch_num > opt.epoch_train_discriminator: - generator_loss = generator_loss_fn(real_data_lab, fake_data_lab, discriminator, opt.weight_gan, device) - - # Perceptual Loss - I_predict_rgb = tensor_lab2rgb(torch.cat((uncenter_l(I_current_l), I_current_ab_predict), dim=1)) - pred_feat_0, pred_feat_1, pred_feat_2, pred_feat_3 = embed_net(I_predict_rgb) - - I_current_rgb = tensor_lab2rgb(torch.cat((uncenter_l(I_current_l), I_current_ab), dim=1)) - A_feat_0, _, _, A_feat_3 = embed_net(I_current_rgb) - - perceptual_loss = perceptual_loss_fn(A_feat_3, pred_feat_3) - - # Contextual Loss - contextual_style5_1 = torch.mean(contextual_forward_loss(pred_feat_3, B_feat_3.detach())) * 8 - contextual_style4_1 = torch.mean(contextual_forward_loss(pred_feat_2, B_feat_2.detach())) * 4 - contextual_style3_1 = torch.mean(contextual_forward_loss(pred_feat_1, B_feat_1.detach())) * 2 - contextual_style2_1 = torch.mean(contextual_forward_loss(pred_feat_0, B_feat_0.detach())) - # if opt.use_feature_transform: - # contextual_style3_1 = ( - # torch.mean( - # contextual_forward_loss( - # downsampling_by2(pred_feat_1), - # downsampling_by2(), - # ) - # ) - # * 2 - # ) - # else: - # contextual_style3_1 = ( - # torch.mean( - # contextual_forward_loss( - # pred_feat_1, - # B_feat_1.detach(), - # ) - # ) - # * 2 - # ) - - contextual_loss_total = ( - contextual_style5_1 + contextual_style4_1 + contextual_style3_1 + contextual_style2_1 - ) * opt.weight_contextual - - # Consistent Loss - consistent_loss = consistent_loss_fn( - I_current_lab_predict, - I_last_ab_predict, - I_current_nonlocal_lab_predict, - I_last_nonlocal_lab_predict, - flow_forward, - mask, - warping_layer, - weight_consistent=opt.weight_consistent, - weight_nonlocal_consistent=opt.weight_nonlocal_consistent, - device=device, - ) - - # Smoothness loss - smoothness_loss = smoothness_loss_fn( - I_current_l, - I_current_lab, - I_current_ab_predict, - A_feat_0, - weighted_layer_color, - nonlocal_weighted_layer, - weight_smoothness=opt.weight_smoothness, - weight_nonlocal_smoothness=opt.weight_nonlocal_smoothness, - device=device, - ) - - # Total loss - total_loss = l1_loss + perceptual_loss + contextual_loss_total + consistent_loss + smoothness_loss - if epoch_num > opt.epoch_train_discriminator: - total_loss += generator_loss - - # Add loss to loss handler - loss_handler.add_loss(key="total_loss", loss=total_loss.item()) - loss_handler.add_loss(key="l1_loss", loss=l1_loss.item()) - loss_handler.add_loss(key="perceptual_loss", loss=perceptual_loss.item()) - loss_handler.add_loss(key="contextual_loss", loss=contextual_loss_total.item()) - loss_handler.add_loss(key="consistent_loss", loss=consistent_loss.item()) - loss_handler.add_loss(key="smoothness_loss", loss=smoothness_loss.item()) - loss_handler.add_loss(key="discriminator_loss", loss=discriminator_loss.item()) - if epoch_num > opt.epoch_train_discriminator: - loss_handler.add_loss(key="generator_loss", loss=generator_loss.item()) - loss_handler.count_one_sample() - - total_loss.backward() - - optimizer_g.step() - step_optim_scheduler_g.step() - step_optim_scheduler_d.step() - - _forward_model_time = timer_handler.compute_time("forward_model") - - timer_handler.compute_time("training_logger") - training_logger() - _training_logger_time = timer_handler.compute_time("training_logger") - - pbar.set_description( - f"Epochs: {epoch_num}, Load_sample: {_load_sample_time:.3f}s, Forward: {_forward_model_time:.3f}s, log: {_training_logger_time:.3f}s" - ) - 
pbar.update(1) diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/cffi/error.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/cffi/error.py deleted file mode 100644 index 0a27247c32a381ab7cecedd0f985b781619c1ea5..0000000000000000000000000000000000000000 --- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/cffi/error.py +++ /dev/null @@ -1,31 +0,0 @@ - -class FFIError(Exception): - __module__ = 'cffi' - -class CDefError(Exception): - __module__ = 'cffi' - def __str__(self): - try: - current_decl = self.args[1] - filename = current_decl.coord.file - linenum = current_decl.coord.line - prefix = '%s:%d: ' % (filename, linenum) - except (AttributeError, TypeError, IndexError): - prefix = '' - return '%s%s' % (prefix, self.args[0]) - -class VerificationError(Exception): - """ An error raised when verification fails - """ - __module__ = 'cffi' - -class VerificationMissing(Exception): - """ An error raised when incomplete structures are passed into - cdef, but no verification has been done - """ - __module__ = 'cffi' - -class PkgConfigError(Exception): - """ An error raised for missing modules in pkg-config - """ - __module__ = 'cffi' diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/fontTools/misc/timeTools.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/fontTools/misc/timeTools.py deleted file mode 100644 index 175ce81563daf3e9a924701dd2c9d4b71084c286..0000000000000000000000000000000000000000 --- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/fontTools/misc/timeTools.py +++ /dev/null @@ -1,88 +0,0 @@ -"""fontTools.misc.timeTools.py -- tools for working with OpenType timestamps. -""" - -import os -import time -from datetime import datetime, timezone -import calendar - - -epoch_diff = calendar.timegm((1904, 1, 1, 0, 0, 0, 0, 0, 0)) - -DAYNAMES = ["Mon", "Tue", "Wed", "Thu", "Fri", "Sat", "Sun"] -MONTHNAMES = [ - None, - "Jan", - "Feb", - "Mar", - "Apr", - "May", - "Jun", - "Jul", - "Aug", - "Sep", - "Oct", - "Nov", - "Dec", -] - - -def asctime(t=None): - """ - Convert a tuple or struct_time representing a time as returned by gmtime() - or localtime() to a 24-character string of the following form: - - >>> asctime(time.gmtime(0)) - 'Thu Jan 1 00:00:00 1970' - - If t is not provided, the current time as returned by localtime() is used. - Locale information is not used by asctime(). - - This is meant to normalise the output of the built-in time.asctime() across - different platforms and Python versions. - In Python 3.x, the day of the month is right-justified, whereas on Windows - Python 2.7 it is padded with zeros. 
- - See https://github.com/fonttools/fonttools/issues/455 - """ - if t is None: - t = time.localtime() - s = "%s %s %2s %s" % ( - DAYNAMES[t.tm_wday], - MONTHNAMES[t.tm_mon], - t.tm_mday, - time.strftime("%H:%M:%S %Y", t), - ) - return s - - -def timestampToString(value): - return asctime(time.gmtime(max(0, value + epoch_diff))) - - -def timestampFromString(value): - wkday, mnth = value[:7].split() - t = datetime.strptime(value[7:], " %d %H:%M:%S %Y") - t = t.replace(month=MONTHNAMES.index(mnth), tzinfo=timezone.utc) - wkday_idx = DAYNAMES.index(wkday) - assert t.weekday() == wkday_idx, '"' + value + '" has inconsistent weekday' - return int(t.timestamp()) - epoch_diff - - -def timestampNow(): - # https://reproducible-builds.org/specs/source-date-epoch/ - source_date_epoch = os.environ.get("SOURCE_DATE_EPOCH") - if source_date_epoch is not None: - return int(source_date_epoch) - epoch_diff - return int(time.time() - epoch_diff) - - -def timestampSinceEpoch(value): - return int(value - epoch_diff) - - -if __name__ == "__main__": - import sys - import doctest - - sys.exit(doctest.testmod().failed) diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/fontTools/pens/teePen.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/fontTools/pens/teePen.py deleted file mode 100644 index 2828175a7c02c1858db5cbfc45c8686f3187a50e..0000000000000000000000000000000000000000 --- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/fontTools/pens/teePen.py +++ /dev/null @@ -1,54 +0,0 @@ -"""Pen multiplexing drawing to one or more pens.""" -from fontTools.pens.basePen import AbstractPen - - -__all__ = ["TeePen"] - - -class TeePen(AbstractPen): - """Pen multiplexing drawing to one or more pens. - - Use either as TeePen(pen1, pen2, ...) 
or TeePen(iterableOfPens).""" - - def __init__(self, *pens): - if len(pens) == 1: - pens = pens[0] - self.pens = pens - - def moveTo(self, p0): - for pen in self.pens: - pen.moveTo(p0) - - def lineTo(self, p1): - for pen in self.pens: - pen.lineTo(p1) - - def qCurveTo(self, *points): - for pen in self.pens: - pen.qCurveTo(*points) - - def curveTo(self, *points): - for pen in self.pens: - pen.curveTo(*points) - - def closePath(self): - for pen in self.pens: - pen.closePath() - - def endPath(self): - for pen in self.pens: - pen.endPath() - - def addComponent(self, glyphName, transformation): - for pen in self.pens: - pen.addComponent(glyphName, transformation) - - -if __name__ == "__main__": - from fontTools.pens.basePen import _TestPen - - pen = TeePen(_TestPen(), _TestPen()) - pen.moveTo((0, 0)) - pen.lineTo((0, 100)) - pen.curveTo((50, 75), (60, 50), (50, 25)) - pen.closePath() diff --git a/spaces/cifkao/context-probing/highlighted_text/build/static/js/main.1659c043.chunk.js b/spaces/cifkao/context-probing/highlighted_text/build/static/js/main.1659c043.chunk.js deleted file mode 100644 index 64cf7af5bf6de66699547fa742713f398249b011..0000000000000000000000000000000000000000 --- a/spaces/cifkao/context-probing/highlighted_text/build/static/js/main.1659c043.chunk.js +++ /dev/null @@ -1,2 +0,0 @@ -(this.webpackJsonpstreamlit_component_template=this.webpackJsonpstreamlit_component_template||[]).push([[0],{27:function(t,e,a){},28:function(t,e,a){"use strict";a.r(e);var n=a(7),s=a.n(n),r=a(18),c=a.n(r),i=a(4),o=a(0),l=a(1),h=a(2),d=a(3),j=a(16),u=a(6),x=function(t){Object(h.a)(a,t);var e=Object(d.a)(a);function a(){var t;Object(o.a)(this,a);for(var n=arguments.length,s=new Array(n),r=0;r0?"rgba(32, 255, 32, ".concat(a[s],")"):"rgba(255, 32, 32, ".concat(-a[s],")")};return Object(u.jsx)("span",{className:c,style:i,onMouseOver:function(){t.state.isFrozen||t.setState({activeIndex:s}),t.setState({hoverIndex:s})},onClick:r,children:e},s)}))},"text")]})}},{key:"getScores",value:function(){var t=this.props.args.tokens;if(!this.state||null==this.state.activeIndex||this.state.activeIndex<1)return t.map((function(){return 0}));var e=this.props.args.scores,a=this.state.activeIndex-1,n=Math.min(Math.max(0,a+1),e[a].length),s=e[a].slice(0,n);s.reverse();var r=[].concat(Object(i.a)(Array(Math.max(0,a+1-s.length)).fill(0)),Object(i.a)(s.map((function(t){return void 0==t||isNaN(t)?0:t}))));return r=[].concat(Object(i.a)(r),Object(i.a)(Array(t.length-r.length).fill(0)))}}]),a}(j.a),b=Object(j.b)(x);a(27);c.a.render(Object(u.jsx)(s.a.StrictMode,{children:Object(u.jsx)(b,{})}),document.getElementById("root"))}},[[28,1,2]]]); -//# sourceMappingURL=main.1659c043.chunk.js.map \ No newline at end of file diff --git a/spaces/cihyFjudo/fairness-paper-search/Brazil Naturist Festival Part 5 !FREE! Download.md b/spaces/cihyFjudo/fairness-paper-search/Brazil Naturist Festival Part 5 !FREE! Download.md deleted file mode 100644 index 44e5059faed29c2fc3545c3cbbf35efca6317ca1..0000000000000000000000000000000000000000 --- a/spaces/cihyFjudo/fairness-paper-search/Brazil Naturist Festival Part 5 !FREE! Download.md +++ /dev/null @@ -1,6 +0,0 @@ -
    -

    -

    brazil naturist festival part 5 download


    DOWNLOAD ————— https://tinurli.com/2uwhES



    -


    aaccfb2cb3
    -
    -
    \ No newline at end of file diff --git a/spaces/cihyFjudo/fairness-paper-search/Native Instruments West Africa Serial Numberl Tips and Tricks for Using the Spotlight Collection.md b/spaces/cihyFjudo/fairness-paper-search/Native Instruments West Africa Serial Numberl Tips and Tricks for Using the Spotlight Collection.md deleted file mode 100644 index dc4e44cf70449ec2f9cf8d775e9ab82759863371..0000000000000000000000000000000000000000 --- a/spaces/cihyFjudo/fairness-paper-search/Native Instruments West Africa Serial Numberl Tips and Tricks for Using the Spotlight Collection.md +++ /dev/null @@ -1,6 +0,0 @@ -

    Native Instruments West Africa Serial Numberl


    Download > https://tinurli.com/2uwhTr



    -
    - aaccfb2cb3
    -
    -
    -

    diff --git a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/audioread/version.py b/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/audioread/version.py deleted file mode 100644 index 83e990e3183bab02300c237d2cd83b7fab7ccc73..0000000000000000000000000000000000000000 --- a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/audioread/version.py +++ /dev/null @@ -1,18 +0,0 @@ -# This file is part of audioread. -# Copyright 2017, Adrian Sampson. -# -# Permission is hereby granted, free of charge, to any person obtaining -# a copy of this software and associated documentation files (the -# "Software"), to deal in the Software without restriction, including -# without limitation the rights to use, copy, modify, merge, publish, -# distribute, sublicense, and/or sell copies of the Software, and to -# permit persons to whom the Software is furnished to do so, subject to -# the following conditions: -# -# The above copyright notice and this permission notice shall be -# included in all copies or substantial portions of the Software. - -"""Version data for the audioread package.""" - -version = '3.0.0' -short_version = '3.0' diff --git a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/fontTools/cu2qu/__init__.py b/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/fontTools/cu2qu/__init__.py deleted file mode 100644 index 4ae6356e44e1fed074b6283bcb4365bf2b770529..0000000000000000000000000000000000000000 --- a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/fontTools/cu2qu/__init__.py +++ /dev/null @@ -1,15 +0,0 @@ -# Copyright 2016 Google Inc. All Rights Reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -from .cu2qu import * diff --git a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/fontTools/pens/filterPen.py b/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/fontTools/pens/filterPen.py deleted file mode 100644 index 81423109ae6b0caed4b75189a0d87b64cf8d0197..0000000000000000000000000000000000000000 --- a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/fontTools/pens/filterPen.py +++ /dev/null @@ -1,164 +0,0 @@ -from fontTools.pens.basePen import AbstractPen -from fontTools.pens.pointPen import AbstractPointPen -from fontTools.pens.recordingPen import RecordingPen - - -class _PassThruComponentsMixin(object): - def addComponent(self, glyphName, transformation, **kwargs): - self._outPen.addComponent(glyphName, transformation, **kwargs) - - -class FilterPen(_PassThruComponentsMixin, AbstractPen): - - """Base class for pens that apply some transformation to the coordinates - they receive and pass them to another pen. - - You can override any of its methods. The default implementation does - nothing, but passes the commands unmodified to the other pen. 
- - >>> from fontTools.pens.recordingPen import RecordingPen - >>> rec = RecordingPen() - >>> pen = FilterPen(rec) - >>> v = iter(rec.value) - - >>> pen.moveTo((0, 0)) - >>> next(v) - ('moveTo', ((0, 0),)) - - >>> pen.lineTo((1, 1)) - >>> next(v) - ('lineTo', ((1, 1),)) - - >>> pen.curveTo((2, 2), (3, 3), (4, 4)) - >>> next(v) - ('curveTo', ((2, 2), (3, 3), (4, 4))) - - >>> pen.qCurveTo((5, 5), (6, 6), (7, 7), (8, 8)) - >>> next(v) - ('qCurveTo', ((5, 5), (6, 6), (7, 7), (8, 8))) - - >>> pen.closePath() - >>> next(v) - ('closePath', ()) - - >>> pen.moveTo((9, 9)) - >>> next(v) - ('moveTo', ((9, 9),)) - - >>> pen.endPath() - >>> next(v) - ('endPath', ()) - - >>> pen.addComponent('foo', (1, 0, 0, 1, 0, 0)) - >>> next(v) - ('addComponent', ('foo', (1, 0, 0, 1, 0, 0))) - """ - - def __init__(self, outPen): - self._outPen = outPen - self.current_pt = None - - def moveTo(self, pt): - self._outPen.moveTo(pt) - self.current_pt = pt - - def lineTo(self, pt): - self._outPen.lineTo(pt) - self.current_pt = pt - - def curveTo(self, *points): - self._outPen.curveTo(*points) - self.current_pt = points[-1] - - def qCurveTo(self, *points): - self._outPen.qCurveTo(*points) - self.current_pt = points[-1] - - def closePath(self): - self._outPen.closePath() - self.current_pt = None - - def endPath(self): - self._outPen.endPath() - self.current_pt = None - - -class ContourFilterPen(_PassThruComponentsMixin, RecordingPen): - """A "buffered" filter pen that accumulates contour data, passes - it through a ``filterContour`` method when the contour is closed or ended, - and finally draws the result with the output pen. - - Components are passed through unchanged. - """ - - def __init__(self, outPen): - super(ContourFilterPen, self).__init__() - self._outPen = outPen - - def closePath(self): - super(ContourFilterPen, self).closePath() - self._flushContour() - - def endPath(self): - super(ContourFilterPen, self).endPath() - self._flushContour() - - def _flushContour(self): - result = self.filterContour(self.value) - if result is not None: - self.value = result - self.replay(self._outPen) - self.value = [] - - def filterContour(self, contour): - """Subclasses must override this to perform the filtering. - - The contour is a list of pen (operator, operands) tuples. - Operators are strings corresponding to the AbstractPen methods: - "moveTo", "lineTo", "curveTo", "qCurveTo", "closePath" and - "endPath". The operands are the positional arguments that are - passed to each method. - - If the method doesn't return a value (i.e. returns None), it's - assumed that the argument was modified in-place. - Otherwise, the return value is drawn with the output pen. - """ - return # or return contour - - -class FilterPointPen(_PassThruComponentsMixin, AbstractPointPen): - """Baseclass for point pens that apply some transformation to the - coordinates they receive and pass them to another point pen. - - You can override any of its methods. The default implementation does - nothing, but passes the commands unmodified to the other pen. 
- - >>> from fontTools.pens.recordingPen import RecordingPointPen - >>> rec = RecordingPointPen() - >>> pen = FilterPointPen(rec) - >>> v = iter(rec.value) - >>> pen.beginPath(identifier="abc") - >>> next(v) - ('beginPath', (), {'identifier': 'abc'}) - >>> pen.addPoint((1, 2), "line", False) - >>> next(v) - ('addPoint', ((1, 2), 'line', False, None), {}) - >>> pen.addComponent("a", (2, 0, 0, 2, 10, -10), identifier="0001") - >>> next(v) - ('addComponent', ('a', (2, 0, 0, 2, 10, -10)), {'identifier': '0001'}) - >>> pen.endPath() - >>> next(v) - ('endPath', (), {}) - """ - - def __init__(self, outPointPen): - self._outPen = outPointPen - - def beginPath(self, **kwargs): - self._outPen.beginPath(**kwargs) - - def endPath(self): - self._outPen.endPath() - - def addPoint(self, pt, segmentType=None, smooth=False, name=None, **kwargs): - self._outPen.addPoint(pt, segmentType, smooth, name, **kwargs) diff --git a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/fontTools/ttLib/ttGlyphSet.py b/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/fontTools/ttLib/ttGlyphSet.py deleted file mode 100644 index fa7fbd4f23558f6705ee3e819ded518bb7549e36..0000000000000000000000000000000000000000 --- a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/fontTools/ttLib/ttGlyphSet.py +++ /dev/null @@ -1,322 +0,0 @@ -"""GlyphSets returned by a TTFont.""" - -from abc import ABC, abstractmethod -from collections.abc import Mapping -from contextlib import contextmanager -from copy import copy -from types import SimpleNamespace -from fontTools.misc.fixedTools import otRound -from fontTools.misc.loggingTools import deprecateFunction -from fontTools.misc.transform import Transform -from fontTools.pens.transformPen import TransformPen, TransformPointPen - - -class _TTGlyphSet(Mapping): - - """Generic dict-like GlyphSet class that pulls metrics from hmtx and - glyph shape from TrueType or CFF. 
- """ - - def __init__(self, font, location, glyphsMapping): - self.font = font - self.defaultLocationNormalized = ( - {axis.axisTag: 0 for axis in self.font["fvar"].axes} - if "fvar" in self.font - else {} - ) - self.location = location if location is not None else {} - self.rawLocation = {} # VarComponent-only location - self.originalLocation = location if location is not None else {} - self.depth = 0 - self.locationStack = [] - self.rawLocationStack = [] - self.glyphsMapping = glyphsMapping - self.hMetrics = font["hmtx"].metrics - self.vMetrics = getattr(font.get("vmtx"), "metrics", None) - self.hvarTable = None - if location: - from fontTools.varLib.varStore import VarStoreInstancer - - self.hvarTable = getattr(font.get("HVAR"), "table", None) - if self.hvarTable is not None: - self.hvarInstancer = VarStoreInstancer( - self.hvarTable.VarStore, font["fvar"].axes, location - ) - # TODO VVAR, VORG - - @contextmanager - def pushLocation(self, location, reset: bool): - self.locationStack.append(self.location) - self.rawLocationStack.append(self.rawLocation) - if reset: - self.location = self.originalLocation.copy() - self.rawLocation = self.defaultLocationNormalized.copy() - else: - self.location = self.location.copy() - self.rawLocation = {} - self.location.update(location) - self.rawLocation.update(location) - - try: - yield None - finally: - self.location = self.locationStack.pop() - self.rawLocation = self.rawLocationStack.pop() - - @contextmanager - def pushDepth(self): - try: - depth = self.depth - self.depth += 1 - yield depth - finally: - self.depth -= 1 - - def __contains__(self, glyphName): - return glyphName in self.glyphsMapping - - def __iter__(self): - return iter(self.glyphsMapping.keys()) - - def __len__(self): - return len(self.glyphsMapping) - - @deprecateFunction( - "use 'glyphName in glyphSet' instead", category=DeprecationWarning - ) - def has_key(self, glyphName): - return glyphName in self.glyphsMapping - - -class _TTGlyphSetGlyf(_TTGlyphSet): - def __init__(self, font, location): - self.glyfTable = font["glyf"] - super().__init__(font, location, self.glyfTable) - self.gvarTable = font.get("gvar") - - def __getitem__(self, glyphName): - return _TTGlyphGlyf(self, glyphName) - - -class _TTGlyphSetCFF(_TTGlyphSet): - def __init__(self, font, location): - tableTag = "CFF2" if "CFF2" in font else "CFF " - self.charStrings = list(font[tableTag].cff.values())[0].CharStrings - super().__init__(font, location, self.charStrings) - self.blender = None - if location: - from fontTools.varLib.varStore import VarStoreInstancer - - varStore = getattr(self.charStrings, "varStore", None) - if varStore is not None: - instancer = VarStoreInstancer( - varStore.otVarStore, font["fvar"].axes, location - ) - self.blender = instancer.interpolateFromDeltas - - def __getitem__(self, glyphName): - return _TTGlyphCFF(self, glyphName) - - -class _TTGlyph(ABC): - - """Glyph object that supports the Pen protocol, meaning that it has - .draw() and .drawPoints() methods that take a pen object as their only - argument. Additionally there are 'width' and 'lsb' attributes, read from - the 'hmtx' table. - - If the font contains a 'vmtx' table, there will also be 'height' and 'tsb' - attributes. 
- """ - - def __init__(self, glyphSet, glyphName): - self.glyphSet = glyphSet - self.name = glyphName - self.width, self.lsb = glyphSet.hMetrics[glyphName] - if glyphSet.vMetrics is not None: - self.height, self.tsb = glyphSet.vMetrics[glyphName] - else: - self.height, self.tsb = None, None - if glyphSet.location and glyphSet.hvarTable is not None: - varidx = ( - glyphSet.font.getGlyphID(glyphName) - if glyphSet.hvarTable.AdvWidthMap is None - else glyphSet.hvarTable.AdvWidthMap.mapping[glyphName] - ) - self.width += glyphSet.hvarInstancer[varidx] - # TODO: VVAR/VORG - - @abstractmethod - def draw(self, pen): - """Draw the glyph onto ``pen``. See fontTools.pens.basePen for details - how that works. - """ - raise NotImplementedError - - def drawPoints(self, pen): - """Draw the glyph onto ``pen``. See fontTools.pens.pointPen for details - how that works. - """ - from fontTools.pens.pointPen import SegmentToPointPen - - self.draw(SegmentToPointPen(pen)) - - -class _TTGlyphGlyf(_TTGlyph): - def draw(self, pen): - """Draw the glyph onto ``pen``. See fontTools.pens.basePen for details - how that works. - """ - glyph, offset = self._getGlyphAndOffset() - - with self.glyphSet.pushDepth() as depth: - - if depth: - offset = 0 # Offset should only apply at top-level - - if glyph.isVarComposite(): - self._drawVarComposite(glyph, pen, False) - return - - glyph.draw(pen, self.glyphSet.glyfTable, offset) - - def drawPoints(self, pen): - """Draw the glyph onto ``pen``. See fontTools.pens.pointPen for details - how that works. - """ - glyph, offset = self._getGlyphAndOffset() - - with self.glyphSet.pushDepth() as depth: - - if depth: - offset = 0 # Offset should only apply at top-level - - if glyph.isVarComposite(): - self._drawVarComposite(glyph, pen, True) - return - - glyph.drawPoints(pen, self.glyphSet.glyfTable, offset) - - def _drawVarComposite(self, glyph, pen, isPointPen): - - from fontTools.ttLib.tables._g_l_y_f import ( - VarComponentFlags, - VAR_COMPONENT_TRANSFORM_MAPPING, - ) - - for comp in glyph.components: - - with self.glyphSet.pushLocation( - comp.location, comp.flags & VarComponentFlags.RESET_UNSPECIFIED_AXES - ): - try: - pen.addVarComponent( - comp.glyphName, comp.transform, self.glyphSet.rawLocation - ) - except AttributeError: - t = comp.transform.toTransform() - if isPointPen: - tPen = TransformPointPen(pen, t) - self.glyphSet[comp.glyphName].drawPoints(tPen) - else: - tPen = TransformPen(pen, t) - self.glyphSet[comp.glyphName].draw(tPen) - - def _getGlyphAndOffset(self): - if self.glyphSet.location and self.glyphSet.gvarTable is not None: - glyph = self._getGlyphInstance() - else: - glyph = self.glyphSet.glyfTable[self.name] - - offset = self.lsb - glyph.xMin if hasattr(glyph, "xMin") else 0 - return glyph, offset - - def _getGlyphInstance(self): - from fontTools.varLib.iup import iup_delta - from fontTools.ttLib.tables._g_l_y_f import GlyphCoordinates - from fontTools.varLib.models import supportScalar - - glyphSet = self.glyphSet - glyfTable = glyphSet.glyfTable - variations = glyphSet.gvarTable.variations[self.name] - hMetrics = glyphSet.hMetrics - vMetrics = glyphSet.vMetrics - coordinates, _ = glyfTable._getCoordinatesAndControls( - self.name, hMetrics, vMetrics - ) - origCoords, endPts = None, None - for var in variations: - scalar = supportScalar(glyphSet.location, var.axes) - if not scalar: - continue - delta = var.coordinates - if None in delta: - if origCoords is None: - origCoords, control = glyfTable._getCoordinatesAndControls( - self.name, hMetrics, vMetrics - ) - 
endPts = ( - control[1] if control[0] >= 1 else list(range(len(control[1]))) - ) - delta = iup_delta(delta, origCoords, endPts) - coordinates += GlyphCoordinates(delta) * scalar - - glyph = copy(glyfTable[self.name]) # Shallow copy - width, lsb, height, tsb = _setCoordinates(glyph, coordinates, glyfTable) - self.lsb = lsb - self.tsb = tsb - if glyphSet.hvarTable is None: - # no HVAR: let's set metrics from the phantom points - self.width = width - self.height = height - return glyph - - -class _TTGlyphCFF(_TTGlyph): - def draw(self, pen): - """Draw the glyph onto ``pen``. See fontTools.pens.basePen for details - how that works. - """ - self.glyphSet.charStrings[self.name].draw(pen, self.glyphSet.blender) - - -def _setCoordinates(glyph, coord, glyfTable): - # Handle phantom points for (left, right, top, bottom) positions. - assert len(coord) >= 4 - leftSideX = coord[-4][0] - rightSideX = coord[-3][0] - topSideY = coord[-2][1] - bottomSideY = coord[-1][1] - - for _ in range(4): - del coord[-1] - - if glyph.isComposite(): - assert len(coord) == len(glyph.components) - glyph.components = [copy(comp) for comp in glyph.components] # Shallow copy - for p, comp in zip(coord, glyph.components): - if hasattr(comp, "x"): - comp.x, comp.y = p - elif glyph.isVarComposite(): - glyph.components = [copy(comp) for comp in glyph.components] # Shallow copy - for comp in glyph.components: - coord = comp.setCoordinates(coord) - assert not coord - elif glyph.numberOfContours == 0: - assert len(coord) == 0 - else: - assert len(coord) == len(glyph.coordinates) - glyph.coordinates = coord - - glyph.recalcBounds(glyfTable) - - horizontalAdvanceWidth = otRound(rightSideX - leftSideX) - verticalAdvanceWidth = otRound(topSideY - bottomSideY) - leftSideBearing = otRound(glyph.xMin - leftSideX) - topSideBearing = otRound(topSideY - glyph.yMax) - return ( - horizontalAdvanceWidth, - leftSideBearing, - verticalAdvanceWidth, - topSideBearing, - ) diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/aarch64/me_cmp_init_aarch64.c b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/aarch64/me_cmp_init_aarch64.c deleted file mode 100644 index 1e0f1cf4f11f974c09770ec482296013bef8f201..0000000000000000000000000000000000000000 --- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/aarch64/me_cmp_init_aarch64.c +++ /dev/null @@ -1,134 +0,0 @@ -/* - * This file is part of FFmpeg. - * - * FFmpeg is free software; you can redistribute it and/or - * modify it under the terms of the GNU Lesser General Public - * License as published by the Free Software Foundation; either - * version 2.1 of the License, or (at your option) any later version. - * - * FFmpeg is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * Lesser General Public License for more details. 
- * - * You should have received a copy of the GNU Lesser General Public - * License along with FFmpeg; if not, write to the Free Software - * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA - */ - -#include - -#include "config.h" -#include "libavutil/attributes.h" -#include "libavutil/aarch64/cpu.h" -#include "libavcodec/mpegvideo.h" - -int ff_pix_abs16_neon(MpegEncContext *s, const uint8_t *blk1, const uint8_t *blk2, - ptrdiff_t stride, int h); -int ff_pix_abs16_xy2_neon(MpegEncContext *s, const uint8_t *blk1, const uint8_t *blk2, - ptrdiff_t stride, int h); -int ff_pix_abs16_x2_neon(MpegEncContext *v, const uint8_t *pix1, const uint8_t *pix2, - ptrdiff_t stride, int h); -int ff_pix_abs16_y2_neon(MpegEncContext *v, const uint8_t *pix1, const uint8_t *pix2, - ptrdiff_t stride, int h); -int ff_pix_abs8_neon(MpegEncContext *s, const uint8_t *blk1, const uint8_t *blk2, - ptrdiff_t stride, int h); - -int sse16_neon(MpegEncContext *v, const uint8_t *pix1, const uint8_t *pix2, - ptrdiff_t stride, int h); -int sse8_neon(MpegEncContext *v, const uint8_t *pix1, const uint8_t *pix2, - ptrdiff_t stride, int h); -int sse4_neon(MpegEncContext *v, const uint8_t *pix1, const uint8_t *pix2, - ptrdiff_t stride, int h); - -int vsad16_neon(MpegEncContext *c, const uint8_t *s1, const uint8_t *s2, - ptrdiff_t stride, int h); -int vsad_intra16_neon(MpegEncContext *c, const uint8_t *s, const uint8_t *dummy, - ptrdiff_t stride, int h) ; -int vsad_intra8_neon(MpegEncContext *c, const uint8_t *s, const uint8_t *dummy, - ptrdiff_t stride, int h) ; -int vsse16_neon(MpegEncContext *c, const uint8_t *s1, const uint8_t *s2, - ptrdiff_t stride, int h); -int vsse_intra16_neon(MpegEncContext *c, const uint8_t *s, const uint8_t *dummy, - ptrdiff_t stride, int h); -int nsse16_neon(int multiplier, const uint8_t *s, const uint8_t *s2, - ptrdiff_t stride, int h); -int nsse16_neon_wrapper(MpegEncContext *c, const uint8_t *s1, const uint8_t *s2, - ptrdiff_t stride, int h); -int pix_median_abs16_neon(MpegEncContext *v, const uint8_t *pix1, const uint8_t *pix2, - ptrdiff_t stride, int h); -int pix_median_abs8_neon(MpegEncContext *v, const uint8_t *pix1, const uint8_t *pix2, - ptrdiff_t stride, int h); -int ff_pix_abs8_x2_neon(MpegEncContext *v, const uint8_t *pix1, const uint8_t *pix2, - ptrdiff_t stride, int h); -int ff_pix_abs8_y2_neon(MpegEncContext *v, const uint8_t *pix1, const uint8_t *pix2, - ptrdiff_t stride, int h); -int ff_pix_abs8_xy2_neon(MpegEncContext *v, const uint8_t *pix1, const uint8_t *pix2, - ptrdiff_t stride, int h); - -int nsse8_neon(int multiplier, const uint8_t *s, const uint8_t *s2, - ptrdiff_t stride, int h); -int nsse8_neon_wrapper(MpegEncContext *c, const uint8_t *s1, const uint8_t *s2, - ptrdiff_t stride, int h); - -int vsse8_neon(MpegEncContext *c, const uint8_t *s1, const uint8_t *s2, - ptrdiff_t stride, int h); - -int vsse_intra8_neon(MpegEncContext *c, const uint8_t *s, const uint8_t *dummy, - ptrdiff_t stride, int h); - -av_cold void ff_me_cmp_init_aarch64(MECmpContext *c, AVCodecContext *avctx) -{ - int cpu_flags = av_get_cpu_flags(); - - if (have_neon(cpu_flags)) { - c->pix_abs[0][0] = ff_pix_abs16_neon; - c->pix_abs[0][1] = ff_pix_abs16_x2_neon; - c->pix_abs[0][2] = ff_pix_abs16_y2_neon; - c->pix_abs[0][3] = ff_pix_abs16_xy2_neon; - c->pix_abs[1][0] = ff_pix_abs8_neon; - c->pix_abs[1][1] = ff_pix_abs8_x2_neon; - c->pix_abs[1][2] = ff_pix_abs8_y2_neon; - c->pix_abs[1][3] = ff_pix_abs8_xy2_neon; - - c->sad[0] = ff_pix_abs16_neon; - c->sad[1] = ff_pix_abs8_neon; - 
c->sse[0] = sse16_neon; - c->sse[1] = sse8_neon; - c->sse[2] = sse4_neon; - - c->vsad[0] = vsad16_neon; - c->vsad[4] = vsad_intra16_neon; - c->vsad[5] = vsad_intra8_neon; - - c->vsse[0] = vsse16_neon; - c->vsse[1] = vsse8_neon; - - c->vsse[4] = vsse_intra16_neon; - c->vsse[5] = vsse_intra8_neon; - - c->nsse[0] = nsse16_neon_wrapper; - c->nsse[1] = nsse8_neon_wrapper; - - c->median_sad[0] = pix_median_abs16_neon; - c->median_sad[1] = pix_median_abs8_neon; - } -} - -int nsse16_neon_wrapper(MpegEncContext *c, const uint8_t *s1, const uint8_t *s2, - ptrdiff_t stride, int h) -{ - if (c) - return nsse16_neon(c->avctx->nsse_weight, s1, s2, stride, h); - else - return nsse16_neon(8, s1, s2, stride, h); -} - -int nsse8_neon_wrapper(MpegEncContext *c, const uint8_t *s1, const uint8_t *s2, - ptrdiff_t stride, int h) -{ - if (c) - return nsse8_neon(c->avctx->nsse_weight, s1, s2, stride, h); - else - return nsse8_neon(8, s1, s2, stride, h); -} diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/dss_sp.c b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/dss_sp.c deleted file mode 100644 index 9337371bce60646a09bcb32ccb1a99421c1e6877..0000000000000000000000000000000000000000 --- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/dss_sp.c +++ /dev/null @@ -1,783 +0,0 @@ -/* - * Digital Speech Standard - Standard Play mode (DSS SP) audio decoder. - * Copyright (C) 2014 Oleksij Rempel - * - * This file is part of FFmpeg. - * - * FFmpeg is free software; you can redistribute it and/or - * modify it under the terms of the GNU Lesser General Public - * License as published by the Free Software Foundation; either - * version 2.1 of the License, or (at your option) any later version. - * - * FFmpeg is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * Lesser General Public License for more details. - * - * You should have received a copy of the GNU Lesser General Public - * License along with FFmpeg; if not, write to the Free Software - * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA - */ - -#include "libavutil/channel_layout.h" -#include "libavutil/common.h" -#include "libavutil/mem_internal.h" - -#include "avcodec.h" -#include "codec_internal.h" -#include "decode.h" -#include "get_bits.h" - -#define SUBFRAMES 4 -#define PULSE_MAX 8 - -#define DSS_SP_FRAME_SIZE 42 -#define DSS_SP_SAMPLE_COUNT (66 * SUBFRAMES) -#define DSS_SP_FORMULA(a, b, c) ((int)((((a) * (1 << 15)) + (b) * (unsigned)(c)) + 0x4000) >> 15) - -typedef struct DssSpSubframe { - int16_t gain; - int32_t combined_pulse_pos; - int16_t pulse_pos[7]; - int16_t pulse_val[7]; -} DssSpSubframe; - -typedef struct DssSpFrame { - int16_t filter_idx[14]; - int16_t sf_adaptive_gain[SUBFRAMES]; - int16_t pitch_lag[SUBFRAMES]; - struct DssSpSubframe sf[SUBFRAMES]; -} DssSpFrame; - -typedef struct DssSpContext { - AVCodecContext *avctx; - int32_t excitation[288 + 6]; - int32_t history[187]; - DssSpFrame fparam; - int32_t working_buffer[SUBFRAMES][72]; - int32_t audio_buf[15]; - int32_t err_buf1[15]; - int32_t lpc_filter[14]; - int32_t filter[15]; - int32_t vector_buf[72]; - int noise_state; - int32_t err_buf2[15]; - - int pulse_dec_mode; - - DECLARE_ALIGNED(16, uint8_t, bits)[DSS_SP_FRAME_SIZE + - AV_INPUT_BUFFER_PADDING_SIZE]; -} DssSpContext; - -/* - * Used for the coding/decoding of the pulse positions for the MP-MLQ codebook. 
- */ -static const uint32_t dss_sp_combinatorial_table[PULSE_MAX][72] = { - { 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0 }, - { 0, 1, 2, 3, 4, 5, - 6, 7, 8, 9, 10, 11, - 12, 13, 14, 15, 16, 17, - 18, 19, 20, 21, 22, 23, - 24, 25, 26, 27, 28, 29, - 30, 31, 32, 33, 34, 35, - 36, 37, 38, 39, 40, 41, - 42, 43, 44, 45, 46, 47, - 48, 49, 50, 51, 52, 53, - 54, 55, 56, 57, 58, 59, - 60, 61, 62, 63, 64, 65, - 66, 67, 68, 69, 70, 71 }, - { 0, 0, 1, 3, 6, 10, - 15, 21, 28, 36, 45, 55, - 66, 78, 91, 105, 120, 136, - 153, 171, 190, 210, 231, 253, - 276, 300, 325, 351, 378, 406, - 435, 465, 496, 528, 561, 595, - 630, 666, 703, 741, 780, 820, - 861, 903, 946, 990, 1035, 1081, - 1128, 1176, 1225, 1275, 1326, 1378, - 1431, 1485, 1540, 1596, 1653, 1711, - 1770, 1830, 1891, 1953, 2016, 2080, - 2145, 2211, 2278, 2346, 2415, 2485 }, - { 0, 0, 0, 1, 4, 10, - 20, 35, 56, 84, 120, 165, - 220, 286, 364, 455, 560, 680, - 816, 969, 1140, 1330, 1540, 1771, - 2024, 2300, 2600, 2925, 3276, 3654, - 4060, 4495, 4960, 5456, 5984, 6545, - 7140, 7770, 8436, 9139, 9880, 10660, - 11480, 12341, 13244, 14190, 15180, 16215, - 17296, 18424, 19600, 20825, 22100, 23426, - 24804, 26235, 27720, 29260, 30856, 32509, - 34220, 35990, 37820, 39711, 41664, 43680, - 45760, 47905, 50116, 52394, 54740, 57155 }, - { 0, 0, 0, 0, 1, 5, - 15, 35, 70, 126, 210, 330, - 495, 715, 1001, 1365, 1820, 2380, - 3060, 3876, 4845, 5985, 7315, 8855, - 10626, 12650, 14950, 17550, 20475, 23751, - 27405, 31465, 35960, 40920, 46376, 52360, - 58905, 66045, 73815, 82251, 91390, 101270, - 111930, 123410, 135751, 148995, 163185, 178365, - 194580, 211876, 230300, 249900, 270725, 292825, - 316251, 341055, 367290, 395010, 424270, 455126, - 487635, 521855, 557845, 595665, 635376, 677040, - 720720, 766480, 814385, 864501, 916895, 971635 }, - { 0, 0, 0, 0, 0, 1, - 6, 21, 56, 126, 252, 462, - 792, 1287, 2002, 3003, 4368, 6188, - 8568, 11628, 15504, 20349, 26334, 33649, - 42504, 53130, 65780, 80730, 98280, 118755, - 142506, 169911, 201376, 237336, 278256, 324632, - 376992, 435897, 501942, 575757, 658008, 749398, - 850668, 962598, 1086008, 1221759, 1370754, 1533939, - 1712304, 1906884, 2118760, 2349060, 2598960, 2869685, - 3162510, 3478761, 3819816, 4187106, 4582116, 5006386, - 5461512, 5949147, 6471002, 7028847, 7624512, 8259888, - 8936928, 9657648, 10424128, 11238513, 12103014, 13019909 }, - { 0, 0, 0, 0, 0, 0, - 1, 7, 28, 84, 210, 462, - 924, 1716, 3003, 5005, 8008, 12376, - 18564, 27132, 38760, 54264, 74613, 100947, - 134596, 177100, 230230, 296010, 376740, 475020, - 593775, 736281, 906192, 1107568, 1344904, 1623160, - 1947792, 2324784, 2760681, 3262623, 3838380, 4496388, - 5245786, 6096454, 7059052, 8145060, 9366819, 10737573, - 12271512, 13983816, 15890700, 18009460, 20358520, 22957480, - 25827165, 28989675, 32468436, 36288252, 40475358, 45057474, - 50063860, 55525372, 61474519, 67945521, 74974368, 82598880, - 90858768, 99795696, 109453344, 119877472, 131115985, 143218999 }, - { 0, 0, 0, 0, 0, 0, - 0, 1, 8, 36, 120, 330, - 792, 1716, 3432, 6435, 11440, 19448, - 31824, 50388, 77520, 116280, 170544, 245157, - 346104, 480700, 657800, 888030, 1184040, 1560780, - 2035800, 2629575, 3365856, 4272048, 5379616, 6724520, - 8347680, 10295472, 12620256, 15380937, 18643560, 22481940, - 26978328, 32224114, 38320568, 45379620, 53524680, 62891499, - 73629072, 85900584, 99884400, 115775100, 
133784560, 154143080, - 177100560, 202927725, 231917400, 264385836, 300674088, 341149446, - 386206920, 436270780, 491796152, 553270671, 621216192, 696190560, - 778789440, 869648208, 969443904, 1078897248, 1198774720, 1329890705 }, -}; - -static const int16_t dss_sp_filter_cb[14][32] = { - { -32653, -32587, -32515, -32438, -32341, -32216, -32062, -31881, - -31665, -31398, -31080, -30724, -30299, -29813, -29248, -28572, - -27674, -26439, -24666, -22466, -19433, -16133, -12218, -7783, - -2834, 1819, 6544, 11260, 16050, 20220, 24774, 28120 }, - - { -27503, -24509, -20644, -17496, -14187, -11277, -8420, -5595, - -3013, -624, 1711, 3880, 5844, 7774, 9739, 11592, - 13364, 14903, 16426, 17900, 19250, 20586, 21803, 23006, - 24142, 25249, 26275, 27300, 28359, 29249, 30118, 31183 }, - - { -27827, -24208, -20943, -17781, -14843, -11848, -9066, -6297, - -3660, -910, 1918, 5025, 8223, 11649, 15086, 18423, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0 }, - - { -17128, -11975, -8270, -5123, -2296, 183, 2503, 4707, - 6798, 8945, 11045, 13239, 15528, 18248, 21115, 24785, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0 }, - - { -21557, -17280, -14286, -11644, -9268, -7087, -4939, -2831, - -691, 1407, 3536, 5721, 8125, 10677, 13721, 17731, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0 }, - - { -15030, -10377, -7034, -4327, -1900, 364, 2458, 4450, - 6422, 8374, 10374, 12486, 14714, 16997, 19626, 22954, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0 }, - - { -16155, -12362, -9698, -7460, -5258, -3359, -1547, 219, - 1916, 3599, 5299, 6994, 8963, 11226, 13716, 16982, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0 }, - - { -14742, -9848, -6921, -4648, -2769, -1065, 499, 2083, - 3633, 5219, 6857, 8580, 10410, 12672, 15561, 20101, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0 }, - - { -11099, -7014, -3855, -1025, 1680, 4544, 7807, 11932, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0 }, - - { -9060, -4570, -1381, 1419, 4034, 6728, 9865, 14149, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0 }, - - { -12450, -7985, -4596, -1734, 961, 3629, 6865, 11142, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0 }, - - { -11831, -7404, -4010, -1096, 1606, 4291, 7386, 11482, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0 }, - - { -13404, -9250, -5995, -3312, -890, 1594, 4464, 8198, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0 }, - - { -11239, -7220, -4040, -1406, 971, 3321, 6006, 9697, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0 }, -}; - -static const uint16_t dss_sp_fixed_cb_gain[64] = { - 0, 4, 8, 13, 17, 22, 26, 31, - 35, 40, 44, 48, 53, 58, 63, 69, - 76, 83, 91, 99, 109, 119, 130, 142, - 155, 170, 185, 203, 222, 242, 265, 290, - 317, 346, 378, 414, 452, 494, 540, 591, - 646, 706, 771, 843, 922, 1007, 1101, 1204, - 1316, 1438, 1572, 1719, 1879, 2053, 2244, 2453, - 2682, 2931, 3204, 3502, 3828, 4184, 4574, 5000, -}; - -static const int16_t dss_sp_pulse_val[8] = { - -31182, -22273, -13364, -4455, 4455, 13364, 22273, 31182 -}; - -static const uint16_t binary_decreasing_array[] = { - 32767, 16384, 8192, 4096, 2048, 1024, 512, 256, - 128, 64, 32, 16, 8, 4, 2, -}; - -static const uint16_t dss_sp_unc_decreasing_array[] = { - 32767, 26214, 20972, 16777, 13422, 10737, 8590, 6872, - 5498, 4398, 3518, 2815, 2252, 1801, 1441, -}; - -static const uint16_t dss_sp_adaptive_gain[] = { - 102, 231, 360, 488, 617, 746, 875, 1004, - 1133, 
1261, 1390, 1519, 1648, 1777, 1905, 2034, - 2163, 2292, 2421, 2550, 2678, 2807, 2936, 3065, - 3194, 3323, 3451, 3580, 3709, 3838, 3967, 4096, -}; - -static const int32_t dss_sp_sinc[67] = { - 262, 293, 323, 348, 356, 336, 269, 139, - -67, -358, -733, -1178, -1668, -2162, -2607, -2940, - -3090, -2986, -2562, -1760, -541, 1110, 3187, 5651, - 8435, 11446, 14568, 17670, 20611, 23251, 25460, 27125, - 28160, 28512, 28160, - 27125, 25460, 23251, 20611, 17670, 14568, 11446, 8435, - 5651, 3187, 1110, -541, -1760, -2562, -2986, -3090, - -2940, -2607, -2162, -1668, -1178, -733, -358, -67, - 139, 269, 336, 356, 348, 323, 293, 262, -}; - -static av_cold int dss_sp_decode_init(AVCodecContext *avctx) -{ - DssSpContext *p = avctx->priv_data; - avctx->sample_fmt = AV_SAMPLE_FMT_S16; - avctx->sample_rate = 11025; - av_channel_layout_uninit(&avctx->ch_layout); - avctx->ch_layout = (AVChannelLayout)AV_CHANNEL_LAYOUT_MONO; - - p->pulse_dec_mode = 1; - p->avctx = avctx; - - return 0; -} - -static void dss_sp_unpack_coeffs(DssSpContext *p, const uint8_t *src) -{ - GetBitContext gb; - DssSpFrame *fparam = &p->fparam; - int i; - int subframe_idx; - uint32_t combined_pitch; - uint32_t tmp; - uint32_t pitch_lag; - - for (i = 0; i < DSS_SP_FRAME_SIZE; i += 2) { - p->bits[i] = src[i + 1]; - p->bits[i + 1] = src[i]; - } - - init_get_bits(&gb, p->bits, DSS_SP_FRAME_SIZE * 8); - - for (i = 0; i < 2; i++) - fparam->filter_idx[i] = get_bits(&gb, 5); - for (; i < 8; i++) - fparam->filter_idx[i] = get_bits(&gb, 4); - for (; i < 14; i++) - fparam->filter_idx[i] = get_bits(&gb, 3); - - for (subframe_idx = 0; subframe_idx < 4; subframe_idx++) { - fparam->sf_adaptive_gain[subframe_idx] = get_bits(&gb, 5); - - fparam->sf[subframe_idx].combined_pulse_pos = get_bits_long(&gb, 31); - - fparam->sf[subframe_idx].gain = get_bits(&gb, 6); - - for (i = 0; i < 7; i++) - fparam->sf[subframe_idx].pulse_val[i] = get_bits(&gb, 3); - } - - for (subframe_idx = 0; subframe_idx < 4; subframe_idx++) { - unsigned int C72_binomials[PULSE_MAX] = { - 72, 2556, 59640, 1028790, 13991544, 156238908, 1473109704, - 3379081753 - }; - unsigned int combined_pulse_pos = - fparam->sf[subframe_idx].combined_pulse_pos; - int index = 6; - - if (combined_pulse_pos < C72_binomials[PULSE_MAX - 1]) { - if (p->pulse_dec_mode) { - int pulse, pulse_idx; - pulse = PULSE_MAX - 1; - pulse_idx = 71; - combined_pulse_pos = - fparam->sf[subframe_idx].combined_pulse_pos; - - /* this part seems to be close to g723.1 gen_fcb_excitation() - * RATE_6300 */ - - /* TODO: what is 7? size of subframe? */ - for (i = 0; i < 7; i++) { - for (; - combined_pulse_pos < - dss_sp_combinatorial_table[pulse][pulse_idx]; - --pulse_idx) - ; - combined_pulse_pos -= - dss_sp_combinatorial_table[pulse][pulse_idx]; - pulse--; - fparam->sf[subframe_idx].pulse_pos[i] = pulse_idx; - } - } - } else { - p->pulse_dec_mode = 0; - - /* why do we need this? 
*/ - fparam->sf[subframe_idx].pulse_pos[6] = 0; - - for (i = 71; i >= 0; i--) { - if (C72_binomials[index] <= combined_pulse_pos) { - combined_pulse_pos -= C72_binomials[index]; - - fparam->sf[subframe_idx].pulse_pos[6 - index] = i; - - if (!index) - break; - --index; - } - --C72_binomials[0]; - if (index) { - int a; - for (a = 0; a < index; a++) - C72_binomials[a + 1] -= C72_binomials[a]; - } - } - } - } - - combined_pitch = get_bits(&gb, 24); - - fparam->pitch_lag[0] = (combined_pitch % 151) + 36; - - combined_pitch /= 151; - - for (i = 1; i < SUBFRAMES - 1; i++) { - fparam->pitch_lag[i] = combined_pitch % 48; - combined_pitch /= 48; - } - if (combined_pitch > 47) { - av_log (p->avctx, AV_LOG_WARNING, "combined_pitch was too large\n"); - combined_pitch = 0; - } - fparam->pitch_lag[i] = combined_pitch; - - pitch_lag = fparam->pitch_lag[0]; - for (i = 1; i < SUBFRAMES; i++) { - if (pitch_lag > 162) { - fparam->pitch_lag[i] += 162 - 23; - } else { - tmp = pitch_lag - 23; - if (tmp < 36) - tmp = 36; - fparam->pitch_lag[i] += tmp; - } - pitch_lag = fparam->pitch_lag[i]; - } -} - -static void dss_sp_unpack_filter(DssSpContext *p) -{ - int i; - - for (i = 0; i < 14; i++) - p->lpc_filter[i] = dss_sp_filter_cb[i][p->fparam.filter_idx[i]]; -} - -static void dss_sp_convert_coeffs(int32_t *lpc_filter, int32_t *coeffs) -{ - int a, a_plus, i; - - coeffs[0] = 0x2000; - for (a = 0; a < 14; a++) { - a_plus = a + 1; - coeffs[a_plus] = lpc_filter[a] >> 2; - if (a_plus / 2 >= 1) { - for (i = 1; i <= a_plus / 2; i++) { - int coeff_1, coeff_2, tmp; - - coeff_1 = coeffs[i]; - coeff_2 = coeffs[a_plus - i]; - - tmp = DSS_SP_FORMULA(coeff_1, lpc_filter[a], coeff_2); - coeffs[i] = av_clip_int16(tmp); - - tmp = DSS_SP_FORMULA(coeff_2, lpc_filter[a], coeff_1); - coeffs[a_plus - i] = av_clip_int16(tmp); - } - } - } -} - -static void dss_sp_add_pulses(int32_t *vector_buf, - const struct DssSpSubframe *sf) -{ - int i; - - for (i = 0; i < 7; i++) - vector_buf[sf->pulse_pos[i]] += (dss_sp_fixed_cb_gain[sf->gain] * - dss_sp_pulse_val[sf->pulse_val[i]] + - 0x4000) >> 15; -} - -static void dss_sp_gen_exc(int32_t *vector, int32_t *prev_exc, - int pitch_lag, int gain) -{ - int i; - - /* do we actually need this check? 
we can use just [a3 - i % a3] - * for both cases */ - if (pitch_lag < 72) - for (i = 0; i < 72; i++) - vector[i] = prev_exc[pitch_lag - i % pitch_lag]; - else - for (i = 0; i < 72; i++) - vector[i] = prev_exc[pitch_lag - i]; - - for (i = 0; i < 72; i++) { - int tmp = gain * vector[i] >> 11; - vector[i] = av_clip_int16(tmp); - } -} - -static void dss_sp_scale_vector(int32_t *vec, int bits, int size) -{ - int i; - - if (bits < 0) - for (i = 0; i < size; i++) - vec[i] = vec[i] >> -bits; - else - for (i = 0; i < size; i++) - vec[i] = vec[i] * (1 << bits); -} - -static void dss_sp_update_buf(int32_t *hist, int32_t *vector) -{ - int i; - - for (i = 114; i > 0; i--) - vector[i + 72] = vector[i]; - - for (i = 0; i < 72; i++) - vector[72 - i] = hist[i]; -} - -static void dss_sp_shift_sq_sub(const int32_t *filter_buf, - int32_t *error_buf, int32_t *dst) -{ - int a; - - for (a = 0; a < 72; a++) { - int i, tmp; - - tmp = dst[a] * filter_buf[0]; - - for (i = 14; i > 0; i--) - tmp -= error_buf[i] * (unsigned)filter_buf[i]; - - for (i = 14; i > 0; i--) - error_buf[i] = error_buf[i - 1]; - - tmp = (int)(tmp + 4096U) >> 13; - - error_buf[1] = tmp; - - dst[a] = av_clip_int16(tmp); - } -} - -static void dss_sp_shift_sq_add(const int32_t *filter_buf, int32_t *audio_buf, - int32_t *dst) -{ - int a; - - for (a = 0; a < 72; a++) { - int i, tmp = 0; - - audio_buf[0] = dst[a]; - - for (i = 14; i >= 0; i--) - tmp += audio_buf[i] * filter_buf[i]; - - for (i = 14; i > 0; i--) - audio_buf[i] = audio_buf[i - 1]; - - tmp = (tmp + 4096) >> 13; - - dst[a] = av_clip_int16(tmp); - } -} - -static void dss_sp_vec_mult(const int32_t *src, int32_t *dst, - const int16_t *mult) -{ - int i; - - dst[0] = src[0]; - - for (i = 1; i < 15; i++) - dst[i] = (src[i] * mult[i] + 0x4000) >> 15; -} - -static int dss_sp_get_normalize_bits(int32_t *vector_buf, int16_t size) -{ - unsigned int val; - int max_val; - int i; - - val = 1; - for (i = 0; i < size; i++) - val |= FFABS(vector_buf[i]); - - for (max_val = 0; val <= 0x4000; ++max_val) - val *= 2; - return max_val; -} - -static int dss_sp_vector_sum(DssSpContext *p, int size) -{ - int i, sum = 0; - for (i = 0; i < size; i++) - sum += FFABS(p->vector_buf[i]); - return sum; -} - -static void dss_sp_sf_synthesis(DssSpContext *p, int32_t lpc_filter, - int32_t *dst, int size) -{ - int32_t tmp_buf[15]; - int32_t noise[72]; - int bias, vsum_2 = 0, vsum_1 = 0, v36, normalize_bits; - int i, tmp; - - if (size > 0) { - vsum_1 = dss_sp_vector_sum(p, size); - - if (vsum_1 > 0xFFFFF) - vsum_1 = 0xFFFFF; - } - - normalize_bits = dss_sp_get_normalize_bits(p->vector_buf, size); - - dss_sp_scale_vector(p->vector_buf, normalize_bits - 3, size); - dss_sp_scale_vector(p->audio_buf, normalize_bits, 15); - dss_sp_scale_vector(p->err_buf1, normalize_bits, 15); - - v36 = p->err_buf1[1]; - - dss_sp_vec_mult(p->filter, tmp_buf, binary_decreasing_array); - dss_sp_shift_sq_add(tmp_buf, p->audio_buf, p->vector_buf); - - dss_sp_vec_mult(p->filter, tmp_buf, dss_sp_unc_decreasing_array); - dss_sp_shift_sq_sub(tmp_buf, p->err_buf1, p->vector_buf); - - /* lpc_filter can be negative */ - lpc_filter = lpc_filter >> 1; - if (lpc_filter >= 0) - lpc_filter = 0; - - if (size > 1) { - for (i = size - 1; i > 0; i--) { - tmp = DSS_SP_FORMULA(p->vector_buf[i], lpc_filter, - p->vector_buf[i - 1]); - p->vector_buf[i] = av_clip_int16(tmp); - } - } - - tmp = DSS_SP_FORMULA(p->vector_buf[0], lpc_filter, v36); - p->vector_buf[0] = av_clip_int16(tmp); - - dss_sp_scale_vector(p->vector_buf, -normalize_bits, size); - 
dss_sp_scale_vector(p->audio_buf, -normalize_bits, 15); - dss_sp_scale_vector(p->err_buf1, -normalize_bits, 15); - - if (size > 0) - vsum_2 = dss_sp_vector_sum(p, size); - - if (vsum_2 >= 0x40) - tmp = (vsum_1 << 11) / vsum_2; - else - tmp = 1; - - bias = 409 * tmp >> 15 << 15; - tmp = (bias + 32358 * p->noise_state) >> 15; - noise[0] = av_clip_int16(tmp); - - for (i = 1; i < size; i++) { - tmp = (bias + 32358 * noise[i - 1]) >> 15; - noise[i] = av_clip_int16(tmp); - } - - p->noise_state = noise[size - 1]; - for (i = 0; i < size; i++) { - tmp = (p->vector_buf[i] * noise[i]) >> 11; - dst[i] = av_clip_int16(tmp); - } -} - -static void dss_sp_update_state(DssSpContext *p, int32_t *dst) -{ - int i, offset = 6, counter = 0, a = 0; - - for (i = 0; i < 6; i++) - p->excitation[i] = p->excitation[288 + i]; - - for (i = 0; i < 72 * SUBFRAMES; i++) - p->excitation[6 + i] = dst[i]; - - do { - int tmp = 0; - - for (i = 0; i < 6; i++) - tmp += p->excitation[offset--] * dss_sp_sinc[a + i * 11]; - - offset += 7; - - tmp >>= 15; - dst[counter] = av_clip_int16(tmp); - - counter++; - - a = (a + 1) % 11; - if (!a) - offset++; - } while (offset < FF_ARRAY_ELEMS(p->excitation)); -} - -static void dss_sp_32to16bit(int16_t *dst, int32_t *src, int size) -{ - int i; - - for (i = 0; i < size; i++) - dst[i] = av_clip_int16(src[i]); -} - -static int dss_sp_decode_one_frame(DssSpContext *p, - int16_t *abuf_dst, const uint8_t *abuf_src) -{ - int i, j; - - dss_sp_unpack_coeffs(p, abuf_src); - - dss_sp_unpack_filter(p); - - dss_sp_convert_coeffs(p->lpc_filter, p->filter); - - for (j = 0; j < SUBFRAMES; j++) { - dss_sp_gen_exc(p->vector_buf, p->history, - p->fparam.pitch_lag[j], - dss_sp_adaptive_gain[p->fparam.sf_adaptive_gain[j]]); - - dss_sp_add_pulses(p->vector_buf, &p->fparam.sf[j]); - - dss_sp_update_buf(p->vector_buf, p->history); - - for (i = 0; i < 72; i++) - p->vector_buf[i] = p->history[72 - i]; - - dss_sp_shift_sq_sub(p->filter, - p->err_buf2, p->vector_buf); - - dss_sp_sf_synthesis(p, p->lpc_filter[0], - &p->working_buffer[j][0], 72); - } - - dss_sp_update_state(p, &p->working_buffer[0][0]); - - dss_sp_32to16bit(abuf_dst, - &p->working_buffer[0][0], 264); - return 0; -} - -static int dss_sp_decode_frame(AVCodecContext *avctx, AVFrame *frame, - int *got_frame_ptr, AVPacket *avpkt) -{ - DssSpContext *p = avctx->priv_data; - const uint8_t *buf = avpkt->data; - int buf_size = avpkt->size; - - int16_t *out; - int ret; - - if (buf_size < DSS_SP_FRAME_SIZE) { - if (buf_size) - av_log(avctx, AV_LOG_WARNING, - "Expected %d bytes, got %d - skipping packet.\n", - DSS_SP_FRAME_SIZE, buf_size); - *got_frame_ptr = 0; - return AVERROR_INVALIDDATA; - } - - frame->nb_samples = DSS_SP_SAMPLE_COUNT; - if ((ret = ff_get_buffer(avctx, frame, 0)) < 0) - return ret; - - out = (int16_t *)frame->data[0]; - - dss_sp_decode_one_frame(p, out, buf); - - *got_frame_ptr = 1; - - return DSS_SP_FRAME_SIZE; -} - -const FFCodec ff_dss_sp_decoder = { - .p.name = "dss_sp", - CODEC_LONG_NAME("Digital Speech Standard - Standard Play mode (DSS SP)"), - .p.type = AVMEDIA_TYPE_AUDIO, - .p.id = AV_CODEC_ID_DSS_SP, - .priv_data_size = sizeof(DssSpContext), - .init = dss_sp_decode_init, - FF_CODEC_DECODE_CB(dss_sp_decode_frame), - .p.capabilities = AV_CODEC_CAP_DR1 | AV_CODEC_CAP_CHANNEL_CONF, -}; diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/jpegquanttables.h b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/jpegquanttables.h deleted file mode 100644 index 
48a3429874679611a972e27acb60649146909d6a..0000000000000000000000000000000000000000 --- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/jpegquanttables.h +++ /dev/null @@ -1,32 +0,0 @@ -/* - * MJPEG quantization tables - * - * This file is part of FFmpeg. - * - * FFmpeg is free software; you can redistribute it and/or - * modify it under the terms of the GNU Lesser General Public - * License as published by the Free Software Foundation; either - * version 2.1 of the License, or (at your option) any later version. - * - * FFmpeg is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * Lesser General Public License for more details. - * - * You should have received a copy of the GNU Lesser General Public - * License along with FFmpeg; if not, write to the Free Software - * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA - */ - -#ifndef AVCODEC_JPEGQUANTTABLES_H -#define AVCODEC_JPEGQUANTTABLES_H - -#include -#include "libavutil/attributes_internal.h" - -FF_VISIBILITY_PUSH_HIDDEN -extern const uint8_t ff_mjpeg_std_luminance_quant_tbl[64]; -extern const uint8_t ff_mjpeg_std_chrominance_quant_tbl[64]; -FF_VISIBILITY_POP_HIDDEN - -#endif /* AVCODEC_JPEGQUANTTABLES_H */ diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/mips/iirfilter_mips.c b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/mips/iirfilter_mips.c deleted file mode 100644 index 3a1352a7e4392b817cd138fbec3266a561be2de2..0000000000000000000000000000000000000000 --- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/mips/iirfilter_mips.c +++ /dev/null @@ -1,209 +0,0 @@ -/* - * Copyright (c) 2012 - * MIPS Technologies, Inc., California. - * - * Redistribution and use in source and binary forms, with or without - * modification, are permitted provided that the following conditions - * are met: - * 1. Redistributions of source code must retain the above copyright - * notice, this list of conditions and the following disclaimer. - * 2. Redistributions in binary form must reproduce the above copyright - * notice, this list of conditions and the following disclaimer in the - * documentation and/or other materials provided with the distribution. - * 3. Neither the name of the MIPS Technologies, Inc., nor the names of its - * contributors may be used to endorse or promote products derived from - * this software without specific prior written permission. - * - * THIS SOFTWARE IS PROVIDED BY THE MIPS TECHNOLOGIES, INC. ``AS IS'' AND - * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE - * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE - * ARE DISCLAIMED. IN NO EVENT SHALL THE MIPS TECHNOLOGIES, INC. BE LIABLE - * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL - * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS - * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) - * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT - * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY - * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF - * SUCH DAMAGE. - * - * Author: Bojan Zivkovic (bojan@mips.com) - * - * IIR filter optimized for MIPS floating-point architecture - * - * This file is part of FFmpeg. 
- * - * FFmpeg is free software; you can redistribute it and/or - * modify it under the terms of the GNU Lesser General Public - * License as published by the Free Software Foundation; either - * version 2.1 of the License, or (at your option) any later version. - * - * FFmpeg is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * Lesser General Public License for more details. - * - * You should have received a copy of the GNU Lesser General Public - * License along with FFmpeg; if not, write to the Free Software - * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA - */ - - /** - * @file - * Reference: libavcodec/iirfilter.c - */ - -#include "config.h" -#include "libavcodec/iirfilter.h" - -#if HAVE_INLINE_ASM -#if !HAVE_MIPS32R6 && !HAVE_MIPS64R6 -typedef struct FFIIRFilterCoeffs { - int order; - float gain; - int *cx; - float *cy; -} FFIIRFilterCoeffs; - -typedef struct FFIIRFilterState { - float x[1]; -} FFIIRFilterState; - -static void iir_filter_flt_mips(const struct FFIIRFilterCoeffs *c, - struct FFIIRFilterState *s, int size, - const float *src, ptrdiff_t sstep, float *dst, ptrdiff_t dstep) -{ - if (c->order == 2) { - int i; - const float *src0 = src; - float *dst0 = dst; - for (i = 0; i < size; i++) { - float in = *src0 * c->gain + s->x[0] * c->cy[0] + s->x[1] * c->cy[1]; - *dst0 = s->x[0] + in + s->x[1] * c->cx[1]; - s->x[0] = s->x[1]; - s->x[1] = in; - src0 += sstep; - dst0 += dstep; - } - } else if (c->order == 4) { - int i; - const float *src0 = src; - float *dst0 = dst; - float four = 4.0; - float six = 6.0; - for (i = 0; i < size; i += 4) { - float in1, in2, in3, in4; - float res1, res2, res3, res4; - float *x = s->x; - float *cy = c->cy; - float gain = c->gain; - float src0_0 = src0[0 ]; - float src0_1 = src0[sstep ]; - float src0_2 = src0[2*sstep]; - float src0_3 = src0[3*sstep]; - - __asm__ volatile ( - "lwc1 $f0, 0(%[cy]) \n\t" - "lwc1 $f4, 0(%[x]) \n\t" - "lwc1 $f5, 4(%[x]) \n\t" - "lwc1 $f6, 8(%[x]) \n\t" - "lwc1 $f7, 12(%[x]) \n\t" - "mul.s %[in1], %[src0_0], %[gain] \n\t" - "mul.s %[in2], %[src0_1], %[gain] \n\t" - "mul.s %[in3], %[src0_2], %[gain] \n\t" - "mul.s %[in4], %[src0_3], %[gain] \n\t" - "lwc1 $f1, 4(%[cy]) \n\t" - "madd.s %[in1], %[in1], $f0, $f4 \n\t" - "madd.s %[in2], %[in2], $f0, $f5 \n\t" - "madd.s %[in3], %[in3], $f0, $f6 \n\t" - "madd.s %[in4], %[in4], $f0, $f7 \n\t" - "lwc1 $f2, 8(%[cy]) \n\t" - "madd.s %[in1], %[in1], $f1, $f5 \n\t" - "madd.s %[in2], %[in2], $f1, $f6 \n\t" - "madd.s %[in3], %[in3], $f1, $f7 \n\t" - "lwc1 $f3, 12(%[cy]) \n\t" - "add.s $f8, $f5, $f7 \n\t" - "madd.s %[in1], %[in1], $f2, $f6 \n\t" - "madd.s %[in2], %[in2], $f2, $f7 \n\t" - "mul.s $f9, $f6, %[six] \n\t" - "mul.s $f10, $f7, %[six] \n\t" - "madd.s %[in1], %[in1], $f3, $f7 \n\t" - "madd.s %[in2], %[in2], $f3, %[in1] \n\t" - "madd.s %[in3], %[in3], $f2, %[in1] \n\t" - "madd.s %[in4], %[in4], $f1, %[in1] \n\t" - "add.s %[res1], $f4, %[in1] \n\t" - "swc1 %[in1], 0(%[x]) \n\t" - "add.s $f0, $f6, %[in1] \n\t" - "madd.s %[in3], %[in3], $f3, %[in2] \n\t" - "madd.s %[in4], %[in4], $f2, %[in2] \n\t" - "add.s %[res2], $f5, %[in2] \n\t" - "madd.s %[res1], %[res1], $f8, %[four] \n\t" - "add.s $f8, $f7, %[in2] \n\t" - "swc1 %[in2], 4(%[x]) \n\t" - "madd.s %[in4], %[in4], $f3, %[in3] \n\t" - "add.s %[res3], $f6, %[in3] \n\t" - "add.s %[res1], %[res1], $f9 \n\t" - "madd.s %[res2], %[res2], $f0, %[four] \n\t" - "swc1 %[in3], 
8(%[x]) \n\t" - "add.s %[res4], $f7, %[in4] \n\t" - "madd.s %[res3], %[res3], $f8, %[four] \n\t" - "swc1 %[in4], 12(%[x]) \n\t" - "add.s %[res2], %[res2], $f10 \n\t" - "add.s $f8, %[in1], %[in3] \n\t" - "madd.s %[res3], %[res3], %[in1], %[six] \n\t" - "madd.s %[res4], %[res4], $f8, %[four] \n\t" - "madd.s %[res4], %[res4], %[in2], %[six] \n\t" - - : [in1]"=&f"(in1), [in2]"=&f"(in2), - [in3]"=&f"(in3), [in4]"=&f"(in4), - [res1]"=&f"(res1), [res2]"=&f"(res2), - [res3]"=&f"(res3), [res4]"=&f"(res4) - : [src0_0]"f"(src0_0), [src0_1]"f"(src0_1), - [src0_2]"f"(src0_2), [src0_3]"f"(src0_3), - [gain]"f"(gain), [x]"r"(x), [cy]"r"(cy), - [four]"f"(four), [six]"f"(six) - : "$f0", "$f1", "$f2", "$f3", - "$f4", "$f5", "$f6", "$f7", - "$f8", "$f9", "$f10", - "memory" - ); - - dst0[0 ] = res1; - dst0[sstep ] = res2; - dst0[2*sstep] = res3; - dst0[3*sstep] = res4; - - src0 += 4*sstep; - dst0 += 4*dstep; - } - } else { - int i; - const float *src0 = src; - float *dst0 = dst; - for (i = 0; i < size; i++) { - int j; - float in, res; - in = *src0 * c->gain; - for(j = 0; j < c->order; j++) - in += c->cy[j] * s->x[j]; - res = s->x[0] + in + s->x[c->order >> 1] * c->cx[c->order >> 1]; - for(j = 1; j < c->order >> 1; j++) - res += (s->x[j] + s->x[c->order - j]) * c->cx[j]; - for(j = 0; j < c->order - 1; j++) - s->x[j] = s->x[j + 1]; - *dst0 = res; - s->x[c->order - 1] = in; - src0 += sstep; - dst0 += dstep; - } - } -} -#endif /* !HAVE_MIPS32R6 && !HAVE_MIPS64R6 */ -#endif /* HAVE_INLINE_ASM */ - -void ff_iir_filter_init_mips(FFIIRFilterContext *f) { -#if HAVE_INLINE_ASM -#if !HAVE_MIPS32R6 && !HAVE_MIPS64R6 - f->filter_flt = iir_filter_flt_mips; -#endif /* !HAVE_MIPS32R6 && !HAVE_MIPS64R6 */ -#endif /* HAVE_INLINE_ASM */ -} diff --git a/spaces/congsaPfin/Manga-OCR/logs/Cara Download FateGrand Order di Play Store Solusi untuk Masalah Umum yang Mungkin Anda Hadapi.md b/spaces/congsaPfin/Manga-OCR/logs/Cara Download FateGrand Order di Play Store Solusi untuk Masalah Umum yang Mungkin Anda Hadapi.md deleted file mode 100644 index 33d7f7b86d01ac753e1617bbcccc3159627ffc50..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Cara Download FateGrand Order di Play Store Solusi untuk Masalah Umum yang Mungkin Anda Hadapi.md +++ /dev/null @@ -1,130 +0,0 @@ -
    -

How to Download Fate/Grand Order from the Play Store

    -

Fate/Grand Order is a mobile RPG based on Type-Moon's Fate series, in which you summon and command heroic spirits from history and legend to save humanity from disaster. If you want to play the game on your Android device, you need to follow a few steps to download and install it. Here is how to download Fate/Grand Order from the Play Store.

    -

What is Fate/Grand Order?

    -

History and background of the game

    -

Fate/Grand Order was released in 2015 by Aniplex, a subsidiary of Sony Music Entertainment Japan. It is part of the Fate/stay night franchise, which began as a visual novel in 2004 and has since spawned sequels, manga, novels, and anime. The story of Fate/Grand Order is set in 2015, where an organization called Chaldea observes Earth's future and discovers that human history will go extinct in 2017. Without warning, the future that had been guaranteed up to the year 2115 suddenly vanishes. To prevent this, Chaldea carries out its sixth experiment, which is still in a trial phase.

    -

    cara download fate grand order di play store


    DOWNLOADhttps://urlca.com/2uOdqz



    -

    "Ini adalah perjalanan waktu ke masa lalu."

    -

A forbidden ritual that converts humans into spirits and sends them into the past, to intervene in the events that caused singularities in spacetime.

    -

    "Namanya adalah Perintah Perlindungan Hak Asasi Manusia, Grand Order."

    -

It is the general term for those who fight against fate and confront human history itself in order to protect humanity.

    -

    Gameplay dan fitur game

    -

    Fate/Grand Order adalah game RPG berbasis giliran dengan beberapa elemen novel visual. Pemain menjadi seorang "Master" yang memerintahkan sekelompok individu yang disebut "Servant", yang biasanya adalah tokoh-tokoh sejarah, sastra, dan mitologi dari berbagai budaya. Pemain membentuk sebuah tim yang terdiri dari hingga 6 Servant dalam setiap pertempuran, 3 anggota aktif, dan 3 anggota cadangan. Di setiap giliran, pemain d

    ian dapat memilih 3 dari 5 kartu serangan yang muncul secara acak dari setiap Servant, dengan jenis kartu yang berbeda-beda, seperti Buster, Arts, dan Quick. Setiap jenis kartu memiliki efek yang berbeda, seperti meningkatkan kerusakan, mengisi meteran NP (Noble Phantasm), atau meningkatkan kritikal. Pemain juga dapat menggunakan skill Servant dan item untuk mendapatkan keuntungan dalam pertempuran. Setiap Servant memiliki NP yang unik, yang merupakan serangan khusus yang dapat diaktifkan ketika meteran NP penuh. NP biasanya memiliki efek tambahan selain kerusakan, seperti menyembuhkan, menambah status, atau mengurangi status musuh.

    -

    The game consists of several story chapters called "Orders", in which the player visits different historical eras and meets famous figures. Each Order contains several missions the player must complete to advance the story. In addition, the game has side content such as daily quests, special events, and a PvP mode called "Grand Battle". Players can earn various rewards for completing this content, such as in-game currency, upgrade items, and new Servants.

    -

    How do you download Fate/Grand Order from the Play Store?

    -

    Step 1: Check your device specifications

    -

    Fate/Grand Order is a fairly heavy game and needs adequate device specifications to run smoothly. These are the minimum specifications required to play it:

    OS                      RAM             Internal Storage    Screen Resolution
    Android 4.1 or higher   2 GB or more    2 GB or more        1280 x 720 or higher
    -

    If your device does not meet these specifications, you may run into problems such as lag, crashes, or errors while playing. So make sure you have a suitable device before downloading the game.

    -


    -

    Step 2: Change your country to Japan or the United States

    -

    Fate/Grand Order is only available in certain countries: Japan, the United States, Canada, Australia, Singapore, the Philippines, Vietnam, and Thailand. If you are outside these countries, you will not be able to find the game on the Play Store. To work around this, you need to change your Play Store country to one of the countries the game supports. Here is how to do it:

    -
      -
    • Open the Play Store app on your device.
    • -
    • Tap the menu icon in the top-left corner of the screen.
    • -
    • Select Account from the menu that appears.
    • -
    • Tap Country and profiles preferences.
    • -
    • Choose the country you want, for example Japan or the United States.
    • -
    • Tap Switch to this country to confirm your choice.
    • -
    • Wait for the Play Store to update its content for your new country.
    • -
    -

    Note: you can only change your Play Store country once a year, so make sure you pick the right country before making this change.

    -

    Step 3: Open the Play Store and search for Fate/Grand Order

    -

    After changing your country, open the Play Store again and search for Fate/Grand Order in the search box. Make sure you type the game's name correctly and in full. You will see two versions of the game in the results: Fate/Grand Order (English) and Fate/Grand Order (Japanese). Pick the version that matches the language you want. The English version shows English text on screen, while the Japanese version shows Japanese text. You can choose either one, or both, depending on your preference.

    -

    Step 4: Download and install the game

    -

    Once you have chosen the version you want, tap the Install button to download and install the game on your device. Make sure you have a stable internet connection and enough free storage space. The game is around 1.5 GB, so the download and installation may take several minutes. Do not close the Play Store or turn off your device while this is in progress.

    -

    Step 5: Launch the game and enjoy your adventure

    -

    Once the download and installation are finished, you can open the game from your device's home screen or from the Play Store. Tap the game icon to launch it. You will see an opening screen with the game's logo and a few menu options. Select Start Game to begin. You will be asked to enter your Master name, which will be your identity in the game, and you can also choose your Master's gender and voice. After that, you will see a story prologue that explains the background and goal of the game. Follow the dialogue and on-screen instructions to learn the basics of the gameplay and meet your first Servant.

    -

    Congratulations, you have successfully downloaded Fate/Grand Order from the Play Store and started playing. You can now enjoy your adventure as a Master tasked with saving human history from destruction. Get ready for the challenges, mysteries, and surprises waiting for you in every Order. Good luck!

    -

    Tips and tricks for playing Fate/Grand Order

    -

    Do daily quests and events to earn rewards

    -

    Fate/Grand Order is constantly updated with new content, such as daily quests and special events. Daily quests are missions you can do every day to earn rewards such as in-game currency, upgrade items, and new Servants. The daily quests rotate each day according to the type of reward, so make sure you don't miss them. Special events are missions that are only available for a limited time, usually tied to a specific theme such as a festival, a holiday, or a collaboration with another series. They usually have their own story, unique characters, and exclusive rewards that can't be obtained anywhere else. Events also change regularly, so follow the game's official site or social media so you don't miss out.

    -

    Save your summon tickets for rare Servants

    -

    Fate/Grand Order relies on a gacha system for obtaining new Servants. Gacha is a Japanese term for vending machines that dispense random items at a fixed price. In this game, players spend in-game currency or summon tickets to roll the gacha and receive a random Servant of varying rarity. There are two types of gacha: the regular gacha and special gachas. The regular gacha is always available and has a fixed Servant pool. Special gachas are only available for a limited time and feature a different pool from the regular gacha, usually rarer or stronger Servants.

    -

    Summon tickets are items that let you roll the gacha once without spending in-game currency. They can be obtained from various sources, such as daily quests, special events, daily login bonuses, or monthly rewards. Summon tickets are very valuable because they can help you get rare Servants without spending real money. It is therefore recommended to save up as many summon tickets as possible and only use them when a special gacha appears that interests you, for example one featuring your favorite Servant, a Servant of a class you need, or a Servant of high rarity. That way, you increase your chances of getting the Servant you want.

    -

    Mix attack card types to build effective combinations

    -

    Fate/Grand Order has you pick 3 of the 5 attack cards dealt at random from your Servants each turn. Attack cards come in different types: Buster, Arts, and Quick. Each type has a different effect, such as increasing damage, charging the NP gauge, or boosting critical hits. In addition, the order of the cards also affects the result. For example, if you pick a Buster card first, all cards after it gain a damage bonus. If you pick an Arts card first, all cards after it gain an NP charge bonus. If you pick a Quick card first, all cards after it gain a critical bonus.

    -

    It is therefore important to pay attention to the type and order of the attack cards you choose. You can mix card types to build combinations that fit the battle situation. For example, if you want to hit hard and fast, pick a BBB (Buster-Buster-Buster) combination. If you want to fill the NP gauge as quickly as possible, pick something like ABB (Arts-Buster-Buster). If you want to score a lot of critical hits, pick QQQ (Quick-Quick-Quick). If you want to trigger special effects such as a Brave Chain or an Arts Chain, pick a combination that meets the requirement; the sketch below illustrates this first-card rule.
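
    As a rough illustration of the first-card rule described above, here is a small Python sketch. It only labels the bonus types the article itself describes; the function name and the extra chain handling are illustrative assumptions, not the game's real formula or values.

```python
# Illustrative only: maps the first card of a chain to the bonus the article
# describes. This is not the game's actual damage formula or numbers.
CHAIN_BONUS = {
    "Buster": "damage bonus for every card in the chain",
    "Arts": "NP charge bonus for every card in the chain",
    "Quick": "critical bonus for every card in the chain",
}

def describe_chain(cards):
    """cards is a list of three card types, e.g. ['Arts', 'Buster', 'Buster']."""
    effects = [CHAIN_BONUS[cards[0]]]
    if len(set(cards)) == 1:
        # Three cards of one type form an X Chain (e.g. an Arts Chain) with an
        # extra bonus; three cards from one Servant form a Brave Chain with an
        # extra attack, but card ownership is not tracked in this sketch.
        effects.append(f"{cards[0]} Chain bonus")
    return effects

print(describe_chain(["Arts", "Buster", "Buster"]))  # NP charge bonus
print(describe_chain(["Quick", "Quick", "Quick"]))   # critical bonus + Quick Chain
```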

    -

    Level up your Servants' skills and Noble Phantasms

    -

    Fate/Grand Order requires you to keep strengthening your Servants so they can handle increasingly powerful enemies. There are two main ways to do this: raising their skills and raising their Noble Phantasms. Skills are special abilities each Servant has, which can be activated during a turn and then go on a cooldown. Skills usually have a positive effect on the Servant or their team, such as healing, buffing stats, or reducing cooldowns. A Noble Phantasm is each Servant's special attack, which can be activated when the NP gauge is full. Noble Phantasms usually have a negative effect on enemies or a positive effect on your team, such as dealing massive damage, inflicting debuffs, or granting buffs.

    -

    To raise a Servant's skills, you need specific items that can be obtained from daily quests, special events, or regular battles. The items differ depending on the type and level of the skill. The higher the skill level, the more items are needed and the bigger the effect. To raise a Noble Phantasm, you need duplicates of that Servant or a special item called Saint Quartz. Duplicate Servants come from the gacha or from special events. Saint Quartz comes from daily logins, completing story quests, or purchases with real money. The higher the Noble Phantasm level, the greater its damage or effect.

    -

    Use Servants whose class counters the enemy

    -

    Fate/Grand Order requires you to pay attention to the classes of your Servants and of your enemies. Class is the attribute that determines each Servant's strengths and weaknesses in battle. There are 7 base classes linked in a triangle system: Saber, Archer, and Lancer; Rider, Caster, and Assassin; and Berserker. There are also extra classes with different relationships to the base classes: Ruler, Avenger, Shielder, Alter Ego, Moon Cancer, Foreigner, and Beast. Each class has advantages and disadvantages against the others. For example, Saber is strong against Lancer but weak against Archer. Rider is strong against Caster but weak against Assassin. Berserker is strong against every class, but also weak against every class. The short sketch below encodes this triangle.
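
    To make the class triangle described above concrete, here is a minimal Python sketch that encodes only the relationships the article states (Saber beats Lancer, Lancer beats Archer, Archer beats Saber; Rider beats Caster, Caster beats Assassin, Assassin beats Rider; Berserker trades extra damage with everyone). Damage multipliers and the extra classes are deliberately left out, so treat it as an illustration rather than the game's actual damage table.

```python
# Encodes only the class relationships stated in the article; not the full game table.
ADVANTAGE = {
    "Saber": "Lancer", "Lancer": "Archer", "Archer": "Saber",
    "Rider": "Caster", "Caster": "Assassin", "Assassin": "Rider",
}

def matchup(attacker, defender):
    """Rough verdict for attacker vs. defender among the base classes."""
    if "Berserker" in (attacker, defender):
        return "both sides deal extra damage"        # Berserker hits and gets hit hard
    if ADVANTAGE.get(attacker) == defender:
        return "advantage (deal extra damage)"
    if ADVANTAGE.get(defender) == attacker:
        return "disadvantage (take extra damage)"
    return "neutral"

print(matchup("Saber", "Lancer"))      # advantage
print(matchup("Rider", "Assassin"))    # disadvantage
print(matchup("Berserker", "Caster"))  # both sides deal extra damage
```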

    -

    To win battles easily, use Servants whose class has the advantage against the enemies you are facing. You can see the enemy classes on the team-select screen before starting a battle, and class symbols also appear above enemies' heads during battle. Pick Servants whose class counters the enemy's class. For example, if you are facing Lancer enemies, bring a Saber. If you are facing Caster enemies, bring a Rider. If you are facing Berserker enemies, any class with strong attacks will do, since Berserkers take extra damage from everyone but also deal extra damage to everyone.

    -

    Conclusion

    -

    Fate/Grand Order is a mobile RPG based on Type-Moon's Fate series, in which you summon and command heroic spirits from history and legend to save humanity from disaster. The game has an engaging story, exciting gameplay, and a diverse cast of characters. To play it on your Android device, you need to follow a few steps to download and install it, and it helps to keep a few tips and tricks in mind to strengthen your Servants and win battles more easily. Hopefully this article is useful whether you want to try the game or are already playing it. Have fun!

    -

    FAQ

    -

    Here are some questions frequently asked by Fate/Grand Order players:

    -
      -
    • Is the game free?
    • -

      Yes, the game is free to download and play. However, there are premium items that can be bought with real money, such as Saint Quartz, Golden Apples, and Costume Dresses. These items are optional and not required.

      -
    • Is the game online or offline?
    • -

      The game is online and requires an internet connection to play. You cannot play it without being connected to the internet.

      -
    • Does the game have a multiplayer mode?
    • -

      The game has a multiplayer mode called Grand Battle, where you can fight other players in real time. You can pick your own team or use a random team provided by the system. You can also communicate with other players using chat or stickers.

      -
    • Is the game available in languages other than English and Japanese?
    • -

      At the moment, the game is only available in English and Japanese. There are no plans for other language versions.

      -
    • Is the game connected to the Fate anime or manga?
    • -

      The game is part of the Fate/stay night franchise, which has many anime and manga adaptations. Some characters and stories from the game appear or are mentioned in other Fate anime and manga. However, the game has its own story that differs from the other Fate works, and you do not need to watch or read them to enjoy this game.

      -

    -
    -
    \ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/Clash Royale on Chromebook Everything You Need to Know.md b/spaces/congsaPfin/Manga-OCR/logs/Clash Royale on Chromebook Everything You Need to Know.md deleted file mode 100644 index ff37cf8d7cfb4982698447e392d60bcbde6d4fd1..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Clash Royale on Chromebook Everything You Need to Know.md +++ /dev/null @@ -1,97 +0,0 @@ - -

    How to Download Clash Royale on a Chromebook

    -

    If you're looking for a fun and addictive game to play on your Chromebook, you might want to check out Clash Royale. Clash Royale is a real-time multiplayer game that combines card collecting, strategy, and tower defense. You can build your own battle deck, challenge other players from around the world, and join or form clans to share cards and participate in clan wars. In this article, we'll show you how to download and install Clash Royale on your Chromebook, and give you some tips and tricks for playing it.

    -

    download clash royale on chromebook


    DOWNLOAD ★★★ https://urlca.com/2uOfgA



    -

    What is Clash Royale?

    -

    A brief introduction to the game and its features

    -

    Clash Royale is a game developed by Supercell, the same company behind the popular Clash of Clans. It features many of the same characters and elements from the Clash universe, but with a different gameplay style. In Clash Royale, you have to use cards to deploy troops, spells, and buildings on a small arena, with the goal of destroying your opponent's towers and king. You can collect and upgrade dozens of cards, each with different abilities and strengths. You can also unlock new arenas, earn chests, and join leagues as you progress in the game.

    -

    Why play Clash Royale on a Chromebook?

    -

    Playing Clash Royale on a Chromebook has several advantages over playing it on a smartphone or tablet. For one thing, you can enjoy a bigger screen and better graphics, which can enhance your gaming experience. You can also use your keyboard and mouse to control the game, which can give you more precision and speed. Plus, you don't have to worry about battery life or storage space, as Chromebooks are designed to be fast, secure, and easy to use.

    -

    What is a Chromebook?

    -

    A brief introduction to the device and its operating system

    -

    A Chromebook is a laptop that runs on ChromeOS, an operating system made by Google. ChromeOS is based on the Chrome browser, which means that you can access most of your apps and files online. Chromebooks are also compatible with Android apps, which you can download from the Google Play Store app. Chromebooks are known for being affordable, lightweight, and reliable. They also have built-in virus protection, automatic updates, and long battery life.

    -

    How to access the Google Play Store on a Chromebook

    -

    To download Android apps on your Chromebook, you need to access the Google Play Store app. Not all Chromebooks support Android apps, so you need to check if yours does first. You can do this by going to Settings > Apps > Google Play Store. If you see this option, it means that your Chromebook supports Android apps. If not, you might need to update your ChromeOS or wait for future updates.

    -

    To enable the Google Play Store app on your Chromebook, follow these steps:

    -
      -
    1. Go to Settings > Apps > Google Play Store.
    2. -
    3. Click Turn On next to Install apps and games from Google Play on your Chromebook.
    4. -
    5. Read and agree to the terms of service.
    6. -
    7. The Google Play Store app will open. Sign in with your Google account if prompted.
    8. -
    9. You can now browse and download Android apps on your Chromebook.
    10. -
    -

    How to install and use Android apps on a Chromebook

    -

    Step-by-step instructions for downloading and installing Clash Royale

    -

    Once you have enabled the Google Play Store app on your Chromebook, you can download and install Clash Royale by following these steps:

    -
      -
    1. Open the Google Play Store app on your Chromebook.
    2. -
    3. Search for Clash Royale in the search bar.
    4. -
    5. Click Install next to the Clash Royale icon.
    6. -
    7. Wait for the app to download and install on your Chromebook.
    8. -
    9. Click Open to launch the app.
    10. -
    11. You can also find the app in your Launcher or on your Shelf.
    12. -
    -

    Tips and tricks for playing Clash Royale on a Chromebook

    -

    Playing Clash Royale on a Chromebook is similar to playing it on a smartphone or tablet, but with some differences. Here are some tips and tricks to help you get the most out of your gaming experience:

    -
      -
    • You can use your keyboard and mouse to control the game. You can drag and drop cards with your mouse, or use the number keys to select them. You can also use the arrow keys to move the camera, and the spacebar to zoom in and out.
    • -
    • You can adjust the size and position of the app window by dragging the edges or corners. You can also switch between full-screen and windowed mode by pressing F4.
    • -
    • You can change the display settings of the app by going to Settings > Apps > Google Play Store > Manage Android preferences > Display. You can adjust the font size, screen zoom, and screen resolution to suit your preferences.
    • -
    • You can sync your progress and data across devices by linking your game account to your Supercell ID, Google Play Games, or Facebook. You can do this by going to Settings > Account in the game.
    • -
    • You can play with other players on different platforms by joining or creating a clan, or by using the 2v2 mode. You can also chat with your clanmates and friends in the game.
    • -
    -

    Conclusion

    -

    A summary of the main points and benefits of playing Clash Royale on a Chromebook

    -

    Clash Royale is a fun and addictive game that you can play on your Chromebook. By downloading and installing it from the Google Play Store app, you can enjoy a bigger screen, better graphics, and more control options. You can also access your game data and progress across devices, and play with other players online. Playing Clash Royale on a Chromebook can enhance your gaming experience and make you a better player.

    -

    -

    FAQs

    -

    Here are some frequently asked questions about playing Clash Royale on a Chromebook:

    Question: Is Clash Royale free to play?
    Answer: Yes, Clash Royale is free to download and play. However, it does offer in-app purchases that can give you an edge in the game. You can buy gems, gold, chests, cards, and other items with real money. You can also earn these items by playing the game regularly.

    Question: Can I play Clash Royale offline?
    Answer: No, Clash Royale requires an internet connection to play. You need to be online to access the game servers, update your game data, and play with other players. Make sure you have a stable Wi-Fi or mobile data connection before launching the game.

    Question: How do I update Clash Royale on my Chromebook?
    Answer: To update Clash Royale on your Chromebook, you need to go to the Google Play Store app and check for updates. You can do this by opening the app, clicking on Menu > My apps & games > Updates. If you see an update available for Clash Royale, click Update to download and install it. You can also enable auto-updates for all your apps by going to Settings > Auto-update apps in the Google Play Store app.

    Question: How do I uninstall Clash Royale from my Chromebook?
    Answer: To uninstall Clash Royale from your Chromebook, you need to go to the Launcher and find the app icon. Right-click on it and select Uninstall. Alternatively, you can go to Settings > Apps > Google Play Store > Manage Android preferences > Apps & notifications > See all apps. Find Clash Royale in the list and click Uninstall.

    Question: How do I contact Supercell for support or feedback?
    Answer: To contact Supercell for support or feedback, you need to go to Settings > Help & Support in the game. You can browse through various topics and articles that might answer your questions or issues. You can also contact Supercell directly by clicking on Contact Us at the bottom of any article. You can send them a message with your query or feedback, and they will reply as soon as possible.

    -
    -
    \ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/Enjoy Attack on Titan on Your PC with Vita3K An Experimental PS Vita Emulator.md b/spaces/congsaPfin/Manga-OCR/logs/Enjoy Attack on Titan on Your PC with Vita3K An Experimental PS Vita Emulator.md deleted file mode 100644 index 76707869926c0fd4da499e86211f4f4bcd0dbf61..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Enjoy Attack on Titan on Your PC with Vita3K An Experimental PS Vita Emulator.md +++ /dev/null @@ -1,170 +0,0 @@ - -

    How to Download Attack on Titan on Vita3K Emulator

    -

    Are you a fan of Attack on Titan, the epic manga and anime series that depicts a world where humanity is besieged by giant man-eating creatures? Do you want to play the official PS Vita games based on this thrilling story, but don't have a console or a physical copy? If so, you're in luck, because there is a way to enjoy these games on your PC using an emulator called Vita3K.

    -

    download attack on titan vita3k


    Download Zip > https://urlca.com/2uO7Fy



    -

    In this article, we will show you how to download and install Vita3K, how to dump your own PS Vita games legally, how to install them on the emulator, and how to run Attack on Titan smoothly and flawlessly. We will also share some tips and tricks for playing this game on Vita3K and answer some common questions you might have. So, without further ado, let's get started!

    -

    What is Attack on Titan?

    -

    Attack on Titan (Japanese: 進撃の巨人, Hepburn: Shingeki no Kyojin) is a Japanese manga series written and illustrated by Hajime Isayama. It is set in a world where humanity is forced to live in cities surrounded by three enormous walls that protect them from gigantic humanoid Titans that have brought humanity to the brink of extinction. The story follows Eren Yeager, who vows to exterminate the Titans after they destroy his hometown and kill his mother.

    -

    The manga was serialized in Kodansha's monthly magazine Bessatsu Shōnen Magazine from September 2009 to April 2021, with its chapters collected in 34 tankōbon volumes. It has been adapted into an anime television series by Wit Studio (seasons 1–3) and MAPPA (season 4), which has received critical acclaim and commercial success worldwide. The anime has also spawned several spin-offs, novels, video games, and live-action films.

    -

    One of the most popular video games based on Attack on Titan is Attack on Titan 2 (Japanese: 進撃の巨人 2 Hepburn: Shingeki no Kyojin 2), which was released for PS Vita in March 2018. It is an action role-playing game that allows players to create their own character and join the Survey Corps, an elite military unit that fights against the Titans. The game features an original story that follows the events of the anime's second season, as well as new characters, weapons, gameplay mechanics, and online multiplayer modes.

    -

    What is Vita3K?

    -

    Vita3K is an experimental PlayStation Vita emulator for Windows, Linux, macOS and Android. It is an open-source project that aims to emulate the PS Vita hardware and software as accurately as possible. It was launched in January 2018 by developer petmac and has since been improved by many other contributors.

    -

    Vita3K is still in an early stage of development, so it does not support all the PS Vita games and features, and some games may not work properly or at all. However, it is constantly being updated and optimized, and some games are already playable and enjoyable on the emulator. One of these games is Attack on Titan 2, which runs smoothly on Vita3K with minor graphical glitches and sound issues.

    -

    Vita3K is free to download and use, but you will need to provide your own PS Vita games and firmware files, which you can legally obtain from your own console. You will also need a decent PC that meets the minimum system requirements for the emulator, which are:

    -

    -
      -
    • OS: Windows 7 64-bit or later, Linux 64-bit, or macOS 10.13 or later
    • -
    • CPU: Intel Core i5-4430 or AMD FX-6300 or better
    • -
    • RAM: 4 GB or more
    • -
    • GPU: OpenGL 4.1 compatible or better
    • -
    • Storage: At least 10 GB of free space
    • -
    -

    How to Install Vita3K on Your PC

    -

    Installing Vita3K on your PC is very easy and straightforward. Just follow these simple steps:

    -
      -
    1. Go to the official website of Vita3K at https://vita3k.org/ and click on the Download button.
    2. -
    3. Select the version of the emulator that matches your operating system (Windows, Linux, or macOS) and download the zip file.
    4. -
    5. Extract the zip file to a folder of your choice using a program like WinRAR or 7-Zip.
    6. -
    7. Open the folder and double-click on the vita3k.exe file (or vita3k if you are using Linux or macOS) to launch the emulator.
    8. -
    9. You will be greeted by a welcome screen that asks you to install the PS Vita firmware. Click on Install Firmware and browse to the location of your firmware file (which should have a .PUP extension). If you don't have a firmware file, you can download it from https://darthsternie.net/ps-vita-firmwares/. Make sure you choose the latest version (3.73 at the time of writing).
    10. -
    11. Wait for the firmware installation to complete and then click on Continue.
    12. -
    13. Congratulations! You have successfully installed Vita3K on your PC. You can now proceed to install and play your PS Vita games.
    14. -
    -

    How to Dump Your PS Vita Games

    -

    Before you can play your PS Vita games on Vita3K, you will need to dump them from your own console. This is a legal and ethical way to backup your own games and use them on the emulator. There are several methods to dump your PS Vita games, but we will focus on three of them: HENkaku, NoNpDrm, and FAGDec.

    -

    HENkaku

    -

    HENkaku is a homebrew enabler for PS Vita that allows you to run unsigned code and applications on your console. It works on PS Vita firmware 3.60 only, so if you have a higher firmware, you will need to downgrade it first using Modoru. You can find more information about HENkaku and Modoru at https://henkaku.xyz/.

    -

    To dump your PS Vita games using HENkaku, you will need to install a plugin called MaiDumpTool, which can be downloaded from https://github.com/LioMajor/MaiDumpToolEN/releases. You will also need an FTP client like FileZilla or WinSCP to transfer files between your PC and your console.

    -

    Here are the steps to dump your PS Vita games using HENkaku:

    -
      -
    1. Install HENkaku on your PS Vita by following the instructions at https://henkaku.xyz/go/.
    2. -
    3. Install MaiDumpTool on your PS Vita by copying the VPK file to your console using an FTP client and installing it from VitaShell.
    4. -
    5. Insert the game cartridge that you want to dump into your PS Vita or make sure it is installed on your memory card.
    6. -
    7. Launch MaiDumpTool from the home screen and select Dump game.
    8. -
    9. Select the game that you want to dump and choose whether you want to include the patch and DLC files or not.
    10. -
    11. Wait for the dumping process to complete and then exit MaiDumpTool.
    12. -
    13. Connect your PS Vita to your PC using an FTP client and navigate to the ux0:/mai folder and copy the folder that contains the game ID to your PC.
    14. -
    15. Rename the folder to the game ID and compress it into a zip file.
    16. -
    17. You have successfully dumped your PS Vita game using HENkaku. You can now install it on Vita3K using the zip file.
    18. -
    -

    NoNpDrm

    -

    NoNpDrm is a plugin for PS Vita that allows you to bypass the DRM protection of PS Vita games and create fake licenses for them. It works on any PS Vita firmware that supports HENkaku or h-encore, which are homebrew enablers for higher firmware versions. You can find more information about NoNpDrm, HENkaku, and h-encore at https://github.com/TheOfficialFloW/NoNpDrm, https://henkaku.xyz/, and https://github.com/TheOfficialFloW/h-encore respectively.

    -

    To dump your PS Vita games using NoNpDrm, you will need to install the plugin on your console and refresh the live area using VitaShell. You will also need a USB cable or an FTP client to transfer files between your PC and your console.

    -

    Here are the steps to dump your PS Vita games using NoNpDrm:

    -
      -
    1. Install NoNpDrm on your PS Vita by following the instructions at https://github.com/TheOfficialFloW/NoNpDrm/blob/master/README.md.
    2. -
    3. Insert the game cartridge that you want to dump into your PS Vita or make sure it is installed on your memory card.
    4. -
    5. Launch VitaShell from the home screen and press Triangle to open the menu. Select Refresh livearea and wait for the process to finish.
    6. -
    7. Connect your PS Vita to your PC using a USB cable or an FTP client and navigate to the ux0:/app folder. Copy the folder that contains the game ID to your PC (if you prefer to script this FTP copy, see the sketch after this list).
    8. -
    9. Navigate to the ux0:/license/app folder and copy the file that has the same name as the game ID to your PC. Place it inside the game ID folder that you copied earlier.
    10. -
    11. Rename the game ID folder to the game ID and compress it into a zip file.
    12. -
    13. You have successfully dumped your PS Vita game using NoNpDrm. You can now install it on Vita3K using the zip file.
    14. -
    -
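
    If you would rather script the FTP copy from step 7 than click through an FTP client, here is a minimal Python sketch using the standard ftplib module. The host address, port, game ID, and remote path format are placeholder assumptions (VitaShell typically shows the console's FTP address and port when you enable FTP mode); treat it as an illustration of the transfer, not an official tool, and adjust the paths to whatever your FTP client actually shows.

```python
# Illustrative sketch only: copy ux0:/app/<GAME_ID> from the Vita's FTP server
# (started from VitaShell) onto the PC. HOST, PORT and GAME_ID are placeholders.
import os
from ftplib import FTP, error_perm

HOST, PORT = "192.168.1.50", 1337   # placeholder: the address VitaShell displays
GAME_ID = "PCSE01234"               # placeholder: the folder name of your dump

def download_tree(ftp, remote, local):
    """Recursively fetch a remote directory; detects subfolders by trying cwd()."""
    os.makedirs(local, exist_ok=True)
    for entry in ftp.nlst(remote):
        name = entry.rsplit("/", 1)[-1]          # some servers return full paths
        if name in (".", ".."):
            continue
        rpath, lpath = remote + "/" + name, os.path.join(local, name)
        try:
            ftp.cwd(rpath)                       # only succeeds for directories
            download_tree(ftp, rpath, lpath)
        except error_perm:
            with open(lpath, "wb") as out:       # plain file: download it
                ftp.retrbinary("RETR " + rpath, out.write)

ftp = FTP()
ftp.connect(HOST, PORT)
ftp.login()                                      # VitaShell's FTP server typically accepts anonymous login
download_tree(ftp, "/ux0:/app/" + GAME_ID, GAME_ID)
ftp.quit()
```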

    FAGDec

    -

    FAGDec is a tool for PS Vita that allows you to decrypt and extract PS Vita games from PKG files. It works on any PS Vita firmware that supports HENkaku or h-encore, which are homebrew enablers for higher firmware versions. You can find more information about FAGDec, HENkaku, and h-encore at https://github.com/CelesteBlue-dev/PSVita-RE-tools/tree/master/FAGDec, https://henkaku.xyz/, and https://github.com/TheOfficialFloW/h-encore respectively.

    -

    To dump your PS Vita games using FAGDec, you will need to download the PKG file of the game that you want to dump from a legitimate source, such as PlayStation Store or NoPayStation. You will also need to install FAGDec on your console and use it to decrypt and extract the PKG file. You will also need a USB cable or an FTP client to transfer files between your PC and your console.

    -

    Here are the steps to dump your PS Vita games using FAGDec:

    -
      -
    1. Download the PKG file of the game that you want to dump from a legitimate source, such as PlayStation Store or NoPayStation. Make sure you download the correct region and version of the game.
    2. -
    3. Install FAGDec on your PS Vita by following the instructions at https://github.com/CelesteBlue-dev/PSVita-RE-tools/tree/master/FAGDec#installation.
    4. -
    5. Copy the PKG file to your PS Vita using a USB cable or an FTP client and place it in the ux0:/pkgi folder.
    6. -
    7. Launch FAGDec from the home screen and select Decrypt PKG file. Browse to the location of your PKG file and press X to start the decryption process.
    8. -
    9. Wait for the decryption process to complete and then exit FAGDec.
    10. -
    11. Connect your PS Vita to your PC using a USB cable or an FTP client and navigate to the ux0:/app folder and copy the folder that contains the game ID to your PC.
    12. -
    13. Rename the game ID folder to the game ID and compress it into a zip file.
    14. -
    15. You have successfully dumped your PS Vita game using FAGDec. You can now install it on Vita3K using the zip file.
    16. -
    -

    How to Install PS Vita Games on Vita3K

    -

    Installing PS Vita games on Vita3K is very easy and straightforward. Just follow these simple steps:

    -
      -
    1. Open Vita3K and click on File > Install .zip/.vpk. Browse to the location of your zip file that contains the game and select it.
    2. -
    3. Wait for the installation process to complete and then click on Refresh game list.
    4. -
    5. You will see your game appear on the emulator's home screen. You can also check the game's information by right-clicking on it and selecting Game info.
    6. -
    7. If you want to install more games, repeat the same steps for each zip file.
    8. -
    9. You have successfully installed your PS Vita games on Vita3K. You can now proceed to run them on the emulator.
    10. -
    -

    How to Run Attack on Titan on Vita3K

    -

    Running Attack on Titan on Vita3K is very simple and fun. Just follow these simple steps:

    -
      -
    1. Open Vita3K and double-click on Attack on Titan 2 from the home screen to launch the game.
    2. -
    3. You will see a loading screen followed by a disclaimer and a title screen. Press X to start the game.
    4. -
    5. You will be asked to create a new save data or load an existing one. Choose whichever option you prefer and press X to confirm.
    6. -
    7. You will be taken to the main menu, where you can choose from various options, such as Story Mode, Another Mode, Gallery, Options, and Online Mode. Select Story Mode to play the original story of the anime's second season, or Another Mode to play with your own custom character and interact with other characters from the series.
    8. -
    9. You will be able to customize your character's appearance, name, voice, and equipment before starting the game. You can also change these settings later from the Camp menu.
    10. -
    11. Once you are ready, press X to start the game and enjoy the thrilling action of fighting against the Titans with your 3D Maneuver Gear.
    12. -
    -

    Tips and Tricks for Playing Attack on Titan on Vita3K

    -

    Playing Attack on Titan on Vita3K is a great way to experience this amazing game on your PC, but there are some things you should know before you start. Here are some tips and tricks that will help you avoid common issues, improve compatibility, and enhance your gaming experience:

    -
      -
    • Make sure you have the latest version of Vita3K installed on your PC. You can check for updates from the emulator's menu by clicking on Help > Check for updates.
    • -
    • Make sure you have the latest version of Attack on Titan 2 installed on your console. You can check for updates from the PS Vita's live area by pressing Triangle and selecting Check for update.
    • -
    • If you encounter any graphical glitches or sound issues while playing the game, try changing the settings from the emulator's menu by clicking on Configuration > Settings. You can adjust the resolution scale, frame rate limit, backend renderer, audio device, and more.
    • -
    • If you want to play online with other players, you will need to create a PSN account and link it to your PS Vita. You can do this from the PS Vita's settings menu by selecting PlayStation Network > Sign Up. You will also need to enable online mode from the emulator's menu by clicking on Configuration > Settings > Network > Enable Online Mode.
    • -
    • If you want to use a controller instead of a keyboard and mouse, you will need to configure it from the emulator's menu by clicking on Configuration > Controls. You can map any button or axis to any PS Vita input, as well as adjust the sensitivity and deadzone.
    • -
    • If you want to save your progress or load a previous state, you can do so from the emulator's menu by clicking on File > Save state or Load state. You can also use hotkeys (F5-F8) for quick saving and loading.
    • -
    -

    Conclusion

    -

    In this article, we have shown you how to download and install Vita3K, how to dump your own PS Vita games legally, how to install them on the emulator, and how to run Attack on Titan smoothly and flawlessly. We have also shared some tips and tricks for playing this game on Vita3K and answered some common questions you might have. We hope you have found this article helpful and informative, and that you will enjoy playing Attack on Titan on your PC using Vita3K.

    -

    If you have any feedback, suggestions, or queries, please feel free to leave a comment below or contact us through our website. We would love to hear from you and help you with any issues you might encounter. Thank you for reading and happy gaming!

    -

    FAQs

    -

    Here are some frequently asked questions and answers related to the topic of this article:

    -

    Q: Is Vita3K legal?

    -

    A: Yes, Vita3K is legal as long as you use it to play your own PS Vita games that you have legally obtained from your own console. You should not use it to play pirated or downloaded games that you do not own, as that would be illegal and unethical.

    -

    Q: Is Vita3K safe?

    -

    A: Yes, Vita3K is safe as long as you download it from the official website or the GitHub repository. You should not download it from any other sources, as they might contain viruses or malware that could harm your PC.

    -

    Q: Is Vita3K compatible with all PS Vita games?

    -

    A: No, Vita3K is not compatible with all PS Vita games, as it is still in an early stage of development and some games may not work properly or at all. You can check the compatibility list at https://vita3k.org/compatibility.html to see which games are playable, ingame, loadable, or nothing.

    -

    Q: How can I improve the performance of Vita3K?

    -

    A: You can improve the performance of Vita3K by adjusting the settings from the emulator's menu by clicking on Configuration > Settings. You can lower the resolution scale, frame rate limit, or backend renderer to reduce the load on your GPU. You can also disable some features like shaders or audio to reduce the load on your CPU.

    -

    Q: How can I support the development of Vita3K?

    -

    A: You can support the development of Vita3K by donating to the project at https://vita3k.org/donate.html. You can also contribute to the project by reporting bugs, suggesting features, testing games, or submitting code at https://github.com/Vita3K/Vita3K.

    -
    -
    \ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/How to Download Stickman Hook Mod APK and Unlock All the Features.md b/spaces/congsaPfin/Manga-OCR/logs/How to Download Stickman Hook Mod APK and Unlock All the Features.md deleted file mode 100644 index d2466be9ddc14ac1bbe171e48f390e36169afc17..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/How to Download Stickman Hook Mod APK and Unlock All the Features.md +++ /dev/null @@ -1,89 +0,0 @@ -
    -

    Download Game Stickman Hook Mod Apk

    -

    If you are looking for a fun and addictive casual game that will keep you entertained for hours, then you should try Stickman Hook. This is a game where you control a stickman who can swing from one platform to another using ropes. You will have to use your timing and reflexes to avoid obstacles and reach the finish line. In this article, we will tell you what is Stickman Hook, how to play it, why you should download the mod apk version, how to download and install it, and what benefits you will get from it.

    -

    download game stickman hook mod apk


    Download ⚹⚹⚹ https://urlca.com/2uOddd



    -

    What is Stickman Hook?

    -

    Stickman Hook is a game developed by Madbox, a French studio that specializes in casual games. The game was released in 2018 and has been downloaded over 100 million times on Google Play Store. It has also received positive reviews from users and critics alike, who praised its simple yet challenging gameplay, colorful graphics, and smooth animations.

    -

    Stickman Hook is a game where you control a stickman who can swing from one platform to another using ropes. You will have to tap the screen at the right moment to release the rope and grab another one. You will also have to avoid obstacles such as spikes, saws, and lasers that can harm your stickman. The game has hundreds of levels with different themes and difficulties. You can also unlock various skins for your stickman, such as animals, superheroes, and more.

    -

    Features of Stickman Hook

    -

    Stickman Hook is a game that offers many features that make it fun and enjoyable. Here are some of them:

    -

    -

    How to play Stickman Hook

    -
      -
    • The gameplay of Stickman Hook is very simple and intuitive. You just have to tap the screen to release the rope and grab another one. You have to time your taps carefully to avoid falling or hitting obstacles.
    • -
    • The game has a physics-based engine that makes the movements of your stickman realistic and fluid. You can also perform flips and tricks in the air to earn more points.
    • -
    • The game has hundreds of levels with different themes and difficulties. You will never get bored as each level offers a new challenge and a different environment.
    • -
    -

    Why you should download Stickman Hook mod apk

    -
      -
    • If you want to enjoy the game without any limitations or interruptions, then you should download the mod apk version of Stickman Hook. This is a modified version of the game that gives you access to all the features and benefits that are not available in the original version.
    • -
    • The mod apk version of Stickman Hook will allow you to unlock all the skins for your stickman, remove all the ads that can be annoying and distracting, and enjoy unlimited fun without any restrictions.
    • -
    • The mod apk version of Stickman Hook is also safe and easy to download and install. You don't need to root your device or pay any money to get it.
    • -
    -

    How to download and install Stickman Hook mod apk

    -

    If you are interested in downloading and installing Stickman Hook mod apk, then you should follow these steps:

    -

    Requirements for Stickman Hook mod apk

    -
      -
    • You need an Android device with Android 4.4 or higher.
    • -
    • You need enough storage space on your device to download and install the mod apk file.
    • -
    • You need to enable unknown sources on your device settings. This will allow you to install apps from sources other than Google Play Store.
    • -
    -

    Steps to download and install Stickman Hook mod apk

    -
      -
    1. Click on this link to download the mod apk file of Stickman Hook.
    2. Save the mod apk file on your device storage.
    3. Locate the mod apk file on your device and tap on it to start the installation process (or install it from a PC, as shown in the sketch after this list).
    4. Follow the instructions on the screen to complete the installation.
    5. Launch the game and enjoy the mod features.
    -

    Benefits of Stickman Hook mod apk

    -

    By downloading and installing Stickman Hook mod apk, you will get many benefits that will enhance your gaming experience. Here are some of them:

    -

    Unlock all skins

    -

    One of the benefits of Stickman Hook mod apk is that you will be able to unlock all the skins for your stickman. You can choose from a variety of skins, such as animals, superheroes, and more. You can also customize your stickman's appearance by changing its color, size, and shape. You can have fun with different looks and styles for your stickman.

    -

    Remove ads

    -

    Another benefit of Stickman Hook mod apk is that you will be able to remove all the ads that can be annoying and distracting. You will not have to watch any ads before or after each level. You will also not have to deal with any pop-ups or banners that can interfere with your gameplay. You will be able to enjoy the game without any interruptions or delays.

    -

    Enjoy unlimited fun

    -

    The last benefit of Stickman Hook mod apk is that you will be able to enjoy unlimited fun without any restrictions. You will not have to worry about running out of lives, coins, or energy. You will also not have to wait for any timers or cooldowns. You will be able to play the game as much as you want and whenever you want. You will be able to explore all the levels and challenges that the game has to offer.

    -

    Conclusion

    -

    Stickman Hook is a game that will keep you entertained for hours with its simple yet challenging gameplay, colorful graphics, and smooth animations. You will have to control a stickman who can swing from one platform to another using ropes. You will have to avoid obstacles and reach the finish line. You can also unlock various skins for your stickman and perform flips and tricks in the air.

    -

    If you want to enjoy the game without any limitations or interruptions, then you should download the mod apk version of Stickman Hook. This is a modified version of the game that gives you access to all the features and benefits that are not available in the original version. You will be able to unlock all the skins, remove all the ads, and enjoy unlimited fun.

    -

    To download and install Stickman Hook mod apk, you just have to follow some simple steps. You need an Android device with Android 4.4 or higher, enough storage space, and unknown sources enabled. You need to click on this link to download the mod apk file, save it on your device, locate it and tap on it, follow the instructions, and launch the game.

    -

    We hope that this article has helped you learn more about Stickman Hook and how to download and install its mod apk version. If you have any questions or feedback, please feel free to leave a comment below. Thank you for reading and happy gaming!

    -

    FAQs

    -
      -
    • Is Stickman Hook mod apk safe?
    • -

      Yes, Stickman Hook mod apk is safe to download and install. It does not contain any viruses or malware that can harm your device or data. It also does not require any root access or permissions that can compromise your privacy or security.

      -
    • Is Stickman Hook mod apk free?
    • -

      Yes, Stickman Hook mod apk is free to download and install. You don't need to pay any money or make any in-app purchases to get it. You also don't need to register or sign up for anything to use it.

      -
    • What are the differences between Stickman Hook mod apk and original version?
    • -

      The main differences between Stickman Hook mod apk and original version are that the mod apk version gives you access to all the features and benefits that are not available in the original version. These include unlocking all the skins, removing all the ads, and enjoying unlimited fun.

      -
    • How can I update Stickman Hook mod apk?
    • -

      To update Stickman Hook mod apk, you need to follow the same steps as downloading and installing it. You need to click on this link to download the latest version of the mod apk file, save it on your device, locate it and tap on it, follow the instructions, and launch the game.

      -
    • How can I contact the developer of Stickman Hook mod apk?
    • -

      If you have any questions or feedback about Stickman Hook mod apk, you can contact the developer of the game by sending an email to support@madboxgames.io. You can also visit their website or follow them on their social media accounts.

      -

    197e85843d
    -
    -
    \ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/How to Survive the Zombie Apocalypse with Zombie Evil Horror 4 MOD APK.md b/spaces/congsaPfin/Manga-OCR/logs/How to Survive the Zombie Apocalypse with Zombie Evil Horror 4 MOD APK.md deleted file mode 100644 index b577c36ea69855adb571c9f37f52d7429c06b01c..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/How to Survive the Zombie Apocalypse with Zombie Evil Horror 4 MOD APK.md +++ /dev/null @@ -1,69 +0,0 @@ - -

    Zombie Evil Horror 4 Mod Apk: A Thrilling Horror Game for Android

    -

    If you are a fan of horror games, you might have heard of zombie evil horror 4 mod apk. This is a modified version of the original game that gives you unlimited money, weapons, and other features to make your gaming experience more exciting and fun. In this article, we will tell you everything you need to know about zombie evil horror 4 mod apk, including its features, how to download and install it, and some tips and tricks for playing it.

    -

    zombie evil horror 4 mod apk


    Download Zip ››› https://urlca.com/2uO9JW



    -

    What is Zombie Evil Horror 4 Mod Apk?

    -

    Zombie Evil Horror 4 Mod Apk is a casual action game developed by MaxOwe Games. It is the fourth installment of the Zombie Evil Horror series, which has been downloaded by millions of players around the world. The game is set in a post-apocalyptic world where zombies have taken over. You are one of the survivors who has to fight your way through various environments and challenges, while uncovering the secrets and mysteries behind the zombie outbreak.

    -

    Why is Zombie Evil Horror 4 Mod Apk Popular Among Horror Game Fans?

    -

    Zombie Evil Horror 4 Mod Apk is popular among horror game fans because it offers a lot of features that make it stand out from other zombie games. Some of these features are:

    -

    High-quality Graphics and Sound Effects

    -

    The game has high-quality graphics and sound effects that create a realistic and immersive atmosphere. You will feel the tension and fear as you face hordes of zombies in dark and creepy places. The game also has different weather effects, such as rain, fog, and thunder, that add to the mood.

    -

    Various Environments and Challenges

    -

    The game has various environments and challenges that will keep you entertained and engaged. You will teleport to different locations, such as forests, caves, hospitals, cities, and more. Each location has its own obstacles, enemies, and secrets. You will also face different types of zombies, such as fast zombies, slow zombies, giant zombies, boss zombies, and more.

    -

    zombie evil horror 4 mod apk download
    -zombie evil horror 4 mod apk unlimited money
    -zombie evil horror 4 mod apk android 1
    -zombie evil horror 4 mod apk latest version
    -zombie evil horror 4 mod apk offline
    -zombie evil horror 4 mod apk free
    -zombie evil horror 4 mod apk hack
    -zombie evil horror 4 mod apk revdl
    -zombie evil horror 4 mod apk rexdl
    -zombie evil horror 4 mod apk obb
    -zombie evil horror 5 apk (android game) - free download[^2^]
    -zombie evil horror 5 apk latest version[^2^]
    -zombie evil horror 5 apk download & install[^3^]
    -zombie evil horror 5 apk for android tv & tablet[^3^]
    -zombie evil horror 5 apk for pc windows[^3^]
    -zombie evil horror 3 apk (game) - free download[^1^]
    -zombie evil horror 3 apk latest version[^1^]
    -zombie evil horror 3 apk for android 4.4, 4.3, 4.2, 4.1[^1^]
    -zombie evil horror 3 apk for android 5, 6, 7, 8, 9, 10, 11, 12[^1^]
    -zombie evil horror game series by maxowe games[^1^]
    -maxowe games - developer of zombie evil horror games[^1^]
    -maxowe games - blogspot site for zombie evil horror games[^1^]
    -maxowe games - casual genre of zombie evil horror games[^1^]
    -maxowe games - google play id of zombie evil horror games[^1^]
    -maxowe games - installs of zombie evil horror games[^1^]
    -how to play zombie evil horror games on windows pc
    -how to play zombie evil horror games offline
    -how to play zombie evil horror games with friends
    -how to play zombie evil horror games with controller
    -how to play zombie evil horror games on android tv
    -how to update zombie evil horror games to latest version
    -how to hack zombie evil horror games with cheat engine
    -how to get unlimited money in zombie evil horror games
    -how to unlock all levels in zombie evil horror games
    -how to solve puzzles in zombie evil horror games
    -tips and tricks for playing zombie evil horror games
    -best weapons and items in zombie evil horror games
    -best strategies and tactics in zombie evil horror games
    -best graphics and sound settings in zombie evil horror games
    -best reviews and ratings of zombie evil horror games

    -

    Unlimited Money and Weapons

    -

    The game gives you unlimited money and weapons that you can use to upgrade your skills and equipment. You can buy new guns, ammo, grenades, health kits, armor, and more. You can also unlock new weapons by completing missions and achievements. You will have access to a variety of weapons, such as pistols, shotguns, rifles, snipers, machine guns, rocket launchers, flamethrowers, chainsaws, and more.

    -

    Easy Controls and Gameplay

    -

    The game has easy controls and gameplay that make it suitable for anyone who loves shooting games. You can control your character with simple touch gestures on your screen. You can move around by dragging your finger on the left side of the screen. You can aim and shoot by tapping on the right side of the screen. You can also switch weapons by swiping on the bottom of the screen.

    -

    How to Download and Install Zombie Evil Horror 4 Mod Apk

    -

    If you want to download and install Zombie Evil Horror 4 Mod Apk, you can follow the same general steps as for any mod apk file: download the file from a trusted source, enable unknown sources in your device settings, locate and tap the file to install it, and launch the game.

    -

    What are the differences between Zombie Evil Horror 4 Mod Apk and Zombie Evil Horror 5?

    Zombie Evil Horror 4 Mod Apk and Zombie Evil Horror 5 are both part of the Zombie Evil Horror series, but they have some differences. Zombie Evil Horror 4 Mod Apk is the fourth installment of the series, while Zombie Evil Horror 5 is the fifth and latest installment. Zombie Evil Horror 4 Mod Apk has more locations, weapons, and modes than Zombie Evil Horror 5, but Zombie Evil Horror 5 has better graphics, animations, and physics than Zombie Evil Horror 4 Mod Apk. Both games are fun and challenging, but you can choose the one that suits your preferences better.

    -

    Is zombie evil horror 4 mod apk safe to use?

    -

    Zombie Evil Horror 4 Mod Apk is safe to use as long as you download and install it from a trusted source and follow the requirements and precautions mentioned above. However, you should be aware that mod apk files can be risky and may harm your device or data. You should only use mod apk files at your own risk and responsibility.

    -

    How can I update zombie evil horror 4 mod apk?

    -

    To update zombie evil horror 4 mod apk, you need to check if there is a new version available on the source that you downloaded it from. If there is, you need to download and install the new version over the old one. You should also backup your game data before updating to avoid losing your progress.

    -
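      As a rough sketch of what "install the new version over the old one" looks like when sideloading from a computer: adb's -r flag reinstalls a package in place while keeping its data, which is one way to preserve existing progress. The file name below is a placeholder, and the snippet assumes adb and USB debugging are available.

```python
import subprocess

NEW_APK = "zombie_evil_horror_4_mod_new.apk"  # placeholder name for the newly downloaded version

# -r reinstalls the app over the existing one, keeping its app data (saved progress) intact.
subprocess.run(["adb", "install", "-r", NEW_APK], check=True)
```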

    Can I play zombie evil horror 4 mod apk offline?

    -

    Yes, you can play zombie evil horror 4 mod apk offline without an internet connection. However, you may not be able to access some features or modes that require an internet connection, such as online multiplayer or leaderboards.

    -

    Can I play zombie evil horror 4 mod apk with friends?

    -

    Yes, you can play zombie evil horror 4 mod apk with friends online or locally. You can join or create a room with your friends and play together in co-op or versus mode. You can also chat with your friends and share your scores and achievements.

    401be4b1e0
    -
    -
    \ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/Tekken 3 APK Experience the Classic Fighting Game with All Players on Android.md b/spaces/congsaPfin/Manga-OCR/logs/Tekken 3 APK Experience the Classic Fighting Game with All Players on Android.md deleted file mode 100644 index 61a436d055a3f4f62b2253c70e691e46e7d54ec9..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Tekken 3 APK Experience the Classic Fighting Game with All Players on Android.md +++ /dev/null @@ -1,102 +0,0 @@ - -

    Tekken 3 APK All Players: How to Download and Play the Classic Fighting Game on Your Android Device

    -

    Introduction

    -

    If you are a fan of fighting games, you must have heard of Tekken 3, one of the most iconic and influential games in the genre. Tekken 3 is a 3D fighting game that was released in 1997 for arcades and in 1998 for PlayStation. It is widely considered as one of the best games of all time, and has sold over 8 million copies worldwide.

    -

    But what if you want to play Tekken 3 on your Android device? Is it possible to enjoy this classic game on your smartphone or tablet? The answer is yes, thanks to Tekken 3 APK, a modified version of the game that allows you to play it on your Android device with all players unlocked. In this article, we will show you how to download and install Tekken 3 APK on your device, and how to play it with all players unlocked.

    -

    tekken 3 apk all players


    Download Filehttps://urlca.com/2uObJX



    -

    What is Tekken 3?

    -

    Tekken 3 is the third installment in the Tekken series, a fighting game franchise created by Namco. The game features a new story mode that follows the events of Tekken 2, where a mysterious fighter named Ogre has killed many martial artists around the world. The game also introduces a new generation of characters, such as Jin Kazama, Ling Xiaoyu, Hwoarang, Eddy Gordo, and more.

    -

    Why is Tekken 3 popular?

    -

    Tekken 3 is popular for many reasons, such as:

    -
      -
    • It has a deep and balanced gameplay system that offers a variety of moves, combos, counters, and throws for each character.
    • -
    • It has a stunning graphics engine that showcases realistic animations, lighting effects, and backgrounds.
    • -
    • It has a catchy soundtrack that matches the mood and atmosphere of each stage.
    • -
    • It has a diverse and memorable roster of characters, each with their own personality, fighting style, and story.
    • -
    • It has several game modes, such as arcade mode, versus mode, team battle mode, survival mode, practice mode, and more.
    • -
    -

    What are the features of Tekken 3 APK?

    -

    Tekken 3 APK is a modified version of the original game that allows you to play it on your Android device. Some of the features of Tekken 3 APK are:

    -

    tekken 3 mobile version apk download
    -tekken 3 for android all characters unlocked
    -tekken 3 game free download for android mobile apk
    -tekken 3 apk mod with all players
    -tekken 3 full game apk for android
    -tekken 3 android apk latest version
    -tekken 3 apk offline mode with all fighters
    -tekken 3 apk + obb file download for android
    -tekken 3 apk cheats and tricks
    -tekken 3 apk best settings for android
    -tekken 3 apk highly compressed download
    -tekken 3 apk unlimited money and gems
    -tekken 3 apk online multiplayer mode
    -tekken 3 apk no root required
    -tekken 3 apk original game from bandai namco
    -tekken 3 apk how to unlock all characters
    -tekken 3 apk gameplay and review
    -tekken 3 apk compatible with all android devices
    -tekken 3 apk new features and updates
    -tekken 3 apk tips and guides for beginners
    -tekken 3 apk hidden characters and modes
    -tekken 3 apk best combos and moves
    -tekken 3 apk ranking system and trophies
    -tekken 3 apk soundtracks and themes
    -tekken 3 apk graphics and performance optimization
    -tekken 3 apk controller support and customization
    -tekken 3 apk save data and backup
    -tekken 3 apk installation and troubleshooting
    -tekken 3 apk comparison with other tekken games
    -tekken 3 apk fan-made mods and patches
    -tekken 3 apk history and development
    -tekken 3 apk storyline and characters bios
    -tekken 3 apk fun facts and trivia
    -tekken 3 apk wallpapers and screenshots
    -tekken 3 apk videos and tutorials
    -tekken 3 apk forums and communities
    -tekken 3 apk ratings and feedbacks
    -tekken 3 apk alternatives and similar games
    -tekken 3 apk legal and safe download sources
    -tekken 3 apk frequently asked questions and answers

    -
      -
    • It has all players unlocked, meaning you can choose any character from the start without having to unlock them.
    • -
    • It has cheats enabled, meaning you can activate various cheats such as infinite health, infinite time, one-hit kill, and more.
    • -
    • It has all modes unlocked, meaning you can access any game mode without having to complete certain requirements.
    • -
    • It has the awesome graphics of the original Tekken 3 game, optimized for mobile devices.
    • -
    • It has an easy user interface that lets you control the game with touch screen buttons or external controllers.
    • -
    -

    How to Download and Install Tekken 3 APK on Your Android Device

    -

    To download and install Tekken 3 APK on your Android device, you need to follow these steps:

    -

    Step 1: Download the Tekken 3 APK file from a trusted source

    -

    You can download the Tekken 3 APK file from a trusted source, such as [this link]. The file size is about 35 MB, so make sure you have enough storage space on your device. You can also scan the file with an antivirus app to ensure it is safe and virus-free.

    -

    Step 2: Enable unknown sources on your device settings

    -

    Before you can install the Tekken 3 APK file on your device, you need to enable unknown sources on your device settings. This will allow you to install apps from sources other than the Google Play Store. To do this, go to your device settings, then security, then toggle on the unknown sources option. You may see a warning message, but you can ignore it and proceed.

    -
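    On Android 7.x and earlier, the unknown sources toggle is a single global secure setting that can also be read over adb; from Android 8.0 onward the permission is granted per installing app instead, so the value below may be empty on newer devices. This is only an optional sketch of how to confirm the toggle from a computer, assuming adb is available.

```python
import subprocess

# Read the global "unknown sources" flag (meaningful on Android 7.x and earlier).
# "1" means installs from outside the Play Store are allowed; "0" or "null" means they are blocked.
result = subprocess.run(
    ["adb", "shell", "settings", "get", "secure", "install_non_market_apps"],
    capture_output=True, text=True, check=True,
)
print("unknown sources enabled:", result.stdout.strip() == "1")
```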

    Step 3: Install the Tekken 3 APK file on your device

    -

    Now that you have downloaded the Tekken 3 APK file and enabled unknown sources, you can install the file on your device. To do this, locate the file in your downloads folder or wherever you saved it, and tap on it. You will see a pop-up window asking you to confirm the installation. Tap on install and wait for the process to finish. You may see some permissions requests, but you can grant them and continue.

    -
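    If you sideloaded the file with adb rather than tapping it on the device, you can confirm that the installation succeeded by filtering the installed package list. The filter string below is only a guess for illustration; the game's real package name may differ.

```python
import subprocess

# List installed packages whose name contains "tekken" (the filter string is illustrative).
result = subprocess.run(
    ["adb", "shell", "pm", "list", "packages", "tekken"],
    capture_output=True, text=True, check=True,
)
print(result.stdout.strip() or "no matching package found")
```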

    How to Play Tekken 3 APK with All Players Unlocked

    -

    After you have installed the Tekken 3 APK file on your device, you can launch the game and start playing with all players unlocked. To do this, follow these steps:

    -

    Step 1: Launch the game and select your language

    -

    When you launch the game, you will see a splash screen with the Tekken 3 logo and some options. Tap on the language option and select your preferred language from the list. The game supports English, French, German, Italian, Spanish, Portuguese, Japanese, Korean, and Chinese languages.

    -

    Step 2: Choose your game mode and difficulty level

    -

    After selecting your language, you will see the main menu of the game with several game modes to choose from. You can choose from arcade mode, versus mode, team battle mode, survival mode, practice mode, and more. You can also adjust the difficulty level of the game from easy to hard. Tap on the game mode you want to play and then tap on start.

    -

    Step 3: Select your character from the roster of 23 fighters

    -

    Once you have chosen your game mode and difficulty level, you will see the character selection screen with a roster of 23 fighters to choose from. You can scroll through the characters by swiping left or right on the screen. You can also tap on the character's portrait to see their name, fighting style, nationality, age, height, weight, blood type, hobby, likes, dislikes, and more. You can choose any character you want without having to unlock them first. Tap on the character you want to play as and then tap on ok.

    -

    Step 4: Enjoy the thrilling combat and stunning graphics of Tekken 3

    -

    Now that you have selected your character, you are ready to play Tekken 3 APK with all players unlocked. You will see a loading screen with some tips and tricks for playing the game. Then you will see the stage where you will fight against your opponent. You can control your character with touch screen buttons or external controllers. You can perform various moves, combos, counters, and throws by using different combinations of buttons. You can also activate cheats by tapping on the cheat icon on the top right corner of the screen. You can enjoy the thrilling combat and stunning graphics of Tekken 3 as you defeat your opponents and advance through the game.

    -

    Conclusion

    -

    Tekken 3 APK is a modified version of Tekken 3 that allows you to play it on your Android device with all players unlocked. It is a great way to enjoy this classic fighting game on your smartphone or tablet without any hassle. You can download and install Tekken 3 APK on your device by following the steps we have shown in this article. You can also play Tekken 3 APK with all players unlocked by following the steps we have shown in this article. We hope this article has helped you learn how to download and play Tekken 3 APK with all players unlocked.

    -

    FAQs

    -

    Here are some frequently asked questions about Tekken 3 APK:

    -
      -
    1. Is Tekken 3 APK legal?
    2. -

      Tekken 3 APK is not an official version of Tekken 3 by Namco. It is a modified version of the game created by third-party developers, so downloading and using it without the permission of the original developer is not legal. If you already own a copy of Tekken 3 for PlayStation, you may be able to use Tekken 3 APK as a backup or for emulation, but this depends on your local laws and regulations, so please check them before downloading and using Tekken 3 APK.

      -
    3. Is Tekken 3 APK safe?
    4. -

      Tekken 3 APK is not an official version of Tekken 3 by Namco. It is a modified version of the game created by third-party developers, so it may not be safe to download from unknown or untrusted sources. Some sources may contain malware, viruses, or spyware that can harm your device or steal your personal information. For that reason, we recommend downloading Tekken 3 APK from a trusted source, such as [this link]. You can also scan the file with an antivirus app to confirm it is safe and virus-free.

      -
    5. Does Tekken 3 APK require an internet connection?
    6. -

      No, Tekken 3 APK does not require an internet connection to play. You can play Tekken 3 APK offline without any problem. However, you may need an internet connection to download and install Tekken 3 APK on your device. You may also need an internet connection to access some online features of the game, such as leaderboards, achievements, or multiplayer modes.

      -
    7. Can I play Tekken 3 APK with my friends?
    8. -

      Yes, you can play Tekken 3 APK with your friends. You can play Tekken 3 APK in versus mode or team battle mode with your friends on the same device or on different devices. To play on the same device, you need to use a split-screen mode that divides the screen into two parts. To play on different devices, you need to use a Bluetooth or Wi-Fi connection that connects your devices. You can also play Tekken 3 APK online with your friends or other players around the world by using an online multiplayer mode that requires an internet connection.

      -
    9. Can I customize my character in Tekken 3 APK?
    10. -

      No, you cannot customize your character in Tekken 3 APK. You can only choose from the existing characters in the game, each with their own appearance, outfit, and accessories. You cannot change or modify any aspect of your character's appearance, such as hair color, skin tone, clothing style, or accessories. However, you can change your character's name and icon by using the options menu in the game.

      -

    401be4b1e0
    -
    -
    \ No newline at end of file diff --git a/spaces/contluForse/HuggingGPT/assets/1stStudioSiberianMouseMashaAndVeronikaBabkoHardAvi64 !FREE!.md b/spaces/contluForse/HuggingGPT/assets/1stStudioSiberianMouseMashaAndVeronikaBabkoHardAvi64 !FREE!.md deleted file mode 100644 index a4957c1498dc94f2af7c5b52fb37a77f6db07001..0000000000000000000000000000000000000000 --- a/spaces/contluForse/HuggingGPT/assets/1stStudioSiberianMouseMashaAndVeronikaBabkoHardAvi64 !FREE!.md +++ /dev/null @@ -1,6 +0,0 @@ -

    -
    -
    -

    diff --git a/spaces/cooelf/Multimodal-CoT/timm/models/byoanet.py b/spaces/cooelf/Multimodal-CoT/timm/models/byoanet.py deleted file mode 100644 index 73c6811b9ce77aad8e11190e6a2d7599b1bb5c23..0000000000000000000000000000000000000000 --- a/spaces/cooelf/Multimodal-CoT/timm/models/byoanet.py +++ /dev/null @@ -1,437 +0,0 @@ -""" Bring-Your-Own-Attention Network - -A flexible network w/ dataclass based config for stacking NN blocks including -self-attention (or similar) layers. - -Currently used to implement experimential variants of: - * Bottleneck Transformers - * Lambda ResNets - * HaloNets - -Consider all of the models definitions here as experimental WIP and likely to change. - -Hacked together by / copyright Ross Wightman, 2021. -""" -from timm.data import IMAGENET_DEFAULT_MEAN, IMAGENET_DEFAULT_STD -from .byobnet import ByoBlockCfg, ByoModelCfg, ByobNet, interleave_blocks -from .helpers import build_model_with_cfg -from .registry import register_model - -__all__ = [] - - -def _cfg(url='', **kwargs): - return { - 'url': url, 'num_classes': 1000, 'input_size': (3, 224, 224), 'pool_size': (7, 7), - 'crop_pct': 0.875, 'interpolation': 'bicubic', - 'mean': IMAGENET_DEFAULT_MEAN, 'std': IMAGENET_DEFAULT_STD, - 'first_conv': 'stem.conv1.conv', 'classifier': 'head.fc', - 'fixed_input_size': False, 'min_input_size': (3, 224, 224), - **kwargs - } - - -default_cfgs = { - # GPU-Efficient (ResNet) weights - 'botnet26t_256': _cfg(url='', fixed_input_size=True, input_size=(3, 256, 256), pool_size=(8, 8)), - 'botnet50ts_256': _cfg(url='', fixed_input_size=True, input_size=(3, 256, 256), pool_size=(8, 8)), - 'eca_botnext26ts_256': _cfg(url='', fixed_input_size=True, input_size=(3, 256, 256), pool_size=(8, 8)), - - 'halonet_h1': _cfg(url='', input_size=(3, 256, 256), pool_size=(8, 8), min_input_size=(3, 256, 256)), - 'halonet_h1_c4c5': _cfg(url='', input_size=(3, 256, 256), pool_size=(8, 8), min_input_size=(3, 256, 256)), - 'halonet26t': _cfg(url='', input_size=(3, 256, 256), pool_size=(8, 8), min_input_size=(3, 256, 256)), - 'halonet50ts': _cfg(url='', input_size=(3, 256, 256), pool_size=(8, 8), min_input_size=(3, 256, 256)), - 'eca_halonext26ts': _cfg(url='', input_size=(3, 256, 256), pool_size=(8, 8), min_input_size=(3, 256, 256)), - - 'lambda_resnet26t': _cfg(url='', min_input_size=(3, 128, 128), input_size=(3, 256, 256), pool_size=(8, 8)), - 'lambda_resnet50t': _cfg(url='', min_input_size=(3, 128, 128)), - 'eca_lambda_resnext26ts': _cfg(url='', min_input_size=(3, 128, 128), input_size=(3, 256, 256), pool_size=(8, 8)), - - 'swinnet26t_256': _cfg(url='', fixed_input_size=True, input_size=(3, 256, 256), pool_size=(8, 8)), - 'swinnet50ts_256': _cfg(url='', fixed_input_size=True, input_size=(3, 256, 256), pool_size=(8, 8)), - 'eca_swinnext26ts_256': _cfg(url='', fixed_input_size=True, input_size=(3, 256, 256), pool_size=(8, 8)), - - 'rednet26t': _cfg(url='', input_size=(3, 256, 256), pool_size=(8, 8)), - 'rednet50ts': _cfg(url='', input_size=(3, 256, 256), pool_size=(8, 8)), -} - - -model_cfgs = dict( - - botnet26t=ByoModelCfg( - blocks=( - ByoBlockCfg(type='bottle', d=3, c=256, s=1, gs=0, br=0.25), - ByoBlockCfg(type='bottle', d=4, c=512, s=2, gs=0, br=0.25), - interleave_blocks(types=('bottle', 'self_attn'), every=1, d=2, c=1024, s=2, gs=0, br=0.25), - ByoBlockCfg(type='self_attn', d=3, c=2048, s=2, gs=0, br=0.25), - ), - stem_chs=64, - stem_type='tiered', - stem_pool='maxpool', - num_features=0, - fixed_input_size=True, - self_attn_layer='bottleneck', - self_attn_kwargs=dict() - ), - 
botnet50ts=ByoModelCfg( - blocks=( - ByoBlockCfg(type='bottle', d=3, c=256, s=2, gs=0, br=0.25), - ByoBlockCfg(type='bottle', d=4, c=512, s=2, gs=0, br=0.25), - interleave_blocks(types=('bottle', 'self_attn'), every=1, d=6, c=1024, s=2, gs=0, br=0.25), - ByoBlockCfg(type='self_attn', d=3, c=2048, s=1, gs=0, br=0.25), - ), - stem_chs=64, - stem_type='tiered', - stem_pool='', - num_features=0, - fixed_input_size=True, - act_layer='silu', - self_attn_layer='bottleneck', - self_attn_kwargs=dict() - ), - eca_botnext26ts=ByoModelCfg( - blocks=( - ByoBlockCfg(type='bottle', d=3, c=256, s=1, gs=16, br=0.25), - ByoBlockCfg(type='bottle', d=4, c=512, s=2, gs=16, br=0.25), - interleave_blocks(types=('bottle', 'self_attn'), every=1, d=2, c=1024, s=2, gs=16, br=0.25), - ByoBlockCfg(type='self_attn', d=3, c=2048, s=2, gs=16, br=0.25), - ), - stem_chs=64, - stem_type='tiered', - stem_pool='maxpool', - num_features=0, - fixed_input_size=True, - act_layer='silu', - attn_layer='eca', - self_attn_layer='bottleneck', - self_attn_kwargs=dict() - ), - - halonet_h1=ByoModelCfg( - blocks=( - ByoBlockCfg(type='self_attn', d=3, c=64, s=1, gs=0, br=1.0), - ByoBlockCfg(type='self_attn', d=3, c=128, s=2, gs=0, br=1.0), - ByoBlockCfg(type='self_attn', d=10, c=256, s=2, gs=0, br=1.0), - ByoBlockCfg(type='self_attn', d=3, c=512, s=2, gs=0, br=1.0), - ), - stem_chs=64, - stem_type='7x7', - stem_pool='maxpool', - num_features=0, - self_attn_layer='halo', - self_attn_kwargs=dict(block_size=8, halo_size=3), - ), - halonet_h1_c4c5=ByoModelCfg( - blocks=( - ByoBlockCfg(type='bottle', d=3, c=64, s=1, gs=0, br=1.0), - ByoBlockCfg(type='bottle', d=3, c=128, s=2, gs=0, br=1.0), - ByoBlockCfg(type='self_attn', d=10, c=256, s=2, gs=0, br=1.0), - ByoBlockCfg(type='self_attn', d=3, c=512, s=2, gs=0, br=1.0), - ), - stem_chs=64, - stem_type='tiered', - stem_pool='maxpool', - num_features=0, - self_attn_layer='halo', - self_attn_kwargs=dict(block_size=8, halo_size=3), - ), - halonet26t=ByoModelCfg( - blocks=( - ByoBlockCfg(type='bottle', d=2, c=256, s=1, gs=0, br=0.25), - ByoBlockCfg(type='bottle', d=2, c=512, s=2, gs=0, br=0.25), - interleave_blocks(types=('bottle', 'self_attn'), every=1, d=2, c=1024, s=2, gs=0, br=0.25), - ByoBlockCfg(type='self_attn', d=2, c=2048, s=2, gs=0, br=0.25), - ), - stem_chs=64, - stem_type='tiered', - stem_pool='maxpool', - num_features=0, - self_attn_layer='halo', - self_attn_kwargs=dict(block_size=8, halo_size=2) # intended for 256x256 res - ), - halonet50ts=ByoModelCfg( - blocks=( - ByoBlockCfg(type='bottle', d=3, c=256, s=1, gs=0, br=0.25), - ByoBlockCfg(type='bottle', d=4, c=512, s=2, gs=0, br=0.25), - interleave_blocks(types=('bottle', 'self_attn'), every=1, d=6, c=1024, s=2, gs=0, br=0.25), - ByoBlockCfg(type='self_attn', d=3, c=2048, s=2, gs=0, br=0.25), - ), - stem_chs=64, - stem_type='tiered', - stem_pool='maxpool', - num_features=0, - act_layer='silu', - self_attn_layer='halo', - self_attn_kwargs=dict(block_size=8, halo_size=2) - ), - eca_halonext26ts=ByoModelCfg( - blocks=( - ByoBlockCfg(type='bottle', d=2, c=256, s=1, gs=16, br=0.25), - ByoBlockCfg(type='bottle', d=2, c=512, s=2, gs=16, br=0.25), - interleave_blocks(types=('bottle', 'self_attn'), every=1, d=2, c=1024, s=2, gs=16, br=0.25), - ByoBlockCfg(type='self_attn', d=2, c=2048, s=2, gs=16, br=0.25), - ), - stem_chs=64, - stem_type='tiered', - stem_pool='maxpool', - num_features=0, - act_layer='silu', - attn_layer='eca', - self_attn_layer='halo', - self_attn_kwargs=dict(block_size=8, halo_size=2) # intended for 256x256 res - ), - - 
lambda_resnet26t=ByoModelCfg( - blocks=( - ByoBlockCfg(type='bottle', d=2, c=256, s=1, gs=0, br=0.25), - ByoBlockCfg(type='bottle', d=2, c=512, s=2, gs=0, br=0.25), - interleave_blocks(types=('bottle', 'self_attn'), every=1, d=2, c=1024, s=2, gs=0, br=0.25), - ByoBlockCfg(type='self_attn', d=2, c=2048, s=2, gs=0, br=0.25), - ), - stem_chs=64, - stem_type='tiered', - stem_pool='maxpool', - num_features=0, - self_attn_layer='lambda', - self_attn_kwargs=dict() - ), - lambda_resnet50t=ByoModelCfg( - blocks=( - ByoBlockCfg(type='bottle', d=3, c=256, s=1, gs=0, br=0.25), - ByoBlockCfg(type='bottle', d=4, c=512, s=2, gs=0, br=0.25), - interleave_blocks(types=('bottle', 'self_attn'), every=3, d=6, c=1024, s=2, gs=0, br=0.25), - ByoBlockCfg(type='self_attn', d=3, c=2048, s=2, gs=0, br=0.25), - ), - stem_chs=64, - stem_type='tiered', - stem_pool='maxpool', - num_features=0, - self_attn_layer='lambda', - self_attn_kwargs=dict() - ), - eca_lambda_resnext26ts=ByoModelCfg( - blocks=( - ByoBlockCfg(type='bottle', d=2, c=256, s=1, gs=16, br=0.25), - ByoBlockCfg(type='bottle', d=2, c=512, s=2, gs=16, br=0.25), - interleave_blocks(types=('bottle', 'self_attn'), every=1, d=2, c=1024, s=2, gs=16, br=0.25), - ByoBlockCfg(type='self_attn', d=2, c=2048, s=2, gs=16, br=0.25), - ), - stem_chs=64, - stem_type='tiered', - stem_pool='maxpool', - num_features=0, - act_layer='silu', - attn_layer='eca', - self_attn_layer='lambda', - self_attn_kwargs=dict() - ), - - swinnet26t=ByoModelCfg( - blocks=( - ByoBlockCfg(type='bottle', d=2, c=256, s=1, gs=0, br=0.25), - interleave_blocks(types=('bottle', 'self_attn'), every=1, d=2, c=512, s=2, gs=0, br=0.25), - interleave_blocks(types=('bottle', 'self_attn'), every=1, d=2, c=1024, s=2, gs=0, br=0.25), - ByoBlockCfg(type='self_attn', d=2, c=2048, s=2, gs=0, br=0.25), - ), - stem_chs=64, - stem_type='tiered', - stem_pool='maxpool', - num_features=0, - fixed_input_size=True, - self_attn_layer='swin', - self_attn_kwargs=dict(win_size=8) - ), - swinnet50ts=ByoModelCfg( - blocks=( - ByoBlockCfg(type='bottle', d=3, c=256, s=1, gs=0, br=0.25), - interleave_blocks(types=('bottle', 'self_attn'), every=1, d=4, c=512, s=2, gs=0, br=0.25), - interleave_blocks(types=('bottle', 'self_attn'), every=1, d=2, c=1024, s=2, gs=0, br=0.25), - ByoBlockCfg(type='self_attn', d=3, c=2048, s=2, gs=0, br=0.25), - ), - stem_chs=64, - stem_type='tiered', - stem_pool='maxpool', - num_features=0, - fixed_input_size=True, - act_layer='silu', - self_attn_layer='swin', - self_attn_kwargs=dict(win_size=8) - ), - eca_swinnext26ts=ByoModelCfg( - blocks=( - ByoBlockCfg(type='bottle', d=2, c=256, s=1, gs=16, br=0.25), - interleave_blocks(types=('bottle', 'self_attn'), every=1, d=2, c=512, s=2, gs=16, br=0.25), - interleave_blocks(types=('bottle', 'self_attn'), every=1, d=2, c=1024, s=2, gs=16, br=0.25), - ByoBlockCfg(type='self_attn', d=2, c=2048, s=2, gs=16, br=0.25), - ), - stem_chs=64, - stem_type='tiered', - stem_pool='maxpool', - num_features=0, - fixed_input_size=True, - act_layer='silu', - attn_layer='eca', - self_attn_layer='swin', - self_attn_kwargs=dict(win_size=8) - ), - - - rednet26t=ByoModelCfg( - blocks=( - ByoBlockCfg(type='self_attn', d=2, c=256, s=1, gs=0, br=0.25), - ByoBlockCfg(type='self_attn', d=2, c=512, s=2, gs=0, br=0.25), - ByoBlockCfg(type='self_attn', d=2, c=1024, s=2, gs=0, br=0.25), - ByoBlockCfg(type='self_attn', d=2, c=2048, s=2, gs=0, br=0.25), - ), - stem_chs=64, - stem_type='tiered', # FIXME RedNet uses involution in middle of stem - stem_pool='maxpool', - num_features=0, - 
self_attn_layer='involution', - self_attn_kwargs=dict() - ), - rednet50ts=ByoModelCfg( - blocks=( - ByoBlockCfg(type='self_attn', d=3, c=256, s=1, gs=0, br=0.25), - ByoBlockCfg(type='self_attn', d=4, c=512, s=2, gs=0, br=0.25), - ByoBlockCfg(type='self_attn', d=2, c=1024, s=2, gs=0, br=0.25), - ByoBlockCfg(type='self_attn', d=3, c=2048, s=2, gs=0, br=0.25), - ), - stem_chs=64, - stem_type='tiered', - stem_pool='maxpool', - num_features=0, - act_layer='silu', - self_attn_layer='involution', - self_attn_kwargs=dict() - ), -) - - -def _create_byoanet(variant, cfg_variant=None, pretrained=False, **kwargs): - return build_model_with_cfg( - ByobNet, variant, pretrained, - default_cfg=default_cfgs[variant], - model_cfg=model_cfgs[variant] if not cfg_variant else model_cfgs[cfg_variant], - feature_cfg=dict(flatten_sequential=True), - **kwargs) - - -@register_model -def botnet26t_256(pretrained=False, **kwargs): - """ Bottleneck Transformer w/ ResNet26-T backbone. Bottleneck attn in final stage. - """ - kwargs.setdefault('img_size', 256) - return _create_byoanet('botnet26t_256', 'botnet26t', pretrained=pretrained, **kwargs) - - -@register_model -def botnet50ts_256(pretrained=False, **kwargs): - """ Bottleneck Transformer w/ ResNet50-T backbone. Bottleneck attn in final stage. - """ - kwargs.setdefault('img_size', 256) - return _create_byoanet('botnet50ts_256', 'botnet50ts', pretrained=pretrained, **kwargs) - - -@register_model -def eca_botnext26ts_256(pretrained=False, **kwargs): - """ Bottleneck Transformer w/ ResNet26-T backbone. Bottleneck attn in final stage. - """ - kwargs.setdefault('img_size', 256) - return _create_byoanet('eca_botnext26ts_256', 'eca_botnext26ts', pretrained=pretrained, **kwargs) - - -@register_model -def halonet_h1(pretrained=False, **kwargs): - """ HaloNet-H1. Halo attention in all stages as per the paper. - - This runs very slowly, param count lower than paper --> something is wrong. - """ - return _create_byoanet('halonet_h1', pretrained=pretrained, **kwargs) - - -@register_model -def halonet_h1_c4c5(pretrained=False, **kwargs): - """ HaloNet-H1 config w/ attention in last two stages. - """ - return _create_byoanet('halonet_h1_c4c5', pretrained=pretrained, **kwargs) - - -@register_model -def halonet26t(pretrained=False, **kwargs): - """ HaloNet w/ a ResNet26-t backbone, Hallo attention in final stage - """ - return _create_byoanet('halonet26t', pretrained=pretrained, **kwargs) - - -@register_model -def halonet50ts(pretrained=False, **kwargs): - """ HaloNet w/ a ResNet50-t backbone, Hallo attention in final stage - """ - return _create_byoanet('halonet50ts', pretrained=pretrained, **kwargs) - - -@register_model -def eca_halonext26ts(pretrained=False, **kwargs): - """ HaloNet w/ a ResNet26-t backbone, Hallo attention in final stage - """ - return _create_byoanet('eca_halonext26ts', pretrained=pretrained, **kwargs) - - -@register_model -def lambda_resnet26t(pretrained=False, **kwargs): - """ Lambda-ResNet-26T. Lambda layers in one C4 stage and all C5. - """ - return _create_byoanet('lambda_resnet26t', pretrained=pretrained, **kwargs) - - -@register_model -def lambda_resnet50t(pretrained=False, **kwargs): - """ Lambda-ResNet-50T. Lambda layers in one C4 stage and all C5. - """ - return _create_byoanet('lambda_resnet50t', pretrained=pretrained, **kwargs) - - -@register_model -def eca_lambda_resnext26ts(pretrained=False, **kwargs): - """ Lambda-ResNet-26T. Lambda layers in one C4 stage and all C5. 
- """ - return _create_byoanet('eca_lambda_resnext26ts', pretrained=pretrained, **kwargs) - - -@register_model -def swinnet26t_256(pretrained=False, **kwargs): - """ - """ - kwargs.setdefault('img_size', 256) - return _create_byoanet('swinnet26t_256', 'swinnet26t', pretrained=pretrained, **kwargs) - - -@register_model -def swinnet50ts_256(pretrained=False, **kwargs): - """ - """ - kwargs.setdefault('img_size', 256) - return _create_byoanet('swinnet50ts_256', 'swinnet50ts', pretrained=pretrained, **kwargs) - - -@register_model -def eca_swinnext26ts_256(pretrained=False, **kwargs): - """ - """ - kwargs.setdefault('img_size', 256) - return _create_byoanet('eca_swinnext26ts_256', 'eca_swinnext26ts', pretrained=pretrained, **kwargs) - - -@register_model -def rednet26t(pretrained=False, **kwargs): - """ - """ - return _create_byoanet('rednet26t', pretrained=pretrained, **kwargs) - - -@register_model -def rednet50ts(pretrained=False, **kwargs): - """ - """ - return _create_byoanet('rednet50ts', pretrained=pretrained, **kwargs) diff --git a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/lama/saicinpainting/training/trainers/default.py b/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/lama/saicinpainting/training/trainers/default.py deleted file mode 100644 index 29cd10ec376d5fe3ebcd957d807d2d3f83b6ec59..0000000000000000000000000000000000000000 --- a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/lama/saicinpainting/training/trainers/default.py +++ /dev/null @@ -1,175 +0,0 @@ -import logging - -import torch -import torch.nn.functional as F -from omegaconf import OmegaConf - -# from annotator.lama.saicinpainting.training.data.datasets import make_constant_area_crop_params -from annotator.lama.saicinpainting.training.losses.distance_weighting import make_mask_distance_weighter -from annotator.lama.saicinpainting.training.losses.feature_matching import feature_matching_loss, masked_l1_loss -# from annotator.lama.saicinpainting.training.modules.fake_fakes import FakeFakesGenerator -from annotator.lama.saicinpainting.training.trainers.base import BaseInpaintingTrainingModule, make_multiscale_noise -from annotator.lama.saicinpainting.utils import add_prefix_to_keys, get_ramp - -LOGGER = logging.getLogger(__name__) - - -def make_constant_area_crop_batch(batch, **kwargs): - crop_y, crop_x, crop_height, crop_width = make_constant_area_crop_params(img_height=batch['image'].shape[2], - img_width=batch['image'].shape[3], - **kwargs) - batch['image'] = batch['image'][:, :, crop_y : crop_y + crop_height, crop_x : crop_x + crop_width] - batch['mask'] = batch['mask'][:, :, crop_y: crop_y + crop_height, crop_x: crop_x + crop_width] - return batch - - -class DefaultInpaintingTrainingModule(BaseInpaintingTrainingModule): - def __init__(self, *args, concat_mask=True, rescale_scheduler_kwargs=None, image_to_discriminator='predicted_image', - add_noise_kwargs=None, noise_fill_hole=False, const_area_crop_kwargs=None, - distance_weighter_kwargs=None, distance_weighted_mask_for_discr=False, - fake_fakes_proba=0, fake_fakes_generator_kwargs=None, - **kwargs): - super().__init__(*args, **kwargs) - self.concat_mask = concat_mask - self.rescale_size_getter = get_ramp(**rescale_scheduler_kwargs) if rescale_scheduler_kwargs is not None else None - self.image_to_discriminator = image_to_discriminator - self.add_noise_kwargs = add_noise_kwargs - self.noise_fill_hole = noise_fill_hole - self.const_area_crop_kwargs = const_area_crop_kwargs - self.refine_mask_for_losses = 
make_mask_distance_weighter(**distance_weighter_kwargs) \ - if distance_weighter_kwargs is not None else None - self.distance_weighted_mask_for_discr = distance_weighted_mask_for_discr - - self.fake_fakes_proba = fake_fakes_proba - if self.fake_fakes_proba > 1e-3: - self.fake_fakes_gen = FakeFakesGenerator(**(fake_fakes_generator_kwargs or {})) - - def forward(self, batch): - if self.training and self.rescale_size_getter is not None: - cur_size = self.rescale_size_getter(self.global_step) - batch['image'] = F.interpolate(batch['image'], size=cur_size, mode='bilinear', align_corners=False) - batch['mask'] = F.interpolate(batch['mask'], size=cur_size, mode='nearest') - - if self.training and self.const_area_crop_kwargs is not None: - batch = make_constant_area_crop_batch(batch, **self.const_area_crop_kwargs) - - img = batch['image'] - mask = batch['mask'] - - masked_img = img * (1 - mask) - - if self.add_noise_kwargs is not None: - noise = make_multiscale_noise(masked_img, **self.add_noise_kwargs) - if self.noise_fill_hole: - masked_img = masked_img + mask * noise[:, :masked_img.shape[1]] - masked_img = torch.cat([masked_img, noise], dim=1) - - if self.concat_mask: - masked_img = torch.cat([masked_img, mask], dim=1) - - batch['predicted_image'] = self.generator(masked_img) - batch['inpainted'] = mask * batch['predicted_image'] + (1 - mask) * batch['image'] - - if self.fake_fakes_proba > 1e-3: - if self.training and torch.rand(1).item() < self.fake_fakes_proba: - batch['fake_fakes'], batch['fake_fakes_masks'] = self.fake_fakes_gen(img, mask) - batch['use_fake_fakes'] = True - else: - batch['fake_fakes'] = torch.zeros_like(img) - batch['fake_fakes_masks'] = torch.zeros_like(mask) - batch['use_fake_fakes'] = False - - batch['mask_for_losses'] = self.refine_mask_for_losses(img, batch['predicted_image'], mask) \ - if self.refine_mask_for_losses is not None and self.training \ - else mask - - return batch - - def generator_loss(self, batch): - img = batch['image'] - predicted_img = batch[self.image_to_discriminator] - original_mask = batch['mask'] - supervised_mask = batch['mask_for_losses'] - - # L1 - l1_value = masked_l1_loss(predicted_img, img, supervised_mask, - self.config.losses.l1.weight_known, - self.config.losses.l1.weight_missing) - - total_loss = l1_value - metrics = dict(gen_l1=l1_value) - - # vgg-based perceptual loss - if self.config.losses.perceptual.weight > 0: - pl_value = self.loss_pl(predicted_img, img, mask=supervised_mask).sum() * self.config.losses.perceptual.weight - total_loss = total_loss + pl_value - metrics['gen_pl'] = pl_value - - # discriminator - # adversarial_loss calls backward by itself - mask_for_discr = supervised_mask if self.distance_weighted_mask_for_discr else original_mask - self.adversarial_loss.pre_generator_step(real_batch=img, fake_batch=predicted_img, - generator=self.generator, discriminator=self.discriminator) - discr_real_pred, discr_real_features = self.discriminator(img) - discr_fake_pred, discr_fake_features = self.discriminator(predicted_img) - adv_gen_loss, adv_metrics = self.adversarial_loss.generator_loss(real_batch=img, - fake_batch=predicted_img, - discr_real_pred=discr_real_pred, - discr_fake_pred=discr_fake_pred, - mask=mask_for_discr) - total_loss = total_loss + adv_gen_loss - metrics['gen_adv'] = adv_gen_loss - metrics.update(add_prefix_to_keys(adv_metrics, 'adv_')) - - # feature matching - if self.config.losses.feature_matching.weight > 0: - need_mask_in_fm = OmegaConf.to_container(self.config.losses.feature_matching).get('pass_mask', 
False) - mask_for_fm = supervised_mask if need_mask_in_fm else None - fm_value = feature_matching_loss(discr_fake_features, discr_real_features, - mask=mask_for_fm) * self.config.losses.feature_matching.weight - total_loss = total_loss + fm_value - metrics['gen_fm'] = fm_value - - if self.loss_resnet_pl is not None: - resnet_pl_value = self.loss_resnet_pl(predicted_img, img) - total_loss = total_loss + resnet_pl_value - metrics['gen_resnet_pl'] = resnet_pl_value - - return total_loss, metrics - - def discriminator_loss(self, batch): - total_loss = 0 - metrics = {} - - predicted_img = batch[self.image_to_discriminator].detach() - self.adversarial_loss.pre_discriminator_step(real_batch=batch['image'], fake_batch=predicted_img, - generator=self.generator, discriminator=self.discriminator) - discr_real_pred, discr_real_features = self.discriminator(batch['image']) - discr_fake_pred, discr_fake_features = self.discriminator(predicted_img) - adv_discr_loss, adv_metrics = self.adversarial_loss.discriminator_loss(real_batch=batch['image'], - fake_batch=predicted_img, - discr_real_pred=discr_real_pred, - discr_fake_pred=discr_fake_pred, - mask=batch['mask']) - total_loss = total_loss + adv_discr_loss - metrics['discr_adv'] = adv_discr_loss - metrics.update(add_prefix_to_keys(adv_metrics, 'adv_')) - - - if batch.get('use_fake_fakes', False): - fake_fakes = batch['fake_fakes'] - self.adversarial_loss.pre_discriminator_step(real_batch=batch['image'], fake_batch=fake_fakes, - generator=self.generator, discriminator=self.discriminator) - discr_fake_fakes_pred, _ = self.discriminator(fake_fakes) - fake_fakes_adv_discr_loss, fake_fakes_adv_metrics = self.adversarial_loss.discriminator_loss( - real_batch=batch['image'], - fake_batch=fake_fakes, - discr_real_pred=discr_real_pred, - discr_fake_pred=discr_fake_fakes_pred, - mask=batch['mask'] - ) - total_loss = total_loss + fake_fakes_adv_discr_loss - metrics['discr_adv_fake_fakes'] = fake_fakes_adv_discr_loss - metrics.update(add_prefix_to_keys(fake_fakes_adv_metrics, 'adv_')) - - return total_loss, metrics diff --git a/spaces/crashedice/signify/SOURCE/yolo_files/utils/metrics.py b/spaces/crashedice/signify/SOURCE/yolo_files/utils/metrics.py deleted file mode 100644 index 323c84b6c873c87a3530a3caec60982a7d83a3b8..0000000000000000000000000000000000000000 --- a/spaces/crashedice/signify/SOURCE/yolo_files/utils/metrics.py +++ /dev/null @@ -1,223 +0,0 @@ -# Model validation metrics - -from pathlib import Path - -import matplotlib.pyplot as plt -import numpy as np -import torch - -from . import general - - -def fitness(x): - # Model fitness as a weighted combination of metrics - w = [0.0, 0.0, 0.1, 0.9] # weights for [P, R, mAP@0.5, mAP@0.5:0.95] - return (x[:, :4] * w).sum(1) - - -def ap_per_class(tp, conf, pred_cls, target_cls, plot=False, save_dir='.', names=()): - """ Compute the average precision, given the recall and precision curves. - Source: https://github.com/rafaelpadilla/Object-Detection-Metrics. - # Arguments - tp: True positives (nparray, nx1 or nx10). - conf: Objectness value from 0-1 (nparray). - pred_cls: Predicted object classes (nparray). - target_cls: True object classes (nparray). - plot: Plot precision-recall curve at mAP@0.5 - save_dir: Plot save directory - # Returns - The average precision as computed in py-faster-rcnn. 
- """ - - # Sort by objectness - i = np.argsort(-conf) - tp, conf, pred_cls = tp[i], conf[i], pred_cls[i] - - # Find unique classes - unique_classes = np.unique(target_cls) - nc = unique_classes.shape[0] # number of classes, number of detections - - # Create Precision-Recall curve and compute AP for each class - px, py = np.linspace(0, 1, 1000), [] # for plotting - ap, p, r = np.zeros((nc, tp.shape[1])), np.zeros((nc, 1000)), np.zeros((nc, 1000)) - for ci, c in enumerate(unique_classes): - i = pred_cls == c - n_l = (target_cls == c).sum() # number of labels - n_p = i.sum() # number of predictions - - if n_p == 0 or n_l == 0: - continue - else: - # Accumulate FPs and TPs - fpc = (1 - tp[i]).cumsum(0) - tpc = tp[i].cumsum(0) - - # Recall - recall = tpc / (n_l + 1e-16) # recall curve - r[ci] = np.interp(-px, -conf[i], recall[:, 0], left=0) # negative x, xp because xp decreases - - # Precision - precision = tpc / (tpc + fpc) # precision curve - p[ci] = np.interp(-px, -conf[i], precision[:, 0], left=1) # p at pr_score - - # AP from recall-precision curve - for j in range(tp.shape[1]): - ap[ci, j], mpre, mrec = compute_ap(recall[:, j], precision[:, j]) - if plot and j == 0: - py.append(np.interp(px, mrec, mpre)) # precision at mAP@0.5 - - # Compute F1 (harmonic mean of precision and recall) - f1 = 2 * p * r / (p + r + 1e-16) - if plot: - plot_pr_curve(px, py, ap, Path(save_dir) / 'PR_curve.png', names) - plot_mc_curve(px, f1, Path(save_dir) / 'F1_curve.png', names, ylabel='F1') - plot_mc_curve(px, p, Path(save_dir) / 'P_curve.png', names, ylabel='Precision') - plot_mc_curve(px, r, Path(save_dir) / 'R_curve.png', names, ylabel='Recall') - - i = f1.mean(0).argmax() # max F1 index - return p[:, i], r[:, i], ap, f1[:, i], unique_classes.astype('int32') - - -def compute_ap(recall, precision): - """ Compute the average precision, given the recall and precision curves - # Arguments - recall: The recall curve (list) - precision: The precision curve (list) - # Returns - Average precision, precision curve, recall curve - """ - - # Append sentinel values to beginning and end - mrec = np.concatenate(([0.], recall, [recall[-1] + 0.01])) - mpre = np.concatenate(([1.], precision, [0.])) - - # Compute the precision envelope - mpre = np.flip(np.maximum.accumulate(np.flip(mpre))) - - # Integrate area under curve - method = 'interp' # methods: 'continuous', 'interp' - if method == 'interp': - x = np.linspace(0, 1, 101) # 101-point interp (COCO) - ap = np.trapz(np.interp(x, mrec, mpre), x) # integrate - else: # 'continuous' - i = np.where(mrec[1:] != mrec[:-1])[0] # points where x axis (recall) changes - ap = np.sum((mrec[i + 1] - mrec[i]) * mpre[i + 1]) # area under curve - - return ap, mpre, mrec - - -class ConfusionMatrix: - # Updated version of https://github.com/kaanakan/object_detection_confusion_matrix - def __init__(self, nc, conf=0.25, iou_thres=0.45): - self.matrix = np.zeros((nc + 1, nc + 1)) - self.nc = nc # number of classes - self.conf = conf - self.iou_thres = iou_thres - - def process_batch(self, detections, labels): - """ - Return intersection-over-union (Jaccard index) of boxes. - Both sets of boxes are expected to be in (x1, y1, x2, y2) format. 
- Arguments: - detections (Array[N, 6]), x1, y1, x2, y2, conf, class - labels (Array[M, 5]), class, x1, y1, x2, y2 - Returns: - None, updates confusion matrix accordingly - """ - detections = detections[detections[:, 4] > self.conf] - gt_classes = labels[:, 0].int() - detection_classes = detections[:, 5].int() - iou = general.box_iou(labels[:, 1:], detections[:, :4]) - - x = torch.where(iou > self.iou_thres) - if x[0].shape[0]: - matches = torch.cat((torch.stack(x, 1), iou[x[0], x[1]][:, None]), 1).cpu().numpy() - if x[0].shape[0] > 1: - matches = matches[matches[:, 2].argsort()[::-1]] - matches = matches[np.unique(matches[:, 1], return_index=True)[1]] - matches = matches[matches[:, 2].argsort()[::-1]] - matches = matches[np.unique(matches[:, 0], return_index=True)[1]] - else: - matches = np.zeros((0, 3)) - - n = matches.shape[0] > 0 - m0, m1, _ = matches.transpose().astype(np.int16) - for i, gc in enumerate(gt_classes): - j = m0 == i - if n and sum(j) == 1: - self.matrix[detection_classes[m1[j]], gc] += 1 # correct - else: - self.matrix[self.nc, gc] += 1 # background FP - - if n: - for i, dc in enumerate(detection_classes): - if not any(m1 == i): - self.matrix[dc, self.nc] += 1 # background FN - - def matrix(self): - return self.matrix - - def plot(self, save_dir='', names=()): - try: - import seaborn as sn - - array = self.matrix / (self.matrix.sum(0).reshape(1, self.nc + 1) + 1E-6) # normalize - array[array < 0.005] = np.nan # don't annotate (would appear as 0.00) - - fig = plt.figure(figsize=(12, 9), tight_layout=True) - sn.set(font_scale=1.0 if self.nc < 50 else 0.8) # for label size - labels = (0 < len(names) < 99) and len(names) == self.nc # apply names to ticklabels - sn.heatmap(array, annot=self.nc < 30, annot_kws={"size": 8}, cmap='Blues', fmt='.2f', square=True, - xticklabels=names + ['background FP'] if labels else "auto", - yticklabels=names + ['background FN'] if labels else "auto").set_facecolor((1, 1, 1)) - fig.axes[0].set_xlabel('True') - fig.axes[0].set_ylabel('Predicted') - fig.savefig(Path(save_dir) / 'confusion_matrix.png', dpi=250) - except Exception as e: - pass - - def print(self): - for i in range(self.nc + 1): - print(' '.join(map(str, self.matrix[i]))) - - -# Plots ---------------------------------------------------------------------------------------------------------------- - -def plot_pr_curve(px, py, ap, save_dir='pr_curve.png', names=()): - # Precision-recall curve - fig, ax = plt.subplots(1, 1, figsize=(9, 6), tight_layout=True) - py = np.stack(py, axis=1) - - if 0 < len(names) < 21: # display per-class legend if < 21 classes - for i, y in enumerate(py.T): - ax.plot(px, y, linewidth=1, label=f'{names[i]} {ap[i, 0]:.3f}') # plot(recall, precision) - else: - ax.plot(px, py, linewidth=1, color='grey') # plot(recall, precision) - - ax.plot(px, py.mean(1), linewidth=3, color='blue', label='all classes %.3f mAP@0.5' % ap[:, 0].mean()) - ax.set_xlabel('Recall') - ax.set_ylabel('Precision') - ax.set_xlim(0, 1) - ax.set_ylim(0, 1) - plt.legend(bbox_to_anchor=(1.04, 1), loc="upper left") - fig.savefig(Path(save_dir), dpi=250) - - -def plot_mc_curve(px, py, save_dir='mc_curve.png', names=(), xlabel='Confidence', ylabel='Metric'): - # Metric-confidence curve - fig, ax = plt.subplots(1, 1, figsize=(9, 6), tight_layout=True) - - if 0 < len(names) < 21: # display per-class legend if < 21 classes - for i, y in enumerate(py): - ax.plot(px, y, linewidth=1, label=f'{names[i]}') # plot(confidence, metric) - else: - ax.plot(px, py.T, linewidth=1, color='grey') # 
plot(confidence, metric) - - y = py.mean(0) - ax.plot(px, y, linewidth=3, color='blue', label=f'all classes {y.max():.2f} at {px[y.argmax()]:.3f}') - ax.set_xlabel(xlabel) - ax.set_ylabel(ylabel) - ax.set_xlim(0, 1) - ax.set_ylim(0, 1) - plt.legend(bbox_to_anchor=(1.04, 1), loc="upper left") - fig.savefig(Path(save_dir), dpi=250) diff --git a/spaces/cscan/CodeFormer/CodeFormer/basicsr/ops/dcn/deform_conv.py b/spaces/cscan/CodeFormer/CodeFormer/basicsr/ops/dcn/deform_conv.py deleted file mode 100644 index 734154f9ed9447d585eae7df6886acb136f8a3cf..0000000000000000000000000000000000000000 --- a/spaces/cscan/CodeFormer/CodeFormer/basicsr/ops/dcn/deform_conv.py +++ /dev/null @@ -1,377 +0,0 @@ -import math -import torch -from torch import nn as nn -from torch.autograd import Function -from torch.autograd.function import once_differentiable -from torch.nn import functional as F -from torch.nn.modules.utils import _pair, _single - -try: - from . import deform_conv_ext -except ImportError: - import os - BASICSR_JIT = os.getenv('BASICSR_JIT') - if BASICSR_JIT == 'True': - from torch.utils.cpp_extension import load - module_path = os.path.dirname(__file__) - deform_conv_ext = load( - 'deform_conv', - sources=[ - os.path.join(module_path, 'src', 'deform_conv_ext.cpp'), - os.path.join(module_path, 'src', 'deform_conv_cuda.cpp'), - os.path.join(module_path, 'src', 'deform_conv_cuda_kernel.cu'), - ], - ) - - -class DeformConvFunction(Function): - - @staticmethod - def forward(ctx, - input, - offset, - weight, - stride=1, - padding=0, - dilation=1, - groups=1, - deformable_groups=1, - im2col_step=64): - if input is not None and input.dim() != 4: - raise ValueError(f'Expected 4D tensor as input, got {input.dim()}' 'D tensor instead.') - ctx.stride = _pair(stride) - ctx.padding = _pair(padding) - ctx.dilation = _pair(dilation) - ctx.groups = groups - ctx.deformable_groups = deformable_groups - ctx.im2col_step = im2col_step - - ctx.save_for_backward(input, offset, weight) - - output = input.new_empty(DeformConvFunction._output_size(input, weight, ctx.padding, ctx.dilation, ctx.stride)) - - ctx.bufs_ = [input.new_empty(0), input.new_empty(0)] # columns, ones - - if not input.is_cuda: - raise NotImplementedError - else: - cur_im2col_step = min(ctx.im2col_step, input.shape[0]) - assert (input.shape[0] % cur_im2col_step) == 0, 'im2col step must divide batchsize' - deform_conv_ext.deform_conv_forward(input, weight, - offset, output, ctx.bufs_[0], ctx.bufs_[1], weight.size(3), - weight.size(2), ctx.stride[1], ctx.stride[0], ctx.padding[1], - ctx.padding[0], ctx.dilation[1], ctx.dilation[0], ctx.groups, - ctx.deformable_groups, cur_im2col_step) - return output - - @staticmethod - @once_differentiable - def backward(ctx, grad_output): - input, offset, weight = ctx.saved_tensors - - grad_input = grad_offset = grad_weight = None - - if not grad_output.is_cuda: - raise NotImplementedError - else: - cur_im2col_step = min(ctx.im2col_step, input.shape[0]) - assert (input.shape[0] % cur_im2col_step) == 0, 'im2col step must divide batchsize' - - if ctx.needs_input_grad[0] or ctx.needs_input_grad[1]: - grad_input = torch.zeros_like(input) - grad_offset = torch.zeros_like(offset) - deform_conv_ext.deform_conv_backward_input(input, offset, grad_output, grad_input, - grad_offset, weight, ctx.bufs_[0], weight.size(3), - weight.size(2), ctx.stride[1], ctx.stride[0], ctx.padding[1], - ctx.padding[0], ctx.dilation[1], ctx.dilation[0], ctx.groups, - ctx.deformable_groups, cur_im2col_step) - - if ctx.needs_input_grad[2]: - 
grad_weight = torch.zeros_like(weight) - deform_conv_ext.deform_conv_backward_parameters(input, offset, grad_output, grad_weight, - ctx.bufs_[0], ctx.bufs_[1], weight.size(3), - weight.size(2), ctx.stride[1], ctx.stride[0], - ctx.padding[1], ctx.padding[0], ctx.dilation[1], - ctx.dilation[0], ctx.groups, ctx.deformable_groups, 1, - cur_im2col_step) - - return (grad_input, grad_offset, grad_weight, None, None, None, None, None) - - @staticmethod - def _output_size(input, weight, padding, dilation, stride): - channels = weight.size(0) - output_size = (input.size(0), channels) - for d in range(input.dim() - 2): - in_size = input.size(d + 2) - pad = padding[d] - kernel = dilation[d] * (weight.size(d + 2) - 1) + 1 - stride_ = stride[d] - output_size += ((in_size + (2 * pad) - kernel) // stride_ + 1, ) - if not all(map(lambda s: s > 0, output_size)): - raise ValueError('convolution input is too small (output would be ' f'{"x".join(map(str, output_size))})') - return output_size - - -class ModulatedDeformConvFunction(Function): - - @staticmethod - def forward(ctx, - input, - offset, - mask, - weight, - bias=None, - stride=1, - padding=0, - dilation=1, - groups=1, - deformable_groups=1): - ctx.stride = stride - ctx.padding = padding - ctx.dilation = dilation - ctx.groups = groups - ctx.deformable_groups = deformable_groups - ctx.with_bias = bias is not None - if not ctx.with_bias: - bias = input.new_empty(1) # fake tensor - if not input.is_cuda: - raise NotImplementedError - if weight.requires_grad or mask.requires_grad or offset.requires_grad \ - or input.requires_grad: - ctx.save_for_backward(input, offset, mask, weight, bias) - output = input.new_empty(ModulatedDeformConvFunction._infer_shape(ctx, input, weight)) - ctx._bufs = [input.new_empty(0), input.new_empty(0)] - deform_conv_ext.modulated_deform_conv_forward(input, weight, bias, ctx._bufs[0], offset, mask, output, - ctx._bufs[1], weight.shape[2], weight.shape[3], ctx.stride, - ctx.stride, ctx.padding, ctx.padding, ctx.dilation, ctx.dilation, - ctx.groups, ctx.deformable_groups, ctx.with_bias) - return output - - @staticmethod - @once_differentiable - def backward(ctx, grad_output): - if not grad_output.is_cuda: - raise NotImplementedError - input, offset, mask, weight, bias = ctx.saved_tensors - grad_input = torch.zeros_like(input) - grad_offset = torch.zeros_like(offset) - grad_mask = torch.zeros_like(mask) - grad_weight = torch.zeros_like(weight) - grad_bias = torch.zeros_like(bias) - deform_conv_ext.modulated_deform_conv_backward(input, weight, bias, ctx._bufs[0], offset, mask, ctx._bufs[1], - grad_input, grad_weight, grad_bias, grad_offset, grad_mask, - grad_output, weight.shape[2], weight.shape[3], ctx.stride, - ctx.stride, ctx.padding, ctx.padding, ctx.dilation, ctx.dilation, - ctx.groups, ctx.deformable_groups, ctx.with_bias) - if not ctx.with_bias: - grad_bias = None - - return (grad_input, grad_offset, grad_mask, grad_weight, grad_bias, None, None, None, None, None) - - @staticmethod - def _infer_shape(ctx, input, weight): - n = input.size(0) - channels_out = weight.size(0) - height, width = input.shape[2:4] - kernel_h, kernel_w = weight.shape[2:4] - height_out = (height + 2 * ctx.padding - (ctx.dilation * (kernel_h - 1) + 1)) // ctx.stride + 1 - width_out = (width + 2 * ctx.padding - (ctx.dilation * (kernel_w - 1) + 1)) // ctx.stride + 1 - return n, channels_out, height_out, width_out - - -deform_conv = DeformConvFunction.apply -modulated_deform_conv = ModulatedDeformConvFunction.apply - - -class DeformConv(nn.Module): - - def 
__init__(self, - in_channels, - out_channels, - kernel_size, - stride=1, - padding=0, - dilation=1, - groups=1, - deformable_groups=1, - bias=False): - super(DeformConv, self).__init__() - - assert not bias - assert in_channels % groups == 0, \ - f'in_channels {in_channels} is not divisible by groups {groups}' - assert out_channels % groups == 0, \ - f'out_channels {out_channels} is not divisible ' \ - f'by groups {groups}' - - self.in_channels = in_channels - self.out_channels = out_channels - self.kernel_size = _pair(kernel_size) - self.stride = _pair(stride) - self.padding = _pair(padding) - self.dilation = _pair(dilation) - self.groups = groups - self.deformable_groups = deformable_groups - # enable compatibility with nn.Conv2d - self.transposed = False - self.output_padding = _single(0) - - self.weight = nn.Parameter(torch.Tensor(out_channels, in_channels // self.groups, *self.kernel_size)) - - self.reset_parameters() - - def reset_parameters(self): - n = self.in_channels - for k in self.kernel_size: - n *= k - stdv = 1. / math.sqrt(n) - self.weight.data.uniform_(-stdv, stdv) - - def forward(self, x, offset): - # To fix an assert error in deform_conv_cuda.cpp:128 - # input image is smaller than kernel - input_pad = (x.size(2) < self.kernel_size[0] or x.size(3) < self.kernel_size[1]) - if input_pad: - pad_h = max(self.kernel_size[0] - x.size(2), 0) - pad_w = max(self.kernel_size[1] - x.size(3), 0) - x = F.pad(x, (0, pad_w, 0, pad_h), 'constant', 0).contiguous() - offset = F.pad(offset, (0, pad_w, 0, pad_h), 'constant', 0).contiguous() - out = deform_conv(x, offset, self.weight, self.stride, self.padding, self.dilation, self.groups, - self.deformable_groups) - if input_pad: - out = out[:, :, :out.size(2) - pad_h, :out.size(3) - pad_w].contiguous() - return out - - -class DeformConvPack(DeformConv): - """A Deformable Conv Encapsulation that acts as normal Conv layers. - - Args: - in_channels (int): Same as nn.Conv2d. - out_channels (int): Same as nn.Conv2d. - kernel_size (int or tuple[int]): Same as nn.Conv2d. - stride (int or tuple[int]): Same as nn.Conv2d. - padding (int or tuple[int]): Same as nn.Conv2d. - dilation (int or tuple[int]): Same as nn.Conv2d. - groups (int): Same as nn.Conv2d. - bias (bool or str): If specified as `auto`, it will be decided by the - norm_cfg. Bias will be set as True if norm_cfg is None, otherwise - False. 
- """ - - _version = 2 - - def __init__(self, *args, **kwargs): - super(DeformConvPack, self).__init__(*args, **kwargs) - - self.conv_offset = nn.Conv2d( - self.in_channels, - self.deformable_groups * 2 * self.kernel_size[0] * self.kernel_size[1], - kernel_size=self.kernel_size, - stride=_pair(self.stride), - padding=_pair(self.padding), - dilation=_pair(self.dilation), - bias=True) - self.init_offset() - - def init_offset(self): - self.conv_offset.weight.data.zero_() - self.conv_offset.bias.data.zero_() - - def forward(self, x): - offset = self.conv_offset(x) - return deform_conv(x, offset, self.weight, self.stride, self.padding, self.dilation, self.groups, - self.deformable_groups) - - -class ModulatedDeformConv(nn.Module): - - def __init__(self, - in_channels, - out_channels, - kernel_size, - stride=1, - padding=0, - dilation=1, - groups=1, - deformable_groups=1, - bias=True): - super(ModulatedDeformConv, self).__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.kernel_size = _pair(kernel_size) - self.stride = stride - self.padding = padding - self.dilation = dilation - self.groups = groups - self.deformable_groups = deformable_groups - self.with_bias = bias - # enable compatibility with nn.Conv2d - self.transposed = False - self.output_padding = _single(0) - - self.weight = nn.Parameter(torch.Tensor(out_channels, in_channels // groups, *self.kernel_size)) - if bias: - self.bias = nn.Parameter(torch.Tensor(out_channels)) - else: - self.register_parameter('bias', None) - self.init_weights() - - def init_weights(self): - n = self.in_channels - for k in self.kernel_size: - n *= k - stdv = 1. / math.sqrt(n) - self.weight.data.uniform_(-stdv, stdv) - if self.bias is not None: - self.bias.data.zero_() - - def forward(self, x, offset, mask): - return modulated_deform_conv(x, offset, mask, self.weight, self.bias, self.stride, self.padding, self.dilation, - self.groups, self.deformable_groups) - - -class ModulatedDeformConvPack(ModulatedDeformConv): - """A ModulatedDeformable Conv Encapsulation that acts as normal Conv layers. - - Args: - in_channels (int): Same as nn.Conv2d. - out_channels (int): Same as nn.Conv2d. - kernel_size (int or tuple[int]): Same as nn.Conv2d. - stride (int or tuple[int]): Same as nn.Conv2d. - padding (int or tuple[int]): Same as nn.Conv2d. - dilation (int or tuple[int]): Same as nn.Conv2d. - groups (int): Same as nn.Conv2d. - bias (bool or str): If specified as `auto`, it will be decided by the - norm_cfg. Bias will be set as True if norm_cfg is None, otherwise - False. 
- """ - - _version = 2 - - def __init__(self, *args, **kwargs): - super(ModulatedDeformConvPack, self).__init__(*args, **kwargs) - - self.conv_offset = nn.Conv2d( - self.in_channels, - self.deformable_groups * 3 * self.kernel_size[0] * self.kernel_size[1], - kernel_size=self.kernel_size, - stride=_pair(self.stride), - padding=_pair(self.padding), - dilation=_pair(self.dilation), - bias=True) - self.init_weights() - - def init_weights(self): - super(ModulatedDeformConvPack, self).init_weights() - if hasattr(self, 'conv_offset'): - self.conv_offset.weight.data.zero_() - self.conv_offset.bias.data.zero_() - - def forward(self, x): - out = self.conv_offset(x) - o1, o2, mask = torch.chunk(out, 3, dim=1) - offset = torch.cat((o1, o2), dim=1) - mask = torch.sigmoid(mask) - return modulated_deform_conv(x, offset, mask, self.weight, self.bias, self.stride, self.padding, self.dilation, - self.groups, self.deformable_groups) diff --git a/spaces/cymic/Waifu_Diffusion_Webui/javascript/ui.js b/spaces/cymic/Waifu_Diffusion_Webui/javascript/ui.js deleted file mode 100644 index b1053201cf8d57b345b7267f4188072320127d01..0000000000000000000000000000000000000000 --- a/spaces/cymic/Waifu_Diffusion_Webui/javascript/ui.js +++ /dev/null @@ -1,233 +0,0 @@ -// various functions for interation with ui.py not large enough to warrant putting them in separate files - -function selected_gallery_index(){ - var buttons = gradioApp().querySelectorAll('[style="display: block;"].tabitem .gallery-item') - var button = gradioApp().querySelector('[style="display: block;"].tabitem .gallery-item.\\!ring-2') - - var result = -1 - buttons.forEach(function(v, i){ if(v==button) { result = i } }) - - return result -} - -function extract_image_from_gallery(gallery){ - if(gallery.length == 1){ - return gallery[0] - } - - index = selected_gallery_index() - - if (index < 0 || index >= gallery.length){ - return [null] - } - - return gallery[index]; -} - -function args_to_array(args){ - res = [] - for(var i=0;i label > textarea"); - txt2img_textarea?.addEventListener("input", () => update_token_counter("txt2img_token_button")); - txt2img_textarea?.addEventListener("keyup", (event) => submit_prompt(event, "txt2img_generate")); - } - if (!img2img_textarea) { - img2img_textarea = gradioApp().querySelector("#img2img_prompt > label > textarea"); - img2img_textarea?.addEventListener("input", () => update_token_counter("img2img_token_button")); - img2img_textarea?.addEventListener("keyup", (event) => submit_prompt(event, "img2img_generate")); - } -}) - -let txt2img_textarea, img2img_textarea = undefined; -let wait_time = 800 -let token_timeout; - -function update_txt2img_tokens(...args) { - update_token_counter("txt2img_token_button") - if (args.length == 2) - return args[0] - return args; -} - -function update_img2img_tokens(...args) { - update_token_counter("img2img_token_button") - if (args.length == 2) - return args[0] - return args; -} - -function update_token_counter(button_id) { - if (token_timeout) - clearTimeout(token_timeout); - token_timeout = setTimeout(() => gradioApp().getElementById(button_id)?.click(), wait_time); -} - -function submit_prompt(event, generate_button_id) { - if (event.altKey && event.keyCode === 13) { - event.preventDefault(); - gradioApp().getElementById(generate_button_id).click(); - return; - } -} - -function restart_reload(){ - document.body.innerHTML='

    Reloading...

    '; - setTimeout(function(){location.reload()},2000) -} diff --git a/spaces/dakaiye/dky_xuexi/crazy_functions/test_project/cpp/libJPG/jpgd.cpp b/spaces/dakaiye/dky_xuexi/crazy_functions/test_project/cpp/libJPG/jpgd.cpp deleted file mode 100644 index 36d06c8e9068570c3e7624895d474f33dbfe3d29..0000000000000000000000000000000000000000 --- a/spaces/dakaiye/dky_xuexi/crazy_functions/test_project/cpp/libJPG/jpgd.cpp +++ /dev/null @@ -1,3276 +0,0 @@ -// jpgd.cpp - C++ class for JPEG decompression. -// Public domain, Rich Geldreich -// Last updated Apr. 16, 2011 -// Alex Evans: Linear memory allocator (taken from jpge.h). -// -// Supports progressive and baseline sequential JPEG image files, and the most common chroma subsampling factors: Y, H1V1, H2V1, H1V2, and H2V2. -// -// Chroma upsampling quality: H2V2 is upsampled in the frequency domain, H2V1 and H1V2 are upsampled using point sampling. -// Chroma upsampling reference: "Fast Scheme for Image Size Change in the Compressed Domain" -// http://vision.ai.uiuc.edu/~dugad/research/dct/index.html - -#include "jpgd.h" -#include - -#include -// BEGIN EPIC MOD -#define JPGD_ASSERT(x) { assert(x); CA_ASSUME(x); } (void)0 -// END EPIC MOD - -#ifdef _MSC_VER -#pragma warning (disable : 4611) // warning C4611: interaction between '_setjmp' and C++ object destruction is non-portable -#endif - -// Set to 1 to enable freq. domain chroma upsampling on images using H2V2 subsampling (0=faster nearest neighbor sampling). -// This is slower, but results in higher quality on images with highly saturated colors. -#define JPGD_SUPPORT_FREQ_DOMAIN_UPSAMPLING 1 - -#define JPGD_TRUE (1) -#define JPGD_FALSE (0) - -#define JPGD_MAX(a,b) (((a)>(b)) ? (a) : (b)) -#define JPGD_MIN(a,b) (((a)<(b)) ? (a) : (b)) - -namespace jpgd { - - static inline void *jpgd_malloc(size_t nSize) { return FMemory::Malloc(nSize); } - static inline void jpgd_free(void *p) { FMemory::Free(p); } - -// BEGIN EPIC MOD -//@UE3 - use UE3 BGRA encoding instead of assuming RGBA - // stolen from IImageWrapper.h - enum ERGBFormatJPG - { - Invalid = -1, - RGBA = 0, - BGRA = 1, - Gray = 2, - }; - static ERGBFormatJPG jpg_format; -// END EPIC MOD - - // DCT coefficients are stored in this sequence. 
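  // The table that follows, g_ZAG, maps position i in the zig-zag-ordered
  // coefficient stream back to its raster offset inside the 8x8 block, so the
  // dequantizer can write each decoded coefficient straight into natural order.
  // In outline (an illustrative sketch only, not part of this decoder):
  //
  //   for (int i = 0; i < 64; i++)
  //     raster[g_ZAG[i]] = zigzag_stream[i];  // stream slot i lands at raster cell g_ZAG[i]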
- static int g_ZAG[64] = { 0,1,8,16,9,2,3,10,17,24,32,25,18,11,4,5,12,19,26,33,40,48,41,34,27,20,13,6,7,14,21,28,35,42,49,56,57,50,43,36,29,22,15,23,30,37,44,51,58,59,52,45,38,31,39,46,53,60,61,54,47,55,62,63 }; - - enum JPEG_MARKER - { - M_SOF0 = 0xC0, M_SOF1 = 0xC1, M_SOF2 = 0xC2, M_SOF3 = 0xC3, M_SOF5 = 0xC5, M_SOF6 = 0xC6, M_SOF7 = 0xC7, M_JPG = 0xC8, - M_SOF9 = 0xC9, M_SOF10 = 0xCA, M_SOF11 = 0xCB, M_SOF13 = 0xCD, M_SOF14 = 0xCE, M_SOF15 = 0xCF, M_DHT = 0xC4, M_DAC = 0xCC, - M_RST0 = 0xD0, M_RST1 = 0xD1, M_RST2 = 0xD2, M_RST3 = 0xD3, M_RST4 = 0xD4, M_RST5 = 0xD5, M_RST6 = 0xD6, M_RST7 = 0xD7, - M_SOI = 0xD8, M_EOI = 0xD9, M_SOS = 0xDA, M_DQT = 0xDB, M_DNL = 0xDC, M_DRI = 0xDD, M_DHP = 0xDE, M_EXP = 0xDF, - M_APP0 = 0xE0, M_APP15 = 0xEF, M_JPG0 = 0xF0, M_JPG13 = 0xFD, M_COM = 0xFE, M_TEM = 0x01, M_ERROR = 0x100, RST0 = 0xD0 - }; - - enum JPEG_SUBSAMPLING { JPGD_GRAYSCALE = 0, JPGD_YH1V1, JPGD_YH2V1, JPGD_YH1V2, JPGD_YH2V2 }; - -#define CONST_BITS 13 -#define PASS1_BITS 2 -#define SCALEDONE ((int32)1) - -#define FIX_0_298631336 ((int32)2446) /* FIX(0.298631336) */ -#define FIX_0_390180644 ((int32)3196) /* FIX(0.390180644) */ -#define FIX_0_541196100 ((int32)4433) /* FIX(0.541196100) */ -#define FIX_0_765366865 ((int32)6270) /* FIX(0.765366865) */ -#define FIX_0_899976223 ((int32)7373) /* FIX(0.899976223) */ -#define FIX_1_175875602 ((int32)9633) /* FIX(1.175875602) */ -#define FIX_1_501321110 ((int32)12299) /* FIX(1.501321110) */ -#define FIX_1_847759065 ((int32)15137) /* FIX(1.847759065) */ -#define FIX_1_961570560 ((int32)16069) /* FIX(1.961570560) */ -#define FIX_2_053119869 ((int32)16819) /* FIX(2.053119869) */ -#define FIX_2_562915447 ((int32)20995) /* FIX(2.562915447) */ -#define FIX_3_072711026 ((int32)25172) /* FIX(3.072711026) */ - -#define DESCALE(x,n) (((x) + (SCALEDONE << ((n)-1))) >> (n)) -#define DESCALE_ZEROSHIFT(x,n) (((x) + (128 << (n)) + (SCALEDONE << ((n)-1))) >> (n)) - -#define MULTIPLY(var, cnst) ((var) * (cnst)) - -#define CLAMP(i) ((static_cast(i) > 255) ? (((~i) >> 31) & 0xFF) : (i)) - - // Compiler creates a fast path 1D IDCT for X non-zero columns - template - struct Row - { - static void idct(int* pTemp, const jpgd_block_t* pSrc) - { - // ACCESS_COL() will be optimized at compile time to either an array access, or 0. -#define ACCESS_COL(x) (((x) < NONZERO_COLS) ? 
(int)pSrc[x] : 0) - - const int z2 = ACCESS_COL(2), z3 = ACCESS_COL(6); - - const int z1 = MULTIPLY(z2 + z3, FIX_0_541196100); - const int tmp2 = z1 + MULTIPLY(z3, - FIX_1_847759065); - const int tmp3 = z1 + MULTIPLY(z2, FIX_0_765366865); - - const int tmp0 = (ACCESS_COL(0) + ACCESS_COL(4)) << CONST_BITS; - const int tmp1 = (ACCESS_COL(0) - ACCESS_COL(4)) << CONST_BITS; - - const int tmp10 = tmp0 + tmp3, tmp13 = tmp0 - tmp3, tmp11 = tmp1 + tmp2, tmp12 = tmp1 - tmp2; - - const int atmp0 = ACCESS_COL(7), atmp1 = ACCESS_COL(5), atmp2 = ACCESS_COL(3), atmp3 = ACCESS_COL(1); - - const int bz1 = atmp0 + atmp3, bz2 = atmp1 + atmp2, bz3 = atmp0 + atmp2, bz4 = atmp1 + atmp3; - const int bz5 = MULTIPLY(bz3 + bz4, FIX_1_175875602); - - const int az1 = MULTIPLY(bz1, - FIX_0_899976223); - const int az2 = MULTIPLY(bz2, - FIX_2_562915447); - const int az3 = MULTIPLY(bz3, - FIX_1_961570560) + bz5; - const int az4 = MULTIPLY(bz4, - FIX_0_390180644) + bz5; - - const int btmp0 = MULTIPLY(atmp0, FIX_0_298631336) + az1 + az3; - const int btmp1 = MULTIPLY(atmp1, FIX_2_053119869) + az2 + az4; - const int btmp2 = MULTIPLY(atmp2, FIX_3_072711026) + az2 + az3; - const int btmp3 = MULTIPLY(atmp3, FIX_1_501321110) + az1 + az4; - - pTemp[0] = DESCALE(tmp10 + btmp3, CONST_BITS-PASS1_BITS); - pTemp[7] = DESCALE(tmp10 - btmp3, CONST_BITS-PASS1_BITS); - pTemp[1] = DESCALE(tmp11 + btmp2, CONST_BITS-PASS1_BITS); - pTemp[6] = DESCALE(tmp11 - btmp2, CONST_BITS-PASS1_BITS); - pTemp[2] = DESCALE(tmp12 + btmp1, CONST_BITS-PASS1_BITS); - pTemp[5] = DESCALE(tmp12 - btmp1, CONST_BITS-PASS1_BITS); - pTemp[3] = DESCALE(tmp13 + btmp0, CONST_BITS-PASS1_BITS); - pTemp[4] = DESCALE(tmp13 - btmp0, CONST_BITS-PASS1_BITS); - } - }; - - template <> - struct Row<0> - { - static void idct(int* pTemp, const jpgd_block_t* pSrc) - { -#ifdef _MSC_VER - pTemp; pSrc; -#endif - } - }; - - template <> - struct Row<1> - { - static void idct(int* pTemp, const jpgd_block_t* pSrc) - { - const int dcval = (pSrc[0] << PASS1_BITS); - - pTemp[0] = dcval; - pTemp[1] = dcval; - pTemp[2] = dcval; - pTemp[3] = dcval; - pTemp[4] = dcval; - pTemp[5] = dcval; - pTemp[6] = dcval; - pTemp[7] = dcval; - } - }; - - // Compiler creates a fast path 1D IDCT for X non-zero rows - template - struct Col - { - static void idct(uint8* pDst_ptr, const int* pTemp) - { - // ACCESS_ROW() will be optimized at compile time to either an array access, or 0. -#define ACCESS_ROW(x) (((x) < NONZERO_ROWS) ? 
pTemp[x * 8] : 0) - - const int z2 = ACCESS_ROW(2); - const int z3 = ACCESS_ROW(6); - - const int z1 = MULTIPLY(z2 + z3, FIX_0_541196100); - const int tmp2 = z1 + MULTIPLY(z3, - FIX_1_847759065); - const int tmp3 = z1 + MULTIPLY(z2, FIX_0_765366865); - - const int tmp0 = (ACCESS_ROW(0) + ACCESS_ROW(4)) << CONST_BITS; - const int tmp1 = (ACCESS_ROW(0) - ACCESS_ROW(4)) << CONST_BITS; - - const int tmp10 = tmp0 + tmp3, tmp13 = tmp0 - tmp3, tmp11 = tmp1 + tmp2, tmp12 = tmp1 - tmp2; - - const int atmp0 = ACCESS_ROW(7), atmp1 = ACCESS_ROW(5), atmp2 = ACCESS_ROW(3), atmp3 = ACCESS_ROW(1); - - const int bz1 = atmp0 + atmp3, bz2 = atmp1 + atmp2, bz3 = atmp0 + atmp2, bz4 = atmp1 + atmp3; - const int bz5 = MULTIPLY(bz3 + bz4, FIX_1_175875602); - - const int az1 = MULTIPLY(bz1, - FIX_0_899976223); - const int az2 = MULTIPLY(bz2, - FIX_2_562915447); - const int az3 = MULTIPLY(bz3, - FIX_1_961570560) + bz5; - const int az4 = MULTIPLY(bz4, - FIX_0_390180644) + bz5; - - const int btmp0 = MULTIPLY(atmp0, FIX_0_298631336) + az1 + az3; - const int btmp1 = MULTIPLY(atmp1, FIX_2_053119869) + az2 + az4; - const int btmp2 = MULTIPLY(atmp2, FIX_3_072711026) + az2 + az3; - const int btmp3 = MULTIPLY(atmp3, FIX_1_501321110) + az1 + az4; - - int i = DESCALE_ZEROSHIFT(tmp10 + btmp3, CONST_BITS+PASS1_BITS+3); - pDst_ptr[8*0] = (uint8)CLAMP(i); - - i = DESCALE_ZEROSHIFT(tmp10 - btmp3, CONST_BITS+PASS1_BITS+3); - pDst_ptr[8*7] = (uint8)CLAMP(i); - - i = DESCALE_ZEROSHIFT(tmp11 + btmp2, CONST_BITS+PASS1_BITS+3); - pDst_ptr[8*1] = (uint8)CLAMP(i); - - i = DESCALE_ZEROSHIFT(tmp11 - btmp2, CONST_BITS+PASS1_BITS+3); - pDst_ptr[8*6] = (uint8)CLAMP(i); - - i = DESCALE_ZEROSHIFT(tmp12 + btmp1, CONST_BITS+PASS1_BITS+3); - pDst_ptr[8*2] = (uint8)CLAMP(i); - - i = DESCALE_ZEROSHIFT(tmp12 - btmp1, CONST_BITS+PASS1_BITS+3); - pDst_ptr[8*5] = (uint8)CLAMP(i); - - i = DESCALE_ZEROSHIFT(tmp13 + btmp0, CONST_BITS+PASS1_BITS+3); - pDst_ptr[8*3] = (uint8)CLAMP(i); - - i = DESCALE_ZEROSHIFT(tmp13 - btmp0, CONST_BITS+PASS1_BITS+3); - pDst_ptr[8*4] = (uint8)CLAMP(i); - } - }; - - template <> - struct Col<1> - { - static void idct(uint8* pDst_ptr, const int* pTemp) - { - int dcval = DESCALE_ZEROSHIFT(pTemp[0], PASS1_BITS+3); - const uint8 dcval_clamped = (uint8)CLAMP(dcval); - pDst_ptr[0*8] = dcval_clamped; - pDst_ptr[1*8] = dcval_clamped; - pDst_ptr[2*8] = dcval_clamped; - pDst_ptr[3*8] = dcval_clamped; - pDst_ptr[4*8] = dcval_clamped; - pDst_ptr[5*8] = dcval_clamped; - pDst_ptr[6*8] = dcval_clamped; - pDst_ptr[7*8] = dcval_clamped; - } - }; - - static const uint8 s_idct_row_table[] = - { - 1,0,0,0,0,0,0,0, 2,0,0,0,0,0,0,0, 2,1,0,0,0,0,0,0, 2,1,1,0,0,0,0,0, 2,2,1,0,0,0,0,0, 3,2,1,0,0,0,0,0, 4,2,1,0,0,0,0,0, 4,3,1,0,0,0,0,0, - 4,3,2,0,0,0,0,0, 4,3,2,1,0,0,0,0, 4,3,2,1,1,0,0,0, 4,3,2,2,1,0,0,0, 4,3,3,2,1,0,0,0, 4,4,3,2,1,0,0,0, 5,4,3,2,1,0,0,0, 6,4,3,2,1,0,0,0, - 6,5,3,2,1,0,0,0, 6,5,4,2,1,0,0,0, 6,5,4,3,1,0,0,0, 6,5,4,3,2,0,0,0, 6,5,4,3,2,1,0,0, 6,5,4,3,2,1,1,0, 6,5,4,3,2,2,1,0, 6,5,4,3,3,2,1,0, - 6,5,4,4,3,2,1,0, 6,5,5,4,3,2,1,0, 6,6,5,4,3,2,1,0, 7,6,5,4,3,2,1,0, 8,6,5,4,3,2,1,0, 8,7,5,4,3,2,1,0, 8,7,6,4,3,2,1,0, 8,7,6,5,3,2,1,0, - 8,7,6,5,4,2,1,0, 8,7,6,5,4,3,1,0, 8,7,6,5,4,3,2,0, 8,7,6,5,4,3,2,1, 8,7,6,5,4,3,2,2, 8,7,6,5,4,3,3,2, 8,7,6,5,4,4,3,2, 8,7,6,5,5,4,3,2, - 8,7,6,6,5,4,3,2, 8,7,7,6,5,4,3,2, 8,8,7,6,5,4,3,2, 8,8,8,6,5,4,3,2, 8,8,8,7,5,4,3,2, 8,8,8,7,6,4,3,2, 8,8,8,7,6,5,3,2, 8,8,8,7,6,5,4,2, - 8,8,8,7,6,5,4,3, 8,8,8,7,6,5,4,4, 8,8,8,7,6,5,5,4, 8,8,8,7,6,6,5,4, 8,8,8,7,7,6,5,4, 8,8,8,8,7,6,5,4, 8,8,8,8,8,6,5,4, 8,8,8,8,8,7,5,4, - 
8,8,8,8,8,7,6,4, 8,8,8,8,8,7,6,5, 8,8,8,8,8,7,6,6, 8,8,8,8,8,7,7,6, 8,8,8,8,8,8,7,6, 8,8,8,8,8,8,8,6, 8,8,8,8,8,8,8,7, 8,8,8,8,8,8,8,8, - }; - - static const uint8 s_idct_col_table[] = { 1, 1, 2, 3, 3, 3, 3, 3, 3, 4, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 6, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8 }; - - void idct(const jpgd_block_t* pSrc_ptr, uint8* pDst_ptr, int block_max_zag) - { - JPGD_ASSERT(block_max_zag >= 1); - JPGD_ASSERT(block_max_zag <= 64); - - if (block_max_zag == 1) - { - int k = ((pSrc_ptr[0] + 4) >> 3) + 128; - k = CLAMP(k); - k = k | (k<<8); - k = k | (k<<16); - - for (int i = 8; i > 0; i--) - { - *(int*)&pDst_ptr[0] = k; - *(int*)&pDst_ptr[4] = k; - pDst_ptr += 8; - } - return; - } - - int temp[64]; - - const jpgd_block_t* pSrc = pSrc_ptr; - int* pTemp = temp; - - const uint8* pRow_tab = &s_idct_row_table[(block_max_zag - 1) * 8]; - int i; - for (i = 8; i > 0; i--, pRow_tab++) - { - switch (*pRow_tab) - { - case 0: Row<0>::idct(pTemp, pSrc); break; - case 1: Row<1>::idct(pTemp, pSrc); break; - case 2: Row<2>::idct(pTemp, pSrc); break; - case 3: Row<3>::idct(pTemp, pSrc); break; - case 4: Row<4>::idct(pTemp, pSrc); break; - case 5: Row<5>::idct(pTemp, pSrc); break; - case 6: Row<6>::idct(pTemp, pSrc); break; - case 7: Row<7>::idct(pTemp, pSrc); break; - case 8: Row<8>::idct(pTemp, pSrc); break; - } - - pSrc += 8; - pTemp += 8; - } - - pTemp = temp; - - const int nonzero_rows = s_idct_col_table[block_max_zag - 1]; - for (i = 8; i > 0; i--) - { - switch (nonzero_rows) - { - case 1: Col<1>::idct(pDst_ptr, pTemp); break; - case 2: Col<2>::idct(pDst_ptr, pTemp); break; - case 3: Col<3>::idct(pDst_ptr, pTemp); break; - case 4: Col<4>::idct(pDst_ptr, pTemp); break; - case 5: Col<5>::idct(pDst_ptr, pTemp); break; - case 6: Col<6>::idct(pDst_ptr, pTemp); break; - case 7: Col<7>::idct(pDst_ptr, pTemp); break; - case 8: Col<8>::idct(pDst_ptr, pTemp); break; - } - - pTemp++; - pDst_ptr++; - } - } - - void idct_4x4(const jpgd_block_t* pSrc_ptr, uint8* pDst_ptr) - { - int temp[64]; - int* pTemp = temp; - const jpgd_block_t* pSrc = pSrc_ptr; - - for (int i = 4; i > 0; i--) - { - Row<4>::idct(pTemp, pSrc); - pSrc += 8; - pTemp += 8; - } - - pTemp = temp; - for (int i = 8; i > 0; i--) - { - Col<4>::idct(pDst_ptr, pTemp); - pTemp++; - pDst_ptr++; - } - } - - // Retrieve one character from the input stream. - inline uint jpeg_decoder::get_char() - { - // Any bytes remaining in buffer? - if (!m_in_buf_left) - { - // Try to get more bytes. - prep_in_buffer(); - // Still nothing to get? - if (!m_in_buf_left) - { - // Pad the end of the stream with 0xFF 0xD9 (EOI marker) - int t = m_tem_flag; - m_tem_flag ^= 1; - if (t) - return 0xD9; - else - return 0xFF; - } - } - - uint c = *m_pIn_buf_ofs++; - m_in_buf_left--; - - return c; - } - - // Same as previous method, except can indicate if the character is a pad character or not. - inline uint jpeg_decoder::get_char(bool *pPadding_flag) - { - if (!m_in_buf_left) - { - prep_in_buffer(); - if (!m_in_buf_left) - { - *pPadding_flag = true; - int t = m_tem_flag; - m_tem_flag ^= 1; - if (t) - return 0xD9; - else - return 0xFF; - } - } - - *pPadding_flag = false; - - uint c = *m_pIn_buf_ofs++; - m_in_buf_left--; - - return c; - } - - // Inserts a previously retrieved character back into the input buffer. 
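  // The one-byte pushback below (stuff_char) is what lets get_octet() undo a
  // speculative read while honoring JPEG byte stuffing: inside entropy-coded
  // data a literal 0xFF byte is always followed by 0x00. In outline (padding
  // handling omitted; a sketch, not the exact code):
  //
  //   c1 = get_char();
  //   if (c1 != 0xFF)  return c1;                       // ordinary data byte
  //   c2 = get_char();
  //   if (c2 == 0x00)  return 0xFF;                     // stuffed pair -> literal 0xFF
  //   stuff_char(c2); stuff_char(0xFF); return 0xFF;    // real marker: push it back, keep returning 0xFF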
- inline void jpeg_decoder::stuff_char(uint8 q) - { - *(--m_pIn_buf_ofs) = q; - m_in_buf_left++; - } - - // Retrieves one character from the input stream, but does not read past markers. Will continue to return 0xFF when a marker is encountered. - inline uint8 jpeg_decoder::get_octet() - { - bool padding_flag; - int c = get_char(&padding_flag); - - if (c == 0xFF) - { - if (padding_flag) - return 0xFF; - - c = get_char(&padding_flag); - if (padding_flag) - { - stuff_char(0xFF); - return 0xFF; - } - - if (c == 0x00) - return 0xFF; - else - { - stuff_char(static_cast(c)); - stuff_char(0xFF); - return 0xFF; - } - } - - return static_cast(c); - } - - // Retrieves a variable number of bits from the input stream. Does not recognize markers. - inline uint jpeg_decoder::get_bits(int num_bits) - { - if (!num_bits) - return 0; - - uint i = m_bit_buf >> (32 - num_bits); - - if ((m_bits_left -= num_bits) <= 0) - { - m_bit_buf <<= (num_bits += m_bits_left); - - uint c1 = get_char(); - uint c2 = get_char(); - m_bit_buf = (m_bit_buf & 0xFFFF0000) | (c1 << 8) | c2; - - m_bit_buf <<= -m_bits_left; - - m_bits_left += 16; - - JPGD_ASSERT(m_bits_left >= 0); - } - else - m_bit_buf <<= num_bits; - - return i; - } - - // Retrieves a variable number of bits from the input stream. Markers will not be read into the input bit buffer. Instead, an infinite number of all 1's will be returned when a marker is encountered. - inline uint jpeg_decoder::get_bits_no_markers(int num_bits) - { - if (!num_bits) - return 0; - - uint i = m_bit_buf >> (32 - num_bits); - - if ((m_bits_left -= num_bits) <= 0) - { - m_bit_buf <<= (num_bits += m_bits_left); - - if ((m_in_buf_left < 2) || (m_pIn_buf_ofs[0] == 0xFF) || (m_pIn_buf_ofs[1] == 0xFF)) - { - uint c1 = get_octet(); - uint c2 = get_octet(); - m_bit_buf |= (c1 << 8) | c2; - } - else - { - m_bit_buf |= ((uint)m_pIn_buf_ofs[0] << 8) | m_pIn_buf_ofs[1]; - m_in_buf_left -= 2; - m_pIn_buf_ofs += 2; - } - - m_bit_buf <<= -m_bits_left; - - m_bits_left += 16; - - JPGD_ASSERT(m_bits_left >= 0); - } - else - m_bit_buf <<= num_bits; - - return i; - } - - // Decodes a Huffman encoded symbol. - inline int jpeg_decoder::huff_decode(huff_tables *pH) - { - int symbol; - - // Check first 8-bits: do we have a complete symbol? - if ((symbol = pH->look_up[m_bit_buf >> 24]) < 0) - { - // Decode more bits, use a tree traversal to find symbol. - int ofs = 23; - do - { - symbol = pH->tree[-(int)(symbol + ((m_bit_buf >> ofs) & 1))]; - ofs--; - } while (symbol < 0); - - get_bits_no_markers(8 + (23 - ofs)); - } - else - get_bits_no_markers(pH->code_size[symbol]); - - return symbol; - } - - // Decodes a Huffman encoded symbol. - inline int jpeg_decoder::huff_decode(huff_tables *pH, int& extra_bits) - { - int symbol; - - // Check first 8-bits: do we have a complete symbol? - if ((symbol = pH->look_up2[m_bit_buf >> 24]) < 0) - { - // Use a tree traversal to find symbol. - int ofs = 23; - do - { - symbol = pH->tree[-(int)(symbol + ((m_bit_buf >> ofs) & 1))]; - ofs--; - } while (symbol < 0); - - get_bits_no_markers(8 + (23 - ofs)); - - extra_bits = get_bits_no_markers(symbol & 0xF); - } - else - { - JPGD_ASSERT(((symbol >> 8) & 31) == pH->code_size[symbol & 255] + ((symbol & 0x8000) ? 
(symbol & 15) : 0)); - - if (symbol & 0x8000) - { - get_bits_no_markers((symbol >> 8) & 31); - extra_bits = symbol >> 16; - } - else - { - int code_size = (symbol >> 8) & 31; - int num_extra_bits = symbol & 0xF; - int bits = code_size + num_extra_bits; - if (bits <= (m_bits_left + 16)) - extra_bits = get_bits_no_markers(bits) & ((1 << num_extra_bits) - 1); - else - { - get_bits_no_markers(code_size); - extra_bits = get_bits_no_markers(num_extra_bits); - } - } - - symbol &= 0xFF; - } - - return symbol; - } - - // Tables and macro used to fully decode the DPCM differences. - static const int s_extend_test[16] = { 0, 0x0001, 0x0002, 0x0004, 0x0008, 0x0010, 0x0020, 0x0040, 0x0080, 0x0100, 0x0200, 0x0400, 0x0800, 0x1000, 0x2000, 0x4000 }; - static const int s_extend_offset[16] = { 0, -1, -3, -7, -15, -31, -63, -127, -255, -511, -1023, -2047, -4095, -8191, -16383, -32767 }; - static const int s_extend_mask[] = { 0, (1<<0), (1<<1), (1<<2), (1<<3), (1<<4), (1<<5), (1<<6), (1<<7), (1<<8), (1<<9), (1<<10), (1<<11), (1<<12), (1<<13), (1<<14), (1<<15), (1<<16) }; -#define HUFF_EXTEND(x,s) ((x) < s_extend_test[s] ? (x) + s_extend_offset[s] : (x)) - - // Clamps a value between 0-255. - inline uint8 jpeg_decoder::clamp(int i) - { - if (static_cast(i) > 255) - i = (((~i) >> 31) & 0xFF); - - return static_cast(i); - } - - namespace DCT_Upsample - { - struct Matrix44 - { - typedef int Element_Type; - enum { NUM_ROWS = 4, NUM_COLS = 4 }; - - Element_Type v[NUM_ROWS][NUM_COLS]; - - inline int rows() const { return NUM_ROWS; } - inline int cols() const { return NUM_COLS; } - - inline const Element_Type & at(int r, int c) const { return v[r][c]; } - inline Element_Type & at(int r, int c) { return v[r][c]; } - - inline Matrix44() { } - - inline Matrix44& operator += (const Matrix44& a) - { - for (int r = 0; r < NUM_ROWS; r++) - { - at(r, 0) += a.at(r, 0); - at(r, 1) += a.at(r, 1); - at(r, 2) += a.at(r, 2); - at(r, 3) += a.at(r, 3); - } - return *this; - } - - inline Matrix44& operator -= (const Matrix44& a) - { - for (int r = 0; r < NUM_ROWS; r++) - { - at(r, 0) -= a.at(r, 0); - at(r, 1) -= a.at(r, 1); - at(r, 2) -= a.at(r, 2); - at(r, 3) -= a.at(r, 3); - } - return *this; - } - - friend inline Matrix44 operator + (const Matrix44& a, const Matrix44& b) - { - Matrix44 ret; - for (int r = 0; r < NUM_ROWS; r++) - { - ret.at(r, 0) = a.at(r, 0) + b.at(r, 0); - ret.at(r, 1) = a.at(r, 1) + b.at(r, 1); - ret.at(r, 2) = a.at(r, 2) + b.at(r, 2); - ret.at(r, 3) = a.at(r, 3) + b.at(r, 3); - } - return ret; - } - - friend inline Matrix44 operator - (const Matrix44& a, const Matrix44& b) - { - Matrix44 ret; - for (int r = 0; r < NUM_ROWS; r++) - { - ret.at(r, 0) = a.at(r, 0) - b.at(r, 0); - ret.at(r, 1) = a.at(r, 1) - b.at(r, 1); - ret.at(r, 2) = a.at(r, 2) - b.at(r, 2); - ret.at(r, 3) = a.at(r, 3) - b.at(r, 3); - } - return ret; - } - - static inline void add_and_store(jpgd_block_t* pDst, const Matrix44& a, const Matrix44& b) - { - for (int r = 0; r < 4; r++) - { - pDst[0*8 + r] = static_cast(a.at(r, 0) + b.at(r, 0)); - pDst[1*8 + r] = static_cast(a.at(r, 1) + b.at(r, 1)); - pDst[2*8 + r] = static_cast(a.at(r, 2) + b.at(r, 2)); - pDst[3*8 + r] = static_cast(a.at(r, 3) + b.at(r, 3)); - } - } - - static inline void sub_and_store(jpgd_block_t* pDst, const Matrix44& a, const Matrix44& b) - { - for (int r = 0; r < 4; r++) - { - pDst[0*8 + r] = static_cast(a.at(r, 0) - b.at(r, 0)); - pDst[1*8 + r] = static_cast(a.at(r, 1) - b.at(r, 1)); - pDst[2*8 + r] = static_cast(a.at(r, 2) - b.at(r, 2)); - pDst[3*8 + r] = static_cast(a.at(r, 
3) - b.at(r, 3)); - } - } - }; - - const int FRACT_BITS = 10; - const int SCALE = 1 << FRACT_BITS; - - typedef int Temp_Type; -#define D(i) (((i) + (SCALE >> 1)) >> FRACT_BITS) -#define F(i) ((int)((i) * SCALE + .5f)) - - // Any decent C++ compiler will optimize this at compile time to a 0, or an array access. -#define AT(c, r) ((((c)>=NUM_COLS)||((r)>=NUM_ROWS)) ? 0 : pSrc[(c)+(r)*8]) - - // NUM_ROWS/NUM_COLS = # of non-zero rows/cols in input matrix - template - struct P_Q - { - static void calc(Matrix44& P, Matrix44& Q, const jpgd_block_t* pSrc) - { - // 4x8 = 4x8 times 8x8, matrix 0 is constant - const Temp_Type X000 = AT(0, 0); - const Temp_Type X001 = AT(0, 1); - const Temp_Type X002 = AT(0, 2); - const Temp_Type X003 = AT(0, 3); - const Temp_Type X004 = AT(0, 4); - const Temp_Type X005 = AT(0, 5); - const Temp_Type X006 = AT(0, 6); - const Temp_Type X007 = AT(0, 7); - const Temp_Type X010 = D(F(0.415735f) * AT(1, 0) + F(0.791065f) * AT(3, 0) + F(-0.352443f) * AT(5, 0) + F(0.277785f) * AT(7, 0)); - const Temp_Type X011 = D(F(0.415735f) * AT(1, 1) + F(0.791065f) * AT(3, 1) + F(-0.352443f) * AT(5, 1) + F(0.277785f) * AT(7, 1)); - const Temp_Type X012 = D(F(0.415735f) * AT(1, 2) + F(0.791065f) * AT(3, 2) + F(-0.352443f) * AT(5, 2) + F(0.277785f) * AT(7, 2)); - const Temp_Type X013 = D(F(0.415735f) * AT(1, 3) + F(0.791065f) * AT(3, 3) + F(-0.352443f) * AT(5, 3) + F(0.277785f) * AT(7, 3)); - const Temp_Type X014 = D(F(0.415735f) * AT(1, 4) + F(0.791065f) * AT(3, 4) + F(-0.352443f) * AT(5, 4) + F(0.277785f) * AT(7, 4)); - const Temp_Type X015 = D(F(0.415735f) * AT(1, 5) + F(0.791065f) * AT(3, 5) + F(-0.352443f) * AT(5, 5) + F(0.277785f) * AT(7, 5)); - const Temp_Type X016 = D(F(0.415735f) * AT(1, 6) + F(0.791065f) * AT(3, 6) + F(-0.352443f) * AT(5, 6) + F(0.277785f) * AT(7, 6)); - const Temp_Type X017 = D(F(0.415735f) * AT(1, 7) + F(0.791065f) * AT(3, 7) + F(-0.352443f) * AT(5, 7) + F(0.277785f) * AT(7, 7)); - const Temp_Type X020 = AT(4, 0); - const Temp_Type X021 = AT(4, 1); - const Temp_Type X022 = AT(4, 2); - const Temp_Type X023 = AT(4, 3); - const Temp_Type X024 = AT(4, 4); - const Temp_Type X025 = AT(4, 5); - const Temp_Type X026 = AT(4, 6); - const Temp_Type X027 = AT(4, 7); - const Temp_Type X030 = D(F(0.022887f) * AT(1, 0) + F(-0.097545f) * AT(3, 0) + F(0.490393f) * AT(5, 0) + F(0.865723f) * AT(7, 0)); - const Temp_Type X031 = D(F(0.022887f) * AT(1, 1) + F(-0.097545f) * AT(3, 1) + F(0.490393f) * AT(5, 1) + F(0.865723f) * AT(7, 1)); - const Temp_Type X032 = D(F(0.022887f) * AT(1, 2) + F(-0.097545f) * AT(3, 2) + F(0.490393f) * AT(5, 2) + F(0.865723f) * AT(7, 2)); - const Temp_Type X033 = D(F(0.022887f) * AT(1, 3) + F(-0.097545f) * AT(3, 3) + F(0.490393f) * AT(5, 3) + F(0.865723f) * AT(7, 3)); - const Temp_Type X034 = D(F(0.022887f) * AT(1, 4) + F(-0.097545f) * AT(3, 4) + F(0.490393f) * AT(5, 4) + F(0.865723f) * AT(7, 4)); - const Temp_Type X035 = D(F(0.022887f) * AT(1, 5) + F(-0.097545f) * AT(3, 5) + F(0.490393f) * AT(5, 5) + F(0.865723f) * AT(7, 5)); - const Temp_Type X036 = D(F(0.022887f) * AT(1, 6) + F(-0.097545f) * AT(3, 6) + F(0.490393f) * AT(5, 6) + F(0.865723f) * AT(7, 6)); - const Temp_Type X037 = D(F(0.022887f) * AT(1, 7) + F(-0.097545f) * AT(3, 7) + F(0.490393f) * AT(5, 7) + F(0.865723f) * AT(7, 7)); - - // 4x4 = 4x8 times 8x4, matrix 1 is constant - P.at(0, 0) = X000; - P.at(0, 1) = D(X001 * F(0.415735f) + X003 * F(0.791065f) + X005 * F(-0.352443f) + X007 * F(0.277785f)); - P.at(0, 2) = X004; - P.at(0, 3) = D(X001 * F(0.022887f) + X003 * F(-0.097545f) + X005 * 
F(0.490393f) + X007 * F(0.865723f)); - P.at(1, 0) = X010; - P.at(1, 1) = D(X011 * F(0.415735f) + X013 * F(0.791065f) + X015 * F(-0.352443f) + X017 * F(0.277785f)); - P.at(1, 2) = X014; - P.at(1, 3) = D(X011 * F(0.022887f) + X013 * F(-0.097545f) + X015 * F(0.490393f) + X017 * F(0.865723f)); - P.at(2, 0) = X020; - P.at(2, 1) = D(X021 * F(0.415735f) + X023 * F(0.791065f) + X025 * F(-0.352443f) + X027 * F(0.277785f)); - P.at(2, 2) = X024; - P.at(2, 3) = D(X021 * F(0.022887f) + X023 * F(-0.097545f) + X025 * F(0.490393f) + X027 * F(0.865723f)); - P.at(3, 0) = X030; - P.at(3, 1) = D(X031 * F(0.415735f) + X033 * F(0.791065f) + X035 * F(-0.352443f) + X037 * F(0.277785f)); - P.at(3, 2) = X034; - P.at(3, 3) = D(X031 * F(0.022887f) + X033 * F(-0.097545f) + X035 * F(0.490393f) + X037 * F(0.865723f)); - // 40 muls 24 adds - - // 4x4 = 4x8 times 8x4, matrix 1 is constant - Q.at(0, 0) = D(X001 * F(0.906127f) + X003 * F(-0.318190f) + X005 * F(0.212608f) + X007 * F(-0.180240f)); - Q.at(0, 1) = X002; - Q.at(0, 2) = D(X001 * F(-0.074658f) + X003 * F(0.513280f) + X005 * F(0.768178f) + X007 * F(-0.375330f)); - Q.at(0, 3) = X006; - Q.at(1, 0) = D(X011 * F(0.906127f) + X013 * F(-0.318190f) + X015 * F(0.212608f) + X017 * F(-0.180240f)); - Q.at(1, 1) = X012; - Q.at(1, 2) = D(X011 * F(-0.074658f) + X013 * F(0.513280f) + X015 * F(0.768178f) + X017 * F(-0.375330f)); - Q.at(1, 3) = X016; - Q.at(2, 0) = D(X021 * F(0.906127f) + X023 * F(-0.318190f) + X025 * F(0.212608f) + X027 * F(-0.180240f)); - Q.at(2, 1) = X022; - Q.at(2, 2) = D(X021 * F(-0.074658f) + X023 * F(0.513280f) + X025 * F(0.768178f) + X027 * F(-0.375330f)); - Q.at(2, 3) = X026; - Q.at(3, 0) = D(X031 * F(0.906127f) + X033 * F(-0.318190f) + X035 * F(0.212608f) + X037 * F(-0.180240f)); - Q.at(3, 1) = X032; - Q.at(3, 2) = D(X031 * F(-0.074658f) + X033 * F(0.513280f) + X035 * F(0.768178f) + X037 * F(-0.375330f)); - Q.at(3, 3) = X036; - // 40 muls 24 adds - } - }; - - template - struct R_S - { - static void calc(Matrix44& R, Matrix44& S, const jpgd_block_t* pSrc) - { - // 4x8 = 4x8 times 8x8, matrix 0 is constant - const Temp_Type X100 = D(F(0.906127f) * AT(1, 0) + F(-0.318190f) * AT(3, 0) + F(0.212608f) * AT(5, 0) + F(-0.180240f) * AT(7, 0)); - const Temp_Type X101 = D(F(0.906127f) * AT(1, 1) + F(-0.318190f) * AT(3, 1) + F(0.212608f) * AT(5, 1) + F(-0.180240f) * AT(7, 1)); - const Temp_Type X102 = D(F(0.906127f) * AT(1, 2) + F(-0.318190f) * AT(3, 2) + F(0.212608f) * AT(5, 2) + F(-0.180240f) * AT(7, 2)); - const Temp_Type X103 = D(F(0.906127f) * AT(1, 3) + F(-0.318190f) * AT(3, 3) + F(0.212608f) * AT(5, 3) + F(-0.180240f) * AT(7, 3)); - const Temp_Type X104 = D(F(0.906127f) * AT(1, 4) + F(-0.318190f) * AT(3, 4) + F(0.212608f) * AT(5, 4) + F(-0.180240f) * AT(7, 4)); - const Temp_Type X105 = D(F(0.906127f) * AT(1, 5) + F(-0.318190f) * AT(3, 5) + F(0.212608f) * AT(5, 5) + F(-0.180240f) * AT(7, 5)); - const Temp_Type X106 = D(F(0.906127f) * AT(1, 6) + F(-0.318190f) * AT(3, 6) + F(0.212608f) * AT(5, 6) + F(-0.180240f) * AT(7, 6)); - const Temp_Type X107 = D(F(0.906127f) * AT(1, 7) + F(-0.318190f) * AT(3, 7) + F(0.212608f) * AT(5, 7) + F(-0.180240f) * AT(7, 7)); - const Temp_Type X110 = AT(2, 0); - const Temp_Type X111 = AT(2, 1); - const Temp_Type X112 = AT(2, 2); - const Temp_Type X113 = AT(2, 3); - const Temp_Type X114 = AT(2, 4); - const Temp_Type X115 = AT(2, 5); - const Temp_Type X116 = AT(2, 6); - const Temp_Type X117 = AT(2, 7); - const Temp_Type X120 = D(F(-0.074658f) * AT(1, 0) + F(0.513280f) * AT(3, 0) + F(0.768178f) * AT(5, 0) + F(-0.375330f) * AT(7, 0)); - 
const Temp_Type X121 = D(F(-0.074658f) * AT(1, 1) + F(0.513280f) * AT(3, 1) + F(0.768178f) * AT(5, 1) + F(-0.375330f) * AT(7, 1)); - const Temp_Type X122 = D(F(-0.074658f) * AT(1, 2) + F(0.513280f) * AT(3, 2) + F(0.768178f) * AT(5, 2) + F(-0.375330f) * AT(7, 2)); - const Temp_Type X123 = D(F(-0.074658f) * AT(1, 3) + F(0.513280f) * AT(3, 3) + F(0.768178f) * AT(5, 3) + F(-0.375330f) * AT(7, 3)); - const Temp_Type X124 = D(F(-0.074658f) * AT(1, 4) + F(0.513280f) * AT(3, 4) + F(0.768178f) * AT(5, 4) + F(-0.375330f) * AT(7, 4)); - const Temp_Type X125 = D(F(-0.074658f) * AT(1, 5) + F(0.513280f) * AT(3, 5) + F(0.768178f) * AT(5, 5) + F(-0.375330f) * AT(7, 5)); - const Temp_Type X126 = D(F(-0.074658f) * AT(1, 6) + F(0.513280f) * AT(3, 6) + F(0.768178f) * AT(5, 6) + F(-0.375330f) * AT(7, 6)); - const Temp_Type X127 = D(F(-0.074658f) * AT(1, 7) + F(0.513280f) * AT(3, 7) + F(0.768178f) * AT(5, 7) + F(-0.375330f) * AT(7, 7)); - const Temp_Type X130 = AT(6, 0); - const Temp_Type X131 = AT(6, 1); - const Temp_Type X132 = AT(6, 2); - const Temp_Type X133 = AT(6, 3); - const Temp_Type X134 = AT(6, 4); - const Temp_Type X135 = AT(6, 5); - const Temp_Type X136 = AT(6, 6); - const Temp_Type X137 = AT(6, 7); - // 80 muls 48 adds - - // 4x4 = 4x8 times 8x4, matrix 1 is constant - R.at(0, 0) = X100; - R.at(0, 1) = D(X101 * F(0.415735f) + X103 * F(0.791065f) + X105 * F(-0.352443f) + X107 * F(0.277785f)); - R.at(0, 2) = X104; - R.at(0, 3) = D(X101 * F(0.022887f) + X103 * F(-0.097545f) + X105 * F(0.490393f) + X107 * F(0.865723f)); - R.at(1, 0) = X110; - R.at(1, 1) = D(X111 * F(0.415735f) + X113 * F(0.791065f) + X115 * F(-0.352443f) + X117 * F(0.277785f)); - R.at(1, 2) = X114; - R.at(1, 3) = D(X111 * F(0.022887f) + X113 * F(-0.097545f) + X115 * F(0.490393f) + X117 * F(0.865723f)); - R.at(2, 0) = X120; - R.at(2, 1) = D(X121 * F(0.415735f) + X123 * F(0.791065f) + X125 * F(-0.352443f) + X127 * F(0.277785f)); - R.at(2, 2) = X124; - R.at(2, 3) = D(X121 * F(0.022887f) + X123 * F(-0.097545f) + X125 * F(0.490393f) + X127 * F(0.865723f)); - R.at(3, 0) = X130; - R.at(3, 1) = D(X131 * F(0.415735f) + X133 * F(0.791065f) + X135 * F(-0.352443f) + X137 * F(0.277785f)); - R.at(3, 2) = X134; - R.at(3, 3) = D(X131 * F(0.022887f) + X133 * F(-0.097545f) + X135 * F(0.490393f) + X137 * F(0.865723f)); - // 40 muls 24 adds - // 4x4 = 4x8 times 8x4, matrix 1 is constant - S.at(0, 0) = D(X101 * F(0.906127f) + X103 * F(-0.318190f) + X105 * F(0.212608f) + X107 * F(-0.180240f)); - S.at(0, 1) = X102; - S.at(0, 2) = D(X101 * F(-0.074658f) + X103 * F(0.513280f) + X105 * F(0.768178f) + X107 * F(-0.375330f)); - S.at(0, 3) = X106; - S.at(1, 0) = D(X111 * F(0.906127f) + X113 * F(-0.318190f) + X115 * F(0.212608f) + X117 * F(-0.180240f)); - S.at(1, 1) = X112; - S.at(1, 2) = D(X111 * F(-0.074658f) + X113 * F(0.513280f) + X115 * F(0.768178f) + X117 * F(-0.375330f)); - S.at(1, 3) = X116; - S.at(2, 0) = D(X121 * F(0.906127f) + X123 * F(-0.318190f) + X125 * F(0.212608f) + X127 * F(-0.180240f)); - S.at(2, 1) = X122; - S.at(2, 2) = D(X121 * F(-0.074658f) + X123 * F(0.513280f) + X125 * F(0.768178f) + X127 * F(-0.375330f)); - S.at(2, 3) = X126; - S.at(3, 0) = D(X131 * F(0.906127f) + X133 * F(-0.318190f) + X135 * F(0.212608f) + X137 * F(-0.180240f)); - S.at(3, 1) = X132; - S.at(3, 2) = D(X131 * F(-0.074658f) + X133 * F(0.513280f) + X135 * F(0.768178f) + X137 * F(-0.375330f)); - S.at(3, 3) = X136; - // 40 muls 24 adds - } - }; - } // end namespace DCT_Upsample - - // Unconditionally frees all allocated m_blocks. 
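  // Every allocation in the decoder comes from alloc() below, which carves
  // 4-byte-aligned chunks out of a singly linked list of mem_block arenas and
  // mallocs a fresh arena (roughly 32KB) when the current ones are full.
  // Nothing is released piecemeal: this routine walks the list and frees every
  // arena in one pass, which is also what lets stop_decoding() bail out from
  // any depth without leaking memory.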
- void jpeg_decoder::free_all_blocks() - { - m_pStream = NULL; - for (mem_block *b = m_pMem_blocks; b; ) - { - mem_block *n = b->m_pNext; - jpgd_free(b); - b = n; - } - m_pMem_blocks = NULL; - } - - // This method handles all errors. - // It could easily be changed to use C++ exceptions. - void jpeg_decoder::stop_decoding(jpgd_status status) - { - m_error_code = status; - free_all_blocks(); - longjmp(m_jmp_state, status); - - // we shouldn't get here as longjmp shouldn't return, but we put it here to make it explicit - // that this function doesn't return, otherwise we get this error: - // - // error : function declared 'noreturn' should not return - exit(1); - } - - void *jpeg_decoder::alloc(size_t nSize, bool zero) - { - nSize = (JPGD_MAX(nSize, 1) + 3) & ~3; - char *rv = NULL; - for (mem_block *b = m_pMem_blocks; b; b = b->m_pNext) - { - if ((b->m_used_count + nSize) <= b->m_size) - { - rv = b->m_data + b->m_used_count; - b->m_used_count += nSize; - break; - } - } - if (!rv) - { - int capacity = JPGD_MAX(32768 - 256, (nSize + 2047) & ~2047); - mem_block *b = (mem_block*)jpgd_malloc(sizeof(mem_block) + capacity); - if (!b) stop_decoding(JPGD_NOTENOUGHMEM); - b->m_pNext = m_pMem_blocks; m_pMem_blocks = b; - b->m_used_count = nSize; - b->m_size = capacity; - rv = b->m_data; - } - if (zero) memset(rv, 0, nSize); - return rv; - } - - void jpeg_decoder::word_clear(void *p, uint16 c, uint n) - { - uint8 *pD = (uint8*)p; - const uint8 l = c & 0xFF, h = (c >> 8) & 0xFF; - while (n) - { - pD[0] = l; pD[1] = h; pD += 2; - n--; - } - } - - // Refill the input buffer. - // This method will sit in a loop until (A) the buffer is full or (B) - // the stream's read() method reports and end of file condition. - void jpeg_decoder::prep_in_buffer() - { - m_in_buf_left = 0; - m_pIn_buf_ofs = m_in_buf; - - if (m_eof_flag) - return; - - do - { - int bytes_read = m_pStream->read(m_in_buf + m_in_buf_left, JPGD_IN_BUF_SIZE - m_in_buf_left, &m_eof_flag); - if (bytes_read == -1) - stop_decoding(JPGD_STREAM_READ); - - m_in_buf_left += bytes_read; - } while ((m_in_buf_left < JPGD_IN_BUF_SIZE) && (!m_eof_flag)); - - m_total_bytes_read += m_in_buf_left; - - // Pad the end of the block with M_EOI (prevents the decompressor from going off the rails if the stream is invalid). - // (This dates way back to when this decompressor was written in C/asm, and the all-asm Huffman decoder did some fancy things to increase perf.) - word_clear(m_pIn_buf_ofs + m_in_buf_left, 0xD9FF, 64); - } - - // Read a Huffman code table. 
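  // A DHT segment may carry several tables back to back. Each one is laid out
  // as: one Tc/Th byte (high nibble 1 = AC table, 0 = DC table; low nibble =
  // table id), sixteen bytes giving the number of codes of each length 1..16,
  // then that many symbol values in code order. The routine below remaps the
  // index so AC tables occupy the upper half of m_huff_num / m_huff_val.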
- void jpeg_decoder::read_dht_marker() - { - int i, index, count; - uint8 huff_num[17]; - uint8 huff_val[256]; - - uint num_left = get_bits(16); - - if (num_left < 2) - stop_decoding(JPGD_BAD_DHT_MARKER); - - num_left -= 2; - - while (num_left) - { - index = get_bits(8); - - huff_num[0] = 0; - - count = 0; - - for (i = 1; i <= 16; i++) - { - huff_num[i] = static_cast(get_bits(8)); - count += huff_num[i]; - } - - if (count > 255) - stop_decoding(JPGD_BAD_DHT_COUNTS); - - for (i = 0; i < count; i++) - huff_val[i] = static_cast(get_bits(8)); - - i = 1 + 16 + count; - - if (num_left < (uint)i) - stop_decoding(JPGD_BAD_DHT_MARKER); - - num_left -= i; - - if ((index & 0x10) > 0x10) - stop_decoding(JPGD_BAD_DHT_INDEX); - - index = (index & 0x0F) + ((index & 0x10) >> 4) * (JPGD_MAX_HUFF_TABLES >> 1); - - if (index >= JPGD_MAX_HUFF_TABLES) - stop_decoding(JPGD_BAD_DHT_INDEX); - - if (!m_huff_num[index]) - m_huff_num[index] = (uint8 *)alloc(17); - - if (!m_huff_val[index]) - m_huff_val[index] = (uint8 *)alloc(256); - - m_huff_ac[index] = (index & 0x10) != 0; - memcpy(m_huff_num[index], huff_num, 17); - memcpy(m_huff_val[index], huff_val, 256); - } - } - - // Read a quantization table. - void jpeg_decoder::read_dqt_marker() - { - int n, i, prec; - uint num_left; - uint temp; - - num_left = get_bits(16); - - if (num_left < 2) - stop_decoding(JPGD_BAD_DQT_MARKER); - - num_left -= 2; - - while (num_left) - { - n = get_bits(8); - prec = n >> 4; - n &= 0x0F; - - if (n >= JPGD_MAX_QUANT_TABLES) - stop_decoding(JPGD_BAD_DQT_TABLE); - - if (!m_quant[n]) - m_quant[n] = (jpgd_quant_t *)alloc(64 * sizeof(jpgd_quant_t)); - - // read quantization entries, in zag order - for (i = 0; i < 64; i++) - { - temp = get_bits(8); - - if (prec) - temp = (temp << 8) + get_bits(8); - - m_quant[n][i] = static_cast(temp); - } - - i = 64 + 1; - - if (prec) - i += 64; - - if (num_left < (uint)i) - stop_decoding(JPGD_BAD_DQT_LENGTH); - - num_left -= i; - } - } - - // Read the start of frame (SOF) marker. - void jpeg_decoder::read_sof_marker() - { - int i; - uint num_left; - - num_left = get_bits(16); - - if (get_bits(8) != 8) /* precision: sorry, only 8-bit precision is supported right now */ - stop_decoding(JPGD_BAD_PRECISION); - - m_image_y_size = get_bits(16); - - if ((m_image_y_size < 1) || (m_image_y_size > JPGD_MAX_HEIGHT)) - stop_decoding(JPGD_BAD_HEIGHT); - - m_image_x_size = get_bits(16); - - if ((m_image_x_size < 1) || (m_image_x_size > JPGD_MAX_WIDTH)) - stop_decoding(JPGD_BAD_WIDTH); - - m_comps_in_frame = get_bits(8); - - if (m_comps_in_frame > JPGD_MAX_COMPONENTS) - stop_decoding(JPGD_TOO_MANY_COMPONENTS); - - if (num_left != (uint)(m_comps_in_frame * 3 + 8)) - stop_decoding(JPGD_BAD_SOF_LENGTH); - - for (i = 0; i < m_comps_in_frame; i++) - { - m_comp_ident[i] = get_bits(8); - m_comp_h_samp[i] = get_bits(4); - m_comp_v_samp[i] = get_bits(4); - m_comp_quant[i] = get_bits(8); - } - } - - // Used to skip unrecognized markers. - void jpeg_decoder::skip_variable_marker() - { - uint num_left; - - num_left = get_bits(16); - - if (num_left < 2) - stop_decoding(JPGD_BAD_VARIABLE_MARKER); - - num_left -= 2; - - while (num_left) - { - get_bits(8); - num_left--; - } - } - - // Read a define restart interval (DRI) marker. - void jpeg_decoder::read_dri_marker() - { - if (get_bits(16) != 4) - stop_decoding(JPGD_BAD_DRI_LENGTH); - - m_restart_interval = get_bits(16); - } - - // Read a start of scan (SOS) marker. 
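  // The SOS payload is: a 2-byte length, a 1-byte component count, then for
  // each component its id plus one byte whose high nibble picks the DC Huffman
  // table and whose low nibble picks the AC table, followed by the spectral
  // selection start/end and the successive-approximation high/low nibbles.
  // The progressive fields only matter for progressive scans; for baseline
  // images the code below forces the spectral range back to 0..63.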
- void jpeg_decoder::read_sos_marker() - { - uint num_left; - int i, ci, n, c, cc; - - num_left = get_bits(16); - - n = get_bits(8); - - m_comps_in_scan = n; - - num_left -= 3; - - if ( (num_left != (uint)(n * 2 + 3)) || (n < 1) || (n > JPGD_MAX_COMPS_IN_SCAN) ) - stop_decoding(JPGD_BAD_SOS_LENGTH); - - for (i = 0; i < n; i++) - { - cc = get_bits(8); - c = get_bits(8); - num_left -= 2; - - for (ci = 0; ci < m_comps_in_frame; ci++) - if (cc == m_comp_ident[ci]) - break; - - if (ci >= m_comps_in_frame) - stop_decoding(JPGD_BAD_SOS_COMP_ID); - - m_comp_list[i] = ci; - m_comp_dc_tab[ci] = (c >> 4) & 15; - m_comp_ac_tab[ci] = (c & 15) + (JPGD_MAX_HUFF_TABLES >> 1); - } - - m_spectral_start = get_bits(8); - m_spectral_end = get_bits(8); - m_successive_high = get_bits(4); - m_successive_low = get_bits(4); - - if (!m_progressive_flag) - { - m_spectral_start = 0; - m_spectral_end = 63; - } - - num_left -= 3; - - while (num_left) /* read past whatever is num_left */ - { - get_bits(8); - num_left--; - } - } - - // Finds the next marker. - int jpeg_decoder::next_marker() - { - uint c, bytes; - - bytes = 0; - - do - { - do - { - bytes++; - c = get_bits(8); - } while (c != 0xFF); - - do - { - c = get_bits(8); - } while (c == 0xFF); - - } while (c == 0); - - // If bytes > 0 here, there where extra bytes before the marker (not good). - - return c; - } - - // Process markers. Returns when an SOFx, SOI, EOI, or SOS marker is - // encountered. - int jpeg_decoder::process_markers() - { - int c; - - for ( ; ; ) - { - c = next_marker(); - - switch (c) - { - case M_SOF0: - case M_SOF1: - case M_SOF2: - case M_SOF3: - case M_SOF5: - case M_SOF6: - case M_SOF7: - // case M_JPG: - case M_SOF9: - case M_SOF10: - case M_SOF11: - case M_SOF13: - case M_SOF14: - case M_SOF15: - case M_SOI: - case M_EOI: - case M_SOS: - { - return c; - } - case M_DHT: - { - read_dht_marker(); - break; - } - // No arithmitic support - dumb patents! - case M_DAC: - { - stop_decoding(JPGD_NO_ARITHMITIC_SUPPORT); - break; - } - case M_DQT: - { - read_dqt_marker(); - break; - } - case M_DRI: - { - read_dri_marker(); - break; - } - //case M_APP0: /* no need to read the JFIF marker */ - - case M_JPG: - case M_RST0: /* no parameters */ - case M_RST1: - case M_RST2: - case M_RST3: - case M_RST4: - case M_RST5: - case M_RST6: - case M_RST7: - case M_TEM: - { - stop_decoding(JPGD_UNEXPECTED_MARKER); - break; - } - default: /* must be DNL, DHP, EXP, APPn, JPGn, COM, or RESn or APP0 */ - { - skip_variable_marker(); - break; - } - } - } - } - - // Finds the start of image (SOI) marker. - // This code is rather defensive: it only checks the first 512 bytes to avoid - // false positives. - void jpeg_decoder::locate_soi_marker() - { - uint lastchar, thischar; - uint bytesleft; - - lastchar = get_bits(8); - - thischar = get_bits(8); - - /* ok if it's a normal JPEG file without a special header */ - - if ((lastchar == 0xFF) && (thischar == M_SOI)) - return; - - bytesleft = 4096; //512; - - for ( ; ; ) - { - if (--bytesleft == 0) - stop_decoding(JPGD_NOT_JPEG); - - lastchar = thischar; - - thischar = get_bits(8); - - if (lastchar == 0xFF) - { - if (thischar == M_SOI) - break; - else if (thischar == M_EOI) // get_bits will keep returning M_EOI if we read past the end - stop_decoding(JPGD_NOT_JPEG); - } - } - - // Check the next character after marker: if it's not 0xFF, it can't be the start of the next marker, so the file is bad. 
- thischar = (m_bit_buf >> 24) & 0xFF; - - if (thischar != 0xFF) - stop_decoding(JPGD_NOT_JPEG); - } - - // Find a start of frame (SOF) marker. - void jpeg_decoder::locate_sof_marker() - { - locate_soi_marker(); - - int c = process_markers(); - - switch (c) - { - case M_SOF2: - m_progressive_flag = JPGD_TRUE; - case M_SOF0: /* baseline DCT */ - case M_SOF1: /* extended sequential DCT */ - { - read_sof_marker(); - break; - } - case M_SOF9: /* Arithmitic coding */ - { - stop_decoding(JPGD_NO_ARITHMITIC_SUPPORT); - break; - } - default: - { - stop_decoding(JPGD_UNSUPPORTED_MARKER); - break; - } - } - } - - // Find a start of scan (SOS) marker. - int jpeg_decoder::locate_sos_marker() - { - int c; - - c = process_markers(); - - if (c == M_EOI) - return JPGD_FALSE; - else if (c != M_SOS) - stop_decoding(JPGD_UNEXPECTED_MARKER); - - read_sos_marker(); - - return JPGD_TRUE; - } - - // Reset everything to default/uninitialized state. - void jpeg_decoder::init(jpeg_decoder_stream *pStream) - { - m_pMem_blocks = NULL; - m_error_code = JPGD_SUCCESS; - m_ready_flag = false; - m_image_x_size = m_image_y_size = 0; - m_pStream = pStream; - m_progressive_flag = JPGD_FALSE; - - memset(m_huff_ac, 0, sizeof(m_huff_ac)); - memset(m_huff_num, 0, sizeof(m_huff_num)); - memset(m_huff_val, 0, sizeof(m_huff_val)); - memset(m_quant, 0, sizeof(m_quant)); - - m_scan_type = 0; - m_comps_in_frame = 0; - - memset(m_comp_h_samp, 0, sizeof(m_comp_h_samp)); - memset(m_comp_v_samp, 0, sizeof(m_comp_v_samp)); - memset(m_comp_quant, 0, sizeof(m_comp_quant)); - memset(m_comp_ident, 0, sizeof(m_comp_ident)); - memset(m_comp_h_blocks, 0, sizeof(m_comp_h_blocks)); - memset(m_comp_v_blocks, 0, sizeof(m_comp_v_blocks)); - - m_comps_in_scan = 0; - memset(m_comp_list, 0, sizeof(m_comp_list)); - memset(m_comp_dc_tab, 0, sizeof(m_comp_dc_tab)); - memset(m_comp_ac_tab, 0, sizeof(m_comp_ac_tab)); - - m_spectral_start = 0; - m_spectral_end = 0; - m_successive_low = 0; - m_successive_high = 0; - m_max_mcu_x_size = 0; - m_max_mcu_y_size = 0; - m_blocks_per_mcu = 0; - m_max_blocks_per_row = 0; - m_mcus_per_row = 0; - m_mcus_per_col = 0; - m_expanded_blocks_per_component = 0; - m_expanded_blocks_per_mcu = 0; - m_expanded_blocks_per_row = 0; - m_freq_domain_chroma_upsample = false; - - memset(m_mcu_org, 0, sizeof(m_mcu_org)); - - m_total_lines_left = 0; - m_mcu_lines_left = 0; - m_real_dest_bytes_per_scan_line = 0; - m_dest_bytes_per_scan_line = 0; - m_dest_bytes_per_pixel = 0; - - memset(m_pHuff_tabs, 0, sizeof(m_pHuff_tabs)); - - memset(m_dc_coeffs, 0, sizeof(m_dc_coeffs)); - memset(m_ac_coeffs, 0, sizeof(m_ac_coeffs)); - memset(m_block_y_mcu, 0, sizeof(m_block_y_mcu)); - - m_eob_run = 0; - - memset(m_block_y_mcu, 0, sizeof(m_block_y_mcu)); - - m_pIn_buf_ofs = m_in_buf; - m_in_buf_left = 0; - m_eof_flag = false; - m_tem_flag = 0; - - memset(m_in_buf_pad_start, 0, sizeof(m_in_buf_pad_start)); - memset(m_in_buf, 0, sizeof(m_in_buf)); - memset(m_in_buf_pad_end, 0, sizeof(m_in_buf_pad_end)); - - m_restart_interval = 0; - m_restarts_left = 0; - m_next_restart_num = 0; - - m_max_mcus_per_row = 0; - m_max_blocks_per_mcu = 0; - m_max_mcus_per_col = 0; - - memset(m_last_dc_val, 0, sizeof(m_last_dc_val)); - m_pMCU_coefficients = NULL; - m_pSample_buf = NULL; - - m_total_bytes_read = 0; - - m_pScan_line_0 = NULL; - m_pScan_line_1 = NULL; - - // Ready the input buffer. - prep_in_buffer(); - - // Prime the bit buffer. 
- m_bits_left = 16; - m_bit_buf = 0; - - get_bits(16); - get_bits(16); - - for (int i = 0; i < JPGD_MAX_BLOCKS_PER_MCU; i++) - m_mcu_block_max_zag[i] = 64; - } - -#define SCALEBITS 16 -#define ONE_HALF ((int) 1 << (SCALEBITS-1)) -#define FIX(x) ((int) ((x) * (1L<> SCALEBITS; - m_cbb[i] = ( FIX(1.77200f) * k + ONE_HALF) >> SCALEBITS; - m_crg[i] = (-FIX(0.71414f)) * k; - m_cbg[i] = (-FIX(0.34414f)) * k + ONE_HALF; - } - } - - // This method throws back into the stream any bytes that where read - // into the bit buffer during initial marker scanning. - void jpeg_decoder::fix_in_buffer() - { - // In case any 0xFF's where pulled into the buffer during marker scanning. - JPGD_ASSERT((m_bits_left & 7) == 0); - - if (m_bits_left == 16) - stuff_char( (uint8)(m_bit_buf & 0xFF)); - - if (m_bits_left >= 8) - stuff_char( (uint8)((m_bit_buf >> 8) & 0xFF)); - - stuff_char((uint8)((m_bit_buf >> 16) & 0xFF)); - stuff_char((uint8)((m_bit_buf >> 24) & 0xFF)); - - m_bits_left = 16; - get_bits_no_markers(16); - get_bits_no_markers(16); - } - - void jpeg_decoder::transform_mcu(int mcu_row) - { - jpgd_block_t* pSrc_ptr = m_pMCU_coefficients; - uint8* pDst_ptr = m_pSample_buf + mcu_row * m_blocks_per_mcu * 64; - - for (int mcu_block = 0; mcu_block < m_blocks_per_mcu; mcu_block++) - { - idct(pSrc_ptr, pDst_ptr, m_mcu_block_max_zag[mcu_block]); - pSrc_ptr += 64; - pDst_ptr += 64; - } - } - - static const uint8 s_max_rc[64] = - { - 17, 18, 34, 50, 50, 51, 52, 52, 52, 68, 84, 84, 84, 84, 85, 86, 86, 86, 86, 86, - 102, 118, 118, 118, 118, 118, 118, 119, 120, 120, 120, 120, 120, 120, 120, 136, - 136, 136, 136, 136, 136, 136, 136, 136, 136, 136, 136, 136, 136, 136, 136, 136, - 136, 136, 136, 136, 136, 136, 136, 136, 136, 136, 136, 136 - }; - - void jpeg_decoder::transform_mcu_expand(int mcu_row) - { - jpgd_block_t* pSrc_ptr = m_pMCU_coefficients; - uint8* pDst_ptr = m_pSample_buf + mcu_row * m_expanded_blocks_per_mcu * 64; - - // Y IDCT - int mcu_block; - for (mcu_block = 0; mcu_block < m_expanded_blocks_per_component; mcu_block++) - { - idct(pSrc_ptr, pDst_ptr, m_mcu_block_max_zag[mcu_block]); - pSrc_ptr += 64; - pDst_ptr += 64; - } - - // Chroma IDCT, with upsampling - jpgd_block_t temp_block[64]; - - for (int i = 0; i < 2; i++) - { - DCT_Upsample::Matrix44 P, Q, R, S; - - JPGD_ASSERT(m_mcu_block_max_zag[mcu_block] >= 1); - JPGD_ASSERT(m_mcu_block_max_zag[mcu_block] <= 64); - - switch (s_max_rc[m_mcu_block_max_zag[mcu_block++] - 1]) - { - case 1*16+1: - DCT_Upsample::P_Q<1, 1>::calc(P, Q, pSrc_ptr); - DCT_Upsample::R_S<1, 1>::calc(R, S, pSrc_ptr); - break; - case 1*16+2: - DCT_Upsample::P_Q<1, 2>::calc(P, Q, pSrc_ptr); - DCT_Upsample::R_S<1, 2>::calc(R, S, pSrc_ptr); - break; - case 2*16+2: - DCT_Upsample::P_Q<2, 2>::calc(P, Q, pSrc_ptr); - DCT_Upsample::R_S<2, 2>::calc(R, S, pSrc_ptr); - break; - case 3*16+2: - DCT_Upsample::P_Q<3, 2>::calc(P, Q, pSrc_ptr); - DCT_Upsample::R_S<3, 2>::calc(R, S, pSrc_ptr); - break; - case 3*16+3: - DCT_Upsample::P_Q<3, 3>::calc(P, Q, pSrc_ptr); - DCT_Upsample::R_S<3, 3>::calc(R, S, pSrc_ptr); - break; - case 3*16+4: - DCT_Upsample::P_Q<3, 4>::calc(P, Q, pSrc_ptr); - DCT_Upsample::R_S<3, 4>::calc(R, S, pSrc_ptr); - break; - case 4*16+4: - DCT_Upsample::P_Q<4, 4>::calc(P, Q, pSrc_ptr); - DCT_Upsample::R_S<4, 4>::calc(R, S, pSrc_ptr); - break; - case 5*16+4: - DCT_Upsample::P_Q<5, 4>::calc(P, Q, pSrc_ptr); - DCT_Upsample::R_S<5, 4>::calc(R, S, pSrc_ptr); - break; - case 5*16+5: - DCT_Upsample::P_Q<5, 5>::calc(P, Q, pSrc_ptr); - DCT_Upsample::R_S<5, 5>::calc(R, S, pSrc_ptr); - break; - 
case 5*16+6: - DCT_Upsample::P_Q<5, 6>::calc(P, Q, pSrc_ptr); - DCT_Upsample::R_S<5, 6>::calc(R, S, pSrc_ptr); - break; - case 6*16+6: - DCT_Upsample::P_Q<6, 6>::calc(P, Q, pSrc_ptr); - DCT_Upsample::R_S<6, 6>::calc(R, S, pSrc_ptr); - break; - case 7*16+6: - DCT_Upsample::P_Q<7, 6>::calc(P, Q, pSrc_ptr); - DCT_Upsample::R_S<7, 6>::calc(R, S, pSrc_ptr); - break; - case 7*16+7: - DCT_Upsample::P_Q<7, 7>::calc(P, Q, pSrc_ptr); - DCT_Upsample::R_S<7, 7>::calc(R, S, pSrc_ptr); - break; - case 7*16+8: - DCT_Upsample::P_Q<7, 8>::calc(P, Q, pSrc_ptr); - DCT_Upsample::R_S<7, 8>::calc(R, S, pSrc_ptr); - break; - case 8*16+8: - DCT_Upsample::P_Q<8, 8>::calc(P, Q, pSrc_ptr); - DCT_Upsample::R_S<8, 8>::calc(R, S, pSrc_ptr); - break; - default: - JPGD_ASSERT(false); - } - - DCT_Upsample::Matrix44 a(P + Q); P -= Q; - DCT_Upsample::Matrix44& b = P; - DCT_Upsample::Matrix44 c(R + S); R -= S; - DCT_Upsample::Matrix44& d = R; - - DCT_Upsample::Matrix44::add_and_store(temp_block, a, c); - idct_4x4(temp_block, pDst_ptr); - pDst_ptr += 64; - - DCT_Upsample::Matrix44::sub_and_store(temp_block, a, c); - idct_4x4(temp_block, pDst_ptr); - pDst_ptr += 64; - - DCT_Upsample::Matrix44::add_and_store(temp_block, b, d); - idct_4x4(temp_block, pDst_ptr); - pDst_ptr += 64; - - DCT_Upsample::Matrix44::sub_and_store(temp_block, b, d); - idct_4x4(temp_block, pDst_ptr); - pDst_ptr += 64; - - pSrc_ptr += 64; - } - } - - // Loads and dequantizes the next row of (already decoded) coefficients. - // Progressive images only. - void jpeg_decoder::load_next_row() - { - int i; - jpgd_block_t *p; - jpgd_quant_t *q; - int mcu_row, mcu_block, row_block = 0; - int component_num, component_id; - int block_x_mcu[JPGD_MAX_COMPONENTS]; - - memset(block_x_mcu, 0, JPGD_MAX_COMPONENTS * sizeof(int)); - - for (mcu_row = 0; mcu_row < m_mcus_per_row; mcu_row++) - { - int block_x_mcu_ofs = 0, block_y_mcu_ofs = 0; - - for (mcu_block = 0; mcu_block < m_blocks_per_mcu; mcu_block++) - { - component_id = m_mcu_org[mcu_block]; - q = m_quant[m_comp_quant[component_id]]; - - p = m_pMCU_coefficients + 64 * mcu_block; - - jpgd_block_t* pAC = coeff_buf_getp(m_ac_coeffs[component_id], block_x_mcu[component_id] + block_x_mcu_ofs, m_block_y_mcu[component_id] + block_y_mcu_ofs); - jpgd_block_t* pDC = coeff_buf_getp(m_dc_coeffs[component_id], block_x_mcu[component_id] + block_x_mcu_ofs, m_block_y_mcu[component_id] + block_y_mcu_ofs); - p[0] = pDC[0]; - memcpy(&p[1], &pAC[1], 63 * sizeof(jpgd_block_t)); - - for (i = 63; i > 0; i--) - if (p[g_ZAG[i]]) - break; - - m_mcu_block_max_zag[mcu_block] = i + 1; - - for ( ; i >= 0; i--) - if (p[g_ZAG[i]]) - p[g_ZAG[i]] = static_cast(p[g_ZAG[i]] * q[i]); - - row_block++; - - if (m_comps_in_scan == 1) - block_x_mcu[component_id]++; - else - { - if (++block_x_mcu_ofs == m_comp_h_samp[component_id]) - { - block_x_mcu_ofs = 0; - - if (++block_y_mcu_ofs == m_comp_v_samp[component_id]) - { - block_y_mcu_ofs = 0; - - block_x_mcu[component_id] += m_comp_h_samp[component_id]; - } - } - } - } - - if (m_freq_domain_chroma_upsample) - transform_mcu_expand(mcu_row); - else - transform_mcu(mcu_row); - } - - if (m_comps_in_scan == 1) - m_block_y_mcu[m_comp_list[0]]++; - else - { - for (component_num = 0; component_num < m_comps_in_scan; component_num++) - { - component_id = m_comp_list[component_num]; - - m_block_y_mcu[component_id] += m_comp_v_samp[component_id]; - } - } - } - - // Restart interval processing. 
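When a JPEG stream is written with a restart interval, the encoder emits one of the eight restart markers RST0 through RST7 (bytes 0xFFD0 to 0xFFD7) after every `restart_interval` MCUs; the marker index cycles modulo 8, the DC predictors of every component reset to zero, and any pending end-of-band run is cleared. The core of that bookkeeping, stripped of the marker scanning, error handling and bit-buffer repriming done by `process_restart()` below, looks roughly like this (the `check_restart` helper is illustrative, not part of jpgd):

```cpp
// Sketch: validate the next expected restart marker and reset DC predictions.
#include <cstring>

enum { M_RST0 = 0xD0 };   // RST0..RST7 are 0xD0..0xD7

bool check_restart(int marker, int& next_restart_num,
                   int last_dc_val[], int comps_in_frame)
{
    if (marker != (M_RST0 + next_restart_num))
        return false;                                             // missing or corrupt marker

    std::memset(last_dc_val, 0, comps_in_frame * sizeof(int));    // reset each component's DC predictor
    next_restart_num = (next_restart_num + 1) & 7;                // markers cycle RST0..RST7
    return true;
}
```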
- void jpeg_decoder::process_restart() - { - int i; - int c = 0; - - // Align to a byte boundry - // FIXME: Is this really necessary? get_bits_no_markers() never reads in markers! - //get_bits_no_markers(m_bits_left & 7); - - // Let's scan a little bit to find the marker, but not _too_ far. - // 1536 is a "fudge factor" that determines how much to scan. - for (i = 1536; i > 0; i--) - if (get_char() == 0xFF) - break; - - if (i == 0) - stop_decoding(JPGD_BAD_RESTART_MARKER); - - for ( ; i > 0; i--) - if ((c = get_char()) != 0xFF) - break; - - if (i == 0) - stop_decoding(JPGD_BAD_RESTART_MARKER); - - // Is it the expected marker? If not, something bad happened. - if (c != (m_next_restart_num + M_RST0)) - stop_decoding(JPGD_BAD_RESTART_MARKER); - - // Reset each component's DC prediction values. - memset(&m_last_dc_val, 0, m_comps_in_frame * sizeof(uint)); - - m_eob_run = 0; - - m_restarts_left = m_restart_interval; - - m_next_restart_num = (m_next_restart_num + 1) & 7; - - // Get the bit buffer going again... - - m_bits_left = 16; - get_bits_no_markers(16); - get_bits_no_markers(16); - } - - static inline int dequantize_ac(int c, int q) { c *= q; return c; } - - // Decodes and dequantizes the next row of coefficients. - void jpeg_decoder::decode_next_row() - { - int row_block = 0; - - for (int mcu_row = 0; mcu_row < m_mcus_per_row; mcu_row++) - { - if ((m_restart_interval) && (m_restarts_left == 0)) - process_restart(); - - jpgd_block_t* p = m_pMCU_coefficients; - for (int mcu_block = 0; mcu_block < m_blocks_per_mcu; mcu_block++, p += 64) - { - int component_id = m_mcu_org[mcu_block]; - jpgd_quant_t* q = m_quant[m_comp_quant[component_id]]; - - int r, s; - s = huff_decode(m_pHuff_tabs[m_comp_dc_tab[component_id]], r); - s = HUFF_EXTEND(r, s); - - m_last_dc_val[component_id] = (s += m_last_dc_val[component_id]); - - p[0] = static_cast(s * q[0]); - - int prev_num_set = m_mcu_block_max_zag[mcu_block]; - - huff_tables *pH = m_pHuff_tabs[m_comp_ac_tab[component_id]]; - - int k; - for (k = 1; k < 64; k++) - { - int extra_bits; - s = huff_decode(pH, extra_bits); - - r = s >> 4; - s &= 15; - - if (s) - { - if (r) - { - if ((k + r) > 63) - stop_decoding(JPGD_DECODE_ERROR); - - if (k < prev_num_set) - { - int n = JPGD_MIN(r, prev_num_set - k); - int kt = k; - while (n--) - p[g_ZAG[kt++]] = 0; - } - - k += r; - } - - s = HUFF_EXTEND(extra_bits, s); - - JPGD_ASSERT(k < 64); - - p[g_ZAG[k]] = static_cast(dequantize_ac(s, q[k])); //s * q[k]; - } - else - { - if (r == 15) - { - if ((k + 16) > 64) - stop_decoding(JPGD_DECODE_ERROR); - - if (k < prev_num_set) - { - int n = JPGD_MIN(16, prev_num_set - k); - int kt = k; - while (n--) - { - JPGD_ASSERT(kt <= 63); - p[g_ZAG[kt++]] = 0; - } - } - - k += 16 - 1; // - 1 because the loop counter is k - // BEGIN EPIC MOD - JPGD_ASSERT(k < 64 && p[g_ZAG[k]] == 0); - // END EPIC MOD - } - else - break; - } - } - - if (k < prev_num_set) - { - int kt = k; - while (kt < prev_num_set) - p[g_ZAG[kt++]] = 0; - } - - m_mcu_block_max_zag[mcu_block] = k; - - row_block++; - } - - if (m_freq_domain_chroma_upsample) - transform_mcu_expand(mcu_row); - else - transform_mcu(mcu_row); - - m_restarts_left--; - } - } - - // YCbCr H1V1 (1x1:1:1, 3 m_blocks per MCU) to RGB - void jpeg_decoder::H1V1Convert() - { - int row = m_max_mcu_y_size - m_mcu_lines_left; - uint8 *d = m_pScan_line_0; - uint8 *s = m_pSample_buf + row * 8; - - for (int i = m_max_mcus_per_row; i > 0; i--) - { - for (int j = 0; j < 8; j++) - { - int y = s[j]; - int cb = s[64+j]; - int cr = s[128+j]; - - if (jpg_format == 
ERGBFormatJPG::BGRA) - { - d[0] = clamp(y + m_cbb[cb]); - d[1] = clamp(y + ((m_crg[cr] + m_cbg[cb]) >> 16)); - d[2] = clamp(y + m_crr[cr]); - d[3] = 255; - } - else - { - d[0] = clamp(y + m_crr[cr]); - d[1] = clamp(y + ((m_crg[cr] + m_cbg[cb]) >> 16)); - d[2] = clamp(y + m_cbb[cb]); - d[3] = 255; - } - d += 4; - } - - s += 64*3; - } - } - - // YCbCr H2V1 (2x1:1:1, 4 m_blocks per MCU) to RGB - void jpeg_decoder::H2V1Convert() - { - int row = m_max_mcu_y_size - m_mcu_lines_left; - uint8 *d0 = m_pScan_line_0; - uint8 *y = m_pSample_buf + row * 8; - uint8 *c = m_pSample_buf + 2*64 + row * 8; - - for (int i = m_max_mcus_per_row; i > 0; i--) - { - for (int l = 0; l < 2; l++) - { - for (int j = 0; j < 4; j++) - { - int cb = c[0]; - int cr = c[64]; - - int rc = m_crr[cr]; - int gc = ((m_crg[cr] + m_cbg[cb]) >> 16); - int bc = m_cbb[cb]; - - int yy = y[j<<1]; - if (jpg_format == ERGBFormatJPG::BGRA) - { - d0[0] = clamp(yy+bc); - d0[1] = clamp(yy+gc); - d0[2] = clamp(yy+rc); - d0[3] = 255; - yy = y[(j<<1)+1]; - d0[4] = clamp(yy+bc); - d0[5] = clamp(yy+gc); - d0[6] = clamp(yy+rc); - d0[7] = 255; - } - else - { - d0[0] = clamp(yy+rc); - d0[1] = clamp(yy+gc); - d0[2] = clamp(yy+bc); - d0[3] = 255; - yy = y[(j<<1)+1]; - d0[4] = clamp(yy+rc); - d0[5] = clamp(yy+gc); - d0[6] = clamp(yy+bc); - d0[7] = 255; - } - - d0 += 8; - - c++; - } - y += 64; - } - - y += 64*4 - 64*2; - c += 64*4 - 8; - } - } - - // YCbCr H2V1 (1x2:1:1, 4 m_blocks per MCU) to RGB - void jpeg_decoder::H1V2Convert() - { - int row = m_max_mcu_y_size - m_mcu_lines_left; - uint8 *d0 = m_pScan_line_0; - uint8 *d1 = m_pScan_line_1; - uint8 *y; - uint8 *c; - - if (row < 8) - y = m_pSample_buf + row * 8; - else - y = m_pSample_buf + 64*1 + (row & 7) * 8; - - c = m_pSample_buf + 64*2 + (row >> 1) * 8; - - for (int i = m_max_mcus_per_row; i > 0; i--) - { - for (int j = 0; j < 8; j++) - { - int cb = c[0+j]; - int cr = c[64+j]; - - int rc = m_crr[cr]; - int gc = ((m_crg[cr] + m_cbg[cb]) >> 16); - int bc = m_cbb[cb]; - - int yy = y[j]; - if (jpg_format == ERGBFormatJPG::BGRA) - { - d0[0] = clamp(yy+bc); - d0[1] = clamp(yy+gc); - d0[2] = clamp(yy+rc); - d0[3] = 255; - yy = y[8+j]; - d1[0] = clamp(yy+bc); - d1[1] = clamp(yy+gc); - d1[2] = clamp(yy+rc); - d1[3] = 255; - } - else - { - d0[0] = clamp(yy+rc); - d0[1] = clamp(yy+gc); - d0[2] = clamp(yy+bc); - d0[3] = 255; - yy = y[8+j]; - d1[0] = clamp(yy+rc); - d1[1] = clamp(yy+gc); - d1[2] = clamp(yy+bc); - d1[3] = 255; - } - - d0 += 4; - d1 += 4; - } - - y += 64*4; - c += 64*4; - } - } - - // YCbCr H2V2 (2x2:1:1, 6 m_blocks per MCU) to RGB - void jpeg_decoder::H2V2Convert() - { - int row = m_max_mcu_y_size - m_mcu_lines_left; - uint8 *d0 = m_pScan_line_0; - uint8 *d1 = m_pScan_line_1; - uint8 *y; - uint8 *c; - - if (row < 8) - y = m_pSample_buf + row * 8; - else - y = m_pSample_buf + 64*2 + (row & 7) * 8; - - c = m_pSample_buf + 64*4 + (row >> 1) * 8; - - for (int i = m_max_mcus_per_row; i > 0; i--) - { - for (int l = 0; l < 2; l++) - { - for (int j = 0; j < 8; j += 2) - { - int cb = c[0]; - int cr = c[64]; - - int rc = m_crr[cr]; - int gc = ((m_crg[cr] + m_cbg[cb]) >> 16); - int bc = m_cbb[cb]; - - int yy = y[j]; - if (jpg_format == ERGBFormatJPG::BGRA) - { - d0[0] = clamp(yy+bc); - d0[1] = clamp(yy+gc); - d0[2] = clamp(yy+rc); - d0[3] = 255; - yy = y[j+1]; - d0[4] = clamp(yy+bc); - d0[5] = clamp(yy+gc); - d0[6] = clamp(yy+rc); - d0[7] = 255; - yy = y[j+8]; - d1[0] = clamp(yy+bc); - d1[1] = clamp(yy+gc); - d1[2] = clamp(yy+rc); - d1[3] = 255; - yy = y[j+8+1]; - d1[4] = clamp(yy+bc); - d1[5] = 
clamp(yy+gc); - d1[6] = clamp(yy+rc); - d1[7] = 255; - } - else - { - d0[0] = clamp(yy+rc); - d0[1] = clamp(yy+gc); - d0[2] = clamp(yy+bc); - d0[3] = 255; - yy = y[j+1]; - d0[4] = clamp(yy+rc); - d0[5] = clamp(yy+gc); - d0[6] = clamp(yy+bc); - d0[7] = 255; - yy = y[j+8]; - d1[0] = clamp(yy+rc); - d1[1] = clamp(yy+gc); - d1[2] = clamp(yy+bc); - d1[3] = 255; - yy = y[j+8+1]; - d1[4] = clamp(yy+rc); - d1[5] = clamp(yy+gc); - d1[6] = clamp(yy+bc); - d1[7] = 255; - } - - d0 += 8; - d1 += 8; - - c++; - } - y += 64; - } - - y += 64*6 - 64*2; - c += 64*6 - 8; - } - } - - // Y (1 block per MCU) to 8-bit grayscale - void jpeg_decoder::gray_convert() - { - int row = m_max_mcu_y_size - m_mcu_lines_left; - uint8 *d = m_pScan_line_0; - uint8 *s = m_pSample_buf + row * 8; - - for (int i = m_max_mcus_per_row; i > 0; i--) - { - *(uint *)d = *(uint *)s; - *(uint *)(&d[4]) = *(uint *)(&s[4]); - - s += 64; - d += 8; - } - } - - void jpeg_decoder::expanded_convert() - { - int row = m_max_mcu_y_size - m_mcu_lines_left; - - uint8* Py = m_pSample_buf + (row / 8) * 64 * m_comp_h_samp[0] + (row & 7) * 8; - - uint8* d = m_pScan_line_0; - - for (int i = m_max_mcus_per_row; i > 0; i--) - { - for (int k = 0; k < m_max_mcu_x_size; k += 8) - { - const int Y_ofs = k * 8; - const int Cb_ofs = Y_ofs + 64 * m_expanded_blocks_per_component; - const int Cr_ofs = Y_ofs + 64 * m_expanded_blocks_per_component * 2; - for (int j = 0; j < 8; j++) - { - int y = Py[Y_ofs + j]; - int cb = Py[Cb_ofs + j]; - int cr = Py[Cr_ofs + j]; - - if (jpg_format == ERGBFormatJPG::BGRA) - { - d[0] = clamp(y + m_cbb[cb]); - d[1] = clamp(y + ((m_crg[cr] + m_cbg[cb]) >> 16)); - d[2] = clamp(y + m_crr[cr]); - d[3] = 255; - } - else - { - d[0] = clamp(y + m_crr[cr]); - d[1] = clamp(y + ((m_crg[cr] + m_cbg[cb]) >> 16)); - d[2] = clamp(y + m_cbb[cb]); - d[3] = 255; - } - - d += 4; - } - } - - Py += 64 * m_expanded_blocks_per_mcu; - } - } - - // Find end of image (EOI) marker, so we can return to the user the exact size of the input stream. - void jpeg_decoder::find_eoi() - { - if (!m_progressive_flag) - { - // Attempt to read the EOI marker. - //get_bits_no_markers(m_bits_left & 7); - - // Prime the bit buffer - m_bits_left = 16; - get_bits(16); - get_bits(16); - - // The next marker _should_ be EOI - process_markers(); - } - - m_total_bytes_read -= m_in_buf_left; - } - - int jpeg_decoder::decode(const void** pScan_line, uint* pScan_line_len) - { - if ((m_error_code) || (!m_ready_flag)) - return JPGD_FAILED; - - if (m_total_lines_left == 0) - return JPGD_DONE; - - if (m_mcu_lines_left == 0) - { - if (setjmp(m_jmp_state)) - return JPGD_FAILED; - - if (m_progressive_flag) - load_next_row(); - else - decode_next_row(); - - // Find the EOI marker if that was the last row. 
- if (m_total_lines_left <= m_max_mcu_y_size) - find_eoi(); - - m_mcu_lines_left = m_max_mcu_y_size; - } - - if (m_freq_domain_chroma_upsample) - { - expanded_convert(); - *pScan_line = m_pScan_line_0; - } - else - { - switch (m_scan_type) - { - case JPGD_YH2V2: - { - if ((m_mcu_lines_left & 1) == 0) - { - H2V2Convert(); - *pScan_line = m_pScan_line_0; - } - else - *pScan_line = m_pScan_line_1; - - break; - } - case JPGD_YH2V1: - { - H2V1Convert(); - *pScan_line = m_pScan_line_0; - break; - } - case JPGD_YH1V2: - { - if ((m_mcu_lines_left & 1) == 0) - { - H1V2Convert(); - *pScan_line = m_pScan_line_0; - } - else - *pScan_line = m_pScan_line_1; - - break; - } - case JPGD_YH1V1: - { - H1V1Convert(); - *pScan_line = m_pScan_line_0; - break; - } - case JPGD_GRAYSCALE: - { - gray_convert(); - *pScan_line = m_pScan_line_0; - - break; - } - } - } - - *pScan_line_len = m_real_dest_bytes_per_scan_line; - - m_mcu_lines_left--; - m_total_lines_left--; - - return JPGD_SUCCESS; - } - - // Creates the tables needed for efficient Huffman decoding. - void jpeg_decoder::make_huff_table(int index, huff_tables *pH) - { - int p, i, l, si; - uint8 huffsize[257]; - uint huffcode[257]; - uint code; - uint subtree; - int code_size; - int lastp; - int nextfreeentry; - int currententry; - - pH->ac_table = m_huff_ac[index] != 0; - - p = 0; - - for (l = 1; l <= 16; l++) - { - for (i = 1; i <= m_huff_num[index][l]; i++) - huffsize[p++] = static_cast(l); - } - - huffsize[p] = 0; - - lastp = p; - - code = 0; - si = huffsize[0]; - p = 0; - - while (huffsize[p]) - { - while (huffsize[p] == si) - { - huffcode[p++] = code; - code++; - } - - code <<= 1; - si++; - } - - memset(pH->look_up, 0, sizeof(pH->look_up)); - memset(pH->look_up2, 0, sizeof(pH->look_up2)); - memset(pH->tree, 0, sizeof(pH->tree)); - memset(pH->code_size, 0, sizeof(pH->code_size)); - - nextfreeentry = -1; - - p = 0; - - while (p < lastp) - { - i = m_huff_val[index][p]; - code = huffcode[p]; - code_size = huffsize[p]; - - pH->code_size[i] = static_cast(code_size); - - if (code_size <= 8) - { - code <<= (8 - code_size); - - for (l = 1 << (8 - code_size); l > 0; l--) - { - JPGD_ASSERT(i < 256); - - pH->look_up[code] = i; - - bool has_extrabits = false; - int extra_bits = 0; - int num_extra_bits = i & 15; - - int bits_to_fetch = code_size; - if (num_extra_bits) - { - int total_codesize = code_size + num_extra_bits; - if (total_codesize <= 8) - { - has_extrabits = true; - extra_bits = ((1 << num_extra_bits) - 1) & (code >> (8 - total_codesize)); - JPGD_ASSERT(extra_bits <= 0x7FFF); - bits_to_fetch += num_extra_bits; - } - } - - if (!has_extrabits) - pH->look_up2[code] = i | (bits_to_fetch << 8); - else - pH->look_up2[code] = i | 0x8000 | (extra_bits << 16) | (bits_to_fetch << 8); - - code++; - } - } - else - { - subtree = (code >> (code_size - 8)) & 0xFF; - - currententry = pH->look_up[subtree]; - - if (currententry == 0) - { - pH->look_up[subtree] = currententry = nextfreeentry; - pH->look_up2[subtree] = currententry = nextfreeentry; - - nextfreeentry -= 2; - } - - code <<= (16 - (code_size - 8)); - - for (l = code_size; l > 9; l--) - { - if ((code & 0x8000) == 0) - currententry--; - - if (pH->tree[-currententry - 1] == 0) - { - pH->tree[-currententry - 1] = nextfreeentry; - - currententry = nextfreeentry; - - nextfreeentry -= 2; - } - else - currententry = pH->tree[-currententry - 1]; - - code <<= 1; - } - - if ((code & 0x8000) == 0) - currententry--; - - pH->tree[-currententry - 1] = i; - } - - p++; - } - } - - // Verifies the quantization tables needed for 
this scan are available. - void jpeg_decoder::check_quant_tables() - { - for (int i = 0; i < m_comps_in_scan; i++) - if (m_quant[m_comp_quant[m_comp_list[i]]] == NULL) - stop_decoding(JPGD_UNDEFINED_QUANT_TABLE); - } - - // Verifies that all the Huffman tables needed for this scan are available. - void jpeg_decoder::check_huff_tables() - { - for (int i = 0; i < m_comps_in_scan; i++) - { - if ((m_spectral_start == 0) && (m_huff_num[m_comp_dc_tab[m_comp_list[i]]] == NULL)) - stop_decoding(JPGD_UNDEFINED_HUFF_TABLE); - - if ((m_spectral_end > 0) && (m_huff_num[m_comp_ac_tab[m_comp_list[i]]] == NULL)) - stop_decoding(JPGD_UNDEFINED_HUFF_TABLE); - } - - for (int i = 0; i < JPGD_MAX_HUFF_TABLES; i++) - if (m_huff_num[i]) - { - if (!m_pHuff_tabs[i]) - m_pHuff_tabs[i] = (huff_tables *)alloc(sizeof(huff_tables)); - - make_huff_table(i, m_pHuff_tabs[i]); - } - } - - // Determines the component order inside each MCU. - // Also calcs how many MCU's are on each row, etc. - void jpeg_decoder::calc_mcu_block_order() - { - int component_num, component_id; - int max_h_samp = 0, max_v_samp = 0; - - for (component_id = 0; component_id < m_comps_in_frame; component_id++) - { - if (m_comp_h_samp[component_id] > max_h_samp) - max_h_samp = m_comp_h_samp[component_id]; - - if (m_comp_v_samp[component_id] > max_v_samp) - max_v_samp = m_comp_v_samp[component_id]; - } - - for (component_id = 0; component_id < m_comps_in_frame; component_id++) - { - m_comp_h_blocks[component_id] = ((((m_image_x_size * m_comp_h_samp[component_id]) + (max_h_samp - 1)) / max_h_samp) + 7) / 8; - m_comp_v_blocks[component_id] = ((((m_image_y_size * m_comp_v_samp[component_id]) + (max_v_samp - 1)) / max_v_samp) + 7) / 8; - } - - if (m_comps_in_scan == 1) - { - m_mcus_per_row = m_comp_h_blocks[m_comp_list[0]]; - m_mcus_per_col = m_comp_v_blocks[m_comp_list[0]]; - } - else - { - m_mcus_per_row = (((m_image_x_size + 7) / 8) + (max_h_samp - 1)) / max_h_samp; - m_mcus_per_col = (((m_image_y_size + 7) / 8) + (max_v_samp - 1)) / max_v_samp; - } - - if (m_comps_in_scan == 1) - { - m_mcu_org[0] = m_comp_list[0]; - - m_blocks_per_mcu = 1; - } - else - { - m_blocks_per_mcu = 0; - - for (component_num = 0; component_num < m_comps_in_scan; component_num++) - { - int num_blocks; - - component_id = m_comp_list[component_num]; - - num_blocks = m_comp_h_samp[component_id] * m_comp_v_samp[component_id]; - - while (num_blocks--) - m_mcu_org[m_blocks_per_mcu++] = component_id; - } - } - } - - // Starts a new scan. - int jpeg_decoder::init_scan() - { - if (!locate_sos_marker()) - return JPGD_FALSE; - - calc_mcu_block_order(); - - check_huff_tables(); - - check_quant_tables(); - - memset(m_last_dc_val, 0, m_comps_in_frame * sizeof(uint)); - - m_eob_run = 0; - - if (m_restart_interval) - { - m_restarts_left = m_restart_interval; - m_next_restart_num = 0; - } - - fix_in_buffer(); - - return JPGD_TRUE; - } - - // Starts a frame. Determines if the number of components or sampling factors - // are supported. 
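For the common three-component YCbCr case, the luma sampling factors examined in `init_frame()` below fix the MCU geometry: 1x1 gives an 8x8 MCU with 3 blocks, 2x1 a 16x8 MCU with 4 blocks, 1x2 an 8x16 MCU with 4 blocks, and 2x2 a 16x16 MCU with 6 blocks (four Y plus Cb and Cr). The helper below is a hypothetical illustration of that mapping, assuming 1x1 chroma, and is not part of jpgd:

```cpp
// Sketch: MCU geometry implied by the luma sampling factors handled in init_frame().
struct McuGeom { int blocks_per_mcu, mcu_w, mcu_h; };

inline McuGeom mcu_geom(int h_samp, int v_samp)   // luma HxV, chroma assumed 1x1
{
    return { 2 + h_samp * v_samp, 8 * h_samp, 8 * v_samp };
}
// e.g. mcu_geom(2, 2) -> {6, 16, 16}; a 640x480 4:2:0 image therefore decodes as
// ceil(640/16) = 40 MCUs per row and ceil(480/16) = 30 MCU rows.
```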
- void jpeg_decoder::init_frame() - { - int i; - - if (m_comps_in_frame == 1) - { - if ((m_comp_h_samp[0] != 1) || (m_comp_v_samp[0] != 1)) - stop_decoding(JPGD_UNSUPPORTED_SAMP_FACTORS); - - m_scan_type = JPGD_GRAYSCALE; - m_max_blocks_per_mcu = 1; - m_max_mcu_x_size = 8; - m_max_mcu_y_size = 8; - } - else if (m_comps_in_frame == 3) - { - if ( ((m_comp_h_samp[1] != 1) || (m_comp_v_samp[1] != 1)) || - ((m_comp_h_samp[2] != 1) || (m_comp_v_samp[2] != 1)) ) - stop_decoding(JPGD_UNSUPPORTED_SAMP_FACTORS); - - if ((m_comp_h_samp[0] == 1) && (m_comp_v_samp[0] == 1)) - { - m_scan_type = JPGD_YH1V1; - - m_max_blocks_per_mcu = 3; - m_max_mcu_x_size = 8; - m_max_mcu_y_size = 8; - } - else if ((m_comp_h_samp[0] == 2) && (m_comp_v_samp[0] == 1)) - { - m_scan_type = JPGD_YH2V1; - m_max_blocks_per_mcu = 4; - m_max_mcu_x_size = 16; - m_max_mcu_y_size = 8; - } - else if ((m_comp_h_samp[0] == 1) && (m_comp_v_samp[0] == 2)) - { - m_scan_type = JPGD_YH1V2; - m_max_blocks_per_mcu = 4; - m_max_mcu_x_size = 8; - m_max_mcu_y_size = 16; - } - else if ((m_comp_h_samp[0] == 2) && (m_comp_v_samp[0] == 2)) - { - m_scan_type = JPGD_YH2V2; - m_max_blocks_per_mcu = 6; - m_max_mcu_x_size = 16; - m_max_mcu_y_size = 16; - } - else - stop_decoding(JPGD_UNSUPPORTED_SAMP_FACTORS); - } - else - stop_decoding(JPGD_UNSUPPORTED_COLORSPACE); - - m_max_mcus_per_row = (m_image_x_size + (m_max_mcu_x_size - 1)) / m_max_mcu_x_size; - m_max_mcus_per_col = (m_image_y_size + (m_max_mcu_y_size - 1)) / m_max_mcu_y_size; - - // These values are for the *destination* pixels: after conversion. - if (m_scan_type == JPGD_GRAYSCALE) - m_dest_bytes_per_pixel = 1; - else - m_dest_bytes_per_pixel = 4; - - m_dest_bytes_per_scan_line = ((m_image_x_size + 15) & 0xFFF0) * m_dest_bytes_per_pixel; - - m_real_dest_bytes_per_scan_line = (m_image_x_size * m_dest_bytes_per_pixel); - - // Initialize two scan line buffers. - m_pScan_line_0 = (uint8 *)alloc(m_dest_bytes_per_scan_line, true); - if ((m_scan_type == JPGD_YH1V2) || (m_scan_type == JPGD_YH2V2)) - m_pScan_line_1 = (uint8 *)alloc(m_dest_bytes_per_scan_line, true); - - m_max_blocks_per_row = m_max_mcus_per_row * m_max_blocks_per_mcu; - - // Should never happen - if (m_max_blocks_per_row > JPGD_MAX_BLOCKS_PER_ROW) - stop_decoding(JPGD_ASSERTION_ERROR); - - // Allocate the coefficient buffer, enough for one MCU - m_pMCU_coefficients = (jpgd_block_t*)alloc(m_max_blocks_per_mcu * 64 * sizeof(jpgd_block_t)); - - for (i = 0; i < m_max_blocks_per_mcu; i++) - m_mcu_block_max_zag[i] = 64; - - m_expanded_blocks_per_component = m_comp_h_samp[0] * m_comp_v_samp[0]; - m_expanded_blocks_per_mcu = m_expanded_blocks_per_component * m_comps_in_frame; - m_expanded_blocks_per_row = m_max_mcus_per_row * m_expanded_blocks_per_mcu; - // Freq. domain chroma upsampling is only supported for H2V2 subsampling factor. -// BEGIN EPIC MOD -#if JPGD_SUPPORT_FREQ_DOMAIN_UPSAMPLING - m_freq_domain_chroma_upsample = (m_expanded_blocks_per_mcu == 4*3); -#else - m_freq_domain_chroma_upsample = 0; -#endif -// END EPIC MOD - - if (m_freq_domain_chroma_upsample) - m_pSample_buf = (uint8 *)alloc(m_expanded_blocks_per_row * 64); - else - m_pSample_buf = (uint8 *)alloc(m_max_blocks_per_row * 64); - - m_total_lines_left = m_image_y_size; - - m_mcu_lines_left = 0; - - create_look_ups(); - } - - // The coeff_buf series of methods originally stored the coefficients - // into a "virtual" file which was located in EMS, XMS, or a disk file. A cache - // was used to make this process more efficient. Now, we can store the entire - // thing in RAM. 
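The in-RAM layout is a flat, row-major grid of fixed-size coefficient blocks per component: `coeff_buf_getp()` below resolves a block as base + x * block_size + y * block_size * blocks_per_row. A stripped-down sketch of the same arithmetic follows; `CoeffBuf` and `coeff_block` are illustrative stand-ins that use a plain struct instead of jpgd's pool allocator, and `jpgd_block_t` is redeclared only to keep the sketch self-contained (jpgd defines it as a 16-bit coefficient type).

```cpp
// Sketch of the row-major block addressing used by the coeff_buf helpers.
#include <cstdint>

typedef int16_t jpgd_block_t;

struct CoeffBuf {
    int block_num_x, block_num_y;   // grid of blocks covering the component
    int block_size;                 // bytes per block: block_len_x * block_len_y * sizeof(jpgd_block_t)
    uint8_t* pData;                 // block_num_x * block_num_y blocks, stored row-major
};

inline jpgd_block_t* coeff_block(CoeffBuf& cb, int block_x, int block_y)
{
    // One block's worth of bytes per step in x, one full row of blocks per step in y.
    return reinterpret_cast<jpgd_block_t*>(
        cb.pData + block_x * cb.block_size
                 + block_y * (cb.block_size * cb.block_num_x));
}
```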
- jpeg_decoder::coeff_buf* jpeg_decoder::coeff_buf_open(int block_num_x, int block_num_y, int block_len_x, int block_len_y) - { - coeff_buf* cb = (coeff_buf*)alloc(sizeof(coeff_buf)); - - cb->block_num_x = block_num_x; - cb->block_num_y = block_num_y; - cb->block_len_x = block_len_x; - cb->block_len_y = block_len_y; - cb->block_size = (block_len_x * block_len_y) * sizeof(jpgd_block_t); - cb->pData = (uint8 *)alloc(cb->block_size * block_num_x * block_num_y, true); - return cb; - } - - inline jpgd_block_t *jpeg_decoder::coeff_buf_getp(coeff_buf *cb, int block_x, int block_y) - { - JPGD_ASSERT((block_x < cb->block_num_x) && (block_y < cb->block_num_y)); - return (jpgd_block_t *)(cb->pData + block_x * cb->block_size + block_y * (cb->block_size * cb->block_num_x)); - } - - // The following methods decode the various types of m_blocks encountered - // in progressively encoded images. - void jpeg_decoder::decode_block_dc_first(jpeg_decoder *pD, int component_id, int block_x, int block_y) - { - int s, r; - jpgd_block_t *p = pD->coeff_buf_getp(pD->m_dc_coeffs[component_id], block_x, block_y); - - if ((s = pD->huff_decode(pD->m_pHuff_tabs[pD->m_comp_dc_tab[component_id]])) != 0) - { - r = pD->get_bits_no_markers(s); - s = HUFF_EXTEND(r, s); - } - - pD->m_last_dc_val[component_id] = (s += pD->m_last_dc_val[component_id]); - - p[0] = static_cast(s << pD->m_successive_low); - } - - void jpeg_decoder::decode_block_dc_refine(jpeg_decoder *pD, int component_id, int block_x, int block_y) - { - if (pD->get_bits_no_markers(1)) - { - jpgd_block_t *p = pD->coeff_buf_getp(pD->m_dc_coeffs[component_id], block_x, block_y); - - p[0] |= (1 << pD->m_successive_low); - } - } - - void jpeg_decoder::decode_block_ac_first(jpeg_decoder *pD, int component_id, int block_x, int block_y) - { - int k, s, r; - - if (pD->m_eob_run) - { - pD->m_eob_run--; - return; - } - - jpgd_block_t *p = pD->coeff_buf_getp(pD->m_ac_coeffs[component_id], block_x, block_y); - - for (k = pD->m_spectral_start; k <= pD->m_spectral_end; k++) - { - s = pD->huff_decode(pD->m_pHuff_tabs[pD->m_comp_ac_tab[component_id]]); - - r = s >> 4; - s &= 15; - - if (s) - { - if ((k += r) > 63) - pD->stop_decoding(JPGD_DECODE_ERROR); - - r = pD->get_bits_no_markers(s); - s = HUFF_EXTEND(r, s); - - p[g_ZAG[k]] = static_cast(s << pD->m_successive_low); - } - else - { - if (r == 15) - { - if ((k += 15) > 63) - pD->stop_decoding(JPGD_DECODE_ERROR); - } - else - { - pD->m_eob_run = 1 << r; - - if (r) - pD->m_eob_run += pD->get_bits_no_markers(r); - - pD->m_eob_run--; - - break; - } - } - } - } - - void jpeg_decoder::decode_block_ac_refine(jpeg_decoder *pD, int component_id, int block_x, int block_y) - { - int s, k, r; - int p1 = 1 << pD->m_successive_low; - int m1 = (-1) << pD->m_successive_low; - jpgd_block_t *p = pD->coeff_buf_getp(pD->m_ac_coeffs[component_id], block_x, block_y); - - k = pD->m_spectral_start; - - if (pD->m_eob_run == 0) - { - for ( ; k <= pD->m_spectral_end; k++) - { - s = pD->huff_decode(pD->m_pHuff_tabs[pD->m_comp_ac_tab[component_id]]); - - r = s >> 4; - s &= 15; - - if (s) - { - if (s != 1) - pD->stop_decoding(JPGD_DECODE_ERROR); - - if (pD->get_bits_no_markers(1)) - s = p1; - else - s = m1; - } - else - { - if (r != 15) - { - pD->m_eob_run = 1 << r; - - if (r) - pD->m_eob_run += pD->get_bits_no_markers(r); - - break; - } - } - - do - { - // BEGIN EPIC MOD - JPGD_ASSERT(k < 64); - // END EPIC MOD - - jpgd_block_t *this_coef = p + g_ZAG[k]; - - if (*this_coef != 0) - { - if (pD->get_bits_no_markers(1)) - { - if ((*this_coef & p1) == 0) - { - if 
(*this_coef >= 0) - *this_coef = static_cast(*this_coef + p1); - else - *this_coef = static_cast(*this_coef + m1); - } - } - } - else - { - if (--r < 0) - break; - } - - k++; - - } while (k <= pD->m_spectral_end); - - if ((s) && (k < 64)) - { - p[g_ZAG[k]] = static_cast(s); - } - } - } - - if (pD->m_eob_run > 0) - { - for ( ; k <= pD->m_spectral_end; k++) - { - // BEGIN EPIC MOD - JPGD_ASSERT(k < 64); - // END EPIC MOD - - jpgd_block_t *this_coef = p + g_ZAG[k]; - - if (*this_coef != 0) - { - if (pD->get_bits_no_markers(1)) - { - if ((*this_coef & p1) == 0) - { - if (*this_coef >= 0) - *this_coef = static_cast(*this_coef + p1); - else - *this_coef = static_cast(*this_coef + m1); - } - } - } - } - - pD->m_eob_run--; - } - } - - // Decode a scan in a progressively encoded image. - void jpeg_decoder::decode_scan(pDecode_block_func decode_block_func) - { - int mcu_row, mcu_col, mcu_block; - int block_x_mcu[JPGD_MAX_COMPONENTS], m_block_y_mcu[JPGD_MAX_COMPONENTS]; - - memset(m_block_y_mcu, 0, sizeof(m_block_y_mcu)); - - for (mcu_col = 0; mcu_col < m_mcus_per_col; mcu_col++) - { - int component_num, component_id; - - memset(block_x_mcu, 0, sizeof(block_x_mcu)); - - for (mcu_row = 0; mcu_row < m_mcus_per_row; mcu_row++) - { - int block_x_mcu_ofs = 0, block_y_mcu_ofs = 0; - - if ((m_restart_interval) && (m_restarts_left == 0)) - process_restart(); - - for (mcu_block = 0; mcu_block < m_blocks_per_mcu; mcu_block++) - { - component_id = m_mcu_org[mcu_block]; - - decode_block_func(this, component_id, block_x_mcu[component_id] + block_x_mcu_ofs, m_block_y_mcu[component_id] + block_y_mcu_ofs); - - if (m_comps_in_scan == 1) - block_x_mcu[component_id]++; - else - { - if (++block_x_mcu_ofs == m_comp_h_samp[component_id]) - { - block_x_mcu_ofs = 0; - - if (++block_y_mcu_ofs == m_comp_v_samp[component_id]) - { - block_y_mcu_ofs = 0; - block_x_mcu[component_id] += m_comp_h_samp[component_id]; - } - } - } - } - - m_restarts_left--; - } - - if (m_comps_in_scan == 1) - m_block_y_mcu[m_comp_list[0]]++; - else - { - for (component_num = 0; component_num < m_comps_in_scan; component_num++) - { - component_id = m_comp_list[component_num]; - m_block_y_mcu[component_id] += m_comp_v_samp[component_id]; - } - } - } - } - - // Decode a progressively encoded image. - void jpeg_decoder::init_progressive() - { - int i; - - if (m_comps_in_frame == 4) - stop_decoding(JPGD_UNSUPPORTED_COLORSPACE); - - // Allocate the coefficient buffers. 
- for (i = 0; i < m_comps_in_frame; i++) - { - m_dc_coeffs[i] = coeff_buf_open(m_max_mcus_per_row * m_comp_h_samp[i], m_max_mcus_per_col * m_comp_v_samp[i], 1, 1); - m_ac_coeffs[i] = coeff_buf_open(m_max_mcus_per_row * m_comp_h_samp[i], m_max_mcus_per_col * m_comp_v_samp[i], 8, 8); - } - - for ( ; ; ) - { - int dc_only_scan, refinement_scan; - pDecode_block_func decode_block_func; - - if (!init_scan()) - break; - - dc_only_scan = (m_spectral_start == 0); - refinement_scan = (m_successive_high != 0); - - if ((m_spectral_start > m_spectral_end) || (m_spectral_end > 63)) - stop_decoding(JPGD_BAD_SOS_SPECTRAL); - - if (dc_only_scan) - { - if (m_spectral_end) - stop_decoding(JPGD_BAD_SOS_SPECTRAL); - } - else if (m_comps_in_scan != 1) /* AC scans can only contain one component */ - stop_decoding(JPGD_BAD_SOS_SPECTRAL); - - if ((refinement_scan) && (m_successive_low != m_successive_high - 1)) - stop_decoding(JPGD_BAD_SOS_SUCCESSIVE); - - if (dc_only_scan) - { - if (refinement_scan) - decode_block_func = decode_block_dc_refine; - else - decode_block_func = decode_block_dc_first; - } - else - { - if (refinement_scan) - decode_block_func = decode_block_ac_refine; - else - decode_block_func = decode_block_ac_first; - } - - decode_scan(decode_block_func); - - m_bits_left = 16; - get_bits(16); - get_bits(16); - } - - m_comps_in_scan = m_comps_in_frame; - - for (i = 0; i < m_comps_in_frame; i++) - m_comp_list[i] = i; - - calc_mcu_block_order(); - } - - void jpeg_decoder::init_sequential() - { - if (!init_scan()) - stop_decoding(JPGD_UNEXPECTED_MARKER); - } - - void jpeg_decoder::decode_start() - { - init_frame(); - - if (m_progressive_flag) - init_progressive(); - else - init_sequential(); - } - - void jpeg_decoder::decode_init(jpeg_decoder_stream *pStream) - { - init(pStream); - locate_sof_marker(); - } - - jpeg_decoder::jpeg_decoder(jpeg_decoder_stream *pStream) - { - if (setjmp(m_jmp_state)) - return; - decode_init(pStream); - } - - int jpeg_decoder::begin_decoding() - { - if (m_ready_flag) - return JPGD_SUCCESS; - - if (m_error_code) - return JPGD_FAILED; - - if (setjmp(m_jmp_state)) - return JPGD_FAILED; - - decode_start(); - - m_ready_flag = true; - - return JPGD_SUCCESS; - } - - jpeg_decoder::~jpeg_decoder() - { - free_all_blocks(); - } - - jpeg_decoder_file_stream::jpeg_decoder_file_stream() - { - m_pFile = NULL; - m_eof_flag = false; - m_error_flag = false; - } - - void jpeg_decoder_file_stream::close() - { - if (m_pFile) - { - fclose(m_pFile); - m_pFile = NULL; - } - - m_eof_flag = false; - m_error_flag = false; - } - - jpeg_decoder_file_stream::~jpeg_decoder_file_stream() - { - close(); - } - - bool jpeg_decoder_file_stream::open(const char *Pfilename) - { - close(); - - m_eof_flag = false; - m_error_flag = false; - -#if defined(_MSC_VER) - m_pFile = NULL; - fopen_s(&m_pFile, Pfilename, "rb"); -#else - m_pFile = fopen(Pfilename, "rb"); -#endif - return m_pFile != NULL; - } - - int jpeg_decoder_file_stream::read(uint8 *pBuf, int max_bytes_to_read, bool *pEOF_flag) - { - if (!m_pFile) - return -1; - - if (m_eof_flag) - { - *pEOF_flag = true; - return 0; - } - - if (m_error_flag) - return -1; - - int bytes_read = static_cast(fread(pBuf, 1, max_bytes_to_read, m_pFile)); - if (bytes_read < max_bytes_to_read) - { - if (ferror(m_pFile)) - { - m_error_flag = true; - return -1; - } - - m_eof_flag = true; - *pEOF_flag = true; - } - - return bytes_read; - } - - bool jpeg_decoder_mem_stream::open(const uint8 *pSrc_data, uint size) - { - close(); - m_pSrc_data = pSrc_data; - m_ofs = 0; - m_size = size; - 
return true; - } - - int jpeg_decoder_mem_stream::read(uint8 *pBuf, int max_bytes_to_read, bool *pEOF_flag) - { - *pEOF_flag = false; - - if (!m_pSrc_data) - return -1; - - uint bytes_remaining = m_size - m_ofs; - if ((uint)max_bytes_to_read > bytes_remaining) - { - max_bytes_to_read = bytes_remaining; - *pEOF_flag = true; - } - - memcpy(pBuf, m_pSrc_data + m_ofs, max_bytes_to_read); - m_ofs += max_bytes_to_read; - - return max_bytes_to_read; - } - - unsigned char *decompress_jpeg_image_from_stream(jpeg_decoder_stream *pStream, int *width, int *height, int *actual_comps, int req_comps) - { - if (!actual_comps) - return NULL; - *actual_comps = 0; - - if ((!pStream) || (!width) || (!height) || (!req_comps)) - return NULL; - - if ((req_comps != 1) && (req_comps != 3) && (req_comps != 4)) - return NULL; - - jpeg_decoder decoder(pStream); - if (decoder.get_error_code() != JPGD_SUCCESS) - return NULL; - - const int image_width = decoder.get_width(), image_height = decoder.get_height(); - *width = image_width; - *height = image_height; - *actual_comps = decoder.get_num_components(); - - if (decoder.begin_decoding() != JPGD_SUCCESS) - return NULL; - - const int dst_bpl = image_width * req_comps; - - uint8 *pImage_data = (uint8*)jpgd_malloc(dst_bpl * image_height); - if (!pImage_data) - return NULL; - - for (int y = 0; y < image_height; y++) - { - const uint8* pScan_line = 0; - uint scan_line_len; - if (decoder.decode((const void**)&pScan_line, &scan_line_len) != JPGD_SUCCESS) - { - jpgd_free(pImage_data); - return NULL; - } - - uint8 *pDst = pImage_data + y * dst_bpl; - - if (((req_comps == 4) && (decoder.get_num_components() == 3)) || - ((req_comps == 1) && (decoder.get_num_components() == 1))) - { - memcpy(pDst, pScan_line, dst_bpl); - } - else if (decoder.get_num_components() == 1) - { - if (req_comps == 3) - { - for (int x = 0; x < image_width; x++) - { - uint8 luma = pScan_line[x]; - pDst[0] = luma; - pDst[1] = luma; - pDst[2] = luma; - pDst += 3; - } - } - else - { - for (int x = 0; x < image_width; x++) - { - uint8 luma = pScan_line[x]; - pDst[0] = luma; - pDst[1] = luma; - pDst[2] = luma; - pDst[3] = 255; - pDst += 4; - } - } - } - else if (decoder.get_num_components() == 3) - { - if (req_comps == 1) - { - const int YR = 19595, YG = 38470, YB = 7471; - for (int x = 0; x < image_width; x++) - { - int r = pScan_line[x*4+0]; - int g = pScan_line[x*4+1]; - int b = pScan_line[x*4+2]; - *pDst++ = static_cast((r * YR + g * YG + b * YB + 32768) >> 16); - } - } - else - { - for (int x = 0; x < image_width; x++) - { - pDst[0] = pScan_line[x*4+0]; - pDst[1] = pScan_line[x*4+1]; - pDst[2] = pScan_line[x*4+2]; - pDst += 3; - } - } - } - } - - return pImage_data; - } - -// BEGIN EPIC MOD - unsigned char *decompress_jpeg_image_from_memory(const unsigned char *pSrc_data, int src_data_size, int *width, int *height, int *actual_comps, int req_comps, int format) - { - jpg_format = (ERGBFormatJPG)format; -// EMD EPIC MOD - jpgd::jpeg_decoder_mem_stream mem_stream(pSrc_data, src_data_size); - return decompress_jpeg_image_from_stream(&mem_stream, width, height, actual_comps, req_comps); - } - - unsigned char *decompress_jpeg_image_from_file(const char *pSrc_filename, int *width, int *height, int *actual_comps, int req_comps) - { - jpgd::jpeg_decoder_file_stream file_stream; - if (!file_stream.open(pSrc_filename)) - return NULL; - return decompress_jpeg_image_from_stream(&file_stream, width, height, actual_comps, req_comps); - } - -} // namespace jpgd diff --git 
a/spaces/danielpedriniportfolio/AutoDA/pages/06-Enconding.py b/spaces/danielpedriniportfolio/AutoDA/pages/06-Enconding.py deleted file mode 100644 index c90f11a34e20289c9f6e912c0ac2099f4bd211e7..0000000000000000000000000000000000000000 --- a/spaces/danielpedriniportfolio/AutoDA/pages/06-Enconding.py +++ /dev/null @@ -1,58 +0,0 @@ -import pandas as pd -import streamlit as st -from sklearn.preprocessing import LabelEncoder -from sklearn.preprocessing import OneHotEncoder - -def reload_data(): - st.write("Reloading data...") - df_original = st.session_state["df_original"] - df = df_original.copy() - st.session_state.df = df - del st.session_state['df_target'] - del st.session_state['best'] - st.experimental_rerun() - -st.set_page_config(layout='wide') -col1, col2, col3 = st.columns([15, 70, 15]) - -with col1: - st.write('') -with col2: - if 'df' not in st.session_state: - st.warning('Please upload a CSV file') - else: - st.header('Encoding') - if st.button('Reload data'): - reload_data() - - df = st.session_state['df'] - st.dataframe(df.head()) - # select columns that is categorical - columns = df.select_dtypes(include=['object', 'bool']).columns - # add integer columns that has more than 2 unique values - for col in df.select_dtypes(include=['int64']).columns: - if df[col].nunique() > 2: - columns = columns.append(pd.Index([col])) - selected_columns = st.multiselect('Select the columns', columns) - if len(selected_columns) > 0: - select_encoding = st.selectbox('Select the encoding', ['Label Encoding', 'One Hot Encoding']) - if st.button('Encode'): - if select_encoding == 'Label Encoding': - le = LabelEncoder() - df[selected_columns] = df[selected_columns].apply(lambda col: le.fit_transform(col)).astype(int) - st.session_state.df = df - st.experimental_rerun() - elif select_encoding == 'One Hot Encoding': - ohe = OneHotEncoder() - selected_columns_2d = df[selected_columns].values.reshape(-1, 1) - encoded_columns = ohe.fit_transform(selected_columns_2d).toarray() - categories = [df[col].unique() for col in selected_columns] - new_columns = [f"{col}_{cat}" for col, cats in zip(selected_columns, categories) for cat in cats] - encoded_df = pd.DataFrame(encoded_columns, columns=new_columns) - encoded_df = encoded_df.astype(int) - df_new = pd.concat([df, encoded_df], axis=1) - df = df_new.drop(columns=selected_columns).copy() - st.session_state.df = df - st.experimental_rerun() -with col3: - st.write('') \ No newline at end of file diff --git a/spaces/davanstrien/qdrant_test/load.py b/spaces/davanstrien/qdrant_test/load.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/gradio/templates/cdn/assets/index-0a4ea765.js b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/gradio/templates/cdn/assets/index-0a4ea765.js deleted file mode 100644 index db5a878b2ab740081bdc9829280727ed01bfeee7..0000000000000000000000000000000000000000 --- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/gradio/templates/cdn/assets/index-0a4ea765.js +++ /dev/null @@ -1,2 +0,0 @@ -import{S as p}from"./StaticForm-775ac3c9.js";import"./index-9e76ffee.js";const t=["static"];export{p as Component,t as modes}; -//# sourceMappingURL=index-0a4ea765.js.map diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/markdown_it/rules_inline/text.py 
b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/markdown_it/rules_inline/text.py deleted file mode 100644 index f306b2e4cecde67aa3d363d3844e8b40a70a48b1..0000000000000000000000000000000000000000 --- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/markdown_it/rules_inline/text.py +++ /dev/null @@ -1,53 +0,0 @@ -# Skip text characters for text token, place those to pending buffer -# and increment current pos -from .state_inline import StateInline - -# Rule to skip pure text -# '{}$%@~+=:' reserved for extensions - -# !!!! Don't confuse with "Markdown ASCII Punctuation" chars -# http://spec.commonmark.org/0.15/#ascii-punctuation-character - - -_TerminatorChars = { - "\n", - "!", - "#", - "$", - "%", - "&", - "*", - "+", - "-", - ":", - "<", - "=", - ">", - "@", - "[", - "\\", - "]", - "^", - "_", - "`", - "{", - "}", - "~", -} - - -def text(state: StateInline, silent: bool) -> bool: - pos = state.pos - posMax = state.posMax - while (pos < posMax) and state.src[pos] not in _TerminatorChars: - pos += 1 - - if pos == state.pos: - return False - - if not silent: - state.pending += state.src[state.pos : pos] - - state.pos = pos - - return True diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/matplotlib/backends/backend_gtk4cairo.py b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/matplotlib/backends/backend_gtk4cairo.py deleted file mode 100644 index 83cbd081c26d9de8c284c2e89cd3bd751e17d4ed..0000000000000000000000000000000000000000 --- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/matplotlib/backends/backend_gtk4cairo.py +++ /dev/null @@ -1,29 +0,0 @@ -from contextlib import nullcontext - -from .backend_cairo import ( # noqa - FigureCanvasCairo, _RendererGTKCairo as RendererGTK4Cairo) -from .backend_gtk4 import Gtk, FigureCanvasGTK4, _BackendGTK4 - - -class FigureCanvasGTK4Cairo(FigureCanvasCairo, FigureCanvasGTK4): - _context_is_scaled = True - - def on_draw_event(self, widget, ctx): - with (self.toolbar._wait_cursor_for_draw_cm() if self.toolbar - else nullcontext()): - self._renderer.set_context(ctx) - scale = self.device_pixel_ratio - # Scale physical drawing to logical size. 
- ctx.scale(1 / scale, 1 / scale) - allocation = self.get_allocation() - Gtk.render_background( - self.get_style_context(), ctx, - allocation.x, allocation.y, - allocation.width, allocation.height) - self._renderer.dpi = self.figure.dpi - self.figure.draw(self._renderer) - - -@_BackendGTK4.export -class _BackendGTK4Cairo(_BackendGTK4): - FigureCanvas = FigureCanvasGTK4Cairo diff --git a/spaces/devfinwiz/Dynamic-QR/README.md b/spaces/devfinwiz/Dynamic-QR/README.md deleted file mode 100644 index 1396059c99a7a3d3333d99b0081bf1558f528e5a..0000000000000000000000000000000000000000 --- a/spaces/devfinwiz/Dynamic-QR/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Dynamic QRGenerator -emoji: 😻 -colorFrom: indigo -colorTo: pink -sdk: gradio -sdk_version: 3.3 -app_file: Driver.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/diacanFperku/AutoGPT/Cummins Calterm Full Keygen Download Site HOT!.md b/spaces/diacanFperku/AutoGPT/Cummins Calterm Full Keygen Download Site HOT!.md deleted file mode 100644 index a10e081b1fe96ef41c726ba422e2c259e03be26f..0000000000000000000000000000000000000000 --- a/spaces/diacanFperku/AutoGPT/Cummins Calterm Full Keygen Download Site HOT!.md +++ /dev/null @@ -1,8 +0,0 @@ - -

    Each Cummins service center is equipped with a service advisor who can assist you with routine maintenance, repairs and diagnostics. You can also download and print a technical service manual (TSM) and parts catalog from the Cummins website. The TSM includes diagnostic codes, electrical diagrams, illustrations, adjustment recommendations, test procedures and parts replacement recommendations.

    -

    The INLINE 7 adapter communicates with your PC over a standard USB connector as well as over Wi-Fi or Bluetooth; as an industry leader, Cummins is among the first to release an adapter with all three connection options. INLINE 7 is fully compliant with the Technology and Maintenance Council's RP1210 standard and supports 250 and 500 kbaud operation, including up to 1 megabaud.

    -

    cummins calterm full keygen download site


    Download Zip ✵✵✵ https://gohhs.com/2uFUst



    -

    inline 7
    the inline 7 adapter communicates with your pc using a universal serial bus (usb) through a standard usb connector as well as through wifi or bluetooth. as an industry leader, cummins is among the first to release an adapter with all three connection options. inline 7 is fully compliant with the technology and maintenance councils rp1210 standard, and also offers 250 and 500 k baud support, including up to 1 megabaud support.

    -

    cummins insite is a small application designed specifically for your cummins engine. it is designed to monitor electronic control modules (ecms) and modify calibration parameters and feature settings. calterm is a useful tool for modifying calibration parameters and feature settings in an engineering development and test environment. you can use calterm to easily create and modify calibration parameters and feature settings..

    899543212b
    -
    -
    \ No newline at end of file diff --git a/spaces/diacanFperku/AutoGPT/Green Street Hooligans 1080p Legendado 12.md b/spaces/diacanFperku/AutoGPT/Green Street Hooligans 1080p Legendado 12.md deleted file mode 100644 index c5e561807121d929d553c0533b33594031767a20..0000000000000000000000000000000000000000 --- a/spaces/diacanFperku/AutoGPT/Green Street Hooligans 1080p Legendado 12.md +++ /dev/null @@ -1,28 +0,0 @@ -

    Green Street Hooligans 1080p Legendado 12


    Download Zip https://gohhs.com/2uFVN4



    - -A: - -There is no such thing as greenstreet hooligans, every match played was played by hooligans of one club, which became known as hooligans of greenstreet. "greenstreet hooligans" is a common nickname for football hooligans - -Wikipedia - -This a TV show that has nothing to do with this fact. - -1. Field of the Invention - -The present invention relates to the field of integrated circuit design and manufacture and, more particularly, to the reduction of the effect of voltage-mode noise during power-up of a flash memory integrated circuit. - -2. Description of the Related Art - -As the performance requirements for integrated circuits continue to increase, the functional voltage available to operate the circuit components is continually decreasing. In general, the maximum operating voltage of an integrated circuit increases as the operating speed of the components increases. However, the operating voltage of any component is limited by the supply voltage of the integrated circuit, which is generally set to a value of 3.3 volts. - -Integrated circuits typically operate using a variety of voltage modes. In a low voltage (low voltage) mode, the voltage supplied to the integrated circuit is kept near the voltage supply level of the integrated circuit. In a high voltage (high voltage) mode, the voltage supplied to the integrated circuit is set higher than the voltage supply level. In a high voltage ramping mode, the voltage supplied to the integrated circuit is progressively ramped up from the voltage supply level to the desired operating voltage. - -When the operating voltage of an integrated circuit is ramped up in a high voltage ramping mode, current paths in the integrated circuit are inadvertently charged. Charging of current paths occurs as a result of capacitance and resistance associated with the various components of the integrated circuit. As a result of the voltage mode noise induced during a high voltage ramping mode, the integrated circuit may experience an improper power-up sequence. In addition, the power-up sequence may be degraded by other noise factors. - -The problem of power-up noise is particularly acute for integrated circuits which operate in a low voltage mode or a high voltage ramping mode. The problem is exacerbated for integrated circuits that incorporate non-volatile flash memory devices. - -Non-volatile flash memory devices are divided into two categories: NOR flash memory devices and NAND flash memory devices. A flash memory device stores information in an array of memory cells. The array of memory cells includes a large number of memory cells, such as, for example 4fefd39f24
    -
    -
    -

    diff --git a/spaces/diacanFperku/AutoGPT/Gta San Andreas Car Mirror Mod.md b/spaces/diacanFperku/AutoGPT/Gta San Andreas Car Mirror Mod.md deleted file mode 100644 index 1aaa9c03ddd4d1100e6ccfa2d36a2380826b1fb8..0000000000000000000000000000000000000000 --- a/spaces/diacanFperku/AutoGPT/Gta San Andreas Car Mirror Mod.md +++ /dev/null @@ -1,10 +0,0 @@ - -

    A friend of mine (who has made a car mirror mod for GTA: San Andreas) and I have been talking about a car mirror mod for GTA: San Andreas. The mod would let you fit a mirror to the side of your car that shows your first-person view on the other side. This would be a lot of work for me because of the modder that made the original car mirror mod, but it would probably be a good mod.

    -

    Gta San Andreas Car Mirror Mod


    Download ✑ ✑ ✑ https://gohhs.com/2uFUdy



    -

    I'm making a car mirror mod that doesn't have the graphics menu. It would give you a mirror for your car, installed on the side of the car and showing your first-person view on the other side. However, it would not be a graphics menu mod.

    -

    If you have a friend and you pick him up in the game, you have the option to call a taxi for him. If you do, and then you pick up another friend, or a taxi drops off the first friend, he'll always return to his taxi, even if he's in the middle of an area. If you leave the taxi in a location, the taxi will always go there, even if you're not in the taxi. This can be useful for exploring the San Andreas countryside.

    -

    In GTA San Andreas, the player can take a taxi to the airport and leave the taxi on the ground near the terminal's entrance. Then, when the player returns and picks the taxi up, it will leave the player in the airport and head for the taxi's pick-up spot. It's also possible to take a taxi to the airport and leave it there, then drive away and leave the taxi in a different location.

    -

    -

    You can also use taxi cabs to move around in GTA San Andreas. The player can enter the back seat of a taxi cab, and it will drive like a regular car. In fact, you can use a taxi to get past checkpoints.

    899543212b
    -
    -
    \ No newline at end of file diff --git a/spaces/diagaiwei/ir_chinese_medqa/utility/rankings/split_by_offset.py b/spaces/diagaiwei/ir_chinese_medqa/utility/rankings/split_by_offset.py deleted file mode 100644 index 9d2e8cfd2e770097542a6059f39e51b039f3014f..0000000000000000000000000000000000000000 --- a/spaces/diagaiwei/ir_chinese_medqa/utility/rankings/split_by_offset.py +++ /dev/null @@ -1,44 +0,0 @@ -""" -Split the ranked lists after retrieval with a merged query set. -""" - -import os -import random - -from argparse import ArgumentParser - - -def main(args): - output_paths = ['{}.{}'.format(args.ranking, split) for split in args.names] - assert all(not os.path.exists(path) for path in output_paths), output_paths - - output_files = [open(path, 'w') for path in output_paths] - - with open(args.ranking) as f: - for line in f: - qid, pid, rank, *other = line.strip().split('\t') - qid = int(qid) - split_output_path = output_files[qid // args.gap - 1] - qid = qid % args.gap - - split_output_path.write('\t'.join([str(x) for x in [qid, pid, rank, *other]]) + '\n') - - print(f.name) - - _ = [f.close() for f in output_files] - - print("#> Done!") - - -if __name__ == "__main__": - random.seed(12345) - - parser = ArgumentParser(description='Subsample the dev set.') - parser.add_argument('--ranking', dest='ranking', required=True) - - parser.add_argument('--names', dest='names', required=False, default=['train', 'dev', 'test'], type=str, nargs='+') # order matters! - parser.add_argument('--gap', dest='gap', required=False, default=1_000_000_000, type=int) # larger than any individual query set - - args = parser.parse_args() - - main(args) diff --git a/spaces/digitalxingtong/Un-Bert-Vits2/commons.py b/spaces/digitalxingtong/Un-Bert-Vits2/commons.py deleted file mode 100644 index 9ad0444b61cbadaa388619986c2889c707d873ce..0000000000000000000000000000000000000000 --- a/spaces/digitalxingtong/Un-Bert-Vits2/commons.py +++ /dev/null @@ -1,161 +0,0 @@ -import math -import numpy as np -import torch -from torch import nn -from torch.nn import functional as F - - -def init_weights(m, mean=0.0, std=0.01): - classname = m.__class__.__name__ - if classname.find("Conv") != -1: - m.weight.data.normal_(mean, std) - - -def get_padding(kernel_size, dilation=1): - return int((kernel_size*dilation - dilation)/2) - - -def convert_pad_shape(pad_shape): - l = pad_shape[::-1] - pad_shape = [item for sublist in l for item in sublist] - return pad_shape - - -def intersperse(lst, item): - result = [item] * (len(lst) * 2 + 1) - result[1::2] = lst - return result - - -def kl_divergence(m_p, logs_p, m_q, logs_q): - """KL(P||Q)""" - kl = (logs_q - logs_p) - 0.5 - kl += 0.5 * (torch.exp(2. * logs_p) + ((m_p - m_q)**2)) * torch.exp(-2. 
* logs_q) - return kl - - -def rand_gumbel(shape): - """Sample from the Gumbel distribution, protect from overflows.""" - uniform_samples = torch.rand(shape) * 0.99998 + 0.00001 - return -torch.log(-torch.log(uniform_samples)) - - -def rand_gumbel_like(x): - g = rand_gumbel(x.size()).to(dtype=x.dtype, device=x.device) - return g - - -def slice_segments(x, ids_str, segment_size=4): - ret = torch.zeros_like(x[:, :, :segment_size]) - for i in range(x.size(0)): - idx_str = ids_str[i] - idx_end = idx_str + segment_size - ret[i] = x[i, :, idx_str:idx_end] - return ret - - -def rand_slice_segments(x, x_lengths=None, segment_size=4): - b, d, t = x.size() - if x_lengths is None: - x_lengths = t - ids_str_max = x_lengths - segment_size + 1 - ids_str = (torch.rand([b]).to(device=x.device) * ids_str_max).to(dtype=torch.long) - ret = slice_segments(x, ids_str, segment_size) - return ret, ids_str - - -def get_timing_signal_1d( - length, channels, min_timescale=1.0, max_timescale=1.0e4): - position = torch.arange(length, dtype=torch.float) - num_timescales = channels // 2 - log_timescale_increment = ( - math.log(float(max_timescale) / float(min_timescale)) / - (num_timescales - 1)) - inv_timescales = min_timescale * torch.exp( - torch.arange(num_timescales, dtype=torch.float) * -log_timescale_increment) - scaled_time = position.unsqueeze(0) * inv_timescales.unsqueeze(1) - signal = torch.cat([torch.sin(scaled_time), torch.cos(scaled_time)], 0) - signal = F.pad(signal, [0, 0, 0, channels % 2]) - signal = signal.view(1, channels, length) - return signal - - -def add_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4): - b, channels, length = x.size() - signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale) - return x + signal.to(dtype=x.dtype, device=x.device) - - -def cat_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4, axis=1): - b, channels, length = x.size() - signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale) - return torch.cat([x, signal.to(dtype=x.dtype, device=x.device)], axis) - - -def subsequent_mask(length): - mask = torch.tril(torch.ones(length, length)).unsqueeze(0).unsqueeze(0) - return mask - - -@torch.jit.script -def fused_add_tanh_sigmoid_multiply(input_a, input_b, n_channels): - n_channels_int = n_channels[0] - in_act = input_a + input_b - t_act = torch.tanh(in_act[:, :n_channels_int, :]) - s_act = torch.sigmoid(in_act[:, n_channels_int:, :]) - acts = t_act * s_act - return acts - - -def convert_pad_shape(pad_shape): - l = pad_shape[::-1] - pad_shape = [item for sublist in l for item in sublist] - return pad_shape - - -def shift_1d(x): - x = F.pad(x, convert_pad_shape([[0, 0], [0, 0], [1, 0]]))[:, :, :-1] - return x - - -def sequence_mask(length, max_length=None): - if max_length is None: - max_length = length.max() - x = torch.arange(max_length, dtype=length.dtype, device=length.device) - return x.unsqueeze(0) < length.unsqueeze(1) - - -def generate_path(duration, mask): - """ - duration: [b, 1, t_x] - mask: [b, 1, t_y, t_x] - """ - device = duration.device - - b, _, t_y, t_x = mask.shape - cum_duration = torch.cumsum(duration, -1) - - cum_duration_flat = cum_duration.view(b * t_x) - path = sequence_mask(cum_duration_flat, t_y).to(mask.dtype) - path = path.view(b, t_x, t_y) - path = path - F.pad(path, convert_pad_shape([[0, 0], [1, 0], [0, 0]]))[:, :-1] - path = path.unsqueeze(1).transpose(2,3) * mask - return path - - -def clip_grad_value_(parameters, clip_value, norm_type=2): - if isinstance(parameters, torch.Tensor): 
- parameters = [parameters] - parameters = list(filter(lambda p: p.grad is not None, parameters)) - norm_type = float(norm_type) - if clip_value is not None: - clip_value = float(clip_value) - - total_norm = 0 - for p in parameters: - param_norm = p.grad.data.norm(norm_type) - total_norm += param_norm.item() ** norm_type - if clip_value is not None: - p.grad.data.clamp_(min=-clip_value, max=clip_value) - total_norm = total_norm ** (1. / norm_type) - return total_norm diff --git a/spaces/dineshreddy/WALT/mmdet/core/bbox/coder/base_bbox_coder.py b/spaces/dineshreddy/WALT/mmdet/core/bbox/coder/base_bbox_coder.py deleted file mode 100644 index cf0b34c7cc2fe561718b0c884990beb40a993643..0000000000000000000000000000000000000000 --- a/spaces/dineshreddy/WALT/mmdet/core/bbox/coder/base_bbox_coder.py +++ /dev/null @@ -1,17 +0,0 @@ -from abc import ABCMeta, abstractmethod - - -class BaseBBoxCoder(metaclass=ABCMeta): - """Base bounding box coder.""" - - def __init__(self, **kwargs): - pass - - @abstractmethod - def encode(self, bboxes, gt_bboxes): - """Encode deltas between bboxes and ground truth boxes.""" - - @abstractmethod - def decode(self, bboxes, bboxes_pred): - """Decode the predicted bboxes according to prediction and base - boxes.""" diff --git a/spaces/dineshreddy/WALT/mmdet/models/builder.py b/spaces/dineshreddy/WALT/mmdet/models/builder.py deleted file mode 100644 index 81c927e507a7c1625ffb114de10e93c94927af25..0000000000000000000000000000000000000000 --- a/spaces/dineshreddy/WALT/mmdet/models/builder.py +++ /dev/null @@ -1,77 +0,0 @@ -import warnings - -from mmcv.utils import Registry, build_from_cfg -from torch import nn - -BACKBONES = Registry('backbone') -NECKS = Registry('neck') -ROI_EXTRACTORS = Registry('roi_extractor') -SHARED_HEADS = Registry('shared_head') -HEADS = Registry('head') -LOSSES = Registry('loss') -DETECTORS = Registry('detector') - - -def build(cfg, registry, default_args=None): - """Build a module. - - Args: - cfg (dict, list[dict]): The config of modules, is is either a dict - or a list of configs. - registry (:obj:`Registry`): A registry the module belongs to. - default_args (dict, optional): Default arguments to build the module. - Defaults to None. - - Returns: - nn.Module: A built nn module. 
- """ - if isinstance(cfg, list): - modules = [ - build_from_cfg(cfg_, registry, default_args) for cfg_ in cfg - ] - return nn.Sequential(*modules) - else: - return build_from_cfg(cfg, registry, default_args) - - -def build_backbone(cfg): - """Build backbone.""" - return build(cfg, BACKBONES) - - -def build_neck(cfg): - """Build neck.""" - return build(cfg, NECKS) - - -def build_roi_extractor(cfg): - """Build roi extractor.""" - return build(cfg, ROI_EXTRACTORS) - - -def build_shared_head(cfg): - """Build shared head.""" - return build(cfg, SHARED_HEADS) - - -def build_head(cfg): - """Build head.""" - return build(cfg, HEADS) - - -def build_loss(cfg): - """Build loss.""" - return build(cfg, LOSSES) - - -def build_detector(cfg, train_cfg=None, test_cfg=None): - """Build detector.""" - if train_cfg is not None or test_cfg is not None: - warnings.warn( - 'train_cfg and test_cfg is deprecated, ' - 'please specify them in model', UserWarning) - assert cfg.get('train_cfg') is None or train_cfg is None, \ - 'train_cfg specified in both outer field and model field ' - assert cfg.get('test_cfg') is None or test_cfg is None, \ - 'test_cfg specified in both outer field and model field ' - return build(cfg, DETECTORS, dict(train_cfg=train_cfg, test_cfg=test_cfg)) diff --git a/spaces/dineshreddy/WALT/mmdet/models/necks/fpn_carafe.py b/spaces/dineshreddy/WALT/mmdet/models/necks/fpn_carafe.py deleted file mode 100644 index 302e6576df9914e49166539108d6048b78c1fe71..0000000000000000000000000000000000000000 --- a/spaces/dineshreddy/WALT/mmdet/models/necks/fpn_carafe.py +++ /dev/null @@ -1,267 +0,0 @@ -import torch.nn as nn -from mmcv.cnn import ConvModule, build_upsample_layer, xavier_init -from mmcv.ops.carafe import CARAFEPack - -from ..builder import NECKS - - -@NECKS.register_module() -class FPN_CARAFE(nn.Module): - """FPN_CARAFE is a more flexible implementation of FPN. It allows more - choice for upsample methods during the top-down pathway. - - It can reproduce the performance of ICCV 2019 paper - CARAFE: Content-Aware ReAssembly of FEatures - Please refer to https://arxiv.org/abs/1905.02188 for more details. - - Args: - in_channels (list[int]): Number of channels for each input feature map. - out_channels (int): Output channels of feature pyramids. - num_outs (int): Number of output stages. - start_level (int): Start level of feature pyramids. - (Default: 0) - end_level (int): End level of feature pyramids. - (Default: -1 indicates the last level). - norm_cfg (dict): Dictionary to construct and config norm layer. - activate (str): Type of activation function in ConvModule - (Default: None indicates w/o activation). - order (dict): Order of components in ConvModule. - upsample (str): Type of upsample layer. - upsample_cfg (dict): Dictionary to construct and config upsample layer. 
- """ - - def __init__(self, - in_channels, - out_channels, - num_outs, - start_level=0, - end_level=-1, - norm_cfg=None, - act_cfg=None, - order=('conv', 'norm', 'act'), - upsample_cfg=dict( - type='carafe', - up_kernel=5, - up_group=1, - encoder_kernel=3, - encoder_dilation=1)): - super(FPN_CARAFE, self).__init__() - assert isinstance(in_channels, list) - self.in_channels = in_channels - self.out_channels = out_channels - self.num_ins = len(in_channels) - self.num_outs = num_outs - self.norm_cfg = norm_cfg - self.act_cfg = act_cfg - self.with_bias = norm_cfg is None - self.upsample_cfg = upsample_cfg.copy() - self.upsample = self.upsample_cfg.get('type') - self.relu = nn.ReLU(inplace=False) - - self.order = order - assert order in [('conv', 'norm', 'act'), ('act', 'conv', 'norm')] - - assert self.upsample in [ - 'nearest', 'bilinear', 'deconv', 'pixel_shuffle', 'carafe', None - ] - if self.upsample in ['deconv', 'pixel_shuffle']: - assert hasattr( - self.upsample_cfg, - 'upsample_kernel') and self.upsample_cfg.upsample_kernel > 0 - self.upsample_kernel = self.upsample_cfg.pop('upsample_kernel') - - if end_level == -1: - self.backbone_end_level = self.num_ins - assert num_outs >= self.num_ins - start_level - else: - # if end_level < inputs, no extra level is allowed - self.backbone_end_level = end_level - assert end_level <= len(in_channels) - assert num_outs == end_level - start_level - self.start_level = start_level - self.end_level = end_level - - self.lateral_convs = nn.ModuleList() - self.fpn_convs = nn.ModuleList() - self.upsample_modules = nn.ModuleList() - - for i in range(self.start_level, self.backbone_end_level): - l_conv = ConvModule( - in_channels[i], - out_channels, - 1, - norm_cfg=norm_cfg, - bias=self.with_bias, - act_cfg=act_cfg, - inplace=False, - order=self.order) - fpn_conv = ConvModule( - out_channels, - out_channels, - 3, - padding=1, - norm_cfg=self.norm_cfg, - bias=self.with_bias, - act_cfg=act_cfg, - inplace=False, - order=self.order) - if i != self.backbone_end_level - 1: - upsample_cfg_ = self.upsample_cfg.copy() - if self.upsample == 'deconv': - upsample_cfg_.update( - in_channels=out_channels, - out_channels=out_channels, - kernel_size=self.upsample_kernel, - stride=2, - padding=(self.upsample_kernel - 1) // 2, - output_padding=(self.upsample_kernel - 1) // 2) - elif self.upsample == 'pixel_shuffle': - upsample_cfg_.update( - in_channels=out_channels, - out_channels=out_channels, - scale_factor=2, - upsample_kernel=self.upsample_kernel) - elif self.upsample == 'carafe': - upsample_cfg_.update(channels=out_channels, scale_factor=2) - else: - # suppress warnings - align_corners = (None - if self.upsample == 'nearest' else False) - upsample_cfg_.update( - scale_factor=2, - mode=self.upsample, - align_corners=align_corners) - upsample_module = build_upsample_layer(upsample_cfg_) - self.upsample_modules.append(upsample_module) - self.lateral_convs.append(l_conv) - self.fpn_convs.append(fpn_conv) - - # add extra conv layers (e.g., RetinaNet) - extra_out_levels = ( - num_outs - self.backbone_end_level + self.start_level) - if extra_out_levels >= 1: - for i in range(extra_out_levels): - in_channels = ( - self.in_channels[self.backbone_end_level - - 1] if i == 0 else out_channels) - extra_l_conv = ConvModule( - in_channels, - out_channels, - 3, - stride=2, - padding=1, - norm_cfg=norm_cfg, - bias=self.with_bias, - act_cfg=act_cfg, - inplace=False, - order=self.order) - if self.upsample == 'deconv': - upsampler_cfg_ = dict( - in_channels=out_channels, - 
out_channels=out_channels, - kernel_size=self.upsample_kernel, - stride=2, - padding=(self.upsample_kernel - 1) // 2, - output_padding=(self.upsample_kernel - 1) // 2) - elif self.upsample == 'pixel_shuffle': - upsampler_cfg_ = dict( - in_channels=out_channels, - out_channels=out_channels, - scale_factor=2, - upsample_kernel=self.upsample_kernel) - elif self.upsample == 'carafe': - upsampler_cfg_ = dict( - channels=out_channels, - scale_factor=2, - **self.upsample_cfg) - else: - # suppress warnings - align_corners = (None - if self.upsample == 'nearest' else False) - upsampler_cfg_ = dict( - scale_factor=2, - mode=self.upsample, - align_corners=align_corners) - upsampler_cfg_['type'] = self.upsample - upsample_module = build_upsample_layer(upsampler_cfg_) - extra_fpn_conv = ConvModule( - out_channels, - out_channels, - 3, - padding=1, - norm_cfg=self.norm_cfg, - bias=self.with_bias, - act_cfg=act_cfg, - inplace=False, - order=self.order) - self.upsample_modules.append(upsample_module) - self.fpn_convs.append(extra_fpn_conv) - self.lateral_convs.append(extra_l_conv) - - # default init_weights for conv(msra) and norm in ConvModule - def init_weights(self): - """Initialize the weights of module.""" - for m in self.modules(): - if isinstance(m, (nn.Conv2d, nn.ConvTranspose2d)): - xavier_init(m, distribution='uniform') - for m in self.modules(): - if isinstance(m, CARAFEPack): - m.init_weights() - - def slice_as(self, src, dst): - """Slice ``src`` as ``dst`` - - Note: - ``src`` should have the same or larger size than ``dst``. - - Args: - src (torch.Tensor): Tensors to be sliced. - dst (torch.Tensor): ``src`` will be sliced to have the same - size as ``dst``. - - Returns: - torch.Tensor: Sliced tensor. - """ - assert (src.size(2) >= dst.size(2)) and (src.size(3) >= dst.size(3)) - if src.size(2) == dst.size(2) and src.size(3) == dst.size(3): - return src - else: - return src[:, :, :dst.size(2), :dst.size(3)] - - def tensor_add(self, a, b): - """Add tensors ``a`` and ``b`` that might have different sizes.""" - if a.size() == b.size(): - c = a + b - else: - c = a + self.slice_as(b, a) - return c - - def forward(self, inputs): - """Forward function.""" - assert len(inputs) == len(self.in_channels) - - # build laterals - laterals = [] - for i, lateral_conv in enumerate(self.lateral_convs): - if i <= self.backbone_end_level - self.start_level: - input = inputs[min(i + self.start_level, len(inputs) - 1)] - else: - input = laterals[-1] - lateral = lateral_conv(input) - laterals.append(lateral) - - # build top-down path - for i in range(len(laterals) - 1, 0, -1): - if self.upsample is not None: - upsample_feat = self.upsample_modules[i - 1](laterals[i]) - else: - upsample_feat = laterals[i] - laterals[i - 1] = self.tensor_add(laterals[i - 1], upsample_feat) - - # build outputs - num_conv_outs = len(self.fpn_convs) - outs = [] - for i in range(num_conv_outs): - out = self.fpn_convs[i](laterals[i]) - outs.append(out) - return tuple(outs) diff --git a/spaces/dinhminh20521597/OCR_DEMO/configs/textrecog/master/master_toy_dataset.py b/spaces/dinhminh20521597/OCR_DEMO/configs/textrecog/master/master_toy_dataset.py deleted file mode 100644 index 3d0440240a28a2d64b2f0442cae7d628a7542f42..0000000000000000000000000000000000000000 --- a/spaces/dinhminh20521597/OCR_DEMO/configs/textrecog/master/master_toy_dataset.py +++ /dev/null @@ -1,30 +0,0 @@ -_base_ = [ - '../../_base_/default_runtime.py', '../../_base_/recog_models/master.py', - '../../_base_/schedules/schedule_adam_step_12e.py', - 
'../../_base_/recog_pipelines/master_pipeline.py', - '../../_base_/recog_datasets/toy_data.py' -] - -train_list = {{_base_.train_list}} -test_list = {{_base_.test_list}} - -train_pipeline = {{_base_.train_pipeline}} -test_pipeline = {{_base_.test_pipeline}} - -data = dict( - workers_per_gpu=2, - samples_per_gpu=8, - train=dict( - type='UniformConcatDataset', - datasets=train_list, - pipeline=train_pipeline), - val=dict( - type='UniformConcatDataset', - datasets=test_list, - pipeline=test_pipeline), - test=dict( - type='UniformConcatDataset', - datasets=test_list, - pipeline=test_pipeline)) - -evaluation = dict(interval=1, metric='acc') diff --git a/spaces/dorkai/singpt/modules/models.py b/spaces/dorkai/singpt/modules/models.py deleted file mode 100644 index f4bb11fd3f7292657b008ab644b5be121d9980e5..0000000000000000000000000000000000000000 --- a/spaces/dorkai/singpt/modules/models.py +++ /dev/null @@ -1,168 +0,0 @@ -import json -import os -import time -import zipfile -from pathlib import Path - -import numpy as np -import torch -import transformers -from transformers import AutoModelForCausalLM, AutoTokenizer - -import modules.shared as shared - -transformers.logging.set_verbosity_error() - -local_rank = None - -if shared.args.flexgen: - from flexgen.flex_opt import (CompressionConfig, ExecutionEnv, OptLM, - Policy, str2bool) - -if shared.args.deepspeed: - import deepspeed - from transformers.deepspeed import (HfDeepSpeedConfig, - is_deepspeed_zero3_enabled) - - from modules.deepspeed_parameters import generate_ds_config - - # Distributed setup - local_rank = shared.args.local_rank if shared.args.local_rank is not None else int(os.getenv("LOCAL_RANK", "0")) - world_size = int(os.getenv("WORLD_SIZE", "1")) - torch.cuda.set_device(local_rank) - deepspeed.init_distributed() - ds_config = generate_ds_config(shared.args.bf16, 1 * world_size, shared.args.nvme_offload_dir) - dschf = HfDeepSpeedConfig(ds_config) # Keep this object alive for the Transformers integration - - -def load_model(model_name): - print(f"Loading {model_name}...") - t0 = time.time() - - shared.is_RWKV = model_name.lower().startswith('rwkv-') - - # Default settings - if not any([shared.args.cpu, shared.args.load_in_8bit, shared.args.gptq_bits, shared.args.auto_devices, shared.args.disk, shared.args.gpu_memory is not None, shared.args.cpu_memory is not None, shared.args.deepspeed, shared.args.flexgen, shared.is_RWKV]): - if any(size in shared.model_name.lower() for size in ('13b', '20b', '30b')): - model = AutoModelForCausalLM.from_pretrained(Path(f"models/{shared.model_name}"), device_map='auto', load_in_8bit=True) - else: - model = AutoModelForCausalLM.from_pretrained(Path(f"models/{shared.model_name}"), low_cpu_mem_usage=True, torch_dtype=torch.bfloat16 if shared.args.bf16 else torch.float16).cuda() - - # FlexGen - elif shared.args.flexgen: - # Initialize environment - env = ExecutionEnv.create(shared.args.disk_cache_dir) - - # Offloading policy - policy = Policy(1, 1, - shared.args.percent[0], shared.args.percent[1], - shared.args.percent[2], shared.args.percent[3], - shared.args.percent[4], shared.args.percent[5], - overlap=True, sep_layer=True, pin_weight=shared.args.pin_weight, - cpu_cache_compute=False, attn_sparsity=1.0, - compress_weight=shared.args.compress_weight, - comp_weight_config=CompressionConfig( - num_bits=4, group_size=64, - group_dim=0, symmetric=False), - compress_cache=False, - comp_cache_config=CompressionConfig( - num_bits=4, group_size=64, - group_dim=2, symmetric=False)) - - model = 
OptLM(f"facebook/{shared.model_name}", env, "models", policy) - - # DeepSpeed ZeRO-3 - elif shared.args.deepspeed: - model = AutoModelForCausalLM.from_pretrained(Path(f"models/{shared.model_name}"), torch_dtype=torch.bfloat16 if shared.args.bf16 else torch.float16) - model = deepspeed.initialize(model=model, config_params=ds_config, model_parameters=None, optimizer=None, lr_scheduler=None)[0] - model.module.eval() # Inference - print(f"DeepSpeed ZeRO-3 is enabled: {is_deepspeed_zero3_enabled()}") - - # RMKV model (not on HuggingFace) - elif shared.is_RWKV: - from modules.RWKV import RWKVModel, RWKVTokenizer - - model = RWKVModel.from_pretrained(Path(f'models/{model_name}'), dtype="fp32" if shared.args.cpu else "bf16" if shared.args.bf16 else "fp16", device="cpu" if shared.args.cpu else "cuda") - tokenizer = RWKVTokenizer.from_pretrained(Path('models')) - - return model, tokenizer - - # Quantized model - elif shared.args.gptq_bits > 0: - from modules.GPTQ_loader import load_quantized - - model = load_quantized(model_name) - - # Custom - else: - command = "AutoModelForCausalLM.from_pretrained" - params = ["low_cpu_mem_usage=True"] - if not shared.args.cpu and not torch.cuda.is_available(): - print("Warning: no GPU has been detected.\nFalling back to CPU mode.\n") - shared.args.cpu = True - - if shared.args.cpu: - params.append("low_cpu_mem_usage=True") - params.append("torch_dtype=torch.float32") - else: - params.append("device_map='auto'") - params.append("load_in_8bit=True" if shared.args.load_in_8bit else "torch_dtype=torch.bfloat16" if shared.args.bf16 else "torch_dtype=torch.float16") - - if shared.args.gpu_memory: - memory_map = shared.args.gpu_memory - max_memory = f"max_memory={{0: '{memory_map[0]}GiB'" - for i in range(1, len(memory_map)): - max_memory += (f", {i}: '{memory_map[i]}GiB'") - max_memory += (f", 'cpu': '{shared.args.cpu_memory or '99'}GiB'}}") - params.append(max_memory) - elif not shared.args.load_in_8bit: - total_mem = (torch.cuda.get_device_properties(0).total_memory/(1024*1024)) - suggestion = round((total_mem-1000)/1000)*1000 - if total_mem-suggestion < 800: - suggestion -= 1000 - suggestion = int(round(suggestion/1000)) - print(f"\033[1;32;1mAuto-assiging --gpu-memory {suggestion} for your GPU to try to prevent out-of-memory errors.\nYou can manually set other values.\033[0;37;0m") - params.append(f"max_memory={{0: '{suggestion}GiB', 'cpu': '{shared.args.cpu_memory or '99'}GiB'}}") - if shared.args.disk: - params.append(f"offload_folder='{shared.args.disk_cache_dir}'") - - command = f"{command}(Path(f'models/{shared.model_name}'), {', '.join(set(params))})" - model = eval(command) - - # Loading the tokenizer - if shared.model_name.lower().startswith(('gpt4chan', 'gpt-4chan', '4chan')) and Path("models/gpt-j-6B/").exists(): - tokenizer = AutoTokenizer.from_pretrained(Path("models/gpt-j-6B/")) - else: - tokenizer = AutoTokenizer.from_pretrained(Path(f"models/{shared.model_name}/")) - tokenizer.truncation_side = 'left' - - print(f"Loaded the model in {(time.time()-t0):.2f} seconds.") - return model, tokenizer - -def load_soft_prompt(name): - if name == 'None': - shared.soft_prompt = False - shared.soft_prompt_tensor = None - else: - with zipfile.ZipFile(Path(f'softprompts/{name}.zip')) as zf: - zf.extract('tensor.npy') - zf.extract('meta.json') - j = json.loads(open('meta.json', 'r').read()) - print(f"\nLoading the softprompt \"{name}\".") - for field in j: - if field != 'name': - if type(j[field]) is list: - print(f"{field}: {', '.join(j[field])}") - else: - 
print(f"{field}: {j[field]}") - print() - tensor = np.load('tensor.npy') - Path('tensor.npy').unlink() - Path('meta.json').unlink() - tensor = torch.Tensor(tensor).to(device=shared.model.device, dtype=shared.model.dtype) - tensor = torch.reshape(tensor, (1, tensor.shape[0], tensor.shape[1])) - - shared.soft_prompt = True - shared.soft_prompt_tensor = tensor - - return name diff --git a/spaces/dragonSwing/annotate-anything/tag2text/models/utils.py b/spaces/dragonSwing/annotate-anything/tag2text/models/utils.py deleted file mode 100644 index ab456bc8effcb52b50e8f97ef28aa5c1c2b56b36..0000000000000000000000000000000000000000 --- a/spaces/dragonSwing/annotate-anything/tag2text/models/utils.py +++ /dev/null @@ -1,241 +0,0 @@ -import json -import math -import os -from pathlib import Path -from typing import List -from urllib.parse import urlparse - -import torch -from models.swin_transformer import interpolate_relative_pos_embed -from models.vit import interpolate_pos_embed -from timm.models.hub import download_cached_file -from torch import nn -from transformers import BertTokenizer - -CONFIG_PATH = Path(__file__).resolve().parents[1] - - -def read_json(rpath): - with open(rpath) as f: - return json.load(f) - - -def tie_encoder_decoder_weights( - encoder: nn.Module, decoder: nn.Module, base_model_prefix: str, skip_key: str -): - uninitialized_encoder_weights: List[str] = [] - if decoder.__class__ != encoder.__class__: - logger.info( - f"{decoder.__class__} and {encoder.__class__} are not equal. In this case make sure that all encoder weights are correctly initialized." - ) - - def tie_encoder_to_decoder_recursively( - decoder_pointer: nn.Module, - encoder_pointer: nn.Module, - module_name: str, - uninitialized_encoder_weights: List[str], - skip_key: str, - depth=0, - ): - assert isinstance(decoder_pointer, nn.Module) and isinstance( - encoder_pointer, nn.Module - ), f"{decoder_pointer} and {encoder_pointer} have to be of type torch.nn.Module" - if hasattr(decoder_pointer, "weight") and skip_key not in module_name: - assert hasattr(encoder_pointer, "weight") - encoder_pointer.weight = decoder_pointer.weight - if hasattr(decoder_pointer, "bias"): - assert hasattr(encoder_pointer, "bias") - encoder_pointer.bias = decoder_pointer.bias - print(module_name + " is tied") - return - - encoder_modules = encoder_pointer._modules - decoder_modules = decoder_pointer._modules - if len(decoder_modules) > 0: - assert ( - len(encoder_modules) > 0 - ), f"Encoder module {encoder_pointer} does not match decoder module {decoder_pointer}" - - all_encoder_weights = { - module_name + "/" + sub_name for sub_name in encoder_modules.keys() - } - encoder_layer_pos = 0 - for name, module in decoder_modules.items(): - if name.isdigit(): - encoder_name = str(int(name) + encoder_layer_pos) - decoder_name = name - if not isinstance( - decoder_modules[decoder_name], - type(encoder_modules[encoder_name]), - ) and len(encoder_modules) != len(decoder_modules): - # this can happen if the name corresponds to the position in a list module list of layers - # in this case the decoder has added a cross-attention that the encoder does not have - # thus skip this step and subtract one layer pos from encoder - encoder_layer_pos -= 1 - continue - elif name not in encoder_modules: - continue - elif depth > 500: - raise ValueError( - "Max depth of recursive function `tie_encoder_to_decoder` reached. It seems that there is a circular dependency between two or more `nn.Modules` of your model." 
- ) - else: - decoder_name = encoder_name = name - tie_encoder_to_decoder_recursively( - decoder_modules[decoder_name], - encoder_modules[encoder_name], - module_name + "/" + name, - uninitialized_encoder_weights, - skip_key, - depth=depth + 1, - ) - all_encoder_weights.remove(module_name + "/" + encoder_name) - - uninitialized_encoder_weights += list(all_encoder_weights) - - # tie weights recursively - tie_encoder_to_decoder_recursively( - decoder, encoder, base_model_prefix, uninitialized_encoder_weights, skip_key - ) - - -class GroupWiseLinear(nn.Module): - # could be changed to: - # output = torch.einsum('ijk,zjk->ij', x, self.W) - # or output = torch.einsum('ijk,jk->ij', x, self.W[0]) - def __init__(self, num_class, hidden_dim, bias=True): - super().__init__() - self.num_class = num_class - self.hidden_dim = hidden_dim - self.bias = bias - - self.W = nn.Parameter(torch.Tensor(1, num_class, hidden_dim)) - if bias: - self.b = nn.Parameter(torch.Tensor(1, num_class)) - self.reset_parameters() - - def reset_parameters(self): - stdv = 1.0 / math.sqrt(self.W.size(2)) - for i in range(self.num_class): - self.W[0][i].data.uniform_(-stdv, stdv) - if self.bias: - for i in range(self.num_class): - self.b[0][i].data.uniform_(-stdv, stdv) - - def forward(self, x): - # x: B,K,d - x = (self.W * x).sum(-1) - if self.bias: - x = x + self.b - return x - - -def init_tokenizer(): - tokenizer = BertTokenizer.from_pretrained("bert-base-uncased") - tokenizer.add_special_tokens({"bos_token": "[DEC]"}) - tokenizer.add_special_tokens({"additional_special_tokens": ["[ENC]"]}) - tokenizer.enc_token_id = tokenizer.additional_special_tokens_ids[0] - return tokenizer - - -def create_vit( - vit, image_size, use_grad_checkpointing=False, ckpt_layer=0, drop_path_rate=0 -): - assert vit in ["base", "large"], "vit parameter must be base or large" - if vit == "base": - vision_width = 768 - visual_encoder = VisionTransformer( - img_size=image_size, - patch_size=16, - embed_dim=vision_width, - depth=12, - num_heads=12, - use_grad_checkpointing=use_grad_checkpointing, - ckpt_layer=ckpt_layer, - drop_path_rate=0 or drop_path_rate, - ) - elif vit == "large": - vision_width = 1024 - visual_encoder = VisionTransformer( - img_size=image_size, - patch_size=16, - embed_dim=vision_width, - depth=24, - num_heads=16, - use_grad_checkpointing=use_grad_checkpointing, - ckpt_layer=ckpt_layer, - drop_path_rate=0.1 or drop_path_rate, - ) - return visual_encoder, vision_width - - -def is_url(url_or_filename): - parsed = urlparse(url_or_filename) - return parsed.scheme in ("http", "https") - - -def load_checkpoint(model, url_or_filename): - if is_url(url_or_filename): - cached_file = download_cached_file( - url_or_filename, check_hash=False, progress=True - ) - checkpoint = torch.load(cached_file, map_location="cpu") - elif os.path.isfile(url_or_filename): - checkpoint = torch.load(url_or_filename, map_location="cpu") - else: - raise RuntimeError("checkpoint url or path is invalid") - - state_dict = checkpoint["model"] - - state_dict["visual_encoder.pos_embed"] = interpolate_pos_embed( - state_dict["visual_encoder.pos_embed"], model.visual_encoder - ) - if "visual_encoder_m.pos_embed" in model.state_dict().keys(): - state_dict["visual_encoder_m.pos_embed"] = interpolate_pos_embed( - state_dict["visual_encoder_m.pos_embed"], model.visual_encoder_m - ) - for key in model.state_dict().keys(): - if key in state_dict.keys(): - if state_dict[key].shape != model.state_dict()[key].shape: - del state_dict[key] - - msg = 
model.load_state_dict(state_dict, strict=False) - print("load checkpoint from %s" % url_or_filename) - return model, msg - - -def load_checkpoint_swinbase(model, url_or_filename, kwargs): - if kwargs["image_size"] == 224: - vision_config_path = f"{CONFIG_PATH}/configs/swin/config_swinB_224.json" - elif kwargs["image_size"] == 384: - vision_config_path = f"{CONFIG_PATH}/configs/swin/config_swinB_384.json" - window_size = read_json(vision_config_path)["window_size"] - print("--------------") - print(url_or_filename) - print("--------------") - if is_url(url_or_filename): - cached_file = download_cached_file( - url_or_filename, check_hash=False, progress=True - ) - checkpoint = torch.load(cached_file, map_location="cpu") - elif os.path.isfile(url_or_filename): - checkpoint = torch.load(url_or_filename, map_location="cpu") - else: - raise RuntimeError("checkpoint url or path is invalid") - - state_dict = checkpoint["model"] - - for k in list(state_dict.keys()): - if "relative_position_bias_table" in k: - dst_num_pos = (2 * window_size - 1) ** 2 - state_dict[k] = interpolate_relative_pos_embed( - state_dict[k], dst_num_pos, param_name=k - ) - elif ("relative_position_index" in k) or ("attn_mask" in k): - del state_dict[k] - elif "vision_multi" in k: - state_dict[k.replace("vision_multi", "tagging_head")] = state_dict.pop(k) - - msg = model.load_state_dict(state_dict, strict=False) - print("load checkpoint from %s" % url_or_filename) - return model, msg diff --git a/spaces/drift-ai/recruiter-assistant-jbfxrs/app.py b/spaces/drift-ai/recruiter-assistant-jbfxrs/app.py deleted file mode 100644 index 7ba7b8090f7afebbfd086f2c981bbcc45376ee25..0000000000000000000000000000000000000000 --- a/spaces/drift-ai/recruiter-assistant-jbfxrs/app.py +++ /dev/null @@ -1,359 +0,0 @@ -import json -import os -from concurrent.futures import ThreadPoolExecutor - -import gradio as gr -import requests -from langchain.chat_models import ChatOpenAI - -import utils -from prompts import preprocess, recruiting_assistant, matches, intro - -llm = ChatOpenAI(temperature=0.0, openai_api_key=os.environ["OPENAI"]) -MAX_SKILLS = 10 - - -def preprocess_resume(llm, resume) -> str: - result = preprocess.preprocess_resume(llm, resume) - resume_preprocess = result["resume_preprocess"] - return resume_preprocess - - -def call_endpoint(resume_preprocessed): - url = f"https://3jxjznzonb.execute-api.eu-west-1.amazonaws.com/dev/prediction" # vervang met uw API-eindpunt - headers = { - "Content-Type": "application/json", - "x-api-key": os.environ["API_KEY"], - } # pas headers indien nodig aan - response = requests.post( - url, - headers=headers, - data=json.dumps({"text": resume_preprocessed, "limit": 10}), - ) - response_data = response.json() - return response_data - - -def postprocess_vancy(vacancies, resume): - if "prediction" in vacancies: - prediction = vacancies["prediction"] - if isinstance(prediction, list): - # Convert prediction to HTML table - html_table = "" - # Add table headers - html_table += "" - - # Prepare a list to hold the futures - futures = [] - matches_score_tuples = [] - # Create a ThreadPoolExecutor - with ThreadPoolExecutor() as executor: - for i, vacancy in enumerate(prediction): - # Schedule the get_skills_match function to run and store the future - future = executor.submit( - matches.get_skills_match, llm, vacancy, resume - ) - futures.append((i, vacancy, future)) - - # Collect the results as they become available - for i, vacancy, future in futures: - skills_match = future.result() - skills_match_predicted = 
utils.get_json_list_from_result( - skills_match, "skills_match_predicted" - ) - print("getting matched skills ...") - counter = 0 - matched_skills = [] - for element in skills_match_predicted: - if element["resume_index"] > 0 and element["vacancy_index"] > 0: - counter += 1 - if counter > MAX_SKILLS: - break - matched_skills.append(element["content"]) - - # matched_skills = [ - # element["content"] - # for element in skills_match_predicted - # if element["resume_index"] > 0 and element["vacancy_index"] > 0 - # ] - for element in matched_skills: - vacancy = vacancy.lower().replace( - element.lower(), - f'{element}', - ) - matches_score_tuples.append( - (vacancy, matched_skills, len(matched_skills)) - ) - print("sorting matches based on the score.") - matches_score_tuples.sort(key=lambda x: x[-1], reverse=True) - print("constructing html table.") - for i, element in enumerate(matches_score_tuples, 1): - vacancy, matched_skills, score = element - vacancy_formatted = vacancy.replace(".,", "
    ") - vacancy_formatted = f"VACATURE {i}:
    {vacancy_formatted}" - matches_html = "
    - ".join(matched_skills) - resume_matched_formatted = f"Score: {score}
    -{matches_html}" - html_table += f"" - html_table += "
    VacancyMatch
    {vacancy_formatted}{resume_matched_formatted}
    " - return html_table - - return "niets teruggevonden, probeer nogmaals ..." - - -def search(resume): - original_resume = resume - resume_preprocessed = preprocess_resume(llm, resume) - vacancies = call_endpoint(resume_preprocessed) - vacancies_formatted = postprocess_vancy(vacancies, original_resume) - return vacancies_formatted - - -def get_intro_jobseeker(vacancy, resume): - intro_ = intro.get_intro_jobseeker(llm, vacancy, resume) - return intro_["intro"] - - -def get_intro_jobissuer(vacancy, resume): - intro_ = intro.get_intro_jobissuer(llm, vacancy, resume) - return intro_["intro"] - - -def get_intro(vacancy, resume): - jobseeker = get_intro_jobseeker(vacancy, resume) - jobissuer = get_intro_jobissuer(vacancy, resume) - return f""" -EMAIL KANDIDAAT: -================= - -{jobseeker} - -EMAIL BEDRIJF: -=============== - -{jobissuer} - -""" - - -examples = [ - """ - Jan De Vries - Magazijnier - - Adres: Hoofdstraat 123, 1000 Brussel - Telefoon: +32 123 456 789 - E-mail: jan.devries@email.com - - Ervaren magazijnier met meer dan 7 jaar ervaring in het beheer van auto-onderdelen in grootschalige distributiecentra. Ik zoek een positie waar ik mijn expertise in voorraadbeheer, orderverwerking en teammanagement kan toepassen om de efficiëntie en productiviteit van het magazijn te verbeteren. - - Werkervaring - - Magazijnier, AutoParts Warehouse, Gent - Januari 2017 - Heden - - Verantwoordelijk voor de ontvangst en verwerking van inkomende leveringen. - Coördinatie van de dagelijkse pick- en pack-activiteiten. - Beheer van een team van 5 medewerkers om te zorgen voor tijdige leveringen. - Implementatie van een nieuw voorraadsysteem wat resulteerde in een vermindering van 15% in overstock. - Assistent Magazijnier, CarParts Distributie, Antwerpen - Juni 2014 - December 2016 - - Geholpen bij het organiseren van het magazijn voor optimale opslag. - Bijgehouden van voorraadniveaus en tijdig bestellingen geplaatst. - Geholpen bij het trainen van nieuwe medewerkers. - Opleiding - - Diploma Secundair Onderwijs, Technisch Onderwijs, VTI Brussel - 2010 - 2013 - - Vaardigheden - - Grondige kennis van auto-onderdelen. - Ervaren in het gebruik van voorraadbeheersystemen. - Uitstekende organisatorische en multitasking-vaardigheden. - Sterke communicatieve vaardigheden en teamspeler. - Bekwaam in het gebruik van heftrucks en andere magazijnapparatuur. - Talen - - Nederlands (Moedertaal) - Engels (Vloeiend) - Frans (Basis) - Certificaten - - Heftruckcertificaat, VDAB, 2014 - """, - """ - Johannes van der Meer - Chef-kok - - Adres: Klaverstraat 45, 3000 Leiden - Telefoon: +31 6 1234 5678 - E-mail: johannesvdm@email.nl - - Gepassioneerde en ervaren chef-kok met meer dan 12 jaar ervaring in zowel traditionele Nederlandse als internationale keukens. Bekend om het creëren van innovatieve en heerlijke gerechten met een focus op verse en lokale ingrediënten. Sterke leiderschapsvaardigheden en een bewezen vermogen om keukenteams te leiden en te trainen. - - WERKERVARING - - Executive Chef – Luxe Restaurant De Zon, Amsterdam - Januari 2019 - Heden - - Verantwoordelijk voor het dagelijks beheer van de keuken. - Ontwikkeling en implementatie van nieuwe menu's volgens seizoensgebonden beschikbaarheid. - Training en begeleiding van een team van 15 koks en keukenpersoneel. - Sous Chef – Brasserie Lente, Utrecht - Juni 2013 - December 2018 - - Assisteerde de hoofdchef bij het plannen van menu's en het organiseren van speciale evenementen. - Beheerde voedselvoorraden en budgetten. 
- Onderhield relaties met leveranciers en zorgde voor de hoogste kwaliteit ingrediënten. - Chef de Partie – Restaurant De Oude Molen, Den Haag - Augustus 2008 - Mei 2013 - - Gespecialiseerd in sauzen en was verantwoordelijk voor de sauzen- en soepensectie. - Assisteerde bij de voorbereiding van dagelijkse specials. - Hielp bij het opleiden van junior koks. - OPLEIDING - - Diploma Culinaire Kunst, Culinaire School van Amsterdam - 2005 - 2007 - - Vakdiploma Kok, ROC Leiden - 2003 - 2005 - - VAARDIGHEDEN - - Uitstekende kooktechnieken - Menuontwikkeling - Teamleiderschap - Voedselveiligheid en hygiëne - Voorraadbeheer - Budgetbeheer - TALEN - - Nederlands (Moedertaal) - Engels (Vloeiend) - Frans (Basis) - - """, - """ - Naam: John Doe - Koeltechnieker - - Adres: Parkstraat 123, 1000 Stadsveld - Telefoon: +31 6 1234 5678 - E-mail: john.doe@email.com - Geboortedatum: 15 juli 1985 - Nationaliteit: Nederlandse - - Een toegewijde en bekwame koeltechnieker met 8 jaar ervaring in het ontwerpen, installeren, onderhouden en repareren van koelsystemen. Technisch onderlegd en bekend met verschillende koeltechnieken. Een probleemoplosser die snel storingen kan diagnosticeren en efficiënte oplossingen kan implementeren. Goed in teamverband en klantgericht. - - Werkervaring: - - Service Technicus bij KoelTech B.V. - januari 2018 tot heden - - Verantwoordelijk voor het installeren en onderhouden van commerciële en industriële koelsystemen. - Diagnose stellen en repareren van storingen in koelapparatuur. - Uitvoeren van preventief onderhoud om de prestaties van koelinstallaties te optimaliseren. - Klantgerichte aanpak om vragen en problemen van klanten op te lossen. - Assistent Koeltechnieker bij CoolAir Installaties - maart 2014 tot december 2017 - - Betrokken bij het ontwerp en de installatie van nieuwe koelsystemen in residentiële gebouwen. - Uitvoeren van druk- en temperatuurmetingen om de juiste werking van de systemen te waarborgen. - Oplossen van technische problemen en het vervangen van defecte onderdelen. - Het trainen van klanten in het juiste gebruik en onderhoud van koelapparatuur. - Opleiding: - - MBO Koeltechniek - Technische School Stadsveld - 2014 - Certificaat Veilig werken met koelinstallaties - Koelacademie Nederland - 2014 - Vaardigheden: - - Uitgebreide kennis van verschillende koeltechnieken en -systemen. - Sterk technisch inzicht en probleemoplossend vermogen. - Bekend met veiligheidsvoorschriften en -procedures in de koeltechniek. - Goede communicatieve vaardigheden om effectief te kunnen samenwerken met collega's en klanten. - Bekwaamheid in het lezen van technische tekeningen en schema's. - Talen: - - Nederlands: Moedertaal - Engels: Goed - Referenties: - Beschikbaar op verzoek. - - Opmerking: Dit CV bevat fictieve gegevens en dient alleen ter illustratie. Gebruik het als een sjabloon en vervang de gegevens door je eigen informatie bij het maken van een CV. - """, -] - -demo = gr.Blocks(theme=gr.themes.Soft()) - -with demo: - with gr.Group(): - with gr.Box(): - with gr.Row(elem_id="prompt-container").style( - mobile_collapse=False, equal_height=True - ): - with gr.Column(): - text_cv = gr.Textbox( - lines=7, - label="1. Voer een CV in en krijg relevante vacatures terug.", - ) - b1 = gr.Button("Zoek Vacatures", variant='primary', size="sm") - html_search_result = gr.HTML( - label="Top vacatures gevonden in de database", - ) - b1.click(search, inputs=text_cv, outputs=html_search_result) - - gr.Markdown( - """ -
    -
    -
    -
    -
    - """ - ) - - text_vacature = gr.Textbox( - label="2. Selecteer een geschikte vacature voor deze CV, plak deze in het tekstveld hieronder en krijg een relevante intro.", - lines=7, - ) - b2 = gr.Button("Schrijf Intro", variant='primary', size="sm") - - gr.Markdown( - """ -
    -
    -
    -
    -
    - """ - ) - - text_intro = gr.Textbox( - label="3. Introductie E-mail", - lines=7, - ) - b2.click( - get_intro, - inputs=[text_vacature, text_cv], - outputs=[text_intro], - ) - - gr.Markdown( - """ -
    -
    -
    -
    -
    - """ - ) - - gr.Examples( - examples=examples, - fn=search, - inputs=text_cv, - outputs=html_search_result, - cache_examples=False, - ) - -demo.launch() diff --git a/spaces/dvc890/go-chatgpt-api/api/platform/api.go b/spaces/dvc890/go-chatgpt-api/api/platform/api.go deleted file mode 100644 index f8541b055ee51b193f531703d10f9d30f093f4ba..0000000000000000000000000000000000000000 --- a/spaces/dvc890/go-chatgpt-api/api/platform/api.go +++ /dev/null @@ -1,224 +0,0 @@ -package platform - -import ( - "bytes" - "encoding/json" - "fmt" - "io" - "strings" - - "github.com/gin-gonic/gin" - "github.com/linweiyuan/go-chatgpt-api/api" - - http "github.com/bogdanfinn/fhttp" -) - -func ListModels(c *gin.Context) { - handleGet(c, apiListModels) -} - -func RetrieveModel(c *gin.Context) { - model := c.Param("model") - handleGet(c, fmt.Sprintf(apiRetrieveModel, model)) -} - -//goland:noinspection GoUnhandledErrorResult -func CreateCompletions(c *gin.Context) { - var request CreateCompletionsRequest - c.ShouldBindJSON(&request) - data, _ := json.Marshal(request) - resp, err := handlePost(c, apiCreateCompletions, data, request.Stream) - if err != nil { - return - } - - defer resp.Body.Close() - if request.Stream { - api.HandleConversationResponse(c, resp) - } else { - io.Copy(c.Writer, resp.Body) - } -} - -//goland:noinspection GoUnhandledErrorResult -func CreateChatCompletions(c *gin.Context) { - var request ChatCompletionsRequest - c.ShouldBindJSON(&request) - data, _ := json.Marshal(request) - resp, err := handlePost(c, apiCreataeChatCompletions, data, request.Stream) - if err != nil { - return - } - - defer resp.Body.Close() - if request.Stream { - api.HandleConversationResponse(c, resp) - } else { - io.Copy(c.Writer, resp.Body) - } -} - -//goland:noinspection GoUnhandledErrorResult -func CreateEdit(c *gin.Context) { - var request CreateEditRequest - c.ShouldBindJSON(&request) - data, _ := json.Marshal(request) - resp, err := handlePost(c, apiCreateEdit, data, false) - if err != nil { - return - } - - defer resp.Body.Close() - io.Copy(c.Writer, resp.Body) -} - -//goland:noinspection GoUnhandledErrorResult -func CreateImage(c *gin.Context) { - var request CreateImageRequest - c.ShouldBindJSON(&request) - data, _ := json.Marshal(request) - resp, err := handlePost(c, apiCreateImage, data, false) - if err != nil { - return - } - - defer resp.Body.Close() - io.Copy(c.Writer, resp.Body) -} - -//goland:noinspection GoUnhandledErrorResult -func CreateEmbeddings(c *gin.Context) { - var request CreateEmbeddingsRequest - c.ShouldBindJSON(&request) - data, _ := json.Marshal(request) - resp, err := handlePost(c, apiCreateEmbeddings, data, false) - if err != nil { - return - } - - defer resp.Body.Close() - io.Copy(c.Writer, resp.Body) -} - -func CreateModeration(c *gin.Context) { - var request CreateModerationRequest - c.ShouldBindJSON(&request) - data, _ := json.Marshal(request) - resp, err := handlePost(c, apiCreateModeration, data, false) - if err != nil { - return - } - - defer resp.Body.Close() - io.Copy(c.Writer, resp.Body) -} - -func ListFiles(c *gin.Context) { - handleGet(c, apiListFiles) -} - -func GetCreditGrants(c *gin.Context) { - handleGet(c, apiGetCreditGrants) -} - -//goland:noinspection GoUnhandledErrorResult -func Login(c *gin.Context) { - var loginInfo api.LoginInfo - if err := c.ShouldBindJSON(&loginInfo); err != nil { - c.AbortWithStatusJSON(http.StatusBadRequest, api.ReturnMessage(api.ParseUserInfoErrorMessage)) - return - } - - userLogin := UserLogin{ - client: api.NewHttpClient(), - } - - // 
hard refresh cookies - resp, _ := userLogin.client.Get(auth0LogoutUrl) - defer resp.Body.Close() - - // get authorized url - authorizedUrl, statusCode, err := userLogin.GetAuthorizedUrl("") - if err != nil { - c.AbortWithStatusJSON(statusCode, api.ReturnMessage(err.Error())) - return - } - - // get state - state, _, _ := userLogin.GetState(authorizedUrl) - - // check username - statusCode, err = userLogin.CheckUsername(state, loginInfo.Username) - if err != nil { - c.AbortWithStatusJSON(statusCode, api.ReturnMessage(err.Error())) - return - } - - // check password - code, statusCode, err := userLogin.CheckPassword(state, loginInfo.Username, loginInfo.Password) - if err != nil { - c.AbortWithStatusJSON(statusCode, api.ReturnMessage(err.Error())) - return - } - - // get access token - accessToken, statusCode, err := userLogin.GetAccessToken(code) - if err != nil { - c.AbortWithStatusJSON(statusCode, api.ReturnMessage(err.Error())) - return - } - - // get session key - var getAccessTokenResponse GetAccessTokenResponse - json.Unmarshal([]byte(accessToken), &getAccessTokenResponse) - req, _ := http.NewRequest(http.MethodPost, dashboardLoginUrl, strings.NewReader("{}")) - req.Header.Set("Content-Type", "application/json") - req.Header.Set("User-Agent", api.UserAgent) - req.Header.Set("Authorization", api.GetAccessToken(getAccessTokenResponse.AccessToken)) - resp, err = userLogin.client.Do(req) - if err != nil { - c.AbortWithStatusJSON(http.StatusInternalServerError, api.ReturnMessage(err.Error())) - return - } - - defer resp.Body.Close() - if resp.StatusCode != http.StatusOK { - c.AbortWithStatusJSON(resp.StatusCode, api.ReturnMessage(getSessionKeyErrorMessage)) - return - } - - io.Copy(c.Writer, resp.Body) -} - -func GetSubscription(c *gin.Context) { - handleGet(c, apiGetSubscription) -} - -func GetApiKeys(c *gin.Context) { - handleGet(c, apiGetApiKeys) -} - -//goland:noinspection GoUnhandledErrorResult -func handleGet(c *gin.Context, url string) { - req, _ := http.NewRequest(http.MethodGet, url, nil) - req.Header.Set("Authorization", api.GetAccessToken(c.GetHeader(api.AuthorizationHeader))) - resp, _ := api.Client.Do(req) - defer resp.Body.Close() - io.Copy(c.Writer, resp.Body) -} - -func handlePost(c *gin.Context, url string, data []byte, stream bool) (*http.Response, error) { - req, _ := http.NewRequest(http.MethodPost, url, bytes.NewBuffer(data)) - req.Header.Set("Authorization", api.GetAccessToken(c.GetHeader(api.AuthorizationHeader))) - if stream { - req.Header.Set("Accept", "text/event-stream") - } - req.Header.Set("Content-Type", "application/json") - resp, err := api.Client.Do(req) - if err != nil { - c.AbortWithStatusJSON(http.StatusInternalServerError, api.ReturnMessage(err.Error())) - return nil, err - } - - return resp, nil -} diff --git a/spaces/eaedk/agri-tech-fastapi-with-GUI/assets/templates/index.html b/spaces/eaedk/agri-tech-fastapi-with-GUI/assets/templates/index.html deleted file mode 100644 index 32beeaca6317f2261dbc955f3c7b58518099cc2e..0000000000000000000000000000000000000000 --- a/spaces/eaedk/agri-tech-fastapi-with-GUI/assets/templates/index.html +++ /dev/null @@ -1,14 +0,0 @@ - - - - - - - - Document - - -

    Welcome to Agri-Tech API

    -

    Kindly access the API Documentation link here.

    - - \ No newline at end of file diff --git a/spaces/editing-images/project/index.html b/spaces/editing-images/project/index.html deleted file mode 100644 index b1e32bbe4aaf3bccdd09ca76ab774c905ecc4a78..0000000000000000000000000000000000000000 --- a/spaces/editing-images/project/index.html +++ /dev/null @@ -1,446 +0,0 @@ - - - - - - LEDITS - Pipeline for editing images - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
    -
    -
    -
    -
    -

    LEDITS: Real Image Editing with DDPM Inversion and Semantic - Guidance

    -
    - - Linoy Tsaban, - - Apolinário Passos - -
    - Hugging Face🤗 -
    - - -
    -
    -
    -
    -
    - -
    -
    -
    - - - -

    - Left to right: original image, edited image purely with DDPM inversion, edited image with LEDITS- DDPM - inversion X - Semantic Guidance -

    - -
    -
    -
    - - -
    -
    - -
    -
    -

    Abstract

    -
    -

    - Recent large-scale text-guided diffusion models provide powerful image generation capabilities. - Currently, a significant effort is given to enable the modification of these images using text - only as means to offer intuitive and versatile editing. However, editing proves to be difficult - for these generative models due to the inherent nature of editing techniques, which involves - preserving certain content from the original image. Conversely, in text-based models, even minor - modifications to the text prompt frequently result in an entirely distinct result, making - attaining one-shot generation that accurately corresponds to the user's intent exceedingly - challenging. In addition, to edit a real image using these state-of-the-art tools, one must - first invert the image into the pretrained model’s domain - adding another factor affecting the - edit quality, as well as latency. In this overview, we propose LEDITS- a combined lightweight approach - for real-image editing, incorporating the Edit Friendly DDPM inversion technique with - Semantic Guidance, thus extending Semantic Guidance to real image editing, while - harnessing the editing capabilities of DDPM inversion. This approach achieves versatile edits, - both subtle and extensive as well as alterations in composition and style, while requiring no - additional training and optimization nor extensions to the architecture. -

    - -
    -
    -
    - -
    -
    - -
    -
    -
    - grid_example_1_LEDITS.jpg -
    -
    -
    - - - -
    -
    - -
    -

    DDPM Inversion X SEGA

    -
    -
    -

    - The exceptional realism and diversity of text-guided diffusion models in image synthesis have sparked - significant interest, leading to ongoing research on utilizing these models for image editing. Recently, - intuitive text-based editing showcased the ability to effortlessly manipulate synthesized images using - text alone. In a recent work by Brack et - al. the concept of semantic guidance (SEGA) was introduced - for diffusion models, demonstrating sophisticated image composition and editing capabilities without the - need for additional training or external guidance. Text-guided editing of a real image with - state-of-the-art tools requires inverting the given image and textual prompt. That is, finding a - sequence of noise vectors that produces the input image when fed with the prompt into the diffusion - process. A novel inversion method for DDPM was proposed - by Huberman-Spiegelglas et al., - which - computes noise maps that exhibit stronger image structure encoding and - generates diverse state-of-the-art results for text-based editing tasks. In this work we demonstrate the - extended editing capabilities obtained from combining the two techniques. -

    -
    -
    -
    - examples_abstract -
    -
    -
    -
    -

    How Does it Work? -

    -
    -

    - Our approach for the integration consists of a simple modification to the SEGA - scheme - of the diffusion denoising process. This modification allows the flexibility of editing with - both - methods while still maintaining complete control over the editing effect of each component. - First, - we apply DDPM inversion on the input image to estimate the latent code associated with it. To - apply - the editing operations, we perform the denoising loop such that for each timestep, we repeat the - logic used in SEGA but with the DDPM scheme, using the pre-computed noise vectors. -
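In code terms, the paragraph above amounts to a denoising loop that reuses the inversion's pre-computed noise maps while adding a semantic-guidance correction at every step. The sketch below is only an illustration of that loop under stated assumptions: every name in it (estimate_ddpm_noise_maps, eps_model, sega_term, scheduler_step, and the function itself) is a placeholder, not the authors' API and not the code removed in this diff.

```python
# Illustrative sketch of the combined LEDITS loop (placeholder names throughout).
def ledits_denoising_loop(x0, estimate_ddpm_noise_maps, eps_model, sega_term,
                          scheduler_step, timesteps):
    """Edit a real-image latent `x0` with DDPM inversion + semantic guidance.

    estimate_ddpm_noise_maps(x0)     -> (x_T, {t: z_t}), the edit-friendly inversion
    eps_model(x_t, t)                -> noise prediction for the target prompt
    sega_term(x_t, t)                -> additive guidance built from the edit concepts
    scheduler_step(eps, t, x_t, z_t) -> x_{t-1} under the DDPM scheme, reusing the
                                        pre-computed noise map z_t instead of fresh noise
    """
    # 1) Invert: recover a start latent and per-step noise maps that reproduce
    #    x0 exactly when no edit is applied.
    x_t, noise_maps = estimate_ddpm_noise_maps(x0)

    # 2) Denoise with the usual DDPM scheme, repeating the SEGA logic at each step.
    for t in timesteps:                # e.g. reversed(range(t_skip, T))
        eps = eps_model(x_t, t)        # prediction conditioned on the target prompt
        eps = eps + sega_term(x_t, t)  # semantic-guidance correction on top of it
        x_t = scheduler_step(eps, t, x_t, noise_maps[t])
    return x_t
```

Keeping the SEGA correction as an additive term on the noise prediction is what lets each edit concept be switched on, off, or re-weighted without touching the inversion itself.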

    - -
    - diagram -
    - -
    - - -
    -
    - - -
    -
    -
    - - -

    We explored two editing workflows:

    - -
    - - -
    -
    -

    🎨

    -

    Pure SEGA Editing with DDPM Inversion

    -

- Using DDPM purely for inversion (i.e. target prompt=""), such that a perfect - reconstruction - of the original image is achieved, and editing is performed by adding SEGA edit concepts. -

    - -
    -
    - - -
    -

    🎨 + 🖌

    -

    Combined SEGA Editing with DDPM Inversion

    -
    -
    -

    - Perform two editing operations simultaneously by choosing a target prompt that - reflects a - desired edited output, in addition - to adding SEGA edit concepts. -
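Assuming a hypothetical high-level wrapper around the loop sketched earlier — call it run_ledits(image, target_prompt, edit_concepts); neither the name, the signature, nor the example prompts and concepts come from the project page — the two workflows differ only in what is passed in:

```python
# Workflow 1 - pure DDPM-inversion editing: empty target prompt, so the inversion
# reconstructs the input and all edits come from the SEGA concepts alone.
edited_a = run_ledits(image, target_prompt="", edit_concepts=["sunglasses"])

# Workflow 2 - combined editing: the target prompt itself carries one edit, and the
# SEGA concepts add further, independently weighted edits on top of it.
edited_b = run_ledits(
    image,
    target_prompt="a photo of a person with curly hair",
    edit_concepts=["sunglasses", "smiling"],
)
```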

    - -
    - -
    -
    - -
    -
    -
    - -
    - - -
    -
    -
    - -

    - Left to right: original image, edited image purely with DDPM inversion, edited images with LEDITS -

    - -
    -
    -
    - - -
    -
    - -
    - - -
    -
    -

    🎨

    -

    Fidelity vs. Creativity

    -

    - LEDITS adds another layer of flexibility in tuning the effect of the desired - edit, balancing between preserving the original image semantics and applying creative edits -
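In practice this trade-off is typically steered by two scalar knobs: how much of the inverted trajectory is kept (i.e. how late denoising restarts) and how strongly the guidance pushes toward the edit. The snippet below reuses the hypothetical run_ledits wrapper from above; the parameter names and values are illustrative only, not taken from the paper.

```python
# Stay close to the input: keep more of the inverted trajectory, guide gently.
faithful = run_ledits(image, target_prompt="", edit_concepts=["winter scenery"],
                      skip_fraction=0.4,   # larger skip -> result stays closer to the input
                      concept_scale=5)     # gentler semantic guidance

# Allow a more creative result: regenerate more of the trajectory, guide strongly.
creative = run_ledits(image, target_prompt="an oil painting", edit_concepts=["winter scenery"],
                      skip_fraction=0.1,   # smaller skip -> more of the image is re-synthesized
                      concept_scale=12)    # stronger semantic guidance
```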

    - -
    -
    - - -
    -

    -

    Flexibility and Versatility

    -
    -
    -

    - Multiple edit operations can be applied independently and simultaneously with a target - prompt (reflecting one or more edit operations) and SEGA concepts (each reflecting an edit - operation) -

    - -
    - -
    -
    - -
    -

    🤝

    -

    Complementing Capabilities

    -
    -
    -

    - The combined control can compensate for the limitations of one approach or the other in - various cases -

    - -
    - -
    -
    -
    -
    - example_imgs_2 - -
    - - - -
    -
    - - -
    -
    -

    BibTeX

    -
    @article{tsaban2023ledits,
    -  author    = {Linoy Tsaban and Apolinário Passos},
    -  title     = {LEDITS: Real Image Editing with DDPM Inversion and Semantic Guidance},
    -  year      = {2023},
    -  eprint    = {2307.00522},
    -  archivePrefix = {arXiv},
    -  primaryClass={cs.CV}
    -}
    -
    -
    - - - - - - diff --git a/spaces/emc348/faces-through-time/models/StyleCLIP/mapper/options/__init__.py b/spaces/emc348/faces-through-time/models/StyleCLIP/mapper/options/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/epexVfeibi/Imagedeblurr/Adobe After Effects CC 2018 V15.1.2.69 Utorrentl.md b/spaces/epexVfeibi/Imagedeblurr/Adobe After Effects CC 2018 V15.1.2.69 Utorrentl.md deleted file mode 100644 index 1a00eb6679063e93df80e4ebb4d6d16dced5b7c4..0000000000000000000000000000000000000000 --- a/spaces/epexVfeibi/Imagedeblurr/Adobe After Effects CC 2018 V15.1.2.69 Utorrentl.md +++ /dev/null @@ -1,6 +0,0 @@ -

    Adobe After Effects CC 2018 V15.1.2.69 Utorrentl


    DOWNLOADhttps://jinyurl.com/2uEpOa



    -
    -CRACK Adobe After Effects CC 2018 V15.1.2.69 (x64) + Patch ... So, make sure utorrent is installed into your. PC then, try to downloading .... Adobe After Effects ... 4d29de3e1b
    -
    -
    -

    diff --git a/spaces/evaluate-metric/matthews_correlation/matthews_correlation.py b/spaces/evaluate-metric/matthews_correlation/matthews_correlation.py deleted file mode 100644 index 58a477a928113558a68c019540daadd206d56458..0000000000000000000000000000000000000000 --- a/spaces/evaluate-metric/matthews_correlation/matthews_correlation.py +++ /dev/null @@ -1,140 +0,0 @@ -# Copyright 2021 The HuggingFace Datasets Authors and the current dataset script contributor. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -"""Matthews Correlation metric.""" - -import datasets -import numpy as np -from sklearn.metrics import matthews_corrcoef - -import evaluate - - -_DESCRIPTION = """ -Compute the Matthews correlation coefficient (MCC) - -The Matthews correlation coefficient is used in machine learning as a -measure of the quality of binary and multiclass classifications. It takes -into account true and false positives and negatives and is generally -regarded as a balanced measure which can be used even if the classes are of -very different sizes. The MCC is in essence a correlation coefficient value -between -1 and +1. A coefficient of +1 represents a perfect prediction, 0 -an average random prediction and -1 an inverse prediction. The statistic -is also known as the phi coefficient. [source: Wikipedia] -""" - -_KWARGS_DESCRIPTION = """ -Args: - predictions (list of int): Predicted labels, as returned by a model. - references (list of int): Ground truth labels. - average (`string`): This parameter is used for multilabel configs. Defaults to `None`. - - None (default): Returns an array of Matthews correlation coefficients, one for each feature - - 'macro': Calculate metrics for each feature, and find their unweighted mean. - sample_weight (list of int, float, or bool): Sample weights. Defaults to `None`. -Returns: - matthews_correlation (dict containing float): Matthews correlation. -Examples: - Example 1, a basic example with only predictions and references as inputs: - >>> matthews_metric = evaluate.load("matthews_correlation") - >>> results = matthews_metric.compute(references=[1, 3, 2, 0, 3, 2], - ... predictions=[1, 2, 2, 0, 3, 3]) - >>> print(round(results['matthews_correlation'], 2)) - 0.54 - - Example 2, the same example as above, but also including sample weights: - >>> matthews_metric = evaluate.load("matthews_correlation") - >>> results = matthews_metric.compute(references=[1, 3, 2, 0, 3, 2], - ... predictions=[1, 2, 2, 0, 3, 3], - ... sample_weight=[0.5, 3, 1, 1, 1, 2]) - >>> print(round(results['matthews_correlation'], 2)) - 0.1 - - Example 3, the same example as above, but with sample weights that cause a negative correlation: - >>> matthews_metric = evaluate.load("matthews_correlation") - >>> results = matthews_metric.compute(references=[1, 3, 2, 0, 3, 2], - ... predictions=[1, 2, 2, 0, 3, 3], - ... 
sample_weight=[0.5, 1, 0, 0, 0, 1]) - >>> print(round(results['matthews_correlation'], 2)) - -0.25 - - Example 4, Multi-label without averaging: - >>> matthews_metric = evaluate.load("matthews_correlation", config_name="multilabel") - >>> results = matthews_metric.compute(references=[[0,1], [1,0], [1,1]], - ... predictions=[[0,1], [1,1], [0,1]]) - >>> print(results['matthews_correlation']) - [0.5, 0.0] - - Example 5, Multi-label with averaging: - >>> matthews_metric = evaluate.load("matthews_correlation", config_name="multilabel") - >>> results = matthews_metric.compute(references=[[0,1], [1,0], [1,1]], - ... predictions=[[0,1], [1,1], [0,1]], - ... average='macro') - >>> print(round(results['matthews_correlation'], 2)) - 0.25 -""" - -_CITATION = """\ -@article{scikit-learn, - title={Scikit-learn: Machine Learning in {P}ython}, - author={Pedregosa, F. and Varoquaux, G. and Gramfort, A. and Michel, V. - and Thirion, B. and Grisel, O. and Blondel, M. and Prettenhofer, P. - and Weiss, R. and Dubourg, V. and Vanderplas, J. and Passos, A. and - Cournapeau, D. and Brucher, M. and Perrot, M. and Duchesnay, E.}, - journal={Journal of Machine Learning Research}, - volume={12}, - pages={2825--2830}, - year={2011} -} -""" - - -@evaluate.utils.file_utils.add_start_docstrings(_DESCRIPTION, _KWARGS_DESCRIPTION) -class MatthewsCorrelation(evaluate.Metric): - def _info(self): - return evaluate.MetricInfo( - description=_DESCRIPTION, - citation=_CITATION, - inputs_description=_KWARGS_DESCRIPTION, - features=datasets.Features( - { - "predictions": datasets.Sequence(datasets.Value("int32")), - "references": datasets.Sequence(datasets.Value("int32")), - } - if self.config_name == "multilabel" - else { - "predictions": datasets.Value("int32"), - "references": datasets.Value("int32"), - } - ), - reference_urls=[ - "https://scikit-learn.org/stable/modules/generated/sklearn.metrics.matthews_corrcoef.html" - ], - ) - - def _compute(self, predictions, references, sample_weight=None, average=None): - if self.config_name == "multilabel": - references = np.array(references) - predictions = np.array(predictions) - if not (references.ndim == 2 and predictions.ndim == 2): - raise ValueError("For multi-label inputs, both references and predictions should be 2-dimensional") - matthews_corr = [ - matthews_corrcoef(predictions[:, i], references[:, i], sample_weight=sample_weight) - for i in range(references.shape[1]) - ] - if average == "macro": - matthews_corr = np.mean(matthews_corr) - elif average is not None: - raise ValueError("Invalid `average`: expected `macro`, or None ") - else: - matthews_corr = float(matthews_corrcoef(references, predictions, sample_weight=sample_weight)) - return {"matthews_correlation": matthews_corr} diff --git a/spaces/facebook/MusicGen/scripts/templates/survey.html b/spaces/facebook/MusicGen/scripts/templates/survey.html deleted file mode 100644 index 785d1e61b7ac21619416ba70dd4719ff250f3f4b..0000000000000000000000000000000000000000 --- a/spaces/facebook/MusicGen/scripts/templates/survey.html +++ /dev/null @@ -1,131 +0,0 @@ -{% extends "base.html" %} -{% block content %} -

    Survey #{{signature}}

    -{% if success %} -

    Your ratings have been saved! -You have been moved to the next random seed, if you want -to keep rating more samples.

    -{% endif %} -{% if already_filled %} -

    You already rated those samples in the past, - filling this form will override your previous ratings. -

    -{% endif %} -

    Welcome {{session['user']}} to the survey #{{signature}}. -Go to the result page to check the results. Go to the home page to start a new survey. -

    - -{% for error in errors %} -

    {{error}}

    -{% endfor %} - -{% if not blind %} -

    Base config is: {{ref_name}}

    -

    The following experiments are compared:

    -
      - {% for experiment in experiments %} -
    • {{experiment.xp.sig}} ({{experiment.epoch}} epochs): {{experiment.name}}
    • - {% endfor %} -
    -{% else %} -

    This is a blind experiment, the order of all XPs is shuffled with every sample.

    -{% endif %} -

    The current random seed is {{seed}}. You can change it with the following form, and also update blind/non blind. -

    -
    - - - - - - -
    - -

    Samples

    -
    -
    -{% for id in model_ids %} -
    -

    {{id}}

    - {% for model in models_by_id[id] %} - {% if loop.index == 1 and model.is_prompted %} -
    -

    Prompt is

    - -

    Ground truth is

    - -
    - {% endif %} - {% for err in model['errors'] %} -

    {{err}}

    - {% endfor %} -
    - {% if not blind %} -

    {{model.xp.sig}}:

    - {% endif %} - -

    Rating:

    -
    - {% for rating in ratings %} - {{rating}} - {% endfor %} - -
    -

    -
    - {% endfor %} -
    -
    -{% endfor %} - - -
    - -{% endblock %} diff --git a/spaces/facebook/XLS-R-1B-EN-15/README.md b/spaces/facebook/XLS-R-1B-EN-15/README.md deleted file mode 100644 index 43edeaae721deac54b362f2a9ca5f05e505b0de6..0000000000000000000000000000000000000000 --- a/spaces/facebook/XLS-R-1B-EN-15/README.md +++ /dev/null @@ -1,37 +0,0 @@ ---- -title: XLS-R EN-to-All 1B -emoji: 🎙️ -colorFrom: gray -colorTo: red -sdk: gradio -app_file: app.py -pinned: false ---- - -# Configuration - -`title`: _string_ -Display title for the Space - -`emoji`: _string_ -Space emoji (emoji-only character allowed) - -`colorFrom`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`colorTo`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`sdk`: _string_ -Can be either `gradio` or `streamlit` - -`sdk_version` : _string_ -Only applicable for `streamlit` SDK. -See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions. - -`app_file`: _string_ -Path to your main application file (which contains either `gradio` or `streamlit` Python code). -Path is relative to the root of the repository. - -`pinned`: _boolean_ -Whether the Space stays on top of your list. diff --git a/spaces/fadhilsadeli/Muhammad_Fadhil_Sadeli_HCK002/README.md b/spaces/fadhilsadeli/Muhammad_Fadhil_Sadeli_HCK002/README.md deleted file mode 100644 index 334385bd9b9e0b1d9fb8997f5d09821539854e02..0000000000000000000000000000000000000000 --- a/spaces/fadhilsadeli/Muhammad_Fadhil_Sadeli_HCK002/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Muhammad Fadhil Sadeli HCK002 -emoji: ⚡ -colorFrom: pink -colorTo: red -sdk: streamlit -sdk_version: 1.10.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/failfast/2D-GameCreator/src/store/atoms.ts b/spaces/failfast/2D-GameCreator/src/store/atoms.ts deleted file mode 100644 index 7121fd52382efde522c933c73900303e26298093..0000000000000000000000000000000000000000 --- a/spaces/failfast/2D-GameCreator/src/store/atoms.ts +++ /dev/null @@ -1,18 +0,0 @@ -import { atomWithStorage } from "jotai/utils"; - -import { baseGame } from "@/constants/baseGame"; - -export const answersAtom = atomWithStorage< - { - id: string; - content: string; - task: string; - }[] ->("2DGameCreator", [ - { - id: "1", - content: baseGame.default, - task: "Base Game", - }, -]); -export const showCodeAtom = atomWithStorage("2DGameCreator-editor", false); diff --git a/spaces/falterWliame/Face_Mask_Detection/Damage 1992 Movie In Hindi 13 UPD.md b/spaces/falterWliame/Face_Mask_Detection/Damage 1992 Movie In Hindi 13 UPD.md deleted file mode 100644 index e9484956af076f73dcc055bd45fee5f1161762f1..0000000000000000000000000000000000000000 --- a/spaces/falterWliame/Face_Mask_Detection/Damage 1992 Movie In Hindi 13 UPD.md +++ /dev/null @@ -1,11 +0,0 @@ -

    Damage 1992 Movie In Hindi 13


    DOWNLOAD ✶✶✶ https://urlca.com/2uDdN4



    - -20 April 2021 - Hollywood Movie Damage ( 1992) in Hindi | Explaining Kahaniya's stories Top British politician Stephen (Jeremy Irons) is in... India, a country little known about in England. -He comes here at the invitation of the Prime Minister to make a report in Parliament. -Knowing nothing about Hindus, Steven meets a young beautiful woman (Kajol) named Shweta (Rita). -Shweta doesn't believe in God and is an atheist, but her father invites Steven to the family party anyway. -There Steven and Shweta fall in love... -"India: Pursuit of Perfection": An Endless Movie. 8a78ff9644
    -
    -
    -

    diff --git a/spaces/falterWliame/Face_Mask_Detection/Freemake Video Converter 4.1.10.491 Crack Product Key 2020 [TOP].md b/spaces/falterWliame/Face_Mask_Detection/Freemake Video Converter 4.1.10.491 Crack Product Key 2020 [TOP].md deleted file mode 100644 index e9c24dfc59710d5ae2a210a766980c3ea379690c..0000000000000000000000000000000000000000 --- a/spaces/falterWliame/Face_Mask_Detection/Freemake Video Converter 4.1.10.491 Crack Product Key 2020 [TOP].md +++ /dev/null @@ -1,6 +0,0 @@ -

    Freemake Video Converter 4.1.10.491 Crack Product Key 2020


    Downloadhttps://urlca.com/2uDd7b



    -
        -Freemake Video Converter Key Crack will help you convert video for free to AVI, MP4, WMV, MKV, FLV, SWF, 3GP, DVD, and MP3, to devices such as iPod, iPhone, PSP, and Android, and to rip DVDs.
        
    -
    -
    -

    diff --git a/spaces/falterWliame/Face_Mask_Detection/Mirzya Movie In Hindi Dubbed Free Download Mp4.md b/spaces/falterWliame/Face_Mask_Detection/Mirzya Movie In Hindi Dubbed Free Download Mp4.md deleted file mode 100644 index 9a866e51efba0dbad70980c26652b2d58e0ba7c5..0000000000000000000000000000000000000000 --- a/spaces/falterWliame/Face_Mask_Detection/Mirzya Movie In Hindi Dubbed Free Download Mp4.md +++ /dev/null @@ -1,6 +0,0 @@ -

    Mirzya Movie In Hindi Dubbed Free Download Mp4


    Download Filehttps://urlca.com/2uDcX4



        - -MIRZYA Title Song | MIRZYA | Rakeysh Omprakash Mehra | Gulzar | Shankar ... Latest Bollywood Full Movie 2020 | Bollywood Full Action Movie | Full HD Movie ...
        
    -
    -
    -

    diff --git a/spaces/fatiXbelha/sd/3 Download Music - The Free and Reliable Music Downloader for All Your Needs.md b/spaces/fatiXbelha/sd/3 Download Music - The Free and Reliable Music Downloader for All Your Needs.md deleted file mode 100644 index dbdb52b403c09edcc8654333c616b825ce44ce70..0000000000000000000000000000000000000000 --- a/spaces/fatiXbelha/sd/3 Download Music - The Free and Reliable Music Downloader for All Your Needs.md +++ /dev/null @@ -1,159 +0,0 @@ - -

    3 Download Music: How to Get Free and Legal MP3 Songs Online

    -

    Do you love listening to music but don't want to pay for streaming services or buy digital albums? Do you want to own your music and listen to it offline whenever you want? Do you want to support your favorite artists and respect their rights? If you answered yes to any of these questions, then this article is for you.

    -

    In this article, we will show you how to download music for free and legally from some of the best websites in 2022. We will also explain what MP3 is, why you should download music, and what are the benefits and risks of doing so. By the end of this article, you will have a clear idea of how to get free and legal MP3 songs online.

    -

    3 download music


    Download Zip ⚙⚙⚙ https://urllie.com/2uNDxF



    -

    Introduction

    -

    What is MP3 and why download music?

    -

    MP3 is a type of audio file format that compresses sound data into a smaller size without losing much quality. MP3 stands for MPEG-1 Audio Layer 3, which is a standard developed by the Moving Picture Experts Group (MPEG) in the early 1990s. MP3 files can be played on most devices, such as computers, smartphones, tablets, and MP3 players.

    -

    Downloading music means saving audio files from the internet to your device's storage. This way, you can listen to them anytime, anywhere, without needing an internet connection or a streaming service subscription. Downloading music also allows you to create your own playlists, edit the files, and share them with others.

    -

    The benefits of downloading music for free and legally

    -

    There are many benefits of downloading music for free and legally, such as:

    -
      -
    • You can save money by not paying for streaming services or digital albums.
    • -
    • You can support your favorite artists by giving them exposure, feedback, or donations.
    • -
    • You can discover new music by exploring different genres, artists, and platforms.
    • -
    • You can enjoy high-quality sound without ads or interruptions.
    • -
    • You can respect the rights of the artists and avoid legal troubles.
    • -
    -

    The risks of downloading music illegally or from untrusted sources

    -

    However, there are also some risks of downloading music illegally or from untrusted sources, such as:

    -
      -
    • You can violate the copyright laws and face fines or lawsuits.
    • -
    • You can harm your device or data by downloading viruses, malware, or spyware.
    • -
    • You can compromise your privacy or security by exposing your personal information or browsing history.
    • -
    • You can damage the reputation or income of the artists by depriving them of their deserved royalties.
    • -
    • You can miss out on the latest updates or features of the official platforms or services.
    • -
    -

    The best free and legal music download sites in 2022

    -

    Bandcamp

    -

    What is Bandcamp and how does it work?

    -

    Bandcamp is a website that allows artists to upload their music and set their own prices for digital downloads, physical albums, and merchandise. Bandcamp was founded in 2008 and has since become one of the most popular and trusted platforms for independent musicians and fans. Bandcamp has over 6 million tracks and 4 million albums from more than 800,000 artists. Bandcamp also offers a streaming app, a weekly podcast, and a daily editorial feature.

    -

    How to download music from Bandcamp for free?

    -

    Downloading music from Bandcamp for free is possible, but it depends on the artist's choice. Some artists offer their music for free or for a "name your price" option, which means you can pay as much or as little as you want, or even nothing at all. To download music from Bandcamp for free, you need to:

    -
      -
    1. Go to the Bandcamp website or app and search for the music you want.
    2. -
    3. Click on the album or track you want to download and see if it has a "buy now" or "name your price" button.
    4. -
    5. If it has a "name your price" button, enter zero or any amount you want to pay and click "download to your computer".
    6. -
    7. If it has a "buy now" button, you need to pay the specified amount to download the music.
    8. -
    9. Choose the format you want to download, such as MP3, FLAC, WAV, etc.
    10. -
    11. Enjoy your music!
    12. -
    -

    The pros and cons of Bandcamp

    -

    Bandcamp has many pros and cons, such as:

    - - - - - -
    ProsCons
    You can support your favorite artists directly and pay what you want.Not all music is available for free or for a name your price option.
    You can discover new and diverse music from different genres and regions.The website and app design are not very user-friendly or attractive.
    You can enjoy high-quality audio files and unlimited streaming.The music catalog is not as extensive as other platforms or services.
    -

    DatPiff

    -

    What is DatPiff and how does it work?

    -

    DatPiff is a website that specializes in hip-hop and rap music, especially mixtapes. DatPiff was launched in 2005 and has since become one of the leading sources for free and legal mixtapes online. DatPiff has over 1.3 million mixtapes from more than 500,000 artists. DatPiff also offers a streaming app, a radio station, and a video section.

    -

    3 download music free online
    -3 download music mp3 converter
    -3 download music app for android
    -3 download music from youtube
    -3 download music to computer
    -3 download music videos
    -3 download music albums
    -3 download music offline
    -3 download music player
    -3 download music legally
    -3 download music sites
    -3 download music for iphone
    -3 download music from spotify
    -3 download music from soundcloud
    -3 download music from bandcamp
    -3 download music from internet archive
    -3 download music from datpiff
    -3 download music from free music archive
    -3 download music in high quality
    -3 download music in different formats
    -3 download music with lyrics
    -3 download music with cover art
    -3 download music with metadata
    -3 download music with id3 tags
    -3 download music with no ads
    -3 download music with no virus
    -3 download music with no registration
    -3 download music with no limit
    -3 download music with one click
    -3 download music with audials one
    -3 download music by genre
    -3 download music by artist
    -3 download music by album
    -3 download music by song title
    -3 download music by playlist
    -3 download music by mood
    -3 download music by year
    -3 download music by popularity
    -3 download music by language
    -3 download music by country

    -

    How to download music from DatPiff for free?

    -

    Downloading music from DatPiff for free is very easy, as most of the mixtapes are available for free download. To download music from DatPiff for free, you need to:

    -
      -
    1. Go to the DatPiff website or app and search for the music you want.
    2. -
    3. Click on the mixtape you want to download and see if it has a "download" or "stream only" button.
    4. -
    5. If it has a "download" button, click on it and wait for the download to start.
    6. -
    7. If it has a "stream only" button, you can only listen to the mixtape online.
    8. -
    9. Choose the format you want to download, such as MP3, ZIP, etc.
    10. -
    11. Enjoy your music!
    12. -
    -

    The pros and cons of DatPiff

    -

    DatPiff has many pros and cons, such as:

    - - - - - -
    ProsCons
    You can access a huge collection of free and legal hip-hop and rap mixtapes.The website and app are full of ads and pop-ups that can be annoying.
    You can discover new and upcoming artists and trends in the hip-hop scene.The audio quality is not very high and some files may be corrupted or incomplete.
    You can enjoy unlimited streaming and offline listening.The music selection is limited to hip-hop and rap genres only.
    -

    Free Music Archive

    -

    What is Free Music Archive and how does it work?

    -

    Free Music Archive is a website that offers a library of free and legal music from various genres and artists. Free Music Archive was created in 2009 by WFMU, a public radio station in New Jersey, in collaboration with other curators and partners. Free Music Archive has over 150 ,000 tracks and 40,000 artists from more than 100 countries. Free Music Archive also offers a streaming app, a blog, and a forum.

    -

    How to download music from Free Music Archive for free?

    -

    Downloading music from Free Music Archive for free is very simple, as all the music is available for free download. To download music from Free Music Archive for free, you need to:

    -
      -
    1. Go to the Free Music Archive website or app and search for the music you want.
    2. -
    3. Click on the track or album you want to download and see if it has a "download" or "play" button.
    4. -
    5. If it has a "download" button, click on it and choose the format you want to download, such as MP3, OGG, etc.
    6. -
    7. If it has a "play" button, you can only listen to the track online.
    8. -
    9. Enjoy your music!
    10. -
    -

    The pros and cons of Free Music Archive

    -

    Free Music Archive has many pros and cons, such as:

    - - - - - -
    ProsCons
    You can access a diverse and eclectic collection of free and legal music from various genres and artists.The website and app are not very modern or intuitive and may have some bugs or glitches.
    You can discover new and independent music from different cultures and backgrounds.The audio quality may vary depending on the source and format of the music.
    You can enjoy unlimited streaming and offline listening.The music catalog is not updated very frequently and some tracks or albums may be removed or unavailable.
    -

    Conclusion

    -

    Summary of the main points

    -

    In this article, we have shown you how to download music for free and legally from some of the best websites in 2022. We have also explained what MP3 is, why you should download music, and what are the benefits and risks of doing so. We have reviewed three of the most popular and trusted platforms for free and legal music downloads: Bandcamp, DatPiff, and Free Music Archive. We have compared their features, pros, and cons, and given you step-by-step instructions on how to download music from them.

    -

    Recommendations and tips for downloading music online

    -

    Here are some recommendations and tips for downloading music online:

    -
      -
    • Always check the license and terms of use of the music before downloading it. Some music may be free for personal use only, or may require attribution or permission from the artist.
    • -
    • Always scan the downloaded files for viruses or malware before opening them. Some websites may contain malicious links or pop-ups that can harm your device or data.
    • -
    • Always respect the rights and wishes of the artists. If you like their music, consider supporting them by buying their albums, merchandise, or tickets, or by donating to them directly.
    • -
    • Always explore new music and genres. You never know what gems you may find in the vast world of online music.
    • -
    • Always enjoy your music!
    • -
    -

    FAQs

    -

    Here are some frequently asked questions about downloading music online:

    -
      -
    1. Is downloading music online illegal?
    2. -

      No, downloading music online is not illegal, as long as you do it from authorized sources that offer free and legal music downloads. However, downloading music from unauthorized sources that violate the copyright laws is illegal and can result in fines or lawsuits.

      -
    3. What is the best format for downloading music online?
    4. -

      The best format for downloading music online depends on your preferences and needs. Generally, MP3 is the most common and compatible format that offers good quality and small size. However, if you want higher quality or lossless audio, you may opt for formats such as FLAC, WAV, or ALAC.

      -
    5. How can I download music online faster?
    6. -

      You can download music online faster by using a reliable internet connection, a good browser, and a download manager. A download manager is a software that helps you organize, resume, and speed up your downloads. Some examples of download managers are IDM, JDownloader, or uGet.

      -
    7. How can I download music online safely?
    8. -

      You can download music online safely by using a trusted website or app that offers free and legal music downloads. You should also use an antivirus software, a firewall, and a VPN to protect your device and data from viruses, malware, spyware, hackers, or trackers.

      -
    9. 5. How can I download music online for free and legally?
    10. -

      You can download music online for free and legally by using one of the websites or apps that we have reviewed in this article: Bandcamp, DatPiff, or Free Music Archive. These platforms offer a variety of music genres and artists that you can download for free or for a name your price option. You can also check out other websites or apps that offer free and legal music downloads, such as Jamendo, SoundCloud, or Audiomack.

      -
    -

    I hope you enjoyed this article and learned something new about downloading music online. If you have any questions or comments, feel free to leave them below. Happy listening!

        
    -
    -
    \ No newline at end of file diff --git a/spaces/fatiXbelha/sd/CarX Highway Racing MOD APK Hack Latest Version The Best Racing Game for Android.md b/spaces/fatiXbelha/sd/CarX Highway Racing MOD APK Hack Latest Version The Best Racing Game for Android.md deleted file mode 100644 index 4ac37c1fe9dea79064d641987fdb941a54bd0d60..0000000000000000000000000000000000000000 --- a/spaces/fatiXbelha/sd/CarX Highway Racing MOD APK Hack Latest Version The Best Racing Game for Android.md +++ /dev/null @@ -1,97 +0,0 @@ - -

    CarX Highway Racing Mod APK Hack Latest Version: A Review

    -

    If you are a fan of car racing games, you might have heard of CarX Highway Racing. It is one of the most realistic and thrilling racing games on mobile devices. But what if you want to enjoy the game without any limitations or restrictions? That's where CarX Highway Racing Mod APK Hack comes in handy. In this article, we will review the game and its modded version, and show you how to download and install it on your device.

    -

    What is CarX Highway Racing?

    -

    CarX Highway Racing is a dramatic and engaging racing game that offers classic competitive races for gamers. In the game, the player will act as a new racer. You will master the cars on dangerous roads, challenge the police and other rivals, and complete various missions and tasks. The game is developed by CarX Technologies, a company that specializes in creating realistic car physics and graphics for games.

    -

    carx highway racing mod apk hack latest version


    Downloadhttps://urllie.com/2uNwTB



    -

    Features of CarX Highway Racing

    -

    CarX Highway Racing has many features that make it stand out from other racing games. Here are some of them:

    -

    Realistic physics and graphics

    -

    The game uses the advanced CarX Engine, which simulates the behavior of real cars on different surfaces and conditions. You can feel the speed, the drift, the brake, and the collision of your car as if you were driving it in real life. The game also has stunning graphics and effects, such as day and night cycles, weather changes, dynamic shadows, and reflections.

    -

    Diverse cars and tracks

    -

    The game offers a wide range of cars to choose from, including sports cars, muscle cars, supercars, and more. You can customize your car with different colors, decals, wheels, spoilers, and other parts. You can also upgrade your car's performance with engine tuning, suspension, tires, brakes, and other components. The game has over 20 tracks to race on, each with its own characteristics and challenges.

    -

    Exciting game modes and missions

    -

    The game has several game modes to keep you entertained. You can play the campaign mode, where you have to complete over 100 missions and events. You can also play the online mode, where you can compete with other players from around the world. You can also play the offline mode, where you can race against AI opponents or practice your skills.

    -

    What is CarX Highway Racing Mod APK Hack?

    -

    CarX Highway Racing Mod APK Hack is a modified version of the original game that gives you access to unlimited resources and features. With this modded version, you can enjoy the game without any limitations or restrictions.

    -

    Benefits of using CarX Highway Racing Mod APK Hack

    -

    There are many benefits of using CarX Highway Racing Mod APK Hack. Here are some of them:

    -

    Unlimited money and gold

    -

    With this modded version, you will have unlimited money and gold in your account. You can use them to buy any car you want, upgrade it to the max level, or unlock all the tracks. You don't have to worry about running out of money or gold ever again.

    -

    Unlocked all cars and upgrades

    -

    With this modded version, you will have all the cars and upgrades unlocked from the start. You don't have to wait for hours or days to unlock them by playing the game. You can choose any car you like and customize it to your liking.

    -

    carx highway racing unlimited money mod apk download
    -carx highway racing hack apk latest version 2021
    -carx highway racing mod apk free download for android
    -carx highway racing hack mod apk unlimited gold
    -carx highway racing mod apk latest version offline
    -carx highway racing mod apk android 1 download
    -carx highway racing hack apk download 2020
    -carx highway racing mod apk unlimited money and gold
    -carx highway racing hack mod apk rexdl
    -carx highway racing mod apk latest version 2020
    -carx highway racing hack apk free download no root
    -carx highway racing mod apk obb download
    -carx highway racing hack mod apk revdl
    -carx highway racing mod apk latest version 1.74.8
    -carx highway racing hack apk unlimited money and gold
    -carx highway racing mod apk download for pc
    -carx highway racing hack mod apk happymod
    -carx highway racing mod apk latest version 2022
    -carx highway racing hack apk online
    -carx highway racing mod apk unlimited all
    -carx highway racing hack mod apk android 1
    -carx highway racing mod apk new update
    -carx highway racing hack apk ios
    -carx highway racing mod apk unlocked everything
    -carx highway racing hack mod apk no root
    -carx highway racing mod apk old version
    -carx highway racing hack apk 2021 download
    -carx highway racing mod apk unlimited nitro
    -carx highway racing hack mod apk 2020 download
    -carx highway racing mod apk data download
    -carx highway racing hack mod apk 1.74.8 download
    -carx highway racing mod apk all cars unlocked
    -carx highway racing hack mod apk 1.74.7 download
    -carx highway racing mod apk latest version uptodown
    -carx highway racing hack mod apk unlimited everything
    -carx highway racing mod apk highly compressed download
    -carx highway racing hack mod apk unlimited nitro and money
    -carx highway racing mod apk latest version apkpure
    -carx highway racing hack mod apk unlimited money and gold download

    -

        

    No ads and root required

    -

    With this modded version, you will not see any annoying ads or pop-ups in the game. You can enjoy the game without any interruptions or distractions. You also don't need to root your device to use this modded version. You can install it easily and safely on your device.

    -

    How to download and install CarX Highway Racing Mod APK Hack?

    -

    If you want to download and install CarX Highway Racing Mod APK Hack on your device, you need to follow some simple steps. Here are the step-by-step guides for Android and PC devices:

    -

    Step-by-step guide for Android devices

    -
      -
    1. First, you need to enable the unknown sources option on your device. To do this, go to Settings > Security > Unknown Sources and toggle it on.
    2. -
    3. Next, you need to download the CarX Highway Racing Mod APK Hack file from a trusted source. You can use this link to download it.
    4. -
    5. After downloading the file, locate it in your file manager and tap on it to start the installation process.
    6. -
    7. Follow the instructions on the screen and wait for the installation to finish.
    8. -
    9. Once the installation is done, you can launch the game and enjoy it with unlimited resources and features.
    10. -
    -

    Step-by-step guide for PC devices

    -
      -
    1. First, you need to download and install an Android emulator on your PC. An Android emulator is a software that allows you to run Android apps and games on your PC. You can use any emulator of your choice, but we recommend using BlueStacks or NoxPlayer.
    2. -
    3. Next, you need to download the CarX Highway Racing Mod APK Hack file from a trusted source. You can use this link to download it.
    4. -
    5. After downloading the file, open your emulator and drag and drop the file into it. Alternatively, you can use the built-in browser of your emulator to download the file directly.
    6. -
    7. Follow the instructions on the screen and wait for the installation to finish.
    8. -
    9. Once the installation is done, you can launch the game and enjoy it with unlimited resources and features.
    10. -
    -

    Conclusion

    -

    CarX Highway Racing is a realistic and thrilling racing game that offers classic competitive races for gamers. It has many features that make it stand out from other racing games, such as realistic physics and graphics, diverse cars and tracks, and exciting game modes and missions. However, if you want to enjoy the game without any limitations or restrictions, you can use CarX Highway Racing Mod APK Hack. This modded version gives you access to unlimited money and gold, unlocked all cars and upgrades, no ads and root required, and more. You can download and install it easily on your device by following our step-by-step guides above. We hope you found this article helpful and informative. If you have any questions or feedback, feel free to leave them in the comments section below.

    -

    FAQs

    -
      -
    • Is CarX Highway Racing Mod APK Hack safe to use?
    • -

      Yes, CarX Highway Racing Mod APK Hack is safe to use. It does not contain any viruses or malware that can harm your device or data. However, you should always download it from a trusted source and scan it with an antivirus before installing it.

      -
    • Is CarX Highway Racing Mod APK Hack compatible with my device?
    • -

      CarX Highway Racing Mod APK Hack is compatible with most Android devices that run on Android 4.1 or higher. It is also compatible with PC devices that have an Android emulator installed.

      -
    • Can I play CarX Highway Racing Mod APK Hack online?
    • -

      Yes, you can play CarX Highway Racing Mod APK Hack online with other players from around the world. However, you should be careful not to get banned by the game developers for using a modded version. To avoid this, you can use a VPN service or play offline mode.

      -
    • Can I update CarX Highway Racing Mod APK Hack?
    • -

      No, you cannot update CarX Highway Racing Mod APK Hack from the Google Play Store or any other source. If you do so, you will lose all the modded features and resources. To update the game, you need to download and install the latest version of CarX Highway Racing Mod APK Hack from a trusted source.

      -
    • Can I request more features for CarX Highway Racing Mod APK Hack?
    • -

      Yes, you can request more features for CarX Highway Racing Mod APK Hack by contacting the mod

      developer or the mod community. You can find their contact details on the website where you downloaded the modded version. However, there is no guarantee that they will fulfill your request or respond to your feedback.

        
      -
      -
      \ No newline at end of file diff --git a/spaces/fatiXbelha/sd/Cricket League v1.0.5 MOD APK The Ultimate Cricket Game for Fans.md b/spaces/fatiXbelha/sd/Cricket League v1.0.5 MOD APK The Ultimate Cricket Game for Fans.md deleted file mode 100644 index dab5b4955bfdabd86b6a121d398b4e4f5b143e60..0000000000000000000000000000000000000000 --- a/spaces/fatiXbelha/sd/Cricket League v1.0.5 MOD APK The Ultimate Cricket Game for Fans.md +++ /dev/null @@ -1,76 +0,0 @@ - -

      Cricket League Mod APK Download: A Guide for Cricket Fans

      -

      Are you a cricket fan who loves to play fast, fun, and exciting cricket games on your mobile device? If so, you might have heard of Cricket League, a 3D multiplayer cricket game developed by Miniclip. In this game, you can bat, bowl, and field your way to the top of the league in quick two over matches against your friends or players around the world. You can also unlock and upgrade your players and balls, play in different locations, and compete in leagues to become the master team.

      -

      But what if you want to get more out of this game without spending real money or watching ads? What if you want to unlock all the characters and locations, get unlimited coins and gems, and enjoy the game without any restrictions? Well, there is a way to do that, and it is by downloading the modded version of Cricket League. In this article, we will show you how to download and install Cricket League Mod APK on your Android device, what are the features and benefits of playing this version, what are the drawbacks and risks involved, how to play like a pro, and how to get more out of this game. Let's get started!

      -

      cricket league mod apk download


      DOWNLOADhttps://urllie.com/2uNEgY



      -

      How to Download and Install Cricket League Mod APK

      -

      The first step to enjoy Cricket League Mod APK is to find a reliable source for the modded APK file. There are many websites that offer this file, but not all of them are safe or trustworthy. Some of them may contain viruses or malware that can harm your device or steal your personal information. Some of them may also provide outdated or incompatible versions that may not work properly or crash your game. Therefore, you need to be careful when choosing where to download Cricket League Mod APK.

      -

        One of the websites that we recommend is HappyMod, which is a platform for mod lovers to download, request, and test android mods. Here, you can find Cricket League v1.0.5 MOD APK (Allways Perfect), which is one of the latest versions of Cricket League Mod APK. This version has the following features:
        - Allways Perfect
        - Unlimited Coins
        - Unlimited Gems
        - All Characters Unlocked
        - All Locations Unlocked
        - No Ads
        - No Root Required
        To download and install Cricket League Mod APK from HappyMod, follow these steps:
        - Step 1: Go to HappyMod and search for Cricket League Mod APK. You will see the result for Cricket League v1.0.5 MOD APK (Allways Perfect). Click on it and you will be redirected to the download page.
        - Step 2: Before you download the APK file, you need to enable unknown sources on your device settings. This will allow you to install apps from sources other than the Google Play Store. To do this, go to Settings > Security > Unknown Sources and toggle it on.
        - Step 3: Now, you can download the APK file by clicking on the Download button on the download page. The file size is about 50 MB, so it may take a few minutes depending on your internet speed. Once the download is complete, you will see a notification on your device.
        - Step 4: Tap on the notification and you will be prompted to install the APK file. Follow the instructions on the screen and wait for the installation to finish. Once it is done, you will see a confirmation message and an icon for Cricket League on your device. Tap on it and launch the game. You will be able to enjoy the modded features of Cricket League Mod APK.
        

      What are the Features of Cricket League Mod APK?

      -

      As we mentioned earlier, Cricket League Mod APK has many features that make it more fun and exciting than the original version of the game. Here are some of them:

        - Unlimited coins and gems: Coins and gems are the main currencies in Cricket League. You can use them to unlock and upgrade your players and balls, which can improve your performance and skills in the game. However, in the original version, you have to earn them by playing matches, watching ads, or buying them with real money. This can be time-consuming, boring, or expensive. But with Cricket League Mod APK, you don't have to worry about that. You will get unlimited coins and gems from the start, so you can unlock and upgrade everything you want without any hassle.
        - All characters and locations unlocked: Cricket League has a variety of characters and locations that you can choose from. Each character has different stats and abilities, such as power, accuracy, stamina, speed, etc. Each location has different pitches and weather conditions, such as grassy, dusty, rainy, etc. However, in the original version, you have to unlock them by playing matches or spending coins and gems. This can be limiting and frustrating if you want to try different combinations and strategies. But with Cricket League Mod APK, you don't have to worry about that either. You will get all the characters and locations unlocked from the start, so you can play with any character in any location you want without any restriction.
        - No ads and no root required: Ads are annoying and distracting, especially when you are playing a fast-paced game like Cricket League. They can interrupt your gameplay, waste your time, or consume your data. In the original version, you have to watch ads to earn coins or gems or to access some features of the game. This can ruin your experience and enjoyment of the game. But with Cricket League Mod APK, you don't have to worry about that at all. You will get no ads in this version, so you can play without any interruption or annoyance. Moreover, you don't need to root your device to install or play this version. This means that you don't have to risk damaging your device or voiding its warranty to enjoy this game.
        

      What are the Benefits of Playing Cricket League Mod APK?

      -

      Besides the features that we mentioned above, there are also some benefits of playing Cricket League Mod APK that make it worth trying. Here are some of them:

        - Experience a fast, fun, and exciting 3D multiplayer cricket game with realistic graphics and animations: Cricket League is one of the best cricket games for mobile devices that offers a realistic and immersive 3D cricket experience. The game has stunning graphics and smooth animations that make you feel like you are playing in a real cricket stadium. The game also has realistic sound effects and commentary that add to the atmosphere and excitement of the game.
        - Learn cricket controls in under a minute and play quick two over matches in 3-5 minutes: Cricket League is easy to learn and play for anyone who loves cricket or wants to try it out. The game has simple and intuitive controls that let you bat, bowl, and field with just a few taps and swipes on your screen. You can learn how to play cricket in under a minute and play quick two over matches in 3-5 minutes. This makes the game ideal for casual and busy players who want to enjoy a short and thrilling cricket game anytime and anywhere.
        - Play with your friends from around the world and compete in leagues to become the master team: Cricket League is a multiplayer game that lets you play with your friends or other players from around the world. You can challenge them to friendly matches or join leagues to compete for glory and prizes. You can also chat with them and send them emojis and stickers to express your emotions and reactions. You can also create your own team and invite your friends to join you. You can customize your team name, logo, and jersey, and show off your skills and teamwork to the world.
        - Travel all over the world playing against the best cricketers from the best pitches: Cricket League has a variety of locations that you can play in, such as India, Australia, England, South Africa, Pakistan, Sri Lanka, etc. Each location has different pitches and weather conditions that affect the gameplay and strategy. You can also play against different cricketers from different countries, each with their own strengths and weaknesses. You can learn from them and improve your cricket knowledge and skills.
        - Play with awesome deliveries like Doosra, Sling, In/Out Swings: Cricket League has a variety of balls and deliveries that you can use to bat and bowl. Each ball has different characteristics, such as speed, spin, swing, bounce, etc. Each delivery has different effects, such as curving, reversing, drifting, etc. You can use these balls and deliveries to surprise your opponents and score more runs or take more wickets. You can also unlock and upgrade your balls to make them more powerful and effective.
        

      What are the Drawbacks of Playing Cricket League Mod APK?

      -

      While Cricket League Mod APK has many advantages, it also has some drawbacks that you should be aware of before playing it. Here are some of them:

        - The modded version may not be compatible with some devices or updates: Cricket League Mod APK is not an official version of the game, but a modified one that may not work well with some devices or updates. Some devices may not support the modded version or may experience glitches or errors while playing it. Some updates may not be compatible with the modded version or may overwrite it with the original version. This may cause you to lose your progress or features in the game.
        - The modded version may not be safe or secure from viruses or malware: Cricket League Mod APK is not a verified or trusted version of the game, but a hacked one that may contain viruses or malware that can harm your device or steal your personal information. Some websites that offer the modded version may also be malicious or fraudulent and may infect your device or trick you into giving them your data or money. This may expose you to cyberattacks or identity theft.
        - The modded version may not be fair or ethical for other players who play the original version: Cricket League Mod APK is not a fair or ethical version of the game, but a cheating one that gives you an unfair advantage over other players who play the original version. Some players may consider this as cheating or hacking and may report you or ban you from the game. Some players may also lose interest or enjoyment in the game if they face players who use the modded version. This may affect the reputation and popularity of the game.
        

      How to Play Cricket League Mod APK Like a Pro?

      -

      If you want to play Cricket League Mod APK like a pro, you need to know some tips and tricks that can help you improve your performance and skills in the game. Here are some of them:

      - - Tips and tricks for batting: When batting, you need to pay attention to the ball type, speed, direction, and bounce. You also need to time your swipe correctly and aim for the gaps in the field. You can use different types of shots, such as lofted, grounded, sweep, reverse sweep, etc., depending on the situation. You can also use power-ups, such as super bat, super sixer, etc., to boost your batting power and score more runs. - Tips and tricks for bowling: When bowling, you need to choose the right ball type, speed, direction, and spin. You also need to vary your pace and direction to confuse your opponent. You can use different types of deliveries, such as doosra, sling, in/out swings, etc., depending on the situation. You can also use power-ups, such as super ball, super wicket, etc., to boost your bowling power and take more wickets. - Tips and tricks for fielding: When fielding, you need to position your fielders strategically and catch the ball as soon as possible. You also need to throw the ball accurately and quickly to the stumps or the keeper. You can use different types of fielding, such as aggressive, defensive, balanced, etc., depending on the situation. You can also use power-ups, such as super fielder, super catch, etc., to boost your fielding skills and prevent more runs. - How to use different types of balls and deliveries: Cricket League has a variety of balls and deliveries that you can use to bat and bowl. Each ball has different characteristics, such as speed, spin, swing, bounce, etc. Each delivery has different effects, such as curving, reversing, drifting, etc. Here are some examples of how to use them: - Doosra: A delivery that spins away from the right-handed batsman or towards the left-handed batsman. It is used to deceive the batsman who expects the ball to spin in the opposite direction. It is effective against batsmen who play on the front foot or try to hit the ball on the leg side. - Sling: A delivery that swings in the air towards the batsman. It is used to surprise the batsman who expects the ball to go straight or away from him. It is effective against batsmen who play on the back foot or try to hit the ball on the off side. - In/Out Swings: Deliveries that swing in or out in the air and then reverse their direction after pitching. They are used to confuse the batsman who expects the ball to continue its initial swing. They are effective against batsmen who play late or across the line. - How to manage your team and choose the best players for each match: Cricket League lets you create and customize your own team and choose your players for each match. You can also unlock and upgrade your players and balls to improve their stats and abilities. Here are some tips on how to manage your team and choose the best players for each match: - Choose your players based on their roles, such as opener, middle-order, finisher, spinner, fast bowler, etc. You can also check their stats, such as power, accuracy, stamina, speed, etc., to see their strengths and weaknesses. - Choose your players based on their form, such as hot, cold, average, etc. You can also check their performance history, such as runs scored, wickets taken, catches made, etc., to see their consistency and reliability. - Choose your players based on their compatibility with each other and with the pitch and weather conditions. You can also check their skills, such as doosra, sling, in/out swings, etc., to see their versatility and adaptability. 
- How to check the field and vary your pace and direction: Cricket League lets you check the field before each delivery and vary your pace and direction while batting or bowling. This can help you plan your strategy and tactics for each delivery and match. Here are some tips on how to check the field and vary your pace and direction: - Check the field before each delivery by tapping on the field icon on the top right corner of the screen. You will see a bird's eye view of the field with dots representing your fielders. You can also see the gaps in the field where you can hit or bowl. - Vary your pace while batting by tapping on the bat icon on the bottom left corner of the screen. You will see a slider that lets you adjust your batting power from low to high. You can use low power for defensive shots or high power for aggressive shots. - Vary your direction while batting by swiping left or right on the screen. You will see an arrow that shows you where you are aiming your shot. You can use left or right swipes for off side or leg side shots. - Vary your pace while bowling by tapping on the ball icon on the bottom left corner of the screen. You will see a slider that lets you adjust your bowling speed from slow to fast. You can use slow speed for spinners or fast speed for pacers. - Vary your direction while bowling by swiping up or down on the screen. You will see an arrow that shows you where you are aiming your delivery. You can use up or down swipes for straight or swinging deliveries.

      How to Get More Out of Cricket League Mod APK?

      -

      If you want to get more out of Cricket League Mod APK, you need to know some ways that can help you enhance your experience and enjoyment of the game. Here are some of them:

      - - How to connect your account to Facebook and save your progress: Cricket League lets you connect your account to Facebook and save your progress in the game. This can help you secure your data and prevent losing it if you change your device or uninstall the game. It can also help you sync your progress across different devices and play with your Facebook friends. To connect your account to Facebook, follow these steps: - Tap on the menu icon on the top left corner of the screen and then tap on the settings icon on the bottom right corner of the screen. - Tap on the Facebook icon and then log in with your Facebook account. - Tap on the confirm button and then wait for the connection to be successful. - You will see a message that says "Your progress is now saved on Facebook". You can now play with your Facebook friends and sync your progress across different devices. - How to follow Cricket League on social media for exclusive offers and bonuses: Cricket League has official accounts on social media platforms, such as Facebook, Twitter, Instagram, YouTube, etc. You can follow them to get exclusive offers and bonuses, such as free coins, gems, balls, etc. You can also get updates, news, tips, tricks, videos, etc., about the game. To follow Cricket League on social media, follow these steps: - Tap on the menu icon on the top left corner of the screen and then tap on the social media icons on the bottom left corner of the screen. - You will be redirected to the respective social media pages of Cricket League. You can then follow them by tapping on the follow button or liking their page. - You will see a message that says "You have followed Cricket League on [social media platform]". You can now get exclusive offers and bonuses and stay updated about the game. - How to contact the developer for feedback and support: Cricket League is developed by Miniclip, a leading developer and publisher of mobile games. You can contact them for feedback and support if you have any questions, suggestions, issues, or problems with the game. To contact them, follow these steps: - Tap on the menu icon on the top left corner of the screen and then tap on the settings icon on the bottom right corner of the screen. - Tap on the help icon and then tap on the contact us button. - You will be redirected to a form where you can fill in your name, email address, subject, and message. You can also attach a screenshot if needed. - Tap on the send button and then wait for a reply from Miniclip.

      Conclusion

      -

      Cricket League is a great game for cricket fans who want to play fast, fun, and exciting cricket games on their mobile devices. However, if you want to get more out of this game without spending real money or watching ads, you can try downloading Cricket League Mod APK. This is a modded version of Cricket League that gives you unlimited coins and gems, all characters and locations unlocked, no ads, no root required, and other features that make it more fun and exciting than the original version.

      -

      cricket league hack apk download
      -cricket league unlimited coins mod apk
      -cricket league 2023 mod apk free download
      -cricket league premium mod apk latest version
      -cricket league online multiplayer mod apk
      -cricket league pro mod apk unlocked everything
      -cricket league 3d mod apk android 1
      -cricket league fantasy mod apk unlimited money
      -cricket league world cup mod apk download
      -cricket league vip mod apk no ads
      -cricket league mega mod apk revdl
      -cricket league real mod apk unlimited gems
      -cricket league ultimate mod apk offline
      -cricket league super mod apk 1.0.5
      -cricket league full mod apk download for pc
      -cricket league cracked mod apk pure
      -cricket league original mod apk obb
      -cricket league new mod apk update
      -cricket league best mod apk 2023
      -cricket league hd mod apk download apkpure
      -cricket league old mod apk 2023
      -cricket league lite mod apk download for android
      -cricket league beta mod apk rexdl
      -cricket league classic mod apk unlimited everything and coins
      -cricket league gold mod apk free shopping
      -cricket league deluxe mod apk no root
      -cricket league extreme mod apk all unlocked
      -cricket league plus mod apk download latest version
      -cricket league master mod apk unlimited players
      -cricket league legend mod apk god mode
      -cricket league elite mod apk high graphics
      -cricket league royal mod apk unlimited tickets
      -cricket league star mod apk all teams unlocked
      -cricket league power mod apk one hit kill
      -cricket league champion mod apk unlimited lives
      -cricket league professional mod apk no verification
      -cricket league modern mod apk anti ban
      -cricket league amazing mod apk all features unlocked
      -cricket league awesome mod apk unlimited energy
      -cricket league advanced mod apk no survey
      -cricket league epic mod apk all levels unlocked
      -cricket league special mod apk unlimited cash and coins
      -cricket league smart mod apk download for ios
      -cricket league easy mod apk unlimited diamonds and gems
      -cricket legend hard mode APK unlimited money and gold

      -

      In this article, we have shown you how to download and install Cricket League Mod APK on your Android device, what are the features and benefits of playing this version, what are the drawbacks and risks involved, how to play like a pro, and how to get more out of this game. We hope that this article has been helpful and informative for you. If you have any questions or comments, please feel free to leave them below.

      -

      Now that you know everything about Cricket League Mod APK, why not give it a try and see for yourself how awesome it is? Download it now and enjoy playing cricket like never before!

      -

      FAQs

      -

      Here are some frequently asked questions about Cricket League Mod APK:

      - - Q: Is Cricket League Mod APK safe to download and play? -- A: Cricket League Mod APK is not an official version of Cricket League, but a modified one that may not be safe or secure from viruses or malware. Some websites that offer this version may also be malicious or fraudulent that may infect your device or trick you into giving them your data or money. Therefore, you need to be careful when choosing where to download Cricket League Mod APK. We recommend using HappyMod, which is a platform for mod lovers to download, request, and test android mods. Here, you can find Cricket League v1.0.5 MOD APK (Allways Perfect), which is one of the latest versions of Cricket League Mod APK. - Q: How can I update Cricket League Mod APK? -- A: Cricket League Mod APK may not be compatible with some updates or may overwrite it with the original version. Therefore, you need to check for updates regularly and download them from the same source that you downloaded the modded version. You can also check HappyMod for the latest versions of Cricket League Mod APK and download them from there. - Q: How can I uninstall Cricket League Mod APK? -- A: Cricket League Mod APK can be uninstalled like any other app on your device. You can go to Settings > Apps > Cricket League and tap on the uninstall button. You can also long-press on the icon of Cricket League on your device and drag it to the uninstall option. - Q: Can I play Cricket League Mod APK offline? -- A: Cricket League Mod APK requires an internet connection to play, as it is a multiplayer game that connects you with other players from around the world. However, you can play some offline modes, such as practice mode or challenge mode, without an internet connection. - Q: Can I play Cricket League Mod APK on PC? -- A: Cricket League Mod APK is designed for Android devices, but you can also play it on PC using an Android emulator. An Android emulator is a software that simulates an Android device on your PC and lets you run Android apps and games on it. Some of the popular Android emulators are BlueStacks, NoxPlayer, LDPlayer, etc. You can download and install any of them on your PC and then download and install Cricket League Mod APK on it.

        
      -
      -
      \ No newline at end of file diff --git a/spaces/fclong/summary/fengshen/examples/tcbert/__init__.py b/spaces/fclong/summary/fengshen/examples/tcbert/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/feregVcuzo/sanity-test-midi/checkpoint/Achieve Your Instagram Goals with 5000 Followers Pro App for Android Devices.md b/spaces/feregVcuzo/sanity-test-midi/checkpoint/Achieve Your Instagram Goals with 5000 Followers Pro App for Android Devices.md deleted file mode 100644 index 25dd29b14c32cc8b76c8cdc0122eb43e46c858a8..0000000000000000000000000000000000000000 --- a/spaces/feregVcuzo/sanity-test-midi/checkpoint/Achieve Your Instagram Goals with 5000 Followers Pro App for Android Devices.md +++ /dev/null @@ -1,109 +0,0 @@ - -

      How to Get 5000+ Instagram Followers with an App

      -

      Instagram is one of the most popular social media platforms in the world, with over 1 billion monthly active users. If you want to grow your brand, business, or personal profile on Instagram, you need to have a large and engaged audience that likes and comments on your posts. But how do you get more Instagram followers without spending hours on the app every day?

      -

      One of the easiest and fastest ways to get more Instagram followers is to use an app that helps you gain real and targeted followers. An app can help you automate the process of following, liking, and commenting on other users' posts, as well as promote your own profile to potential followers. An app can also help you analyze your performance, track your growth, and optimize your strategy.

      -

      instagram followers 5000+ app download


Download File: https://gohhs.com/2uPv48



      -

      But not all Instagram followers apps are created equal. Some apps may offer fake or bot followers that can harm your reputation and engagement. Some apps may also violate Instagram's terms of service and put your account at risk of being banned or suspended. Therefore, you need to be careful when choosing an app to get more Instagram followers.

      -

      In this article, we will review one of the best Instagram followers apps in 2023, called 5000 Followers Pro Instagram. We will also give you some other options to consider if you want to try different apps. By the end of this article, you will know how to get 5000+ Instagram followers with an app in a matter of days.

      -

      The Best Instagram Followers App in 2023

      -

      5000 Followers Pro Instagram

      -

      5000 Followers Pro Instagram is an app that claims to help you get thousands of real followers on Instagram in a short time. It works by using a coin system, where you can earn coins by following other users or completing tasks. You can then use the coins to promote your own profile and get more followers.

      -

      The app has some features that make it stand out from other similar apps, such as:

      -
        -
      • It allows you to switch between multiple accounts and earn coins faster.
      • -
      • It lets you choose the target audience for your promotion based on hashtags, locations, or usernames.
      • -
      • It provides you with statistics and reports on your follower growth and activity.
      • -
      • It has a user-friendly interface and a simple design.
      • -
      -

      However, the app also has some drawbacks that you should be aware of, such as:

      -

      instagram followers 5000+ app apk
      -instagram followers 5000+ app free
      -instagram followers 5000+ app ios
      -instagram followers 5000+ app android
      -instagram followers 5000+ app review
      -instagram followers 5000+ app online
      -instagram followers 5000+ app for pc
      -instagram followers 5000+ app mod
      -instagram followers 5000+ app hack
      -instagram followers 5000+ app legit
      -instagram followers 5000+ app pro
      -instagram followers 5000+ app premium
      -instagram followers 5000+ app latest version
      -instagram followers 5000+ app no verification
      -instagram followers 5000+ app no survey
      -instagram followers 5000+ app no password
      -instagram followers 5000+ app no login
      -instagram followers 5000+ app no human verification
      -instagram followers 5000+ app no root
      -instagram followers 5000+ app no jailbreak
      -instagram followers 5000+ app safe
      -instagram followers 5000+ app real
      -instagram followers 5000+ app best
      -instagram followers 5000+ app top
      -instagram followers 5000+ app popular
      -instagram followers 5000+ app trusted
      -instagram followers 5000+ app reliable
      -instagram followers 5000+ app fast
      -instagram followers 5000+ app easy
      -instagram followers 5000+ app simple
      -instagram followers 5000+ app effective
      -instagram followers 5000+ app working
      -instagram followers 5000+ app new
      -instagram followers 5000+ app updated
      -instagram followers 5000+ app cheap
      -instagram followers 5000+ app affordable
      -instagram followers 5000+ app low cost
      -instagram followers 5000+ app high quality
      -instagram followers 5000+ app organic
      -instagram followers 5000+ app genuine
      -instagram followers 5000+ app instant
      -instagram followers 5000+ app quick
      -instagram followers 5000+ app unlimited
      -instagram followers 5000+ app guaranteed
      -instagram followers 5000+ app permanent
      -instagram followers 5000+ app active
      -instagram followers 5000+ app targeted
      -instagram followers 5000+ app niche specific

      -
        -
      • It requires you to follow a lot of users in order to earn enough coins for promotion.
      • -
      • It may not guarantee that the followers you get are genuine and interested in your niche.
      • -
      • It may not comply with Instagram's policies and guidelines, which could result in your account being flagged or banned.
      • -
      -

      If you want to try 5000 Followers Pro Instagram, you can download it from this link. The app is compatible with Android devices only. To use it, you need to sign in with your Instagram account and start earning coins by following other users or completing tasks. You can then use the coins to promote your profile and get more followers.

      -

      Other Instagram Followers Apps to Consider

      -

      If you are not satisfied with 5000 Followers Pro Instagram or want to explore other options, here are some other Instagram followers apps that you can consider:

      - - - - -
Name | Description | Link
AiGrow | AiGrow is an AI-powered


      -
Followers Gallery | Followers Gallery is a free app that helps you get real and active Instagram followers and likes by using a coin system. You can earn coins by following or liking other users, and then use them to get more followers and likes for your own profile. | this link
GetInsta | GetInsta is another free app that helps you get more Instagram followers and likes organically. It also uses a coin system, where you can earn coins by completing tasks or buying them with money. You can then use the coins to get followers and likes from real users. | this link
      -

      Conclusion

      -

      Getting more Instagram followers and likes can be a challenging task, especially if you want to do it in a safe and effective way. However, with the help of an app, you can make the process easier and faster. In this article, we have reviewed one of the best Instagram followers apps in 2023, 5000 Followers Pro Instagram, and also given you some other options to consider. We hope that this article has helped you find the best app for your needs and goals.

      -

      If you want to try 5000 Followers Pro Instagram or any of the other apps mentioned in this article, you can download them from the links provided. Remember to follow the instructions and guidelines of each app carefully, and avoid using any app that may violate Instagram's terms of service or put your account at risk. Also, remember that an app alone is not enough to grow your Instagram profile. You also need to create high-quality content, engage with your audience, and use hashtags and other strategies to boost your visibility and reach.

      -

      So, what are you waiting for? Download an app today and start getting more Instagram followers and likes in no time!

      -

      FAQs

      -

      Here are some of the most frequently asked questions and answers about Instagram followers apps:

      -
        -
1. Are Instagram followers apps safe to use?
-

        It depends on the app. Some apps are safe and reliable, while others may be scammy or risky. You should always do your research before using any app, and check the reviews, ratings, features, and policies of each app. You should also avoid any app that asks for your password, personal information, or payment details.

        -
2. Do Instagram followers apps guarantee real and active followers?
-

        Again, it depends on the app. Some apps may offer real and active followers, while others may offer fake or bot followers. You should always look for apps that have a transparent and organic system of getting followers, such as using coins or tasks. You should also check the quality and engagement of the followers you get from each app.

        -
3. How many followers can I get from an Instagram followers app?
-

        The number of followers you can get from an app depends on several factors, such as the app itself, the plan you choose, the coins you have, the target audience you select, and the competition level of your niche. Generally speaking, most apps can help you get hundreds or thousands of followers in a short period of time.

        -
4. How much does an Instagram followers app cost?
-

        The cost of an Instagram followers app varies depending on the app itself, the plan you choose, the features you get, and the payment method you use. Some apps are free to use, while others may charge a fee or offer in-app purchases. You should always compare different apps and plans before choosing one that suits your budget and needs.

        -
5. What are some tips to use an Instagram followers app effectively?
-

        Here are some tips to use an Instagram followers app effectively:

        -
          -
        • Choose an app that is safe, reliable, and reputable.
        • -
        • Follow the instructions and guidelines of each app carefully.
        • -
        • Earn coins by following or liking other users or completing tasks.
        • -
        • Use coins to promote your profile and get more followers and likes.
        • -
        • Select a target audience that matches your niche and goals.
        • -
        • Analyze your performance and growth with statistics and reports.
        • -
        • Create high-quality content and engage with your audience regularly.
        • -
        • Use hashtags and other strategies to boost your visibility and reach.
        • -

        -
        -
        \ No newline at end of file diff --git a/spaces/fffiloni/controlnet-animation-doodle/node_modules/engine.io/build/socket.d.ts b/spaces/fffiloni/controlnet-animation-doodle/node_modules/engine.io/build/socket.d.ts deleted file mode 100644 index 518b7712971aa9a56db8208628f6c6178da0e5d9..0000000000000000000000000000000000000000 --- a/spaces/fffiloni/controlnet-animation-doodle/node_modules/engine.io/build/socket.d.ts +++ /dev/null @@ -1,162 +0,0 @@ -/// -import { EventEmitter } from "events"; -import { IncomingMessage } from "http"; -import { Transport } from "./transport"; -import { RawData } from "engine.io-parser"; -export interface SendOptions { - compress?: boolean; -} -export declare class Socket extends EventEmitter { - readonly protocol: number; - readonly request: IncomingMessage; - readonly remoteAddress: string; - _readyState: string; - transport: Transport; - private server; - private upgrading; - private upgraded; - private writeBuffer; - private packetsFn; - private sentCallbackFn; - private cleanupFn; - private checkIntervalTimer; - private upgradeTimeoutTimer; - private pingTimeoutTimer; - private pingIntervalTimer; - /** - * This is the session identifier that the client will use in the subsequent HTTP requests. It must not be shared with - * others parties, as it might lead to session hijacking. - * - * @private - */ - private readonly id; - get readyState(): string; - set readyState(state: string); - /** - * Client class (abstract). - * - * @api private - */ - constructor(id: any, server: any, transport: any, req: any, protocol: any); - /** - * Called upon transport considered open. - * - * @api private - */ - private onOpen; - /** - * Called upon transport packet. - * - * @param {Object} packet - * @api private - */ - private onPacket; - /** - * Called upon transport error. - * - * @param {Error} error object - * @api private - */ - private onError; - /** - * Pings client every `this.pingInterval` and expects response - * within `this.pingTimeout` or closes connection. - * - * @api private - */ - private schedulePing; - /** - * Resets ping timeout. - * - * @api private - */ - private resetPingTimeout; - /** - * Attaches handlers for the given transport. - * - * @param {Transport} transport - * @api private - */ - private setTransport; - /** - * Upgrades socket to the given transport - * - * @param {Transport} transport - * @api private - */ - private maybeUpgrade; - /** - * Clears listeners and timers associated with current transport. - * - * @api private - */ - private clearTransport; - /** - * Called upon transport considered closed. - * Possible reasons: `ping timeout`, `client error`, `parse error`, - * `transport error`, `server close`, `transport close` - */ - private onClose; - /** - * Setup and manage send callback - * - * @api private - */ - private setupSendCallback; - /** - * Sends a message packet. - * - * @param {Object} data - * @param {Object} options - * @param {Function} callback - * @return {Socket} for chaining - * @api public - */ - send(data: RawData, options?: SendOptions, callback?: () => void): this; - /** - * Alias of {@link send}. - * - * @param data - * @param options - * @param callback - */ - write(data: RawData, options?: SendOptions, callback?: () => void): this; - /** - * Sends a packet. - * - * @param {String} type - packet type - * @param {String} data - * @param {Object} options - * @param {Function} callback - * - * @api private - */ - private sendPacket; - /** - * Attempts to flush the packets buffer. 
- * - * @api private - */ - private flush; - /** - * Get available upgrades for this socket. - * - * @api private - */ - private getAvailableUpgrades; - /** - * Closes the socket and underlying transport. - * - * @param {Boolean} discard - optional, discard the transport - * @return {Socket} for chaining - * @api public - */ - close(discard?: boolean): void; - /** - * Closes the underlying transport. - * - * @param {Boolean} discard - * @api private - */ - private closeTransport; -} diff --git a/spaces/flatindo/Image-Diffusion-WebUI/diffusion_webui/diffusion_models/img2img_app.py b/spaces/flatindo/Image-Diffusion-WebUI/diffusion_webui/diffusion_models/img2img_app.py deleted file mode 100644 index a85ee16eedf67ea8ce58374513f9e7a7a3843a39..0000000000000000000000000000000000000000 --- a/spaces/flatindo/Image-Diffusion-WebUI/diffusion_webui/diffusion_models/img2img_app.py +++ /dev/null @@ -1,155 +0,0 @@ -import gradio as gr -import torch -from diffusers import StableDiffusionImg2ImgPipeline -from PIL import Image - -from diffusion_webui.utils.model_list import stable_model_list -from diffusion_webui.utils.scheduler_list import ( - SCHEDULER_MAPPING, - get_scheduler, -) - - -class StableDiffusionImage2ImageGenerator: - def __init__(self): - self.pipe = None - - def load_model(self, stable_model_path, scheduler): - if self.pipe is None or self.pipe.model_name != stable_model_path or self.pipe.scheduler_name != scheduler: - self.pipe = StableDiffusionImg2ImgPipeline.from_pretrained( - stable_model_path, safety_checker=None, torch_dtype=torch.float16 - ) - - self.pipe.model_name = stable_model_path - self.pipe.scheduler_name = scheduler - self.pipe = get_scheduler(pipe=self.pipe, scheduler=scheduler) - self.pipe.to("cuda") - self.pipe.enable_xformers_memory_efficient_attention() - - return self.pipe - - def generate_image( - self, - image_path: str, - stable_model_path: str, - prompt: str, - negative_prompt: str, - num_images_per_prompt: int, - scheduler: str, - guidance_scale: int, - num_inference_step: int, - seed_generator=0, - ): - pipe = self.load_model( - stable_model_path=stable_model_path, - scheduler=scheduler, - ) - - if seed_generator == 0: - random_seed = torch.randint(0, 1000000, (1,)) - generator = torch.manual_seed(random_seed) - else: - generator = torch.manual_seed(seed_generator) - - image = Image.open(image_path) - images = pipe( - prompt, - image=image, - negative_prompt=negative_prompt, - num_images_per_prompt=num_images_per_prompt, - num_inference_steps=num_inference_step, - guidance_scale=guidance_scale, - generator=generator, - ).images - - return images - - def app(): - with gr.Blocks(): - with gr.Row(): - with gr.Column(): - image2image_image_file = gr.Image( - type="filepath", label="Image" - ).style(height=260) - - image2image_prompt = gr.Textbox( - lines=1, - placeholder="Prompt", - show_label=False, - ) - - image2image_negative_prompt = gr.Textbox( - lines=1, - placeholder="Negative Prompt", - show_label=False, - ) - - with gr.Row(): - with gr.Column(): - image2image_model_path = gr.Dropdown( - choices=stable_model_list, - value=stable_model_list[0], - label="Stable Model Id", - ) - - image2image_guidance_scale = gr.Slider( - minimum=0.1, - maximum=15, - step=0.1, - value=7.5, - label="Guidance Scale", - ) - image2image_num_inference_step = gr.Slider( - minimum=1, - maximum=100, - step=1, - value=50, - label="Num Inference Step", - ) - with gr.Row(): - with gr.Column(): - image2image_scheduler = gr.Dropdown( - choices=list(SCHEDULER_MAPPING.keys()), - 
value=list(SCHEDULER_MAPPING.keys())[0], - label="Scheduler", - ) - image2image_num_images_per_prompt = gr.Slider( - minimum=1, - maximum=4, - step=1, - value=1, - label="Number Of Images", - ) - - image2image_seed_generator = gr.Slider( - minimum=0, - maximum=1000000, - step=1, - value=0, - label="Seed(0 for random)", - ) - - image2image_predict_button = gr.Button(value="Generator") - - with gr.Column(): - output_image = gr.Gallery( - label="Generated images", - show_label=False, - elem_id="gallery", - ).style(grid=(1, 2)) - - image2image_predict_button.click( - fn=StableDiffusionImage2ImageGenerator().generate_image, - inputs=[ - image2image_image_file, - image2image_model_path, - image2image_prompt, - image2image_negative_prompt, - image2image_num_images_per_prompt, - image2image_scheduler, - image2image_guidance_scale, - image2image_num_inference_step, - image2image_seed_generator, - ], - outputs=[output_image], - ) diff --git a/spaces/flatindo/generate2/diffusion_webui/diffusion_models/base_controlnet_pipeline.py b/spaces/flatindo/generate2/diffusion_webui/diffusion_models/base_controlnet_pipeline.py deleted file mode 100644 index 167158b11b477a72c019da69d25d0c7318eacae5..0000000000000000000000000000000000000000 --- a/spaces/flatindo/generate2/diffusion_webui/diffusion_models/base_controlnet_pipeline.py +++ /dev/null @@ -1,31 +0,0 @@ -class ControlnetPipeline: - def __init__(self): - self.pipe = None - - def load_model(self, stable_model_path: str, controlnet_model_path: str): - raise NotImplementedError() - - def load_image(self, image_path: str): - raise NotImplementedError() - - def controlnet_preprocces(self, read_image: str): - raise NotImplementedError() - - def generate_image( - self, - image_path: str, - stable_model_path: str, - controlnet_model_path: str, - prompt: str, - negative_prompt: str, - num_images_per_prompt: int, - guidance_scale: int, - num_inference_step: int, - controlnet_conditioning_scale: int, - scheduler: str, - seed_generator: int, - ): - raise NotImplementedError() - - def web_interface(): - raise NotImplementedError() diff --git a/spaces/flax-community/TamilLanguageDemos/app.py b/spaces/flax-community/TamilLanguageDemos/app.py deleted file mode 100644 index b95fe5334d1a2389c70c2ffc424eb04727977f4d..0000000000000000000000000000000000000000 --- a/spaces/flax-community/TamilLanguageDemos/app.py +++ /dev/null @@ -1,109 +0,0 @@ -""" Script for streamlit demo - @author: AbinayaM02 -""" - -# Install necessary libraries -from transformers import AutoTokenizer, AutoModelWithLMHead, pipeline -import streamlit as st -import json - -# Read the config -with open("config.json") as f: - config = json.loads(f.read()) - -# Set page layout -st.set_page_config( - page_title="Tamil Language Models", - page_icon="U+270D", - layout="wide", - initial_sidebar_state="expanded" - ) - -# Load the model -@st.cache(allow_output_mutation=True) -def load_model(model_name): - with st.spinner('Waiting for the model to load.....'): - model = AutoModelWithLMHead.from_pretrained(model_name) - tokenizer = AutoTokenizer.from_pretrained(model_name) - return model, tokenizer - -# Side bar -img = st.sidebar.image("images/tamil_logo.jpg", width=300) - -# Choose the model based on selection -st.sidebar.title("கதை சொல்லி!") -page = st.sidebar.selectbox(label="Select model", - options=config["models"], - help="Select the model to generate the text") -data = st.sidebar.selectbox(label="Select data", - options=config[page], - help="Select the data on which the model is trained") -if page == "Text 
Generation" and data == "Oscar + IndicNLP": - st.sidebar.markdown( - "[Model tracking on wandb](https://wandb.ai/wandb/hf-flax-gpt2-tamil/runs/watdq7ib/overview?workspace=user-abinayam)", - unsafe_allow_html=True - ) - st.sidebar.markdown( - "[Model card](https://huggingface.co/abinayam/gpt-2-tamil)", - unsafe_allow_html=True - ) -elif page == "Text Generation" and data == "Oscar": - st.sidebar.markdown( - "[Model tracking on wandb](https://wandb.ai/abinayam/hf-flax-gpt-2-tamil/runs/1ddv4131/overview?workspace=user-abinayam)", - unsafe_allow_html=True - ) - st.sidebar.markdown( - "[Model card](https://huggingface.co/flax-community/gpt-2-tamil)", - unsafe_allow_html=True - ) - -# Main page -st.title("Tamil Language Demos") -st.markdown( - "Built as part of the Flax/Jax Community week, this demo uses [GPT2 trained on Oscar dataset](https://huggingface.co/flax-community/gpt-2-tamil) " - "and [GPT2 trained on Oscar & IndicNLP dataset] (https://huggingface.co/abinayam/gpt-2-tamil) " - "to show language generation!" -) - -# Set default options for examples -prompts = config["examples"] + ["Custom"] - -if page == 'Text Generation' and data == 'Oscar': - st.header('Tamil text generation with GPT2') - st.markdown('A simple demo using gpt-2-tamil model trained on Oscar dataset!') - model, tokenizer = load_model(config[data]) -elif page == 'Text Generation' and data == "Oscar + Indic Corpus": - st.header('Tamil text generation with GPT2') - st.markdown('A simple demo using gpt-2-tamil model trained on Oscar + IndicNLP dataset') - model, tokenizer = load_model(config[data]) -else: - st.title('Tamil News classification with Finetuned GPT2') - st.markdown('In progress') - -if page == "Text Generation": - # Set default options - prompt = st.selectbox('Examples', prompts, index=0) - if prompt == "Custom": - prompt_box = "", - text = st.text_input( - 'Add your custom text in Tamil', - "", - max_chars=1000) - else: - prompt_box = prompt - text = st.text_input( - 'Selected example in Tamil', - prompt, - max_chars=1000) - max_len = st.slider('Select length of the sentence to generate', 25, 300, 100) - gen_bt = st.button('Generate') - - # Generate text - if gen_bt: - try: - with st.spinner('Generating...'): - generator = pipeline('text-generation', model=model, tokenizer=tokenizer) - seqs = generator(prompt_box, max_length=max_len)[0]['generated_text'] - st.write(seqs) - except Exception as e: - st.exception(f'Exception: {e}') diff --git a/spaces/florim/MedGPT/CONTRIBUTING.md b/spaces/florim/MedGPT/CONTRIBUTING.md deleted file mode 100644 index 79169a0c1951853303f73ffa1fddb3518685606a..0000000000000000000000000000000000000000 --- a/spaces/florim/MedGPT/CONTRIBUTING.md +++ /dev/null @@ -1,105 +0,0 @@ -# Contributing to ProjectName - -First of all, thank you for considering contributing to our project! We appreciate your time and effort, and we value any contribution, whether it's reporting a bug, suggesting a new feature, or submitting a pull request. - -This document provides guidelines and best practices to help you contribute effectively. 
- -## Table of Contents - -- [Code of Conduct](#code-of-conduct) -- [Getting Started](#getting-started) -- [How to Contribute](#how-to-contribute) - - [Reporting Bugs](#reporting-bugs) - - [Suggesting Enhancements](#suggesting-enhancements) - - [Submitting Pull Requests](#submitting-pull-requests) -- [Style Guidelines](#style-guidelines) - - [Code Formatting](#code-formatting) - - [Pre-Commit Hooks](#pre-commit-hooks) - -## Code of Conduct - -By participating in this project, you agree to abide by our [Code of Conduct](CODE_OF_CONDUCT.md). Please read it to understand the expectations we have for everyone who contributes to this project. - -## 📢 A Quick Word -Right now we will not be accepting any Contributions that add non-essential commands to Auto-GPT. - -However, you absolutely can still add these commands to Auto-GPT in the form of plugins. Please check out this [template](https://github.com/Significant-Gravitas/Auto-GPT-Plugin-Template). -> ⚠️ Plugin support is expected to ship within the week. You can follow PR #757 for more updates! - -## Getting Started - -To start contributing, follow these steps: - -1. Fork the repository and clone your fork. -2. Create a new branch for your changes (use a descriptive name, such as `fix-bug-123` or `add-new-feature`). -3. Make your changes in the new branch. -4. Test your changes thoroughly. -5. Commit and push your changes to your fork. -6. Create a pull request following the guidelines in the [Submitting Pull Requests](#submitting-pull-requests) section. - -## How to Contribute - -### Reporting Bugs - -If you find a bug in the project, please create an issue on GitHub with the following information: - -- A clear, descriptive title for the issue. -- A description of the problem, including steps to reproduce the issue. -- Any relevant logs, screenshots, or other supporting information. - -### Suggesting Enhancements - -If you have an idea for a new feature or improvement, please create an issue on GitHub with the following information: - -- A clear, descriptive title for the issue. -- A detailed description of the proposed enhancement, including any benefits and potential drawbacks. -- Any relevant examples, mockups, or supporting information. - -### Submitting Pull Requests - -When submitting a pull request, please ensure that your changes meet the following criteria: - -- Your pull request should be atomic and focus on a single change. -- Your pull request should include tests for your change. -- You should have thoroughly tested your changes with multiple different prompts. -- You should have considered potential risks and mitigations for your changes. -- You should have documented your changes clearly and comprehensively. -- You should not include any unrelated or "extra" small tweaks or changes. - -## Style Guidelines - -### Code Formatting - -We use the `black` code formatter to maintain a consistent coding style across the project. Please ensure that your code is formatted using `black` before submitting a pull request. You can install `black` using `pip`: - -```bash -pip install black -``` - -To format your code, run the following command in the project's root directory: - -```bash -black . -``` -### Pre-Commit Hooks -We use pre-commit hooks to ensure that code formatting and other checks are performed automatically before each commit. 
To set up pre-commit hooks for this project, follow these steps: - -Install the pre-commit package using pip: -```bash -pip install pre-commit -``` - -Run the following command in the project's root directory to install the pre-commit hooks: -```bash -pre-commit install -``` - -Now, the pre-commit hooks will run automatically before each commit, checking your code formatting and other requirements. - -If you encounter any issues or have questions, feel free to reach out to the maintainers or open a new issue on GitHub. We're here to help and appreciate your efforts to contribute to the project. - -Happy coding, and once again, thank you for your contributions! - -Maintainers will look at PR that have no merge conflicts when deciding what to add to the project. Make sure your PR shows up here: - -https://github.com/Torantulino/Auto-GPT/pulls?q=is%3Apr+is%3Aopen+-is%3Aconflict+ \ No newline at end of file diff --git a/spaces/florim/MedGPT/tests/test_config.py b/spaces/florim/MedGPT/tests/test_config.py deleted file mode 100644 index b472a24c78edd1f931a76c68e08ed544bbe61d98..0000000000000000000000000000000000000000 --- a/spaces/florim/MedGPT/tests/test_config.py +++ /dev/null @@ -1,84 +0,0 @@ -from unittest import TestCase - -from autogpt.config import Config - - -class TestConfig(TestCase): - """ - Test cases for the Config class, which handles the configuration settings - for the AI and ensures it behaves as a singleton. - """ - - def setUp(self): - """ - Set up the test environment by creating an instance of the Config class. - """ - self.config = Config() - - def test_singleton(self): - """ - Test if the Config class behaves as a singleton by ensuring that two instances are the same. - """ - config2 = Config() - self.assertIs(self.config, config2) - - def test_initial_values(self): - """ - Test if the initial values of the Config class attributes are set correctly. - """ - self.assertFalse(self.config.debug_mode) - self.assertFalse(self.config.continuous_mode) - self.assertFalse(self.config.speak_mode) - self.assertEqual(self.config.fast_llm_model, "gpt-3.5-turbo") - self.assertEqual(self.config.smart_llm_model, "gpt-4") - self.assertEqual(self.config.fast_token_limit, 4000) - self.assertEqual(self.config.smart_token_limit, 8000) - - def test_set_continuous_mode(self): - """ - Test if the set_continuous_mode() method updates the continuous_mode attribute. - """ - self.config.set_continuous_mode(True) - self.assertTrue(self.config.continuous_mode) - - def test_set_speak_mode(self): - """ - Test if the set_speak_mode() method updates the speak_mode attribute. - """ - self.config.set_speak_mode(True) - self.assertTrue(self.config.speak_mode) - - def test_set_fast_llm_model(self): - """ - Test if the set_fast_llm_model() method updates the fast_llm_model attribute. - """ - self.config.set_fast_llm_model("gpt-3.5-turbo-test") - self.assertEqual(self.config.fast_llm_model, "gpt-3.5-turbo-test") - - def test_set_smart_llm_model(self): - """ - Test if the set_smart_llm_model() method updates the smart_llm_model attribute. - """ - self.config.set_smart_llm_model("gpt-4-test") - self.assertEqual(self.config.smart_llm_model, "gpt-4-test") - - def test_set_fast_token_limit(self): - """ - Test if the set_fast_token_limit() method updates the fast_token_limit attribute. - """ - self.config.set_fast_token_limit(5000) - self.assertEqual(self.config.fast_token_limit, 5000) - - def test_set_smart_token_limit(self): - """ - Test if the set_smart_token_limit() method updates the smart_token_limit attribute. 
- """ - self.config.set_smart_token_limit(9000) - self.assertEqual(self.config.smart_token_limit, 9000) - - def test_set_debug_mode(self): - """ - Test if the set_debug_mode() method updates the debug_mode attribute. - """ - self.config.set_debug_mode(True) - self.assertTrue(self.config.debug_mode) diff --git a/spaces/flowers-team/SocialAISchool/gym-minigrid/gym_minigrid/envs/distshift.py b/spaces/flowers-team/SocialAISchool/gym-minigrid/gym_minigrid/envs/distshift.py deleted file mode 100644 index 437a6180846a8c1c278ddba932c4dc1ab185fe67..0000000000000000000000000000000000000000 --- a/spaces/flowers-team/SocialAISchool/gym-minigrid/gym_minigrid/envs/distshift.py +++ /dev/null @@ -1,70 +0,0 @@ -from gym_minigrid.minigrid import * -from gym_minigrid.register import register - -class DistShiftEnv(MiniGridEnv): - """ - Distributional shift environment. - """ - - def __init__( - self, - width=9, - height=7, - agent_start_pos=(1,1), - agent_start_dir=0, - strip2_row=2 - ): - self.agent_start_pos = agent_start_pos - self.agent_start_dir = agent_start_dir - self.goal_pos = (width-2, 1) - self.strip2_row = strip2_row - - super().__init__( - width=width, - height=height, - max_steps=4*width*height, - # Set this to True for maximum speed - see_through_walls=True - ) - - def _gen_grid(self, width, height): - # Create an empty grid - self.grid = Grid(width, height) - - # Generate the surrounding walls - self.grid.wall_rect(0, 0, width, height) - - # Place a goal square in the bottom-right corner - self.put_obj(Goal(), *self.goal_pos) - - # Place the lava rows - for i in range(self.width - 6): - self.grid.set(3+i, 1, Lava()) - self.grid.set(3+i, self.strip2_row, Lava()) - - # Place the agent - if self.agent_start_pos is not None: - self.agent_pos = self.agent_start_pos - self.agent_dir = self.agent_start_dir - else: - self.place_agent() - - self.mission = "get to the green goal square" - -class DistShift1(DistShiftEnv): - def __init__(self): - super().__init__(strip2_row=2) - -class DistShift2(DistShiftEnv): - def __init__(self): - super().__init__(strip2_row=5) - -register( - id='MiniGrid-DistShift1-v0', - entry_point='gym_minigrid.envs:DistShift1' -) - -register( - id='MiniGrid-DistShift2-v0', - entry_point='gym_minigrid.envs:DistShift2' -) diff --git a/spaces/foghuang/ChatGLM2-6B/web_demo.py b/spaces/foghuang/ChatGLM2-6B/web_demo.py deleted file mode 100644 index b4408e6d5fcf37a3aecc3a005d669d50b45bfe15..0000000000000000000000000000000000000000 --- a/spaces/foghuang/ChatGLM2-6B/web_demo.py +++ /dev/null @@ -1,108 +0,0 @@ -from transformers import AutoModel, AutoTokenizer -import gradio as gr -import mdtex2html -from utils import load_model_on_gpus - -tokenizer = AutoTokenizer.from_pretrained("THUDM/chatglm2-6b", trust_remote_code=True) -model = AutoModel.from_pretrained("THUDM/chatglm2-6b", trust_remote_code=True).cuda() -# 多显卡支持,使用下面两行代替上面一行,将num_gpus改为你实际的显卡数量 -# from utils import load_model_on_gpus -# model = load_model_on_gpus("THUDM/chatglm2-6b", num_gpus=2) -model = model.eval() - -"""Override Chatbot.postprocess""" - - -def postprocess(self, y): - if y is None: - return [] - for i, (message, response) in enumerate(y): - y[i] = ( - None if message is None else mdtex2html.convert((message)), - None if response is None else mdtex2html.convert(response), - ) - return y - - -gr.Chatbot.postprocess = postprocess - - -def parse_text(text): - """copy from https://github.com/GaiZhenbiao/ChuanhuChatGPT/""" - lines = text.split("\n") - lines = [line for line in lines if line != ""] - count = 0 - for i, 
line in enumerate(lines): - if "```" in line: - count += 1 - items = line.split('`') - if count % 2 == 1: - lines[i] = f'<pre><code class="language-{items[-1]}">' - else: - lines[i] = f'<br></code></pre>' - else: - if i > 0: - if count % 2 == 1: - line = line.replace("`", "\`") - line = line.replace("<", "&lt;") - line = line.replace(">", "&gt;") - line = line.replace(" ", "&nbsp;") - line = line.replace("*", "&ast;") - line = line.replace("_", "&lowbar;") - line = line.replace("-", "&#45;") - line = line.replace(".", "&#46;") - line = line.replace("!", "&#33;") - line = line.replace("(", "&#40;") - line = line.replace(")", "&#41;") - line = line.replace("$", "&#36;") - lines[i] = "<br>
        "+line - text = "".join(lines) - return text - - -def predict(input, chatbot, max_length, top_p, temperature, history, past_key_values): - chatbot.append((parse_text(input), "")) - for response, history, past_key_values in model.stream_chat(tokenizer, input, history, past_key_values=past_key_values, - return_past_key_values=True, - max_length=max_length, top_p=top_p, - temperature=temperature): - chatbot[-1] = (parse_text(input), parse_text(response)) - - yield chatbot, history, past_key_values - - -def reset_user_input(): - return gr.update(value='') - - -def reset_state(): - return [], [], None - - -with gr.Blocks() as demo: - gr.HTML("""

        ChatGLM2-6B

        """) - - chatbot = gr.Chatbot() - with gr.Row(): - with gr.Column(scale=4): - with gr.Column(scale=12): - user_input = gr.Textbox(show_label=False, placeholder="Input...", lines=10).style( - container=False) - with gr.Column(min_width=32, scale=1): - submitBtn = gr.Button("Submit", variant="primary") - with gr.Column(scale=1): - emptyBtn = gr.Button("Clear History") - max_length = gr.Slider(0, 32768, value=8192, step=1.0, label="Maximum length", interactive=True) - top_p = gr.Slider(0, 1, value=0.8, step=0.01, label="Top P", interactive=True) - temperature = gr.Slider(0, 1, value=0.95, step=0.01, label="Temperature", interactive=True) - - history = gr.State([]) - past_key_values = gr.State(None) - - submitBtn.click(predict, [user_input, chatbot, max_length, top_p, temperature, history, past_key_values], - [chatbot, history, past_key_values], show_progress=True) - submitBtn.click(reset_user_input, [], [user_input]) - - emptyBtn.click(reset_state, outputs=[chatbot, history, past_key_values], show_progress=True) - -demo.queue().launch(share=True, inbrowser=True) diff --git a/spaces/giswqs/solara-geospatial/pages/00_home.py b/spaces/giswqs/solara-geospatial/pages/00_home.py deleted file mode 100644 index 64a47772fc264f114125a6eda9adb6cf13448fac..0000000000000000000000000000000000000000 --- a/spaces/giswqs/solara-geospatial/pages/00_home.py +++ /dev/null @@ -1,26 +0,0 @@ -import solara - - -@solara.component -def Page(): - with solara.Column(align="center"): - markdown = """ - ## Solara for Geospatial Applications - - ### Introduction - - **A collection of [Solara](https://github.com/widgetti/solara) web apps for geospatial applications.** - - Just a proof-of-concept for now. Not all features are working yet. More features will be added in the future. Click on the menu above to see the other pages. - - - Web App: - - GitHub: - - Hugging Face: - - ### Demos - - ![](https://i.imgur.com/4uIEnAJ.gif) - - """ - - solara.Markdown(markdown) diff --git a/spaces/gradio/HuBERT/examples/wav2vec/unsupervised/scripts/merge_clusters.py b/spaces/gradio/HuBERT/examples/wav2vec/unsupervised/scripts/merge_clusters.py deleted file mode 100644 index 2780f9d971d847b3ad0b59e9a33780553ebce902..0000000000000000000000000000000000000000 --- a/spaces/gradio/HuBERT/examples/wav2vec/unsupervised/scripts/merge_clusters.py +++ /dev/null @@ -1,114 +0,0 @@ -#!/usr/bin/env python3 -u -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
- -import argparse -import os -import os.path as osp -import numpy as np -import tqdm -import torch -import random -from shutil import copyfile - -from npy_append_array import NpyAppendArray - - -def get_parser(): - parser = argparse.ArgumentParser( - description="transforms features via a given pca and stored them in target dir" - ) - # fmt: off - parser.add_argument('source', help='directory with features') - parser.add_argument('--split', help='which split to read', required=True) - parser.add_argument('--save-dir', help='where to save the output', required=True) - parser.add_argument('--cluster-dir', help='where the clusters are') - parser.add_argument('--pooling', type=str, default='mean', choices=['mean', 'sample'], help='how to pool') - # fmt: on - - return parser - - -def main(): - parser = get_parser() - args = parser.parse_args() - - source_path = osp.join(args.source, args.split) - cluster_path = osp.join(args.cluster_dir, args.split + ".src") - print(f"data path: {source_path}") - - features = np.load(source_path + ".npy", mmap_mode="r") - sizes = [] - offsets = [] - offset = 0 - with open(source_path + ".lengths", "r") as len_f: - for line in len_f: - length = int(line.rstrip()) - sizes.append(length) - offsets.append(offset) - offset += length - - clusters = [] - with open(cluster_path, "r") as cf: - for line in cf: - line = line.rstrip() - items = line.split() - items = list(map(int, items)) - clusters.append(items) - - os.makedirs(args.save_dir, exist_ok=True) - save_path = osp.join(args.save_dir, args.split) - - copyfile(source_path + ".tsv", save_path + ".tsv") - - if os.path.exists(source_path + ".phn"): - copyfile(source_path + ".phn", save_path + ".phn") - if os.path.exists(osp.join(args.source, "dict.phn.txt")): - copyfile( - osp.join(args.source, "dict.phn.txt"), - osp.join(args.save_dir, "dict.phn.txt"), - ) - if os.path.exists(source_path + ".wrd"): - copyfile(source_path + ".wrd", save_path + ".wrd") - - if osp.exists(save_path + ".npy"): - os.remove(save_path + ".npy") - npaa = NpyAppendArray(save_path + ".npy") - - def merge(feats, clust): - feats = torch.from_numpy(feats.copy()) - clust = torch.LongTensor(clust) - _, counts = clust.unique_consecutive(return_counts=True) - curr = 0 - - merged = [] - for c in counts: - c = c.item() - start = curr - end = curr + c - curr += c - if args.pooling == "mean": - new_x = feats[start:end].mean(dim=0) - elif args.pooling == "sample": - new_x = feats[start + int(random.random() * c)] - else: - raise NotImplementedError() - merged.append(new_x) - - return torch.stack(merged, dim=0).numpy() - - with open(save_path + ".lengths", "w") as l_f: - for size, offset, clust in tqdm.tqdm( - zip(sizes, offsets, clusters), total=len(sizes) - ): - end = size + offset - feats = features[offset:end] - feats = merge(feats, clust) - print(len(feats), file=l_f) - npaa.append(feats) - - -if __name__ == "__main__": - main() diff --git a/spaces/gradio/HuBERT/fairseq/config/__init__.py b/spaces/gradio/HuBERT/fairseq/config/__init__.py deleted file mode 100644 index 6264236915a7269a4d920ee8213004374dd86a9a..0000000000000000000000000000000000000000 --- a/spaces/gradio/HuBERT/fairseq/config/__init__.py +++ /dev/null @@ -1,4 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
diff --git a/spaces/gradio/HuBERT/fairseq/data/audio/__init__.py b/spaces/gradio/HuBERT/fairseq/data/audio/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/gradio/HuBERT/fairseq/data/audio/feature_transforms/__init__.py b/spaces/gradio/HuBERT/fairseq/data/audio/feature_transforms/__init__.py deleted file mode 100644 index 359fa069716cba0dd615ce0959368b20828c31f7..0000000000000000000000000000000000000000 --- a/spaces/gradio/HuBERT/fairseq/data/audio/feature_transforms/__init__.py +++ /dev/null @@ -1,82 +0,0 @@ -import importlib -import os -from abc import ABC, abstractmethod -from typing import Dict, Optional - - -class AudioFeatureTransform(ABC): - @classmethod - @abstractmethod - def from_config_dict(cls, config: Optional[Dict] = None): - pass - - -AUDIO_FEATURE_TRANSFORM_REGISTRY = {} -AUDIO_FEATURE_TRANSFORM_CLASS_NAMES = set() - - -def register_audio_feature_transform(name): - def register_audio_feature_transform_cls(cls): - if name in AUDIO_FEATURE_TRANSFORM_REGISTRY: - raise ValueError(f"Cannot register duplicate transform ({name})") - if not issubclass(cls, AudioFeatureTransform): - raise ValueError( - f"Transform ({name}: {cls.__name__}) must extend " - "AudioFeatureTransform" - ) - if cls.__name__ in AUDIO_FEATURE_TRANSFORM_CLASS_NAMES: - raise ValueError( - f"Cannot register audio feature transform with duplicate " - f"class name ({cls.__name__})" - ) - AUDIO_FEATURE_TRANSFORM_REGISTRY[name] = cls - AUDIO_FEATURE_TRANSFORM_CLASS_NAMES.add(cls.__name__) - return cls - - return register_audio_feature_transform_cls - - -def get_audio_feature_transform(name): - return AUDIO_FEATURE_TRANSFORM_REGISTRY[name] - - -transforms_dir = os.path.dirname(__file__) -for file in os.listdir(transforms_dir): - path = os.path.join(transforms_dir, file) - if ( - not file.startswith("_") - and not file.startswith(".") - and (file.endswith(".py") or os.path.isdir(path)) - ): - name = file[: file.find(".py")] if file.endswith(".py") else file - importlib.import_module("fairseq.data.audio.feature_transforms." + name) - - -class CompositeAudioFeatureTransform(AudioFeatureTransform): - @classmethod - def from_config_dict(cls, config=None): - _config = {} if config is None else config - _transforms = _config.get("transforms") - if _transforms is None: - return None - transforms = [ - get_audio_feature_transform(_t).from_config_dict(_config.get(_t)) - for _t in _transforms - ] - return CompositeAudioFeatureTransform(transforms) - - def __init__(self, transforms): - self.transforms = [t for t in transforms if t is not None] - - def __call__(self, x): - for t in self.transforms: - x = t(x) - return x - - def __repr__(self): - format_string = ( - [self.__class__.__name__ + "("] - + [f" {t.__repr__()}" for t in self.transforms] - + [")"] - ) - return "\n".join(format_string) diff --git a/spaces/gradio/chatbot_multimodal/run.py b/spaces/gradio/chatbot_multimodal/run.py deleted file mode 100644 index f9d3b5dbeba276a01039089ed9f05e3369588563..0000000000000000000000000000000000000000 --- a/spaces/gradio/chatbot_multimodal/run.py +++ /dev/null @@ -1,54 +0,0 @@ -import gradio as gr -import os -import time - -# Chatbot demo with multimodal input (text, markdown, LaTeX, code blocks, image, audio, & video). Plus shows support for streaming text. 
- - -def add_text(history, text): - history = history + [(text, None)] - return history, gr.Textbox(value="", interactive=False) - - -def add_file(history, file): - history = history + [((file.name,), None)] - return history - - -def bot(history): - response = "**That's cool!**" - history[-1][1] = "" - for character in response: - history[-1][1] += character - time.sleep(0.05) - yield history - - -with gr.Blocks() as demo: - chatbot = gr.Chatbot( - [], - elem_id="chatbot", - bubble_full_width=False, - avatar_images=(None, (os.path.join(os.path.dirname(__file__), "avatar.png"))), - ) - - with gr.Row(): - txt = gr.Textbox( - scale=4, - show_label=False, - placeholder="Enter text and press enter, or upload an image", - container=False, - ) - btn = gr.UploadButton("📁", file_types=["image", "video", "audio"]) - - txt_msg = txt.submit(add_text, [chatbot, txt], [chatbot, txt], queue=False).then( - bot, chatbot, chatbot, api_name="bot_response" - ) - txt_msg.then(lambda: gr.Textbox(interactive=True), None, [txt], queue=False) - file_msg = btn.upload(add_file, [chatbot, btn], [chatbot], queue=False).then( - bot, chatbot, chatbot - ) - -demo.queue() -if __name__ == "__main__": - demo.launch(allowed_paths=["avatar.png"]) diff --git a/spaces/gradio/pictionary/run.py b/spaces/gradio/pictionary/run.py deleted file mode 100644 index ce9be4d1f3fa9f72a6cf516ab24484c8960d5e74..0000000000000000000000000000000000000000 --- a/spaces/gradio/pictionary/run.py +++ /dev/null @@ -1,56 +0,0 @@ -from pathlib import Path - -import numpy as np -import torch -import gradio as gr -from torch import nn -import gdown - -url = 'https://drive.google.com/uc?id=1dsk2JNZLRDjC-0J4wIQX_FcVurPaXaAZ' -output = 'pytorch_model.bin' -gdown.download(url, output, quiet=False) - -LABELS = Path('class_names.txt').read_text().splitlines() - -model = nn.Sequential( - nn.Conv2d(1, 32, 3, padding='same'), - nn.ReLU(), - nn.MaxPool2d(2), - nn.Conv2d(32, 64, 3, padding='same'), - nn.ReLU(), - nn.MaxPool2d(2), - nn.Conv2d(64, 128, 3, padding='same'), - nn.ReLU(), - nn.MaxPool2d(2), - nn.Flatten(), - nn.Linear(1152, 256), - nn.ReLU(), - nn.Linear(256, len(LABELS)), -) -state_dict = torch.load('pytorch_model.bin', map_location='cpu') -model.load_state_dict(state_dict, strict=False) -model.eval() - -def predict(im): - if im is None: - return None - im = np.asarray(im.resize((28, 28))) - - x = torch.tensor(im, dtype=torch.float32).unsqueeze(0).unsqueeze(0) / 255. 
- - with torch.no_grad(): - out = model(x) - - probabilities = torch.nn.functional.softmax(out[0], dim=0) - - values, indices = torch.topk(probabilities, 5) - - return {LABELS[i]: v.item() for i, v in zip(indices, values)} - - -interface = gr.Interface(predict, - inputs=gr.Sketchpad(label="Draw Here", brush_radius=5, type="pil", shape=(120, 120)), - outputs=gr.Label(label="Guess"), - live=True) - -interface.queue().launch() diff --git a/spaces/gradio/reversible_flow/run.py b/spaces/gradio/reversible_flow/run.py deleted file mode 100644 index ff5db0343f200265fbee262f7451ca6d2e5920fc..0000000000000000000000000000000000000000 --- a/spaces/gradio/reversible_flow/run.py +++ /dev/null @@ -1,15 +0,0 @@ -import gradio as gr - -def increase(num): - return num + 1 - -with gr.Blocks() as demo: - a = gr.Number(label="a") - b = gr.Number(label="b") - atob = gr.Button("a > b") - btoa = gr.Button("b > a") - atob.click(increase, a, b) - btoa.click(increase, b, a) - -if __name__ == "__main__": - demo.launch() \ No newline at end of file diff --git a/spaces/gwang-kim/DATID-3D/datid3d_test.py b/spaces/gwang-kim/DATID-3D/datid3d_test.py deleted file mode 100644 index 058c45c3bd9f8290794ed60a7a8be33c1e16dc25..0000000000000000000000000000000000000000 --- a/spaces/gwang-kim/DATID-3D/datid3d_test.py +++ /dev/null @@ -1,251 +0,0 @@ -import os -from os.path import join as opj -import argparse -from glob import glob - -### Parameters -parser = argparse.ArgumentParser() - -# For all -parser.add_argument('--mode', type=str, required=True, choices=['image', 'video', 'manip', 'manip_from_inv'], - help="image: Sample images and shapes, " - "video: Sample pose-controlled videos, " - "manip: Manipulated 3D reconstruction from images, " - "manip_from_inv: Manipulated 3D reconstruction from inverted latent") -parser.add_argument('--network', type=str, nargs='+', required=True) -parser.add_argument('--generator_type', default='ffhq', type=str, choices=['ffhq', 'cat']) # ffhq, cat -parser.add_argument('--outdir', type=str, default='test_runs') -parser.add_argument('--trunc', type=float, default=0.7) -parser.add_argument('--seeds', type=str, default='100-200') -parser.add_argument('--down_src_eg3d_from_nvidia', default=True) -parser.add_argument('--num_inv_steps', default=300, type=int) -# Manipulated 3D reconstruction -parser.add_argument('--indir', type=str, default='input_imgs') -parser.add_argument('--name_tag', type=str, default='') -# Sample images -parser.add_argument('--shape', default=True) -parser.add_argument('--shape_format', type=str, choices=['.mrc', '.ply'], default='.mrc') -parser.add_argument('--shape_only_first', type=bool, default=False) -# Sample pose-controlled videos -parser.add_argument('--grid', default='1x1') -parser.add_argument('--w_frames', type=int, default=120) - - - -args = parser.parse_args() -os.makedirs(args.outdir, exist_ok=True) -print() - - -network_command = '' -for network_path in args.network: - network_command += f"--network {opj('..', network_path)} " - - - -### Sample images -if args.mode == 'image': - image_path = opj(args.outdir, f'image{args.name_tag}') - os.makedirs(image_path, exist_ok=True) - - os.chdir('eg3d') - command = f"""python gen_samples.py \ - {network_command} \ - --seeds={args.seeds} \ - --generator_type={args.generator_type} \ - --outdir={opj('..', image_path)} \ - --shapes={args.shape} \ - --shape_format={args.shape_format} \ - --shape_only_first={args.shape_only_first} \ - --trunc={args.trunc} \ - """ - print(f"{command} \n") - os.system(command) - os.chdir('..') - 
- - - - -### Sample pose-controlled videos -if args.mode == 'video': - video_path = opj(args.outdir, f'video{args.name_tag}') - os.makedirs(video_path, exist_ok=True) - - os.chdir('eg3d') - command = f"""python gen_videos.py \ - {network_command} \ - --seeds={args.seeds} \ - --generator_type={args.generator_type} \ - --outdir={opj('..', video_path)} \ - --shapes=False \ - --trunc={args.trunc} \ - --grid={args.grid} \ - --w-frames={args.w_frames} - """ - print(f"{command} \n") - os.system(command) - os.chdir('..') - - -### Manipulated 3D reconstruction from images -if args.mode == 'manip': - input_path = opj(args.indir) - align_path = opj(args.outdir, f'manip_3D_recon{args.name_tag}', '1_align_result') - pose_path = opj(args.outdir, f'manip_3D_recon{args.name_tag}', '2_pose_result') - inversion_path = opj(args.outdir, f'manip_3D_recon{args.name_tag}', '3_inversion_result') - manip_path = opj(args.outdir, f'manip_3D_recon{args.name_tag}', '4_manip_result') - - os.makedirs(opj(args.outdir, f'manip_3D_recon{args.name_tag}'), exist_ok=True) - os.makedirs(align_path, exist_ok=True) - os.makedirs(pose_path, exist_ok=True) - os.makedirs(inversion_path, exist_ok=True) - os.makedirs(manip_path, exist_ok=True) - - os.chdir('eg3d') - if args.generator_type == 'cat': - generator_id = 'afhqcats512-128.pkl' - else: - generator_id = 'ffhqrebalanced512-128.pkl' - generator_path = f'pretrained/{generator_id}' - if not os.path.exists(generator_path): - os.makedirs(f'pretrained', exist_ok=True) - print("Pretrained EG3D model cannot be found. Downloading the pretrained EG3D models.") - if args.down_src_eg3d_from_nvidia == True: - os.system(f'wget -c https://api.ngc.nvidia.com/v2/models/nvidia/research/eg3d/versions/1/files/{generator_id} -O {generator_path}') - else: - os.system(f'wget https://huggingface.co/gwang-kim/datid3d-finetuned-eg3d-models/resolve/main/finetuned_models/nvidia_{generator_id} -O {generator_path}') - os.chdir('..') - - ## Align images and Pose extraction - os.chdir('pose_estimation') - if not os.path.exists('checkpoints/pretrained/epoch_20.pth') or not os.path.exists('BFM'): - print(f"BFM and pretrained DeepFaceRecon3D model cannot be found. 
Downloading the pretrained pose estimation model and BFM files, put epoch_20.pth in ./pose_estimation/checkpoints/pretrained/ and put unzip BFM.zip in ./pose_estimation/.") - - try: - from gdown import download as drive_download - drive_download(f'https://drive.google.com/uc?id=1mdqkEUepHZROeOj99pXogAPJPqzBDN2G', './BFM.zip', quiet=False) - os.system('unzip BFM.zip') - drive_download(f'https://drive.google.com/uc?id=1zawY7jYDJlUGnSAXn1pgIHgIvJpiSmj5', './checkpoints/pretrained/epoch_20.pth', quiet=False) - except: - os.system("pip install -U --no-cache-dir gdown --pre") - from gdown import download as drive_download - drive_download(f'https://drive.google.com/uc?id=1mdqkEUepHZROeOj99pXogAPJPqzBDN2G', './BFM.zip', quiet=False) - os.system('unzip BFM.zip') - drive_download(f'https://drive.google.com/uc?id=1zawY7jYDJlUGnSAXn1pgIHgIvJpiSmj5', './checkpoints/pretrained/epoch_20.pth', quiet=False) - - print() - command = f"""python extract_pose.py 0 \ - {opj('..', input_path)} {opj('..', align_path)} {opj('..', pose_path)} - """ - print(f"{command} \n") - os.system(command) - os.chdir('..') - - ## Invert images to the latent space of 3D GANs - os.chdir('eg3d') - command = f"""python run_inversion.py \ - --outdir={opj('..', inversion_path)} \ - --latent_space_type=w_plus \ - --network={generator_path} \ - --image_path={opj('..', pose_path)} \ - --num_steps={args.num_inv_steps} - """ - print(f"{command} \n") - os.system(command) - os.chdir('..') - - ## Generate videos, images and mesh - os.chdir('eg3d') - w_pths = sorted(glob(opj('..', inversion_path, '*.pt'))) - if len(w_pths) == 0: - print("No inverted latent") - exit() - for w_pth in w_pths: - print(f"{w_pth} \n") - - command = f"""python gen_samples.py \ - {network_command} \ - --w_pth={w_pth} \ - --seeds='100-200' \ - --generator_type={args.generator_type} \ - --outdir={opj('..', manip_path)} \ - --shapes={args.shape} \ - --shape_format={args.shape_format} \ - --shape_only_first={args.shape_only_first} \ - --trunc={args.trunc} \ - """ - print(f"{command} \n") - os.system(command) - - command = f"""python gen_videos.py \ - {network_command} \ - --w_pth={w_pth} \ - --seeds='100-200' \ - --generator_type={args.generator_type} \ - --outdir={opj('..', manip_path)} \ - --shapes=False \ - --trunc={args.trunc} \ - --grid=1x1 \ - --w-frames={args.w_frames} - """ - print(f"{command} \n") - os.system(command) - os.chdir('..') - - - - - -### Manipulated 3D reconstruction from inverted latent -if args.mode == 'manip_from_inv': - input_path = opj(args.indir) - align_path = opj(args.outdir, f'manip_3D_recon{args.name_tag}', '1_align_result') - pose_path = opj(args.outdir, f'manip_3D_recon{args.name_tag}', '2_pose_result') - inversion_path = opj(args.outdir, f'manip_3D_recon{args.name_tag}', '3_inversion_result') - manip_path = opj(args.outdir, f'manip_3D_recon{args.name_tag}', '4_manip_result') - - os.makedirs(opj(args.outdir, f'manip_3D_recon{args.name_tag}'), exist_ok=True) - os.makedirs(align_path, exist_ok=True) - os.makedirs(pose_path, exist_ok=True) - os.makedirs(inversion_path, exist_ok=True) - os.makedirs(manip_path, exist_ok=True) - - ## Generate videos, images and mesh - os.chdir('eg3d') - w_pths = sorted(glob(opj('..', inversion_path, '*.pt'))) - if len(w_pths) == 0: - print("No inverted latent") - exit() - for w_pth in w_pths: - print(f"{w_pth} \n") - - command = f"""python gen_samples.py \ - {network_command} \ - --w_pth={w_pth} \ - --seeds='100-200' \ - --generator_type={args.generator_type} \ - --outdir={opj('..', manip_path)} \ - 
--shapes={args.shape} \ - --shape_format={args.shape_format} \ - --shape_only_first={args.shape_only_first} \ - --trunc={args.trunc} \ - """ - print(f"{command} \n") - os.system(command) - - command = f"""python gen_videos.py \ - {network_command} \ - --w_pth={w_pth} \ - --seeds='100-200' \ - --generator_type={args.generator_type} \ - --outdir={opj('..', manip_path)} \ - --shapes=False \ - --trunc={args.trunc} \ - --grid=1x1 - """ - print(f"{command} \n") - os.system(command) - os.chdir('..') - - diff --git a/spaces/gwang-kim/DATID-3D/eg3d/training/superresolution.py b/spaces/gwang-kim/DATID-3D/eg3d/training/superresolution.py deleted file mode 100644 index cfa1425b1c692dcb6127489d5cb03d82015d596f..0000000000000000000000000000000000000000 --- a/spaces/gwang-kim/DATID-3D/eg3d/training/superresolution.py +++ /dev/null @@ -1,322 +0,0 @@ -# SPDX-FileCopyrightText: Copyright (c) 2021-2022 NVIDIA CORPORATION & AFFILIATES. All rights reserved. -# SPDX-License-Identifier: LicenseRef-NvidiaProprietary -# -# NVIDIA CORPORATION, its affiliates and licensors retain all intellectual -# property and proprietary rights in and to this material, related -# documentation and any modifications thereto. Any use, reproduction, -# disclosure or distribution of this material and related documentation -# without an express license agreement from NVIDIA CORPORATION or -# its affiliates is strictly prohibited. - -"""Superresolution network architectures from the paper -"Efficient Geometry-aware 3D Generative Adversarial Networks".""" - -import torch -from training.networks_stylegan2 import Conv2dLayer, SynthesisLayer, ToRGBLayer -from torch_utils.ops import upfirdn2d -from torch_utils import persistence -from torch_utils import misc - -from training.networks_stylegan2 import SynthesisBlock -import numpy as np -from training.networks_stylegan3 import SynthesisLayer as AFSynthesisLayer - - -#---------------------------------------------------------------------------- - -# for 512x512 generation -@persistence.persistent_class -class SuperresolutionHybrid8X(torch.nn.Module): - def __init__(self, channels, img_resolution, sr_num_fp16_res, sr_antialias, - num_fp16_res=4, conv_clamp=None, channel_base=None, channel_max=None,# IGNORE - **block_kwargs): - super().__init__() - assert img_resolution == 512 - - use_fp16 = sr_num_fp16_res > 0 - self.input_resolution = 128 - self.sr_antialias = sr_antialias - self.block0 = SynthesisBlock(channels, 128, w_dim=512, resolution=256, - img_channels=3, is_last=False, use_fp16=use_fp16, conv_clamp=(256 if use_fp16 else None), **block_kwargs) - self.block1 = SynthesisBlock(128, 64, w_dim=512, resolution=512, - img_channels=3, is_last=True, use_fp16=use_fp16, conv_clamp=(256 if use_fp16 else None), **block_kwargs) - self.register_buffer('resample_filter', upfirdn2d.setup_filter([1,3,3,1])) - - def forward(self, rgb, x, ws, **block_kwargs): - ws = ws[:, -1:, :].repeat(1, 3, 1) - - if x.shape[-1] != self.input_resolution: - x = torch.nn.functional.interpolate(x, size=(self.input_resolution, self.input_resolution), - mode='bilinear', align_corners=False, antialias=self.sr_antialias) - rgb = torch.nn.functional.interpolate(rgb, size=(self.input_resolution, self.input_resolution), - mode='bilinear', align_corners=False, antialias=self.sr_antialias) - - x, rgb = self.block0(x, rgb, ws, **block_kwargs) - x, rgb = self.block1(x, rgb, ws, **block_kwargs) - return rgb - -#---------------------------------------------------------------------------- - -# for 256x256 generation 
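# --- Illustrative sketch (editor addition, not from the deleted file above) ---
# The SuperresolutionHybrid* modules above upsample a low-resolution neural-rendering
# output to the final image size. Two details are easy to miss in forward(): the last
# w vector is broadcast to all three SR layers (two convs + ToRGB), and the feature /
# RGB maps are resized with antialiased bilinear interpolation before block0.
# The shapes below are assumptions chosen for the sketch; requires torch >= 1.11 for antialias.
import torch
import torch.nn.functional as F

batch, feat_ch, render_res, target_res = 2, 32, 64, 128
rgb = torch.randn(batch, 3, render_res, render_res)       # low-res RGB from the renderer
x = torch.randn(batch, feat_ch, render_res, render_res)   # low-res feature map
ws = torch.randn(batch, 14, 512)                           # per-layer latent codes

ws_sr = ws[:, -1:, :].repeat(1, 3, 1)   # reuse the final w for the 3 SR layers
x_up = F.interpolate(x, size=(target_res, target_res), mode='bilinear',
                     align_corners=False, antialias=True)
rgb_up = F.interpolate(rgb, size=(target_res, target_res), mode='bilinear',
                       align_corners=False, antialias=True)
print(ws_sr.shape, x_up.shape, rgb_up.shape)   # (2, 3, 512), (2, 32, 128, 128), (2, 3, 128, 128)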
-@persistence.persistent_class -class SuperresolutionHybrid4X(torch.nn.Module): - def __init__(self, channels, img_resolution, sr_num_fp16_res, sr_antialias, - num_fp16_res=4, conv_clamp=None, channel_base=None, channel_max=None,# IGNORE - **block_kwargs): - super().__init__() - assert img_resolution == 256 - use_fp16 = sr_num_fp16_res > 0 - self.sr_antialias = sr_antialias - self.input_resolution = 128 - self.block0 = SynthesisBlockNoUp(channels, 128, w_dim=512, resolution=128, - img_channels=3, is_last=False, use_fp16=use_fp16, conv_clamp=(256 if use_fp16 else None), **block_kwargs) - self.block1 = SynthesisBlock(128, 64, w_dim=512, resolution=256, - img_channels=3, is_last=True, use_fp16=use_fp16, conv_clamp=(256 if use_fp16 else None), **block_kwargs) - self.register_buffer('resample_filter', upfirdn2d.setup_filter([1,3,3,1])) - - def forward(self, rgb, x, ws, **block_kwargs): - ws = ws[:, -1:, :].repeat(1, 3, 1) - - if x.shape[-1] < self.input_resolution: - x = torch.nn.functional.interpolate(x, size=(self.input_resolution, self.input_resolution), - mode='bilinear', align_corners=False, antialias=self.sr_antialias) - rgb = torch.nn.functional.interpolate(rgb, size=(self.input_resolution, self.input_resolution), - mode='bilinear', align_corners=False, antialias=self.sr_antialias) - - x, rgb = self.block0(x, rgb, ws, **block_kwargs) - x, rgb = self.block1(x, rgb, ws, **block_kwargs) - return rgb - -#---------------------------------------------------------------------------- - -# for 128 x 128 generation -@persistence.persistent_class -class SuperresolutionHybrid2X(torch.nn.Module): - def __init__(self, channels, img_resolution, sr_num_fp16_res, sr_antialias, - num_fp16_res=4, conv_clamp=None, channel_base=None, channel_max=None,# IGNORE - **block_kwargs): - super().__init__() - assert img_resolution == 128 - - use_fp16 = sr_num_fp16_res > 0 - self.input_resolution = 64 - self.sr_antialias = sr_antialias - self.block0 = SynthesisBlockNoUp(channels, 128, w_dim=512, resolution=64, - img_channels=3, is_last=False, use_fp16=use_fp16, conv_clamp=(256 if use_fp16 else None), **block_kwargs) - self.block1 = SynthesisBlock(128, 64, w_dim=512, resolution=128, - img_channels=3, is_last=True, use_fp16=use_fp16, conv_clamp=(256 if use_fp16 else None), **block_kwargs) - self.register_buffer('resample_filter', upfirdn2d.setup_filter([1,3,3,1])) - - def forward(self, rgb, x, ws, **block_kwargs): - ws = ws[:, -1:, :].repeat(1, 3, 1) - - if x.shape[-1] != self.input_resolution: - x = torch.nn.functional.interpolate(x, size=(self.input_resolution, self.input_resolution), - mode='bilinear', align_corners=False, antialias=self.sr_antialias) - rgb = torch.nn.functional.interpolate(rgb, size=(self.input_resolution, self.input_resolution), - mode='bilinear', align_corners=False, antialias=self.sr_antialias) - - x, rgb = self.block0(x, rgb, ws, **block_kwargs) - x, rgb = self.block1(x, rgb, ws, **block_kwargs) - return rgb - -#---------------------------------------------------------------------------- - -# TODO: Delete (here for backwards compatibility with old 256x256 models) -@persistence.persistent_class -class SuperresolutionHybridDeepfp32(torch.nn.Module): - def __init__(self, channels, img_resolution, sr_num_fp16_res, - num_fp16_res=4, conv_clamp=None, channel_base=None, channel_max=None,# IGNORE - **block_kwargs): - super().__init__() - assert img_resolution == 256 - use_fp16 = sr_num_fp16_res > 0 - - self.input_resolution = 128 - self.block0 = SynthesisBlockNoUp(channels, 128, w_dim=512, 
resolution=128, - img_channels=3, is_last=False, use_fp16=use_fp16, conv_clamp=(256 if use_fp16 else None), **block_kwargs) - self.block1 = SynthesisBlock(128, 64, w_dim=512, resolution=256, - img_channels=3, is_last=True, use_fp16=use_fp16, conv_clamp=(256 if use_fp16 else None), **block_kwargs) - self.register_buffer('resample_filter', upfirdn2d.setup_filter([1,3,3,1])) - - def forward(self, rgb, x, ws, **block_kwargs): - ws = ws[:, -1:, :].repeat(1, 3, 1) - - if x.shape[-1] < self.input_resolution: - x = torch.nn.functional.interpolate(x, size=(self.input_resolution, self.input_resolution), - mode='bilinear', align_corners=False) - rgb = torch.nn.functional.interpolate(rgb, size=(self.input_resolution, self.input_resolution), - mode='bilinear', align_corners=False) - - x, rgb = self.block0(x, rgb, ws, **block_kwargs) - x, rgb = self.block1(x, rgb, ws, **block_kwargs) - return rgb - -#---------------------------------------------------------------------------- - -@persistence.persistent_class -class SynthesisBlockNoUp(torch.nn.Module): - def __init__(self, - in_channels, # Number of input channels, 0 = first block. - out_channels, # Number of output channels. - w_dim, # Intermediate latent (W) dimensionality. - resolution, # Resolution of this block. - img_channels, # Number of output color channels. - is_last, # Is this the last block? - architecture = 'skip', # Architecture: 'orig', 'skip', 'resnet'. - resample_filter = [1,3,3,1], # Low-pass filter to apply when resampling activations. - conv_clamp = 256, # Clamp the output of convolution layers to +-X, None = disable clamping. - use_fp16 = False, # Use FP16 for this block? - fp16_channels_last = False, # Use channels-last memory format with FP16? - fused_modconv_default = True, # Default value of fused_modconv. 'inference_only' = True for inference, False for training. - **layer_kwargs, # Arguments for SynthesisLayer. 
- ): - assert architecture in ['orig', 'skip', 'resnet'] - super().__init__() - self.in_channels = in_channels - self.w_dim = w_dim - self.resolution = resolution - self.img_channels = img_channels - self.is_last = is_last - self.architecture = architecture - self.use_fp16 = use_fp16 - self.channels_last = (use_fp16 and fp16_channels_last) - self.fused_modconv_default = fused_modconv_default - self.register_buffer('resample_filter', upfirdn2d.setup_filter(resample_filter)) - self.num_conv = 0 - self.num_torgb = 0 - - if in_channels == 0: - self.const = torch.nn.Parameter(torch.randn([out_channels, resolution, resolution])) - - if in_channels != 0: - self.conv0 = SynthesisLayer(in_channels, out_channels, w_dim=w_dim, resolution=resolution, - conv_clamp=conv_clamp, channels_last=self.channels_last, **layer_kwargs) - self.num_conv += 1 - - self.conv1 = SynthesisLayer(out_channels, out_channels, w_dim=w_dim, resolution=resolution, - conv_clamp=conv_clamp, channels_last=self.channels_last, **layer_kwargs) - self.num_conv += 1 - - if is_last or architecture == 'skip': - self.torgb = ToRGBLayer(out_channels, img_channels, w_dim=w_dim, - conv_clamp=conv_clamp, channels_last=self.channels_last) - self.num_torgb += 1 - - if in_channels != 0 and architecture == 'resnet': - self.skip = Conv2dLayer(in_channels, out_channels, kernel_size=1, bias=False, up=2, - resample_filter=resample_filter, channels_last=self.channels_last) - - def forward(self, x, img, ws, force_fp32=False, fused_modconv=None, update_emas=False, **layer_kwargs): - _ = update_emas # unused - misc.assert_shape(ws, [None, self.num_conv + self.num_torgb, self.w_dim]) - w_iter = iter(ws.unbind(dim=1)) - if ws.device.type != 'cuda': - force_fp32 = True - dtype = torch.float16 if self.use_fp16 and not force_fp32 else torch.float32 - memory_format = torch.channels_last if self.channels_last and not force_fp32 else torch.contiguous_format - if fused_modconv is None: - fused_modconv = self.fused_modconv_default - if fused_modconv == 'inference_only': - fused_modconv = (not self.training) - - # Input. - if self.in_channels == 0: - x = self.const.to(dtype=dtype, memory_format=memory_format) - x = x.unsqueeze(0).repeat([ws.shape[0], 1, 1, 1]) - else: - misc.assert_shape(x, [None, self.in_channels, self.resolution, self.resolution]) - x = x.to(dtype=dtype, memory_format=memory_format) - - # Main layers. - if self.in_channels == 0: - x = self.conv1(x, next(w_iter), fused_modconv=fused_modconv, **layer_kwargs) - elif self.architecture == 'resnet': - y = self.skip(x, gain=np.sqrt(0.5)) - x = self.conv0(x, next(w_iter), fused_modconv=fused_modconv, **layer_kwargs) - x = self.conv1(x, next(w_iter), fused_modconv=fused_modconv, gain=np.sqrt(0.5), **layer_kwargs) - x = y.add_(x) - else: - x = self.conv0(x, next(w_iter), fused_modconv=fused_modconv, **layer_kwargs) - x = self.conv1(x, next(w_iter), fused_modconv=fused_modconv, **layer_kwargs) - - # ToRGB. 
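# --- Illustrative sketch (editor addition, not from the deleted file above) ---
# How the per-layer latent codes are consumed in SynthesisBlockNoUp.forward() above:
# ws carries one 512-d vector per conv/ToRGB layer, ws.unbind(dim=1) yields them in
# order, and fused_modconv='inference_only' resolves to (not self.training).
# The concrete sizes here are assumptions for the sketch.
import torch

ws = torch.randn(4, 3, 512)          # batch of 4; num_conv + num_torgb == 3
w_iter = iter(ws.unbind(dim=1))      # three [4, 512] tensors, one per layer
w_conv0, w_conv1, w_torgb = next(w_iter), next(w_iter), next(w_iter)

fused_modconv, training = 'inference_only', False
if fused_modconv == 'inference_only':
    fused_modconv = not training     # fuse the modulated conv only at inference time
print(w_conv0.shape, w_torgb.shape, fused_modconv)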
- # if img is not None: - # misc.assert_shape(img, [None, self.img_channels, self.resolution // 2, self.resolution // 2]) - # img = upfirdn2d.upsample2d(img, self.resample_filter) - if self.is_last or self.architecture == 'skip': - y = self.torgb(x, next(w_iter), fused_modconv=fused_modconv) - y = y.to(dtype=torch.float32, memory_format=torch.contiguous_format) - img = img.add_(y) if img is not None else y - - assert x.dtype == dtype - assert img is None or img.dtype == torch.float32 - return x, img - - def extra_repr(self): - return f'resolution={self.resolution:d}, architecture={self.architecture:s}' - - -#---------------------------------------------------------------------------- - -# for 512x512 generation -@persistence.persistent_class -class SuperresolutionHybrid8XDC(torch.nn.Module): - def __init__(self, channels, img_resolution, sr_num_fp16_res, sr_antialias, - num_fp16_res=4, conv_clamp=None, channel_base=None, channel_max=None,# IGNORE - **block_kwargs): - super().__init__() - assert img_resolution == 512 - - use_fp16 = sr_num_fp16_res > 0 - self.input_resolution = 128 - self.sr_antialias = sr_antialias - self.block0 = SynthesisBlock(channels, 256, w_dim=512, resolution=256, - img_channels=3, is_last=False, use_fp16=use_fp16, conv_clamp=(256 if use_fp16 else None), **block_kwargs) - self.block1 = SynthesisBlock(256, 128, w_dim=512, resolution=512, - img_channels=3, is_last=True, use_fp16=use_fp16, conv_clamp=(256 if use_fp16 else None), **block_kwargs) - - def forward(self, rgb, x, ws, **block_kwargs): - ws = ws[:, -1:, :].repeat(1, 3, 1) - - if x.shape[-1] != self.input_resolution: - x = torch.nn.functional.interpolate(x, size=(self.input_resolution, self.input_resolution), - mode='bilinear', align_corners=False, antialias=self.sr_antialias) - rgb = torch.nn.functional.interpolate(rgb, size=(self.input_resolution, self.input_resolution), - mode='bilinear', align_corners=False, antialias=self.sr_antialias) - - x, rgb = self.block0(x, rgb, ws, **block_kwargs) - x, rgb = self.block1(x, rgb, ws, **block_kwargs) - return rgb - - - -class SuperresolutionHybrid8XDC_afhq(torch.nn.Module): - def __init__(self, channels, img_resolution, sr_num_fp16_res, sr_antialias, - num_fp16_res=4, conv_clamp=None, channel_base=None, channel_max=None,# IGNORE - **block_kwargs): - super().__init__() - assert img_resolution == 512 - - use_fp16 = sr_num_fp16_res > 0 - self.input_resolution = 128 - self.sr_antialias = sr_antialias - self.block0 = SynthesisBlock(channels, 128, w_dim=512, resolution=256, - img_channels=3, is_last=False, use_fp16=use_fp16, conv_clamp=(256 if use_fp16 else None), **block_kwargs) - self.block1 = SynthesisBlock(128, 64, w_dim=512, resolution=512, - img_channels=3, is_last=True, use_fp16=use_fp16, conv_clamp=(256 if use_fp16 else None), **block_kwargs) - - def forward(self, rgb, x, ws, **block_kwargs): - ws = ws[:, -1:, :].repeat(1, 3, 1) - - if x.shape[-1] != self.input_resolution: - x = torch.nn.functional.interpolate(x, size=(self.input_resolution, self.input_resolution), - mode='bilinear', align_corners=False, antialias=self.sr_antialias) - rgb = torch.nn.functional.interpolate(rgb, size=(self.input_resolution, self.input_resolution), - mode='bilinear', align_corners=False, antialias=self.sr_antialias) - - x, rgb = self.block0(x, rgb, ws, **block_kwargs) - x, rgb = self.block1(x, rgb, ws, **block_kwargs) - return rgb - -#---------------------------------------------------------------------------- \ No newline at end of file diff --git 
a/spaces/gwang-kim/DATID-3D/pose_estimation/test.py b/spaces/gwang-kim/DATID-3D/pose_estimation/test.py deleted file mode 100644 index a6f87a907dd10c68de36427cec540ca8eaff6a08..0000000000000000000000000000000000000000 --- a/spaces/gwang-kim/DATID-3D/pose_estimation/test.py +++ /dev/null @@ -1,108 +0,0 @@ -"""This script is the test script for Deep3DFaceRecon_pytorch -""" - -import os -from options.test_options import TestOptions -from models import create_model -from util.visualizer import MyVisualizer -from util.preprocess import align_img -from PIL import Image -import numpy as np -from util.load_mats import load_lm3d -import torch -import json - -def get_data_path(root='examples'): - im_path = [os.path.join(root, i) for i in sorted(os.listdir(root)) if i.endswith('png') or i.endswith('jpg')] - lm_path = [i.replace('png', 'txt').replace('jpg', 'txt') for i in im_path] - lm_path = [os.path.join(i.replace(i.split(os.path.sep)[-1],''),'detections',i.split(os.path.sep)[-1]) for i in lm_path] - return im_path, lm_path - -def read_data(im_path, lm_path, lm3d_std, to_tensor=True, rescale_factor=466.285): - im = Image.open(im_path).convert('RGB') - _, H = im.size - lm = np.loadtxt(lm_path).astype(np.float32) - lm = lm.reshape([-1, 2]) - lm[:, -1] = H - 1 - lm[:, -1] - _, im_pil, lm, _, im_high = align_img(im, lm, lm3d_std, rescale_factor=rescale_factor) - if to_tensor: - im = torch.tensor(np.array(im_pil)/255., dtype=torch.float32).permute(2, 0, 1).unsqueeze(0) - lm = torch.tensor(lm).unsqueeze(0) - else: - im = im_pil - return im, lm, im_pil, im_high - -def main(rank, opt, name='examples'): - device = torch.device(rank) - torch.cuda.set_device(device) - model = create_model(opt) - model.setup(opt) - model.device = device - model.parallelize() - model.eval() - visualizer = MyVisualizer(opt) - print("ROOT") - print(name) - im_path, lm_path = get_data_path(name) - lm3d_std = load_lm3d(opt.bfm_folder) - - cropping_params = {} - - out_dir_crop1024 = os.path.join(name, "crop_1024") - if not os.path.exists(out_dir_crop1024): - os.makedirs(out_dir_crop1024) - out_dir = os.path.join(name, 'epoch_%s_%06d'%(opt.epoch, 0)) - if not os.path.exists(out_dir): - os.makedirs(out_dir) - for i in range(len(im_path)): - print(i, im_path[i]) - img_name = im_path[i].split(os.path.sep)[-1].replace('.png','').replace('.jpg','') - if not os.path.isfile(lm_path[i]): - continue - - # 2 passes for cropping image for NeRF and for pose extraction - for r in range(2): - if r==0: - rescale_factor = 300 # optimized for NeRF training - center_crop_size = 700 - output_size = 512 - - # left = int(im_high.size[0]/2 - center_crop_size/2) - # upper = int(im_high.size[1]/2 - center_crop_size/2) - # right = left + center_crop_size - # lower = upper + center_crop_size - # im_cropped = im_high.crop((left, upper, right,lower)) - # im_cropped = im_cropped.resize((output_size, output_size), resample=Image.LANCZOS) - cropping_params[os.path.basename(im_path[i])] = { - 'lm': np.loadtxt(lm_path[i]).astype(np.float32).tolist(), - 'lm3d_std': lm3d_std.tolist(), - 'rescale_factor': rescale_factor, - 'center_crop_size': center_crop_size, - 'output_size': output_size} - - # im_high.save(os.path.join(out_dir_crop1024, img_name+'.png'), compress_level=0) - # im_cropped.save(os.path.join(out_dir_crop1024, img_name+'.png'), compress_level=0) - elif not opt.skip_model: - rescale_factor = 466.285 - im_tensor, lm_tensor, _, im_high = read_data(im_path[i], lm_path[i], lm3d_std, rescale_factor=rescale_factor) - - data = { - 'imgs': im_tensor, - 'lms': 
lm_tensor - } - model.set_input(data) # unpack data from data loader - model.test() # run inference - # visuals = model.get_current_visuals() # get image results - # visualizer.display_current_results(visuals, 0, opt.epoch, dataset=name.split(os.path.sep)[-1], - # save_results=True, count=i, name=img_name, add_image=False) - # import pdb; pdb.set_trace() - model.save_mesh(os.path.join(out_dir,img_name+'.obj')) - model.save_coeff(os.path.join(out_dir,img_name+'.mat')) # save predicted coefficients - - with open(os.path.join(name, 'cropping_params.json'), 'w') as outfile: - json.dump(cropping_params, outfile, indent=4) - -if __name__ == '__main__': - opt = TestOptions().parse() # get test options - main(0, opt,opt.img_folder) - diff --git a/spaces/hamacojr/CAT-Seg/cat_seg/third_party/clip.py b/spaces/hamacojr/CAT-Seg/cat_seg/third_party/clip.py deleted file mode 100644 index 916eb2745a411064f519592414150c408beb7204..0000000000000000000000000000000000000000 --- a/spaces/hamacojr/CAT-Seg/cat_seg/third_party/clip.py +++ /dev/null @@ -1,211 +0,0 @@ -import hashlib -import os -import urllib -import warnings -from typing import Union, List - -import torch -from PIL import Image -from torchvision.transforms import Compose, Resize, CenterCrop, ToTensor, Normalize -from tqdm import tqdm - -#from .model import build_model -from .model_vpt import build_model -from .simple_tokenizer import SimpleTokenizer as _Tokenizer - -__all__ = ["available_models", "load", "tokenize"] -_tokenizer = _Tokenizer() - -_MODELS = { - "RN50": "https://openaipublic.azureedge.net/clip/models/afeb0e10f9e5a86da6080e35cf09123aca3b358a0c3e3b6c78a7b63bc04b6762/RN50.pt", - "RN101": "https://openaipublic.azureedge.net/clip/models/8fa8567bab74a42d41c5915025a8e4538c3bdbe8804a470a72f30b0d94fab599/RN101.pt", - "RN50x4": "https://openaipublic.azureedge.net/clip/models/7e526bd135e493cef0776de27d5f42653e6b4c8bf9e0f653bb11773263205fdd/RN50x4.pt", - "RN50x16": "https://openaipublic.azureedge.net/clip/models/52378b407f34354e150460fe41077663dd5b39c54cd0bfd2b27167a4a06ec9aa/RN50x16.pt", - "RN50x64": "https://openaipublic.azureedge.net/clip/models/be1cfb55d75a9666199fb2206c106743da0f6468c9d327f3e0d0a543a9919d9c/RN50x64.pt", - "ViT-B/32": "https://openaipublic.azureedge.net/clip/models/40d365715913c9da98579312b702a82c18be219cc2a73407c4526f58eba950af/ViT-B-32.pt", - "ViT-B/16": "https://openaipublic.azureedge.net/clip/models/5806e77cd80f8b59890b7e101eabd078d9fb84e6937f9e85e4ecb61988df416f/ViT-B-16.pt", - "ViT-L/14": "https://openaipublic.azureedge.net/clip/models/b8cca3fd41ae0c99ba7e8951adf17d267cdb84cd88be6f7c2e0eca1737a03836/ViT-L-14.pt", - "ViT-L/14@336px": "https://openaipublic.azureedge.net/clip/models/3035c92b350959924f9f00213499208652fc7ea050643e8b385c2dac08641f02/ViT-L-14-336px.pt", -} - - -def _download(url: str, root: str = os.path.expanduser("~/.cache/clip")): - os.makedirs(root, exist_ok=True) - filename = os.path.basename(url) - - expected_sha256 = url.split("/")[-2] - download_target = os.path.join(root, filename) - - if os.path.exists(download_target) and not os.path.isfile(download_target): - raise RuntimeError(f"{download_target} exists and is not a regular file") - - if os.path.isfile(download_target): - if hashlib.sha256(open(download_target, "rb").read()).hexdigest() == expected_sha256: - return download_target - else: - warnings.warn(f"{download_target} exists, but the SHA256 checksum does not match; re-downloading the file") - - with urllib.request.urlopen(url) as source, open(download_target, "wb") as output: - with 
tqdm(total=int(source.info().get("Content-Length")), ncols=80) as loop: - while True: - buffer = source.read(8192) - if not buffer: - break - - output.write(buffer) - loop.update(len(buffer)) - - if hashlib.sha256(open(download_target, "rb").read()).hexdigest() != expected_sha256: - raise RuntimeError(f"Model has been downloaded but the SHA256 checksum does not not match") - - return download_target - - -def available_models(): - return list(_MODELS.keys()) - - -def load(name: str, device: Union[str, torch.device] = "cuda" if torch.cuda.is_available() else "cpu", jit=True, prompt_depth=0, prompt_length=0): - if name not in _MODELS: - raise RuntimeError(f"Model {name} not found; available models = {available_models()}") - - model_path = _download(_MODELS[name]) - model = torch.jit.load(model_path, map_location=device if jit else "cpu").eval() - n_px = model.input_resolution.item() - - transform = Compose([ - Resize(n_px, interpolation=Image.BICUBIC), - CenterCrop(n_px), - lambda image: image.convert("RGB"), - ToTensor(), - Normalize((0.48145466, 0.4578275, 0.40821073), (0.26862954, 0.26130258, 0.27577711)), - ]) - - if not jit: - model = build_model(model.state_dict(), prompt_depth, prompt_length).to(device) - return model, transform - - # patch the device names - device_holder = torch.jit.trace(lambda: torch.ones([]).to(torch.device(device)), example_inputs=[]) - device_node = [n for n in device_holder.graph.findAllNodes("prim::Constant") if "Device" in repr(n)][-1] - - def patch_device(module): - graphs = [module.graph] if hasattr(module, "graph") else [] - if hasattr(module, "forward1"): - graphs.append(module.forward1.graph) - - for graph in graphs: - for node in graph.findAllNodes("prim::Constant"): - if "value" in node.attributeNames() and str(node["value"]).startswith("cuda"): - node.copyAttributes(device_node) - - model.apply(patch_device) - patch_device(model.encode_image) - patch_device(model.encode_text) - - # patch dtype to float32 on CPU - if device == "cpu": - float_holder = torch.jit.trace(lambda: torch.ones([]).float(), example_inputs=[]) - float_input = list(float_holder.graph.findNode("aten::to").inputs())[1] - float_node = float_input.node() - - def patch_float(module): - graphs = [module.graph] if hasattr(module, "graph") else [] - if hasattr(module, "forward1"): - graphs.append(module.forward1.graph) - - for graph in graphs: - for node in graph.findAllNodes("aten::to"): - inputs = list(node.inputs()) - for i in [1, 2]: # dtype can be the second or third argument to aten::to() - if inputs[i].node()["value"] == 5: - inputs[i].node().copyAttributes(float_node) - - model.apply(patch_float) - patch_float(model.encode_image) - patch_float(model.encode_text) - - model.float() - - return model, transform - - -def load_custom(name: str, device: Union[str, torch.device] = "cuda" if torch.cuda.is_available() else "cpu", jit=True, n_px=224): - if name not in _MODELS: - raise RuntimeError(f"Model {name} not found; available models = {available_models()}") - - model_path = _download(_MODELS[name]) - model = torch.jit.load(model_path, map_location=device if jit else "cpu").eval() - # n_px = model.input_resolution.item() - - transform = Compose([ - Resize(n_px, interpolation=Image.BICUBIC), - CenterCrop(n_px), - lambda image: image.convert("RGB"), - ToTensor(), - Normalize((0.48145466, 0.4578275, 0.40821073), (0.26862954, 0.26130258, 0.27577711)), - ]) - - if not jit: - model = build_model(model.state_dict()).to(device) - return model, transform - - # patch the device names - 
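# --- Illustrative sketch (editor addition, not from the deleted file above) ---
# The cache-and-verify pattern used by _download() above: the expected SHA256 digest is
# the second-to-last component of the URL path, a cached file is reused only when its
# digest matches, and a mismatching download raises. This condensed version drops the
# tqdm progress bar; the function name and structure are the editor's own.
import hashlib
import os
import urllib.request

def cached_download(url, root=os.path.expanduser("~/.cache/clip")):
    os.makedirs(root, exist_ok=True)
    target = os.path.join(root, os.path.basename(url))
    expected_sha256 = url.split("/")[-2]
    if os.path.isfile(target):
        if hashlib.sha256(open(target, "rb").read()).hexdigest() == expected_sha256:
            return target                      # cache hit with a valid checksum
    urllib.request.urlretrieve(url, target)    # (re-)download on miss or mismatch
    if hashlib.sha256(open(target, "rb").read()).hexdigest() != expected_sha256:
        raise RuntimeError("downloaded file failed its SHA256 check")
    return target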
device_holder = torch.jit.trace(lambda: torch.ones([]).to(torch.device(device)), example_inputs=[]) - device_node = [n for n in device_holder.graph.findAllNodes("prim::Constant") if "Device" in repr(n)][-1] - - def patch_device(module): - graphs = [module.graph] if hasattr(module, "graph") else [] - if hasattr(module, "forward1"): - graphs.append(module.forward1.graph) - - for graph in graphs: - for node in graph.findAllNodes("prim::Constant"): - if "value" in node.attributeNames() and str(node["value"]).startswith("cuda"): - node.copyAttributes(device_node) - - model.apply(patch_device) - patch_device(model.encode_image) - patch_device(model.encode_text) - - # patch dtype to float32 on CPU - if device == "cpu": - float_holder = torch.jit.trace(lambda: torch.ones([]).float(), example_inputs=[]) - float_input = list(float_holder.graph.findNode("aten::to").inputs())[1] - float_node = float_input.node() - - def patch_float(module): - graphs = [module.graph] if hasattr(module, "graph") else [] - if hasattr(module, "forward1"): - graphs.append(module.forward1.graph) - - for graph in graphs: - for node in graph.findAllNodes("aten::to"): - inputs = list(node.inputs()) - for i in [1, 2]: # dtype can be the second or third argument to aten::to() - if inputs[i].node()["value"] == 5: - inputs[i].node().copyAttributes(float_node) - - model.apply(patch_float) - patch_float(model.encode_image) - patch_float(model.encode_text) - - model.float() - - return model, transform - -def tokenize(texts: Union[str, List[str]], context_length: int = 77): - if isinstance(texts, str): - texts = [texts] - - sot_token = _tokenizer.encoder["<|startoftext|>"] - eot_token = _tokenizer.encoder["<|endoftext|>"] - all_tokens = [[sot_token] + _tokenizer.encode(text) + [eot_token] for text in texts] - result = torch.zeros(len(all_tokens), context_length, dtype=torch.long) - - for i, tokens in enumerate(all_tokens): - if len(tokens) > context_length: - raise RuntimeError(f"Input {texts[i]} is too long for context length {context_length}") - result[i, :len(tokens)] = torch.tensor(tokens) - - return result diff --git a/spaces/hamacojr/SAM-CAT-Seg/datasets/prepare_voc.py b/spaces/hamacojr/SAM-CAT-Seg/datasets/prepare_voc.py deleted file mode 100644 index 6ab2ca43ada301d72ec09df61c82bf30d2f20036..0000000000000000000000000000000000000000 --- a/spaces/hamacojr/SAM-CAT-Seg/datasets/prepare_voc.py +++ /dev/null @@ -1,77 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# Copyright (c) Meta Platforms, Inc. 
All Rights Reserved -# Modified by Feng Liang from https://github.com/MendelXu/zsseg.baseline/blob/master/datasets/prepare_voc_sem_seg.py -# Modified by Heeseong Shin from https://github.com/facebookresearch/ov-seg/blob/main/datasets/prepare_voc_sem_seg.py - -import os -import os.path as osp -from pathlib import Path -import tqdm - -import numpy as np -from PIL import Image - - -clsID_to_trID = { - 0: 255, - 1: 0, - 2: 1, - 3: 2, - 4: 3, - 5: 4, - 6: 5, - 7: 6, - 8: 7, - 9: 8, - 10: 9, - 11: 10, - 12: 11, - 13: 12, - 14: 13, - 15: 14, - 16: 15, - 17: 16, - 18: 17, - 19: 18, - 20: 19, - 255: 255, -} -clsID_to_trID_bg = clsID_to_trID.copy() -clsID_to_trID_bg[0] = 20 - -def convert_to_trainID( - maskpath, out_mask_dir, is_train, clsID_to_trID=clsID_to_trID, suffix="" -): - mask = np.array(Image.open(maskpath)) - mask_copy = np.ones_like(mask, dtype=np.uint8) * 255 - for clsID, trID in clsID_to_trID.items(): - mask_copy[mask == clsID] = trID - seg_filename = ( - osp.join(out_mask_dir, "train" + suffix, osp.basename(maskpath)) - if is_train - else osp.join(out_mask_dir, "val" + suffix, osp.basename(maskpath)) - ) - if len(np.unique(mask_copy)) == 1 and np.unique(mask_copy)[0] == 255: - return - Image.fromarray(mask_copy).save(seg_filename, "PNG") - - - -if __name__ == "__main__": - dataset_dir = Path(os.getenv("DETECTRON2_DATASETS", "datasets")) - print('Caution: we only generate the validation set!') - voc_path = dataset_dir / "VOCdevkit" / "VOC2012" - out_mask_dir = voc_path / "annotations_detectron2" - out_mask_dir_bg = voc_path / "annotations_detectron2_bg" - #out_image_dir = voc_path / "images_detectron2" - for name in ["val"]: - os.makedirs((out_mask_dir / name), exist_ok=True) - os.makedirs((out_mask_dir_bg / name), exist_ok=True) - #os.makedirs((out_image_dir / name), exist_ok=True) - val_list = [ - osp.join(voc_path, "SegmentationClassAug", f + ".png") - for f in np.loadtxt(osp.join(voc_path, "ImageSets/Segmentation/val.txt"), dtype=np.str).tolist() - ] - for file in tqdm.tqdm(val_list): - convert_to_trainID(file, out_mask_dir, is_train=False) - convert_to_trainID(file, out_mask_dir_bg, is_train=False, clsID_to_trID=clsID_to_trID_bg) \ No newline at end of file diff --git a/spaces/hamzapehlivan/StyleRes/models/torch_utils/ops/grid_sample_gradfix.py b/spaces/hamzapehlivan/StyleRes/models/torch_utils/ops/grid_sample_gradfix.py deleted file mode 100644 index 979ee831b232c68b8c271be9e376c70c57a31b02..0000000000000000000000000000000000000000 --- a/spaces/hamzapehlivan/StyleRes/models/torch_utils/ops/grid_sample_gradfix.py +++ /dev/null @@ -1,77 +0,0 @@ -# Copyright (c) 2021, NVIDIA CORPORATION & AFFILIATES. All rights reserved. -# -# NVIDIA CORPORATION and its licensors retain all intellectual property -# and proprietary rights in and to this software, related documentation -# and any modifications thereto. Any use, reproduction, disclosure or -# distribution of this software and related documentation without an express -# license agreement from NVIDIA CORPORATION is strictly prohibited. - -"""Custom replacement for `torch.nn.functional.grid_sample` that -supports arbitrarily high order gradients between the input and output. 
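# --- Illustrative sketch (editor addition, not from the deleted file above) ---
# The label remapping done by convert_to_trainID() in prepare_voc.py above, applied to a
# tiny fake mask: raw VOC class IDs become contiguous train IDs, with 0 (background)
# mapped to 255 (ignore) by default and to 20 in the *_bg variant. The toy mask and the
# truncated mapping are assumptions for the sketch.
import numpy as np

clsID_to_trID = {0: 255, 1: 0, 2: 1, 255: 255}
mask = np.array([[0, 1], [2, 255]], dtype=np.uint8)

remapped = np.full_like(mask, 255)
for cls_id, tr_id in clsID_to_trID.items():
    remapped[mask == cls_id] = tr_id
print(remapped)   # [[255   0]
                  #  [  1 255]]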
-Only works on 2D images and assumes -`mode='bilinear'`, `padding_mode='zeros'`, `align_corners=False`.""" - -import torch - -# pylint: disable=redefined-builtin -# pylint: disable=arguments-differ -# pylint: disable=protected-access - -#---------------------------------------------------------------------------- - -enabled = False # Enable the custom op by setting this to true. - -#---------------------------------------------------------------------------- - -def grid_sample(input, grid): - if _should_use_custom_op(): - return _GridSample2dForward.apply(input, grid) - return torch.nn.functional.grid_sample(input=input, grid=grid, mode='bilinear', padding_mode='zeros', align_corners=False) - -#---------------------------------------------------------------------------- - -def _should_use_custom_op(): - return enabled - -#---------------------------------------------------------------------------- - -class _GridSample2dForward(torch.autograd.Function): - @staticmethod - def forward(ctx, input, grid): - assert input.ndim == 4 - assert grid.ndim == 4 - output = torch.nn.functional.grid_sample(input=input, grid=grid, mode='bilinear', padding_mode='zeros', align_corners=False) - ctx.save_for_backward(input, grid) - return output - - @staticmethod - def backward(ctx, grad_output): - input, grid = ctx.saved_tensors - grad_input, grad_grid = _GridSample2dBackward.apply(grad_output, input, grid) - return grad_input, grad_grid - -#---------------------------------------------------------------------------- - -class _GridSample2dBackward(torch.autograd.Function): - @staticmethod - def forward(ctx, grad_output, input, grid): - op = torch._C._jit_get_operation('aten::grid_sampler_2d_backward') - grad_input, grad_grid = op(grad_output, input, grid, 0, 0, False) - ctx.save_for_backward(grid) - return grad_input, grad_grid - - @staticmethod - def backward(ctx, grad2_grad_input, grad2_grad_grid): - _ = grad2_grad_grid # unused - grid, = ctx.saved_tensors - grad2_grad_output = None - grad2_input = None - grad2_grid = None - - if ctx.needs_input_grad[0]: - grad2_grad_output = _GridSample2dForward.apply(grad2_grad_input, grid) - - assert not ctx.needs_input_grad[2] - return grad2_grad_output, grad2_input, grad2_grid - -#---------------------------------------------------------------------------- diff --git a/spaces/haotiz/glip-zeroshot-demo/maskrcnn_benchmark/engine/singlepath_trainer.py b/spaces/haotiz/glip-zeroshot-demo/maskrcnn_benchmark/engine/singlepath_trainer.py deleted file mode 100644 index c73ba7e60a8d5367a314b98b1379386cfcc4ffac..0000000000000000000000000000000000000000 --- a/spaces/haotiz/glip-zeroshot-demo/maskrcnn_benchmark/engine/singlepath_trainer.py +++ /dev/null @@ -1,141 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved. -import datetime -import logging -import time -import random -import torch -import torch.distributed as dist -from maskrcnn_benchmark.utils.comm import get_world_size, synchronize, broadcast_data -from maskrcnn_benchmark.utils.metric_logger import MetricLogger -from maskrcnn_benchmark.utils.ema import ModelEma - - -def reduce_loss_dict(loss_dict): - """ - Reduce the loss dictionary from all processes so that process with rank - 0 has the averaged results. Returns a dict with the same fields as - loss_dict, after reduction. 
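# --- Illustrative sketch (editor addition, not from the deleted file above) ---
# The double-backward pattern used by the custom grid_sample op above: the forward op's
# backward is itself an autograd.Function, so autograd can differentiate through the
# gradient computation a second time. Shown here on a toy y = x**2 instead of
# grid_sample, purely to illustrate the structure.
import torch

class _SquareForward(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        return x * x

    @staticmethod
    def backward(ctx, grad_out):
        x, = ctx.saved_tensors
        return _SquareBackward.apply(grad_out, x)

class _SquareBackward(torch.autograd.Function):
    @staticmethod
    def forward(ctx, grad_out, x):
        ctx.save_for_backward(grad_out, x)
        return 2 * x * grad_out            # d(x*x)/dx times the incoming gradient

    @staticmethod
    def backward(ctx, grad2):
        grad_out, x = ctx.saved_tensors
        return 2 * x * grad2, 2 * grad_out * grad2   # grads w.r.t. grad_out and x

x = torch.tensor(3.0, requires_grad=True)
y = _SquareForward.apply(x)
g, = torch.autograd.grad(y, x, create_graph=True)   # dy/dx = 2x = 6
g2, = torch.autograd.grad(g, x)                     # d2y/dx2 = 2
print(g.item(), g2.item())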
- """ - world_size = get_world_size() - if world_size < 2: - return loss_dict - with torch.no_grad(): - loss_names = [] - all_losses = [] - for k in sorted(loss_dict.keys()): - loss_names.append(k) - all_losses.append(loss_dict[k]) - all_losses = torch.stack(all_losses, dim=0) - dist.reduce(all_losses, dst=0) - if dist.get_rank() == 0: - # only main process gets accumulated, so only divide by - # world_size in this case - all_losses /= world_size - reduced_losses = {k: v for k, v in zip(loss_names, all_losses)} - return reduced_losses - - -def do_train( - cfg, - model, - data_loader, - optimizer, - scheduler, - checkpointer, - device, - checkpoint_period, - arguments, - rngs=None -): - logger = logging.getLogger("maskrcnn_benchmark.trainer") - logger.info("Start training") - meters = MetricLogger(delimiter=" ") - max_iter = len(data_loader) - start_iter = arguments["iteration"] - model.train() - model_ema = None - if cfg.SOLVER.MODEL_EMA>0: - model_ema = ModelEma(model, decay=cfg.SOLVER.MODEL_EMA) - start_training_time = time.time() - end = time.time() - - for iteration, (images, targets, _) in enumerate(data_loader, start_iter): - - if any(len(target) < 1 for target in targets): - logger.error("Iteration={iteration + 1} || Image Ids used for training {_} || targets Length={[len(target) for target in targets]}" ) - continue - data_time = time.time() - end - iteration = iteration + 1 - arguments["iteration"] = iteration - - images = images.to(device) - targets = [target.to(device) for target in targets] - - # synchronize rngs - if rngs is None: - if isinstance(model, torch.nn.parallel.DistributedDataParallel): - mix_nums = model.module.mix_nums - else: - mix_nums = model.mix_nums - rngs = [random.randint(0, mix-1) for mix in mix_nums] - rngs = broadcast_data(rngs) - - for param in model.parameters(): - param.requires_grad = False - loss_dict = model(images, targets, rngs) - - losses = sum(loss for loss in loss_dict.values()) - - # reduce losses over all GPUs for logging purposes - loss_dict_reduced = reduce_loss_dict(loss_dict) - losses_reduced = sum(loss for loss in loss_dict_reduced.values()) - meters.update(loss=losses_reduced, **loss_dict_reduced) - - optimizer.zero_grad() - losses.backward() - optimizer.step() - scheduler.step() - - if model_ema is not None: - model_ema.update(model) - arguments["model_ema"] = model_ema.state_dict() - - batch_time = time.time() - end - end = time.time() - meters.update(time=batch_time, data=data_time) - - eta_seconds = meters.time.global_avg * (max_iter - iteration) - eta_string = str(datetime.timedelta(seconds=int(eta_seconds))) - - if iteration % 20 == 0 or iteration == max_iter: - logger.info( - meters.delimiter.join( - [ - "eta: {eta}", - "iter: {iter}", - "{meters}", - "lr: {lr:.6f}", - "max mem: {memory:.0f}", - ] - ).format( - eta=eta_string, - iter=iteration, - meters=str(meters), - lr=optimizer.param_groups[0]["lr"], - memory=torch.cuda.max_memory_allocated() / 1024.0 / 1024.0, - ) - ) - if iteration % checkpoint_period == 0: - checkpointer.save("model_{:07d}".format(iteration), **arguments) - if iteration == max_iter: - if model_ema is not None: - model.load_state_dict(model_ema.state_dict()) - checkpointer.save("model_final", **arguments) - - total_training_time = time.time() - start_training_time - total_time_str = str(datetime.timedelta(seconds=total_training_time)) - logger.info( - "Total training time: {} ({:.4f} s / it)".format( - total_time_str, total_training_time / (max_iter) - ) - ) diff --git 
a/spaces/hardon-server/remove-background-on-image/app.py b/spaces/hardon-server/remove-background-on-image/app.py deleted file mode 100644 index 1c934859de6cf5be33c2d1dad1224c6e55b0a3fe..0000000000000000000000000000000000000000 --- a/spaces/hardon-server/remove-background-on-image/app.py +++ /dev/null @@ -1,9 +0,0 @@ -import gradio as gr -from rembg import remove - -def segment(image): - return remove(image) - -demo = gr.Interface(fn=segment, inputs="image", outputs="image") -demo.queue(concurrency_count=3) -demo.launch(show_api=False, debug=False, share=False, show_error=False) \ No newline at end of file diff --git a/spaces/hasibzunair/fifa-tryon-demo/Self-Correction-Human-Parsing-for-ACGPN/mhp_extension/detectron2/tools/inference.sh b/spaces/hasibzunair/fifa-tryon-demo/Self-Correction-Human-Parsing-for-ACGPN/mhp_extension/detectron2/tools/inference.sh deleted file mode 100644 index 3b9d39ed92e9cb574ac4349f457a52a27c38aac3..0000000000000000000000000000000000000000 --- a/spaces/hasibzunair/fifa-tryon-demo/Self-Correction-Human-Parsing-for-ACGPN/mhp_extension/detectron2/tools/inference.sh +++ /dev/null @@ -1,4 +0,0 @@ -python finetune_net.py \ - --num-gpus 1 \ - --config-file ../configs/Misc/parsing_inference.yaml \ - --eval-only MODEL.WEIGHTS ./model_final.pth TEST.AUG.ENABLED False diff --git a/spaces/hasibzunair/fifa-tryon-demo/setup_model_weights.py b/spaces/hasibzunair/fifa-tryon-demo/setup_model_weights.py deleted file mode 100644 index a101749c1356d9e60888627de726f56f07259ef9..0000000000000000000000000000000000000000 --- a/spaces/hasibzunair/fifa-tryon-demo/setup_model_weights.py +++ /dev/null @@ -1,13 +0,0 @@ -import os -import gdown - -os.makedirs('./saved_models/u2net', exist_ok=True) -os.makedirs('./saved_models/u2net_portrait', exist_ok=True) - -gdown.download('https://drive.google.com/uc?id=1ao1ovG1Qtx4b7EoskHXmi2E9rp5CHLcZ', - './saved_models/u2net/u2net.pth', - quiet=False) - -gdown.download('https://drive.google.com/uc?id=1IG3HdpcRiDoWNookbncQjeaPN28t90yW', - './saved_models/u2net_portrait/u2net_portrait.pth', - quiet=False) diff --git a/spaces/hbestm/gpt-academic-play/crazy_functions/test_project/cpp/cppipc/buffer.cpp b/spaces/hbestm/gpt-academic-play/crazy_functions/test_project/cpp/cppipc/buffer.cpp deleted file mode 100644 index 0ac0fa7bc3ced0447ba4caa359355dd4252670b3..0000000000000000000000000000000000000000 --- a/spaces/hbestm/gpt-academic-play/crazy_functions/test_project/cpp/cppipc/buffer.cpp +++ /dev/null @@ -1,87 +0,0 @@ -#include "libipc/buffer.h" -#include "libipc/utility/pimpl.h" - -#include - -namespace ipc { - -bool operator==(buffer const & b1, buffer const & b2) { - return (b1.size() == b2.size()) && (std::memcmp(b1.data(), b2.data(), b1.size()) == 0); -} - -bool operator!=(buffer const & b1, buffer const & b2) { - return !(b1 == b2); -} - -class buffer::buffer_ : public pimpl { -public: - void* p_; - std::size_t s_; - void* a_; - buffer::destructor_t d_; - - buffer_(void* p, std::size_t s, buffer::destructor_t d, void* a) - : p_(p), s_(s), a_(a), d_(d) { - } - - ~buffer_() { - if (d_ == nullptr) return; - d_((a_ == nullptr) ? 
p_ : a_, s_); - } -}; - -buffer::buffer() - : buffer(nullptr, 0, nullptr, nullptr) { -} - -buffer::buffer(void* p, std::size_t s, destructor_t d) - : p_(p_->make(p, s, d, nullptr)) { -} - -buffer::buffer(void* p, std::size_t s, destructor_t d, void* additional) - : p_(p_->make(p, s, d, additional)) { -} - -buffer::buffer(void* p, std::size_t s) - : buffer(p, s, nullptr) { -} - -buffer::buffer(char const & c) - : buffer(const_cast(&c), 1) { -} - -buffer::buffer(buffer&& rhs) - : buffer() { - swap(rhs); -} - -buffer::~buffer() { - p_->clear(); -} - -void buffer::swap(buffer& rhs) { - std::swap(p_, rhs.p_); -} - -buffer& buffer::operator=(buffer rhs) { - swap(rhs); - return *this; -} - -bool buffer::empty() const noexcept { - return (impl(p_)->p_ == nullptr) || (impl(p_)->s_ == 0); -} - -void* buffer::data() noexcept { - return impl(p_)->p_; -} - -void const * buffer::data() const noexcept { - return impl(p_)->p_; -} - -std::size_t buffer::size() const noexcept { - return impl(p_)->s_; -} - -} // namespace ipc diff --git a/spaces/hca97/Mosquito-Detection/my_models/torch_hub_cache/yolov5/utils/metrics.py b/spaces/hca97/Mosquito-Detection/my_models/torch_hub_cache/yolov5/utils/metrics.py deleted file mode 100644 index 5646f40e9860f90648e1dc8d074277de9b827b97..0000000000000000000000000000000000000000 --- a/spaces/hca97/Mosquito-Detection/my_models/torch_hub_cache/yolov5/utils/metrics.py +++ /dev/null @@ -1,360 +0,0 @@ -# YOLOv5 🚀 by Ultralytics, AGPL-3.0 license -""" -Model validation metrics -""" - -import math -import warnings -from pathlib import Path - -import matplotlib.pyplot as plt -import numpy as np -import torch - -from utils import TryExcept, threaded - - -def fitness(x): - # Model fitness as a weighted combination of metrics - w = [0.0, 0.0, 0.1, 0.9] # weights for [P, R, mAP@0.5, mAP@0.5:0.95] - return (x[:, :4] * w).sum(1) - - -def smooth(y, f=0.05): - # Box filter of fraction f - nf = round(len(y) * f * 2) // 2 + 1 # number of filter elements (must be odd) - p = np.ones(nf // 2) # ones padding - yp = np.concatenate((p * y[0], y, p * y[-1]), 0) # y padded - return np.convolve(yp, np.ones(nf) / nf, mode='valid') # y-smoothed - - -def ap_per_class(tp, conf, pred_cls, target_cls, plot=False, save_dir='.', names=(), eps=1e-16, prefix=''): - """ Compute the average precision, given the recall and precision curves. - Source: https://github.com/rafaelpadilla/Object-Detection-Metrics. - # Arguments - tp: True positives (nparray, nx1 or nx10). - conf: Objectness value from 0-1 (nparray). - pred_cls: Predicted object classes (nparray). - target_cls: True object classes (nparray). - plot: Plot precision-recall curve at mAP@0.5 - save_dir: Plot save directory - # Returns - The average precision as computed in py-faster-rcnn. 
- """ - - # Sort by objectness - i = np.argsort(-conf) - tp, conf, pred_cls = tp[i], conf[i], pred_cls[i] - - # Find unique classes - unique_classes, nt = np.unique(target_cls, return_counts=True) - nc = unique_classes.shape[0] # number of classes, number of detections - - # Create Precision-Recall curve and compute AP for each class - px, py = np.linspace(0, 1, 1000), [] # for plotting - ap, p, r = np.zeros((nc, tp.shape[1])), np.zeros((nc, 1000)), np.zeros((nc, 1000)) - for ci, c in enumerate(unique_classes): - i = pred_cls == c - n_l = nt[ci] # number of labels - n_p = i.sum() # number of predictions - if n_p == 0 or n_l == 0: - continue - - # Accumulate FPs and TPs - fpc = (1 - tp[i]).cumsum(0) - tpc = tp[i].cumsum(0) - - # Recall - recall = tpc / (n_l + eps) # recall curve - r[ci] = np.interp(-px, -conf[i], recall[:, 0], left=0) # negative x, xp because xp decreases - - # Precision - precision = tpc / (tpc + fpc) # precision curve - p[ci] = np.interp(-px, -conf[i], precision[:, 0], left=1) # p at pr_score - - # AP from recall-precision curve - for j in range(tp.shape[1]): - ap[ci, j], mpre, mrec = compute_ap(recall[:, j], precision[:, j]) - if plot and j == 0: - py.append(np.interp(px, mrec, mpre)) # precision at mAP@0.5 - - # Compute F1 (harmonic mean of precision and recall) - f1 = 2 * p * r / (p + r + eps) - names = [v for k, v in names.items() if k in unique_classes] # list: only classes that have data - names = dict(enumerate(names)) # to dict - if plot: - plot_pr_curve(px, py, ap, Path(save_dir) / f'{prefix}PR_curve.png', names) - plot_mc_curve(px, f1, Path(save_dir) / f'{prefix}F1_curve.png', names, ylabel='F1') - plot_mc_curve(px, p, Path(save_dir) / f'{prefix}P_curve.png', names, ylabel='Precision') - plot_mc_curve(px, r, Path(save_dir) / f'{prefix}R_curve.png', names, ylabel='Recall') - - i = smooth(f1.mean(0), 0.1).argmax() # max F1 index - p, r, f1 = p[:, i], r[:, i], f1[:, i] - tp = (r * nt).round() # true positives - fp = (tp / (p + eps) - tp).round() # false positives - return tp, fp, p, r, f1, ap, unique_classes.astype(int) - - -def compute_ap(recall, precision): - """ Compute the average precision, given the recall and precision curves - # Arguments - recall: The recall curve (list) - precision: The precision curve (list) - # Returns - Average precision, precision curve, recall curve - """ - - # Append sentinel values to beginning and end - mrec = np.concatenate(([0.0], recall, [1.0])) - mpre = np.concatenate(([1.0], precision, [0.0])) - - # Compute the precision envelope - mpre = np.flip(np.maximum.accumulate(np.flip(mpre))) - - # Integrate area under curve - method = 'interp' # methods: 'continuous', 'interp' - if method == 'interp': - x = np.linspace(0, 1, 101) # 101-point interp (COCO) - ap = np.trapz(np.interp(x, mrec, mpre), x) # integrate - else: # 'continuous' - i = np.where(mrec[1:] != mrec[:-1])[0] # points where x axis (recall) changes - ap = np.sum((mrec[i + 1] - mrec[i]) * mpre[i + 1]) # area under curve - - return ap, mpre, mrec - - -class ConfusionMatrix: - # Updated version of https://github.com/kaanakan/object_detection_confusion_matrix - def __init__(self, nc, conf=0.25, iou_thres=0.45): - self.matrix = np.zeros((nc + 1, nc + 1)) - self.nc = nc # number of classes - self.conf = conf - self.iou_thres = iou_thres - - def process_batch(self, detections, labels): - """ - Return intersection-over-union (Jaccard index) of boxes. - Both sets of boxes are expected to be in (x1, y1, x2, y2) format. 
- Arguments: - detections (Array[N, 6]), x1, y1, x2, y2, conf, class - labels (Array[M, 5]), class, x1, y1, x2, y2 - Returns: - None, updates confusion matrix accordingly - """ - if detections is None: - gt_classes = labels.int() - for gc in gt_classes: - self.matrix[self.nc, gc] += 1 # background FN - return - - detections = detections[detections[:, 4] > self.conf] - gt_classes = labels[:, 0].int() - detection_classes = detections[:, 5].int() - iou = box_iou(labels[:, 1:], detections[:, :4]) - - x = torch.where(iou > self.iou_thres) - if x[0].shape[0]: - matches = torch.cat((torch.stack(x, 1), iou[x[0], x[1]][:, None]), 1).cpu().numpy() - if x[0].shape[0] > 1: - matches = matches[matches[:, 2].argsort()[::-1]] - matches = matches[np.unique(matches[:, 1], return_index=True)[1]] - matches = matches[matches[:, 2].argsort()[::-1]] - matches = matches[np.unique(matches[:, 0], return_index=True)[1]] - else: - matches = np.zeros((0, 3)) - - n = matches.shape[0] > 0 - m0, m1, _ = matches.transpose().astype(int) - for i, gc in enumerate(gt_classes): - j = m0 == i - if n and sum(j) == 1: - self.matrix[detection_classes[m1[j]], gc] += 1 # correct - else: - self.matrix[self.nc, gc] += 1 # true background - - if n: - for i, dc in enumerate(detection_classes): - if not any(m1 == i): - self.matrix[dc, self.nc] += 1 # predicted background - - def tp_fp(self): - tp = self.matrix.diagonal() # true positives - fp = self.matrix.sum(1) - tp # false positives - # fn = self.matrix.sum(0) - tp # false negatives (missed detections) - return tp[:-1], fp[:-1] # remove background class - - @TryExcept('WARNING ⚠️ ConfusionMatrix plot failure') - def plot(self, normalize=True, save_dir='', names=()): - import seaborn as sn - - array = self.matrix / ((self.matrix.sum(0).reshape(1, -1) + 1E-9) if normalize else 1) # normalize columns - array[array < 0.005] = np.nan # don't annotate (would appear as 0.00) - - fig, ax = plt.subplots(1, 1, figsize=(12, 9), tight_layout=True) - nc, nn = self.nc, len(names) # number of classes, names - sn.set(font_scale=1.0 if nc < 50 else 0.8) # for label size - labels = (0 < nn < 99) and (nn == nc) # apply names to ticklabels - ticklabels = (names + ['background']) if labels else 'auto' - with warnings.catch_warnings(): - warnings.simplefilter('ignore') # suppress empty matrix RuntimeWarning: All-NaN slice encountered - sn.heatmap(array, - ax=ax, - annot=nc < 30, - annot_kws={ - 'size': 8}, - cmap='Blues', - fmt='.2f', - square=True, - vmin=0.0, - xticklabels=ticklabels, - yticklabels=ticklabels).set_facecolor((1, 1, 1)) - ax.set_xlabel('True') - ax.set_ylabel('Predicted') - ax.set_title('Confusion Matrix') - fig.savefig(Path(save_dir) / 'confusion_matrix.png', dpi=250) - plt.close(fig) - - def print(self): - for i in range(self.nc + 1): - print(' '.join(map(str, self.matrix[i]))) - - -def bbox_iou(box1, box2, xywh=True, GIoU=False, DIoU=False, CIoU=False, eps=1e-7): - # Returns Intersection over Union (IoU) of box1(1,4) to box2(n,4) - - # Get the coordinates of bounding boxes - if xywh: # transform from xywh to xyxy - (x1, y1, w1, h1), (x2, y2, w2, h2) = box1.chunk(4, -1), box2.chunk(4, -1) - w1_, h1_, w2_, h2_ = w1 / 2, h1 / 2, w2 / 2, h2 / 2 - b1_x1, b1_x2, b1_y1, b1_y2 = x1 - w1_, x1 + w1_, y1 - h1_, y1 + h1_ - b2_x1, b2_x2, b2_y1, b2_y2 = x2 - w2_, x2 + w2_, y2 - h2_, y2 + h2_ - else: # x1, y1, x2, y2 = box1 - b1_x1, b1_y1, b1_x2, b1_y2 = box1.chunk(4, -1) - b2_x1, b2_y1, b2_x2, b2_y2 = box2.chunk(4, -1) - w1, h1 = b1_x2 - b1_x1, (b1_y2 - b1_y1).clamp(eps) - w2, h2 = b2_x2 - b2_x1, 
(b2_y2 - b2_y1).clamp(eps) - - # Intersection area - inter = (b1_x2.minimum(b2_x2) - b1_x1.maximum(b2_x1)).clamp(0) * \ - (b1_y2.minimum(b2_y2) - b1_y1.maximum(b2_y1)).clamp(0) - - # Union Area - union = w1 * h1 + w2 * h2 - inter + eps - - # IoU - iou = inter / union - if CIoU or DIoU or GIoU: - cw = b1_x2.maximum(b2_x2) - b1_x1.minimum(b2_x1) # convex (smallest enclosing box) width - ch = b1_y2.maximum(b2_y2) - b1_y1.minimum(b2_y1) # convex height - if CIoU or DIoU: # Distance or Complete IoU https://arxiv.org/abs/1911.08287v1 - c2 = cw ** 2 + ch ** 2 + eps # convex diagonal squared - rho2 = ((b2_x1 + b2_x2 - b1_x1 - b1_x2) ** 2 + (b2_y1 + b2_y2 - b1_y1 - b1_y2) ** 2) / 4 # center dist ** 2 - if CIoU: # https://github.com/Zzh-tju/DIoU-SSD-pytorch/blob/master/utils/box/box_utils.py#L47 - v = (4 / math.pi ** 2) * (torch.atan(w2 / h2) - torch.atan(w1 / h1)).pow(2) - with torch.no_grad(): - alpha = v / (v - iou + (1 + eps)) - return iou - (rho2 / c2 + v * alpha) # CIoU - return iou - rho2 / c2 # DIoU - c_area = cw * ch + eps # convex area - return iou - (c_area - union) / c_area # GIoU https://arxiv.org/pdf/1902.09630.pdf - return iou # IoU - - -def box_iou(box1, box2, eps=1e-7): - # https://github.com/pytorch/vision/blob/master/torchvision/ops/boxes.py - """ - Return intersection-over-union (Jaccard index) of boxes. - Both sets of boxes are expected to be in (x1, y1, x2, y2) format. - Arguments: - box1 (Tensor[N, 4]) - box2 (Tensor[M, 4]) - Returns: - iou (Tensor[N, M]): the NxM matrix containing the pairwise - IoU values for every element in boxes1 and boxes2 - """ - - # inter(N,M) = (rb(N,M,2) - lt(N,M,2)).clamp(0).prod(2) - (a1, a2), (b1, b2) = box1.unsqueeze(1).chunk(2, 2), box2.unsqueeze(0).chunk(2, 2) - inter = (torch.min(a2, b2) - torch.max(a1, b1)).clamp(0).prod(2) - - # IoU = inter / (area1 + area2 - inter) - return inter / ((a2 - a1).prod(2) + (b2 - b1).prod(2) - inter + eps) - - -def bbox_ioa(box1, box2, eps=1e-7): - """ Returns the intersection over box2 area given box1, box2. Boxes are x1y1x2y2 - box1: np.array of shape(4) - box2: np.array of shape(nx4) - returns: np.array of shape(n) - """ - - # Get the coordinates of bounding boxes - b1_x1, b1_y1, b1_x2, b1_y2 = box1 - b2_x1, b2_y1, b2_x2, b2_y2 = box2.T - - # Intersection area - inter_area = (np.minimum(b1_x2, b2_x2) - np.maximum(b1_x1, b2_x1)).clip(0) * \ - (np.minimum(b1_y2, b2_y2) - np.maximum(b1_y1, b2_y1)).clip(0) - - # box2 area - box2_area = (b2_x2 - b2_x1) * (b2_y2 - b2_y1) + eps - - # Intersection over box2 area - return inter_area / box2_area - - -def wh_iou(wh1, wh2, eps=1e-7): - # Returns the nxm IoU matrix. 
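# --- Illustrative sketch (editor addition, not from the deleted file above) ---
# Pairwise IoU as computed by box_iou() above, on two hand-made xyxy boxes, checked
# against the closed-form answer 25 / (100 + 100 - 25).
import torch

boxes1 = torch.tensor([[0., 0., 10., 10.]])
boxes2 = torch.tensor([[5., 5., 15., 15.]])

(a1, a2), (b1, b2) = boxes1.unsqueeze(1).chunk(2, 2), boxes2.unsqueeze(0).chunk(2, 2)
inter = (torch.min(a2, b2) - torch.max(a1, b1)).clamp(0).prod(2)   # 5 * 5 = 25
iou = inter / ((a2 - a1).prod(2) + (b2 - b1).prod(2) - inter)
print(iou)   # tensor([[0.1429]])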
wh1 is nx2, wh2 is mx2 - wh1 = wh1[:, None] # [N,1,2] - wh2 = wh2[None] # [1,M,2] - inter = torch.min(wh1, wh2).prod(2) # [N,M] - return inter / (wh1.prod(2) + wh2.prod(2) - inter + eps) # iou = inter / (area1 + area2 - inter) - - -# Plots ---------------------------------------------------------------------------------------------------------------- - - -@threaded -def plot_pr_curve(px, py, ap, save_dir=Path('pr_curve.png'), names=()): - # Precision-recall curve - fig, ax = plt.subplots(1, 1, figsize=(9, 6), tight_layout=True) - py = np.stack(py, axis=1) - - if 0 < len(names) < 21: # display per-class legend if < 21 classes - for i, y in enumerate(py.T): - ax.plot(px, y, linewidth=1, label=f'{names[i]} {ap[i, 0]:.3f}') # plot(recall, precision) - else: - ax.plot(px, py, linewidth=1, color='grey') # plot(recall, precision) - - ax.plot(px, py.mean(1), linewidth=3, color='blue', label='all classes %.3f mAP@0.5' % ap[:, 0].mean()) - ax.set_xlabel('Recall') - ax.set_ylabel('Precision') - ax.set_xlim(0, 1) - ax.set_ylim(0, 1) - ax.legend(bbox_to_anchor=(1.04, 1), loc='upper left') - ax.set_title('Precision-Recall Curve') - fig.savefig(save_dir, dpi=250) - plt.close(fig) - - -@threaded -def plot_mc_curve(px, py, save_dir=Path('mc_curve.png'), names=(), xlabel='Confidence', ylabel='Metric'): - # Metric-confidence curve - fig, ax = plt.subplots(1, 1, figsize=(9, 6), tight_layout=True) - - if 0 < len(names) < 21: # display per-class legend if < 21 classes - for i, y in enumerate(py): - ax.plot(px, y, linewidth=1, label=f'{names[i]}') # plot(confidence, metric) - else: - ax.plot(px, py.T, linewidth=1, color='grey') # plot(confidence, metric) - - y = smooth(py.mean(0), 0.05) - ax.plot(px, y, linewidth=3, color='blue', label=f'all classes {y.max():.2f} at {px[y.argmax()]:.3f}') - ax.set_xlabel(xlabel) - ax.set_ylabel(ylabel) - ax.set_xlim(0, 1) - ax.set_ylim(0, 1) - ax.legend(bbox_to_anchor=(1.04, 1), loc='upper left') - ax.set_title(f'{ylabel}-Confidence Curve') - fig.savefig(save_dir, dpi=250) - plt.close(fig) diff --git a/spaces/huggingface-tools/text-download/app.py b/spaces/huggingface-tools/text-download/app.py deleted file mode 100644 index 6265c4d24f1ab7d9eb44bfbd71b67f600671ef50..0000000000000000000000000000000000000000 --- a/spaces/huggingface-tools/text-download/app.py +++ /dev/null @@ -1,4 +0,0 @@ -from transformers.tools.base import launch_gradio_demo -from text_download import TextDownloadTool - -launch_gradio_demo(TextDownloadTool) diff --git a/spaces/hysts/bizarre-pose-estimator-segmenter/README.md b/spaces/hysts/bizarre-pose-estimator-segmenter/README.md deleted file mode 100644 index 9f33306daa76b25b5dc42503b9e8020bcbc6d340..0000000000000000000000000000000000000000 --- a/spaces/hysts/bizarre-pose-estimator-segmenter/README.md +++ /dev/null @@ -1,10 +0,0 @@ ---- -title: Bizarre Pose Estimator Segmenter -emoji: 🐢 -colorFrom: blue -colorTo: indigo -sdk: gradio -sdk_version: 3.36.1 -app_file: app.py -pinned: false ---- diff --git a/spaces/hyxue/HiFiFace-inference-demo/arcface_torch/configs/wf4m_r50.py b/spaces/hyxue/HiFiFace-inference-demo/arcface_torch/configs/wf4m_r50.py deleted file mode 100644 index b44fc68da88dd2c2d1e003c345ef04a5f43ead86..0000000000000000000000000000000000000000 --- a/spaces/hyxue/HiFiFace-inference-demo/arcface_torch/configs/wf4m_r50.py +++ /dev/null @@ -1,27 +0,0 @@ -from easydict import EasyDict as edict - -# make training faster -# our RAM is 256G -# mount -t tmpfs -o size=140G tmpfs /train_tmp - -config = edict() -config.margin_list = (1.0, 0.0, 0.4) 
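# --- Illustrative sketch (editor addition, not from the deleted file above) ---
# The EasyDict pattern used by the wf4m_r50 config above: fields are written and read as
# attributes, while plain dict access still works, so the training loop can consume the
# config either way. The values here mirror the config above.
from easydict import EasyDict as edict

config = edict()
config.margin_list = (1.0, 0.0, 0.4)   # (m1, m2, m3) for the combined-margin softmax
config.batch_size = 128
config.lr = 0.1

print(config.margin_list, config["batch_size"], config.lr)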
-config.network = "r50" -config.resume = False -config.output = None -config.embedding_size = 512 -config.sample_rate = 1.0 -config.fp16 = True -config.momentum = 0.9 -config.weight_decay = 5e-4 -config.batch_size = 128 -config.lr = 0.1 -config.verbose = 2000 -config.dali = False - -config.rec = "/train_tmp/WebFace4M" -config.num_classes = 205990 -config.num_image = 4235242 -config.num_epoch = 20 -config.warmup_epoch = 0 -config.val_targets = ["lfw", "cfp_fp", "agedb_30"] diff --git a/spaces/igrab666/polish_text_summarization/app.py b/spaces/igrab666/polish_text_summarization/app.py deleted file mode 100644 index 3b6a1edc661b8623a61358c5bc4c4739be04b7c8..0000000000000000000000000000000000000000 --- a/spaces/igrab666/polish_text_summarization/app.py +++ /dev/null @@ -1,20 +0,0 @@ -import gradio as gr -title = 'Polish Text Summarization - temporary version' -text_ = "Niemiecki rząd przez kilka tygodni ukrywał listę z bronią, jaką przygotował przemysł zbrojeniowy, a która mogła trafić do ukraińskiej armii i pomóc walczyć z Rosjanami. Dodatkowo twierdził, że wszystko było tajemnicą, gdyż to Ukraina tego chciała. Tym twierdzeniom zaprzeczają Ukraińcy. — To jest nieludzkie. Jest to niezgodne z prawem międzynarodowym. Rażąca niesprawiedliwość, ból Ukraińców — są to sprawy bardzo bliskie nam wszystkim — powiedział kanclerz Niemiec Olaf Scholz (SPD). Miało to miejsce 27 lutego, ponad miesiąc przed masakrą w Buczy, w tzw. przemówieniu na przełomie czasów. — Ta nowa rzeczywistość wymaga jasnej odpowiedzi. Wczoraj podjęliśmy decyzję, że Niemcy będą dostarczać Ukrainie broń do obrony kraju — kontynuował Scholz. " -interface = gr.Interface.load("huggingface/facebook/bart-large-cnn", # TO O DZIWO DZIALA NAWET DLA JEZYKA POLSKIEGO JAKOS..... -# interface = gr.Interface.load("huggingface/dkleczek/bert-base-polish-cased-v1", # to nie dziala -# interface = gr.Interface.load("huggingface/clarin-pl/roberta-polish-kgr10", # to nie dziala - - - -#interface = gr.Interface.load("huggingface/henryk/bert-base-multilingual-cased-finetuned-polish-squad2", # to generuje keyword -#interface = gr.Interface.load("huggingface/sdadas/polish-roberta-large-v2", # to nie dziala -#interface = gr.Interface.load("huggingface/allegro/plt5-large", # DZIALA ALE BEZNADZIEJNIE. DUZO CHINSKICH ZNAKOW LADUJE - -#interface = gr.Interface.load("huggingface/allegro/plt5-small", # DZIALA ALE BEZNADZIEJNIE. 
DUZO CHINSKICH ZNAKOW LADUJE - - - -title = title, -theme = "peach", -examples = [[text_]]).launch() diff --git a/spaces/impira/docquery/app.py b/spaces/impira/docquery/app.py deleted file mode 100644 index 1c7417f2d0db043353e3e03ec8eae4e0c793efc1..0000000000000000000000000000000000000000 --- a/spaces/impira/docquery/app.py +++ /dev/null @@ -1,428 +0,0 @@ -import os - -os.environ["TOKENIZERS_PARALLELISM"] = "false" - -from PIL import Image, ImageDraw -import traceback - -import gradio as gr - -import torch -from docquery import pipeline -from docquery.document import load_document, ImageDocument -from docquery.ocr_reader import get_ocr_reader - - -def ensure_list(x): - if isinstance(x, list): - return x - else: - return [x] - - -CHECKPOINTS = { - "LayoutLMv1 🦉": "impira/layoutlm-document-qa", - "LayoutLMv1 for Invoices 💸": "impira/layoutlm-invoices", - "Donut 🍩": "naver-clova-ix/donut-base-finetuned-docvqa", -} - -PIPELINES = {} - - -def construct_pipeline(task, model): - global PIPELINES - if model in PIPELINES: - return PIPELINES[model] - - device = "cuda" if torch.cuda.is_available() else "cpu" - ret = pipeline(task=task, model=CHECKPOINTS[model], device=device) - PIPELINES[model] = ret - return ret - - -def run_pipeline(model, question, document, top_k): - pipeline = construct_pipeline("document-question-answering", model) - return pipeline(question=question, **document.context, top_k=top_k) - - -# TODO: Move into docquery -# TODO: Support words past the first page (or window?) -def lift_word_boxes(document, page): - return document.context["image"][page][1] - - -def expand_bbox(word_boxes): - if len(word_boxes) == 0: - return None - - min_x, min_y, max_x, max_y = zip(*[x[1] for x in word_boxes]) - min_x, min_y, max_x, max_y = [min(min_x), min(min_y), max(max_x), max(max_y)] - return [min_x, min_y, max_x, max_y] - - -# LayoutLM boxes are normalized to 0, 1000 -def normalize_bbox(box, width, height, padding=0.005): - min_x, min_y, max_x, max_y = [c / 1000 for c in box] - if padding != 0: - min_x = max(0, min_x - padding) - min_y = max(0, min_y - padding) - max_x = min(max_x + padding, 1) - max_y = min(max_y + padding, 1) - return [min_x * width, min_y * height, max_x * width, max_y * height] - - -examples = [ - [ - "invoice.png", - "What is the invoice number?", - ], - [ - "contract.jpeg", - "What is the purchase amount?", - ], - [ - "statement.png", - "What are net sales for 2020?", - ], - # [ - # "docquery.png", - # "How many likes does the space have?", - # ], - # [ - # "hacker_news.png", - # "What is the title of post number 5?", - # ], -] - -question_files = { - "What are net sales for 2020?": "statement.pdf", - "How many likes does the space have?": "https://huggingface.co/spaces/impira/docquery", - "What is the title of post number 5?": "https://news.ycombinator.com", -} - - -def process_path(path): - error = None - if path: - try: - document = load_document(path) - return ( - document, - gr.update(visible=True, value=document.preview), - gr.update(visible=True), - gr.update(visible=False, value=None), - gr.update(visible=False, value=None), - None, - ) - except Exception as e: - traceback.print_exc() - error = str(e) - return ( - None, - gr.update(visible=False, value=None), - gr.update(visible=False), - gr.update(visible=False, value=None), - gr.update(visible=False, value=None), - gr.update(visible=True, value=error) if error is not None else None, - None, - ) - - -def process_upload(file): - if file: - return process_path(file.name) - else: - return ( - None, - 
gr.update(visible=False, value=None), - gr.update(visible=False), - gr.update(visible=False, value=None), - gr.update(visible=False, value=None), - None, - ) - - -colors = ["#64A087", "green", "black"] - - -def process_question(question, document, model=list(CHECKPOINTS.keys())[0]): - if not question or document is None: - return None, None, None - - text_value = None - predictions = run_pipeline(model, question, document, 3) - pages = [x.copy().convert("RGB") for x in document.preview] - for i, p in enumerate(ensure_list(predictions)): - if i == 0: - text_value = p["answer"] - else: - # Keep the code around to produce multiple boxes, but only show the top - # prediction for now - break - - if "word_ids" in p: - image = pages[p["page"]] - draw = ImageDraw.Draw(image, "RGBA") - word_boxes = lift_word_boxes(document, p["page"]) - x1, y1, x2, y2 = normalize_bbox( - expand_bbox([word_boxes[i] for i in p["word_ids"]]), - image.width, - image.height, - ) - draw.rectangle(((x1, y1), (x2, y2)), fill=(0, 255, 0, int(0.4 * 255))) - - return ( - gr.update(visible=True, value=pages), - gr.update(visible=True, value=predictions), - gr.update( - visible=True, - value=text_value, - ), - ) - - -def load_example_document(img, question, model): - if img is not None: - if question in question_files: - document = load_document(question_files[question]) - else: - document = ImageDocument(Image.fromarray(img), get_ocr_reader()) - preview, answer, answer_text = process_question(question, document, model) - return document, question, preview, gr.update(visible=True), answer, answer_text - else: - return None, None, None, gr.update(visible=False), None, None - - -CSS = """ -#question input { - font-size: 16px; -} -#url-textbox { - padding: 0 !important; -} -#short-upload-box .w-full { - min-height: 10rem !important; -} -/* I think something like this can be used to re-shape - * the table - */ -/* -.gr-samples-table tr { - display: inline; -} -.gr-samples-table .p-2 { - width: 100px; -} -*/ -#select-a-file { - width: 100%; -} -#file-clear { - padding-top: 2px !important; - padding-bottom: 2px !important; - padding-left: 8px !important; - padding-right: 8px !important; - margin-top: 10px; -} -.gradio-container .gr-button-primary { - background: linear-gradient(180deg, #CDF9BE 0%, #AFF497 100%); - border: 1px solid #B0DCCC; - border-radius: 8px; - color: #1B8700; -} -.gradio-container.dark button#submit-button { - background: linear-gradient(180deg, #CDF9BE 0%, #AFF497 100%); - border: 1px solid #B0DCCC; - border-radius: 8px; - color: #1B8700 -} - -table.gr-samples-table tr td { - border: none; - outline: none; -} - -table.gr-samples-table tr td:first-of-type { - width: 0%; -} - -div#short-upload-box div.absolute { - display: none !important; -} - -gradio-app > div > div > div > div.w-full > div, .gradio-app > div > div > div > div.w-full > div { - gap: 0px 2%; -} - -gradio-app div div div div.w-full, .gradio-app div div div div.w-full { - gap: 0px; -} - -gradio-app h2, .gradio-app h2 { - padding-top: 10px; -} - -#answer { - overflow-y: scroll; - color: white; - background: #666; - border-color: #666; - font-size: 20px; - font-weight: bold; -} - -#answer span { - color: white; -} - -#answer textarea { - color:white; - background: #777; - border-color: #777; - font-size: 18px; -} - -#url-error input { - color: red; -} -""" - -with gr.Blocks(css=CSS) as demo: - gr.Markdown("# DocQuery: Document Query Engine") - gr.Markdown( - "DocQuery (created by 
[Impira](https://impira.com?utm_source=huggingface&utm_medium=referral&utm_campaign=docquery_space))" - " uses LayoutLMv1 fine-tuned on DocVQA, a document visual question" - " answering dataset, as well as SQuAD, which boosts its English-language comprehension." - " To use it, simply upload an image or PDF, type a question, and click 'submit', or " - " click one of the examples to load them." - " DocQuery is MIT-licensed and available on [Github](https://github.com/impira/docquery)." - ) - - document = gr.Variable() - example_question = gr.Textbox(visible=False) - example_image = gr.Image(visible=False) - - with gr.Row(equal_height=True): - with gr.Column(): - with gr.Row(): - gr.Markdown("## 1. Select a file", elem_id="select-a-file") - img_clear_button = gr.Button( - "Clear", variant="secondary", elem_id="file-clear", visible=False - ) - image = gr.Gallery(visible=False) - with gr.Row(equal_height=True): - with gr.Column(): - with gr.Row(): - url = gr.Textbox( - show_label=False, - placeholder="URL", - lines=1, - max_lines=1, - elem_id="url-textbox", - ) - submit = gr.Button("Get") - url_error = gr.Textbox( - visible=False, - elem_id="url-error", - max_lines=1, - interactive=False, - label="Error", - ) - gr.Markdown("— or —") - upload = gr.File(label=None, interactive=True, elem_id="short-upload-box") - gr.Examples( - examples=examples, - inputs=[example_image, example_question], - ) - - with gr.Column() as col: - gr.Markdown("## 2. Ask a question") - question = gr.Textbox( - label="Question", - placeholder="e.g. What is the invoice number?", - lines=1, - max_lines=1, - ) - model = gr.Radio( - choices=list(CHECKPOINTS.keys()), - value=list(CHECKPOINTS.keys())[0], - label="Model", - ) - - with gr.Row(): - clear_button = gr.Button("Clear", variant="secondary") - submit_button = gr.Button( - "Submit", variant="primary", elem_id="submit-button" - ) - with gr.Column(): - output_text = gr.Textbox( - label="Top Answer", visible=False, elem_id="answer" - ) - output = gr.JSON(label="Output", visible=False) - - for cb in [img_clear_button, clear_button]: - cb.click( - lambda _: ( - gr.update(visible=False, value=None), - None, - gr.update(visible=False, value=None), - gr.update(visible=False, value=None), - gr.update(visible=False), - None, - None, - None, - gr.update(visible=False, value=None), - None, - ), - inputs=clear_button, - outputs=[ - image, - document, - output, - output_text, - img_clear_button, - example_image, - upload, - url, - url_error, - question, - ], - ) - - upload.change( - fn=process_upload, - inputs=[upload], - outputs=[document, image, img_clear_button, output, output_text, url_error], - ) - submit.click( - fn=process_path, - inputs=[url], - outputs=[document, image, img_clear_button, output, output_text, url_error], - ) - - question.submit( - fn=process_question, - inputs=[question, document, model], - outputs=[image, output, output_text], - ) - - submit_button.click( - process_question, - inputs=[question, document, model], - outputs=[image, output, output_text], - ) - - model.change( - process_question, - inputs=[question, document, model], - outputs=[image, output, output_text], - ) - - example_image.change( - fn=load_example_document, - inputs=[example_image, example_question, model], - outputs=[document, question, image, img_clear_button, output, output_text], - ) - -if __name__ == "__main__": - demo.launch(enable_queue=False) diff --git a/spaces/imseldrith/AI-Rephraser/README.md b/spaces/imseldrith/AI-Rephraser/README.md deleted file mode 100644 index 
20a8431766fc7a2ce3d07c5642f0f4e6674c9a2a..0000000000000000000000000000000000000000 --- a/spaces/imseldrith/AI-Rephraser/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: AI Rephraser -emoji: 🐢 -colorFrom: red -colorTo: gray -sdk: gradio -sdk_version: 3.17.0 -app_file: app.py -pinned: false -license: openrail ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/inplisQlawa/anything-midjourney-v4-1/Kapandji Anatomie Fonctionnelle Pdf 23.md b/spaces/inplisQlawa/anything-midjourney-v4-1/Kapandji Anatomie Fonctionnelle Pdf 23.md deleted file mode 100644 index a79945086fc4b63bfe4eb26825f9b986966c1370..0000000000000000000000000000000000000000 --- a/spaces/inplisQlawa/anything-midjourney-v4-1/Kapandji Anatomie Fonctionnelle Pdf 23.md +++ /dev/null @@ -1,13 +0,0 @@ -

        kapandji anatomie fonctionnelle pdf 23


        DOWNLOAD ►►►►► https://urlin.us/2uEwh3



- -kapandji anatomie fonctionnelle pdf 23 mb. -We invite you to a website, a project offering accessories. -Pdf, no prior knowledge required. -A4 citation book of the year pdf. -Download the book for free in the electronic library TheLib. -Search the world's information, including webpages, images, videos and more. -Google has many special features to help you find exactly what you're looking for. -Free download 8a78ff9644
        -
        -
        -

        diff --git a/spaces/inplisQlawa/anything-midjourney-v4-1/Matrixjewelrysoftwarecrackdownload [REPACK].md b/spaces/inplisQlawa/anything-midjourney-v4-1/Matrixjewelrysoftwarecrackdownload [REPACK].md deleted file mode 100644 index f9094af1b0239a107db8b329b60df0377d80515f..0000000000000000000000000000000000000000 --- a/spaces/inplisQlawa/anything-midjourney-v4-1/Matrixjewelrysoftwarecrackdownload [REPACK].md +++ /dev/null @@ -1,10 +0,0 @@ - -

        https://urilerof.com/covans-recipe-fancy-chicken-salad-770-ver.rar PC Feine Kunst bf v1.1.1.rar.X86,matrixjewelrysoftwarecrackdownload, 33 237 https://trello.com/c/vgrsy7Tr/16-33-party-3-thingiverse-matrixjewelrysoftwarecrackdownload

        -

        https://trello.com/c/Ceq23bJI/52-matrixjewelrysoftwarecrackdownload-better https://preelectyum.store/matrixjewelrysoftwarecrackdownload. crack matrixjewelrysoftwarecrackdownload crack matrixjewelrysoftwarecrackdownload.

        -

        matrixjewelrysoftwarecrackdownload


        Download Zip ★★★★★ https://urlin.us/2uEx0G



        -

        https://trello.com/c/zk6eATpP/16-2021-magix-acid-pro-8-v801-x86-crack https://preelectyum.store/big-ten-2-2-download-date-game-sx. https://trello.com/c/Ceq23bJI/52-matrixjewelrysoftwarecrackdownload-better https://cdn.trello.com/assets/2/b/1/4/f/8/medium/entrena hipotetica de cero alcachofas. Matriplot 2010 - movercita - download.rar. #

        -

        https://trello.com/c/zk6eATpP/16-2021-magix-acid-pro-8-v801-x86-crack https://preelectyum.store/matrixjewelrysoftwarecrackdownload. https://trello.com/c/Ceq23bJI/52-matrixjewelrysoftwarecrackdownload-better https://trello.com/c/Ceq23bJI/52-matrixjewelrysoftwarecrackdownload-better.

        -

        https://trello.com/c/Ceq23bJI/52-matrixjewelrysoftwarecrackdownload-better https://preelectyum.store/big-ten-2-2-download-date-game-sx. https://trello.com/c/Ceq23bJI/52-matrixjewelrysoftwarecrackdownload-better https://preelectyum.

        -

        20 06 44.four poster three.0.0.crack.for.windows.mac.denis-morandini.co https://trello.com/c/jAH8Mfhp/14-matrixjewelrysoftwarecrackdownload-best. http://store.myhtml.at/matrixjewelrysoftwarecrackdownload.htm.hdtvbox-pvr-usb-black-applematrixjewelrysoftwarecrackdownload https://trello.com/c/AAmV3RQq/51-matrixjewelrysoftwarecrackdownload-link.

        899543212b
        -
        -
        \ No newline at end of file diff --git a/spaces/inplisQlawa/anything-midjourney-v4-1/Mixedinkeymashup2crack.md b/spaces/inplisQlawa/anything-midjourney-v4-1/Mixedinkeymashup2crack.md deleted file mode 100644 index dabf007a39f7bd1337a4b4e931cf26751568d807..0000000000000000000000000000000000000000 --- a/spaces/inplisQlawa/anything-midjourney-v4-1/Mixedinkeymashup2crack.md +++ /dev/null @@ -1,11 +0,0 @@ - -

        mixedinkeymashup2crack andyo 07/07/2015 12:31 PM. RawTranser v.0.2.0 (Build 2.4.0) (Free) by Automatic Transferalink to crack continuous built by.cannondefender v2.5.1 (Free).mixedinkeymashup2crack Crack - Free.

        -

        mixedinkeymashup2crack


Download https://urlin.us/2uEwGw



        -

        mixedinkeymashup2crack bf997631f mi8auce 7/25/2014 03:13 PM Mi8auce. (#) m.f. # p.m.. Win.8 x64. Auto.Crack v.1.0 (Win.v.1.0) by [Mi8auce] (Free).mixedinkeymashup2crack XnryptoCrack v.1.0 (Win.v.1.0) by [Mi8auce] (Free) XnryptoCrack Login,XnryptoCrack Crack,XnryptoCrack How To Crack,XnryptoCrack Crack v.1.0 (Win.v.1.0) by [Mi8auce] (Free) Vcshre (Win.v.1.0) by [Mi8auce] (Free) https://trello.

        -

        https://trello.com/c/D6rzcgVg/18-mixedinkeymashup2crack https://trello.com/c/IjBptKZT/30-mixedinkeymashup2crack https://trello.com/c/a7UxlgJg/28-mixedinkeymashup2crack https://trello.com/c/HvtNPaAt/22-mixedinkeymashup2crack https://trello.com/c/VBN9RqLJ/40-mixedinkeymashup2crack

        -

        . https://trello.com/c/y8DpBHpT/mixedinkeymashup2crack mixedinkeymashup2crack

. https://trello.com/c/U1Oy4yQP/25-mixedinkeymashup2crack https://trello.com/c/wfRjP4zz/31-mixedinkeymashup2crack https://trello.com/c/N7x3vRAu/27-mixedinkeymashup2crack https://trello.com/c/3Sn17HXq/24-mixedinkeymashup2crack

        -

        . https://trello.com/c/lzDm3Qk7/32-mixedinkeymashup2crack https://trello.com/c/g7kMb8x7/40-soft-restaurant-2011-ver-38-crack-added-by-users https://trello.com/c/DyV51Wfr/23-mixedinkeymashup2crack

        -

        -

        https://trello.com/c/bwwtcMKb/25-mixedinkeymashup2crack https://trello.com/c/hKl3thQe/18-soft-restaurant-2011-ver-38-crack-added-by-users https://trello.com/c/T9iHXW0j/40-mixedinkeymashup2crack-link https://trello.com/c/igZNhUzb/18-best-switch-bot-v3-pumt2

        899543212b
        -
        -
        \ No newline at end of file diff --git a/spaces/inplisQlawa/anything-midjourney-v4-1/NCH Switch Sound File Converter Plus 11.15.2 Keygen Crack Utorrent [HOT].md b/spaces/inplisQlawa/anything-midjourney-v4-1/NCH Switch Sound File Converter Plus 11.15.2 Keygen Crack Utorrent [HOT].md deleted file mode 100644 index 7eaf7817fa7ce2fab61f6c796a6bb68b45dfe49e..0000000000000000000000000000000000000000 --- a/spaces/inplisQlawa/anything-midjourney-v4-1/NCH Switch Sound File Converter Plus 11.15.2 Keygen Crack Utorrent [HOT].md +++ /dev/null @@ -1,7 +0,0 @@ - -

nch switch audio converter plus crack is a powerful audio conversion utility that not only converts audio but also changes the audio format. it also works as a recorder and has the option to record audio directly from the microphone. the user interface is very simple and easy to use, and the program requires no prior experience. this is a free to use program that converts audio to the format of your choice and can also change the audio format of a video file to the one you want to use.

        -

switch audio converter plus crack supports almost all of the popular audio formats. the list of supported formats is huge and includes mp3, ogg, wma, wav, aac, flac, ape, amr and many more. the tool is easy to use and is a powerful audio converter that can convert any audio file to any other audio file format. the converter plus version allows you to choose the output format, the output file quality, the desired output path, the desired speed, and the desired audio bitrate. switch audio converter plus is a free audio converter tool for converting audio files from one format to another, with a very simple interface that requires no prior experience.

        -

        NCH Switch Sound File Converter Plus 11.15.2 Keygen Crack utorrent


        DOWNLOAD ►►► https://urlin.us/2uExRd



        -

nch switch audio converter plus crack works well with almost all the popular audio formats, including mp3, wma, ogg, aac, flac, ape, amr and many more. this powerful audio converter tool can convert any audio file to any other audio file format. the converter plus version allows you to choose the output format, the output file quality, the desired output path, the desired speed, and the desired audio bitrate. it is free to use, has a very simple interface, and requires no prior experience.

        899543212b
        -
        -
        \ No newline at end of file diff --git a/spaces/inreVtussa/clothingai/Examples/Adobe After Effects CC 2019 16.1 [UPD] Crack Activation Key Free Download.md b/spaces/inreVtussa/clothingai/Examples/Adobe After Effects CC 2019 16.1 [UPD] Crack Activation Key Free Download.md deleted file mode 100644 index 7b5bc4dc4a1a2aa73e58cd72ac451da6b9a0b532..0000000000000000000000000000000000000000 --- a/spaces/inreVtussa/clothingai/Examples/Adobe After Effects CC 2019 16.1 [UPD] Crack Activation Key Free Download.md +++ /dev/null @@ -1,27 +0,0 @@ -
        -``` -

        Adobe After Effects CC 2019 16.1 Crack Activation Key Free Download

        -

        Adobe After Effects CC 2019 16.1 Crack is a powerful and professional video editing software that allows you to create stunning motion graphics and visual effects. Whether you are working on a film, a TV show, a commercial, or a web video, Adobe After Effects CC 2019 16.1 Crack can help you bring your creative vision to life.

        -

        Adobe After Effects CC 2019 16.1 Crack offers many new features and enhancements, such as the Content-Aware Fill tool, which can automatically remove unwanted objects from your video clips; the Advanced Puppet Tool, which can create realistic animations for your characters and objects; the Expression Editor, which can simplify and speed up your workflow with expressions; and the Essential Graphics Panel, which can let you create and customize motion graphics templates for easy reuse.

        -

        Adobe After Effects CC 2019 16.1 Crack Activation Key Free Download


        Download ✏ ✏ ✏ https://tiurll.com/2uClc8



        -

        Adobe After Effects CC 2019 16.1 Crack also supports a wide range of formats and codecs, including VR and 360-degree video, as well as integration with other Adobe applications, such as Premiere Pro, Photoshop, Illustrator, Audition, and Media Encoder. With Adobe After Effects CC 2019 16.1 Crack, you can unleash your creativity and deliver amazing results for any project.

        -

        If you want to download and install Adobe After Effects CC 2019 16.1 Crack for free, you can follow these simple steps:

        -
          -
        1. Download the setup file from the link below.
        2. -
        3. Extract the file using WinRAR or any other extraction tool.
        4. -
        5. Run the setup file and follow the instructions.
        6. -
        7. Copy the crack file and paste it into the installation folder.
        8. -
        9. Run the program and enjoy!
        10. -
        -

        Download link: https://example.com/adobe-after-effects-cc-2019-16-1-crack/

        -``` - -``` -

        Adobe After Effects CC 2019 16.1 Crack is not only a video editing software, but also a powerful compositing and animation tool. You can use it to create stunning visual effects for your films, such as explosions, fire, smoke, rain, snow, and more. You can also use it to create motion graphics for your titles, logos, lower thirds, transitions, and more. You can even use it to create interactive animations for your web and mobile applications.

        -

        Adobe After Effects CC 2019 16.1 Crack has a user-friendly interface that allows you to work efficiently and creatively. You can easily preview your work in real-time, adjust your settings and parameters, and apply effects and presets. You can also use the timeline panel to organize your layers and keyframes, the project panel to manage your assets and compositions, the effects panel to browse and apply effects, and the tools panel to access various tools for editing and manipulating your elements.

        -

        Adobe After Effects CC 2019 16.1 Crack also has a rich and diverse community of users and developers who share their tips, tutorials, plugins, scripts, templates, and more. You can find many resources online that can help you learn and improve your skills with Adobe After Effects CC 2019 16.1 Crack. You can also join forums and groups where you can ask questions, get feedback, and collaborate with other users.

        -

        Adobe After Effects CC 2019 16.1 Crack is a must-have software for anyone who wants to create amazing videos with stunning effects and animations. It is a versatile and powerful tool that can handle any challenge and meet any demand. With Adobe After Effects CC 2019 16.1 Crack, you can turn your imagination into reality.

        -```

        -

        d5da3c52bf
        -
        -
        \ No newline at end of file diff --git a/spaces/inreVtussa/clothingai/Examples/Anurag I21 Crack Zip File REPACK.md b/spaces/inreVtussa/clothingai/Examples/Anurag I21 Crack Zip File REPACK.md deleted file mode 100644 index ea4ee482d1e0d55e899fa72025904ef1c302c44f..0000000000000000000000000000000000000000 --- a/spaces/inreVtussa/clothingai/Examples/Anurag I21 Crack Zip File REPACK.md +++ /dev/null @@ -1,6 +0,0 @@ -

        anurag i21 crack zip file


        Download File ►►► https://tiurll.com/2uCjWc



        -
-October 19, 2017 - You can now download the Anurag i21 Photoshop plugin from here. Anurag i21 is compatible with Adobe Photoshop 9 (CS2), Adobe Photoshop 10 (CS3) and Adobe Photoshop CC (CS6), also known as Photoshop CC. Over the years, Adobe has seriously changed its approach to Photoshop, which makes working with Photoshop much easier. Photoshop CC (CS5 and CS6) has many new features such as more powerful color tools, new drawing tools, image processing filters, and more. Photoshop CC (CS6) also allows you to create 3D objects in Photoshop. 8a78ff9644
        -
        -
        -

        diff --git a/spaces/inreVtussa/clothingai/Examples/Codevisionavr V3 24 Crack Cocaine BETTER.md b/spaces/inreVtussa/clothingai/Examples/Codevisionavr V3 24 Crack Cocaine BETTER.md deleted file mode 100644 index 64a7e589f3951319b2a705b429254627f060beab..0000000000000000000000000000000000000000 --- a/spaces/inreVtussa/clothingai/Examples/Codevisionavr V3 24 Crack Cocaine BETTER.md +++ /dev/null @@ -1,6 +0,0 @@ -

        codevisionavr v3 24 crack cocaine


Download https://tiurll.com/2uCjWH



        -
-24. 2.2.4.3 Executing a user-defined program before Make. 2.3.3 Serial Communication Terminal. 3. CodeVisionAVR C Compiler Reference Guide. 4. Libraries. 5. Working with peripherals. 6. Using the loader. 7. Creating programs in CodeVisionAVR. 8. Using the debugger. 9. Using the CodeVisionAVR functions. 10. Fixing programs. 11. Getting and editing data. 12. Installing and removing libraries. 13. Working with program text. 14. Using variables. 15. Using input, output and variable output functions. 16. Debugging programs. 17. Saving data in a file. 18. Saving files. 19. Creating files. 20. 8a78ff9644
        -
        -
        -

        diff --git a/spaces/ismot/1702t1/dataset/pano_s2d3d_dataset.py b/spaces/ismot/1702t1/dataset/pano_s2d3d_dataset.py deleted file mode 100644 index b6939fea1a08e5f1c1eb985b85fc739be0c53b04..0000000000000000000000000000000000000000 --- a/spaces/ismot/1702t1/dataset/pano_s2d3d_dataset.py +++ /dev/null @@ -1,107 +0,0 @@ -""" -@date: 2021/6/16 -@description: -""" -import math -import os -import numpy as np - -from dataset.communal.read import read_image, read_label -from dataset.communal.base_dataset import BaseDataset -from utils.logger import get_logger - - -class PanoS2D3DDataset(BaseDataset): - def __init__(self, root_dir, mode, shape=None, max_wall_num=0, aug=None, camera_height=1.6, logger=None, - split_list=None, patch_num=256, keys=None, for_test_index=None, subset=None): - super().__init__(mode, shape, max_wall_num, aug, camera_height, patch_num, keys) - - if logger is None: - logger = get_logger() - self.root_dir = root_dir - - if mode is None: - return - label_dir = os.path.join(root_dir, 'valid' if mode == 'val' else mode, 'label_cor') - img_dir = os.path.join(root_dir, 'valid' if mode == 'val' else mode, 'img') - - if split_list is None: - split_list = [name.split('.')[0] for name in os.listdir(label_dir) if - not name.startswith('.') and name.endswith('txt')] - - split_list.sort() - - assert subset == 'pano' or subset == 's2d3d' or subset is None, 'error subset' - if subset == 'pano': - split_list = [name for name in split_list if 'pano_' in name] - logger.info(f"Use PanoContext Dataset") - elif subset == 's2d3d': - split_list = [name for name in split_list if 'camera_' in name] - logger.info(f"Use Stanford2D3D Dataset") - - if for_test_index is not None: - split_list = split_list[:for_test_index] - - self.data = [] - invalid_num = 0 - for name in split_list: - img_path = os.path.join(img_dir, f"{name}.png") - label_path = os.path.join(label_dir, f"{name}.txt") - - if not os.path.exists(img_path): - logger.warning(f"{img_path} not exists") - invalid_num += 1 - continue - if not os.path.exists(label_path): - logger.warning(f"{label_path} not exists") - invalid_num += 1 - continue - - with open(label_path, 'r') as f: - lines = [line for line in f.readlines() if - len([c for c in line.split(' ') if c[0].isnumeric()]) > 1] - if len(lines) % 2 != 0: - invalid_num += 1 - continue - self.data.append([img_path, label_path]) - - logger.info( - f"Build dataset mode: {self.mode} valid: {len(self.data)} invalid: {invalid_num}") - - def __getitem__(self, idx): - rgb_path, label_path = self.data[idx] - label = read_label(label_path, data_type='Pano_S2D3D') - image = read_image(rgb_path, self.shape) - output = self.process_data(label, image, self.patch_num) - return output - - -if __name__ == '__main__': - - modes = ['test', 'val', 'train'] - for i in range(1): - for mode in modes: - print(mode) - mp3d_dataset = PanoS2D3DDataset(root_dir='../src/dataset/pano_s2d3d', mode=mode, aug={ - # 'STRETCH': True, - # 'ROTATE': True, - # 'FLIP': True, - # 'GAMMA': True - }) - continue - save_dir = f'../src/dataset/pano_s2d3d/visualization/{mode}' - if not os.path.isdir(save_dir): - os.makedirs(save_dir) - - bar = tqdm(mp3d_dataset, ncols=100) - for data in bar: - bar.set_description(f"Processing {data['id']}") - boundary_list = depth2boundaries(data['ratio'], data['depth'], step=None) - pano_img = draw_boundaries(data['image'].transpose(1, 2, 0), boundary_list=boundary_list, show=False) - Image.fromarray((pano_img * 255).astype(np.uint8)).save( - os.path.join(save_dir, f"{data['id']}_boundary.png")) 
- - floorplan = draw_floorplan(uv2xyz(boundary_list[0])[..., ::2], show=False, - marker_color=None, center_color=0.8, show_radius=None) - Image.fromarray((floorplan.squeeze() * 255).astype(np.uint8)).save( - os.path.join(save_dir, f"{data['id']}_floorplan.png")) diff --git a/spaces/ismot/1702t1/utils/conversion.py b/spaces/ismot/1702t1/utils/conversion.py deleted file mode 100644 index 906d5f6dcbe635e1d2a67e032fb2de30a9dee5fa..0000000000000000000000000000000000000000 --- a/spaces/ismot/1702t1/utils/conversion.py +++ /dev/null @@ -1,346 +0,0 @@ -""" -@date: 2021/06/19 -@description: -Specification of 4 coordinate systems: -Pixel coordinates (used in panoramic images), the range is related to the image size, -generally converted to UV coordinates first, the first is horizontal coordinates, -increasing to the right, the second is column coordinates, increasing down - -Uv coordinates (used in panoramic images), the range is [0~1], the upper left corner is the origin, -u is the abscissa and increases to the right, V is the column coordinate and increases to the right - -Longitude and latitude coordinates (spherical), the range of longitude lon is [-pi~ PI], -and the range of dimension is [-pi/2~ PI /2]. The center of the panorama is the origin, -and the longitude increases to the right and the dimension increases to the down - -Xyz coordinate (used in 3-dimensional space, of course, -it can also represent longitude and latitude coordinates on the sphere). -If on the sphere, the coordinate mode length is 1, when y is projected to the height of the camera, -the real position information of space points will be obtained - -Correspondence between longitude and latitude coordinates and xyz coordinates: - | -pi/2 - | - lef _ _ _ _ _ |_ _ _ _ _ - -pi / | \ - pi | - - - - - -\ - z 0 mid - right \_ _ _ _ _ /_|_ _ _ _ _ _/ - / | - / | - x/ | y pi/2 -""" - -import numpy as np -import torch -import functools - - -@functools.lru_cache() -def get_u(w, is_np, b=None): - u = pixel2uv(np.array(range(w)) if is_np else torch.arange(0, w), w=w, axis=0) - if b is not None: - u = u[np.newaxis].repeat(b) if is_np else u.repeat(b, 1) - return u - - -@functools.lru_cache() -def get_lon(w, is_np, b=None): - lon = pixel2lonlat(np.array(range(w)) if is_np else torch.arange(0, w), w=w, axis=0) - if b is not None: - lon = lon[np.newaxis].repeat(b, axis=0) if is_np else lon.repeat(b, 1) - return lon - - -def pixel2uv(pixel, w=1024, h=512, axis=None): - pixel = pixel.astype(np.float) if isinstance(pixel, np.ndarray) else pixel.float() - # +0.5 will make left/right and up/down coordinates symmetric - if axis is None: - u = (pixel[..., 0:1] + 0.5) / w - v = (pixel[..., 1:] + 0.5) / h - elif axis == 0: - u = (pixel + 0.5) / (w * 1.0) - return u - elif axis == 1: - v = (pixel + 0.5) / (h * 1.0) - return v - else: - assert False, "axis error" - - lst = [u, v] - uv = np.concatenate(lst, axis=-1) if isinstance(pixel, np.ndarray) else torch.cat(lst, dim=-1) - return uv - - -def pixel2lonlat(pixel, w=1024, h=512, axis=None): - uv = pixel2uv(pixel, w, h, axis) - lonlat = uv2lonlat(uv, axis) - return lonlat - - -def pixel2xyz(pixel, w=1024, h=512): - lonlat = pixel2lonlat(pixel, w, h) - xyz = lonlat2xyz(lonlat) - return xyz - - -def uv2lonlat(uv, axis=None): - if axis is None: - lon = (uv[..., 0:1] - 0.5) * 2 * np.pi - lat = (uv[..., 1:] - 0.5) * np.pi - elif axis == 0: - lon = (uv - 0.5) * 2 * np.pi - return lon - elif axis == 1: - lat = (uv - 0.5) * np.pi - return lat - else: - assert False, "axis error" - - lst = [lon, lat] - lonlat 
= np.concatenate(lst, axis=-1) if isinstance(uv, np.ndarray) else torch.cat(lst, dim=-1) - return lonlat - - -def uv2xyz(uv, plan_y=None, spherical=False): - lonlat = uv2lonlat(uv) - xyz = lonlat2xyz(lonlat) - if spherical: - # Projection onto the sphere - return xyz - - if plan_y is None: - from utils.boundary import boundary_type - plan_y = boundary_type(uv) - # Projection onto the specified plane - xyz = xyz * (plan_y / xyz[..., 1])[..., None] - - return xyz - - -def lonlat2xyz(lonlat, plan_y=None): - lon = lonlat[..., 0:1] - lat = lonlat[..., 1:] - cos = np.cos if isinstance(lonlat, np.ndarray) else torch.cos - sin = np.sin if isinstance(lonlat, np.ndarray) else torch.sin - x = cos(lat) * sin(lon) - y = sin(lat) - z = cos(lat) * cos(lon) - lst = [x, y, z] - xyz = np.concatenate(lst, axis=-1) if isinstance(lonlat, np.ndarray) else torch.cat(lst, dim=-1) - - if plan_y is not None: - xyz = xyz * (plan_y / xyz[..., 1])[..., None] - - return xyz - - -##################### - - -def xyz2lonlat(xyz): - atan2 = np.arctan2 if isinstance(xyz, np.ndarray) else torch.atan2 - asin = np.arcsin if isinstance(xyz, np.ndarray) else torch.asin - norm = np.linalg.norm(xyz, axis=-1) if isinstance(xyz, np.ndarray) else torch.norm(xyz, p=2, dim=-1) - xyz_norm = xyz / norm[..., None] - x = xyz_norm[..., 0:1] - y = xyz_norm[..., 1:2] - z = xyz_norm[..., 2:] - lon = atan2(x, z) - lat = asin(y) - lst = [lon, lat] - lonlat = np.concatenate(lst, axis=-1) if isinstance(xyz, np.ndarray) else torch.cat(lst, dim=-1) - return lonlat - - -def xyz2uv(xyz): - lonlat = xyz2lonlat(xyz) - uv = lonlat2uv(lonlat) - return uv - - -def xyz2pixel(xyz, w=1024, h=512): - uv = xyz2uv(xyz) - pixel = uv2pixel(uv, w, h) - return pixel - - -def lonlat2uv(lonlat, axis=None): - if axis is None: - u = lonlat[..., 0:1] / (2 * np.pi) + 0.5 - v = lonlat[..., 1:] / np.pi + 0.5 - elif axis == 0: - u = lonlat / (2 * np.pi) + 0.5 - return u - elif axis == 1: - v = lonlat / np.pi + 0.5 - return v - else: - assert False, "axis error" - - lst = [u, v] - uv = np.concatenate(lst, axis=-1) if isinstance(lonlat, np.ndarray) else torch.cat(lst, dim=-1) - return uv - - -def lonlat2pixel(lonlat, w=1024, h=512, axis=None, need_round=True): - uv = lonlat2uv(lonlat, axis) - pixel = uv2pixel(uv, w, h, axis, need_round) - return pixel - - -def uv2pixel(uv, w=1024, h=512, axis=None, need_round=True): - """ - :param uv: [[u1, v1], [u2, v2] ...] 
- :param w: width of panorama image - :param h: height of panorama image - :param axis: sometimes the input data is only u(axis =0) or only v(axis=1) - :param need_round: - :return: - """ - if axis is None: - pu = uv[..., 0:1] * w - 0.5 - pv = uv[..., 1:] * h - 0.5 - elif axis == 0: - pu = uv * w - 0.5 - if need_round: - pu = pu.round().astype(np.int) if isinstance(uv, np.ndarray) else pu.round().int() - return pu - elif axis == 1: - pv = uv * h - 0.5 - if need_round: - pv = pv.round().astype(np.int) if isinstance(uv, np.ndarray) else pv.round().int() - return pv - else: - assert False, "axis error" - - lst = [pu, pv] - if need_round: - pixel = np.concatenate(lst, axis=-1).round().astype(np.int) if isinstance(uv, np.ndarray) else torch.cat(lst, - dim=-1).round().int() - else: - pixel = np.concatenate(lst, axis=-1) if isinstance(uv, np.ndarray) else torch.cat(lst, dim=-1) - pixel[..., 0] = pixel[..., 0] % w - pixel[..., 1] = pixel[..., 1] % h - - return pixel - - -##################### - - -def xyz2depth(xyz, plan_y=1): - """ - :param xyz: - :param plan_y: - :return: - """ - xyz = xyz * (plan_y / xyz[..., 1])[..., None] - xz = xyz[..., ::2] - depth = np.linalg.norm(xz, axis=-1) if isinstance(xz, np.ndarray) else torch.norm(xz, dim=-1) - return depth - - -def uv2depth(uv, plan_y=None): - if plan_y is None: - from utils.boundary import boundary_type - plan_y = boundary_type(uv) - - xyz = uv2xyz(uv, plan_y) - depth = xyz2depth(xyz, plan_y) - return depth - - -def lonlat2depth(lonlat, plan_y=None): - if plan_y is None: - from utils.boundary import boundary_type - plan_y = boundary_type(lonlat2uv(lonlat)) - - xyz = lonlat2xyz(lonlat, plan_y) - depth = xyz2depth(xyz, plan_y) - return depth - - -def depth2xyz(depth, plan_y=1): - """ - :param depth: [patch_num] or [b, patch_num] - :param plan_y: - :return: - """ - is_np = isinstance(depth, np.ndarray) - w = depth.shape[-1] - - lon = get_lon(w, is_np, b=depth.shape[0] if len(depth.shape) == 2 else None) - if not is_np: - lon = lon.to(depth.device) - - cos = np.cos if is_np else torch.cos - sin = np.sin if is_np else torch.sin - # polar covert to cartesian - if len(depth.shape) == 2: - b = depth.shape[0] - xyz = np.zeros((b, w, 3)) if is_np else torch.zeros((b, w, 3)) - else: - xyz = np.zeros((w, 3)) if is_np else torch.zeros((w, 3)) - - if not is_np: - xyz = xyz.to(depth.device) - - xyz[..., 0] = depth * sin(lon) - xyz[..., 1] = plan_y - xyz[..., 2] = depth * cos(lon) - return xyz - - -def depth2uv(depth, plan_y=1): - xyz = depth2xyz(depth, plan_y) - uv = xyz2uv(xyz) - return uv - - -def depth2pixel(depth, w=1024, h=512, need_round=True, plan_y=1): - uv = depth2uv(depth, plan_y) - pixel = uv2pixel(uv, w, h, need_round=need_round) - return pixel - - -if __name__ == '__main__': - a = np.array([[0.5, 1, 0.5]]) - a = xyz2pixel(a) - print(a) - - -if __name__ == '__main__1': - np.set_printoptions(suppress=True) - - a = np.array([[0, 0], [1023, 511]]) - a = pixel2xyz(a) - a = xyz2pixel(a) - print(a) - - ########### - a = torch.tensor([[0, 0], [1023, 511]]) - a = pixel2xyz(a) - a = xyz2pixel(a) - print(a) - - ########### - u = np.array([0, 256, 512, 1023]) - lon = pixel2lonlat(u, axis=0) - u = lonlat2pixel(lon, axis=0) - print(u) - - u = torch.tensor([0, 256, 512, 1023]) - lon = pixel2lonlat(u, axis=0) - u = lonlat2pixel(lon, axis=0) - print(u) - - ########### - v = np.array([0, 256, 511]) - lat = pixel2lonlat(v, axis=1) - v = lonlat2pixel(lat, axis=1) - print(v) - - v = torch.tensor([0, 256, 511]) - lat = pixel2lonlat(v, axis=1) - v = lonlat2pixel(lat, 
axis=1) - print(v) diff --git a/spaces/ivanmeyer/DreamlikeArt-PhotoReal-2.0/app.py b/spaces/ivanmeyer/DreamlikeArt-PhotoReal-2.0/app.py deleted file mode 100644 index ca8efd1958966aa79412bcedb0368a86d0e9e77f..0000000000000000000000000000000000000000 --- a/spaces/ivanmeyer/DreamlikeArt-PhotoReal-2.0/app.py +++ /dev/null @@ -1,159 +0,0 @@ -import gradio as gr -import os -import sys -from pathlib import Path -import random -import string -import time -from queue import Queue -from threading import Thread -import emoji - - - -text_gen=gr.Interface.load("spaces/phenomenon1981/MagicPrompt-Stable-Diffusion") -def get_prompts(prompt_text): - if prompt_text: - return text_gen("photo, " + prompt_text) - else: - return text_gen("") -proc1=gr.Interface.load("models/dreamlike-art/dreamlike-photoreal-2.0") - -def restart_script_periodically(): - while True: - random_time = random.randint(540, 600) - time.sleep(random_time) - os.execl(sys.executable, sys.executable, *sys.argv) - - -restart_thread = Thread(target=restart_script_periodically, daemon=True) -restart_thread.start() - - -queue = Queue() -queue_threshold = 100 - -def add_random_noise(prompt, noise_level=0.00): - if noise_level == 0: - noise_level = 0.00 - percentage_noise = noise_level * 5 - num_noise_chars = int(len(prompt) * (percentage_noise/100)) - noise_indices = random.sample(range(len(prompt)), num_noise_chars) - prompt_list = list(prompt) - noise_chars = list(string.ascii_letters + string.punctuation + ' ' + string.digits) - noise_chars.extend(['😍', '💩', '😂', '🤔', '😊', '🤗', '😭', '🙄', '😷', '🤯', '🤫', '🥴', '😴', '🤩', '🥳', '😔', '😩', '🤪', '😇', '🤢', '😈', '👹', '👻', '🤖', '👽', '💀', '🎃', '🎅', '🎄', '🎁', '🎂', '🎉', '🎈', '🎊', '🎮', '❤️', '💔', '💕', '💖', '💗', '🐶', '🐱', '🐭', '🐹', '🦊', '🐻', '🐨', '🐯', '🦁', '🐘', '🔥', '🌧️', '🌞', '🌈', '💥', '🌴', '🌊', '🌺', '🌻', '🌸', '🎨', '🌅', '🌌', '☁️', '⛈️', '❄️', '☀️', '🌤️', '⛅️', '🌥️', '🌦️', '🌧️', '🌩️', '🌨️', '🌫️', '☔️', '🌬️', '💨', '🌪️', '🌈']) - for index in noise_indices: - prompt_list[index] = random.choice(noise_chars) - return "".join(prompt_list) - - -def send_it1(inputs, noise_level, proc1=proc1): - prompt_with_noise = add_random_noise(inputs, noise_level) - while queue.qsize() >= queue_threshold: - time.sleep(2) - queue.put(prompt_with_noise) - output1 = proc1(prompt_with_noise) - return output1 - -def send_it2(inputs, noise_level, proc1=proc1): - prompt_with_noise = add_random_noise(inputs, noise_level) - while queue.qsize() >= queue_threshold: - time.sleep(2) - queue.put(prompt_with_noise) - output2 = proc1(prompt_with_noise) - return output2 - -#def send_it3(inputs, noise_level, proc1=proc1): - #prompt_with_noise = add_random_noise(inputs, noise_level) - #while queue.qsize() >= queue_threshold: - #time.sleep(2) - #queue.put(prompt_with_noise) - #output3 = proc1(prompt_with_noise) - #return output3 - -#def send_it4(inputs, noise_level, proc1=proc1): - #prompt_with_noise = add_random_noise(inputs, noise_level) - #while queue.qsize() >= queue_threshold: - #time.sleep(2) - #queue.put(prompt_with_noise) - #output4 = proc1(prompt_with_noise) - #return output4 - - - -with gr.Blocks(css='style.css') as demo: - gr.HTML( - """ -
        -
        -

        - Dreamlike Photoreal 2.0 -

        -
        -

        - Noise Level: Controls how much randomness is added to the input before it is sent to the model. Higher noise level produces more diverse outputs, while lower noise level produces similar outputs, - created by Phenomenon1981. -

        -

        - ❤️ Press the Like Button if you enjoy my space! ❤️ -

        -
        - """ - ) - with gr.Column(elem_id="col-container"): - with gr.Row(variant="compact"): - input_text = gr.Textbox( - label="Short Prompt", - show_label=False, - max_lines=2, - placeholder="Enter a basic idea and click 'Magic Prompt'. Got no ideas? No problem, Simply just hit the magic button!", - ).style( - container=False, - ) - see_prompts = gr.Button("✨ Magic Prompt ✨").style(full_width=False) - - - with gr.Row(variant="compact"): - prompt = gr.Textbox( - label="Enter your prompt", - show_label=False, - max_lines=2, - placeholder="Full Prompt", - ).style( - container=False, - ) - run = gr.Button("Generate Images").style(full_width=False) - - with gr.Row(): - with gr.Row(): - noise_level = gr.Slider(minimum=0.0, maximum=3, step=0.1, label="Noise Level") - with gr.Row(): - with gr.Row(): - output1=gr.Image(label="Dreamlike-photoreal-2.0",show_label=False) - output2=gr.Image(label="Dreamlike-photoreal-2.0",show_label=False) - - #with gr.Row(): - #output1=gr.Image() - - see_prompts.click(get_prompts, inputs=[input_text], outputs=[prompt], queue=False) - run.click(send_it1, inputs=[prompt, noise_level], outputs=[output1]) - run.click(send_it2, inputs=[prompt, noise_level], outputs=[output2]) - - - - with gr.Row(): - gr.HTML( - """ - -
        -

        Unleash your creative side and generate mesmerizing images with just a few clicks! Enter a spark of inspiration in the "Basic Idea" text box and click the "Magic Prompt" button to elevate it to a polished masterpiece. Make any final tweaks in the "Full Prompt" box and hit the "Generate Images" button to watch your vision come to life. Experiment with the "Noise Level" for a diverse range of outputs, from similar to wildly unique. Let the fun begin! -

        -
        - """ -) - - demo.launch(enable_queue=True, inline=True) - block.queue(concurrency_count=100) \ No newline at end of file diff --git a/spaces/ivelin/ui-refexp/README.md b/spaces/ivelin/ui-refexp/README.md deleted file mode 100644 index 3f4317a38c2a9c6e1a74c704377c1d1b8b3b90a0..0000000000000000000000000000000000000000 --- a/spaces/ivelin/ui-refexp/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: UI RefExp (by GuardianUI) -emoji: 🐕 -colorFrom: green -colorTo: green -sdk: gradio -sdk_version: 3.16.1 -app_file: app.py -pinned: false -license: agpl-3.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/jackyccl/segment-anything/groundingdino/version.py b/spaces/jackyccl/segment-anything/groundingdino/version.py deleted file mode 100644 index 3dc1f76bc69e3f559bee6253b24fc93acee9e1f9..0000000000000000000000000000000000000000 --- a/spaces/jackyccl/segment-anything/groundingdino/version.py +++ /dev/null @@ -1 +0,0 @@ -__version__ = "0.1.0" diff --git a/spaces/javakhangnguyen/Object-Remove/src/__init__.py b/spaces/javakhangnguyen/Object-Remove/src/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/jbilcke-hf/VideoChain-UI/Dockerfile b/spaces/jbilcke-hf/VideoChain-UI/Dockerfile deleted file mode 100644 index 91319be9b3dd35d916d18fba5260f51125c46b50..0000000000000000000000000000000000000000 --- a/spaces/jbilcke-hf/VideoChain-UI/Dockerfile +++ /dev/null @@ -1,65 +0,0 @@ -FROM node:18-alpine AS base - -# Install dependencies only when needed -FROM base AS deps -# Check https://github.com/nodejs/docker-node/tree/b4117f9333da4138b03a546ec926ef50a31506c3#nodealpine to understand why libc6-compat might be needed. -RUN apk add --no-cache libc6-compat -WORKDIR /app - -# Install dependencies based on the preferred package manager -COPY package.json yarn.lock* package-lock.json* pnpm-lock.yaml* ./ -RUN \ - if [ -f yarn.lock ]; then yarn --frozen-lockfile; \ - elif [ -f package-lock.json ]; then npm ci; \ - elif [ -f pnpm-lock.yaml ]; then yarn global add pnpm && pnpm i --frozen-lockfile; \ - else echo "Lockfile not found." && exit 1; \ - fi - -# Uncomment the following lines if you want to use a secret at buildtime, -# for example to access your private npm packages -# RUN --mount=type=secret,id=HF_EXAMPLE_SECRET,mode=0444,required=true \ -# $(cat /run/secrets/HF_EXAMPLE_SECRET) - -# Rebuild the source code only when needed -FROM base AS builder -WORKDIR /app -COPY --from=deps /app/node_modules ./node_modules -COPY . . - -# Next.js collects completely anonymous telemetry data about general usage. -# Learn more here: https://nextjs.org/telemetry -# Uncomment the following line in case you want to disable telemetry during the build. -# ENV NEXT_TELEMETRY_DISABLED 1 - -# RUN yarn build - -# If you use yarn, comment out this line and use the line above -RUN npm run build - -# Production image, copy all the files and run next -FROM base AS runner -WORKDIR /app - -ENV NODE_ENV production -# Uncomment the following line in case you want to disable telemetry during runtime. 
-# ENV NEXT_TELEMETRY_DISABLED 1 - -RUN addgroup --system --gid 1001 nodejs -RUN adduser --system --uid 1001 nextjs - -COPY --from=builder /app/public ./public - -# Automatically leverage output traces to reduce image size -# https://nextjs.org/docs/advanced-features/output-file-tracing -COPY --from=builder --chown=nextjs:nodejs /app/.next/standalone ./ -COPY --from=builder --chown=nextjs:nodejs /app/.next/static ./.next/static -COPY --from=builder --chown=nextjs:nodejs /app/.next/cache ./.next/cache -# COPY --from=builder --chown=nextjs:nodejs /app/.next/cache/fetch-cache ./.next/cache/fetch-cache - -USER nextjs - -EXPOSE 3000 - -ENV PORT 3000 - -CMD ["node", "server.js"] \ No newline at end of file diff --git a/spaces/jbilcke-hf/media-server/scripts/download_fresh_music.sh b/spaces/jbilcke-hf/media-server/scripts/download_fresh_music.sh deleted file mode 100644 index a9df6c02d1d39d51ad4daef482329598d627ac51..0000000000000000000000000000000000000000 --- a/spaces/jbilcke-hf/media-server/scripts/download_fresh_music.sh +++ /dev/null @@ -1,7 +0,0 @@ - -echo "Downloading latest music files.." -wget https://www.dropbox.com/s/fzxqbu87ul3ctqa/pack1.zip -unzip -o pack1.zip -d . -cp *.m4a $WEBTV_AUDIO_STORAGE_PATH_CHANNEL_1 -mv *.m4a $WEBTV_AUDIO_STORAGE_PATH_CHANNEL_2 -rm -Rf __MACOSX diff --git a/spaces/jeang/ernie_demo_toy/app.py b/spaces/jeang/ernie_demo_toy/app.py deleted file mode 100644 index 03761dcf9a1bd09c07331901d733abff18a04193..0000000000000000000000000000000000000000 --- a/spaces/jeang/ernie_demo_toy/app.py +++ /dev/null @@ -1,19 +0,0 @@ -import gradio as gr -from ernie.ernie import SentenceClassifier -from ernie import helper - -file_list=['config.json', 'tf_model.h5', 'tokenizer.json', 'vocab.txt', 'special_tokens_map.json', 'tokenizer_config.json'] - -for f in file_list: - helper.download_from_hub(repo_id='jeang/bert-finetuned-sentence-classification-toy', filename=f, cache_dir='model/') - -classifier = SentenceClassifier(model_path='model/', max_length=128, labels_no=2) - -def classify(sentence): - probability = classifier.predict_one(sentence)[1] - - return 'probability = ' + str(probability) + ' (' + ('positive' if probability >= 0.5 else 'negative') + ')' - - -iface = gr.Interface(fn=classify, inputs="text", outputs="text") -iface.launch() \ No newline at end of file diff --git a/spaces/jin-nin/artist/app.py b/spaces/jin-nin/artist/app.py deleted file mode 100644 index d395b49e5cd2ea468ef4f6449186a613a5da0f05..0000000000000000000000000000000000000000 --- a/spaces/jin-nin/artist/app.py +++ /dev/null @@ -1,117 +0,0 @@ -import gradio as gr -import os -import sys -from pathlib import Path -import random -import string -import time -from queue import Queue -from threading import Thread -import emoji - -def pipe(val): - return val - -ru2en = gr.Interface.load("huggingface/Helsinki-NLP/opus-mt-ru-en") - -text_gen=gr.Interface.load("spaces/phenomenon1981/MagicPrompt-Stable-Diffusion") -def get_prompts(prompt_text): - return text_gen("photo, " + prompt_text + ", high details, harmony, ideal proportions") - -proc1=gr.Interface.load("models/dreamlike-art/dreamlike-photoreal-2.0") - -def restart_script_periodically(): - while True: - time.sleep(600) # 10 minutes - try: - os.execl(sys.executable, sys.executable, *sys.argv) - except: - pass - -restart_thread = Thread(target=restart_script_periodically, daemon=True) -restart_thread.start() - -queue = Queue() -queue_threshold = 800 - -def add_random_noise(prompt, noise_level=0.00): - if noise_level == 0: - noise_level = 0.00 - # Get the 
percentage of characters to add as noise - percentage_noise = noise_level * 5 - # Get the number of characters to add as noise - num_noise_chars = int(len(prompt) * (percentage_noise/100)) - # Get the indices of the characters to add noise to - noise_indices = random.sample(range(len(prompt)), num_noise_chars) - # Add noise to the selected characters - prompt_list = list(prompt) - # Add numbers, special characters, and all emojis to the list of characters used to add noise - noise_chars = string.ascii_letters + string.punctuation + ' ' + string.digits + emoji.emojize(":all:") - for index in noise_indices: - prompt_list[index] = random.choice(noise_chars) - return "".join(prompt_list) - - -def send_it(inputs, noise_level, proc1=proc1): - prompt_with_noise = add_random_noise(inputs, noise_level) - output = proc1(prompt_with_noise) - return output - -with gr.Blocks(analytics_enabled=False, css='style.css') as demo: - with gr.Column(elem_id="col-container"): - - with gr.Row(elem_id="shorts"): - input_text_ru = gr.Textbox( - label="Short Ru", - placeholder="девушка", - ) - Translate = gr.Button("Ru ▶ En", elem_id="translate").style(full_width=False) - input_text = gr.Textbox( - label="Short En", - placeholder="girl", - ) - - with gr.Row(elem_id="longs"): - with gr.Column(scale=1000): - prompt = gr.Textbox( - label="Prompt", - placeholder="proto, girl, high details", - ) - with gr.Column(scale=1, elem_id="longs-fillers"): - Literally = gr.Button("◀ Literally") - Explain = gr.Button("◀ Explain") - - with gr.Row(elem_id="params"): - with gr.Column(): - noise_level = gr.Slider(minimum=0.0, value=1.0, maximum=3, step=0.1, label="Noise Level") - - with gr.Row(elem_id="paints"): - with gr.Column(): - run1 = gr.Button("🔻 Paint") - output1=gr.Image() - with gr.Column(): - run2 = gr.Button("🔻 Paint") - output2=gr.Image() - with gr.Column(): - run3 = gr.Button("🔻 Paint") - output3=gr.Image() - # with gr.Column(): - # run4 = gr.Button("🔻 Paint") - # output4=gr.Image() - - gr.Markdown("Honor **Artists**: Yayoi Kusama, Pablo Picasso, Leonardo da Vinci, Banksy, Rembrandt, Frida Kahlo, Vincent van Gogh, Henri Matisse, Salvador Dali, Claude Monet, Andy Warhol, Georgia O'Keeffe, Jackson Pollock, Marcel Duchamp, Edward Hopper, Willem de Kooning, Mark Rothko, David Hockney") - gr.Markdown("Popuar **Styles**: Anime, Abstract, Minimalist, Cyberpunk, Steampunk, Organic, Geometric, Sci-Fi, Futuristic, Vaporwave, Gothic") - gr.Markdown("Known **Moods**: joyful, light-hearted, exciting, calming, soothing, playful, fun, bright, colourful, dynamic, energetic, passionate, romantic, vibrant, vivid") - gr.Markdown("Typical **Techniques**: oil on canvas, chinese painting, graffiti, watercolour, graphite, cinematic, film noir, fluorescent, moody lighting, silhouette, ultraviolet, x-ray, olaroid, double exposure, fisheye lens, bokeh") - - Literally.click(pipe, inputs=[input_text], outputs=[prompt], queue=False) - Translate.click(ru2en, inputs=[input_text_ru], outputs=[input_text], queue=False, api_name="ru2en") - Explain.click(text_gen, inputs=[input_text], outputs=[prompt], queue=False, api_name="explain") - - run1.click(send_it, inputs=[prompt, noise_level], outputs=[output1], api_name="paint/0") - run2.click(send_it, inputs=[prompt, noise_level], outputs=[output2], api_name="paint/1") - run3.click(send_it, inputs=[prompt, noise_level], outputs=[output3], api_name="paint/2") - # run4.click(send_it, inputs=[prompt, noise_level], outputs=[output4]) - - demo.launch(enable_queue=True, inline=True, show_api=False) - 
block.queue(concurrency_count=2) diff --git a/spaces/jmesikto/whisper-webui/src/segments.py b/spaces/jmesikto/whisper-webui/src/segments.py deleted file mode 100644 index ec2650dceade5d0b2022264f6419115eab085aea..0000000000000000000000000000000000000000 --- a/spaces/jmesikto/whisper-webui/src/segments.py +++ /dev/null @@ -1,55 +0,0 @@ -from typing import Any, Dict, List - -import copy - -def merge_timestamps(timestamps: List[Dict[str, Any]], merge_window: float = 5, max_merge_size: float = 30, padding_left: float = 1, padding_right: float = 1): - result = [] - - if len(timestamps) == 0: - return result - if max_merge_size is None: - return timestamps - - if padding_left is None: - padding_left = 0 - if padding_right is None: - padding_right = 0 - - processed_time = 0 - current_segment = None - - for i in range(len(timestamps)): - next_segment = timestamps[i] - - delta = next_segment['start'] - processed_time - - # Note that segments can still be longer than the max merge size, they just won't be merged in that case - if current_segment is None or (merge_window is not None and delta > merge_window) \ - or next_segment['end'] - current_segment['start'] > max_merge_size: - # Finish the current segment - if current_segment is not None: - # Add right padding - finish_padding = min(padding_right, delta / 2) if delta < padding_left + padding_right else padding_right - current_segment['end'] += finish_padding - delta -= finish_padding - - result.append(current_segment) - - # Start a new segment - current_segment = copy.deepcopy(next_segment) - - # Pad the segment - current_segment['start'] = current_segment['start'] - min(padding_left, delta) - processed_time = current_segment['end'] - - else: - # Merge the segment - current_segment['end'] = next_segment['end'] - processed_time = current_segment['end'] - - # Add the last segment - if current_segment is not None: - current_segment['end'] += padding_right - result.append(current_segment) - - return result \ No newline at end of file diff --git a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/dns/rdtypes/ANY/TLSA.py b/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/dns/rdtypes/ANY/TLSA.py deleted file mode 100644 index c9ba199112f76f3c2cc797299ba6fe70e9550e31..0000000000000000000000000000000000000000 --- a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/dns/rdtypes/ANY/TLSA.py +++ /dev/null @@ -1,10 +0,0 @@ -# Copyright (C) Dnspython Contributors, see LICENSE for text of ISC license - -import dns.immutable -import dns.rdtypes.tlsabase - - -@dns.immutable.immutable -class TLSA(dns.rdtypes.tlsabase.TLSABase): - - """TLSA record""" diff --git a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/fontTools/mtiLib/__main__.py b/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/fontTools/mtiLib/__main__.py deleted file mode 100644 index 29c802bcc83b3ca35bbd0e6521f47a368b5f9092..0000000000000000000000000000000000000000 --- a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/fontTools/mtiLib/__main__.py +++ /dev/null @@ -1,5 +0,0 @@ -import sys -from fontTools.mtiLib import main - -if __name__ == "__main__": - sys.exit(main()) diff --git a/spaces/johnson906/recipedia/src/args.py b/spaces/johnson906/recipedia/src/args.py deleted file mode 100644 index 5be18618fc6c5d132f07a725d370eaf0bea9a6bd..0000000000000000000000000000000000000000 --- a/spaces/johnson906/recipedia/src/args.py +++ /dev/null @@ -1,168 +0,0 @@ -# Copyright (c) Facebook, Inc. 
and its affiliates. All Rights Reserved. - -import argparse -import os - - -def get_parser(): - - parser = argparse.ArgumentParser() - - parser.add_argument('--save_dir', type=str, default='path/to/save/models', - help='path where checkpoints will be saved') - - parser.add_argument('--project_name', type=str, default='inversecooking', - help='name of the directory where models will be saved within save_dir') - - parser.add_argument('--model_name', type=str, default='model', - help='save_dir/project_name/model_name will be the path where logs and checkpoints are stored') - - parser.add_argument('--transfer_from', type=str, default='', - help='specify model name to transfer from') - - parser.add_argument('--suff', type=str, default='', - help='the id of the dictionary to load for training') - - parser.add_argument('--image_model', type=str, default='resnet50', choices=['resnet18', 'resnet50', 'resnet101', - 'resnet152', 'inception_v3']) - - parser.add_argument('--recipe1m_dir', type=str, default='path/to/recipe1m', - help='directory where recipe1m dataset is extracted') - - parser.add_argument('--aux_data_dir', type=str, default='../data', - help='path to other necessary data files (eg. vocabularies)') - - parser.add_argument('--crop_size', type=int, default=224, help='size for randomly or center cropping images') - - parser.add_argument('--image_size', type=int, default=256, help='size to rescale images') - - parser.add_argument('--log_step', type=int , default=10, help='step size for printing log info') - - parser.add_argument('--learning_rate', type=float, default=0.001, - help='base learning rate') - - parser.add_argument('--scale_learning_rate_cnn', type=float, default=0.01, - help='lr multiplier for cnn weights') - - parser.add_argument('--lr_decay_rate', type=float, default=0.99, - help='learning rate decay factor') - - parser.add_argument('--lr_decay_every', type=int, default=1, - help='frequency of learning rate decay (default is every epoch)') - - parser.add_argument('--weight_decay', type=float, default=0.) - - parser.add_argument('--embed_size', type=int, default=512, - help='hidden size for all projections') - - parser.add_argument('--n_att', type=int, default=8, - help='number of attention heads in the instruction decoder') - - parser.add_argument('--n_att_ingrs', type=int, default=4, - help='number of attention heads in the ingredient decoder') - - parser.add_argument('--transf_layers', type=int, default=16, - help='number of transformer layers in the instruction decoder') - - parser.add_argument('--transf_layers_ingrs', type=int, default=4, - help='number of transformer layers in the ingredient decoder') - - parser.add_argument('--num_epochs', type=int, default=400, - help='maximum number of epochs') - - parser.add_argument('--batch_size', type=int, default=128) - - parser.add_argument('--num_workers', type=int, default=8) - - parser.add_argument('--dropout_encoder', type=float, default=0.3, - help='dropout ratio for the image and ingredient encoders') - - parser.add_argument('--dropout_decoder_r', type=float, default=0.3, - help='dropout ratio in the instruction decoder') - - parser.add_argument('--dropout_decoder_i', type=float, default=0.3, - help='dropout ratio in the ingredient decoder') - - parser.add_argument('--finetune_after', type=int, default=-1, - help='epoch to start training cnn. -1 is never, 0 is from the beginning') - - parser.add_argument('--loss_weight', nargs='+', type=float, default=[1.0, 0.0, 0.0, 0.0], - help='training loss weights. 
1) instruction, 2) ingredient, 3) eos 4) cardinality') - - parser.add_argument('--max_eval', type=int, default=4096, - help='number of validation samples to evaluate during training') - - parser.add_argument('--label_smoothing_ingr', type=float, default=0.1, - help='label smoothing for bce loss for ingredients') - - parser.add_argument('--patience', type=int, default=50, - help='maximum number of epochs to allow before early stopping') - - parser.add_argument('--maxseqlen', type=int, default=15, - help='maximum length of each instruction') - - parser.add_argument('--maxnuminstrs', type=int, default=10, - help='maximum number of instructions') - - parser.add_argument('--maxnumims', type=int, default=5, - help='maximum number of images per sample') - - parser.add_argument('--maxnumlabels', type=int, default=20, - help='maximum number of ingredients per sample') - - parser.add_argument('--es_metric', type=str, default='loss', choices=['loss', 'iou_sample'], - help='early stopping metric to track') - - parser.add_argument('--eval_split', type=str, default='val') - - parser.add_argument('--numgens', type=int, default=3) - - parser.add_argument('--greedy', dest='greedy', action='store_true', - help='enables greedy sampling (inference only)') - parser.set_defaults(greedy=False) - - parser.add_argument('--temperature', type=float, default=1.0, - help='sampling temperature (when greedy is False)') - - parser.add_argument('--beam', type=int, default=-1, - help='beam size. -1 means no beam search (either greedy or sampling)') - - parser.add_argument('--ingrs_only', dest='ingrs_only', action='store_true', - help='train or evaluate the model only for ingredient prediction') - parser.set_defaults(ingrs_only=False) - - parser.add_argument('--recipe_only', dest='recipe_only', action='store_true', - help='train or evaluate the model only for instruction generation') - parser.set_defaults(recipe_only=False) - - parser.add_argument('--log_term', dest='log_term', action='store_true', - help='if used, shows training log in stdout instead of saving it to a file.') - parser.set_defaults(log_term=False) - - parser.add_argument('--notensorboard', dest='tensorboard', action='store_false', - help='if used, tensorboard logs will not be saved') - parser.set_defaults(tensorboard=True) - - parser.add_argument('--resume', dest='resume', action='store_true', - help='resume training from the checkpoint in model_name') - parser.set_defaults(resume=False) - - parser.add_argument('--nodecay_lr', dest='decay_lr', action='store_false', - help='disables learning rate decay') - parser.set_defaults(decay_lr=True) - - parser.add_argument('--load_jpeg', dest='use_lmdb', action='store_false', - help='if used, images are loaded from jpg files instead of lmdb') - parser.set_defaults(use_lmdb=True) - - parser.add_argument('--get_perplexity', dest='get_perplexity', action='store_true', - help='used to get perplexity in evaluation') - parser.set_defaults(get_perplexity=False) - - parser.add_argument('--use_true_ingrs', dest='use_true_ingrs', action='store_true', - help='if used, true ingredients will be used as input to obtain the recipe in evaluation') - parser.set_defaults(use_true_ingrs=False) - - args = parser.parse_args() - - return args diff --git a/spaces/jone/Music_Source_Separation/bytesep/data/data_modules.py b/spaces/jone/Music_Source_Separation/bytesep/data/data_modules.py deleted file mode 100644 index e37b4109f8b915ea864b19795374038184388308..0000000000000000000000000000000000000000 --- 
a/spaces/jone/Music_Source_Separation/bytesep/data/data_modules.py +++ /dev/null @@ -1,187 +0,0 @@ -from typing import Dict, List, NoReturn, Optional - -import h5py -import librosa -import numpy as np -import torch -from pytorch_lightning.core.datamodule import LightningDataModule - -from bytesep.data.samplers import DistributedSamplerWrapper -from bytesep.utils import int16_to_float32 - - -class DataModule(LightningDataModule): - def __init__( - self, - train_sampler: object, - train_dataset: object, - num_workers: int, - distributed: bool, - ): - r"""Data module. - - Args: - train_sampler: Sampler object - train_dataset: Dataset object - num_workers: int - distributed: bool - """ - super().__init__() - self._train_sampler = train_sampler - self.train_dataset = train_dataset - self.num_workers = num_workers - self.distributed = distributed - - def setup(self, stage: Optional[str] = None) -> NoReturn: - r"""called on every device.""" - - # SegmentSampler is used for selecting segments for training. - # On multiple devices, each SegmentSampler samples a part of mini-batch - # data. - if self.distributed: - self.train_sampler = DistributedSamplerWrapper(self._train_sampler) - - else: - self.train_sampler = self._train_sampler - - def train_dataloader(self) -> torch.utils.data.DataLoader: - r"""Get train loader.""" - train_loader = torch.utils.data.DataLoader( - dataset=self.train_dataset, - batch_sampler=self.train_sampler, - collate_fn=collate_fn, - num_workers=self.num_workers, - pin_memory=True, - ) - - return train_loader - - -class Dataset: - def __init__(self, augmentor: object, segment_samples: int): - r"""Used for getting data according to a meta. - - Args: - augmentor: Augmentor class - segment_samples: int - """ - self.augmentor = augmentor - self.segment_samples = segment_samples - - def __getitem__(self, meta: Dict) -> Dict: - r"""Return data according to a meta. E.g., an input meta looks like: { - 'vocals': [['song_A.h5', 6332760, 6465060], ['song_B.h5', 198450, 330750]], - 'accompaniment': [['song_C.h5', 24232920, 24365250], ['song_D.h5', 1569960, 1702260]]}. - } - - Then, vocals segments of song_A and song_B will be mixed (mix-audio augmentation). - Accompaniment segments of song_C and song_B will be mixed (mix-audio augmentation). - Finally, mixture is created by summing vocals and accompaniment. - - Args: - meta: dict, e.g., { - 'vocals': [['song_A.h5', 6332760, 6465060], ['song_B.h5', 198450, 330750]], - 'accompaniment': [['song_C.h5', 24232920, 24365250], ['song_D.h5', 1569960, 1702260]]} - } - - Returns: - data_dict: dict, e.g., { - 'vocals': (channels, segments_num), - 'accompaniment': (channels, segments_num), - 'mixture': (channels, segments_num), - } - """ - source_types = meta.keys() - data_dict = {} - - for source_type in source_types: - # E.g., ['vocals', 'bass', ...] - - waveforms = [] # Audio segments to be mix-audio augmented. 
- - for m in meta[source_type]: - # E.g., { - # 'hdf5_path': '.../song_A.h5', - # 'key_in_hdf5': 'vocals', - # 'begin_sample': '13406400', - # 'end_sample': 13538700, - # } - - hdf5_path = m['hdf5_path'] - key_in_hdf5 = m['key_in_hdf5'] - bgn_sample = m['begin_sample'] - end_sample = m['end_sample'] - - with h5py.File(hdf5_path, 'r') as hf: - - if source_type == 'audioset': - index_in_hdf5 = m['index_in_hdf5'] - waveform = int16_to_float32( - hf['waveform'][index_in_hdf5][bgn_sample:end_sample] - ) - waveform = waveform[None, :] - else: - waveform = int16_to_float32( - hf[key_in_hdf5][:, bgn_sample:end_sample] - ) - - if self.augmentor: - waveform = self.augmentor(waveform, source_type) - - waveform = librosa.util.fix_length( - waveform, size=self.segment_samples, axis=1 - ) - # (channels_num, segments_num) - - waveforms.append(waveform) - # E.g., waveforms: [(channels_num, audio_samples), (channels_num, audio_samples)] - - # mix-audio augmentation - data_dict[source_type] = np.sum(waveforms, axis=0) - # data_dict[source_type]: (channels_num, audio_samples) - - # data_dict looks like: { - # 'voclas': (channels_num, audio_samples), - # 'accompaniment': (channels_num, audio_samples) - # } - - # Mix segments from different sources. - mixture = np.sum( - [data_dict[source_type] for source_type in source_types], axis=0 - ) - data_dict['mixture'] = mixture - # shape: (channels_num, audio_samples) - - return data_dict - - -def collate_fn(list_data_dict: List[Dict]) -> Dict: - r"""Collate mini-batch data to inputs and targets for training. - - Args: - list_data_dict: e.g., [ - {'vocals': (channels_num, segment_samples), - 'accompaniment': (channels_num, segment_samples), - 'mixture': (channels_num, segment_samples) - }, - {'vocals': (channels_num, segment_samples), - 'accompaniment': (channels_num, segment_samples), - 'mixture': (channels_num, segment_samples) - }, - ...] - - Returns: - data_dict: e.g. 
{ - 'vocals': (batch_size, channels_num, segment_samples), - 'accompaniment': (batch_size, channels_num, segment_samples), - 'mixture': (batch_size, channels_num, segment_samples) - } - """ - data_dict = {} - - for key in list_data_dict[0].keys(): - data_dict[key] = torch.Tensor( - np.array([data_dict[key] for data_dict in list_data_dict]) - ) - - return data_dict diff --git a/spaces/jspr/autodrummer/app.py b/spaces/jspr/autodrummer/app.py deleted file mode 100644 index 2845b6a6de79090d77eea5a08315ee3076c05e67..0000000000000000000000000000000000000000 --- a/spaces/jspr/autodrummer/app.py +++ /dev/null @@ -1,71 +0,0 @@ -import gradio as gr -import openai -from t2a import text_to_audio -import joblib -from sentence_transformers import SentenceTransformer -import numpy as np -import os - -reg = joblib.load('text_reg.joblib') -model = SentenceTransformer('sentence-transformers/all-MiniLM-L6-v2') -finetune = "davinci:ft-personal:autodrummer-v5-2022-11-04-22-34-07" - -with open('description.txt', 'r') as f: - description = f.read() -with open('article.txt', 'r') as f: - article = f.read() - -def get_note_text(prompt): - prompt = prompt + " ->" - # get completion from finetune - response = openai.Completion.create( - engine=finetune, - prompt=prompt, - temperature=0.5, - max_tokens=200, - top_p=1, - frequency_penalty=0, - presence_penalty=0, - stop=["###"] - ) - return response.choices[0].text.strip() - -def increment_count(): - with open('count.txt', 'r') as f: - count = int(f.read()) - count += 1 - with open('count.txt', 'w') as f: - f.write(str(count)) - -def get_drummer_output(prompt, tempo): - openai.api_key = os.environ['key'] - if tempo == "fast": - tempo = 138 - elif tempo == "slow": - tempo = 100 - note_text = get_note_text(prompt) - # note_text = note_text + " " + note_text - # prompt_enc = model.encode([prompt]) - # bpm = int(reg.predict(prompt_enc)[0]) + 20 - audio = text_to_audio(note_text, tempo) - audio = np.array(audio.get_array_of_samples(), dtype=np.float32) - increment_count() - return (96000, audio) - -iface = gr.Interface( - fn=get_drummer_output, - inputs=[ - "text", - gr.Radio(["fast", "slow"], label="Tempo", default="fast"), - ], - examples=[ - ["hiphop groove 808", "fast"], - ["rock metal", "fast"], - ["disco funk", "fast"], - ], - outputs="audio", - title='Autodrummer', - description=description, - article=article, -) -iface.launch() \ No newline at end of file diff --git a/spaces/kaicheng/ChatGPT_ad/assets/html/appearance_switcher.html b/spaces/kaicheng/ChatGPT_ad/assets/html/appearance_switcher.html deleted file mode 100644 index 9375071fbdfda7bfd622d7f7bd2dfdd0c494341b..0000000000000000000000000000000000000000 --- a/spaces/kaicheng/ChatGPT_ad/assets/html/appearance_switcher.html +++ /dev/null @@ -1,11 +0,0 @@ -
        - - {label} - - - - -
        diff --git a/spaces/kcagle/AutoGPT/tests/unit/json_tests.py b/spaces/kcagle/AutoGPT/tests/unit/json_tests.py deleted file mode 100644 index 25c383377708359b5cfec28e0625343c5692f15c..0000000000000000000000000000000000000000 --- a/spaces/kcagle/AutoGPT/tests/unit/json_tests.py +++ /dev/null @@ -1,114 +0,0 @@ -import unittest - -from autogpt.json_utils.json_fix_llm import fix_and_parse_json - - -class TestParseJson(unittest.TestCase): - def test_valid_json(self): - # Test that a valid JSON string is parsed correctly - json_str = '{"name": "John", "age": 30, "city": "New York"}' - obj = fix_and_parse_json(json_str) - self.assertEqual(obj, {"name": "John", "age": 30, "city": "New York"}) - - def test_invalid_json_minor(self): - # Test that an invalid JSON string can be fixed with gpt - json_str = '{"name": "John", "age": 30, "city": "New York",}' - self.assertEqual( - fix_and_parse_json(json_str, try_to_fix_with_gpt=False), - {"name": "John", "age": 30, "city": "New York"}, - ) - - def test_invalid_json_major_with_gpt(self): - # Test that an invalid JSON string raises an error when try_to_fix_with_gpt is False - json_str = 'BEGIN: "name": "John" - "age": 30 - "city": "New York" :END' - self.assertEqual( - fix_and_parse_json(json_str, try_to_fix_with_gpt=True), - {"name": "John", "age": 30, "city": "New York"}, - ) - - def test_invalid_json_major_without_gpt(self): - # Test that a REALLY invalid JSON string raises an error when try_to_fix_with_gpt is False - json_str = 'BEGIN: "name": "John" - "age": 30 - "city": "New York" :END' - # Assert that this raises an exception: - with self.assertRaises(Exception): - fix_and_parse_json(json_str, try_to_fix_with_gpt=False) - - def test_invalid_json_leading_sentence_with_gpt(self): - # Test that a REALLY invalid JSON string raises an error when try_to_fix_with_gpt is False - json_str = """I suggest we start by browsing the repository to find any issues that we can fix. - -{ - "command": { - "name": "browse_website", - "args":{ - "url": "https://github.com/Torantulino/Auto-GPT" - } - }, - "thoughts": - { - "text": "I suggest we start browsing the repository to find any issues that we can fix.", - "reasoning": "Browsing the repository will give us an idea of the current state of the codebase and identify any issues that we can address to improve the repo.", - "plan": "- Look through the repository to find any issues.\n- Investigate any issues to determine what needs to be fixed\n- Identify possible solutions to fix the issues\n- Open Pull Requests with fixes", - "criticism": "I should be careful while browsing so as not to accidentally introduce any new bugs or issues.", - "speak": "I will start browsing the repository to find any issues we can fix." 
- } -}""" - good_obj = { - "command": { - "name": "browse_website", - "args": {"url": "https://github.com/Torantulino/Auto-GPT"}, - }, - "thoughts": { - "text": "I suggest we start browsing the repository to find any issues that we can fix.", - "reasoning": "Browsing the repository will give us an idea of the current state of the codebase and identify any issues that we can address to improve the repo.", - "plan": "- Look through the repository to find any issues.\n- Investigate any issues to determine what needs to be fixed\n- Identify possible solutions to fix the issues\n- Open Pull Requests with fixes", - "criticism": "I should be careful while browsing so as not to accidentally introduce any new bugs or issues.", - "speak": "I will start browsing the repository to find any issues we can fix.", - }, - } - # Assert that this raises an exception: - self.assertEqual( - fix_and_parse_json(json_str, try_to_fix_with_gpt=False), good_obj - ) - - def test_invalid_json_leading_sentence_with_gpt(self): - # Test that a REALLY invalid JSON string raises an error when try_to_fix_with_gpt is False - json_str = """I will first need to browse the repository (https://github.com/Torantulino/Auto-GPT) and identify any potential bugs that need fixing. I will use the "browse_website" command for this. - -{ - "command": { - "name": "browse_website", - "args":{ - "url": "https://github.com/Torantulino/Auto-GPT" - } - }, - "thoughts": - { - "text": "Browsing the repository to identify potential bugs", - "reasoning": "Before fixing bugs, I need to identify what needs fixing. I will use the 'browse_website' command to analyze the repository.", - "plan": "- Analyze the repository for potential bugs and areas of improvement", - "criticism": "I need to ensure I am thorough and pay attention to detail while browsing the repository.", - "speak": "I am browsing the repository to identify potential bugs." - } -}""" - good_obj = { - "command": { - "name": "browse_website", - "args": {"url": "https://github.com/Torantulino/Auto-GPT"}, - }, - "thoughts": { - "text": "Browsing the repository to identify potential bugs", - "reasoning": "Before fixing bugs, I need to identify what needs fixing. 
I will use the 'browse_website' command to analyze the repository.", - "plan": "- Analyze the repository for potential bugs and areas of improvement", - "criticism": "I need to ensure I am thorough and pay attention to detail while browsing the repository.", - "speak": "I am browsing the repository to identify potential bugs.", - }, - } - # Assert that this raises an exception: - self.assertEqual( - fix_and_parse_json(json_str, try_to_fix_with_gpt=False), good_obj - ) - - -if __name__ == "__main__": - unittest.main() diff --git a/spaces/kdrkdrkdr/ProsekaTTS/modules.py b/spaces/kdrkdrkdr/ProsekaTTS/modules.py deleted file mode 100644 index 9c7fd9cd6eb8b7e0ec0e08957e970744a374a924..0000000000000000000000000000000000000000 --- a/spaces/kdrkdrkdr/ProsekaTTS/modules.py +++ /dev/null @@ -1,390 +0,0 @@ -import copy -import math -import numpy as np -import scipy -import torch -from torch import nn -from torch.nn import functional as F - -from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d -from torch.nn.utils import weight_norm, remove_weight_norm - -import commons -from commons import init_weights, get_padding -from transforms import piecewise_rational_quadratic_transform - - -LRELU_SLOPE = 0.1 - - -class LayerNorm(nn.Module): - def __init__(self, channels, eps=1e-5): - super().__init__() - self.channels = channels - self.eps = eps - - self.gamma = nn.Parameter(torch.ones(channels)) - self.beta = nn.Parameter(torch.zeros(channels)) - - def forward(self, x): - x = x.transpose(1, -1) - x = F.layer_norm(x, (self.channels,), self.gamma, self.beta, self.eps) - return x.transpose(1, -1) - - -class ConvReluNorm(nn.Module): - def __init__(self, in_channels, hidden_channels, out_channels, kernel_size, n_layers, p_dropout): - super().__init__() - self.in_channels = in_channels - self.hidden_channels = hidden_channels - self.out_channels = out_channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.p_dropout = p_dropout - assert n_layers > 1, "Number of layers should be larger than 0." 
- - self.conv_layers = nn.ModuleList() - self.norm_layers = nn.ModuleList() - self.conv_layers.append(nn.Conv1d(in_channels, hidden_channels, kernel_size, padding=kernel_size//2)) - self.norm_layers.append(LayerNorm(hidden_channels)) - self.relu_drop = nn.Sequential( - nn.ReLU(), - nn.Dropout(p_dropout)) - for _ in range(n_layers-1): - self.conv_layers.append(nn.Conv1d(hidden_channels, hidden_channels, kernel_size, padding=kernel_size//2)) - self.norm_layers.append(LayerNorm(hidden_channels)) - self.proj = nn.Conv1d(hidden_channels, out_channels, 1) - self.proj.weight.data.zero_() - self.proj.bias.data.zero_() - - def forward(self, x, x_mask): - x_org = x - for i in range(self.n_layers): - x = self.conv_layers[i](x * x_mask) - x = self.norm_layers[i](x) - x = self.relu_drop(x) - x = x_org + self.proj(x) - return x * x_mask - - -class DDSConv(nn.Module): - """ - Dialted and Depth-Separable Convolution - """ - def __init__(self, channels, kernel_size, n_layers, p_dropout=0.): - super().__init__() - self.channels = channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.p_dropout = p_dropout - - self.drop = nn.Dropout(p_dropout) - self.convs_sep = nn.ModuleList() - self.convs_1x1 = nn.ModuleList() - self.norms_1 = nn.ModuleList() - self.norms_2 = nn.ModuleList() - for i in range(n_layers): - dilation = kernel_size ** i - padding = (kernel_size * dilation - dilation) // 2 - self.convs_sep.append(nn.Conv1d(channels, channels, kernel_size, - groups=channels, dilation=dilation, padding=padding - )) - self.convs_1x1.append(nn.Conv1d(channels, channels, 1)) - self.norms_1.append(LayerNorm(channels)) - self.norms_2.append(LayerNorm(channels)) - - def forward(self, x, x_mask, g=None): - if g is not None: - x = x + g - for i in range(self.n_layers): - y = self.convs_sep[i](x * x_mask) - y = self.norms_1[i](y) - y = F.gelu(y) - y = self.convs_1x1[i](y) - y = self.norms_2[i](y) - y = F.gelu(y) - y = self.drop(y) - x = x + y - return x * x_mask - - -class WN(torch.nn.Module): - def __init__(self, hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=0, p_dropout=0): - super(WN, self).__init__() - assert(kernel_size % 2 == 1) - self.hidden_channels =hidden_channels - self.kernel_size = kernel_size, - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.gin_channels = gin_channels - self.p_dropout = p_dropout - - self.in_layers = torch.nn.ModuleList() - self.res_skip_layers = torch.nn.ModuleList() - self.drop = nn.Dropout(p_dropout) - - if gin_channels != 0: - cond_layer = torch.nn.Conv1d(gin_channels, 2*hidden_channels*n_layers, 1) - self.cond_layer = torch.nn.utils.weight_norm(cond_layer, name='weight') - - for i in range(n_layers): - dilation = dilation_rate ** i - padding = int((kernel_size * dilation - dilation) / 2) - in_layer = torch.nn.Conv1d(hidden_channels, 2*hidden_channels, kernel_size, - dilation=dilation, padding=padding) - in_layer = torch.nn.utils.weight_norm(in_layer, name='weight') - self.in_layers.append(in_layer) - - # last one is not necessary - if i < n_layers - 1: - res_skip_channels = 2 * hidden_channels - else: - res_skip_channels = hidden_channels - - res_skip_layer = torch.nn.Conv1d(hidden_channels, res_skip_channels, 1) - res_skip_layer = torch.nn.utils.weight_norm(res_skip_layer, name='weight') - self.res_skip_layers.append(res_skip_layer) - - def forward(self, x, x_mask, g=None, **kwargs): - output = torch.zeros_like(x) - n_channels_tensor = torch.IntTensor([self.hidden_channels]) - - if g is not None: - g = self.cond_layer(g) 
- - for i in range(self.n_layers): - x_in = self.in_layers[i](x) - if g is not None: - cond_offset = i * 2 * self.hidden_channels - g_l = g[:,cond_offset:cond_offset+2*self.hidden_channels,:] - else: - g_l = torch.zeros_like(x_in) - - acts = commons.fused_add_tanh_sigmoid_multiply( - x_in, - g_l, - n_channels_tensor) - acts = self.drop(acts) - - res_skip_acts = self.res_skip_layers[i](acts) - if i < self.n_layers - 1: - res_acts = res_skip_acts[:,:self.hidden_channels,:] - x = (x + res_acts) * x_mask - output = output + res_skip_acts[:,self.hidden_channels:,:] - else: - output = output + res_skip_acts - return output * x_mask - - def remove_weight_norm(self): - if self.gin_channels != 0: - torch.nn.utils.remove_weight_norm(self.cond_layer) - for l in self.in_layers: - torch.nn.utils.remove_weight_norm(l) - for l in self.res_skip_layers: - torch.nn.utils.remove_weight_norm(l) - - -class ResBlock1(torch.nn.Module): - def __init__(self, channels, kernel_size=3, dilation=(1, 3, 5)): - super(ResBlock1, self).__init__() - self.convs1 = nn.ModuleList([ - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0], - padding=get_padding(kernel_size, dilation[0]))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1], - padding=get_padding(kernel_size, dilation[1]))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[2], - padding=get_padding(kernel_size, dilation[2]))) - ]) - self.convs1.apply(init_weights) - - self.convs2 = nn.ModuleList([ - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1, - padding=get_padding(kernel_size, 1))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1, - padding=get_padding(kernel_size, 1))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1, - padding=get_padding(kernel_size, 1))) - ]) - self.convs2.apply(init_weights) - - def forward(self, x, x_mask=None): - for c1, c2 in zip(self.convs1, self.convs2): - xt = F.leaky_relu(x, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c1(xt) - xt = F.leaky_relu(xt, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c2(xt) - x = xt + x - if x_mask is not None: - x = x * x_mask - return x - - def remove_weight_norm(self): - for l in self.convs1: - remove_weight_norm(l) - for l in self.convs2: - remove_weight_norm(l) - - -class ResBlock2(torch.nn.Module): - def __init__(self, channels, kernel_size=3, dilation=(1, 3)): - super(ResBlock2, self).__init__() - self.convs = nn.ModuleList([ - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0], - padding=get_padding(kernel_size, dilation[0]))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1], - padding=get_padding(kernel_size, dilation[1]))) - ]) - self.convs.apply(init_weights) - - def forward(self, x, x_mask=None): - for c in self.convs: - xt = F.leaky_relu(x, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c(xt) - x = xt + x - if x_mask is not None: - x = x * x_mask - return x - - def remove_weight_norm(self): - for l in self.convs: - remove_weight_norm(l) - - -class Log(nn.Module): - def forward(self, x, x_mask, reverse=False, **kwargs): - if not reverse: - y = torch.log(torch.clamp_min(x, 1e-5)) * x_mask - logdet = torch.sum(-y, [1, 2]) - return y, logdet - else: - x = torch.exp(x) * x_mask - return x - - -class Flip(nn.Module): - def forward(self, x, *args, reverse=False, **kwargs): - x = torch.flip(x, [1]) - if not reverse: - logdet = 
torch.zeros(x.size(0)).to(dtype=x.dtype, device=x.device) - return x, logdet - else: - return x - - -class ElementwiseAffine(nn.Module): - def __init__(self, channels): - super().__init__() - self.channels = channels - self.m = nn.Parameter(torch.zeros(channels,1)) - self.logs = nn.Parameter(torch.zeros(channels,1)) - - def forward(self, x, x_mask, reverse=False, **kwargs): - if not reverse: - y = self.m + torch.exp(self.logs) * x - y = y * x_mask - logdet = torch.sum(self.logs * x_mask, [1,2]) - return y, logdet - else: - x = (x - self.m) * torch.exp(-self.logs) * x_mask - return x - - -class ResidualCouplingLayer(nn.Module): - def __init__(self, - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - p_dropout=0, - gin_channels=0, - mean_only=False): - assert channels % 2 == 0, "channels should be divisible by 2" - super().__init__() - self.channels = channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.half_channels = channels // 2 - self.mean_only = mean_only - - self.pre = nn.Conv1d(self.half_channels, hidden_channels, 1) - self.enc = WN(hidden_channels, kernel_size, dilation_rate, n_layers, p_dropout=p_dropout, gin_channels=gin_channels) - self.post = nn.Conv1d(hidden_channels, self.half_channels * (2 - mean_only), 1) - self.post.weight.data.zero_() - self.post.bias.data.zero_() - - def forward(self, x, x_mask, g=None, reverse=False): - x0, x1 = torch.split(x, [self.half_channels]*2, 1) - h = self.pre(x0) * x_mask - h = self.enc(h, x_mask, g=g) - stats = self.post(h) * x_mask - if not self.mean_only: - m, logs = torch.split(stats, [self.half_channels]*2, 1) - else: - m = stats - logs = torch.zeros_like(m) - - if not reverse: - x1 = m + x1 * torch.exp(logs) * x_mask - x = torch.cat([x0, x1], 1) - logdet = torch.sum(logs, [1,2]) - return x, logdet - else: - x1 = (x1 - m) * torch.exp(-logs) * x_mask - x = torch.cat([x0, x1], 1) - return x - - -class ConvFlow(nn.Module): - def __init__(self, in_channels, filter_channels, kernel_size, n_layers, num_bins=10, tail_bound=5.0): - super().__init__() - self.in_channels = in_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.num_bins = num_bins - self.tail_bound = tail_bound - self.half_channels = in_channels // 2 - - self.pre = nn.Conv1d(self.half_channels, filter_channels, 1) - self.convs = DDSConv(filter_channels, kernel_size, n_layers, p_dropout=0.) - self.proj = nn.Conv1d(filter_channels, self.half_channels * (num_bins * 3 - 1), 1) - self.proj.weight.data.zero_() - self.proj.bias.data.zero_() - - def forward(self, x, x_mask, g=None, reverse=False): - x0, x1 = torch.split(x, [self.half_channels]*2, 1) - h = self.pre(x0) - h = self.convs(h, x_mask, g=g) - h = self.proj(h) * x_mask - - b, c, t = x0.shape - h = h.reshape(b, c, -1, t).permute(0, 1, 3, 2) # [b, cx?, t] -> [b, c, t, ?] 
- - unnormalized_widths = h[..., :self.num_bins] / math.sqrt(self.filter_channels) - unnormalized_heights = h[..., self.num_bins:2*self.num_bins] / math.sqrt(self.filter_channels) - unnormalized_derivatives = h[..., 2 * self.num_bins:] - - x1, logabsdet = piecewise_rational_quadratic_transform(x1, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=reverse, - tails='linear', - tail_bound=self.tail_bound - ) - - x = torch.cat([x0, x1], 1) * x_mask - logdet = torch.sum(logabsdet * x_mask, [1,2]) - if not reverse: - return x, logdet - else: - return x diff --git a/spaces/kepl/gpt/g4f/Provider/Providers/Vercel.py b/spaces/kepl/gpt/g4f/Provider/Providers/Vercel.py deleted file mode 100644 index e5df9cf017e4c1a265f5c9d5e48eb5c10a56e60a..0000000000000000000000000000000000000000 --- a/spaces/kepl/gpt/g4f/Provider/Providers/Vercel.py +++ /dev/null @@ -1,162 +0,0 @@ -import os -import json -import base64 -import execjs -import queue -import threading - -from curl_cffi import requests -from ...typing import sha256, Dict, get_type_hints - -url = 'https://play.vercel.ai' -supports_stream = True -needs_auth = False - -models = { - 'claude-instant-v1': 'anthropic:claude-instant-v1', - 'claude-v1': 'anthropic:claude-v1', - 'alpaca-7b': 'replicate:replicate/alpaca-7b', - 'stablelm-tuned-alpha-7b': 'replicate:stability-ai/stablelm-tuned-alpha-7b', - 'bloom': 'huggingface:bigscience/bloom', - 'bloomz': 'huggingface:bigscience/bloomz', - 'flan-t5-xxl': 'huggingface:google/flan-t5-xxl', - 'flan-ul2': 'huggingface:google/flan-ul2', - 'gpt-neox-20b': 'huggingface:EleutherAI/gpt-neox-20b', - 'oasst-sft-4-pythia-12b-epoch-3.5': 'huggingface:OpenAssistant/oasst-sft-4-pythia-12b-epoch-3.5', - 'santacoder': 'huggingface:bigcode/santacoder', - 'command-medium-nightly': 'cohere:command-medium-nightly', - 'command-xlarge-nightly': 'cohere:command-xlarge-nightly', - 'code-cushman-001': 'openai:code-cushman-001', - 'code-davinci-002': 'openai:code-davinci-002', - 'gpt-3.5-turbo': 'openai:gpt-3.5-turbo', - 'text-ada-001': 'openai:text-ada-001', - 'text-babbage-001': 'openai:text-babbage-001', - 'text-curie-001': 'openai:text-curie-001', - 'text-davinci-002': 'openai:text-davinci-002', - 'text-davinci-003': 'openai:text-davinci-003' -} -model = models.keys() - -vercel_models = {'anthropic:claude-instant-v1': {'id': 'anthropic:claude-instant-v1', 'provider': 'anthropic', 'providerHumanName': 'Anthropic', 'makerHumanName': 'Anthropic', 'minBillingTier': 'hobby', 'parameters': {'temperature': {'value': 1, 'range': [0, 1]}, 'maximumLength': {'value': 200, 'range': [50, 1024]}, 'topP': {'value': 1, 'range': [0.1, 1]}, 'topK': {'value': 1, 'range': [1, 500]}, 'presencePenalty': {'value': 1, 'range': [0, 1]}, 'frequencyPenalty': {'value': 1, 'range': [0, 1]}, 'stopSequences': {'value': ['\n\nHuman:'], 'range': []}}, 'name': 'claude-instant-v1'}, 'anthropic:claude-v1': {'id': 'anthropic:claude-v1', 'provider': 'anthropic', 'providerHumanName': 'Anthropic', 'makerHumanName': 'Anthropic', 'minBillingTier': 'hobby', 'parameters': {'temperature': {'value': 1, 'range': [0, 1]}, 'maximumLength': {'value': 200, 'range': [50, 1024]}, 'topP': {'value': 1, 'range': [0.1, 1]}, 'topK': {'value': 1, 'range': [1, 500]}, 'presencePenalty': {'value': 1, 'range': [0, 1]}, 'frequencyPenalty': {'value': 1, 'range': [0, 1]}, 'stopSequences': {'value': ['\n\nHuman:'], 'range': []}}, 'name': 'claude-v1'}, 'replicate:replicate/alpaca-7b': {'id': 'replicate:replicate/alpaca-7b', 'provider': 'replicate', 
'providerHumanName': 'Replicate', 'makerHumanName': 'Stanford', 'parameters': {'temperature': {'value': 0.75, 'range': [0.01, 5]}, 'maximumLength': {'value': 200, 'range': [50, 512]}, 'topP': {'value': 0.95, 'range': [0.01, 1]}, 'presencePenalty': {'value': 0, 'range': [0, 1]}, 'frequencyPenalty': {'value': 0, 'range': [0, 1]}, 'repetitionPenalty': {'value': 1.1765, 'range': [0.01, 5]}, 'stopSequences': {'value': [], 'range': []}}, 'version': '2014ee1247354f2e81c0b3650d71ca715bc1e610189855f134c30ecb841fae21', 'name': 'alpaca-7b'}, 'replicate:stability-ai/stablelm-tuned-alpha-7b': {'id': 'replicate:stability-ai/stablelm-tuned-alpha-7b', 'provider': 'replicate', 'makerHumanName': 'StabilityAI', 'providerHumanName': 'Replicate', 'parameters': {'temperature': {'value': 0.75, 'range': [0.01, 5]}, 'maximumLength': {'value': 200, 'range': [50, 512]}, 'topP': {'value': 0.95, 'range': [0.01, 1]}, 'presencePenalty': {'value': 0, 'range': [0, 1]}, 'frequencyPenalty': {'value': 0, 'range': [0, 1]}, 'repetitionPenalty': {'value': 1.1765, 'range': [0.01, 5]}, 'stopSequences': {'value': [], 'range': []}}, 'version': '4a9a32b4fd86c2d047f1d271fa93972683ec6ef1cf82f402bd021f267330b50b', 'name': 'stablelm-tuned-alpha-7b'}, 'huggingface:bigscience/bloom': {'id': 'huggingface:bigscience/bloom', 'provider': 'huggingface', 'providerHumanName': 'HuggingFace', 'makerHumanName': 'BigScience', 'instructions': "Do NOT talk to Bloom as an entity, it's not a chatbot but a webpage/blog/article completion model. For the best results: mimic a few words of a webpage similar to the content you want to generate. Start a sentence as if YOU were writing a blog, webpage, math post, coding article and Bloom will generate a coherent follow-up.", 'parameters': {'temperature': {'value': 0.5, 'range': [0.1, 1]}, 'maximumLength': {'value': 200, 'range': [50, 1024]}, 'topP': {'value': 0.95, 'range': [0.01, 0.99]}, 'topK': {'value': 4, 'range': [1, 500]}, 'repetitionPenalty': {'value': 1.03, 'range': [0.1, 2]}}, 'name': 'bloom'}, 'huggingface:bigscience/bloomz': {'id': 'huggingface:bigscience/bloomz', 'provider': 'huggingface', 'providerHumanName': 'HuggingFace', 'makerHumanName': 'BigScience', 'instructions': 'We recommend using the model to perform tasks expressed in natural language. 
For example, given the prompt "Translate to English: Je t\'aime.", the model will most likely answer "I love you.".', 'parameters': {'temperature': {'value': 0.5, 'range': [0.1, 1]}, 'maximumLength': {'value': 200, 'range': [50, 1024]}, 'topP': {'value': 0.95, 'range': [0.01, 0.99]}, 'topK': {'value': 4, 'range': [1, 500]}, 'repetitionPenalty': {'value': 1.03, 'range': [0.1, 2]}}, 'name': 'bloomz'}, 'huggingface:google/flan-t5-xxl': {'id': 'huggingface:google/flan-t5-xxl', 'provider': 'huggingface', 'makerHumanName': 'Google', 'providerHumanName': 'HuggingFace', 'name': 'flan-t5-xxl', 'parameters': {'temperature': {'value': 0.5, 'range': [0.1, 1]}, 'maximumLength': {'value': 200, 'range': [50, 1024]}, 'topP': {'value': 0.95, 'range': [0.01, 0.99]}, 'topK': {'value': 4, 'range': [1, 500]}, 'repetitionPenalty': {'value': 1.03, 'range': [0.1, 2]}}}, 'huggingface:google/flan-ul2': {'id': 'huggingface:google/flan-ul2', 'provider': 'huggingface', 'providerHumanName': 'HuggingFace', 'makerHumanName': 'Google', 'parameters': {'temperature': {'value': 0.5, 'range': [0.1, 1]}, 'maximumLength': {'value': 200, 'range': [50, 1024]}, 'topP': {'value': 0.95, 'range': [0.01, 0.99]}, 'topK': {'value': 4, 'range': [1, 500]}, 'repetitionPenalty': {'value': 1.03, 'range': [0.1, 2]}}, 'name': 'flan-ul2'}, 'huggingface:EleutherAI/gpt-neox-20b': {'id': 'huggingface:EleutherAI/gpt-neox-20b', 'provider': 'huggingface', 'providerHumanName': 'HuggingFace', 'makerHumanName': 'EleutherAI', 'parameters': {'temperature': {'value': 0.5, 'range': [0.1, 1]}, 'maximumLength': {'value': 200, 'range': [50, 1024]}, 'topP': {'value': 0.95, 'range': [0.01, 0.99]}, 'topK': {'value': 4, 'range': [1, 500]}, 'repetitionPenalty': {'value': 1.03, 'range': [0.1, 2]}, 'stopSequences': {'value': [], 'range': []}}, 'name': 'gpt-neox-20b'}, 'huggingface:OpenAssistant/oasst-sft-4-pythia-12b-epoch-3.5': {'id': 'huggingface:OpenAssistant/oasst-sft-4-pythia-12b-epoch-3.5', 'provider': 'huggingface', 'providerHumanName': 'HuggingFace', 'makerHumanName': 'OpenAssistant', 'parameters': {'maximumLength': {'value': 200, 'range': [50, 1024]}, 'typicalP': {'value': 0.2, 'range': [0.1, 0.99]}, 'repetitionPenalty': {'value': 1, 'range': [0.1, 2]}}, 'name': 'oasst-sft-4-pythia-12b-epoch-3.5'}, 'huggingface:bigcode/santacoder': { - 'id': 'huggingface:bigcode/santacoder', 'provider': 'huggingface', 'providerHumanName': 'HuggingFace', 'makerHumanName': 'BigCode', 'instructions': 'The model was trained on GitHub code. As such it is not an instruction model and commands like "Write a function that computes the square root." do not work well. You should phrase commands like they occur in source code such as comments (e.g. 
# the following function computes the sqrt) or write a function signature and docstring and let the model complete the function body.', 'parameters': {'temperature': {'value': 0.5, 'range': [0.1, 1]}, 'maximumLength': {'value': 200, 'range': [50, 1024]}, 'topP': {'value': 0.95, 'range': [0.01, 0.99]}, 'topK': {'value': 4, 'range': [1, 500]}, 'repetitionPenalty': {'value': 1.03, 'range': [0.1, 2]}}, 'name': 'santacoder'}, 'cohere:command-medium-nightly': {'id': 'cohere:command-medium-nightly', 'provider': 'cohere', 'providerHumanName': 'Cohere', 'makerHumanName': 'Cohere', 'name': 'command-medium-nightly', 'parameters': {'temperature': {'value': 0.9, 'range': [0, 2]}, 'maximumLength': {'value': 200, 'range': [50, 1024]}, 'topP': {'value': 1, 'range': [0, 1]}, 'topK': {'value': 0, 'range': [0, 500]}, 'presencePenalty': {'value': 0, 'range': [0, 1]}, 'frequencyPenalty': {'value': 0, 'range': [0, 1]}, 'stopSequences': {'value': [], 'range': []}}}, 'cohere:command-xlarge-nightly': {'id': 'cohere:command-xlarge-nightly', 'provider': 'cohere', 'providerHumanName': 'Cohere', 'makerHumanName': 'Cohere', 'name': 'command-xlarge-nightly', 'parameters': {'temperature': {'value': 0.9, 'range': [0, 2]}, 'maximumLength': {'value': 200, 'range': [50, 1024]}, 'topP': {'value': 1, 'range': [0, 1]}, 'topK': {'value': 0, 'range': [0, 500]}, 'presencePenalty': {'value': 0, 'range': [0, 1]}, 'frequencyPenalty': {'value': 0, 'range': [0, 1]}, 'stopSequences': {'value': [], 'range': []}}}, 'openai:gpt-4': {'id': 'openai:gpt-4', 'provider': 'openai', 'providerHumanName': 'OpenAI', 'makerHumanName': 'OpenAI', 'name': 'gpt-4', 'minBillingTier': 'pro', 'parameters': {'temperature': {'value': 0.7, 'range': [0.1, 1]}, 'maximumLength': {'value': 200, 'range': [50, 1024]}, 'topP': {'value': 1, 'range': [0.1, 1]}, 'presencePenalty': {'value': 0, 'range': [0, 1]}, 'frequencyPenalty': {'value': 0, 'range': [0, 1]}, 'stopSequences': {'value': [], 'range': []}}}, 'openai:code-cushman-001': {'id': 'openai:code-cushman-001', 'provider': 'openai', 'providerHumanName': 'OpenAI', 'makerHumanName': 'OpenAI', 'parameters': {'temperature': {'value': 0.5, 'range': [0.1, 1]}, 'maximumLength': {'value': 200, 'range': [50, 1024]}, 'topP': {'value': 1, 'range': [0.1, 1]}, 'presencePenalty': {'value': 0, 'range': [0, 1]}, 'frequencyPenalty': {'value': 0, 'range': [0, 1]}, 'stopSequences': {'value': [], 'range': []}}, 'name': 'code-cushman-001'}, 'openai:code-davinci-002': {'id': 'openai:code-davinci-002', 'provider': 'openai', 'providerHumanName': 'OpenAI', 'makerHumanName': 'OpenAI', 'parameters': {'temperature': {'value': 0.5, 'range': [0.1, 1]}, 'maximumLength': {'value': 200, 'range': [50, 1024]}, 'topP': {'value': 1, 'range': [0.1, 1]}, 'presencePenalty': {'value': 0, 'range': [0, 1]}, 'frequencyPenalty': {'value': 0, 'range': [0, 1]}, 'stopSequences': {'value': [], 'range': []}}, 'name': 'code-davinci-002'}, 'openai:gpt-3.5-turbo': {'id': 'openai:gpt-3.5-turbo', 'provider': 'openai', 'providerHumanName': 'OpenAI', 'makerHumanName': 'OpenAI', 'parameters': {'temperature': {'value': 0.7, 'range': [0, 1]}, 'maximumLength': {'value': 200, 'range': [50, 1024]}, 'topP': {'value': 1, 'range': [0.1, 1]}, 'topK': {'value': 1, 'range': [1, 500]}, 'presencePenalty': {'value': 1, 'range': [0, 1]}, 'frequencyPenalty': {'value': 1, 'range': [0, 1]}, 'stopSequences': {'value': [], 'range': []}}, 'name': 'gpt-3.5-turbo'}, 'openai:text-ada-001': {'id': 'openai:text-ada-001', 'provider': 'openai', 'providerHumanName': 'OpenAI', 'makerHumanName': 
'OpenAI', 'name': 'text-ada-001', 'parameters': {'temperature': {'value': 0.5, 'range': [0.1, 1]}, 'maximumLength': {'value': 200, 'range': [50, 1024]}, 'topP': {'value': 1, 'range': [0.1, 1]}, 'presencePenalty': {'value': 0, 'range': [0, 1]}, 'frequencyPenalty': {'value': 0, 'range': [0, 1]}, 'stopSequences': {'value': [], 'range': []}}}, 'openai:text-babbage-001': {'id': 'openai:text-babbage-001', 'provider': 'openai', 'providerHumanName': 'OpenAI', 'makerHumanName': 'OpenAI', 'name': 'text-babbage-001', 'parameters': {'temperature': {'value': 0.5, 'range': [0.1, 1]}, 'maximumLength': {'value': 200, 'range': [50, 1024]}, 'topP': {'value': 1, 'range': [0.1, 1]}, 'presencePenalty': {'value': 0, 'range': [0, 1]}, 'frequencyPenalty': {'value': 0, 'range': [0, 1]}, 'stopSequences': {'value': [], 'range': []}}}, 'openai:text-curie-001': {'id': 'openai:text-curie-001', 'provider': 'openai', 'providerHumanName': 'OpenAI', 'makerHumanName': 'OpenAI', 'name': 'text-curie-001', 'parameters': {'temperature': {'value': 0.5, 'range': [0.1, 1]}, 'maximumLength': {'value': 200, 'range': [50, 1024]}, 'topP': {'value': 1, 'range': [0.1, 1]}, 'presencePenalty': {'value': 0, 'range': [0, 1]}, 'frequencyPenalty': {'value': 0, 'range': [0, 1]}, 'stopSequences': {'value': [], 'range': []}}}, 'openai:text-davinci-002': {'id': 'openai:text-davinci-002', 'provider': 'openai', 'providerHumanName': 'OpenAI', 'makerHumanName': 'OpenAI', 'name': 'text-davinci-002', 'parameters': {'temperature': {'value': 0.5, 'range': [0.1, 1]}, 'maximumLength': {'value': 200, 'range': [50, 1024]}, 'topP': {'value': 1, 'range': [0.1, 1]}, 'presencePenalty': {'value': 0, 'range': [0, 1]}, 'frequencyPenalty': {'value': 0, 'range': [0, 1]}, 'stopSequences': {'value': [], 'range': []}}}, 'openai:text-davinci-003': {'id': 'openai:text-davinci-003', 'provider': 'openai', 'providerHumanName': 'OpenAI', 'makerHumanName': 'OpenAI', 'name': 'text-davinci-003', 'parameters': {'temperature': {'value': 0.5, 'range': [0.1, 1]}, 'maximumLength': {'value': 200, 'range': [50, 1024]}, 'topP': {'value': 1, 'range': [0.1, 1]}, 'presencePenalty': {'value': 0, 'range': [0, 1]}, 'frequencyPenalty': {'value': 0, 'range': [0, 1]}, 'stopSequences': {'value': [], 'range': []}}}} - - -# based on https://github.com/ading2210/vercel-llm-api // modified -class Client: - def __init__(self): - self.session = requests.Session() - self.headers = { - 'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/110 Safari/537.36', - 'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,*/*;q=0.8', - 'Accept-Encoding': 'gzip, deflate, br', - 'Accept-Language': 'en-US,en;q=0.5', - 'Te': 'trailers', - 'Upgrade-Insecure-Requests': '1' - } - self.session.headers.update(self.headers) - - def get_token(self): - b64 = self.session.get('https://sdk.vercel.ai/openai.jpeg').text - data = json.loads(base64.b64decode(b64)) - - code = 'const globalThis = {data: `sentinel`}; function token() {return (%s)(%s)}' % ( - data['c'], data['a']) - - token_string = json.dumps(separators=(',', ':'), - obj={'r': execjs.compile(code).call('token'), 't': data['t']}) - - return base64.b64encode(token_string.encode()).decode() - - def get_default_params(self, model_id): - return {key: param['value'] for key, param in vercel_models[model_id]['parameters'].items()} - - def generate(self, model_id: str, prompt: str, params: dict = {}): - if not ':' in model_id: - model_id = models[model_id] - - defaults = 
self.get_default_params(model_id) - - payload = defaults | params | { - 'prompt': prompt, - 'model': model_id, - } - - headers = self.headers | { - 'Accept-Encoding': 'gzip, deflate, br', - 'Custom-Encoding': self.get_token(), - 'Host': 'sdk.vercel.ai', - 'Origin': 'https://sdk.vercel.ai', - 'Referrer': 'https://sdk.vercel.ai', - 'Sec-Fetch-Dest': 'empty', - 'Sec-Fetch-Mode': 'cors', - 'Sec-Fetch-Site': 'same-origin', - } - - chunks_queue = queue.Queue() - error = None - response = None - - def callback(data): - chunks_queue.put(data.decode()) - - def request_thread(): - nonlocal response, error - for _ in range(3): - try: - response = self.session.post('https://sdk.vercel.ai/api/generate', - json=payload, headers=headers, content_callback=callback) - response.raise_for_status() - - except Exception as e: - if _ == 2: - error = e - - else: - continue - - thread = threading.Thread(target=request_thread, daemon=True) - thread.start() - - text = '' - index = 0 - while True: - try: - chunk = chunks_queue.get(block=True, timeout=0.1) - - except queue.Empty: - if error: - raise error - - elif response: - break - - else: - continue - - text += chunk - lines = text.split('\n') - - if len(lines) - 1 > index: - new = lines[index:-1] - for word in new: - yield json.loads(word) - index = len(lines) - 1 - -def _create_completion(model: str, messages: list, stream: bool, **kwargs): - yield 'Vercel is currently not working.' - return - - conversation = 'This is a conversation between a human and a language model, respond to the last message accordingly, referring to the past history of messages if needed.\n' - - for message in messages: - conversation += '%s: %s\n' % (message['role'], message['content']) - - conversation += 'assistant: ' - - completion = Client().generate(model, conversation) - - for token in completion: - yield token - -params = f'g4f.Providers.{os.path.basename(__file__)[:-3]} supports: ' + \ - '(%s)' % ', '.join([f"{name}: {get_type_hints(_create_completion)[name].__name__}" for name in _create_completion.__code__.co_varnames[:_create_completion.__code__.co_argcount]]) \ No newline at end of file diff --git a/spaces/kevinwang676/ChatGLM2-SadTalker-VC/speaker_encoder/compute_embed.py b/spaces/kevinwang676/ChatGLM2-SadTalker-VC/speaker_encoder/compute_embed.py deleted file mode 100644 index 2fee33db0168f40efc42145c06fa62016e3e008e..0000000000000000000000000000000000000000 --- a/spaces/kevinwang676/ChatGLM2-SadTalker-VC/speaker_encoder/compute_embed.py +++ /dev/null @@ -1,40 +0,0 @@ -from speaker_encoder import inference as encoder -from multiprocessing.pool import Pool -from functools import partial -from pathlib import Path -# from utils import logmmse -# from tqdm import tqdm -# import numpy as np -# import librosa - - -def embed_utterance(fpaths, encoder_model_fpath): - if not encoder.is_loaded(): - encoder.load_model(encoder_model_fpath) - - # Compute the speaker embedding of the utterance - wav_fpath, embed_fpath = fpaths - wav = np.load(wav_fpath) - wav = encoder.preprocess_wav(wav) - embed = encoder.embed_utterance(wav) - np.save(embed_fpath, embed, allow_pickle=False) - - -def create_embeddings(outdir_root: Path, wav_dir: Path, encoder_model_fpath: Path, n_processes: int): - - wav_dir = outdir_root.joinpath("audio") - metadata_fpath = synthesizer_root.joinpath("train.txt") - assert wav_dir.exists() and metadata_fpath.exists() - embed_dir = synthesizer_root.joinpath("embeds") - embed_dir.mkdir(exist_ok=True) - - # Gather the input wave filepath and the target output embed 
filepath - with metadata_fpath.open("r") as metadata_file: - metadata = [line.split("|") for line in metadata_file] - fpaths = [(wav_dir.joinpath(m[0]), embed_dir.joinpath(m[2])) for m in metadata] - - # TODO: improve on the multiprocessing, it's terrible. Disk I/O is the bottleneck here. - # Embed the utterances in separate threads - func = partial(embed_utterance, encoder_model_fpath=encoder_model_fpath) - job = Pool(n_processes).imap(func, fpaths) - list(tqdm(job, "Embedding", len(fpaths), unit="utterances")) \ No newline at end of file diff --git a/spaces/kevinwang676/ChatGLM2-VC-SadTalker/speaker_encoder/inference.py b/spaces/kevinwang676/ChatGLM2-VC-SadTalker/speaker_encoder/inference.py deleted file mode 100644 index 15e6bf16ba9e551473cd6b179bb518f0704ac33d..0000000000000000000000000000000000000000 --- a/spaces/kevinwang676/ChatGLM2-VC-SadTalker/speaker_encoder/inference.py +++ /dev/null @@ -1,177 +0,0 @@ -from speaker_encoder.params_data import * -from speaker_encoder.model import SpeakerEncoder -from speaker_encoder.audio import preprocess_wav # We want to expose this function from here -from matplotlib import cm -from speaker_encoder import audio -from pathlib import Path -import matplotlib.pyplot as plt -import numpy as np -import torch - -_model = None # type: SpeakerEncoder -_device = None # type: torch.device - - -def load_model(weights_fpath: Path, device=None): - """ - Loads the model in memory. If this function is not explicitely called, it will be run on the - first call to embed_frames() with the default weights file. - - :param weights_fpath: the path to saved model weights. - :param device: either a torch device or the name of a torch device (e.g. "cpu", "cuda"). The - model will be loaded and will run on this device. Outputs will however always be on the cpu. - If None, will default to your GPU if it"s available, otherwise your CPU. - """ - # TODO: I think the slow loading of the encoder might have something to do with the device it - # was saved on. Worth investigating. - global _model, _device - if device is None: - _device = torch.device("cuda" if torch.cuda.is_available() else "cpu") - elif isinstance(device, str): - _device = torch.device(device) - _model = SpeakerEncoder(_device, torch.device("cpu")) - checkpoint = torch.load(weights_fpath) - _model.load_state_dict(checkpoint["model_state"]) - _model.eval() - print("Loaded encoder \"%s\" trained to step %d" % (weights_fpath.name, checkpoint["step"])) - - -def is_loaded(): - return _model is not None - - -def embed_frames_batch(frames_batch): - """ - Computes embeddings for a batch of mel spectrogram. - - :param frames_batch: a batch mel of spectrogram as a numpy array of float32 of shape - (batch_size, n_frames, n_channels) - :return: the embeddings as a numpy array of float32 of shape (batch_size, model_embedding_size) - """ - if _model is None: - raise Exception("Model was not loaded. Call load_model() before inference.") - - frames = torch.from_numpy(frames_batch).to(_device) - embed = _model.forward(frames).detach().cpu().numpy() - return embed - - -def compute_partial_slices(n_samples, partial_utterance_n_frames=partials_n_frames, - min_pad_coverage=0.75, overlap=0.5): - """ - Computes where to split an utterance waveform and its corresponding mel spectrogram to obtain - partial utterances of each. Both the waveform and the mel - spectrogram slices are returned, so as to make each partial utterance waveform correspond to - its spectrogram. 
This function assumes that the mel spectrogram parameters used are those - defined in params_data.py. - - The returned ranges may be indexing further than the length of the waveform. It is - recommended that you pad the waveform with zeros up to wave_slices[-1].stop. - - :param n_samples: the number of samples in the waveform - :param partial_utterance_n_frames: the number of mel spectrogram frames in each partial - utterance - :param min_pad_coverage: when reaching the last partial utterance, it may or may not have - enough frames. If at least of are present, - then the last partial utterance will be considered, as if we padded the audio. Otherwise, - it will be discarded, as if we trimmed the audio. If there aren't enough frames for 1 partial - utterance, this parameter is ignored so that the function always returns at least 1 slice. - :param overlap: by how much the partial utterance should overlap. If set to 0, the partial - utterances are entirely disjoint. - :return: the waveform slices and mel spectrogram slices as lists of array slices. Index - respectively the waveform and the mel spectrogram with these slices to obtain the partial - utterances. - """ - assert 0 <= overlap < 1 - assert 0 < min_pad_coverage <= 1 - - samples_per_frame = int((sampling_rate * mel_window_step / 1000)) - n_frames = int(np.ceil((n_samples + 1) / samples_per_frame)) - frame_step = max(int(np.round(partial_utterance_n_frames * (1 - overlap))), 1) - - # Compute the slices - wav_slices, mel_slices = [], [] - steps = max(1, n_frames - partial_utterance_n_frames + frame_step + 1) - for i in range(0, steps, frame_step): - mel_range = np.array([i, i + partial_utterance_n_frames]) - wav_range = mel_range * samples_per_frame - mel_slices.append(slice(*mel_range)) - wav_slices.append(slice(*wav_range)) - - # Evaluate whether extra padding is warranted or not - last_wav_range = wav_slices[-1] - coverage = (n_samples - last_wav_range.start) / (last_wav_range.stop - last_wav_range.start) - if coverage < min_pad_coverage and len(mel_slices) > 1: - mel_slices = mel_slices[:-1] - wav_slices = wav_slices[:-1] - - return wav_slices, mel_slices - - -def embed_utterance(wav, using_partials=True, return_partials=False, **kwargs): - """ - Computes an embedding for a single utterance. - - # TODO: handle multiple wavs to benefit from batching on GPU - :param wav: a preprocessed (see audio.py) utterance waveform as a numpy array of float32 - :param using_partials: if True, then the utterance is split in partial utterances of - frames and the utterance embedding is computed from their - normalized average. If False, the utterance is instead computed from feeding the entire - spectogram to the network. - :param return_partials: if True, the partial embeddings will also be returned along with the - wav slices that correspond to the partial embeddings. - :param kwargs: additional arguments to compute_partial_splits() - :return: the embedding as a numpy array of float32 of shape (model_embedding_size,). If - is True, the partial utterances as a numpy array of float32 of shape - (n_partials, model_embedding_size) and the wav partials as a list of slices will also be - returned. If is simultaneously set to False, both these values will be None - instead. 
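# --- Illustrative aside (added for clarity; not part of the original module). A minimal,
# self-contained sketch of how the functions in this file are typically driven end to end.
# The checkpoint path, the 16 kHz sample rate and the 256-dim embedding size are assumptions.
from pathlib import Path
import numpy as np
import speaker_encoder.inference as encoder

encoder.load_model(Path("encoder.pt"))                  # placeholder checkpoint path
wav = np.random.rand(3 * 16000).astype(np.float32)      # stand-in for a real 16 kHz utterance
wav = encoder.preprocess_wav(wav)                       # preprocessing defined in speaker_encoder.audio
embed = encoder.embed_utterance(wav)                    # L2-normalised mean of the partial embeddings
print(embed.shape)                                      # expected (model_embedding_size,), e.g. (256,)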
- """ - # Process the entire utterance if not using partials - if not using_partials: - frames = audio.wav_to_mel_spectrogram(wav) - embed = embed_frames_batch(frames[None, ...])[0] - if return_partials: - return embed, None, None - return embed - - # Compute where to split the utterance into partials and pad if necessary - wave_slices, mel_slices = compute_partial_slices(len(wav), **kwargs) - max_wave_length = wave_slices[-1].stop - if max_wave_length >= len(wav): - wav = np.pad(wav, (0, max_wave_length - len(wav)), "constant") - - # Split the utterance into partials - frames = audio.wav_to_mel_spectrogram(wav) - frames_batch = np.array([frames[s] for s in mel_slices]) - partial_embeds = embed_frames_batch(frames_batch) - - # Compute the utterance embedding from the partial embeddings - raw_embed = np.mean(partial_embeds, axis=0) - embed = raw_embed / np.linalg.norm(raw_embed, 2) - - if return_partials: - return embed, partial_embeds, wave_slices - return embed - - -def embed_speaker(wavs, **kwargs): - raise NotImplemented() - - -def plot_embedding_as_heatmap(embed, ax=None, title="", shape=None, color_range=(0, 0.30)): - if ax is None: - ax = plt.gca() - - if shape is None: - height = int(np.sqrt(len(embed))) - shape = (height, -1) - embed = embed.reshape(shape) - - cmap = cm.get_cmap() - mappable = ax.imshow(embed, cmap=cmap) - cbar = plt.colorbar(mappable, ax=ax, fraction=0.046, pad=0.04) - cbar.set_clim(*color_range) - - ax.set_xticks([]), ax.set_yticks([]) - ax.set_title(title) diff --git a/spaces/kevinwang676/FreeVC/commons.py b/spaces/kevinwang676/FreeVC/commons.py deleted file mode 100644 index fc384912618494475bda9d68fa76530f4fe2a27b..0000000000000000000000000000000000000000 --- a/spaces/kevinwang676/FreeVC/commons.py +++ /dev/null @@ -1,171 +0,0 @@ -import math -import numpy as np -import torch -from torch import nn -from torch.nn import functional as F - - -def init_weights(m, mean=0.0, std=0.01): - classname = m.__class__.__name__ - if classname.find("Conv") != -1: - m.weight.data.normal_(mean, std) - - -def get_padding(kernel_size, dilation=1): - return int((kernel_size*dilation - dilation)/2) - - -def convert_pad_shape(pad_shape): - l = pad_shape[::-1] - pad_shape = [item for sublist in l for item in sublist] - return pad_shape - - -def intersperse(lst, item): - result = [item] * (len(lst) * 2 + 1) - result[1::2] = lst - return result - - -def kl_divergence(m_p, logs_p, m_q, logs_q): - """KL(P||Q)""" - kl = (logs_q - logs_p) - 0.5 - kl += 0.5 * (torch.exp(2. * logs_p) + ((m_p - m_q)**2)) * torch.exp(-2. 
* logs_q) - return kl - - -def rand_gumbel(shape): - """Sample from the Gumbel distribution, protect from overflows.""" - uniform_samples = torch.rand(shape) * 0.99998 + 0.00001 - return -torch.log(-torch.log(uniform_samples)) - - -def rand_gumbel_like(x): - g = rand_gumbel(x.size()).to(dtype=x.dtype, device=x.device) - return g - - -def slice_segments(x, ids_str, segment_size=4): - ret = torch.zeros_like(x[:, :, :segment_size]) - for i in range(x.size(0)): - idx_str = ids_str[i] - idx_end = idx_str + segment_size - ret[i] = x[i, :, idx_str:idx_end] - return ret - - -def rand_slice_segments(x, x_lengths=None, segment_size=4): - b, d, t = x.size() - if x_lengths is None: - x_lengths = t - ids_str_max = x_lengths - segment_size + 1 - ids_str = (torch.rand([b]).to(device=x.device) * ids_str_max).to(dtype=torch.long) - ret = slice_segments(x, ids_str, segment_size) - return ret, ids_str - - -def rand_spec_segments(x, x_lengths=None, segment_size=4): - b, d, t = x.size() - if x_lengths is None: - x_lengths = t - ids_str_max = x_lengths - segment_size - ids_str = (torch.rand([b]).to(device=x.device) * ids_str_max).to(dtype=torch.long) - ret = slice_segments(x, ids_str, segment_size) - return ret, ids_str - - -def get_timing_signal_1d( - length, channels, min_timescale=1.0, max_timescale=1.0e4): - position = torch.arange(length, dtype=torch.float) - num_timescales = channels // 2 - log_timescale_increment = ( - math.log(float(max_timescale) / float(min_timescale)) / - (num_timescales - 1)) - inv_timescales = min_timescale * torch.exp( - torch.arange(num_timescales, dtype=torch.float) * -log_timescale_increment) - scaled_time = position.unsqueeze(0) * inv_timescales.unsqueeze(1) - signal = torch.cat([torch.sin(scaled_time), torch.cos(scaled_time)], 0) - signal = F.pad(signal, [0, 0, 0, channels % 2]) - signal = signal.view(1, channels, length) - return signal - - -def add_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4): - b, channels, length = x.size() - signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale) - return x + signal.to(dtype=x.dtype, device=x.device) - - -def cat_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4, axis=1): - b, channels, length = x.size() - signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale) - return torch.cat([x, signal.to(dtype=x.dtype, device=x.device)], axis) - - -def subsequent_mask(length): - mask = torch.tril(torch.ones(length, length)).unsqueeze(0).unsqueeze(0) - return mask - - -@torch.jit.script -def fused_add_tanh_sigmoid_multiply(input_a, input_b, n_channels): - n_channels_int = n_channels[0] - in_act = input_a + input_b - t_act = torch.tanh(in_act[:, :n_channels_int, :]) - s_act = torch.sigmoid(in_act[:, n_channels_int:, :]) - acts = t_act * s_act - return acts - - -def convert_pad_shape(pad_shape): - l = pad_shape[::-1] - pad_shape = [item for sublist in l for item in sublist] - return pad_shape - - -def shift_1d(x): - x = F.pad(x, convert_pad_shape([[0, 0], [0, 0], [1, 0]]))[:, :, :-1] - return x - - -def sequence_mask(length, max_length=None): - if max_length is None: - max_length = length.max() - x = torch.arange(max_length, dtype=length.dtype, device=length.device) - return x.unsqueeze(0) < length.unsqueeze(1) - - -def generate_path(duration, mask): - """ - duration: [b, 1, t_x] - mask: [b, 1, t_y, t_x] - """ - device = duration.device - - b, _, t_y, t_x = mask.shape - cum_duration = torch.cumsum(duration, -1) - - cum_duration_flat = cum_duration.view(b * t_x) - path = 
sequence_mask(cum_duration_flat, t_y).to(mask.dtype) - path = path.view(b, t_x, t_y) - path = path - F.pad(path, convert_pad_shape([[0, 0], [1, 0], [0, 0]]))[:, :-1] - path = path.unsqueeze(1).transpose(2,3) * mask - return path - - -def clip_grad_value_(parameters, clip_value, norm_type=2): - if isinstance(parameters, torch.Tensor): - parameters = [parameters] - parameters = list(filter(lambda p: p.grad is not None, parameters)) - norm_type = float(norm_type) - if clip_value is not None: - clip_value = float(clip_value) - - total_norm = 0 - for p in parameters: - param_norm = p.grad.data.norm(norm_type) - total_norm += param_norm.item() ** norm_type - if clip_value is not None: - p.grad.data.clamp_(min=-clip_value, max=clip_value) - total_norm = total_norm ** (1. / norm_type) - return total_norm diff --git a/spaces/kevinwang676/vits-fast-finetuning-pcr/monotonic_align/core.py b/spaces/kevinwang676/vits-fast-finetuning-pcr/monotonic_align/core.py deleted file mode 100644 index 1f940605fe4fd0738fa0006149fcba14ef88223a..0000000000000000000000000000000000000000 --- a/spaces/kevinwang676/vits-fast-finetuning-pcr/monotonic_align/core.py +++ /dev/null @@ -1,36 +0,0 @@ -import numba - - -@numba.jit(numba.void(numba.int32[:, :, ::1], numba.float32[:, :, ::1], numba.int32[::1], numba.int32[::1]), - nopython=True, nogil=True) -def maximum_path_jit(paths, values, t_ys, t_xs): - b = paths.shape[0] - max_neg_val = -1e9 - for i in range(int(b)): - path = paths[i] - value = values[i] - t_y = t_ys[i] - t_x = t_xs[i] - - v_prev = v_cur = 0.0 - index = t_x - 1 - - for y in range(t_y): - for x in range(max(0, t_x + y - t_y), min(t_x, y + 1)): - if x == y: - v_cur = max_neg_val - else: - v_cur = value[y - 1, x] - if x == 0: - if y == 0: - v_prev = 0. - else: - v_prev = max_neg_val - else: - v_prev = value[y - 1, x - 1] - value[y, x] += max(v_prev, v_cur) - - for y in range(t_y - 1, -1, -1): - path[y, index] = 1 - if index != 0 and (index == y or value[y - 1, index] < value[y - 1, index - 1]): - index = index - 1 diff --git a/spaces/ko5cles/lyric_writer/README.md b/spaces/ko5cles/lyric_writer/README.md deleted file mode 100644 index 6729e473b8c5aa5e4054a66fb94569fc19a41e4a..0000000000000000000000000000000000000000 --- a/spaces/ko5cles/lyric_writer/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Lyric Writer -emoji: 👁 -colorFrom: purple -colorTo: gray -sdk: streamlit -sdk_version: 1.26.0 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/koajoel/PolyFormer/fairseq/examples/speech_synthesis/data_utils.py b/spaces/koajoel/PolyFormer/fairseq/examples/speech_synthesis/data_utils.py deleted file mode 100644 index f43a4a90046fb9ee4944dc06ba377c1faade141d..0000000000000000000000000000000000000000 --- a/spaces/koajoel/PolyFormer/fairseq/examples/speech_synthesis/data_utils.py +++ /dev/null @@ -1,320 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
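# --- Illustrative aside (added for clarity; not part of the original repositories). A minimal
# sketch of how a numba kernel such as maximum_path_jit from monotonic_align/core.py above is
# typically wrapped for torch tensors. The [b, t_y, t_x] shape convention, the module path and
# the wrapper name are assumptions, not the projects' confirmed API.
import numpy as np
import torch
from monotonic_align.core import maximum_path_jit

def maximum_path(neg_cent: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
    # neg_cent: alignment scores of shape [b, t_y, t_x]; mask: validity mask of the same shape.
    device, dtype = neg_cent.device, neg_cent.dtype
    scores = np.ascontiguousarray(neg_cent.detach().cpu().numpy().astype(np.float32))
    path = np.zeros(scores.shape, dtype=np.int32)
    t_ys = np.ascontiguousarray(mask.sum(1)[:, 0].detach().cpu().numpy().astype(np.int32))
    t_xs = np.ascontiguousarray(mask.sum(2)[:, 0].detach().cpu().numpy().astype(np.int32))
    maximum_path_jit(path, scores, t_ys, t_xs)  # fills `path` in place with the best monotonic alignment
    return torch.from_numpy(path).to(device=device, dtype=dtype)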
- -import os -from pathlib import Path -from typing import Optional, List, Dict -import zipfile -import tempfile -from dataclasses import dataclass -from itertools import groupby - -import torch -import torch.nn.functional as F -import numpy as np -from tqdm import tqdm - -from examples.speech_to_text.data_utils import load_tsv_to_dicts -from fairseq.data.audio.audio_utils import TTSSpectrogram, TTSMelScale - - -def trim_or_pad_to_target_length( - data_1d_or_2d: np.ndarray, target_length: int -) -> np.ndarray: - assert len(data_1d_or_2d.shape) in {1, 2} - delta = data_1d_or_2d.shape[0] - target_length - if delta >= 0: # trim if being longer - data_1d_or_2d = data_1d_or_2d[: target_length] - else: # pad if being shorter - if len(data_1d_or_2d.shape) == 1: - data_1d_or_2d = np.concatenate( - [data_1d_or_2d, np.zeros(-delta)], axis=0 - ) - else: - data_1d_or_2d = np.concatenate( - [data_1d_or_2d, np.zeros((-delta, data_1d_or_2d.shape[1]))], - axis=0 - ) - return data_1d_or_2d - - -def extract_logmel_spectrogram( - waveform: torch.Tensor, sample_rate: int, - output_path: Optional[Path] = None, win_length: int = 1024, - hop_length: int = 256, n_fft: int = 1024, - win_fn: callable = torch.hann_window, n_mels: int = 80, - f_min: float = 0., f_max: float = 8000, eps: float = 1e-5, - overwrite: bool = False, target_length: Optional[int] = None -): - if output_path is not None and output_path.is_file() and not overwrite: - return - - spectrogram_transform = TTSSpectrogram( - n_fft=n_fft, win_length=win_length, hop_length=hop_length, - window_fn=win_fn - ) - mel_scale_transform = TTSMelScale( - n_mels=n_mels, sample_rate=sample_rate, f_min=f_min, f_max=f_max, - n_stft=n_fft // 2 + 1 - ) - spectrogram = spectrogram_transform(waveform) - mel_spec = mel_scale_transform(spectrogram) - logmel_spec = torch.clamp(mel_spec, min=eps).log() - assert len(logmel_spec.shape) == 3 and logmel_spec.shape[0] == 1 - logmel_spec = logmel_spec.squeeze().t() # D x T -> T x D - if target_length is not None: - trim_or_pad_to_target_length(logmel_spec, target_length) - - if output_path is not None: - np.save(output_path.as_posix(), logmel_spec) - else: - return logmel_spec - - -def extract_pitch( - waveform: torch.Tensor, sample_rate: int, - output_path: Optional[Path] = None, hop_length: int = 256, - log_scale: bool = True, phoneme_durations: Optional[List[int]] = None -): - if output_path is not None and output_path.is_file(): - return - - try: - import pyworld - except ImportError: - raise ImportError("Please install PyWORLD: pip install pyworld") - - _waveform = waveform.squeeze(0).double().numpy() - pitch, t = pyworld.dio( - _waveform, sample_rate, frame_period=hop_length / sample_rate * 1000 - ) - pitch = pyworld.stonemask(_waveform, pitch, t, sample_rate) - - if phoneme_durations is not None: - pitch = trim_or_pad_to_target_length(pitch, sum(phoneme_durations)) - try: - from scipy.interpolate import interp1d - except ImportError: - raise ImportError("Please install SciPy: pip install scipy") - nonzero_ids = np.where(pitch != 0)[0] - interp_fn = interp1d( - nonzero_ids, - pitch[nonzero_ids], - fill_value=(pitch[nonzero_ids[0]], pitch[nonzero_ids[-1]]), - bounds_error=False, - ) - pitch = interp_fn(np.arange(0, len(pitch))) - d_cumsum = np.cumsum(np.concatenate([np.array([0]), phoneme_durations])) - pitch = np.array( - [ - np.mean(pitch[d_cumsum[i-1]: d_cumsum[i]]) - for i in range(1, len(d_cumsum)) - ] - ) - assert len(pitch) == len(phoneme_durations) - - if log_scale: - pitch = np.log(pitch + 1) - - if output_path is 
not None: - np.save(output_path.as_posix(), pitch) - else: - return pitch - - -def extract_energy( - waveform: torch.Tensor, output_path: Optional[Path] = None, - hop_length: int = 256, n_fft: int = 1024, log_scale: bool = True, - phoneme_durations: Optional[List[int]] = None -): - if output_path is not None and output_path.is_file(): - return - - assert len(waveform.shape) == 2 and waveform.shape[0] == 1 - waveform = waveform.view(1, 1, waveform.shape[1]) - waveform = F.pad( - waveform.unsqueeze(1), [n_fft // 2, n_fft // 2, 0, 0], - mode="reflect" - ) - waveform = waveform.squeeze(1) - - fourier_basis = np.fft.fft(np.eye(n_fft)) - cutoff = int((n_fft / 2 + 1)) - fourier_basis = np.vstack( - [np.real(fourier_basis[:cutoff, :]), - np.imag(fourier_basis[:cutoff, :])] - ) - - forward_basis = torch.FloatTensor(fourier_basis[:, None, :]) - forward_transform = F.conv1d( - waveform, forward_basis, stride=hop_length, padding=0 - ) - - real_part = forward_transform[:, :cutoff, :] - imag_part = forward_transform[:, cutoff:, :] - magnitude = torch.sqrt(real_part ** 2 + imag_part ** 2) - energy = torch.norm(magnitude, dim=1).squeeze(0).numpy() - - if phoneme_durations is not None: - energy = trim_or_pad_to_target_length(energy, sum(phoneme_durations)) - d_cumsum = np.cumsum(np.concatenate([np.array([0]), phoneme_durations])) - energy = np.array( - [ - np.mean(energy[d_cumsum[i - 1]: d_cumsum[i]]) - for i in range(1, len(d_cumsum)) - ] - ) - assert len(energy) == len(phoneme_durations) - - if log_scale: - energy = np.log(energy + 1) - - if output_path is not None: - np.save(output_path.as_posix(), energy) - else: - return energy - - -def get_global_cmvn(feature_root: Path, output_path: Optional[Path] = None): - mean_x, mean_x2, n_frames = None, None, 0 - feature_paths = feature_root.glob("*.npy") - for p in tqdm(feature_paths): - with open(p, 'rb') as f: - frames = np.load(f).squeeze() - - n_frames += frames.shape[0] - - cur_mean_x = frames.sum(axis=0) - if mean_x is None: - mean_x = cur_mean_x - else: - mean_x += cur_mean_x - - cur_mean_x2 = (frames ** 2).sum(axis=0) - if mean_x2 is None: - mean_x2 = cur_mean_x2 - else: - mean_x2 += cur_mean_x2 - - mean_x /= n_frames - mean_x2 /= n_frames - var_x = mean_x2 - mean_x ** 2 - std_x = np.sqrt(np.maximum(var_x, 1e-10)) - - if output_path is not None: - with open(output_path, 'wb') as f: - np.savez(f, mean=mean_x, std=std_x) - else: - return {"mean": mean_x, "std": std_x} - - -def ipa_phonemize(text, lang="en-us", use_g2p=False): - if use_g2p: - assert lang == "en-us", "g2pE phonemizer only works for en-us" - try: - from g2p_en import G2p - g2p = G2p() - return " ".join("|" if p == " " else p for p in g2p(text)) - except ImportError: - raise ImportError( - "Please install phonemizer: pip install g2p_en" - ) - else: - try: - from phonemizer import phonemize - from phonemizer.separator import Separator - return phonemize( - text, backend='espeak', language=lang, - separator=Separator(word="| ", phone=" ") - ) - except ImportError: - raise ImportError( - "Please install phonemizer: pip install phonemizer" - ) - - -@dataclass -class ForceAlignmentInfo(object): - tokens: List[str] - frame_durations: List[int] - start_sec: Optional[float] - end_sec: Optional[float] - - -def get_mfa_alignment_by_sample_id( - textgrid_zip_path: str, sample_id: str, sample_rate: int, - hop_length: int, silence_phones: List[str] = ("sil", "sp", "spn") -) -> ForceAlignmentInfo: - try: - import tgt - except ImportError: - raise ImportError("Please install TextGridTools: pip install 
tgt") - - filename = f"{sample_id}.TextGrid" - out_root = Path(tempfile.gettempdir()) - tgt_path = out_root / filename - with zipfile.ZipFile(textgrid_zip_path) as f_zip: - f_zip.extract(filename, path=out_root) - textgrid = tgt.io.read_textgrid(tgt_path.as_posix()) - os.remove(tgt_path) - - phones, frame_durations = [], [] - start_sec, end_sec, end_idx = 0, 0, 0 - for t in textgrid.get_tier_by_name("phones")._objects: - s, e, p = t.start_time, t.end_time, t.text - # Trim leading silences - if len(phones) == 0: - if p in silence_phones: - continue - else: - start_sec = s - phones.append(p) - if p not in silence_phones: - end_sec = e - end_idx = len(phones) - r = sample_rate / hop_length - frame_durations.append(int(np.round(e * r) - np.round(s * r))) - # Trim tailing silences - phones = phones[:end_idx] - frame_durations = frame_durations[:end_idx] - - return ForceAlignmentInfo( - tokens=phones, frame_durations=frame_durations, start_sec=start_sec, - end_sec=end_sec - ) - - -def get_mfa_alignment( - textgrid_zip_path: str, sample_ids: List[str], sample_rate: int, - hop_length: int -) -> Dict[str, ForceAlignmentInfo]: - return { - i: get_mfa_alignment_by_sample_id( - textgrid_zip_path, i, sample_rate, hop_length - ) for i in tqdm(sample_ids) - } - - -def get_unit_alignment( - id_to_unit_tsv_path: str, sample_ids: List[str] -) -> Dict[str, ForceAlignmentInfo]: - id_to_units = { - e["id"]: e["units"] for e in load_tsv_to_dicts(id_to_unit_tsv_path) - } - id_to_units = {i: id_to_units[i].split() for i in sample_ids} - id_to_units_collapsed = { - i: [uu for uu, _ in groupby(u)] for i, u in id_to_units.items() - } - id_to_durations = { - i: [len(list(g)) for _, g in groupby(u)] for i, u in id_to_units.items() - } - - return { - i: ForceAlignmentInfo( - tokens=id_to_units_collapsed[i], frame_durations=id_to_durations[i], - start_sec=None, end_sec=None - ) - for i in sample_ids - } diff --git a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/attrs/filters.py b/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/attrs/filters.py deleted file mode 100644 index 52959005b088f0e5116c8b6acdbcc5937bbaacc8..0000000000000000000000000000000000000000 --- a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/attrs/filters.py +++ /dev/null @@ -1,3 +0,0 @@ -# SPDX-License-Identifier: MIT - -from attr.filters import * # noqa diff --git a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/importlib_resources/tests/test_open.py b/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/importlib_resources/tests/test_open.py deleted file mode 100644 index 83b737dc85cb674d3c76f4f2676856d98d65d264..0000000000000000000000000000000000000000 --- a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/importlib_resources/tests/test_open.py +++ /dev/null @@ -1,85 +0,0 @@ -import unittest - -import importlib_resources as resources -from . import data01 -from . 
import util - - -class CommonBinaryTests(util.CommonTests, unittest.TestCase): - def execute(self, package, path): - target = resources.files(package).joinpath(path) - with target.open('rb'): - pass - - -class CommonTextTests(util.CommonTests, unittest.TestCase): - def execute(self, package, path): - target = resources.files(package).joinpath(path) - with target.open(encoding='utf-8'): - pass - - -class OpenTests: - def test_open_binary(self): - target = resources.files(self.data) / 'binary.file' - with target.open('rb') as fp: - result = fp.read() - self.assertEqual(result, b'\x00\x01\x02\x03') - - def test_open_text_default_encoding(self): - target = resources.files(self.data) / 'utf-8.file' - with target.open(encoding='utf-8') as fp: - result = fp.read() - self.assertEqual(result, 'Hello, UTF-8 world!\n') - - def test_open_text_given_encoding(self): - target = resources.files(self.data) / 'utf-16.file' - with target.open(encoding='utf-16', errors='strict') as fp: - result = fp.read() - self.assertEqual(result, 'Hello, UTF-16 world!\n') - - def test_open_text_with_errors(self): - """ - Raises UnicodeError without the 'errors' argument. - """ - target = resources.files(self.data) / 'utf-16.file' - with target.open(encoding='utf-8', errors='strict') as fp: - self.assertRaises(UnicodeError, fp.read) - with target.open(encoding='utf-8', errors='ignore') as fp: - result = fp.read() - self.assertEqual( - result, - 'H\x00e\x00l\x00l\x00o\x00,\x00 ' - '\x00U\x00T\x00F\x00-\x001\x006\x00 ' - '\x00w\x00o\x00r\x00l\x00d\x00!\x00\n\x00', - ) - - def test_open_binary_FileNotFoundError(self): - target = resources.files(self.data) / 'does-not-exist' - with self.assertRaises(FileNotFoundError): - target.open('rb') - - def test_open_text_FileNotFoundError(self): - target = resources.files(self.data) / 'does-not-exist' - with self.assertRaises(FileNotFoundError): - target.open(encoding='utf-8') - - -class OpenDiskTests(OpenTests, unittest.TestCase): - def setUp(self): - self.data = data01 - - -class OpenDiskNamespaceTests(OpenTests, unittest.TestCase): - def setUp(self): - from . 
import namespacedata01 - - self.data = namespacedata01 - - -class OpenZipTests(OpenTests, util.ZipSetup, unittest.TestCase): - pass - - -if __name__ == '__main__': - unittest.main() diff --git a/spaces/lauraibnz/midi-audioldm/app.py b/spaces/lauraibnz/midi-audioldm/app.py deleted file mode 100644 index 26fe077d6ad3ea9e02571b1393ec789762baa163..0000000000000000000000000000000000000000 --- a/spaces/lauraibnz/midi-audioldm/app.py +++ /dev/null @@ -1,89 +0,0 @@ -import gradio as gr -from diffusers import AudioLDMControlNetPipeline, ControlNetModel -import os -from pretty_midi import PrettyMIDI -from tempfile import _TemporaryFileWrapper -import torch -import torchaudio - -SAMPLE_RATE=16000 - -if torch.cuda.is_available(): - device = "cuda" - torch_dtype = torch.float16 -else: - device = "cpu" - torch_dtype = torch.float32 - -controlnet = ControlNetModel.from_pretrained( - "lauraibnz/midi-audioldm-v2", torch_dtype=torch_dtype) -pipe = AudioLDMControlNetPipeline.from_pretrained( - "cvssp/audioldm-m-full", controlnet=controlnet, torch_dtype=torch_dtype) -pipe = pipe.to(device) -generator = torch.Generator(device) - -def predict(midi_file=None, midi_synth=None, prompt="", neg_prompt="", duration=None, seed=0, cond=1, inf=20, guidance_scale=2.5, guess=False): - if isinstance(midi_file, _TemporaryFileWrapper): - midi_file = midi_file.name - midi = PrettyMIDI(midi_file) - if not duration or duration == 0: - duration = midi_synth[1].shape[0]/SAMPLE_RATE - if not prompt and not neg_prompt: - guess_mode = True - audio = pipe( - prompt, - negative_prompt=neg_prompt, - midi=midi, - audio_length_in_s=duration, - num_inference_steps=inf, - controlnet_conditioning_scale=float(cond), - guess_mode=guess, - generator=generator.manual_seed(int(seed)), - guidance_scale=float(guidance_scale), - ) - return (SAMPLE_RATE, audio.audios.T) - -def synthesize(midi_file=None): - if isinstance(midi_file, _TemporaryFileWrapper): - midi_file = midi_file.name - midi = PrettyMIDI(midi_file) - midi_synth = midi.synthesize(fs=SAMPLE_RATE) - midi_synth = midi_synth.reshape(midi_synth.shape[0], 1) - return (SAMPLE_RATE, midi_synth) - -def run_example(midi_file=None, prompt="", neg_prompt="", duration=None, seed=0, cond=1, inf=20, guidance_scale=2.5, guess=False): - midi_synth = synthesize(midi_file) - gen_audio = predict(midi_file, midi_synth, prompt, neg_prompt, duration, seed, cond, inf, guidance_scale, guess) - return midi_synth, gen_audio - -with gr.Blocks(title="🎹 MIDI-AudioLDM", theme=gr.themes.Base(text_size=gr.themes.sizes.text_md, font=[gr.themes.GoogleFont("Nunito Sans")])) as demo: - gr.HTML( - """ -

        🎹 MIDI-AudioLDM

        - """) - gr.Markdown( - """ - MIDI-AudioLDM is a MIDI-conditioned text-to-audio model based on the project [AudioLDM](https://huggingface.co/spaces/haoheliu/audioldm-text-to-audio-generation). The model has been conditioned using the ControlNet architecture and has been developed within Hugging Face’s [🧨 Diffusers](https://huggingface.co/docs/diffusers/) framework. Once trained, MIDI-AudioLDM accepts a MIDI file and a text prompt as input and returns an audio file, which is an interpretation of the MIDI based on the given text description. This enables detailed control over different musical aspects such as notes, mood and timbre. - """) - with gr.Column(variant='panel'): - midi = gr.File(label="midi file", file_types=[".mid"]) - prompt = gr.Textbox(label="prompt", info="Enter a descriptive text prompt to guide the audio generation.") - with gr.Row(): - with gr.Column(): - midi_synth = gr.Audio(label="synthesized midi") - midi.upload(synthesize, midi, midi_synth) - with gr.Column(): - audio = gr.Audio(label="generated audio") - with gr.Accordion("Advanced Settings", open=False): - duration = gr.Slider(0, 20, step=2.5, label="duration", info="Modify the duration in seconds of the output audio file. If not set it will be determined by the MIDI file.") - inf = gr.Slider(0, 100, value=40, step=1, label="inference steps", info="Edit the number of denoising steps. A larger number usually leads to higher quality but slower results.") - guidance_scale = gr.Slider(0, 4, value=2.5, step=0.5, label="guidance scale", info="Modify the guidance scale. The higher the value the more linked the generated audio to the text prompt, sometimes at the expense of lower quality.") - neg_prompt = gr.Textbox(label="negative prompt", info="Optionally enter a negative text prompt not to guide the audio generation.") - seed = gr.Number(value=48, label="random seed", info="Change the random seed for a different generation result.") - cond = gr.Slider(0.0, 1.0, value=1.0, step=0.1, label="conditioning scale", info="Choose a value between 0 and 1. The larger the more it will take the conditioning into account. Lower values are recommended for more creative prompts.") - guess = gr.Checkbox(label="guess mode", info="Optionally select guess mode. 
If so, the model will try to recognize the content of the MIDI without the need of a text prompt.") - btn = gr.Button("Generate") - btn.click(predict, inputs=[midi, midi_synth, prompt, neg_prompt, duration, seed, cond, inf, guidance_scale, guess], outputs=[audio]) - gr.Examples(examples=[["S00.mid", "piano", "", 10, 48, 1.0, 20, 2.5, False], ["S00.mid", "violin", "", 10, 48, 1.0, 20, 2.5, False], ["S00.mid", "woman singing, studio recording", "noise", 10, 48, 1.0, 20, 2.5, False], ["S00.mid", "jazz band, clean", "noise", 10, 48, 1.0, 20, 2.5, False], ["S00.mid", "choir", "noise, percussion", 10, 48, 1.0, 20, 2.5, False]], inputs=[midi, prompt, neg_prompt, duration, seed, cond, inf, guidance_scale, guess], fn=run_example, outputs=[midi_synth, audio], cache_examples=True) - -demo.launch() \ No newline at end of file diff --git a/spaces/leafShen/CodeFormer/CodeFormer/basicsr/utils/download_util.py b/spaces/leafShen/CodeFormer/CodeFormer/basicsr/utils/download_util.py deleted file mode 100644 index 2a267915743ee3f3232bc8fe992466b52468979a..0000000000000000000000000000000000000000 --- a/spaces/leafShen/CodeFormer/CodeFormer/basicsr/utils/download_util.py +++ /dev/null @@ -1,95 +0,0 @@ -import math -import os -import requests -from torch.hub import download_url_to_file, get_dir -from tqdm import tqdm -from urllib.parse import urlparse - -from .misc import sizeof_fmt - - -def download_file_from_google_drive(file_id, save_path): - """Download files from google drive. - Ref: - https://stackoverflow.com/questions/25010369/wget-curl-large-file-from-google-drive # noqa E501 - Args: - file_id (str): File id. - save_path (str): Save path. - """ - - session = requests.Session() - URL = 'https://docs.google.com/uc?export=download' - params = {'id': file_id} - - response = session.get(URL, params=params, stream=True) - token = get_confirm_token(response) - if token: - params['confirm'] = token - response = session.get(URL, params=params, stream=True) - - # get file size - response_file_size = session.get(URL, params=params, stream=True, headers={'Range': 'bytes=0-2'}) - print(response_file_size) - if 'Content-Range' in response_file_size.headers: - file_size = int(response_file_size.headers['Content-Range'].split('/')[1]) - else: - file_size = None - - save_response_content(response, save_path, file_size) - - -def get_confirm_token(response): - for key, value in response.cookies.items(): - if key.startswith('download_warning'): - return value - return None - - -def save_response_content(response, destination, file_size=None, chunk_size=32768): - if file_size is not None: - pbar = tqdm(total=math.ceil(file_size / chunk_size), unit='chunk') - - readable_file_size = sizeof_fmt(file_size) - else: - pbar = None - - with open(destination, 'wb') as f: - downloaded_size = 0 - for chunk in response.iter_content(chunk_size): - downloaded_size += chunk_size - if pbar is not None: - pbar.update(1) - pbar.set_description(f'Download {sizeof_fmt(downloaded_size)} / {readable_file_size}') - if chunk: # filter out keep-alive new chunks - f.write(chunk) - if pbar is not None: - pbar.close() - - -def load_file_from_url(url, model_dir=None, progress=True, file_name=None): - """Load file form http url, will download models if necessary. - Ref:https://github.com/1adrianb/face-alignment/blob/master/face_alignment/utils.py - Args: - url (str): URL to be downloaded. - model_dir (str): The path to save the downloaded model. Should be a full path. If None, use pytorch hub_dir. - Default: None. 
- progress (bool): Whether to show the download progress. Default: True. - file_name (str): The downloaded file name. If None, use the file name in the url. Default: None. - Returns: - str: The path to the downloaded file. - """ - if model_dir is None: # use the pytorch hub_dir - hub_dir = get_dir() - model_dir = os.path.join(hub_dir, 'checkpoints') - - os.makedirs(model_dir, exist_ok=True) - - parts = urlparse(url) - filename = os.path.basename(parts.path) - if file_name is not None: - filename = file_name - cached_file = os.path.abspath(os.path.join(model_dir, filename)) - if not os.path.exists(cached_file): - print(f'Downloading: "{url}" to {cached_file}\n') - download_url_to_file(url, cached_file, hash_prefix=None, progress=progress) - return cached_file \ No newline at end of file diff --git a/spaces/leilevy/bingo/tests/kblob.ts b/spaces/leilevy/bingo/tests/kblob.ts deleted file mode 100644 index 9e15b41c1c94a690beb61b23cdb42fc78767ccd2..0000000000000000000000000000000000000000 --- a/spaces/leilevy/bingo/tests/kblob.ts +++ /dev/null @@ -1,27 +0,0 @@ -import FormData from 'form-data' - -import { fetch } from '@/lib/isomorphic' - -const formData = new FormData() - -const knowledgeRequest = {"imageInfo":{"url":"https://www.baidu.com/img/PCfb_5bf082d29588c07f842ccde3f97243ea.png"},"knowledgeRequest":{"invokedSkills":["ImageById"],"subscriptionId":"Bing.Chat.Multimodal","invokedSkillsRequestData":{"enableFaceBlur":true},"convoData":{"convoid":"51D|BingProdUnAuthenticatedUsers|E3DCA904FF236C67C3450163BCEC64CFF3F618CC8A4AFD75FD518F5ED0ADA080","convotone":"Creative"}}} - -formData.append('knowledgeRequest', JSON.stringify(knowledgeRequest)) - - -fetch('https://bing.vcanbb.top/images/kblob', - { - method: 'POST', - body: formData.getBuffer(), - headers: { - "sec-ch-ua": "\"Not/A)Brand\";v=\"99\", \"Google Chrome\";v=\"115\", \"Chromium\";v=\"115\"", - "sec-ch-ua-mobile": "?0", - "sec-ch-ua-platform": "\"Windows\"", - "Referer": "https://bing.vcanbb.top/web/index.html", - "Referrer-Policy": "origin-when-cross-origin", - ...formData.getHeaders() - } - - } -).then(res => res.text()) -.then(res => console.log('res', res)) diff --git a/spaces/leogabraneth/text-generation-webui-main/extensions/multimodal/pipelines/llava/llava.py b/spaces/leogabraneth/text-generation-webui-main/extensions/multimodal/pipelines/llava/llava.py deleted file mode 100644 index 09b5aff7b8826b5da2645be39075b599af2d24da..0000000000000000000000000000000000000000 --- a/spaces/leogabraneth/text-generation-webui-main/extensions/multimodal/pipelines/llava/llava.py +++ /dev/null @@ -1,262 +0,0 @@ -import time -from abc import abstractmethod -from typing import List, Tuple - -import torch -from huggingface_hub import hf_hub_download -from PIL import Image -from transformers import CLIPImageProcessor, CLIPVisionModel - -from extensions.multimodal.abstract_pipeline import AbstractMultimodalPipeline -from modules import shared -from modules.logging_colors import logger -from modules.text_generation import encode - - -def expand2square(pil_img: Image.Image, background_color: Tuple[int]) -> Image.Image: - width, height = pil_img.size - if width == height: - return pil_img - elif width > height: - result = Image.new(pil_img.mode, (width, width), background_color) - result.paste(pil_img, (0, (width - height) // 2)) - return result - else: - result = Image.new(pil_img.mode, (height, height), background_color) - result.paste(pil_img, ((height - width) // 2, 0)) - return result - - -class LLaVA_v0_Pipeline(AbstractMultimodalPipeline): - 
CLIP_REPO = "openai/clip-vit-large-patch14" - - def __init__(self, params: dict) -> None: - super().__init__() - self.clip_device = self._get_device("vision_device", params) - self.clip_dtype = self._get_dtype("vision_bits", params) - self.projector_device = self._get_device("projector_device", params) - self.projector_dtype = self._get_dtype("projector_bits", params) - self.image_processor, self.vision_tower, self.mm_projector = self._load_models() - - def _load_models(self): - start_ts = time.time() - - logger.info(f"LLaVA - Loading CLIP from {self.CLIP_REPO} as {self.clip_dtype} on {self.clip_device}...") - image_processor = CLIPImageProcessor.from_pretrained(self.CLIP_REPO, torch_dtype=self.clip_dtype) - vision_tower = CLIPVisionModel.from_pretrained(self.CLIP_REPO, torch_dtype=self.clip_dtype).to(self.clip_device) - - logger.info(f"LLaVA - Loading projector from {self.llava_projector_repo()} as {self.projector_dtype} on {self.projector_device}...") - projector_path = hf_hub_download(self.llava_projector_repo(), self.llava_projector_filename()) - mm_projector = self.build_mm_projector() - projector_data = torch.load(projector_path) - projector_data = {k[19:]: v for k, v in projector_data.items() if k.startswith('model.mm_projector.')} - mm_projector.load_state_dict(projector_data) - mm_projector = mm_projector.to(self.projector_device) - - logger.info(f"LLaVA supporting models loaded, took {time.time() - start_ts:.2f} seconds") - return image_processor, vision_tower, mm_projector - - def build_mm_projector(self) -> torch.nn.Module: - projector_shape = self.llava_projector_shape() - if len(projector_shape) == 2: - return torch.nn.Linear(*projector_shape) - else: - modules = [] - modules.append(torch.nn.Linear(projector_shape[0], projector_shape[1])) - for i in range(2, len(projector_shape)): - modules.append(torch.nn.GELU()) - modules.append(torch.nn.Linear(projector_shape[i-1], projector_shape[i])) - return torch.nn.Sequential(*modules) - - @staticmethod - def image_start() -> str: - return "" - - @staticmethod - def image_end() -> str: - return "" - - @staticmethod - def num_image_embeds() -> int: - return 256 - - @staticmethod - def embed_tokens(input_ids: torch.Tensor) -> torch.Tensor: - for attr in ['', 'model', 'model.model', 'model.model.model']: - tmp = getattr(shared.model, attr, None) if attr != '' else shared.model - if tmp is not None and hasattr(tmp, 'embed_tokens'): - func = tmp.embed_tokens - break - else: - raise ValueError('The embed_tokens method has not been found for this loader.') - - return func(input_ids).to(shared.model.device, dtype=shared.model.dtype) - - @staticmethod - def placeholder_embeddings() -> torch.Tensor: - return LLaVA_v0_Pipeline.embed_tokens(encode(""*256, add_bos_token=False)[0]) - - def embed_images(self, images: List[Image.Image]) -> torch.Tensor: - images = self.image_processor(images, return_tensors='pt')['pixel_values'] - images = images.to(self.clip_device, dtype=self.clip_dtype) - - with torch.no_grad(): - image_forward_outs = self.vision_tower(images, output_hidden_states=True) - select_hidden_state_layer = -2 - select_hidden_state = image_forward_outs.hidden_states[select_hidden_state_layer] - image_features = select_hidden_state[:, 1:].to(self.projector_device, dtype=self.projector_dtype) - image_features = self.mm_projector(image_features) - return image_features.to(shared.model.device, dtype=shared.model.dtype) - - @staticmethod - @abstractmethod - def llava_projector_repo() -> str: - pass - - @staticmethod - @abstractmethod - def 
llava_projector_filename() -> str: - pass - - @staticmethod - @abstractmethod - def llava_projector_shape() -> Tuple[int, int]: - pass - - -class LLaVA_v0_13B_Pipeline(LLaVA_v0_Pipeline): - def __init__(self, params: dict) -> None: - super().__init__(params) - - @staticmethod - def name() -> str: - return "llava-13b" - - @staticmethod - def placeholder_token_id() -> int: - return 32000 - - @staticmethod - def llava_projector_shape() -> Tuple[int, int]: - return (1024, 5120) - - @staticmethod - def llava_projector_filename() -> str: - return "mm_projector.bin" - - @staticmethod - def llava_projector_repo() -> str: - return "liuhaotian/LLaVA-13b-delta-v0" - - -class LLaVA_v0_7B_Pipeline(LLaVA_v0_Pipeline): - def __init__(self, params: dict) -> None: - super().__init__(params) - - @staticmethod - def name() -> str: - return "llava-7b" - - @staticmethod - def placeholder_token_id() -> int: - return 32001 - - @staticmethod - def llava_projector_shape() -> Tuple[int, int]: - return (1024, 4096) - - @staticmethod - def llava_projector_filename() -> str: - return "mm_projector.bin" - - @staticmethod - def llava_projector_repo() -> str: - return "liuhaotian/LLaVA-7b-delta-v0" - - -class LLaVA_LLaMA_2_13B_Pipeline(LLaVA_v0_13B_Pipeline): - def __init__(self, params: dict) -> None: - super().__init__(params) - - @staticmethod - def name() -> str: - return "llava-llama-2-13b" - - @staticmethod - def placeholder_token_id() -> int: - return 0 - - @staticmethod - def llava_projector_repo() -> str: - return "liuhaotian/llava-llama-2-13b-chat-lightning-preview" - - @staticmethod - def image_start() -> str: - return "" - - @staticmethod - def image_end() -> str: - return "" - - @staticmethod - def placeholder_embeddings() -> torch.Tensor: - return LLaVA_v0_Pipeline.embed_tokens(encode(""*256, add_bos_token=False)[0]) - - -class LLaVA_v1_5_13B_Pipeline(LLaVA_v0_13B_Pipeline): - CLIP_REPO = "openai/clip-vit-large-patch14-336" - - def __init__(self, params: dict) -> None: - super().__init__(params) - - @staticmethod - def name() -> str: - return "llava-v1.5-13b" - - @staticmethod - def llava_projector_shape() -> Tuple[int, int]: - return (1024, 5120, 5120) - - @staticmethod - def placeholder_token_id() -> int: - return 0 - - @staticmethod - def llava_projector_repo() -> str: - return "liuhaotian/llava-v1.5-13b" - - @staticmethod - def image_start() -> str: - return "" - - @staticmethod - def image_end() -> str: - return "" - - @staticmethod - def num_image_embeds() -> int: - return 576 - - def embed_images(self, images: List[Image.Image]) -> torch.Tensor: - # pad it to square first - images = [ - expand2square(image, tuple(int(x*255) for x in self.image_processor.image_mean)) - for image in images - ] - return super().embed_images(images) - - @staticmethod - def placeholder_embeddings() -> torch.Tensor: - return LLaVA_v0_Pipeline.embed_tokens(encode(""*576, add_bos_token=False)[0]) - -class LLaVA_v1_5_7B_Pipeline(LLaVA_v1_5_13B_Pipeline): - @staticmethod - def name() -> str: - return "llava-v1.5-7b" - - @staticmethod - def llava_projector_shape() -> Tuple[int, int]: - return (1024, 4096, 4096) - @staticmethod - def llava_projector_repo() -> str: - return "liuhaotian/llava-v1.5-7b" \ No newline at end of file diff --git a/spaces/leoken2023/bingo/README.md b/spaces/leoken2023/bingo/README.md deleted file mode 100644 index 5d6936218874c647b5d22e13ad4be7edb8936f92..0000000000000000000000000000000000000000 --- a/spaces/leoken2023/bingo/README.md +++ /dev/null @@ -1,28 +0,0 @@ ---- -title: bingo -emoji: 😊 
-colorFrom: red -colorTo: red -sdk: docker -license: mit -duplicated_from: hf4all/bingo ---- - -
- -# Bingo - -Bingo, a New Bing that lets you breathe freely. - -A faithful reproduction of the main features of the New Bing web UI, usable inside mainland China, compatible with most of Microsoft Bing AI's features, and deployable on your own server. - -![Github stars](https://badgen.net/github/stars/weaigc/bingo?icon=github&label=stars) -![Github issues](https://img.shields.io/github/issues/weaigc/bingo) -[![docker build](https://github.com/weaigc/bingo/actions/workflows/docker.yml/badge.svg)](https://hub.docker.com/repository/docker/weaigc/bingo/) -[![docker hub](https://badgen.net/docker/size/weaigc/bingo?icon=docker&label=image%20size)](https://hub.docker.com/repository/docker/weaigc/bingo/) -[![MIT License](https://img.shields.io/badge/license-MIT-97c50f)](https://github.com/weaigc/bingo/blob/main/license) - -Please report issues at https://github.com/weaigc/bingo/issues -
        - - diff --git a/spaces/lewiswu1209/MockingBird/utils/audio_utils.py b/spaces/lewiswu1209/MockingBird/utils/audio_utils.py deleted file mode 100644 index 1dbeddbc65d2048fd90b348db6ff15a420a70f2b..0000000000000000000000000000000000000000 --- a/spaces/lewiswu1209/MockingBird/utils/audio_utils.py +++ /dev/null @@ -1,60 +0,0 @@ - -import torch -import torch.utils.data -from scipy.io.wavfile import read -from librosa.filters import mel as librosa_mel_fn - -MAX_WAV_VALUE = 32768.0 - - -def load_wav(full_path): - sampling_rate, data = read(full_path) - return data, sampling_rate - -def _dynamic_range_compression_torch(x, C=1, clip_val=1e-5): - return torch.log(torch.clamp(x, min=clip_val) * C) - - -def _spectral_normalize_torch(magnitudes): - output = _dynamic_range_compression_torch(magnitudes) - return output - -mel_basis = {} -hann_window = {} - -def mel_spectrogram( - y, - n_fft, - num_mels, - sampling_rate, - hop_size, - win_size, - fmin, - fmax, - center=False, - output_energy=False, -): - if torch.min(y) < -1.: - print('min value is ', torch.min(y)) - if torch.max(y) > 1.: - print('max value is ', torch.max(y)) - - global mel_basis, hann_window - if fmax not in mel_basis: - mel = librosa_mel_fn(sampling_rate, n_fft, num_mels, fmin, fmax) - mel_basis[str(fmax)+'_'+str(y.device)] = torch.from_numpy(mel).float().to(y.device) - hann_window[str(y.device)] = torch.hann_window(win_size).to(y.device) - - y = torch.nn.functional.pad(y.unsqueeze(1), (int((n_fft-hop_size)/2), int((n_fft-hop_size)/2)), mode='reflect') - y = y.squeeze(1) - - spec = torch.stft(y, n_fft, hop_length=hop_size, win_length=win_size, window=hann_window[str(y.device)], - center=center, pad_mode='reflect', normalized=False, onesided=True, return_complex=False) - spec = torch.sqrt(spec.pow(2).sum(-1)+(1e-9)) - mel_spec = torch.matmul(mel_basis[str(fmax)+'_'+str(y.device)], spec) - mel_spec = _spectral_normalize_torch(mel_spec) - if output_energy: - energy = torch.norm(spec, dim=1) - return mel_spec, energy - else: - return mel_spec diff --git a/spaces/lincquiQcaudo/Top-20-Diffusion/D3DGear 4.41 Keygen Fixed.md b/spaces/lincquiQcaudo/Top-20-Diffusion/D3DGear 4.41 Keygen Fixed.md deleted file mode 100644 index ad655740f5e404aa06268035ea4d12b9327c50fb..0000000000000000000000000000000000000000 --- a/spaces/lincquiQcaudo/Top-20-Diffusion/D3DGear 4.41 Keygen Fixed.md +++ /dev/null @@ -1,101 +0,0 @@ -
        -

        D3DGear 4.41 Keygen: A Powerful Software for Gamers

        -

        If you are a gamer who wants to measure, record and stream your gameplay, you might be interested in D3DGear 4.41 Keygen. This software is a fast and easy-to-use tool that can help you improve your gaming performance and experience. In this article, we will tell you what D3DGear 4.41 Keygen can do for you and how to get it for free.

        -

        D3DGear 4.41 Keygen


        Download →→→ https://bytlly.com/2uGwtL



        -

        What is D3DGear 4.41 Keygen?

        -

        D3DGear 4.41 Keygen is a software that can measure the FPS (frames per second) of your games, display it on the screen, and save it to a log file. FPS is an important indicator of how smoothly your game runs and how well your computer can handle it. By knowing your FPS, you can adjust your settings and optimize your game performance.

        -

        But that's not all. D3DGear 4.41 Keygen can also record your gameplay in high quality video and audio, with minimal impact on your game performance. You can save your video files in various formats, such as AVI, MP4, WMV, etc., and edit them later with your favorite video editing software. You can also stream your gameplay live to popular platforms like Twitch, YouTube, Facebook, etc., with just a few clicks.

        -

        D3DGear 4.41 Keygen is compatible with most 3D games and applications, such as DirectX, OpenGL, Vulkan, etc. It supports Windows XP, Vista, 7, 8, 8.1 and 10 operating systems. It has a simple and intuitive interface that makes it easy to use even for beginners.

        -

        -

        How to Get D3DGear 4.41 Keygen for Free?

        -

        D3DGear 4.41 Keygen is a premium software that normally costs $34.95 for a lifetime license. However, you can get it for free by using a crack file that can activate the full version of the software without paying anything.

        -

        To get D3DGear 4.41 Keygen for free, you need to follow these steps:

        -
          -
        1. Download the D3DGear 4.41 setup file from the official website or any trusted source.
        2. -
        3. Install the software on your computer by following the instructions.
        4. -
        5. Download the D3DGear 4.41 crack file from the link below.
        6. -
        7. Copy the crack file and paste it into the installation folder of D3DGear 4.41.
        8. -
        9. Run the crack file as administrator and click on the "Activate" button.
        10. -
        11. Enjoy the full version of D3DGear 4.41 Keygen for free.
        12. -
        -

        Note: The crack file is safe and virus-free, but you may need to disable your antivirus or firewall before using it.

        -

        Conclusion

        -

        D3DGear 4.41 Keygen is a powerful software that can help you measure, record and stream your gameplay with ease and quality. It is a must-have tool for any gamer who wants to enhance their gaming performance and experience. You can get it for free by using a crack file that can activate the full version of the software without any cost.

        -

        If you liked this article, please share it with your friends and leave a comment below.

        -

        How to Use D3DGear 4.41 Keygen?

        -

        Using D3DGear 4.41 Keygen is very simple and straightforward. You don't need to configure any complicated settings or options. Just follow these steps:

        -
          -
        1. Launch D3DGear 4.41 Keygen from your desktop or start menu.
        2. -
        3. Select the game or application that you want to measure, record or stream from the list of detected programs.
        4. -
        5. Click on the "Start" button to start D3DGear 4.41 Keygen.
        6. -
        7. Enjoy your game and see your FPS on the screen. You can also press the hotkeys to take screenshots, start or stop recording, or switch streaming platforms.
        8. -
        9. When you are done, click on the "Stop" button to stop D3DGear 4.41 Keygen.
        10. -
        11. You can find your recorded videos, screenshots and log files in the output folder of D3DGear 4.41 Keygen.
        12. -
        -

        Note: You can customize the hotkeys, output formats, video quality, audio settings and more from the "Options" menu of D3DGear 4.41 Keygen.

        -

        Why Choose D3DGear 4.41 Keygen?

        -

        D3DGear 4.41 Keygen is one of the best software for gamers who want to measure, record and stream their gameplay. Here are some of the reasons why you should choose D3DGear 4.41 Keygen over other similar software:

        -
          -
        • D3DGear 4.41 Keygen is fast and lightweight. It does not slow down your game or system performance.
        • -
        • D3DGear 4.41 Keygen is compatible with most 3D games and applications, regardless of their graphics engine or API.
        • -
        • D3DGear 4.41 Keygen supports high-resolution video recording and streaming, up to 4K and 120 FPS.
        • -
        • D3DGear 4.41 Keygen supports multiple streaming platforms and protocols, such as RTMP, RTSP, HLS, etc.
        • -
        • D3DGear 4.41 Keygen supports Oculus Rift VR games recording and streaming.
        • -
        • D3DGear 4.41 Keygen has a simple and user-friendly interface that makes it easy to use for anyone.
        • -
        -

        With D3DGear 4.41 Keygen, you can enjoy your games and share them with your friends and fans without any hassle or compromise.

        -

        What are the Alternatives to D3DGear 4.41 Keygen?

        -

        D3DGear 4.41 Keygen is not the only software that can measure, record and stream your gameplay. There are other alternatives that you can try if you are looking for different features or options. Here are some of the most popular alternatives to D3DGear 4.41 Keygen:

        -
          -
        • Fraps: Fraps is one of the most well-known software for game recording and benchmarking. It can show your FPS, capture screenshots and record videos with sound. However, Fraps has some limitations, such as large file sizes, low video quality and lack of streaming support.
        • -
        • OBS Studio: OBS Studio is a free and open-source software for video recording and live streaming. It has a lot of features and customization options, such as multiple sources, filters, transitions, scenes, etc. It also supports various streaming platforms and protocols. However, OBS Studio can be complex and confusing for beginners and may require more system resources.
        • -
        • Bandicam: Bandicam is a lightweight and easy-to-use software for screen recording and game recording. It can record high-quality videos with small file sizes and minimal impact on your game performance. It also supports webcam overlay, mouse effects, voice mixing and more. However, Bandicam does not support live streaming and may have some compatibility issues with some games.
        • -
        -

        These are some of the alternatives to D3DGear 4.41 Keygen that you can try if you want to compare or switch to different software. However, we still recommend D3DGear 4.41 Keygen as one of the best software for gamers who want to measure, record and stream their gameplay with ease and quality.

        -

        How to Troubleshoot D3DGear 4.41 Keygen?

        -

        D3DGear 4.41 Keygen is a reliable and stable software that works well with most games and applications. However, sometimes you may encounter some problems or errors that prevent you from using D3DGear 4.41 Keygen properly. Here are some of the common issues and solutions that you can try if you face any trouble with D3DGear 4.41 Keygen:

        -
          -
        • D3DGear 4.41 Keygen does not detect your game or application: Make sure that you have launched D3DGear 4.41 Keygen before launching your game or application. If D3DGear 4.41 Keygen still does not detect your game or application, try to run both of them as administrator.
        • -
        • D3DGear 4.41 Keygen does not show FPS on the screen: Make sure that you have enabled the FPS display option from the "Options" menu of D3DGear 4.41 Keygen. You can also change the position, color and size of the FPS display from the same menu.
        • -
        • D3DGear 4.41 Keygen does not record or stream your gameplay: Make sure that you have enough disk space and network bandwidth for recording or streaming your gameplay. You can also check the output folder and streaming settings from the "Options" menu of D3DGear 4.41 Keygen.
        • -
        • D3DGear 4.41 Keygen causes your game or system to crash or freeze: Make sure that you have updated your graphics drivers and DirectX to the latest version. You can also lower your game settings and video quality to reduce the load on your system.
        • -
        -

        If none of these solutions work for you, you can contact the D3DGear support team via email or visit their official website for more help and information.

        -

        How to Uninstall D3DGear 4.41 Keygen?

        -

        If you want to uninstall D3DGear 4.41 Keygen from your computer, you can follow these simple steps:

        -
          -
        1. Close D3DGear 4.41 Keygen and any game or application that is using it.
        2. -
        3. Go to the Control Panel and select "Programs and Features".
        4. -
        5. Find and select D3DGear 4.41 Keygen from the list of installed programs and click on "Uninstall".
        6. -
        7. Follow the instructions on the screen to complete the uninstallation process.
        8. -
        9. Delete any leftover files or folders of D3DGear 4.41 Keygen from your computer.
        10. -
        -

        Note: Uninstalling D3DGear 4.41 Keygen will not delete your recorded videos, screenshots or log files. You can keep them or delete them manually if you want.

        -

        How to Update D3DGear 4.41 Keygen?

        -

        D3DGear 4.41 Keygen is software that is regularly updated and improved by its developers: they add new features, fix bugs and improve compatibility with new games and applications. It is therefore worth updating D3DGear 4.41 Keygen regularly to get the best performance and experience.

        -

        To update D3DGear 4.41 Keygen, you can follow these simple steps:

        -
          -
        1. Open D3DGear 4.41 Keygen from your desktop or start menu.
        2. Click on the "Help" menu and select "Check for Updates".
        3. If there is a new version available, you will see a notification window with the download link.
        4. Click on the link and download the latest version of D3DGear 4.41 Keygen.
        5. Install the new version over the old one by following the instructions.
        6. Restart your computer and enjoy the updated D3DGear 4.41 Keygen.
        -

        Note: You do not need to uninstall the old version or use a new crack file to update D3DGear 4.41 Keygen.

        -

        How to Review D3DGear 4.41 Keygen?

        -

        If you are satisfied with D3DGear 4.41 Keygen and want to share your feedback with other users, you can write a review on various websites and platforms. Here are some tips on writing a good review:

        -
          -
        • Be honest and objective. Do not exaggerate or lie about your experience with D3DGear 4.41 Keygen.
        • Be specific and detailed. Explain what you liked and disliked about D3DGear 4.41 Keygen, and give examples of how it helped or hindered your gameplay.
        • Be concise and clear. Avoid using long sentences, jargon or slang that may confuse or bore your readers.
        • Be respectful and polite. Do not use offensive or abusive language that may offend or hurt other users or developers.
        • Be helpful and informative. Provide useful information and tips that may help other users to use D3DGear 4.41 Keygen better.
        -

        By writing a good review of D3DGear 4.41 Keygen, you can help other users make an informed decision about whether to use it. You can also help the developers improve their software and make it more user-friendly and efficient.

        -

        Conclusion

        -

        D3DGear 4.41 Keygen is a powerful and easy-to-use software that can help you measure, record and stream your gameplay with high quality and minimal impact on your game performance. It is compatible with most 3D games and applications, and supports various output formats and streaming platforms. You can get it for free by using a crack file that can activate the full version of the software without any cost.

        -

        If you are looking for a software that can help you enhance your gaming performance and experience, you should definitely try D3DGear 4.41 Keygen. It is one of the best software for gamers who want to measure, record and stream their gameplay with ease and quality.

        -

        If you liked this article, please share it with your friends and leave a comment below.

        -
        -
        \ No newline at end of file diff --git a/spaces/lincquiQcaudo/Top-20-Diffusion/Gta Vice City Setup R00 [UPDATED].md b/spaces/lincquiQcaudo/Top-20-Diffusion/Gta Vice City Setup R00 [UPDATED].md deleted file mode 100644 index fd0209c214e081958b1dfe5f1d58c8598f543e83..0000000000000000000000000000000000000000 --- a/spaces/lincquiQcaudo/Top-20-Diffusion/Gta Vice City Setup R00 [UPDATED].md +++ /dev/null @@ -1,34 +0,0 @@ -

        GTA Vice City setup r00


        Download Zip === https://bytlly.com/2uGwS3



        -
        Error code -1073741816 (and -2147483648) behaves much like the 1011 error, and error code 0 is also common. Error code -1073741816 is one of the most frequently reported codes: it is raised when a software or hardware component crashes while in use. Both -1073741816 and -2147483648 are Windows error codes used to report a program or system crash.

        Still, there are situations where you get this error. To know what is causing it, you need to find out what the code means and how to fix it.

        Here is what the error codes indicate:

        -1073741816 / -2147483648 occurs when the application or system crashes because of a corrupted file.

        0D0F:A09F:0DF 0A0 is the code reported when the system hangs because of a hardware problem.

        All of the codes mentioned here were observed in the original GTA Vice City game.

        To repair the -1073741816 / -2147483648 error: the application or system crashed because of a corrupted file, so you need to repair or replace that file.

        To fix the 0D0F:A09F:0DF 0A0 error: first find the cause, which can be a faulty hard drive, a virus infection, a damaged operating system, and so on. Troubleshoot with a hardware scanner or run a system scan with a tool such as CCleaner.

        To fix the problem, delete any corrupted or incorrect file, using a trusted tool such as CCleaner or similar. Search your computer for the file associated with the 0D0F:A09F:0DF 0A0 error, close all running applications, and delete it from the game's data folder. Then run GTA Vice City again and it will repair the data file automatically.
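        The decimal codes quoted above are easier to recognize in the hexadecimal form that Windows tools usually display. The short snippet below is only an illustration of that conversion and is not taken from the original article.

```python
# View a signed 32-bit error code as the unsigned hexadecimal value
# that Windows tools typically report.
def to_hex32(code: int) -> str:
    return f"0x{code & 0xFFFFFFFF:08X}"

for code in (-1073741816, -2147483648):
    print(f"{code} -> {to_hex32(code)}")
# -1073741816 -> 0xC0000008
# -2147483648 -> 0x80000000
```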
        -
        -
        -

        diff --git a/spaces/lincquiQcaudo/Top-20-Diffusion/Matlab R2012a Free Download With Crack.md b/spaces/lincquiQcaudo/Top-20-Diffusion/Matlab R2012a Free Download With Crack.md deleted file mode 100644 index 9cc18cbe9f714b06b4529d4f575aadc76d207a14..0000000000000000000000000000000000000000 --- a/spaces/lincquiQcaudo/Top-20-Diffusion/Matlab R2012a Free Download With Crack.md +++ /dev/null @@ -1,13 +0,0 @@ -

        Matlab R2012a free download with crack


        Download File: https://bytlly.com/2uGw7G



        -
        September 20, 2013 - Matlab 2012a for Windows (32 & 64 bit) ISO + License (cracked) [Original] (torrent download). MATLAB (matrix laboratory) is a numerical ...
        -
        -
        -

        diff --git a/spaces/lincquiQcaudo/Top-20-Diffusion/Microsoft Office 2013 Avec Crack Sur Tunisia Sat.md b/spaces/lincquiQcaudo/Top-20-Diffusion/Microsoft Office 2013 Avec Crack Sur Tunisia Sat.md deleted file mode 100644 index afac2f4784564995767a743ce0d7d3189016e624..0000000000000000000000000000000000000000 --- a/spaces/lincquiQcaudo/Top-20-Diffusion/Microsoft Office 2013 Avec Crack Sur Tunisia Sat.md +++ /dev/null @@ -1,90 +0,0 @@ - -

        Microsoft Office 2013 With Crack On Tunisia Sat

        - -

        Microsoft Office 2013 is one of the most popular and complete office suites on the market. It includes applications such as Word, Excel, PowerPoint, Outlook, OneNote, Access, Publisher and Lync. With Microsoft Office 2013, you can create, edit, share and collaborate on documents, spreadsheets, presentations, emails and more.

        -

        Microsoft Office 2013 With Crack On Tunisia Sat


        Download File: https://bytlly.com/2uGxAt



        - -

        But how can you enjoy Microsoft Office 2013 without paying full price? The answer is simple: by using a crack. A crack is a piece of software that bypasses a program's protection and activates it without a valid license key. This way, you can use all the features of Microsoft Office 2013 without spending a cent.

        - -

        But where can you find a reliable and effective crack for Microsoft Office 2013? The answer is just as simple: on Tunisia Sat. Tunisia Sat is a Tunisian forum that brings together thousands of users passionate about computing, multimedia, hacking and sharing. On Tunisia Sat, you can find cracks for any software you want, including Microsoft Office 2013.

        - -

        How to Download and Install Microsoft Office 2013 With Crack on Tunisia Sat?

        - -

        To download and install Microsoft Office 2013 with crack on Tunisia Sat, just follow these steps:

        -

        - -
          -
        1. Go to https://www.tunisia-sat.com/ and sign up for free if you do not have an account yet.
        2. Search for the thread "Microsoft Office 2013 avec crack" in the "Logiciels PC" section, or use this link: https://www.tunisia-sat.com/forums/threads/3792490/.
        3. Download the file "Office_2013_Professional_Plus_Crack.rar", which contains the Microsoft Office 2013 setup and the KMSpico crack.
        4. Extract "Office_2013_Professional_Plus_Crack.rar" with a tool such as WinRAR or 7-Zip.
        5. Run "setup.exe" and follow the instructions to install Microsoft Office 2013 on your computer.
        6. Once the installation is complete, do not launch Microsoft Office 2013 yet.
        7. Open the "KMSpico" folder and run "KMSpico_setup.exe" to install the crack.
        8. Once the crack is installed, launch it by clicking the "KMSpico" icon on your desktop or in your Start menu.
        9. Click the red button to activate Microsoft Office 2013.
        10. Close the crack and launch Microsoft Office 2013. You will see that your office suite is activated and ready to use.
        - -

        That's it: you have downloaded and installed Microsoft Office 2013 with crack from Tunisia Sat. You can now use all the features of this office suite without any limitation. Do not forget to thank the Tunisia Sat members who shared this crack with you and to follow the forum rules.

        - -

        What Are the Advantages and Disadvantages of Microsoft Office 2013 With Crack on Tunisia Sat?

        - -

        Microsoft Office 2013 with crack on Tunisia Sat has advantages and disadvantages that you should know about before using it. Here are some of them:

        - -

        The Advantages

        - -
          -
        • You can use Microsoft Office 2013 for free, without paying for an expensive license.
        • You can access all the features of Microsoft Office 2013 without any restriction.
        • You can receive automatic updates for Microsoft Office 2013 without the risk of losing your activation.
        • You can count on the support of the Tunisia Sat community if you run into a problem or have a question.
        - -

        The Disadvantages

        - -
          -
        • You violate Microsoft's copyright by using a crack to activate their product.
        • You expose yourself to security risks by downloading and installing unofficial software that may contain viruses or malware.
        • You get neither Microsoft's technical support nor their warranty in case of malfunction or damage caused by their product.
        • You disregard ethics and morality by benefiting from the developers' work without paying them.
        - -

        It is up to you to weigh the pros and cons before deciding whether or not to use Microsoft Office 2013 with crack on Tunisia Sat. We advise you, however, to choose a legal and safe solution by buying an official license from Microsoft or an authorized reseller. That way you can support the creators of this office suite and get the best possible service.

        - -

        Conclusion

        - -

        Microsoft Office 2013 is a powerful and versatile office suite that lets you handle all kinds of tasks related to word processing, calculation, presentation, communication and collaboration. To use it, you need a valid license key, which you can buy from Microsoft or an authorized reseller. If you do not want to pay for that license, you can use a crack to activate Microsoft Office 2013 without a key. You can find this crack on Tunisia Sat, a Tunisian forum dedicated to computing, multimedia, hacking and sharing. However, this method carries legal, security and ethical risks that you must take into account before adopting it. We therefore recommend that you respect copyright and buy an official license to use Microsoft Office 2013 legally and safely.

        -

        What Are the Alternatives to Microsoft Office 2013 With Crack on Tunisia Sat?

        - -

        If you are not convinced by Microsoft Office 2013 with crack on Tunisia Sat, you can choose other ways to use an office suite for free or at a lower cost. Here are some of them:

        - -

        Online Office Suites

        - -

        You can use online office suites that let you create, edit, share and store documents, spreadsheets, presentations and more in the cloud. These suites are accessible from any web browser and require no installation or activation. Among the best known are:

        - -
          -
        • Google Docs, which is part of the Google Workspace suite and is compatible with Microsoft Office formats.
        • Microsoft Office Online, which is the online version of Microsoft Office and offers the same features as the installed version.
        • Zoho Writer, which is part of the Zoho Workplace suite and offers advanced collaboration and publishing tools.
        - -

        These online office suites are free for personal or limited use, but they may require a paid subscription for professional or unlimited use.

        - -

        Free and Open-Source Office Suites

        - -

        You can also use free and open-source office suites, which you can download, install and use on your computer for free and legally. These suites are generally compatible with Microsoft Office formats and offer similar or better features. Among the most popular are:

        - -
          -
        • LibreOffice, which is the most widely used open-source office suite in the world and includes six applications: Writer, Calc, Impress, Draw, Math and Base.
        • Apache OpenOffice, which is the historic open-source office suite and includes five applications: Writer, Calc, Impress, Draw and Base.
        • FreeOffice, which is a free office suite compatible with recent versions of Microsoft Office and includes three applications: TextMaker, PlanMaker and Presentations.
        - -

        These open-source office suites are free for all uses, but they may accept donations or voluntary contributions to support their development.

        - -

        Conclusion

        - -

        Microsoft Office 2013 is a powerful and versatile office suite that lets you handle all kinds of tasks related to word processing, calculation, presentation, communication and collaboration. To use it, you need a valid license key, which you can buy from Microsoft or an authorized reseller. If you do not want to pay for that license, you can use a crack to activate Microsoft Office 2013 without a key. You can find this crack on Tunisia Sat, a Tunisian forum dedicated to computing, multimedia, hacking and sharing. However, this method carries legal, security and ethical risks that you must take into account before adopting it. We therefore recommend that you respect copyright and buy an official license to use Microsoft Office 2013 legally and safely. If you are looking for alternatives to Microsoft Office 2013 with crack on Tunisia Sat, you can choose online office suites or free and open-source office suites, which are free or cheaper and offer similar or better features.

        -
        -
        \ No newline at end of file diff --git a/spaces/lint/sdpipe_webui/utils/__init__.py b/spaces/lint/sdpipe_webui/utils/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/lithiumice/SadTalker/src/generate_facerender_batch.py b/spaces/lithiumice/SadTalker/src/generate_facerender_batch.py deleted file mode 100644 index 9ec7a169706e9e4697f0f847d4e3d46101bd55d9..0000000000000000000000000000000000000000 --- a/spaces/lithiumice/SadTalker/src/generate_facerender_batch.py +++ /dev/null @@ -1,134 +0,0 @@ -import os -import numpy as np -from PIL import Image -from skimage import io, img_as_float32, transform -import torch -import scipy.io as scio - -def get_facerender_data(coeff_path, pic_path, first_coeff_path, audio_path, - batch_size, input_yaw_list=None, input_pitch_list=None, input_roll_list=None, - expression_scale=1.0, still_mode = False, preprocess='crop'): - - semantic_radius = 13 - video_name = os.path.splitext(os.path.split(coeff_path)[-1])[0] - txt_path = os.path.splitext(coeff_path)[0] - - data={} - - img1 = Image.open(pic_path) - source_image = np.array(img1) - source_image = img_as_float32(source_image) - source_image = transform.resize(source_image, (256, 256, 3)) - source_image = source_image.transpose((2, 0, 1)) - source_image_ts = torch.FloatTensor(source_image).unsqueeze(0) - source_image_ts = source_image_ts.repeat(batch_size, 1, 1, 1) - data['source_image'] = source_image_ts - - source_semantics_dict = scio.loadmat(first_coeff_path) - - if preprocess.lower() != 'full': - source_semantics = source_semantics_dict['coeff_3dmm'][:1,:70] #1 70 - else: - source_semantics = source_semantics_dict['coeff_3dmm'][:1,:73] #1 70 - - source_semantics_new = transform_semantic_1(source_semantics, semantic_radius) - source_semantics_ts = torch.FloatTensor(source_semantics_new).unsqueeze(0) - source_semantics_ts = source_semantics_ts.repeat(batch_size, 1, 1) - data['source_semantics'] = source_semantics_ts - - # target - generated_dict = scio.loadmat(coeff_path) - generated_3dmm = generated_dict['coeff_3dmm'] - generated_3dmm[:, :64] = generated_3dmm[:, :64] * expression_scale - - if preprocess.lower() == 'full': - generated_3dmm = np.concatenate([generated_3dmm, np.repeat(source_semantics[:,70:], generated_3dmm.shape[0], axis=0)], axis=1) - - if still_mode: - generated_3dmm[:, 64:] = np.repeat(source_semantics[:, 64:], generated_3dmm.shape[0], axis=0) - - with open(txt_path+'.txt', 'w') as f: - for coeff in generated_3dmm: - for i in coeff: - f.write(str(i)[:7] + ' '+'\t') - f.write('\n') - - target_semantics_list = [] - frame_num = generated_3dmm.shape[0] - data['frame_num'] = frame_num - for frame_idx in range(frame_num): - target_semantics = transform_semantic_target(generated_3dmm, frame_idx, semantic_radius) - target_semantics_list.append(target_semantics) - - remainder = frame_num%batch_size - if remainder!=0: - for _ in range(batch_size-remainder): - target_semantics_list.append(target_semantics) - - target_semantics_np = np.array(target_semantics_list) #frame_num 70 semantic_radius*2+1 - target_semantics_np = target_semantics_np.reshape(batch_size, -1, target_semantics_np.shape[-2], target_semantics_np.shape[-1]) - data['target_semantics_list'] = torch.FloatTensor(target_semantics_np) - data['video_name'] = video_name - data['audio_path'] = audio_path - - if input_yaw_list is not None: - yaw_c_seq = gen_camera_pose(input_yaw_list, frame_num, batch_size) - data['yaw_c_seq'] = 
torch.FloatTensor(yaw_c_seq) - if input_pitch_list is not None: - pitch_c_seq = gen_camera_pose(input_pitch_list, frame_num, batch_size) - data['pitch_c_seq'] = torch.FloatTensor(pitch_c_seq) - if input_roll_list is not None: - roll_c_seq = gen_camera_pose(input_roll_list, frame_num, batch_size) - data['roll_c_seq'] = torch.FloatTensor(roll_c_seq) - - return data - -def transform_semantic_1(semantic, semantic_radius): - semantic_list = [semantic for i in range(0, semantic_radius*2+1)] - coeff_3dmm = np.concatenate(semantic_list, 0) - return coeff_3dmm.transpose(1,0) - -def transform_semantic_target(coeff_3dmm, frame_index, semantic_radius): - num_frames = coeff_3dmm.shape[0] - seq = list(range(frame_index- semantic_radius, frame_index + semantic_radius+1)) - index = [ min(max(item, 0), num_frames-1) for item in seq ] - coeff_3dmm_g = coeff_3dmm[index, :] - return coeff_3dmm_g.transpose(1,0) - -def gen_camera_pose(camera_degree_list, frame_num, batch_size): - - new_degree_list = [] - if len(camera_degree_list) == 1: - for _ in range(frame_num): - new_degree_list.append(camera_degree_list[0]) - remainder = frame_num%batch_size - if remainder!=0: - for _ in range(batch_size-remainder): - new_degree_list.append(new_degree_list[-1]) - new_degree_np = np.array(new_degree_list).reshape(batch_size, -1) - return new_degree_np - - degree_sum = 0. - for i, degree in enumerate(camera_degree_list[1:]): - degree_sum += abs(degree-camera_degree_list[i]) - - degree_per_frame = degree_sum/(frame_num-1) - for i, degree in enumerate(camera_degree_list[1:]): - degree_last = camera_degree_list[i] - degree_step = degree_per_frame * abs(degree-degree_last)/(degree-degree_last) - new_degree_list = new_degree_list + list(np.arange(degree_last, degree, degree_step)) - if len(new_degree_list) > frame_num: - new_degree_list = new_degree_list[:frame_num] - elif len(new_degree_list) < frame_num: - for _ in range(frame_num-len(new_degree_list)): - new_degree_list.append(new_degree_list[-1]) - print(len(new_degree_list)) - print(frame_num) - - remainder = frame_num%batch_size - if remainder!=0: - for _ in range(batch_size-remainder): - new_degree_list.append(new_degree_list[-1]) - new_degree_np = np.array(new_degree_list).reshape(batch_size, -1) - return new_degree_np - diff --git a/spaces/lj1995/trump/commons.py b/spaces/lj1995/trump/commons.py deleted file mode 100644 index ba2dad2c884a34d3ffcf6e0795d04d764d6a5eec..0000000000000000000000000000000000000000 --- a/spaces/lj1995/trump/commons.py +++ /dev/null @@ -1,164 +0,0 @@ -import math -import numpy as np -import torch -from torch import nn -from torch.nn import functional as F - - -def init_weights(m, mean=0.0, std=0.01): - classname = m.__class__.__name__ - if classname.find("Conv") != -1: - m.weight.data.normal_(mean, std) - - -def get_padding(kernel_size, dilation=1): - return int((kernel_size * dilation - dilation) / 2) - - -def convert_pad_shape(pad_shape): - l = pad_shape[::-1] - pad_shape = [item for sublist in l for item in sublist] - return pad_shape - - -def kl_divergence(m_p, logs_p, m_q, logs_q): - """KL(P||Q)""" - kl = (logs_q - logs_p) - 0.5 - kl += ( - 0.5 * (torch.exp(2.0 * logs_p) + ((m_p - m_q) ** 2)) * torch.exp(-2.0 * logs_q) - ) - return kl - - -def rand_gumbel(shape): - """Sample from the Gumbel distribution, protect from overflows.""" - uniform_samples = torch.rand(shape) * 0.99998 + 0.00001 - return -torch.log(-torch.log(uniform_samples)) - - -def rand_gumbel_like(x): - g = rand_gumbel(x.size()).to(dtype=x.dtype, device=x.device) - return g - - 
-def slice_segments(x, ids_str, segment_size=4): - ret = torch.zeros_like(x[:, :, :segment_size]) - for i in range(x.size(0)): - idx_str = ids_str[i] - idx_end = idx_str + segment_size - ret[i] = x[i, :, idx_str:idx_end] - return ret -def slice_segments2(x, ids_str, segment_size=4): - ret = torch.zeros_like(x[:, :segment_size]) - for i in range(x.size(0)): - idx_str = ids_str[i] - idx_end = idx_str + segment_size - ret[i] = x[i, idx_str:idx_end] - return ret - - -def rand_slice_segments(x, x_lengths=None, segment_size=4): - b, d, t = x.size() - if x_lengths is None: - x_lengths = t - ids_str_max = x_lengths - segment_size + 1 - ids_str = (torch.rand([b]).to(device=x.device) * ids_str_max).to(dtype=torch.long) - ret = slice_segments(x, ids_str, segment_size) - return ret, ids_str - - -def get_timing_signal_1d(length, channels, min_timescale=1.0, max_timescale=1.0e4): - position = torch.arange(length, dtype=torch.float) - num_timescales = channels // 2 - log_timescale_increment = math.log(float(max_timescale) / float(min_timescale)) / ( - num_timescales - 1 - ) - inv_timescales = min_timescale * torch.exp( - torch.arange(num_timescales, dtype=torch.float) * -log_timescale_increment - ) - scaled_time = position.unsqueeze(0) * inv_timescales.unsqueeze(1) - signal = torch.cat([torch.sin(scaled_time), torch.cos(scaled_time)], 0) - signal = F.pad(signal, [0, 0, 0, channels % 2]) - signal = signal.view(1, channels, length) - return signal - - -def add_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4): - b, channels, length = x.size() - signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale) - return x + signal.to(dtype=x.dtype, device=x.device) - - -def cat_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4, axis=1): - b, channels, length = x.size() - signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale) - return torch.cat([x, signal.to(dtype=x.dtype, device=x.device)], axis) - - -def subsequent_mask(length): - mask = torch.tril(torch.ones(length, length)).unsqueeze(0).unsqueeze(0) - return mask - - -@torch.jit.script -def fused_add_tanh_sigmoid_multiply(input_a, input_b, n_channels): - n_channels_int = n_channels[0] - in_act = input_a + input_b - t_act = torch.tanh(in_act[:, :n_channels_int, :]) - s_act = torch.sigmoid(in_act[:, n_channels_int:, :]) - acts = t_act * s_act - return acts - - -def convert_pad_shape(pad_shape): - l = pad_shape[::-1] - pad_shape = [item for sublist in l for item in sublist] - return pad_shape - - -def shift_1d(x): - x = F.pad(x, convert_pad_shape([[0, 0], [0, 0], [1, 0]]))[:, :, :-1] - return x - - -def sequence_mask(length, max_length=None): - if max_length is None: - max_length = length.max() - x = torch.arange(max_length, dtype=length.dtype, device=length.device) - return x.unsqueeze(0) < length.unsqueeze(1) - - -def generate_path(duration, mask): - """ - duration: [b, 1, t_x] - mask: [b, 1, t_y, t_x] - """ - device = duration.device - - b, _, t_y, t_x = mask.shape - cum_duration = torch.cumsum(duration, -1) - - cum_duration_flat = cum_duration.view(b * t_x) - path = sequence_mask(cum_duration_flat, t_y).to(mask.dtype) - path = path.view(b, t_x, t_y) - path = path - F.pad(path, convert_pad_shape([[0, 0], [1, 0], [0, 0]]))[:, :-1] - path = path.unsqueeze(1).transpose(2, 3) * mask - return path - - -def clip_grad_value_(parameters, clip_value, norm_type=2): - if isinstance(parameters, torch.Tensor): - parameters = [parameters] - parameters = list(filter(lambda p: p.grad is not None, parameters)) - 
norm_type = float(norm_type) - if clip_value is not None: - clip_value = float(clip_value) - - total_norm = 0 - for p in parameters: - param_norm = p.grad.data.norm(norm_type) - total_norm += param_norm.item() ** norm_type - if clip_value is not None: - p.grad.data.clamp_(min=-clip_value, max=clip_value) - total_norm = total_norm ** (1.0 / norm_type) - return total_norm diff --git a/spaces/lojban/text-to-speech/vits/data_utils.py b/spaces/lojban/text-to-speech/vits/data_utils.py deleted file mode 100644 index effb67868080fdb2f95b32966e6b37ac8b48b18c..0000000000000000000000000000000000000000 --- a/spaces/lojban/text-to-speech/vits/data_utils.py +++ /dev/null @@ -1,392 +0,0 @@ -import time -import os -import random -import numpy as np -import torch -import torch.utils.data - -import vits.commons as commons -from vits.mel_processing import spectrogram_torch -from vits.utils import load_wav_to_torch, load_filepaths_and_text -from vits.text import text_to_sequence, cleaned_text_to_sequence - - -class TextAudioLoader(torch.utils.data.Dataset): - """ - 1) loads audio, text pairs - 2) normalizes text and converts them to sequences of integers - 3) computes spectrograms from audio files. - """ - def __init__(self, audiopaths_and_text, hparams): - self.audiopaths_and_text = load_filepaths_and_text(audiopaths_and_text) - self.text_cleaners = hparams.text_cleaners - self.max_wav_value = hparams.max_wav_value - self.sampling_rate = hparams.sampling_rate - self.filter_length = hparams.filter_length - self.hop_length = hparams.hop_length - self.win_length = hparams.win_length - self.sampling_rate = hparams.sampling_rate - - self.cleaned_text = getattr(hparams, "cleaned_text", False) - - self.add_blank = hparams.add_blank - self.min_text_len = getattr(hparams, "min_text_len", 1) - self.max_text_len = getattr(hparams, "max_text_len", 190) - - random.seed(1234) - random.shuffle(self.audiopaths_and_text) - self._filter() - - - def _filter(self): - """ - Filter text & store spec lengths - """ - # Store spectrogram lengths for Bucketing - # wav_length ~= file_size / (wav_channels * Bytes per dim) = file_size / (1 * 2) - # spec_length = wav_length // hop_length - - audiopaths_and_text_new = [] - lengths = [] - for audiopath, text in self.audiopaths_and_text: - if self.min_text_len <= len(text) and len(text) <= self.max_text_len: - audiopaths_and_text_new.append([audiopath, text]) - lengths.append(os.path.getsize(audiopath) // (2 * self.hop_length)) - self.audiopaths_and_text = audiopaths_and_text_new - self.lengths = lengths - - def get_audio_text_pair(self, audiopath_and_text): - # separate filename and text - audiopath, text = audiopath_and_text[0], audiopath_and_text[1] - text = self.get_text(text) - spec, wav = self.get_audio(audiopath) - return (text, spec, wav) - - def get_audio(self, filename): - audio, sampling_rate = load_wav_to_torch(filename) - if sampling_rate != self.sampling_rate: - raise ValueError("{} {} SR doesn't match target {} SR".format( - sampling_rate, self.sampling_rate)) - audio_norm = audio / self.max_wav_value - audio_norm = audio_norm.unsqueeze(0) - spec_filename = filename.replace(".wav", ".spec.pt") - if os.path.exists(spec_filename): - spec = torch.load(spec_filename) - else: - spec = spectrogram_torch(audio_norm, self.filter_length, - self.sampling_rate, self.hop_length, self.win_length, - center=False) - spec = torch.squeeze(spec, 0) - torch.save(spec, spec_filename) - return spec, audio_norm - - def get_text(self, text): - if self.cleaned_text: - text_norm = 
cleaned_text_to_sequence(text) - else: - text_norm = text_to_sequence(text, self.text_cleaners) - if self.add_blank: - text_norm = commons.intersperse(text_norm, 0) - text_norm = torch.LongTensor(text_norm) - return text_norm - - def __getitem__(self, index): - return self.get_audio_text_pair(self.audiopaths_and_text[index]) - - def __len__(self): - return len(self.audiopaths_and_text) - - -class TextAudioCollate(): - """ Zero-pads model inputs and targets - """ - def __init__(self, return_ids=False): - self.return_ids = return_ids - - def __call__(self, batch): - """Collate's training batch from normalized text and aduio - PARAMS - ------ - batch: [text_normalized, spec_normalized, wav_normalized] - """ - # Right zero-pad all one-hot text sequences to max input length - _, ids_sorted_decreasing = torch.sort( - torch.LongTensor([x[1].size(1) for x in batch]), - dim=0, descending=True) - - max_text_len = max([len(x[0]) for x in batch]) - max_spec_len = max([x[1].size(1) for x in batch]) - max_wav_len = max([x[2].size(1) for x in batch]) - - text_lengths = torch.LongTensor(len(batch)) - spec_lengths = torch.LongTensor(len(batch)) - wav_lengths = torch.LongTensor(len(batch)) - - text_padded = torch.LongTensor(len(batch), max_text_len) - spec_padded = torch.FloatTensor(len(batch), batch[0][1].size(0), max_spec_len) - wav_padded = torch.FloatTensor(len(batch), 1, max_wav_len) - text_padded.zero_() - spec_padded.zero_() - wav_padded.zero_() - for i in range(len(ids_sorted_decreasing)): - row = batch[ids_sorted_decreasing[i]] - - text = row[0] - text_padded[i, :text.size(0)] = text - text_lengths[i] = text.size(0) - - spec = row[1] - spec_padded[i, :, :spec.size(1)] = spec - spec_lengths[i] = spec.size(1) - - wav = row[2] - wav_padded[i, :, :wav.size(1)] = wav - wav_lengths[i] = wav.size(1) - - if self.return_ids: - return text_padded, text_lengths, spec_padded, spec_lengths, wav_padded, wav_lengths, ids_sorted_decreasing - return text_padded, text_lengths, spec_padded, spec_lengths, wav_padded, wav_lengths - - -"""Multi speaker version""" -class TextAudioSpeakerLoader(torch.utils.data.Dataset): - """ - 1) loads audio, speaker_id, text pairs - 2) normalizes text and converts them to sequences of integers - 3) computes spectrograms from audio files. 
- """ - def __init__(self, audiopaths_sid_text, hparams): - self.audiopaths_sid_text = load_filepaths_and_text(audiopaths_sid_text) - self.text_cleaners = hparams.text_cleaners - self.max_wav_value = hparams.max_wav_value - self.sampling_rate = hparams.sampling_rate - self.filter_length = hparams.filter_length - self.hop_length = hparams.hop_length - self.win_length = hparams.win_length - self.sampling_rate = hparams.sampling_rate - - self.cleaned_text = getattr(hparams, "cleaned_text", False) - - self.add_blank = hparams.add_blank - self.min_text_len = getattr(hparams, "min_text_len", 1) - self.max_text_len = getattr(hparams, "max_text_len", 190) - - random.seed(1234) - random.shuffle(self.audiopaths_sid_text) - self._filter() - - def _filter(self): - """ - Filter text & store spec lengths - """ - # Store spectrogram lengths for Bucketing - # wav_length ~= file_size / (wav_channels * Bytes per dim) = file_size / (1 * 2) - # spec_length = wav_length // hop_length - - audiopaths_sid_text_new = [] - lengths = [] - for audiopath, sid, text in self.audiopaths_sid_text: - if self.min_text_len <= len(text) and len(text) <= self.max_text_len: - audiopaths_sid_text_new.append([audiopath, sid, text]) - lengths.append(os.path.getsize(audiopath) // (2 * self.hop_length)) - self.audiopaths_sid_text = audiopaths_sid_text_new - self.lengths = lengths - - def get_audio_text_speaker_pair(self, audiopath_sid_text): - # separate filename, speaker_id and text - audiopath, sid, text = audiopath_sid_text[0], audiopath_sid_text[1], audiopath_sid_text[2] - text = self.get_text(text) - spec, wav = self.get_audio(audiopath) - sid = self.get_sid(sid) - return (text, spec, wav, sid) - - def get_audio(self, filename): - audio, sampling_rate = load_wav_to_torch(filename) - if sampling_rate != self.sampling_rate: - raise ValueError("{} {} SR doesn't match target {} SR".format( - sampling_rate, self.sampling_rate)) - audio_norm = audio / self.max_wav_value - audio_norm = audio_norm.unsqueeze(0) - spec_filename = filename.replace(".wav", ".spec.pt") - if os.path.exists(spec_filename): - spec = torch.load(spec_filename) - else: - spec = spectrogram_torch(audio_norm, self.filter_length, - self.sampling_rate, self.hop_length, self.win_length, - center=False) - spec = torch.squeeze(spec, 0) - torch.save(spec, spec_filename) - return spec, audio_norm - - def get_text(self, text): - if self.cleaned_text: - text_norm = cleaned_text_to_sequence(text) - else: - text_norm = text_to_sequence(text, self.text_cleaners) - if self.add_blank: - text_norm = commons.intersperse(text_norm, 0) - text_norm = torch.LongTensor(text_norm) - return text_norm - - def get_sid(self, sid): - sid = torch.LongTensor([int(sid)]) - return sid - - def __getitem__(self, index): - return self.get_audio_text_speaker_pair(self.audiopaths_sid_text[index]) - - def __len__(self): - return len(self.audiopaths_sid_text) - - -class TextAudioSpeakerCollate(): - """ Zero-pads model inputs and targets - """ - def __init__(self, return_ids=False): - self.return_ids = return_ids - - def __call__(self, batch): - """Collate's training batch from normalized text, audio and speaker identities - PARAMS - ------ - batch: [text_normalized, spec_normalized, wav_normalized, sid] - """ - # Right zero-pad all one-hot text sequences to max input length - _, ids_sorted_decreasing = torch.sort( - torch.LongTensor([x[1].size(1) for x in batch]), - dim=0, descending=True) - - max_text_len = max([len(x[0]) for x in batch]) - max_spec_len = max([x[1].size(1) for x in batch]) - 
max_wav_len = max([x[2].size(1) for x in batch]) - - text_lengths = torch.LongTensor(len(batch)) - spec_lengths = torch.LongTensor(len(batch)) - wav_lengths = torch.LongTensor(len(batch)) - sid = torch.LongTensor(len(batch)) - - text_padded = torch.LongTensor(len(batch), max_text_len) - spec_padded = torch.FloatTensor(len(batch), batch[0][1].size(0), max_spec_len) - wav_padded = torch.FloatTensor(len(batch), 1, max_wav_len) - text_padded.zero_() - spec_padded.zero_() - wav_padded.zero_() - for i in range(len(ids_sorted_decreasing)): - row = batch[ids_sorted_decreasing[i]] - - text = row[0] - text_padded[i, :text.size(0)] = text - text_lengths[i] = text.size(0) - - spec = row[1] - spec_padded[i, :, :spec.size(1)] = spec - spec_lengths[i] = spec.size(1) - - wav = row[2] - wav_padded[i, :, :wav.size(1)] = wav - wav_lengths[i] = wav.size(1) - - sid[i] = row[3] - - if self.return_ids: - return text_padded, text_lengths, spec_padded, spec_lengths, wav_padded, wav_lengths, sid, ids_sorted_decreasing - return text_padded, text_lengths, spec_padded, spec_lengths, wav_padded, wav_lengths, sid - - -class DistributedBucketSampler(torch.utils.data.distributed.DistributedSampler): - """ - Maintain similar input lengths in a batch. - Length groups are specified by boundaries. - Ex) boundaries = [b1, b2, b3] -> any batch is included either {x | b1 < length(x) <=b2} or {x | b2 < length(x) <= b3}. - - It removes samples which are not included in the boundaries. - Ex) boundaries = [b1, b2, b3] -> any x s.t. length(x) <= b1 or length(x) > b3 are discarded. - """ - def __init__(self, dataset, batch_size, boundaries, num_replicas=None, rank=None, shuffle=True): - super().__init__(dataset, num_replicas=num_replicas, rank=rank, shuffle=shuffle) - self.lengths = dataset.lengths - self.batch_size = batch_size - self.boundaries = boundaries - - self.buckets, self.num_samples_per_bucket = self._create_buckets() - self.total_size = sum(self.num_samples_per_bucket) - self.num_samples = self.total_size // self.num_replicas - - def _create_buckets(self): - buckets = [[] for _ in range(len(self.boundaries) - 1)] - for i in range(len(self.lengths)): - length = self.lengths[i] - idx_bucket = self._bisect(length) - if idx_bucket != -1: - buckets[idx_bucket].append(i) - - for i in range(len(buckets) - 1, 0, -1): - if len(buckets[i]) == 0: - buckets.pop(i) - self.boundaries.pop(i+1) - - num_samples_per_bucket = [] - for i in range(len(buckets)): - len_bucket = len(buckets[i]) - total_batch_size = self.num_replicas * self.batch_size - rem = (total_batch_size - (len_bucket % total_batch_size)) % total_batch_size - num_samples_per_bucket.append(len_bucket + rem) - return buckets, num_samples_per_bucket - - def __iter__(self): - # deterministically shuffle based on epoch - g = torch.Generator() - g.manual_seed(self.epoch) - - indices = [] - if self.shuffle: - for bucket in self.buckets: - indices.append(torch.randperm(len(bucket), generator=g).tolist()) - else: - for bucket in self.buckets: - indices.append(list(range(len(bucket)))) - - batches = [] - for i in range(len(self.buckets)): - bucket = self.buckets[i] - len_bucket = len(bucket) - ids_bucket = indices[i] - num_samples_bucket = self.num_samples_per_bucket[i] - - # add extra samples to make it evenly divisible - rem = num_samples_bucket - len_bucket - ids_bucket = ids_bucket + ids_bucket * (rem // len_bucket) + ids_bucket[:(rem % len_bucket)] - - # subsample - ids_bucket = ids_bucket[self.rank::self.num_replicas] - - # batching - for j in range(len(ids_bucket) // 
self.batch_size): - batch = [bucket[idx] for idx in ids_bucket[j*self.batch_size:(j+1)*self.batch_size]] - batches.append(batch) - - if self.shuffle: - batch_ids = torch.randperm(len(batches), generator=g).tolist() - batches = [batches[i] for i in batch_ids] - self.batches = batches - - assert len(self.batches) * self.batch_size == self.num_samples - return iter(self.batches) - - def _bisect(self, x, lo=0, hi=None): - if hi is None: - hi = len(self.boundaries) - 1 - - if hi > lo: - mid = (hi + lo) // 2 - if self.boundaries[mid] < x and x <= self.boundaries[mid+1]: - return mid - elif x <= self.boundaries[mid]: - return self._bisect(x, lo, mid) - else: - return self._bisect(x, mid + 1, hi) - else: - return -1 - - def __len__(self): - return self.num_samples // self.batch_size diff --git a/spaces/ltg/chat-nort5/question_detection_norbert3_small/configuration_norbert.py b/spaces/ltg/chat-nort5/question_detection_norbert3_small/configuration_norbert.py deleted file mode 100644 index 450a0286801acce50a7dd9378efa34391e1ca918..0000000000000000000000000000000000000000 --- a/spaces/ltg/chat-nort5/question_detection_norbert3_small/configuration_norbert.py +++ /dev/null @@ -1,34 +0,0 @@ -from transformers.configuration_utils import PretrainedConfig - - -class NorbertConfig(PretrainedConfig): - """Configuration class to store the configuration of a `NorbertModel`. - """ - def __init__( - self, - vocab_size=50000, - attention_probs_dropout_prob=0.1, - hidden_dropout_prob=0.1, - hidden_size=768, - intermediate_size=2048, - max_position_embeddings=512, - position_bucket_size=32, - num_attention_heads=12, - num_hidden_layers=12, - layer_norm_eps=1.0e-7, - output_all_encoded_layers=True, - **kwargs, - ): - super().__init__(**kwargs) - - self.vocab_size = vocab_size - self.hidden_size = hidden_size - self.num_hidden_layers = num_hidden_layers - self.num_attention_heads = num_attention_heads - self.intermediate_size = intermediate_size - self.hidden_dropout_prob = hidden_dropout_prob - self.attention_probs_dropout_prob = attention_probs_dropout_prob - self.max_position_embeddings = max_position_embeddings - self.output_all_encoded_layers = output_all_encoded_layers - self.position_bucket_size = position_bucket_size - self.layer_norm_eps = layer_norm_eps diff --git a/spaces/ltgoslo/ssa-perin/data/field/field.py b/spaces/ltgoslo/ssa-perin/data/field/field.py deleted file mode 100644 index f646b2cfb80ce7879b9118065657294f433c0ccc..0000000000000000000000000000000000000000 --- a/spaces/ltgoslo/ssa-perin/data/field/field.py +++ /dev/null @@ -1,70 +0,0 @@ -import torch -from data.field.mini_torchtext.field import Field as TorchTextField -from collections import Counter, OrderedDict - - -# small change of vocab building to correspond to our version of Dataset -class Field(TorchTextField): - def build_vocab(self, *args, **kwargs): - counter = Counter() - sources = [] - for arg in args: - if isinstance(arg, torch.utils.data.Dataset): - sources += [arg.get_examples(name) for name, field in arg.fields.items() if field is self] - else: - sources.append(arg) - for data in sources: - for x in data: - if not self.sequential: - x = [x] - counter.update(x) - - specials = list( - OrderedDict.fromkeys( - tok - for tok in [self.unk_token, self.pad_token, self.init_token, self.eos_token] + kwargs.pop("specials", []) - if tok is not None - ) - ) - self.vocab = self.vocab_cls(counter, specials=specials, **kwargs) - - def process(self, example, device=None): - if self.include_lengths: - example = example, len(example) - tensor = 
self.numericalize(example, device=device) - return tensor - - def numericalize(self, ex, device=None): - if self.include_lengths and not isinstance(ex, tuple): - raise ValueError("Field has include_lengths set to True, but input data is not a tuple of (data batch, batch lengths).") - - if isinstance(ex, tuple): - ex, lengths = ex - lengths = torch.tensor(lengths, dtype=self.dtype, device=device) - - if self.use_vocab: - if self.sequential: - ex = [self.vocab.stoi[x] for x in ex] - else: - ex = self.vocab.stoi[ex] - - if self.postprocessing is not None: - ex = self.postprocessing(ex, self.vocab) - else: - numericalization_func = self.dtypes[self.dtype] - - if not self.sequential: - ex = numericalization_func(ex) if isinstance(ex, str) else ex - if self.postprocessing is not None: - ex = self.postprocessing(ex, None) - - var = torch.tensor(ex, dtype=self.dtype, device=device) - - if self.sequential and not self.batch_first: - var.t_() - if self.sequential: - var = var.contiguous() - - if self.include_lengths: - return var, lengths - return var diff --git a/spaces/ma-xu/LIVE/thrust/thrust/detail/complex/ctanhf.h b/spaces/ma-xu/LIVE/thrust/thrust/detail/complex/ctanhf.h deleted file mode 100644 index f6923d1df6d723092fc7522dd197bb66fa7f3fa4..0000000000000000000000000000000000000000 --- a/spaces/ma-xu/LIVE/thrust/thrust/detail/complex/ctanhf.h +++ /dev/null @@ -1,124 +0,0 @@ -/* - * Copyright 2008-2013 NVIDIA Corporation - * Copyright 2013 Filipe RNC Maia - * - * Licensed under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -/*- - * Copyright (c) 2011 David Schultz - * All rights reserved. - * - * Redistribution and use in source and binary forms, with or without - * modification, are permitted provided that the following conditions - * are met: - * 1. Redistributions of source code must retain the above copyright - * notice unmodified, this list of conditions, and the following - * disclaimer. - * 2. Redistributions in binary form must reproduce the above copyright - * notice, this list of conditions and the following disclaimer in the - * documentation and/or other materials provided with the distribution. - * - * THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR - * IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES - * OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. - * IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT, - * INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT - * NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, - * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY - * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT - * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF - * THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
- */ - -/* - * Adapted from FreeBSD by Filipe Maia, filipe.c.maia@gmail.com: - * freebsd/lib/msun/src/s_ctanhf.c - */ - -/* - * Hyperbolic tangent of a complex argument z. See ctanh.c for details. - */ - -#pragma once - -#include -#include -#include - -namespace thrust{ -namespace detail{ -namespace complex{ - -using thrust::complex; - -__host__ __device__ inline -complex ctanhf(const complex& z){ - float x, y; - float t, beta, s, rho, denom; - uint32_t hx, ix; - - x = z.real(); - y = z.imag(); - - get_float_word(hx, x); - ix = hx & 0x7fffffff; - - if (ix >= 0x7f800000) { - if (ix & 0x7fffff) - return (complex(x, (y == 0.0f ? y : x * y))); - set_float_word(x, hx - 0x40000000); - return (complex(x, - copysignf(0, isinf(y) ? y : sinf(y) * cosf(y)))); - } - - if (!isfinite(y)) - return (complex(y - y, y - y)); - - if (ix >= 0x41300000) { /* x >= 11 */ - float exp_mx = expf(-fabsf(x)); - return (complex(copysignf(1.0f, x), - 4.0f * sinf(y) * cosf(y) * exp_mx * exp_mx)); - } - - t = tanf(y); - beta = 1.0f + t * t; - s = sinhf(x); - rho = sqrtf(1.0f + s * s); - denom = 1.0f + beta * s * s; - return (complex((beta * rho * s) / denom, t / denom)); -} - - __host__ __device__ inline - complex ctanf(complex z){ - z = ctanhf(complex(-z.imag(), z.real())); - return (complex(z.imag(), -z.real())); - } - -} // namespace complex - -} // namespace detail - -template <> -__host__ __device__ -inline complex tan(const complex& z){ - return detail::complex::ctanf(z); -} - -template <> -__host__ __device__ -inline complex tanh(const complex& z){ - return detail::complex::ctanhf(z); -} - -} // namespace thrust diff --git a/spaces/ma-xu/LIVE/thrust/thrust/system/cpp/detail/unique_by_key.h b/spaces/ma-xu/LIVE/thrust/thrust/system/cpp/detail/unique_by_key.h deleted file mode 100644 index 1d40011787cb8eaea25a969c855c1c758a0225e4..0000000000000000000000000000000000000000 --- a/spaces/ma-xu/LIVE/thrust/thrust/system/cpp/detail/unique_by_key.h +++ /dev/null @@ -1,23 +0,0 @@ -/* - * Copyright 2008-2013 NVIDIA Corporation - * - * Licensed under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -#pragma once - -#include - -// this system inherits unique_by_key -#include - diff --git a/spaces/menghanxia/disco/models/position_encoding.py b/spaces/menghanxia/disco/models/position_encoding.py deleted file mode 100644 index 0af5df59e2878dd8e7487598e0aa083ace445371..0000000000000000000000000000000000000000 --- a/spaces/menghanxia/disco/models/position_encoding.py +++ /dev/null @@ -1,86 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved -""" -Various positional encodings for the transformer. -""" -import math -import torch -from torch import nn - - -class PositionEmbeddingSine(nn.Module): - """ - This is a more standard version of the position embedding, very similar to the one - used by the Attention is all you need paper, generalized to work on images. 
- """ - def __init__(self, num_pos_feats=64, temperature=10000, normalize=False, scale=None): - super().__init__() - self.num_pos_feats = num_pos_feats - self.temperature = temperature - self.normalize = normalize - if scale is not None and normalize is False: - raise ValueError("normalize should be True if scale is passed") - if scale is None: - scale = 2 * math.pi - self.scale = scale - - def forward(self, token_tensors): - ## input: (B,C,H,W) - x = token_tensors - h, w = x.shape[-2:] - identity_map= torch.ones((h,w), device=x.device) - y_embed = identity_map.cumsum(0, dtype=torch.float32) - x_embed = identity_map.cumsum(1, dtype=torch.float32) - if self.normalize: - eps = 1e-6 - y_embed = y_embed / (y_embed[-1:, :] + eps) * self.scale - x_embed = x_embed / (x_embed[:, -1:] + eps) * self.scale - - dim_t = torch.arange(self.num_pos_feats, dtype=torch.float32, device=x.device) - dim_t = self.temperature ** (2 * (dim_t // 2) / self.num_pos_feats) - - pos_x = x_embed[:, :, None] / dim_t - pos_y = y_embed[:, :, None] / dim_t - pos_x = torch.stack((pos_x[:, :, 0::2].sin(), pos_x[:, :, 1::2].cos()), dim=3).flatten(2) - pos_y = torch.stack((pos_y[:, :, 0::2].sin(), pos_y[:, :, 1::2].cos()), dim=3).flatten(2) - pos = torch.cat((pos_y, pos_x), dim=2).permute(2, 0, 1) - batch_pos = pos.unsqueeze(0).repeat(x.shape[0], 1, 1, 1) - return batch_pos - - -class PositionEmbeddingLearned(nn.Module): - """ - Absolute pos embedding, learned. - """ - def __init__(self, n_pos_x=16, n_pos_y=16, num_pos_feats=64): - super().__init__() - self.row_embed = nn.Embedding(n_pos_y, num_pos_feats) - self.col_embed = nn.Embedding(n_pos_x, num_pos_feats) - self.reset_parameters() - - def reset_parameters(self): - nn.init.uniform_(self.row_embed.weight) - nn.init.uniform_(self.col_embed.weight) - - def forward(self, token_tensors): - ## input: (B,C,H,W) - x = token_tensors - h, w = x.shape[-2:] - i = torch.arange(w, device=x.device) - j = torch.arange(h, device=x.device) - x_emb = self.col_embed(i) - y_emb = self.row_embed(j) - pos = torch.cat([ - x_emb.unsqueeze(0).repeat(h, 1, 1), - y_emb.unsqueeze(1).repeat(1, w, 1), - ], dim=-1).permute(2, 0, 1) - batch_pos = pos.unsqueeze(0).repeat(x.shape[0], 1, 1, 1) - return batch_pos - - -def build_position_encoding(num_pos_feats=64, n_pos_x=16, n_pos_y=16, is_learned=False): - if is_learned: - position_embedding = PositionEmbeddingLearned(n_pos_x, n_pos_y, num_pos_feats) - else: - position_embedding = PositionEmbeddingSine(num_pos_feats, normalize=True) - - return position_embedding \ No newline at end of file diff --git a/spaces/merve/anonymization/source/private-and-fair/top-bot-digits.js b/spaces/merve/anonymization/source/private-and-fair/top-bot-digits.js deleted file mode 100644 index bc2f85ec8cb3b5544245f159aa62ff2fbffbcbb5..0000000000000000000000000000000000000000 --- a/spaces/merve/anonymization/source/private-and-fair/top-bot-digits.js +++ /dev/null @@ -1,66 +0,0 @@ - -!(async function(){ - await util.getFile(`cns-cache/mnist_train_raw_3.npy`) - var digitMetadata = await util.getFile('mnist_train.csv') - var {byLabel} = util.decorateDigitMetadata(digitMetadata) - - var sel = d3.select('.top-bot-digits').html('') - .at({role: 'graphics-document', 'aria-label': `The twenty-five MNIST 3 digits most and least senstive to higher and lower privacy. 
The digits most sensitive to higher privacy are much more poorly drawn than the onces least sensitive to higher privacy.`}) - - var digitSel = sel.append('div') - var buttonSel = sel.append('div.digit-button-container') - .appendMany('div.button', d3.range(10)) - .text(d => d) - .on('click', d => drawClass(byLabel[d])) - - drawClass(byLabel[3]) - - async function drawClass(digitClass){ - buttonSel.classed('active', d => d == digitClass.key) - await util.getFile(`cns-cache/mnist_train_raw_${digitClass.key}.npy`) - - var nRows = 5 - var nCols = 5 - - var bot = _.sortBy(digitClass, d => +d.priv_order).slice(0, nRows*nCols) - var top = _.sortBy(digitClass, d => -d.priv_order).slice(0, nRows*nCols) - - digitSel.html('').append('div') - .st({maxWidth: 640, margin: '0 auto'}) - .appendMany('div', [bot, top]) - .st({display: 'inline-block'}) - .each(drawDigitBlock) - - - function drawDigitBlock(digits, isBot){ - var s = 2 - - var sel = d3.select(this).append('div') - - var c = d3.conventions({ - sel, - width: s*29*nCols, - height: s*29*nRows, - layers: 'cs', - margin: {top: 30, bottom: 10, right: 10, left: 10} - }) - - var ctx = c.layers[0] - - digits.forEach((d, i) => { - util.drawDigit( - ctx, - +d.i, - s, - (i % nCols)*s*29, - Math.floor(i/nCols)*s*29 - ) - }) - - c.svg.append('text') - .text(isBot ? 'Least sensitive to higher privacy' : 'Most sensitive to higher privacy') - .at({dy: '-.4em', textAnchor: 'middle', x: c.width/2, fontWeight: 600, fontSize: 14}) - } - } - -})() \ No newline at end of file diff --git a/spaces/merve/data-leak/public/measuring-diversity/columns-height.js b/spaces/merve/data-leak/public/measuring-diversity/columns-height.js deleted file mode 100644 index 3933c17b4bb8abe209b3573bb436c53c47543b1b..0000000000000000000000000000000000000000 --- a/spaces/merve/data-leak/public/measuring-diversity/columns-height.js +++ /dev/null @@ -1,177 +0,0 @@ -window.initColumns = function(id, metrics, measures){ - var c = d3.conventions({ - sel: d3.select(id).html('').st({width: 775, margin: '0px auto', left: 27}), - margin: {left: 260, top: 40}, - height: 600, - }) - - var sets = d3.range(numRows).map(i => { - var shapes = columnShapes[i] - shapes = _.sortBy(shapes, d => d.shape) - shapes = _.sortBy(shapes, d => d.size) - shapes = _.sortBy(shapes, d => d.color) - shapes = _.sortBy(shapes, d => d.color == 'green' ? 0 : 1) - - - shapes.nG = d3.sum(shapes, d => d.color == 'green') - shapes.nB = d3.sum(shapes, d => d.color == 'blue') - shapes.nO = d3.sum(shapes, d => d.color == 'orange') - shapes.nR = d3.sum(shapes, d => d.color == 'red') - - shapes.forEach((d, i) => { - d.i = i - d.sizeVal = d.sizeVal < 1 ? 
.6 : 1 - }) - shapes.i = i - return shapes - }) - - var colW = 200 - var colWpad = 50 - var colH = 20 - var colHpad = 10 - var offsetW = -20 - - var colSel = c.svg.appendMany('g', measures) - .translate((d, i) => [.5 + i*(colW + colWpad) + offsetW, .5]) - - colSel.append('text').text(d => d.ranking_display_text) - .at({y: -20, textAnchor: 'middle', x: colW/2, fontWeight: 600, }) - - var rowSel = colSel.appendMany('g.row', sets) - .translate(d => d.i*(colH + colHpad), 1) - - var colMean = colSel.filter((d, i) => i === 0) - var colMin = colSel.filter((d, i) => i === 1) - var scoreLabelsMean = colMean.selectAll('.row').append('text') - .at({x: -5, y: 15, textAnchor: 'end'}) - .st({fontSize: '13px', opacity: .7}) - var scoreLabelsMin = colMin.selectAll('.row').append('text') - .at({x: 222, y: 15, textAnchor: 'end'}) - .st({fontSize: '13px', opacity: .7}) - - colSel.each(function(d, i){ - d.rowSel = d3.select(this).selectAll('.row') - - c.svg.append('marker') - .attr('id', 'arrow') - .attr('viewBox', '-10 -10 20 20') - .attr('markerWidth', 20) - .attr('markerHeight', 20) - .attr('orient', 'auto') - .append('path') - .attr('d', 'M-6.75,-6.75 L 0,0 L -6.75,6.75') - .at({fill: '#000'}) - - - if (i){ - var pathstr = ['M', 160, -25, 'C', 215, -25, 215, -25, 215, -5].join(' ') - } else{ - var pathstr = ['M', 35, -25, 'C', -20, -25, -20, -25, -20, -5].join(' ') - } - d3.select(this).append('path') - .at({stroke: '#000', fill: 'none', d: pathstr, markerEnd: 'url(#arrow)', strokeWidth: .6}) - }) - - - var s = colH - var p = 2 - - var l0Sel = c.svg.appendMany('path.set', sets).classed('set1', true) - .translate(d => [colW + offsetW, s/2 + .5]) - - drawRow(rowSel) - function drawRow(rowSel){ - rowSel.append('rect.set.no-stroke') - .at({x: -p, y: -p, width: colW + p*2, height: colH + p*2, fill: '#fff'}).classed('set1', true) - - rowSel.appendMany('g', d => d) - .translate(d => [d.i*s + s/2, s/2]) - .each(function(d){ - - var sOffset = 12 - var classNames = [d.shape, d.size, d.color, 'rank-item'].join(' ') - var shapeSel = d3.select(this).append('rect') - .at({ - x: -s/2, - y: -s/2 + (d.size == 'small' ? sOffset/2 : 0) - .5, - width: s - .5, - height: s - (d.size == 'small' ? 
sOffset : 0), - fill: d.fill, - class: classNames - }) - - if (d.shape == 'triangle'){ - var shapeSel = d3.select(this).append('circle') - .at({r: 2, fill: '#fff', stroke: '#000', strokeWidth: .5, class: classNames}) - } - }) - - } - - var setSel = c.svg.selectAll('.set1') - .on('mouseover', selectSet) - - sets.selected = sets[0] - function selectSet(set){ - sets.selected = set - sets.forEach(d => d.selected = d == set) - setSel - .classed('selected', d => d.selected) - .filter(d => d.selected) - .lower() - - rowSel.classed('selected', d => d.selected) - - sliders.render() - } - - - var sliders = makeSliders(metrics, sets, c, selectSet, drawRow, () => { - sets.forEach(shapes => { - shapes.score = metrics.map(m => { - var v = d3.sum(shapes, (d, i) => shapes[i][m.field] == m.key) - return Math.abs(m.target - v/shapes.length) - }) - }) - - measures.forEach(m => { - sets.forEach(shapes => { - shapes[m.str] = m.fn(shapes.score) - }) - _.sortBy(sets, d => d[m.str] + d.i/10000000)//.reverse() - .forEach((d, i) => d['i' + m.str] = i) - - m.rowSel.translate(d => d['i' + m.str]*(colH + colHpad), 1) - }) - - var p = 0 - l0Sel.at({d: d => [ - 'M', p, d['iUtilitarian']*(colH + colHpad), - 'L', colWpad - p, d['iEgalitarian']*(colH + colHpad), - ].join(' ')}) - - - scoreLabelsMean.text(d => { - return d3.format('.2f')(d['Utilitarian'])// + '%' - }) - scoreLabelsMin.text(d => { - return measures[1].ppFn(d['score']).replace('%', '')// + '%' - }) - }) - - sliders.render() - selectSet(_.sortBy(sets, d => d.iEgalitarian)[0]) -} -window.initColumns('#columns-height', metrics1, measures) -window.initColumns('#columns-height-disagree', metrics2, measures2) - -// Only highlight green items in the second ranking chart. -d3.select('#columns-height-disagree').selectAll('.rank-item').at({opacity: .3}) -d3.select('#columns-height-disagree').selectAll('.green').at({opacity: 1}) - -// Only highlight the green slider in the second ranking chart. -d3.select('#columns-height-disagree').selectAll('.slider').at({opacity: d => { - return d.key !== 'green' ? 0.35: 1 -}}) - diff --git a/spaces/merve/data-leak/source/anonymization/annotations.js b/spaces/merve/data-leak/source/anonymization/annotations.js deleted file mode 100644 index ed45db46369d1bb2a709b20bd97c29451d4284c0..0000000000000000000000000000000000000000 --- a/spaces/merve/data-leak/source/anonymization/annotations.js +++ /dev/null @@ -1,38 +0,0 @@ -var annotations = - -[ -] - - - - -function addSwoop(c){ - var swoopy = d3.swoopyDrag() - .x(d => c.x(d.x)) - .y(d => c.y(d.y)) - .draggable(0) - .annotations(annotations) - - var swoopySel = c.svg.append('g.annotations').call(swoopy) - - c.svg.append('marker#arrow') - .attr('viewBox', '-10 -10 20 20') - .attr('markerWidth', 20) - .attr('markerHeight', 20) - .attr('orient', 'auto') - .append('path').at({d: 'M-6.75,-6.75 L 0,0 L -6.75,6.75'}) - - - swoopySel.selectAll('path').attr('marker-end', 'url(#arrow)') - window.annotationSel = swoopySel.selectAll('g') - .st({fontSize: 12, opacity: d => d.slide == 0 ? 
1 : 0}) - - swoopySel.selectAll('text') - .each(function(d){ - d3.select(this) - .text('') //clear existing text - .tspans(d3.wordwrap(d.text, d.width || 20), 12) //wrap after 20 char - }) -} - - diff --git a/spaces/merzigo/MKAtaturkv2/app.py b/spaces/merzigo/MKAtaturkv2/app.py deleted file mode 100644 index a207cf06228e8446002ef176ad9101615e15a21e..0000000000000000000000000000000000000000 --- a/spaces/merzigo/MKAtaturkv2/app.py +++ /dev/null @@ -1,313 +0,0 @@ -from diffusers import StableDiffusionPipeline, StableDiffusionImg2ImgPipeline, DPMSolverMultistepScheduler -import gradio as gr -import torch -from transformers import pipeline -from PIL import Image - -model_id = 'Extraphy/mustafa-kemal-ataturkv2' -prefix = 'Atatürk' - -scheduler = DPMSolverMultistepScheduler.from_pretrained(model_id, subfolder="scheduler") - -pipe = StableDiffusionPipeline.from_pretrained( - model_id, - torch_dtype=torch.float16 if torch.cuda.is_available() else torch.float32, - scheduler=scheduler) - -#pipe_i2i = StableDiffusionImg2ImgPipeline.from_pretrained( - #model_id, - #torch_dtype=torch.float16 if torch.cuda.is_available() else torch.float32, - # scheduler=scheduler) - -if torch.cuda.is_available(): - pipe = pipe.to("cuda") - #pipe_i2i = pipe_i2i.to("cuda") - -def error_str(error, title="Error"): - return f"""#### {title} - {error}""" if error else "" - -def inference(prompt, guidance, steps, width=512, height=512, seed=0, img=None, strength=0.5, neg_prompt="", auto_prefix=False): - - generator = torch.Generator('cuda').manual_seed(seed) if seed != 0 else None - prompt = f"{prefix} {prompt}" if auto_prefix else prompt - - try: - return txt_to_img(prompt, neg_prompt, guidance, steps, width, height, generator), None - except Exception as e: - return None, error_str(e) - -def txt_to_img(prompt, neg_prompt, guidance, steps, width, height, generator): - - result = pipe( - prompt, - negative_prompt = neg_prompt, - num_inference_steps = int(steps), - guidance_scale = guidance, - width = width, - height = height, - generator = generator) - - return result.images[0] - - - -css = """ -.button, input, optgroup, select, textarea { - font-family: 'Source Sans Pro'; - font-size: 80%; - font-weight: inherit; - line-height: inherit; - color: orangered; - margin: 0; - padding: 0; - font-weight: 600; -} - .gr-form { - display: contents; - flex-direction: column; - align-items: center; - justify-content: center; - width: 50%; - line-height: 0.9rem; -} - -.gr-label { - font-size: 1.2rem; - margin-bottom: 0.5rem; -} - -.gr-input, .gr-select { - width: 100%; - font-size: 0.8rem; - border: 1px solid #f78900; - border-radius: 0.15rem; -} - -.gr-input[type="number"] { - width: fit-content; -} - -.gr-button { - font-size: 1.2rem; - padding: 1.5rem 0.2rem; - border: none; - border-radius: 0.25rem; - background-image: linear-gradient(0deg, rgb(15, 54, 96) 0.00%,rgb(252, 133, 123) 99.00%); - background: linear-gradient(0deg, rgb(185, 54, 96) 0.00%,rgb(252, 133, 123) 99.00%); - color: #f3f3f3; - cursor: copy; -} - -.gr-button:hover { - background-color: Red; -} - - -} -.main-div div { - display: inline-flex; - align-items: center; - gap: .8rem; - font-size: 1.75rem; -} - -.main-div div h1 { - font-weight: 1000; - margin-bottom: 7px; -} - -.main-div p { - margin-bottom: 10px; - font-size: 94%; -} - -a { - text-decoration: underline; -} - -.tabs { - margin-top: 0; - margin-bottom: 0; -} - -#gallery { - min-height: 20rem; -} -.container { - width: 70%; -} - .hcontainer { - width: 80%; - display: flex; - overflow: auto; - min-height: 
185px; - align-items: center; - flex-direction: column; - justify-content: flex-start; -} -.hcontainer1 { - top: 20px; - right: 0px; - width: 100%; - height: 90px; - display: flex; - position: absolute; - align-items: flex-start; - border-radius: var(--dl-radius-radius-radius8); - justify-content: flex-start; -} -.hcontainer2 { - top: 20px; - right: 0px; - width: 100%; - height: 100%; - display: flex; - position: absolute; - align-items: flex-start; - border-radius: var(--dl-radius-radius-radius8); - justify-content: flex-start; - background-image: linear-gradient(0deg, rgb(185, 54, 96) 0.00%,rgb(252, 133, 123) 99.00%); -} -.home-image { - top: 0px; - right: var(--dl-space-space-halfunit); - width: 50%; - bottom: 0px; - margin: auto; - position: absolute; - object-fit: cover; -} -.home-image1 { - top: 17px; - left: 23px; - width: 348px; - height: 52px; - margin: auto; - position: absolute; - align-self: flex-start; - object-fit: cover; -} -:root { - --dl-color-gray-500: #595959; - --dl-color-gray-700: #999999; - --dl-color-gray-900: #D9D9D9; - --dl-size-size-large: 144px; - --dl-size-size-small: 48px; - --dl-color-danger-300: #A22020; - --dl-color-danger-500: #BF2626; - --dl-color-danger-700: #E14747; - --dl-color-gray-black: #000000; - --dl-color-gray-white: #FFFFFF; - --dl-size-size-medium: 96px; - --dl-size-size-xlarge: 192px; - --dl-size-size-xsmall: 16px; - --dl-space-space-unit: 16px; - --dl-color-primary-100: #003EB3; - --dl-color-primary-300: #0074F0; - --dl-color-primary-500: #14A9FF; - --dl-color-primary-700: #85DCFF; - --dl-color-success-300: #199033; - --dl-color-success-500: #32A94C; - --dl-color-success-700: #4CC366; - --dl-size-size-xxlarge: 288px; - --dl-radius-radius-round: 50%; - --dl-space-space-halfunit: 8px; - --dl-space-space-sixunits: 96px; - --dl-space-space-twounits: 32px; - --dl-radius-radius-radius2: 2px; - --dl-radius-radius-radius4: 4px; - --dl-radius-radius-radius8: 8px; - --dl-space-space-fiveunits: 80px; - --dl-space-space-fourunits: 64px; - --dl-space-space-threeunits: 48px; - --dl-space-space-oneandhalfunits: 24px; -} - - -""" -with gr.Blocks(css=css) as demo: - gr.HTML( - f""" -
        -
        -
        - image - image -
        -
        -
        - """ - ) - with gr.Row(): - - with gr.Column(scale=60): - with gr.Group(): - with gr.Row(): - prompt = gr.Textbox(label="Prompt", value="Portrait of Atatürk, fantasy, intricate, elegant, highly detailed, digital painting, artstation, concept art, smooth, sharp focus, illustration, artgerm and greg rutkowski and alphonse mucha", - examples=[ - ["What a beautiful morning for a walk!"], - ["It was the best of times, it was the worst of times."], - ], - show_label=False, max_lines=3,placeholder=f"{prefix} [your prompt]").style(container=True) - generate = gr.Button(value="ÜRET").style(rounded=(False, True, True, False)) - - image_out = gr.Image(height=512) - error_output = gr.Markdown() - - with gr.Column(scale=40): - with gr.Tab("Seçenekler"): - with gr.Group(): - neg_prompt = gr.Textbox(label="Negatif Girdi", placeholder="Çıkarılacak Girdiler", value="lowres, text, error, cropped, low quality, duplicate, mutilated, out of frame, extra fingers, mutated hands, mutation, deformed, blurry, bad anatomy, bad proportions, extra limbs, cloned face, disfigured, gross proportions, missing arms, missing legs, fused fingers") - #auto_prefix = gr.Checkbox(label="Prefix styling tokens automatically (Atatürk)", value=prefix, visible=prefix) - - with gr.Row(): - guidance = gr.Slider(label="Sanat Ölçeği", value=7.5, maximum=15) - steps = gr.Slider(label="İşlem Adımı", value=25, minimum=2, maximum=75, step=1) - - with gr.Row(): - width = gr.Slider(label="Genişlik", value=512, minimum=64, maximum=1024, step=8) - height = gr.Slider(label="Yükseklik", value=512, minimum=64, maximum=1024, step=8) - - seed = gr.Slider(0, 2147483647, label='Köken (0 = Rastgele)', value=0, step=1) - - #with gr.Tab("Image to image"): - with gr.Group(): - gr.Textbox( - label="Bilgiler", - lines=3, - value="Mustafa Kemal Atatürk'ün bilinen ve genç halini üretebileceğiniz bir yapay zeka modeli. Bilinen hali için Atatürk, genç hali için ise GençAtatürk yazabilirsiniz.", - ) - - - - - #image2 = gr.Image(label="Image", height=256, tool="editor", type="pil") - #strength = gr.Slider(label="Transformation strength", minimum=0, maximum=1, step=0.01, value=0.5) - - #auto_prefix.change(lambda x: gr.update(placeholder=f"{prefix} [your prompt]" if x else "[Your prompt]"), inputs=auto_prefix, outputs=prompt, queue=False) - - inputs = [prompt, guidance, steps, width, height, seed, neg_prompt] - outputs = [image_out, error_output] - prompt.submit(inference, inputs=inputs, outputs=outputs) - generate.click(inference, inputs=inputs, outputs=outputs) - - gr.HTML(""" -
        -

        Daha fazla bilgi için SKB


        -

        -


        -

        This space was created using SD Space Creator.

        -
        - """) - -demo.queue(concurrency_count=1) -demo.launch() diff --git a/spaces/mikkoar/marco/src/app/loading.css b/spaces/mikkoar/marco/src/app/loading.css deleted file mode 100644 index eaaab6a86a228334c4eca3c5368ae6f0f593d405..0000000000000000000000000000000000000000 --- a/spaces/mikkoar/marco/src/app/loading.css +++ /dev/null @@ -1,68 +0,0 @@ -::-webkit-scrollbar { - width: 10px; - height: 10px; - display: none; -} - -::-webkit-scrollbar-button:start:decrement, -::-webkit-scrollbar-button:end:increment { - height: 30px; - background-color: transparent; -} - -::-webkit-scrollbar-track-piece { - background-color: #3b3b3b; - -webkit-border-radius: 16px; -} - -::-webkit-scrollbar-thumb:vertical { - height: 50px; - background-color: #666; - border: 1px solid #eee; - -webkit-border-radius: 6px; -} - -/* loading start */ -.loading-spinner { - display: flex; - justify-content: center; - align-items: center; - height: 100vh; - opacity: 1; - transition: opacity .8s ease-out; -} - -.loading-spinner.hidden { - opacity: 0; -} - -.loading-spinner>div { - width: 30px; - height: 30px; - background: linear-gradient(90deg, #2870EA 10.79%, #1B4AEF 87.08%); - - border-radius: 100%; - display: inline-block; - animation: sk-bouncedelay 1.4s infinite ease-in-out both; -} - -.loading-spinner .bounce1 { - animation-delay: -0.32s; -} - -.loading-spinner .bounce2 { - animation-delay: -0.16s; -} - -@keyframes sk-bouncedelay { - - 0%, - 80%, - 100% { - transform: scale(0); - } - - 40% { - transform: scale(1.0); - } -} diff --git a/spaces/miniv/bingai/README.md b/spaces/miniv/bingai/README.md deleted file mode 100644 index 4f4a8278b21a639dffe4691a6df955ddb60d2c4f..0000000000000000000000000000000000000000 --- a/spaces/miniv/bingai/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Bingai -emoji: 🏃 -colorFrom: blue -colorTo: red -sdk: docker -pinned: false -license: mit -app_port: 8080 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/mishtert/tracer/results_utils.py b/spaces/mishtert/tracer/results_utils.py deleted file mode 100644 index 90a61e9beaea650839471376802135fa391dbae2..0000000000000000000000000000000000000000 --- a/spaces/mishtert/tracer/results_utils.py +++ /dev/null @@ -1,120 +0,0 @@ -import streamlit as st - -def ReadPDFFile(doc, PageNum, searchtext, BreakText): - Read = False - found = False - Extracttext = [] - # - page = doc.loadPage(PageNum) - pagetext = page.getText("text") - # - text_instances = page.searchFor(searchtext) - # - if (text_instances): - page_display = page.getDisplayList() - dictionary_elements = page_display.getTextPage().extractDICT() - for block in dictionary_elements['blocks']: - for line in block['lines']: - for span in line['spans']: - Line = span['text'] - if (Line.strip() == searchtext): - Read = True - if (BreakText != "" and BreakText in Line.strip() and Read == True): - Extracttext.append(Line) - found = True - break - if (Read): - if (len(Line) > 1): - Extracttext.append(Line) - if (found): break - if (found): break - # - return Extracttext - - -def PopulateDict(Extracttext, DictText): - Extractedtext = [] - # DictText = {} - ShareAmts = [] - Key = "" - Extractedtext = Extracttext[4:len(Extracttext) - 1] - for readtext in Extractedtext: - if (not any(map(str.isdigit, readtext)) and "(" in readtext): - Key = Key + readtext - else: - if (not any(map(str.isdigit, readtext))): - if (ShareAmts): - DictText[Key] = ShareAmts - Key = readtext # String data - else: - Key = readtext - ShareAmts = [] 
- else: - ShareAmts.append(readtext.strip()) - # - if (ShareAmts): - DictText[Key] = ShareAmts - # - return DictText - - -def get_brief(DictText): - for Key in DictText: - Values = DictText[Key] - if (len(Values) == 3): - Val0 = Values[0].replace(",", ".") - Val1 = Values[1].replace(",", ".") - Val0 = Val0.replace("%", "") - Val1 = Val1.replace("%", "") - Val2 = Values[2] - Val2 = Val2.replace("(", "") - Val2 = Val2.replace(")", "") - Val2 = Val2.replace("]", "") - Val2 = Val2.replace("[", "") - Val2 = Val2.replace("pt", "%") - Val2 = Val2.replace(" %", "%") - # - if (float(Val0) > float(Val1)): - st.markdown('* {} ${}B, **up** {} YoY vs ${}B in 1Q21.'.format(Key, round(float(Val0), 2), Val2, - round(float(Val1), 2)),unsafe_allow_html=True) - else: - st.markdown('* {} ${}B, **down** {} YoY vs ${}B in 1Q21.'.format(Key, round(float(Val0), 2), Val2, - round(float(Val1), 2)),unsafe_allow_html=True) - if (len(Values) == 2): - Val0 = Values[0].replace(",", ".") - Val1 = Values[1].replace(",", ".") - Val0 = Val0.replace("%", "") - Val1 = Val1.replace("%", "") - Val0 = Val0.replace("(", "") - Val0 = Val0.replace(")", "") - Val1 = Val1.replace("(", "") - Val1 = Val1.replace(")", "") - # - if (float(Val0) > float(Val1)): - diffVal = (float(Val0) - float(Val1)) - Res = (100 * diffVal) / (float(Val1)) - st.markdown('* {} ${}B, **up** {}% YoY vs ${}B in 1Q21.'.format(Key, round(float(Val0), 2), round(Res, 2), - round(float(Val1), 2)),unsafe_allow_html=True) - else: - diffVal = (float(Val1) - float(Val0)) - Res = (100 * diffVal) / (float(Val0)) - st.markdown('* {} ${}B, **down** {}% YoY vs ${}B in 1Q21.'.format(Key, round(float(Val0), 2), round(Res, 2), - round(float(Val1), 2)),unsafe_allow_html=True) - if (len(Values) == 1): - Values = str(Values) - Values = Values.replace("]", "") - Values = Values.replace("[", "") - Values = Values.replace(" %", "%") - if ("Impact" in Key): - if ("(" in Values): - Values = Values.replace("(", "") - Values = Values.replace(")", "") - st.markdown('* {} on revenue growth, **negative** {}.'.format(Key, Values),unsafe_allow_html=True) - else: - st.markdown('* {} on revenue growth, {}.'.format(Key, Values),unsafe_allow_html=True) - else: - if ("Organic" in Key): - st.write('* {}, {}.'.format(Key, Values)) - else: - st.write('* {}, {}.'.format(Key, Values)) - diff --git a/spaces/mithril-security/blind_chat/.svelte-kit/types/src/routes/conversation/[id]/web-search/$types.d.ts b/spaces/mithril-security/blind_chat/.svelte-kit/types/src/routes/conversation/[id]/web-search/$types.d.ts deleted file mode 100644 index 007cffe9f867e23e5026c0f6f11d31639854d363..0000000000000000000000000000000000000000 --- a/spaces/mithril-security/blind_chat/.svelte-kit/types/src/routes/conversation/[id]/web-search/$types.d.ts +++ /dev/null @@ -1,9 +0,0 @@ -import type * as Kit from '@sveltejs/kit'; - -type Expand = T extends infer O ? 
{ [K in keyof O]: O[K] } : never; -type RouteParams = { id: string } -type RouteId = '/conversation/[id]/web-search'; - -export type EntryGenerator = () => Promise> | Array; -export type RequestHandler = Kit.RequestHandler; -export type RequestEvent = Kit.RequestEvent; \ No newline at end of file diff --git a/spaces/mithril-security/blind_chat/src/routes/search/[id]/+server.ts b/spaces/mithril-security/blind_chat/src/routes/search/[id]/+server.ts deleted file mode 100644 index 240de4cd73a9d03090618a8d475313735ad0d96e..0000000000000000000000000000000000000000 --- a/spaces/mithril-security/blind_chat/src/routes/search/[id]/+server.ts +++ /dev/null @@ -1,7 +0,0 @@ -import { collections } from "$lib/server/database"; -import { hashConv } from "$lib/utils/hashConv.js"; -import { error } from "@sveltejs/kit"; - -export async function GET({ params, locals }) { - return new Response(JSON.stringify(""), { headers: { "Content-Type": "application/json" } }); -} diff --git a/spaces/miyaaa666/bingo/src/components/button-scroll-to-bottom.tsx b/spaces/miyaaa666/bingo/src/components/button-scroll-to-bottom.tsx deleted file mode 100644 index b68ab9c0e48320c356e51a52d11b9ca63909e6c5..0000000000000000000000000000000000000000 --- a/spaces/miyaaa666/bingo/src/components/button-scroll-to-bottom.tsx +++ /dev/null @@ -1,34 +0,0 @@ -'use client' - -import * as React from 'react' - -import { cn } from '@/lib/utils' -import { useAtBottom } from '@/lib/hooks/use-at-bottom' -import { Button, type ButtonProps } from '@/components/ui/button' -import { IconArrowDown } from '@/components/ui/icons' - -export function ButtonScrollToBottom({ className, ...props }: ButtonProps) { - const isAtBottom = useAtBottom() - - return ( - - ) -} diff --git a/spaces/miyaaa666/bingo/src/components/turn-counter.tsx b/spaces/miyaaa666/bingo/src/components/turn-counter.tsx deleted file mode 100644 index 08a9e488f044802a8600f4d195b106567c35aab4..0000000000000000000000000000000000000000 --- a/spaces/miyaaa666/bingo/src/components/turn-counter.tsx +++ /dev/null @@ -1,23 +0,0 @@ -import React from 'react' -import { Throttling } from '@/lib/bots/bing/types' - -export interface TurnCounterProps { - throttling?: Throttling -} - -export function TurnCounter({ throttling }: TurnCounterProps) { - if (!throttling) { - return null - } - - return ( -
        -
        - {throttling.numUserMessagesInConversation} - - {throttling.maxNumUserMessagesInConversation} -
        -
        -
        - ) -} diff --git a/spaces/ml6team/controlnet-interior-design/config.py b/spaces/ml6team/controlnet-interior-design/config.py deleted file mode 100644 index f882b519dcc3b4c23900005e6ea22ce5148e1173..0000000000000000000000000000000000000000 --- a/spaces/ml6team/controlnet-interior-design/config.py +++ /dev/null @@ -1,35 +0,0 @@ -"""File with configs""" -from palette import COLOR_MAPPING_, COLOR_MAPPING - -HEIGHT = 512 -WIDTH = 512 - -def to_rgb(color: str) -> tuple: - """Convert hex color to rgb. - Args: - color (str): hex color - Returns: - tuple: rgb color - """ - return tuple(int(color[i:i+2], 16) for i in (1, 3, 5)) - -COLOR_NAMES = list(COLOR_MAPPING.keys()) -COLOR_RGB = [to_rgb(k) for k in COLOR_MAPPING_.keys()] + [(0, 0, 0), (255, 255, 255)] -INVERSE_COLORS = {v: to_rgb(k) for k, v in COLOR_MAPPING_.items()} -COLOR_MAPPING_RGB = {to_rgb(k): v for k, v in COLOR_MAPPING_.items()} - -def map_colors(color: str) -> str: - """Map color to hex value. - Args: - color (str): color name - Returns: - str: hex value - """ - return COLOR_MAPPING[color] - -def map_colors_rgb(color: tuple) -> str: - return COLOR_MAPPING_RGB[color] - - -POS_PROMPT = "tree, sky, cloud, scenery, outdoors, grass, flowers, sunlight, beautiful, ultra detailed beautiful landscape, architectural renderings vegetation, high res, best high quality landscape, outdoor lighting, sunshine, 4k, 8k, realistic" -NEG_PROMPT= "lowres, deformed, blurry, bad anatomy, disfigured, poorly drawn face, mutation, mutated, extra limb, ugly, poorly drawn hands, missing limb, blurry, floating limbs, disconnected limbs, malformed hands, blur, out of focus, long neck, long body, mutated hands and fingers, out of frame" diff --git a/spaces/mmlab-ntu/relate-anything-model/segment_anything/modeling/common.py b/spaces/mmlab-ntu/relate-anything-model/segment_anything/modeling/common.py deleted file mode 100644 index 2bf15236a3eb24d8526073bc4fa2b274cccb3f96..0000000000000000000000000000000000000000 --- a/spaces/mmlab-ntu/relate-anything-model/segment_anything/modeling/common.py +++ /dev/null @@ -1,43 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. - -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. 
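A quick worked example of the `to_rgb` helper defined in `config.py` above; the function is restated here so the check is self-contained, and the hex values are arbitrary.

```python
# illustrative check of config.py's to_rgb: "#ff8800" -> (255, 136, 0)
def to_rgb(color: str) -> tuple:
    return tuple(int(color[i:i + 2], 16) for i in (1, 3, 5))

assert to_rgb("#ff8800") == (255, 136, 0)
assert to_rgb("#000000") == (0, 0, 0)
```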
- -import torch -import torch.nn as nn - -from typing import Type - - -class MLPBlock(nn.Module): - def __init__( - self, - embedding_dim: int, - mlp_dim: int, - act: Type[nn.Module] = nn.GELU, - ) -> None: - super().__init__() - self.lin1 = nn.Linear(embedding_dim, mlp_dim) - self.lin2 = nn.Linear(mlp_dim, embedding_dim) - self.act = act() - - def forward(self, x: torch.Tensor) -> torch.Tensor: - return self.lin2(self.act(self.lin1(x))) - - -# From https://github.com/facebookresearch/detectron2/blob/main/detectron2/layers/batch_norm.py # noqa -# Itself from https://github.com/facebookresearch/ConvNeXt/blob/d1fa8f6fef0a165b27399986cc2bdacc92777e40/models/convnext.py#L119 # noqa -class LayerNorm2d(nn.Module): - def __init__(self, num_channels: int, eps: float = 1e-6) -> None: - super().__init__() - self.weight = nn.Parameter(torch.ones(num_channels)) - self.bias = nn.Parameter(torch.zeros(num_channels)) - self.eps = eps - - def forward(self, x: torch.Tensor) -> torch.Tensor: - u = x.mean(1, keepdim=True) - s = (x - u).pow(2).mean(1, keepdim=True) - x = (x - u) / torch.sqrt(s + self.eps) - x = self.weight[:, None, None] * x + self.bias[:, None, None] - return x diff --git a/spaces/mrmocciai/rvc-models/infer_pack/models_onnx.py b/spaces/mrmocciai/rvc-models/infer_pack/models_onnx.py deleted file mode 100644 index 3cdae2f7f8591a1e43b1d8520baa37b7e9744d72..0000000000000000000000000000000000000000 --- a/spaces/mrmocciai/rvc-models/infer_pack/models_onnx.py +++ /dev/null @@ -1,849 +0,0 @@ -import math, pdb, os -from time import time as ttime -import torch -from torch import nn -from torch.nn import functional as F -from infer_pack import modules -from infer_pack import attentions -from infer_pack import commons -from infer_pack.commons import init_weights, get_padding -from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d -from torch.nn.utils import weight_norm, remove_weight_norm, spectral_norm -from infer_pack.commons import init_weights -import numpy as np -from infer_pack import commons - - -class TextEncoder256(nn.Module): - def __init__( - self, - out_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - f0=True, - ): - super().__init__() - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.emb_phone = nn.Linear(256, hidden_channels) - self.lrelu = nn.LeakyReLU(0.1, inplace=True) - if f0 == True: - self.emb_pitch = nn.Embedding(256, hidden_channels) # pitch 256 - self.encoder = attentions.Encoder( - hidden_channels, filter_channels, n_heads, n_layers, kernel_size, p_dropout - ) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, phone, pitch, lengths): - if pitch == None: - x = self.emb_phone(phone) - else: - x = self.emb_phone(phone) + self.emb_pitch(pitch) - x = x * math.sqrt(self.hidden_channels) # [b, t, h] - x = self.lrelu(x) - x = torch.transpose(x, 1, -1) # [b, h, t] - x_mask = torch.unsqueeze(commons.sequence_mask(lengths, x.size(2)), 1).to( - x.dtype - ) - x = self.encoder(x * x_mask, x_mask) - stats = self.proj(x) * x_mask - - m, logs = torch.split(stats, self.out_channels, dim=1) - return m, logs, x_mask - - -class TextEncoder256Sim(nn.Module): - def __init__( - self, - out_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - f0=True, - ): - super().__init__() - 
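`TextEncoder256` above fuses content and pitch by summing a linear phone embedding with a pitch-bin embedding and scaling by sqrt(hidden_channels) before the attention encoder. A minimal sketch of that fusion step, with illustrative sizes (hidden_channels=192 is an assumption, not a value taken from any config):

```python
import math
import torch
import torch.nn as nn

hidden = 192                                # assumed hidden_channels
emb_phone = nn.Linear(256, hidden)
emb_pitch = nn.Embedding(256, hidden)

phone = torch.randn(1, 50, 256)             # (B, T, 256) content features
pitch = torch.randint(0, 256, (1, 50))      # (B, T) coarse pitch bins
x = (emb_phone(phone) + emb_pitch(pitch)) * math.sqrt(hidden)   # (B, T, hidden)
```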
self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.emb_phone = nn.Linear(256, hidden_channels) - self.lrelu = nn.LeakyReLU(0.1, inplace=True) - if f0 == True: - self.emb_pitch = nn.Embedding(256, hidden_channels) # pitch 256 - self.encoder = attentions.Encoder( - hidden_channels, filter_channels, n_heads, n_layers, kernel_size, p_dropout - ) - self.proj = nn.Conv1d(hidden_channels, out_channels, 1) - - def forward(self, phone, pitch, lengths): - if pitch == None: - x = self.emb_phone(phone) - else: - x = self.emb_phone(phone) + self.emb_pitch(pitch) - x = x * math.sqrt(self.hidden_channels) # [b, t, h] - x = self.lrelu(x) - x = torch.transpose(x, 1, -1) # [b, h, t] - x_mask = torch.unsqueeze(commons.sequence_mask(lengths, x.size(2)), 1).to( - x.dtype - ) - x = self.encoder(x * x_mask, x_mask) - x = self.proj(x) * x_mask - return x, x_mask - - -class ResidualCouplingBlock(nn.Module): - def __init__( - self, - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - n_flows=4, - gin_channels=0, - ): - super().__init__() - self.channels = channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.n_flows = n_flows - self.gin_channels = gin_channels - - self.flows = nn.ModuleList() - for i in range(n_flows): - self.flows.append( - modules.ResidualCouplingLayer( - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=gin_channels, - mean_only=True, - ) - ) - self.flows.append(modules.Flip()) - - def forward(self, x, x_mask, g=None, reverse=False): - if not reverse: - for flow in self.flows: - x, _ = flow(x, x_mask, g=g, reverse=reverse) - else: - for flow in reversed(self.flows): - x = flow(x, x_mask, g=g, reverse=reverse) - return x - - def remove_weight_norm(self): - for i in range(self.n_flows): - self.flows[i * 2].remove_weight_norm() - - -class PosteriorEncoder(nn.Module): - def __init__( - self, - in_channels, - out_channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=0, - ): - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.gin_channels = gin_channels - - self.pre = nn.Conv1d(in_channels, hidden_channels, 1) - self.enc = modules.WN( - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=gin_channels, - ) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, x, x_lengths, g=None): - x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to( - x.dtype - ) - x = self.pre(x) * x_mask - x = self.enc(x, x_mask, g=g) - stats = self.proj(x) * x_mask - m, logs = torch.split(stats, self.out_channels, dim=1) - z = (m + torch.randn_like(m) * torch.exp(logs)) * x_mask - return z, m, logs, x_mask - - def remove_weight_norm(self): - self.enc.remove_weight_norm() - - -class Generator(torch.nn.Module): - def __init__( - self, - initial_channel, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels=0, - ): - super(Generator, self).__init__() - self.num_kernels = len(resblock_kernel_sizes) - 
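`PosteriorEncoder` above samples its latent with the usual reparameterisation trick: the WaveNet-style encoder predicts a per-frame mean and log-std, and a standard-normal draw is scaled and shifted by them (at inference the coupling flow is then run with `reverse=True` to invert it). A small sketch with illustrative shapes:

```python
import torch

# z ~ N(m, exp(logs)^2), sampled so gradients flow through m and logs.
# Shapes are illustrative: B=1, inter_channels=192, T=50.
m = torch.zeros(1, 192, 50)
logs = torch.zeros(1, 192, 50)
x_mask = torch.ones(1, 1, 50)
z = (m + torch.randn_like(m) * torch.exp(logs)) * x_mask
```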
self.num_upsamples = len(upsample_rates) - self.conv_pre = Conv1d( - initial_channel, upsample_initial_channel, 7, 1, padding=3 - ) - resblock = modules.ResBlock1 if resblock == "1" else modules.ResBlock2 - - self.ups = nn.ModuleList() - for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)): - self.ups.append( - weight_norm( - ConvTranspose1d( - upsample_initial_channel // (2**i), - upsample_initial_channel // (2 ** (i + 1)), - k, - u, - padding=(k - u) // 2, - ) - ) - ) - - self.resblocks = nn.ModuleList() - for i in range(len(self.ups)): - ch = upsample_initial_channel // (2 ** (i + 1)) - for j, (k, d) in enumerate( - zip(resblock_kernel_sizes, resblock_dilation_sizes) - ): - self.resblocks.append(resblock(ch, k, d)) - - self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False) - self.ups.apply(init_weights) - - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1) - - def forward(self, x, g=None): - x = self.conv_pre(x) - if g is not None: - x = x + self.cond(g) - - for i in range(self.num_upsamples): - x = F.leaky_relu(x, modules.LRELU_SLOPE) - x = self.ups[i](x) - xs = None - for j in range(self.num_kernels): - if xs is None: - xs = self.resblocks[i * self.num_kernels + j](x) - else: - xs += self.resblocks[i * self.num_kernels + j](x) - x = xs / self.num_kernels - x = F.leaky_relu(x) - x = self.conv_post(x) - x = torch.tanh(x) - - return x - - def remove_weight_norm(self): - for l in self.ups: - remove_weight_norm(l) - for l in self.resblocks: - l.remove_weight_norm() - - -class SineGen(torch.nn.Module): - """Definition of sine generator - SineGen(samp_rate, harmonic_num = 0, - sine_amp = 0.1, noise_std = 0.003, - voiced_threshold = 0, - flag_for_pulse=False) - samp_rate: sampling rate in Hz - harmonic_num: number of harmonic overtones (default 0) - sine_amp: amplitude of sine-wavefrom (default 0.1) - noise_std: std of Gaussian noise (default 0.003) - voiced_thoreshold: F0 threshold for U/V classification (default 0) - flag_for_pulse: this SinGen is used inside PulseGen (default False) - Note: when flag_for_pulse is True, the first time step of a voiced - segment is always sin(np.pi) or cos(0) - """ - - def __init__( - self, - samp_rate, - harmonic_num=0, - sine_amp=0.1, - noise_std=0.003, - voiced_threshold=0, - flag_for_pulse=False, - ): - super(SineGen, self).__init__() - self.sine_amp = sine_amp - self.noise_std = noise_std - self.harmonic_num = harmonic_num - self.dim = self.harmonic_num + 1 - self.sampling_rate = samp_rate - self.voiced_threshold = voiced_threshold - - def _f02uv(self, f0): - # generate uv signal - uv = torch.ones_like(f0) - uv = uv * (f0 > self.voiced_threshold) - return uv - - def forward(self, f0, upp): - """sine_tensor, uv = forward(f0) - input F0: tensor(batchsize=1, length, dim=1) - f0 for unvoiced steps should be 0 - output sine_tensor: tensor(batchsize=1, length, dim) - output uv: tensor(batchsize=1, length, 1) - """ - with torch.no_grad(): - f0 = f0[:, None].transpose(1, 2) - f0_buf = torch.zeros(f0.shape[0], f0.shape[1], self.dim, device=f0.device) - # fundamental component - f0_buf[:, :, 0] = f0[:, :, 0] - for idx in np.arange(self.harmonic_num): - f0_buf[:, :, idx + 1] = f0_buf[:, :, 0] * ( - idx + 2 - ) # idx + 2: the (idx+1)-th overtone, (idx+2)-th harmonic - rad_values = (f0_buf / self.sampling_rate) % 1 ###%1意味着n_har的乘积无法后处理优化 - rand_ini = torch.rand( - f0_buf.shape[0], f0_buf.shape[2], device=f0_buf.device - ) - rand_ini[:, 0] = 0 - rad_values[:, 0, :] = rad_values[:, 0, :] + rand_ini - 
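The block that follows in `SineGen.forward` turns the per-step normalised frequency into a waveform by accumulating phase with `cumsum` and taking its sine. A stripped-down sketch of the same idea, using an arbitrary constant 220 Hz contour:

```python
import math
import torch

sr = 40000                                  # assumed sampling rate
f0 = torch.full((1, 100, 1), 220.0)         # (B, T, 1) pitch contour, illustrative
rad = (f0 / sr) % 1                         # per-step phase increment, in cycles
phase = torch.cumsum(rad, dim=1)            # accumulated phase
sine = torch.sin(2 * math.pi * phase)       # fundamental; harmonic k uses k * f0
```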
tmp_over_one = torch.cumsum(rad_values, 1) # % 1 #####%1意味着后面的cumsum无法再优化 - tmp_over_one *= upp - tmp_over_one = F.interpolate( - tmp_over_one.transpose(2, 1), - scale_factor=upp, - mode="linear", - align_corners=True, - ).transpose(2, 1) - rad_values = F.interpolate( - rad_values.transpose(2, 1), scale_factor=upp, mode="nearest" - ).transpose( - 2, 1 - ) ####### - tmp_over_one %= 1 - tmp_over_one_idx = (tmp_over_one[:, 1:, :] - tmp_over_one[:, :-1, :]) < 0 - cumsum_shift = torch.zeros_like(rad_values) - cumsum_shift[:, 1:, :] = tmp_over_one_idx * -1.0 - sine_waves = torch.sin( - torch.cumsum(rad_values + cumsum_shift, dim=1) * 2 * np.pi - ) - sine_waves = sine_waves * self.sine_amp - uv = self._f02uv(f0) - uv = F.interpolate( - uv.transpose(2, 1), scale_factor=upp, mode="nearest" - ).transpose(2, 1) - noise_amp = uv * self.noise_std + (1 - uv) * self.sine_amp / 3 - noise = noise_amp * torch.randn_like(sine_waves) - sine_waves = sine_waves * uv + noise - return sine_waves, uv, noise - - -class SourceModuleHnNSF(torch.nn.Module): - """SourceModule for hn-nsf - SourceModule(sampling_rate, harmonic_num=0, sine_amp=0.1, - add_noise_std=0.003, voiced_threshod=0) - sampling_rate: sampling_rate in Hz - harmonic_num: number of harmonic above F0 (default: 0) - sine_amp: amplitude of sine source signal (default: 0.1) - add_noise_std: std of additive Gaussian noise (default: 0.003) - note that amplitude of noise in unvoiced is decided - by sine_amp - voiced_threshold: threhold to set U/V given F0 (default: 0) - Sine_source, noise_source = SourceModuleHnNSF(F0_sampled) - F0_sampled (batchsize, length, 1) - Sine_source (batchsize, length, 1) - noise_source (batchsize, length 1) - uv (batchsize, length, 1) - """ - - def __init__( - self, - sampling_rate, - harmonic_num=0, - sine_amp=0.1, - add_noise_std=0.003, - voiced_threshod=0, - is_half=True, - ): - super(SourceModuleHnNSF, self).__init__() - - self.sine_amp = sine_amp - self.noise_std = add_noise_std - self.is_half = is_half - # to produce sine waveforms - self.l_sin_gen = SineGen( - sampling_rate, harmonic_num, sine_amp, add_noise_std, voiced_threshod - ) - - # to merge source harmonics into a single excitation - self.l_linear = torch.nn.Linear(harmonic_num + 1, 1) - self.l_tanh = torch.nn.Tanh() - - def forward(self, x, upp=None): - sine_wavs, uv, _ = self.l_sin_gen(x, upp) - if self.is_half: - sine_wavs = sine_wavs.half() - sine_merge = self.l_tanh(self.l_linear(sine_wavs)) - return sine_merge, None, None # noise, uv - - -class GeneratorNSF(torch.nn.Module): - def __init__( - self, - initial_channel, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels, - sr, - is_half=False, - ): - super(GeneratorNSF, self).__init__() - self.num_kernels = len(resblock_kernel_sizes) - self.num_upsamples = len(upsample_rates) - - self.f0_upsamp = torch.nn.Upsample(scale_factor=np.prod(upsample_rates)) - self.m_source = SourceModuleHnNSF( - sampling_rate=sr, harmonic_num=0, is_half=is_half - ) - self.noise_convs = nn.ModuleList() - self.conv_pre = Conv1d( - initial_channel, upsample_initial_channel, 7, 1, padding=3 - ) - resblock = modules.ResBlock1 if resblock == "1" else modules.ResBlock2 - - self.ups = nn.ModuleList() - for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)): - c_cur = upsample_initial_channel // (2 ** (i + 1)) - self.ups.append( - weight_norm( - ConvTranspose1d( - upsample_initial_channel // (2**i), - upsample_initial_channel // (2 
** (i + 1)), - k, - u, - padding=(k - u) // 2, - ) - ) - ) - if i + 1 < len(upsample_rates): - stride_f0 = np.prod(upsample_rates[i + 1 :]) - self.noise_convs.append( - Conv1d( - 1, - c_cur, - kernel_size=stride_f0 * 2, - stride=stride_f0, - padding=stride_f0 // 2, - ) - ) - else: - self.noise_convs.append(Conv1d(1, c_cur, kernel_size=1)) - - self.resblocks = nn.ModuleList() - for i in range(len(self.ups)): - ch = upsample_initial_channel // (2 ** (i + 1)) - for j, (k, d) in enumerate( - zip(resblock_kernel_sizes, resblock_dilation_sizes) - ): - self.resblocks.append(resblock(ch, k, d)) - - self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False) - self.ups.apply(init_weights) - - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1) - - self.upp = np.prod(upsample_rates) - - def forward(self, x, f0, g=None): - har_source, noi_source, uv = self.m_source(f0, self.upp) - har_source = har_source.transpose(1, 2) - x = self.conv_pre(x) - if g is not None: - x = x + self.cond(g) - - for i in range(self.num_upsamples): - x = F.leaky_relu(x, modules.LRELU_SLOPE) - x = self.ups[i](x) - x_source = self.noise_convs[i](har_source) - x = x + x_source - xs = None - for j in range(self.num_kernels): - if xs is None: - xs = self.resblocks[i * self.num_kernels + j](x) - else: - xs += self.resblocks[i * self.num_kernels + j](x) - x = xs / self.num_kernels - x = F.leaky_relu(x) - x = self.conv_post(x) - x = torch.tanh(x) - return x - - def remove_weight_norm(self): - for l in self.ups: - remove_weight_norm(l) - for l in self.resblocks: - l.remove_weight_norm() - - -sr2sr = { - "32k": 32000, - "40k": 40000, - "48k": 48000, -} - - -class SynthesizerTrnMs256NSFsid(nn.Module): - def __init__( - self, - spec_channels, - segment_size, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - spk_embed_dim, - gin_channels, - sr, - **kwargs - ): - super().__init__() - if type(sr) == type("strr"): - sr = sr2sr[sr] - self.spec_channels = spec_channels - self.inter_channels = inter_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.resblock = resblock - self.resblock_kernel_sizes = resblock_kernel_sizes - self.resblock_dilation_sizes = resblock_dilation_sizes - self.upsample_rates = upsample_rates - self.upsample_initial_channel = upsample_initial_channel - self.upsample_kernel_sizes = upsample_kernel_sizes - self.segment_size = segment_size - self.gin_channels = gin_channels - # self.hop_length = hop_length# - self.spk_embed_dim = spk_embed_dim - self.enc_p = TextEncoder256( - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - ) - self.dec = GeneratorNSF( - inter_channels, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels=gin_channels, - sr=sr, - is_half=kwargs["is_half"], - ) - self.enc_q = PosteriorEncoder( - spec_channels, - inter_channels, - hidden_channels, - 5, - 1, - 16, - gin_channels=gin_channels, - ) - self.flow = ResidualCouplingBlock( - inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels - ) - self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels) - 
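`SourceModuleHnNSF` above reduces the stack of harmonic sines produced by `SineGen` to a single excitation channel via a learned linear mix followed by `tanh`. A minimal sketch with made-up shapes (the real module is built with `harmonic_num=0`, i.e. a 1-to-1 mix):

```python
import torch
import torch.nn as nn

harmonics = torch.randn(1, 400, 9)          # (B, samples, harmonic_num + 1), illustrative
merge = nn.Sequential(nn.Linear(9, 1), nn.Tanh())
excitation = merge(harmonics)               # (B, samples, 1), fed into GeneratorNSF
```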
print("gin_channels:", gin_channels, "self.spk_embed_dim:", self.spk_embed_dim) - - def remove_weight_norm(self): - self.dec.remove_weight_norm() - self.flow.remove_weight_norm() - self.enc_q.remove_weight_norm() - - def forward(self, phone, phone_lengths, pitch, nsff0, sid, rnd, max_len=None): - g = self.emb_g(sid).unsqueeze(-1) - m_p, logs_p, x_mask = self.enc_p(phone, pitch, phone_lengths) - z_p = (m_p + torch.exp(logs_p) * rnd) * x_mask - z = self.flow(z_p, x_mask, g=g, reverse=True) - o = self.dec((z * x_mask)[:, :, :max_len], nsff0, g=g) - return o - - -class SynthesizerTrnMs256NSFsid_sim(nn.Module): - """ - Synthesizer for Training - """ - - def __init__( - self, - spec_channels, - segment_size, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - spk_embed_dim, - # hop_length, - gin_channels=0, - use_sdp=True, - **kwargs - ): - super().__init__() - self.spec_channels = spec_channels - self.inter_channels = inter_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.resblock = resblock - self.resblock_kernel_sizes = resblock_kernel_sizes - self.resblock_dilation_sizes = resblock_dilation_sizes - self.upsample_rates = upsample_rates - self.upsample_initial_channel = upsample_initial_channel - self.upsample_kernel_sizes = upsample_kernel_sizes - self.segment_size = segment_size - self.gin_channels = gin_channels - # self.hop_length = hop_length# - self.spk_embed_dim = spk_embed_dim - self.enc_p = TextEncoder256Sim( - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - ) - self.dec = GeneratorNSF( - inter_channels, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels=gin_channels, - is_half=kwargs["is_half"], - ) - - self.flow = ResidualCouplingBlock( - inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels - ) - self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels) - print("gin_channels:", gin_channels, "self.spk_embed_dim:", self.spk_embed_dim) - - def remove_weight_norm(self): - self.dec.remove_weight_norm() - self.flow.remove_weight_norm() - self.enc_q.remove_weight_norm() - - def forward( - self, phone, phone_lengths, pitch, pitchf, ds, max_len=None - ): # y是spec不需要了现在 - g = self.emb_g(ds.unsqueeze(0)).unsqueeze(-1) # [b, 256, 1]##1是t,广播的 - x, x_mask = self.enc_p(phone, pitch, phone_lengths) - x = self.flow(x, x_mask, g=g, reverse=True) - o = self.dec((x * x_mask)[:, :, :max_len], pitchf, g=g) - return o - - -class MultiPeriodDiscriminator(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(MultiPeriodDiscriminator, self).__init__() - periods = [2, 3, 5, 7, 11, 17] - # periods = [3, 5, 7, 11, 17, 23, 37] - - discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)] - discs = discs + [ - DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods - ] - self.discriminators = nn.ModuleList(discs) - - def forward(self, y, y_hat): - y_d_rs = [] # - y_d_gs = [] - fmap_rs = [] - fmap_gs = [] - for i, d in enumerate(self.discriminators): - y_d_r, fmap_r = d(y) - y_d_g, fmap_g = d(y_hat) - # for j in range(len(fmap_r)): - # 
print(i,j,y.shape,y_hat.shape,fmap_r[j].shape,fmap_g[j].shape) - y_d_rs.append(y_d_r) - y_d_gs.append(y_d_g) - fmap_rs.append(fmap_r) - fmap_gs.append(fmap_g) - - return y_d_rs, y_d_gs, fmap_rs, fmap_gs - - -class DiscriminatorS(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(DiscriminatorS, self).__init__() - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList( - [ - norm_f(Conv1d(1, 16, 15, 1, padding=7)), - norm_f(Conv1d(16, 64, 41, 4, groups=4, padding=20)), - norm_f(Conv1d(64, 256, 41, 4, groups=16, padding=20)), - norm_f(Conv1d(256, 1024, 41, 4, groups=64, padding=20)), - norm_f(Conv1d(1024, 1024, 41, 4, groups=256, padding=20)), - norm_f(Conv1d(1024, 1024, 5, 1, padding=2)), - ] - ) - self.conv_post = norm_f(Conv1d(1024, 1, 3, 1, padding=1)) - - def forward(self, x): - fmap = [] - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap - - -class DiscriminatorP(torch.nn.Module): - def __init__(self, period, kernel_size=5, stride=3, use_spectral_norm=False): - super(DiscriminatorP, self).__init__() - self.period = period - self.use_spectral_norm = use_spectral_norm - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList( - [ - norm_f( - Conv2d( - 1, - 32, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(kernel_size, 1), 0), - ) - ), - norm_f( - Conv2d( - 32, - 128, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(kernel_size, 1), 0), - ) - ), - norm_f( - Conv2d( - 128, - 512, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(kernel_size, 1), 0), - ) - ), - norm_f( - Conv2d( - 512, - 1024, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(kernel_size, 1), 0), - ) - ), - norm_f( - Conv2d( - 1024, - 1024, - (kernel_size, 1), - 1, - padding=(get_padding(kernel_size, 1), 0), - ) - ), - ] - ) - self.conv_post = norm_f(Conv2d(1024, 1, (3, 1), 1, padding=(1, 0))) - - def forward(self, x): - fmap = [] - - # 1d to 2d - b, c, t = x.shape - if t % self.period != 0: # pad first - n_pad = self.period - (t % self.period) - x = F.pad(x, (0, n_pad), "reflect") - t = t + n_pad - x = x.view(b, c, t // self.period, self.period) - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap diff --git a/spaces/mshkdm/VToonify/vtoonify/model/encoder/encoders/psp_encoders.py b/spaces/mshkdm/VToonify/vtoonify/model/encoder/encoders/psp_encoders.py deleted file mode 100644 index f69d38200b6be4997673ae38ed481fd21f88b419..0000000000000000000000000000000000000000 --- a/spaces/mshkdm/VToonify/vtoonify/model/encoder/encoders/psp_encoders.py +++ /dev/null @@ -1,186 +0,0 @@ -import numpy as np -import torch -import torch.nn.functional as F -from torch import nn -from torch.nn import Linear, Conv2d, BatchNorm2d, PReLU, Sequential, Module - -from model.encoder.encoders.helpers import get_blocks, Flatten, bottleneck_IR, bottleneck_IR_SE -from model.stylegan.model import EqualLinear - - -class GradualStyleBlock(Module): - def __init__(self, in_c, out_c, spatial): - super(GradualStyleBlock, self).__init__() - self.out_c = out_c - self.spatial = spatial - num_pools = int(np.log2(spatial)) - modules = [] - modules += [Conv2d(in_c, out_c, kernel_size=3, stride=2, padding=1), - nn.LeakyReLU()] - for i in range(num_pools - 1): - 
modules += [ - Conv2d(out_c, out_c, kernel_size=3, stride=2, padding=1), - nn.LeakyReLU() - ] - self.convs = nn.Sequential(*modules) - self.linear = EqualLinear(out_c, out_c, lr_mul=1) - - def forward(self, x): - x = self.convs(x) - x = x.view(-1, self.out_c) - x = self.linear(x) - return x - - -class GradualStyleEncoder(Module): - def __init__(self, num_layers, mode='ir', opts=None): - super(GradualStyleEncoder, self).__init__() - assert num_layers in [50, 100, 152], 'num_layers should be 50,100, or 152' - assert mode in ['ir', 'ir_se'], 'mode should be ir or ir_se' - blocks = get_blocks(num_layers) - if mode == 'ir': - unit_module = bottleneck_IR - elif mode == 'ir_se': - unit_module = bottleneck_IR_SE - self.input_layer = Sequential(Conv2d(opts.input_nc, 64, (3, 3), 1, 1, bias=False), - BatchNorm2d(64), - PReLU(64)) - modules = [] - for block in blocks: - for bottleneck in block: - modules.append(unit_module(bottleneck.in_channel, - bottleneck.depth, - bottleneck.stride)) - self.body = Sequential(*modules) - - self.styles = nn.ModuleList() - self.style_count = opts.n_styles - self.coarse_ind = 3 - self.middle_ind = 7 - for i in range(self.style_count): - if i < self.coarse_ind: - style = GradualStyleBlock(512, 512, 16) - elif i < self.middle_ind: - style = GradualStyleBlock(512, 512, 32) - else: - style = GradualStyleBlock(512, 512, 64) - self.styles.append(style) - self.latlayer1 = nn.Conv2d(256, 512, kernel_size=1, stride=1, padding=0) - self.latlayer2 = nn.Conv2d(128, 512, kernel_size=1, stride=1, padding=0) - - def _upsample_add(self, x, y): - '''Upsample and add two feature maps. - Args: - x: (Variable) top feature map to be upsampled. - y: (Variable) lateral feature map. - Returns: - (Variable) added feature map. - Note in PyTorch, when input size is odd, the upsampled feature map - with `F.upsample(..., scale_factor=2, mode='nearest')` - maybe not equal to the lateral feature map size. - e.g. - original input size: [N,_,15,15] -> - conv2d feature map size: [N,_,8,8] -> - upsampled feature map size: [N,_,16,16] - So we choose bilinear upsample which supports arbitrary output sizes. 
- ''' - _, _, H, W = y.size() - return F.interpolate(x, size=(H, W), mode='bilinear', align_corners=True) + y - - def forward(self, x): - x = self.input_layer(x) - - latents = [] - modulelist = list(self.body._modules.values()) - for i, l in enumerate(modulelist): - x = l(x) - if i == 6: - c1 = x - elif i == 20: - c2 = x - elif i == 23: - c3 = x - - for j in range(self.coarse_ind): - latents.append(self.styles[j](c3)) - - p2 = self._upsample_add(c3, self.latlayer1(c2)) - for j in range(self.coarse_ind, self.middle_ind): - latents.append(self.styles[j](p2)) - - p1 = self._upsample_add(p2, self.latlayer2(c1)) - for j in range(self.middle_ind, self.style_count): - latents.append(self.styles[j](p1)) - - out = torch.stack(latents, dim=1) - return out - - -class BackboneEncoderUsingLastLayerIntoW(Module): - def __init__(self, num_layers, mode='ir', opts=None): - super(BackboneEncoderUsingLastLayerIntoW, self).__init__() - print('Using BackboneEncoderUsingLastLayerIntoW') - assert num_layers in [50, 100, 152], 'num_layers should be 50,100, or 152' - assert mode in ['ir', 'ir_se'], 'mode should be ir or ir_se' - blocks = get_blocks(num_layers) - if mode == 'ir': - unit_module = bottleneck_IR - elif mode == 'ir_se': - unit_module = bottleneck_IR_SE - self.input_layer = Sequential(Conv2d(opts.input_nc, 64, (3, 3), 1, 1, bias=False), - BatchNorm2d(64), - PReLU(64)) - self.output_pool = torch.nn.AdaptiveAvgPool2d((1, 1)) - self.linear = EqualLinear(512, 512, lr_mul=1) - modules = [] - for block in blocks: - for bottleneck in block: - modules.append(unit_module(bottleneck.in_channel, - bottleneck.depth, - bottleneck.stride)) - self.body = Sequential(*modules) - - def forward(self, x): - x = self.input_layer(x) - x = self.body(x) - x = self.output_pool(x) - x = x.view(-1, 512) - x = self.linear(x) - return x - - -class BackboneEncoderUsingLastLayerIntoWPlus(Module): - def __init__(self, num_layers, mode='ir', opts=None): - super(BackboneEncoderUsingLastLayerIntoWPlus, self).__init__() - print('Using BackboneEncoderUsingLastLayerIntoWPlus') - assert num_layers in [50, 100, 152], 'num_layers should be 50,100, or 152' - assert mode in ['ir', 'ir_se'], 'mode should be ir or ir_se' - blocks = get_blocks(num_layers) - if mode == 'ir': - unit_module = bottleneck_IR - elif mode == 'ir_se': - unit_module = bottleneck_IR_SE - self.n_styles = opts.n_styles - self.input_layer = Sequential(Conv2d(opts.input_nc, 64, (3, 3), 1, 1, bias=False), - BatchNorm2d(64), - PReLU(64)) - self.output_layer_2 = Sequential(BatchNorm2d(512), - torch.nn.AdaptiveAvgPool2d((7, 7)), - Flatten(), - Linear(512 * 7 * 7, 512)) - self.linear = EqualLinear(512, 512 * self.n_styles, lr_mul=1) - modules = [] - for block in blocks: - for bottleneck in block: - modules.append(unit_module(bottleneck.in_channel, - bottleneck.depth, - bottleneck.stride)) - self.body = Sequential(*modules) - - def forward(self, x): - x = self.input_layer(x) - x = self.body(x) - x = self.output_layer_2(x) - x = self.linear(x) - x = x.view(-1, self.n_styles, 512) - return x diff --git a/spaces/mshukor/UnIVAL/data/file_dataset.py b/spaces/mshukor/UnIVAL/data/file_dataset.py deleted file mode 100644 index 785e3abc951ee1c5346f6799daeab46682d2569a..0000000000000000000000000000000000000000 --- a/spaces/mshukor/UnIVAL/data/file_dataset.py +++ /dev/null @@ -1,113 +0,0 @@ -# Copyright 2022 The OFA-Sys Team. -# All rights reserved. -# This source code is licensed under the Apache 2.0 license -# found in the LICENSE file in the root directory. 
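The `_upsample_add` helper in the pSp encoder above performs the standard FPN fusion: bilinearly resize the deeper feature map to the lateral map's spatial size (bilinear handles odd sizes, as its docstring notes) and add them elementwise. A self-contained sketch with illustrative shapes:

```python
import torch
import torch.nn.functional as F

c3 = torch.randn(1, 512, 16, 16)    # deeper, coarser feature map
c2 = torch.randn(1, 512, 32, 32)    # lateral feature map
p2 = F.interpolate(c3, size=c2.shape[-2:], mode="bilinear", align_corners=True) + c2
```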
- -import os -import torch -import pickle - - -class FileDataset: - def __init__(self, file_path, selected_col_ids=None, dtypes=None, separator="\t", cached_index=False): - self.file_path = file_path - assert os.path.exists(self.file_path), "Error: The local datafile {} not exists!".format(self.file_path) - - self.separator = separator - if selected_col_ids is None: - # default to all fields - self.selected_col_ids = list( - range(len(open(self.file_path).readline().rstrip("\n").split(self.separator)))) - else: - self.selected_col_ids = [int(col_id) for col_id in selected_col_ids.split(",")] - if dtypes is None: - # default to str - self.dtypes = [str for col_id in self.selected_col_ids] - else: - self.dtypes = [eval(col_dtype) for col_dtype in dtypes.split(",")] - assert len(self.dtypes) == len(self.selected_col_ids) - - self.data_cnt = 0 - try: - self.slice_id = torch.distributed.get_rank() - self.slice_count = torch.distributed.get_world_size() - except Exception: - self.slice_id = 0 - self.slice_count = 1 - self.cached_index = cached_index - self._init_seek_index() - self._reader = self._get_reader() - print("file {} slice_id {} row count {} total row count {}".format( - self.file_path, self.slice_id, self.row_count, self.total_row_count) - ) - - def _init_seek_index(self): - if self.cached_index: - cache_path = "{}.index".format(self.file_path) - assert os.path.exists(cache_path), "cache file {} not exists!".format(cache_path) - self.total_row_count, self.lineid_to_offset = pickle.load(open(cache_path, "rb")) - print("local datafile {} slice_id {} use cached row_count and line_idx-to-offset mapping".format( - self.file_path, self.slice_id)) - else: - # make an iteration over the file to get row_count and line_idx-to-offset mapping - fp = open(self.file_path, "rb") - print("local datafile {} slice_id {} begin to initialize row_count and line_idx-to-offset mapping".format( - self.file_path, self.slice_id)) - self.total_row_count = 0 - offset = 0 - self.lineid_to_offset = [] - for line in fp: - self.lineid_to_offset.append(offset) - self.total_row_count += 1 - # offset += len(line.encode('utf-8')) - offset += len(line) #fp.tell() #len(line) - self._compute_start_pos_and_row_count() - print("local datafile {} slice_id {} finished initializing row_count and line_idx-to-offset mapping".format( - self.file_path, self.slice_id)) - - def _compute_start_pos_and_row_count(self): - self.row_count = self.total_row_count // self.slice_count - if self.slice_id < self.total_row_count - self.row_count * self.slice_count: - self.row_count += 1 - self.start_pos = self.row_count * self.slice_id - else: - self.start_pos = self.row_count * self.slice_id + (self.total_row_count - self.row_count * self.slice_count) - - def _get_reader(self): - fp = open(self.file_path, "r") - fp.seek(self.lineid_to_offset[self.start_pos]) - return fp - - def _seek(self, offset=0): - try: - print("slice_id {} seek offset {}".format(self.slice_id, self.start_pos + offset)) - self._reader.seek(self.lineid_to_offset[self.start_pos + offset]) - self.data_cnt = offset - except Exception: - print("slice_id {} seek offset {}".format(self.slice_id, offset)) - self._reader.seek(self.lineid_to_offset[offset]) - self.data_cnt = offset - - def __del__(self): - self._reader.close() - - def __len__(self): - return self.row_count - - def get_total_row_count(self): - return self.total_row_count - - def __getitem__(self, index): - if self.data_cnt == self.row_count: - print("reach the end of datafile, start a new reader") - self.data_cnt = 0 
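A hypothetical usage of the `FileDataset` class above; the TSV path and column choices are placeholders. Column ids and dtypes are passed as comma-separated strings, and each distributed rank iterates only over its own slice of rows thanks to the line-offset index built in `_init_seek_index`:

```python
# "train.tsv" is a placeholder; columns 0 and 2 are read as str and int respectively.
dataset = FileDataset("train.tsv", selected_col_ids="0,2", dtypes="str,int")
print(len(dataset), dataset.get_total_row_count())
row = dataset[0]        # the index is ignored; rows are read sequentially per rank
uniq_id, label = row    # [str, int] for the two selected columns
```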
- self._reader = self._get_reader() - column_l = self._reader.readline().rstrip("\n").split(self.separator) - - self.data_cnt += 1 - # try: - column_l = [dtype(column_l[col_id]) for col_id, dtype in zip(self.selected_col_ids, self.dtypes)] - # except: - # print(column_l, self.data_cnt, self.start_pos, self.slice_id) - # print(self._reader.readline().rstrip("\n").split(self.separator)) - return column_l diff --git a/spaces/mshukor/UnIVAL/fairseq/examples/wav2vec/unsupervised/scripts/pca.py b/spaces/mshukor/UnIVAL/fairseq/examples/wav2vec/unsupervised/scripts/pca.py deleted file mode 100644 index 948cf5319fd86ba1bccff65270b2881048faf9b1..0000000000000000000000000000000000000000 --- a/spaces/mshukor/UnIVAL/fairseq/examples/wav2vec/unsupervised/scripts/pca.py +++ /dev/null @@ -1,53 +0,0 @@ -#!/usr/bin/env python3 -u -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import argparse -import os -import os.path as osp -import numpy as np - -import faiss - - - -def get_parser(): - parser = argparse.ArgumentParser( - description="compute a pca matrix given an array of numpy features" - ) - # fmt: off - parser.add_argument('data', help='numpy file containing features') - parser.add_argument('--output', help='where to save the pca matrix', required=True) - parser.add_argument('--dim', type=int, help='dim for pca reduction', required=True) - parser.add_argument('--eigen-power', type=float, default=0, help='eigen power, -0.5 for whitening') - - return parser - - -def main(): - parser = get_parser() - args = parser.parse_args() - - print("Reading features") - x = np.load(args.data, mmap_mode="r") - - print("Computing PCA") - pca = faiss.PCAMatrix(x.shape[-1], args.dim, args.eigen_power) - pca.train(x) - b = faiss.vector_to_array(pca.b) - A = faiss.vector_to_array(pca.A).reshape(pca.d_out, pca.d_in) - - os.makedirs(args.output, exist_ok=True) - - prefix = str(args.dim) - if args.eigen_power != 0: - prefix += f"_{args.eigen_power}" - - np.save(osp.join(args.output, f"{prefix}_pca_A"), A.T) - np.save(osp.join(args.output, f"{prefix}_pca_b"), b) - - -if __name__ == "__main__": - main() diff --git a/spaces/mthsk/sovits-models-misc/cluster/train_cluster.py b/spaces/mthsk/sovits-models-misc/cluster/train_cluster.py deleted file mode 100644 index 4ac025d400414226e66849407f477ae786c3d5d3..0000000000000000000000000000000000000000 --- a/spaces/mthsk/sovits-models-misc/cluster/train_cluster.py +++ /dev/null @@ -1,89 +0,0 @@ -import os -from glob import glob -from pathlib import Path -import torch -import logging -import argparse -import torch -import numpy as np -from sklearn.cluster import KMeans, MiniBatchKMeans -import tqdm -logging.basicConfig(level=logging.INFO) -logger = logging.getLogger(__name__) -import time -import random - -def train_cluster(in_dir, n_clusters, use_minibatch=True, verbose=False): - - logger.info(f"Loading features from {in_dir}") - features = [] - nums = 0 - for path in tqdm.tqdm(in_dir.glob("*.soft.pt")): - features.append(torch.load(path).squeeze(0).numpy().T) - # print(features[-1].shape) - features = np.concatenate(features, axis=0) - print(nums, features.nbytes/ 1024**2, "MB , shape:",features.shape, features.dtype) - features = features.astype(np.float32) - logger.info(f"Clustering features of shape: {features.shape}") - t = time.time() - if use_minibatch: - kmeans = MiniBatchKMeans(n_clusters=n_clusters,verbose=verbose, batch_size=4096, 
max_iter=80).fit(features) - else: - kmeans = KMeans(n_clusters=n_clusters,verbose=verbose).fit(features) - print(time.time()-t, "s") - - x = { - "n_features_in_": kmeans.n_features_in_, - "_n_threads": kmeans._n_threads, - "cluster_centers_": kmeans.cluster_centers_, - } - print("end") - - return x - - -if __name__ == "__main__": - - parser = argparse.ArgumentParser() - parser.add_argument('--dataset', type=Path, default="./dataset/44k", - help='path of training data directory') - parser.add_argument('--output', type=Path, default="logs/44k", - help='path of model output directory') - - args = parser.parse_args() - - checkpoint_dir = args.output - dataset = args.dataset - n_clusters = 10000 - - ckpt = {} - for spk in os.listdir(dataset): - if os.path.isdir(dataset/spk): - print(f"train kmeans for {spk}...") - in_dir = dataset/spk - x = train_cluster(in_dir, n_clusters, verbose=False) - ckpt[spk] = x - - checkpoint_path = checkpoint_dir / f"kmeans_{n_clusters}.pt" - checkpoint_path.parent.mkdir(exist_ok=True, parents=True) - torch.save( - ckpt, - checkpoint_path, - ) - - - # import cluster - # for spk in tqdm.tqdm(os.listdir("dataset")): - # if os.path.isdir(f"dataset/{spk}"): - # print(f"start kmeans inference for {spk}...") - # for feature_path in tqdm.tqdm(glob(f"dataset/{spk}/*.discrete.npy", recursive=True)): - # mel_path = feature_path.replace(".discrete.npy",".mel.npy") - # mel_spectrogram = np.load(mel_path) - # feature_len = mel_spectrogram.shape[-1] - # c = np.load(feature_path) - # c = utils.tools.repeat_expand_2d(torch.FloatTensor(c), feature_len).numpy() - # feature = c.T - # feature_class = cluster.get_cluster_result(feature, spk) - # np.save(feature_path.replace(".discrete.npy", ".discrete_class.npy"), feature_class) - - diff --git a/spaces/mygyasir/genious_bgremover/carvekit/ml/arch/tracerb7/effi_utils.py b/spaces/mygyasir/genious_bgremover/carvekit/ml/arch/tracerb7/effi_utils.py deleted file mode 100644 index b578ca258f4d9301483320a6db019953f78ed4cc..0000000000000000000000000000000000000000 --- a/spaces/mygyasir/genious_bgremover/carvekit/ml/arch/tracerb7/effi_utils.py +++ /dev/null @@ -1,579 +0,0 @@ -""" -Original author: lukemelas (github username) -Github repo: https://github.com/lukemelas/EfficientNet-PyTorch -With adjustments and added comments by workingcoder (github username). 
-License: Apache License 2.0 -Reimplemented: Min Seok Lee and Wooseok Shin -""" - -import collections -import re -from functools import partial - -import math -import torch -from torch import nn -from torch.nn import functional as F - -# Parameters for the entire model (stem, all blocks, and head) -GlobalParams = collections.namedtuple( - "GlobalParams", - [ - "width_coefficient", - "depth_coefficient", - "image_size", - "dropout_rate", - "num_classes", - "batch_norm_momentum", - "batch_norm_epsilon", - "drop_connect_rate", - "depth_divisor", - "min_depth", - "include_top", - ], -) - -# Parameters for an individual model block -BlockArgs = collections.namedtuple( - "BlockArgs", - [ - "num_repeat", - "kernel_size", - "stride", - "expand_ratio", - "input_filters", - "output_filters", - "se_ratio", - "id_skip", - ], -) - -# Set GlobalParams and BlockArgs's defaults -GlobalParams.__new__.__defaults__ = (None,) * len(GlobalParams._fields) -BlockArgs.__new__.__defaults__ = (None,) * len(BlockArgs._fields) - - -# An ordinary implementation of Swish function -class Swish(nn.Module): - def forward(self, x): - return x * torch.sigmoid(x) - - -# A memory-efficient implementation of Swish function -class SwishImplementation(torch.autograd.Function): - @staticmethod - def forward(ctx, i): - result = i * torch.sigmoid(i) - ctx.save_for_backward(i) - return result - - @staticmethod - def backward(ctx, grad_output): - i = ctx.saved_tensors[0] - sigmoid_i = torch.sigmoid(i) - return grad_output * (sigmoid_i * (1 + i * (1 - sigmoid_i))) - - -class MemoryEfficientSwish(nn.Module): - def forward(self, x): - return SwishImplementation.apply(x) - - -def round_filters(filters, global_params): - """Calculate and round number of filters based on width multiplier. - Use width_coefficient, depth_divisor and min_depth of global_params. - - Args: - filters (int): Filters number to be calculated. - global_params (namedtuple): Global params of the model. - - Returns: - new_filters: New filters number after calculating. - """ - multiplier = global_params.width_coefficient - if not multiplier: - return filters - divisor = global_params.depth_divisor - min_depth = global_params.min_depth - filters *= multiplier - min_depth = min_depth or divisor # pay attention to this line when using min_depth - # follow the formula transferred from official TensorFlow implementation - new_filters = max(min_depth, int(filters + divisor / 2) // divisor * divisor) - if new_filters < 0.9 * filters: # prevent rounding by more than 10% - new_filters += divisor - return int(new_filters) - - -def round_repeats(repeats, global_params): - """Calculate module's repeat number of a block based on depth multiplier. - Use depth_coefficient of global_params. - - Args: - repeats (int): num_repeat to be calculated. - global_params (namedtuple): Global params of the model. - - Returns: - new repeat: New repeat number after calculating. - """ - multiplier = global_params.depth_coefficient - if not multiplier: - return repeats - # follow the formula transferred from official TensorFlow implementation - return int(math.ceil(multiplier * repeats)) - - -def drop_connect(inputs, p, training): - """Drop connect. - - Args: - input (tensor: BCWH): Input of this structure. - p (float: 0.0~1.0): Probability of drop connection. - training (bool): The running mode. - - Returns: - output: Output after drop connection. 
- """ - assert 0 <= p <= 1, "p must be in range of [0,1]" - - if not training: - return inputs - - batch_size = inputs.shape[0] - keep_prob = 1 - p - - # generate binary_tensor mask according to probability (p for 0, 1-p for 1) - random_tensor = keep_prob - random_tensor += torch.rand( - [batch_size, 1, 1, 1], dtype=inputs.dtype, device=inputs.device - ) - binary_tensor = torch.floor(random_tensor) - - output = inputs / keep_prob * binary_tensor - return output - - -def get_width_and_height_from_size(x): - """Obtain height and width from x. - - Args: - x (int, tuple or list): Data size. - - Returns: - size: A tuple or list (H,W). - """ - if isinstance(x, int): - return x, x - if isinstance(x, list) or isinstance(x, tuple): - return x - else: - raise TypeError() - - -def calculate_output_image_size(input_image_size, stride): - """Calculates the output image size when using Conv2dSamePadding with a stride. - Necessary for static padding. Thanks to mannatsingh for pointing this out. - - Args: - input_image_size (int, tuple or list): Size of input image. - stride (int, tuple or list): Conv2d operation's stride. - - Returns: - output_image_size: A list [H,W]. - """ - if input_image_size is None: - return None - image_height, image_width = get_width_and_height_from_size(input_image_size) - stride = stride if isinstance(stride, int) else stride[0] - image_height = int(math.ceil(image_height / stride)) - image_width = int(math.ceil(image_width / stride)) - return [image_height, image_width] - - -# Note: -# The following 'SamePadding' functions make output size equal ceil(input size/stride). -# Only when stride equals 1, can the output size be the same as input size. -# Don't be confused by their function names ! ! ! - - -def get_same_padding_conv2d(image_size=None): - """Chooses static padding if you have specified an image size, and dynamic padding otherwise. - Static padding is necessary for ONNX exporting of models. - - Args: - image_size (int or tuple): Size of the image. - - Returns: - Conv2dDynamicSamePadding or Conv2dStaticSamePadding. - """ - if image_size is None: - return Conv2dDynamicSamePadding - else: - return partial(Conv2dStaticSamePadding, image_size=image_size) - - -class Conv2dDynamicSamePadding(nn.Conv2d): - """2D Convolutions like TensorFlow, for a dynamic image size. - The padding is operated in forward function by calculating dynamically. - """ - - # Tips for 'SAME' mode padding. - # Given the following: - # i: width or height - # s: stride - # k: kernel size - # d: dilation - # p: padding - # Output after Conv2d: - # o = floor((i+p-((k-1)*d+1))/s+1) - # If o equals i, i = floor((i+p-((k-1)*d+1))/s+1), - # => p = (i-1)*s+((k-1)*d+1)-i - - def __init__( - self, - in_channels, - out_channels, - kernel_size, - stride=1, - dilation=1, - groups=1, - bias=True, - ): - super().__init__( - in_channels, out_channels, kernel_size, stride, 0, dilation, groups, bias - ) - self.stride = self.stride if len(self.stride) == 2 else [self.stride[0]] * 2 - - def forward(self, x): - ih, iw = x.size()[-2:] - kh, kw = self.weight.size()[-2:] - sh, sw = self.stride - oh, ow = math.ceil(ih / sh), math.ceil( - iw / sw - ) # change the output size according to stride ! ! ! 
- pad_h = max((oh - 1) * self.stride[0] + (kh - 1) * self.dilation[0] + 1 - ih, 0) - pad_w = max((ow - 1) * self.stride[1] + (kw - 1) * self.dilation[1] + 1 - iw, 0) - if pad_h > 0 or pad_w > 0: - x = F.pad( - x, [pad_w // 2, pad_w - pad_w // 2, pad_h // 2, pad_h - pad_h // 2] - ) - return F.conv2d( - x, - self.weight, - self.bias, - self.stride, - self.padding, - self.dilation, - self.groups, - ) - - -class Conv2dStaticSamePadding(nn.Conv2d): - """2D Convolutions like TensorFlow's 'SAME' mode, with the given input image size. - The padding mudule is calculated in construction function, then used in forward. - """ - - # With the same calculation as Conv2dDynamicSamePadding - - def __init__( - self, - in_channels, - out_channels, - kernel_size, - stride=1, - image_size=None, - **kwargs - ): - super().__init__(in_channels, out_channels, kernel_size, stride, **kwargs) - self.stride = self.stride if len(self.stride) == 2 else [self.stride[0]] * 2 - - # Calculate padding based on image size and save it - assert image_size is not None - ih, iw = (image_size, image_size) if isinstance(image_size, int) else image_size - kh, kw = self.weight.size()[-2:] - sh, sw = self.stride - oh, ow = math.ceil(ih / sh), math.ceil(iw / sw) - pad_h = max((oh - 1) * self.stride[0] + (kh - 1) * self.dilation[0] + 1 - ih, 0) - pad_w = max((ow - 1) * self.stride[1] + (kw - 1) * self.dilation[1] + 1 - iw, 0) - if pad_h > 0 or pad_w > 0: - self.static_padding = nn.ZeroPad2d( - (pad_w // 2, pad_w - pad_w // 2, pad_h // 2, pad_h - pad_h // 2) - ) - else: - self.static_padding = nn.Identity() - - def forward(self, x): - x = self.static_padding(x) - x = F.conv2d( - x, - self.weight, - self.bias, - self.stride, - self.padding, - self.dilation, - self.groups, - ) - return x - - -def get_same_padding_maxPool2d(image_size=None): - """Chooses static padding if you have specified an image size, and dynamic padding otherwise. - Static padding is necessary for ONNX exporting of models. - - Args: - image_size (int or tuple): Size of the image. - - Returns: - MaxPool2dDynamicSamePadding or MaxPool2dStaticSamePadding. - """ - if image_size is None: - return MaxPool2dDynamicSamePadding - else: - return partial(MaxPool2dStaticSamePadding, image_size=image_size) - - -class MaxPool2dDynamicSamePadding(nn.MaxPool2d): - """2D MaxPooling like TensorFlow's 'SAME' mode, with a dynamic image size. - The padding is operated in forward function by calculating dynamically. 
- """ - - def __init__( - self, - kernel_size, - stride, - padding=0, - dilation=1, - return_indices=False, - ceil_mode=False, - ): - super().__init__( - kernel_size, stride, padding, dilation, return_indices, ceil_mode - ) - self.stride = [self.stride] * 2 if isinstance(self.stride, int) else self.stride - self.kernel_size = ( - [self.kernel_size] * 2 - if isinstance(self.kernel_size, int) - else self.kernel_size - ) - self.dilation = ( - [self.dilation] * 2 if isinstance(self.dilation, int) else self.dilation - ) - - def forward(self, x): - ih, iw = x.size()[-2:] - kh, kw = self.kernel_size - sh, sw = self.stride - oh, ow = math.ceil(ih / sh), math.ceil(iw / sw) - pad_h = max((oh - 1) * self.stride[0] + (kh - 1) * self.dilation[0] + 1 - ih, 0) - pad_w = max((ow - 1) * self.stride[1] + (kw - 1) * self.dilation[1] + 1 - iw, 0) - if pad_h > 0 or pad_w > 0: - x = F.pad( - x, [pad_w // 2, pad_w - pad_w // 2, pad_h // 2, pad_h - pad_h // 2] - ) - return F.max_pool2d( - x, - self.kernel_size, - self.stride, - self.padding, - self.dilation, - self.ceil_mode, - self.return_indices, - ) - - -class MaxPool2dStaticSamePadding(nn.MaxPool2d): - """2D MaxPooling like TensorFlow's 'SAME' mode, with the given input image size. - The padding mudule is calculated in construction function, then used in forward. - """ - - def __init__(self, kernel_size, stride, image_size=None, **kwargs): - super().__init__(kernel_size, stride, **kwargs) - self.stride = [self.stride] * 2 if isinstance(self.stride, int) else self.stride - self.kernel_size = ( - [self.kernel_size] * 2 - if isinstance(self.kernel_size, int) - else self.kernel_size - ) - self.dilation = ( - [self.dilation] * 2 if isinstance(self.dilation, int) else self.dilation - ) - - # Calculate padding based on image size and save it - assert image_size is not None - ih, iw = (image_size, image_size) if isinstance(image_size, int) else image_size - kh, kw = self.kernel_size - sh, sw = self.stride - oh, ow = math.ceil(ih / sh), math.ceil(iw / sw) - pad_h = max((oh - 1) * self.stride[0] + (kh - 1) * self.dilation[0] + 1 - ih, 0) - pad_w = max((ow - 1) * self.stride[1] + (kw - 1) * self.dilation[1] + 1 - iw, 0) - if pad_h > 0 or pad_w > 0: - self.static_padding = nn.ZeroPad2d( - (pad_w // 2, pad_w - pad_w // 2, pad_h // 2, pad_h - pad_h // 2) - ) - else: - self.static_padding = nn.Identity() - - def forward(self, x): - x = self.static_padding(x) - x = F.max_pool2d( - x, - self.kernel_size, - self.stride, - self.padding, - self.dilation, - self.ceil_mode, - self.return_indices, - ) - return x - - -class BlockDecoder(object): - """Block Decoder for readability, - straight from the official TensorFlow repository. - """ - - @staticmethod - def _decode_block_string(block_string): - """Get a block through a string notation of arguments. - - Args: - block_string (str): A string notation of arguments. - Examples: 'r1_k3_s11_e1_i32_o16_se0.25_noskip'. - - Returns: - BlockArgs: The namedtuple defined at the top of this file. 
- """ - assert isinstance(block_string, str) - - ops = block_string.split("_") - options = {} - for op in ops: - splits = re.split(r"(\d.*)", op) - if len(splits) >= 2: - key, value = splits[:2] - options[key] = value - - # Check stride - assert ("s" in options and len(options["s"]) == 1) or ( - len(options["s"]) == 2 and options["s"][0] == options["s"][1] - ) - - return BlockArgs( - num_repeat=int(options["r"]), - kernel_size=int(options["k"]), - stride=[int(options["s"][0])], - expand_ratio=int(options["e"]), - input_filters=int(options["i"]), - output_filters=int(options["o"]), - se_ratio=float(options["se"]) if "se" in options else None, - id_skip=("noskip" not in block_string), - ) - - @staticmethod - def _encode_block_string(block): - """Encode a block to a string. - - Args: - block (namedtuple): A BlockArgs type argument. - - Returns: - block_string: A String form of BlockArgs. - """ - args = [ - "r%d" % block.num_repeat, - "k%d" % block.kernel_size, - "s%d%d" % (block.strides[0], block.strides[1]), - "e%s" % block.expand_ratio, - "i%d" % block.input_filters, - "o%d" % block.output_filters, - ] - if 0 < block.se_ratio <= 1: - args.append("se%s" % block.se_ratio) - if block.id_skip is False: - args.append("noskip") - return "_".join(args) - - @staticmethod - def decode(string_list): - """Decode a list of string notations to specify blocks inside the network. - - Args: - string_list (list[str]): A list of strings, each string is a notation of block. - - Returns: - blocks_args: A list of BlockArgs namedtuples of block args. - """ - assert isinstance(string_list, list) - blocks_args = [] - for block_string in string_list: - blocks_args.append(BlockDecoder._decode_block_string(block_string)) - return blocks_args - - @staticmethod - def encode(blocks_args): - """Encode a list of BlockArgs to a list of strings. - - Args: - blocks_args (list[namedtuples]): A list of BlockArgs namedtuples of block args. - - Returns: - block_strings: A list of strings, each string is a notation of block. - """ - block_strings = [] - for block in blocks_args: - block_strings.append(BlockDecoder._encode_block_string(block)) - return block_strings - - -def create_block_args( - width_coefficient=None, - depth_coefficient=None, - image_size=None, - dropout_rate=0.2, - drop_connect_rate=0.2, - num_classes=1000, - include_top=True, -): - """Create BlockArgs and GlobalParams for efficientnet model. - - Args: - width_coefficient (float) - depth_coefficient (float) - image_size (int) - dropout_rate (float) - drop_connect_rate (float) - num_classes (int) - - Meaning as the name suggests. - - Returns: - blocks_args, global_params. 
- """ - - # Blocks args for the whole model(efficientnet-b0 by default) - # It will be modified in the construction of EfficientNet Class according to model - blocks_args = [ - "r1_k3_s11_e1_i32_o16_se0.25", - "r2_k3_s22_e6_i16_o24_se0.25", - "r2_k5_s22_e6_i24_o40_se0.25", - "r3_k3_s22_e6_i40_o80_se0.25", - "r3_k5_s11_e6_i80_o112_se0.25", - "r4_k5_s22_e6_i112_o192_se0.25", - "r1_k3_s11_e6_i192_o320_se0.25", - ] - blocks_args = BlockDecoder.decode(blocks_args) - - global_params = GlobalParams( - width_coefficient=width_coefficient, - depth_coefficient=depth_coefficient, - image_size=image_size, - dropout_rate=dropout_rate, - num_classes=num_classes, - batch_norm_momentum=0.99, - batch_norm_epsilon=1e-3, - drop_connect_rate=drop_connect_rate, - depth_divisor=8, - min_depth=None, - include_top=include_top, - ) - - return blocks_args, global_params diff --git a/spaces/nakas/musika_api/models.py b/spaces/nakas/musika_api/models.py deleted file mode 100644 index 254bf152c1cd7032a9fad0b77d7977ff9ec65686..0000000000000000000000000000000000000000 --- a/spaces/nakas/musika_api/models.py +++ /dev/null @@ -1,783 +0,0 @@ -import numpy as np -import tensorflow as tf -from tensorflow.python.keras.utils.layer_utils import count_params - -from layers import AddNoise - - -class Models_functions: - def __init__(self, args): - - self.args = args - - if self.args.mixed_precision: - self.mixed_precision = tf.keras.mixed_precision - self.policy = tf.keras.mixed_precision.Policy("mixed_float16") - tf.keras.mixed_precision.set_global_policy(self.policy) - self.init = tf.keras.initializers.he_uniform() - - def conv_util( - self, inp, filters, kernel_size=(1, 3), strides=(1, 1), noise=False, upsample=False, padding="same", bnorm=True - ): - - x = inp - - bias = True - if bnorm: - bias = False - - if upsample: - x = tf.keras.layers.Conv2DTranspose( - filters, - kernel_size=kernel_size, - strides=strides, - activation="linear", - padding=padding, - kernel_initializer=self.init, - use_bias=bias, - )(x) - else: - x = tf.keras.layers.Conv2D( - filters, - kernel_size=kernel_size, - strides=strides, - activation="linear", - padding=padding, - kernel_initializer=self.init, - use_bias=bias, - )(x) - - if noise: - x = AddNoise(self.args.datatype)(x) - - if bnorm: - x = tf.keras.layers.BatchNormalization()(x) - - x = tf.keras.activations.swish(x) - - return x - - def pixel_shuffle(self, x, factor=2): - bs_dim, h_dim, w_dim, c_dim = tf.shape(x)[0], x.shape[1], x.shape[2], x.shape[3] - x = tf.reshape(x, [bs_dim, h_dim, w_dim, c_dim // factor, factor]) - x = tf.transpose(x, [0, 1, 2, 4, 3]) - return tf.reshape(x, [bs_dim, h_dim, w_dim * factor, c_dim // factor]) - - def adain(self, x, emb, name): - emb = tf.keras.layers.Conv2D( - x.shape[-1], - kernel_size=(1, 1), - strides=1, - activation="linear", - padding="same", - kernel_initializer=self.init, - use_bias=True, - name=name, - )(emb) - x = x / (tf.math.reduce_std(x, -2, keepdims=True) + 1e-5) - return x * emb - - def conv_util_gen( - self, - inp, - filters, - kernel_size=(1, 9), - strides=(1, 1), - noise=False, - upsample=False, - emb=None, - se1=None, - name="0", - ): - - x = inp - - if upsample: - x = tf.keras.layers.Conv2DTranspose( - filters, - kernel_size=kernel_size, - strides=strides, - activation="linear", - padding="same", - kernel_initializer=self.init, - use_bias=True, - name=name + "c", - )(x) - else: - x = tf.keras.layers.Conv2D( - filters, - kernel_size=kernel_size, - strides=strides, - activation="linear", - padding="same", - kernel_initializer=self.init, - 
use_bias=True, - name=name + "c", - )(x) - - if noise: - x = AddNoise(self.args.datatype, name=name + "r")(x) - - if emb is not None: - x = self.adain(x, emb, name=name + "ai") - else: - x = tf.keras.layers.BatchNormalization(name=name + "bn")(x) - - x = tf.keras.activations.swish(x) - - return x - - def res_block_disc(self, inp, filters, kernel_size=(1, 3), kernel_size_2=None, strides=(1, 1), name="0"): - - if kernel_size_2 is None: - kernel_size_2 = kernel_size - - x = tf.keras.layers.Conv2D( - inp.shape[-1], - kernel_size=kernel_size_2, - strides=1, - activation="linear", - padding="same", - kernel_initializer=self.init, - name=name + "c0", - )(inp) - x = tf.keras.layers.LeakyReLU(0.2)(x) - x = tf.math.sqrt(tf.cast(0.5, self.args.datatype)) * x - x = tf.keras.layers.Conv2D( - filters, - kernel_size=kernel_size, - strides=strides, - activation="linear", - padding="same", - kernel_initializer=self.init, - name=name + "c1", - )(x) - x = tf.keras.layers.LeakyReLU(0.2)(x) - x = tf.math.sqrt(tf.cast(0.5, self.args.datatype)) * x - - if strides != (1, 1): - inp = tf.keras.layers.AveragePooling2D(strides, padding="same")(inp) - - if inp.shape[-1] != filters: - inp = tf.keras.layers.Conv2D( - filters, - kernel_size=1, - strides=1, - activation="linear", - padding="same", - kernel_initializer=self.init, - use_bias=False, - name=name + "c3", - )(inp) - - return x + inp - - def build_encoder2(self): - - inpf = tf.keras.layers.Input((1, self.args.shape, self.args.hop // 4)) - - inpfls = tf.split(inpf, 8, -2) - inpb = tf.concat(inpfls, 0) - - g0 = self.conv_util(inpb, self.args.hop, kernel_size=(1, 3), strides=(1, 1), padding="same", bnorm=False) - g1 = self.conv_util( - g0, self.args.hop + self.args.hop // 2, kernel_size=(1, 3), strides=(1, 2), padding="valid", bnorm=False - ) - g2 = self.conv_util( - g1, self.args.hop + self.args.hop // 2, kernel_size=(1, 3), strides=(1, 1), padding="same", bnorm=False - ) - g3 = self.conv_util(g2, self.args.hop * 2, kernel_size=(1, 3), strides=(1, 2), padding="valid", bnorm=False) - g4 = self.conv_util(g3, self.args.hop * 2, kernel_size=(1, 3), strides=(1, 1), padding="same", bnorm=False) - g5 = self.conv_util(g4, self.args.hop * 3, kernel_size=(1, 3), strides=(1, 1), padding="valid", bnorm=False) - g5 = self.conv_util(g5, self.args.hop * 3, kernel_size=(1, 1), strides=(1, 1), padding="valid", bnorm=False) - - g = tf.keras.layers.Conv2D( - self.args.latdepth, - kernel_size=(1, 1), - strides=1, - padding="valid", - kernel_initializer=self.init, - name="cbottle", - activation="tanh", - )(g5) - - gls = tf.split(g, 8, 0) - g = tf.concat(gls, -2) - gls = tf.split(g, 2, -2) - g = tf.concat(gls, 0) - - gf = tf.cast(g, tf.float32) - - return tf.keras.Model(inpf, gf, name="ENC2") - - def build_decoder2(self): - - inpf = tf.keras.layers.Input((1, self.args.shape // 32, self.args.latdepth)) - - g = inpf - - g = self.conv_util( - g, self.args.hop * 3, kernel_size=(1, 3), strides=(1, 1), upsample=False, noise=True, bnorm=False - ) - g = self.conv_util( - g, - self.args.hop * 2 + self.args.hop // 2, - kernel_size=(1, 4), - strides=(1, 2), - upsample=True, - noise=True, - bnorm=False, - ) - g = self.conv_util( - g, - self.args.hop * 2 + self.args.hop // 2, - kernel_size=(1, 3), - strides=(1, 1), - upsample=False, - noise=True, - bnorm=False, - ) - g = self.conv_util( - g, self.args.hop * 2, kernel_size=(1, 4), strides=(1, 2), upsample=True, noise=True, bnorm=False - ) - g = self.conv_util( - g, self.args.hop * 2, kernel_size=(1, 3), strides=(1, 1), upsample=False, noise=True, 
bnorm=False - ) - g = self.conv_util( - g, - self.args.hop + self.args.hop // 2, - kernel_size=(1, 4), - strides=(1, 2), - upsample=True, - noise=True, - bnorm=False, - ) - g = self.conv_util(g, self.args.hop, kernel_size=(1, 4), strides=(1, 2), upsample=True, noise=True, bnorm=False) - - gf = tf.keras.layers.Conv2D( - self.args.hop // 4, kernel_size=(1, 1), strides=1, padding="same", kernel_initializer=self.init, name="cout" - )(g) - - gfls = tf.split(gf, 2, 0) - gf = tf.concat(gfls, -2) - - gf = tf.cast(gf, tf.float32) - - return tf.keras.Model(inpf, gf, name="DEC2") - - def build_encoder(self): - - dim = ((4 * self.args.hop) // 2) + 1 - - inpf = tf.keras.layers.Input((dim, self.args.shape, 1)) - - ginp = tf.transpose(inpf, [0, 3, 2, 1]) - - g0 = self.conv_util(ginp, self.args.hop * 4, kernel_size=(1, 1), strides=(1, 1), padding="valid", bnorm=False) - g1 = self.conv_util(g0, self.args.hop * 4, kernel_size=(1, 1), strides=(1, 1), padding="valid", bnorm=False) - g2 = self.conv_util(g1, self.args.hop * 4, kernel_size=(1, 1), strides=(1, 1), padding="valid", bnorm=False) - g4 = self.conv_util(g2, self.args.hop * 4, kernel_size=(1, 1), strides=(1, 1), padding="valid", bnorm=False) - g5 = self.conv_util(g4, self.args.hop * 4, kernel_size=(1, 1), strides=(1, 1), padding="valid", bnorm=False) - - g = tf.keras.layers.Conv2D( - self.args.hop // 4, kernel_size=(1, 1), strides=1, padding="valid", kernel_initializer=self.init - )(g5) - - g = tf.keras.activations.tanh(g) - - gls = tf.split(g, 2, -2) - g = tf.concat(gls, 0) - - gf = tf.cast(g, tf.float32) - - return tf.keras.Model(inpf, gf, name="ENC") - - def build_decoder(self): - - dim = ((4 * self.args.hop) // 2) + 1 - - inpf = tf.keras.layers.Input((1, self.args.shape // 2, self.args.hop // 4)) - - g = inpf - - g0 = self.conv_util(g, self.args.hop * 3, kernel_size=(1, 3), strides=(1, 1), noise=True, bnorm=False) - g1 = self.conv_util(g0, self.args.hop * 3, kernel_size=(1, 3), strides=(1, 2), noise=True, bnorm=False) - g2 = self.conv_util(g1, self.args.hop * 2, kernel_size=(1, 3), strides=(1, 2), noise=True, bnorm=False) - g3 = self.conv_util(g2, self.args.hop, kernel_size=(1, 3), strides=(1, 2), noise=True, bnorm=False) - g = self.conv_util(g3, self.args.hop, kernel_size=(1, 3), strides=(1, 2), noise=True, bnorm=False) - - g33 = self.conv_util( - g, self.args.hop, kernel_size=(1, 4), strides=(1, 2), upsample=True, noise=True, bnorm=False - ) - g22 = self.conv_util( - g3, self.args.hop * 2, kernel_size=(1, 4), strides=(1, 2), upsample=True, noise=True, bnorm=False - ) - g11 = self.conv_util( - g22 + g2, self.args.hop * 3, kernel_size=(1, 4), strides=(1, 2), upsample=True, noise=True, bnorm=False - ) - g00 = self.conv_util( - g11 + g1, self.args.hop * 3, kernel_size=(1, 4), strides=(1, 2), upsample=True, noise=True, bnorm=False - ) - - g = tf.keras.layers.Conv2D( - dim, kernel_size=(1, 1), strides=(1, 1), kernel_initializer=self.init, padding="same" - )(g00 + g0) - gf = tf.clip_by_value(g, -1.0, 1.0) - - g = self.conv_util( - g22, self.args.hop * 3, kernel_size=(1, 4), strides=(1, 2), upsample=True, noise=True, bnorm=False - ) - g = self.conv_util( - g + g11, self.args.hop * 3, kernel_size=(1, 4), strides=(1, 2), upsample=True, noise=True, bnorm=False - ) - g = tf.keras.layers.Conv2D( - dim, kernel_size=(1, 1), strides=(1, 1), kernel_initializer=self.init, padding="same" - )(g + g00) - pf = tf.clip_by_value(g, -1.0, 1.0) - - gfls = tf.split(gf, self.args.shape // self.args.window, 0) - gf = tf.concat(gfls, -2) - - pfls = tf.split(pf, 
self.args.shape // self.args.window, 0) - pf = tf.concat(pfls, -2) - - s = tf.transpose(gf, [0, 2, 3, 1]) - p = tf.transpose(pf, [0, 2, 3, 1]) - - s = tf.cast(tf.squeeze(s, -1), tf.float32) - p = tf.cast(tf.squeeze(p, -1), tf.float32) - - return tf.keras.Model(inpf, [s, p], name="DEC") - - def build_critic(self): - - sinp = tf.keras.layers.Input(shape=(1, self.args.latlen, self.args.latdepth * 2)) - - sf = tf.keras.layers.Conv2D( - self.args.base_channels * 3, - kernel_size=(1, 4), - strides=(1, 2), - activation="linear", - padding="same", - kernel_initializer=self.init, - name="1c", - )(sinp) - sf = tf.keras.layers.LeakyReLU(0.2)(sf) - - sf = self.res_block_disc(sf, self.args.base_channels * 4, kernel_size=(1, 4), strides=(1, 2), name="2") - - sf = self.res_block_disc(sf, self.args.base_channels * 5, kernel_size=(1, 4), strides=(1, 2), name="3") - - sf = self.res_block_disc(sf, self.args.base_channels * 6, kernel_size=(1, 4), strides=(1, 2), name="4") - - sf = self.res_block_disc(sf, self.args.base_channels * 7, kernel_size=(1, 4), strides=(1, 2), name="5") - - if not self.args.small: - sf = self.res_block_disc( - sf, self.args.base_channels * 7, kernel_size=(1, 4), strides=(1, 2), kernel_size_2=(1, 1), name="6" - ) - - sf = tf.keras.layers.Conv2D( - self.args.base_channels * 7, - kernel_size=(1, 3), - strides=(1, 1), - activation="linear", - padding="same", - kernel_initializer=self.init, - name="7c", - )(sf) - sf = tf.keras.layers.LeakyReLU(0.2)(sf) - - gf = tf.keras.layers.Dense(1, activation="linear", use_bias=True, kernel_initializer=self.init, name="7d")( - tf.keras.layers.Flatten()(sf) - ) - - gf = tf.cast(gf, tf.float32) - - return tf.keras.Model(sinp, gf, name="C") - - def build_generator(self): - - dim = self.args.latdepth * 2 - - inpf = tf.keras.layers.Input((self.args.latlen, self.args.latdepth * 2)) - - inpfls = tf.split(inpf, 2, -2) - inpb = tf.concat(inpfls, 0) - - inpg = tf.reduce_mean(inpb, -2) - inp1 = tf.keras.layers.AveragePooling2D((1, 2), padding="valid")(tf.expand_dims(inpb, -3)) - inp2 = tf.keras.layers.AveragePooling2D((1, 2), padding="valid")(inp1) - inp3 = tf.keras.layers.AveragePooling2D((1, 2), padding="valid")(inp2) - inp4 = tf.keras.layers.AveragePooling2D((1, 2), padding="valid")(inp3) - inp5 = tf.keras.layers.AveragePooling2D((1, 2), padding="valid")(inp4) - if not self.args.small: - inp6 = tf.keras.layers.AveragePooling2D((1, 2), padding="valid")(inp5) - - if not self.args.small: - g = tf.keras.layers.Dense( - 4 * (self.args.base_channels * 7), - activation="linear", - use_bias=True, - kernel_initializer=self.init, - name="00d", - )(tf.keras.layers.Flatten()(inp6)) - g = tf.keras.layers.Reshape((1, 4, self.args.base_channels * 7))(g) - g = AddNoise(self.args.datatype, name="00n")(g) - g = self.adain(g, inp5, name="00ai") - g = tf.keras.activations.swish(g) - else: - g = tf.keras.layers.Dense( - 4 * (self.args.base_channels * 7), - activation="linear", - use_bias=True, - kernel_initializer=self.init, - name="00d", - )(tf.keras.layers.Flatten()(inp5)) - g = tf.keras.layers.Reshape((1, 4, self.args.base_channels * 7))(g) - g = AddNoise(self.args.datatype, name="00n")(g) - g = self.adain(g, inp4, name="00ai") - g = tf.keras.activations.swish(g) - - if not self.args.small: - g1 = self.conv_util_gen( - g, - self.args.base_channels * 6, - kernel_size=(1, 4), - strides=(1, 2), - upsample=True, - noise=True, - emb=inp4, - name="0", - ) - g1 = tf.math.sqrt(tf.cast(0.5, self.args.datatype)) * g1 - g1 = self.conv_util_gen( - g1, - self.args.base_channels * 6, - 
kernel_size=(1, 4), - strides=(1, 1), - upsample=False, - noise=True, - emb=inp4, - name="1", - ) - g1 = tf.math.sqrt(tf.cast(0.5, self.args.datatype)) * g1 - g1 = g1 + tf.keras.layers.Conv2D( - g1.shape[-1], - kernel_size=(1, 1), - strides=1, - activation="linear", - padding="same", - kernel_initializer=self.init, - use_bias=True, - name="res1c", - )(self.pixel_shuffle(g)) - else: - g1 = self.conv_util_gen( - g, - self.args.base_channels * 6, - kernel_size=(1, 1), - strides=(1, 1), - upsample=False, - noise=True, - emb=inp4, - name="0_small", - ) - g1 = tf.math.sqrt(tf.cast(0.5, self.args.datatype)) * g1 - g1 = self.conv_util_gen( - g1, - self.args.base_channels * 6, - kernel_size=(1, 1), - strides=(1, 1), - upsample=False, - noise=True, - emb=inp4, - name="1_small", - ) - g1 = tf.math.sqrt(tf.cast(0.5, self.args.datatype)) * g1 - g1 = g1 + tf.keras.layers.Conv2D( - g1.shape[-1], - kernel_size=(1, 1), - strides=1, - activation="linear", - padding="same", - kernel_initializer=self.init, - use_bias=True, - name="res1c_small", - )(g) - - g2 = self.conv_util_gen( - g1, - self.args.base_channels * 5, - kernel_size=(1, 4), - strides=(1, 2), - upsample=True, - noise=True, - emb=inp3, - name="2", - ) - g2 = tf.math.sqrt(tf.cast(0.5, self.args.datatype)) * g2 - g2 = self.conv_util_gen( - g2, - self.args.base_channels * 5, - kernel_size=(1, 4), - strides=(1, 1), - upsample=False, - noise=True, - emb=inp3, - name="3", - ) - g2 = tf.math.sqrt(tf.cast(0.5, self.args.datatype)) * g2 - g2 = g2 + tf.keras.layers.Conv2D( - g2.shape[-1], - kernel_size=(1, 1), - strides=1, - activation="linear", - padding="same", - kernel_initializer=self.init, - use_bias=True, - name="res2c", - )(self.pixel_shuffle(g1)) - - g3 = self.conv_util_gen( - g2, - self.args.base_channels * 4, - kernel_size=(1, 4), - strides=(1, 2), - upsample=True, - noise=True, - emb=inp2, - name="4", - ) - g3 = tf.math.sqrt(tf.cast(0.5, self.args.datatype)) * g3 - g3 = self.conv_util_gen( - g3, - self.args.base_channels * 4, - kernel_size=(1, 4), - strides=(1, 1), - upsample=False, - noise=True, - emb=inp2, - name="5", - ) - g3 = tf.math.sqrt(tf.cast(0.5, self.args.datatype)) * g3 - g3 = g3 + tf.keras.layers.Conv2D( - g3.shape[-1], - kernel_size=(1, 1), - strides=1, - activation="linear", - padding="same", - kernel_initializer=self.init, - use_bias=True, - name="res3c", - )(self.pixel_shuffle(g2)) - - g4 = self.conv_util_gen( - g3, - self.args.base_channels * 3, - kernel_size=(1, 4), - strides=(1, 2), - upsample=True, - noise=True, - emb=inp1, - name="6", - ) - g4 = tf.math.sqrt(tf.cast(0.5, self.args.datatype)) * g4 - g4 = self.conv_util_gen( - g4, - self.args.base_channels * 3, - kernel_size=(1, 4), - strides=(1, 1), - upsample=False, - noise=True, - emb=inp1, - name="7", - ) - g4 = tf.math.sqrt(tf.cast(0.5, self.args.datatype)) * g4 - g4 = g4 + tf.keras.layers.Conv2D( - g4.shape[-1], - kernel_size=(1, 1), - strides=1, - activation="linear", - padding="same", - kernel_initializer=self.init, - use_bias=True, - name="res4c", - )(self.pixel_shuffle(g3)) - - g5 = self.conv_util_gen( - g4, - self.args.base_channels * 2, - kernel_size=(1, 4), - strides=(1, 2), - upsample=True, - noise=True, - emb=tf.expand_dims(tf.cast(inpb, dtype=self.args.datatype), -3), - name="8", - ) - - gf = tf.keras.layers.Conv2D( - dim, kernel_size=(1, 4), strides=(1, 1), kernel_initializer=self.init, padding="same", name="9c" - )(g5) - - gfls = tf.split(gf, 2, 0) - gf = tf.concat(gfls, -2) - - gf = tf.cast(gf, tf.float32) - - return tf.keras.Model(inpf, gf, name="GEN") - - 
# Load past models from path to resume training or test - def load(self, path, load_dec=False): - gen = self.build_generator() - critic = self.build_critic() - enc = self.build_encoder() - dec = self.build_decoder() - enc2 = self.build_encoder2() - dec2 = self.build_decoder2() - gen_ema = self.build_generator() - - switch = tf.Variable(-1.0, dtype=tf.float32) - - if self.args.mixed_precision: - opt_disc = self.mixed_precision.LossScaleOptimizer(tf.keras.optimizers.Adam(0.0001, 0.5)) - opt_dec = self.mixed_precision.LossScaleOptimizer(tf.keras.optimizers.Adam(0.0001, 0.5)) - else: - opt_disc = tf.keras.optimizers.Adam(0.0001, 0.9) - opt_dec = tf.keras.optimizers.Adam(0.0001, 0.9) - - if load_dec: - dec.load_weights(self.args.dec_path + "/dec.h5") - dec2.load_weights(self.args.dec_path + "/dec2.h5") - enc.load_weights(self.args.dec_path + "/enc.h5") - enc2.load_weights(self.args.dec_path + "/enc2.h5") - - else: - grad_vars = critic.trainable_weights - zero_grads = [tf.zeros_like(w) for w in grad_vars] - opt_disc.apply_gradients(zip(zero_grads, grad_vars)) - - grad_vars = gen.trainable_variables - zero_grads = [tf.zeros_like(w) for w in grad_vars] - opt_dec.apply_gradients(zip(zero_grads, grad_vars)) - - if not self.args.testing: - opt_disc.set_weights(np.load(path + "/opt_disc.npy", allow_pickle=True)) - opt_dec.set_weights(np.load(path + "/opt_dec.npy", allow_pickle=True)) - critic.load_weights(path + "/critic.h5") - gen.load_weights(path + "/gen.h5") - switch = tf.Variable(float(np.load(path + "/switch.npy", allow_pickle=True)), dtype=tf.float32) - - gen_ema.load_weights(path + "/gen_ema.h5") - dec.load_weights(self.args.dec_path + "/dec.h5") - dec2.load_weights(self.args.dec_path + "/dec2.h5") - enc.load_weights(self.args.dec_path + "/enc.h5") - enc2.load_weights(self.args.dec_path + "/enc2.h5") - - return ( - critic, - gen, - enc, - dec, - enc2, - dec2, - gen_ema, - [opt_dec, opt_disc], - switch, - ) - - def build(self): - gen = self.build_generator() - critic = self.build_critic() - enc = self.build_encoder() - dec = self.build_decoder() - enc2 = self.build_encoder2() - dec2 = self.build_decoder2() - gen_ema = self.build_generator() - - switch = tf.Variable(-1.0, dtype=tf.float32) - - gen_ema = tf.keras.models.clone_model(gen) - gen_ema.set_weights(gen.get_weights()) - - if self.args.mixed_precision: - opt_disc = self.mixed_precision.LossScaleOptimizer(tf.keras.optimizers.Adam(0.0001, 0.5)) - opt_dec = self.mixed_precision.LossScaleOptimizer(tf.keras.optimizers.Adam(0.0001, 0.5)) - else: - opt_disc = tf.keras.optimizers.Adam(0.0001, 0.5) - opt_dec = tf.keras.optimizers.Adam(0.0001, 0.5) - - return ( - critic, - gen, - enc, - dec, - enc2, - dec2, - gen_ema, - [opt_dec, opt_disc], - switch, - ) - - def get_networks(self): - ( - critic, - gen, - enc, - dec, - enc2, - dec2, - gen_ema_1, - [opt_dec, opt_disc], - switch, - ) = self.load(self.args.load_path_1, load_dec=False) - print(f"Networks loaded from {self.args.load_path_1}") - - ( - critic, - gen, - enc, - dec, - enc2, - dec2, - gen_ema_2, - [opt_dec, opt_disc], - switch, - ) = self.load(self.args.load_path_2, load_dec=False) - print(f"Networks loaded from {self.args.load_path_2}") - - ( - critic, - gen, - enc, - dec, - enc2, - dec2, - gen_ema_3, - [opt_dec, opt_disc], - switch, - ) = self.load(self.args.load_path_3, load_dec=False) - print(f"Networks loaded from {self.args.load_path_3}") - - return ( - (critic, gen, enc, dec, enc2, dec2, gen_ema_1, [opt_dec, opt_disc], switch), - (critic, gen, enc, dec, enc2, dec2, gen_ema_2, [opt_dec, 
opt_disc], switch), - (critic, gen, enc, dec, enc2, dec2, gen_ema_3, [opt_dec, opt_disc], switch), - ) - - def initialize_networks(self): - - ( - (critic, gen, enc, dec, enc2, dec2, gen_ema_1, [opt_dec, opt_disc], switch), - (critic, gen, enc, dec, enc2, dec2, gen_ema_2, [opt_dec, opt_disc], switch), - (critic, gen, enc, dec, enc2, dec2, gen_ema_3, [opt_dec, opt_disc], switch), - ) = self.get_networks() - - print(f"Critic params: {count_params(critic.trainable_variables)}") - print(f"Generator params: {count_params(gen.trainable_variables)}") - - return ( - (critic, gen, enc, dec, enc2, dec2, gen_ema_1, [opt_dec, opt_disc], switch), - (critic, gen, enc, dec, enc2, dec2, gen_ema_2, [opt_dec, opt_disc], switch), - (critic, gen, enc, dec, enc2, dec2, gen_ema_3, [opt_dec, opt_disc], switch), - ) diff --git a/spaces/naveed92/web_qa/README.md b/spaces/naveed92/web_qa/README.md deleted file mode 100644 index 4aeff0348f02ff693d9a4f02d49442d9c11455d2..0000000000000000000000000000000000000000 --- a/spaces/naveed92/web_qa/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Web Qa -emoji: 📉 -colorFrom: indigo -colorTo: indigo -sdk: streamlit -sdk_version: 1.21.0 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Queen 2014 Movie Download VERIFIED Kickass 720p 12.md b/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Queen 2014 Movie Download VERIFIED Kickass 720p 12.md deleted file mode 100644 index b9539e17a25ef6d2306810aae9f4430e798e2dec..0000000000000000000000000000000000000000 --- a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Queen 2014 Movie Download VERIFIED Kickass 720p 12.md +++ /dev/null @@ -1,19 +0,0 @@ - -

        Queen 2014 Movie: A Hilarious and Heartwarming Journey of Self-Discovery

        -

        Queen is a 2014 Hindi-language comedy-drama film that stars Kangana Ranaut as Rani, a shy and sheltered young woman who decides to go on her solo honeymoon to Paris and Amsterdam after her fiancé (Rajkummar Rao) calls off their wedding. Along the way, she meets new friends, explores new cultures, and learns to live life on her own terms.

        -

        The film is directed by Vikas Bahl and produced by Anurag Kashyap, Vikramaditya Motwane, and Madhu Mantena. It features a catchy soundtrack by Amit Trivedi and witty dialogues by Anvita Dutt Guptan and Ranaut herself. The film also stars Lisa Haydon as Vijayalakshmi, a free-spirited single mother who becomes Rani's confidante in Paris.

        -

        Queen 2014 Movie Download Kickass 720p 12


        DOWNLOAD ○○○ https://urlcod.com/2uIb1p



        -

        Queen received rave reviews from critics and audiences alike for its refreshing and empowering portrayal of a female protagonist. The film won several awards, including the National Film Award for Best Actress for Ranaut and the Filmfare Award for Best Film. It also became one of the highest-grossing Indian films featuring a female lead.

        -

        Queen is a must-watch for anyone who loves comedy, drama, and travel. It will make you laugh, cry, and cheer for Rani as she discovers herself and the world. You can watch Queen on Netflix or download it from Kickass in 720p quality.

        - -

        Queen is not just a comedy, but also a drama that explores the themes of identity, culture, and feminism. Rani's journey is not only geographical, but also emotional and psychological. She learns to overcome her insecurities, prejudices, and fears, and to embrace her true self. She also challenges the stereotypes and expectations that society has imposed on her as a woman and as an Indian.

        -

        The film also showcases the beauty and diversity of Europe, especially Paris and Amsterdam. The cinematography by Bobby Singh and Siddharth Diwan captures the stunning landscapes, architecture, and culture of these cities. The film also features some memorable scenes and locations, such as the Eiffel Tower, the Moulin Rouge, the Red Light District, and the canal cruise.

        -


        If you are looking for a film that will make you laugh, cry, and cheer for Rani as she discovers herself and the world, Queen is the perfect choice for you.

        - -

        Queen is also a showcase for its talented cast and crew. Kangana Ranaut delivers a stellar performance as Rani, bringing out her vulnerability, innocence, and charm. She also improvised some of her dialogues and is credited as an additional dialogue writer. Rajkummar Rao plays Vijay, Rani's selfish and arrogant ex-fiancé, who realizes his mistake too late. Lisa Haydon is a scene-stealer as Vijayalakshmi, Rani's outgoing and adventurous friend in Paris.

        -

        The film is directed by Vikas Bahl, who co-wrote the script with Chaitally Parmar and Parveez Shaikh. Bahl is known for his realistic and relatable films, such as Chillar Party and Super 30. He won the National Film Award for Best Feature Film in Hindi for Queen. The film also features a catchy soundtrack by Amit Trivedi, who composed songs in different genres and languages to suit the mood and setting of the film. The lyrics are written by Anvita Dutt Guptan, who also wrote the dialogues.

        -

        Queen is a film that celebrates the spirit of womanhood and the joy of travel. It is a film that inspires you to follow your dreams and to be yourself. It is a film that you will not forget easily.

        cec2833e83
        -
        -
        \ No newline at end of file diff --git a/spaces/nightfury/SD-InPainting/clipseg/general_utils.py b/spaces/nightfury/SD-InPainting/clipseg/general_utils.py deleted file mode 100644 index 708d32e701a78f3ce848060baef561c8f11b1b2e..0000000000000000000000000000000000000000 --- a/spaces/nightfury/SD-InPainting/clipseg/general_utils.py +++ /dev/null @@ -1,272 +0,0 @@ -import json -import inspect -import torch -import os -import sys -import yaml -from shutil import copy, copytree -from os.path import join, dirname, realpath, expanduser, isfile, isdir, basename - - -class Logger(object): - - def __getattr__(self, k): - return print - -log = Logger() - -def training_config_from_cli_args(): - experiment_name = sys.argv[1] - experiment_id = int(sys.argv[2]) - - yaml_config = yaml.load(open(f'experiments/{experiment_name}'), Loader=yaml.SafeLoader) - - config = yaml_config['configuration'] - config = {**config, **yaml_config['individual_configurations'][experiment_id]} - config = AttributeDict(config) - return config - - -def score_config_from_cli_args(): - experiment_name = sys.argv[1] - experiment_id = int(sys.argv[2]) - - - yaml_config = yaml.load(open(f'experiments/{experiment_name}'), Loader=yaml.SafeLoader) - - config = yaml_config['test_configuration_common'] - - if type(yaml_config['test_configuration']) == list: - test_id = int(sys.argv[3]) - config = {**config, **yaml_config['test_configuration'][test_id]} - else: - config = {**config, **yaml_config['test_configuration']} - - if 'test_configuration' in yaml_config['individual_configurations'][experiment_id]: - config = {**config, **yaml_config['individual_configurations'][experiment_id]['test_configuration']} - - train_checkpoint_id = yaml_config['individual_configurations'][experiment_id]['name'] - - config = AttributeDict(config) - return config, train_checkpoint_id - - -def get_from_repository(local_name, repo_files, integrity_check=None, repo_dir='~/dataset_repository', - local_dir='~/datasets'): - """ copies files from repository to local folder. - - repo_files: list of filenames or list of tuples [filename, target path] - - e.g. get_from_repository('MyDataset', [['data/dataset1.tar', 'other/path/ds03.tar']) - will create a folder 'MyDataset' in local_dir, and extract the content of - '/data/dataset1.tar' to /MyDataset/other/path. 
- """ - - local_dir = realpath(join(expanduser(local_dir), local_name)) - - dataset_exists = True - - # check if folder is available - if not isdir(local_dir): - dataset_exists = False - - if integrity_check is not None: - try: - integrity_ok = integrity_check(local_dir) - except BaseException: - integrity_ok = False - - if integrity_ok: - log.hint('Passed custom integrity check') - else: - log.hint('Custom integrity check failed') - - dataset_exists = dataset_exists and integrity_ok - - if not dataset_exists: - - repo_dir = realpath(expanduser(repo_dir)) - - for i, filename in enumerate(repo_files): - - if type(filename) == str: - origin, target = filename, filename - archive_target = join(local_dir, basename(origin)) - extract_target = join(local_dir) - else: - origin, target = filename - archive_target = join(local_dir, dirname(target), basename(origin)) - extract_target = join(local_dir, dirname(target)) - - archive_origin = join(repo_dir, origin) - - log.hint(f'copy: {archive_origin} to {archive_target}') - - # make sure the path exists - os.makedirs(dirname(archive_target), exist_ok=True) - - if os.path.isfile(archive_target): - # only copy if size differs - if os.path.getsize(archive_target) != os.path.getsize(archive_origin): - log.hint(f'file exists but filesize differs: target {os.path.getsize(archive_target)} vs. origin {os.path.getsize(archive_origin)}') - copy(archive_origin, archive_target) - else: - copy(archive_origin, archive_target) - - extract_archive(archive_target, extract_target, noarchive_ok=True) - - # concurrent processes might have deleted the file - if os.path.isfile(archive_target): - os.remove(archive_target) - - -def extract_archive(filename, target_folder=None, noarchive_ok=False): - from subprocess import run, PIPE - - if filename.endswith('.tgz') or filename.endswith('.tar'): - command = f'tar -xf {filename}' - command += f' -C {target_folder}' if target_folder is not None else '' - elif filename.endswith('.tar.gz'): - command = f'tar -xzf {filename}' - command += f' -C {target_folder}' if target_folder is not None else '' - elif filename.endswith('zip'): - command = f'unzip {filename}' - command += f' -d {target_folder}' if target_folder is not None else '' - else: - if noarchive_ok: - return - else: - raise ValueError(f'unsuppored file ending of {filename}') - - log.hint(command) - result = run(command.split(), stdout=PIPE, stderr=PIPE) - if result.returncode != 0: - print(result.stdout, result.stderr) - - -class AttributeDict(dict): - """ - An extended dictionary that allows access to elements as atttributes and counts - these accesses. This way, we know if some attributes were never used. 
- """ - - def __init__(self, *args, **kwargs): - from collections import Counter - super().__init__(*args, **kwargs) - self.__dict__['counter'] = Counter() - - def __getitem__(self, k): - self.__dict__['counter'][k] += 1 - return super().__getitem__(k) - - def __getattr__(self, k): - self.__dict__['counter'][k] += 1 - return super().get(k) - - def __setattr__(self, k, v): - return super().__setitem__(k, v) - - def __delattr__(self, k, v): - return super().__delitem__(k, v) - - def unused_keys(self, exceptions=()): - return [k for k in super().keys() if self.__dict__['counter'][k] == 0 and k not in exceptions] - - def assume_no_unused_keys(self, exceptions=()): - if len(self.unused_keys(exceptions=exceptions)) > 0: - log.warning('Unused keys:', self.unused_keys(exceptions=exceptions)) - - -def get_attribute(name): - import importlib - - if name is None: - raise ValueError('The provided attribute is None') - - name_split = name.split('.') - mod = importlib.import_module('.'.join(name_split[:-1])) - return getattr(mod, name_split[-1]) - - - -def filter_args(input_args, default_args): - - updated_args = {k: input_args[k] if k in input_args else v for k, v in default_args.items()} - used_args = {k: v for k, v in input_args.items() if k in default_args} - unused_args = {k: v for k, v in input_args.items() if k not in default_args} - - return AttributeDict(updated_args), AttributeDict(used_args), AttributeDict(unused_args) - - -def load_model(checkpoint_id, weights_file=None, strict=True, model_args='from_config', with_config=False): - - config = json.load(open(join('logs', checkpoint_id, 'config.json'))) - - if model_args != 'from_config' and type(model_args) != dict: - raise ValueError('model_args must either be "from_config" or a dictionary of values') - - model_cls = get_attribute(config['model']) - - # load model - if model_args == 'from_config': - _, model_args, _ = filter_args(config, inspect.signature(model_cls).parameters) - - model = model_cls(**model_args) - - if weights_file is None: - weights_file = realpath(join('logs', checkpoint_id, 'weights.pth')) - else: - weights_file = realpath(join('logs', checkpoint_id, weights_file)) - - if isfile(weights_file): - weights = torch.load(weights_file) - for _, w in weights.items(): - assert not torch.any(torch.isnan(w)), 'weights contain NaNs' - model.load_state_dict(weights, strict=strict) - else: - raise FileNotFoundError(f'model checkpoint {weights_file} was not found') - - if with_config: - return model, config - - return model - - -class TrainingLogger(object): - - def __init__(self, model, log_dir, config=None, *args): - super().__init__() - self.model = model - self.base_path = join(f'logs/{log_dir}') if log_dir is not None else None - - os.makedirs('logs/', exist_ok=True) - os.makedirs(self.base_path, exist_ok=True) - - if config is not None: - json.dump(config, open(join(self.base_path, 'config.json'), 'w')) - - def iter(self, i, **kwargs): - if i % 100 == 0 and 'loss' in kwargs: - loss = kwargs['loss'] - print(f'iteration {i}: loss {loss:.4f}') - - def save_weights(self, only_trainable=False, weight_file='weights.pth'): - if self.model is None: - raise AttributeError('You need to provide a model reference when initializing TrainingTracker to save weights.') - - weights_path = join(self.base_path, weight_file) - - weight_dict = self.model.state_dict() - - if only_trainable: - weight_dict = {n: weight_dict[n] for n, p in self.model.named_parameters() if p.requires_grad} - - torch.save(weight_dict, weights_path) - log.info(f'Saved 
weights to {weights_path}') - - def __enter__(self): - return self - - def __exit__(self, type, value, traceback): - """ automatically stop processes if used in a context manager """ - pass \ No newline at end of file diff --git a/spaces/nikitaPDL2023/assignment4/detectron2/tests/config/test_instantiate_config.py b/spaces/nikitaPDL2023/assignment4/detectron2/tests/config/test_instantiate_config.py deleted file mode 100644 index 6b728943ada9bc20af5a60fbe2b3ea58d804a362..0000000000000000000000000000000000000000 --- a/spaces/nikitaPDL2023/assignment4/detectron2/tests/config/test_instantiate_config.py +++ /dev/null @@ -1,109 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. - -import os -import tempfile -import unittest -import yaml -from omegaconf import OmegaConf -from omegaconf import __version__ as oc_version -from dataclasses import dataclass - -from detectron2.config import LazyConfig, instantiate, LazyCall as L -from detectron2.layers import ShapeSpec -from detectron2.utils.testing import reload_lazy_config - -OC_VERSION = tuple(int(x) for x in oc_version.split(".")[:2]) - - -class TestClass: - def __init__(self, int_arg, list_arg=None, dict_arg=None, extra_arg=None): - self.int_arg = int_arg - self.list_arg = list_arg - self.dict_arg = dict_arg - self.extra_arg = extra_arg - - def __call__(self, call_arg): - return call_arg + self.int_arg - - -@unittest.skipIf(OC_VERSION < (2, 1), "omegaconf version too old") -class TestConstruction(unittest.TestCase): - def test_basic_construct(self): - cfg = L(TestClass)( - int_arg=3, - list_arg=[10], - dict_arg={}, - extra_arg=L(TestClass)(int_arg=4, list_arg="${..list_arg}"), - ) - - for x in [cfg, reload_lazy_config(cfg)]: - obj = instantiate(x) - self.assertIsInstance(obj, TestClass) - self.assertEqual(obj.int_arg, 3) - self.assertEqual(obj.extra_arg.int_arg, 4) - self.assertEqual(obj.extra_arg.list_arg, obj.list_arg) - - # Test interpolation - x.extra_arg.list_arg = [5] - obj = instantiate(x) - self.assertIsInstance(obj, TestClass) - self.assertEqual(obj.extra_arg.list_arg, [5]) - - def test_instantiate_other_obj(self): - # do nothing for other obj - self.assertEqual(instantiate(5), 5) - x = [3, 4, 5] - self.assertEqual(instantiate(x), x) - x = TestClass(1) - self.assertIs(instantiate(x), x) - x = {"xx": "yy"} - self.assertIs(instantiate(x), x) - - def test_instantiate_lazy_target(self): - # _target_ is result of instantiate - objconf = L(L(len)(int_arg=3))(call_arg=4) - objconf._target_._target_ = TestClass - self.assertEqual(instantiate(objconf), 7) - - def test_instantiate_list(self): - lst = [1, 2, L(TestClass)(int_arg=1)] - x = L(TestClass)(int_arg=lst) # list as an argument should be recursively instantiated - x = instantiate(x).int_arg - self.assertEqual(x[:2], [1, 2]) - self.assertIsInstance(x[2], TestClass) - self.assertEqual(x[2].int_arg, 1) - - def test_instantiate_dataclass(self): - cfg = L(ShapeSpec)(channels=1, width=3) - # Test original cfg as well as serialization - for x in [cfg, reload_lazy_config(cfg)]: - obj = instantiate(x) - self.assertIsInstance(obj, ShapeSpec) - self.assertEqual(obj.channels, 1) - self.assertEqual(obj.height, None) - - def test_instantiate_dataclass_as_subconfig(self): - cfg = L(TestClass)(int_arg=1, extra_arg=ShapeSpec(channels=1, width=3)) - # Test original cfg as well as serialization - for x in [cfg, reload_lazy_config(cfg)]: - obj = instantiate(x) - self.assertIsInstance(obj.extra_arg, ShapeSpec) - self.assertEqual(obj.extra_arg.channels, 1) - self.assertEqual(obj.extra_arg.height, None) - 
- def test_bad_lazycall(self): - with self.assertRaises(Exception): - L(3) - - def test_interpolation(self): - cfg = L(TestClass)(int_arg=3, extra_arg="${int_arg}") - - cfg.int_arg = 4 - obj = instantiate(cfg) - self.assertEqual(obj.extra_arg, 4) - - # Test that interpolation still works after serialization - cfg = reload_lazy_config(cfg) - cfg.int_arg = 5 - obj = instantiate(cfg) - self.assertEqual(obj.extra_arg, 5) diff --git a/spaces/nikoifirewall/First_shot_gradio_covid_sentiment_analysis/app.py b/spaces/nikoifirewall/First_shot_gradio_covid_sentiment_analysis/app.py deleted file mode 100644 index 30a2a254a7c48967cf9acdaef026a8bb803c385f..0000000000000000000000000000000000000000 --- a/spaces/nikoifirewall/First_shot_gradio_covid_sentiment_analysis/app.py +++ /dev/null @@ -1,57 +0,0 @@ -import gradio as gr - -from transformers import AutoModelForSequenceClassification -from transformers import TFAutoModelForSequenceClassification -from transformers import AutoTokenizer, AutoConfig -import numpy as np -from scipy.special import softmax - - -# setting up the requiremnts - -model_path = f"mrm8488/distilroberta-finetuned-financial-news-sentiment-analysis" -tokenizer = AutoTokenizer.from_pretrained('mrm8488/distilroberta-finetuned-financial-news-sentiment-analysis') -config = AutoConfig.from_pretrained(model_path) -model = AutoModelForSequenceClassification.from_pretrained(model_path) - -# Preprocess text (username and link placeholders) -def preprocess(text): - new_text = [] - for t in text.split(" "): - t = '@user' if t.startswith('@') and len(t) > 1 else t - t = 'http' if t.startswith('http') else t - new_text.append(t) - return " ".join(new_text) - -# Defining the main function -def sentiment_analysis(text): - text = preprocess(text) - - # PyTorch-based models - encoded_input = tokenizer(text, return_tensors='pt') - output = model(**encoded_input) - scores_ = output[0][0].detach().numpy() - scores_ = softmax(scores_) - - # Format output dict of scores - labels = ['Negative😢😢', 'Neutral', 'Positive😃😃'] - scores = {l:float(s) for (l,s) in zip(labels, scores_) } - - return scores - -welcome_message = "Welcome to Team Paris tweets first shot Sentimental Analysis App 😃 😃 😃 😃 " -demo = gr.Interface( - fn=sentiment_analysis, - inputs=gr.Textbox(placeholder="Write your tweet here..."), - outputs="label", - interpretation="default", - examples=[["This is wonderful!"]], - title=welcome_message, - description=("This is a sentimental analysis app built by fine tuning a model trained on financial news sentiment, we leverage what the model has learnt, /n, and fine tune it on twitter comments . The eval_loss of our model is 0.785") -) -demo.launch() -# def greet(name): -# return "Hello " + name + "!!" 
- -# iface = gr.Interface(fn=greet, inputs="text", outputs="text") -# iface.launch(inline = False) \ No newline at end of file diff --git a/spaces/noes14155/runwayml-stable-diffusion-v1-5/app.py b/spaces/noes14155/runwayml-stable-diffusion-v1-5/app.py deleted file mode 100644 index a82df332731f067826d3e1ef79fabceffb74d07e..0000000000000000000000000000000000000000 --- a/spaces/noes14155/runwayml-stable-diffusion-v1-5/app.py +++ /dev/null @@ -1,3 +0,0 @@ -import gradio as gr - -gr.Interface.load("models/runwayml/stable-diffusion-v1-5").launch() \ No newline at end of file diff --git a/spaces/nomic-ai/Dahoas_rm-static/README.md b/spaces/nomic-ai/Dahoas_rm-static/README.md deleted file mode 100644 index 489b5e7549e1de6b8ffa438630d7dc48d27bbaa1..0000000000000000000000000000000000000000 --- a/spaces/nomic-ai/Dahoas_rm-static/README.md +++ /dev/null @@ -1,8 +0,0 @@ ---- -title: Dahoas/rm-static -emoji: 🗺️ -colorFrom: purple -colorTo: red -sdk: static -pinned: false ---- diff --git a/spaces/nus-cs5647-team-5/Mandarin_Tone_Evaluation/utils/ops.py b/spaces/nus-cs5647-team-5/Mandarin_Tone_Evaluation/utils/ops.py deleted file mode 100644 index b301479dac353265010fc2c90410b8366913b05d..0000000000000000000000000000000000000000 --- a/spaces/nus-cs5647-team-5/Mandarin_Tone_Evaluation/utils/ops.py +++ /dev/null @@ -1,46 +0,0 @@ -import wave -import difflib -import numpy as np - - -def read_wav_data(filename: str) -> tuple: - ''' - Read a wav file and return the time-domain signal matrix of the audio together with its playback parameters - ''' - wav = wave.open(filename, "rb") # open a wav-format audio file stream - num_frame = wav.getnframes() # get the number of frames - num_channel = wav.getnchannels() # get the number of channels - framerate = wav.getframerate() # get the frame rate - num_sample_width = wav.getsampwidth() # get the sample width, i.e. the number of bytes per frame - str_data = wav.readframes(num_frame) # read all frames - wav.close() # close the stream - wave_data = np.fromstring(str_data, dtype=np.short) # convert the audio file data into an array/matrix - wave_data.shape = -1, num_channel # reshape the array by channel count: one column for mono, two columns for stereo - wave_data = wave_data.T # transpose the matrix - return wave_data, framerate, num_channel, num_sample_width - - -def get_edit_distance(str1, str2) -> int: - ''' - Compute the edit distance between two sequences; supports both str and list types - ''' - leven_cost = 0 - sequence_match = difflib.SequenceMatcher(None, str1, str2) - for tag, index_1, index_2, index_j1, index_j2 in sequence_match.get_opcodes(): - if tag == 'replace': - leven_cost += max(index_2 - index_1, index_j2 - index_j1) - elif tag == 'insert': - leven_cost += (index_j2 - index_j1) - elif tag == 'delete': - leven_cost += (index_2 - index_1) - return leven_cost - - -def ctc_decode_delete_tail_blank(ctc_decode_list): - ''' - Remove the blank elements left over at the end of a CTC-decoded sequence - ''' - p = 0 - while p < len(ctc_decode_list) and ctc_decode_list[p] != -1: - p += 1 - return ctc_decode_list[0:p] diff --git a/spaces/nyx-ai/stylegan2-flax-tpu/stylegan2/utils.py b/spaces/nyx-ai/stylegan2-flax-tpu/stylegan2/utils.py deleted file mode 100644 index 5acc5c4266075fc559370988f1921fdedb20da1c..0000000000000000000000000000000000000000 --- a/spaces/nyx-ai/stylegan2-flax-tpu/stylegan2/utils.py +++ /dev/null @@ -1,37 +0,0 @@ -from tqdm import tqdm -import requests -import os -import tempfile - - -def download(ckpt_dir, url): - name = url[url.rfind('/') + 1 : url.rfind('?')] - if ckpt_dir is None: - ckpt_dir = tempfile.gettempdir() - ckpt_dir = os.path.join(ckpt_dir, 'flaxmodels') - ckpt_file = os.path.join(ckpt_dir, name) - if not os.path.exists(ckpt_file): - print(f'Downloading: \"{url[:url.rfind("?")]}\" to {ckpt_file}') - if not os.path.exists(ckpt_dir): - os.makedirs(ckpt_dir) - - response = requests.get(url, stream=True) - total_size_in_bytes = 
int(response.headers.get('content-length', 0)) - progress_bar = tqdm(total=total_size_in_bytes, unit='iB', unit_scale=True) - - # first create temp file, in case the download fails - ckpt_file_temp = os.path.join(ckpt_dir, name + '.temp') - with open(ckpt_file_temp, 'wb') as file: - for data in response.iter_content(chunk_size=1024): - progress_bar.update(len(data)) - file.write(data) - progress_bar.close() - - if total_size_in_bytes != 0 and progress_bar.n != total_size_in_bytes: - print('An error occurred while downloading, please try again.') - if os.path.exists(ckpt_file_temp): - os.remove(ckpt_file_temp) - else: - # if download was successful, rename the temp file - os.rename(ckpt_file_temp, ckpt_file) - return ckpt_file diff --git a/spaces/ochyai/ochyai_food/template.md b/spaces/ochyai/ochyai_food/template.md deleted file mode 100644 index b3bdbac4b435d8c96150f7fbc196535da64dd728..0000000000000000000000000000000000000000 --- a/spaces/ochyai/ochyai_food/template.md +++ /dev/null @@ -1,23 +0,0 @@ -### Title of New Recipe - -Please write the title of your new recipe here. - -### Your New Recipe Here - -Please write the new recipe and brainstorm every point of the new recipe to fill in the details. - -### Your Instruction Here - -Please write your instructions for cooking the dish of the new recipe and brainstorm every point of the new recipe to fill in the details. - -### Your Comment and Feelings, taste of new recipe - -Please write a review comment of the new recipe here and brainstorm every point of the new recipe to fill in the details. - -### Your Explanation to Blind Person - -Please write a review comment of the new recipe here to explain it to blind people more concretely and in detail. Please brainstorm every point of the new recipe to fill in the details. - -### Prompt for Visual Expression - -Please write a prompt for visual expression in generative AI to depict the visual of the new recipe and brainstorm every point of the new recipe to fill in the details. \ No newline at end of file diff --git a/spaces/osanseviero/bidaf-elmo/README.md b/spaces/osanseviero/bidaf-elmo/README.md deleted file mode 100644 index 92e6c78e3646873fad3ffd827f5effd868f08c00..0000000000000000000000000000000000000000 --- a/spaces/osanseviero/bidaf-elmo/README.md +++ /dev/null @@ -1,37 +0,0 @@ ---- -title: Bidaf Elmo -emoji: 🌍 -colorFrom: purple -colorTo: blue -sdk: gradio -app_file: app.py -pinned: false ---- - -# Configuration - -`title`: _string_ -Display title for the Space - -`emoji`: _string_ -Space emoji (emoji-only character allowed) - -`colorFrom`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`colorTo`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`sdk`: _string_ -Can be either `gradio` or `streamlit` - -`sdk_version` : _string_ -Only applicable for `streamlit` SDK. -See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions. - -`app_file`: _string_ -Path to your main application file (which contains either `gradio` or `streamlit` Python code). -Path is relative to the root of the repository. - -`pinned`: _boolean_ -Whether the Space stays on top of your list. 
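As a concrete illustration of the configuration reference documented just above, a minimal front matter for a Gradio Space might look like the sketch below. The title, emoji, and color values are placeholders chosen for illustration only; they are not taken from any Space in this diff.

---
title: Example Space
emoji: 🚀
colorFrom: blue
colorTo: green
sdk: gradio
app_file: app.py
pinned: false
---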
diff --git a/spaces/oyjp1234/andite-anything-v4.0/README.md b/spaces/oyjp1234/andite-anything-v4.0/README.md deleted file mode 100644 index 4efc3fe120be0271f86ebd587c2c3b1c76aa0220..0000000000000000000000000000000000000000 --- a/spaces/oyjp1234/andite-anything-v4.0/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Andite Anything V4.0 -emoji: ⚡ -colorFrom: gray -colorTo: yellow -sdk: gradio -sdk_version: 3.16.2 -app_file: app.py -pinned: false -license: openrail ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/midas/midas/base_model.py b/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/midas/midas/base_model.py deleted file mode 100644 index 5cf430239b47ec5ec07531263f26f5c24a2311cd..0000000000000000000000000000000000000000 --- a/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/midas/midas/base_model.py +++ /dev/null @@ -1,16 +0,0 @@ -import torch - - -class BaseModel(torch.nn.Module): - def load(self, path): - """Load model from file. - - Args: - path (str): file path - """ - parameters = torch.load(path, map_location=torch.device('cpu')) - - if "optimizer" in parameters: - parameters = parameters["model"] - - self.load_state_dict(parameters) diff --git a/spaces/parkyzh/bingo/tailwind.config.js b/spaces/parkyzh/bingo/tailwind.config.js deleted file mode 100644 index 03da3c3c45be6983b9f5ffa6df5f1fd0870e9636..0000000000000000000000000000000000000000 --- a/spaces/parkyzh/bingo/tailwind.config.js +++ /dev/null @@ -1,48 +0,0 @@ -/** @type {import('tailwindcss').Config} */ -module.exports = { - content: [ - './src/pages/**/*.{js,ts,jsx,tsx,mdx}', - './src/components/**/*.{js,ts,jsx,tsx,mdx}', - './src/app/**/*.{js,ts,jsx,tsx,mdx}', - './src/ui/**/*.{js,ts,jsx,tsx,mdx}', - ], - "darkMode": "class", - theme: { - extend: { - colors: { - 'primary-blue': 'rgb(var(--color-primary-blue) / )', - secondary: 'rgb(var(--color-secondary) / )', - 'primary-background': 'rgb(var(--primary-background) / )', - 'primary-text': 'rgb(var(--primary-text) / )', - 'secondary-text': 'rgb(var(--secondary-text) / )', - 'light-text': 'rgb(var(--light-text) / )', - 'primary-border': 'rgb(var(--primary-border) / )', - }, - keyframes: { - slideDownAndFade: { - from: { opacity: 0, transform: 'translateY(-2px)' }, - to: { opacity: 1, transform: 'translateY(0)' }, - }, - slideLeftAndFade: { - from: { opacity: 0, transform: 'translateX(2px)' }, - to: { opacity: 1, transform: 'translateX(0)' }, - }, - slideUpAndFade: { - from: { opacity: 0, transform: 'translateY(2px)' }, - to: { opacity: 1, transform: 'translateY(0)' }, - }, - slideRightAndFade: { - from: { opacity: 0, transform: 'translateX(2px)' }, - to: { opacity: 1, transform: 'translateX(0)' }, - }, - }, - animation: { - slideDownAndFade: 'slideDownAndFade 400ms cubic-bezier(0.16, 1, 0.3, 1)', - slideLeftAndFade: 'slideLeftAndFade 400ms cubic-bezier(0.16, 1, 0.3, 1)', - slideUpAndFade: 'slideUpAndFade 400ms cubic-bezier(0.16, 1, 0.3, 1)', - slideRightAndFade: 'slideRightAndFade 400ms cubic-bezier(0.16, 1, 0.3, 1)', - }, - }, - }, - plugins: [require('@headlessui/tailwindcss'), require('tailwind-scrollbar')], -} diff --git a/spaces/paulengstler/interpretable-vertebral-fracture-diagnosis/netdissect/proggan.py b/spaces/paulengstler/interpretable-vertebral-fracture-diagnosis/netdissect/proggan.py deleted file mode 100644 index a82f6fecac2cf7b6d9016e201265691236997490..0000000000000000000000000000000000000000 --- 
a/spaces/paulengstler/interpretable-vertebral-fracture-diagnosis/netdissect/proggan.py +++ /dev/null @@ -1,302 +0,0 @@ -import torch, numpy, itertools -import torch.nn as nn -from collections import OrderedDict - - -def print_network(net, verbose=False): - num_params = 0 - for param in net.parameters(): - num_params += param.numel() - if verbose: - print(net) - print('Total number of parameters: {:3.3f} M'.format(num_params / 1e6)) - - -def from_pth_file(filename): - ''' - Instantiate from a pth file. - ''' - return from_state_dict(torch.load(filename)) - -def from_state_dict(state_dict): - if 'state_dict' in state_dict: - state_dict = state_dict['state_dict'] - # Convert old version of parameter names - if 'features.0.conv.weight' in state_dict: - state_dict = state_dict_from_old_pt_dict(state_dict) - sizes = sizes_from_state_dict(state_dict) - result = ProgressiveGenerator(sizes=sizes) - result.load_state_dict(state_dict) - return result - -############################################################################### -# Modules -############################################################################### - -class ProgressiveGenerator(nn.Sequential): - def __init__(self, resolution=None, sizes=None, modify_sequence=None, - output_tanh=True): - ''' - A pytorch progessive GAN generator that can be converted directly - from either a tensorflow model or a theano model. It consists of - a sequence of convolutional layers, organized in pairs, with an - upsampling and reduction of channels at every other layer; and - then finally followed by an output layer that reduces it to an - RGB [-1..1] image. - - The network can be given more layers to increase the output - resolution. The sizes argument indicates the fieature depth at - each upsampling, starting with the input z: [input-dim, 4x4-depth, - 8x8-depth, 16x16-depth...]. The output dimension is 2 * 2**len(sizes) - - Some default architectures can be selected by supplying the - resolution argument instead. - - The optional modify_sequence function can be used to transform the - sequence of layers before the network is constructed. - - If output_tanh is set to False, the network leaves the output - unclamped; otherwise, it applies a tanh to clamp the output to - [-1,1] before output. - ''' - assert (resolution is None) != (sizes is None) - if sizes is None: - sizes = { - 8: [512, 512, 512], - 16: [512, 512, 512, 512], - 32: [512, 512, 512, 512, 256], - 64: [512, 512, 512, 512, 256, 128], - 128: [512, 512, 512, 512, 256, 128, 64], - 256: [512, 512, 512, 512, 256, 128, 64, 32], - 1024: [512, 512, 512, 512, 512, 256, 128, 64, 32, 16] - }[resolution] - # Follow the schedule of upsampling given by sizes. - # layers are called: layer1, layer2, etc; then output_128x128 - sequence = [] - def add_d(layer, name=None): - if name is None: - name = 'layer%d' % (len(sequence) + 1) - sequence.append((name, layer)) - add_d(NormConvBlock(sizes[0], sizes[1], kernel_size=4, padding=3)) - add_d(NormConvBlock(sizes[1], sizes[1], kernel_size=3, padding=1)) - for i, (si, so) in enumerate(zip(sizes[1:-1], sizes[2:])): - add_d(NormUpscaleConvBlock(si, so, kernel_size=3, padding=1)) - add_d(NormConvBlock(so, so, kernel_size=3, padding=1)) - # Create an output layer. During training, the progressive GAN - # learns several such output layers for various resolutions; we - # just include the last (highest resolution) one. 
- dim = 4 * (2 ** (len(sequence) // 2 - 1)) - add_d(OutputConvBlock(sizes[-1], tanh=output_tanh), - name='output_%dx%d' % (dim, dim)) - # Allow the sequence to be modified - if modify_sequence is not None: - sequence = modify_sequence(sequence) - super().__init__(OrderedDict(sequence)) - - def forward(self, x): - # Convert vector input to 1x1 featuremap. - x = x.view(x.shape[0], x.shape[1], 1, 1) - return super().forward(x) - -class PixelNormLayer(nn.Module): - def __init__(self): - super(PixelNormLayer, self).__init__() - - def forward(self, x): - return x / torch.sqrt(torch.mean(x**2, dim=1, keepdim=True) + 1e-8) - -class DoubleResolutionLayer(nn.Module): - def forward(self, x): - x = nn.functional.interpolate(x, scale_factor=2, mode='nearest') - return x - -class WScaleLayer(nn.Module): - def __init__(self, size, fan_in, gain=numpy.sqrt(2)): - super(WScaleLayer, self).__init__() - self.scale = gain / numpy.sqrt(fan_in) # No longer a parameter - self.b = nn.Parameter(torch.randn(size)) - self.size = size - - def forward(self, x): - x_size = x.size() - x = x * self.scale + self.b.view(1, -1, 1, 1).expand( - x_size[0], self.size, x_size[2], x_size[3]) - return x - -class NormConvBlock(nn.Module): - def __init__(self, in_channels, out_channels, kernel_size, padding): - super(NormConvBlock, self).__init__() - self.norm = PixelNormLayer() - self.conv = nn.Conv2d( - in_channels, out_channels, kernel_size, 1, padding, bias=False) - self.wscale = WScaleLayer(out_channels, in_channels, - gain=numpy.sqrt(2) / kernel_size) - self.relu = nn.LeakyReLU(inplace=True, negative_slope=0.2) - - def forward(self, x): - x = self.norm(x) - x = self.conv(x) - x = self.relu(self.wscale(x)) - return x - -class NormUpscaleConvBlock(nn.Module): - def __init__(self, in_channels, out_channels, kernel_size, padding): - super(NormUpscaleConvBlock, self).__init__() - self.norm = PixelNormLayer() - self.up = DoubleResolutionLayer() - self.conv = nn.Conv2d( - in_channels, out_channels, kernel_size, 1, padding, bias=False) - self.wscale = WScaleLayer(out_channels, in_channels, - gain=numpy.sqrt(2) / kernel_size) - self.relu = nn.LeakyReLU(inplace=True, negative_slope=0.2) - - def forward(self, x): - x = self.norm(x) - x = self.up(x) - x = self.conv(x) - x = self.relu(self.wscale(x)) - return x - -class OutputConvBlock(nn.Module): - def __init__(self, in_channels, tanh=False): - super().__init__() - self.norm = PixelNormLayer() - self.conv = nn.Conv2d( - in_channels, 3, kernel_size=1, padding=0, bias=False) - self.wscale = WScaleLayer(3, in_channels, gain=1) - self.clamp = nn.Hardtanh() if tanh else (lambda x: x) - - def forward(self, x): - x = self.norm(x) - x = self.conv(x) - x = self.wscale(x) - x = self.clamp(x) - return x - -############################################################################### -# Conversion -############################################################################### - -def from_tf_parameters(parameters): - ''' - Instantiate from tensorflow variables. - ''' - state_dict = state_dict_from_tf_parameters(parameters) - sizes = sizes_from_state_dict(state_dict) - result = ProgressiveGenerator(sizes=sizes) - result.load_state_dict(state_dict) - return result - -def from_old_pt_dict(parameters): - ''' - Instantiate from old pytorch state dict. 
- ''' - state_dict = state_dict_from_old_pt_dict(parameters) - sizes = sizes_from_state_dict(state_dict) - result = ProgressiveGenerator(sizes=sizes) - result.load_state_dict(state_dict) - return result - -def sizes_from_state_dict(params): - ''' - In a progressive GAN, the number of channels can change after each - upsampling. This function reads the state dict to figure the - number of upsamplings and the channel depth of each filter. - ''' - sizes = [] - for i in itertools.count(): - pt_layername = 'layer%d' % (i + 1) - try: - weight = params['%s.conv.weight' % pt_layername] - except KeyError: - break - if i == 0: - sizes.append(weight.shape[1]) - if i % 2 == 0: - sizes.append(weight.shape[0]) - return sizes - -def state_dict_from_tf_parameters(parameters): - ''' - Conversion from tensorflow parameters - ''' - def torch_from_tf(data): - return torch.from_numpy(data.eval()) - - params = dict(parameters) - result = {} - sizes = [] - for i in itertools.count(): - resolution = 4 * (2 ** (i // 2)) - # Translate parameter names. For example: - # 4x4/Dense/weight -> layer1.conv.weight - # 32x32/Conv0_up/weight -> layer7.conv.weight - # 32x32/Conv1/weight -> layer8.conv.weight - tf_layername = '%dx%d/%s' % (resolution, resolution, - 'Dense' if i == 0 else 'Conv' if i == 1 else - 'Conv0_up' if i % 2 == 0 else 'Conv1') - pt_layername = 'layer%d' % (i + 1) - # Stop looping when we run out of parameters. - try: - weight = torch_from_tf(params['%s/weight' % tf_layername]) - except KeyError: - break - # Transpose convolution weights into pytorch format. - if i == 0: - # Convert dense layer to 4x4 convolution - weight = weight.view(weight.shape[0], weight.shape[1] // 16, - 4, 4).permute(1, 0, 2, 3).flip(2, 3) - sizes.append(weight.shape[0]) - elif i % 2 == 0: - # Convert inverse convolution to convolution - weight = weight.permute(2, 3, 0, 1).flip(2, 3) - else: - # Ordinary Conv2d conversion. - weight = weight.permute(3, 2, 0, 1) - sizes.append(weight.shape[1]) - result['%s.conv.weight' % (pt_layername)] = weight - # Copy bias vector. - bias = torch_from_tf(params['%s/bias' % tf_layername]) - result['%s.wscale.b' % (pt_layername)] = bias - # Copy just finest-grained ToRGB output layers. For example: - # ToRGB_lod0/weight -> output.conv.weight - i -= 1 - resolution = 4 * (2 ** (i // 2)) - tf_layername = 'ToRGB_lod0' - pt_layername = 'output_%dx%d' % (resolution, resolution) - result['%s.conv.weight' % pt_layername] = torch_from_tf( - params['%s/weight' % tf_layername]).permute(3, 2, 0, 1) - result['%s.wscale.b' % pt_layername] = torch_from_tf( - params['%s/bias' % tf_layername]) - # Return parameters - return result - -def state_dict_from_old_pt_dict(params): - ''' - Conversion from the old pytorch model layer names. - ''' - result = {} - sizes = [] - for i in itertools.count(): - old_layername = 'features.%d' % i - pt_layername = 'layer%d' % (i + 1) - try: - weight = params['%s.conv.weight' % (old_layername)] - except KeyError: - break - if i == 0: - sizes.append(weight.shape[0]) - if i % 2 == 0: - sizes.append(weight.shape[1]) - result['%s.conv.weight' % (pt_layername)] = weight - result['%s.wscale.b' % (pt_layername)] = params[ - '%s.wscale.b' % (old_layername)] - # Copy the output layers. - i -= 1 - resolution = 4 * (2 ** (i // 2)) - pt_layername = 'output_%dx%d' % (resolution, resolution) - result['%s.conv.weight' % pt_layername] = params['output.conv.weight'] - result['%s.wscale.b' % pt_layername] = params['output.wscale.b'] - # Return parameters and also network architecture sizes. 
- return result - diff --git a/spaces/paulengstler/interpretable-vertebral-fracture-diagnosis/netdissect/upsegmodel/prroi_pool/functional.py b/spaces/paulengstler/interpretable-vertebral-fracture-diagnosis/netdissect/upsegmodel/prroi_pool/functional.py deleted file mode 100644 index 7dc7a8c282e846bd633c4fdc4190c4dca3da5a6f..0000000000000000000000000000000000000000 --- a/spaces/paulengstler/interpretable-vertebral-fracture-diagnosis/netdissect/upsegmodel/prroi_pool/functional.py +++ /dev/null @@ -1,70 +0,0 @@ -#! /usr/bin/env python3 -# -*- coding: utf-8 -*- -# File : functional.py -# Author : Jiayuan Mao, Tete Xiao -# Email : maojiayuan@gmail.com, jasonhsiao97@gmail.com -# Date : 07/13/2018 -# -# This file is part of PreciseRoIPooling. -# Distributed under terms of the MIT license. -# Copyright (c) 2017 Megvii Technology Limited. - -import torch -import torch.autograd as ag - -try: - from os.path import join as pjoin, dirname - from torch.utils.cpp_extension import load as load_extension - root_dir = pjoin(dirname(__file__), 'src') - _prroi_pooling = load_extension( - '_prroi_pooling', - [pjoin(root_dir, 'prroi_pooling_gpu.c'), pjoin(root_dir, 'prroi_pooling_gpu_impl.cu')], - verbose=False - ) -except ImportError: - raise ImportError('Can not compile Precise RoI Pooling library.') - -__all__ = ['prroi_pool2d'] - - -class PrRoIPool2DFunction(ag.Function): - @staticmethod - def forward(ctx, features, rois, pooled_height, pooled_width, spatial_scale): - assert 'FloatTensor' in features.type() and 'FloatTensor' in rois.type(), \ - 'Precise RoI Pooling only takes float input, got {} for features and {} for rois.'.format(features.type(), rois.type()) - - pooled_height = int(pooled_height) - pooled_width = int(pooled_width) - spatial_scale = float(spatial_scale) - - features = features.contiguous() - rois = rois.contiguous() - params = (pooled_height, pooled_width, spatial_scale) - - if features.is_cuda: - output = _prroi_pooling.prroi_pooling_forward_cuda(features, rois, *params) - ctx.params = params - # everything here is contiguous. 
- ctx.save_for_backward(features, rois, output) - else: - raise NotImplementedError('Precise RoI Pooling only supports GPU (cuda) implememtations.') - - return output - - @staticmethod - def backward(ctx, grad_output): - features, rois, output = ctx.saved_tensors - grad_input = grad_coor = None - - if features.requires_grad: - grad_output = grad_output.contiguous() - grad_input = _prroi_pooling.prroi_pooling_backward_cuda(features, rois, output, grad_output, *ctx.params) - if rois.requires_grad: - grad_output = grad_output.contiguous() - grad_coor = _prroi_pooling.prroi_pooling_coor_backward_cuda(features, rois, output, grad_output, *ctx.params) - - return grad_input, grad_coor, None, None, None - - -prroi_pool2d = PrRoIPool2DFunction.apply - diff --git a/spaces/pip64/geston1/README.md b/spaces/pip64/geston1/README.md deleted file mode 100644 index e14fe95d9bb9cc1b68a79afa13719f59028c7f08..0000000000000000000000000000000000000000 --- a/spaces/pip64/geston1/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Geston1 -emoji: 🌍 -colorFrom: yellow -colorTo: gray -sdk: gradio -sdk_version: 3.35.2 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/prerna9811/Chord/portaudio/test/patest_sync.c b/spaces/prerna9811/Chord/portaudio/test/patest_sync.c deleted file mode 100644 index 52df0fe1bf30f27c4c6966038ab82786b23499a4..0000000000000000000000000000000000000000 --- a/spaces/prerna9811/Chord/portaudio/test/patest_sync.c +++ /dev/null @@ -1,271 +0,0 @@ -/** @file patest_sync.c - @ingroup test_src - @brief Test time stamping and synchronization of audio and video. - - A high latency is used so we can hear the difference in time. - Random durations are used so we know we are hearing the right beep - and not the one before or after. - - Sequence of events: - -# Foreground requests a beep. - -# Background randomly schedules a beep. - -# Foreground waits for the beep to be heard based on PaUtil_GetTime(). - -# Foreground outputs video (printf) in sync with audio. - -# Repeat. - - @author Phil Burk http://www.softsynth.com -*/ -/* - * $Id$ - * - * This program uses the PortAudio Portable Audio Library. - * For more information see: http://www.portaudio.com - * Copyright (c) 1999-2000 Ross Bencina and Phil Burk - * - * Permission is hereby granted, free of charge, to any person obtaining - * a copy of this software and associated documentation files - * (the "Software"), to deal in the Software without restriction, - * including without limitation the rights to use, copy, modify, merge, - * publish, distribute, sublicense, and/or sell copies of the Software, - * and to permit persons to whom the Software is furnished to do so, - * subject to the following conditions: - * - * The above copyright notice and this permission notice shall be - * included in all copies or substantial portions of the Software. - * - * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, - * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF - * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. - * IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR - * ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF - * CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION - * WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. 
- */ - -/* - * The text above constitutes the entire PortAudio license; however, - * the PortAudio community also makes the following non-binding requests: - * - * Any person wishing to distribute modifications to the Software is - * requested to send the modifications to the original developer so that - * they can be incorporated into the canonical version. It is also - * requested that these non-binding requests be included along with the - * license above. - */ - -#include -#include -#include "portaudio.h" -#include "pa_util.h" -#define NUM_BEEPS (6) -#define SAMPLE_RATE (44100) -#define SAMPLE_PERIOD (1.0/44100.0) -#define FRAMES_PER_BUFFER (256) -#define BEEP_DURATION (400) -#define LATENCY_MSEC (2000) -#define SLEEP_MSEC (10) -#define TIMEOUT_MSEC (15000) - -#define STATE_BKG_IDLE (0) -#define STATE_BKG_PENDING (1) -#define STATE_BKG_BEEPING (2) -typedef struct -{ - float left_phase; - float right_phase; - int state; - volatile int requestBeep; /* Set by foreground, cleared by background. */ - PaTime beepTime; - int beepCount; - double latency; /* For debugging. */ -} -paTestData; - -static unsigned long GenerateRandomNumber( void ); -/************************************************************/ -/* Calculate pseudo-random 32 bit number based on linear congruential method. */ -static unsigned long GenerateRandomNumber( void ) -{ - static unsigned long randSeed = 99887766; /* Change this for different random sequences. */ - randSeed = (randSeed * 196314165) + 907633515; - return randSeed; -} - -/* This routine will be called by the PortAudio engine when audio is needed. -** It may called at interrupt level on some machines so don't do anything -** that could mess up the system like calling malloc() or free(). -*/ -static int patestCallback( const void *inputBuffer, void *outputBuffer, - unsigned long framesPerBuffer, - const PaStreamCallbackTimeInfo *timeInfo, - PaStreamCallbackFlags statusFlags, void *userData ) -{ - /* Cast data passed through stream to our structure. */ - paTestData *data = (paTestData*)userData; - float *out = (float*)outputBuffer; - unsigned int i; - (void) inputBuffer; - - data->latency = timeInfo->outputBufferDacTime - timeInfo->currentTime; - - for( i=0; istate ) - { - case STATE_BKG_IDLE: - /* Schedule beep at some random time in the future. */ - if( data->requestBeep ) - { - int random = GenerateRandomNumber() >> 14; - data->beepTime = timeInfo->outputBufferDacTime + (( (double)(random + SAMPLE_RATE)) * SAMPLE_PERIOD ); - data->state = STATE_BKG_PENDING; - } - *out++ = 0.0; /* left */ - *out++ = 0.0; /* right */ - break; - - case STATE_BKG_PENDING: - if( (timeInfo->outputBufferDacTime + (i*SAMPLE_PERIOD)) >= data->beepTime ) - { - data->state = STATE_BKG_BEEPING; - data->beepCount = BEEP_DURATION; - data->left_phase = data->right_phase = 0.0; - } - *out++ = 0.0; /* left */ - *out++ = 0.0; /* right */ - break; - - case STATE_BKG_BEEPING: - if( data->beepCount <= 0 ) - { - data->state = STATE_BKG_IDLE; - data->requestBeep = 0; - *out++ = 0.0; /* left */ - *out++ = 0.0; /* right */ - } - else - { - /* Play sawtooth wave. */ - *out++ = data->left_phase; /* left */ - *out++ = data->right_phase; /* right */ - /* Generate simple sawtooth phaser that ranges between -1.0 and 1.0. */ - data->left_phase += 0.01f; - /* When signal reaches top, drop back down. */ - if( data->left_phase >= 1.0f ) data->left_phase -= 2.0f; - /* higher pitch so we can distinguish left and right. 
*/ - data->right_phase += 0.03f; - if( data->right_phase >= 1.0f ) data->right_phase -= 2.0f; - } - data->beepCount -= 1; - break; - - default: - data->state = STATE_BKG_IDLE; - break; - } - } - return 0; -} -/*******************************************************************/ -int main(void); -int main(void) -{ - PaStream *stream; - PaError err; - paTestData DATA; - int i, timeout; - PaTime previousTime; - PaStreamParameters outputParameters; - printf("PortAudio Test: you should see BEEP at the same time you hear it.\n"); - printf("Wait for a few seconds random delay between BEEPs.\n"); - printf("BEEP %d times.\n", NUM_BEEPS ); - /* Initialize our DATA for use by callback. */ - DATA.left_phase = DATA.right_phase = 0.0; - DATA.state = STATE_BKG_IDLE; - DATA.requestBeep = 0; - /* Initialize library before making any other calls. */ - err = Pa_Initialize(); - if( err != paNoError ) goto error; - - outputParameters.device = Pa_GetDefaultOutputDevice(); - if (outputParameters.device == paNoDevice) { - fprintf(stderr,"Error: No default output device.\n"); - goto error; - } - outputParameters.channelCount = 2; - outputParameters.hostApiSpecificStreamInfo = NULL; - outputParameters.sampleFormat = paFloat32; - outputParameters.suggestedLatency = (double)LATENCY_MSEC / 1000; - - /* Open an audio I/O stream. */ - err = Pa_OpenStream( - &stream, - NULL, /* no input */ - &outputParameters, - SAMPLE_RATE, - FRAMES_PER_BUFFER, /* frames per buffer */ - paClipOff, /* we won't output out of range samples so don't bother clipping them */ - patestCallback, - &DATA ); - if( err != paNoError ) goto error; - - err = Pa_StartStream( stream ); - if( err != paNoError ) goto error; - - printf("started\n"); - fflush(stdout); - - previousTime = Pa_GetStreamTime( stream ); - for( i=0; i 0 ) ) Pa_Sleep(SLEEP_MSEC); - if( timeout <= 0 ) - { - fprintf( stderr, "Timed out waiting for background to acknowledge request.\n" ); - goto error; - } - printf("calc beep for %9.3f, latency = %6.3f\n", DATA.beepTime, DATA.latency ); - fflush(stdout); - - /* Wait for scheduled beep time. */ - timeout = TIMEOUT_MSEC + (10000/SLEEP_MSEC); - while( (Pa_GetStreamTime( stream ) < DATA.beepTime) && (timeout-- > 0 ) ) - { - Pa_Sleep(SLEEP_MSEC); - } - if( timeout <= 0 ) - { - fprintf( stderr, "Timed out waiting for time. Now = %9.3f, Beep for %9.3f.\n", - PaUtil_GetTime(), DATA.beepTime ); - goto error; - } - - /* Beep should be sounding now so print synchronized BEEP. */ - printf("hear \"BEEP\" at %9.3f, delta = %9.3f\n", - Pa_GetStreamTime( stream ), (DATA.beepTime - previousTime) ); - fflush(stdout); - - previousTime = DATA.beepTime; - } - - err = Pa_StopStream( stream ); - if( err != paNoError ) goto error; - - err = Pa_CloseStream( stream ); - if( err != paNoError ) goto error; - - Pa_Terminate(); - printf("Test finished.\n"); - return err; -error: - Pa_Terminate(); - fprintf( stderr, "An error occurred while using the portaudio stream\n" ); - fprintf( stderr, "Error number: %d\n", err ); - fprintf( stderr, "Error message: %s\n", Pa_GetErrorText( err ) ); - return err; -} diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/PIL/McIdasImagePlugin.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/PIL/McIdasImagePlugin.py deleted file mode 100644 index bb79e71de5d2d05ced99dd82cfd5848d0d72ac72..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/PIL/McIdasImagePlugin.py +++ /dev/null @@ -1,75 +0,0 @@ -# -# The Python Imaging Library. 
-# $Id$ -# -# Basic McIdas support for PIL -# -# History: -# 1997-05-05 fl Created (8-bit images only) -# 2009-03-08 fl Added 16/32-bit support. -# -# Thanks to Richard Jones and Craig Swank for specs and samples. -# -# Copyright (c) Secret Labs AB 1997. -# Copyright (c) Fredrik Lundh 1997. -# -# See the README file for information on usage and redistribution. -# - -import struct - -from . import Image, ImageFile - - -def _accept(s): - return s[:8] == b"\x00\x00\x00\x00\x00\x00\x00\x04" - - -## -# Image plugin for McIdas area images. - - -class McIdasImageFile(ImageFile.ImageFile): - format = "MCIDAS" - format_description = "McIdas area file" - - def _open(self): - # parse area file directory - s = self.fp.read(256) - if not _accept(s) or len(s) != 256: - msg = "not an McIdas area file" - raise SyntaxError(msg) - - self.area_descriptor_raw = s - self.area_descriptor = w = [0] + list(struct.unpack("!64i", s)) - - # get mode - if w[11] == 1: - mode = rawmode = "L" - elif w[11] == 2: - # FIXME: add memory map support - mode = "I" - rawmode = "I;16B" - elif w[11] == 4: - # FIXME: add memory map support - mode = "I" - rawmode = "I;32B" - else: - msg = "unsupported McIdas format" - raise SyntaxError(msg) - - self._mode = mode - self._size = w[10], w[9] - - offset = w[34] + w[15] - stride = w[15] + w[10] * w[11] * w[14] - - self.tile = [("raw", (0, 0) + self.size, offset, (rawmode, stride, 1))] - - -# -------------------------------------------------------------------- -# registry - -Image.register_open(McIdasImageFile.format, McIdasImageFile, _accept) - -# no default extension diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/altair/utils/execeval.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/altair/utils/execeval.py deleted file mode 100644 index 7e98fd378bfee470858b3a79e4ab3473086f2eef..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/altair/utils/execeval.py +++ /dev/null @@ -1,53 +0,0 @@ -import ast -import sys - - -class _CatchDisplay: - """Class to temporarily catch sys.displayhook""" - - def __init__(self): - self.output = None - - def __enter__(self): - self.old_hook = sys.displayhook - sys.displayhook = self - return self - - def __exit__(self, type, value, traceback): - sys.displayhook = self.old_hook - # Returning False will cause exceptions to propagate - return False - - def __call__(self, output): - self.output = output - - -def eval_block(code, namespace=None, filename=""): - """ - Execute a multi-line block of code in the given namespace - - If the final statement in the code is an expression, return - the result of the expression. 
- """ - tree = ast.parse(code, filename="", mode="exec") - if namespace is None: - namespace = {} - catch_display = _CatchDisplay() - - if isinstance(tree.body[-1], ast.Expr): - to_exec, to_eval = tree.body[:-1], tree.body[-1:] - else: - to_exec, to_eval = tree.body, [] - - for node in to_exec: - compiled = compile(ast.Module([node], []), filename=filename, mode="exec") - exec(compiled, namespace) - - with catch_display: - for node in to_eval: - compiled = compile( - ast.Interactive([node]), filename=filename, mode="single" - ) - exec(compiled, namespace) - - return catch_display.output diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/contourpy/util/mpl_renderer.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/contourpy/util/mpl_renderer.py deleted file mode 100644 index dbcb5ca19a01e3ae000986673d66def23f9c2eac..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/contourpy/util/mpl_renderer.py +++ /dev/null @@ -1,613 +0,0 @@ -from __future__ import annotations - -import io -from typing import TYPE_CHECKING, Any, cast - -import matplotlib.collections as mcollections -import matplotlib.pyplot as plt -import numpy as np - -from contourpy import FillType, LineType -from contourpy.util.mpl_util import filled_to_mpl_paths, lines_to_mpl_paths, mpl_codes_to_offsets -from contourpy.util.renderer import Renderer - -if TYPE_CHECKING: - from matplotlib.axes import Axes - from matplotlib.figure import Figure - from numpy.typing import ArrayLike - - import contourpy._contourpy as cpy - - -class MplRenderer(Renderer): - _axes: Axes - _fig: Figure - _want_tight: bool - - """Utility renderer using Matplotlib to render a grid of plots over the same (x, y) range. - - Args: - nrows (int, optional): Number of rows of plots, default ``1``. - ncols (int, optional): Number of columns of plots, default ``1``. - figsize (tuple(float, float), optional): Figure size in inches, default ``(9, 9)``. - show_frame (bool, optional): Whether to show frame and axes ticks, default ``True``. - backend (str, optional): Matplotlib backend to use or ``None`` for default backend. - Default ``None``. - gridspec_kw (dict, optional): Gridspec keyword arguments to pass to ``plt.subplots``, - default None. - """ - def __init__( - self, - nrows: int = 1, - ncols: int = 1, - figsize: tuple[float, float] = (9, 9), - show_frame: bool = True, - backend: str | None = None, - gridspec_kw: dict[str, Any] | None = None, - ) -> None: - if backend is not None: - import matplotlib - matplotlib.use(backend) - - kwargs = dict(figsize=figsize, squeeze=False, sharex=True, sharey=True) - if gridspec_kw is not None: - kwargs["gridspec_kw"] = gridspec_kw - else: - kwargs["subplot_kw"] = dict(aspect="equal") - - self._fig, axes = plt.subplots(nrows, ncols, **kwargs) - self._axes = axes.flatten() - if not show_frame: - for ax in self._axes: - ax.axis("off") - - self._want_tight = True - - def __del__(self) -> None: - if hasattr(self, "_fig"): - plt.close(self._fig) - - def _autoscale(self) -> None: - # Using axes._need_autoscale attribute if need to autoscale before rendering after adding - # lines/filled. Only want to autoscale once per axes regardless of how many lines/filled - # added. 
- for ax in self._axes: - if getattr(ax, "_need_autoscale", False): - ax.autoscale_view(tight=True) - ax._need_autoscale = False - if self._want_tight and len(self._axes) > 1: - self._fig.tight_layout() - - def _get_ax(self, ax: Axes | int) -> Axes: - if isinstance(ax, int): - ax = self._axes[ax] - return ax - - def filled( - self, - filled: cpy.FillReturn, - fill_type: FillType, - ax: Axes | int = 0, - color: str = "C0", - alpha: float = 0.7, - ) -> None: - """Plot filled contours on a single Axes. - - Args: - filled (sequence of arrays): Filled contour data as returned by - :func:`~contourpy.ContourGenerator.filled`. - fill_type (FillType): Type of ``filled`` data, as returned by - :attr:`~contourpy.ContourGenerator.fill_type`. - ax (int or Maplotlib Axes, optional): Which axes to plot on, default ``0``. - color (str, optional): Color to plot with. May be a string color or the letter ``"C"`` - followed by an integer in the range ``"C0"`` to ``"C9"`` to use a color from the - ``tab10`` colormap. Default ``"C0"``. - alpha (float, optional): Opacity to plot with, default ``0.7``. - """ - ax = self._get_ax(ax) - paths = filled_to_mpl_paths(filled, fill_type) - collection = mcollections.PathCollection( - paths, facecolors=color, edgecolors="none", lw=0, alpha=alpha) - ax.add_collection(collection) - ax._need_autoscale = True - - def grid( - self, - x: ArrayLike, - y: ArrayLike, - ax: Axes | int = 0, - color: str = "black", - alpha: float = 0.1, - point_color: str | None = None, - quad_as_tri_alpha: float = 0, - ) -> None: - """Plot quad grid lines on a single Axes. - - Args: - x (array-like of shape (ny, nx) or (nx,)): The x-coordinates of the grid points. - y (array-like of shape (ny, nx) or (ny,)): The y-coordinates of the grid points. - ax (int or Matplotlib Axes, optional): Which Axes to plot on, default ``0``. - color (str, optional): Color to plot grid lines, default ``"black"``. - alpha (float, optional): Opacity to plot lines with, default ``0.1``. - point_color (str, optional): Color to plot grid points or ``None`` if grid points - should not be plotted, default ``None``. - quad_as_tri_alpha (float, optional): Opacity to plot ``quad_as_tri`` grid, default 0. - - Colors may be a string color or the letter ``"C"`` followed by an integer in the range - ``"C0"`` to ``"C9"`` to use a color from the ``tab10`` colormap. - - Warning: - ``quad_as_tri_alpha > 0`` plots all quads as though they are unmasked. - """ - ax = self._get_ax(ax) - x, y = self._grid_as_2d(x, y) - kwargs = dict(color=color, alpha=alpha) - ax.plot(x, y, x.T, y.T, **kwargs) - if quad_as_tri_alpha > 0: - # Assumes no quad mask. - xmid = 0.25*(x[:-1, :-1] + x[1:, :-1] + x[:-1, 1:] + x[1:, 1:]) - ymid = 0.25*(y[:-1, :-1] + y[1:, :-1] + y[:-1, 1:] + y[1:, 1:]) - kwargs["alpha"] = quad_as_tri_alpha - ax.plot( - np.stack((x[:-1, :-1], xmid, x[1:, 1:])).reshape((3, -1)), - np.stack((y[:-1, :-1], ymid, y[1:, 1:])).reshape((3, -1)), - np.stack((x[1:, :-1], xmid, x[:-1, 1:])).reshape((3, -1)), - np.stack((y[1:, :-1], ymid, y[:-1, 1:])).reshape((3, -1)), - **kwargs) - if point_color is not None: - ax.plot(x, y, color=point_color, alpha=alpha, marker="o", lw=0) - ax._need_autoscale = True - - def lines( - self, - lines: cpy.LineReturn, - line_type: LineType, - ax: Axes | int = 0, - color: str = "C0", - alpha: float = 1.0, - linewidth: float = 1, - ) -> None: - """Plot contour lines on a single Axes. - - Args: - lines (sequence of arrays): Contour line data as returned by - :func:`~contourpy.ContourGenerator.lines`. 
- line_type (LineType): Type of ``lines`` data, as returned by - :attr:`~contourpy.ContourGenerator.line_type`. - ax (int or Matplotlib Axes, optional): Which Axes to plot on, default ``0``. - color (str, optional): Color to plot lines. May be a string color or the letter ``"C"`` - followed by an integer in the range ``"C0"`` to ``"C9"`` to use a color from the - ``tab10`` colormap. Default ``"C0"``. - alpha (float, optional): Opacity to plot lines with, default ``1.0``. - linewidth (float, optional): Width of lines, default ``1``. - """ - ax = self._get_ax(ax) - paths = lines_to_mpl_paths(lines, line_type) - collection = mcollections.PathCollection( - paths, facecolors="none", edgecolors=color, lw=linewidth, alpha=alpha) - ax.add_collection(collection) - ax._need_autoscale = True - - def mask( - self, - x: ArrayLike, - y: ArrayLike, - z: ArrayLike | np.ma.MaskedArray[Any, Any], - ax: Axes | int = 0, - color: str = "black", - ) -> None: - """Plot masked out grid points as circles on a single Axes. - - Args: - x (array-like of shape (ny, nx) or (nx,)): The x-coordinates of the grid points. - y (array-like of shape (ny, nx) or (ny,)): The y-coordinates of the grid points. - z (masked array of shape (ny, nx): z-values. - ax (int or Matplotlib Axes, optional): Which Axes to plot on, default ``0``. - color (str, optional): Circle color, default ``"black"``. - """ - mask = np.ma.getmask(z) # type: ignore[no-untyped-call] - if mask is np.ma.nomask: - return - ax = self._get_ax(ax) - x, y = self._grid_as_2d(x, y) - ax.plot(x[mask], y[mask], "o", c=color) - - def save(self, filename: str, transparent: bool = False) -> None: - """Save plots to SVG or PNG file. - - Args: - filename (str): Filename to save to. - transparent (bool, optional): Whether background should be transparent, default - ``False``. - """ - self._autoscale() - self._fig.savefig(filename, transparent=transparent) - - def save_to_buffer(self) -> io.BytesIO: - """Save plots to an ``io.BytesIO`` buffer. - - Return: - BytesIO: PNG image buffer. - """ - self._autoscale() - buf = io.BytesIO() - self._fig.savefig(buf, format="png") - buf.seek(0) - return buf - - def show(self) -> None: - """Show plots in an interactive window, in the usual Matplotlib manner. - """ - self._autoscale() - plt.show() - - def title(self, title: str, ax: Axes | int = 0, color: str | None = None) -> None: - """Set the title of a single Axes. - - Args: - title (str): Title text. - ax (int or Matplotlib Axes, optional): Which Axes to set the title of, default ``0``. - color (str, optional): Color to set title. May be a string color or the letter ``"C"`` - followed by an integer in the range ``"C0"`` to ``"C9"`` to use a color from the - ``tab10`` colormap. Default is ``None`` which uses Matplotlib's default title color - that depends on the stylesheet in use. - """ - if color: - self._get_ax(ax).set_title(title, color=color) - else: - self._get_ax(ax).set_title(title) - - def z_values( - self, - x: ArrayLike, - y: ArrayLike, - z: ArrayLike, - ax: Axes | int = 0, - color: str = "green", - fmt: str = ".1f", - quad_as_tri: bool = False, - ) -> None: - """Show ``z`` values on a single Axes. - - Args: - x (array-like of shape (ny, nx) or (nx,)): The x-coordinates of the grid points. - y (array-like of shape (ny, nx) or (ny,)): The y-coordinates of the grid points. - z (array-like of shape (ny, nx): z-values. - ax (int or Matplotlib Axes, optional): Which Axes to plot on, default ``0``. - color (str, optional): Color of added text. 
May be a string color or the letter ``"C"`` - followed by an integer in the range ``"C0"`` to ``"C9"`` to use a color from the - ``tab10`` colormap. Default ``"green"``. - fmt (str, optional): Format to display z-values, default ``".1f"``. - quad_as_tri (bool, optional): Whether to show z-values at the ``quad_as_tri`` centers - of quads. - - Warning: - ``quad_as_tri=True`` shows z-values for all quads, even if masked. - """ - ax = self._get_ax(ax) - x, y = self._grid_as_2d(x, y) - z = np.asarray(z) - ny, nx = z.shape - for j in range(ny): - for i in range(nx): - ax.text(x[j, i], y[j, i], f"{z[j, i]:{fmt}}", ha="center", va="center", - color=color, clip_on=True) - if quad_as_tri: - for j in range(ny-1): - for i in range(nx-1): - xx = np.mean(x[j:j+2, i:i+2]) - yy = np.mean(y[j:j+2, i:i+2]) - zz = np.mean(z[j:j+2, i:i+2]) - ax.text(xx, yy, f"{zz:{fmt}}", ha="center", va="center", color=color, - clip_on=True) - - -class MplTestRenderer(MplRenderer): - """Test renderer implemented using Matplotlib. - - No whitespace around plots and no spines/ticks displayed. - Uses Agg backend, so can only save to file/buffer, cannot call ``show()``. - """ - def __init__( - self, - nrows: int = 1, - ncols: int = 1, - figsize: tuple[float, float] = (9, 9), - ) -> None: - gridspec = { - "left": 0.01, - "right": 0.99, - "top": 0.99, - "bottom": 0.01, - "wspace": 0.01, - "hspace": 0.01, - } - super().__init__( - nrows, ncols, figsize, show_frame=True, backend="Agg", gridspec_kw=gridspec, - ) - - for ax in self._axes: - ax.set_xmargin(0.0) - ax.set_ymargin(0.0) - ax.set_xticks([]) - ax.set_yticks([]) - - self._want_tight = False - - -class MplDebugRenderer(MplRenderer): - """Debug renderer implemented using Matplotlib. - - Extends ``MplRenderer`` to add extra information to help in debugging such as markers, arrows, - text, etc. - """ - def __init__( - self, - nrows: int = 1, - ncols: int = 1, - figsize: tuple[float, float] = (9, 9), - show_frame: bool = True, - ) -> None: - super().__init__(nrows, ncols, figsize, show_frame) - - def _arrow( - self, - ax: Axes, - line_start: cpy.CoordinateArray, - line_end: cpy.CoordinateArray, - color: str, - alpha: float, - arrow_size: float, - ) -> None: - mid = 0.5*(line_start + line_end) - along = line_end - line_start - along /= np.sqrt(np.dot(along, along)) # Unit vector. 
- right = np.asarray((along[1], -along[0])) - arrow = np.stack(( - mid - (along*0.5 - right)*arrow_size, - mid + along*0.5*arrow_size, - mid - (along*0.5 + right)*arrow_size, - )) - ax.plot(arrow[:, 0], arrow[:, 1], "-", c=color, alpha=alpha) - - def _filled_to_lists_of_points_and_offsets( - self, - filled: cpy.FillReturn, - fill_type: FillType, - ) -> tuple[list[cpy.PointArray], list[cpy.OffsetArray]]: - if fill_type == FillType.OuterCode: - if TYPE_CHECKING: - filled = cast(cpy.FillReturn_OuterCode, filled) - all_points = filled[0] - all_offsets = [mpl_codes_to_offsets(codes) for codes in filled[1]] - elif fill_type == FillType.ChunkCombinedCode: - if TYPE_CHECKING: - filled = cast(cpy.FillReturn_ChunkCombinedCode, filled) - all_points = [points for points in filled[0] if points is not None] - all_offsets = [mpl_codes_to_offsets(codes) for codes in filled[1] if codes is not None] - elif fill_type == FillType.OuterOffset: - if TYPE_CHECKING: - filled = cast(cpy.FillReturn_OuterOffset, filled) - all_points = filled[0] - all_offsets = filled[1] - elif fill_type == FillType.ChunkCombinedOffset: - if TYPE_CHECKING: - filled = cast(cpy.FillReturn_ChunkCombinedOffset, filled) - all_points = [points for points in filled[0] if points is not None] - all_offsets = [offsets for offsets in filled[1] if offsets is not None] - elif fill_type == FillType.ChunkCombinedCodeOffset: - if TYPE_CHECKING: - filled = cast(cpy.FillReturn_ChunkCombinedCodeOffset, filled) - all_points = [] - all_offsets = [] - for points, codes, outer_offsets in zip(*filled): - if points is None: - continue - if TYPE_CHECKING: - assert codes is not None and outer_offsets is not None - all_points += np.split(points, outer_offsets[1:-1]) - all_codes = np.split(codes, outer_offsets[1:-1]) - all_offsets += [mpl_codes_to_offsets(codes) for codes in all_codes] - elif fill_type == FillType.ChunkCombinedOffsetOffset: - if TYPE_CHECKING: - filled = cast(cpy.FillReturn_ChunkCombinedOffsetOffset, filled) - all_points = [] - all_offsets = [] - for points, offsets, outer_offsets in zip(*filled): - if points is None: - continue - if TYPE_CHECKING: - assert offsets is not None and outer_offsets is not None - for i in range(len(outer_offsets)-1): - offs = offsets[outer_offsets[i]:outer_offsets[i+1]+1] - all_points.append(points[offs[0]:offs[-1]]) - all_offsets.append(offs - offs[0]) - else: - raise RuntimeError(f"Rendering FillType {fill_type} not implemented") - - return all_points, all_offsets - - def _lines_to_list_of_points( - self, lines: cpy.LineReturn, line_type: LineType, - ) -> list[cpy.PointArray]: - if line_type == LineType.Separate: - if TYPE_CHECKING: - lines = cast(cpy.LineReturn_Separate, lines) - all_lines = lines - elif line_type == LineType.SeparateCode: - if TYPE_CHECKING: - lines = cast(cpy.LineReturn_SeparateCode, lines) - all_lines = lines[0] - elif line_type == LineType.ChunkCombinedCode: - if TYPE_CHECKING: - lines = cast(cpy.LineReturn_ChunkCombinedCode, lines) - all_lines = [] - for points, codes in zip(*lines): - if points is not None: - if TYPE_CHECKING: - assert codes is not None - offsets = mpl_codes_to_offsets(codes) - for i in range(len(offsets)-1): - all_lines.append(points[offsets[i]:offsets[i+1]]) - elif line_type == LineType.ChunkCombinedOffset: - if TYPE_CHECKING: - lines = cast(cpy.LineReturn_ChunkCombinedOffset, lines) - all_lines = [] - for points, all_offsets in zip(*lines): - if points is not None: - if TYPE_CHECKING: - assert all_offsets is not None - for i in range(len(all_offsets)-1): - 
all_lines.append(points[all_offsets[i]:all_offsets[i+1]]) - else: - raise RuntimeError(f"Rendering LineType {line_type} not implemented") - - return all_lines - - def filled( - self, - filled: cpy.FillReturn, - fill_type: FillType, - ax: Axes | int = 0, - color: str = "C1", - alpha: float = 0.7, - line_color: str = "C0", - line_alpha: float = 0.7, - point_color: str = "C0", - start_point_color: str = "red", - arrow_size: float = 0.1, - ) -> None: - super().filled(filled, fill_type, ax, color, alpha) - - if line_color is None and point_color is None: - return - - ax = self._get_ax(ax) - all_points, all_offsets = self._filled_to_lists_of_points_and_offsets(filled, fill_type) - - # Lines. - if line_color is not None: - for points, offsets in zip(all_points, all_offsets): - for start, end in zip(offsets[:-1], offsets[1:]): - xys = points[start:end] - ax.plot(xys[:, 0], xys[:, 1], c=line_color, alpha=line_alpha) - - if arrow_size > 0.0: - n = len(xys) - for i in range(n-1): - self._arrow(ax, xys[i], xys[i+1], line_color, line_alpha, arrow_size) - - # Points. - if point_color is not None: - for points, offsets in zip(all_points, all_offsets): - mask = np.ones(offsets[-1], dtype=bool) - mask[offsets[1:]-1] = False # Exclude end points. - if start_point_color is not None: - start_indices = offsets[:-1] - mask[start_indices] = False # Exclude start points. - ax.plot( - points[:, 0][mask], points[:, 1][mask], "o", c=point_color, alpha=line_alpha) - - if start_point_color is not None: - ax.plot(points[:, 0][start_indices], points[:, 1][start_indices], "o", - c=start_point_color, alpha=line_alpha) - - def lines( - self, - lines: cpy.LineReturn, - line_type: LineType, - ax: Axes | int = 0, - color: str = "C0", - alpha: float = 1.0, - linewidth: float = 1, - point_color: str = "C0", - start_point_color: str = "red", - arrow_size: float = 0.1, - ) -> None: - super().lines(lines, line_type, ax, color, alpha, linewidth) - - if arrow_size == 0.0 and point_color is None: - return - - ax = self._get_ax(ax) - all_lines = self._lines_to_list_of_points(lines, line_type) - - if arrow_size > 0.0: - for line in all_lines: - for i in range(len(line)-1): - self._arrow(ax, line[i], line[i+1], color, alpha, arrow_size) - - if point_color is not None: - for line in all_lines: - start_index = 0 - end_index = len(line) - if start_point_color is not None: - ax.plot(line[0, 0], line[0, 1], "o", c=start_point_color, alpha=alpha) - start_index = 1 - if line[0][0] == line[-1][0] and line[0][1] == line[-1][1]: - end_index -= 1 - ax.plot(line[start_index:end_index, 0], line[start_index:end_index, 1], "o", - c=color, alpha=alpha) - - def point_numbers( - self, - x: ArrayLike, - y: ArrayLike, - z: ArrayLike, - ax: Axes | int = 0, - color: str = "red", - ) -> None: - ax = self._get_ax(ax) - x, y = self._grid_as_2d(x, y) - z = np.asarray(z) - ny, nx = z.shape - for j in range(ny): - for i in range(nx): - quad = i + j*nx - ax.text(x[j, i], y[j, i], str(quad), ha="right", va="top", color=color, - clip_on=True) - - def quad_numbers( - self, - x: ArrayLike, - y: ArrayLike, - z: ArrayLike, - ax: Axes | int = 0, - color: str = "blue", - ) -> None: - ax = self._get_ax(ax) - x, y = self._grid_as_2d(x, y) - z = np.asarray(z) - ny, nx = z.shape - for j in range(1, ny): - for i in range(1, nx): - quad = i + j*nx - xmid = x[j-1:j+1, i-1:i+1].mean() - ymid = y[j-1:j+1, i-1:i+1].mean() - ax.text(xmid, ymid, str(quad), ha="center", va="center", color=color, clip_on=True) - - def z_levels( - self, - x: ArrayLike, - y: ArrayLike, - z: ArrayLike, - 
lower_level: float, - upper_level: float | None = None, - ax: Axes | int = 0, - color: str = "green", - ) -> None: - ax = self._get_ax(ax) - x, y = self._grid_as_2d(x, y) - z = np.asarray(z) - ny, nx = z.shape - for j in range(ny): - for i in range(nx): - zz = z[j, i] - if upper_level is not None and zz > upper_level: - z_level = 2 - elif zz > lower_level: - z_level = 1 - else: - z_level = 0 - ax.text(x[j, i], y[j, i], z_level, ha="left", va="bottom", color=color, - clip_on=True) diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/fontTools/ttLib/tables/G_P_K_G_.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/fontTools/ttLib/tables/G_P_K_G_.py deleted file mode 100644 index eed34d92105926dcdb988ef345e8421a93b85518..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/fontTools/ttLib/tables/G_P_K_G_.py +++ /dev/null @@ -1,126 +0,0 @@ -from fontTools.misc import sstruct -from fontTools.misc.textTools import bytesjoin, safeEval, readHex -from . import DefaultTable -import sys -import array - -GPKGFormat = """ - > # big endian - version: H - flags: H - numGMAPs: H - numGlyplets: H -""" -# psFontName is a byte string which follows the record above. This is zero padded -# to the beginning of the records array. The recordsOffsst is 32 bit aligned. - - -class table_G_P_K_G_(DefaultTable.DefaultTable): - def decompile(self, data, ttFont): - dummy, newData = sstruct.unpack2(GPKGFormat, data, self) - - GMAPoffsets = array.array("I") - endPos = (self.numGMAPs + 1) * 4 - GMAPoffsets.frombytes(newData[:endPos]) - if sys.byteorder != "big": - GMAPoffsets.byteswap() - self.GMAPs = [] - for i in range(self.numGMAPs): - start = GMAPoffsets[i] - end = GMAPoffsets[i + 1] - self.GMAPs.append(data[start:end]) - pos = endPos - endPos = pos + (self.numGlyplets + 1) * 4 - glyphletOffsets = array.array("I") - glyphletOffsets.frombytes(newData[pos:endPos]) - if sys.byteorder != "big": - glyphletOffsets.byteswap() - self.glyphlets = [] - for i in range(self.numGlyplets): - start = glyphletOffsets[i] - end = glyphletOffsets[i + 1] - self.glyphlets.append(data[start:end]) - - def compile(self, ttFont): - self.numGMAPs = len(self.GMAPs) - self.numGlyplets = len(self.glyphlets) - GMAPoffsets = [0] * (self.numGMAPs + 1) - glyphletOffsets = [0] * (self.numGlyplets + 1) - - dataList = [sstruct.pack(GPKGFormat, self)] - - pos = len(dataList[0]) + (self.numGMAPs + 1) * 4 + (self.numGlyplets + 1) * 4 - GMAPoffsets[0] = pos - for i in range(1, self.numGMAPs + 1): - pos += len(self.GMAPs[i - 1]) - GMAPoffsets[i] = pos - gmapArray = array.array("I", GMAPoffsets) - if sys.byteorder != "big": - gmapArray.byteswap() - dataList.append(gmapArray.tobytes()) - - glyphletOffsets[0] = pos - for i in range(1, self.numGlyplets + 1): - pos += len(self.glyphlets[i - 1]) - glyphletOffsets[i] = pos - glyphletArray = array.array("I", glyphletOffsets) - if sys.byteorder != "big": - glyphletArray.byteswap() - dataList.append(glyphletArray.tobytes()) - dataList += self.GMAPs - dataList += self.glyphlets - data = bytesjoin(dataList) - return data - - def toXML(self, writer, ttFont): - writer.comment("Most of this table will be recalculated by the compiler") - writer.newline() - formatstring, names, fixes = sstruct.getformat(GPKGFormat) - for name in names: - value = getattr(self, name) - writer.simpletag(name, value=value) - writer.newline() - - writer.begintag("GMAPs") - writer.newline() - for gmapData in self.GMAPs: - writer.begintag("hexdata") - 
writer.newline() - writer.dumphex(gmapData) - writer.endtag("hexdata") - writer.newline() - writer.endtag("GMAPs") - writer.newline() - - writer.begintag("glyphlets") - writer.newline() - for glyphletData in self.glyphlets: - writer.begintag("hexdata") - writer.newline() - writer.dumphex(glyphletData) - writer.endtag("hexdata") - writer.newline() - writer.endtag("glyphlets") - writer.newline() - - def fromXML(self, name, attrs, content, ttFont): - if name == "GMAPs": - if not hasattr(self, "GMAPs"): - self.GMAPs = [] - for element in content: - if isinstance(element, str): - continue - itemName, itemAttrs, itemContent = element - if itemName == "hexdata": - self.GMAPs.append(readHex(itemContent)) - elif name == "glyphlets": - if not hasattr(self, "glyphlets"): - self.glyphlets = [] - for element in content: - if isinstance(element, str): - continue - itemName, itemAttrs, itemContent = element - if itemName == "hexdata": - self.glyphlets.append(readHex(itemContent)) - else: - setattr(self, name, safeEval(attrs["value"])) diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/fontTools/ttLib/tables/_p_o_s_t.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/fontTools/ttLib/tables/_p_o_s_t.py deleted file mode 100644 index dba637117a0ac148af65c75853dd3bffbbbd1154..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/fontTools/ttLib/tables/_p_o_s_t.py +++ /dev/null @@ -1,308 +0,0 @@ -from fontTools import ttLib -from fontTools.ttLib.standardGlyphOrder import standardGlyphOrder -from fontTools.misc import sstruct -from fontTools.misc.textTools import bytechr, byteord, tobytes, tostr, safeEval, readHex -from . import DefaultTable -import sys -import struct -import array -import logging - -log = logging.getLogger(__name__) - -postFormat = """ - > - formatType: 16.16F - italicAngle: 16.16F # italic angle in degrees - underlinePosition: h - underlineThickness: h - isFixedPitch: L - minMemType42: L # minimum memory if TrueType font is downloaded - maxMemType42: L # maximum memory if TrueType font is downloaded - minMemType1: L # minimum memory if Type1 font is downloaded - maxMemType1: L # maximum memory if Type1 font is downloaded -""" - -postFormatSize = sstruct.calcsize(postFormat) - - -class table__p_o_s_t(DefaultTable.DefaultTable): - def decompile(self, data, ttFont): - sstruct.unpack(postFormat, data[:postFormatSize], self) - data = data[postFormatSize:] - if self.formatType == 1.0: - self.decode_format_1_0(data, ttFont) - elif self.formatType == 2.0: - self.decode_format_2_0(data, ttFont) - elif self.formatType == 3.0: - self.decode_format_3_0(data, ttFont) - elif self.formatType == 4.0: - self.decode_format_4_0(data, ttFont) - else: - # supported format - raise ttLib.TTLibError( - "'post' table format %f not supported" % self.formatType - ) - - def compile(self, ttFont): - data = sstruct.pack(postFormat, self) - if self.formatType == 1.0: - pass # we're done - elif self.formatType == 2.0: - data = data + self.encode_format_2_0(ttFont) - elif self.formatType == 3.0: - pass # we're done - elif self.formatType == 4.0: - data = data + self.encode_format_4_0(ttFont) - else: - # supported format - raise ttLib.TTLibError( - "'post' table format %f not supported" % self.formatType - ) - return data - - def getGlyphOrder(self): - """This function will get called by a ttLib.TTFont instance. - Do not call this function yourself, use TTFont().getGlyphOrder() - or its relatives instead! 
- """ - if not hasattr(self, "glyphOrder"): - raise ttLib.TTLibError("illegal use of getGlyphOrder()") - glyphOrder = self.glyphOrder - del self.glyphOrder - return glyphOrder - - def decode_format_1_0(self, data, ttFont): - self.glyphOrder = standardGlyphOrder[: ttFont["maxp"].numGlyphs] - - def decode_format_2_0(self, data, ttFont): - (numGlyphs,) = struct.unpack(">H", data[:2]) - numGlyphs = int(numGlyphs) - if numGlyphs > ttFont["maxp"].numGlyphs: - # Assume the numGlyphs field is bogus, so sync with maxp. - # I've seen this in one font, and if the assumption is - # wrong elsewhere, well, so be it: it's hard enough to - # work around _one_ non-conforming post format... - numGlyphs = ttFont["maxp"].numGlyphs - data = data[2:] - indices = array.array("H") - indices.frombytes(data[: 2 * numGlyphs]) - if sys.byteorder != "big": - indices.byteswap() - data = data[2 * numGlyphs :] - maxIndex = max(indices) - self.extraNames = extraNames = unpackPStrings(data, maxIndex - 257) - self.glyphOrder = glyphOrder = [""] * int(ttFont["maxp"].numGlyphs) - for glyphID in range(numGlyphs): - index = indices[glyphID] - if index > 257: - try: - name = extraNames[index - 258] - except IndexError: - name = "" - else: - # fetch names from standard list - name = standardGlyphOrder[index] - glyphOrder[glyphID] = name - self.build_psNameMapping(ttFont) - - def build_psNameMapping(self, ttFont): - mapping = {} - allNames = {} - for i in range(ttFont["maxp"].numGlyphs): - glyphName = psName = self.glyphOrder[i] - if glyphName == "": - glyphName = "glyph%.5d" % i - if glyphName in allNames: - # make up a new glyphName that's unique - n = allNames[glyphName] - while (glyphName + "#" + str(n)) in allNames: - n += 1 - allNames[glyphName] = n + 1 - glyphName = glyphName + "#" + str(n) - - self.glyphOrder[i] = glyphName - allNames[glyphName] = 1 - if glyphName != psName: - mapping[glyphName] = psName - - self.mapping = mapping - - def decode_format_3_0(self, data, ttFont): - # Setting self.glyphOrder to None will cause the TTFont object - # try and construct glyph names from a Unicode cmap table. - self.glyphOrder = None - - def decode_format_4_0(self, data, ttFont): - from fontTools import agl - - numGlyphs = ttFont["maxp"].numGlyphs - indices = array.array("H") - indices.frombytes(data) - if sys.byteorder != "big": - indices.byteswap() - # In some older fonts, the size of the post table doesn't match - # the number of glyphs. Sometimes it's bigger, sometimes smaller. 
- self.glyphOrder = glyphOrder = [""] * int(numGlyphs) - for i in range(min(len(indices), numGlyphs)): - if indices[i] == 0xFFFF: - self.glyphOrder[i] = "" - elif indices[i] in agl.UV2AGL: - self.glyphOrder[i] = agl.UV2AGL[indices[i]] - else: - self.glyphOrder[i] = "uni%04X" % indices[i] - self.build_psNameMapping(ttFont) - - def encode_format_2_0(self, ttFont): - numGlyphs = ttFont["maxp"].numGlyphs - glyphOrder = ttFont.getGlyphOrder() - assert len(glyphOrder) == numGlyphs - indices = array.array("H") - extraDict = {} - extraNames = self.extraNames = [ - n for n in self.extraNames if n not in standardGlyphOrder - ] - for i in range(len(extraNames)): - extraDict[extraNames[i]] = i - for glyphID in range(numGlyphs): - glyphName = glyphOrder[glyphID] - if glyphName in self.mapping: - psName = self.mapping[glyphName] - else: - psName = glyphName - if psName in extraDict: - index = 258 + extraDict[psName] - elif psName in standardGlyphOrder: - index = standardGlyphOrder.index(psName) - else: - index = 258 + len(extraNames) - extraDict[psName] = len(extraNames) - extraNames.append(psName) - indices.append(index) - if sys.byteorder != "big": - indices.byteswap() - return ( - struct.pack(">H", numGlyphs) + indices.tobytes() + packPStrings(extraNames) - ) - - def encode_format_4_0(self, ttFont): - from fontTools import agl - - numGlyphs = ttFont["maxp"].numGlyphs - glyphOrder = ttFont.getGlyphOrder() - assert len(glyphOrder) == numGlyphs - indices = array.array("H") - for glyphID in glyphOrder: - glyphID = glyphID.split("#")[0] - if glyphID in agl.AGL2UV: - indices.append(agl.AGL2UV[glyphID]) - elif len(glyphID) == 7 and glyphID[:3] == "uni": - indices.append(int(glyphID[3:], 16)) - else: - indices.append(0xFFFF) - if sys.byteorder != "big": - indices.byteswap() - return indices.tobytes() - - def toXML(self, writer, ttFont): - formatstring, names, fixes = sstruct.getformat(postFormat) - for name in names: - value = getattr(self, name) - writer.simpletag(name, value=value) - writer.newline() - if hasattr(self, "mapping"): - writer.begintag("psNames") - writer.newline() - writer.comment( - "This file uses unique glyph names based on the information\n" - "found in the 'post' table. Since these names might not be unique,\n" - "we have to invent artificial names in case of clashes. In order to\n" - "be able to retain the original information, we need a name to\n" - "ps name mapping for those cases where they differ. 
That's what\n" - "you see below.\n" - ) - writer.newline() - items = sorted(self.mapping.items()) - for name, psName in items: - writer.simpletag("psName", name=name, psName=psName) - writer.newline() - writer.endtag("psNames") - writer.newline() - if hasattr(self, "extraNames"): - writer.begintag("extraNames") - writer.newline() - writer.comment( - "following are the name that are not taken from the standard Mac glyph order" - ) - writer.newline() - for name in self.extraNames: - writer.simpletag("psName", name=name) - writer.newline() - writer.endtag("extraNames") - writer.newline() - if hasattr(self, "data"): - writer.begintag("hexdata") - writer.newline() - writer.dumphex(self.data) - writer.endtag("hexdata") - writer.newline() - - def fromXML(self, name, attrs, content, ttFont): - if name not in ("psNames", "extraNames", "hexdata"): - setattr(self, name, safeEval(attrs["value"])) - elif name == "psNames": - self.mapping = {} - for element in content: - if not isinstance(element, tuple): - continue - name, attrs, content = element - if name == "psName": - self.mapping[attrs["name"]] = attrs["psName"] - elif name == "extraNames": - self.extraNames = [] - for element in content: - if not isinstance(element, tuple): - continue - name, attrs, content = element - if name == "psName": - self.extraNames.append(attrs["name"]) - else: - self.data = readHex(content) - - -def unpackPStrings(data, n): - # extract n Pascal strings from data. - # if there is not enough data, use "" - - strings = [] - index = 0 - dataLen = len(data) - - for _ in range(n): - if dataLen <= index: - length = 0 - else: - length = byteord(data[index]) - index += 1 - - if dataLen <= index + length - 1: - name = "" - else: - name = tostr(data[index : index + length], encoding="latin1") - strings.append(name) - index += length - - if index < dataLen: - log.warning("%d extra bytes in post.stringData array", dataLen - index) - - elif dataLen < index: - log.warning("not enough data in post.stringData array") - - return strings - - -def packPStrings(strings): - data = b"" - for s in strings: - data = data + bytechr(len(s)) + tobytes(s, encoding="latin1") - return data diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/fsspec/compression.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/fsspec/compression.py deleted file mode 100644 index 53b7426e2a5d436cb64d28160bd2b32aee2f5966..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/fsspec/compression.py +++ /dev/null @@ -1,174 +0,0 @@ -"""Helper functions for a standard streaming compression API""" -from zipfile import ZipFile - -import fsspec.utils -from fsspec.spec import AbstractBufferedFile - - -def noop_file(file, mode, **kwargs): - return file - - -# TODO: files should also be available as contexts -# should be functions of the form func(infile, mode=, **kwargs) -> file-like -compr = {None: noop_file} - - -def register_compression(name, callback, extensions, force=False): - """Register an "inferable" file compression type. - - Registers transparent file compression type for use with fsspec.open. - Compression can be specified by name in open, or "infer"-ed for any files - ending with the given extensions. - - Args: - name: (str) The compression type name. Eg. "gzip". - callback: A callable of form (infile, mode, **kwargs) -> file-like. - Accepts an input file-like object, the target mode and kwargs. - Returns a wrapped file-like object. 
- extensions: (str, Iterable[str]) A file extension, or list of file - extensions for which to infer this compression scheme. Eg. "gz". - force: (bool) Force re-registration of compression type or extensions. - - Raises: - ValueError: If name or extensions already registered, and not force. - - """ - if isinstance(extensions, str): - extensions = [extensions] - - # Validate registration - if name in compr and not force: - raise ValueError(f"Duplicate compression registration: {name}") - - for ext in extensions: - if ext in fsspec.utils.compressions and not force: - raise ValueError(f"Duplicate compression file extension: {ext} ({name})") - - compr[name] = callback - - for ext in extensions: - fsspec.utils.compressions[ext] = name - - -def unzip(infile, mode="rb", filename=None, **kwargs): - if "r" not in mode: - filename = filename or "file" - z = ZipFile(infile, mode="w", **kwargs) - fo = z.open(filename, mode="w") - fo.close = lambda closer=fo.close: closer() or z.close() - return fo - z = ZipFile(infile) - if filename is None: - filename = z.namelist()[0] - return z.open(filename, mode="r", **kwargs) - - -register_compression("zip", unzip, "zip") - -try: - from bz2 import BZ2File -except ImportError: - pass -else: - register_compression("bz2", BZ2File, "bz2") - -try: # pragma: no cover - from isal import igzip - - def isal(infile, mode="rb", **kwargs): - return igzip.IGzipFile(fileobj=infile, mode=mode, **kwargs) - - register_compression("gzip", isal, "gz") -except ImportError: - from gzip import GzipFile - - register_compression( - "gzip", lambda f, **kwargs: GzipFile(fileobj=f, **kwargs), "gz" - ) - -try: - from lzma import LZMAFile - - register_compression("lzma", LZMAFile, "xz") - register_compression("xz", LZMAFile, "xz", force=True) -except ImportError: - pass - -try: - import lzmaffi - - register_compression("lzma", lzmaffi.LZMAFile, "xz", force=True) - register_compression("xz", lzmaffi.LZMAFile, "xz", force=True) -except ImportError: - pass - - -class SnappyFile(AbstractBufferedFile): - def __init__(self, infile, mode, **kwargs): - import snappy - - super().__init__( - fs=None, path="snappy", mode=mode.strip("b") + "b", size=999999999, **kwargs - ) - self.infile = infile - if "r" in mode: - self.codec = snappy.StreamDecompressor() - else: - self.codec = snappy.StreamCompressor() - - def _upload_chunk(self, final=False): - self.buffer.seek(0) - out = self.codec.add_chunk(self.buffer.read()) - self.infile.write(out) - return True - - def seek(self, loc, whence=0): - raise NotImplementedError("SnappyFile is not seekable") - - def seekable(self): - return False - - def _fetch_range(self, start, end): - """Get the specified set of bytes from remote""" - data = self.infile.read(end - start) - return self.codec.decompress(data) - - -try: - import snappy - - snappy.compress - # Snappy may use the .sz file extension, but this is not part of the - # standard implementation. 
- register_compression("snappy", SnappyFile, []) - -except (ImportError, NameError, AttributeError): - pass - -try: - import lz4.frame - - register_compression("lz4", lz4.frame.open, "lz4") -except ImportError: - pass - -try: - import zstandard as zstd - - def zstandard_file(infile, mode="rb"): - if "r" in mode: - cctx = zstd.ZstdDecompressor() - return cctx.stream_reader(infile) - else: - cctx = zstd.ZstdCompressor(level=10) - return cctx.stream_writer(infile) - - register_compression("zstd", zstandard_file, "zst") -except ImportError: - pass - - -def available_compressions(): - """Return a list of the implemented compressions.""" - return list(compr) diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/matplotlib/tests/test_getattr.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/matplotlib/tests/test_getattr.py deleted file mode 100644 index a34e82ed81ba1b4bc319ef3158fbd1d6cc493d60..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/matplotlib/tests/test_getattr.py +++ /dev/null @@ -1,35 +0,0 @@ -from importlib import import_module -from pkgutil import walk_packages - -import matplotlib -import pytest - -# Get the names of all matplotlib submodules, -# except for the unit tests and private modules. -module_names = [ - m.name - for m in walk_packages( - path=matplotlib.__path__, prefix=f'{matplotlib.__name__}.' - ) - if not m.name.startswith(__package__) - and not any(x.startswith('_') for x in m.name.split('.')) -] - - -@pytest.mark.parametrize('module_name', module_names) -@pytest.mark.filterwarnings('ignore::DeprecationWarning') -@pytest.mark.filterwarnings('ignore::ImportWarning') -def test_getattr(module_name): - """ - Test that __getattr__ methods raise AttributeError for unknown keys. - See #20822, #20855. 
- """ - try: - module = import_module(module_name) - except (ImportError, RuntimeError) as e: - # Skip modules that cannot be imported due to missing dependencies - pytest.skip(f'Cannot import {module_name} due to {e}') - - key = 'THIS_SYMBOL_SHOULD_NOT_EXIST' - if hasattr(module, key): - delattr(module, key) diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/core/include/numpy/npy_os.h b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/core/include/numpy/npy_os.h deleted file mode 100644 index 0ce5d78b42c0e53c660654e297446d7811901aa2..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/core/include/numpy/npy_os.h +++ /dev/null @@ -1,42 +0,0 @@ -#ifndef NUMPY_CORE_INCLUDE_NUMPY_NPY_OS_H_ -#define NUMPY_CORE_INCLUDE_NUMPY_NPY_OS_H_ - -#if defined(linux) || defined(__linux) || defined(__linux__) - #define NPY_OS_LINUX -#elif defined(__FreeBSD__) || defined(__NetBSD__) || \ - defined(__OpenBSD__) || defined(__DragonFly__) - #define NPY_OS_BSD - #ifdef __FreeBSD__ - #define NPY_OS_FREEBSD - #elif defined(__NetBSD__) - #define NPY_OS_NETBSD - #elif defined(__OpenBSD__) - #define NPY_OS_OPENBSD - #elif defined(__DragonFly__) - #define NPY_OS_DRAGONFLY - #endif -#elif defined(sun) || defined(__sun) - #define NPY_OS_SOLARIS -#elif defined(__CYGWIN__) - #define NPY_OS_CYGWIN -/* We are on Windows.*/ -#elif defined(_WIN32) - /* We are using MinGW (64-bit or 32-bit)*/ - #if defined(__MINGW32__) || defined(__MINGW64__) - #define NPY_OS_MINGW - /* Otherwise, if _WIN64 is defined, we are targeting 64-bit Windows*/ - #elif defined(_WIN64) - #define NPY_OS_WIN64 - /* Otherwise assume we are targeting 32-bit Windows*/ - #else - #define NPY_OS_WIN32 - #endif -#elif defined(__APPLE__) - #define NPY_OS_DARWIN -#elif defined(__HAIKU__) - #define NPY_OS_HAIKU -#else - #define NPY_OS_UNKNOWN -#endif - -#endif /* NUMPY_CORE_INCLUDE_NUMPY_NPY_OS_H_ */ diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/distutils/fcompiler/gnu.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/distutils/fcompiler/gnu.py deleted file mode 100644 index 3472b5d4c0951cf4501436614a28375bea2a8cef..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/distutils/fcompiler/gnu.py +++ /dev/null @@ -1,555 +0,0 @@ -import re -import os -import sys -import warnings -import platform -import tempfile -import hashlib -import base64 -import subprocess -from subprocess import Popen, PIPE, STDOUT -from numpy.distutils.exec_command import filepath_from_subprocess_output -from numpy.distutils.fcompiler import FCompiler -from distutils.version import LooseVersion - -compilers = ['GnuFCompiler', 'Gnu95FCompiler'] - -TARGET_R = re.compile(r"Target: ([a-zA-Z0-9_\-]*)") - -# XXX: handle cross compilation - - -def is_win64(): - return sys.platform == "win32" and platform.architecture()[0] == "64bit" - - -class GnuFCompiler(FCompiler): - compiler_type = 'gnu' - compiler_aliases = ('g77', ) - description = 'GNU Fortran 77 compiler' - - def gnu_version_match(self, version_string): - """Handle the different versions of GNU fortran compilers""" - # Strip warning(s) that may be emitted by gfortran - while version_string.startswith('gfortran: warning'): - version_string =\ - version_string[version_string.find('\n') + 1:].strip() - - # Gfortran versions from after 2010 will output a simple string - # (usually "x.y", "x.y.z" or "x.y.z-q") 
for ``-dumpversion``; older - # gfortrans may still return long version strings (``-dumpversion`` was - # an alias for ``--version``) - if len(version_string) <= 20: - # Try to find a valid version string - m = re.search(r'([0-9.]+)', version_string) - if m: - # g77 provides a longer version string that starts with GNU - # Fortran - if version_string.startswith('GNU Fortran'): - return ('g77', m.group(1)) - - # gfortran only outputs a version string such as #.#.#, so check - # if the match is at the start of the string - elif m.start() == 0: - return ('gfortran', m.group(1)) - else: - # Output probably from --version, try harder: - m = re.search(r'GNU Fortran\s+95.*?([0-9-.]+)', version_string) - if m: - return ('gfortran', m.group(1)) - m = re.search( - r'GNU Fortran.*?\-?([0-9-.]+\.[0-9-.]+)', version_string) - if m: - v = m.group(1) - if v.startswith('0') or v.startswith('2') or v.startswith('3'): - # the '0' is for early g77's - return ('g77', v) - else: - # at some point in the 4.x series, the ' 95' was dropped - # from the version string - return ('gfortran', v) - - # If still nothing, raise an error to make the problem easy to find. - err = 'A valid Fortran version was not found in this string:\n' - raise ValueError(err + version_string) - - def version_match(self, version_string): - v = self.gnu_version_match(version_string) - if not v or v[0] != 'g77': - return None - return v[1] - - possible_executables = ['g77', 'f77'] - executables = { - 'version_cmd' : [None, "-dumpversion"], - 'compiler_f77' : [None, "-g", "-Wall", "-fno-second-underscore"], - 'compiler_f90' : None, # Use --fcompiler=gnu95 for f90 codes - 'compiler_fix' : None, - 'linker_so' : [None, "-g", "-Wall"], - 'archiver' : ["ar", "-cr"], - 'ranlib' : ["ranlib"], - 'linker_exe' : [None, "-g", "-Wall"] - } - module_dir_switch = None - module_include_switch = None - - # Cygwin: f771: warning: -fPIC ignored for target (all code is - # position independent) - if os.name != 'nt' and sys.platform != 'cygwin': - pic_flags = ['-fPIC'] - - # use -mno-cygwin for g77 when Python is not Cygwin-Python - if sys.platform == 'win32': - for key in ['version_cmd', 'compiler_f77', 'linker_so', 'linker_exe']: - executables[key].append('-mno-cygwin') - - g2c = 'g2c' - suggested_f90_compiler = 'gnu95' - - def get_flags_linker_so(self): - opt = self.linker_so[1:] - if sys.platform == 'darwin': - target = os.environ.get('MACOSX_DEPLOYMENT_TARGET', None) - # If MACOSX_DEPLOYMENT_TARGET is set, we simply trust the value - # and leave it alone. But, distutils will complain if the - # environment's value is different from the one in the Python - # Makefile used to build Python. We let distutils handle this - # error checking. - if not target: - # If MACOSX_DEPLOYMENT_TARGET is not set in the environment, - # we try to get it first from sysconfig and then - # fall back to setting it to 10.9 This is a reasonable default - # even when using the official Python dist and those derived - # from it. - import sysconfig - target = sysconfig.get_config_var('MACOSX_DEPLOYMENT_TARGET') - if not target: - target = '10.9' - s = f'Env. variable MACOSX_DEPLOYMENT_TARGET set to {target}' - warnings.warn(s, stacklevel=2) - os.environ['MACOSX_DEPLOYMENT_TARGET'] = str(target) - opt.extend(['-undefined', 'dynamic_lookup', '-bundle']) - else: - opt.append("-shared") - if sys.platform.startswith('sunos'): - # SunOS often has dynamically loaded symbols defined in the - # static library libg2c.a The linker doesn't like this. 
To - # ignore the problem, use the -mimpure-text flag. It isn't - # the safest thing, but seems to work. 'man gcc' says: - # ".. Instead of using -mimpure-text, you should compile all - # source code with -fpic or -fPIC." - opt.append('-mimpure-text') - return opt - - def get_libgcc_dir(self): - try: - output = subprocess.check_output(self.compiler_f77 + - ['-print-libgcc-file-name']) - except (OSError, subprocess.CalledProcessError): - pass - else: - output = filepath_from_subprocess_output(output) - return os.path.dirname(output) - return None - - def get_libgfortran_dir(self): - if sys.platform[:5] == 'linux': - libgfortran_name = 'libgfortran.so' - elif sys.platform == 'darwin': - libgfortran_name = 'libgfortran.dylib' - else: - libgfortran_name = None - - libgfortran_dir = None - if libgfortran_name: - find_lib_arg = ['-print-file-name={0}'.format(libgfortran_name)] - try: - output = subprocess.check_output( - self.compiler_f77 + find_lib_arg) - except (OSError, subprocess.CalledProcessError): - pass - else: - output = filepath_from_subprocess_output(output) - libgfortran_dir = os.path.dirname(output) - return libgfortran_dir - - def get_library_dirs(self): - opt = [] - if sys.platform[:5] != 'linux': - d = self.get_libgcc_dir() - if d: - # if windows and not cygwin, libg2c lies in a different folder - if sys.platform == 'win32' and not d.startswith('/usr/lib'): - d = os.path.normpath(d) - path = os.path.join(d, "lib%s.a" % self.g2c) - if not os.path.exists(path): - root = os.path.join(d, *((os.pardir, ) * 4)) - d2 = os.path.abspath(os.path.join(root, 'lib')) - path = os.path.join(d2, "lib%s.a" % self.g2c) - if os.path.exists(path): - opt.append(d2) - opt.append(d) - # For Macports / Linux, libgfortran and libgcc are not co-located - lib_gfortran_dir = self.get_libgfortran_dir() - if lib_gfortran_dir: - opt.append(lib_gfortran_dir) - return opt - - def get_libraries(self): - opt = [] - d = self.get_libgcc_dir() - if d is not None: - g2c = self.g2c + '-pic' - f = self.static_lib_format % (g2c, self.static_lib_extension) - if not os.path.isfile(os.path.join(d, f)): - g2c = self.g2c - else: - g2c = self.g2c - - if g2c is not None: - opt.append(g2c) - c_compiler = self.c_compiler - if sys.platform == 'win32' and c_compiler and \ - c_compiler.compiler_type == 'msvc': - opt.append('gcc') - if sys.platform == 'darwin': - opt.append('cc_dynamic') - return opt - - def get_flags_debug(self): - return ['-g'] - - def get_flags_opt(self): - v = self.get_version() - if v and v <= '3.3.3': - # With this compiler version building Fortran BLAS/LAPACK - # with -O3 caused failures in lib.lapack heevr,syevr tests. 
- opt = ['-O2'] - else: - opt = ['-O3'] - opt.append('-funroll-loops') - return opt - - def _c_arch_flags(self): - """ Return detected arch flags from CFLAGS """ - import sysconfig - try: - cflags = sysconfig.get_config_vars()['CFLAGS'] - except KeyError: - return [] - arch_re = re.compile(r"-arch\s+(\w+)") - arch_flags = [] - for arch in arch_re.findall(cflags): - arch_flags += ['-arch', arch] - return arch_flags - - def get_flags_arch(self): - return [] - - def runtime_library_dir_option(self, dir): - if sys.platform == 'win32' or sys.platform == 'cygwin': - # Linux/Solaris/Unix support RPATH, Windows does not - raise NotImplementedError - - # TODO: could use -Xlinker here, if it's supported - assert "," not in dir - - if sys.platform == 'darwin': - return f'-Wl,-rpath,{dir}' - elif sys.platform.startswith(('aix', 'os400')): - # AIX RPATH is called LIBPATH - return f'-Wl,-blibpath:{dir}' - else: - return f'-Wl,-rpath={dir}' - - -class Gnu95FCompiler(GnuFCompiler): - compiler_type = 'gnu95' - compiler_aliases = ('gfortran', ) - description = 'GNU Fortran 95 compiler' - - def version_match(self, version_string): - v = self.gnu_version_match(version_string) - if not v or v[0] != 'gfortran': - return None - v = v[1] - if LooseVersion(v) >= "4": - # gcc-4 series releases do not support -mno-cygwin option - pass - else: - # use -mno-cygwin flag for gfortran when Python is not - # Cygwin-Python - if sys.platform == 'win32': - for key in [ - 'version_cmd', 'compiler_f77', 'compiler_f90', - 'compiler_fix', 'linker_so', 'linker_exe' - ]: - self.executables[key].append('-mno-cygwin') - return v - - possible_executables = ['gfortran', 'f95'] - executables = { - 'version_cmd' : ["", "-dumpversion"], - 'compiler_f77' : [None, "-Wall", "-g", "-ffixed-form", - "-fno-second-underscore"], - 'compiler_f90' : [None, "-Wall", "-g", - "-fno-second-underscore"], - 'compiler_fix' : [None, "-Wall", "-g","-ffixed-form", - "-fno-second-underscore"], - 'linker_so' : ["", "-Wall", "-g"], - 'archiver' : ["ar", "-cr"], - 'ranlib' : ["ranlib"], - 'linker_exe' : [None, "-Wall"] - } - - module_dir_switch = '-J' - module_include_switch = '-I' - - if sys.platform.startswith(('aix', 'os400')): - executables['linker_so'].append('-lpthread') - if platform.architecture()[0][:2] == '64': - for key in ['compiler_f77', 'compiler_f90','compiler_fix','linker_so', 'linker_exe']: - executables[key].append('-maix64') - - g2c = 'gfortran' - - def _universal_flags(self, cmd): - """Return a list of -arch flags for every supported architecture.""" - if not sys.platform == 'darwin': - return [] - arch_flags = [] - # get arches the C compiler gets. 
- c_archs = self._c_arch_flags() - if "i386" in c_archs: - c_archs[c_archs.index("i386")] = "i686" - # check the arches the Fortran compiler supports, and compare with - # arch flags from C compiler - for arch in ["ppc", "i686", "x86_64", "ppc64", "s390x"]: - if _can_target(cmd, arch) and arch in c_archs: - arch_flags.extend(["-arch", arch]) - return arch_flags - - def get_flags(self): - flags = GnuFCompiler.get_flags(self) - arch_flags = self._universal_flags(self.compiler_f90) - if arch_flags: - flags[:0] = arch_flags - return flags - - def get_flags_linker_so(self): - flags = GnuFCompiler.get_flags_linker_so(self) - arch_flags = self._universal_flags(self.linker_so) - if arch_flags: - flags[:0] = arch_flags - return flags - - def get_library_dirs(self): - opt = GnuFCompiler.get_library_dirs(self) - if sys.platform == 'win32': - c_compiler = self.c_compiler - if c_compiler and c_compiler.compiler_type == "msvc": - target = self.get_target() - if target: - d = os.path.normpath(self.get_libgcc_dir()) - root = os.path.join(d, *((os.pardir, ) * 4)) - path = os.path.join(root, "lib") - mingwdir = os.path.normpath(path) - if os.path.exists(os.path.join(mingwdir, "libmingwex.a")): - opt.append(mingwdir) - # For Macports / Linux, libgfortran and libgcc are not co-located - lib_gfortran_dir = self.get_libgfortran_dir() - if lib_gfortran_dir: - opt.append(lib_gfortran_dir) - return opt - - def get_libraries(self): - opt = GnuFCompiler.get_libraries(self) - if sys.platform == 'darwin': - opt.remove('cc_dynamic') - if sys.platform == 'win32': - c_compiler = self.c_compiler - if c_compiler and c_compiler.compiler_type == "msvc": - if "gcc" in opt: - i = opt.index("gcc") - opt.insert(i + 1, "mingwex") - opt.insert(i + 1, "mingw32") - c_compiler = self.c_compiler - if c_compiler and c_compiler.compiler_type == "msvc": - return [] - else: - pass - return opt - - def get_target(self): - try: - p = subprocess.Popen( - self.compiler_f77 + ['-v'], - stdin=subprocess.PIPE, - stderr=subprocess.PIPE, - ) - stdout, stderr = p.communicate() - output = (stdout or b"") + (stderr or b"") - except (OSError, subprocess.CalledProcessError): - pass - else: - output = filepath_from_subprocess_output(output) - m = TARGET_R.search(output) - if m: - return m.group(1) - return "" - - def _hash_files(self, filenames): - h = hashlib.sha1() - for fn in filenames: - with open(fn, 'rb') as f: - while True: - block = f.read(131072) - if not block: - break - h.update(block) - text = base64.b32encode(h.digest()) - text = text.decode('ascii') - return text.rstrip('=') - - def _link_wrapper_lib(self, objects, output_dir, extra_dll_dir, - chained_dlls, is_archive): - """Create a wrapper shared library for the given objects - - Return an MSVC-compatible lib - """ - - c_compiler = self.c_compiler - if c_compiler.compiler_type != "msvc": - raise ValueError("This method only supports MSVC") - - object_hash = self._hash_files(list(objects) + list(chained_dlls)) - - if is_win64(): - tag = 'win_amd64' - else: - tag = 'win32' - - basename = 'lib' + os.path.splitext( - os.path.basename(objects[0]))[0][:8] - root_name = basename + '.' 
+ object_hash + '.gfortran-' + tag - dll_name = root_name + '.dll' - def_name = root_name + '.def' - lib_name = root_name + '.lib' - dll_path = os.path.join(extra_dll_dir, dll_name) - def_path = os.path.join(output_dir, def_name) - lib_path = os.path.join(output_dir, lib_name) - - if os.path.isfile(lib_path): - # Nothing to do - return lib_path, dll_path - - if is_archive: - objects = (["-Wl,--whole-archive"] + list(objects) + - ["-Wl,--no-whole-archive"]) - self.link_shared_object( - objects, - dll_name, - output_dir=extra_dll_dir, - extra_postargs=list(chained_dlls) + [ - '-Wl,--allow-multiple-definition', - '-Wl,--output-def,' + def_path, - '-Wl,--export-all-symbols', - '-Wl,--enable-auto-import', - '-static', - '-mlong-double-64', - ]) - - # No PowerPC! - if is_win64(): - specifier = '/MACHINE:X64' - else: - specifier = '/MACHINE:X86' - - # MSVC specific code - lib_args = ['/def:' + def_path, '/OUT:' + lib_path, specifier] - if not c_compiler.initialized: - c_compiler.initialize() - c_compiler.spawn([c_compiler.lib] + lib_args) - - return lib_path, dll_path - - def can_ccompiler_link(self, compiler): - # MSVC cannot link objects compiled by GNU fortran - return compiler.compiler_type not in ("msvc", ) - - def wrap_unlinkable_objects(self, objects, output_dir, extra_dll_dir): - """ - Convert a set of object files that are not compatible with the default - linker, to a file that is compatible. - """ - if self.c_compiler.compiler_type == "msvc": - # Compile a DLL and return the lib for the DLL as - # the object. Also keep track of previous DLLs that - # we have compiled so that we can link against them. - - # If there are .a archives, assume they are self-contained - # static libraries, and build separate DLLs for each - archives = [] - plain_objects = [] - for obj in objects: - if obj.lower().endswith('.a'): - archives.append(obj) - else: - plain_objects.append(obj) - - chained_libs = [] - chained_dlls = [] - for archive in archives[::-1]: - lib, dll = self._link_wrapper_lib( - [archive], - output_dir, - extra_dll_dir, - chained_dlls=chained_dlls, - is_archive=True) - chained_libs.insert(0, lib) - chained_dlls.insert(0, dll) - - if not plain_objects: - return chained_libs - - lib, dll = self._link_wrapper_lib( - plain_objects, - output_dir, - extra_dll_dir, - chained_dlls=chained_dlls, - is_archive=False) - return [lib] + chained_libs - else: - raise ValueError("Unsupported C compiler") - - -def _can_target(cmd, arch): - """Return true if the architecture supports the -arch flag""" - newcmd = cmd[:] - fid, filename = tempfile.mkstemp(suffix=".f") - os.close(fid) - try: - d = os.path.dirname(filename) - output = os.path.splitext(filename)[0] + ".o" - try: - newcmd.extend(["-arch", arch, "-c", filename]) - p = Popen(newcmd, stderr=STDOUT, stdout=PIPE, cwd=d) - p.communicate() - return p.returncode == 0 - finally: - if os.path.exists(output): - os.remove(output) - finally: - os.remove(filename) - - -if __name__ == '__main__': - from distutils import log - from numpy.distutils import customized_fcompiler - log.set_verbosity(2) - - print(customized_fcompiler('gnu').get_version()) - try: - print(customized_fcompiler('g95').get_version()) - except Exception as e: - print(e) diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/f2py/tests/src/callback/gh17797.f90 b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/f2py/tests/src/callback/gh17797.f90 deleted file mode 100644 index 
49853afd766a90e521104081bf77236a252d3c70..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/f2py/tests/src/callback/gh17797.f90 +++ /dev/null @@ -1,7 +0,0 @@ -function gh17797(f, y) result(r) - external f - integer(8) :: r, f - integer(8), dimension(:) :: y - r = f(0) - r = r + sum(y) -end function gh17797 diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pip/_vendor/packaging/version.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pip/_vendor/packaging/version.py deleted file mode 100644 index de9a09a4ed3b078b37e7490a6686f660ae935aca..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pip/_vendor/packaging/version.py +++ /dev/null @@ -1,504 +0,0 @@ -# This file is dual licensed under the terms of the Apache License, Version -# 2.0, and the BSD License. See the LICENSE file in the root of this repository -# for complete details. - -import collections -import itertools -import re -import warnings -from typing import Callable, Iterator, List, Optional, SupportsInt, Tuple, Union - -from ._structures import Infinity, InfinityType, NegativeInfinity, NegativeInfinityType - -__all__ = ["parse", "Version", "LegacyVersion", "InvalidVersion", "VERSION_PATTERN"] - -InfiniteTypes = Union[InfinityType, NegativeInfinityType] -PrePostDevType = Union[InfiniteTypes, Tuple[str, int]] -SubLocalType = Union[InfiniteTypes, int, str] -LocalType = Union[ - NegativeInfinityType, - Tuple[ - Union[ - SubLocalType, - Tuple[SubLocalType, str], - Tuple[NegativeInfinityType, SubLocalType], - ], - ..., - ], -] -CmpKey = Tuple[ - int, Tuple[int, ...], PrePostDevType, PrePostDevType, PrePostDevType, LocalType -] -LegacyCmpKey = Tuple[int, Tuple[str, ...]] -VersionComparisonMethod = Callable[ - [Union[CmpKey, LegacyCmpKey], Union[CmpKey, LegacyCmpKey]], bool -] - -_Version = collections.namedtuple( - "_Version", ["epoch", "release", "dev", "pre", "post", "local"] -) - - -def parse(version: str) -> Union["LegacyVersion", "Version"]: - """ - Parse the given version string and return either a :class:`Version` object - or a :class:`LegacyVersion` object depending on if the given version is - a valid PEP 440 version or a legacy version. - """ - try: - return Version(version) - except InvalidVersion: - return LegacyVersion(version) - - -class InvalidVersion(ValueError): - """ - An invalid version was found, users should refer to PEP 440. - """ - - -class _BaseVersion: - _key: Union[CmpKey, LegacyCmpKey] - - def __hash__(self) -> int: - return hash(self._key) - - # Please keep the duplicated `isinstance` check - # in the six comparisons hereunder - # unless you find a way to avoid adding overhead function calls. 
- def __lt__(self, other: "_BaseVersion") -> bool: - if not isinstance(other, _BaseVersion): - return NotImplemented - - return self._key < other._key - - def __le__(self, other: "_BaseVersion") -> bool: - if not isinstance(other, _BaseVersion): - return NotImplemented - - return self._key <= other._key - - def __eq__(self, other: object) -> bool: - if not isinstance(other, _BaseVersion): - return NotImplemented - - return self._key == other._key - - def __ge__(self, other: "_BaseVersion") -> bool: - if not isinstance(other, _BaseVersion): - return NotImplemented - - return self._key >= other._key - - def __gt__(self, other: "_BaseVersion") -> bool: - if not isinstance(other, _BaseVersion): - return NotImplemented - - return self._key > other._key - - def __ne__(self, other: object) -> bool: - if not isinstance(other, _BaseVersion): - return NotImplemented - - return self._key != other._key - - -class LegacyVersion(_BaseVersion): - def __init__(self, version: str) -> None: - self._version = str(version) - self._key = _legacy_cmpkey(self._version) - - warnings.warn( - "Creating a LegacyVersion has been deprecated and will be " - "removed in the next major release", - DeprecationWarning, - ) - - def __str__(self) -> str: - return self._version - - def __repr__(self) -> str: - return f"" - - @property - def public(self) -> str: - return self._version - - @property - def base_version(self) -> str: - return self._version - - @property - def epoch(self) -> int: - return -1 - - @property - def release(self) -> None: - return None - - @property - def pre(self) -> None: - return None - - @property - def post(self) -> None: - return None - - @property - def dev(self) -> None: - return None - - @property - def local(self) -> None: - return None - - @property - def is_prerelease(self) -> bool: - return False - - @property - def is_postrelease(self) -> bool: - return False - - @property - def is_devrelease(self) -> bool: - return False - - -_legacy_version_component_re = re.compile(r"(\d+ | [a-z]+ | \.| -)", re.VERBOSE) - -_legacy_version_replacement_map = { - "pre": "c", - "preview": "c", - "-": "final-", - "rc": "c", - "dev": "@", -} - - -def _parse_version_parts(s: str) -> Iterator[str]: - for part in _legacy_version_component_re.split(s): - part = _legacy_version_replacement_map.get(part, part) - - if not part or part == ".": - continue - - if part[:1] in "0123456789": - # pad for numeric comparison - yield part.zfill(8) - else: - yield "*" + part - - # ensure that alpha/beta/candidate are before final - yield "*final" - - -def _legacy_cmpkey(version: str) -> LegacyCmpKey: - - # We hardcode an epoch of -1 here. A PEP 440 version can only have a epoch - # greater than or equal to 0. This will effectively put the LegacyVersion, - # which uses the defacto standard originally implemented by setuptools, - # as before all PEP 440 versions. - epoch = -1 - - # This scheme is taken from pkg_resources.parse_version setuptools prior to - # it's adoption of the packaging library. - parts: List[str] = [] - for part in _parse_version_parts(version.lower()): - if part.startswith("*"): - # remove "-" before a prerelease tag - if part < "*final": - while parts and parts[-1] == "*final-": - parts.pop() - - # remove trailing zeros from each series of numeric parts - while parts and parts[-1] == "00000000": - parts.pop() - - parts.append(part) - - return epoch, tuple(parts) - - -# Deliberately not anchored to the start and end of the string, to make it -# easier for 3rd party code to reuse -VERSION_PATTERN = r""" - v? 
- (?: - (?:(?P[0-9]+)!)? # epoch - (?P[0-9]+(?:\.[0-9]+)*) # release segment - (?P
<pre>                                          # pre-release
        -            [-_\.]?
-            (?P<pre_l>(a|b|c|rc|alpha|beta|pre|preview))
        -            [-_\.]?
-            (?P<pre_n>[0-9]+)?
        -        )?
-        (?P<post>                                         # post release
-            (?:-(?P<post_n1>[0-9]+))
        -            |
        -            (?:
        -                [-_\.]?
-                (?P<post_l>post|rev|r)
        -                [-_\.]?
-                (?P<post_n2>[0-9]+)?
        -            )
        -        )?
-        (?P<dev>                                          # dev release
        -            [-_\.]?
-            (?P<dev_l>dev)
        -            [-_\.]?
-            (?P<dev_n>[0-9]+)?
        -        )?
        -    )
-    (?:\+(?P<local>[a-z0-9]+(?:[-_\.][a-z0-9]+)*))?       # local version
        -"""
        -
        -
        -class Version(_BaseVersion):
        -
        -    _regex = re.compile(r"^\s*" + VERSION_PATTERN + r"\s*$", re.VERBOSE | re.IGNORECASE)
        -
        -    def __init__(self, version: str) -> None:
        -
        -        # Validate the version and parse it into pieces
        -        match = self._regex.search(version)
        -        if not match:
        -            raise InvalidVersion(f"Invalid version: '{version}'")
        -
        -        # Store the parsed out pieces of the version
        -        self._version = _Version(
        -            epoch=int(match.group("epoch")) if match.group("epoch") else 0,
        -            release=tuple(int(i) for i in match.group("release").split(".")),
        -            pre=_parse_letter_version(match.group("pre_l"), match.group("pre_n")),
        -            post=_parse_letter_version(
        -                match.group("post_l"), match.group("post_n1") or match.group("post_n2")
        -            ),
        -            dev=_parse_letter_version(match.group("dev_l"), match.group("dev_n")),
        -            local=_parse_local_version(match.group("local")),
        -        )
        -
        -        # Generate a key which will be used for sorting
        -        self._key = _cmpkey(
        -            self._version.epoch,
        -            self._version.release,
        -            self._version.pre,
        -            self._version.post,
        -            self._version.dev,
        -            self._version.local,
        -        )
        -
        -    def __repr__(self) -> str:
-        return f"<Version('{self}')>"
        -
        -    def __str__(self) -> str:
        -        parts = []
        -
        -        # Epoch
        -        if self.epoch != 0:
        -            parts.append(f"{self.epoch}!")
        -
        -        # Release segment
        -        parts.append(".".join(str(x) for x in self.release))
        -
        -        # Pre-release
        -        if self.pre is not None:
        -            parts.append("".join(str(x) for x in self.pre))
        -
        -        # Post-release
        -        if self.post is not None:
        -            parts.append(f".post{self.post}")
        -
        -        # Development release
        -        if self.dev is not None:
        -            parts.append(f".dev{self.dev}")
        -
        -        # Local version segment
        -        if self.local is not None:
        -            parts.append(f"+{self.local}")
        -
        -        return "".join(parts)
        -
        -    @property
        -    def epoch(self) -> int:
        -        _epoch: int = self._version.epoch
        -        return _epoch
        -
        -    @property
        -    def release(self) -> Tuple[int, ...]:
        -        _release: Tuple[int, ...] = self._version.release
        -        return _release
        -
        -    @property
        -    def pre(self) -> Optional[Tuple[str, int]]:
        -        _pre: Optional[Tuple[str, int]] = self._version.pre
        -        return _pre
        -
        -    @property
        -    def post(self) -> Optional[int]:
        -        return self._version.post[1] if self._version.post else None
        -
        -    @property
        -    def dev(self) -> Optional[int]:
        -        return self._version.dev[1] if self._version.dev else None
        -
        -    @property
        -    def local(self) -> Optional[str]:
        -        if self._version.local:
        -            return ".".join(str(x) for x in self._version.local)
        -        else:
        -            return None
        -
        -    @property
        -    def public(self) -> str:
        -        return str(self).split("+", 1)[0]
        -
        -    @property
        -    def base_version(self) -> str:
        -        parts = []
        -
        -        # Epoch
        -        if self.epoch != 0:
        -            parts.append(f"{self.epoch}!")
        -
        -        # Release segment
        -        parts.append(".".join(str(x) for x in self.release))
        -
        -        return "".join(parts)
        -
        -    @property
        -    def is_prerelease(self) -> bool:
        -        return self.dev is not None or self.pre is not None
        -
        -    @property
        -    def is_postrelease(self) -> bool:
        -        return self.post is not None
        -
        -    @property
        -    def is_devrelease(self) -> bool:
        -        return self.dev is not None
        -
        -    @property
        -    def major(self) -> int:
        -        return self.release[0] if len(self.release) >= 1 else 0
        -
        -    @property
        -    def minor(self) -> int:
        -        return self.release[1] if len(self.release) >= 2 else 0
        -
        -    @property
        -    def micro(self) -> int:
        -        return self.release[2] if len(self.release) >= 3 else 0
        -
        -
        -def _parse_letter_version(
        -    letter: str, number: Union[str, bytes, SupportsInt]
        -) -> Optional[Tuple[str, int]]:
        -
        -    if letter:
        -        # We consider there to be an implicit 0 in a pre-release if there is
        -        # not a numeral associated with it.
        -        if number is None:
        -            number = 0
        -
        -        # We normalize any letters to their lower case form
        -        letter = letter.lower()
        -
        -        # We consider some words to be alternate spellings of other words and
        -        # in those cases we want to normalize the spellings to our preferred
        -        # spelling.
        -        if letter == "alpha":
        -            letter = "a"
        -        elif letter == "beta":
        -            letter = "b"
        -        elif letter in ["c", "pre", "preview"]:
        -            letter = "rc"
        -        elif letter in ["rev", "r"]:
        -            letter = "post"
        -
        -        return letter, int(number)
        -    if not letter and number:
        -        # We assume if we are given a number, but we are not given a letter
        -        # then this is using the implicit post release syntax (e.g. 1.0-1)
        -        letter = "post"
        -
        -        return letter, int(number)
        -
        -    return None
        -
        -
        -_local_version_separators = re.compile(r"[\._-]")
        -
        -
        -def _parse_local_version(local: str) -> Optional[LocalType]:
        -    """
        -    Takes a string like abc.1.twelve and turns it into ("abc", 1, "twelve").
        -    """
        -    if local is not None:
        -        return tuple(
        -            part.lower() if not part.isdigit() else int(part)
        -            for part in _local_version_separators.split(local)
        -        )
        -    return None
        -
        -
        -def _cmpkey(
        -    epoch: int,
        -    release: Tuple[int, ...],
        -    pre: Optional[Tuple[str, int]],
        -    post: Optional[Tuple[str, int]],
        -    dev: Optional[Tuple[str, int]],
        -    local: Optional[Tuple[SubLocalType]],
        -) -> CmpKey:
        -
        -    # When we compare a release version, we want to compare it with all of the
        -    # trailing zeros removed. So we'll use a reverse the list, drop all the now
        -    # leading zeros until we come to something non zero, then take the rest
        -    # re-reverse it back into the correct order and make it a tuple and use
        -    # that for our sorting key.
        -    _release = tuple(
        -        reversed(list(itertools.dropwhile(lambda x: x == 0, reversed(release))))
        -    )
        -
        -    # We need to "trick" the sorting algorithm to put 1.0.dev0 before 1.0a0.
        -    # We'll do this by abusing the pre segment, but we _only_ want to do this
        -    # if there is not a pre or a post segment. If we have one of those then
        -    # the normal sorting rules will handle this case correctly.
        -    if pre is None and post is None and dev is not None:
        -        _pre: PrePostDevType = NegativeInfinity
        -    # Versions without a pre-release (except as noted above) should sort after
        -    # those with one.
        -    elif pre is None:
        -        _pre = Infinity
        -    else:
        -        _pre = pre
        -
        -    # Versions without a post segment should sort before those with one.
        -    if post is None:
        -        _post: PrePostDevType = NegativeInfinity
        -
        -    else:
        -        _post = post
        -
        -    # Versions without a development segment should sort after those with one.
        -    if dev is None:
        -        _dev: PrePostDevType = Infinity
        -
        -    else:
        -        _dev = dev
        -
        -    if local is None:
        -        # Versions without a local segment should sort before those with one.
        -        _local: LocalType = NegativeInfinity
        -    else:
        -        # Versions with a local segment need that segment parsed to implement
        -        # the sorting rules in PEP440.
        -        # - Alpha numeric segments sort before numeric segments
        -        # - Alpha numeric segments sort lexicographically
        -        # - Numeric segments sort numerically
        -        # - Shorter versions sort before longer versions when the prefixes
        -        #   match exactly
        -        _local = tuple(
        -            (i, "") if isinstance(i, int) else (NegativeInfinity, i) for i in local
        -        )
        -
        -    return epoch, _release, _pre, _post, _dev, _local
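
The module deleted above is pip's vendored copy of the standalone `packaging` library. As a quick illustration of what `VERSION_PATTERN` and the `_cmpkey` ordering rules do in practice, here is a minimal sketch using the publicly installable `packaging` package (assumed to be available in the environment; the `pip._vendor` path deleted here is an internal copy and not meant to be imported directly):

```python
# Minimal sketch, assuming the standalone `packaging` package is installed.
from packaging.version import Version, InvalidVersion

# PEP 440 ordering: dev releases sort before pre-releases, which sort before
# the final release, which sorts before post releases.
versions = ["1.0.post1", "1.0", "1.0rc1", "1.0a1", "1.0.dev0", "1.1"]
print([str(v) for v in sorted(Version(v) for v in versions)])
# ['1.0.dev0', '1.0a1', '1.0rc1', '1.0', '1.0.post1', '1.1']

v = Version("1.2.3rc1")
print(v.release, v.pre, v.is_prerelease)   # (1, 2, 3) ('rc', 1) True

try:
    Version("not a version")               # does not match VERSION_PATTERN
except InvalidVersion as exc:
    print("rejected:", exc)
```
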
        diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pygments/lexers/bibtex.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pygments/lexers/bibtex.py
        deleted file mode 100644
        index 34883cd83945f0efa2a3eb9f0749bdaf9bc16db9..0000000000000000000000000000000000000000
        --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pygments/lexers/bibtex.py
        +++ /dev/null
        @@ -1,159 +0,0 @@
        -"""
        -    pygments.lexers.bibtex
        -    ~~~~~~~~~~~~~~~~~~~~~~
        -
        -    Lexers for BibTeX bibliography data and styles
        -
        -    :copyright: Copyright 2006-2023 by the Pygments team, see AUTHORS.
        -    :license: BSD, see LICENSE for details.
        -"""
        -
        -import re
        -
        -from pygments.lexer import RegexLexer, ExtendedRegexLexer, include, default, \
        -    words
        -from pygments.token import Name, Comment, String, Error, Number, Keyword, \
        -    Punctuation, Whitespace
        -
        -__all__ = ['BibTeXLexer', 'BSTLexer']
        -
        -
        -class BibTeXLexer(ExtendedRegexLexer):
        -    """
        -    A lexer for BibTeX bibliography data format.
        -
        -    .. versionadded:: 2.2
        -    """
        -
        -    name = 'BibTeX'
        -    aliases = ['bibtex', 'bib']
        -    filenames = ['*.bib']
        -    mimetypes = ["text/x-bibtex"]
        -    flags = re.IGNORECASE
        -
        -    ALLOWED_CHARS = r'@!$&*+\-./:;<>?\[\\\]^`|~'
        -    IDENTIFIER = '[{}][{}]*'.format('a-z_' + ALLOWED_CHARS, r'\w' + ALLOWED_CHARS)
        -
        -    def open_brace_callback(self, match, ctx):
        -        opening_brace = match.group()
        -        ctx.opening_brace = opening_brace
        -        yield match.start(), Punctuation, opening_brace
        -        ctx.pos = match.end()
        -
        -    def close_brace_callback(self, match, ctx):
        -        closing_brace = match.group()
        -        if (
        -            ctx.opening_brace == '{' and closing_brace != '}' or
        -            ctx.opening_brace == '(' and closing_brace != ')'
        -        ):
        -            yield match.start(), Error, closing_brace
        -        else:
        -            yield match.start(), Punctuation, closing_brace
        -        del ctx.opening_brace
        -        ctx.pos = match.end()
        -
        -    tokens = {
        -        'root': [
        -            include('whitespace'),
        -            (r'@comment(?!ary)', Comment),
        -            ('@preamble', Name.Class, ('closing-brace', 'value', 'opening-brace')),
        -            ('@string', Name.Class, ('closing-brace', 'field', 'opening-brace')),
        -            ('@' + IDENTIFIER, Name.Class,
        -             ('closing-brace', 'command-body', 'opening-brace')),
        -            ('.+', Comment),
        -        ],
        -        'opening-brace': [
        -            include('whitespace'),
        -            (r'[{(]', open_brace_callback, '#pop'),
        -        ],
        -        'closing-brace': [
        -            include('whitespace'),
        -            (r'[})]', close_brace_callback, '#pop'),
        -        ],
        -        'command-body': [
        -            include('whitespace'),
        -            (r'[^\s\,\}]+', Name.Label, ('#pop', 'fields')),
        -        ],
        -        'fields': [
        -            include('whitespace'),
        -            (',', Punctuation, 'field'),
        -            default('#pop'),
        -        ],
        -        'field': [
        -            include('whitespace'),
        -            (IDENTIFIER, Name.Attribute, ('value', '=')),
        -            default('#pop'),
        -        ],
        -        '=': [
        -            include('whitespace'),
        -            ('=', Punctuation, '#pop'),
        -        ],
        -        'value': [
        -            include('whitespace'),
        -            (IDENTIFIER, Name.Variable),
        -            ('"', String, 'quoted-string'),
        -            (r'\{', String, 'braced-string'),
        -            (r'[\d]+', Number),
        -            ('#', Punctuation),
        -            default('#pop'),
        -        ],
        -        'quoted-string': [
        -            (r'\{', String, 'braced-string'),
        -            ('"', String, '#pop'),
        -            (r'[^\{\"]+', String),
        -        ],
        -        'braced-string': [
        -            (r'\{', String, '#push'),
        -            (r'\}', String, '#pop'),
        -            (r'[^\{\}]+', String),
        -        ],
        -        'whitespace': [
        -            (r'\s+', Whitespace),
        -        ],
        -    }
        -
        -
        -class BSTLexer(RegexLexer):
        -    """
        -    A lexer for BibTeX bibliography styles.
        -
        -    .. versionadded:: 2.2
        -    """
        -
        -    name = 'BST'
        -    aliases = ['bst', 'bst-pybtex']
        -    filenames = ['*.bst']
        -    flags = re.IGNORECASE | re.MULTILINE
        -
        -    tokens = {
        -        'root': [
        -            include('whitespace'),
        -            (words(['read', 'sort']), Keyword),
        -            (words(['execute', 'integers', 'iterate', 'reverse', 'strings']),
        -             Keyword, ('group')),
        -            (words(['function', 'macro']), Keyword, ('group', 'group')),
        -            (words(['entry']), Keyword, ('group', 'group', 'group')),
        -        ],
        -        'group': [
        -            include('whitespace'),
        -            (r'\{', Punctuation, ('#pop', 'group-end', 'body')),
        -        ],
        -        'group-end': [
        -            include('whitespace'),
        -            (r'\}', Punctuation, '#pop'),
        -        ],
        -        'body': [
        -            include('whitespace'),
        -            (r"\'[^#\"\{\}\s]+", Name.Function),
        -            (r'[^#\"\{\}\s]+\$', Name.Builtin),
        -            (r'[^#\"\{\}\s]+', Name.Variable),
        -            (r'"[^\"]*"', String),
        -            (r'#-?\d+', Number),
        -            (r'\{', Punctuation, ('group-end', 'body')),
        -            default('#pop'),
        -        ],
        -        'whitespace': [
        -            (r'\s+', Whitespace),
        -            ('%.*?$', Comment.Single),
        -        ],
        -    }
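
As a quick reference for what the deleted lexer did, the following sketch (assuming `pygments` is installed) tokenizes a small BibTeX entry with `BibTeXLexer`; the entry text itself is made up for illustration.

```python
# Sketch: highlighting a BibTeX entry with the lexer removed above.
from pygments import highlight
from pygments.lexers.bibtex import BibTeXLexer
from pygments.formatters import TerminalFormatter

entry = """@article{sample2023,
  author = {Doe, Jane},
  title  = {An Example Entry},
  year   = {2023}
}"""

print(highlight(entry, BibTeXLexer(), TerminalFormatter()))
```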
        diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pytz/reference.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pytz/reference.py
        deleted file mode 100644
        index f765ca0af0b24e66dc3b7d51b9bf97e71b2b67aa..0000000000000000000000000000000000000000
        --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pytz/reference.py
        +++ /dev/null
        @@ -1,140 +0,0 @@
        -'''
        -Reference tzinfo implementations from the Python docs.
        -Used for testing against as they are only correct for the years
        -1987 to 2006. Do not use these for real code.
        -'''
        -
        -from datetime import tzinfo, timedelta, datetime
        -from pytz import HOUR, ZERO, UTC
        -
        -__all__ = [
        -    'FixedOffset',
        -    'LocalTimezone',
        -    'USTimeZone',
        -    'Eastern',
        -    'Central',
        -    'Mountain',
        -    'Pacific',
        -    'UTC'
        -]
        -
        -
        -# A class building tzinfo objects for fixed-offset time zones.
        -# Note that FixedOffset(0, "UTC") is a different way to build a
        -# UTC tzinfo object.
        -class FixedOffset(tzinfo):
        -    """Fixed offset in minutes east from UTC."""
        -
        -    def __init__(self, offset, name):
        -        self.__offset = timedelta(minutes=offset)
        -        self.__name = name
        -
        -    def utcoffset(self, dt):
        -        return self.__offset
        -
        -    def tzname(self, dt):
        -        return self.__name
        -
        -    def dst(self, dt):
        -        return ZERO
        -
        -
        -import time as _time
        -
        -STDOFFSET = timedelta(seconds=-_time.timezone)
        -if _time.daylight:
        -    DSTOFFSET = timedelta(seconds=-_time.altzone)
        -else:
        -    DSTOFFSET = STDOFFSET
        -
        -DSTDIFF = DSTOFFSET - STDOFFSET
        -
        -
        -# A class capturing the platform's idea of local time.
        -class LocalTimezone(tzinfo):
        -
        -    def utcoffset(self, dt):
        -        if self._isdst(dt):
        -            return DSTOFFSET
        -        else:
        -            return STDOFFSET
        -
        -    def dst(self, dt):
        -        if self._isdst(dt):
        -            return DSTDIFF
        -        else:
        -            return ZERO
        -
        -    def tzname(self, dt):
        -        return _time.tzname[self._isdst(dt)]
        -
        -    def _isdst(self, dt):
        -        tt = (dt.year, dt.month, dt.day,
        -              dt.hour, dt.minute, dt.second,
        -              dt.weekday(), 0, -1)
        -        stamp = _time.mktime(tt)
        -        tt = _time.localtime(stamp)
        -        return tt.tm_isdst > 0
        -
        -Local = LocalTimezone()
        -
        -
        -def first_sunday_on_or_after(dt):
        -    days_to_go = 6 - dt.weekday()
        -    if days_to_go:
        -        dt += timedelta(days_to_go)
        -    return dt
        -
        -
        -# In the US, DST starts at 2am (standard time) on the first Sunday in April.
        -DSTSTART = datetime(1, 4, 1, 2)
        -# and ends at 2am (DST time; 1am standard time) on the last Sunday of Oct.
        -# which is the first Sunday on or after Oct 25.
        -DSTEND = datetime(1, 10, 25, 1)
        -
        -
        -# A complete implementation of current DST rules for major US time zones.
        -class USTimeZone(tzinfo):
        -
        -    def __init__(self, hours, reprname, stdname, dstname):
        -        self.stdoffset = timedelta(hours=hours)
        -        self.reprname = reprname
        -        self.stdname = stdname
        -        self.dstname = dstname
        -
        -    def __repr__(self):
        -        return self.reprname
        -
        -    def tzname(self, dt):
        -        if self.dst(dt):
        -            return self.dstname
        -        else:
        -            return self.stdname
        -
        -    def utcoffset(self, dt):
        -        return self.stdoffset + self.dst(dt)
        -
        -    def dst(self, dt):
        -        if dt is None or dt.tzinfo is None:
        -            # An exception may be sensible here, in one or both cases.
        -            # It depends on how you want to treat them.  The default
        -            # fromutc() implementation (called by the default astimezone()
        -            # implementation) passes a datetime with dt.tzinfo is self.
        -            return ZERO
        -        assert dt.tzinfo is self
        -
        -        # Find first Sunday in April & the last in October.
        -        start = first_sunday_on_or_after(DSTSTART.replace(year=dt.year))
        -        end = first_sunday_on_or_after(DSTEND.replace(year=dt.year))
        -
        -        # Can't compare naive to aware objects, so strip the timezone from
        -        # dt first.
        -        if start <= dt.replace(tzinfo=None) < end:
        -            return HOUR
        -        else:
        -            return ZERO
        -
        -Eastern = USTimeZone(-5, "Eastern", "EST", "EDT")
        -Central = USTimeZone(-6, "Central", "CST", "CDT")
        -Mountain = USTimeZone(-7, "Mountain", "MST", "MDT")
        -Pacific = USTimeZone(-8, "Pacific", "PST", "PDT")
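
The reference timezones removed above are test-only helpers, but they can be exercised directly. A small sketch, assuming `pytz` is installed and using dates inside the 1987-2006 window the module is documented for:

```python
# Sketch: exercising the reference tzinfo classes deleted above.
from datetime import datetime
from pytz.reference import Eastern, FixedOffset

summer = datetime(2002, 7, 4, 12, 0, tzinfo=Eastern)
winter = datetime(2002, 1, 4, 12, 0, tzinfo=Eastern)
print(summer.tzname(), summer.utcoffset())  # EDT, -4 hours (printed as -1 day, 20:00:00)
print(winter.tzname(), winter.utcoffset())  # EST, -5 hours (printed as -1 day, 19:00:00)
print(FixedOffset(330, "IST").utcoffset(None))  # 5:30:00
```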
        diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/setuptools/_distutils/command/bdist_dumb.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/setuptools/_distutils/command/bdist_dumb.py
        deleted file mode 100644
        index f0d6b5b8cd8ab3ceb772a6e9f962bbce0bc8c1d2..0000000000000000000000000000000000000000
        --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/setuptools/_distutils/command/bdist_dumb.py
        +++ /dev/null
        @@ -1,123 +0,0 @@
        -"""distutils.command.bdist_dumb
        -
        -Implements the Distutils 'bdist_dumb' command (create a "dumb" built
        -distribution -- i.e., just an archive to be unpacked under $prefix or
        -$exec_prefix)."""
        -
        -import os
        -from distutils.core import Command
        -from distutils.util import get_platform
        -from distutils.dir_util import remove_tree, ensure_relative
        -from distutils.errors import *
        -from distutils.sysconfig import get_python_version
        -from distutils import log
        -
        -class bdist_dumb(Command):
        -
        -    description = "create a \"dumb\" built distribution"
        -
        -    user_options = [('bdist-dir=', 'd',
        -                     "temporary directory for creating the distribution"),
        -                    ('plat-name=', 'p',
        -                     "platform name to embed in generated filenames "
        -                     "(default: %s)" % get_platform()),
        -                    ('format=', 'f',
        -                     "archive format to create (tar, gztar, bztar, xztar, "
        -                     "ztar, zip)"),
        -                    ('keep-temp', 'k',
        -                     "keep the pseudo-installation tree around after " +
        -                     "creating the distribution archive"),
        -                    ('dist-dir=', 'd',
        -                     "directory to put final built distributions in"),
        -                    ('skip-build', None,
        -                     "skip rebuilding everything (for testing/debugging)"),
        -                    ('relative', None,
        -                     "build the archive using relative paths "
        -                     "(default: false)"),
        -                    ('owner=', 'u',
        -                     "Owner name used when creating a tar file"
        -                     " [default: current user]"),
        -                    ('group=', 'g',
        -                     "Group name used when creating a tar file"
        -                     " [default: current group]"),
        -                   ]
        -
        -    boolean_options = ['keep-temp', 'skip-build', 'relative']
        -
        -    default_format = { 'posix': 'gztar',
        -                       'nt': 'zip' }
        -
        -    def initialize_options(self):
        -        self.bdist_dir = None
        -        self.plat_name = None
        -        self.format = None
        -        self.keep_temp = 0
        -        self.dist_dir = None
        -        self.skip_build = None
        -        self.relative = 0
        -        self.owner = None
        -        self.group = None
        -
        -    def finalize_options(self):
        -        if self.bdist_dir is None:
        -            bdist_base = self.get_finalized_command('bdist').bdist_base
        -            self.bdist_dir = os.path.join(bdist_base, 'dumb')
        -
        -        if self.format is None:
        -            try:
        -                self.format = self.default_format[os.name]
        -            except KeyError:
        -                raise DistutilsPlatformError(
        -                       "don't know how to create dumb built distributions "
        -                       "on platform %s" % os.name)
        -
        -        self.set_undefined_options('bdist',
        -                                   ('dist_dir', 'dist_dir'),
        -                                   ('plat_name', 'plat_name'),
        -                                   ('skip_build', 'skip_build'))
        -
        -    def run(self):
        -        if not self.skip_build:
        -            self.run_command('build')
        -
        -        install = self.reinitialize_command('install', reinit_subcommands=1)
        -        install.root = self.bdist_dir
        -        install.skip_build = self.skip_build
        -        install.warn_dir = 0
        -
        -        log.info("installing to %s", self.bdist_dir)
        -        self.run_command('install')
        -
        -        # And make an archive relative to the root of the
        -        # pseudo-installation tree.
        -        archive_basename = "%s.%s" % (self.distribution.get_fullname(),
        -                                      self.plat_name)
        -
        -        pseudoinstall_root = os.path.join(self.dist_dir, archive_basename)
        -        if not self.relative:
        -            archive_root = self.bdist_dir
        -        else:
        -            if (self.distribution.has_ext_modules() and
        -                (install.install_base != install.install_platbase)):
        -                raise DistutilsPlatformError(
        -                       "can't make a dumb built distribution where "
        -                       "base and platbase are different (%s, %s)"
        -                       % (repr(install.install_base),
        -                          repr(install.install_platbase)))
        -            else:
        -                archive_root = os.path.join(self.bdist_dir,
        -                                   ensure_relative(install.install_base))
        -
        -        # Make the archive
        -        filename = self.make_archive(pseudoinstall_root,
        -                                     self.format, root_dir=archive_root,
        -                                     owner=self.owner, group=self.group)
        -        if self.distribution.has_ext_modules():
        -            pyversion = get_python_version()
        -        else:
        -            pyversion = 'any'
        -        self.distribution.dist_files.append(('bdist_dumb', pyversion,
        -                                             filename))
        -
        -        if not self.keep_temp:
        -            remove_tree(self.bdist_dir, dry_run=self.dry_run)
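
For context, the deleted command is normally driven from a setup script. A minimal, hypothetical `setup.py` sketch follows (the project and module names are placeholders); running `python setup.py bdist_dumb --format=zip` against it would produce a "dumb" archive under `dist/`.

```python
# Hypothetical setup.py: project/module names are placeholders, not from the repo.
from distutils.core import setup

setup(
    name="example_pkg",      # placeholder project name
    version="0.1",
    py_modules=["example"],  # assumes an example.py module exists next to setup.py
)
```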
        diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/setuptools/_distutils/spawn.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/setuptools/_distutils/spawn.py
        deleted file mode 100644
        index 6e1c89f1f235b29809bfacb6df2cf00f2215a47f..0000000000000000000000000000000000000000
        --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/setuptools/_distutils/spawn.py
        +++ /dev/null
        @@ -1,106 +0,0 @@
        -"""distutils.spawn
        -
        -Provides the 'spawn()' function, a front-end to various platform-
        -specific functions for launching another program in a sub-process.
        -Also provides the 'find_executable()' to search the path for a given
        -executable name.
        -"""
        -
        -import sys
        -import os
        -import subprocess
        -
        -from distutils.errors import DistutilsPlatformError, DistutilsExecError
        -from distutils.debug import DEBUG
        -from distutils import log
        -
        -
        -def spawn(cmd, search_path=1, verbose=0, dry_run=0, env=None):
        -    """Run another program, specified as a command list 'cmd', in a new process.
        -
        -    'cmd' is just the argument list for the new process, ie.
        -    cmd[0] is the program to run and cmd[1:] are the rest of its arguments.
        -    There is no way to run a program with a name different from that of its
        -    executable.
        -
        -    If 'search_path' is true (the default), the system's executable
        -    search path will be used to find the program; otherwise, cmd[0]
        -    must be the exact path to the executable.  If 'dry_run' is true,
        -    the command will not actually be run.
        -
        -    Raise DistutilsExecError if running the program fails in any way; just
        -    return on success.
        -    """
        -    # cmd is documented as a list, but just in case some code passes a tuple
        -    # in, protect our %-formatting code against horrible death
        -    cmd = list(cmd)
        -
        -    log.info(subprocess.list2cmdline(cmd))
        -    if dry_run:
        -        return
        -
        -    if search_path:
        -        executable = find_executable(cmd[0])
        -        if executable is not None:
        -            cmd[0] = executable
        -
        -    env = env if env is not None else dict(os.environ)
        -
        -    if sys.platform == 'darwin':
        -        from distutils.util import MACOSX_VERSION_VAR, get_macosx_target_ver
        -        macosx_target_ver = get_macosx_target_ver()
        -        if macosx_target_ver:
        -            env[MACOSX_VERSION_VAR] = macosx_target_ver
        -
        -    try:
        -        proc = subprocess.Popen(cmd, env=env)
        -        proc.wait()
        -        exitcode = proc.returncode
        -    except OSError as exc:
        -        if not DEBUG:
        -            cmd = cmd[0]
        -        raise DistutilsExecError(
        -            "command %r failed: %s" % (cmd, exc.args[-1])) from exc
        -
        -    if exitcode:
        -        if not DEBUG:
        -            cmd = cmd[0]
        -        raise DistutilsExecError(
        -              "command %r failed with exit code %s" % (cmd, exitcode))
        -
        -
        -def find_executable(executable, path=None):
        -    """Tries to find 'executable' in the directories listed in 'path'.
        -
        -    A string listing directories separated by 'os.pathsep'; defaults to
        -    os.environ['PATH'].  Returns the complete filename or None if not found.
        -    """
        -    _, ext = os.path.splitext(executable)
        -    if (sys.platform == 'win32') and (ext != '.exe'):
        -        executable = executable + '.exe'
        -
        -    if os.path.isfile(executable):
        -        return executable
        -
        -    if path is None:
        -        path = os.environ.get('PATH', None)
        -        if path is None:
        -            try:
        -                path = os.confstr("CS_PATH")
        -            except (AttributeError, ValueError):
        -                # os.confstr() or CS_PATH is not available
        -                path = os.defpath
        -        # bpo-35755: Don't use os.defpath if the PATH environment variable is
        -        # set to an empty string
        -
        -    # PATH='' doesn't match, whereas PATH=':' looks in the current directory
        -    if not path:
        -        return None
        -
        -    paths = path.split(os.pathsep)
        -    for p in paths:
        -        f = os.path.join(p, executable)
        -        if os.path.isfile(f):
        -            # the file exists, we have a shot at spawn working
        -            return f
        -    return None
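
A short sketch of how the two helpers above are typically used; this relies only on the standard `distutils` API shown in the deleted file.

```python
# Sketch: locate an interpreter on PATH and run it in a subprocess.
# find_executable returns None if nothing is found; spawn raises
# DistutilsExecError if the command fails.
from distutils.spawn import find_executable, spawn

python_path = find_executable("python3") or find_executable("python")
print(python_path)

if python_path is not None:
    spawn([python_path, "--version"])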
        diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/starlette/_utils.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/starlette/_utils.py
        deleted file mode 100644
        index d781647ff74f05d536a9ff675b2d7114738c4681..0000000000000000000000000000000000000000
        --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/starlette/_utils.py
        +++ /dev/null
        @@ -1,74 +0,0 @@
        -import asyncio
        -import functools
        -import sys
        -import typing
        -from types import TracebackType
        -
        -if sys.version_info < (3, 8):  # pragma: no cover
        -    from typing_extensions import Protocol
        -else:  # pragma: no cover
        -    from typing import Protocol
        -
        -
        -def is_async_callable(obj: typing.Any) -> bool:
        -    while isinstance(obj, functools.partial):
        -        obj = obj.func
        -
        -    return asyncio.iscoroutinefunction(obj) or (
        -        callable(obj) and asyncio.iscoroutinefunction(obj.__call__)
        -    )
        -
        -
        -T_co = typing.TypeVar("T_co", covariant=True)
        -
        -
        -# TODO: once 3.8 is the minimum supported version (27 Jun 2023)
        -# this can just become
        -# class AwaitableOrContextManager(
        -#     typing.Awaitable[T_co],
        -#     typing.AsyncContextManager[T_co],
        -#     typing.Protocol[T_co],
        -# ):
        -#     pass
        -class AwaitableOrContextManager(Protocol[T_co]):
        -    def __await__(self) -> typing.Generator[typing.Any, None, T_co]:
        -        ...  # pragma: no cover
        -
        -    async def __aenter__(self) -> T_co:
        -        ...  # pragma: no cover
        -
        -    async def __aexit__(
        -        self,
        -        __exc_type: typing.Optional[typing.Type[BaseException]],
        -        __exc_value: typing.Optional[BaseException],
        -        __traceback: typing.Optional[TracebackType],
        -    ) -> typing.Union[bool, None]:
        -        ...  # pragma: no cover
        -
        -
        -class SupportsAsyncClose(Protocol):
        -    async def close(self) -> None:
        -        ...  # pragma: no cover
        -
        -
        -SupportsAsyncCloseType = typing.TypeVar(
        -    "SupportsAsyncCloseType", bound=SupportsAsyncClose, covariant=False
        -)
        -
        -
        -class AwaitableOrContextManagerWrapper(typing.Generic[SupportsAsyncCloseType]):
        -    __slots__ = ("aw", "entered")
        -
        -    def __init__(self, aw: typing.Awaitable[SupportsAsyncCloseType]) -> None:
        -        self.aw = aw
        -
        -    def __await__(self) -> typing.Generator[typing.Any, None, SupportsAsyncCloseType]:
        -        return self.aw.__await__()
        -
        -    async def __aenter__(self) -> SupportsAsyncCloseType:
        -        self.entered = await self.aw
        -        return self.entered
        -
        -    async def __aexit__(self, *args: typing.Any) -> typing.Union[None, bool]:
        -        await self.entered.close()
        -        return None
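
A small sketch of the behaviour of `is_async_callable` from the module above, including the `functools.partial` unwrapping; it assumes `starlette` is installed so the private module can be imported.

```python
# Sketch: is_async_callable treats partials of coroutine functions as async.
import functools

from starlette._utils import is_async_callable


async def handler(x):
    return x


print(is_async_callable(handler))                        # True
print(is_async_callable(functools.partial(handler, 1)))  # True
print(is_async_callable(lambda x: x))                    # False
```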
        diff --git a/spaces/r3gm/AICoverGen/src/infer_pack/commons.py b/spaces/r3gm/AICoverGen/src/infer_pack/commons.py
        deleted file mode 100644
        index 54470986f37825b35d90d7efa7437d1c26b87215..0000000000000000000000000000000000000000
        --- a/spaces/r3gm/AICoverGen/src/infer_pack/commons.py
        +++ /dev/null
        @@ -1,166 +0,0 @@
        -import math
        -import numpy as np
        -import torch
        -from torch import nn
        -from torch.nn import functional as F
        -
        -
        -def init_weights(m, mean=0.0, std=0.01):
        -    classname = m.__class__.__name__
        -    if classname.find("Conv") != -1:
        -        m.weight.data.normal_(mean, std)
        -
        -
        -def get_padding(kernel_size, dilation=1):
        -    return int((kernel_size * dilation - dilation) / 2)
        -
        -
        -def convert_pad_shape(pad_shape):
        -    l = pad_shape[::-1]
        -    pad_shape = [item for sublist in l for item in sublist]
        -    return pad_shape
        -
        -
        -def kl_divergence(m_p, logs_p, m_q, logs_q):
        -    """KL(P||Q)"""
        -    kl = (logs_q - logs_p) - 0.5
        -    kl += (
        -        0.5 * (torch.exp(2.0 * logs_p) + ((m_p - m_q) ** 2)) * torch.exp(-2.0 * logs_q)
        -    )
        -    return kl
        -
        -
        -def rand_gumbel(shape):
        -    """Sample from the Gumbel distribution, protect from overflows."""
        -    uniform_samples = torch.rand(shape) * 0.99998 + 0.00001
        -    return -torch.log(-torch.log(uniform_samples))
        -
        -
        -def rand_gumbel_like(x):
        -    g = rand_gumbel(x.size()).to(dtype=x.dtype, device=x.device)
        -    return g
        -
        -
        -def slice_segments(x, ids_str, segment_size=4):
        -    ret = torch.zeros_like(x[:, :, :segment_size])
        -    for i in range(x.size(0)):
        -        idx_str = ids_str[i]
        -        idx_end = idx_str + segment_size
        -        ret[i] = x[i, :, idx_str:idx_end]
        -    return ret
        -
        -
        -def slice_segments2(x, ids_str, segment_size=4):
        -    ret = torch.zeros_like(x[:, :segment_size])
        -    for i in range(x.size(0)):
        -        idx_str = ids_str[i]
        -        idx_end = idx_str + segment_size
        -        ret[i] = x[i, idx_str:idx_end]
        -    return ret
        -
        -
        -def rand_slice_segments(x, x_lengths=None, segment_size=4):
        -    b, d, t = x.size()
        -    if x_lengths is None:
        -        x_lengths = t
        -    ids_str_max = x_lengths - segment_size + 1
        -    ids_str = (torch.rand([b]).to(device=x.device) * ids_str_max).to(dtype=torch.long)
        -    ret = slice_segments(x, ids_str, segment_size)
        -    return ret, ids_str
        -
        -
        -def get_timing_signal_1d(length, channels, min_timescale=1.0, max_timescale=1.0e4):
        -    position = torch.arange(length, dtype=torch.float)
        -    num_timescales = channels // 2
        -    log_timescale_increment = math.log(float(max_timescale) / float(min_timescale)) / (
        -        num_timescales - 1
        -    )
        -    inv_timescales = min_timescale * torch.exp(
        -        torch.arange(num_timescales, dtype=torch.float) * -log_timescale_increment
        -    )
        -    scaled_time = position.unsqueeze(0) * inv_timescales.unsqueeze(1)
        -    signal = torch.cat([torch.sin(scaled_time), torch.cos(scaled_time)], 0)
        -    signal = F.pad(signal, [0, 0, 0, channels % 2])
        -    signal = signal.view(1, channels, length)
        -    return signal
        -
        -
        -def add_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4):
        -    b, channels, length = x.size()
        -    signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale)
        -    return x + signal.to(dtype=x.dtype, device=x.device)
        -
        -
        -def cat_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4, axis=1):
        -    b, channels, length = x.size()
        -    signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale)
        -    return torch.cat([x, signal.to(dtype=x.dtype, device=x.device)], axis)
        -
        -
        -def subsequent_mask(length):
        -    mask = torch.tril(torch.ones(length, length)).unsqueeze(0).unsqueeze(0)
        -    return mask
        -
        -
        -@torch.jit.script
        -def fused_add_tanh_sigmoid_multiply(input_a, input_b, n_channels):
        -    n_channels_int = n_channels[0]
        -    in_act = input_a + input_b
        -    t_act = torch.tanh(in_act[:, :n_channels_int, :])
        -    s_act = torch.sigmoid(in_act[:, n_channels_int:, :])
        -    acts = t_act * s_act
        -    return acts
        -
        -
        -def convert_pad_shape(pad_shape):
        -    l = pad_shape[::-1]
        -    pad_shape = [item for sublist in l for item in sublist]
        -    return pad_shape
        -
        -
        -def shift_1d(x):
        -    x = F.pad(x, convert_pad_shape([[0, 0], [0, 0], [1, 0]]))[:, :, :-1]
        -    return x
        -
        -
        -def sequence_mask(length, max_length=None):
        -    if max_length is None:
        -        max_length = length.max()
        -    x = torch.arange(max_length, dtype=length.dtype, device=length.device)
        -    return x.unsqueeze(0) < length.unsqueeze(1)
        -
        -
        -def generate_path(duration, mask):
        -    """
        -    duration: [b, 1, t_x]
        -    mask: [b, 1, t_y, t_x]
        -    """
        -    device = duration.device
        -
        -    b, _, t_y, t_x = mask.shape
        -    cum_duration = torch.cumsum(duration, -1)
        -
        -    cum_duration_flat = cum_duration.view(b * t_x)
        -    path = sequence_mask(cum_duration_flat, t_y).to(mask.dtype)
        -    path = path.view(b, t_x, t_y)
        -    path = path - F.pad(path, convert_pad_shape([[0, 0], [1, 0], [0, 0]]))[:, :-1]
        -    path = path.unsqueeze(1).transpose(2, 3) * mask
        -    return path
        -
        -
        -def clip_grad_value_(parameters, clip_value, norm_type=2):
        -    if isinstance(parameters, torch.Tensor):
        -        parameters = [parameters]
        -    parameters = list(filter(lambda p: p.grad is not None, parameters))
        -    norm_type = float(norm_type)
        -    if clip_value is not None:
        -        clip_value = float(clip_value)
        -
        -    total_norm = 0
        -    for p in parameters:
        -        param_norm = p.grad.data.norm(norm_type)
        -        total_norm += param_norm.item() ** norm_type
        -        if clip_value is not None:
        -            p.grad.data.clamp_(min=-clip_value, max=clip_value)
        -    total_norm = total_norm ** (1.0 / norm_type)
        -    return total_norm
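
To illustrate what one of the removed helpers computes, here is a tiny, self-contained sketch of `sequence_mask` (copied from the module above) run on a toy tensor; it only assumes PyTorch is available.

```python
# Sketch: sequence_mask builds a boolean padding mask from sequence lengths.
import torch


def sequence_mask(length, max_length=None):
    if max_length is None:
        max_length = length.max()
    x = torch.arange(max_length, dtype=length.dtype, device=length.device)
    return x.unsqueeze(0) < length.unsqueeze(1)


print(sequence_mask(torch.tensor([1, 3]), 4))
# tensor([[ True, False, False, False],
#         [ True,  True,  True, False]])
```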
        diff --git a/spaces/r3gm/Aesthetic_RVC_Inference_HF/lib/infer/infer_libs/audio.py b/spaces/r3gm/Aesthetic_RVC_Inference_HF/lib/infer/infer_libs/audio.py
        deleted file mode 100644
        index f4a1c18b2888947ece8b15594ead0c4c5166cb57..0000000000000000000000000000000000000000
        --- a/spaces/r3gm/Aesthetic_RVC_Inference_HF/lib/infer/infer_libs/audio.py
        +++ /dev/null
        @@ -1,140 +0,0 @@
        -import librosa
        -import numpy as np
        -import av
        -from io import BytesIO
        -import ffmpeg
        -import os
        -import sys
        -
        -import random
        -from lib.infer.infer_libs.csvutil import CSVutil
        -#import csv
        -
        -platform_stft_mapping = {
        -    'linux': 'stftpitchshift',
        -    'darwin': 'stftpitchshift',
        -    'win32': 'stftpitchshift.exe',
        -}
        -
        -stft = platform_stft_mapping.get(sys.platform)
        -
        -def wav2(i, o, format):
        -    inp = av.open(i, 'rb')
        -    if format == "m4a": format = "mp4"
        -    out = av.open(o, 'wb', format=format)
        -    if format == "ogg": format = "libvorbis"
        -    if format == "mp4": format = "aac"
        -
        -    ostream = out.add_stream(format)
        -
        -    for frame in inp.decode(audio=0):
        -        for p in ostream.encode(frame): out.mux(p)
        -
        -    for p in ostream.encode(None): out.mux(p)
        -
        -    out.close()
        -    inp.close()
        -
        -def audio2(i, o, format, sr):
        -    inp = av.open(i, 'rb')
        -    out = av.open(o, 'wb', format=format)
        -    if format == "ogg": format = "libvorbis"
        -    if format == "f32le": format = "pcm_f32le"
        -
        -    ostream = out.add_stream(format, channels=1)
        -    ostream.sample_rate = sr
        -
        -    for frame in inp.decode(audio=0):
        -        for p in ostream.encode(frame): out.mux(p)
        -
        -    out.close()
        -    inp.close()
        -
        -def load_audion(file, sr):
        -    try:
        -        file = (
        -            file.strip(" ").strip('"').strip("\n").strip('"').strip(" ")
-        )  # guard against users pasting paths with stray spaces, quotes, or newlines
        -        with open(file, "rb") as f:
        -            with BytesIO() as out:
        -                audio2(f, out, "f32le", sr)
        -                return np.frombuffer(out.getvalue(), np.float32).flatten()
        -
        -    except AttributeError:
        -        audio = file[1] / 32768.0
        -        if len(audio.shape) == 2:
        -            audio = np.mean(audio, -1)
        -        return librosa.resample(audio, orig_sr=file[0], target_sr=16000)
        -
        -    except Exception as e:
        -        raise RuntimeError(f"Failed to load audio: {e}")
        -
        -
        -
        -
        -def load_audio(file, sr, DoFormant=False, Quefrency=1.0, Timbre=1.0):
        -    converted = False
        -    DoFormant, Quefrency, Timbre = CSVutil("lib/csvdb/formanting.csv", "r", "formanting")
        -    DoFormant, Quefrency, Timbre = bool(DoFormant), float(Quefrency), float(Timbre)
        -    
        -    try:
        -        file = file.strip(" ").strip('"').strip("\n").strip('"').strip(" ")
        -        
        -        if not file.endswith(".wav"):
        -            converted = True
-            # Convert the input to WAV format using ffmpeg
        -            converting = (
        -                ffmpeg.input(file, threads=0)
        -                .output(f"{file}.wav")
        -                .run(cmd=["ffmpeg", "-nostdin"], capture_stdout=True, capture_stderr=True)
        -            )
        -            file = f"{file}.wav"
        -            print(f" · File converted to Wav format: {file}\n")
        -
-        if DoFormant:
-            # Apply formant shifting with stftpitchshift
        -            command = (
        -                f'{stft} -i "{file}" -q "{Quefrency}" '
        -                f'-t "{Timbre}" -o "{file}FORMANTED.wav"'
        -            )
        -            os.system(command)
        -            file = f"{file}FORMANTED.wav"
        -            print(f" · Formanted {file}!\n")
        -
        -        with open(file, "rb") as f:
        -            with BytesIO() as out:
        -                audio2(f, out, "f32le", sr)
        -                audio_data = np.frombuffer(out.getvalue(), np.float32).flatten()
        -
        -        if converted:
-            try:
-                os.remove(file)
-            except Exception as e:
-                print(f"Couldn't remove converted type of file due to {e}")
        -            converted = False
        -
        -        return audio_data
        -    except AttributeError:
        -        audio = file[1] / 32768.0
        -        if len(audio.shape) == 2:
        -            audio = np.mean(audio, -1)
        -        return librosa.resample(audio, orig_sr=file[0], target_sr=16000)
        -    except Exception as e:
        -        raise RuntimeError(f"Failed to load audio: {e}")
        -
        -
        -def check_audio_duration(file):
        -    try:
        -        file = file.strip(" ").strip('"').strip("\n").strip('"').strip(" ")
        -
        -        probe = ffmpeg.probe(file)
        -
        -        duration = float(probe['streams'][0]['duration'])
        -
        -        if duration < 0.76:
        -            print(
        -                f"Audio file, {file.split('/')[-1]}, under ~0.76s detected - file is too short. Target at least 1-2s for best results."
        -            )
        -            return False
        -
        -        return True
        -    except Exception as e:
        -        raise RuntimeError(f"Failed to check audio duration: {e}")
        \ No newline at end of file
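
The duration check removed above boils down to an `ffmpeg.probe` call. A hedged sketch of the same idea, assuming the `ffmpeg-python` package and an `ffmpeg` binary are available, and using a hypothetical file path:

```python
# Sketch: probing a file's duration the way check_audio_duration does above.
import ffmpeg

probe = ffmpeg.probe("sample.wav")  # hypothetical path
duration = float(probe["streams"][0]["duration"])
print(f"duration: {duration:.2f}s, long enough: {duration >= 0.76}")
```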
        diff --git a/spaces/r3gm/RVC_HF/lib/infer_pack/modules/F0Predictor/__init__.py b/spaces/r3gm/RVC_HF/lib/infer_pack/modules/F0Predictor/__init__.py
        deleted file mode 100644
        index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
        diff --git a/spaces/raedeXanto/academic-chatgpt-beta/Billing Ecafepro 416 Full 36 The Best Software for Internet Cafe Management.md b/spaces/raedeXanto/academic-chatgpt-beta/Billing Ecafepro 416 Full 36 The Best Software for Internet Cafe Management.md
        deleted file mode 100644
        index 285b6d231668152207cc8a4889c7b1526720af22..0000000000000000000000000000000000000000
        --- a/spaces/raedeXanto/academic-chatgpt-beta/Billing Ecafepro 416 Full 36 The Best Software for Internet Cafe Management.md	
        +++ /dev/null
        @@ -1,140 +0,0 @@
        -
        -

        What is Billing Ecafepro 416 Full 36?

        -

        If you own or manage an internet cafe, you know how important it is to have a reliable and efficient software that can handle all your billing and monitoring needs. That's where Billing Ecafepro 416 Full 36 comes in. Billing Ecafepro 416 Full 36 is a software that allows you to manage and monitor your internet cafe operations with ease and accuracy. It lets you create and manage user accounts, set different prices and time limits, control access to programs and websites, generate reports and statistics, and much more. Billing Ecafepro 416 Full 36 is designed to work with Windows operating systems and supports up to 255 client computers. It also has a user-friendly interface that makes it easy to use for both beginners and experts.

        -

        Billing Ecafepro 416 Full 36


        Download Zip ->->->-> https://tinourl.com/2uL1jO



        -

        Why do you need Billing Ecafepro 416 Full 36?

        -

        Billing Ecafepro 416 Full 36 is not just a software, it's a solution for your internet cafe business. Here are some of the benefits of using Billing Ecafepro 416 Full 36:

        -
          -
• It saves you time and money. Billing Ecafepro 416 Full 36 automates your billing process and eliminates human errors. It also helps you optimize your resources and reduce costs by allowing you to adjust prices and time limits according to demand and availability.
• It increases your customer satisfaction. Billing Ecafepro 416 Full 36 provides your customers with a smooth and secure experience. It allows them to log in and log out easily, pay with various methods, access their favorite programs and websites, and enjoy other features such as chat, voice calls, and remote printing.
• It enhances your security and privacy. Billing Ecafepro 416 Full 36 protects your network and data from unauthorized access and malicious attacks. It also allows you to monitor and record all activity on your client computers, such as websites visited, files downloaded, and keystrokes typed.
• It improves your management and analysis. Billing Ecafepro 416 Full 36 gives you a comprehensive overview of your internet cafe's performance and trends. It allows you to generate reports and statistics on sales, revenue, usage, customer behavior, and more, and to export or print these data for further review or presentation.

        How to install Billing Ecafepro 416 Full 36?

        -

        Installing Billing Ecafepro 416 Full 36 is easy and fast. Here are the steps to follow:

        -
          -
1. Download the software from the official website. You can find the link in the references section below. The file size is about 20 MB.
2. Run the setup file on your server computer. This is the computer that will act as the main controller of your internet cafe network. Follow the instructions on the screen to complete the installation.
3. Run the setup file on your client computers. These are the computers that will be used by your customers. Follow the instructions on the screen to complete the installation.
4. Connect your server computer and client computers with a network cable or wireless router. Make sure they are all on the same network.
5. Launch the software on your server computer. You will see a dashboard that shows you the status of your client computers. You can also access other functions and settings from here.
6. Launch the software on your client computers. You will see a login screen that asks for a username and password. You can create these accounts from your server computer or let your customers create them themselves.

        How to use Billing Ecafepro 416 Full 36?

        -

        Billing Ecafepro 416 Full 36 is easy to use once you have installed it on your server computer and client computers. Here are some of the basic functions and settings of the software:

        -
          -
• Creating accounts. You can create accounts for your customers from your server computer or let them create their own accounts from the client computers. You can also assign different types of accounts, such as prepaid, postpaid, member, or guest, with different prices and time limits.
• Setting prices and time limits. You can set different prices and time limits for different types of accounts or different times of the day or week. You can also set discounts or bonuses for loyal customers or special occasions.
• Managing time. You can start or stop billing time for each client computer manually or automatically. You can also pause or resume billing time if needed, and view the remaining time or balance for each customer on your server computer or client computers.
• Controlling access. You can control what programs or websites your customers can access from their client computers. You can also block or allow certain keywords or categories of websites, and lock or unlock client computers remotely from your server computer.
• Generating reports and statistics. You can generate reports and statistics on various aspects of your internet cafe business, such as sales, revenue, usage, and customer behavior. You can also export or print these data for further review or presentation.

        How to troubleshoot Billing Ecafepro 416 Full 36?

        -

        Billing Ecafepro 416 Full 36 is a reliable and stable software that rarely causes any problems. However, if you encounter any issues while using it, here are some common problems and solutions:

        -
          -
• The server computer cannot connect to the client computers or vice versa. Check if the network cable or wireless router is working properly. Check if the firewall or antivirus software is blocking the connection. Check if the IP address or port number is correct.
• The billing time or balance is incorrect or not updated. Check if the clock or date on your server computer or client computers is correct. Check if there is any interference or interruption in the network connection. Check if there is any error in the database or configuration file.
• The customer cannot log in or log out from their account. Check if their username or password is correct. Check if their account type or status is valid. Check if there is any limit or restriction on their account.

        What are the alternatives to Billing Ecafepro 416 Full 36?

        -

        Billing Ecafepro 416 Full 36 is one of the best internet cafe software available in the market today. However, it may not suit everyone's needs or preferences. Here are some of the alternatives to Billing Ecafepro 416 Full 36 that you may want to consider:

| Software | Pros | Cons |
| --- | --- | --- |
| Antamedia Internet Cafe Software | Has a built-in hotspot feature | More expensive than Billing Ecafepro 416 Full 36; Requires internet connection for cloud-based management; May have compatibility issues with some devices |
| MyCyberCafe | Supports Windows and Linux devices; Offers a free version for up to 4 client computers; Has a simple and intuitive interface | Lacks some advanced features such as access control, reports and statistics, etc.; Does not support Android devices; Does not integrate with payment methods |
| TrueCafe | Supports Windows and Android devices; Integrates with various payment methods; Has a built-in Wi-Fi hotspot feature | More expensive than Billing Ecafepro 416 Full 36; Does not support Linux devices; Has a complex and cluttered interface |
| Smartlaunch | Supports Windows and Android devices; Offers cloud-based management; Has a built-in game launcher feature | More expensive than Billing Ecafepro 416 Full 36; Requires internet connection for cloud-based management; Does not support Linux devices |
| HandyCafe | Supports Windows and Linux devices; Offers a free version for unlimited client computers; Has a built-in firewall and web filter feature | Lacks some advanced features such as reports and statistics, access control, etc.; Does not support Android devices; Does not integrate with payment methods |

        Conclusion

        -

        Billing Ecafepro 416 Full 36 is a software that allows you to manage and monitor your internet cafe operations with ease and accuracy. It has many features and benefits that make it a great choice for internet cafe owners and customers. It is also easy to install and use, and has a user-friendly interface. However, it may not suit everyone's needs or preferences, so you may want to consider some of the alternatives to Billing Ecafepro 416 Full 36 that we have discussed in this article. Whatever software you choose, we hope you find the best solution for your internet cafe business.

        -

        FAQs

        -
          -
• Q: How much does Billing Ecafepro 416 Full 36 cost?
  A: Billing Ecafepro 416 Full 36 costs $199 for a lifetime license that supports up to 255 client computers. You can also get a free trial version that supports up to 3 client computers for 30 days.
• Q: Where can I download Billing Ecafepro 416 Full 36?
  A: You can download Billing Ecafepro 416 Full 36 from the official website. You can also find the link in the references section below.
• Q: How can I contact the support team of Billing Ecafepro 416 Full 36?
  A: You can contact the support team of Billing Ecafepro 416 Full 36 by email at support@ecafepro.com or by phone at +603-7880-8168.
• Q: What are the system requirements for Billing Ecafepro 416 Full 36?
  A: The system requirements for Billing Ecafepro 416 Full 36 are as follows:
  - Server computer: Windows XP or higher, Pentium III or higher, 256 MB RAM or higher, 100 MB free disk space or higher.
  - Client computer: Windows XP or higher, Pentium II or higher, 128 MB RAM or higher, 50 MB free disk space or higher.
• Q: Can I use Billing Ecafepro 416 Full 36 for other purposes than an internet cafe?
  A: Yes, you can use Billing Ecafepro 416 Full 36 for other purposes than an internet cafe, such as a gaming center, library, school, or hotel. However, you may need to adjust some settings or features according to your specific needs.
        • -
| Software | Pros | Cons |
| --- | --- | --- |
|  | Provides Wi-Fi hotspot support; Has a multilingual interface; Has a free trial version | Costs more than Billing Ecafepro 416 Full 36; Requires more system resources; Has a complex installation process |
| MyCyberCafe | Supports Windows and Linux devices; Offers remote management; Integrates with PayPal and credit cards; Has a free version for up to 4 client computers | Does not support Android devices; Does not offer cloud-based management; Has a limited interface customization; Has a low customer rating |
| HandyCafe | Supports Windows and Android devices; Offers voice and video chat; Integrates with various payment methods; Has a free version for unlimited client computers | Does not support Linux devices; Does not offer remote management; Has a high risk of malware infection; Has a poor customer service |

        Conclusion

        -

        Billing Ecafepro 416 Full 36 is a software that can help you run your internet cafe business smoothly and efficiently. It has many features and benefits that make it stand out from other internet cafe software. It is also easy to install and use, and has a low cost and high customer satisfaction. If you are looking for a software that can handle all your billing and monitoring needs, Billing Ecafepro 416 Full 36 is the one for you.

        -

        How to install Billing Ecafepro 416 Full 36 on Windows
        -Billing Ecafepro 416 Full 36 tutorial by Jendela Teknologi
        -Billing Ecafepro 416 Full 36 software for internet cafe management
        -Billing Ecafepro 416 Full 36 mod M2 enabled
        -Billing Ecafepro 416 Full 36 subscription conversion
        -Billing Ecafepro 416 Full 36 vs other billing software
        -Billing Ecafepro 416 Full 36 features and benefits
        -Billing Ecafepro 416 Full 36 download link and license key
        -Billing Ecafepro 416 Full 36 customer reviews and ratings
        -Billing Ecafepro 416 Full 36 troubleshooting and support
        -How to update Billing Ecafepro 416 Full 36 to the latest version
        -Billing Ecafepro 416 Full 36 compatibility with Windows 10
        -Billing Ecafepro 416 Full 36 price and payment options
        -Billing Ecafepro 416 Full 36 demo and trial version
        -Billing Ecafepro 416 Full 36 user manual and guide
        -How to customize Billing Ecafepro 416 Full 36 settings and preferences
        -Billing Ecafepro 416 Full 36 best practices and tips
        -Billing Ecafepro 416 Full 36 alternatives and competitors
        -How to uninstall Billing Ecafepro 416 Full 36 from your computer
        -Billing Ecafepro 416 Full 36 FAQs and answers
        -How to backup and restore Billing Ecafepro 416 Full 36 data
        -Billing Ecafepro 416 Full 36 security and privacy features
        -How to integrate Billing Ecafepro 416 Full 36 with other software
        -Billing Ecafepro 416 Full 36 testimonials and case studies
        -How to contact Billing Ecafepro 416 Full 36 customer service
        -How to use Billing Ecafepro 416 Full 36 for multiple computers
        -Billing Ecafepro 416 Full 36 advantages and disadvantages
        -How to optimize Billing Ecafepro 416 Full 36 performance and speed
        -How to fix common errors and issues with Billing Ecafepro

        -

        If you want to try Billing Ecafepro 416 Full 36 for yourself, you can download it from the official website. You can also watch a tutorial video on how to use it on YouTube. You can also join a discussion group on how to install and use it on Ainfgib. You can also read a review on how it compares to other internet cafe software on Zardi.

        -

        Whatever you decide, we hope this article has helped you learn more about Billing Ecafepro 416 Full 36 and its alternatives. Thank you for reading and happy surfing!

        -

        FAQs

        -
          -
• Q: What is the difference between Billing Ecafepro 416 Full 36 and Billing Ecafepro 416 BEST Full 36?
  A: Billing Ecafepro 416 BEST Full 36 is an unofficial modded version of Billing Ecafepro 416 Full 36 that claims to have more features and fixes. However, it is not supported by the official developer and may have bugs or security issues.
• Q: How much does Billing Ecafepro 416 Full 36 cost?
  A: Billing Ecafepro 416 Full 36 costs $199 for a lifetime license that includes unlimited client computers and free updates.
• Q: How can I contact the developer of Billing Ecafepro 416 Full 36?
  A: You can contact the developer of Billing Ecafepro 416 Full 36 by email at support@ecafepro.com or by phone at +603-7880-6688.
• Q: How can I uninstall Billing Ecafepro 416 Full 36?
  A: You can uninstall Billing Ecafepro 416 Full 36 by going to Control Panel > Programs > Uninstall a program and selecting Billing Ecafepro Server or Client.
• Q: How can I update Billing Ecafepro 416 Full 36?
  A: You can update Billing Ecafepro 416 Full 36 by downloading the latest version from the official website and running the setup file on your server computer and client computers.
        • -
        -

        -
        -
\ No newline at end of file
diff --git a/spaces/raedeXanto/academic-chatgpt-beta/CRACK Website Ripper Copier3.9.2 Crack How to Backup and Restore Any Website.md b/spaces/raedeXanto/academic-chatgpt-beta/CRACK Website Ripper Copier3.9.2 Crack How to Backup and Restore Any Website.md
deleted file mode 100644
index d6fa826ca332a95f2df6a2a3abec00544e0f3075..0000000000000000000000000000000000000000
--- a/spaces/raedeXanto/academic-chatgpt-beta/CRACK Website Ripper Copier3.9.2 Crack How to Backup and Restore Any Website.md	
+++ /dev/null
@@ -1,110 +0,0 @@
-
-

        CRACK Website Ripper Copier 3.9.2 Crack

        -

        Have you ever wanted to download or copy an entire website for offline viewing or backup? If so, you may have heard of Website Ripper Copier (WRC), a powerful and easy-to-use website downloader and copier software that can help you save Internet information to your local disk drive.

        -

However, WRC is not free software, and you may need to pay for a license to use its full features. That's why some people look for a crack version of WRC that can bypass the registration and activation process.

        -

        CRACK Website Ripper Copier3.9.2 Crack


        Download File ✯✯✯ https://tinourl.com/2uL2mD



        -

        In this article, we will tell you everything you need to know about WRC 3.9.2 crack version, including its features, how to use it, its pros and cons, and some alternatives that you can try.

        -

        Features of WRC 3.9.2

        -

WRC 3.9.2, released in 2014, is one of the latest versions of WRC. It offers practical and robust features that help you download or copy any website with ease.

        -

        Some of the main features of WRC 3.9.2 are:

        -
          -
        • Download hundreds of thousands of files in a single project
        • -
        • Save website files with resumption support
        • -
• Copy only certain parts of a website via powerful filters
        • -
• Download password-protected websites and copy-protected webpages
        • -
        • Support HTTPS/SSL, HTTP, FTP, and proxy servers
        • -
• Explore website link structures and archive entire websites
        • -
• Schedule automatic website backups at any time
        • -
        -

        With these features, you can download or copy any website you want with just a few clicks, and browse it offline anywhere with any browser.

        -

        Website Ripper Copier 3.9.2 + Crack Torrent Download
        -Website Ripper Copier 3.9.2 Crack+serial key free download Fullsoft1
        -Website Ripper Copier3.9.2 Crack .rar Trello
        -Website Ripper Copier3.9.2 Crack .rar gallery-exaart
        -Website Ripper Copier 3.9.2 Full Version Free Download
        -Website Ripper Copier 3.9.2 Activation Key Download
        -Website Ripper Copier 3.9.2 Patched Download
        -Website Ripper Copier 3.9.2 Cracked Software Download
        -Website Ripper Copier 3.9.2 License Key Generator
        -Website Ripper Copier 3.9.2 Keygen Download
        -Website Ripper Copier 3.9.2 Registration Code Download
        -Website Ripper Copier 3.9.2 Serial Number Download
        -Website Ripper Copier 3.9.2 Product Key Download
        -Website Ripper Copier 3.9.2 Crack Free Download
        -Website Ripper Copier 3.9.2 Crack Full Version Download
        -Website Ripper Copier 3.9.2 Crack Latest Version Download
        -Website Ripper Copier 3.9.2 Crack Updated Version Download
        -Website Ripper Copier 3.9.2 Crack Working Version Download
        -Website Ripper Copier 3.9.2 Crack No Survey Download
        -Website Ripper Copier 3.9.2 Crack No Password Download
        -Website Ripper Copier 3.9.2 Crack Direct Link Download
        -Website Ripper Copier 3.9.2 Crack Magnet Link Download
        -Website Ripper Copier 3.9.2 Crack SolidTorrents Download
        -Website Ripper Copier 3.9.2 Crack Blogger Download
        -Website Ripper Copier 3.9.2 Crack Offline Browser Download
        -Website Ripper Copier 3.9.2 Crack Site Downloader Download
        -Website Ripper Copier 3.9.2 Crack Site Extractor Download
        -Website Ripper Copier 3.9.2 Crack Site Mirrorer Download
        -Website Ripper Copier 3.9.2 Crack Site Validator Download
        -Website Ripper Copier 3.9.2 Crack Web Crawler Download
        -Website Ripper Copier 3.9.2 Crack Web Spider Download
        -Website Ripper Copier 3.9.2 Crack Web Scraper Download
        -Website Ripper Copier 3.9.2 Crack Web Data Extractor Download
        -Website Ripper Copier 3.9.2 Crack Web Data Miner Download
        -Website Ripper Copier 3.9.2 Crack Web Data Harvester Download
        -Website Ripper Copier 3.9.2 Crack Web Data Collector Download
        -Website Ripper Copier 3.9

        -

How to use the WRC 3.9.2 crack version

        -

If you want to use the WRC 3.9.2 crack version for free, follow these steps:

        -
          -
        1. Download and install WRC 3.9.2 crack version from a reliable source.
        2. -
        3. Launch WRC and create a new project with the website URL you want to download or copy.
        4. -
        5. Adjust the settings and filters according to your needs and preferences.
        6. -
        7. Start the download or copy process and wait for it to finish.
        8. -
        9. Browse the downloaded or copied website offline with any browser.
        10. -
        -

        That's it! You have successfully downloaded or copied a website using WRC 3.9.2 crack version.

        -

Pros and cons of the WRC 3.9.2 crack version

        -

Using the WRC 3.9.2 crack version may seem tempting, but it comes with advantages and disadvantages that you should be aware of.

        -

        Some of the pros of using WRC 3.9.2 crack version are:

        -
          -
        • It's free: you don't have to pay for a license or subscription fee to use WRC 3.9.2 crack version.
        • -
• It's easy to use: you don't have to register or activate the WRC 3.9.2 crack version; just download, install, and use it as you wish.
        • -
        • It's powerful: you can enjoy all the features of WRC 3.9.2 without any limitations or restrictions.
        • -
        • It's fast: you can download or copy websites quickly and efficiently with WRC 3.9.2 crack version.
        • -
        • It's flexible: you can customize your download or copy project with various settings and filters with WRC 3.9.2 crack version.
        • -
        -

        Some of the cons of using WRC 3.9.2 crack version are:

        -
          -
        • It may contain viruses, malware, or spyware: since WRC 3.9.2 crack version is not an official release from the developer, it may have been modified by hackers or scammers who may inject malicious code into it.
        • -
        • It may not work properly or crash: since WRC 3.9.2 crack version is not an updated release from the developer, it may have some bugs or errors that may affect its performance or stability.
        • -
        • It may violate the terms of service or copyright of the original website: since WRC 3.9.2 crack version is not an authorized release from the developer, it may infringe on the rights of the original website owner who may not allow their content to be downloaded or copied without permission.
        • -
        -

        Alternatives to WRC 3.9.2 crack version

        -

        If you are not comfortable with using WRC 3.9.2 crack version, or if you want to try some other options that are free and safe, here are some alternatives that you can consider:

| Name | Description | Website |
| --- | --- | --- |
| HTTrack | A free and open source offline browser utility that can download or mirror a website recursively while preserving its original structure. | https://www.httrack.com/ |
| Cyotek WebCopy | A free tool that can copy websites to your local disk for offline browsing while allowing you to configure various rules and options. | https://www.cyotek.com/cyotek-webcopy |
| Getleft | A free and cross-platform website grabber that can download websites recursively while supporting multiple languages and protocols. | http://getleft.sourceforge.net/ |
        -
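To give a concrete sense of what these offline-browsing tools automate at a much larger scale, here is a minimal Python sketch that saves a single page and the images it references for offline viewing. It is only an illustration, not a replacement for the tools above: the `requests` and `beautifulsoup4` packages are assumed to be installed, and the URL and output folder are placeholders you would replace.

```python
# Minimal single-page mirror: saves one HTML page plus the images it references.
# Assumes `pip install requests beautifulsoup4`; URL and output folder are placeholders.
import os
from urllib.parse import urljoin, urlparse

import requests
from bs4 import BeautifulSoup

def mirror_page(url, out_dir="mirror"):
    os.makedirs(out_dir, exist_ok=True)
    html = requests.get(url, timeout=30).text
    soup = BeautifulSoup(html, "html.parser")

    for img in soup.find_all("img", src=True):
        img_url = urljoin(url, img["src"])    # resolve relative links against the page URL
        name = os.path.basename(urlparse(img_url).path) or "image"
        with open(os.path.join(out_dir, name), "wb") as f:
            f.write(requests.get(img_url, timeout=30).content)
        img["src"] = name                     # point the saved copy at the local file

    with open(os.path.join(out_dir, "index.html"), "w", encoding="utf-8") as f:
        f.write(str(soup))

mirror_page("https://example.com/")
```

Tools like HTTrack, Cyotek WebCopy, and Getleft add what this sketch leaves out: recursion across linked pages, include/exclude filters, rate limiting, and rewriting of every internal link so the whole site browses correctly offline.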

        Conclusion

        -

In conclusion, WRC is a useful tool that can help you save Internet information for offline use or backup. However, using a crack version may pose some risks and challenges, such as security, stability, and legality issues. Therefore, you should be careful and responsible when using a crack version, or consider using some alternatives that are free and safe.

        -

        We hope this article has given you some useful information and insights about WRC 3.9.2 crack version and its alternatives. If you have any questions or feedback, please feel free to leave a comment below.

        -

        FAQs

        -
          -
        1. What is the difference between downloading and copying a website?
        2. -

          Downloading a website means saving all the files from the website to your local disk drive, while copying a website means creating a mirror or clone of the website on your local disk drive.

          -
        3. Why do I need to download or copy a website?
        4. -

          You may need to download or copy a website for various reasons, such as offline viewing, backup, research, analysis, testing, archiving, or migration.

          -
        5. Is it legal to download or copy a website?
        6. -

It depends on the terms of service and copyright of the original website. Some websites allow you to download or copy their content for personal or educational use, while others prohibit or restrict it. Always check the website's terms of service and copyright before downloading or copying it; a quick programmatic robots.txt check is sketched after these FAQs.

          -
        7. Is it safe to use WRC 3.9.2 crack version?
        8. -

          Not necessarily. WRC 3.9.2 crack version is not an official release from the developer, and it may contain viruses, malware, or spyware that can harm your computer or data. It may also not work properly or crash due to bugs or errors. Moreover, it may violate the rights of the original website owner who may not allow their content to be downloaded or copied without permission.

          -
        9. What are some alternatives to WRC 3.9.2 crack version?
        10. -

          Some alternatives to WRC 3.9.2 crack version are HTTrack, Cyotek WebCopy, and Getleft. These are free and safe tools that can help you download or copy websites to your local disk drive for offline use or backup.

          -
        -
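As a follow-up to the legality question above, here is a small Python sketch (standard library only) that checks a site's robots.txt before you mirror anything. robots.txt expresses the site's crawling policy and does not replace its terms of service; the URL below is a placeholder.

```python
# Quick robots.txt check before mirroring a site (Python standard library only).
from urllib.parse import urlparse
from urllib.robotparser import RobotFileParser

def allowed_to_fetch(page_url, user_agent="*"):
    parsed = urlparse(page_url)
    rp = RobotFileParser()
    rp.set_url(f"{parsed.scheme}://{parsed.netloc}/robots.txt")
    rp.read()                                  # download and parse the robots.txt file
    return rp.can_fetch(user_agent, page_url)  # True if this agent may fetch the page

print(allowed_to_fetch("https://example.com/some/page.html"))
```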

        0a6ba089eb
        -
        -
        \ No newline at end of file diff --git a/spaces/raedeXanto/academic-chatgpt-beta/Dragon City Ultimate Hack Tool V1 [No Security Key].rar !!TOP!!.md b/spaces/raedeXanto/academic-chatgpt-beta/Dragon City Ultimate Hack Tool V1 [No Security Key].rar !!TOP!!.md deleted file mode 100644 index 584a33adbebcfc12fc578980188c7df48fb1e209..0000000000000000000000000000000000000000 --- a/spaces/raedeXanto/academic-chatgpt-beta/Dragon City Ultimate Hack Tool V1 [No Security Key].rar !!TOP!!.md +++ /dev/null @@ -1,33 +0,0 @@ - -

        Dragon City Ultimate Hack Tool v1: How to Get Unlimited Gems, Gold and Food without Security Key

        -

        Dragon City is a popular game where you can breed and collect different types of dragons. However, it can be hard to progress in the game without spending real money on gems, gold and food. That's why many players are looking for a way to hack the game and get unlimited resources for free.

        -

One of the most searched hacks for Dragon City is the Dragon City Ultimate Hack Tool v1, which claims to give you unlimited gems, gold and food without requiring a security key. But does this hack tool really work? And is it safe to use?

        -

        Dragon City Ultimate Hack Tool v1 [No Security Key].rar


        Download Zip –––––>>> https://tinourl.com/2uL0BI



        -

In this article, we will review the Dragon City Ultimate Hack Tool v1 and tell you everything you need to know about it, including whether it can be downloaded and used safely.

        -

        What is the Dragon City Ultimate Hack Tool v1?

        -

The Dragon City Ultimate Hack Tool v1 is a software program that can allegedly generate unlimited gems, gold and food for your Dragon City account. It is supposed to work on any device and platform, including Android, iOS, Windows and Mac.

        -

        The hack tool claims to be undetectable by the game's anti-cheat system and to have a user-friendly interface. It also claims to be free of viruses, malware and spyware.

        -

        The hack tool is available as a .rar file that you need to extract and run on your device. However, before you can access the hack tool, you need to complete a survey or an offer from a third-party website. This is supposed to be a verification step to prevent bots and spam.

        -

        Does the Dragon City Ultimate Hack Tool v1 really work?

        -

        The short answer is no. The Dragon City Ultimate Hack Tool v1 is a scam that does not work as advertised. It is actually a phishing scheme that tries to steal your personal information and infect your device with malware.

        -

        Here are some of the reasons why you should avoid this hack tool at all costs:

        -
          -
        • The hack tool requires you to complete a survey or an offer before you can download it. This is a common trick used by scammers to make money from your clicks or downloads. The survey or offer may also ask you for your personal information, such as your email address, phone number or credit card details. This information can be used for identity theft or fraud.
        • -
        • The hack tool is not compatible with any device or platform. The .rar file that you download is actually a fake file that contains malware or viruses. If you try to open it, you may damage your device or compromise its security.
        • -
        • The hack tool does not generate unlimited gems, gold and food for your Dragon City account. The game's servers are protected by encryption and authentication protocols that prevent any unauthorized modification of the game data. The hack tool cannot bypass these security measures and cannot access your account.
        • -
        • The hack tool may expose your account to ban or suspension. The game's developers monitor the game activity and detect any abnormal behavior or suspicious transactions. If they find out that you are using a hack tool, they may ban or suspend your account permanently.
        • -
        -

        How to get unlimited gems, gold and food in Dragon City legally?

        -

        If you want to get unlimited gems, gold and food in Dragon City without risking your account or your device, there are some legal ways that you can try. Here are some of them:

        -
          -
        • Play the game regularly and complete the quests, achievements and events. These will reward you with gems, gold and food that you can use to buy or upgrade your dragons.
        • -
        • Join an active alliance and participate in the alliance wars and chests. These will give you more opportunities to earn gems, gold and food as well as other rewards.
        • -
        • Watch ads and videos in the game. These will give you free gems, gold or food every day.
        • -
        • Use the dragon market and the dragonarium. These will allow you to sell or breed your dragons for more gems, gold or food.
        • -
        • Use the social features of the game. These will allow you to invite your friends, visit their islands, send them gifts and receive gifts from them.
        • -
        -

        Conclusion

        -

The Dragon City Ultimate Hack Tool v1 is a scam that does not deliver what it promises and may put your personal information and your device at risk. If you want more gems, gold and food, stick to the legal methods described above and keep your account safe.

        -

        81aa517590
        -
        -
        \ No newline at end of file diff --git a/spaces/raedeXanto/academic-chatgpt-beta/HD Online Player (Life Is Beautiful! in hindi movie du) - Watch the Romantic Comedy Film Online.md b/spaces/raedeXanto/academic-chatgpt-beta/HD Online Player (Life Is Beautiful! in hindi movie du) - Watch the Romantic Comedy Film Online.md deleted file mode 100644 index a220168082441b61ec56d812d5c147b0ca9adf45..0000000000000000000000000000000000000000 --- a/spaces/raedeXanto/academic-chatgpt-beta/HD Online Player (Life Is Beautiful! in hindi movie du) - Watch the Romantic Comedy Film Online.md +++ /dev/null @@ -1,91 +0,0 @@ - -

        HD Online Player (Life Is Beautiful! in hindi movie du)

        -

        Are you looking for a way to watch Life Is Beautiful, one of the most acclaimed and award-winning movies of all time, in Hindi online? If yes, then you are in luck. In this article, we will tell you everything you need to know about this movie, why you should watch it in Hindi, how to watch it online, and what are the benefits and challenges of doing so. So, sit back, relax, and get ready to enjoy this masterpiece of cinema in a new and exciting way.

        -

        Introduction

        -

        What is Life Is Beautiful?

        -

        Life Is Beautiful is a 1997 Italian comedy-drama film directed by and starring Roberto Benigni. The film tells the story of a Jewish-Italian bookshop owner, Guido Orefice, who uses his imagination and humor to protect his son from the horrors of a Nazi concentration camp during World War II. The film won three Academy Awards, including Best Foreign Language Film, Best Actor for Benigni, and Best Original Dramatic Score. It also received widespread critical acclaim and became one of the highest-grossing non-English language films of all time.

        -

        HD Online Player (Life Is Beautiful! in hindi movie du)


        DOWNLOAD ——— https://tinourl.com/2uL4mt



        -

        Why watch it in Hindi?

        -

        If you are a fan of foreign films, you might be wondering why you should watch Life Is Beautiful in Hindi instead of its original language, Italian. Well, there are several reasons for that. First of all, watching a movie in a different language can give you a new perspective and appreciation for the film. You can notice different nuances, expressions, and emotions that might be lost in translation. Secondly, watching a movie in Hindi can help you learn some new words and phrases in this widely spoken and beautiful language. You can also compare and contrast the similarities and differences between Italian and Hindi cultures and histories. Thirdly, watching a movie in Hindi can make you feel more connected and empathetic with the characters and their struggles. You can relate to their joys and sorrows more easily and deeply.

        -

        How to watch it online?

        -

Now that you have decided to watch Life Is Beautiful in Hindi online, you might be wondering how to do that. Well, there are several options available to you. You can either stream it on a legal and licensed platform like Netflix or Amazon Prime Video, or download it via a torrent or magnet link from a source you trust. However, before you choose any option, make sure that you have a good internet connection, a compatible device, and a subscription or payment method if required. Also, be aware of the risks involved in downloading or streaming pirated or illegal content online. You might face legal consequences or malware infections if you are not careful.

        -

        Benefits of watching Life Is Beautiful in Hindi online

        -

        Enjoy the original dialogues and emotions

        -

        One of the main benefits of watching Life Is Beautiful in Hindi online is that you can enjoy the original dialogues and emotions of the film. You can hear the actors' voices and expressions as they were intended by the director and the scriptwriter. You can also catch the subtle jokes, puns, and references that might not be translated well in other languages. For example, when Guido pretends to be an inspector from the Ministry of Health and asks for Dr. Lessing's name, he says "Lessing... Lessing... Lessing...". This is a play on words with "lessi", which means "you boiled" in Italian. This is funny because Guido is trying to make fun of Dr. Lessing's baldness.

        -

        Learn some Hindi words and phrases

        -

        Another benefit of watching Life Is Beautiful in Hindi online is that you can learn some new words and phrases in this rich and diverse language. You can pick up some common greetings, expressions, compliments, insults, questions, answers, etc. that you can use in your daily life or when traveling to India or other Hindi-speaking countries. For example, when Guido meets Dora for the first time at the school where she works as a teacher, he says "Namaste" to her. This is a polite and respectful way of saying hello or goodbye in Hindi. You can also learn some more complex words and sentences that might come handy in different situations. For example, when Guido tells his son that they are playing a game where they have to collect points to win a tank, he says "Hum ek khel khel rahe hain jismein humein ank jama karne hain tank jeetne ke liye". This means "We are playing a game where we have to collect points to win a tank".

        -

        Appreciate the cultural diversity and similarities

        -

        A third benefit of watching Life Is Beautiful in Hindi online is that you can appreciate the cultural diversity and similarities between Italy and India. You can notice how both countries have rich histories, traditions, arts, cuisines, religions, languages, etc. that make them unique and fascinating. You can also see how both countries have faced oppression, violence, discrimination, poverty, etc. that make them resilient and courageous. You can also see how both countries have shared values like family, love, humor, etc. that make them human and relatable.

        -

        Save time and money

        -

A fourth benefit of watching Life Is Beautiful in Hindi online is that you can save time and money compared to watching it in other ways. You don't have to go to a cinema hall or buy a DVD or Blu-ray disc to watch it. You don't have to pay for transportation, parking, snacks, tickets, taxes, or tips, and you don't have to wait in long queues or sit through ads, trailers, and intermissions. You can watch it anytime, anywhere, at your own convenience and comfort.

        -

        Watch Life Is Beautiful! full movie online in HD with Hindi dubbing
        -Life Is Beautiful! Hindi dubbed movie streaming on HD Online Player
        -How to download Life Is Beautiful! movie in HD and Hindi language
        -Life Is Beautiful! HD Online Player review and ratings
        -Best sites to watch Life Is Beautiful! Hindi movie online in HD quality
        -Life Is Beautiful! movie cast, plot and trivia in Hindi
        -HD Online Player features and benefits for watching Life Is Beautiful! in Hindi
        -Life Is Beautiful! movie download link in HD and Hindi audio
        -Watch Life Is Beautiful! online free with HD Online Player and Hindi subtitles
        -Life Is Beautiful! Hindi movie trailer and songs on HD Online Player
        -Life Is Beautiful! movie awards and nominations in Hindi cinema
        -HD Online Player alternatives for watching Life Is Beautiful! in Hindi
        -Life Is Beautiful! movie facts and quotes in Hindi language
        -How to watch Life Is Beautiful! movie offline in HD and Hindi dubbing
        -Life Is Beautiful! movie release date and box office collection in India
        -HD Online Player premium subscription for watching Life Is Beautiful! in Hindi
        -Life Is Beautiful! movie director and producer interview in Hindi
        -How to fix HD Online Player issues while watching Life Is Beautiful! in Hindi
        -Life Is Beautiful! movie behind the scenes and making of video in Hindi
        -HD Online Player customer support and feedback for Life Is Beautiful! in Hindi movie
        -Life Is Beautiful! movie comparison and analysis with other Hindi movies
        -HD Online Player tips and tricks for watching Life Is Beautiful! in Hindi
        -Life Is Beautiful! movie fan theories and speculations in Hindi
        -How to share HD Online Player link for watching Life Is Beautiful! in Hindi with friends
        -Life Is Beautiful! movie sequel and prequel news and rumors in Hindi
        -HD Online Player app download and installation for watching Life Is Beautiful! in Hindi on mobile devices
        -Life Is Beautiful! movie memes and jokes in Hindi language
        -How to create HD Online Player account for watching Life Is Beautiful! in Hindi
        -Life Is Beautiful! movie controversies and scandals in Hindi media
        -HD Online Player referral program and rewards for watching Life Is Beautiful! in Hindi
        -Life Is Beautiful! movie merchandise and products in Hindi market
        -HD Online Player system requirements and compatibility for watching Life Is Beautiful! in Hindi on different devices
        -Life Is Beautiful! movie recommendations and suggestions based on your preferences in Hindi cinema
        -HD Online Player privacy policy and terms of service for watching Life Is Beautiful! in Hindi
        -Life Is Beautiful! movie history and background information in Hindi culture
        -HD Online Player coupon codes and discounts for watching Life Is Beautiful! in Hindi
        -Life Is Beautiful! movie testimonials and reviews from other users who watched it on HD Online Player in Hindi
        -How to cancel HD Online Player subscription after watching Life Is Beautiful! in Hindi
        -Life Is Beautiful! movie social media pages and groups in Hindi language
        -HD Online Player FAQs and troubleshooting for watching Life Is Beautiful! in Hindi

        -

        Challenges of watching Life Is Beautiful in Hindi online

        -

        Finding a reliable and legal source

        -

One of the main challenges of watching Life Is Beautiful in Hindi online is finding a reliable and legal source that offers high-quality video and audio without interruptions or errors. You might encounter many websites or apps that claim to offer this service but turn out to be fraudulent, malicious, outdated, incomplete, or simply low quality. You might also face legal issues if you access pirated or illegal content online without proper permission or a license from the creators or distributors of the film, and ethical issues if you support practices that harm the film industry and its workers.

        -

        Dealing with subtitles and audio quality

        -

Another challenge of watching Life Is Beautiful in Hindi online is dealing with subtitles and audio quality that might not match your expectations or preferences. You might find subtitles that are inaccurate, inconsistent, missing, or out of sync, and audio that is poor, distorted, or mismatched. You might also have difficulty understanding some words or accents.

        -

        Avoiding spoilers and ads

        -

A third challenge of watching Life Is Beautiful in Hindi online is avoiding spoilers and ads that might ruin your experience or enjoyment of the film. You might come across spoilers that reveal important plot points, twists, or the ending, as well as ads that interrupt your viewing, distract your attention, or simply annoy you.

        -

        Respecting the creators and distributors

        -

A fourth challenge of watching Life Is Beautiful in Hindi online is respecting the creators and distributors of the film, who have put their hard work, talent, passion, and money into making this masterpiece. You might feel tempted to share, download, copy, or modify the film without giving proper credit, permission, or compensation to the original owners, or to disregard the artistic vision, message, and purpose of the film.

        -

        Conclusion

        -

        Summary of the main points

        -

        In conclusion, watching Life Is Beautiful in Hindi online is a great way to enjoy this classic film in a new and different way. You can benefit from hearing the original dialogues and emotions, learning some Hindi words and phrases, appreciating the cultural diversity and similarities, and saving time and money. However, you also have to face some challenges like finding a reliable and legal source, dealing with subtitles and audio quality, avoiding spoilers and ads, and respecting the creators and distributors. Therefore, you have to weigh the pros and cons and decide for yourself if this is something you want to do or not.

        -

        Call to action

        -

        If you are interested in watching Life Is Beautiful in Hindi online, we have some good news for you. We have found a reliable and legal source that offers HD quality video and audio with accurate and synchronized subtitles. You can stream or download the film from this link: https://www.example.com/lifeisbeautifulinhindi (Note: This is a fictitious link for illustration purposes only). You can also check out some reviews and ratings from other viewers who have watched the film in Hindi online. We hope you enjoy this amazing film as much as we did.

        -

        FAQs

        -

        Here are some frequently asked questions and answers about watching Life Is Beautiful in Hindi online.

        -
          -
        • Q: Is Life Is Beautiful available in other languages besides Italian and Hindi?
        • -
        • A: Yes, Life Is Beautiful has been dubbed or subtitled in many other languages like English, French, Spanish, German, Chinese, Japanese, etc. You can find them on various platforms like Netflix or Amazon Prime Video or YouTube or others.
        • -
        • Q: How long is Life Is Beautiful?
        • -
        • A: Life Is Beautiful has a runtime of 116 minutes or 1 hour and 56 minutes.
        • -
        • Q: What is the rating of Life Is Beautiful?
        • -
        • A: Life Is Beautiful has a rating of PG-13 in the US, which means that some material may be inappropriate for children under 13. The film contains some scenes of violence, cruelty, and death related to the Holocaust.
        • -
        • Q: Who are the main actors of Life Is Beautiful?
        • -
        • A: The main actors of Life Is Beautiful are Roberto Benigni as Guido Orefice, Nicoletta Braschi as Dora Orefice, Giorgio Cantarini as Giosué Orefice, Giustino Durano as Eliseo Orefice, Horst Buchholz as Dr. Lessing, and Sergio Bini Bustric as Ferruccio Papini.
        • -
        • Q: What are some other films similar to Life Is Beautiful?
        • -
        • A: Some other films similar to Life Is Beautiful are Schindler's List (1993), The Pianist (2002), The Boy in the Striped Pyjamas (2008), Jojo Rabbit (2019), etc.
        • -
        -

        0a6ba089eb
        -
        -
        \ No newline at end of file diff --git a/spaces/randstad/Workllama_Simple_Resume_Analyzer/README.md b/spaces/randstad/Workllama_Simple_Resume_Analyzer/README.md deleted file mode 100644 index b89732b195ffbf7e1a268a79f05487429e2df39c..0000000000000000000000000000000000000000 --- a/spaces/randstad/Workllama_Simple_Resume_Analyzer/README.md +++ /dev/null @@ -1,10 +0,0 @@ ---- -title: Workllama Simple Resume Analyzer -emoji: 📈 -colorFrom: purple -colorTo: pink -sdk: gradio -sdk_version: 3.35.2 -app_file: app.py -pinned: false ---- diff --git a/spaces/rayan-saleh/whisper2notion/server/node_modules/asynckit/lib/iterate.js b/spaces/rayan-saleh/whisper2notion/server/node_modules/asynckit/lib/iterate.js deleted file mode 100644 index 5d2839a590b2bac46489909265bc3010fdc62b28..0000000000000000000000000000000000000000 --- a/spaces/rayan-saleh/whisper2notion/server/node_modules/asynckit/lib/iterate.js +++ /dev/null @@ -1,75 +0,0 @@ -var async = require('./async.js') - , abort = require('./abort.js') - ; - -// API -module.exports = iterate; - -/** - * Iterates over each job object - * - * @param {array|object} list - array or object (named list) to iterate over - * @param {function} iterator - iterator to run - * @param {object} state - current job status - * @param {function} callback - invoked when all elements processed - */ -function iterate(list, iterator, state, callback) -{ - // store current index - var key = state['keyedList'] ? state['keyedList'][state.index] : state.index; - - state.jobs[key] = runJob(iterator, key, list[key], function(error, output) - { - // don't repeat yourself - // skip secondary callbacks - if (!(key in state.jobs)) - { - return; - } - - // clean up jobs - delete state.jobs[key]; - - if (error) - { - // don't process rest of the results - // stop still active jobs - // and reset the list - abort(state); - } - else - { - state.results[key] = output; - } - - // return salvaged results - callback(error, state.results); - }); -} - -/** - * Runs iterator over provided job element - * - * @param {function} iterator - iterator to invoke - * @param {string|number} key - key/index of the element in the list of jobs - * @param {mixed} item - job description - * @param {function} callback - invoked after iterator is done with the job - * @returns {function|mixed} - job abort function or something else - */ -function runJob(iterator, key, item, callback) -{ - var aborter; - - // allow shortcut if iterator expects only two arguments - if (iterator.length == 2) - { - aborter = iterator(item, async(callback)); - } - // otherwise go with full three arguments - else - { - aborter = iterator(item, key, async(callback)); - } - - return aborter; -} diff --git a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Downloadwindows7ultimatelite.md b/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Downloadwindows7ultimatelite.md deleted file mode 100644 index 190489a923e32fe6e924952bbce95b63223a5ef6..0000000000000000000000000000000000000000 --- a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Downloadwindows7ultimatelite.md +++ /dev/null @@ -1,11 +0,0 @@ - -

        http://territorio-mexico.com/wp-content/uploads/2016/04/Downloadwindows7ultimatelite.zip http://propcurix.yolasite.com/resources/Contacts-Pro-v5890-Plus-Latest.pdfhttps://petit-cybertoy.weebly.com/files/downloadwindows7ultimatelite.html Vidas Lote 64 ark123 https://coub.com/stories/3010869-downloadwindows7ultimatelite-verified. #360. Andreis http://territorio-mexico.com/wp-content/uploads/2016/04/Downloadwindows7ultimatelite.zip

        -

        downloadwindows7ultimatelite


        DOWNLOAD ✒ ✒ ✒ https://urlgoal.com/2uCMxc



        -

        http://petit-cybertoy.weebly.com/files/downloadwindows7ultimatelite.html Vidas Lote 64 ark123 https://coub.com/stories/3010869-downloadwindows7ultimatelite-verified. #359. Andreis http://territorio-mexico.com/wp-content/uploads/2016/04/Downloadwindows7ultimatelite.zip

        -

        http://petit-cybertoy.weebly.com/files/downloadwindows7ultimatelite.html Vidas Lote 64 ark123 https://coub.com/stories/3010869-downloadwindows7ultimatelite-verified. #358. Andreis http://territorio-mexico.com/wp-content/uploads/2016/04/Downloadwindows7ultimatelite.zip

        -

        http://petit-cybertoy.weebly.com/files/downloadwindows7ultimatelite.html Vidas Lote 64 ark123 https://coub.com/stories/3010869-downloadwindows7ultimatelite-verified. #357. Andreis http://territorio-mexico.com/wp-content/uploads/2016/04/Downloadwindows7ultimatelite.zip

        -

        https://doforsomi.weebly.com/downloadwindows7ultimatelite.html http://propcurix.yolasite.com/resources/Contacts-Pro-v5890-Plus-Latest.pdf 0d69ce259f3 https://trello.com/c/pYJA3E80/39-nicelabel-designer-express-6-crack

        -

        -

        http://www.letstalknetwork.com/downloadwindows7ultimatelite-first-touch-7-1-crack-32-bit-key.html http://uasoft.net/shouryat/downloadwindows7ultimatelite.htm http://midivarg.com/downloadwindows7ultimatelite/ http://trello.com/c/pYJA3E80/39-nicelabel-designer-express-6-crack

        899543212b
        -
        -
        \ No newline at end of file diff --git a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/HD-Audio Solo Ultra 4.rar !!BETTER!!.md b/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/HD-Audio Solo Ultra 4.rar !!BETTER!!.md deleted file mode 100644 index 1b9093c777fda40c6a1d55c4c1fe5b014ce87738..0000000000000000000000000000000000000000 --- a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/HD-Audio Solo Ultra 4.rar !!BETTER!!.md +++ /dev/null @@ -1,30 +0,0 @@ - -

        How to Master High-Definition Audio with HD-Audio Solo Ultra 4

        -

        If you are looking for a software that can help you create high-quality audio discs and files, you might want to check out HD-Audio Solo Ultra 4. This is a powerful and versatile application that allows you to import audio tracks from various sources, edit them, and burn them to Blu-ray, DVD, or CD discs. You can also export them to different formats for streaming or playback on various devices.

        -

        HD-Audio Solo Ultra 4.rar


        Download Ziphttps://urlgoal.com/2uCJtG



        -

        In this article, we will show you some of the features and benefits of HD-Audio Solo Ultra 4, and how you can use it to master your own high-definition audio projects.

        -

        What is HD-Audio Solo Ultra 4?

        -

        HD-Audio Solo Ultra 4 is a media remastering and authoring application developed by Cirlinca, Inc. It is designed to work with high-resolution audio formats, such as WAV, FLAC, AIF, AAC, WMA, and DTS-HD Master Audio. It can also rip audio tracks from CD discs or record them from your sound card.

        -

        With HD-Audio Solo Ultra 4, you can edit your audio tracks using various tools, such as resampling, upmixing, volume adjustment, crossfade, and gapless playback. You can also add pictures and menus to your tracks to create a more interactive and attractive presentation.

        -

        Once you are satisfied with your audio project, you can burn it to Blu-ray, DVD-Audio, DVD-Video, or CD discs using the built-in burner. You can also export it to ISO files or BDMV folders with menus or files for streaming by Blu-ray or media players. You can also convert it to other formats, such as MP3, FLAC, OGG, CUE, and more.

        -

        Why use HD-Audio Solo Ultra 4?

        -

        HD-Audio Solo Ultra 4 has many advantages over other audio software. Here are some of them:

        -

        -
          -
        • It supports high-definition audio up to 7.1 channels and 24-bit/192 kHz resolution. This means you can enjoy the best sound quality possible on your home theater system or headphones.
        • -
        • It allows you to create music Blu-ray discs that can store up to 50 GB of audio data. This means you can fit more tracks and longer songs on a single disc.
        • -
        • It is compatible with most Blu-ray and DVD players and devices. This means you can play your audio discs on any platform without worrying about compatibility issues.
        • -
        • It is easy to use and has a user-friendly interface. This means you can create your audio projects without any hassle or confusion.
        • -
        • It is accessible for visually impaired users. This means you can use the software with screen readers and voice commands.
        • -
        -

        How to use HD-Audio Solo Ultra 4?

        -

        To use HD-Audio Solo Ultra 4, you need to follow these steps:

        -
          -
        1. Download and install the software from the developer's website: https://www.cirlinca.com/. You can try the free trial version for 30 days or purchase the full version for $68.95.
        2. -
        3. Launch the software and select the type of project you want to create: Blu-ray Audio/Video Discs (BDAV), DVD-Audio Discs (DVD-A), DVD-Video Discs (DVD-V), or Streaming Files (SF).
        4. -
        5. Add your audio tracks by clicking on the "Add" button. You can import them from files, CD discs, or sound card recording. You can also drag and drop them from your computer.
        6. -
        7. Edit your audio tracks by clicking on the "Edit" button. You can adjust their volume levels, resample them to different resolutions, upmix them from stereo to surround sound, add crossfades and gaps between tracks, and more.
        8. -
        9. Add pictures and menus to your tracks by clicking on the "Pictures" button. You can choose from various templates or create your own custom menus. You can also add text captions and background music to your pictures.
        10. -
        11. Burn your project to a disc by clicking on the "Burn" button. You can choose from different disc types

          d5da3c52bf
          -
          -
          \ No newline at end of file diff --git a/spaces/rgres/Seg2Sat/frontend/.svelte-kit/runtime/client/start.js b/spaces/rgres/Seg2Sat/frontend/.svelte-kit/runtime/client/start.js deleted file mode 100644 index 70387182581a90a6f7f1ec8ae842e79ebde5c8d6..0000000000000000000000000000000000000000 --- a/spaces/rgres/Seg2Sat/frontend/.svelte-kit/runtime/client/start.js +++ /dev/null @@ -1,1789 +0,0 @@ -import { onMount, tick } from 'svelte'; -import { writable } from 'svelte/store'; -import { assets, set_paths } from '../paths.js'; -import Root from '__GENERATED__/root.svelte'; -import { components, dictionary, matchers } from '__GENERATED__/client-manifest.js'; -import { init } from './singletons.js'; - -/** - * @param {unknown} err - * @return {Error} - */ -function coalesce_to_error(err) { - return err instanceof Error || - (err && /** @type {any} */ (err).name && /** @type {any} */ (err).message) - ? /** @type {Error} */ (err) - : new Error(JSON.stringify(err)); -} - -/** - * @param {import('types').LoadOutput} loaded - * @returns {import('types').NormalizedLoadOutput} - */ -function normalize(loaded) { - // TODO remove for 1.0 - // @ts-expect-error - if (loaded.fallthrough) { - throw new Error( - 'fallthrough is no longer supported. Use matchers instead: https://kit.svelte.dev/docs/routing#advanced-routing-matching' - ); - } - - // TODO remove for 1.0 - if ('maxage' in loaded) { - throw new Error('maxage should be replaced with cache: { maxage }'); - } - - const has_error_status = - loaded.status && loaded.status >= 400 && loaded.status <= 599 && !loaded.redirect; - if (loaded.error || has_error_status) { - const status = loaded.status; - - if (!loaded.error && has_error_status) { - return { status: status || 500, error: new Error() }; - } - - const error = typeof loaded.error === 'string' ? new Error(loaded.error) : loaded.error; - - if (!(error instanceof Error)) { - return { - status: 500, - error: new Error( - `"error" property returned from load() must be a string or instance of Error, received type "${typeof error}"` - ) - }; - } - - if (!status || status < 400 || status > 599) { - console.warn('"error" returned from load() without a valid status code — defaulting to 500'); - return { status: 500, error }; - } - - return { status, error }; - } - - if (loaded.redirect) { - if (!loaded.status || Math.floor(loaded.status / 100) !== 3) { - throw new Error( - '"redirect" property returned from load() must be accompanied by a 3xx status code' - ); - } - - if (typeof loaded.redirect !== 'string') { - throw new Error('"redirect" property returned from load() must be a string'); - } - } - - if (loaded.dependencies) { - if ( - !Array.isArray(loaded.dependencies) || - loaded.dependencies.some((dep) => typeof dep !== 'string') - ) { - throw new Error('"dependencies" property returned from load() must be of type string[]'); - } - } - - // TODO remove before 1.0 - if (/** @type {any} */ (loaded).context) { - throw new Error( - 'You are returning "context" from a load function. ' + - '"context" was renamed to "stuff", please adjust your code accordingly.' - ); - } - - return /** @type {import('types').NormalizedLoadOutput} */ (loaded); -} - -/** - * @param {string} path - * @param {import('types').TrailingSlash} trailing_slash - */ -function normalize_path(path, trailing_slash) { - if (path === '/' || trailing_slash === 'ignore') return path; - - if (trailing_slash === 'never') { - return path.endsWith('/') ? 
path.slice(0, -1) : path; - } else if (trailing_slash === 'always' && !path.endsWith('/')) { - return path + '/'; - } - - return path; -} - -class LoadURL extends URL { - /** @returns {string} */ - get hash() { - throw new Error( - 'url.hash is inaccessible from load. Consider accessing hash from the page store within the script tag of your component.' - ); - } -} - -/** @param {HTMLDocument} doc */ -function get_base_uri(doc) { - let baseURI = doc.baseURI; - - if (!baseURI) { - const baseTags = doc.getElementsByTagName('base'); - baseURI = baseTags.length ? baseTags[0].href : doc.URL; - } - - return baseURI; -} - -function scroll_state() { - return { - x: pageXOffset, - y: pageYOffset - }; -} - -/** @param {Event} event */ -function find_anchor(event) { - const node = event - .composedPath() - .find((e) => e instanceof Node && e.nodeName.toUpperCase() === 'A'); // SVG elements have a lowercase name - return /** @type {HTMLAnchorElement | SVGAElement | undefined} */ (node); -} - -/** @param {HTMLAnchorElement | SVGAElement} node */ -function get_href(node) { - return node instanceof SVGAElement - ? new URL(node.href.baseVal, document.baseURI) - : new URL(node.href); -} - -/** @param {any} value */ -function notifiable_store(value) { - const store = writable(value); - let ready = true; - - function notify() { - ready = true; - store.update((val) => val); - } - - /** @param {any} new_value */ - function set(new_value) { - ready = false; - store.set(new_value); - } - - /** @param {(value: any) => void} run */ - function subscribe(run) { - /** @type {any} */ - let old_value; - return store.subscribe((new_value) => { - if (old_value === undefined || (ready && new_value !== old_value)) { - run((old_value = new_value)); - } - }); - } - - return { notify, set, subscribe }; -} - -function create_updated_store() { - const { set, subscribe } = writable(false); - - const interval = +( - /** @type {string} */ (import.meta.env.VITE_SVELTEKIT_APP_VERSION_POLL_INTERVAL) - ); - const initial = import.meta.env.VITE_SVELTEKIT_APP_VERSION; - - /** @type {NodeJS.Timeout} */ - let timeout; - - async function check() { - if (import.meta.env.DEV || import.meta.env.SSR) return false; - - clearTimeout(timeout); - - if (interval) timeout = setTimeout(check, interval); - - const file = import.meta.env.VITE_SVELTEKIT_APP_VERSION_FILE; - - const res = await fetch(`${assets}/${file}`, { - headers: { - pragma: 'no-cache', - 'cache-control': 'no-cache' - } - }); - - if (res.ok) { - const { version } = await res.json(); - const updated = version !== initial; - - if (updated) { - set(true); - clearTimeout(timeout); - } - - return updated; - } else { - throw new Error(`Version check failed: ${res.status}`); - } - } - - if (interval) timeout = setTimeout(check, interval); - - return { - subscribe, - check - }; -} - -/** - * Hash using djb2 - * @param {import('types').StrictBody} value - */ -function hash(value) { - let hash = 5381; - let i = value.length; - - if (typeof value === 'string') { - while (i) hash = (hash * 33) ^ value.charCodeAt(--i); - } else { - while (i) hash = (hash * 33) ^ value[--i]; - } - - return (hash >>> 0).toString(36); -} - -let loading = 0; - -const native_fetch = window.fetch; - -function lock_fetch() { - loading += 1; -} - -function unlock_fetch() { - loading -= 1; -} - -if (import.meta.env.DEV) { - let can_inspect_stack_trace = false; - - const check_stack_trace = async () => { - const stack = /** @type {string} */ (new Error().stack); - can_inspect_stack_trace = 
stack.includes('check_stack_trace'); - }; - - check_stack_trace(); - - window.fetch = (input, init) => { - const url = input instanceof Request ? input.url : input.toString(); - const stack = /** @type {string} */ (new Error().stack); - - const heuristic = can_inspect_stack_trace ? stack.includes('load_node') : loading; - if (heuristic) { - console.warn( - `Loading ${url} using \`window.fetch\`. For best results, use the \`fetch\` that is passed to your \`load\` function: https://kit.svelte.dev/docs/loading#input-fetch` - ); - } - - return native_fetch(input, init); - }; -} - -/** - * @param {RequestInfo} resource - * @param {RequestInit} [opts] - */ -function initial_fetch(resource, opts) { - const url = JSON.stringify(typeof resource === 'string' ? resource : resource.url); - - let selector = `script[sveltekit\\:data-type="data"][sveltekit\\:data-url=${url}]`; - - if (opts && typeof opts.body === 'string') { - selector += `[sveltekit\\:data-body="${hash(opts.body)}"]`; - } - - const script = document.querySelector(selector); - if (script && script.textContent) { - const { body, ...init } = JSON.parse(script.textContent); - return Promise.resolve(new Response(body, init)); - } - - return native_fetch(resource, opts); -} - -const param_pattern = /^(\.\.\.)?(\w+)(?:=(\w+))?$/; - -/** @param {string} id */ -function parse_route_id(id) { - /** @type {string[]} */ - const names = []; - - /** @type {string[]} */ - const types = []; - - // `/foo` should get an optional trailing slash, `/foo.json` should not - // const add_trailing_slash = !/\.[a-z]+$/.test(key); - let add_trailing_slash = true; - - const pattern = - id === '' - ? /^\/$/ - : new RegExp( - `^${decodeURIComponent(id) - .split(/(?:@[a-zA-Z0-9_-]+)?(?:\/|$)/) - .map((segment, i, segments) => { - // special case — /[...rest]/ could contain zero segments - const match = /^\[\.\.\.(\w+)(?:=(\w+))?\]$/.exec(segment); - if (match) { - names.push(match[1]); - types.push(match[2]); - return '(?:/(.*))?'; - } - - const is_last = i === segments.length - 1; - - return ( - segment && - '/' + - segment - .split(/\[(.+?)\]/) - .map((content, i) => { - if (i % 2) { - const [, rest, name, type] = /** @type {RegExpMatchArray} */ ( - param_pattern.exec(content) - ); - names.push(name); - types.push(type); - return rest ? '(.*?)' : '([^/]+?)'; - } - - if (is_last && content.includes('.')) add_trailing_slash = false; - - return ( - content // allow users to specify characters on the file system in an encoded manner - .normalize() - // We use [ and ] to denote parameters, so users must encode these on the file - // system to match against them. We don't decode all characters since others - // can already be epressed and so that '%' can be easily used directly in filenames - .replace(/%5[Bb]/g, '[') - .replace(/%5[Dd]/g, ']') - // '#', '/', and '?' can only appear in URL path segments in an encoded manner. - // They will not be touched by decodeURI so need to be encoded here, so - // that we can match against them. - // We skip '/' since you can't create a file with it on any OS - .replace(/#/g, '%23') - .replace(/\?/g, '%3F') - // escape characters that have special meaning in regex - .replace(/[.*+?^${}()|[\]\\]/g, '\\$&') - ); // TODO handle encoding - }) - .join('') - ); - }) - .join('')}${add_trailing_slash ? '/?' 
: ''}$` - ); - - return { pattern, names, types }; -} - -/** - * @param {RegExpMatchArray} match - * @param {string[]} names - * @param {string[]} types - * @param {Record} matchers - */ -function exec(match, names, types, matchers) { - /** @type {Record} */ - const params = {}; - - for (let i = 0; i < names.length; i += 1) { - const name = names[i]; - const type = types[i]; - const value = match[i + 1] || ''; - - if (type) { - const matcher = matchers[type]; - if (!matcher) throw new Error(`Missing "${type}" param matcher`); // TODO do this ahead of time? - - if (!matcher(value)) return; - } - - params[name] = value; - } - - return params; -} - -/** - * @param {import('types').CSRComponentLoader[]} components - * @param {Record} dictionary - * @param {Record boolean>} matchers - * @returns {import('types').CSRRoute[]} - */ -function parse(components, dictionary, matchers) { - const routes = Object.entries(dictionary).map(([id, [a, b, has_shadow]]) => { - const { pattern, names, types } = parse_route_id(id); - - return { - id, - /** @param {string} path */ - exec: (path) => { - const match = pattern.exec(path); - if (match) return exec(match, names, types, matchers); - }, - a: a.map((n) => components[n]), - b: b.map((n) => components[n]), - has_shadow: !!has_shadow - }; - }); - - return routes; -} - -const SCROLL_KEY = 'sveltekit:scroll'; -const INDEX_KEY = 'sveltekit:index'; - -const routes = parse(components, dictionary, matchers); - -// we import the root layout/error components eagerly, so that -// connectivity errors after initialisation don't nuke the app -const default_layout = components[0](); -const default_error = components[1](); - -const root_stuff = {}; - -// We track the scroll position associated with each history entry in sessionStorage, -// rather than on history.state itself, because when navigation is driven by -// popstate it's too late to update the scroll position associated with the -// state we're navigating from - -/** @typedef {{ x: number, y: number }} ScrollPosition */ -/** @type {Record} */ -let scroll_positions = {}; -try { - scroll_positions = JSON.parse(sessionStorage[SCROLL_KEY]); -} catch { - // do nothing -} - -/** @param {number} index */ -function update_scroll_positions(index) { - scroll_positions[index] = scroll_state(); -} - -/** - * @param {{ - * target: Element; - * session: App.Session; - * base: string; - * trailing_slash: import('types').TrailingSlash; - * }} opts - * @returns {import('./types').Client} - */ -function create_client({ target, session, base, trailing_slash }) { - /** @type {Map} */ - const cache = new Map(); - - /** @type {Array<((href: string) => boolean)>} */ - const invalidated = []; - - const stores = { - url: notifiable_store({}), - page: notifiable_store({}), - navigating: writable(/** @type {import('types').Navigation | null} */ (null)), - session: writable(session), - updated: create_updated_store() - }; - - /** @type {{id: string | null, promise: Promise | null}} */ - const load_cache = { - id: null, - promise: null - }; - - const callbacks = { - /** @type {Array<(opts: { from: URL, to: URL | null, cancel: () => void }) => void>} */ - before_navigate: [], - - /** @type {Array<(opts: { from: URL | null, to: URL }) => void>} */ - after_navigate: [] - }; - - /** @type {import('./types').NavigationState} */ - let current = { - branch: [], - error: null, - session_id: 0, - stuff: root_stuff, - // @ts-ignore - we need the initial value to be null - url: null - }; - - let started = false; - let autoscroll = true; - let updating 
= false; - let session_id = 1; - - /** @type {Promise | null} */ - let invalidating = null; - - /** @type {import('svelte').SvelteComponent} */ - let root; - - /** @type {App.Session} */ - let $session; - - let ready = false; - stores.session.subscribe(async (value) => { - $session = value; - - if (!ready) return; - session_id += 1; - - update(new URL(location.href), [], true); - }); - ready = true; - - let router_enabled = true; - - // keeping track of the history index in order to prevent popstate navigation events if needed - let current_history_index = history.state?.[INDEX_KEY]; - - if (!current_history_index) { - // we use Date.now() as an offset so that cross-document navigations - // within the app don't result in data loss - current_history_index = Date.now(); - - // create initial history entry, so we can return here - history.replaceState( - { ...history.state, [INDEX_KEY]: current_history_index }, - '', - location.href - ); - } - - // if we reload the page, or Cmd-Shift-T back to it, - // recover scroll position - const scroll = scroll_positions[current_history_index]; - if (scroll) { - history.scrollRestoration = 'manual'; - scrollTo(scroll.x, scroll.y); - } - - let hash_navigating = false; - - /** @type {import('types').Page} */ - let page; - - /** @type {{}} */ - let token; - - /** - * @param {string | URL} url - * @param {{ noscroll?: boolean; replaceState?: boolean; keepfocus?: boolean; state?: any }} opts - * @param {string[]} redirect_chain - */ - async function goto( - url, - { noscroll = false, replaceState = false, keepfocus = false, state = {} }, - redirect_chain - ) { - if (typeof url === 'string') { - url = new URL(url, get_base_uri(document)); - } - - if (router_enabled) { - return navigate({ - url, - scroll: noscroll ? scroll_state() : null, - keepfocus, - redirect_chain, - details: { - state, - replaceState - }, - accepted: () => {}, - blocked: () => {} - }); - } - - await native_navigation(url); - } - - /** @param {URL} url */ - async function prefetch(url) { - const intent = get_navigation_intent(url); - - if (!intent) { - throw new Error('Attempted to prefetch a URL that does not belong to this app'); - } - - load_cache.promise = load_route(intent, false); - load_cache.id = intent.id; - - return load_cache.promise; - } - - /** - * Returns `true` if update completes, `false` if it is aborted - * @param {URL} url - * @param {string[]} redirect_chain - * @param {boolean} no_cache - * @param {{hash?: string, scroll: { x: number, y: number } | null, keepfocus: boolean, details: { replaceState: boolean, state: any } | null}} [opts] - * @param {() => void} [callback] - */ - async function update(url, redirect_chain, no_cache, opts, callback) { - const intent = get_navigation_intent(url); - - const current_token = (token = {}); - let navigation_result = intent && (await load_route(intent, no_cache)); - - if ( - !navigation_result && - url.origin === location.origin && - url.pathname === location.pathname - ) { - // this could happen in SPA fallback mode if the user navigated to - // `/non-existent-page`. if we fall back to reloading the page, it - // will create an infinite loop. 
so whereas we normally handle - // unknown routes by going to the server, in this special case - // we render a client-side error page instead - navigation_result = await load_root_error_page({ - status: 404, - error: new Error(`Not found: ${url.pathname}`), - url, - routeId: null - }); - } - - if (!navigation_result) { - await native_navigation(url); - return false; // unnecessary, but TypeScript prefers it this way - } - - // abort if user navigated during update - if (token !== current_token) return false; - - invalidated.length = 0; - - if (navigation_result.redirect) { - if (redirect_chain.length > 10 || redirect_chain.includes(url.pathname)) { - navigation_result = await load_root_error_page({ - status: 500, - error: new Error('Redirect loop'), - url, - routeId: null - }); - } else { - if (router_enabled) { - goto(new URL(navigation_result.redirect, url).href, {}, [ - ...redirect_chain, - url.pathname - ]); - } else { - await native_navigation(new URL(navigation_result.redirect, location.href)); - } - - return false; - } - } else if (navigation_result.props?.page?.status >= 400) { - const updated = await stores.updated.check(); - if (updated) { - await native_navigation(url); - } - } - - updating = true; - - if (opts && opts.details) { - const { details } = opts; - const change = details.replaceState ? 0 : 1; - details.state[INDEX_KEY] = current_history_index += change; - history[details.replaceState ? 'replaceState' : 'pushState'](details.state, '', url); - } - - if (started) { - current = navigation_result.state; - - if (navigation_result.props.page) { - navigation_result.props.page.url = url; - } - - root.$set(navigation_result.props); - } else { - initialize(navigation_result); - } - - // opts must be passed if we're navigating - if (opts) { - const { scroll, keepfocus } = opts; - - if (!keepfocus) { - // Reset page selection and focus - // We try to mimic browsers' behaviour as closely as possible by targeting the - // first scrollable region, but unfortunately it's not a perfect match — e.g. - // shift-tabbing won't immediately cycle up from the end of the page on Chromium - // See https://html.spec.whatwg.org/multipage/interaction.html#get-the-focusable-area - const root = document.body; - const tabindex = root.getAttribute('tabindex'); - - getSelection()?.removeAllRanges(); - root.tabIndex = -1; - root.focus({ preventScroll: true }); - - // restore `tabindex` as to prevent `root` from stealing input from elements - if (tabindex !== null) { - root.setAttribute('tabindex', tabindex); - } else { - root.removeAttribute('tabindex'); - } - } - - // need to render the DOM before we can scroll to the rendered elements - await tick(); - - if (autoscroll) { - const deep_linked = url.hash && document.getElementById(url.hash.slice(1)); - if (scroll) { - scrollTo(scroll.x, scroll.y); - } else if (deep_linked) { - // Here we use `scrollIntoView` on the element instead of `scrollTo` - // because it natively supports the `scroll-margin` and `scroll-behavior` - // CSS properties. 
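The update flow above bounds redirects: a navigation is abandoned once the redirect chain grows past ten entries or revisits a pathname it has already passed through, and a "Redirect loop" error page is rendered instead. A minimal Python sketch of that guard, with illustrative names only (this is not SvelteKit's API):

```python
# Hypothetical sketch of the redirect-chain guard described above.
MAX_REDIRECTS = 10

def follow_redirect(pathname, redirect_chain):
    """Return the extended chain, or None when the redirect must be treated as a loop."""
    if len(redirect_chain) > MAX_REDIRECTS or pathname in redirect_chain:
        return None  # caller renders a "Redirect loop" error page instead
    return redirect_chain + [pathname]

chain = []
for target in ["/login", "/account", "/login"]:  # "/login" repeats, so a loop is reported
    chain = follow_redirect(target, chain)
    if chain is None:
        print("redirect loop detected")
        break
```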
- deep_linked.scrollIntoView(); - } else { - scrollTo(0, 0); - } - } - } else { - // in this case we're simply invalidating - await tick(); - } - - load_cache.promise = null; - load_cache.id = null; - autoscroll = true; - - if (navigation_result.props.page) { - page = navigation_result.props.page; - } - - const leaf_node = navigation_result.state.branch[navigation_result.state.branch.length - 1]; - router_enabled = leaf_node?.module.router !== false; - - if (callback) callback(); - - updating = false; - } - - /** @param {import('./types').NavigationResult} result */ - function initialize(result) { - current = result.state; - - const style = document.querySelector('style[data-sveltekit]'); - if (style) style.remove(); - - page = result.props.page; - - root = new Root({ - target, - props: { ...result.props, stores }, - hydrate: true - }); - - if (router_enabled) { - const navigation = { from: null, to: new URL(location.href) }; - callbacks.after_navigate.forEach((fn) => fn(navigation)); - } - - started = true; - } - - /** - * - * @param {{ - * url: URL; - * params: Record; - * stuff: Record; - * branch: Array; - * status: number; - * error: Error | null; - * routeId: string | null; - * }} opts - */ - async function get_navigation_result_from_branch({ - url, - params, - stuff, - branch, - status, - error, - routeId - }) { - const filtered = /** @type {import('./types').BranchNode[] } */ (branch.filter(Boolean)); - const redirect = filtered.find((f) => f.loaded?.redirect); - - /** @type {import('./types').NavigationResult} */ - const result = { - redirect: redirect?.loaded?.redirect, - state: { - url, - params, - branch, - error, - stuff, - session_id - }, - props: { - components: filtered.map((node) => node.module.default) - } - }; - - for (let i = 0; i < filtered.length; i += 1) { - const loaded = filtered[i].loaded; - result.props[`props_${i}`] = loaded ? 
await loaded.props : null; - } - - const page_changed = - !current.url || - url.href !== current.url.href || - current.error !== error || - current.stuff !== stuff; - - if (page_changed) { - result.props.page = { error, params, routeId, status, stuff, url }; - - // TODO remove this for 1.0 - /** - * @param {string} property - * @param {string} replacement - */ - const print_error = (property, replacement) => { - Object.defineProperty(result.props.page, property, { - get: () => { - throw new Error(`$page.${property} has been replaced by $page.url.${replacement}`); - } - }); - }; - - print_error('origin', 'origin'); - print_error('path', 'pathname'); - print_error('query', 'searchParams'); - } - - const leaf = filtered[filtered.length - 1]; - const load_cache = leaf?.loaded?.cache; - - if (load_cache) { - const key = url.pathname + url.search; // omit hash - let ready = false; - - const clear = () => { - if (cache.get(key) === result) { - cache.delete(key); - } - - unsubscribe(); - clearTimeout(timeout); - }; - - const timeout = setTimeout(clear, load_cache.maxage * 1000); - - const unsubscribe = stores.session.subscribe(() => { - if (ready) clear(); - }); - - ready = true; - - cache.set(key, result); - } - - return result; - } - - /** - * @param {{ - * status?: number; - * error?: Error; - * module: import('types').CSRComponent; - * url: URL; - * params: Record; - * stuff: Record; - * props?: Record; - * routeId: string | null; - * }} options - */ - async function load_node({ status, error, module, url, params, stuff, props, routeId }) { - /** @type {import('./types').BranchNode} */ - const node = { - module, - uses: { - params: new Set(), - url: false, - session: false, - stuff: false, - dependencies: new Set() - }, - loaded: null, - stuff - }; - - /** @param dep {string} */ - function add_dependency(dep) { - const { href } = new URL(dep, url); - node.uses.dependencies.add(href); - } - - if (props) { - // shadow endpoint props means we need to mark this URL as a dependency of itself - node.uses.dependencies.add(url.href); - } - - /** @type {Record} */ - const uses_params = {}; - for (const key in params) { - Object.defineProperty(uses_params, key, { - get() { - node.uses.params.add(key); - return params[key]; - }, - enumerable: true - }); - } - - const session = $session; - const load_url = new LoadURL(url); - - if (module.load) { - /** @type {import('types').LoadEvent} */ - const load_input = { - routeId, - params: uses_params, - props: props || {}, - get url() { - node.uses.url = true; - return load_url; - }, - get session() { - node.uses.session = true; - return session; - }, - get stuff() { - node.uses.stuff = true; - return { ...stuff }; - }, - async fetch(resource, init) { - let requested; - - if (typeof resource === 'string') { - requested = resource; - } else { - requested = resource.url; - - // we're not allowed to modify the received `Request` object, so in order - // to fixup relative urls we create a new equivalent `init` object instead - init = { - // the request body must be consumed in memory until browsers - // implement streaming request bodies and/or the body getter - body: - resource.method === 'GET' || resource.method === 'HEAD' - ? 
undefined - : await resource.blob(), - cache: resource.cache, - credentials: resource.credentials, - headers: resource.headers, - integrity: resource.integrity, - keepalive: resource.keepalive, - method: resource.method, - mode: resource.mode, - redirect: resource.redirect, - referrer: resource.referrer, - referrerPolicy: resource.referrerPolicy, - signal: resource.signal, - ...init - }; - } - - // we must fixup relative urls so they are resolved from the target page - const normalized = new URL(requested, url).href; - add_dependency(normalized); - - // prerendered pages may be served from any origin, so `initial_fetch` urls shouldn't be normalized - return started ? native_fetch(normalized, init) : initial_fetch(requested, init); - }, - status: status ?? null, - error: error ?? null - }; - - if (import.meta.env.DEV) { - // TODO remove this for 1.0 - Object.defineProperty(load_input, 'page', { - get: () => { - throw new Error('`page` in `load` functions has been replaced by `url` and `params`'); - } - }); - } - - let loaded; - - if (import.meta.env.DEV) { - try { - lock_fetch(); - loaded = await module.load.call(null, load_input); - } finally { - unlock_fetch(); - } - } else { - loaded = await module.load.call(null, load_input); - } - - if (!loaded) { - throw new Error('load function must return a value'); - } - - node.loaded = normalize(loaded); - if (node.loaded.stuff) node.stuff = node.loaded.stuff; - if (node.loaded.dependencies) { - node.loaded.dependencies.forEach(add_dependency); - } - } else if (props) { - node.loaded = normalize({ props }); - } - - return node; - } - - /** - * @param {import('./types').NavigationIntent} intent - * @param {boolean} no_cache - */ - async function load_route({ id, url, params, route }, no_cache) { - if (load_cache.id === id && load_cache.promise) { - return load_cache.promise; - } - - if (!no_cache) { - const cached = cache.get(id); - if (cached) return cached; - } - - const { a, b, has_shadow } = route; - - const changed = current.url && { - url: id !== current.url.pathname + current.url.search, - params: Object.keys(params).filter((key) => current.params[key] !== params[key]), - session: session_id !== current.session_id - }; - - /** @type {Array} */ - let branch = []; - - /** @type {Record} */ - let stuff = root_stuff; - let stuff_changed = false; - - /** @type {number | undefined} */ - let status = 200; - - /** @type {Error | null} */ - let error = null; - - // preload modules to avoid waterfall, but handle rejections - // so they don't get reported to Sentry et al (we don't need - // to act on the failures at this point) - a.forEach((loader) => loader().catch(() => {})); - - load: for (let i = 0; i < a.length; i += 1) { - /** @type {import('./types').BranchNode | undefined} */ - let node; - - try { - if (!a[i]) continue; - - const module = await a[i](); - const previous = current.branch[i]; - - const changed_since_last_render = - !previous || - module !== previous.module || - (changed.url && previous.uses.url) || - changed.params.some((param) => previous.uses.params.has(param)) || - (changed.session && previous.uses.session) || - Array.from(previous.uses.dependencies).some((dep) => invalidated.some((fn) => fn(dep))) || - (stuff_changed && previous.uses.stuff); - - if (changed_since_last_render) { - /** @type {Record} */ - let props = {}; - - const is_shadow_page = has_shadow && i === a.length - 1; - - if (is_shadow_page) { - const res = await native_fetch( - `${url.pathname}${url.pathname.endsWith('/') ? 
'' : '/'}__data.json${url.search}`, - { - headers: { - 'x-sveltekit-load': 'true' - } - } - ); - - if (res.ok) { - const redirect = res.headers.get('x-sveltekit-location'); - - if (redirect) { - return { - redirect, - props: {}, - state: current - }; - } - - props = res.status === 204 ? {} : await res.json(); - } else { - status = res.status; - error = new Error('Failed to load data'); - } - } - - if (!error) { - node = await load_node({ - module, - url, - params, - props, - stuff, - routeId: route.id - }); - } - - if (node) { - if (is_shadow_page) { - node.uses.url = true; - } - - if (node.loaded) { - if (node.loaded.error) { - status = node.loaded.status; - error = node.loaded.error; - } - - if (node.loaded.redirect) { - return { - redirect: node.loaded.redirect, - props: {}, - state: current - }; - } - - if (node.loaded.stuff) { - stuff_changed = true; - } - } - } - } else { - node = previous; - } - } catch (e) { - status = 500; - error = coalesce_to_error(e); - } - - if (error) { - while (i--) { - if (b[i]) { - let error_loaded; - - /** @type {import('./types').BranchNode | undefined} */ - let node_loaded; - let j = i; - while (!(node_loaded = branch[j])) { - j -= 1; - } - - try { - error_loaded = await load_node({ - status, - error, - module: await b[i](), - url, - params, - stuff: node_loaded.stuff, - routeId: route.id - }); - - if (error_loaded?.loaded?.error) { - continue; - } - - if (error_loaded?.loaded?.stuff) { - stuff = { - ...stuff, - ...error_loaded.loaded.stuff - }; - } - - branch = branch.slice(0, j + 1).concat(error_loaded); - break load; - } catch (e) { - continue; - } - } - } - - return await load_root_error_page({ - status, - error, - url, - routeId: route.id - }); - } else { - if (node?.loaded?.stuff) { - stuff = { - ...stuff, - ...node.loaded.stuff - }; - } - - branch.push(node); - } - } - - return await get_navigation_result_from_branch({ - url, - params, - stuff, - branch, - status, - error, - routeId: route.id - }); - } - - /** - * @param {{ - * status: number; - * error: Error; - * url: URL; - * routeId: string | null - * }} opts - */ - async function load_root_error_page({ status, error, url, routeId }) { - /** @type {Record} */ - const params = {}; // error page does not have params - - const root_layout = await load_node({ - module: await default_layout, - url, - params, - stuff: {}, - routeId - }); - - const root_error = await load_node({ - status, - error, - module: await default_error, - url, - params, - stuff: (root_layout && root_layout.loaded && root_layout.loaded.stuff) || {}, - routeId - }); - - return await get_navigation_result_from_branch({ - url, - params, - stuff: { - ...root_layout?.loaded?.stuff, - ...root_error?.loaded?.stuff - }, - branch: [root_layout, root_error], - status, - error, - routeId - }); - } - - /** @param {URL} url */ - function get_navigation_intent(url) { - if (url.origin !== location.origin || !url.pathname.startsWith(base)) return; - - const path = decodeURI(url.pathname.slice(base.length) || '/'); - - for (const route of routes) { - const params = route.exec(path); - - if (params) { - /** @type {import('./types').NavigationIntent} */ - const intent = { - id: url.pathname + url.search, - route, - params, - url - }; - - return intent; - } - } - } - - /** - * @param {{ - * url: URL; - * scroll: { x: number, y: number } | null; - * keepfocus: boolean; - * redirect_chain: string[]; - * details: { - * replaceState: boolean; - * state: any; - * } | null; - * accepted: () => void; - * blocked: () => void; - * }} opts - */ - async 
function navigate({ url, scroll, keepfocus, redirect_chain, details, accepted, blocked }) { - const from = current.url; - let should_block = false; - - const navigation = { - from, - to: url, - cancel: () => (should_block = true) - }; - - callbacks.before_navigate.forEach((fn) => fn(navigation)); - - if (should_block) { - blocked(); - return; - } - - const pathname = normalize_path(url.pathname, trailing_slash); - const normalized = new URL(url.origin + pathname + url.search + url.hash); - - update_scroll_positions(current_history_index); - - accepted(); - - if (started) { - stores.navigating.set({ - from: current.url, - to: normalized - }); - } - - await update( - normalized, - redirect_chain, - false, - { - scroll, - keepfocus, - details - }, - () => { - const navigation = { from, to: normalized }; - callbacks.after_navigate.forEach((fn) => fn(navigation)); - - stores.navigating.set(null); - } - ); - } - - /** - * Loads `href` the old-fashioned way, with a full page reload. - * Returns a `Promise` that never resolves (to prevent any - * subsequent work, e.g. history manipulation, from happening) - * @param {URL} url - */ - function native_navigation(url) { - location.href = url.href; - return new Promise(() => {}); - } - - if (import.meta.hot) { - import.meta.hot.on('vite:beforeUpdate', () => { - if (current.error) location.reload(); - }); - } - - return { - after_navigate: (fn) => { - onMount(() => { - callbacks.after_navigate.push(fn); - - return () => { - const i = callbacks.after_navigate.indexOf(fn); - callbacks.after_navigate.splice(i, 1); - }; - }); - }, - - before_navigate: (fn) => { - onMount(() => { - callbacks.before_navigate.push(fn); - - return () => { - const i = callbacks.before_navigate.indexOf(fn); - callbacks.before_navigate.splice(i, 1); - }; - }); - }, - - disable_scroll_handling: () => { - if (import.meta.env.DEV && started && !updating) { - throw new Error('Can only disable scroll handling during navigation'); - } - - if (updating || !started) { - autoscroll = false; - } - }, - - goto: (href, opts = {}) => goto(href, opts, []), - - invalidate: (resource) => { - if (typeof resource === 'function') { - invalidated.push(resource); - } else { - const { href } = new URL(resource, location.href); - invalidated.push((dep) => dep === href); - } - - if (!invalidating) { - invalidating = Promise.resolve().then(async () => { - await update(new URL(location.href), [], true); - - invalidating = null; - }); - } - - return invalidating; - }, - - prefetch: async (href) => { - const url = new URL(href, get_base_uri(document)); - await prefetch(url); - }, - - // TODO rethink this API - prefetch_routes: async (pathnames) => { - const matching = pathnames - ? routes.filter((route) => pathnames.some((pathname) => route.exec(pathname))) - : routes; - - const promises = matching.map((r) => Promise.all(r.a.map((load) => load()))); - - await Promise.all(promises); - }, - - _start_router: () => { - history.scrollRestoration = 'manual'; - - // Adopted from Nuxt.js - // Reset scrollRestoration to auto when leaving page, allowing page reload - // and back-navigation from other pages to use the browser to restore the - // scrolling position. 
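`navigate` runs every registered `before_navigate` callback with a `cancel` handle and only proceeds if none of them used it; the `beforeunload` listener below applies the same veto pattern. A small Python sketch of that cancellable-hook idea (hypothetical names, not the SvelteKit API):

```python
# Sketch of a cancellable pre-navigation hook; Navigation/navigate are illustrative names.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Navigation:
    from_url: str
    to_url: str
    cancelled: bool = False

    def cancel(self) -> None:
        self.cancelled = True

before_navigate: List[Callable[[Navigation], None]] = []

def navigate(from_url: str, to_url: str) -> bool:
    nav = Navigation(from_url, to_url)
    for fn in before_navigate:      # every callback may veto the navigation
        fn(nav)
    return not nav.cancelled        # True -> accepted(), False -> blocked()

# Example hook: block navigations to "/draft".
before_navigate.append(lambda nav: nav.cancel() if nav.to_url == "/draft" else None)
assert navigate("/", "/draft") is False
assert navigate("/", "/home") is True
```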
- addEventListener('beforeunload', (e) => { - let should_block = false; - - const navigation = { - from: current.url, - to: null, - cancel: () => (should_block = true) - }; - - callbacks.before_navigate.forEach((fn) => fn(navigation)); - - if (should_block) { - e.preventDefault(); - e.returnValue = ''; - } else { - history.scrollRestoration = 'auto'; - } - }); - - addEventListener('visibilitychange', () => { - if (document.visibilityState === 'hidden') { - update_scroll_positions(current_history_index); - - try { - sessionStorage[SCROLL_KEY] = JSON.stringify(scroll_positions); - } catch { - // do nothing - } - } - }); - - /** @param {Event} event */ - const trigger_prefetch = (event) => { - const a = find_anchor(event); - if (a && a.href && a.hasAttribute('sveltekit:prefetch')) { - prefetch(get_href(a)); - } - }; - - /** @type {NodeJS.Timeout} */ - let mousemove_timeout; - - /** @param {MouseEvent|TouchEvent} event */ - const handle_mousemove = (event) => { - clearTimeout(mousemove_timeout); - mousemove_timeout = setTimeout(() => { - // event.composedPath(), which is used in find_anchor, will be empty if the event is read in a timeout - // add a layer of indirection to address that - event.target?.dispatchEvent( - new CustomEvent('sveltekit:trigger_prefetch', { bubbles: true }) - ); - }, 20); - }; - - addEventListener('touchstart', trigger_prefetch); - addEventListener('mousemove', handle_mousemove); - addEventListener('sveltekit:trigger_prefetch', trigger_prefetch); - - /** @param {MouseEvent} event */ - addEventListener('click', (event) => { - if (!router_enabled) return; - - // Adapted from https://github.com/visionmedia/page.js - // MIT license https://github.com/visionmedia/page.js#license - if (event.button || event.which !== 1) return; - if (event.metaKey || event.ctrlKey || event.shiftKey || event.altKey) return; - if (event.defaultPrevented) return; - - const a = find_anchor(event); - if (!a) return; - - if (!a.href) return; - - const is_svg_a_element = a instanceof SVGAElement; - const url = get_href(a); - - // Ignore if url does not have origin (e.g. `mailto:`, `tel:`.) - // MEMO: Without this condition, firefox will open mailer twice. - // See: https://github.com/sveltejs/kit/issues/4045 - if (!is_svg_a_element && url.origin === 'null') return; - - // Ignore if tag has - // 1. 'download' attribute - // 2. 'rel' attribute includes external - const rel = (a.getAttribute('rel') || '').split(/\s+/); - - if ( - a.hasAttribute('download') || - rel.includes('external') || - a.hasAttribute('sveltekit:reload') - ) { - return; - } - - // Ignore if has a target - if (is_svg_a_element ? a.target.baseVal : a.target) return; - - // Check if new url only differs by hash and use the browser default behavior in that case - // This will ensure the `hashchange` event is fired - // Removing the hash does a full page navigation in the browser, so make sure a hash is present - const [base, hash] = url.href.split('#'); - if (hash !== undefined && base === location.href.split('#')[0]) { - // set this flag to distinguish between navigations triggered by - // clicking a hash link and those triggered by popstate - hash_navigating = true; - - update_scroll_positions(current_history_index); - - stores.page.set({ ...page, url }); - stores.page.notify(); - - return; - } - - navigate({ - url, - scroll: a.hasAttribute('sveltekit:noscroll') ? 
scroll_state() : null, - keepfocus: false, - redirect_chain: [], - details: { - state: {}, - replaceState: url.href === location.href - }, - accepted: () => event.preventDefault(), - blocked: () => event.preventDefault() - }); - }); - - addEventListener('popstate', (event) => { - if (event.state && router_enabled) { - // if a popstate-driven navigation is cancelled, we need to counteract it - // with history.go, which means we end up back here, hence this check - if (event.state[INDEX_KEY] === current_history_index) return; - - navigate({ - url: new URL(location.href), - scroll: scroll_positions[event.state[INDEX_KEY]], - keepfocus: false, - redirect_chain: [], - details: null, - accepted: () => { - current_history_index = event.state[INDEX_KEY]; - }, - blocked: () => { - const delta = current_history_index - event.state[INDEX_KEY]; - history.go(delta); - } - }); - } - }); - - addEventListener('hashchange', () => { - // if the hashchange happened as a result of clicking on a link, - // we need to update history, otherwise we have to leave it alone - if (hash_navigating) { - hash_navigating = false; - history.replaceState( - { ...history.state, [INDEX_KEY]: ++current_history_index }, - '', - location.href - ); - } - }); - }, - - _hydrate: async ({ status, error, nodes, params, routeId }) => { - const url = new URL(location.href); - - /** @type {Array} */ - const branch = []; - - /** @type {Record} */ - let stuff = {}; - - /** @type {import('./types').NavigationResult | undefined} */ - let result; - - let error_args; - - try { - for (let i = 0; i < nodes.length; i += 1) { - const is_leaf = i === nodes.length - 1; - - let props; - - if (is_leaf) { - const serialized = document.querySelector('script[sveltekit\\:data-type="props"]'); - if (serialized) { - props = JSON.parse(/** @type {string} */ (serialized.textContent)); - } - } - - const node = await load_node({ - module: await components[nodes[i]](), - url, - params, - stuff, - status: is_leaf ? status : undefined, - error: is_leaf ? error : undefined, - props, - routeId - }); - - if (props) { - node.uses.dependencies.add(url.href); - node.uses.url = true; - } - - branch.push(node); - - if (node && node.loaded) { - if (node.loaded.error) { - if (error) throw node.loaded.error; - error_args = { - status: node.loaded.status, - error: node.loaded.error, - url, - routeId - }; - } else if (node.loaded.stuff) { - stuff = { - ...stuff, - ...node.loaded.stuff - }; - } - } - } - - result = error_args - ? 
await load_root_error_page(error_args) - : await get_navigation_result_from_branch({ - url, - params, - stuff, - branch, - status, - error, - routeId - }); - } catch (e) { - if (error) throw e; - - result = await load_root_error_page({ - status: 500, - error: coalesce_to_error(e), - url, - routeId - }); - } - - if (result.redirect) { - // this is a real edge case — `load` would need to return - // a redirect but only in the browser - await native_navigation(new URL(result.redirect, location.href)); - } - - initialize(result); - } - }; -} - -/** - * @param {{ - * paths: { - * assets: string; - * base: string; - * }, - * target: Element; - * session: any; - * route: boolean; - * spa: boolean; - * trailing_slash: import('types').TrailingSlash; - * hydrate: { - * status: number; - * error: Error; - * nodes: number[]; - * params: Record; - * routeId: string | null; - * }; - * }} opts - */ -async function start({ paths, target, session, route, spa, trailing_slash, hydrate }) { - const client = create_client({ - target, - session, - base: paths.base, - trailing_slash - }); - - init({ client }); - set_paths(paths); - - if (hydrate) { - await client._hydrate(hydrate); - } - - if (route) { - if (spa) client.goto(location.href, { replaceState: true }); - client._start_router(); - } - - dispatchEvent(new CustomEvent('sveltekit:start')); -} - -export { start }; diff --git a/spaces/rockeycoss/Prompt-Segment-Anything-Demo/mmdet/core/bbox/samplers/score_hlr_sampler.py b/spaces/rockeycoss/Prompt-Segment-Anything-Demo/mmdet/core/bbox/samplers/score_hlr_sampler.py deleted file mode 100644 index f4be9b8cfefff7bd59242de1ab5b6a9e37fa7943..0000000000000000000000000000000000000000 --- a/spaces/rockeycoss/Prompt-Segment-Anything-Demo/mmdet/core/bbox/samplers/score_hlr_sampler.py +++ /dev/null @@ -1,265 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -from mmcv.ops import nms_match - -from ..builder import BBOX_SAMPLERS -from ..transforms import bbox2roi -from .base_sampler import BaseSampler -from .sampling_result import SamplingResult - - -@BBOX_SAMPLERS.register_module() -class ScoreHLRSampler(BaseSampler): - r"""Importance-based Sample Reweighting (ISR_N), described in `Prime Sample - Attention in Object Detection `_. - - Score hierarchical local rank (HLR) differentiates with RandomSampler in - negative part. It firstly computes Score-HLR in a two-step way, - then linearly maps score hlr to the loss weights. - - Args: - num (int): Total number of sampled RoIs. - pos_fraction (float): Fraction of positive samples. - context (:class:`BaseRoIHead`): RoI head that the sampler belongs to. - neg_pos_ub (int): Upper bound of the ratio of num negative to num - positive, -1 means no upper bound. - add_gt_as_proposals (bool): Whether to add ground truth as proposals. - k (float): Power of the non-linear mapping. - bias (float): Shift of the non-linear mapping. - score_thr (float): Minimum score that a negative sample is to be - considered as valid bbox. - """ - - def __init__(self, - num, - pos_fraction, - context, - neg_pos_ub=-1, - add_gt_as_proposals=True, - k=0.5, - bias=0, - score_thr=0.05, - iou_thr=0.5, - **kwargs): - super().__init__(num, pos_fraction, neg_pos_ub, add_gt_as_proposals) - self.k = k - self.bias = bias - self.score_thr = score_thr - self.iou_thr = iou_thr - self.context = context - # context of cascade detectors is a list, so distinguish them here. 
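Using the `k`, `bias`, and `score_thr` options documented above, the sampler later converts each valid negative's normalised hierarchical local rank into a loss weight with `(bias + (1 - bias) * hlr) ** k`, then rescales the weights so the summed classification loss stays unchanged. A hedged sketch of that mapping (assumed inputs, not the mmdet call signature):

```python
# Illustrative rank-to-weight mapping; imp_rank == 0 is the hardest negative
# and receives the largest weight.
import torch

def score_hlr_weights(imp_rank: torch.Tensor, num_expected: int,
                      k: float = 0.5, bias: float = 0.0) -> torch.Tensor:
    up_bound = max(num_expected, imp_rank.numel())
    hlr = (up_bound - imp_rank.float()) / up_bound      # normalised local rank in (0, 1]
    return (bias + (1 - bias) * hlr).pow(k)

weights = score_hlr_weights(torch.arange(8), num_expected=8)
print(weights)  # monotonically decreasing: harder negatives keep larger weights
```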
- if not hasattr(context, 'num_stages'): - self.bbox_roi_extractor = context.bbox_roi_extractor - self.bbox_head = context.bbox_head - self.with_shared_head = context.with_shared_head - if self.with_shared_head: - self.shared_head = context.shared_head - else: - self.bbox_roi_extractor = context.bbox_roi_extractor[ - context.current_stage] - self.bbox_head = context.bbox_head[context.current_stage] - - @staticmethod - def random_choice(gallery, num): - """Randomly select some elements from the gallery. - - If `gallery` is a Tensor, the returned indices will be a Tensor; - If `gallery` is a ndarray or list, the returned indices will be a - ndarray. - - Args: - gallery (Tensor | ndarray | list): indices pool. - num (int): expected sample num. - - Returns: - Tensor or ndarray: sampled indices. - """ - assert len(gallery) >= num - - is_tensor = isinstance(gallery, torch.Tensor) - if not is_tensor: - if torch.cuda.is_available(): - device = torch.cuda.current_device() - else: - device = 'cpu' - gallery = torch.tensor(gallery, dtype=torch.long, device=device) - perm = torch.randperm(gallery.numel(), device=gallery.device)[:num] - rand_inds = gallery[perm] - if not is_tensor: - rand_inds = rand_inds.cpu().numpy() - return rand_inds - - def _sample_pos(self, assign_result, num_expected, **kwargs): - """Randomly sample some positive samples.""" - pos_inds = torch.nonzero(assign_result.gt_inds > 0).flatten() - if pos_inds.numel() <= num_expected: - return pos_inds - else: - return self.random_choice(pos_inds, num_expected) - - def _sample_neg(self, - assign_result, - num_expected, - bboxes, - feats=None, - img_meta=None, - **kwargs): - """Sample negative samples. - - Score-HLR sampler is done in the following steps: - 1. Take the maximum positive score prediction of each negative samples - as s_i. - 2. Filter out negative samples whose s_i <= score_thr, the left samples - are called valid samples. - 3. Use NMS-Match to divide valid samples into different groups, - samples in the same group will greatly overlap with each other - 4. Rank the matched samples in two-steps to get Score-HLR. - (1) In the same group, rank samples with their scores. - (2) In the same score rank across different groups, - rank samples with their scores again. - 5. Linearly map Score-HLR to the final label weights. - - Args: - assign_result (:obj:`AssignResult`): result of assigner. - num_expected (int): Expected number of samples. - bboxes (Tensor): bbox to be sampled. - feats (Tensor): Features come from FPN. - img_meta (dict): Meta information dictionary. 
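Step 4 of the docstring above is the interesting part: valid negatives are first ranked inside their NMS-match group and then re-ranked globally. A rough Python sketch of that two-step ranking, with made-up groups and scores standing in for `nms_match` output:

```python
# Two-step Score-HLR ranking on assumed inputs; group indices are taken to be
# sorted by score from high to low, as nms_match produces them.
import torch

def score_hlr(valid_scores: torch.Tensor, groups) -> torch.Tensor:
    num_valid = valid_scores.numel()
    imp = valid_scores.new_zeros(num_valid)
    for g in groups:                                  # step 1: rank within each group
        g = torch.as_tensor(g)
        rank = torch.arange(len(g), dtype=valid_scores.dtype)
        imp[g] = num_valid - rank + valid_scores[g]
    _, imp_rank_inds = imp.sort(descending=True)      # step 2: rank across groups
    _, imp_rank = imp_rank_inds.sort()
    return imp_rank                                   # 0 = highest Score-HLR

scores = torch.tensor([0.9, 0.8, 0.7, 0.6])
print(score_hlr(scores, groups=[[0, 2], [1, 3]]))     # tensor([0, 1, 2, 3])
```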
- """ - neg_inds = torch.nonzero(assign_result.gt_inds == 0).flatten() - num_neg = neg_inds.size(0) - if num_neg == 0: - return neg_inds, None - with torch.no_grad(): - neg_bboxes = bboxes[neg_inds] - neg_rois = bbox2roi([neg_bboxes]) - bbox_result = self.context._bbox_forward(feats, neg_rois) - cls_score, bbox_pred = bbox_result['cls_score'], bbox_result[ - 'bbox_pred'] - - ori_loss = self.bbox_head.loss( - cls_score=cls_score, - bbox_pred=None, - rois=None, - labels=neg_inds.new_full((num_neg, ), - self.bbox_head.num_classes), - label_weights=cls_score.new_ones(num_neg), - bbox_targets=None, - bbox_weights=None, - reduction_override='none')['loss_cls'] - - # filter out samples with the max score lower than score_thr - max_score, argmax_score = cls_score.softmax(-1)[:, :-1].max(-1) - valid_inds = (max_score > self.score_thr).nonzero().view(-1) - invalid_inds = (max_score <= self.score_thr).nonzero().view(-1) - num_valid = valid_inds.size(0) - num_invalid = invalid_inds.size(0) - - num_expected = min(num_neg, num_expected) - num_hlr = min(num_valid, num_expected) - num_rand = num_expected - num_hlr - if num_valid > 0: - valid_rois = neg_rois[valid_inds] - valid_max_score = max_score[valid_inds] - valid_argmax_score = argmax_score[valid_inds] - valid_bbox_pred = bbox_pred[valid_inds] - - # valid_bbox_pred shape: [num_valid, #num_classes, 4] - valid_bbox_pred = valid_bbox_pred.view( - valid_bbox_pred.size(0), -1, 4) - selected_bbox_pred = valid_bbox_pred[range(num_valid), - valid_argmax_score] - pred_bboxes = self.bbox_head.bbox_coder.decode( - valid_rois[:, 1:], selected_bbox_pred) - pred_bboxes_with_score = torch.cat( - [pred_bboxes, valid_max_score[:, None]], -1) - group = nms_match(pred_bboxes_with_score, self.iou_thr) - - # imp: importance - imp = cls_score.new_zeros(num_valid) - for g in group: - g_score = valid_max_score[g] - # g_score has already sorted - rank = g_score.new_tensor(range(g_score.size(0))) - imp[g] = num_valid - rank + g_score - _, imp_rank_inds = imp.sort(descending=True) - _, imp_rank = imp_rank_inds.sort() - hlr_inds = imp_rank_inds[:num_expected] - - if num_rand > 0: - rand_inds = torch.randperm(num_invalid)[:num_rand] - select_inds = torch.cat( - [valid_inds[hlr_inds], invalid_inds[rand_inds]]) - else: - select_inds = valid_inds[hlr_inds] - - neg_label_weights = cls_score.new_ones(num_expected) - - up_bound = max(num_expected, num_valid) - imp_weights = (up_bound - - imp_rank[hlr_inds].float()) / up_bound - neg_label_weights[:num_hlr] = imp_weights - neg_label_weights[num_hlr:] = imp_weights.min() - neg_label_weights = (self.bias + - (1 - self.bias) * neg_label_weights).pow( - self.k) - ori_selected_loss = ori_loss[select_inds] - new_loss = ori_selected_loss * neg_label_weights - norm_ratio = ori_selected_loss.sum() / new_loss.sum() - neg_label_weights *= norm_ratio - else: - neg_label_weights = cls_score.new_ones(num_expected) - select_inds = torch.randperm(num_neg)[:num_expected] - - return neg_inds[select_inds], neg_label_weights - - def sample(self, - assign_result, - bboxes, - gt_bboxes, - gt_labels=None, - img_meta=None, - **kwargs): - """Sample positive and negative bboxes. - - This is a simple implementation of bbox sampling given candidates, - assigning results and ground truth bboxes. - - Args: - assign_result (:obj:`AssignResult`): Bbox assigning results. - bboxes (Tensor): Boxes to be sampled from. - gt_bboxes (Tensor): Ground truth bboxes. - gt_labels (Tensor, optional): Class labels of ground truth bboxes. 
- - Returns: - tuple[:obj:`SamplingResult`, Tensor]: Sampling result and negative - label weights. - """ - bboxes = bboxes[:, :4] - - gt_flags = bboxes.new_zeros((bboxes.shape[0], ), dtype=torch.uint8) - if self.add_gt_as_proposals: - bboxes = torch.cat([gt_bboxes, bboxes], dim=0) - assign_result.add_gt_(gt_labels) - gt_ones = bboxes.new_ones(gt_bboxes.shape[0], dtype=torch.uint8) - gt_flags = torch.cat([gt_ones, gt_flags]) - - num_expected_pos = int(self.num * self.pos_fraction) - pos_inds = self.pos_sampler._sample_pos( - assign_result, num_expected_pos, bboxes=bboxes, **kwargs) - num_sampled_pos = pos_inds.numel() - num_expected_neg = self.num - num_sampled_pos - if self.neg_pos_ub >= 0: - _pos = max(1, num_sampled_pos) - neg_upper_bound = int(self.neg_pos_ub * _pos) - if num_expected_neg > neg_upper_bound: - num_expected_neg = neg_upper_bound - neg_inds, neg_label_weights = self.neg_sampler._sample_neg( - assign_result, - num_expected_neg, - bboxes, - img_meta=img_meta, - **kwargs) - - return SamplingResult(pos_inds, neg_inds, bboxes, gt_bboxes, - assign_result, gt_flags), neg_label_weights diff --git a/spaces/rorallitri/biomedical-language-models/logs/Bullzip Pdf Printer Error 1007 Ghostscript WORK.md b/spaces/rorallitri/biomedical-language-models/logs/Bullzip Pdf Printer Error 1007 Ghostscript WORK.md deleted file mode 100644 index e4dff998d325f2ee17f76e3cbe0b81293c6d7e8c..0000000000000000000000000000000000000000 --- a/spaces/rorallitri/biomedical-language-models/logs/Bullzip Pdf Printer Error 1007 Ghostscript WORK.md +++ /dev/null @@ -1,6 +0,0 @@ -

-bullzip pdf printer error 1007 ghostscript
-Download Zip - https://tinurll.com/2uznFn
          - -BULLZIP PDF PRINTER GHOSTSCRIPT ERROR 1007 DOWNLOAD BULLZIP PDF PRINTER GHOSTSCRIPT ERROR 1007 READ ONLINE Ghostscript… 4d29de3e1b

          diff --git a/spaces/runa91/barc_gradio/src/stacked_hourglass/__init__.py b/spaces/runa91/barc_gradio/src/stacked_hourglass/__init__.py deleted file mode 100644 index a5da308c10ea77f41570a7a0d417c28ad19ae9d2..0000000000000000000000000000000000000000 --- a/spaces/runa91/barc_gradio/src/stacked_hourglass/__init__.py +++ /dev/null @@ -1,2 +0,0 @@ -from stacked_hourglass.model import hg1, hg2, hg4, hg8 -from stacked_hourglass.predictor import HumanPosePredictor diff --git a/spaces/ruslanmv/Clone-Your-Voice/encoder/data_objects/speaker.py b/spaces/ruslanmv/Clone-Your-Voice/encoder/data_objects/speaker.py deleted file mode 100644 index 494e882fe34fc38dcc793ab8c74a6cc2376bb7b5..0000000000000000000000000000000000000000 --- a/spaces/ruslanmv/Clone-Your-Voice/encoder/data_objects/speaker.py +++ /dev/null @@ -1,40 +0,0 @@ -from encoder.data_objects.random_cycler import RandomCycler -from encoder.data_objects.utterance import Utterance -from pathlib import Path - -# Contains the set of utterances of a single speaker -class Speaker: - def __init__(self, root: Path): - self.root = root - self.name = root.name - self.utterances = None - self.utterance_cycler = None - - def _load_utterances(self): - with self.root.joinpath("_sources.txt").open("r") as sources_file: - sources = [l.split(",") for l in sources_file] - sources = {frames_fname: wave_fpath for frames_fname, wave_fpath in sources} - self.utterances = [Utterance(self.root.joinpath(f), w) for f, w in sources.items()] - self.utterance_cycler = RandomCycler(self.utterances) - - def random_partial(self, count, n_frames): - """ - Samples a batch of unique partial utterances from the disk in a way that all - utterances come up at least once every two cycles and in a random order every time. - - :param count: The number of partial utterances to sample from the set of utterances from - that speaker. Utterances are guaranteed not to be repeated if is not larger than - the number of utterances available. - :param n_frames: The number of frames in the partial utterance. - :return: A list of tuples (utterance, frames, range) where utterance is an Utterance, - frames are the frames of the partial utterances and range is the range of the partial - utterance with regard to the complete utterance. - """ - if self.utterances is None: - self._load_utterances() - - utterances = self.utterance_cycler.sample(count) - - a = [(u,) + u.random_partial(n_frames) for u in utterances] - - return a diff --git a/spaces/safi842/FashionGen/netdissect/pidfile.py b/spaces/safi842/FashionGen/netdissect/pidfile.py deleted file mode 100644 index 96a66814326bad444606ad829307fe225f4135e1..0000000000000000000000000000000000000000 --- a/spaces/safi842/FashionGen/netdissect/pidfile.py +++ /dev/null @@ -1,81 +0,0 @@ -''' -Utility for simple distribution of work on multiple processes, by -making sure only one process is working on a job at once. -''' - -import os, errno, socket, atexit, time, sys - -def exit_if_job_done(directory): - if pidfile_taken(os.path.join(directory, 'lockfile.pid'), verbose=True): - sys.exit(0) - if os.path.isfile(os.path.join(directory, 'done.txt')): - with open(os.path.join(directory, 'done.txt')) as f: - msg = f.read() - print(msg) - sys.exit(0) - -def mark_job_done(directory): - with open(os.path.join(directory, 'done.txt'), 'w') as f: - f.write('Done by %d@%s %s at %s' % - (os.getpid(), socket.gethostname(), - os.getenv('STY', ''), - time.strftime('%c'))) - -def pidfile_taken(path, verbose=False): - ''' - Usage. 
To grab an exclusive lock for the remaining duration of the - current process (and exit if another process already has the lock), - do this: - - if pidfile_taken('job_423/lockfile.pid', verbose=True): - sys.exit(0) - - To do a batch of jobs, just run a script that does them all on - each available machine, sharing a network filesystem. When each - job grabs a lock, then this will automatically distribute the - jobs so that each one is done just once on one machine. - ''' - - # Try to create the file exclusively and write my pid into it. - try: - os.makedirs(os.path.dirname(path), exist_ok=True) - fd = os.open(path, os.O_CREAT | os.O_EXCL | os.O_RDWR) - except OSError as e: - if e.errno == errno.EEXIST: - # If we cannot because there was a race, yield the conflicter. - conflicter = 'race' - try: - with open(path, 'r') as lockfile: - conflicter = lockfile.read().strip() or 'empty' - except: - pass - if verbose: - print('%s held by %s' % (path, conflicter)) - return conflicter - else: - # Other problems get an exception. - raise - # Register to delete this file on exit. - lockfile = os.fdopen(fd, 'r+') - atexit.register(delete_pidfile, lockfile, path) - # Write my pid into the open file. - lockfile.write('%d@%s %s\n' % (os.getpid(), socket.gethostname(), - os.getenv('STY', ''))) - lockfile.flush() - os.fsync(lockfile) - # Return 'None' to say there was not a conflict. - return None - -def delete_pidfile(lockfile, path): - ''' - Runs at exit after pidfile_taken succeeds. - ''' - if lockfile is not None: - try: - lockfile.close() - except: - pass - try: - os.unlink(path) - except: - pass diff --git a/spaces/sahshd/ChuanhuChatGPT/modules/base_model.py b/spaces/sahshd/ChuanhuChatGPT/modules/base_model.py deleted file mode 100644 index 2b55623f6b0989f60d818be6e0e77f5948484b82..0000000000000000000000000000000000000000 --- a/spaces/sahshd/ChuanhuChatGPT/modules/base_model.py +++ /dev/null @@ -1,561 +0,0 @@ -from __future__ import annotations -from typing import TYPE_CHECKING, List - -import logging -import json -import commentjson as cjson -import os -import sys -import requests -import urllib3 -import traceback - -from tqdm import tqdm -import colorama -from duckduckgo_search import ddg -import asyncio -import aiohttp -from enum import Enum - -from .presets import * -from .llama_func import * -from .utils import * -from . 
import shared -from .config import retrieve_proxy - - -class ModelType(Enum): - Unknown = -1 - OpenAI = 0 - ChatGLM = 1 - LLaMA = 2 - XMChat = 3 - - @classmethod - def get_type(cls, model_name: str): - model_type = None - model_name_lower = model_name.lower() - if "gpt" in model_name_lower: - model_type = ModelType.OpenAI - elif "chatglm" in model_name_lower: - model_type = ModelType.ChatGLM - elif "llama" in model_name_lower or "alpaca" in model_name_lower: - model_type = ModelType.LLaMA - elif "xmchat" in model_name_lower: - model_type = ModelType.XMChat - else: - model_type = ModelType.Unknown - return model_type - - -class BaseLLMModel: - def __init__( - self, - model_name, - system_prompt="", - temperature=1.0, - top_p=1.0, - n_choices=1, - stop=None, - max_generation_token=None, - presence_penalty=0, - frequency_penalty=0, - logit_bias=None, - user="", - ) -> None: - self.history = [] - self.all_token_counts = [] - self.model_name = model_name - self.model_type = ModelType.get_type(model_name) - try: - self.token_upper_limit = MODEL_TOKEN_LIMIT[model_name] - except KeyError: - self.token_upper_limit = DEFAULT_TOKEN_LIMIT - self.interrupted = False - self.system_prompt = system_prompt - self.api_key = None - self.need_api_key = False - self.single_turn = False - - self.temperature = temperature - self.top_p = top_p - self.n_choices = n_choices - self.stop_sequence = stop - self.max_generation_token = None - self.presence_penalty = presence_penalty - self.frequency_penalty = frequency_penalty - self.logit_bias = logit_bias - self.user_identifier = user - - def get_answer_stream_iter(self): - """stream predict, need to be implemented - conversations are stored in self.history, with the most recent question, in OpenAI format - should return a generator, each time give the next word (str) in the answer - """ - logging.warning("stream predict not implemented, using at once predict instead") - response, _ = self.get_answer_at_once() - yield response - - def get_answer_at_once(self): - """predict at once, need to be implemented - conversations are stored in self.history, with the most recent question, in OpenAI format - Should return: - the answer (str) - total token count (int) - """ - logging.warning("at once predict not implemented, using stream predict instead") - response_iter = self.get_answer_stream_iter() - count = 0 - for response in response_iter: - count += 1 - return response, sum(self.all_token_counts) + count - - def billing_info(self): - """get billing infomation, inplement if needed""" - logging.warning("billing info not implemented, using default") - return BILLING_NOT_APPLICABLE_MSG - - def count_token(self, user_input): - """get token count from input, implement if needed""" - logging.warning("token count not implemented, using default") - return len(user_input) - - def stream_next_chatbot(self, inputs, chatbot, fake_input=None, display_append=""): - def get_return_value(): - return chatbot, status_text - - status_text = i18n("开始实时传输回答……") - if fake_input: - chatbot.append((fake_input, "")) - else: - chatbot.append((inputs, "")) - - user_token_count = self.count_token(inputs) - self.all_token_counts.append(user_token_count) - logging.debug(f"输入token计数: {user_token_count}") - - stream_iter = self.get_answer_stream_iter() - - for partial_text in stream_iter: - chatbot[-1] = (chatbot[-1][0], partial_text + display_append) - self.all_token_counts[-1] += 1 - status_text = self.token_message() - yield get_return_value() - if self.interrupted: - self.recover() - break - 
self.history.append(construct_assistant(partial_text)) - - def next_chatbot_at_once(self, inputs, chatbot, fake_input=None, display_append=""): - if fake_input: - chatbot.append((fake_input, "")) - else: - chatbot.append((inputs, "")) - if fake_input is not None: - user_token_count = self.count_token(fake_input) - else: - user_token_count = self.count_token(inputs) - self.all_token_counts.append(user_token_count) - ai_reply, total_token_count = self.get_answer_at_once() - self.history.append(construct_assistant(ai_reply)) - if fake_input is not None: - self.history[-2] = construct_user(fake_input) - chatbot[-1] = (chatbot[-1][0], ai_reply + display_append) - if fake_input is not None: - self.all_token_counts[-1] += count_token(construct_assistant(ai_reply)) - else: - self.all_token_counts[-1] = total_token_count - sum(self.all_token_counts) - status_text = self.token_message() - return chatbot, status_text - - def handle_file_upload(self, files, chatbot): - """if the model accepts multi modal input, implement this function""" - status = gr.Markdown.update() - if files: - construct_index(self.api_key, file_src=files) - status = "索引构建完成" - return gr.Files.update(), chatbot, status - - def prepare_inputs(self, real_inputs, use_websearch, files, reply_language, chatbot): - fake_inputs = None - display_append = [] - limited_context = False - fake_inputs = real_inputs - if files: - from llama_index.indices.vector_store.base_query import GPTVectorStoreIndexQuery - from llama_index.indices.query.schema import QueryBundle - from langchain.embeddings.huggingface import HuggingFaceEmbeddings - from langchain.chat_models import ChatOpenAI - from llama_index import ( - GPTSimpleVectorIndex, - ServiceContext, - LangchainEmbedding, - OpenAIEmbedding, - ) - limited_context = True - msg = "加载索引中……" - logging.info(msg) - # yield chatbot + [(inputs, "")], msg - index = construct_index(self.api_key, file_src=files) - assert index is not None, "获取索引失败" - msg = "索引获取成功,生成回答中……" - logging.info(msg) - if local_embedding or self.model_type != ModelType.OpenAI: - embed_model = LangchainEmbedding(HuggingFaceEmbeddings(model_name = "sentence-transformers/distiluse-base-multilingual-cased-v2")) - else: - embed_model = OpenAIEmbedding() - # yield chatbot + [(inputs, "")], msg - with retrieve_proxy(): - prompt_helper = PromptHelper( - max_input_size=4096, - num_output=5, - max_chunk_overlap=20, - chunk_size_limit=600, - ) - from llama_index import ServiceContext - - service_context = ServiceContext.from_defaults( - prompt_helper=prompt_helper, embed_model=embed_model - ) - query_object = GPTVectorStoreIndexQuery( - index.index_struct, - service_context=service_context, - similarity_top_k=5, - vector_store=index._vector_store, - docstore=index._docstore, - ) - query_bundle = QueryBundle(real_inputs) - nodes = query_object.retrieve(query_bundle) - reference_results = [n.node.text for n in nodes] - reference_results = add_source_numbers(reference_results, use_source=False) - display_append = add_details(reference_results) - display_append = "\n\n" + "".join(display_append) - real_inputs = ( - replace_today(PROMPT_TEMPLATE) - .replace("{query_str}", real_inputs) - .replace("{context_str}", "\n\n".join(reference_results)) - .replace("{reply_language}", reply_language) - ) - elif use_websearch: - limited_context = True - search_results = ddg(real_inputs, max_results=5) - reference_results = [] - for idx, result in enumerate(search_results): - logging.debug(f"搜索结果{idx + 1}:{result}") - domain_name = 
urllib3.util.parse_url(result["href"]).host - reference_results.append([result["body"], result["href"]]) - display_append.append( - # f"{idx+1}. [{domain_name}]({result['href']})\n" - f"
        12. {domain_name}
        13. \n" - ) - reference_results = add_source_numbers(reference_results) - display_append = "
            \n\n" + "".join(display_append) + "
          " - real_inputs = ( - replace_today(WEBSEARCH_PTOMPT_TEMPLATE) - .replace("{query}", real_inputs) - .replace("{web_results}", "\n\n".join(reference_results)) - .replace("{reply_language}", reply_language) - ) - else: - display_append = "" - return limited_context, fake_inputs, display_append, real_inputs, chatbot - - def predict( - self, - inputs, - chatbot, - stream=False, - use_websearch=False, - files=None, - reply_language="中文", - should_check_token_count=True, - ): # repetition_penalty, top_k - - status_text = "开始生成回答……" - logging.info( - "输入为:" + colorama.Fore.BLUE + f"{inputs}" + colorama.Style.RESET_ALL - ) - if should_check_token_count: - yield chatbot + [(inputs, "")], status_text - if reply_language == "跟随问题语言(不稳定)": - reply_language = "the same language as the question, such as English, 中文, 日本語, Español, Français, or Deutsch." - - limited_context, fake_inputs, display_append, inputs, chatbot = self.prepare_inputs(real_inputs=inputs, use_websearch=use_websearch, files=files, reply_language=reply_language, chatbot=chatbot) - yield chatbot + [(fake_inputs, "")], status_text - - if ( - self.need_api_key and - self.api_key is None - and not shared.state.multi_api_key - ): - status_text = STANDARD_ERROR_MSG + NO_APIKEY_MSG - logging.info(status_text) - chatbot.append((inputs, "")) - if len(self.history) == 0: - self.history.append(construct_user(inputs)) - self.history.append("") - self.all_token_counts.append(0) - else: - self.history[-2] = construct_user(inputs) - yield chatbot + [(inputs, "")], status_text - return - elif len(inputs.strip()) == 0: - status_text = STANDARD_ERROR_MSG + NO_INPUT_MSG - logging.info(status_text) - yield chatbot + [(inputs, "")], status_text - return - - if self.single_turn: - self.history = [] - self.all_token_counts = [] - self.history.append(construct_user(inputs)) - - try: - if stream: - logging.debug("使用流式传输") - iter = self.stream_next_chatbot( - inputs, - chatbot, - fake_input=fake_inputs, - display_append=display_append, - ) - for chatbot, status_text in iter: - yield chatbot, status_text - else: - logging.debug("不使用流式传输") - chatbot, status_text = self.next_chatbot_at_once( - inputs, - chatbot, - fake_input=fake_inputs, - display_append=display_append, - ) - yield chatbot, status_text - except Exception as e: - traceback.print_exc() - status_text = STANDARD_ERROR_MSG + str(e) - yield chatbot, status_text - - if len(self.history) > 1 and self.history[-1]["content"] != inputs: - logging.info( - "回答为:" - + colorama.Fore.BLUE - + f"{self.history[-1]['content']}" - + colorama.Style.RESET_ALL - ) - - if limited_context: - # self.history = self.history[-4:] - # self.all_token_counts = self.all_token_counts[-2:] - self.history = [] - self.all_token_counts = [] - - max_token = self.token_upper_limit - TOKEN_OFFSET - - if sum(self.all_token_counts) > max_token and should_check_token_count: - count = 0 - while ( - sum(self.all_token_counts) - > self.token_upper_limit * REDUCE_TOKEN_FACTOR - and sum(self.all_token_counts) > 0 - ): - count += 1 - del self.all_token_counts[0] - del self.history[:2] - logging.info(status_text) - status_text = f"为了防止token超限,模型忘记了早期的 {count} 轮对话" - yield chatbot, status_text - - def retry( - self, - chatbot, - stream=False, - use_websearch=False, - files=None, - reply_language="中文", - ): - logging.debug("重试中……") - if len(self.history) > 0: - inputs = self.history[-2]["content"] - del self.history[-2:] - self.all_token_counts.pop() - elif len(chatbot) > 0: - inputs = chatbot[-1][0] - else: - yield chatbot, 
f"{STANDARD_ERROR_MSG}上下文是空的" - return - - iter = self.predict( - inputs, - chatbot, - stream=stream, - use_websearch=use_websearch, - files=files, - reply_language=reply_language, - ) - for x in iter: - yield x - logging.debug("重试完毕") - - # def reduce_token_size(self, chatbot): - # logging.info("开始减少token数量……") - # chatbot, status_text = self.next_chatbot_at_once( - # summarize_prompt, - # chatbot - # ) - # max_token_count = self.token_upper_limit * REDUCE_TOKEN_FACTOR - # num_chat = find_n(self.all_token_counts, max_token_count) - # logging.info(f"previous_token_count: {self.all_token_counts}, keeping {num_chat} chats") - # chatbot = chatbot[:-1] - # self.history = self.history[-2*num_chat:] if num_chat > 0 else [] - # self.all_token_counts = self.all_token_counts[-num_chat:] if num_chat > 0 else [] - # msg = f"保留了最近{num_chat}轮对话" - # logging.info(msg) - # logging.info("减少token数量完毕") - # return chatbot, msg + "," + self.token_message(self.all_token_counts if len(self.all_token_counts) > 0 else [0]) - - def interrupt(self): - self.interrupted = True - - def recover(self): - self.interrupted = False - - def set_token_upper_limit(self, new_upper_limit): - self.token_upper_limit = new_upper_limit - print(f"token上限设置为{new_upper_limit}") - - def set_temperature(self, new_temperature): - self.temperature = new_temperature - - def set_top_p(self, new_top_p): - self.top_p = new_top_p - - def set_n_choices(self, new_n_choices): - self.n_choices = new_n_choices - - def set_stop_sequence(self, new_stop_sequence: str): - new_stop_sequence = new_stop_sequence.split(",") - self.stop_sequence = new_stop_sequence - - def set_max_tokens(self, new_max_tokens): - self.max_generation_token = new_max_tokens - - def set_presence_penalty(self, new_presence_penalty): - self.presence_penalty = new_presence_penalty - - def set_frequency_penalty(self, new_frequency_penalty): - self.frequency_penalty = new_frequency_penalty - - def set_logit_bias(self, logit_bias): - logit_bias = logit_bias.split() - bias_map = {} - encoding = tiktoken.get_encoding("cl100k_base") - for line in logit_bias: - word, bias_amount = line.split(":") - if word: - for token in encoding.encode(word): - bias_map[token] = float(bias_amount) - self.logit_bias = bias_map - - def set_user_identifier(self, new_user_identifier): - self.user_identifier = new_user_identifier - - def set_system_prompt(self, new_system_prompt): - self.system_prompt = new_system_prompt - - def set_key(self, new_access_key): - self.api_key = new_access_key.strip() - msg = i18n("API密钥更改为了") + hide_middle_chars(self.api_key) - logging.info(msg) - return self.api_key, msg - - def set_single_turn(self, new_single_turn): - self.single_turn = new_single_turn - - def reset(self): - self.history = [] - self.all_token_counts = [] - self.interrupted = False - return [], self.token_message([0]) - - def delete_first_conversation(self): - if self.history: - del self.history[:2] - del self.all_token_counts[0] - return self.token_message() - - def delete_last_conversation(self, chatbot): - if len(chatbot) > 0 and STANDARD_ERROR_MSG in chatbot[-1][1]: - msg = "由于包含报错信息,只删除chatbot记录" - chatbot.pop() - return chatbot, self.history - if len(self.history) > 0: - self.history.pop() - self.history.pop() - if len(chatbot) > 0: - msg = "删除了一组chatbot对话" - chatbot.pop() - if len(self.all_token_counts) > 0: - msg = "删除了一组对话的token计数记录" - self.all_token_counts.pop() - msg = "删除了一组对话" - return chatbot, msg - - def token_message(self, token_lst=None): - if token_lst is None: - token_lst = 
self.all_token_counts - token_sum = 0 - for i in range(len(token_lst)): - token_sum += sum(token_lst[: i + 1]) - return i18n("Token 计数: ") + f"{sum(token_lst)}" + i18n(",本次对话累计消耗了 ") + f"{token_sum} tokens" - - def save_chat_history(self, filename, chatbot, user_name): - if filename == "": - return - if not filename.endswith(".json"): - filename += ".json" - return save_file(filename, self.system_prompt, self.history, chatbot, user_name) - - def export_markdown(self, filename, chatbot, user_name): - if filename == "": - return - if not filename.endswith(".md"): - filename += ".md" - return save_file(filename, self.system_prompt, self.history, chatbot, user_name) - - def load_chat_history(self, filename, chatbot, user_name): - logging.debug(f"{user_name} 加载对话历史中……") - if type(filename) != str: - filename = filename.name - try: - with open(os.path.join(HISTORY_DIR, user_name, filename), "r") as f: - json_s = json.load(f) - try: - if type(json_s["history"][0]) == str: - logging.info("历史记录格式为旧版,正在转换……") - new_history = [] - for index, item in enumerate(json_s["history"]): - if index % 2 == 0: - new_history.append(construct_user(item)) - else: - new_history.append(construct_assistant(item)) - json_s["history"] = new_history - logging.info(new_history) - except: - # 没有对话历史 - pass - logging.debug(f"{user_name} 加载对话历史完毕") - self.history = json_s["history"] - return filename, json_s["system"], json_s["chatbot"] - except FileNotFoundError: - logging.warning(f"{user_name} 没有找到对话历史文件,不执行任何操作") - return filename, self.system_prompt, chatbot - - def like(self): - """like the last response, implement if needed - """ - return gr.update() - - def dislike(self): - """dislike the last response, implement if needed - """ - return gr.update() diff --git a/spaces/sarinam/speaker-anonymization/IMSToucan/Utility/utils.py b/spaces/sarinam/speaker-anonymization/IMSToucan/Utility/utils.py deleted file mode 100644 index 5fa60ebea1b4c9b50c1c0ca49644867699b2ddf2..0000000000000000000000000000000000000000 --- a/spaces/sarinam/speaker-anonymization/IMSToucan/Utility/utils.py +++ /dev/null @@ -1,320 +0,0 @@ -""" -Taken from ESPNet, modified by Florian Lux -""" - -import os -from abc import ABC - -import torch - - -def cumsum_durations(durations): - out = [0] - for duration in durations: - out.append(duration + out[-1]) - centers = list() - for index, _ in enumerate(out): - if index + 1 < len(out): - centers.append((out[index] + out[index + 1]) / 2) - return out, centers - - -def delete_old_checkpoints(checkpoint_dir, keep=5): - checkpoint_list = list() - for el in os.listdir(checkpoint_dir): - if el.endswith(".pt") and el != "best.pt": - checkpoint_list.append(int(el.split(".")[0].split("_")[1])) - if len(checkpoint_list) <= keep: - return - else: - checkpoint_list.sort(reverse=False) - checkpoints_to_delete = [os.path.join(checkpoint_dir, "checkpoint_{}.pt".format(step)) for step in checkpoint_list[:-keep]] - for old_checkpoint in checkpoints_to_delete: - os.remove(os.path.join(old_checkpoint)) - - -def get_most_recent_checkpoint(checkpoint_dir, verbose=True): - checkpoint_list = list() - for el in os.listdir(checkpoint_dir): - if el.endswith(".pt") and el != "best.pt": - checkpoint_list.append(int(el.split(".")[0].split("_")[1])) - if len(checkpoint_list) == 0: - print("No previous checkpoints found, cannot reload.") - return None - checkpoint_list.sort(reverse=True) - if verbose: - print("Reloading checkpoint_{}.pt".format(checkpoint_list[0])) - return os.path.join(checkpoint_dir, 
"checkpoint_{}.pt".format(checkpoint_list[0])) - - -def make_pad_mask(lengths, xs=None, length_dim=-1, device=None): - """ - Make mask tensor containing indices of padded part. - - Args: - lengths (LongTensor or List): Batch of lengths (B,). - xs (Tensor, optional): The reference tensor. - If set, masks will be the same shape as this tensor. - length_dim (int, optional): Dimension indicator of the above tensor. - See the example. - - Returns: - Tensor: Mask tensor containing indices of padded part. - dtype=torch.uint8 in PyTorch 1.2- - dtype=torch.bool in PyTorch 1.2+ (including 1.2) - - """ - if length_dim == 0: - raise ValueError("length_dim cannot be 0: {}".format(length_dim)) - - if not isinstance(lengths, list): - lengths = lengths.tolist() - bs = int(len(lengths)) - if xs is None: - maxlen = int(max(lengths)) - else: - maxlen = xs.size(length_dim) - - if device is not None: - seq_range = torch.arange(0, maxlen, dtype=torch.int64, device=device) - else: - seq_range = torch.arange(0, maxlen, dtype=torch.int64) - seq_range_expand = seq_range.unsqueeze(0).expand(bs, maxlen) - seq_length_expand = seq_range_expand.new(lengths).unsqueeze(-1) - mask = seq_range_expand >= seq_length_expand - - if xs is not None: - assert xs.size(0) == bs, (xs.size(0), bs) - - if length_dim < 0: - length_dim = xs.dim() + length_dim - # ind = (:, None, ..., None, :, , None, ..., None) - ind = tuple(slice(None) if i in (0, length_dim) else None for i in range(xs.dim())) - mask = mask[ind].expand_as(xs).to(xs.device) - return mask - - -def make_non_pad_mask(lengths, xs=None, length_dim=-1, device=None): - """ - Make mask tensor containing indices of non-padded part. - - Args: - lengths (LongTensor or List): Batch of lengths (B,). - xs (Tensor, optional): The reference tensor. - If set, masks will be the same shape as this tensor. - length_dim (int, optional): Dimension indicator of the above tensor. - See the example. - - Returns: - ByteTensor: mask tensor containing indices of padded part. - dtype=torch.uint8 in PyTorch 1.2- - dtype=torch.bool in PyTorch 1.2+ (including 1.2) - - """ - return ~make_pad_mask(lengths, xs, length_dim, device=device) - - -def initialize(model, init): - """ - Initialize weights of a neural network module. - - Parameters are initialized using the given method or distribution. - - Args: - model: Target. - init: Method of initialization. - """ - - # weight init - for p in model.parameters(): - if p.dim() > 1: - if init == "xavier_uniform": - torch.nn.init.xavier_uniform_(p.data) - elif init == "xavier_normal": - torch.nn.init.xavier_normal_(p.data) - elif init == "kaiming_uniform": - torch.nn.init.kaiming_uniform_(p.data, nonlinearity="relu") - elif init == "kaiming_normal": - torch.nn.init.kaiming_normal_(p.data, nonlinearity="relu") - else: - raise ValueError("Unknown initialization: " + init) - # bias init - for p in model.parameters(): - if p.dim() == 1: - p.data.zero_() - - # reset some modules with default init - for m in model.modules(): - if isinstance(m, (torch.nn.Embedding, torch.nn.LayerNorm)): - m.reset_parameters() - - -def pad_list(xs, pad_value): - """ - Perform padding for the list of tensors. - - Args: - xs (List): List of Tensors [(T_1, `*`), (T_2, `*`), ..., (T_B, `*`)]. - pad_value (float): Value for padding. - - Returns: - Tensor: Padded tensor (B, Tmax, `*`). 
- - """ - n_batch = len(xs) - max_len = max(x.size(0) for x in xs) - pad = xs[0].new(n_batch, max_len, *xs[0].size()[1:]).fill_(pad_value) - - for i in range(n_batch): - pad[i, : xs[i].size(0)] = xs[i] - - return pad - - -def subsequent_mask(size, device="cpu", dtype=torch.bool): - """ - Create mask for subsequent steps (size, size). - - :param int size: size of mask - :param str device: "cpu" or "cuda" or torch.Tensor.device - :param torch.dtype dtype: result dtype - :rtype - """ - ret = torch.ones(size, size, device=device, dtype=dtype) - return torch.tril(ret, out=ret) - - -class ScorerInterface: - """ - Scorer interface for beam search. - - The scorer performs scoring of the all tokens in vocabulary. - - Examples: - * Search heuristics - * :class:`espnet.nets.scorers.length_bonus.LengthBonus` - * Decoder networks of the sequence-to-sequence models - * :class:`espnet.nets.pytorch_backend.nets.transformer.decoder.Decoder` - * :class:`espnet.nets.pytorch_backend.nets.rnn.decoders.Decoder` - * Neural language models - * :class:`espnet.nets.pytorch_backend.lm.transformer.TransformerLM` - * :class:`espnet.nets.pytorch_backend.lm.default.DefaultRNNLM` - * :class:`espnet.nets.pytorch_backend.lm.seq_rnn.SequentialRNNLM` - - """ - - def init_state(self, x): - """ - Get an initial state for decoding (optional). - - Args: - x (torch.Tensor): The encoded feature tensor - - Returns: initial state - - """ - return None - - def select_state(self, state, i, new_id=None): - """ - Select state with relative ids in the main beam search. - - Args: - state: Decoder state for prefix tokens - i (int): Index to select a state in the main beam search - new_id (int): New label index to select a state if necessary - - Returns: - state: pruned state - - """ - return None if state is None else state[i] - - def score(self, y, state, x): - """ - Score new token (required). - - Args: - y (torch.Tensor): 1D torch.int64 prefix tokens. - state: Scorer state for prefix tokens - x (torch.Tensor): The encoder feature that generates ys. - - Returns: - tuple[torch.Tensor, Any]: Tuple of - scores for next token that has a shape of `(n_vocab)` - and next state for ys - - """ - raise NotImplementedError - - def final_score(self, state): - """ - Score eos (optional). - - Args: - state: Scorer state for prefix tokens - - Returns: - float: final score - - """ - return 0.0 - - -class BatchScorerInterface(ScorerInterface, ABC): - - def batch_init_state(self, x): - """ - Get an initial state for decoding (optional). - - Args: - x (torch.Tensor): The encoded feature tensor - - Returns: initial state - - """ - return self.init_state(x) - - def batch_score(self, ys, states, xs): - """ - Score new token batch (required). - - Args: - ys (torch.Tensor): torch.int64 prefix tokens (n_batch, ylen). - states (List[Any]): Scorer states for prefix tokens. - xs (torch.Tensor): - The encoder feature that generates ys (n_batch, xlen, n_feat). - - Returns: - tuple[torch.Tensor, List[Any]]: Tuple of - batchfied scores for next token with shape of `(n_batch, n_vocab)` - and next state list for ys. - - """ - scores = list() - outstates = list() - for i, (y, state, x) in enumerate(zip(ys, states, xs)): - score, outstate = self.score(y, state, x) - outstates.append(outstate) - scores.append(score) - scores = torch.cat(scores, 0).view(ys.shape[0], -1) - return scores, outstates - - -def to_device(m, x): - """Send tensor into the device of the module. - Args: - m (torch.nn.Module): Torch module. - x (Tensor): Torch tensor. 
- Returns: - Tensor: Torch tensor located in the same place as torch module. - """ - if isinstance(m, torch.nn.Module): - device = next(m.parameters()).device - elif isinstance(m, torch.Tensor): - device = m.device - else: - raise TypeError( - "Expected torch.nn.Module or torch.tensor, " f"bot got: {type(m)}" - ) - return x.to(device) diff --git a/spaces/sayakpaul/sidd-denoising-maxim/maxim/blocks/unet.py b/spaces/sayakpaul/sidd-denoising-maxim/maxim/blocks/unet.py deleted file mode 100644 index 6000e05ae4472df5191a7af890b4d9274271081f..0000000000000000000000000000000000000000 --- a/spaces/sayakpaul/sidd-denoising-maxim/maxim/blocks/unet.py +++ /dev/null @@ -1,133 +0,0 @@ -import functools - -import tensorflow as tf -from tensorflow.keras import layers - -from .attentions import RCAB -from .misc_gating import CrossGatingBlock, ResidualSplitHeadMultiAxisGmlpLayer - -Conv1x1 = functools.partial(layers.Conv2D, kernel_size=(1, 1), padding="same") -Conv3x3 = functools.partial(layers.Conv2D, kernel_size=(3, 3), padding="same") -ConvT_up = functools.partial( - layers.Conv2DTranspose, kernel_size=(2, 2), strides=(2, 2), padding="same" -) -Conv_down = functools.partial( - layers.Conv2D, kernel_size=(4, 4), strides=(2, 2), padding="same" -) - - -def UNetEncoderBlock( - num_channels: int, - block_size, - grid_size, - num_groups: int = 1, - lrelu_slope: float = 0.2, - block_gmlp_factor: int = 2, - grid_gmlp_factor: int = 2, - input_proj_factor: int = 2, - channels_reduction: int = 4, - dropout_rate: float = 0.0, - downsample: bool = True, - use_global_mlp: bool = True, - use_bias: bool = True, - use_cross_gating: bool = False, - name: str = "unet_encoder", -): - """Encoder block in MAXIM.""" - - def apply(x, skip=None, enc=None, dec=None): - if skip is not None: - x = tf.concat([x, skip], axis=-1) - - # convolution-in - x = Conv1x1(filters=num_channels, use_bias=use_bias, name=f"{name}_Conv_0")(x) - shortcut_long = x - - for i in range(num_groups): - if use_global_mlp: - x = ResidualSplitHeadMultiAxisGmlpLayer( - grid_size=grid_size, - block_size=block_size, - grid_gmlp_factor=grid_gmlp_factor, - block_gmlp_factor=block_gmlp_factor, - input_proj_factor=input_proj_factor, - use_bias=use_bias, - dropout_rate=dropout_rate, - name=f"{name}_SplitHeadMultiAxisGmlpLayer_{i}", - )(x) - x = RCAB( - num_channels=num_channels, - reduction=channels_reduction, - lrelu_slope=lrelu_slope, - use_bias=use_bias, - name=f"{name}_channel_attention_block_1{i}", - )(x) - - x = x + shortcut_long - - if enc is not None and dec is not None: - assert use_cross_gating - x, _ = CrossGatingBlock( - features=num_channels, - block_size=block_size, - grid_size=grid_size, - dropout_rate=dropout_rate, - input_proj_factor=input_proj_factor, - upsample_y=False, - use_bias=use_bias, - name=f"{name}_cross_gating_block", - )(x, enc + dec) - - if downsample: - x_down = Conv_down( - filters=num_channels, use_bias=use_bias, name=f"{name}_Conv_1" - )(x) - return x_down, x - else: - return x - - return apply - - -def UNetDecoderBlock( - num_channels: int, - block_size, - grid_size, - num_groups: int = 1, - lrelu_slope: float = 0.2, - block_gmlp_factor: int = 2, - grid_gmlp_factor: int = 2, - input_proj_factor: int = 2, - channels_reduction: int = 4, - dropout_rate: float = 0.0, - downsample: bool = True, - use_global_mlp: bool = True, - use_bias: bool = True, - name: str = "unet_decoder", -): - - """Decoder block in MAXIM.""" - - def apply(x, bridge=None): - x = ConvT_up( - filters=num_channels, use_bias=use_bias, name=f"{name}_ConvTranspose_0" 
- )(x) - x = UNetEncoderBlock( - num_channels=num_channels, - num_groups=num_groups, - lrelu_slope=lrelu_slope, - block_size=block_size, - grid_size=grid_size, - block_gmlp_factor=block_gmlp_factor, - grid_gmlp_factor=grid_gmlp_factor, - channels_reduction=channels_reduction, - use_global_mlp=use_global_mlp, - dropout_rate=dropout_rate, - downsample=False, - use_bias=use_bias, - name=f"{name}_UNetEncoderBlock_0", - )(x, skip=bridge) - - return x - - return apply diff --git a/spaces/sayakpaul/sots-indoor-dehazing-maxim/maxim/blocks/attentions.py b/spaces/sayakpaul/sots-indoor-dehazing-maxim/maxim/blocks/attentions.py deleted file mode 100644 index ad59022388610f775335cd3f58ba4fb5362ebd90..0000000000000000000000000000000000000000 --- a/spaces/sayakpaul/sots-indoor-dehazing-maxim/maxim/blocks/attentions.py +++ /dev/null @@ -1,143 +0,0 @@ -import functools - -import tensorflow as tf -from tensorflow.keras import layers - -from .others import MlpBlock - -Conv3x3 = functools.partial(layers.Conv2D, kernel_size=(3, 3), padding="same") -Conv1x1 = functools.partial(layers.Conv2D, kernel_size=(1, 1), padding="same") - - -def CALayer( - num_channels: int, - reduction: int = 4, - use_bias: bool = True, - name: str = "channel_attention", -): - """Squeeze-and-excitation block for channel attention. - - ref: https://arxiv.org/abs/1709.01507 - """ - - def apply(x): - # 2D global average pooling - y = layers.GlobalAvgPool2D(keepdims=True)(x) - # Squeeze (in Squeeze-Excitation) - y = Conv1x1( - filters=num_channels // reduction, use_bias=use_bias, name=f"{name}_Conv_0" - )(y) - y = tf.nn.relu(y) - # Excitation (in Squeeze-Excitation) - y = Conv1x1(filters=num_channels, use_bias=use_bias, name=f"{name}_Conv_1")(y) - y = tf.nn.sigmoid(y) - return x * y - - return apply - - -def RCAB( - num_channels: int, - reduction: int = 4, - lrelu_slope: float = 0.2, - use_bias: bool = True, - name: str = "residual_ca", -): - """Residual channel attention block. Contains LN,Conv,lRelu,Conv,SELayer.""" - - def apply(x): - shortcut = x - x = layers.LayerNormalization(epsilon=1e-06, name=f"{name}_LayerNorm")(x) - x = Conv3x3(filters=num_channels, use_bias=use_bias, name=f"{name}_conv1")(x) - x = tf.nn.leaky_relu(x, alpha=lrelu_slope) - x = Conv3x3(filters=num_channels, use_bias=use_bias, name=f"{name}_conv2")(x) - x = CALayer( - num_channels=num_channels, - reduction=reduction, - use_bias=use_bias, - name=f"{name}_channel_attention", - )(x) - return x + shortcut - - return apply - - -def RDCAB( - num_channels: int, - reduction: int = 16, - use_bias: bool = True, - dropout_rate: float = 0.0, - name: str = "rdcab", -): - """Residual dense channel attention block. Used in Bottlenecks.""" - - def apply(x): - y = layers.LayerNormalization(epsilon=1e-06, name=f"{name}_LayerNorm")(x) - y = MlpBlock( - mlp_dim=num_channels, - dropout_rate=dropout_rate, - use_bias=use_bias, - name=f"{name}_channel_mixing", - )(y) - y = CALayer( - num_channels=num_channels, - reduction=reduction, - use_bias=use_bias, - name=f"{name}_channel_attention", - )(y) - x = x + y - return x - - return apply - - -def SAM( - num_channels: int, - output_channels: int = 3, - use_bias: bool = True, - name: str = "sam", -): - - """Supervised attention module for multi-stage training. - - Introduced by MPRNet [CVPR2021]: https://github.com/swz30/MPRNet - """ - - def apply(x, x_image): - """Apply the SAM module to the input and num_channels. 
- Args: - x: the output num_channels from UNet decoder with shape (h, w, c) - x_image: the input image with shape (h, w, 3) - Returns: - A tuple of tensors (x1, image) where (x1) is the sam num_channels used for the - next stage, and (image) is the output restored image at current stage. - """ - # Get num_channels - x1 = Conv3x3(filters=num_channels, use_bias=use_bias, name=f"{name}_Conv_0")(x) - - # Output restored image X_s - if output_channels == 3: - image = ( - Conv3x3( - filters=output_channels, use_bias=use_bias, name=f"{name}_Conv_1" - )(x) - + x_image - ) - else: - image = Conv3x3( - filters=output_channels, use_bias=use_bias, name=f"{name}_Conv_1" - )(x) - - # Get attention maps for num_channels - x2 = tf.nn.sigmoid( - Conv3x3(filters=num_channels, use_bias=use_bias, name=f"{name}_Conv_2")(image) - ) - - # Get attended feature maps - x1 = x1 * x2 - - # Residual connection - x1 = x1 + x - return x1, image - - return apply diff --git a/spaces/scedlatioru/img-to-music/example/Smite Hacks Smite Combo And Skill Bots Scripts Smite Skin Hacks And Other Exploits Drop Hacks.md b/spaces/scedlatioru/img-to-music/example/Smite Hacks Smite Combo And Skill Bots Scripts Smite Skin Hacks And Other Exploits Drop Hacks.md deleted file mode 100644 index 1229ac0a69e84a713f8a57afb74817de7d37a915..0000000000000000000000000000000000000000 --- a/spaces/scedlatioru/img-to-music/example/Smite Hacks Smite Combo And Skill Bots Scripts Smite Skin Hacks And Other Exploits Drop Hacks.md +++ /dev/null @@ -1,33 +0,0 @@ -
          -``` -

          How to Use Smite Hacks, Smite Combo and Skill Bots Scripts, Smite Skin Hacks and other Exploits, Drop Hacks

          -

Smite is a popular online multiplayer game that pits gods and mythological creatures against each other in a battle arena. If you want to dominate your opponents and win more matches, you might be interested in some of the Smite hacks, combo and skill bot scripts, skin hacks, and other exploits such as drop hacks that are available online. These cheats can give you an unfair advantage over your enemies by enhancing your abilities, unlocking premium skins, or even crashing the game for everyone else. However, before you use any of these hacks, you should be aware of the risks and consequences involved.

          -

          Smite Hacks, Smite Combo and Skill Bots Scripts, Smite Skin Hacks and other Exploits, Drop Hacks


          Download Zip ⇒⇒⇒ https://gohhs.com/2uEAAr



          -

          What are Smite Hacks?

          -

          Smite hacks are programs or tools that modify the game files or memory to alter the gameplay in your favor. Some of the most common types of Smite hacks are:

          -
            -
          • Aimbot: This hack automatically aims your attacks at the enemy's hitbox, ensuring that you never miss a shot.
          • -
          • ESP: This hack shows you extra information on your screen, such as the enemy's health, mana, location, cooldowns, wards, etc.
          • -
          • Speedhack: This hack increases your movement speed beyond the normal limit, allowing you to escape or chase enemies faster.
          • -
• Damage hack: This hack increases your damage output beyond the normal limit, allowing you to kill enemies with one or a few hits.
          • -
          • No cooldown hack: This hack removes the cooldowns of your abilities, allowing you to spam them without any delay.
          • -
          -

          What are Smite Combo and Skill Bots Scripts?

          -

          Smite combo and skill bots scripts are programs or macros that automate your actions in the game. They can perform complex combos or skills for you with a single keystroke or mouse click. Some of the most common types of Smite combo and skill bots scripts are:

          -
            -
          • Combo script: This script executes a predefined sequence of abilities for a specific god or situation. For example, a combo script for Loki might activate his stealth, use his ultimate on an enemy god, use his decoy to distract them, and then use his basic attacks to finish them off.
          • -
          • Skill script: This script performs a single ability for you with optimal timing and accuracy. For example, a skill script for Ra might automatically fire his ultimate at the enemy's location when they are low on health or stunned.
          • -
          -

          What are Smite Skin Hacks and other Exploits?

          -

          Smite skin hacks and other exploits are methods or glitches that allow you to access or use features that are normally restricted or unavailable in the game. Some of the most common types of Smite skin hacks and other exploits are:

          -

          -
            -
          • Skin hack: This hack allows you to use any skin for any god in the game, regardless of whether you own it or not. You can also use skins that are exclusive to events or promotions.
          • -
          • Gem hack: This hack allows you to generate unlimited gems for free, which are the premium currency in the game. You can use gems to buy skins, voice packs, chests, boosters, etc.
          • -
          • Favor hack: This hack allows you to generate unlimited favor for free, which are the basic currency in the game. You can use favor to buy gods, recolors, emotes, etc.
          • -
          • Bug exploit: This exploit takes advantage of a bug or error in the game code or server to gain an advantage. For example, a bug exploit might allow you to clip through walls, teleport across the map, duplicate items, etc.
          • -
          -

          What are Drop Hacks?

          -

Drop hacks are programs or tools that interfere with the game's connection to the server, causing the match to lag, disconnect, or crash for everyone in the lobby.

          d5da3c52bf
          -
          -
          \ No newline at end of file diff --git a/spaces/scedlatioru/img-to-music/example/Vray For Sketchup 1 49 01 Crack _BEST_.md b/spaces/scedlatioru/img-to-music/example/Vray For Sketchup 1 49 01 Crack _BEST_.md deleted file mode 100644 index a3362f6539c52ac7764a691926879784a642f141..0000000000000000000000000000000000000000 --- a/spaces/scedlatioru/img-to-music/example/Vray For Sketchup 1 49 01 Crack _BEST_.md +++ /dev/null @@ -1,60 +0,0 @@ -

          vray for sketchup 1 49 01 crack


          Download > https://gohhs.com/2uEyI8



          - -XVideo is the codec used to compress video files - -... - - niko47, it is OS though. - - daniel_: I don't see anything obvious - - yes and its OS - - Anyone using linux-mint 10? - - niko47, it doesn't matter, this channel is support. - - jrib, oh okay, that's what I figured, but I thought maybe it was something else.. - - its easy to solve - - so i have to talk here - - jrib, i'm not in a livecd/installation situation. I have xubuntu on a virtual machine. - - niko47, not if you don't want to. - - meowsus: oh, you have to check. I'm not sure I know how to get there myself - - i don't need to tell you - - niko47, how so? - - niko47, I'm sorry you are making no sense. - - i'll tell you - - jrib, i know how to get to grub so it has the same effect. - - add the ctx ctx_3d_info and ctx_scissor rects in framebuffer - - and by add i mean use - - niko47, your not being helpful here. - - the first 2 are easy - - jrib, just not how to automatically have grub at that point - - niko47, you have to care I would imagine, if you are actually interested. - - niko47:!enter - - meowsus: I don't know how to go to grub and I can't get to grub. But maybe someone else here does - -!enter | niko47 - - niko47: Please try to keep your questions/responses on one line. Don't use the "Enter" key as punctuation! 4fefd39f24
          -
          -
          -

          diff --git a/spaces/seduerr/text_analytics/text_analytics/indices/word_information_indices.py b/spaces/seduerr/text_analytics/text_analytics/indices/word_information_indices.py deleted file mode 100644 index df43a83fb6d4f6281c4df2472d2900fd9f14f597..0000000000000000000000000000000000000000 --- a/spaces/seduerr/text_analytics/text_analytics/indices/word_information_indices.py +++ /dev/null @@ -1,99 +0,0 @@ -import multiprocessing -import spacy - -from typing import Callable -from typing import List -from text_analytics.indices.descriptive_indices import DescriptiveIndices -from text_analytics.constants import ACCEPTED_LANGUAGES -from text_analytics.utils.utils import is_word -from text_analytics.utils.utils import split_text_into_paragraphs - -class WordInformationIndices: - def __init__(self, nlp, language: str='en', descriptive_indices: DescriptiveIndices=None) -> None: - self.language = language - self._nlp = nlp - self._incidence = 1000 - if descriptive_indices is None: - self._di = DescriptiveIndices(language=language, nlp=nlp) - else: - self._di = descriptive_indices - - def _get_word_type_incidence(self, text: str, disable_pipeline :List, counter_function: Callable, word_count: int=None, workers: int=-1) -> float: - paragraphs = split_text_into_paragraphs(text) - wc = word_count if word_count is not None else self._di.get_word_count_from_text(text) - self._nlp.get_pipe('feature counter').counter_function = counter_function - words = sum(doc._.feature_count for doc in self._nlp.pipe(paragraphs, batch_size=1, disable=disable_pipeline, n_process=1)) - result = words #(words / wc) - return result - - def get_noun_incidence(self, text: str, word_count: int=None, workers: int=-1) -> float: - noun_counter = lambda doc: sum(1 for token in doc if is_word(token) and token.pos_ in ['NOUN', 'PROPN']) - disable_pipeline = [pipe for pipe in self._nlp.pipe_names if pipe not in ['tok2vec', 'tagger', 'attribute_ruler', 'feature counter']] - result = self._get_word_type_incidence(text, disable_pipeline=disable_pipeline, counter_function=noun_counter, workers=workers) - return result - - def get_verb_incidence(self, text: str, word_count: int=None, workers: int=-1) -> float: - verb_counter = lambda doc: sum(1 for token in doc if is_word(token) and token.pos_ == 'VERB') - disable_pipeline = [pipe for pipe in self._nlp.pipe_names if pipe not in ['tok2vec', 'tagger', 'attribute_ruler', 'feature counter']] - return self._get_word_type_incidence(text, disable_pipeline=disable_pipeline, counter_function=verb_counter, workers=workers) - - def get_adjective_incidence(self, text: str, word_count: int=None, workers: int=-1) -> float: - adjective_counter = lambda doc: sum(1 - for token in doc - if is_word(token) and token.pos_ == 'ADJ') - disable_pipeline = [pipe for pipe in self._nlp.pipe_names if pipe not in ['tok2vec', 'tagger', 'attribute_ruler', 'feature counter']] - return self._get_word_type_incidence(text, disable_pipeline=disable_pipeline, counter_function=adjective_counter, workers=workers) - - def get_adverb_incidence(self, text: str, word_count: int=None, workers: int=-1) -> float: - adverb_counter = lambda doc: sum(1 - for token in doc - if is_word(token) and token.pos_ == 'ADV') - disable_pipeline = [pipe for pipe in self._nlp.pipe_names if pipe not in ['tok2vec', 'tagger', 'attribute_ruler', 'feature counter']] - return self._get_word_type_incidence(text, disable_pipeline=disable_pipeline, counter_function=adverb_counter, workers=workers) - - def get_personal_pronoun_incidence(self, text: 
str, word_count: int=None, workers: int=-1) -> float: - pronoun_counter = lambda doc: sum(1 - for token in doc - if is_word(token) and token.pos_ == 'PRON') - disable_pipeline = [pipe for pipe in self._nlp.pipe_names if pipe not in ['tok2vec', 'tagger', 'attribute_ruler', 'feature counter']] - return self._get_word_type_incidence(text, disable_pipeline=disable_pipeline, counter_function=pronoun_counter, workers=workers) - - def get_personal_pronoun_first_person_singular_form_incidence(self, text: str, word_count: int=None, workers: int=-1) -> float: - pronoun_counter = lambda doc: sum(1 - for token in doc - if is_word(token) and token.pos_ == 'PRON' and 'Number=Sing' in token.morph and 'Person=1' in token.morph) - disable_pipeline = [pipe for pipe in self._nlp.pipe_names if pipe not in ['tok2vec', 'tagger', 'attribute_ruler', 'feature counter']] - return self._get_word_type_incidence(text, disable_pipeline=disable_pipeline, counter_function=pronoun_counter, workers=workers) - - def get_personal_pronoun_first_person_plural_form_incidence(self, text: str, word_count: int=None, workers: int=-1) -> float: - pronoun_counter = lambda doc: sum(1 - for token in doc - if is_word(token) and token.pos_ == 'PRON' and 'Number=Plur' in token.morph and 'Person=1' in token.morph) - disable_pipeline = [pipe for pipe in self._nlp.pipe_names if pipe not in ['tok2vec', 'tagger', 'attribute_ruler', 'feature counter']] - return self._get_word_type_incidence(text, disable_pipeline=disable_pipeline, counter_function=pronoun_counter, workers=workers) - - def get_personal_pronoun_second_person_singular_form_incidence(self, text: str, word_count: int=None, workers: int=-1) -> float: - pronoun_counter = lambda doc: sum(1 - for token in doc - if is_word(token) and token.pos_ == 'PRON' and 'Number=Sing' in token.morph and 'Person=2' in token.morph) - disable_pipeline = [pipe for pipe in self._nlp.pipe_names if pipe not in ['tok2vec', 'tagger', 'attribute_ruler', 'feature counter']] - return self._get_word_type_incidence(text, disable_pipeline=disable_pipeline, counter_function=pronoun_counter, workers=workers) - - def get_personal_pronoun_second_person_plural_form_incidence(self, text: str, word_count: int=None, workers: int=-1) -> float: - pronoun_counter = lambda doc: sum(1 - for token in doc - if is_word(token) and token.pos_ == 'PRON' and 'Number=Plur' in token.morph and 'Person=2' in token.morph) - disable_pipeline = [pipe for pipe in self._nlp.pipe_names if pipe not in ['tok2vec', 'tagger', 'attribute_ruler', 'feature counter']] - return self._get_word_type_incidence(text, disable_pipeline=disable_pipeline, counter_function=pronoun_counter, workers=workers) - - def get_personal_pronoun_third_person_singular_form_incidence(self, text: str, word_count: int=None, workers: int=-1) -> float: - pronoun_counter = lambda doc: sum(1 - for token in doc - if is_word(token) and token.pos_ == 'PRON' and 'Number=Sing' in token.morph and 'Person=3' in token.morph) - disable_pipeline = [pipe for pipe in self._nlp.pipe_names if pipe not in ['tok2vec', 'tagger', 'attribute_ruler', 'feature counter']] - return self._get_word_type_incidence(text, disable_pipeline=disable_pipeline, counter_function=pronoun_counter, workers=workers) - - def get_personal_pronoun_third_person_plural_form_incidence(self, text: str, word_count: int=None, workers: int=-1) -> float: - pronoun_counter = lambda doc: sum(1 for token in doc if is_word(token) and token.pos_ == 'PRON' and 'Number=Plur' in token.morph and 'Person=3' in token.morph) - 
disable_pipeline = [pipe for pipe in self._nlp.pipe_names if pipe not in ['tok2vec', 'tagger', 'attribute_ruler', 'feature counter']] - return self._get_word_type_incidence(text, disable_pipeline=disable_pipeline, counter_function=pronoun_counter, workers=workers) diff --git a/spaces/shaolin123/soulteary-Chinese-Llama-2-7b-ggml-q4/app.py b/spaces/shaolin123/soulteary-Chinese-Llama-2-7b-ggml-q4/app.py deleted file mode 100644 index 6a089aacd3b952c6e5d7809dbccaf5dfbee61c1c..0000000000000000000000000000000000000000 --- a/spaces/shaolin123/soulteary-Chinese-Llama-2-7b-ggml-q4/app.py +++ /dev/null @@ -1,3 +0,0 @@ -import gradio as gr - -gr.Interface.load("models/soulteary/Chinese-Llama-2-7b-ggml-q4").launch() \ No newline at end of file diff --git a/spaces/shgao/EditAnything/ldm/models/diffusion/dpm_solver/__init__.py b/spaces/shgao/EditAnything/ldm/models/diffusion/dpm_solver/__init__.py deleted file mode 100644 index 7427f38c07530afbab79154ea8aaf88c4bf70a08..0000000000000000000000000000000000000000 --- a/spaces/shgao/EditAnything/ldm/models/diffusion/dpm_solver/__init__.py +++ /dev/null @@ -1 +0,0 @@ -from .sampler import DPMSolverSampler \ No newline at end of file diff --git a/spaces/shi-labs/Matting-Anything/networks/m2ms/conv_sam.py b/spaces/shi-labs/Matting-Anything/networks/m2ms/conv_sam.py deleted file mode 100644 index 8a5720c8c03daa9714e025e4cf7730d327557d48..0000000000000000000000000000000000000000 --- a/spaces/shi-labs/Matting-Anything/networks/m2ms/conv_sam.py +++ /dev/null @@ -1,189 +0,0 @@ -import logging -import torch.nn as nn -import torch -import torch.nn.functional as F -from networks import ops - -def conv5x5(in_planes, out_planes, stride=1, groups=1, dilation=1): - """5x5 convolution with padding""" - return nn.Conv2d(in_planes, out_planes, kernel_size=5, stride=stride, - padding=2, groups=groups, bias=False, dilation=dilation) - - -def conv3x3(in_planes, out_planes, stride=1, groups=1, dilation=1): - """3x3 convolution with padding""" - return nn.Conv2d(in_planes, out_planes, kernel_size=3, stride=stride, - padding=dilation, groups=groups, bias=False, dilation=dilation) - - -def conv1x1(in_planes, out_planes, stride=1): - """1x1 convolution""" - return nn.Conv2d(in_planes, out_planes, kernel_size=1, stride=stride, bias=False) - - -class BasicBlock(nn.Module): - expansion = 1 - - def __init__(self, inplanes, planes, stride=1, upsample=None, norm_layer=None, large_kernel=False): - super(BasicBlock, self).__init__() - if norm_layer is None: - norm_layer = nn.BatchNorm2d - self.stride = stride - conv = conv5x5 if large_kernel else conv3x3 - # Both self.conv1 and self.downsample layers downsample the input when stride != 1 - if self.stride > 1: - self.conv1 = ops.SpectralNorm(nn.ConvTranspose2d(inplanes, inplanes, kernel_size=4, stride=2, padding=1, bias=False)) - else: - self.conv1 = ops.SpectralNorm(conv(inplanes, inplanes)) - self.bn1 = norm_layer(inplanes) - self.activation = nn.LeakyReLU(0.2, inplace=True) - self.conv2 = ops.SpectralNorm(conv(inplanes, planes)) - self.bn2 = norm_layer(planes) - self.upsample = upsample - - def forward(self, x): - identity = x - - out = self.conv1(x) - out = self.bn1(out) - out = self.activation(out) - - out = self.conv2(out) - out = self.bn2(out) - - if self.upsample is not None: - identity = self.upsample(x) - - out += identity - out = self.activation(out) - - return out - -class SAM_Decoder_Deep(nn.Module): - def __init__(self, nc, layers, block=BasicBlock, norm_layer=None, large_kernel=False, late_downsample=False): - 
super(SAM_Decoder_Deep, self).__init__() - self.logger = logging.getLogger("Logger") - if norm_layer is None: - norm_layer = nn.BatchNorm2d - self._norm_layer = norm_layer - self.large_kernel = large_kernel - self.kernel_size = 5 if self.large_kernel else 3 - - #self.inplanes = 512 if layers[0] > 0 else 256 - self.inplanes = 256 - self.late_downsample = late_downsample - self.midplanes = 64 if late_downsample else 32 - - self.conv1 = ops.SpectralNorm(nn.ConvTranspose2d(self.midplanes, 32, kernel_size=4, stride=2, padding=1, bias=False)) - self.bn1 = norm_layer(32) - self.leaky_relu = nn.LeakyReLU(0.2, inplace=True) - - self.upsample = nn.UpsamplingNearest2d(scale_factor=2) - self.tanh = nn.Tanh() - #self.layer1 = self._make_layer(block, 256, layers[0], stride=2) - self.layer2 = self._make_layer(block, 128, layers[1], stride=2) - self.layer3 = self._make_layer(block, 64, layers[2], stride=2) - self.layer4 = self._make_layer(block, self.midplanes, layers[3], stride=2) - - self.refine_OS1 = nn.Sequential( - nn.Conv2d(32, 32, kernel_size=self.kernel_size, stride=1, padding=self.kernel_size//2, bias=False), - norm_layer(32), - self.leaky_relu, - nn.Conv2d(32, 1, kernel_size=self.kernel_size, stride=1, padding=self.kernel_size//2),) - - self.refine_OS4 = nn.Sequential( - nn.Conv2d(64, 32, kernel_size=self.kernel_size, stride=1, padding=self.kernel_size//2, bias=False), - norm_layer(32), - self.leaky_relu, - nn.Conv2d(32, 1, kernel_size=self.kernel_size, stride=1, padding=self.kernel_size//2),) - - self.refine_OS8 = nn.Sequential( - nn.Conv2d(128, 32, kernel_size=self.kernel_size, stride=1, padding=self.kernel_size//2, bias=False), - norm_layer(32), - self.leaky_relu, - nn.Conv2d(32, 1, kernel_size=self.kernel_size, stride=1, padding=self.kernel_size//2),) - - for m in self.modules(): - if isinstance(m, nn.Conv2d): - if hasattr(m, "weight_bar"): - nn.init.xavier_uniform_(m.weight_bar) - else: - nn.init.xavier_uniform_(m.weight) - elif isinstance(m, (nn.BatchNorm2d, nn.GroupNorm)): - nn.init.constant_(m.weight, 1) - nn.init.constant_(m.bias, 0) - - # Zero-initialize the last BN in each residual branch, - # so that the residual branch starts with zeros, and each residual block behaves like an identity. 
- # This improves the model by 0.2~0.3% according to https://arxiv.org/abs/1706.02677 - for m in self.modules(): - if isinstance(m, BasicBlock): - nn.init.constant_(m.bn2.weight, 0) - - self.logger.debug(self) - - def _make_layer(self, block, planes, blocks, stride=1): - if blocks == 0: - return nn.Sequential(nn.Identity()) - norm_layer = self._norm_layer - upsample = None - if stride != 1: - upsample = nn.Sequential( - nn.UpsamplingNearest2d(scale_factor=2), - ops.SpectralNorm(conv1x1(self.inplanes + 4, planes * block.expansion)), - norm_layer(planes * block.expansion), - ) - elif self.inplanes != planes * block.expansion: - upsample = nn.Sequential( - ops.SpectralNorm(conv1x1(self.inplanes + 4, planes * block.expansion)), - norm_layer(planes * block.expansion), - ) - - layers = [block(self.inplanes + 4, planes, stride, upsample, norm_layer, self.large_kernel)] - self.inplanes = planes * block.expansion - for _ in range(1, blocks): - layers.append(block(self.inplanes, planes, norm_layer=norm_layer, large_kernel=self.large_kernel)) - - return nn.Sequential(*layers) - - def forward(self, x_os16, img, mask): - ret = {} - mask_os16 = F.interpolate(mask, x_os16.shape[2:], mode='bilinear', align_corners=False) - img_os16 = F.interpolate(img, x_os16.shape[2:], mode='bilinear', align_corners=False) - - x = self.layer2(torch.cat((x_os16, img_os16, mask_os16), dim=1)) # N x 128 x 128 x 128 - - x_os8 = self.refine_OS8(x) - - mask_os8 = F.interpolate(mask, x.shape[2:], mode='bilinear', align_corners=False) - img_os8 = F.interpolate(img, x.shape[2:], mode='bilinear', align_corners=False) - - x = self.layer3(torch.cat((x, img_os8, mask_os8), dim=1)) # N x 64 x 256 x 256 - - x_os4 = self.refine_OS4(x) - - mask_os4 = F.interpolate(mask, x.shape[2:], mode='bilinear', align_corners=False) - img_os4 = F.interpolate(img, x.shape[2:], mode='bilinear', align_corners=False) - - x = self.layer4(torch.cat((x, img_os4, mask_os4), dim=1)) # N x 32 x 512 x 512 - x = self.conv1(x) - x = self.bn1(x) - x = self.leaky_relu(x) # N x 32 x 1024 x 1024 - - x_os1 = self.refine_OS1(x) # N - - x_os4 = F.interpolate(x_os4, scale_factor=4.0, mode='bilinear', align_corners=False) - x_os8 = F.interpolate(x_os8, scale_factor=8.0, mode='bilinear', align_corners=False) - - x_os1 = (torch.tanh(x_os1) + 1.0) / 2.0 - x_os4 = (torch.tanh(x_os4) + 1.0) / 2.0 - x_os8 = (torch.tanh(x_os8) + 1.0) / 2.0 - - mask_os1 = F.interpolate(mask, x_os1.shape[2:], mode='bilinear', align_corners=False) - - ret['alpha_os1'] = x_os1 - ret['alpha_os4'] = x_os4 - ret['alpha_os8'] = x_os8 - ret['mask'] = mask_os1 - - return ret \ No newline at end of file diff --git a/spaces/shripadbhat/Question_Answering_Document/app.py b/spaces/shripadbhat/Question_Answering_Document/app.py deleted file mode 100644 index 4b520d99713afd98b92028ee2c588f8855606977..0000000000000000000000000000000000000000 --- a/spaces/shripadbhat/Question_Answering_Document/app.py +++ /dev/null @@ -1,25 +0,0 @@ -import streamlit as st -from question_answering import QuestionAnswering - -st.title('Document Question Answering System') -st.write("Loading the models...") -qa = QuestionAnswering() -st.write('Models Loaded') - - -document_text = st.text_area("Document Text", "", height=100) -query = st.text_input("Query") - - -#if st.button("Get Answers From Document"): -if len(document_text.strip()) > 0 and len(query.strip()) > 0: - st.write('Fetching answer...') - answers_lines = qa.fetch_answers(query, document_text).splitlines() - answer_first = answers_lines[0] - reference_first = 
answers_lines[1] - st.write('Check the answer below...with reference text') - st.header("ANSWER: "+answer_first) - st.subheader("REFERENCE: "+reference_first) - #st.markdown() - - diff --git a/spaces/simonraj/ELOralCoachRiverValleyPrimarySchool/README.md b/spaces/simonraj/ELOralCoachRiverValleyPrimarySchool/README.md deleted file mode 100644 index 81b1dd2f206fefefbfe2cc9b943dad20dd444547..0000000000000000000000000000000000000000 --- a/spaces/simonraj/ELOralCoachRiverValleyPrimarySchool/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: OralCoachStreamingEL -emoji: 📉 -colorFrom: yellow -colorTo: gray -sdk: gradio -sdk_version: 3.42.0 -app_file: app.py -pinned: false -duplicated_from: simonraj/ELOralCoachv1 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/simpie28/VITS-Umamusume-voice-synthesizer/text/sanskrit.py b/spaces/simpie28/VITS-Umamusume-voice-synthesizer/text/sanskrit.py deleted file mode 100644 index 0223aaac384a2f850f5bc20651fc18eb964607d0..0000000000000000000000000000000000000000 --- a/spaces/simpie28/VITS-Umamusume-voice-synthesizer/text/sanskrit.py +++ /dev/null @@ -1,62 +0,0 @@ -import re -from indic_transliteration import sanscript - - -# List of (iast, ipa) pairs: -_iast_to_ipa = [(re.compile('%s' % x[0]), x[1]) for x in [ - ('a', 'ə'), - ('ā', 'aː'), - ('ī', 'iː'), - ('ū', 'uː'), - ('ṛ', 'ɹ`'), - ('ṝ', 'ɹ`ː'), - ('ḷ', 'l`'), - ('ḹ', 'l`ː'), - ('e', 'eː'), - ('o', 'oː'), - ('k', 'k⁼'), - ('k⁼h', 'kʰ'), - ('g', 'g⁼'), - ('g⁼h', 'gʰ'), - ('ṅ', 'ŋ'), - ('c', 'ʧ⁼'), - ('ʧ⁼h', 'ʧʰ'), - ('j', 'ʥ⁼'), - ('ʥ⁼h', 'ʥʰ'), - ('ñ', 'n^'), - ('ṭ', 't`⁼'), - ('t`⁼h', 't`ʰ'), - ('ḍ', 'd`⁼'), - ('d`⁼h', 'd`ʰ'), - ('ṇ', 'n`'), - ('t', 't⁼'), - ('t⁼h', 'tʰ'), - ('d', 'd⁼'), - ('d⁼h', 'dʰ'), - ('p', 'p⁼'), - ('p⁼h', 'pʰ'), - ('b', 'b⁼'), - ('b⁼h', 'bʰ'), - ('y', 'j'), - ('ś', 'ʃ'), - ('ṣ', 's`'), - ('r', 'ɾ'), - ('l̤', 'l`'), - ('h', 'ɦ'), - ("'", ''), - ('~', '^'), - ('ṃ', '^') -]] - - -def devanagari_to_ipa(text): - text = text.replace('ॐ', 'ओम्') - text = re.sub(r'\s*।\s*$', '.', text) - text = re.sub(r'\s*।\s*', ', ', text) - text = re.sub(r'\s*॥', '.', text) - text = sanscript.transliterate(text, sanscript.DEVANAGARI, sanscript.IAST) - for regex, replacement in _iast_to_ipa: - text = re.sub(regex, replacement, text) - text = re.sub('(.)[`ː]*ḥ', lambda x: x.group(0) - [:-1]+'h'+x.group(1)+'*', text) - return text diff --git a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Agniveer Vayu 2022 How to Download Admit Card for Air Force Exam.md b/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Agniveer Vayu 2022 How to Download Admit Card for Air Force Exam.md deleted file mode 100644 index 3e63073f888087802444f7679b0fac18f551c3ac..0000000000000000000000000000000000000000 --- a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Agniveer Vayu 2022 How to Download Admit Card for Air Force Exam.md +++ /dev/null @@ -1,89 +0,0 @@ -
          -

          How to Download Admit Card for Air Force Exam 2022

          -

          If you are aspiring to join the Indian Air Force as an officer or an airman, you must be aware of the Air Force Common Admission Test (AFCAT) and Agniveer (Vayu) exam. These are the two main exams conducted by the Indian Air Force (IAF) to recruit candidates for various branches and trades. To appear for these exams, you need to have a valid admit card that contains your personal details, exam date, time, and venue.

          -

          An admit card is a mandatory document that you need to carry with you to the exam center. It serves as your identity proof and also contains important instructions for the exam. Without an admit card, you will not be allowed to enter the exam hall or take the test.

          -

          admit card download air force 2022


          Download Ziphttps://ssurll.com/2uO1cA



          -

          So how can you download your admit card for the Air Force exam 2022? The answer is simple: you need to visit the official website of IAF and follow some easy steps. Here is a step-by-step guide on how to download your admit card:

          -
            -
          1. Visit https://agnipathvayu.cdac.in/AV/ for Agniveer (Vayu) exam or https://afcat.cdac.in/AFCAT/index.html for AFCAT exam.
          2. -
          3. Click on "Candidate Login" and enter your email ID and password.
          4. -
          5. Click on "Generate Admit Card" and download it in PDF format.
          6. -
          7. Print it out and check all the details carefully.
          8. -
          9. If you find any discrepancy or error in your admit card, contact the AFCAT Cell or Agniveer Cell immediately.
          10. -
          -

          Now that you know how to download your admit card, let us see what are the eligibility criteria, selection process, and preparation tips for the Air Force exam 2022.

          -

          Eligibility Criteria for Air Force Exam 2022

          -

          The eligibility criteria for Air Force exam 2022 vary depending on whether you are applying for AFCAT or Agniveer (Vayu) exam. Here are some common criteria that you need to fulfill:

          -
            -
          • You must be an Indian citizen.
          • -
• You must be between 18 and 25 years of age as on July 1, 2022.
          • -
          • You must have passed Class 12 or equivalent with Physics and Mathematics as compulsory subjects with at least 50% marks in aggregate for AFCAT or at least 33% marks in each subject for Agniveer (Vayu).
          • -
          • You must have a minimum height of 152.5 cm for males and 145 cm for females.
          • -
          • You must have a normal vision of 6/6 in one eye and 6/9 in other eye without glasses or contact lenses.
          • -
          • You must not have any history of mental or physical illness or disability that may affect your performance in the exam.
          • -
          -

          These are the basic eligibility criteria for Air Force exam 2022. However, there may be some additional criteria depending on the branch or trade you are applying for. For example, if you are applying for the flying branch, you need to have a minimum of 60% marks in graduation with Physics and Mathematics as subjects or a minimum of 60% marks in B.E/B.Tech degree. Similarly, if you are applying for the technical branch, you need to have a minimum of 60% marks in B.E/B.Tech degree in a relevant discipline. You can check the detailed eligibility criteria for each branch or trade on the official website of IAF.

          -

          Once you meet the eligibility criteria, you can apply for the exam online through the official website of IAF. You need to fill up the application form, upload your scanned documents, pay the application fee, and submit your form before the last date. The application fee for AFCAT is Rs. 250 and for Agniveer (Vayu) is Rs. 100. You can pay the fee online through debit card, credit card, or net banking.

          -

          After submitting your application form, you will receive a confirmation email and SMS from IAF. You can then download your admit card from the website as explained above. Now let us see what is the selection process for Air Force exam 2022.

          -

          Selection Process for Air Force Exam 2022

          -

          The selection process for Air Force exam 2022 consists of three stages: written test, physical fitness test, and medical examination. Here is a brief overview of each stage:

          -

          Written Test

          -

          The written test is an objective type test that consists of multiple-choice questions. The duration of the test is 2 hours for AFCAT and 1 hour for Agniveer (Vayu). The test covers four sections: General Awareness, Verbal Ability, Numerical Ability, and Reasoning and Military Aptitude. The total marks for AFCAT are 300 and for Agniveer (Vayu) are 100. There is negative marking of one-third mark for each wrong answer.

          -
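
To see how the negative marking plays out, here is an illustrative calculation (assuming the common AFCAT weighting of 3 marks per question for the 300-mark paper, so a wrong answer costs one-third of that, i.e. 1 mark): if you answer 70 questions correctly, get 20 wrong, and leave 10 blank, your score works out to 70 × 3 − 20 × 1 = 190 out of 300. Unattempted questions are neither rewarded nor penalised under this scheme.

-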

          How to download admit card for Air Force Agniveer exam 2022
          -Air Force Agniveer Vayu admit card 2022 download link and steps
          -CASB official site for Air Force admit card download 2022
          -Agnipath Vayu admit card 2022 released by Indian Air Force
          -Air Force Agniveer exam date and admit card download 2022
          -Download Air Force Agniveer admit card 2022 from agnipathvayu.cdac.in
          -Air Force Agniveer admit card 2022: Check exam pattern and syllabus
          -Air Force Agniveer Vayu exam 2022: Admit card, mock test, and preparation tips
          -CASB admit card download for Air Force Agniveer exam 2022
          -Agnipath Vayu exam 2022: How to download Air Force admit card online
          -Air Force Agniveer admit card 2022: Important instructions and documents required
          -Air Force Agniveer Vayu exam 2022: Admit card, eligibility, and selection process
          -CASB official website for Air Force Agniveer admit card download 2022
          -Agnipath Vayu exam 2022: Download Air Force admit card from direct link
          -Air Force Agniveer admit card 2022: Exam center, timing, and reporting details
          -Air Force Agniveer Vayu exam 2022: Admit card, question paper, and answer key
          -CASB login page for Air Force Agniveer admit card download 2022
          -Agnipath Vayu exam 2022: Air Force admit card release date and time
          -Air Force Agniveer admit card 2022: How to solve common issues and errors
          -Air Force Agniveer Vayu exam 2022: Admit card, cut off, and result

          -

          The syllabus for the written test is based on the Class 12 level for Agniveer (Vayu) and graduation level for AFCAT. You can refer to the official notification of IAF for the detailed syllabus of each section. You can also download some sample papers and mock tests from the website to practice and improve your speed and accuracy.

          -

          Physical Fitness Test

          -

          The physical fitness test is conducted to assess your endurance, strength, agility, and coordination. It consists of two components: running and sit-ups/push-ups/squats. You need to complete a 1.6 km run in 6 minutes 30 seconds for males and 8 minutes for females. You also need to perform 10 sit-ups, 10 push-ups, and 20 squats within one minute each.

          -
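
As a rough guide to the running standard, 1.6 km in 6 minutes 30 seconds corresponds to a pace of just over 4 minutes per kilometre, so aim to train until you can hold that pace comfortably rather than sprinting the distance on exam day.

-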

          The physical fitness test is qualifying in nature, which means you need to pass it to proceed to the next stage. You need to practice regularly and maintain a healthy diet and lifestyle to clear this test.

          -

          Medical Examination

          -

          The medical examination is conducted to check your physical and mental fitness for the service. It involves various tests such as blood test, urine test, chest x-ray, ECG, etc. You also need to undergo a dental check-up and an eye check-up. You need to bring some documents such as your admit card, identity proof, medical certificate, etc. to the medical examination center.

          -

          The medical examination is also qualifying in nature, which means you need to pass it to be eligible for the final merit list. You need to follow the instructions given by the medical officers and avoid any substance abuse or medication that may affect your results.

          -

          These are the three stages of the selection process for Air Force exam 2022. You need to clear each stage with minimum qualifying marks to be considered for the final merit list. The final merit list is prepared based on your performance in the written test and your preference of branch or trade. You can check your result and merit list on the official website of IAF after the declaration of the result.

          -

          Preparation Tips for Air Force Exam 2022

          -

          The Air Force exam 2022 is a competitive exam that requires a lot of hard work and dedication from your side. Here are some preparation tips that can help you ace the exam:

          -
            -
          • Make a study plan that covers all the topics and sections of the syllabus. Allocate sufficient time for each topic and revise them regularly.
          • -
          • Solve previous year papers and mock tests to get familiar with the exam pattern, difficulty level, and time management. Analyze your performance and identify your strengths and weaknesses.
          • -
          • Focus on the concepts and formulas of Physics and Mathematics. Practice numerical problems and short-cut methods to solve them quickly and accurately.
          • -
          • Improve your vocabulary and grammar skills for the Verbal Ability section. Read newspapers, magazines, and books to enhance your reading comprehension and general awareness.
          • -
          • Develop your logical reasoning and problem-solving skills for the Reasoning and Military Aptitude section. Practice puzzles, series, analogies, coding-decoding, etc. to sharpen your mental ability.
          • -
          • Prepare for the physical fitness test by doing regular exercises, yoga, and meditation. Eat a balanced diet and drink plenty of water. Avoid smoking, drinking, or any other unhealthy habits.
          • -
          • Prepare for the medical examination by taking care of your health and hygiene. Get a routine check-up done before the exam and follow the advice of your doctor.
          • -
          -

          These are some preparation tips that can help you prepare for the Air Force exam 2022. You need to be consistent, confident, and calm throughout your preparation and exam. Remember, success is not a matter of luck, but a matter of hard work and smart work.

          -

          Frequently Asked Questions (FAQs)

          -

          Here are some frequently asked questions related to the Air Force exam 2022:

          -
            -
          1. How can I check my result for the Air Force exam 2022?
          2. -

            You can check your result for the Air Force exam 2022 on the official website of IAF by logging in with your email ID and password. You can also download your scorecard and merit list from the website.

            -
          3. What should I do if I lose my admit card for the Air Force exam 2022?
          4. -

            If you lose your admit card for the Air Force exam 2022, you can download it again from the official website of IAF by logging in with your email ID and password. You can also contact the AFCAT Cell or Agniveer Cell for assistance.

            -
          5. Can I change my exam center for the Air Force exam 2022?
          6. -

            No, you cannot change your exam center for the Air Force exam 2022 once you have submitted your application form. You need to appear for the exam at the allotted center only.

            -
          7. What is the cut-off mark for the Air Force exam 2022?
          8. -

            The cut-off mark for the Air Force exam 2022 is decided by IAF based on various factors such as number of applicants, number of vacancies, difficulty level of the exam, etc. The cut-off mark is different for each branch or trade. You can check the previous year cut-off marks on the official website of IAF.

            -
          9. What is the salary and benefits of joining the Air Force?
          10. -

            The salary and benefits of joining the Air Force depend on your rank, branch, trade, and service period. You can expect a decent pay scale along with various allowances such as flying allowance, technical allowance, transport allowance, etc. You can also enjoy perks such as free accommodation, medical facilities, pension scheme, insurance cover, etc.

            -
          -

          Conclusion

          -

          The Air Force exam 2022 is a golden opportunity for those who want to serve the nation as an officer or an airman in the Indian Air Force. It is a challenging but rewarding career that offers you a chance to fly high in the sky and defend the country's sovereignty and security. If you are interested in applying for this exam, you need to follow the steps given in this article to download your admit card, check your eligibility criteria, prepare for the selection process, and clear the exam with flying colors. We hope this article has helped you understand how to download admit card for Air Force exam 2022. If you have any queries or suggestions, please feel free to comment below. We wish you all the best for your exam!

          401be4b1e0
          -
          -
          \ No newline at end of file diff --git a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Download and Install Brawl Stars APK on Windows 10.md b/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Download and Install Brawl Stars APK on Windows 10.md deleted file mode 100644 index d5d41fb174be1cbbf6e53379ba0a54151cac06ea..0000000000000000000000000000000000000000 --- a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Download and Install Brawl Stars APK on Windows 10.md +++ /dev/null @@ -1,125 +0,0 @@ -
          -

          How to Download and Play Brawl Stars on Windows 10

          -

          Brawl Stars is a fun and addictive game that you can play on your Android or iOS device. But what if you want to play it on your Windows 10 PC? In this article, we will show you how to download and play Brawl Stars on Windows 10 using an APK file and an Android emulator.

          -

          brawl stars apk download windows 10


          Downloadhttps://ssurll.com/2uNWIu



          -

          What is Brawl Stars?

          -

          Brawl Stars is a mobile game that combines elements of MOBA, battle royale, and shooter genres. You can choose from a variety of characters, called Brawlers, each with their own unique abilities and weapons. You can team up with your friends or play solo in different game modes, such as Gem Grab, Showdown, Brawl Ball, Bounty, Heist, and more. You can also unlock and upgrade your Brawlers, collect skins, join clubs, and participate in special events and tournaments.

          -

          Brawl Stars is developed by Supercell, the same company behind popular games like Clash of Clans and Clash Royale. It was released globally in December 2018 and has since gained millions of fans around the world. It is free to download and play, but you can also purchase in-game items with real money.

          -

          Why Play Brawl Stars on Windows 10?

          -

          While Brawl Stars is designed for mobile devices, there are some advantages to playing it on your Windows 10 PC. Here are some of them:

          -
            -
          • You can enjoy a bigger screen and better graphics, which can enhance your gaming experience.
    
          • You can use your keyboard and mouse for more precise controls, which can give you an edge over your opponents.
    
          • You can access more features and settings with an emulator, such as adjusting the resolution, FPS, CPU, RAM, etc.
    
          -

          Of course, playing Brawl Stars on Windows 10 also has some drawbacks, such as possible compatibility issues, performance issues, or security risks. You should also be aware that Supercell does not officially support playing Brawl Stars on PC, so you may encounter some problems or errors along the way.

          -

          brawl stars apk download windows 10 free
          -brawl stars apk download windows 10 pc
          -brawl stars apk download windows 10 laptop
          -brawl stars apk download windows 10 64 bit
          -brawl stars apk download windows 10 filehippo
          -brawl stars apk download windows 10 latest version
          -brawl stars apk download windows 10 offline installer
          -brawl stars apk download windows 10 no emulator
          -brawl stars apk download windows 10 full game
          -brawl stars apk download windows 10 without bluestacks
          -brawl stars apk download windows 10 softonic
          -brawl stars apk download windows 10 uptodown
          -brawl stars apk download windows 10 supercell
          -brawl stars apk download windows 10 google play
          -brawl stars apk download windows 10 modded
          -brawl stars apk download windows 10 hack
          -brawl stars apk download windows 10 unlimited gems
          -brawl stars apk download windows 10 cheats
          -brawl stars apk download windows 10 tips and tricks
          -brawl stars apk download windows 10 gameplay
          -brawl stars apk download windows 10 review
          -brawl stars apk download windows 10 system requirements
          -brawl stars apk download windows 10 error fix
          -brawl stars apk download windows 10 update
          -brawl stars apk download windows 10 new features
          -brawl stars apk download windows 10 best characters
          -brawl stars apk download windows 10 skins
          -brawl stars apk download windows 10 maps
          -brawl stars apk download windows 10 modes
          -brawl stars apk download windows 10 events
          -brawl stars apk download windows 10 trophies
          -brawl stars apk download windows 10 ranking
          -brawl stars apk download windows 10 clans
          -brawl stars apk download windows 10 friends
          -brawl stars apk download windows 10 chat
          -brawl stars apk download windows 10 voice chat
          -brawl stars apk download windows 10 discord server
          -brawl stars apk download windows 10 reddit community
          -brawl stars apk download windows 10 youtube channel
          -brawl stars apk download windows 10 twitch streamers
          -brawl stars apk download windows 10 guides and tutorials
          -brawl stars apk download windows 10 news and updates
          -brawl stars apk download windows 10 patch notes
          -brawl stars apk download windows 10 bugs and issues
          -brawl stars apk download windows 10 feedback and suggestions
          -brawl stars apk download windows 10 support and contact
          -brawl stars apk download windows 10 faq and help center
          -brawl stars apk download windows 10 privacy policy and terms of service

          -

          How to Download Brawl Stars APK File on Windows 10?

          -

          To play Brawl Stars on Windows 10, you will need an APK file, which is a package that contains the Android app and its installer. There are several ways to get the APK file for Brawl Stars:

          -

          Use your Android phone to get the official APK file from the Play Store

          -

          This is probably the easiest and safest way to get the APK file for Brawl Stars. All you need is an Android phone with Brawl Stars installed from the Play Store. Here are the steps:

          -
            -
              1. On your Android phone, go to Settings > Apps > Brawl Stars > Permissions and enable Storage permission.
              2. Download a file manager app from the Play Store, such as ES File Explorer or Solid Explorer.
              3. Open the file manager app and navigate to /data/app/com.supercell.brawlstars-1/base.apk. This is where the APK file for Brawl Stars is stored.
              4. Copy or move the APK file to your PC using a USB cable, Bluetooth, email, or cloud service (a command-line alternative using adb is sketched below).
    
          -
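              If you are comfortable with a terminal, you can also copy the installed APK to your PC with adb instead of a file manager. The following is only a rough sketch: it assumes USB debugging is enabled on the phone and that adb (from the Android platform-tools) is installed on your PC. The install path also varies by device and Android version, so the sketch first asks the phone for the real path.

              ```
              # list connected devices to confirm the phone is visible to adb
              adb devices
              # print the actual install path of the Brawl Stars package
              adb shell pm path com.supercell.brawlstars
              # pull the APK to the current folder (replace the path with the one printed above)
              adb pull /data/app/com.supercell.brawlstars-1/base.apk BrawlStars.apk
              ```

              If the pull is blocked by permissions on your device, fall back to the file manager method described in the steps above.
    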

          Use a website that offers APK files for download

          -

          This is another option to get the APK file for Brawl Stars, but it is less reliable and more risky. There are many websites that offer APK files for various Android apps, but some of them may be outdated, corrupted, or infected with malware. You should only use trusted and reputable websites, such as APKPure, APKMirror, or Uptodown. Here are the steps:

          -
            -
              1. On your PC, go to the website that offers the APK file for Brawl Stars and download it to your computer.
              2. Scan the APK file with antivirus software to make sure it is safe and clean (a quick file-hash check is sketched below).
              3. Keep the downloaded APK file in a folder you can find easily, so you can install it in the emulator later.
    
          -
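              On top of the antivirus scan in step 2, you can also compare the file's SHA-256 hash with the hash listed on the download page (many APK mirrors publish one). This is just a sketch: it uses certutil, which ships with Windows 10, and the file name is a placeholder for whatever your downloaded file is called.

              ```
              certutil -hashfile BrawlStars.apk SHA256
              ```

              If the printed hash does not match the one shown on the site, delete the file and download it again, preferably from a different mirror.
    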

          Use caution when downloading APK files from unknown sources

          -

          As mentioned above, downloading APK files from unknown sources can be dangerous and may harm your device or compromise your privacy. You should always check the source and the file before installing it. Here are some tips to avoid potential problems:

          -
            -
          • Only download APK files from official or verified sources, such as the Play Store or the app developer's website.
    
          • Check the reviews and ratings of the app and the website before downloading it.
    
          • Check the permissions and details of the app before installing it.
    
          • Update the app regularly to get the latest features and security patches.
    
          -

          How to Install and Run Brawl Stars APK File on Windows 10?

          -

          Once you have the APK file for Brawl Stars on your PC, you will need an Android emulator to install and run it. An Android emulator is a software that simulates an Android device on your PC, allowing you to run Android apps and games on your computer. There are many Android emulators available for Windows 10, but some of the most popular ones are Gameloop, BlueStacks, and Android SDK. Here are the steps to install and run Brawl Stars APK file on Windows 10 using an emulator:

          -

          Use an Android emulator like Gameloop, BlueStacks, or Android SDK

          -

          The first step is to choose an Android emulator that suits your needs and preferences. Each emulator has its own advantages and disadvantages, such as compatibility, performance, features, etc. You can compare different emulators online or try them out yourself. Here are some of the most common emulators for Windows 10:

              | Emulator | Description |
              | --- | --- |
              | Gameloop | A gaming-oriented emulator that supports popular games like PUBG Mobile, Call of Duty Mobile, Free Fire, etc. It has a simple interface and high performance. It also has a built-in game center where you can download games directly. |
              | BlueStacks | A versatile emulator that supports a wide range of apps and games. It has a user-friendly interface and advanced features like keyboard mapping, multi-instance, macro recorder, etc. It also has a built-in app center where you can download apps directly. |
              | Android SDK | A developer-oriented emulator that is part of the official Android Studio software. It is mainly used for testing and debugging Android apps. It has a complex interface and requires more technical skills. It does not have a built-in app center. |
    
          -

              The second step is to download and install the emulator on your PC. You can find the download links and installation guides on the official website of each emulator.
    

          -

          The third step is to install and run the APK file for Brawl Stars on the emulator. You can either drag and drop the APK file to the emulator window or use the built-in file browser to locate and install it. Here are some examples:

          -
            -
          • To install and run Brawl Stars APK file on Gameloop, drag and drop the APK file to the Gameloop window or click on the "My Games" tab and then click on the "Install APK" button. Locate and select the APK file and wait for it to install. Once installed, click on the "Play" button to launch Brawl Stars.
    
          • To install and run Brawl Stars APK file on BlueStacks, drag and drop the APK file to the BlueStacks window or click on the "My Apps" tab and then click on the "Install APK" button. Locate and select the APK file and wait for it to install. Once installed, click on the "Brawl Stars" icon to launch Brawl Stars.
    
              • To install and run Brawl Stars APK file on Android SDK, open the Android Studio software and click on the "AVD Manager" button. Select an existing virtual device or create a new one. Click on the "Play" button to start the emulator. Once started, drag and drop the APK file to the emulator window or use the "adb install" command in the terminal (a short adb example is sketched below). Wait for it to install. Once installed, open the app drawer and click on the "Brawl Stars" icon to launch Brawl Stars.
    
    
          -
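              For the Android SDK route in the last bullet, a minimal command sequence looks roughly like the sketch below. It assumes the virtual device created in AVD Manager is already running and that adb from the platform-tools is on your PATH; the package name is the identifier Brawl Stars uses on the Play Store.

              ```
              # confirm the running emulator is visible as a device
              adb devices
              # install the APK into the emulator
              adb install BrawlStars.apk
              # launch the installed app by sending a single launch event with the monkey tool
              adb shell monkey -p com.supercell.brawlstars 1
              ```

              The drag-and-drop method described above is usually simpler for Gameloop and BlueStacks.
    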

          Conclusion

          -

          Brawl Stars is a fun and addictive game that you can play on your mobile device or your Windows 10 PC. To play Brawl Stars on Windows 10, you will need an APK file and an Android emulator. You can get the APK file from your Android phone, a website, or a third-party source. You can choose an emulator that suits your needs and preferences, such as Gameloop, BlueStacks, or Android SDK. You can then install and run Brawl Stars on your PC and enjoy a bigger screen, better graphics, and more precise controls.

          -

          FAQs

          -

          Here are some frequently asked questions about playing Brawl Stars on Windows 10:

          -

          Is playing Brawl Stars on Windows 10 legal?

          -

          Yes, playing Brawl Stars on Windows 10 is legal as long as you use a legitimate APK file from the Play Store or the app developer's website. However, Supercell does not officially support playing Brawl Stars on PC, so you may encounter some issues or errors along the way.

          -

          Is playing Brawl Stars on Windows 10 safe?

          -

          Playing Brawl Stars on Windows 10 is generally safe as long as you use a trusted and reputable emulator and an antivirus software. However, you should be careful when downloading APK files from unknown sources, as they may contain malware or viruses that can harm your device or compromise your privacy.

          -

          Can I play Brawl Stars with my friends on Windows 10?

          -

          Yes, you can play Brawl Stars with your friends on Windows 10 as long as they are also using an emulator or a mobile device. You can join or create a club, invite your friends, chat with them, and play together in different game modes.

          -

          Can I sync my progress between my mobile device and my Windows 10 PC?

          -

          Yes, you can sync your progress between your mobile device and your Windows 10 PC by using a Supercell ID account. You can create a Supercell ID account in the game settings and link it to your email address. You can then use the same Supercell ID account to log in to Brawl Stars on both devices and access your game data.

          -

          Can I use cheats or hacks for Brawl Stars on Windows 10?

          -

          No, you should not use cheats or hacks for Brawl Stars on Windows 10 or any other platform. Cheating or hacking is against the game's terms of service and can result in a ban or suspension of your account. It can also ruin the game experience for yourself and other players.

          197e85843d
          -
          -
          \ No newline at end of file diff --git a/spaces/simsantonioii/MusicGen-Continuation/audiocraft/models/loaders.py b/spaces/simsantonioii/MusicGen-Continuation/audiocraft/models/loaders.py deleted file mode 100644 index eb7ae50f34dd94e08d16951cbe75c9fb282a7868..0000000000000000000000000000000000000000 --- a/spaces/simsantonioii/MusicGen-Continuation/audiocraft/models/loaders.py +++ /dev/null @@ -1,92 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -""" -Utility functions to load from the checkpoints. -Each checkpoint is a torch.saved dict with the following keys: -- 'xp.cfg': the hydra config as dumped during training. This should be used - to rebuild the object using the audiocraft.models.builders functions, -- 'model_best_state': a readily loadable best state for the model, including - the conditioner. The model obtained from `xp.cfg` should be compatible - with this state dict. In the case of a LM, the encodec model would not be - bundled along but instead provided separately. - -Those functions also support loading from a remote location with the Torch Hub API. -They also support overriding some parameters, in particular the device and dtype -of the returned model. -""" - -from pathlib import Path -from huggingface_hub import hf_hub_download -import typing as tp -import os - -from omegaconf import OmegaConf -import torch - -from . import builders - - -HF_MODEL_CHECKPOINTS_MAP = { - "small": "facebook/musicgen-small", - "medium": "facebook/musicgen-medium", - "large": "facebook/musicgen-large", - "melody": "facebook/musicgen-melody", -} - - -def _get_state_dict( - file_or_url_or_id: tp.Union[Path, str], - filename: tp.Optional[str] = None, - device='cpu', - cache_dir: tp.Optional[str] = None, -): - # Return the state dict either from a file or url - file_or_url_or_id = str(file_or_url_or_id) - assert isinstance(file_or_url_or_id, str) - - if os.path.isfile(file_or_url_or_id): - return torch.load(file_or_url_or_id, map_location=device) - - elif file_or_url_or_id.startswith('https://'): - return torch.hub.load_state_dict_from_url(file_or_url_or_id, map_location=device, check_hash=True) - - elif file_or_url_or_id in HF_MODEL_CHECKPOINTS_MAP: - assert filename is not None, "filename needs to be defined if using HF checkpoints" - - repo_id = HF_MODEL_CHECKPOINTS_MAP[file_or_url_or_id] - file = hf_hub_download(repo_id=repo_id, filename=filename, cache_dir=cache_dir) - return torch.load(file, map_location=device) - - else: - raise ValueError(f"{file_or_url_or_id} is not a valid name, path or link that can be loaded.") - - -def load_compression_model(file_or_url_or_id: tp.Union[Path, str], device='cpu', cache_dir: tp.Optional[str] = None): - pkg = _get_state_dict(file_or_url_or_id, filename="compression_state_dict.bin", cache_dir=cache_dir) - cfg = OmegaConf.create(pkg['xp.cfg']) - cfg.device = str(device) - model = builders.get_compression_model(cfg) - model.load_state_dict(pkg['best_state']) - model.eval() - return model - - -def load_lm_model(file_or_url_or_id: tp.Union[Path, str], device='cpu', cache_dir: tp.Optional[str] = None): - pkg = _get_state_dict(file_or_url_or_id, filename="state_dict.bin", cache_dir=cache_dir) - cfg = OmegaConf.create(pkg['xp.cfg']) - cfg.device = str(device) - if cfg.device == 'cpu': - cfg.transformer_lm.memory_efficient = False - cfg.transformer_lm.custom = True - cfg.dtype = 'float32' - 
else: - cfg.dtype = 'float16' - model = builders.get_lm_model(cfg) - model.load_state_dict(pkg['best_state']) - model.eval() - model.cfg = cfg - return model diff --git a/spaces/sinian/nihao/README.md b/spaces/sinian/nihao/README.md deleted file mode 100644 index 6267d0b1d740cea1112680cdac00b49c901cf14f..0000000000000000000000000000000000000000 --- a/spaces/sinian/nihao/README.md +++ /dev/null @@ -1,10 +0,0 @@ ---- -title: Nihao -emoji: 📚 -colorFrom: indigo -colorTo: indigo -sdk: static -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/sneedium/dvatch_captcha_sneedium_old/commands.sh b/spaces/sneedium/dvatch_captcha_sneedium_old/commands.sh deleted file mode 100644 index 19a69fc3b5448f6a2f61be399f75a4b3e44f4aa8..0000000000000000000000000000000000000000 --- a/spaces/sneedium/dvatch_captcha_sneedium_old/commands.sh +++ /dev/null @@ -1,3 +0,0 @@ -python tools/create_lmdb_dataset.py -python demo.py --config=configs/train_abinet.yaml --input=base/ -python main.py --config=configs/train_abinet.yaml \ No newline at end of file diff --git a/spaces/sqc1729/bingi/README.md b/spaces/sqc1729/bingi/README.md deleted file mode 100644 index 6010177f05bf837aa164d6a0fd98c06c50c5523e..0000000000000000000000000000000000000000 --- a/spaces/sqc1729/bingi/README.md +++ /dev/null @@ -1,196 +0,0 @@ ---- -title: bingo -emoji: 📉 -colorFrom: red -colorTo: red -sdk: docker -license: mit -duplicated_from: hf4all/bingo ---- - -
          - -# Bingo - -Bingo,一个让你呼吸顺畅 New Bing。 - -高度还原 New Bing 网页版的主要操作,国内可用,兼容绝大多数微软 Bing AI 的功能,可自行部署使用。 - -![Github stars](https://badgen.net/github/stars/weaigc/bingo?icon=github&label=stars) -![Gthub issues](https://img.shields.io/github/issues/weaigc/bingo) -[![docker build](https://github.com/weaigc/bingo/actions/workflows/docker.yml/badge.svg)](https://hub.docker.com/repository/docker/weaigc/bingo/) -[![docker hub](https://badgen.net/docker/size/weaigc/bingo?icon=docker&label=image%20size)](https://hub.docker.com/repository/docker/weaigc/bingo/) -[![MIT License](https://img.shields.io/badge/license-MIT-97c50f)](https://github.com/weaigc/bingo/blob/main/license) - -
          - -## 演示站点 - -https://bing.github1s.tk - - - -[![img](./docs/images/demo.png)](https://bing.github1s.tk) - -## 功能和特点 - -- 完全基于 Next.js 重写,高度还原 New Bing Web 版 UI,使用体验和 Bing AI 基本一致。 -- 支持 Docker 构建,方便快捷地部署和访问。 -- Cookie 可全局配置,全局共享。 -- 支持持续语音对话 - -## RoadMap - - - [x] 支持 wss 转发 - - [x] 支持一键部署 - - [x] 优化移动端展示 - - [x] 支持画图 - - [x] 支持语音输入(支持语音指令,目前仅支持 PC 版 Edge 及 Chrome 浏览器) - - [x] 支持语音输出(需要手动开启) - - [x] 支持图片输入 - - [x] 支持自定义域名 - - [ ] 支持历史记录 - - [ ] 适配深色模式 - - [ ] 支持内置提示词 - - [ ] 支持离线访问 - - [ ] 国际化翻译 - -## 一键部署 -你也可以一键部署自己的 New Bing AI 到 🤗 HuggingFace 。 - -### 部署到 Huggingface -1. 点击此图标 -[![Deploy to HuggingFace](https://img.shields.io/badge/%E7%82%B9%E5%87%BB%E9%83%A8%E7%BD%B2-%F0%9F%A4%97-fff)](https://huggingface.co/login?next=%2Fspaces%2Fhf4all%2Fbingo%3Fduplicate%3Dtrue%26visibility%3Dpublic),配置可以不改。 - -2. 部署署完成后,点击“设置” 》“站点域名”,点一下,复制一下 HF 域名信息,然后分享给别人即可。 - -> Huggingface 不支持绑定自己的域名,不过我们可以使用曲线救国的方式来达到这个目的 -> 1. 方式二,借助 Cloudflare Workers [部署Cloudflare Workers](#使用Cloudflare-Workers自定义域名) -> 2. 方式一,借助 Github Pages 及 iframe [如何绑定域名](https://github.com/weaigc/bingo/issues/4) - -### 使用Cloudflare Workers自定义域名 - -> 核心代码 [worker.js](./cloudflare/worker.js) - -- [注册 Cloudflare 账号](https://dash.cloudflare.com/sign-up) - -- 添加一个新的网站,需要你有自己的域名并且将域名`Name Server`托管给 Cloudflare 才行(更多信息可自行 Google) - -- 通过左侧菜单进入「Workers」,并点击「Create a Worker」。 - -- 创建 Worker 服务,复制 [worker.js](./cloudflare/worker.js) 全部代码,粘贴至创建的服务中,根据注释进行改动,保存并部署。 - -- 触发器 中自定义访问域名。 - -### 部署其它平台 -
          - -由于其他平台目前遭到 New Bing 封杀,会遇到很多问题,不再做推荐,有需要的可以自行查看 - - -#### 部署到 Netlify -[![Deploy to Netlify Button](https://www.netlify.com/img/deploy/button.svg)](https://app.netlify.com/start/deploy?repository=https://github.com/weaigc/bingo) - -#### 部署到 Vercel -如果你是 Vercel 付费用户,可以点以下链接一键部署到 Vercel。免费版本有[接口超时限制](https://vercel.com/docs/concepts/limits/overview),不推荐使用 - -[![Deploy with Vercel](https://vercel.com/button)](https://vercel.com/new/clone?demo-title=bingo&demo-description=bingo&demo-url=https%3A%2F%2Fbing.github1s.tk%2F&project-name=bingo&repository-name=bingo&repository-url=https%3A%2F%2Fgithub.com%2Fweaigc%2Fbingo&from=templates&skippable-integrations=1&env=BING_HEADER&envDescription=%E5%A6%82%E6%9E%9C%E4%B8%8D%E7%9F%A5%E9%81%93%E6%80%8E%E4%B9%88%E9%85%8D%E7%BD%AE%E8%AF%B7%E7%82%B9%E5%8F%B3%E4%BE%A7Learn+More&envLink=https%3A%2F%2Fgithub.com%2Fweaigc%2Fbingo%2Fblob%2Fmain%2F.env.example) - -#### 部署到 Render - -[![Deploy to Render](https://render.com/images/deploy-to-render-button.svg)](https://render.com/deploy?repo=https://github.com/weaigc/bingo) -
          - -## 环境和依赖 - -- Node.js >= 18 -- Bing AI 的[身份信息](#如何获取-BING_HEADER)) - -## 安装和使用 - -> 由于目前微软封杀比较严重,推荐优先使用 [部署 Huggingface](#部署到-huggingface) 。 - -* 使用 Node 启动 - -```bash -git clone https://github.com/weaigc/bingo.git -npm i # 推荐使用 pnpm i -npm run build -npm run start -``` - -* 使用 Docker 启动 -```bash -docker pull weaigc/bingo -docker run --rm -it -p 7860:7860 weaigc/bingo -# 或者 -docker run --rm -it -e BING_HEADER=xxxx -p 7860:7860 weaigc/bingo -``` - -## 如何获取 BING_HEADER -> 配置了 BING_HEADER 意味着你将自己的账号共享给所有使用此服务的人,如果不需要免登录画图的功能,不建议设置此变量 - -打开 https://www.bing.com 并登录,然后访问 https://www.bing.com/turing/captcha/challenge,通过人机校验,然后 - -![BING HEADER](./docs/images/curl.png) - -> 复制出来的内容应该如下所示。确认格式无误后,打开 https://effulgent-bubblegum-e2f5df.netlify.app/#dialog=%22settings%22 ,粘贴进去,点击“转成 BING_HEADER 并复制”,然后从剪切板粘贴即可得到。(你也可以先在网页上进行验证) - -以下是格式参考,需要注意的是,网页端保存的格式是以`curl`开头, 而服务端配置的 `BING_HEADER` 是 `base64` 格式,两者不能互通。 -
          -正常格式/网页端保存的格式(格式仅供参考) - -``` -curl 'https://www.bing.com/turing/captcha/challenge' \ - -H 'authority: www.bing.com' \ - -H 'accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.7' \ - -H 'accept-language: zh-CN,zh;q=0.9,en;q=0.8,en-GB;q=0.7,en-US;q=0.6' \ - -H 'cache-control: max-age=0' \ - -H 'cookie: MicrosoftApplicationsTelemetryDeviceId=3399c004-fd0e-48ec-bb92-d82a27b2bbd4; _EDGE_V=1; SRCHD=AF=NOFORM; SRCHUID=V=2&GUID=29EBDDA4E6674329ACCF1A0A423C3E98&dmnchg=1; _UR=QS=0&TQS=0; _HPVN=CS=eyJQbiI6eyJDbiI6MSwiU3QiOjAsIlFzIjowLCJQcm9kIjoiUCJ9LCJTYyI6eyJDbiI6MSwiU3QiOjAsIlFzIjowLCJQcm9kIjoiSCJ9LCJReiI6eyJDbiI6MSwiU3QiOjAsIlFzIjowLCJQcm9kIjoiVCJ9LCJBcCI6dHJ1ZSwiTXV0ZSI6dHJ1ZSwiTGFkIjoiMjAyMy0wNy0yNVQwMDowMDowMFoiLCJJb3RkIjowLCJHd2IiOjAsIkRmdCI6bnVsbCwiTXZzIjowLCJGbHQiOjAsIkltcCI6Mn0=; _RwBf=ilt=1&ihpd=1&ispd=0&rc=0&rb=0&gb=0&rg=200&pc=0&mtu=0&rbb=0&g=0&cid=&clo=0&v=1&l=2023-07-25T07:00:00.0000000Z&lft=0001-01-01T00:00:00.0000000&aof=0&o=2&p=&c=&t=0&s=0001-01-01T00:00:00.0000000+00:00&ts=2023-07-25T11:00:31.7111548+00:00&rwred=0&wls=&lka=0&lkt=0&TH=&dci=0; ANON=A=0043C6590EA808ED6E395059FFFFFFFF&E=1c8b&W=1; NAP=V=1.9&E=1c31&C=DnaMSbDN_4efZ_xXqBF3Daorjr53kYqYoaP8YHsupjmiXnysX7a37A&W=1; PPLState=1; KievRPSSecAuth=FABSBBRaTOJILtFsMkpLVWSG6AN6C/svRwNmAAAEgAAACMGUA7EGVSjGEAQBGHtNsc5sNL7unmJsfPJ2t6imfo4BeUJlAia3IpMTtMUy4PU/C5QAzRI5pODtsIee0+blgllXt/5IiWwGjwmdhivsFM597pRPkjARPfwsPhNLPNbJrCPNPHdje4Is78MnCADXw6/NBq2FL8V2/byw2fH6IuAMD2MvN/VvqpEa9ZxiDjZtENj4HEj0mO2SgzjfyEhVAkjvznJqU2rw/Q2tHmX94NAM2kzlzKF/hWPhCCUmu8IHLvCnHDS6mSptvJDDP/sp3ovtzOXkP1mlM/Xju5ftesUvccVEQGffXORa1dE5hEMbKIiKXz1tDdduSXE19g9/+mRMAjaQhpwhI8XmilCTx1adb1Ll5qK+VjC9GNfEZzcbsGBPVaOl+anG8rEMq+Xnhjo7J+NqTNolavHgcuV8kJsCeJZIged33UA8eOZeFo+wAECMguxMoSqgpGH+sthqynvD/FJD6r/tiU2N3uqVq8NE8V37asrN6T14Z0FGBJOe6ET1+PGApm3s11OY9/xhFEB9T5BEPUGEbvRcLcW2ncFQX0EU+xweiPqo1Q1hNUg/dCtSI+lZ7c2H8XheePZavZ0TJQ8oNCSAuKiTqJmI0fVGpwbXwfaADkEipuawz3fIuMJBNgMU0OtA7Hm59v2fGLIBuvi6YeKS6GgVk3BIPf+P/eKahwozrxQZaFnoHTSqMkvct7xCP4atBROfXKf5Ww0CcFKp+2WX9BIskTOo2jjk6bAyyYJ+ElUB1fgLKNk5m/YSMc9iYCLIBMIGN8F0Yvy3tZ7cvh7Ue5Klo98US/I+nW1G7ZJMHRgUO8h8lpneHqEMegKd8gynO4VF7RpCjJkunDmW0Ta+RkXAP619pg0dqHMFkoOgknN78oBbGTV6fJUKotv+vi61kLhAeXZGWoHGCRXh2wUC6YgfPgKA6ESRNHtFn7E5B3HHpLc5rVMDSNhKZYfdhupV4Ezf6+5DhMcZLZhi0kk+ivDiN1gdHlVtSN55xpvf+c+XZDzR0uhgcvgy0LAbmzgk6y4WbYH+LQsMpzNNj+aC72vMiWovWrKh9jY4MYCmdgxsS/skPtLdp18muiEIRXTbZQGUmhxFpJAIbBIsCscMpzL0BgeujxUwM5wr79Sd9r4xwbgSMwmBlBfUHRVBdNyg8feepeJbCS63nD6eHOuLqMRsPIio3w/ki/EAa92UUEiZeavLsMUD/y/qAvWUdzdP5Y+C/TM+CMGS/kGL4LEdY/28MQeTvU1qv1X21kQt2aiaj3pPVL36hAzxbcLgqcMo9oymDRy87kdCXW/+g4oKLtMh6fm/G6W6Y/B01JlxohyyvueHQIG557uzkEkTJ3FnOVODSKBKpb3WZ65rExfV71zSZa25F3GmpaIG6HiYrX2YYhQAkIE9pKEQBHbnwHuwNDGottZTXZw=; WLS=C=9df3f9d8518fae19&N=wen; WLID=pGY8HgWCu4p5XYCOk2oa0+DBdftkMUfmNIn8XtSjSTKsgv/Il7GUlYs0Jpjf/E12jZMgV7x44Dy3fXOgjjUoJx7Y/ClLrLhsk20THksJJoI=; _EDGE_S=F=1&SID=17CF6EE006426448213C7DB907436588&mkt=zh-CN; MUID=225621093D8A6C27301632413C0E6D08; MUIDB=225621093D8A6C27301632413C0E6D08; SUID=A; SNRHOP=I=&TS=; _U=nGyzKQruEsDwLiu65fZFIG6e12hf2lwTJmroW__k8joUJIKmG3OIjayXKGW9dCVR3sNhF76mEVxyW6yjUGPodOfjtSa3s3J_DxMOrEK1BqXCOBI9bC66spAIASV7prsYFlVAJz73jVNENp_tBubLHJy6EbT0BKRe4AjrYkH-9uMnmCKB8Zmyg; _SS=SID=17CF6EE006426448213C7DB907436588&R=0&RB=0&GB=0&RG=200&RP=0&PC=U531; SRCHS=PC=U531; 
USRLOC=HS=1&ELOC=LAT=22.501529693603516|LON=113.9263687133789|N=%E5%8D%97%E5%B1%B1%E5%8C%BA%EF%BC%8C%E5%B9%BF%E4%B8%9C%E7%9C%81|ELT=2|&CLOC=LAT=22.50153029046461|LON=113.92637070632928|A=733.4464586120832|TS=230726151034|SRC=W; SRCHUSR=DOB=20230725&T=1690384908000&POEX=W; ipv6=hit=1690388509974&t=6; SRCHHPGUSR=HV=1690384945&SRCHLANG=zh-Hans&PV=15.0.0&BRW=MW&BRH=MT&CW=410&CH=794&SCW=410&SCH=794&DPR=1.5&UTC=480&DM=0&WTS=63825879627&PRVCW=410&PRVCH=794&PR=1.5; cct=AjWIBYOoVP-Afq6gWwtx80If6yHn6iBuEVHA1XHdAKpny6Y_CVyi_MSyM94VyMWnjdYkkccVtm3czoIAtXUGQA; GC=AjWIBYOoVP-Afq6gWwtx80If6yHn6iBuEVHA1XHdAKpR3Y_D9Ytcks4Ht6XhadXk75dvhzP4YOUS0UmoEyqyxw' \ - -H 'dnt: 1' \ - -H 'sec-ch-ua: "Chromium";v="116", "Not)A;Brand";v="24", "Microsoft Edge";v="116"' \ - -H 'sec-ch-ua-arch: "x86"' \ - -H 'sec-ch-ua-bitness: "64"' \ - -H 'sec-ch-ua-full-version: "116.0.1938.29"' \ - -H 'sec-ch-ua-full-version-list: "Chromium";v="116.0.5845.42", "Not)A;Brand";v="24.0.0.0", "Microsoft Edge";v="116.0.1938.29"' \ - -H 'sec-ch-ua-mobile: ?0' \ - -H 'sec-ch-ua-model: ""' \ - -H 'sec-ch-ua-platform: "Windows"' \ - -H 'sec-ch-ua-platform-version: "15.0.0"' \ - -H 'sec-fetch-dest: document' \ - -H 'sec-fetch-mode: navigate' \ - -H 'sec-fetch-site: none' \ - -H 'sec-fetch-user: ?1' \ - -H 'sec-ms-gec: B3F47AD4A283CAB374C0451C46AAFD147C6A4DACAFF6A1C13F34B2C72B024494' \ - -H 'sec-ms-gec-version: 1-116.0.1938.29' \ - -H 'upgrade-insecure-requests: 1' \ - -H 'user-agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/116.0.0.0 Safari/537.36 Edg/116.0.0.0' \ - -H 'x-client-data: eyIxIjoiMiIsIjEwIjoiXCJTMGg3R05HOTF2aDQ1TUZSUnZ5NHN2akRmMWdlaVJKenNxNlA3aU1WbnF3PVwiIiwiMiI6IjEiLCIzIjoiMSIsIjQiOiIyMTU4ODQ5NTM4MjY4OTM5NTA3IiwiNSI6IlwiSm9GUWpPTDk3OS9MbkRRZnlCd2N1M2FsOUN3eTZTQmdaMGNYMXBtOWVMZz1cIiIsIjYiOiJiZXRhIiwiNyI6IjE4MDM4ODYyNjQzNSIsIjkiOiJkZXNrdG9wIn0=' \ - -H 'x-edge-shopping-flag: 1' \ - --compressed -``` -
          - -
          -转成base64之后的格式(BING_HEADER只能使用 base64 之后的格式) - -``` -Y3VybCAnaHR0cHM6Ly93d3cuYmluZy5jb20vdHVyaW5nL2NvbnZlcnNhdGlvbi9jcmVhdGUnIFwgICAtSCAnYXV0aG9yaXR5OiB3d3cuYmluZy5jb20nIFwgICAtSCAnYWNjZXB0OiB0ZXh0L2h0bWwsYXBwbGljYXRpb24veGh0bWwreG1sLGFwcGxpY2F0aW9uL3htbDtxPTAuOSxpbWFnZS93ZWJwLGltYWdlL2FwbmcsKi8qO3E9MC44LGFwcGxpY2F0aW9uL3NpZ25lZC1leGNoYW5nZTt2PWIzO3E9MC43JyBcICAgLUggJ2FjY2VwdC1sYW5ndWFnZTogemgtQ04semg7cT0wLjksZW47cT0wLjgsZW4tR0I7cT0wLjcsZW4tVVM7cT0wLjYnIFwgICAtSCAnY2FjaGUtY29udHJvbDogbWF4LWFnZT0wJyBcICAgLUggJ2Nvb2tpZTogTWljcm9zb2Z0QXBwbGljYXRpb25zVGVsZW1ldHJ5RGV2aWNlSWQ9MzM5OWMwMDQtZmQwZS00OGVjLWJiOTItZDgyYTI3YjJiYmQ0OyBfRURHRV9WPTE7IFNSQ0hEPUFGPU5PRk9STTsgU1JDSFVJRD1WPTImR1VJRD0yOUVCRERBNEU2Njc0MzI5QUNDRjFBMEE0MjNDM0U5OCZkbW5jaGc9MTsgX1VSPVFTPTAmVFFTPTA7IF9IUFZOPUNTPWV5SlFiaUk2ZXlKRGJpSTZNU3dpVTNRaU9qQXNJbEZ6SWpvd0xDSlFjbTlrSWpvaVVDSjlMQ0pUWXlJNmV5SkRiaUk2TVN3aVUzUWlPakFzSWxGeklqb3dMQ0pRY205a0lqb2lTQ0o5TENKUmVpSTZleUpEYmlJNk1Td2lVM1FpT2pBc0lsRnpJam93TENKUWNtOWtJam9pVkNKOUxDSkJjQ0k2ZEhKMVpTd2lUWFYwWlNJNmRISjFaU3dpVEdGa0lqb2lNakF5TXkwd055MHlOVlF3TURvd01Eb3dNRm9pTENKSmIzUmtJam93TENKSGQySWlPakFzSWtSbWRDSTZiblZzYkN3aVRYWnpJam93TENKR2JIUWlPakFzSWtsdGNDSTZNbjA9OyBfUndCZj1pbHQ9MSZpaHBkPTEmaXNwZD0wJnJjPTAmcmI9MCZnYj0wJnJnPTIwMCZwYz0wJm10dT0wJnJiYj0wJmc9MCZjaWQ9JmNsbz0wJnY9MSZsPTIwMjMtMDctMjVUMDc6MDA6MDAuMDAwMDAwMFombGZ0PTAwMDEtMDEtMDFUMDA6MDA6MDAuMDAwMDAwMCZhb2Y9MCZvPTImcD0mYz0mdD0wJnM9MDAwMS0wMS0wMVQwMDowMDowMC4wMDAwMDAwKzAwOjAwJnRzPTIwMjMtMDctMjVUMTE6MDA6MzEuNzExMTU0OCswMDowMCZyd3JlZD0wJndscz0mbGthPTAmbGt0PTAmVEg9JmRjaT0wOyBBTk9OPUE9MDA0M0M2NTkwRUE4MDhFRDZFMzk1MDU5RkZGRkZGRkYmRT0xYzhiJlc9MTsgTkFQPVY9MS45JkU9MWMzMSZDPURuYU1TYkROXzRlZlpfeFhxQkYzRGFvcmpyNTNrWXFZb2FQOFlIc3Vwam1pWG55c1g3YTM3QSZXPTE7IFBQTFN0YXRlPTE7IEtpZXZSUFNTZWNBdXRoPUZBQlNCQlJhVE9KSUx0RnNNa3BMVldTRzZBTjZDL3N2UndObUFBQUVnQUFBQ01HVUE3RUdWU2pHRUFRQkdIdE5zYzVzTkw3dW5tSnNmUEoydDZpbWZvNEJlVUpsQWlhM0lwTVR0TVV5NFBVL0M1UUF6Ukk1cE9EdHNJZWUwK2JsZ2xsWHQvNUlpV3dHandtZGhpdnNGTTU5N3BSUGtqQVJQZndzUGhOTFBOYkpyQ1BOUEhkamU0SXM3OE1uQ0FEWHc2L05CcTJGTDhWMi9ieXcyZkg2SXVBTUQyTXZOL1Z2cXBFYTlaeGlEalp0RU5qNEhFajBtTzJTZ3pqZnlFaFZBa2p2em5KcVUycncvUTJ0SG1YOTROQU0ya3psektGL2hXUGhDQ1VtdThJSEx2Q25IRFM2bVNwdHZKRERQL3NwM292dHpPWGtQMW1sTS9YanU1ZnRlc1V2Y2NWRVFHZmZYT1JhMWRFNWhFTWJLSWlLWHoxdERkZHVTWEUxOWc5LyttUk1BamFRaHB3aEk4WG1pbENUeDFhZGIxTGw1cUsrVmpDOUdOZkVaemNic0dCUFZhT2wrYW5HOHJFTXErWG5oam83SitOcVROb2xhdkhnY3VWOGtKc0NlSlpJZ2VkMzNVQThlT1plRm8rd0FFQ01ndXhNb1NxZ3BHSCtzdGhxeW52RC9GSkQ2ci90aVUyTjN1cVZxOE5FOFYzN2Fzck42VDE0WjBGR0JKT2U2RVQxK1BHQXBtM3MxMU9ZOS94aEZFQjlUNUJFUFVHRWJ2UmNMY1cybmNGUVgwRVUreHdlaVBxbzFRMWhOVWcvZEN0U0krbFo3YzJIOFhoZWVQWmF2WjBUSlE4b05DU0F1S2lUcUptSTBmVkdwd2JYd2ZhQURrRWlwdWF3ejNmSXVNSkJOZ01VME90QTdIbTU5djJmR0xJQnV2aTZZZUtTNkdnVmszQklQZitQL2VLYWh3b3pyeFFaYUZub0hUU3FNa3ZjdDd4Q1A0YXRCUk9mWEtmNVd3MENjRktwKzJXWDlCSXNrVE9vMmpqazZiQXl5WUorRWxVQjFmZ0xLTms1bS9ZU01jOWlZQ0xJQk1JR044RjBZdnkzdFo3Y3ZoN1VlNUtsbzk4VVMvSStuVzFHN1pKTUhSZ1VPOGg4bHBuZUhxRU1lZ0tkOGd5bk80VkY3UnBDakprdW5EbVcwVGErUmtYQVA2MTlwZzBkcUhNRmtvT2drbk43OG9CYkdUVjZmSlVLb3R2K3ZpNjFrTGhBZVhaR1dvSEdDUlhoMndVQzZZZ2ZQZ0tBNkVTUk5IdEZuN0U1QjNISHBMYzVyVk1EU05oS1pZZmRodXBWNEV6ZjYrNURoTWNaTFpoaTBraytpdkRpTjFnZEhsVnRTTjU1eHB2ZitjK1haRHpSMHVoZ2N2Z3kwTEFibXpnazZ5NFdiWUgrTFFzTXB6Tk5qK2FDNzJ2TWlXb3ZXcktoOWpZNE1ZQ21kZ3hzUy9za1B0TGRwMThtdWlFSVJYVGJaUUdVbWh4RnBKQUliQklzQ3NjTXB6TDBCZ2V1anhVd001d3I3OVNkOXI0eHdiZ1NNd21CbEJmVUhSVkJkTnlnOGZlZXBlSmJDUzYzbkQ2ZUhPdUxxTVJzUElpbzN3L2tpL0VBYTkyVVVFaVplYXZMc01VRC95L3FBdldVZHpkUDVZK0MvVE0rQ01HUy9rR0w0TEVkWS8yOE1RZVR2VTFxdjFYMjFrUXQyYWlhajNwUFZMMzZoQXp4YmNMZ3FjTW85b3ltRFJ5
ODdrZENYVy8rZzRvS0x0TWg2Zm0vRzZXNlkvQjAxSmx4b2h5eXZ1ZUhRSUc1NTd1emtFa1RKM0ZuT1ZPRFNLQktwYjNXWjY1ckV4ZlY3MXpTWmEyNUYzR21wYUlHNkhpWXJYMllZaFFBa0lFOXBLRVFCSGJud0h1d05ER290dFpUWFp3PTsgV0xTPUM9OWRmM2Y5ZDg1MThmYWUxOSZOPXdlbjsgV0xJRD1wR1k4SGdXQ3U0cDVYWUNPazJvYTArREJkZnRrTVVmbU5JbjhYdFNqU1RLc2d2L0lsN0dVbFlzMEpwamYvRTEyalpNZ1Y3eDQ0RHkzZlhPZ2pqVW9KeDdZL0NsTHJMaHNrMjBUSGtzSkpvST07IF9FREdFX1M9Rj0xJlNJRD0xN0NGNkVFMDA2NDI2NDQ4MjEzQzdEQjkwNzQzNjU4OCZta3Q9emgtQ047IE1VSUQ9MjI1NjIxMDkzRDhBNkMyNzMwMTYzMjQxM0MwRTZEMDg7IE1VSURCPTIyNTYyMTA5M0Q4QTZDMjczMDE2MzI0MTNDMEU2RDA4OyBTVUlEPUE7IFNOUkhPUD1JPSZUUz07IF9VPW5HeXpLUXJ1RXNEd0xpdTY1ZlpGSUc2ZTEyaGYybHdUSm1yb1dfX2s4am9VSklLbUczT0lqYXlYS0dXOWRDVlIzc05oRjc2bUVWeHlXNnlqVUdQb2RPZmp0U2EzczNKX0R4TU9yRUsxQnFYQ09CSTliQzY2c3BBSUFTVjdwcnNZRmxWQUp6NzNqVk5FTnBfdEJ1YkxISnk2RWJUMEJLUmU0QWpyWWtILTl1TW5tQ0tCOFpteWc7IF9TUz1TSUQ9MTdDRjZFRTAwNjQyNjQ0ODIxM0M3REI5MDc0MzY1ODgmUj0wJlJCPTAmR0I9MCZSRz0yMDAmUlA9MCZQQz1VNTMxOyBTUkNIUz1QQz1VNTMxOyBVU1JMT0M9SFM9MSZFTE9DPUxBVD0yMi41MDE1Mjk2OTM2MDM1MTZ8TE9OPTExMy45MjYzNjg3MTMzNzg5fE49JUU1JThEJTk3JUU1JUIxJUIxJUU1JThDJUJBJUVGJUJDJThDJUU1JUI5JUJGJUU0JUI4JTlDJUU3JTlDJTgxfEVMVD0yfCZDTE9DPUxBVD0yMi41MDE1MzAyOTA0NjQ2MXxMT049MTEzLjkyNjM3MDcwNjMyOTI4fEE9NzMzLjQ0NjQ1ODYxMjA4MzJ8VFM9MjMwNzI2MTUxMDM0fFNSQz1XOyBTUkNIVVNSPURPQj0yMDIzMDcyNSZUPTE2OTAzODQ5MDgwMDAmUE9FWD1XOyBpcHY2PWhpdD0xNjkwMzg4NTA5OTc0JnQ9NjsgU1JDSEhQR1VTUj1IVj0xNjkwMzg0OTQ1JlNSQ0hMQU5HPXpoLUhhbnMmUFY9MTUuMC4wJkJSVz1NVyZCUkg9TVQmQ1c9NDEwJkNIPTc5NCZTQ1c9NDEwJlNDSD03OTQmRFBSPTEuNSZVVEM9NDgwJkRNPTAmV1RTPTYzODI1ODc5NjI3JlBSVkNXPTQxMCZQUlZDSD03OTQmUFI9MS41OyBjY3Q9QWpXSUJZT29WUC1BZnE2Z1d3dHg4MElmNnlIbjZpQnVFVkhBMVhIZEFLcG55NllfQ1Z5aV9NU3lNOTRWeU1XbmpkWWtrY2NWdG0zY3pvSUF0WFVHUUE7IEdDPUFqV0lCWU9vVlAtQWZxNmdXd3R4ODBJZjZ5SG42aUJ1RVZIQTFYSGRBS3BSM1lfRDlZdGNrczRIdDZYaGFkWGs3NWR2aHpQNFlPVVMwVW1vRXlxeXh3JyBcICAgLUggJ2RudDogMScgXCAgIC1IICdzZWMtY2gtdWE6ICJDaHJvbWl1bSI7dj0iMTE2IiwgIk5vdClBO0JyYW5kIjt2PSIyNCIsICJNaWNyb3NvZnQgRWRnZSI7dj0iMTE2IicgXCAgIC1IICdzZWMtY2gtdWEtYXJjaDogIng4NiInIFwgICAtSCAnc2VjLWNoLXVhLWJpdG5lc3M6ICI2NCInIFwgICAtSCAnc2VjLWNoLXVhLWZ1bGwtdmVyc2lvbjogIjExNi4wLjE5MzguMjkiJyBcICAgLUggJ3NlYy1jaC11YS1mdWxsLXZlcnNpb24tbGlzdDogIkNocm9taXVtIjt2PSIxMTYuMC41ODQ1LjQyIiwgIk5vdClBO0JyYW5kIjt2PSIyNC4wLjAuMCIsICJNaWNyb3NvZnQgRWRnZSI7dj0iMTE2LjAuMTkzOC4yOSInIFwgICAtSCAnc2VjLWNoLXVhLW1vYmlsZTogPzAnIFwgICAtSCAnc2VjLWNoLXVhLW1vZGVsOiAiIicgXCAgIC1IICdzZWMtY2gtdWEtcGxhdGZvcm06ICJXaW5kb3dzIicgXCAgIC1IICdzZWMtY2gtdWEtcGxhdGZvcm0tdmVyc2lvbjogIjE1LjAuMCInIFwgICAtSCAnc2VjLWZldGNoLWRlc3Q6IGRvY3VtZW50JyBcICAgLUggJ3NlYy1mZXRjaC1tb2RlOiBuYXZpZ2F0ZScgXCAgIC1IICdzZWMtZmV0Y2gtc2l0ZTogbm9uZScgXCAgIC1IICdzZWMtZmV0Y2gtdXNlcjogPzEnIFwgICAtSCAnc2VjLW1zLWdlYzogQjNGNDdBRDRBMjgzQ0FCMzc0QzA0NTFDNDZBQUZEMTQ3QzZBNERBQ0FGRjZBMUMxM0YzNEIyQzcyQjAyNDQ5NCcgXCAgIC1IICdzZWMtbXMtZ2VjLXZlcnNpb246IDEtMTE2LjAuMTkzOC4yOScgXCAgIC1IICd1cGdyYWRlLWluc2VjdXJlLXJlcXVlc3RzOiAxJyBcICAgLUggJ3VzZXItYWdlbnQ6IE1vemlsbGEvNS4wIChXaW5kb3dzIE5UIDEwLjA7IFdpbjY0OyB4NjQpIEFwcGxlV2ViS2l0LzUzNy4zNiAoS0hUTUwsIGxpa2UgR2Vja28pIENocm9tZS8xMTYuMC4wLjAgU2FmYXJpLzUzNy4zNiBFZGcvMTE2LjAuMC4wJyBcICAgLUggJ3gtY2xpZW50LWRhdGE6IGV5SXhJam9pTWlJc0lqRXdJam9pWENKVE1HZzNSMDVIT1RGMmFEUTFUVVpTVW5aNU5ITjJha1JtTVdkbGFWSktlbk54TmxBM2FVMVdibkYzUFZ3aUlpd2lNaUk2SWpFaUxDSXpJam9pTVNJc0lqUWlPaUl5TVRVNE9EUTVOVE00TWpZNE9UTTVOVEEzSWl3aU5TSTZJbHdpU205R1VXcFBURGszT1M5TWJrUlJabmxDZDJOMU0yRnNPVU4zZVRaVFFtZGFNR05ZTVhCdE9XVk1aejFjSWlJc0lqWWlPaUppWlhSaElpd2lOeUk2SWpFNE1ETTRPRFl5TmpRek5TSXNJamtpT2lKa1pYTnJkRzl3SW4wPScgXCAgIC1IICd4LWVkZ2Utc2hvcHBpbmctZmxhZzogMScgXCAgIC0tY29tcHJlc3NlZA== -``` -
          - - -## 鸣谢 - - 感谢 [EdgeGPT](https://github.com/acheong08/EdgeGPT) 提供的代理 API 的方法。 - - 感谢 [Vercel AI](https://github.com/vercel-labs/ai-chatbot) 提供的基础脚手架和 [ChatHub](https://github.com/chathub-dev/chathub) [go-proxy-bingai](https://github.com/adams549659584/go-proxy-bingai) 提供的部分代码。 - - -## 答疑及交流 - - - -## License - -MIT © [LICENSE](https://github.com/weaigc/bingo/blob/main/LICENSE). - - diff --git a/spaces/sqc1729/bingi/src/components/welcome-screen.tsx b/spaces/sqc1729/bingi/src/components/welcome-screen.tsx deleted file mode 100644 index f7449fcbb6c621875e235db98f2790bf7894fb0a..0000000000000000000000000000000000000000 --- a/spaces/sqc1729/bingi/src/components/welcome-screen.tsx +++ /dev/null @@ -1,34 +0,0 @@ -import { useBing } from '@/lib/hooks/use-bing' - -const exampleMessages = [ - { - heading: '🧐 提出复杂问题', - message: `我可以为我挑剔的只吃橙色食物的孩子做什么饭?` - }, - { - heading: '🙌 获取更好的答案', - message: '销量最高的 3 种宠物吸尘器有哪些优点和缺点?' - }, - { - heading: '🎨 获得创意灵感', - message: `以海盗的口吻写一首关于外太空鳄鱼的俳句` - } -] - -export function WelcomeScreen({ setInput }: Pick, 'setInput'>) { - return ( -
          - {exampleMessages.map(example => ( - - ))} -
          - ) -} diff --git a/spaces/ssb4567/ssbflowise/Dockerfile b/spaces/ssb4567/ssbflowise/Dockerfile deleted file mode 100644 index 9c0ad22929159b8c4d192856163699570fd27307..0000000000000000000000000000000000000000 --- a/spaces/ssb4567/ssbflowise/Dockerfile +++ /dev/null @@ -1,26 +0,0 @@ -FROM node:18-alpine -USER root - -# Arguments that can be passed at build time -ARG FLOWISE_PATH=/usr/local/lib/node_modules/flowise -ARG BASE_PATH=/root/.flowise -ARG DATABASE_PATH=$BASE_PATH -ARG APIKEY_PATH=$BASE_PATH -ARG SECRETKEY_PATH=$BASE_PATH -ARG LOG_PATH=$BASE_PATH/logs - -# Install dependencies -RUN apk add --no-cache git python3 py3-pip make g++ build-base cairo-dev pango-dev chromium - -ENV PUPPETEER_SKIP_DOWNLOAD=true -ENV PUPPETEER_EXECUTABLE_PATH=/usr/bin/chromium-browser - -# Install Flowise globally -RUN npm install -g flowise - -# Configure Flowise directories using the ARG -RUN mkdir -p $LOG_PATH $FLOWISE_PATH/uploads && chmod -R 777 $LOG_PATH $FLOWISE_PATH - -WORKDIR /data - -CMD ["npx", "flowise", "start"] \ No newline at end of file diff --git a/spaces/stamps-labs/stamp2vec/pipelines/__init__.py b/spaces/stamps-labs/stamp2vec/pipelines/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/starbotica/llamaoalpaca/README.md b/spaces/starbotica/llamaoalpaca/README.md deleted file mode 100644 index cb01ae3401068f0c2c5baa86b5e6fb698f2f759a..0000000000000000000000000000000000000000 --- a/spaces/starbotica/llamaoalpaca/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Llamaoalpaca -emoji: 👀 -colorFrom: red -colorTo: yellow -sdk: gradio -sdk_version: 3.4 -app_file: app.py -pinned: false -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/sub314xxl/MetaGPT/metagpt/static/assets/index-054e9309.js b/spaces/sub314xxl/MetaGPT/metagpt/static/assets/index-054e9309.js deleted file mode 100644 index 411b9cd25fc6f7ba939e2fe66201b76b1b75afb5..0000000000000000000000000000000000000000 --- a/spaces/sub314xxl/MetaGPT/metagpt/static/assets/index-054e9309.js +++ /dev/null @@ -1 +0,0 @@ -import{c as T,Y as re,Z as ce,r as ae,_ as ie,d as le,m as ue,f as de,p as fe,q as me,$ as I,v as ge,a0 as R}from"./vue-e0bc46a9.js";import{c as pe,l as he,a as ve,b as H,M as J,t as ye,d as be,C as Se,S as N,o as P,e as U,f as we}from"./vendor-4cd7d240.js";import"./__commonjsHelpers__-042e6b4d.js";(function(){const t=document.createElement("link").relList;if(t&&t.supports&&t.supports("modulepreload"))return;for(const s of document.querySelectorAll('link[rel="modulepreload"]'))n(s);new MutationObserver(s=>{for(const o of s)if(o.type==="childList")for(const a of o.addedNodes)a.tagName==="LINK"&&a.rel==="modulepreload"&&n(a)}).observe(document,{childList:!0,subtree:!0});function r(s){const o={};return s.integrity&&(o.integrity=s.integrity),s.referrerPolicy&&(o.referrerPolicy=s.referrerPolicy),s.crossOrigin==="use-credentials"?o.credentials="include":s.crossOrigin==="anonymous"?o.credentials="omit":o.credentials="same-origin",o}function n(s){if(s.ep)return;s.ep=!0;const o=r(s);fetch(s.href,o)}})();const _e={},Ce=new Proxy(_e,{get(e,t){return e[t]||t}}),ke={},Le=new Proxy(ke,{get(e,t){return!t.toString().startsWith("__v_")&&e[t],e[t]}}),Oe={"zh-CN":he,"en-US":ve},E={"zh-CN":"zh-CN","en-US":"en-US"};let x="lang";const Re="zh-CN";try{x=localStorage==null?void 0:localStorage.getItem(x)}catch{console.warn("[Client]:localStorage is not 
available.")}const Ee=Re,_=pe({locale:E[Ee],fallbackLocale:E["zh-CN"],messages:{"zh-CN":Ce,"en-US":Le}}),st=(e,t={})=>`${_.global.t(e,t)}`,xe=e=>{if(E[e]){_.global.locale=e;try{localStorage==null||localStorage.setItem(x,e)}catch{console.warn("[Client]:localStorage is not available.")}window.location.reload()}},Ae=()=>{const e=T(()=>_.global.locale);return{setLang:xe,lang:e}},qe=T(()=>Oe[_.global.locale]),y={error:-1,success:[0,200,204],needAuthorization:401,notFound:404,notAllowed:[403,1403],needRequest:202,needMessage:[208,1e3]},j=()=>{let e=()=>{};const t=new Promise(r=>{e=r});return[e,t]},{CancelToken:Y}=H,D=Math.random().toString().slice(2),Te=1*60*1e3,Me="/api";let k;const Ie=()=>{k==null||k();const{close:e}=J.error("登录失效,请重新登录!");k=e};let Z=Y.source();const Ne=()=>{Z=Y.source()},Pe=H.create({baseURL:Me,timeout:Te}),z=e=>y.success.includes(+e),Q=async e=>{const{getToken:t,logout:r}=ee(),{lang:n}=Ae(),s=Z;e.cancelToken=e.cancelToken||s.token;let o=!1;e.headers=e.headers||{},t()&&(e.headers.Authorization=t()),e.headers.lang=n.value;const a=i=>{const{data:c}=i,[u,h]=j(),f=new FileReader;return f.readAsText(c,"utf-8"),f.onload=()=>{var M;try{const C=JSON.parse(f.result);if(C){u(C);return}}catch(C){console.log(C)}const se=(M=/filename[^;=\n]*=((['"]).*?\2|[^;\n]*)/.exec(i.headers["content-disposition"]))==null?void 0:M[1],ne=new Blob([c],{type:i.headers["content-type"]});u({code:y.success[0],data:{name:se,blob:ne},message:"",isRequestSuccess:!0})},h},l=(i,c)=>{var h;const{code:u}=i.data;(u===y.needAuthorization||((h=c==null?void 0:c.response)==null?void 0:h.status)===401)&&(o=!0,Ie(),s.cancel(D),Ne(),r())},m=e.responseType==="blob";function d(i){o||J.error(i||this.message)}try{const i=await Pe.request(e);let{data:c}=i;return m&&(c=await a(i)),l(i),c.message=c.message||c.msg||"",c.isRequestSuccess=z(c.code),c.showRequestErrorMessage=d.bind(c),c}catch(i){if(i.message===D){const[,f]=j();return f}const c=i.response;if(!c){const f={code:y.error,message:i.message,isRequestSuccess:!1};return f.showRequestErrorMessage=d.bind(f),f}let{data:u}=c;if(m&&(u=await a(c)),typeof u=="string"){const f={code:y.error,message:i.message||c.statusText,isRequestSuccess:!1};return f.showRequestErrorMessage=d.bind(f),f}l(c,i),u&&(u.message=u.message||u.msg||i.message||"",u.isRequestSuccess=z(u.code),u.showRequestErrorMessage=d.bind(u));const h={code:y.error,message:i.message,isRequestSuccess:!1};return h.showRequestErrorMessage=d.bind(h),Object.assign(h,u)}},nt=e=>Q({url:"/v1/user/login",method:"post",data:e}),Ue=()=>Q({url:"/v1/user/detail"});var b=(e=>(e.home="/static/index.html",e.login="/login",e.register="/register",e.notFound="/404",e.app="/app",e.appConfig="/app/config/:id",e.appCreate="/appCreate",e.library="/library",e.knowledge="/library/knowledge",e.history="/library/history",e.config="/config",e))(b||{}),A=(e=>(e.Admin="admin",e.SuperAdmin="super_admin",e.User="user",e))(A||{});const je="modulepreload",De=function(e){return"/static/"+e},B={},O=function(t,r,n){if(!r||r.length===0)return t();const s=document.getElementsByTagName("link");return Promise.all(r.map(o=>{if(o=De(o),o in B)return;B[o]=!0;const a=o.endsWith(".css"),l=a?'[rel="stylesheet"]':"";if(!!n)for(let i=s.length-1;i>=0;i--){const c=s[i];if(c.href===o&&(!a||c.rel==="stylesheet"))return}else if(document.querySelector(`link[href="${o}"]${l}`))return;const d=document.createElement("link");if(d.rel=a?"stylesheet":je,a||(d.as="script",d.crossOrigin=""),d.href=o,document.head.appendChild(d),a)return new 
Promise((i,c)=>{d.addEventListener("load",i),d.addEventListener("error",()=>c(new Error(`Unable to preload CSS for ${o}`)))})})).then(()=>t())},ze=[{path:b.login,name:b.login,component:()=>O(()=>import("./login-bb708d78.js"),["assets/login-bb708d78.js","assets/vue-e0bc46a9.js","assets/vendor-4cd7d240.js","assets/__commonjsHelpers__-042e6b4d.js"])},{path:b.home,name:b.home,component:()=>O(()=>import("./home-a1d5c9a6.js"),["assets/home-a1d5c9a6.js","assets/vue-e0bc46a9.js","assets/vendor-4cd7d240.js","assets/__commonjsHelpers__-042e6b4d.js"]),meta:{showSideBar:!0,needLogin:!0}},{path:"/:pathMatch(.*)*",component:()=>O(()=>import("./home-a1d5c9a6.js"),["assets/home-a1d5c9a6.js","assets/vue-e0bc46a9.js","assets/vendor-4cd7d240.js","assets/__commonjsHelpers__-042e6b4d.js"])}],Be="/",X=re({history:ce(Be),routes:ze}),S=ae(),q="token",Fe=()=>localStorage.getItem(q)||"",F=e=>e?localStorage.setItem(q,e):localStorage.removeItem(q),ee=()=>{const e=async()=>{const{data:n,showRequestErrorMessage:s,isRequestSuccess:o}=await Ue();if(!o){s();return}S.value=n},t=async()=>{F(),S.value=void 0,X.push({name:b.login})},r=T(()=>{var n,s;return((n=S.value)==null?void 0:n.user_role)===A.Admin||((s=S.value)==null?void 0:s.user_role)===A.SuperAdmin});return{getUser:e,user:S,isAdmin:r,logout:t,setToken:F,getToken:Fe}},v=ie(null);function $e(){ee();const e=n=>{var o,a;if(!v.value)return;const s=()=>{var l;n(),(l=v.value)==null||l.off("connect",s)};(o=v.value)!=null&&o.connected?n():(a=v.value)==null||a.on("connect",s)};return{socket:v,onAndAutoOff:n=>{const s=Object.entries(n).reduce((o,[a,l])=>(o[a]=m=>{l(m)},o),{});ye(async()=>{for(const[o,a]of Object.entries(s))e(()=>{v.value.on(o,a)})}),be(async()=>{var o;for(const[a,l]of Object.entries(s))(o=v.value)==null||o.off(a,l)})},emit:(n,...s)=>{e(()=>{var o;(o=v.value)==null||o.emit(n,...s)})}}}const Ve=le({__name:"App",setup(e){return $e(),(t,r)=>{const n=ue("router-view");return de(),fe(I(Se),{locale:I(qe)},{default:me(()=>[ge(n)]),_:1},8,["locale"])}}});const te=(e,t)=>Object.prototype.toString.call(e)===t,Ke=e=>te(e,"[object Boolean]"),oe=e=>te(e,"[object Object]"),rt=e=>{var t,r;if((t=navigator==null?void 0:navigator.clipboard)!=null&&t.writeText)(r=navigator==null?void 0:navigator.clipboard)==null||r.writeText(e);else{const n=document.createElement("textarea");n.value=e,n.style.position="absolute",n.style.opacity="0",n.style.left="-999999px",n.style.top="-999999px",document.body.appendChild(n),n.focus(),n.select(),document.execCommand("copy")}},ct=e=>new TextDecoder("utf-8").decode(e),We=(e,t)=>oe(e)?t.every(r=>Object.hasOwn(e,r)):!1,Ge=e=>oe(e)&&We(e,["loading"]),$=e=>{const t={loading:!0,text:""};return Ke(e)?t.loading=e:Ge(e)?(t.loading=e.loading,t.text=e.text||""):(console.warn("please check v-loading binding, should be boolean or { loading: boolean; text: string; }"),t.loading=!!e),t},V="loadingDirectiveElement",K="fullScreen",w="posRelative",g=Symbol("vLoadingDirective"),p=Symbol("loadingSpinApp"),He={mounted:(e,t)=>{e.classList.remove(w);const n=window.getComputedStyle(e).position;if(e[g]&&(e[g].remove(),delete e[g]),e[p]&&(e[p].unmount(),delete e[p]),!t.value)return;const{loading:s,text:o}=$(t.value);if(!s)return;const a=t.arg==="fullScreen",l=document.createElement("div");l.classList.add(V),a&&l.classList.add(K);const m=R(N,{tip:o});m.mount(l),n==="static"&&e.classList.add(w),e[g]=l,e[p]=m,e.append(l)},updated:(e,t)=>{e.classList.remove(w);const n=window.getComputedStyle(e).position;if(e[g]&&(e[g].remove(),delete e[g]),e[p]&&(e[p].unmount(),delete 
e[p]),!t.value)return;const{loading:s,text:o}=$(t.value);if(!s)return;const a=t.arg==="fullScreen",l=document.createElement("div");l.classList.add(V),a&&l.classList.add(K);const m=R(N,{tip:o});m.mount(l),n==="static"&&e.classList.add(w),e[g]=l,e[p]=m,e.append(l)},unmounted:e=>{e.classList.remove(w),e[g]&&(e[g].remove(),delete e[g]),e[p]&&(e[p].unmount(),delete e[p])}},W=e=>typeof e=="function",L=Symbol("clickOutside"),G=e=>()=>{e()},Je={mounted:(e,t)=>{if(!W(t.value)){console.warn("v-clickoutside binding should be function");return}const r=G(t.value);e[L]=r,P("clickoutside",e,r)},updated:(e,t)=>{let r=e[L];if(r&&U("clickoutside",e,r),!W(t.value)){console.warn("v-clickoutside binding should be function");return}r=G(t.value),e[L]=r,P("clickoutside",e,r)},unmounted:e=>{const t=e[L];t&&U("clickoutside",e,t)}},Ye=e=>{e.directive("loading",He),e.directive("clickoutside",Je)},Ze={install:Ye};const Qe=R(Ve);Qe.use(we()).use(X).use(Ze).use(_).mount("#app");export{rt as C,b as P,ct as U,nt as l,st as t,ee as u}; diff --git a/spaces/sub314xxl/MusicGen/tests/common_utils/__init__.py b/spaces/sub314xxl/MusicGen/tests/common_utils/__init__.py deleted file mode 100644 index 74ffcfef96fec35c99b2a1a053a61f44f7a8bbe9..0000000000000000000000000000000000000000 --- a/spaces/sub314xxl/MusicGen/tests/common_utils/__init__.py +++ /dev/null @@ -1,9 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -# flake8: noqa -from .temp_utils import TempDirMixin -from .wav_utils import get_batch_white_noise, get_white_noise, save_wav diff --git a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/NCH VideoPad Video Editor Professional 4.56 Incl Crack Download Pc HOT!.md b/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/NCH VideoPad Video Editor Professional 4.56 Incl Crack Download Pc HOT!.md deleted file mode 100644 index 25ca30be53d1a9b20fb440a1e006c5524dadcfa6..0000000000000000000000000000000000000000 --- a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/NCH VideoPad Video Editor Professional 4.56 Incl Crack Download Pc HOT!.md +++ /dev/null @@ -1,123 +0,0 @@ -
          -

          NCH VideoPad Video Editor Professional 4.56 Incl Crack Download Pc

          - -

          If you are looking for a powerful and easy-to-use video editing software, you should consider NCH VideoPad Video Editor Professional 4.56 Incl Crack Download Pc. This software allows you to edit your videos in various ways, such as cutting, merging, adding effects, transitions, audio, and more. You can also create professional-looking movies and share them with your family and friends.

          -

          NCH VideoPad Video Editor Professional 4.56 Incl Crack Download Pc


          Download Zip ››››› https://cinurl.com/2uEYDZ



          - -

          Why Choose NCH VideoPad Video Editor Professional 4.56 Incl Crack Download Pc?

          - -

          NCH VideoPad Video Editor Professional 4.56 Incl Crack Download Pc has many features that make it stand out from other video editing software. Here are some of them:

          - -
            -
          • It has a user-friendly interface that is divided into several parts, such as Media List, Effects, Transitions, Files, Video Track, Clips, Overlay Track, and Audio Track. You can easily access all the tools you need and preview your work in real-time.
    
          • It supports a wide range of video formats, such as AVI, WMV, MP4, MOV, MKV, FLV, 3GP, and more. You can also import and export videos from various devices, such as cameras, camcorders, smartphones, tablets, etc.
    
          • It offers stunning transition effects that can give your movie a professional touch. You can choose from a range of fade transitions and customize their duration. You can also apply different video effects, such as crop, brightness, edge detection, posterize, reverb, hue, and more.
    
          • It has amazing audio tools that can help you create your own custom movie soundtrack. You can import and mix music tracks like a pro. You can also record your own narrations with the click of a button or import pre-recorded narrations, sound effects, or music.
    
          • It allows you to complete video optimization by fine-tuning brightness, saturation, and color for your video. You can also add photos and digital images with a click of a button. You can also apply effects like black & white, sepia tone, and negative. You can also add text captions to your movie.
    
          • It enables you to share your movie with your family and friends in various ways. You can burn it to DVD and watch it on your TV. You can save it for YouTube and share it online with friends. You can save it to PSP, iPod, iPhone, or 3GP mobile phone. You can also save it to your PC as a high-quality digital movie.
    
          • It has a comprehensive help file and video tutorials that can help you learn how to use the software efficiently. You can also contact the customer support team if you have any questions or problems.
    
          - -

          How to Download NCH VideoPad Video Editor Professional 4.56 Incl Crack Pc?

          - -

          If you want to download NCH VideoPad Video Editor Professional 4.56 Incl Crack Pc for free, you can follow these simple steps:

          - -
            -
              1. Click on the download link below and wait for the file to be downloaded.
              2. Extract the file using WinRAR or any other extraction tool.
              3. Run the setup file and follow the instructions to install the software.
              4. Copy the crack file from the crack folder and paste it into the installation directory.
              5. Enjoy editing your videos with NCH VideoPad Video Editor Professional 4.56 Incl Crack Pc.
    
          - -

          Download Link: NCH VideoPad Pro 13.28 Full Version Rar (10.0 MB) | Mirror

          - -

          Conclusion

          - -

          NCH VideoPad Video Editor Professional 4.56 Incl Crack Pc is a great video editing software that can help you create amazing movies with ease. It has many features that can suit your needs and preferences. It is also easy to use and has a friendly interface. You can download it for free from the link below and start editing your videos today.

          -

          -

          How to Use NCH VideoPad Video Editor Professional 4.56 Incl Crack Pc?

          - -

          Using NCH VideoPad Video Editor Professional 4.56 Incl Crack Pc is very simple and intuitive. You can follow these steps to start editing your videos:

          - -
            -
              1. Launch the software and click on the Add File button to import your video files. You can also drag and drop them into the Media List.
              2. Select the video clip you want to edit and drag it to the Video Track. You can also add more clips and arrange them in the order you want.
              3. Use the tools in the Effects, Transitions, and Audio sections to enhance your video. You can preview the changes in the Preview window.
              4. Use the tools in the Files, Clips, and Overlay sections to modify your video. You can cut, copy, paste, split, trim, crop, rotate, zoom, and more.
              5. Use the tools in the Subtitles section to add subtitles to your video. You can also create bookmarks and chapters for easy navigation.
              6. When you are satisfied with your video, click on the Export button to save it in your preferred format and location. You can also burn it to DVD or upload it to YouTube.
    
          - -

          What are the Benefits of NCH VideoPad Video Editor Professional 4.56 Incl Crack Pc?

          - -

          NCH VideoPad Video Editor Professional 4.56 Incl Crack Pc has many benefits that make it a great choice for video editing enthusiasts and professionals alike. Here are some of them:

          - -
            -
          • It is fast and reliable. It can handle large and complex video files without slowing down or crashing.
    
          • It is compatible with Windows XP, Vista, 7, 8, 8.1, and 10. It can also run on Mac OS X 10.5 or above.
    
          • It is affordable and cost-effective. It offers a lot of features and functions for a reasonable price.
    
          • It is versatile and flexible. It can edit videos of any genre and style, such as movies, documentaries, music videos, tutorials, etc.
    
          • It is user-friendly and easy-to-learn. It has a clear and simple interface that anyone can use without any difficulty.
    
          - -

          What are the Drawbacks of NCH VideoPad Video Editor Professional 4.56 Incl Crack Pc?

          - -

          NCH VideoPad Video Editor Professional 4.56 Incl Crack Pc is not perfect and has some drawbacks that you should be aware of before downloading it. Here are some of them:

          - -
            -
          • It may not support some rare or exotic video formats. You may need to convert them before importing them into the software.
    
          • It may not have some advanced or specialized features that other video editing software may offer, such as motion tracking, chroma keying, 3D editing, etc.
    
          • It may not be compatible with some antivirus or firewall software. You may need to disable them before installing or running the software.
    
          - -

          In conclusion, NCH VideoPad Video Editor Professional 4.56 Incl Crack Pc is a great video editing software that can help you create amazing movies with ease. It has many features that can suit your needs and preferences. It is also easy to use and has a friendly interface. You can download it for free from the link below and start editing your videos today.

          -

          What are the Alternatives to NCH VideoPad Video Editor Professional 4.56 Incl Crack Pc?

          - -

          NCH VideoPad Video Editor Professional 4.56 Incl Crack Pc is not the only video editing software available in the market. There are many other alternatives that you can try if you are looking for different features or functions. Here are some of them:

• Adobe Premiere Pro: This is a professional video editing software that is widely used by filmmakers, videographers, and content creators. It has a rich set of tools and effects that can help you create stunning videos. It also integrates with other Adobe products, such as Photoshop, After Effects, and Audition.
• Wondershare Filmora: This is a simple and easy-to-use video editing software that is suitable for beginners and casual users. It has a drag-and-drop interface that allows you to edit your videos quickly and easily. It also has a library of filters, transitions, titles, music, and sound effects that you can use to enhance your videos.
• CyberLink PowerDirector: This is a powerful and versatile video editing software that can handle 4K and 360-degree videos. It has a multi-track timeline that allows you to edit multiple clips at once. It also has a range of features and effects that can help you create professional-looking videos.

          What are the Reviews of NCH VideoPad Video Editor Professional 4.56 Incl Crack Pc?

          - -

          NCH VideoPad Video Editor Professional 4.56 Incl Crack Pc has received many positive reviews from users who have tried it. Here are some of them:

"I have been using this software for a few months now and I love it. It is very easy to use and has everything I need to edit my videos. I can cut, merge, add effects, transitions, audio, and more. I can also export my videos in various formats and share them online. It is a great software for anyone who wants to edit videos without any hassle."

-- John Smith

"This software is amazing. It is fast and reliable. It can handle large and complex video files without any problem. It also has a lot of features and functions that can help me create amazing movies. I can also burn them to DVD or upload them to YouTube. It is a great software for video editing enthusiasts and professionals alike."

-- Jane Doe

"I have been using this software for a while now and I am very satisfied with it. It is very user-friendly and easy to learn. It has a clear and simple interface that anyone can use without any difficulty. It also has a comprehensive help file and video tutorials that can help me learn how to use the software efficiently. It is a great software for video editing beginners and experts alike."

-- Mike Jones

          What are the Tips and Tricks for NCH VideoPad Video Editor Professional 4.56 Incl Crack Pc?

          - -

          NCH VideoPad Video Editor Professional 4.56 Incl Crack Pc is a great video editing software that can help you create amazing movies with ease. However, there are some tips and tricks that you can use to make your video editing experience even better. Here are some of them:

• Use keyboard shortcuts to speed up your workflow. You can find a list of keyboard shortcuts in the help file or on the official website.
• Use the Deshaker filter to stabilize shaky videos. You can find it in the Effects section.
• Use the green screen feature to replace the background of your videos with any image or video you want. You can find it in the Overlay section.
• Use the snapshot feature to capture still images from your videos. You can find it in the Files section.
• Use the narration feature to add voice-over to your videos. You can find it in the Audio section.


          Conclusion

          - -

          NCH VideoPad Video Editor Professional 4.56 Incl Crack Pc is a great video editing software that can help you create amazing movies with ease. It has many features that can suit your needs and preferences. It is also easy to use and has a friendly interface. You can download it for free from the link below and start editing your videos today.

          - -

          Download Link: NCH VideoPad Pro 13.28 Full Version Rar (10.0 MB) | Mirror

          3cee63e6c2
          -
          -
          \ No newline at end of file diff --git a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Payg Ambarlar Tarixi Pdf Download WORK.md b/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Payg Ambarlar Tarixi Pdf Download WORK.md deleted file mode 100644 index 0aa7064f5c6010da568d9c537111fc573e4b13cd..0000000000000000000000000000000000000000 --- a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Payg Ambarlar Tarixi Pdf Download WORK.md +++ /dev/null @@ -1,52 +0,0 @@ -

          payg ambarlar tarixi pdf download


          Download Zip ✑ ✑ ✑ https://cinurl.com/2uEYem



          - -izlemek icin yazabilir misiniz? - -#ubuntu-tr 2016-10-27 - - selam - - kimse var mı? - -#ubuntu-tr 2016-10-28 - - arkadaşlar sorumu soruyoruz. mesaj algılıyor uzantılı arkadaşlar - - arkadaşlar şu soruma dair yorumlar alanınızda var mi - - konuşmak için google hangi kuruluşa bağlanıyor - - herkes var mı - -#ubuntu-tr 2016-10-29 - - iyi akşamlar - -#ubuntu-tr 2016-10-31 - - ok - -#ubuntu-tr 2016-11-01 - -#ubuntu-tr 2016-11-02 - - hi - -#ubuntu-tr 2017-10-25 - - bilen var mi? - -#ubuntu-tr 2017-10-28 - - s.a - -#ubuntu-tr 2018-10-25 - - varadero, merhaba - - ben biraz araştırdım unity aynı destek programımızı olan dwm kurulu - - parçalarımızı kullanabiliyor muyuz? 4fefd39f24
          -
          -
          -

          diff --git a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Vizonteletuubaindir720phd Free.md b/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Vizonteletuubaindir720phd Free.md deleted file mode 100644 index b91ba5f084584c0cd1be07f1da44015b57167e9c..0000000000000000000000000000000000000000 --- a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Vizonteletuubaindir720phd Free.md +++ /dev/null @@ -1,29 +0,0 @@ -
          -

          Vizontele Tuuba 720p HD Download - Watch the Turkish Comedy Classic

          - -

          If you are looking for a hilarious and heartwarming movie to watch, you should definitely check out Vizontele Tuuba. This is the sequel to the 2001 hit Vizontele, which tells the story of how a small village in Turkey gets its first television set in 1974. Vizontele Tuuba continues the adventures of Deli Emin, the village postman and cinema owner, and his friends as they deal with the political and social changes of the 1980s.

          - -

          Vizontele Tuuba is written and directed by Yilmaz Erdogan, who also stars as Deli Emin. The movie features a talented cast of actors, including Demet Akbag, Altan Erkekli, Cem Yilmaz, Tolga Cevik, and Tuba Unsal. The movie is full of funny scenes and witty dialogues, as well as touching moments and historical references. The movie also has a great soundtrack that includes songs by Baris Manco, Sezen Aksu, and Zulfu Livaneli.

          -

          vizonteletuubaindir720phd


          Download Ziphttps://cinurl.com/2uEYsQ



          - -

          You can download Vizontele Tuuba 720p HD from various online platforms and enjoy this Turkish comedy classic at home. You will laugh out loud with the antics of Deli Emin and his friends, and you will also learn more about the history and culture of Turkey in the 1980s. Vizontele Tuuba is a movie that will make you smile and warm your heart.

          -

          Vizontele Tuuba 720p HD is the best quality you can get for this movie. You will be able to see every detail and enjoy the beautiful scenery of the village and the city. You will also appreciate the costumes and the props that recreate the atmosphere of the 1980s. You will feel like you are watching a real documentary of that era.

          - -

          Vizontele Tuuba 720p HD is also easy to download and watch. You don't need any special software or device to enjoy this movie. You can use your computer, laptop, tablet, or smartphone to access the online platforms that offer this movie. You can also stream it online or download it to watch it offline. You can choose the option that suits you best.

          - -

          Vizontele Tuuba 720p HD is a movie that you will want to watch again and again. You will never get bored of the humor and the charm of this movie. You will also discover new things and meanings every time you watch it. You will be able to relate to the characters and their struggles, and you will also learn from their wisdom and courage. Vizontele Tuuba 720p HD is a movie that will enrich your mind and soul.

          -

          Vizontele Tuuba 720p HD is a movie that you can watch with your family and friends. You will have a great time sharing the laughter and the emotions of this movie. You will also have a lot of topics to talk about after watching this movie. You can discuss the history and the politics of Turkey, the culture and the traditions of the village, the relationships and the conflicts of the characters, and the messages and the themes of the movie.

          - -

          Vizontele Tuuba 720p HD is a movie that you can recommend to anyone who loves comedy and drama. You will impress them with your taste and your knowledge of Turkish cinema. You will also introduce them to a wonderful movie that they will thank you for. You will make new friends and strengthen old bonds with this movie.

          - -

          Vizontele Tuuba 720p HD is a movie that you can be proud of. You will be supporting the Turkish film industry and the talented artists who made this movie. You will also be celebrating the Turkish culture and the Turkish people who inspired this movie. You will be part of a community that appreciates and values this movie.

          -

          Vizontele Tuuba 720p HD is a movie that you can enjoy anytime and anywhere. You can watch it on a rainy day or a sunny day, on a weekday or a weekend, on a holiday or a regular day. You can watch it alone or with others, in the morning or at night, in your home or outside. You can watch it whenever you need a dose of laughter and joy, or whenever you want to learn something new and interesting.

          -

          - -

          Vizontele Tuuba 720p HD is a movie that you can download right now and start watching immediately. You don't have to wait for anything or anyone to enjoy this movie. You just need to click on the link below and follow the instructions. You will be able to access this movie in a matter of minutes and start having fun.

          - -

          Vizontele Tuuba 720p HD is a movie that you don't want to miss. You will be glad that you watched this movie and you will want to share it with others. You will also want to watch the first movie, Vizontele, if you haven't already. You will become a fan of this movie series and you will look forward to more movies like this.

          - -

          Download Vizontele Tuuba 720p HD Now!

          3cee63e6c2
          -
          -
          \ No newline at end of file diff --git a/spaces/svjack/ControlNet-Face-Chinese/SPIGA/spiga/data/models3D/__init__.py b/spaces/svjack/ControlNet-Face-Chinese/SPIGA/spiga/data/models3D/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/svjack/ControlNet-Pose-Chinese/annotator/uniformer/mmcv/ops/furthest_point_sample.py b/spaces/svjack/ControlNet-Pose-Chinese/annotator/uniformer/mmcv/ops/furthest_point_sample.py deleted file mode 100644 index 374b7a878f1972c183941af28ba1df216ac1a60f..0000000000000000000000000000000000000000 --- a/spaces/svjack/ControlNet-Pose-Chinese/annotator/uniformer/mmcv/ops/furthest_point_sample.py +++ /dev/null @@ -1,83 +0,0 @@ -import torch -from torch.autograd import Function - -from ..utils import ext_loader - -ext_module = ext_loader.load_ext('_ext', [ - 'furthest_point_sampling_forward', - 'furthest_point_sampling_with_dist_forward' -]) - - -class FurthestPointSampling(Function): - """Uses iterative furthest point sampling to select a set of features whose - corresponding points have the furthest distance.""" - - @staticmethod - def forward(ctx, points_xyz: torch.Tensor, - num_points: int) -> torch.Tensor: - """ - Args: - points_xyz (Tensor): (B, N, 3) where N > num_points. - num_points (int): Number of points in the sampled set. - - Returns: - Tensor: (B, num_points) indices of the sampled points. - """ - assert points_xyz.is_contiguous() - - B, N = points_xyz.size()[:2] - output = torch.cuda.IntTensor(B, num_points) - temp = torch.cuda.FloatTensor(B, N).fill_(1e10) - - ext_module.furthest_point_sampling_forward( - points_xyz, - temp, - output, - b=B, - n=N, - m=num_points, - ) - if torch.__version__ != 'parrots': - ctx.mark_non_differentiable(output) - return output - - @staticmethod - def backward(xyz, a=None): - return None, None - - -class FurthestPointSamplingWithDist(Function): - """Uses iterative furthest point sampling to select a set of features whose - corresponding points have the furthest distance.""" - - @staticmethod - def forward(ctx, points_dist: torch.Tensor, - num_points: int) -> torch.Tensor: - """ - Args: - points_dist (Tensor): (B, N, N) Distance between each point pair. - num_points (int): Number of points in the sampled set. - - Returns: - Tensor: (B, num_points) indices of the sampled points. - """ - assert points_dist.is_contiguous() - - B, N, _ = points_dist.size() - output = points_dist.new_zeros([B, num_points], dtype=torch.int32) - temp = points_dist.new_zeros([B, N]).fill_(1e10) - - ext_module.furthest_point_sampling_with_dist_forward( - points_dist, temp, output, b=B, n=N, m=num_points) - if torch.__version__ != 'parrots': - ctx.mark_non_differentiable(output) - return output - - @staticmethod - def backward(xyz, a=None): - return None, None - - -furthest_point_sample = FurthestPointSampling.apply -furthest_point_sample_with_dist = FurthestPointSamplingWithDist.apply diff --git a/spaces/svjack/ControlNet-Pose-Chinese/annotator/uniformer/mmcv/video/io.py b/spaces/svjack/ControlNet-Pose-Chinese/annotator/uniformer/mmcv/video/io.py deleted file mode 100644 index 9879154227f640c262853b92c219461c6f67ee8e..0000000000000000000000000000000000000000 --- a/spaces/svjack/ControlNet-Pose-Chinese/annotator/uniformer/mmcv/video/io.py +++ /dev/null @@ -1,318 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-import os.path as osp -from collections import OrderedDict - -import cv2 -from cv2 import (CAP_PROP_FOURCC, CAP_PROP_FPS, CAP_PROP_FRAME_COUNT, - CAP_PROP_FRAME_HEIGHT, CAP_PROP_FRAME_WIDTH, - CAP_PROP_POS_FRAMES, VideoWriter_fourcc) - -from annotator.uniformer.mmcv.utils import (check_file_exist, mkdir_or_exist, scandir, - track_progress) - - -class Cache: - - def __init__(self, capacity): - self._cache = OrderedDict() - self._capacity = int(capacity) - if capacity <= 0: - raise ValueError('capacity must be a positive integer') - - @property - def capacity(self): - return self._capacity - - @property - def size(self): - return len(self._cache) - - def put(self, key, val): - if key in self._cache: - return - if len(self._cache) >= self.capacity: - self._cache.popitem(last=False) - self._cache[key] = val - - def get(self, key, default=None): - val = self._cache[key] if key in self._cache else default - return val - - -class VideoReader: - """Video class with similar usage to a list object. - - This video warpper class provides convenient apis to access frames. - There exists an issue of OpenCV's VideoCapture class that jumping to a - certain frame may be inaccurate. It is fixed in this class by checking - the position after jumping each time. - Cache is used when decoding videos. So if the same frame is visited for - the second time, there is no need to decode again if it is stored in the - cache. - - :Example: - - >>> import annotator.uniformer.mmcv as mmcv - >>> v = mmcv.VideoReader('sample.mp4') - >>> len(v) # get the total frame number with `len()` - 120 - >>> for img in v: # v is iterable - >>> mmcv.imshow(img) - >>> v[5] # get the 6th frame - """ - - def __init__(self, filename, cache_capacity=10): - # Check whether the video path is a url - if not filename.startswith(('https://', 'http://')): - check_file_exist(filename, 'Video file not found: ' + filename) - self._vcap = cv2.VideoCapture(filename) - assert cache_capacity > 0 - self._cache = Cache(cache_capacity) - self._position = 0 - # get basic info - self._width = int(self._vcap.get(CAP_PROP_FRAME_WIDTH)) - self._height = int(self._vcap.get(CAP_PROP_FRAME_HEIGHT)) - self._fps = self._vcap.get(CAP_PROP_FPS) - self._frame_cnt = int(self._vcap.get(CAP_PROP_FRAME_COUNT)) - self._fourcc = self._vcap.get(CAP_PROP_FOURCC) - - @property - def vcap(self): - """:obj:`cv2.VideoCapture`: The raw VideoCapture object.""" - return self._vcap - - @property - def opened(self): - """bool: Indicate whether the video is opened.""" - return self._vcap.isOpened() - - @property - def width(self): - """int: Width of video frames.""" - return self._width - - @property - def height(self): - """int: Height of video frames.""" - return self._height - - @property - def resolution(self): - """tuple: Video resolution (width, height).""" - return (self._width, self._height) - - @property - def fps(self): - """float: FPS of the video.""" - return self._fps - - @property - def frame_cnt(self): - """int: Total frames of the video.""" - return self._frame_cnt - - @property - def fourcc(self): - """str: "Four character code" of the video.""" - return self._fourcc - - @property - def position(self): - """int: Current cursor position, indicating frame decoded.""" - return self._position - - def _get_real_position(self): - return int(round(self._vcap.get(CAP_PROP_POS_FRAMES))) - - def _set_real_position(self, frame_id): - self._vcap.set(CAP_PROP_POS_FRAMES, frame_id) - pos = self._get_real_position() - for _ in range(frame_id - pos): - self._vcap.read() - 
self._position = frame_id - - def read(self): - """Read the next frame. - - If the next frame have been decoded before and in the cache, then - return it directly, otherwise decode, cache and return it. - - Returns: - ndarray or None: Return the frame if successful, otherwise None. - """ - # pos = self._position - if self._cache: - img = self._cache.get(self._position) - if img is not None: - ret = True - else: - if self._position != self._get_real_position(): - self._set_real_position(self._position) - ret, img = self._vcap.read() - if ret: - self._cache.put(self._position, img) - else: - ret, img = self._vcap.read() - if ret: - self._position += 1 - return img - - def get_frame(self, frame_id): - """Get frame by index. - - Args: - frame_id (int): Index of the expected frame, 0-based. - - Returns: - ndarray or None: Return the frame if successful, otherwise None. - """ - if frame_id < 0 or frame_id >= self._frame_cnt: - raise IndexError( - f'"frame_id" must be between 0 and {self._frame_cnt - 1}') - if frame_id == self._position: - return self.read() - if self._cache: - img = self._cache.get(frame_id) - if img is not None: - self._position = frame_id + 1 - return img - self._set_real_position(frame_id) - ret, img = self._vcap.read() - if ret: - if self._cache: - self._cache.put(self._position, img) - self._position += 1 - return img - - def current_frame(self): - """Get the current frame (frame that is just visited). - - Returns: - ndarray or None: If the video is fresh, return None, otherwise - return the frame. - """ - if self._position == 0: - return None - return self._cache.get(self._position - 1) - - def cvt2frames(self, - frame_dir, - file_start=0, - filename_tmpl='{:06d}.jpg', - start=0, - max_num=0, - show_progress=True): - """Convert a video to frame images. - - Args: - frame_dir (str): Output directory to store all the frame images. - file_start (int): Filenames will start from the specified number. - filename_tmpl (str): Filename template with the index as the - placeholder. - start (int): The starting frame index. - max_num (int): Maximum number of frames to be written. - show_progress (bool): Whether to show a progress bar. 
- """ - mkdir_or_exist(frame_dir) - if max_num == 0: - task_num = self.frame_cnt - start - else: - task_num = min(self.frame_cnt - start, max_num) - if task_num <= 0: - raise ValueError('start must be less than total frame number') - if start > 0: - self._set_real_position(start) - - def write_frame(file_idx): - img = self.read() - if img is None: - return - filename = osp.join(frame_dir, filename_tmpl.format(file_idx)) - cv2.imwrite(filename, img) - - if show_progress: - track_progress(write_frame, range(file_start, - file_start + task_num)) - else: - for i in range(task_num): - write_frame(file_start + i) - - def __len__(self): - return self.frame_cnt - - def __getitem__(self, index): - if isinstance(index, slice): - return [ - self.get_frame(i) - for i in range(*index.indices(self.frame_cnt)) - ] - # support negative indexing - if index < 0: - index += self.frame_cnt - if index < 0: - raise IndexError('index out of range') - return self.get_frame(index) - - def __iter__(self): - self._set_real_position(0) - return self - - def __next__(self): - img = self.read() - if img is not None: - return img - else: - raise StopIteration - - next = __next__ - - def __enter__(self): - return self - - def __exit__(self, exc_type, exc_value, traceback): - self._vcap.release() - - -def frames2video(frame_dir, - video_file, - fps=30, - fourcc='XVID', - filename_tmpl='{:06d}.jpg', - start=0, - end=0, - show_progress=True): - """Read the frame images from a directory and join them as a video. - - Args: - frame_dir (str): The directory containing video frames. - video_file (str): Output filename. - fps (float): FPS of the output video. - fourcc (str): Fourcc of the output video, this should be compatible - with the output file type. - filename_tmpl (str): Filename template with the index as the variable. - start (int): Starting frame index. - end (int): Ending frame index. - show_progress (bool): Whether to show a progress bar. - """ - if end == 0: - ext = filename_tmpl.split('.')[-1] - end = len([name for name in scandir(frame_dir, ext)]) - first_file = osp.join(frame_dir, filename_tmpl.format(start)) - check_file_exist(first_file, 'The start frame not found: ' + first_file) - img = cv2.imread(first_file) - height, width = img.shape[:2] - resolution = (width, height) - vwriter = cv2.VideoWriter(video_file, VideoWriter_fourcc(*fourcc), fps, - resolution) - - def write_frame(file_idx): - filename = osp.join(frame_dir, filename_tmpl.format(file_idx)) - img = cv2.imread(filename) - vwriter.write(img) - - if show_progress: - track_progress(write_frame, range(start, end)) - else: - for i in range(start, end): - write_frame(i) - vwriter.release() diff --git a/spaces/szukevin/VISOR-GPT/train/README.md b/spaces/szukevin/VISOR-GPT/train/README.md deleted file mode 100644 index 4f58439b1e7230f1976b608ac96303d7cd56d3b0..0000000000000000000000000000000000000000 --- a/spaces/szukevin/VISOR-GPT/train/README.md +++ /dev/null @@ -1 +0,0 @@ -The code is highly based on [TencentPretrain](https://github.com/Tencent/TencentPretrain). 
\ No newline at end of file diff --git a/spaces/tabeina/bingo1/src/lib/hooks/use-copy-to-clipboard.tsx b/spaces/tabeina/bingo1/src/lib/hooks/use-copy-to-clipboard.tsx deleted file mode 100644 index 62f7156dca246c46b213151af003a3a177977ccf..0000000000000000000000000000000000000000 --- a/spaces/tabeina/bingo1/src/lib/hooks/use-copy-to-clipboard.tsx +++ /dev/null @@ -1,33 +0,0 @@ -'use client' - -import * as React from 'react' - -export interface useCopyToClipboardProps { - timeout?: number -} - -export function useCopyToClipboard({ - timeout = 2000 -}: useCopyToClipboardProps) { - const [isCopied, setIsCopied] = React.useState(false) - - const copyToClipboard = (value: string) => { - if (typeof window === 'undefined' || !navigator.clipboard?.writeText) { - return - } - - if (!value) { - return - } - - navigator.clipboard.writeText(value).then(() => { - setIsCopied(true) - - setTimeout(() => { - setIsCopied(false) - }, timeout) - }) - } - - return { isCopied, copyToClipboard } -} diff --git a/spaces/taesiri/ChatGPT-ImageCaptioner/detic/config.py b/spaces/taesiri/ChatGPT-ImageCaptioner/detic/config.py deleted file mode 100644 index b6132f70116518b55e3b653fc6cd4ec9f61e50b0..0000000000000000000000000000000000000000 --- a/spaces/taesiri/ChatGPT-ImageCaptioner/detic/config.py +++ /dev/null @@ -1,131 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -from detectron2.config import CfgNode as CN - -def add_detic_config(cfg): - _C = cfg - - _C.WITH_IMAGE_LABELS = False # Turn on co-training with classification data - - # Open-vocabulary classifier - _C.MODEL.ROI_BOX_HEAD.USE_ZEROSHOT_CLS = False # Use fixed classifier for open-vocabulary detection - _C.MODEL.ROI_BOX_HEAD.ZEROSHOT_WEIGHT_PATH = 'datasets/metadata/lvis_v1_clip_a+cname.npy' - _C.MODEL.ROI_BOX_HEAD.ZEROSHOT_WEIGHT_DIM = 512 - _C.MODEL.ROI_BOX_HEAD.NORM_WEIGHT = True - _C.MODEL.ROI_BOX_HEAD.NORM_TEMP = 50.0 - _C.MODEL.ROI_BOX_HEAD.IGNORE_ZERO_CATS = False - _C.MODEL.ROI_BOX_HEAD.USE_BIAS = 0.0 # >= 0: not use - - _C.MODEL.ROI_BOX_HEAD.MULT_PROPOSAL_SCORE = False # CenterNet2 - _C.MODEL.ROI_BOX_HEAD.USE_SIGMOID_CE = False - _C.MODEL.ROI_BOX_HEAD.PRIOR_PROB = 0.01 - _C.MODEL.ROI_BOX_HEAD.USE_FED_LOSS = False # Federated Loss - _C.MODEL.ROI_BOX_HEAD.CAT_FREQ_PATH = \ - 'datasets/metadata/lvis_v1_train_cat_info.json' - _C.MODEL.ROI_BOX_HEAD.FED_LOSS_NUM_CAT = 50 - _C.MODEL.ROI_BOX_HEAD.FED_LOSS_FREQ_WEIGHT = 0.5 - - # Classification data configs - _C.MODEL.ROI_BOX_HEAD.IMAGE_LABEL_LOSS = 'max_size' # max, softmax, sum - _C.MODEL.ROI_BOX_HEAD.IMAGE_LOSS_WEIGHT = 0.1 - _C.MODEL.ROI_BOX_HEAD.IMAGE_BOX_SIZE = 1.0 - _C.MODEL.ROI_BOX_HEAD.ADD_IMAGE_BOX = False # Used for image-box loss and caption loss - _C.MODEL.ROI_BOX_HEAD.WS_NUM_PROPS = 128 # num proposals for image-labeled data - _C.MODEL.ROI_BOX_HEAD.WITH_SOFTMAX_PROP = False # Used for WSDDN - _C.MODEL.ROI_BOX_HEAD.CAPTION_WEIGHT = 1.0 # Caption loss weight - _C.MODEL.ROI_BOX_HEAD.NEG_CAP_WEIGHT = 0.125 # Caption loss hyper-parameter - _C.MODEL.ROI_BOX_HEAD.ADD_FEATURE_TO_PROP = False # Used for WSDDN - _C.MODEL.ROI_BOX_HEAD.SOFTMAX_WEAK_LOSS = False # Used when USE_SIGMOID_CE is False - - _C.MODEL.ROI_HEADS.MASK_WEIGHT = 1.0 - _C.MODEL.ROI_HEADS.ONE_CLASS_PER_PROPOSAL = False # For demo only - - # Caption losses - _C.MODEL.CAP_BATCH_RATIO = 4 # Ratio between detection data and caption data - _C.MODEL.WITH_CAPTION = False - _C.MODEL.SYNC_CAPTION_BATCH = False # synchronize across GPUs to enlarge # "classes" - - # dynamic class sampling when training with 21K classes - 
_C.MODEL.DYNAMIC_CLASSIFIER = False - _C.MODEL.NUM_SAMPLE_CATS = 50 - - # Different classifiers in testing, used in cross-dataset evaluation - _C.MODEL.RESET_CLS_TESTS = False - _C.MODEL.TEST_CLASSIFIERS = [] - _C.MODEL.TEST_NUM_CLASSES = [] - - # Backbones - _C.MODEL.SWIN = CN() - _C.MODEL.SWIN.SIZE = 'T' # 'T', 'S', 'B' - _C.MODEL.SWIN.USE_CHECKPOINT = False - _C.MODEL.SWIN.OUT_FEATURES = (1, 2, 3) # FPN stride 8 - 32 - - _C.MODEL.TIMM = CN() - _C.MODEL.TIMM.BASE_NAME = 'resnet50' - _C.MODEL.TIMM.OUT_LEVELS = (3, 4, 5) - _C.MODEL.TIMM.NORM = 'FrozenBN' - _C.MODEL.TIMM.FREEZE_AT = 0 - _C.MODEL.DATASET_LOSS_WEIGHT = [] - - # Multi-dataset dataloader - _C.DATALOADER.DATASET_RATIO = [1, 1] # sample ratio - _C.DATALOADER.USE_RFS = [False, False] - _C.DATALOADER.MULTI_DATASET_GROUPING = False # Always true when multi-dataset is enabled - _C.DATALOADER.DATASET_ANN = ['box', 'box'] # Annotation type of each dataset - _C.DATALOADER.USE_DIFF_BS_SIZE = False # Use different batchsize for each dataset - _C.DATALOADER.DATASET_BS = [8, 32] # Used when USE_DIFF_BS_SIZE is on - _C.DATALOADER.DATASET_INPUT_SIZE = [896, 384] # Used when USE_DIFF_BS_SIZE is on - _C.DATALOADER.DATASET_INPUT_SCALE = [(0.1, 2.0), (0.5, 1.5)] # Used when USE_DIFF_BS_SIZE is on - _C.DATALOADER.DATASET_MIN_SIZES = [(640, 800), (320, 400)] # Used when USE_DIFF_BS_SIZE is on - _C.DATALOADER.DATASET_MAX_SIZES = [1333, 667] # Used when USE_DIFF_BS_SIZE is on - _C.DATALOADER.USE_TAR_DATASET = False # for ImageNet-21K, directly reading from unziped files - _C.DATALOADER.TARFILE_PATH = 'datasets/imagenet/metadata-22k/tar_files.npy' - _C.DATALOADER.TAR_INDEX_DIR = 'datasets/imagenet/metadata-22k/tarindex_npy' - - _C.SOLVER.USE_CUSTOM_SOLVER = False - _C.SOLVER.OPTIMIZER = 'SGD' - _C.SOLVER.BACKBONE_MULTIPLIER = 1.0 # Used in DETR - _C.SOLVER.CUSTOM_MULTIPLIER = 1.0 # Used in DETR - _C.SOLVER.CUSTOM_MULTIPLIER_NAME = [] # Used in DETR - - # Deformable DETR - _C.MODEL.DETR = CN() - _C.MODEL.DETR.NUM_CLASSES = 80 - _C.MODEL.DETR.FROZEN_WEIGHTS = '' # For Segmentation - _C.MODEL.DETR.GIOU_WEIGHT = 2.0 - _C.MODEL.DETR.L1_WEIGHT = 5.0 - _C.MODEL.DETR.DEEP_SUPERVISION = True - _C.MODEL.DETR.NO_OBJECT_WEIGHT = 0.1 - _C.MODEL.DETR.CLS_WEIGHT = 2.0 - _C.MODEL.DETR.NUM_FEATURE_LEVELS = 4 - _C.MODEL.DETR.TWO_STAGE = False - _C.MODEL.DETR.WITH_BOX_REFINE = False - _C.MODEL.DETR.FOCAL_ALPHA = 0.25 - _C.MODEL.DETR.NHEADS = 8 - _C.MODEL.DETR.DROPOUT = 0.1 - _C.MODEL.DETR.DIM_FEEDFORWARD = 2048 - _C.MODEL.DETR.ENC_LAYERS = 6 - _C.MODEL.DETR.DEC_LAYERS = 6 - _C.MODEL.DETR.PRE_NORM = False - _C.MODEL.DETR.HIDDEN_DIM = 256 - _C.MODEL.DETR.NUM_OBJECT_QUERIES = 100 - - _C.MODEL.DETR.USE_FED_LOSS = False - _C.MODEL.DETR.WEAK_WEIGHT = 0.1 - - _C.INPUT.CUSTOM_AUG = '' - _C.INPUT.TRAIN_SIZE = 640 - _C.INPUT.TEST_SIZE = 640 - _C.INPUT.SCALE_RANGE = (0.1, 2.) 
- # 'default' for fixed short/ long edge, 'square' for max size=INPUT.SIZE - _C.INPUT.TEST_INPUT_TYPE = 'default' - - _C.FIND_UNUSED_PARAM = True - _C.EVAL_PRED_AR = False - _C.EVAL_PROPOSAL_AR = False - _C.EVAL_CAT_SPEC_AR = False - _C.IS_DEBUG = False - _C.QUICK_DEBUG = False - _C.FP16 = False - _C.EVAL_AP_FIX = False - _C.GEN_PSEDO_LABELS = False - _C.SAVE_DEBUG_PATH = 'output/save_debug/' \ No newline at end of file diff --git a/spaces/taesiri/DeticChatGPT/tools/merge_lvis_coco.py b/spaces/taesiri/DeticChatGPT/tools/merge_lvis_coco.py deleted file mode 100644 index abc2b673a30541fd71679a549acd9a53f7693183..0000000000000000000000000000000000000000 --- a/spaces/taesiri/DeticChatGPT/tools/merge_lvis_coco.py +++ /dev/null @@ -1,202 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -from collections import defaultdict -import torch -import sys -import json -import numpy as np - -from detectron2.structures import Boxes, pairwise_iou -COCO_PATH = 'datasets/coco/annotations/instances_train2017.json' -IMG_PATH = 'datasets/coco/train2017/' -LVIS_PATH = 'datasets/lvis/lvis_v1_train.json' -NO_SEG = False -if NO_SEG: - SAVE_PATH = 'datasets/lvis/lvis_v1_train+coco_box.json' -else: - SAVE_PATH = 'datasets/lvis/lvis_v1_train+coco_mask.json' -THRESH = 0.7 -DEBUG = False - -# This mapping is extracted from the official LVIS mapping: -# https://github.com/lvis-dataset/lvis-api/blob/master/data/coco_to_synset.json -COCO_SYNSET_CATEGORIES = [ - {"synset": "person.n.01", "coco_cat_id": 1}, - {"synset": "bicycle.n.01", "coco_cat_id": 2}, - {"synset": "car.n.01", "coco_cat_id": 3}, - {"synset": "motorcycle.n.01", "coco_cat_id": 4}, - {"synset": "airplane.n.01", "coco_cat_id": 5}, - {"synset": "bus.n.01", "coco_cat_id": 6}, - {"synset": "train.n.01", "coco_cat_id": 7}, - {"synset": "truck.n.01", "coco_cat_id": 8}, - {"synset": "boat.n.01", "coco_cat_id": 9}, - {"synset": "traffic_light.n.01", "coco_cat_id": 10}, - {"synset": "fireplug.n.01", "coco_cat_id": 11}, - {"synset": "stop_sign.n.01", "coco_cat_id": 13}, - {"synset": "parking_meter.n.01", "coco_cat_id": 14}, - {"synset": "bench.n.01", "coco_cat_id": 15}, - {"synset": "bird.n.01", "coco_cat_id": 16}, - {"synset": "cat.n.01", "coco_cat_id": 17}, - {"synset": "dog.n.01", "coco_cat_id": 18}, - {"synset": "horse.n.01", "coco_cat_id": 19}, - {"synset": "sheep.n.01", "coco_cat_id": 20}, - {"synset": "beef.n.01", "coco_cat_id": 21}, - {"synset": "elephant.n.01", "coco_cat_id": 22}, - {"synset": "bear.n.01", "coco_cat_id": 23}, - {"synset": "zebra.n.01", "coco_cat_id": 24}, - {"synset": "giraffe.n.01", "coco_cat_id": 25}, - {"synset": "backpack.n.01", "coco_cat_id": 27}, - {"synset": "umbrella.n.01", "coco_cat_id": 28}, - {"synset": "bag.n.04", "coco_cat_id": 31}, - {"synset": "necktie.n.01", "coco_cat_id": 32}, - {"synset": "bag.n.06", "coco_cat_id": 33}, - {"synset": "frisbee.n.01", "coco_cat_id": 34}, - {"synset": "ski.n.01", "coco_cat_id": 35}, - {"synset": "snowboard.n.01", "coco_cat_id": 36}, - {"synset": "ball.n.06", "coco_cat_id": 37}, - {"synset": "kite.n.03", "coco_cat_id": 38}, - {"synset": "baseball_bat.n.01", "coco_cat_id": 39}, - {"synset": "baseball_glove.n.01", "coco_cat_id": 40}, - {"synset": "skateboard.n.01", "coco_cat_id": 41}, - {"synset": "surfboard.n.01", "coco_cat_id": 42}, - {"synset": "tennis_racket.n.01", "coco_cat_id": 43}, - {"synset": "bottle.n.01", "coco_cat_id": 44}, - {"synset": "wineglass.n.01", "coco_cat_id": 46}, - {"synset": "cup.n.01", "coco_cat_id": 47}, - {"synset": "fork.n.01", "coco_cat_id": 48}, - {"synset": 
"knife.n.01", "coco_cat_id": 49}, - {"synset": "spoon.n.01", "coco_cat_id": 50}, - {"synset": "bowl.n.03", "coco_cat_id": 51}, - {"synset": "banana.n.02", "coco_cat_id": 52}, - {"synset": "apple.n.01", "coco_cat_id": 53}, - {"synset": "sandwich.n.01", "coco_cat_id": 54}, - {"synset": "orange.n.01", "coco_cat_id": 55}, - {"synset": "broccoli.n.01", "coco_cat_id": 56}, - {"synset": "carrot.n.01", "coco_cat_id": 57}, - # {"synset": "frank.n.02", "coco_cat_id": 58}, - {"synset": "sausage.n.01", "coco_cat_id": 58}, - {"synset": "pizza.n.01", "coco_cat_id": 59}, - {"synset": "doughnut.n.02", "coco_cat_id": 60}, - {"synset": "cake.n.03", "coco_cat_id": 61}, - {"synset": "chair.n.01", "coco_cat_id": 62}, - {"synset": "sofa.n.01", "coco_cat_id": 63}, - {"synset": "pot.n.04", "coco_cat_id": 64}, - {"synset": "bed.n.01", "coco_cat_id": 65}, - {"synset": "dining_table.n.01", "coco_cat_id": 67}, - {"synset": "toilet.n.02", "coco_cat_id": 70}, - {"synset": "television_receiver.n.01", "coco_cat_id": 72}, - {"synset": "laptop.n.01", "coco_cat_id": 73}, - {"synset": "mouse.n.04", "coco_cat_id": 74}, - {"synset": "remote_control.n.01", "coco_cat_id": 75}, - {"synset": "computer_keyboard.n.01", "coco_cat_id": 76}, - {"synset": "cellular_telephone.n.01", "coco_cat_id": 77}, - {"synset": "microwave.n.02", "coco_cat_id": 78}, - {"synset": "oven.n.01", "coco_cat_id": 79}, - {"synset": "toaster.n.02", "coco_cat_id": 80}, - {"synset": "sink.n.01", "coco_cat_id": 81}, - {"synset": "electric_refrigerator.n.01", "coco_cat_id": 82}, - {"synset": "book.n.01", "coco_cat_id": 84}, - {"synset": "clock.n.01", "coco_cat_id": 85}, - {"synset": "vase.n.01", "coco_cat_id": 86}, - {"synset": "scissors.n.01", "coco_cat_id": 87}, - {"synset": "teddy.n.01", "coco_cat_id": 88}, - {"synset": "hand_blower.n.01", "coco_cat_id": 89}, - {"synset": "toothbrush.n.01", "coco_cat_id": 90}, -] - - -def get_bbox(ann): - bbox = ann['bbox'] - return [bbox[0], bbox[1], bbox[0] + bbox[2], bbox[1] + bbox[3]] - - -if __name__ == '__main__': - file_name_key = 'file_name' if 'v0.5' in LVIS_PATH else 'coco_url' - coco_data = json.load(open(COCO_PATH, 'r')) - lvis_data = json.load(open(LVIS_PATH, 'r')) - - coco_cats = coco_data['categories'] - lvis_cats = lvis_data['categories'] - - num_find = 0 - num_not_find = 0 - num_twice = 0 - coco2lviscats = {} - synset2lvisid = {x['synset']: x['id'] for x in lvis_cats} - # cocoid2synset = {x['coco_cat_id']: x['synset'] for x in COCO_SYNSET_CATEGORIES} - coco2lviscats = {x['coco_cat_id']: synset2lvisid[x['synset']] \ - for x in COCO_SYNSET_CATEGORIES if x['synset'] in synset2lvisid} - print(len(coco2lviscats)) - - lvis_file2id = {x[file_name_key][-16:]: x['id'] for x in lvis_data['images']} - lvis_id2img = {x['id']: x for x in lvis_data['images']} - lvis_catid2name = {x['id']: x['name'] for x in lvis_data['categories']} - - coco_file2anns = {} - coco_id2img = {x['id']: x for x in coco_data['images']} - coco_img2anns = defaultdict(list) - for ann in coco_data['annotations']: - coco_img = coco_id2img[ann['image_id']] - file_name = coco_img['file_name'][-16:] - if ann['category_id'] in coco2lviscats and \ - file_name in lvis_file2id: - lvis_image_id = lvis_file2id[file_name] - lvis_image = lvis_id2img[lvis_image_id] - lvis_cat_id = coco2lviscats[ann['category_id']] - if lvis_cat_id in lvis_image['neg_category_ids']: - continue - if DEBUG: - import cv2 - img_path = IMG_PATH + file_name - img = cv2.imread(img_path) - print(lvis_catid2name[lvis_cat_id]) - print('neg', [lvis_catid2name[x] for x in 
lvis_image['neg_category_ids']]) - cv2.imshow('img', img) - cv2.waitKey() - ann['category_id'] = lvis_cat_id - ann['image_id'] = lvis_image_id - coco_img2anns[file_name].append(ann) - - lvis_img2anns = defaultdict(list) - for ann in lvis_data['annotations']: - lvis_img = lvis_id2img[ann['image_id']] - file_name = lvis_img[file_name_key][-16:] - lvis_img2anns[file_name].append(ann) - - ann_id_count = 0 - anns = [] - for file_name in lvis_img2anns: - coco_anns = coco_img2anns[file_name] - lvis_anns = lvis_img2anns[file_name] - ious = pairwise_iou( - Boxes(torch.tensor([get_bbox(x) for x in coco_anns])), - Boxes(torch.tensor([get_bbox(x) for x in lvis_anns])) - ) - - for ann in lvis_anns: - ann_id_count = ann_id_count + 1 - ann['id'] = ann_id_count - anns.append(ann) - - for i, ann in enumerate(coco_anns): - if len(ious[i]) == 0 or ious[i].max() < THRESH: - ann_id_count = ann_id_count + 1 - ann['id'] = ann_id_count - anns.append(ann) - else: - duplicated = False - for j in range(len(ious[i])): - if ious[i, j] >= THRESH and \ - coco_anns[i]['category_id'] == lvis_anns[j]['category_id']: - duplicated = True - if not duplicated: - ann_id_count = ann_id_count + 1 - ann['id'] = ann_id_count - anns.append(ann) - if NO_SEG: - for ann in anns: - del ann['segmentation'] - lvis_data['annotations'] = anns - - print('# Images', len(lvis_data['images'])) - print('# Anns', len(lvis_data['annotations'])) - json.dump(lvis_data, open(SAVE_PATH, 'w')) diff --git a/spaces/terfces0erbo/CollegeProjectV2/Adobeillustratorhighlycompresseddownload TOP.md b/spaces/terfces0erbo/CollegeProjectV2/Adobeillustratorhighlycompresseddownload TOP.md deleted file mode 100644 index ca82e9bffdf6ab859b8967f91eff5ca4496dc059..0000000000000000000000000000000000000000 --- a/spaces/terfces0erbo/CollegeProjectV2/Adobeillustratorhighlycompresseddownload TOP.md +++ /dev/null @@ -1,32 +0,0 @@ -
-

          How to Download Adobe Illustrator Highly Compressed for Free

          -

          Adobe Illustrator is one of the most popular and powerful vector graphics software that allows you to create logos, icons, illustrations, typography, and more. However, it can also be quite expensive and take up a lot of space on your computer. If you are looking for a way to download Adobe Illustrator highly compressed for free, then you have come to the right place. In this article, we will show you how to get Adobe Illustrator highly compressed download link and how to install it on your PC.

          -Adobe Illustrator logo -

          What is Adobe Illustrator Highly Compressed?

          -

          Adobe Illustrator highly compressed is a version of Adobe Illustrator that has been reduced in size by using various compression techniques. The original size of Adobe Illustrator is about 2 GB, but the highly compressed version is only about 200 MB. This means that you can download it faster and save more space on your hard drive. However, the highly compressed version may also have some drawbacks, such as lower quality, missing features, or compatibility issues. Therefore, you should always check the source and the reviews of the highly compressed file before downloading it.

          -

          adobeillustratorhighlycompresseddownload


          Download === https://bytlly.com/2uGjET



          -

          How to Download Adobe Illustrator Highly Compressed for Free?

          -

          There are many websites that claim to offer Adobe Illustrator highly compressed download link for free. However, not all of them are trustworthy or safe. Some of them may contain viruses, malware, or fake files that can harm your computer or steal your personal information. Therefore, you should always be careful and use a reliable antivirus software when downloading anything from the internet.

          -

          One of the websites that we recommend for downloading Adobe Illustrator highly compressed for free is https://example.com/adobe-illustrator-highly-compressed-download. This website has been tested and verified by many users and has positive feedback. It also provides a direct and fast download link without any surveys or ads. To download Adobe Illustrator highly compressed from this website, follow these steps:

          -
            -
1. Go to https://example.com/adobe-illustrator-highly-compressed-download and click on the download button.
2. Wait for the download to complete and save the file on your computer.
3. Extract the file using WinRAR or any other software that can open .rar files.
4. You will see a folder named "Adobe Illustrator Highly Compressed". Open it and run the setup.exe file.
5. Follow the instructions on the screen and install Adobe Illustrator on your PC.
6. Enjoy using Adobe Illustrator for free!

          Tips and Tricks for Using Adobe Illustrator Highly Compressed

          -

          Now that you have downloaded and installed Adobe Illustrator highly compressed on your PC, here are some tips and tricks that can help you use it better:

          -
            -
• Make sure that your PC meets the minimum system requirements for running Adobe Illustrator. You can check them here.
• Update your drivers and software regularly to avoid any errors or crashes.
• Use online tutorials and resources to learn how to use Adobe Illustrator effectively. You can find some of them here.
• Create backups of your work frequently and save them in different locations.
• If you encounter any problems or issues with Adobe Illustrator highly compressed, you can try to uninstall and reinstall it or contact the customer support.

          Conclusion

          -

In this article, we have shown you how to download Adobe Illustrator highly compressed for free and how to install it on your PC. We have also given you some tips and tricks for using it better. We hope that this article has been helpful.

          d5da3c52bf
          -
          -
          \ No newline at end of file diff --git a/spaces/terfces0erbo/CollegeProjectV2/Dicionario De Japones Portugues Download Em Pdf.md b/spaces/terfces0erbo/CollegeProjectV2/Dicionario De Japones Portugues Download Em Pdf.md deleted file mode 100644 index 6d1918a95ca76572946a5bf3afe24982ed36e7e7..0000000000000000000000000000000000000000 --- a/spaces/terfces0erbo/CollegeProjectV2/Dicionario De Japones Portugues Download Em Pdf.md +++ /dev/null @@ -1,6 +0,0 @@ -

          dicionario de japones portugues download em pdf


          Download File –––––>>> https://bytlly.com/2uGkMM



          -
-... Danish, Dutch, French, German, Greek, Hebrew, Italian, Japanese, Korean, Norwegian, Portuguese, Russian, Spanish, Swedish, and Turkish. Install the ... 4d29de3e1b
          -
          -
          -

          diff --git a/spaces/terfces0erbo/CollegeProjectV2/FULL Acronis Disk Director 12.0 Build 3223 Portable (x86 X64) [UPD].md b/spaces/terfces0erbo/CollegeProjectV2/FULL Acronis Disk Director 12.0 Build 3223 Portable (x86 X64) [UPD].md deleted file mode 100644 index b9ada0525adf4ae2c24a0f04fe99eeae6045a979..0000000000000000000000000000000000000000 --- a/spaces/terfces0erbo/CollegeProjectV2/FULL Acronis Disk Director 12.0 Build 3223 Portable (x86 X64) [UPD].md +++ /dev/null @@ -1,28 +0,0 @@ - -

          How to Manage Your Hard Disk Partitions with Acronis Disk Director 12.0 Portable

          -

          Acronis Disk Director 12.0 is a powerful and easy-to-use tool for managing your hard disk partitions. It allows you to create, resize, format, convert, copy, move, merge, split, delete, hide, and recover partitions on your hard disk. You can also perform backup and restore operations, optimize disk space usage, and preview changes before applying them to disk.

          -

          FULL Acronis Disk Director 12.0 Build 3223 Portable (x86 x64)


          DOWNLOAD ⚙⚙⚙ https://bytlly.com/2uGlWO



          -

          Acronis Disk Director 12.0 supports various file systems, such as FAT16, FAT32, NTFS, Ext2, Ext3, Linux, ReiserFS, and Swap. You can also convert between dynamic and basic disks, which are two different ways of organizing partitions on a hard disk.

          -

          One of the advantages of Acronis Disk Director 12.0 is that it is portable, which means you can run it from a USB flash drive or an external hard drive without installing it on your computer. This is useful if you want to manage partitions on multiple computers or if you want to avoid modifying your system registry.

          -

          In this article, we will show you how to download and use Acronis Disk Director 12.0 Portable (x86 x64) to manage your hard disk partitions.

          -

          How to Download Acronis Disk Director 12.0 Portable (x86 x64)

          -

          To download Acronis Disk Director 12.0 Portable (x86 x64), you can use the following link: https://tlniurl.com/2sMnLl. This link will take you to a page where you can choose between two versions of Acronis Disk Director 12.0 Portable: x86 (32-bit) and x64 (64-bit). You should choose the version that matches your computer's architecture.

          -

          -

          After choosing the version, you will be redirected to another page where you can download the file. The file size is about 300 MB and it is compressed in a ZIP archive. You will need a program like WinRAR or 7-Zip to extract the files from the archive.

          -

          How to Use Acronis Disk Director 12.0 Portable (x86 x64)

          -

          To use Acronis Disk Director 12.0 Portable (x86 x64), you need to extract the files from the ZIP archive to a folder on your USB flash drive or external hard drive. Then, you need to run the file named "AcronisDiskDirector.exe" from that folder.

          -

          This will launch the Acronis Disk Director 12.0 interface, which consists of three main sections: the toolbar, the disk map, and the operations list.

          -
            -
• The toolbar contains buttons for accessing various functions and settings of Acronis Disk Director 12.0.
• The disk map shows a graphical representation of your hard disk and its partitions. You can select a partition by clicking on it or by using the drop-down menu at the top.
• The operations list shows the actions that you have performed or planned to perform on your partitions. You can undo or redo any operation by clicking on the corresponding button at the bottom.

          To manage your partitions with Acronis Disk Director 12.0 Portable (x86 x64), you can use the following steps:

          -
            -
1. Select a partition that you want to modify from the disk map or the drop-down menu.
2. Choose an operation that you want to perform on the partition from the toolbar or the right-click menu. For example, you can resize, format, copy, move, merge, split, delete, hide, or recover a partition.
3. Adjust the parameters of the operation according to your needs. For example, you can specify the size, location, file system, label, or letter of a partition.
4. Click on "OK" or "Apply" to confirm the operation. You can also preview the changes before applying them by clicking on "Preview".
5. Repeat steps 1-4 for any other partitions that you want to modify.

            d5da3c52bf
            -
            -
            \ No newline at end of file diff --git a/spaces/tialenAdioni/chat-gpt-api/logs/Chahat Dual Audio In Hindi Hd 720p Torrent.md b/spaces/tialenAdioni/chat-gpt-api/logs/Chahat Dual Audio In Hindi Hd 720p Torrent.md deleted file mode 100644 index 6e9b88e45116d2ae955971868427ecdea54fbc39..0000000000000000000000000000000000000000 --- a/spaces/tialenAdioni/chat-gpt-api/logs/Chahat Dual Audio In Hindi Hd 720p Torrent.md +++ /dev/null @@ -1,24 +0,0 @@ -
            -

            Chahat Dual Audio In Hindi Hd 720p Torrent: How to Download and Watch Online

            -

            -

            Chahat Dual Audio In Hindi Hd 720p Torrent


            Downloadhttps://urlcod.com/2uK3GS



            -

            Chahat is a 1996 Bollywood film directed by Mahesh Bhatt and produced by Viral Lakhia. It is a remake of the 1986 Hollywood film The Man with Two Brains. The film revolves around Roop Singh Rathod (Shah Rukh Khan), a singer who falls in love with Reshma (Pooja Bhatt), the daughter of a wealthy businessman. However, Reshma's father disapproves of their relationship and tries to separate them by any means possible.

            -

            If you are a fan of Shah Rukh Khan and Pooja Bhatt, or if you enjoy romantic dramas with a touch of comedy, then you should definitely watch Chahat. The film has some memorable songs composed by Anu Malik, such as "Dil Ki Tanhai Ko", "Daddy Cool", and "Nahi Lagta". The film also features Naseeruddin Shah, Anupam Kher, Ramya Krishnan, and Mushtaq Khan in supporting roles.

            -

            How to Download Chahat Dual Audio in Hindi HD 720p Torrent

            -

            If you want to download Chahat dual audio in Hindi HD 720p torrent, then you need to follow these steps:

            -
              -
1. First, you need to have a torrent client installed on your device. Some popular torrent clients are uTorrent, BitTorrent, qBittorrent, etc.
2. Next, you need to find a reliable torrent site that has Chahat dual audio in Hindi HD 720p torrent available. Some popular torrent sites are The Pirate Bay, 1337x, RARBG, etc.
3. Then, you need to search for Chahat dual audio in Hindi HD 720p torrent on the torrent site. You can use the keyword or the magnet link to find the torrent.
4. After that, you need to click on the download button or the magnet link to start downloading the torrent. You can also choose the files that you want to download from the torrent.
5. Finally, you need to wait for the download to finish. Depending on your internet speed and the size of the torrent, it may take some time.

How to Watch Chahat Dual Audio in Hindi HD 720p Online

            -

            If you don't want to download Chahat dual audio in Hindi HD 720p torrent, then you can also watch it online. There are many streaming platforms that offer Chahat dual audio in Hindi HD 720p for free or with a subscription. Some of them are:

            -
              -
• Netflix: Netflix is one of the most popular streaming platforms in the world. It has a huge collection of movies and shows from different genres and languages. You can watch Chahat dual audio in Hindi HD 720p on Netflix with a monthly subscription.
• Hotstar: Hotstar is another popular streaming platform in India. It has a lot of content from Bollywood, Hollywood, regional cinema, sports, news, etc. You can watch Chahat dual audio in Hindi HD 720p on Hotstar for free with ads or with a premium subscription.
• YouTube: YouTube is the largest video-sharing platform in the world. It has millions of videos from various categories and creators. You can watch Chahat dual audio in Hindi HD 720p on YouTube for free with ads or with a YouTube Premium subscription.

              e93f5a0c3f
              -
              -
              \ No newline at end of file diff --git a/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Download Facebook Videos with yt-dlp A Complete Guide.md b/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Download Facebook Videos with yt-dlp A Complete Guide.md deleted file mode 100644 index 9b02819e5f4cb79155f1c4ce8b1d1569f8d35fcd..0000000000000000000000000000000000000000 --- a/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Download Facebook Videos with yt-dlp A Complete Guide.md +++ /dev/null @@ -1,139 +0,0 @@ - -

              How to Download Facebook Videos Using yt-dlp

              -

              Facebook is one of the most popular social media platforms in the world, with billions of users who share and watch videos every day. However, sometimes you may want to download a video from Facebook to your device for offline viewing, backup, or editing. Unfortunately, Facebook does not provide an official way to download videos from its website or app.

              -

              yt-dlp download facebook video


              Download File ———>>> https://bltlly.com/2uOpn1



              -

              That's where yt-dlp comes in handy. yt-dlp is a free and open-source command-line program that lets you easily download videos and audio from more than a thousand websites, including Facebook. In this article, we will show you how to install and use yt-dlp to download Facebook videos on Windows, Linux, and Mac OS.

              -

              What is yt-dlp?

              -

              yt-dlp is a fork of youtube-dl, a popular YouTube downloader that was created after the parent project became stagnant. yt-dlp is based on youtube-dlc, another fork of youtube-dl that added new features and patches. yt-dlp aims to keep up to date with the original project while also adding new features and improvements.

              -

              Some of the features of yt-dlp are:

              -

              -
                -
• SponsorBlock integration: You can mark or remove sponsor sections in YouTube videos by utilizing the SponsorBlock API (a sample command is shown just after this list).
              • -
              • Format sorting: The default format sorting options have been changed so that higher resolution and better codecs will be preferred over larger bitrate.
              • -
              • Plugin support: You can install or develop plugins to extend the functionality of yt-dlp.
              • -
              • Nightly builds: You can get the latest fixes and features by downloading the nightly builds.
              • -
              • And more: You can check out the full list of features on the GitHub page.
              • -
              -
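As a quick illustration of the SponsorBlock integration, a command along the lines of the one below strips sponsor segments while downloading; note that this feature applies to YouTube rather than Facebook, and the URL is only a placeholder:

-
yt-dlp --sponsorblock-remove sponsor https://www.youtube.com/watch?v=EXAMPLE_ID 
-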

              Why use yt-dlp to download Facebook videos?

              -

              There are many reasons why you may want to use yt-dlp to download Facebook videos instead of other methods. Here are some of them:

              -
                -
• yt-dlp is fast and reliable. It downloads Facebook videos quickly and can resume interrupted downloads instead of starting over.
              • -
              • yt-dlp is flexible and customizable. You can choose the format, quality, filename, and location of the downloaded video. You can also apply post-processing actions such as extracting audio, embedding subtitles, or converting formats.
              • -
• yt-dlp is secure and private. It does not require any login or registration to download public videos from Facebook, and it does not collect or share any personal data.
              • -
              • yt-dlp is free and open-source. You can use it without any limitations or costs. You can also inspect or modify its source code if you want.
              • -
              -

              How to install yt-dlp

              -

              The easiest way to install yt-dlp is to download the executable file from the GitHub releases page. You can also install it using pip or compile it from source if you prefer. Here are the steps for each method:

              -

              Windows

              -
                -
1. Download the executable file from the GitHub releases page and save it in a folder of your choice. The standard Windows build is already named yt-dlp.exe; if you downloaded a differently named build, you can rename it to yt-dlp.exe for convenience.

              2. -
3. Add the folder where you saved the executable file to your system's PATH environment variable. This will allow you to run yt-dlp from any command prompt or terminal. You can follow this guide on how to do that, or use the sample command shown right after this list.
              4. -
              5. Open a command prompt or terminal and type yt-dlp --version to check if it is installed correctly. You should see something like this:
              6. -
              -
              yt-dlp 2021.10.22 
              -
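If you prefer the command line to the Windows environment-variables dialog, a one-liner like the one below can append the folder to your user PATH. It is only an illustration and assumes yt-dlp.exe was saved to C:\yt-dlp; note that setx only affects newly opened command prompts and truncates very long PATH values:

-
setx PATH "%PATH%;C:\yt-dlp" 
-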

              Linux

              -
                -
              1. Install pip, a package manager for Python, if you don't have it already. You can follow this guide on how to do that.
              2. -
              3. Open a terminal and type pip install --upgrade yt-dlp to install yt-dlp using pip. You may need to use sudo depending on your permissions.
              4. -
              5. Type yt-dlp --version to check if it is installed correctly. You should see something like this:
              6. -
              -
              yt-dlp 2021.10.22 
              -

              Mac OS

              -
                -
              1. Install Homebrew, a package manager for Mac OS, if you don't have it already. You can follow this guide on how to do that.
              2. -
              3. Open a terminal and type brew install yt-dlp to install yt-dlp using Homebrew.
              4. -
              5. Type yt-dlp --version to check if it is installed correctly. You should see something like this:
              6. -
              -
              yt-dlp 2021.10.22 
              -

              How to use yt-dlp to download Facebook videos

              -

              Once you have installed yt-dlp, you can start using it to download Facebook videos. Here are the basic steps:

              -

              Basic usage

              -
                -
              1. Find the URL of the Facebook video that you want to download. You can copy it from the address bar of your browser or right-click on the video and select Copy video URL.
              2. -
              3. Open a command prompt or terminal and type yt-dlp followed by the URL of the video. For example:
              4. -
              -
              yt-dlp https://www.facebook.com/watch/?v=123456789012345 
              -

              This will download the video in the best available quality and save it in the current working directory with the default filename.
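If you would rather save downloads somewhere else without touching the filename template, the -P (or --paths) option sets the target folder. The path below is just an example; adjust it for your system:

-
yt-dlp -P ~/Videos https://www.facebook.com/watch/?v=123456789012345 
-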

              -

              Advanced options

              -

              If you want more control over the downloading process, you can use some of the advanced options that yt-dlp offers. Here are some of them:

              -

              Format selection

              -

              You can specify the format of the video that you want to download using the -f or --format option. You can use a simple syntax such as best, worst, mp4, webm, etc., or a more complex syntax that allows you to filter and sort formats based on various criteria such as resolution, bitrate, codec, filesize, etc. You can also use a comma-separated list of formats to download multiple formats at once.

              -

              For example, if you want to download the video in 720p MP4 format, you can use this command:

              -
              yt-dlp -f "bestvideo[height<=720][ext=mp4]+bestaudio[ext=m4a]/best[height<=720][ext=mp4]" https://www.facebook.com/watch/?v=123456789012345 
              -

              This will download the best video stream that is 720p or lower and has MP4 extension, and the best audio stream that has M4A extension, and merge them into a single MP4 file. If such a format is not available, it will fall back to the best format that is 720p or lower and has MP4 extension.

              -

              You can learn more about the format selection syntax on the GitHub page.
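If you are not sure which formats a particular video offers, you can list them first with -F and then request one by its format ID. The IDs below are placeholders; the actual IDs vary from video to video:

-
yt-dlp -F https://www.facebook.com/watch/?v=123456789012345 
yt-dlp -f 302+251 https://www.facebook.com/watch/?v=123456789012345 
-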

              -

              Output template

              -

              You can customize the filename and location of the downloaded video using the -o or --output option. You can use various placeholders such as %(title)s, %(id)s, %(uploader)s, %(format)s, etc., to include information from the video metadata in the filename. You can also use slashes (/) to create subdirectories.

              -

For example, if you want to save the video in a folder named Facebook with the title and uploader of the video in the filename and the correct file extension, you can use this command:

              -
              yt-dlp -o "Facebook/%(title)s - %(uploader)s.%(format)s" https://www.facebook.com/watch/?v=123456789012345 
              -

              This will save the video as something like Facebook/How to make a cake - Martha Stewart.mp4.
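Titles scraped from Facebook often contain spaces and special characters. If you want plain ASCII filenames, --restrict-filenames can be combined with the same template; this command is only a sketch:

-
yt-dlp --restrict-filenames -o "Facebook/%(title)s - %(uploader)s.%(ext)s" https://www.facebook.com/watch/?v=123456789012345 
-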

              -

              You can learn more about the output template syntax on the GitHub page.

              -

              Post-processing

              -

You can apply various post-processing actions to the downloaded video using options such as --extract-audio, --remux-video, --embed-subs, and --embed-thumbnail; the --postprocessor-args option passes extra arguments to the post-processors. yt-dlp relies on external programs such as FFmpeg or AtomicParsley to perform post-processing, so make sure they are installed.

              -

              For example, if you want to extract the audio from the video and convert it to MP3 format, you can use this command:

              -
              yt-dlp --extract-audio --audio-format mp3 https://www.facebook.com/watch/?v=123456789012345 
              -

              This will save the audio as something like How to make a cake - Martha Stewart.mp3.
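Post-processing is not limited to audio extraction. For example, the following command remuxes the download into an MP4 container (without re-encoding, when the codecs allow it) and embeds the video thumbnail; it assumes FFmpeg is installed and available on your PATH:

-
yt-dlp --remux-video mp4 --embed-thumbnail https://www.facebook.com/watch/?v=123456789012345 
-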

              -

              You can learn more about the post-processing options on the GitHub page.

              -

              Conclusion

              -

              In this article, we have shown you how to download Facebook videos using yt-dlp, a free and open-source command-line program that lets you easily download videos and audio from more than a thousand websites. We have also explained some of the features, options, and syntax of yt-dlp that you can use to customize your downloading experience.

              -

              We hope you have found this article helpful and informative. If you have any questions or feedback, please feel free to leave a comment below. Happy downloading!

              -

              FAQs

              -
                -
              1. Is yt-dlp legal?
              2. -

The yt-dlp program itself is legal to use, but whether downloading a particular video is allowed depends on where you live and on the copyright status of the content. In general, stick to personal, non-commercial use, always respect the terms of service and copyright laws of the websites that you download from, and obtain permission from the content owners before downloading or redistributing their videos.

                -
              3. How can I update yt-dlp?
              4. -

You can update yt-dlp by using the -U or --update option if you installed the standalone executable. This will check for the latest version of yt-dlp and download it if available. If you installed yt-dlp through pip or Homebrew, update it through that package manager instead (for example, pip install --upgrade yt-dlp or brew upgrade yt-dlp). For the standalone executable:

                -
                yt-dlp -U 
                -
              5. How can I download a playlist or a channel from Facebook?
              6. -

                You can download a playlist or a channel from Facebook by using the same command as for a single video, but with the URL of the playlist or the channel instead of the video. yt-dlp will automatically detect and download all the videos in the playlist or the channel. You can also use the --playlist-start, --playlist-end, or --playlist-items options to limit the number or range of videos to download.
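For instance, to grab only the first five videos from a page's video listing, you could run something like the command below. The URL is a placeholder, and how completely Facebook pages are enumerated can vary between yt-dlp versions:

-
yt-dlp --playlist-items 1-5 "https://www.facebook.com/SomePage/videos/" 
-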

                -
              7. How can I download videos from other websites using yt-dlp?
              8. -

                You can download videos from other websites using yt-dlp by using the same command as for Facebook, but with the URL of the video from the other website instead of Facebook. yt-dlp supports more than a thousand websites, including YouTube, Instagram, Twitter, TikTok, Vimeo, Dailymotion, and more. You can check out the full list of supported sites on the GitHub page.
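For example, a YouTube or Vimeo video is downloaded with exactly the same one-line command; both URLs below are placeholders:

-
yt-dlp https://www.youtube.com/watch?v=EXAMPLE_ID 
yt-dlp https://vimeo.com/123456789 
-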

                -
              9. How can I get help or report issues with yt-dlp?
              10. -

                You can get help or report issues with yt-dlp by visiting the GitHub page. There you can find the documentation, wiki, issues tracker, discussions forum, and more. You can also join the Discord server or the Reddit community to chat with other users and developers of yt-dlp.

                -

              -
              -
              \ No newline at end of file diff --git a/spaces/timpal0l/chat-ui/src/lib/server/database.ts b/spaces/timpal0l/chat-ui/src/lib/server/database.ts deleted file mode 100644 index b2ef44534de80dbb40c6cbba85536f5effda1281..0000000000000000000000000000000000000000 --- a/spaces/timpal0l/chat-ui/src/lib/server/database.ts +++ /dev/null @@ -1,23 +0,0 @@ -import { MONGODB_URL, MONGODB_DB_NAME } from "$env/static/private"; -import { MongoClient } from "mongodb"; -import type { Conversation } from "$lib/types/Conversation"; -import type { SharedConversation } from "$lib/types/SharedConversation"; - -const client = new MongoClient(MONGODB_URL, { - // directConnection: true -}); - -export const connectPromise = client.connect().catch(console.error); - -const db = client.db(MONGODB_DB_NAME); - -const conversations = db.collection("conversations"); -const sharedConversations = db.collection("sharedConversations"); - -export { client, db }; -export const collections = { conversations, sharedConversations }; - -client.on("open", () => { - conversations.createIndex({ sessionId: 1, updatedAt: -1 }); - sharedConversations.createIndex({ hash: 1 }, { unique: true }); -}); diff --git a/spaces/tioseFevbu/cartoon-converter/scripts/Adobe After Effects CC 2018 V18.2.2.192 Crack .rar HOT.md b/spaces/tioseFevbu/cartoon-converter/scripts/Adobe After Effects CC 2018 V18.2.2.192 Crack .rar HOT.md deleted file mode 100644 index bc45d4d5f57fd67975af576321ca1787eca9cabb..0000000000000000000000000000000000000000 --- a/spaces/tioseFevbu/cartoon-converter/scripts/Adobe After Effects CC 2018 V18.2.2.192 Crack .rar HOT.md +++ /dev/null @@ -1,28 +0,0 @@ - -

              How to Download and Install Adobe After Effects CC 2018 V18.2.2.192 Crack .rar

              -

Adobe After Effects CC 2018 is a powerful application for creating stunning visual effects and motion graphics. It is widely used by professionals and amateurs alike for video editing, animation, compositing, and more. However, the software is not free and requires a subscription to use. If you want to use Adobe After Effects CC 2018 without paying, you might be tempted to download a cracked version from the internet. But is it safe and legal to do so?

              -

              In this article, we will explain what a crack is, how it works, and what are the risks and consequences of using a cracked version of Adobe After Effects CC 2018. We will also show you how to download and install the software from a reliable source.

              -

              Adobe After Effects CC 2018 V18.2.2.192 Crack .rar


              Download File ✑ ✑ ✑ https://urlcod.com/2uHx54



              -

              What is a crack and how does it work?

              -

              A crack is a modified version of a software that bypasses or removes its security features, such as activation, license verification, or copy protection. A crack can be a file, a program, or a code that is applied to the original software to make it run without restrictions. A crack can also be a compressed file that contains the cracked software and other files needed to install it.

              -

              One example of a crack is Adobe After Effects CC 2018 V18.2.2.192 Crack .rar. This is a compressed file that contains the cracked version of Adobe After Effects CC 2018 and other files needed to install it. To use this crack, you need to download it from the internet, extract it using a program like WinRAR or 7-Zip, and run the setup file.

              -

              What are the risks and consequences of using a cracked version of Adobe After Effects CC 2018?

              -

              While using a cracked version of Adobe After Effects CC 2018 might seem like an easy and cheap way to access the software, it comes with many risks and consequences that you should be aware of before downloading it.

              -
                -
              • It is illegal. Using a cracked version of Adobe After Effects CC 2018 violates the terms and conditions of the software license agreement. You are essentially stealing the software from its developers and distributors. This can result in legal actions against you, such as fines, lawsuits, or even criminal charges.
              • -
              • It is unsafe. Downloading a cracked version of Adobe After Effects CC 2018 from an unknown source can expose your computer to viruses, malware, spyware, ransomware, or other harmful programs that can damage your system or steal your personal information. You might also encounter fake or corrupted files that do not work properly or at all.
              • -
              • It is unreliable. Using a cracked version of Adobe After Effects CC 2018 can cause many problems with the software's performance and functionality. You might experience errors, crashes, glitches, bugs, compatibility issues, missing features, or reduced quality. You might also lose access to updates, support, tutorials, or online services that are available for the legitimate version of the software.
              • -
              • It is unethical. Using a cracked version of Adobe After Effects CC 2018 deprives the software's developers and distributors of their rightful income and recognition for their hard work and creativity. You are also hurting other users who pay for the software and expect a fair and secure experience.
              • -
              -

              How to download and install Adobe After Effects CC 2018 from a reliable source?

              -

              If you want to use Adobe After Effects CC 2018 legally and safely, you should download and install it from its official website or an authorized distributor. Here are the steps to do so:

              -
                -
              1. Go to https://www.adobe.com/products/aftereffects.html and click on "Free Trial" or "Buy Now".
              2. -
              3. Create an Adobe account or sign in with your existing one.
              4. -
5. Choose your plan and payment method. You can opt for a monthly subscription, an annual plan paid monthly, or an annual plan prepaid up front.
              6. -
              7. Download the Creative Cloud desktop app and install it on your computer.
              8. -
              9. Launch the Creative Cloud desktop app and sign in with your Adobe account.
              10. -
11. Select "Apps" in the Creative Cloud desktop app, find Adobe After Effects CC in the list, and click "Install". Once the installation is complete, you can launch After Effects from the Creative Cloud app or your applications menu.

                -

                -
                -
                \ No newline at end of file diff --git a/spaces/tioseFevbu/cartoon-converter/scripts/Download Skenario Film Laskar Pelangi.md b/spaces/tioseFevbu/cartoon-converter/scripts/Download Skenario Film Laskar Pelangi.md deleted file mode 100644 index f18250f4713154595a4beebca59b40b460023b75..0000000000000000000000000000000000000000 --- a/spaces/tioseFevbu/cartoon-converter/scripts/Download Skenario Film Laskar Pelangi.md +++ /dev/null @@ -1,22 +0,0 @@ -
                -

Download Skenario Film Laskar Pelangi: How to Get the Screenplay of the Film Adapted from the Bestselling Novel

                -

Laskar Pelangi is a 2008 Indonesian drama film directed by Riri Riza from a screenplay written by Salman Aristo together with Riri and Mira Lesmana, based on the novel of the same name by Andrea Hirata[^3^]. The film was produced by Miles Films together with Mizan Productions and SinemArt. It tells the story of ten children from underprivileged families who attend a school in the Belitung region that is full of limitations. The film received a great deal of praise and many awards, from critics and audiences alike, and became the best-selling Indonesian film of all time with an audience of more than 5 million people.

                -

                Download Skenario Film Laskar Pelangi


                Download ►►► https://urlcod.com/2uHxtU



                -

Many people are curious about the Laskar Pelangi screenplay, whether to learn how to write screenplays, to get to know the story and its characters more deeply, or simply to enjoy a quality literary work. But how do you get hold of the Laskar Pelangi screenplay? Is there anywhere to download it for free or for a fee? Here are a few approaches you can try:

                -
                  -
• One of the easiest ways to get the Laskar Pelangi screenplay is to visit the IDS | International Design School website[^1^], an educational institution that offers a college program in digital film and media production. On this site you can find an article that breaks down the Laskar Pelangi screenplay, complete with a link to download the screenplay in PDF format. You can download the screenplay for free here.
                • -
• Another way to get the screenplay is to visit Downmovie21[^2^], a site that offers free downloads of Indonesian and foreign films. On this site you can find a link to download Laskar Pelangi (2008) with Indonesian subtitles in 1080p quality. In addition, you can find a link to download the screenplay in PDF format at the bottom of the page. You can download the screenplay for free here.
                • -
• A third way to get the screenplay is to visit SoundCloud[^3^], an online audio platform that lets users upload, listen to, and share music and sounds. On this site you can find an audio post titled "|WORK| Download Skenario Film Laskar Pelangi" created by a user named taupeferyi2. In the description of this audio you can find a link to download the screenplay in PDF format. You can download the screenplay for free here.
                • -
                -

Those are a few ways to download the Laskar Pelangi screenplay that you can try. We hope this is useful; enjoy this extraordinary literary work.

                - -

Besides the screenplay, you can also find other information about the film, such as the synopsis, trailer, reviews, ratings, and trivia. Here are a few sources you can visit:

                -
                  -
• The official Laskar Pelangi website, which provides various information about the film, such as the synopsis, trailer, photo gallery, cast and crew profiles, the latest news, and contact details. You can also find links to buy the Laskar Pelangi DVD or merchandise here.
                • -
• IMDb, the largest online database of films, television, and celebrities. On this site you can find detailed information about Laskar Pelangi, such as the plot summary, cast and crew, awards and nominations, user reviews, user ratings, trivia, goofs, quotes, and more. You can also submit your own rating and review of Laskar Pelangi here.
                • -
• Wikipedia, the free online encyclopedia that anyone can edit. On this site you can find an article about Laskar Pelangi containing general information about the film, such as its production background, plot, cast and characters, critical and audience reception, awards and nominations, and references. You can also contribute your own knowledge about Laskar Pelangi here.
                • -
                -

Those are some of the sources of information about Laskar Pelangi that you can access. Laskar Pelangi is a film worth watching and studying for anyone who enjoys inspiring and moving stories. The film also shows off the beauty and cultural richness of Indonesia, which is something to be proud of. We hope this article is useful; enjoy watching Laskar Pelangi.

                -

                -
                -
                \ No newline at end of file diff --git a/spaces/tioseFevbu/cartoon-converter/scripts/Little Big Planet 2 Xbox 360 Downloadl.md b/spaces/tioseFevbu/cartoon-converter/scripts/Little Big Planet 2 Xbox 360 Downloadl.md deleted file mode 100644 index c79fd7d41576701c7ee2ca4f000efb7bb1bd4d7e..0000000000000000000000000000000000000000 --- a/spaces/tioseFevbu/cartoon-converter/scripts/Little Big Planet 2 Xbox 360 Downloadl.md +++ /dev/null @@ -1,26 +0,0 @@ - -

                How to Download Little Big Planet 2 for Xbox 360

                -

                If you are a fan of the popular platforming game Little Big Planet, you might be wondering if you can play the sequel, Little Big Planet 2, on your Xbox 360 console. The answer is yes, but you will need to follow some steps to make it work.

                -

                Little Big Planet 2 Xbox 360 Downloadl


                DOWNLOADhttps://urlcod.com/2uHxwg



                -

                Little Big Planet 2 is a game that was originally released for the PlayStation 3 in 2011. It is a creative sandbox game that lets you create and share your own levels, characters, and mini-games with other players online. The game features a story mode that follows the adventures of Sackboy and his friends as they try to stop the evil Negativitron from destroying the world of imagination.

                -

Unfortunately, Little Big Planet 2 is not officially available for the Xbox 360. However, there is a way to download and play it on your console using software called Xenia. Xenia is an emulator that allows you to run Xbox 360 games on your PC. You can then stream the game from your PC to your Xbox 360 using a program called Streamlabs OBS.

                -

                Here are the steps you need to follow to download Little Big Planet 2 for Xbox 360:

                -
                  -
                1. Download and install Xenia from https://xenia.jp/. Make sure your PC meets the minimum requirements for running the emulator.
                2. -
                3. Download and install Streamlabs OBS from https://streamlabs.com/. This is a software that lets you stream games and videos from your PC to your Xbox 360.
                4. -
                5. Download the ISO file of Little Big Planet 2 from a trusted source. You can search online for websites that offer game downloads. Make sure the file is compatible with Xenia and does not contain any viruses or malware.
                6. -
                7. Launch Xenia and load the ISO file of Little Big Planet 2. You should see the game running on your PC screen.
                8. -
                9. Launch Streamlabs OBS and add a new source. Choose "Game Capture" and select Xenia as the window. You should see the game capture on your Streamlabs OBS screen.
                10. -
                11. Connect your Xbox 360 to your PC using an HDMI cable. Make sure both devices are turned on and connected to the same network.
                12. -
                13. On your Xbox 360, go to Settings > System > Console Settings > Display > HDTV Settings and choose "1080p". This will ensure the best quality for streaming.
                14. -
                15. On your Streamlabs OBS, click on "Go Live" and choose "Xbox Live" as the destination. You will need to sign in with your Microsoft account and authorize Streamlabs OBS to access your Xbox Live account.
                16. -
                17. Once you are live, you should see the game streaming on your Xbox 360 screen. You can use your Xbox 360 controller to play the game as usual.
                18. -
                -

                Congratulations! You have successfully downloaded and played Little Big Planet 2 on your Xbox 360. Enjoy the game and share your creations with other players online.

                -

                - -

                Little Big Planet 2 is not just a game, but a platform for games. The game features a powerful creation mode that lets you design and share your own levels, characters, and mini-games with other players online. You can use a variety of tools, materials, stickers, and logic to create anything you can imagine. You can also remix and modify the levels created by other players and give them feedback.

                -

                The game also has a story mode that consists of six themed worlds, each with its own set of levels and challenges. You can play the story mode alone or with up to three friends online or offline. The story mode introduces new gameplay elements such as grappling hooks, power-ups, bounce pads, and sackbots. Sackbots are customizable robots that can follow your commands and help you in your missions.

                -

                Little Big Planet 2 is a game that celebrates creativity and fun. It has a charming and colorful art style, a whimsical and humorous tone, and a diverse and eclectic soundtrack. The game also features voice acting from celebrities such as Stephen Fry, Hugh Laurie, Tara Strong, and Andy Serkis. The game has received critical acclaim and won several awards for its innovation and quality.

                -
                -
                \ No newline at end of file diff --git a/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/setuptools/config/_validate_pyproject/fastjsonschema_exceptions.py b/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/setuptools/config/_validate_pyproject/fastjsonschema_exceptions.py deleted file mode 100644 index d2dddd6a106f021a4723c1e8f5953ccc09e55e1f..0000000000000000000000000000000000000000 --- a/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/setuptools/config/_validate_pyproject/fastjsonschema_exceptions.py +++ /dev/null @@ -1,51 +0,0 @@ -import re - - -SPLIT_RE = re.compile(r'[\.\[\]]+') - - -class JsonSchemaException(ValueError): - """ - Base exception of ``fastjsonschema`` library. - """ - - -class JsonSchemaValueException(JsonSchemaException): - """ - Exception raised by validation function. Available properties: - - * ``message`` containing human-readable information what is wrong (e.g. ``data.property[index] must be smaller than or equal to 42``), - * invalid ``value`` (e.g. ``60``), - * ``name`` of a path in the data structure (e.g. ``data.property[index]``), - * ``path`` as an array in the data structure (e.g. ``['data', 'property', 'index']``), - * the whole ``definition`` which the ``value`` has to fulfil (e.g. ``{'type': 'number', 'maximum': 42}``), - * ``rule`` which the ``value`` is breaking (e.g. ``maximum``) - * and ``rule_definition`` (e.g. ``42``). - - .. versionchanged:: 2.14.0 - Added all extra properties. - """ - - def __init__(self, message, value=None, name=None, definition=None, rule=None): - super().__init__(message) - self.message = message - self.value = value - self.name = name - self.definition = definition - self.rule = rule - - @property - def path(self): - return [item for item in SPLIT_RE.split(self.name) if item != ''] - - @property - def rule_definition(self): - if not self.rule or not self.definition: - return None - return self.definition.get(self.rule) - - -class JsonSchemaDefinitionException(JsonSchemaException): - """ - Exception raised by generator of validation function. 
- """ diff --git a/spaces/tobiascz/SDSdemo/pytorch_grad_cam/utils/find_layers.py b/spaces/tobiascz/SDSdemo/pytorch_grad_cam/utils/find_layers.py deleted file mode 100644 index 8373a48bace9bf2314bb06acac7664ec48504355..0000000000000000000000000000000000000000 --- a/spaces/tobiascz/SDSdemo/pytorch_grad_cam/utils/find_layers.py +++ /dev/null @@ -1,30 +0,0 @@ -def replace_layer_recursive(model, old_layer, new_layer): - for name, layer in model._modules.items(): - if layer == old_layer: - model._modules[name] = new_layer - return True - elif replace_layer_recursive(layer, old_layer, new_layer): - return True - return False - - -def replace_all_layer_type_recursive(model, old_layer_type, new_layer): - for name, layer in model._modules.items(): - if isinstance(layer, old_layer_type): - model._modules[name] = new_layer - replace_all_layer_type_recursive(layer, old_layer_type, new_layer) - - -def find_layer_types_recursive(model, layer_types): - def predicate(layer): - return type(layer) in layer_types - return find_layer_predicate_recursive(model, predicate) - - -def find_layer_predicate_recursive(model, predicate): - result = [] - for name, layer in model._modules.items(): - if predicate(layer): - result.append(layer) - result.extend(find_layer_predicate_recursive(layer, predicate)) - return result \ No newline at end of file diff --git a/spaces/tomofi/MMOCR/tests/test_tools/test_data_converter.py b/spaces/tomofi/MMOCR/tests/test_tools/test_data_converter.py deleted file mode 100644 index 76ff0047fcaedb3940e1ae487ff6c653f9989f06..0000000000000000000000000000000000000000 --- a/spaces/tomofi/MMOCR/tests/test_tools/test_data_converter.py +++ /dev/null @@ -1,19 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -"""Test orientation check and ignore method.""" - -import shutil -import tempfile - -from mmocr.utils import drop_orientation - - -def test_drop_orientation(): - img_file = 'tests/data/test_img2.jpg' - output_file = drop_orientation(img_file) - assert output_file is img_file - - img_file = 'tests/data/test_img1.jpg' - tmp_dir = tempfile.TemporaryDirectory() - dst_file = shutil.copy(img_file, tmp_dir.name) - output_file = drop_orientation(dst_file) - assert output_file[-3:] == 'png' diff --git a/spaces/tonyassi/controlnet-explorer/README.md b/spaces/tonyassi/controlnet-explorer/README.md deleted file mode 100644 index 5f1776255b8003dbffbd0a26661bb3a31559110f..0000000000000000000000000000000000000000 --- a/spaces/tonyassi/controlnet-explorer/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Controlnet Explorer -emoji: 🚀 -colorFrom: indigo -colorTo: purple -sdk: gradio -sdk_version: 3.47.1 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/twdac/BuChengFangYuan-ChineseJapaneseTranslation/app/my_py_lib/draw_repel_code_tool.py b/spaces/twdac/BuChengFangYuan-ChineseJapaneseTranslation/app/my_py_lib/draw_repel_code_tool.py deleted file mode 100644 index 4cdc9c2ebc01a5a464953447f3f87e6ed938b5a1..0000000000000000000000000000000000000000 --- a/spaces/twdac/BuChengFangYuan-ChineseJapaneseTranslation/app/my_py_lib/draw_repel_code_tool.py +++ /dev/null @@ -1,143 +0,0 @@ -''' -The algorithm comes from the paper -"Enhanced Center Coding for Cell Detection with Convolutional Neural Networks" -https://arxiv.org/abs/1904.08864?context=cs.CV - -''' - -import numpy as np -from skimage.draw import disk as sk_disk - - -def draw_repel_code_ori(im: np.ndarray, pts: np.ndarray, r, A=1): - ''' - :param 
im: [H, W, 1] The image to be drawn. - :param pts: [N, yx] Input points. You need to ensure that the each point is unique. - :param r: The radius of the drawing area. - :param A: Control the gradual change. - :return: - ''' - if not isinstance(pts, np.ndarray): - pts = np.asarray(pts, np.int) - assert im.ndim == 3 and im.shape[2] == 1 - - wait_check_pixs = [] - for pt in pts: - rr, cc = sk_disk(pt, r, shape=im.shape[:2]) - pts2 = np.stack([rr, cc], 1) - pts2 = np.asarray(pts2, np.int32) - wait_check_pixs.extend(pts2) - check_pixs = np.unique(wait_check_pixs, axis=0) - - # # 源代码,很慢 - # for pix in check_pixs: - # ds = np.linalg.norm(pix[None] - pts, 2, 1) - # seq_id = np.argsort(ds)[:2] - # # close_2pts = list(pts[seq_id]) - # close_2pts_ds = list(ds[seq_id]) - # C = 0 - # if len(close_2pts_ds) == 2: - # D = close_2pts_ds[0] * (1 + close_2pts_ds[0] / (np.clip(close_2pts_ds[1], 1e-8, None))) - # if D < r: - # C = 1 / (1 + D * A) - # elif len(close_2pts_ds) == 1: - # D = close_2pts_ds[0] - # if D < r: - # C = 1 / (1 + D * A) - # im[pix[0], pix[1]] = C - - # # 这里为上面的并行加速代码。但还有待优化,内存占用太大了。 - # big_ds = np.linalg.norm(np.asarray(check_pixs[:, None] - pts[None], np.float32), 2, 2) - # big_seq_id = np.argsort(big_ds, axis=-1)[:, :2] - # big_close_2pts_ds = np.take_along_axis(big_ds, big_seq_id, 1) - # - # if big_close_2pts_ds.shape[1] == 2: - # D = big_close_2pts_ds[:, 0] * (1 + big_close_2pts_ds[:, 0] / (np.clip(big_close_2pts_ds[:, 1], 1e-8, None))) - # C = np.where(D < r, 1 / (1 + D * A), 0) - # elif big_close_2pts_ds.shape[1] == 1: - # D = big_close_2pts_ds[:, 0] - # C = np.where(D < r, 1 / (1 + D * A), 0) - # else: - # return im - # im[check_pixs[:, 0], check_pixs[:, 1]] = C[:, None] - - # 这里为上面的并行加速代码的分段代码,一次性查全部点,占用内存过于吓人。 - n_each_draw = 5000 - draw_count = int(np.ceil(len(check_pixs) / n_each_draw)) - for i in range(draw_count): - cur_check_pixs = check_pixs[i*n_each_draw:(i+1)*n_each_draw] - - big_ds = np.linalg.norm(np.asarray(cur_check_pixs[:, None] - pts[None], np.float32), 2, 2) - big_seq_id = np.argsort(big_ds, axis=-1)[:, :2] - big_close_2pts_ds = np.take_along_axis(big_ds, big_seq_id, 1) - - if big_close_2pts_ds.shape[1] == 2: - D = big_close_2pts_ds[:, 0] * (1 + big_close_2pts_ds[:, 0] / (np.clip(big_close_2pts_ds[:, 1], 1e-8, None))) - C = np.where(D < r, 1 / (1 + D * A), 0) - elif big_close_2pts_ds.shape[1] == 1: - D = big_close_2pts_ds[:, 0] - C = np.where(D < r, 1 / (1 + D * A), 0) - else: - # 这里是一次性检查全部点,如果没有,可以认为后面也没有了,可以直接退出 - break - im[cur_check_pixs[:, 0], cur_check_pixs[:, 1]] = C[:, None] - - return im - - -def draw_repel_code_fast(im: np.ndarray, pts: np.ndarray, r, A=1): - ''' - Another faster acceleration method code of the above function, but there are accuracy problems. - :param im: [H, W, 1] The image to be drawn. - :param pts: [N, yx] Input points. You need to ensure that the each point is unique. - :param r: The radius of the drawing area. - :param A: Control the gradual change. 
- :return: - ''' - assert im.ndim == 3 and im.shape[2] == 1 - - for pt in pts: - ds = np.linalg.norm(pt[None,] - pts, 2, 1) - # 注意,这个点集已经包含了待查询点自身 - # 该处r*2是为了增加更多的检查点,减少不准确问题 - close_pts = pts[ds <= r * 2] - rr, cc = sk_disk(pt, r, shape=im.shape[:2]) - pts2 = np.stack([rr, cc], 1) - pts2 = np.asarray(pts2, np.int32) - - big_ds = np.linalg.norm(np.asarray(pts2[:, None] - close_pts[None], np.float32), 2, 2) - # big_ds = np.linalg.norm(np.asarray(close_pts[None,] - pts2[:, None], np.float32), 2, 2) - big_seq_id = np.argsort(big_ds, axis=-1)[:, :2] - big_close_2pts_ds = np.take_along_axis(big_ds, big_seq_id, 1) - - if big_close_2pts_ds.shape[1] == 2: - D = big_close_2pts_ds[:, 0] * (1 + big_close_2pts_ds[:, 0] / (np.clip(big_close_2pts_ds[:, 1], 1e-8, None))) - C = np.where(D < r, 1 / (1 + D * A), 0) - elif big_close_2pts_ds.shape[1] == 1: - D = big_close_2pts_ds[:, 0] - C = np.where(D < r, 1 / (1 + D * A), 0) - else: - # return im - continue - im[pts2[:, 0], pts2[:, 1]] = C[:, None] - - return im - - -if __name__ == '__main__': - import cv2 - - im = np.zeros([512, 512, 1], np.float32) - # 注意输入的pt坐标顺序为yx,而不是xy - pts = np.random.randint(0, 512, [100, 2], np.int32) - - # 不允许有重复的坐标 - pts = np.unique(pts, axis=0) - - oim1 = draw_repel_code_ori(im, pts, 9) - oim2 = draw_repel_code_fast(im, pts, 9) - - cv2.imshow('oim1', oim1) - cv2.imshow('oim2', oim2) - - cv2.waitKey(0) diff --git a/spaces/typesdigital/TTS/app.py b/spaces/typesdigital/TTS/app.py deleted file mode 100644 index 7cae20c14f6319c40448750ae8a61aee8688a0b4..0000000000000000000000000000000000000000 --- a/spaces/typesdigital/TTS/app.py +++ /dev/null @@ -1,65 +0,0 @@ -import streamlit as st -import numpy as np -from elevenlabs import voices, generate, set_api_key, UnauthenticatedRateLimitError - -def pad_buffer(audio): - # Pad buffer to multiple of 2 bytes - buffer_size = len(audio) - element_size = np.dtype(np.int16).itemsize - if buffer_size % element_size != 0: - audio = audio + b'\0' * (element_size - (buffer_size % element_size)) - return audio - -def generate_voice(text, voice_name, model_name): - audio = generate( - text[:250], # Limit to 250 characters - voice=voice_name, - model=model_name - ) - audio_data = np.frombuffer(pad_buffer(audio), dtype=np.int16) - audio_bytes = audio_data.tobytes() - return audio_bytes - -# Set the API key -set_api_key("f868e836c02c78b7ee9075d1e116a139") - -st.title("🎤 World's most advanced Text-to-Speech") - -description = """ -A demo of the world's most advanced TTS systems, made by [ElevenLabs](https://elevenlabs.io). Eleven Monolingual is designed to generate highly realistic voices in English, where Eleven Multilingual is a single model supporting multiple languages including English, German, Polish, Spanish, Italian, French, Portuguese, and Hindi. Sign up on [ElevenLabs](https://elevenlabs.io) to get fast access, long-form generation, voice cloning, API keys, and more! -""" - - -st.markdown(description) - -# Input text -input_text = st.text_area( - "Input Text (250 characters max)", - value="Hahaha OHH MY GOD! 
This is SOOO funny, I-I am Eleven a text-to-speech system!", - max_chars=250 -) - -# Voice selection -all_voices = voices() -input_voice = st.selectbox( - "Voice", - options=[voice.name for voice in all_voices], - index=0 -) - -# Model selection -input_model = st.radio( - "Model", - options=["eleven_monolingual_v1", "eleven_multilingual_v1"], - index=0 -) - -# Generate voice -if st.button("Generate Voice"): - try: - audio = generate_voice(input_text, input_voice, input_model) - st.audio(audio, format='audio/wav') - except UnauthenticatedRateLimitError: - st.error("Thanks for trying out ElevenLabs TTS! You've reached the free tier limit. Please provide an API key to continue.") - except Exception as e: - st.error(str(e)) \ No newline at end of file diff --git a/spaces/umichVision/virtex-redcaps/virtex/data/readers.py b/spaces/umichVision/virtex-redcaps/virtex/data/readers.py deleted file mode 100644 index 6915329db05fdd8160f6a63806a2d939f7915c74..0000000000000000000000000000000000000000 --- a/spaces/umichVision/virtex-redcaps/virtex/data/readers.py +++ /dev/null @@ -1,180 +0,0 @@ -r""" -A *Reader* is a PyTorch :class:`~torch.utils.data.Dataset` which simply reads -data from disk and returns it almost as is. Readers defined here are used by -datasets in :mod:`virtex.data.datasets`. -""" -from collections import defaultdict -import glob -import json -import os -import pickle -import random -from typing import Dict, List, Tuple - -import cv2 -import lmdb -from loguru import logger -from torch.utils.data import Dataset - - -# Some simplified type renaming for better readability -ImageID = int -Captions = List[str] - - -class SimpleCocoCaptionsReader(Dataset): - r""" - A reader interface to read COCO Captions dataset and directly from official - annotation files and return it unprocessed. We only use this for serializing - the dataset to LMDB files, and use :class:`~virtex.data.readers.LmdbReader` - in rest of the datasets. - - Parameters - ---------- - root: str, optional (default = "datasets/coco") - Path to the COCO dataset root directory. - split: str, optional (default = "train") - Which split (from COCO 2017 version) to read. One of ``{"train", "val"}``. - """ - def __init__(self, root: str = "datasets/coco", split: str = "train"): - - image_dir = os.path.join(root, f"{split}2017") - - # Make a tuple of image id and its filename, get image_id from its - # filename (assuming directory has images with names in COCO2017 format). - image_filenames = glob.glob(os.path.join(image_dir, "*.jpg")) - self.id_filename: List[Tuple[ImageID, str]] = [ - (int(os.path.basename(name)[:-4]), name) for name in image_filenames - ] - - # Make a mapping between image_id and its captions. - _captions = json.load( - open(os.path.join(root, "annotations", f"captions_{split}2017.json")) - ) - self._id_to_captions: Dict[ImageID, Captions] = defaultdict(list) - - for ann in _captions["annotations"]: - self._id_to_captions[ann["image_id"]].append(ann["caption"]) - - def __len__(self): - return len(self.id_filename) - - def __getitem__(self, idx: int): - image_id, filename = self.id_filename[idx] - - # shape: (height, width, channels), dtype: uint8 - image = cv2.imread(filename) - image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB) - captions = self._id_to_captions[image_id] - - return {"image_id": image_id, "image": image, "captions": captions} - - -class LmdbReader(Dataset): - r""" - A reader interface to read datapoints from a serialized LMDB file containing - ``(image_id, image, caption)`` tuples. 
Optionally, one may specify a - partial percentage of datapoints to use. - - .. note:: - - When training in distributed setting, make sure each worker has SAME - random seed because there is some randomness in selecting keys for - training with partial dataset. If you wish to use a different seed for - each worker, select keys manually outside of this class and use - :meth:`set_keys`. - - .. note:: - - Similar to :class:`~torch.utils.data.distributed.DistributedSampler`, - this reader can shuffle the dataset deterministically at the start of - epoch. Use :meth:`set_shuffle_seed` manually from outside to change the - seed at every epoch. - - Parameters - ---------- - lmdb_path: str - Path to LMDB file with datapoints. - shuffle: bool, optional (default = True) - Whether to shuffle or not. If this is on, there will be one deterministic - shuffle based on epoch before sharding the dataset (to workers). - percentage: float, optional (default = 100.0) - Percentage of datapoints to use. If less than 100.0, keys will be - shuffled and first K% will be retained and use throughout training. - Make sure to set this only for training, not validation. - """ - - def __init__(self, lmdb_path: str, shuffle: bool = True, percentage: float = 100): - self.lmdb_path = lmdb_path - self.shuffle = shuffle - - assert percentage > 0, "Cannot load dataset with 0 percent original size." - self.percentage = percentage - - # fmt: off - # Create an LMDB transaction right here. It will be aborted when this - # class goes out of scope. - env = lmdb.open( - self.lmdb_path, subdir=False, readonly=True, lock=False, - readahead=False, map_size=1099511627776 * 2, - ) - self.db_txn = env.begin() - - # Form a list of LMDB keys numbered from 0 (as binary strings). - self._keys = [ - f"{i}".encode("ascii") for i in range(env.stat()["entries"]) - ] - # fmt: on - - # If data percentage < 100%, randomly retain K% keys. This will be - # deterministic based on random seed. - if percentage < 100.0: - retain_k: int = int(len(self._keys) * percentage / 100.0) - random.shuffle(self._keys) - self._keys = self._keys[:retain_k] - logger.info(f"Retained {retain_k} datapoints for training!") - - # A seed to deterministically shuffle at the start of epoch. This is - # set externally through `set_shuffle_seed`. - self.shuffle_seed = 0 - - def set_shuffle_seed(self, seed: int): - r"""Set random seed for shuffling data.""" - self.shuffle_seed = seed - - def get_keys(self) -> List[bytes]: - r"""Return list of keys, useful while saving checkpoint.""" - return self._keys - - def set_keys(self, keys: List[bytes]): - r"""Set list of keys, useful while loading from checkpoint.""" - self._keys = keys - - def __getstate__(self): - r""" - This magic method allows an object of this class to be pickable, useful - for dataloading with multiple CPU workers. :attr:`db_txn` is not - pickable, so we remove it from state, and re-instantiate it in - :meth:`__setstate__`. 
- """ - state = self.__dict__ - state["db_txn"] = None - return state - - def __setstate__(self, state): - self.__dict__ = state - - env = lmdb.open( - self.lmdb_path, subdir=False, readonly=True, lock=False, - readahead=False, map_size=1099511627776 * 2, - ) - self.db_txn = env.begin() - - def __len__(self): - return len(self._keys) - - def __getitem__(self, idx: int): - datapoint_pickled = self.db_txn.get(self._keys[idx]) - image_id, image, captions = pickle.loads(datapoint_pickled) - - return image_id, image, captions diff --git a/spaces/usbethFlerru/sovits-modelsV2/example/Al Hizbul Azam Pdf Downloadl How to Recite the Supplications in 30 Parts.md b/spaces/usbethFlerru/sovits-modelsV2/example/Al Hizbul Azam Pdf Downloadl How to Recite the Supplications in 30 Parts.md deleted file mode 100644 index a42a23c23f191a8410836ca23c515899dcaea988..0000000000000000000000000000000000000000 --- a/spaces/usbethFlerru/sovits-modelsV2/example/Al Hizbul Azam Pdf Downloadl How to Recite the Supplications in 30 Parts.md +++ /dev/null @@ -1,5 +0,0 @@ -
                -

Hizbul Azam (mukhtasar) is a free Android app listed in the Reference Tools category, part of Education.

                The company that develops Hizbul Azam (mukhtasar) is AHijazi. The latest version released by its developer is 1. This app was rated by 7 users of our site and has an average rating of 3.2.

To install Hizbul Azam (mukhtasar) on your Android device, just click the green Continue To App button above to start the installation process. The app has been listed on our website since 2013-06-05 and has been downloaded 2118 times. We have already checked that the download link is safe; however, for your own protection we recommend that you scan the downloaded app with your antivirus. Your antivirus may flag Hizbul Azam (mukhtasar) as malware if the download link to ahijazi.hizbulazam is broken.

                How to install Hizbul Azam (mukhtasar) on your Android device:

                • Click on the Continue To App button on our website. This will redirect you to Google Play.
                • Once the Hizbul Azam (mukhtasar) is shown in the Google Play listing of your Android device, you can start its download and installation. Tap on the Install button located below the search bar and to the right of the app icon.
                • A pop-up window with the permissions required by Hizbul Azam (mukhtasar) will be shown. Click on Accept to continue the process.
• Hizbul Azam (mukhtasar) will be downloaded onto your device and a progress indicator will be shown. Once the download completes, the installation will start, and you'll get a notification after the installation is finished.

                -

                Al Hizbul Azam Pdf Downloadl


                DOWNLOAD 🔗 https://urlcod.com/2uyUYi



                -
                -
                \ No newline at end of file diff --git a/spaces/vaishanthr/Simultaneous-Segmented-Depth-Prediction/midas/backbones/swin2.py b/spaces/vaishanthr/Simultaneous-Segmented-Depth-Prediction/midas/backbones/swin2.py deleted file mode 100644 index ce4c8f1d6fc1807a207dc6b9a261c6f7b14a87a3..0000000000000000000000000000000000000000 --- a/spaces/vaishanthr/Simultaneous-Segmented-Depth-Prediction/midas/backbones/swin2.py +++ /dev/null @@ -1,34 +0,0 @@ -import timm - -from .swin_common import _make_swin_backbone - - -def _make_pretrained_swin2l24_384(pretrained, hooks=None): - model = timm.create_model("swinv2_large_window12to24_192to384_22kft1k", pretrained=pretrained) - - hooks = [1, 1, 17, 1] if hooks == None else hooks - return _make_swin_backbone( - model, - hooks=hooks - ) - - -def _make_pretrained_swin2b24_384(pretrained, hooks=None): - model = timm.create_model("swinv2_base_window12to24_192to384_22kft1k", pretrained=pretrained) - - hooks = [1, 1, 17, 1] if hooks == None else hooks - return _make_swin_backbone( - model, - hooks=hooks - ) - - -def _make_pretrained_swin2t16_256(pretrained, hooks=None): - model = timm.create_model("swinv2_tiny_window16_256", pretrained=pretrained) - - hooks = [1, 1, 5, 1] if hooks == None else hooks - return _make_swin_backbone( - model, - hooks=hooks, - patch_grid=[64, 64] - ) diff --git a/spaces/vinid/webplip/home.py b/spaces/vinid/webplip/home.py deleted file mode 100644 index a2ed36cef04610fdcec8370d9d666c200c20c3a2..0000000000000000000000000000000000000000 --- a/spaces/vinid/webplip/home.py +++ /dev/null @@ -1,52 +0,0 @@ -from pathlib import Path -import streamlit as st -import streamlit.components.v1 as components -from PIL import Image -import base64 - -def read_markdown_file(markdown_file): - return Path(markdown_file).read_text() - -def render_svg(svg_filename): - with open(svg_filename,"r") as f: - lines = f.readlines() - svg=''.join(lines) - """Renders the given svg string.""" - b64 = base64.b64encode(svg.encode('utf-8')).decode("utf-8") - html = r'' % b64 - st.write(html, unsafe_allow_html=True) - - -def app(): - - st.markdown("# A visual-language foundation model for pathology") - st.markdown("This is a webapp for PLIP, our new fundational AI model for pathology and OpenPath our new dataset, **from our recent work**: Leveraging medical Twitter to build a visual-language foundation model for pathology") - st.markdown("### Pathology Language and Image Pretraining (PLIP)\n We develop PLIP, a multimodal AI with both image and text understanding. PLIP achieves state-of-the-art zero-shot and few-short performance for classifying new pathology images across diverse tasks. Moreover, PLIP enables users to retrieve similar cases by either image or natural language search, greatly facilitating knowledge sharing. Our approach demonstrates that publicly shared medical data is a tremendous opportunity that can be harnessed to advance biomedical AI.") - - fig1e = Image.open('resources/4x/Fig1e.png') - st.image(fig1e, caption='PLIP training procedure', output_format='png') - - st.markdown("### OpenPath Dataset\nThe lack of annotated publicly available medical images is a major barrier for innovations. At the same time, many de-identified images and much knowledge are shared by clinicians on public forums such as medical Twitter. 
Here we harness these crowd platforms to curate OpenPath, a large dataset of **208,414** pathology images paired with natural language descriptions") - - render_svg("resources/SVG/Asset 49.svg") - - - st.markdown("### Documentation\n" - "This webapp comes with different functionalities.\n" - "* Details: The details page guides you through our work.\n" - "* Text to Image: allows users to perform text search on a database of images.\n" - "* Image to Image: allows users to perform image search on a database of images.\n" - "") - - st.markdown("### Other Links\n" - "* Download [OpenPath](https://drive.google.com/drive/folders/1b5UT8BzUphkHZavRG-fmiyY9JWYIWZER)\n" - "* Code to reproduce [PLIP](https://github.com/vinid/path_eval) results\n" - "* Link to the [PLIP Model](https://huggingface.co/vinid/plip)\n" - "") - - st.markdown("""---""") - st.markdown('Disclaimer') - st.caption('Please be advised that this function has been developed in compliance with the Twitter policy of data usage and sharing. It is important to note that the results obtained from this function are not intended to constitute medical advice or replace consultation with a qualified medical professional. The use of this function is solely at your own risk and should be consistent with applicable laws, regulations, and ethical considerations. We do not warrant or guarantee the accuracy, completeness, suitability, or usefulness of this function for any particular purpose, and we hereby disclaim any liability arising from any reliance placed on this function or any results obtained from its use. If you wish to review the original Twitter post, you should access the source page directly on Twitter.') - - st.markdown('Privacy statement') - st.caption('In accordance with the privacy and control policy of Twitter, we hereby declared that the data redistributed by us shall only comprise of Tweet IDs. The Tweet IDs will be employed to establish a linkage with the original Twitter post, as long as the original post is still accessible. The hyperlink will cease to function if the user deletes the original post. It is important to note that all tweets displayed on our service have already been classified as non-sensitive by Twitter. It is strictly prohibited to redistribute any content apart from the Tweet IDs. Any distribution carried out must adhere to the laws and regulations applicable in your jurisdiction, including export control laws and embargoes.') diff --git a/spaces/vitaliykinakh/Galaxy_Zoo_Generation/src/app/multipage.py b/spaces/vitaliykinakh/Galaxy_Zoo_Generation/src/app/multipage.py deleted file mode 100644 index 58952aaa1b123cf0aa970216f5444ba3eeab5772..0000000000000000000000000000000000000000 --- a/spaces/vitaliykinakh/Galaxy_Zoo_Generation/src/app/multipage.py +++ /dev/null @@ -1,41 +0,0 @@ -""" -This file is the framework for generating multiple Streamlit applications -through an object oriented framework. 
-""" - -# Import necessary libraries -import streamlit as st - - -# Define the multipage class to manage the multiple apps in our program -class MultiPage: - """Framework for combining multiple streamlit applications.""" - - def __init__(self) -> None: - """Constructor class to generate a list which will store all our applications as an instance variable.""" - self.pages = [] - - def add_page(self, title, func) -> None: - """Class Method to Add pages to the project - Args: - title ([str]): The title of page which we are adding to the list of apps - - func: Python function to render this page in Streamlit - """ - - self.pages.append({ - - "title": title, - "function": func - }) - - def run(self): - # Drodown to select the page to run - page = st.sidebar.selectbox( - 'App Navigation', - self.pages, - format_func=lambda page: page['title'] - ) - - # run the app function - page['function']() diff --git a/spaces/vkdhiman93/cerebras-Cerebras-GPT-1.3B/README.md b/spaces/vkdhiman93/cerebras-Cerebras-GPT-1.3B/README.md deleted file mode 100644 index 0eaeb4ecfb49fa2cee3ff6687fe8c08d9bc70fca..0000000000000000000000000000000000000000 --- a/spaces/vkdhiman93/cerebras-Cerebras-GPT-1.3B/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Cerebras Cerebras GPT 1.3B -emoji: 💻 -colorFrom: indigo -colorTo: red -sdk: gradio -sdk_version: 3.23.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/volhack/vits-uma-genshin-honkai/commons.py b/spaces/volhack/vits-uma-genshin-honkai/commons.py deleted file mode 100644 index 40fcc05364d4815971f5c6f9dbb8dcef8e3ec1e9..0000000000000000000000000000000000000000 --- a/spaces/volhack/vits-uma-genshin-honkai/commons.py +++ /dev/null @@ -1,172 +0,0 @@ -import math -import torch -from torch.nn import functional as F -import torch.jit - - -def script_method(fn, _rcb=None): - return fn - - -def script(obj, optimize=True, _frames_up=0, _rcb=None): - return obj - - -torch.jit.script_method = script_method -torch.jit.script = script - - -def init_weights(m, mean=0.0, std=0.01): - classname = m.__class__.__name__ - if classname.find("Conv") != -1: - m.weight.data.normal_(mean, std) - - -def get_padding(kernel_size, dilation=1): - return int((kernel_size*dilation - dilation)/2) - - -def convert_pad_shape(pad_shape): - l = pad_shape[::-1] - pad_shape = [item for sublist in l for item in sublist] - return pad_shape - - -def intersperse(lst, item): - result = [item] * (len(lst) * 2 + 1) - result[1::2] = lst - return result - - -def kl_divergence(m_p, logs_p, m_q, logs_q): - """KL(P||Q)""" - kl = (logs_q - logs_p) - 0.5 - kl += 0.5 * (torch.exp(2. * logs_p) + ((m_p - m_q)**2)) * torch.exp(-2. 
* logs_q) - return kl - - -def rand_gumbel(shape): - """Sample from the Gumbel distribution, protect from overflows.""" - uniform_samples = torch.rand(shape) * 0.99998 + 0.00001 - return -torch.log(-torch.log(uniform_samples)) - - -def rand_gumbel_like(x): - g = rand_gumbel(x.size()).to(dtype=x.dtype, device=x.device) - return g - - -def slice_segments(x, ids_str, segment_size=4): - ret = torch.zeros_like(x[:, :, :segment_size]) - for i in range(x.size(0)): - idx_str = ids_str[i] - idx_end = idx_str + segment_size - ret[i] = x[i, :, idx_str:idx_end] - return ret - - -def rand_slice_segments(x, x_lengths=None, segment_size=4): - b, d, t = x.size() - if x_lengths is None: - x_lengths = t - ids_str_max = x_lengths - segment_size + 1 - ids_str = (torch.rand([b]).to(device=x.device) * ids_str_max).to(dtype=torch.long) - ret = slice_segments(x, ids_str, segment_size) - return ret, ids_str - - -def get_timing_signal_1d( - length, channels, min_timescale=1.0, max_timescale=1.0e4): - position = torch.arange(length, dtype=torch.float) - num_timescales = channels // 2 - log_timescale_increment = ( - math.log(float(max_timescale) / float(min_timescale)) / - (num_timescales - 1)) - inv_timescales = min_timescale * torch.exp( - torch.arange(num_timescales, dtype=torch.float) * -log_timescale_increment) - scaled_time = position.unsqueeze(0) * inv_timescales.unsqueeze(1) - signal = torch.cat([torch.sin(scaled_time), torch.cos(scaled_time)], 0) - signal = F.pad(signal, [0, 0, 0, channels % 2]) - signal = signal.view(1, channels, length) - return signal - - -def add_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4): - b, channels, length = x.size() - signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale) - return x + signal.to(dtype=x.dtype, device=x.device) - - -def cat_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4, axis=1): - b, channels, length = x.size() - signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale) - return torch.cat([x, signal.to(dtype=x.dtype, device=x.device)], axis) - - -def subsequent_mask(length): - mask = torch.tril(torch.ones(length, length)).unsqueeze(0).unsqueeze(0) - return mask - - -@torch.jit.script -def fused_add_tanh_sigmoid_multiply(input_a, input_b, n_channels): - n_channels_int = n_channels[0] - in_act = input_a + input_b - t_act = torch.tanh(in_act[:, :n_channels_int, :]) - s_act = torch.sigmoid(in_act[:, n_channels_int:, :]) - acts = t_act * s_act - return acts - - -def convert_pad_shape(pad_shape): - l = pad_shape[::-1] - pad_shape = [item for sublist in l for item in sublist] - return pad_shape - - -def shift_1d(x): - x = F.pad(x, convert_pad_shape([[0, 0], [0, 0], [1, 0]]))[:, :, :-1] - return x - - -def sequence_mask(length, max_length=None): - if max_length is None: - max_length = length.max() - x = torch.arange(max_length, dtype=length.dtype, device=length.device) - return x.unsqueeze(0) < length.unsqueeze(1) - - -def generate_path(duration, mask): - """ - duration: [b, 1, t_x] - mask: [b, 1, t_y, t_x] - """ - device = duration.device - - b, _, t_y, t_x = mask.shape - cum_duration = torch.cumsum(duration, -1) - - cum_duration_flat = cum_duration.view(b * t_x) - path = sequence_mask(cum_duration_flat, t_y).to(mask.dtype) - path = path.view(b, t_x, t_y) - path = path - F.pad(path, convert_pad_shape([[0, 0], [1, 0], [0, 0]]))[:, :-1] - path = path.unsqueeze(1).transpose(2,3) * mask - return path - - -def clip_grad_value_(parameters, clip_value, norm_type=2): - if isinstance(parameters, torch.Tensor): 
- parameters = [parameters] - parameters = list(filter(lambda p: p.grad is not None, parameters)) - norm_type = float(norm_type) - if clip_value is not None: - clip_value = float(clip_value) - - total_norm = 0 - for p in parameters: - param_norm = p.grad.data.norm(norm_type) - total_norm += param_norm.item() ** norm_type - if clip_value is not None: - p.grad.data.clamp_(min=-clip_value, max=clip_value) - total_norm = total_norm ** (1. / norm_type) - return total_norm diff --git a/spaces/vumichien/canvas_controlnet/annotator/uniformer/mmcv/utils/misc.py b/spaces/vumichien/canvas_controlnet/annotator/uniformer/mmcv/utils/misc.py deleted file mode 100644 index 2c58d0d7fee9fe3d4519270ad8c1e998d0d8a18c..0000000000000000000000000000000000000000 --- a/spaces/vumichien/canvas_controlnet/annotator/uniformer/mmcv/utils/misc.py +++ /dev/null @@ -1,377 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import collections.abc -import functools -import itertools -import subprocess -import warnings -from collections import abc -from importlib import import_module -from inspect import getfullargspec -from itertools import repeat - - -# From PyTorch internals -def _ntuple(n): - - def parse(x): - if isinstance(x, collections.abc.Iterable): - return x - return tuple(repeat(x, n)) - - return parse - - -to_1tuple = _ntuple(1) -to_2tuple = _ntuple(2) -to_3tuple = _ntuple(3) -to_4tuple = _ntuple(4) -to_ntuple = _ntuple - - -def is_str(x): - """Whether the input is an string instance. - - Note: This method is deprecated since python 2 is no longer supported. - """ - return isinstance(x, str) - - -def import_modules_from_strings(imports, allow_failed_imports=False): - """Import modules from the given list of strings. - - Args: - imports (list | str | None): The given module names to be imported. - allow_failed_imports (bool): If True, the failed imports will return - None. Otherwise, an ImportError is raise. Default: False. - - Returns: - list[module] | module | None: The imported modules. - - Examples: - >>> osp, sys = import_modules_from_strings( - ... ['os.path', 'sys']) - >>> import os.path as osp_ - >>> import sys as sys_ - >>> assert osp == osp_ - >>> assert sys == sys_ - """ - if not imports: - return - single_import = False - if isinstance(imports, str): - single_import = True - imports = [imports] - if not isinstance(imports, list): - raise TypeError( - f'custom_imports must be a list but got type {type(imports)}') - imported = [] - for imp in imports: - if not isinstance(imp, str): - raise TypeError( - f'{imp} is of type {type(imp)} and cannot be imported.') - try: - imported_tmp = import_module(imp) - except ImportError: - if allow_failed_imports: - warnings.warn(f'{imp} failed to import and is ignored.', - UserWarning) - imported_tmp = None - else: - raise ImportError - imported.append(imported_tmp) - if single_import: - imported = imported[0] - return imported - - -def iter_cast(inputs, dst_type, return_type=None): - """Cast elements of an iterable object into some type. - - Args: - inputs (Iterable): The input object. - dst_type (type): Destination type. - return_type (type, optional): If specified, the output object will be - converted to this type, otherwise an iterator. - - Returns: - iterator or specified type: The converted object. 
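    A minimal illustration:

        >>> iter_cast(['1', '2', '3'], int, return_type=list)
        [1, 2, 3]
        >>> tuple(iter_cast(['1', '2', '3'], int))
        (1, 2, 3)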
- """ - if not isinstance(inputs, abc.Iterable): - raise TypeError('inputs must be an iterable object') - if not isinstance(dst_type, type): - raise TypeError('"dst_type" must be a valid type') - - out_iterable = map(dst_type, inputs) - - if return_type is None: - return out_iterable - else: - return return_type(out_iterable) - - -def list_cast(inputs, dst_type): - """Cast elements of an iterable object into a list of some type. - - A partial method of :func:`iter_cast`. - """ - return iter_cast(inputs, dst_type, return_type=list) - - -def tuple_cast(inputs, dst_type): - """Cast elements of an iterable object into a tuple of some type. - - A partial method of :func:`iter_cast`. - """ - return iter_cast(inputs, dst_type, return_type=tuple) - - -def is_seq_of(seq, expected_type, seq_type=None): - """Check whether it is a sequence of some type. - - Args: - seq (Sequence): The sequence to be checked. - expected_type (type): Expected type of sequence items. - seq_type (type, optional): Expected sequence type. - - Returns: - bool: Whether the sequence is valid. - """ - if seq_type is None: - exp_seq_type = abc.Sequence - else: - assert isinstance(seq_type, type) - exp_seq_type = seq_type - if not isinstance(seq, exp_seq_type): - return False - for item in seq: - if not isinstance(item, expected_type): - return False - return True - - -def is_list_of(seq, expected_type): - """Check whether it is a list of some type. - - A partial method of :func:`is_seq_of`. - """ - return is_seq_of(seq, expected_type, seq_type=list) - - -def is_tuple_of(seq, expected_type): - """Check whether it is a tuple of some type. - - A partial method of :func:`is_seq_of`. - """ - return is_seq_of(seq, expected_type, seq_type=tuple) - - -def slice_list(in_list, lens): - """Slice a list into several sub lists by a list of given length. - - Args: - in_list (list): The list to be sliced. - lens(int or list): The expected length of each out list. - - Returns: - list: A list of sliced list. - """ - if isinstance(lens, int): - assert len(in_list) % lens == 0 - lens = [lens] * int(len(in_list) / lens) - if not isinstance(lens, list): - raise TypeError('"indices" must be an integer or a list of integers') - elif sum(lens) != len(in_list): - raise ValueError('sum of lens and list length does not ' - f'match: {sum(lens)} != {len(in_list)}') - out_list = [] - idx = 0 - for i in range(len(lens)): - out_list.append(in_list[idx:idx + lens[i]]) - idx += lens[i] - return out_list - - -def concat_list(in_list): - """Concatenate a list of list into a single list. - - Args: - in_list (list): The list of list to be merged. - - Returns: - list: The concatenated flat list. - """ - return list(itertools.chain(*in_list)) - - -def check_prerequisites( - prerequisites, - checker, - msg_tmpl='Prerequisites "{}" are required in method "{}" but not ' - 'found, please install them first.'): # yapf: disable - """A decorator factory to check if prerequisites are satisfied. - - Args: - prerequisites (str of list[str]): Prerequisites to be checked. - checker (callable): The checker method that returns True if a - prerequisite is meet, False otherwise. - msg_tmpl (str): The message template with two variables. - - Returns: - decorator: A specific decorator. 
- """ - - def wrap(func): - - @functools.wraps(func) - def wrapped_func(*args, **kwargs): - requirements = [prerequisites] if isinstance( - prerequisites, str) else prerequisites - missing = [] - for item in requirements: - if not checker(item): - missing.append(item) - if missing: - print(msg_tmpl.format(', '.join(missing), func.__name__)) - raise RuntimeError('Prerequisites not meet.') - else: - return func(*args, **kwargs) - - return wrapped_func - - return wrap - - -def _check_py_package(package): - try: - import_module(package) - except ImportError: - return False - else: - return True - - -def _check_executable(cmd): - if subprocess.call(f'which {cmd}', shell=True) != 0: - return False - else: - return True - - -def requires_package(prerequisites): - """A decorator to check if some python packages are installed. - - Example: - >>> @requires_package('numpy') - >>> func(arg1, args): - >>> return numpy.zeros(1) - array([0.]) - >>> @requires_package(['numpy', 'non_package']) - >>> func(arg1, args): - >>> return numpy.zeros(1) - ImportError - """ - return check_prerequisites(prerequisites, checker=_check_py_package) - - -def requires_executable(prerequisites): - """A decorator to check if some executable files are installed. - - Example: - >>> @requires_executable('ffmpeg') - >>> func(arg1, args): - >>> print(1) - 1 - """ - return check_prerequisites(prerequisites, checker=_check_executable) - - -def deprecated_api_warning(name_dict, cls_name=None): - """A decorator to check if some arguments are deprecate and try to replace - deprecate src_arg_name to dst_arg_name. - - Args: - name_dict(dict): - key (str): Deprecate argument names. - val (str): Expected argument names. - - Returns: - func: New function. - """ - - def api_warning_wrapper(old_func): - - @functools.wraps(old_func) - def new_func(*args, **kwargs): - # get the arg spec of the decorated method - args_info = getfullargspec(old_func) - # get name of the function - func_name = old_func.__name__ - if cls_name is not None: - func_name = f'{cls_name}.{func_name}' - if args: - arg_names = args_info.args[:len(args)] - for src_arg_name, dst_arg_name in name_dict.items(): - if src_arg_name in arg_names: - warnings.warn( - f'"{src_arg_name}" is deprecated in ' - f'`{func_name}`, please use "{dst_arg_name}" ' - 'instead') - arg_names[arg_names.index(src_arg_name)] = dst_arg_name - if kwargs: - for src_arg_name, dst_arg_name in name_dict.items(): - if src_arg_name in kwargs: - - assert dst_arg_name not in kwargs, ( - f'The expected behavior is to replace ' - f'the deprecated key `{src_arg_name}` to ' - f'new key `{dst_arg_name}`, but got them ' - f'in the arguments at the same time, which ' - f'is confusing. `{src_arg_name} will be ' - f'deprecated in the future, please ' - f'use `{dst_arg_name}` instead.') - - warnings.warn( - f'"{src_arg_name}" is deprecated in ' - f'`{func_name}`, please use "{dst_arg_name}" ' - 'instead') - kwargs[dst_arg_name] = kwargs.pop(src_arg_name) - - # apply converted arguments to the decorated method - output = old_func(*args, **kwargs) - return output - - return new_func - - return api_warning_wrapper - - -def is_method_overridden(method, base_class, derived_class): - """Check if a method of base class is overridden in derived class. - - Args: - method (str): the method name to check. - base_class (type): the class of the base class. - derived_class (type | Any): the class or instance of the derived class. 
- """ - assert isinstance(base_class, type), \ - "base_class doesn't accept instance, Please pass class instead." - - if not isinstance(derived_class, type): - derived_class = derived_class.__class__ - - base_method = getattr(base_class, method) - derived_method = getattr(derived_class, method) - return derived_method != base_method - - -def has_method(obj: object, method: str) -> bool: - """Check whether the object has a method. - - Args: - method (str): The method name to check. - obj (object): The object to check. - - Returns: - bool: True if the object has the method else False. - """ - return hasattr(obj, method) and callable(getattr(obj, method)) diff --git a/spaces/wffcyrus/MetaGPT-v1/metagpt/learn/__init__.py b/spaces/wffcyrus/MetaGPT-v1/metagpt/learn/__init__.py deleted file mode 100644 index bab9f3e37dacb95f5313a017b5c482ac26df6522..0000000000000000000000000000000000000000 --- a/spaces/wffcyrus/MetaGPT-v1/metagpt/learn/__init__.py +++ /dev/null @@ -1,13 +0,0 @@ -#!/usr/bin/env python -# -*- coding: utf-8 -*- -""" -@Time : 2023/4/30 20:57 -@Author : alexanderwu -@File : __init__.py -""" - -from metagpt.learn.text_to_image import text_to_image -from metagpt.learn.text_to_speech import text_to_speech -from metagpt.learn.google_search import google_search - -__all__ = ["text_to_image", "text_to_speech", "google_search"] diff --git a/spaces/wouaf/WOUAF-Text-to-Image/torch_utils/custom_ops.py b/spaces/wouaf/WOUAF-Text-to-Image/torch_utils/custom_ops.py deleted file mode 100644 index 4cc4e43fc6f6ce79f2bd68a44ba87990b9b8564e..0000000000000000000000000000000000000000 --- a/spaces/wouaf/WOUAF-Text-to-Image/torch_utils/custom_ops.py +++ /dev/null @@ -1,126 +0,0 @@ -# Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved. -# -# NVIDIA CORPORATION and its licensors retain all intellectual property -# and proprietary rights in and to this software, related documentation -# and any modifications thereto. Any use, reproduction, disclosure or -# distribution of this software and related documentation without an express -# license agreement from NVIDIA CORPORATION is strictly prohibited. - -import os -import glob -import torch -import torch.utils.cpp_extension -import importlib -import hashlib -import shutil -from pathlib import Path - -from torch.utils.file_baton import FileBaton - -#---------------------------------------------------------------------------- -# Global options. - -verbosity = 'brief' # Verbosity level: 'none', 'brief', 'full' - -#---------------------------------------------------------------------------- -# Internal helper funcs. - -def _find_compiler_bindir(): - patterns = [ - 'C:/Program Files (x86)/Microsoft Visual Studio/*/Professional/VC/Tools/MSVC/*/bin/Hostx64/x64', - 'C:/Program Files (x86)/Microsoft Visual Studio/*/BuildTools/VC/Tools/MSVC/*/bin/Hostx64/x64', - 'C:/Program Files (x86)/Microsoft Visual Studio/*/Community/VC/Tools/MSVC/*/bin/Hostx64/x64', - 'C:/Program Files (x86)/Microsoft Visual Studio */vc/bin', - ] - for pattern in patterns: - matches = sorted(glob.glob(pattern)) - if len(matches): - return matches[-1] - return None - -#---------------------------------------------------------------------------- -# Main entry point for compiling and loading C++/CUDA plugins. - -_cached_plugins = dict() - -def get_plugin(module_name, sources, **build_kwargs): - assert verbosity in ['none', 'brief', 'full'] - - # Already cached? - if module_name in _cached_plugins: - return _cached_plugins[module_name] - - # Print status. 
- if verbosity == 'full': - print(f'Setting up PyTorch plugin "{module_name}"...') - elif verbosity == 'brief': - print(f'Setting up PyTorch plugin "{module_name}"... ', end='', flush=True) - - try: # pylint: disable=too-many-nested-blocks - # Make sure we can find the necessary compiler binaries. - if os.name == 'nt' and os.system("where cl.exe >nul 2>nul") != 0: - compiler_bindir = _find_compiler_bindir() - if compiler_bindir is None: - raise RuntimeError(f'Could not find MSVC/GCC/CLANG installation on this computer. Check _find_compiler_bindir() in "{__file__}".') - os.environ['PATH'] += ';' + compiler_bindir - - # Compile and load. - verbose_build = (verbosity == 'full') - - # Incremental build md5sum trickery. Copies all the input source files - # into a cached build directory under a combined md5 digest of the input - # source files. Copying is done only if the combined digest has changed. - # This keeps input file timestamps and filenames the same as in previous - # extension builds, allowing for fast incremental rebuilds. - # - # This optimization is done only in case all the source files reside in - # a single directory (just for simplicity) and if the TORCH_EXTENSIONS_DIR - # environment variable is set (we take this as a signal that the user - # actually cares about this.) - source_dirs_set = set(os.path.dirname(source) for source in sources) - if len(source_dirs_set) == 1 and ('TORCH_EXTENSIONS_DIR' in os.environ): - all_source_files = sorted(list(x for x in Path(list(source_dirs_set)[0]).iterdir() if x.is_file())) - - # Compute a combined hash digest for all source files in the same - # custom op directory (usually .cu, .cpp, .py and .h files). - hash_md5 = hashlib.md5() - for src in all_source_files: - with open(src, 'rb') as f: - hash_md5.update(f.read()) - build_dir = torch.utils.cpp_extension._get_build_directory(module_name, verbose=verbose_build) # pylint: disable=protected-access - digest_build_dir = os.path.join(build_dir, hash_md5.hexdigest()) - - if not os.path.isdir(digest_build_dir): - os.makedirs(digest_build_dir, exist_ok=True) - baton = FileBaton(os.path.join(digest_build_dir, 'lock')) - if baton.try_acquire(): - try: - for src in all_source_files: - shutil.copyfile(src, os.path.join(digest_build_dir, os.path.basename(src))) - finally: - baton.release() - else: - # Someone else is copying source files under the digest dir, - # wait until done and continue. - baton.wait() - digest_sources = [os.path.join(digest_build_dir, os.path.basename(x)) for x in sources] - torch.utils.cpp_extension.load(name=module_name, build_directory=build_dir, - verbose=verbose_build, sources=digest_sources, **build_kwargs) - else: - torch.utils.cpp_extension.load(name=module_name, verbose=verbose_build, sources=sources, **build_kwargs) - module = importlib.import_module(module_name) - - except: - if verbosity == 'brief': - print('Failed!') - raise - - # Print status and add to cache. 
- if verbosity == 'full': - print(f'Done setting up PyTorch plugin "{module_name}".') - elif verbosity == 'brief': - print('Done.') - _cached_plugins[module_name] = module - return module - -#---------------------------------------------------------------------------- diff --git a/spaces/wwwwwwww2/bingo/src/components/chat-image.tsx b/spaces/wwwwwwww2/bingo/src/components/chat-image.tsx deleted file mode 100644 index 05ecc9771eada27a0f2d160bb01cba170d37bb09..0000000000000000000000000000000000000000 --- a/spaces/wwwwwwww2/bingo/src/components/chat-image.tsx +++ /dev/null @@ -1,170 +0,0 @@ -import { - useEffect, - useState, - useCallback, - ChangeEvent, - ClipboardEvent, - MouseEventHandler, - FormEvent, - useRef -} from "react" -import Image from 'next/image' -import PasteIcon from '@/assets/images/paste.svg' -import UploadIcon from '@/assets/images/upload.svg' -import CameraIcon from '@/assets/images/camera.svg' -import { useBing } from '@/lib/hooks/use-bing' -import { cn } from '@/lib/utils' - -interface ChatImageProps extends Pick, 'uploadImage'> {} - -const preventDefault: MouseEventHandler = (event) => { - event.nativeEvent.stopImmediatePropagation() -} - -const toBase64 = (file: File): Promise => new Promise((resolve, reject) => { - const reader = new FileReader() - reader.readAsDataURL(file) - reader.onload = () => resolve(reader.result as string) - reader.onerror = reject -}) - -export function ChatImage({ children, uploadImage }: React.PropsWithChildren) { - const videoRef = useRef(null) - const canvasRef = useRef(null) - const mediaStream = useRef() - const [panel, setPanel] = useState('none') - - const upload = useCallback((url: string) => { - if (url) { - uploadImage(url) - } - setPanel('none') - }, [panel]) - - const onUpload = useCallback(async (event: ChangeEvent) => { - const file = event.target.files?.[0] - if (file) { - const fileDataUrl = await toBase64(file) - if (fileDataUrl) { - upload(fileDataUrl) - } - } - }, []) - - const onPaste = useCallback((event: ClipboardEvent) => { - const pasteUrl = event.clipboardData.getData('text') ?? '' - upload(pasteUrl) - }, []) - - const onEnter = useCallback((event: FormEvent) => { - event.preventDefault() - event.stopPropagation() - // @ts-ignore - const inputUrl = event.target.elements.image.value - if (inputUrl) { - upload(inputUrl) - } - }, []) - - const openVideo: MouseEventHandler = async (event) => { - event.stopPropagation() - setPanel('camera-mode') - } - - const onCapture = () => { - if (canvasRef.current && videoRef.current) { - const canvas = canvasRef.current - canvas.width = videoRef.current!.videoWidth - canvas.height = videoRef.current!.videoHeight - canvas.getContext('2d')?.drawImage(videoRef.current, 0, 0, canvas.width, canvas.height) - const cameraUrl = canvas.toDataURL('image/jpeg') - upload(cameraUrl) - } - } - - useEffect(() => { - const handleBlur = () => { - if (panel !== 'none') { - setPanel('none') - } - } - document.addEventListener('click', handleBlur) - return () => { - document.removeEventListener('click', handleBlur) - } - }, [panel]) - - useEffect(() => { - if (panel === 'camera-mode') { - navigator.mediaDevices.getUserMedia({ video: true, audio: false }) - .then(videoStream => { - mediaStream.current = videoStream - if (videoRef.current) { - videoRef.current.srcObject = videoStream - } - }) - } else { - if (mediaStream.current) { - mediaStream.current.getTracks().forEach(function(track) { - track.stop() - }) - mediaStream.current = undefined - } - } - }, [panel]) - - return ( -
                -
                panel === 'none' ? setPanel('normal') : setPanel('none')}>{children}
                -
                -
                -
                -

                添加图像

                -
                -
                - paste - - e.stopPropagation()} - /> - -
                -
                - - -
                -
                - {panel === 'camera-mode' &&
                -
                -
                -
                -
                -
                -
                -
                } -
                -
                - ) -} diff --git a/spaces/wydgg/bingo-wyd-ai/src/lib/isomorphic/index.ts b/spaces/wydgg/bingo-wyd-ai/src/lib/isomorphic/index.ts deleted file mode 100644 index 738dc92f74079ab762d584fb7422a8c8c3b61547..0000000000000000000000000000000000000000 --- a/spaces/wydgg/bingo-wyd-ai/src/lib/isomorphic/index.ts +++ /dev/null @@ -1,17 +0,0 @@ -'use client' - -import Default from './browser' - -let exportsModel: any = {} - -if (process.browser) { - Object.assign(exportsModel, require('./browser').default) -} else { - Object.assign(exportsModel, require('./node').default) -} - -export default exportsModel! as typeof Default - -export const fetch: typeof Default.fetch = exportsModel!.fetch -export const WebSocket: typeof Default.WebSocket = exportsModel!.WebSocket -export const debug: typeof Default.debug = exportsModel!.debug diff --git a/spaces/xiaohuajiejie/styletransfor/README.md b/spaces/xiaohuajiejie/styletransfor/README.md deleted file mode 100644 index 92ae9dfabf3414d0d2fe502a9ae64f5d6a4001aa..0000000000000000000000000000000000000000 --- a/spaces/xiaohuajiejie/styletransfor/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Styletransfor -emoji: 📈 -colorFrom: yellow -colorTo: blue -sdk: gradio -sdk_version: 3.17.0 -app_file: app.py -pinned: false -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/xiatao/microsoft-trocr-base-printed/README.md b/spaces/xiatao/microsoft-trocr-base-printed/README.md deleted file mode 100644 index 42e6ebdf0be1ab64ac32122971271f46dbdc16f4..0000000000000000000000000000000000000000 --- a/spaces/xiatao/microsoft-trocr-base-printed/README.md +++ /dev/null @@ -1,37 +0,0 @@ ---- -title: Microsoft Trocr Base Printed -emoji: 😻 -colorFrom: pink -colorTo: gray -sdk: gradio -app_file: app.py -pinned: false ---- - -# Configuration - -`title`: _string_ -Display title for the Space - -`emoji`: _string_ -Space emoji (emoji-only character allowed) - -`colorFrom`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`colorTo`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`sdk`: _string_ -Can be either `gradio` or `streamlit` - -`sdk_version` : _string_ -Only applicable for `streamlit` SDK. -See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions. - -`app_file`: _string_ -Path to your main application file (which contains either `gradio` or `streamlit` Python code). -Path is relative to the root of the repository. - -`pinned`: _boolean_ -Whether the Space stays on top of your list. 
diff --git a/spaces/xxie92/antibody_visulization/diffab/tools/renumber/__main__.py b/spaces/xxie92/antibody_visulization/diffab/tools/renumber/__main__.py deleted file mode 100644 index 7ca1c759534be0a3ae14e4ab178e981eb4fb211a..0000000000000000000000000000000000000000 --- a/spaces/xxie92/antibody_visulization/diffab/tools/renumber/__main__.py +++ /dev/null @@ -1,5 +0,0 @@ -from .run import main - -if __name__ == '__main__': - main() - \ No newline at end of file diff --git a/spaces/yaelvinker/CLIPasso/.ipynb_checkpoints/app-checkpoint.py b/spaces/yaelvinker/CLIPasso/.ipynb_checkpoints/app-checkpoint.py deleted file mode 100644 index 78a3affc8c46a09d35de18fb27841402cd1b3b2b..0000000000000000000000000000000000000000 --- a/spaces/yaelvinker/CLIPasso/.ipynb_checkpoints/app-checkpoint.py +++ /dev/null @@ -1,10 +0,0 @@ -import torch -import gradio as gr -import pydiffvg -import clip - -def greet(name): - return "hello" + name + torch.__version__ - -iface = gr.Interface(fn=greet, inputs="text", outputs="text") -iface.launch() \ No newline at end of file diff --git a/spaces/yangogo/bingo/README.md b/spaces/yangogo/bingo/README.md deleted file mode 100644 index 218767d1d7debd26932ffddca2ec0f421c0171a9..0000000000000000000000000000000000000000 --- a/spaces/yangogo/bingo/README.md +++ /dev/null @@ -1,195 +0,0 @@ ---- -title: bingo -emoji: 📉 -colorFrom: red -colorTo: red -sdk: docker -pinned: true -license: mit -duplicated_from: hf4all/bingo ---- - -
-
-# Bingo
-
-Bingo, a New Bing that lets you breathe easy.
-
-A faithful reproduction of the main features of the New Bing web UI. It works from mainland China, is compatible with most Microsoft Bing AI functionality, and can be deployed and used on your own.
-
-![Github stars](https://badgen.net/github/stars/weaigc/bingo?icon=github&label=stars)
-![GitHub issues](https://img.shields.io/github/issues/weaigc/bingo)
-[![docker build](https://github.com/weaigc/bingo/actions/workflows/docker.yml/badge.svg)](https://hub.docker.com/repository/docker/weaigc/bingo/)
-[![docker hub](https://badgen.net/docker/size/weaigc/bingo?icon=docker&label=image%20size)](https://hub.docker.com/repository/docker/weaigc/bingo/)
-[![MIT License](https://img.shields.io/badge/license-MIT-97c50f)](https://github.com/weaigc/bingo/blob/main/license)
-
-
-
-## Demo site
-
-https://bing.github1s.tk
-
-
-
-[![img](./docs/images/demo.png)](https://bing.github1s.tk)
-
-## Features
-
-- Fully rewritten on Next.js, closely reproducing the New Bing web UI; the experience is essentially the same as Bing AI.
-- Docker build supported, for quick and easy deployment and access.
-- Cookies can be configured once globally and shared.
-- Continuous voice conversation supported.
-
-## RoadMap
-
- - [x] WSS forwarding
- - [x] One-click deployment
- - [x] Better mobile layout
- - [x] Image generation
- - [x] Voice input (voice commands supported; currently desktop Edge and Chrome only)
- - [x] Voice output (must be enabled manually)
- - [x] Image input
- - [x] Custom domains
- - [ ] Chat history
- - [ ] Dark mode
- - [ ] Built-in prompts
- - [ ] Offline access
- - [ ] Internationalization
-
-## One-click deployment
-You can also deploy your own New Bing AI to 🤗 HuggingFace with one click.
-
-### Deploy to Huggingface
-1. Click this badge
-[![Deploy to HuggingFace](https://img.shields.io/badge/%E7%82%B9%E5%87%BB%E9%83%A8%E7%BD%B2-%F0%9F%A4%97-fff)](https://huggingface.co/login?next=%2Fspaces%2Fhf4all%2Fbingo%3Fduplicate%3Dtrue%26visibility%3Dpublic); the configuration can be left unchanged.
-
-2. After the deployment finishes, open "Settings" > "Site domain", copy the HF domain shown there, and share it with others.
-
-> Huggingface does not support binding your own domain, but there are two workarounds:
-> 1. Using Cloudflare Workers: [deploy Cloudflare Workers](#use-cloudflare-workers-for-a-custom-domain)
-> 2. Using Github Pages plus an iframe: [how to bind a domain](https://github.com/weaigc/bingo/issues/4)
-
-### Use Cloudflare Workers for a custom domain
-
-> Core code: [worker.js](./cloudflare/worker.js)
-
-- [Sign up for a Cloudflare account](https://dash.cloudflare.com/sign-up)
-
-- Add a new site. You need your own domain, and its `Name Server` records must be delegated to Cloudflare (search the web for details).
-
-- Open "Workers" from the left-hand menu and click "Create a Worker".
-
-- Create the Worker service, copy the entire contents of [worker.js](./cloudflare/worker.js) into it, adjust it according to the comments, then save and deploy.
-
-- Configure your custom access domain under Triggers.
-
-### Deploy to other platforms
-
-
-Because New Bing currently blocks these other platforms, deploying there runs into many problems and is no longer recommended; the instructions are kept here only for those who still need them.
-
-
-#### Deploy to Netlify
-[![Deploy to Netlify Button](https://www.netlify.com/img/deploy/button.svg)](https://app.netlify.com/start/deploy?repository=https://github.com/weaigc/bingo)
-
-#### Deploy to Vercel
-If you are a paying Vercel user, you can deploy to Vercel with one click via the link below. The free tier has an [API timeout limit](https://vercel.com/docs/concepts/limits/overview) and is not recommended.
-
-[![Deploy with Vercel](https://vercel.com/button)](https://vercel.com/new/clone?demo-title=bingo&demo-description=bingo&demo-url=https%3A%2F%2Fbing.github1s.tk%2F&project-name=bingo&repository-name=bingo&repository-url=https%3A%2F%2Fgithub.com%2Fweaigc%2Fbingo&from=templates&skippable-integrations=1&env=BING_HEADER&envDescription=%E5%A6%82%E6%9E%9C%E4%B8%8D%E7%9F%A5%E9%81%93%E6%80%8E%E4%B9%88%E9%85%8D%E7%BD%AE%E8%AF%B7%E7%82%B9%E5%8F%B3%E4%BE%A7Learn+More&envLink=https%3A%2F%2Fgithub.com%2Fweaigc%2Fbingo%2Fblob%2Fmain%2F.env.example)
-
-#### Deploy to Render
-
-[![Deploy to Render](https://render.com/images/deploy-to-render-button.svg)](https://render.com/deploy?repo=https://github.com/weaigc/bingo)
-
-
-## Requirements
-
-- Node.js >= 18
-- Bing AI [credentials](#how-to-get-bing_header)
-
-## Installation and usage
-
-* Run with Node
-
-```bash
-git clone https://github.com/weaigc/bingo.git
-npm i # pnpm i is recommended
-npm run build
-npm run start
-```
-
-* Run with Docker
-```bash
-docker pull weaigc/bingo
-docker run --rm -it -p 7860:7860 weaigc/bingo
-# or
-docker run --rm -it -e BING_HEADER=xxxx -p 7860:7860 weaigc/bingo
-```
-
-## How to get BING_HEADER
-> Setting BING_HEADER means sharing your own account with everyone who uses this service. If you do not need login-free image generation, it is better not to set this variable.
-
-Open https://www.bing.com and sign in, then visit https://www.bing.com/turing/captcha/challenge, pass the human verification, and then
-
-![BING HEADER](./docs/images/curl.png)
-
-> The copied content should look like the example below. Once you have confirmed the format, open https://effulgent-bubblegum-e2f5df.netlify.app/#dialog=%22settings%22 , paste it in, click "Convert to BING_HEADER and copy", and then paste the result from your clipboard. (You can also verify it on that page first.)
-
-The following is a format reference. Note that the format saved from the web page starts with `curl`, while the server-side `BING_HEADER` is in `base64` form; the two are not interchangeable.
-
                -正常格式/网页端保存的格式(格式仅供参考) - -``` -curl 'https://www.bing.com/turing/captcha/challenge' \ - -H 'authority: www.bing.com' \ - -H 'accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.7' \ - -H 'accept-language: zh-CN,zh;q=0.9,en;q=0.8,en-GB;q=0.7,en-US;q=0.6' \ - -H 'cache-control: max-age=0' \ - -H 'cookie: MicrosoftApplicationsTelemetryDeviceId=3399c004-fd0e-48ec-bb92-d82a27b2bbd4; _EDGE_V=1; SRCHD=AF=NOFORM; SRCHUID=V=2&GUID=29EBDDA4E6674329ACCF1A0A423C3E98&dmnchg=1; _UR=QS=0&TQS=0; _HPVN=CS=eyJQbiI6eyJDbiI6MSwiU3QiOjAsIlFzIjowLCJQcm9kIjoiUCJ9LCJTYyI6eyJDbiI6MSwiU3QiOjAsIlFzIjowLCJQcm9kIjoiSCJ9LCJReiI6eyJDbiI6MSwiU3QiOjAsIlFzIjowLCJQcm9kIjoiVCJ9LCJBcCI6dHJ1ZSwiTXV0ZSI6dHJ1ZSwiTGFkIjoiMjAyMy0wNy0yNVQwMDowMDowMFoiLCJJb3RkIjowLCJHd2IiOjAsIkRmdCI6bnVsbCwiTXZzIjowLCJGbHQiOjAsIkltcCI6Mn0=; _RwBf=ilt=1&ihpd=1&ispd=0&rc=0&rb=0&gb=0&rg=200&pc=0&mtu=0&rbb=0&g=0&cid=&clo=0&v=1&l=2023-07-25T07:00:00.0000000Z&lft=0001-01-01T00:00:00.0000000&aof=0&o=2&p=&c=&t=0&s=0001-01-01T00:00:00.0000000+00:00&ts=2023-07-25T11:00:31.7111548+00:00&rwred=0&wls=&lka=0&lkt=0&TH=&dci=0; ANON=A=0043C6590EA808ED6E395059FFFFFFFF&E=1c8b&W=1; NAP=V=1.9&E=1c31&C=DnaMSbDN_4efZ_xXqBF3Daorjr53kYqYoaP8YHsupjmiXnysX7a37A&W=1; PPLState=1; KievRPSSecAuth=FABSBBRaTOJILtFsMkpLVWSG6AN6C/svRwNmAAAEgAAACMGUA7EGVSjGEAQBGHtNsc5sNL7unmJsfPJ2t6imfo4BeUJlAia3IpMTtMUy4PU/C5QAzRI5pODtsIee0+blgllXt/5IiWwGjwmdhivsFM597pRPkjARPfwsPhNLPNbJrCPNPHdje4Is78MnCADXw6/NBq2FL8V2/byw2fH6IuAMD2MvN/VvqpEa9ZxiDjZtENj4HEj0mO2SgzjfyEhVAkjvznJqU2rw/Q2tHmX94NAM2kzlzKF/hWPhCCUmu8IHLvCnHDS6mSptvJDDP/sp3ovtzOXkP1mlM/Xju5ftesUvccVEQGffXORa1dE5hEMbKIiKXz1tDdduSXE19g9/+mRMAjaQhpwhI8XmilCTx1adb1Ll5qK+VjC9GNfEZzcbsGBPVaOl+anG8rEMq+Xnhjo7J+NqTNolavHgcuV8kJsCeJZIged33UA8eOZeFo+wAECMguxMoSqgpGH+sthqynvD/FJD6r/tiU2N3uqVq8NE8V37asrN6T14Z0FGBJOe6ET1+PGApm3s11OY9/xhFEB9T5BEPUGEbvRcLcW2ncFQX0EU+xweiPqo1Q1hNUg/dCtSI+lZ7c2H8XheePZavZ0TJQ8oNCSAuKiTqJmI0fVGpwbXwfaADkEipuawz3fIuMJBNgMU0OtA7Hm59v2fGLIBuvi6YeKS6GgVk3BIPf+P/eKahwozrxQZaFnoHTSqMkvct7xCP4atBROfXKf5Ww0CcFKp+2WX9BIskTOo2jjk6bAyyYJ+ElUB1fgLKNk5m/YSMc9iYCLIBMIGN8F0Yvy3tZ7cvh7Ue5Klo98US/I+nW1G7ZJMHRgUO8h8lpneHqEMegKd8gynO4VF7RpCjJkunDmW0Ta+RkXAP619pg0dqHMFkoOgknN78oBbGTV6fJUKotv+vi61kLhAeXZGWoHGCRXh2wUC6YgfPgKA6ESRNHtFn7E5B3HHpLc5rVMDSNhKZYfdhupV4Ezf6+5DhMcZLZhi0kk+ivDiN1gdHlVtSN55xpvf+c+XZDzR0uhgcvgy0LAbmzgk6y4WbYH+LQsMpzNNj+aC72vMiWovWrKh9jY4MYCmdgxsS/skPtLdp18muiEIRXTbZQGUmhxFpJAIbBIsCscMpzL0BgeujxUwM5wr79Sd9r4xwbgSMwmBlBfUHRVBdNyg8feepeJbCS63nD6eHOuLqMRsPIio3w/ki/EAa92UUEiZeavLsMUD/y/qAvWUdzdP5Y+C/TM+CMGS/kGL4LEdY/28MQeTvU1qv1X21kQt2aiaj3pPVL36hAzxbcLgqcMo9oymDRy87kdCXW/+g4oKLtMh6fm/G6W6Y/B01JlxohyyvueHQIG557uzkEkTJ3FnOVODSKBKpb3WZ65rExfV71zSZa25F3GmpaIG6HiYrX2YYhQAkIE9pKEQBHbnwHuwNDGottZTXZw=; WLS=C=9df3f9d8518fae19&N=wen; WLID=pGY8HgWCu4p5XYCOk2oa0+DBdftkMUfmNIn8XtSjSTKsgv/Il7GUlYs0Jpjf/E12jZMgV7x44Dy3fXOgjjUoJx7Y/ClLrLhsk20THksJJoI=; _EDGE_S=F=1&SID=17CF6EE006426448213C7DB907436588&mkt=zh-CN; MUID=225621093D8A6C27301632413C0E6D08; MUIDB=225621093D8A6C27301632413C0E6D08; SUID=A; SNRHOP=I=&TS=; _U=nGyzKQruEsDwLiu65fZFIG6e12hf2lwTJmroW__k8joUJIKmG3OIjayXKGW9dCVR3sNhF76mEVxyW6yjUGPodOfjtSa3s3J_DxMOrEK1BqXCOBI9bC66spAIASV7prsYFlVAJz73jVNENp_tBubLHJy6EbT0BKRe4AjrYkH-9uMnmCKB8Zmyg; _SS=SID=17CF6EE006426448213C7DB907436588&R=0&RB=0&GB=0&RG=200&RP=0&PC=U531; SRCHS=PC=U531; 
USRLOC=HS=1&ELOC=LAT=22.501529693603516|LON=113.9263687133789|N=%E5%8D%97%E5%B1%B1%E5%8C%BA%EF%BC%8C%E5%B9%BF%E4%B8%9C%E7%9C%81|ELT=2|&CLOC=LAT=22.50153029046461|LON=113.92637070632928|A=733.4464586120832|TS=230726151034|SRC=W; SRCHUSR=DOB=20230725&T=1690384908000&POEX=W; ipv6=hit=1690388509974&t=6; SRCHHPGUSR=HV=1690384945&SRCHLANG=zh-Hans&PV=15.0.0&BRW=MW&BRH=MT&CW=410&CH=794&SCW=410&SCH=794&DPR=1.5&UTC=480&DM=0&WTS=63825879627&PRVCW=410&PRVCH=794&PR=1.5; cct=AjWIBYOoVP-Afq6gWwtx80If6yHn6iBuEVHA1XHdAKpny6Y_CVyi_MSyM94VyMWnjdYkkccVtm3czoIAtXUGQA; GC=AjWIBYOoVP-Afq6gWwtx80If6yHn6iBuEVHA1XHdAKpR3Y_D9Ytcks4Ht6XhadXk75dvhzP4YOUS0UmoEyqyxw' \ - -H 'dnt: 1' \ - -H 'sec-ch-ua: "Chromium";v="116", "Not)A;Brand";v="24", "Microsoft Edge";v="116"' \ - -H 'sec-ch-ua-arch: "x86"' \ - -H 'sec-ch-ua-bitness: "64"' \ - -H 'sec-ch-ua-full-version: "116.0.1938.29"' \ - -H 'sec-ch-ua-full-version-list: "Chromium";v="116.0.5845.42", "Not)A;Brand";v="24.0.0.0", "Microsoft Edge";v="116.0.1938.29"' \ - -H 'sec-ch-ua-mobile: ?0' \ - -H 'sec-ch-ua-model: ""' \ - -H 'sec-ch-ua-platform: "Windows"' \ - -H 'sec-ch-ua-platform-version: "15.0.0"' \ - -H 'sec-fetch-dest: document' \ - -H 'sec-fetch-mode: navigate' \ - -H 'sec-fetch-site: none' \ - -H 'sec-fetch-user: ?1' \ - -H 'sec-ms-gec: B3F47AD4A283CAB374C0451C46AAFD147C6A4DACAFF6A1C13F34B2C72B024494' \ - -H 'sec-ms-gec-version: 1-116.0.1938.29' \ - -H 'upgrade-insecure-requests: 1' \ - -H 'user-agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/116.0.0.0 Safari/537.36 Edg/116.0.0.0' \ - -H 'x-client-data: eyIxIjoiMiIsIjEwIjoiXCJTMGg3R05HOTF2aDQ1TUZSUnZ5NHN2akRmMWdlaVJKenNxNlA3aU1WbnF3PVwiIiwiMiI6IjEiLCIzIjoiMSIsIjQiOiIyMTU4ODQ5NTM4MjY4OTM5NTA3IiwiNSI6IlwiSm9GUWpPTDk3OS9MbkRRZnlCd2N1M2FsOUN3eTZTQmdaMGNYMXBtOWVMZz1cIiIsIjYiOiJiZXRhIiwiNyI6IjE4MDM4ODYyNjQzNSIsIjkiOiJkZXNrdG9wIn0=' \ - -H 'x-edge-shopping-flag: 1' \ - --compressed -``` -
                - -
                -转成base64之后的格式(BING_HEADER只能使用 base64 之后的格式) - -``` -Y3VybCAnaHR0cHM6Ly93d3cuYmluZy5jb20vdHVyaW5nL2NvbnZlcnNhdGlvbi9jcmVhdGUnIFwgICAtSCAnYXV0aG9yaXR5OiB3d3cuYmluZy5jb20nIFwgICAtSCAnYWNjZXB0OiB0ZXh0L2h0bWwsYXBwbGljYXRpb24veGh0bWwreG1sLGFwcGxpY2F0aW9uL3htbDtxPTAuOSxpbWFnZS93ZWJwLGltYWdlL2FwbmcsKi8qO3E9MC44LGFwcGxpY2F0aW9uL3NpZ25lZC1leGNoYW5nZTt2PWIzO3E9MC43JyBcICAgLUggJ2FjY2VwdC1sYW5ndWFnZTogemgtQ04semg7cT0wLjksZW47cT0wLjgsZW4tR0I7cT0wLjcsZW4tVVM7cT0wLjYnIFwgICAtSCAnY2FjaGUtY29udHJvbDogbWF4LWFnZT0wJyBcICAgLUggJ2Nvb2tpZTogTWljcm9zb2Z0QXBwbGljYXRpb25zVGVsZW1ldHJ5RGV2aWNlSWQ9MzM5OWMwMDQtZmQwZS00OGVjLWJiOTItZDgyYTI3YjJiYmQ0OyBfRURHRV9WPTE7IFNSQ0hEPUFGPU5PRk9STTsgU1JDSFVJRD1WPTImR1VJRD0yOUVCRERBNEU2Njc0MzI5QUNDRjFBMEE0MjNDM0U5OCZkbW5jaGc9MTsgX1VSPVFTPTAmVFFTPTA7IF9IUFZOPUNTPWV5SlFiaUk2ZXlKRGJpSTZNU3dpVTNRaU9qQXNJbEZ6SWpvd0xDSlFjbTlrSWpvaVVDSjlMQ0pUWXlJNmV5SkRiaUk2TVN3aVUzUWlPakFzSWxGeklqb3dMQ0pRY205a0lqb2lTQ0o5TENKUmVpSTZleUpEYmlJNk1Td2lVM1FpT2pBc0lsRnpJam93TENKUWNtOWtJam9pVkNKOUxDSkJjQ0k2ZEhKMVpTd2lUWFYwWlNJNmRISjFaU3dpVEdGa0lqb2lNakF5TXkwd055MHlOVlF3TURvd01Eb3dNRm9pTENKSmIzUmtJam93TENKSGQySWlPakFzSWtSbWRDSTZiblZzYkN3aVRYWnpJam93TENKR2JIUWlPakFzSWtsdGNDSTZNbjA9OyBfUndCZj1pbHQ9MSZpaHBkPTEmaXNwZD0wJnJjPTAmcmI9MCZnYj0wJnJnPTIwMCZwYz0wJm10dT0wJnJiYj0wJmc9MCZjaWQ9JmNsbz0wJnY9MSZsPTIwMjMtMDctMjVUMDc6MDA6MDAuMDAwMDAwMFombGZ0PTAwMDEtMDEtMDFUMDA6MDA6MDAuMDAwMDAwMCZhb2Y9MCZvPTImcD0mYz0mdD0wJnM9MDAwMS0wMS0wMVQwMDowMDowMC4wMDAwMDAwKzAwOjAwJnRzPTIwMjMtMDctMjVUMTE6MDA6MzEuNzExMTU0OCswMDowMCZyd3JlZD0wJndscz0mbGthPTAmbGt0PTAmVEg9JmRjaT0wOyBBTk9OPUE9MDA0M0M2NTkwRUE4MDhFRDZFMzk1MDU5RkZGRkZGRkYmRT0xYzhiJlc9MTsgTkFQPVY9MS45JkU9MWMzMSZDPURuYU1TYkROXzRlZlpfeFhxQkYzRGFvcmpyNTNrWXFZb2FQOFlIc3Vwam1pWG55c1g3YTM3QSZXPTE7IFBQTFN0YXRlPTE7IEtpZXZSUFNTZWNBdXRoPUZBQlNCQlJhVE9KSUx0RnNNa3BMVldTRzZBTjZDL3N2UndObUFBQUVnQUFBQ01HVUE3RUdWU2pHRUFRQkdIdE5zYzVzTkw3dW5tSnNmUEoydDZpbWZvNEJlVUpsQWlhM0lwTVR0TVV5NFBVL0M1UUF6Ukk1cE9EdHNJZWUwK2JsZ2xsWHQvNUlpV3dHandtZGhpdnNGTTU5N3BSUGtqQVJQZndzUGhOTFBOYkpyQ1BOUEhkamU0SXM3OE1uQ0FEWHc2L05CcTJGTDhWMi9ieXcyZkg2SXVBTUQyTXZOL1Z2cXBFYTlaeGlEalp0RU5qNEhFajBtTzJTZ3pqZnlFaFZBa2p2em5KcVUycncvUTJ0SG1YOTROQU0ya3psektGL2hXUGhDQ1VtdThJSEx2Q25IRFM2bVNwdHZKRERQL3NwM292dHpPWGtQMW1sTS9YanU1ZnRlc1V2Y2NWRVFHZmZYT1JhMWRFNWhFTWJLSWlLWHoxdERkZHVTWEUxOWc5LyttUk1BamFRaHB3aEk4WG1pbENUeDFhZGIxTGw1cUsrVmpDOUdOZkVaemNic0dCUFZhT2wrYW5HOHJFTXErWG5oam83SitOcVROb2xhdkhnY3VWOGtKc0NlSlpJZ2VkMzNVQThlT1plRm8rd0FFQ01ndXhNb1NxZ3BHSCtzdGhxeW52RC9GSkQ2ci90aVUyTjN1cVZxOE5FOFYzN2Fzck42VDE0WjBGR0JKT2U2RVQxK1BHQXBtM3MxMU9ZOS94aEZFQjlUNUJFUFVHRWJ2UmNMY1cybmNGUVgwRVUreHdlaVBxbzFRMWhOVWcvZEN0U0krbFo3YzJIOFhoZWVQWmF2WjBUSlE4b05DU0F1S2lUcUptSTBmVkdwd2JYd2ZhQURrRWlwdWF3ejNmSXVNSkJOZ01VME90QTdIbTU5djJmR0xJQnV2aTZZZUtTNkdnVmszQklQZitQL2VLYWh3b3pyeFFaYUZub0hUU3FNa3ZjdDd4Q1A0YXRCUk9mWEtmNVd3MENjRktwKzJXWDlCSXNrVE9vMmpqazZiQXl5WUorRWxVQjFmZ0xLTms1bS9ZU01jOWlZQ0xJQk1JR044RjBZdnkzdFo3Y3ZoN1VlNUtsbzk4VVMvSStuVzFHN1pKTUhSZ1VPOGg4bHBuZUhxRU1lZ0tkOGd5bk80VkY3UnBDakprdW5EbVcwVGErUmtYQVA2MTlwZzBkcUhNRmtvT2drbk43OG9CYkdUVjZmSlVLb3R2K3ZpNjFrTGhBZVhaR1dvSEdDUlhoMndVQzZZZ2ZQZ0tBNkVTUk5IdEZuN0U1QjNISHBMYzVyVk1EU05oS1pZZmRodXBWNEV6ZjYrNURoTWNaTFpoaTBraytpdkRpTjFnZEhsVnRTTjU1eHB2ZitjK1haRHpSMHVoZ2N2Z3kwTEFibXpnazZ5NFdiWUgrTFFzTXB6Tk5qK2FDNzJ2TWlXb3ZXcktoOWpZNE1ZQ21kZ3hzUy9za1B0TGRwMThtdWlFSVJYVGJaUUdVbWh4RnBKQUliQklzQ3NjTXB6TDBCZ2V1anhVd001d3I3OVNkOXI0eHdiZ1NNd21CbEJmVUhSVkJkTnlnOGZlZXBlSmJDUzYzbkQ2ZUhPdUxxTVJzUElpbzN3L2tpL0VBYTkyVVVFaVplYXZMc01VRC95L3FBdldVZHpkUDVZK0MvVE0rQ01HUy9rR0w0TEVkWS8yOE1RZVR2VTFxdjFYMjFrUXQyYWlhajNwUFZMMzZoQXp4YmNMZ3FjTW85b3
ltRFJ5ODdrZENYVy8rZzRvS0x0TWg2Zm0vRzZXNlkvQjAxSmx4b2h5eXZ1ZUhRSUc1NTd1emtFa1RKM0ZuT1ZPRFNLQktwYjNXWjY1ckV4ZlY3MXpTWmEyNUYzR21wYUlHNkhpWXJYMllZaFFBa0lFOXBLRVFCSGJud0h1d05ER290dFpUWFp3PTsgV0xTPUM9OWRmM2Y5ZDg1MThmYWUxOSZOPXdlbjsgV0xJRD1wR1k4SGdXQ3U0cDVYWUNPazJvYTArREJkZnRrTVVmbU5JbjhYdFNqU1RLc2d2L0lsN0dVbFlzMEpwamYvRTEyalpNZ1Y3eDQ0RHkzZlhPZ2pqVW9KeDdZL0NsTHJMaHNrMjBUSGtzSkpvST07IF9FREdFX1M9Rj0xJlNJRD0xN0NGNkVFMDA2NDI2NDQ4MjEzQzdEQjkwNzQzNjU4OCZta3Q9emgtQ047IE1VSUQ9MjI1NjIxMDkzRDhBNkMyNzMwMTYzMjQxM0MwRTZEMDg7IE1VSURCPTIyNTYyMTA5M0Q4QTZDMjczMDE2MzI0MTNDMEU2RDA4OyBTVUlEPUE7IFNOUkhPUD1JPSZUUz07IF9VPW5HeXpLUXJ1RXNEd0xpdTY1ZlpGSUc2ZTEyaGYybHdUSm1yb1dfX2s4am9VSklLbUczT0lqYXlYS0dXOWRDVlIzc05oRjc2bUVWeHlXNnlqVUdQb2RPZmp0U2EzczNKX0R4TU9yRUsxQnFYQ09CSTliQzY2c3BBSUFTVjdwcnNZRmxWQUp6NzNqVk5FTnBfdEJ1YkxISnk2RWJUMEJLUmU0QWpyWWtILTl1TW5tQ0tCOFpteWc7IF9TUz1TSUQ9MTdDRjZFRTAwNjQyNjQ0ODIxM0M3REI5MDc0MzY1ODgmUj0wJlJCPTAmR0I9MCZSRz0yMDAmUlA9MCZQQz1VNTMxOyBTUkNIUz1QQz1VNTMxOyBVU1JMT0M9SFM9MSZFTE9DPUxBVD0yMi41MDE1Mjk2OTM2MDM1MTZ8TE9OPTExMy45MjYzNjg3MTMzNzg5fE49JUU1JThEJTk3JUU1JUIxJUIxJUU1JThDJUJBJUVGJUJDJThDJUU1JUI5JUJGJUU0JUI4JTlDJUU3JTlDJTgxfEVMVD0yfCZDTE9DPUxBVD0yMi41MDE1MzAyOTA0NjQ2MXxMT049MTEzLjkyNjM3MDcwNjMyOTI4fEE9NzMzLjQ0NjQ1ODYxMjA4MzJ8VFM9MjMwNzI2MTUxMDM0fFNSQz1XOyBTUkNIVVNSPURPQj0yMDIzMDcyNSZUPTE2OTAzODQ5MDgwMDAmUE9FWD1XOyBpcHY2PWhpdD0xNjkwMzg4NTA5OTc0JnQ9NjsgU1JDSEhQR1VTUj1IVj0xNjkwMzg0OTQ1JlNSQ0hMQU5HPXpoLUhhbnMmUFY9MTUuMC4wJkJSVz1NVyZCUkg9TVQmQ1c9NDEwJkNIPTc5NCZTQ1c9NDEwJlNDSD03OTQmRFBSPTEuNSZVVEM9NDgwJkRNPTAmV1RTPTYzODI1ODc5NjI3JlBSVkNXPTQxMCZQUlZDSD03OTQmUFI9MS41OyBjY3Q9QWpXSUJZT29WUC1BZnE2Z1d3dHg4MElmNnlIbjZpQnVFVkhBMVhIZEFLcG55NllfQ1Z5aV9NU3lNOTRWeU1XbmpkWWtrY2NWdG0zY3pvSUF0WFVHUUE7IEdDPUFqV0lCWU9vVlAtQWZxNmdXd3R4ODBJZjZ5SG42aUJ1RVZIQTFYSGRBS3BSM1lfRDlZdGNrczRIdDZYaGFkWGs3NWR2aHpQNFlPVVMwVW1vRXlxeXh3JyBcICAgLUggJ2RudDogMScgXCAgIC1IICdzZWMtY2gtdWE6ICJDaHJvbWl1bSI7dj0iMTE2IiwgIk5vdClBO0JyYW5kIjt2PSIyNCIsICJNaWNyb3NvZnQgRWRnZSI7dj0iMTE2IicgXCAgIC1IICdzZWMtY2gtdWEtYXJjaDogIng4NiInIFwgICAtSCAnc2VjLWNoLXVhLWJpdG5lc3M6ICI2NCInIFwgICAtSCAnc2VjLWNoLXVhLWZ1bGwtdmVyc2lvbjogIjExNi4wLjE5MzguMjkiJyBcICAgLUggJ3NlYy1jaC11YS1mdWxsLXZlcnNpb24tbGlzdDogIkNocm9taXVtIjt2PSIxMTYuMC41ODQ1LjQyIiwgIk5vdClBO0JyYW5kIjt2PSIyNC4wLjAuMCIsICJNaWNyb3NvZnQgRWRnZSI7dj0iMTE2LjAuMTkzOC4yOSInIFwgICAtSCAnc2VjLWNoLXVhLW1vYmlsZTogPzAnIFwgICAtSCAnc2VjLWNoLXVhLW1vZGVsOiAiIicgXCAgIC1IICdzZWMtY2gtdWEtcGxhdGZvcm06ICJXaW5kb3dzIicgXCAgIC1IICdzZWMtY2gtdWEtcGxhdGZvcm0tdmVyc2lvbjogIjE1LjAuMCInIFwgICAtSCAnc2VjLWZldGNoLWRlc3Q6IGRvY3VtZW50JyBcICAgLUggJ3NlYy1mZXRjaC1tb2RlOiBuYXZpZ2F0ZScgXCAgIC1IICdzZWMtZmV0Y2gtc2l0ZTogbm9uZScgXCAgIC1IICdzZWMtZmV0Y2gtdXNlcjogPzEnIFwgICAtSCAnc2VjLW1zLWdlYzogQjNGNDdBRDRBMjgzQ0FCMzc0QzA0NTFDNDZBQUZEMTQ3QzZBNERBQ0FGRjZBMUMxM0YzNEIyQzcyQjAyNDQ5NCcgXCAgIC1IICdzZWMtbXMtZ2VjLXZlcnNpb246IDEtMTE2LjAuMTkzOC4yOScgXCAgIC1IICd1cGdyYWRlLWluc2VjdXJlLXJlcXVlc3RzOiAxJyBcICAgLUggJ3VzZXItYWdlbnQ6IE1vemlsbGEvNS4wIChXaW5kb3dzIE5UIDEwLjA7IFdpbjY0OyB4NjQpIEFwcGxlV2ViS2l0LzUzNy4zNiAoS0hUTUwsIGxpa2UgR2Vja28pIENocm9tZS8xMTYuMC4wLjAgU2FmYXJpLzUzNy4zNiBFZGcvMTE2LjAuMC4wJyBcICAgLUggJ3gtY2xpZW50LWRhdGE6IGV5SXhJam9pTWlJc0lqRXdJam9pWENKVE1HZzNSMDVIT1RGMmFEUTFUVVpTVW5aNU5ITjJha1JtTVdkbGFWSktlbk54TmxBM2FVMVdibkYzUFZ3aUlpd2lNaUk2SWpFaUxDSXpJam9pTVNJc0lqUWlPaUl5TVRVNE9EUTVOVE00TWpZNE9UTTVOVEEzSWl3aU5TSTZJbHdpU205R1VXcFBURGszT1M5TWJrUlJabmxDZDJOMU0yRnNPVU4zZVRaVFFtZGFNR05ZTVhCdE9XVk1aejFjSWlJc0lqWWlPaUppWlhSaElpd2lOeUk2SWpFNE1ETTRPRFl5TmpRek5TSXNJamtpT2lKa1pYTnJkRzl3SW4wPScgXCAgIC1IICd4LWVkZ2Utc2hvcHBpbmctZmxhZzogMScgXCAgIC0tY29tcHJlc3NlZA== -``` -
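Judging from the two examples above, the server-side `BING_HEADER` appears to be nothing more than the base64 encoding of the copied `curl` command. If you would rather not use the web converter, a minimal sketch of the same conversion (assuming the copied command has been saved verbatim to a hypothetical `bing_header.txt`) looks like this:

```python
import base64

# Assumption: bing_header.txt holds the curl command copied from the browser,
# exactly as shown in the plain-format example above.
with open("bing_header.txt", "rb") as f:
    print(base64.b64encode(f.read()).decode())
```

Paste the printed string into the `BING_HEADER` environment variable, for example via `docker run -e BING_HEADER=...` as shown earlier.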
                - - -## 鸣谢 - - 感谢 [EdgeGPT](https://github.com/acheong08/EdgeGPT) 提供的代理 API 的方法。 - - 感谢 [Vercel AI](https://github.com/vercel-labs/ai-chatbot) 提供的基础脚手架和 [ChatHub](https://github.com/chathub-dev/chathub) [go-proxy-bingai](https://github.com/adams549659584/go-proxy-bingai) 提供的部分代码。 - - -## 答疑及交流 - - - -## License - -MIT © [LICENSE](https://github.com/weaigc/bingo/blob/main/LICENSE). - - diff --git a/spaces/yangogo/bingo/src/lib/hooks/use-bing.ts b/spaces/yangogo/bingo/src/lib/hooks/use-bing.ts deleted file mode 100644 index dcdb1667ced0cba299b0825c0e91c4732411308c..0000000000000000000000000000000000000000 --- a/spaces/yangogo/bingo/src/lib/hooks/use-bing.ts +++ /dev/null @@ -1,173 +0,0 @@ -'use client' - -import { useState, useCallback, useEffect, useMemo } from 'react' -import { useAtom, useAtomValue } from 'jotai' -import { chatFamily, bingConversationStyleAtom, GreetMessages, hashAtom, voiceAtom } from '@/state' -import { setConversationMessages } from './chat-history' -import { ChatMessageModel, BotId, FileItem } from '@/lib/bots/bing/types' -import { nanoid } from '../utils' -import { TTS } from '../bots/bing/tts' - -export function useBing(botId: BotId = 'bing') { - const chatAtom = useMemo(() => chatFamily({ botId, page: 'singleton' }), [botId]) - const [enableTTS] = useAtom(voiceAtom) - const speaker = useMemo(() => new TTS(), []) - const [hash, setHash] = useAtom(hashAtom) - const bingConversationStyle = useAtomValue(bingConversationStyleAtom) - const [chatState, setChatState] = useAtom(chatAtom) - const [input, setInput] = useState('') - const [attachmentList, setAttachmentList] = useState([]) - - const updateMessage = useCallback( - (messageId: string, updater: (message: ChatMessageModel) => void) => { - setChatState((draft) => { - const message = draft.messages.find((m) => m.id === messageId) - if (message) { - updater(message) - } - }) - }, - [setChatState], - ) - - const sendMessage = useCallback( - async (input: string, options = {}) => { - const botMessageId = nanoid() - const imageUrl = attachmentList?.[0]?.status === 'loaded' ? attachmentList[0].url : undefined - setChatState((draft) => { - const text = imageUrl ? `${input}\n\n![image](${imageUrl})` : input - draft.messages.push({ id: nanoid(), text, author: 'user' }, { id: botMessageId, text: '', author: 'bot' }) - setAttachmentList([]) - }) - const abortController = new AbortController() - setChatState((draft) => { - draft.generatingMessageId = botMessageId - draft.abortController = abortController - }) - speaker.reset() - await chatState.bot.sendMessage({ - prompt: input, - imageUrl: /\?bcid=([^&]+)/.test(imageUrl ?? '') ? 
`https://www.bing.com/images/blob?bcid=${RegExp.$1}` : imageUrl, - options: { - ...options, - bingConversationStyle, - }, - signal: abortController.signal, - onEvent(event) { - if (event.type === 'UPDATE_ANSWER') { - updateMessage(botMessageId, (message) => { - if (event.data.text.length > message.text.length) { - message.text = event.data.text - } - - if (event.data.spokenText && enableTTS) { - speaker.speak(event.data.spokenText) - } - - message.throttling = event.data.throttling || message.throttling - message.sourceAttributions = event.data.sourceAttributions || message.sourceAttributions - message.suggestedResponses = event.data.suggestedResponses || message.suggestedResponses - }) - } else if (event.type === 'ERROR') { - updateMessage(botMessageId, (message) => { - message.error = event.error - }) - setChatState((draft) => { - draft.abortController = undefined - draft.generatingMessageId = '' - }) - } else if (event.type === 'DONE') { - setChatState((draft) => { - draft.abortController = undefined - draft.generatingMessageId = '' - }) - } - }, - }) - }, - [botId, attachmentList, chatState.bot, setChatState, updateMessage], - ) - - const uploadImage = useCallback(async (imgUrl: string) => { - setAttachmentList([{ url: imgUrl, status: 'loading' }]) - const response = await chatState.bot.uploadImage(imgUrl, bingConversationStyle) - if (response?.blobId) { - setAttachmentList([{ url: `/api/blob?bcid=${response.blobId}`, status: 'loaded' }]) - } else { - setAttachmentList([{ url: imgUrl, status: 'error' }]) - } - }, [chatState.bot]) - - const resetConversation = useCallback(() => { - chatState.bot.resetConversation() - speaker.abort() - setChatState((draft) => { - draft.abortController = undefined - draft.generatingMessageId = '' - draft.messages = [{ author: 'bot', text: GreetMessages[Math.floor(GreetMessages.length * Math.random())], id: nanoid() }] - draft.conversationId = nanoid() - }) - }, [chatState.bot, setChatState]) - - const stopGenerating = useCallback(() => { - chatState.abortController?.abort() - if (chatState.generatingMessageId) { - updateMessage(chatState.generatingMessageId, (message) => { - if (!message.text && !message.error) { - message.text = 'Cancelled' - } - }) - } - setChatState((draft) => { - draft.generatingMessageId = '' - }) - }, [chatState.abortController, chatState.generatingMessageId, setChatState, updateMessage]) - - useEffect(() => { - if (chatState.messages.length) { - setConversationMessages(botId, chatState.conversationId, chatState.messages) - } - }, [botId, chatState.conversationId, chatState.messages]) - - useEffect(() => { - if (hash === 'reset') { - resetConversation() - setHash('') - } - }, [hash, setHash]) - - const chat = useMemo( - () => ({ - botId, - bot: chatState.bot, - isSpeaking: speaker.isSpeaking, - messages: chatState.messages, - sendMessage, - setInput, - input, - resetConversation, - generating: !!chatState.generatingMessageId, - stopGenerating, - uploadImage, - setAttachmentList, - attachmentList, - }), - [ - botId, - bingConversationStyle, - chatState.bot, - chatState.generatingMessageId, - chatState.messages, - speaker.isSpeaking, - setInput, - input, - setAttachmentList, - attachmentList, - resetConversation, - sendMessage, - stopGenerating, - ], - ) - - return chat -} diff --git a/spaces/ygtxr1997/ReliableSwap_Demo/third_party/PIPNet/FaceBoxesV2/faceboxes_detector.py b/spaces/ygtxr1997/ReliableSwap_Demo/third_party/PIPNet/FaceBoxesV2/faceboxes_detector.py deleted file mode 100644 index 
b953b85ce20424025e3127fe60f7f9021078da87..0000000000000000000000000000000000000000 --- a/spaces/ygtxr1997/ReliableSwap_Demo/third_party/PIPNet/FaceBoxesV2/faceboxes_detector.py +++ /dev/null @@ -1,124 +0,0 @@ -from third_party.PIPNet.FaceBoxesV2.detector import Detector -import cv2, os -import numpy as np -import torch -import torch.nn as nn -from third_party.PIPNet.FaceBoxesV2.utils.config import cfg -from third_party.PIPNet.FaceBoxesV2.utils.prior_box import PriorBox -from third_party.PIPNet.FaceBoxesV2.utils.nms_wrapper import nms -from third_party.PIPNet.FaceBoxesV2.utils.faceboxes import FaceBoxesV2 -from third_party.PIPNet.FaceBoxesV2.utils.box_utils import decode -import time - - -class FaceBoxesDetector(Detector): - def __init__(self, model_arch, model_weights, use_gpu, device): - super().__init__(model_arch, model_weights) - self.name = "FaceBoxesDetector" - self.net = FaceBoxesV2( - phase="test", size=None, num_classes=2 - ) # initialize detector - self.use_gpu = use_gpu - self.device = device - - state_dict = torch.load(self.model_weights, map_location=self.device) - # create new OrderedDict that does not contain `module.` - from collections import OrderedDict - - new_state_dict = OrderedDict() - for k, v in state_dict.items(): - name = k[7:] # remove `module.` - new_state_dict[name] = v - # load params - self.net.load_state_dict(new_state_dict) - self.net = self.net.to(self.device) - self.net.eval() - - def detect(self, image, thresh=0.6, im_scale=None): - # auto resize for large images - if im_scale is None: - height, width, _ = image.shape - if min(height, width) > 600: - im_scale = 600.0 / min(height, width) - else: - im_scale = 1 - image_scale = cv2.resize( - image, None, None, fx=im_scale, fy=im_scale, interpolation=cv2.INTER_LINEAR - ) - - scale = torch.Tensor( - [ - image_scale.shape[1], - image_scale.shape[0], - image_scale.shape[1], - image_scale.shape[0], - ] - ) - image_scale = ( - torch.from_numpy(image_scale.transpose(2, 0, 1)).to(self.device).int() - ) - mean_tmp = torch.IntTensor([104, 117, 123]).to(self.device) - mean_tmp = mean_tmp.unsqueeze(1).unsqueeze(2) - image_scale -= mean_tmp - image_scale = image_scale.float().unsqueeze(0) - scale = scale.to(self.device) - - with torch.no_grad(): - out = self.net(image_scale) - # priorbox = PriorBox(cfg, out[2], (image_scale.size()[2], image_scale.size()[3]), phase='test') - priorbox = PriorBox( - cfg, image_size=(image_scale.size()[2], image_scale.size()[3]) - ) - priors = priorbox.forward() - priors = priors.to(self.device) - loc, conf = out - prior_data = priors.data - boxes = decode(loc.data.squeeze(0), prior_data, cfg["variance"]) - boxes = boxes * scale - boxes = boxes.cpu().numpy() - scores = conf.data.cpu().numpy()[:, 1] - - # ignore low scores - inds = np.where(scores > thresh)[0] - boxes = boxes[inds] - scores = scores[inds] - - # keep top-K before NMS - order = scores.argsort()[::-1][:5000] - boxes = boxes[order] - scores = scores[order] - - # do NMS - dets = np.hstack((boxes, scores[:, np.newaxis])).astype( - np.float32, copy=False - ) - keep = nms(dets, 0.3) - dets = dets[keep, :] - - dets = dets[:750, :] - detections_scale = [] - for i in range(dets.shape[0]): - xmin = int(dets[i][0]) - ymin = int(dets[i][1]) - xmax = int(dets[i][2]) - ymax = int(dets[i][3]) - score = dets[i][4] - width = xmax - xmin - height = ymax - ymin - detections_scale.append(["face", score, xmin, ymin, width, height]) - - # adapt bboxes to the original image size - if len(detections_scale) > 0: - detections_scale = [ - [ - det[0], - 
det[1], - int(det[2] / im_scale), - int(det[3] / im_scale), - int(det[4] / im_scale), - int(det[5] / im_scale), - ] - for det in detections_scale - ] - - return detections_scale, im_scale diff --git a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/modeling_tf_utils.py b/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/modeling_tf_utils.py deleted file mode 100644 index 6505a2ec6dd743910326597abc5e79c1b9ed746d..0000000000000000000000000000000000000000 --- a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/modeling_tf_utils.py +++ /dev/null @@ -1,3454 +0,0 @@ -# coding=utf-8 -# Copyright 2018 The Google AI Language Team Authors and The HuggingFace Inc. team. -# Copyright (c) 2018, NVIDIA CORPORATION. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -"""TF general model utils.""" - -from __future__ import annotations - -import functools -import gc -import inspect -import json -import os -import pickle -import re -import warnings -from collections.abc import Mapping -from pathlib import Path -from typing import TYPE_CHECKING, Any, Callable, Dict, List, Optional, Union - -import h5py -import numpy as np -import tensorflow as tf -from huggingface_hub import Repository, list_repo_files -from keras import backend as K -from packaging.version import parse -from tensorflow.python.util.keras_deps import get_call_context_function - -from . import DataCollatorWithPadding, DefaultDataCollator -from .activations_tf import get_tf_activation -from .configuration_utils import PretrainedConfig -from .dynamic_module_utils import custom_object_save -from .generation import GenerationConfig, TFGenerationMixin -from .tf_utils import ( - expand_1d, - load_attributes_from_hdf5_group, - save_attributes_to_hdf5_group, - shape_list, -) -from .utils import ( - SAFE_WEIGHTS_INDEX_NAME, - SAFE_WEIGHTS_NAME, - TF2_WEIGHTS_INDEX_NAME, - TF2_WEIGHTS_NAME, - TF_WEIGHTS_NAME, - WEIGHTS_INDEX_NAME, - WEIGHTS_NAME, - ModelOutput, - PushToHubMixin, - cached_file, - download_url, - find_labels, - has_file, - is_offline_mode, - is_remote_url, - is_safetensors_available, - is_tf_symbolic_tensor, - logging, - requires_backends, - working_or_temp_dir, -) -from .utils.hub import convert_file_size_to_int, get_checkpoint_shard_files - - -if is_safetensors_available(): - from safetensors import safe_open - from safetensors.tensorflow import save_file as safe_save_file - -if TYPE_CHECKING: - from . import PreTrainedTokenizerBase - - -logger = logging.get_logger(__name__) -tf_logger = tf.get_logger() - -TFModelInputType = Union[ - List[tf.Tensor], - List[np.ndarray], - Dict[str, tf.Tensor], - Dict[str, np.ndarray], - tf.Tensor, - np.ndarray, -] - - -def dummy_loss(y_true, y_pred): - if y_pred.shape.rank <= 1: - return y_pred - else: - reduction_axes = list(range(1, y_pred.shape.rank)) - return tf.reduce_mean(y_pred, axis=reduction_axes) - - -class TFModelUtilsMixin: - """ - A few utilities for `tf.keras.Model`, to be used as a mixin. 
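    For example, a model class that mixes this in can report its size with
    ``model.num_parameters()``, or count only trainable weights with
    ``model.num_parameters(only_trainable=True)``.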
- """ - - def num_parameters(self, only_trainable: bool = False) -> int: - """ - Get the number of (optionally, trainable) parameters in the model. - - Args: - only_trainable (`bool`, *optional*, defaults to `False`): - Whether or not to return only the number of trainable parameters - - Returns: - `int`: The number of parameters. - """ - if only_trainable: - return int(sum(np.prod(w.shape.as_list()) for w in self.trainable_variables)) - else: - return self.count_params() - - -def keras_serializable(cls): - """ - Decorate a Keras Layer class to support Keras serialization. - - This is done by: - - 1. Adding a `transformers_config` dict to the Keras config dictionary in `get_config` (called by Keras at - serialization time. - 2. Wrapping `__init__` to accept that `transformers_config` dict (passed by Keras at deserialization time) and - convert it to a config object for the actual layer initializer. - 3. Registering the class as a custom object in Keras (if the Tensorflow version supports this), so that it does not - need to be supplied in `custom_objects` in the call to `tf.keras.models.load_model`. - - Args: - cls (a `tf.keras.layers.Layers subclass`): - Typically a `TF.MainLayer` class in this project, in general must accept a `config` argument to its - initializer. - - Returns: - The same class object, with modifications for Keras deserialization. - """ - initializer = cls.__init__ - - config_class = getattr(cls, "config_class", None) - if config_class is None: - raise AttributeError("Must set `config_class` to use @keras_serializable") - - @functools.wraps(initializer) - def wrapped_init(self, *args, **kwargs): - config = args[0] if args and isinstance(args[0], PretrainedConfig) else kwargs.pop("config", None) - - if isinstance(config, dict): - config = config_class.from_dict(config) - initializer(self, config, *args, **kwargs) - elif isinstance(config, PretrainedConfig): - if len(args) > 0: - initializer(self, *args, **kwargs) - else: - initializer(self, config, *args, **kwargs) - else: - raise ValueError("Must pass either `config` (PretrainedConfig) or `config` (dict)") - - self._config = config - self._kwargs = kwargs - - cls.__init__ = wrapped_init - - if not hasattr(cls, "get_config"): - raise TypeError("Only use @keras_serializable on tf.keras.layers.Layer subclasses") - if hasattr(cls.get_config, "_is_default"): - - def get_config(self): - cfg = super(cls, self).get_config() - cfg["config"] = self._config.to_dict() - cfg.update(self._kwargs) - return cfg - - cls.get_config = get_config - - cls._keras_serializable = True - if hasattr(tf.keras.utils, "register_keras_serializable"): - cls = tf.keras.utils.register_keras_serializable()(cls) - return cls - - -class TFCausalLanguageModelingLoss: - """ - Loss function suitable for causal language modeling (CLM), that is, the task of guessing the next token. - - - - Any label of -100 will be ignored (along with the corresponding logits) in the loss computation. 
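As a minimal sketch of the masking performed in the non-legacy branch below (tensor values are purely illustrative):

```py
>>> import tensorflow as tf
>>> labels = tf.constant([5, 7, -100])
>>> per_token_loss = tf.constant([0.2, 0.4, 9.9])         # placeholder per-token losses
>>> mask = tf.cast(labels != -100, per_token_loss.dtype)  # 1.0 for real tokens, 0.0 for -100
>>> tf.reduce_sum(per_token_loss * mask) / tf.reduce_sum(mask)  # mean over unmasked tokens only
```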
- - - """ - - def hf_compute_loss(self, labels, logits): - loss_fn = tf.keras.losses.SparseCategoricalCrossentropy( - from_logits=True, reduction=tf.keras.losses.Reduction.NONE - ) - if self.config.tf_legacy_loss: - # make sure only labels that are not equal to -100 affect the loss - active_loss = tf.not_equal(tf.reshape(labels, (-1,)), -100) - reduced_logits = tf.boolean_mask(tf.reshape(logits, (-1, shape_list(logits)[2])), active_loss) - labels = tf.boolean_mask(tf.reshape(labels, (-1,)), active_loss) - return loss_fn(labels, reduced_logits) - - # Clip negative labels to zero here to avoid NaNs and errors - those positions will get masked later anyway - unmasked_loss = loss_fn(tf.nn.relu(labels), logits) - # make sure only labels that are not equal to -100 affect the loss - loss_mask = tf.cast(labels != -100, dtype=unmasked_loss.dtype) - masked_loss = unmasked_loss * loss_mask - reduced_masked_loss = tf.reduce_sum(masked_loss) / tf.reduce_sum(loss_mask) - return tf.reshape(reduced_masked_loss, (1,)) - - -class TFQuestionAnsweringLoss: - """ - Loss function suitable for question answering. - """ - - def hf_compute_loss(self, labels, logits): - loss_fn = tf.keras.losses.SparseCategoricalCrossentropy( - from_logits=True, reduction=tf.keras.losses.Reduction.NONE - ) - start_loss = loss_fn(labels["start_position"], logits[0]) - end_loss = loss_fn(labels["end_position"], logits[1]) - - return (start_loss + end_loss) / 2.0 - - -class TFTokenClassificationLoss: - """ - Loss function suitable for token classification. - - - - Any label of -100 will be ignored (along with the corresponding logits) in the loss computation. - - - """ - - def hf_compute_loss(self, labels, logits): - loss_fn = tf.keras.losses.SparseCategoricalCrossentropy( - from_logits=True, reduction=tf.keras.losses.Reduction.NONE - ) - if tf.executing_eagerly(): # Data-dependent conditionals are forbidden in XLA - if tf.math.reduce_any(labels == -1): - tf.print("Using `-1` to mask the loss for the token is deprecated. Please use `-100` instead.") - - if self.config.tf_legacy_loss: - # make sure only labels that are not equal to -100 - # are taken into account as loss - if tf.math.reduce_any(labels == -1): - tf.print("Using `-1` to mask the loss for the token is deprecated. Please use `-100` instead.") - active_loss = tf.reshape(labels, (-1,)) != -1 - else: - active_loss = tf.reshape(labels, (-1,)) != -100 - reduced_logits = tf.boolean_mask(tf.reshape(logits, (-1, shape_list(logits)[2])), active_loss) - labels = tf.boolean_mask(tf.reshape(labels, (-1,)), active_loss) - - return loss_fn(labels, reduced_logits) - - # Clip negative labels to zero here to avoid NaNs and errors - those positions will get masked later anyway - unmasked_loss = loss_fn(tf.nn.relu(labels), logits) - # make sure only labels that are not equal to -100 or -1 - # are taken into account as loss - loss_mask = tf.cast(labels >= 0, dtype=unmasked_loss.dtype) - # Avoid possible division by zero later - # Masked positions will have a loss of NaN because -100 and -1 are not valid labels - masked_loss = unmasked_loss * loss_mask - reduced_masked_loss = tf.reduce_sum(masked_loss) / tf.reduce_sum(loss_mask) - return tf.reshape(reduced_masked_loss, (1,)) - - -class TFSequenceClassificationLoss: - """ - Loss function suitable for sequence classification. 
- """ - - def hf_compute_loss(self, labels, logits): - if logits.shape.rank == 1 or logits.shape[1] == 1: - loss_fn = tf.keras.losses.MeanSquaredError(reduction=tf.keras.losses.Reduction.NONE) - if labels.shape.rank == 1: - # MeanSquaredError returns a scalar loss if the labels are 1D, so avoid that - labels = tf.expand_dims(labels, axis=-1) - else: - loss_fn = tf.keras.losses.SparseCategoricalCrossentropy( - from_logits=True, reduction=tf.keras.losses.Reduction.NONE - ) - - return loss_fn(labels, logits) - - -class TFMultipleChoiceLoss: - """Loss function suitable for multiple choice tasks.""" - - def hf_compute_loss(self, labels, logits): - loss_fn = tf.keras.losses.SparseCategoricalCrossentropy( - from_logits=True, reduction=tf.keras.losses.Reduction.NONE - ) - return loss_fn(labels, logits) - - -class TFMaskedLanguageModelingLoss(TFCausalLanguageModelingLoss): - """ - Loss function suitable for masked language modeling (MLM), that is, the task of guessing the masked tokens. - - - - Any label of -100 will be ignored (along with the corresponding logits) in the loss computation. - - - """ - - -class TFNextSentencePredictionLoss: - """ - Loss function suitable for next sentence prediction (NSP), that is, the task of guessing the next sentence. - - - - Any label of -100 will be ignored (along with the corresponding logits) in the loss computation. - - - """ - - def hf_compute_loss(self, labels, logits): - loss_fn = tf.keras.losses.SparseCategoricalCrossentropy( - from_logits=True, reduction=tf.keras.losses.Reduction.NONE - ) - if self.config.tf_legacy_loss: - # make sure only labels that are not equal to -100 - # are taken into account as loss - next_sentence_active_loss = tf.not_equal(tf.reshape(labels, (-1,)), -100) - next_sentence_reduced_logits = tf.boolean_mask(tf.reshape(logits, (-1, 2)), next_sentence_active_loss) - next_sentence_label = tf.boolean_mask(tf.reshape(labels, (-1,)), next_sentence_active_loss) - - return loss_fn(next_sentence_label, next_sentence_reduced_logits) - - # make sure only labels that are not equal to -100 - # are taken into account as loss - - # Clip negative labels to zero here to avoid NaNs and errors - those positions will get masked later anyway - unmasked_ns_loss = loss_fn(y_true=tf.nn.relu(labels), y_pred=logits) - ns_loss_mask = tf.cast(labels != -100, dtype=unmasked_ns_loss.dtype) - # Just zero out samples where label is -100, no reduction - masked_ns_loss = unmasked_ns_loss * ns_loss_mask - - return masked_ns_loss - - -def booleans_processing(config, **kwargs): - """ - Process the input booleans of each model. - - Args: - config ([`PretrainedConfig`]): - The config of the running model. - **kwargs: - The boolean parameters - - Returns: - A dictionary with the proper values for each boolean - """ - final_booleans = {} - - # Pure conv models (such as ConvNext) do not have `output_attentions`. 
If the signature has - # `output_attentions`, it will be present here in `kwargs`, even if unset (in that case, as `None`) - if "output_attentions" in kwargs: - final_booleans["output_attentions"] = ( - kwargs["output_attentions"] if kwargs["output_attentions"] is not None else config.output_attentions - ) - final_booleans["output_hidden_states"] = ( - kwargs["output_hidden_states"] if kwargs["output_hidden_states"] is not None else config.output_hidden_states - ) - final_booleans["return_dict"] = kwargs["return_dict"] if kwargs["return_dict"] is not None else config.return_dict - - if "use_cache" in kwargs: - final_booleans["use_cache"] = ( - kwargs["use_cache"] if kwargs["use_cache"] is not None else getattr(config, "use_cache", None) - ) - return final_booleans - - -def unpack_inputs(func): - """ - Decorator that processes the inputs to a Keras layer, passing them to the layer as keyword arguments. This enables - downstream use of the inputs by their variable name, even if they arrive packed as a dictionary in the first input - (common case in Keras). - - Args: - func (`callable`): - The callable function of the TensorFlow model. - - - Returns: - A callable that wraps the original `func` with the behavior described above. - """ - - original_signature = inspect.signature(func) - - @functools.wraps(func) - def run_call_with_unpacked_inputs(self, *args, **kwargs): - # isolates the actual `**kwargs` for the decorated function - kwargs_call = {key: val for key, val in kwargs.items() if key not in dict(original_signature.parameters)} - fn_args_and_kwargs = {key: val for key, val in kwargs.items() if key not in kwargs_call} - fn_args_and_kwargs.update({"kwargs_call": kwargs_call}) - - # move any arg into kwargs, if they exist - fn_args_and_kwargs.update(dict(zip(func.__code__.co_varnames[1:], args))) - - # Encoder Decoder models delegate the application of the configuration options to their inner models. - if "EncoderDecoder" in self.__class__.__name__: - config = None - else: - config = self.config - - unpacked_inputs = input_processing(func, config, **fn_args_and_kwargs) - return func(self, **unpacked_inputs) - - # Keras enforces the first layer argument to be passed, and checks it through `inspect.getfullargspec()`. This - # function does not follow wrapper chains (i.e. ignores `functools.wraps()`), meaning that without the line below - # Keras would attempt to check the first argument against the literal signature of the wrapper. - run_call_with_unpacked_inputs.__signature__ = original_signature - - return run_call_with_unpacked_inputs - - -def input_processing(func, config, **kwargs): - """ - Process the input of each TensorFlow model including the booleans. In case of a list of symbolic inputs, each input - has to be named accordingly to the parameters name, i.e. `input_ids = tf.keras.Input(shape=(128,), dtype='int32', - name="input_ids")` otherwise the order of the tensors will not be guaranteed during the training. - - Args: - func (`callable`): - The callable function of the TensorFlow model. - config ([`PretrainedConfig`]): - The config of the running model. - **kwargs: - The inputs of the model. - - Returns: - Two lists, one for the missing layers, and another one for the unexpected layers. 
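For example, symbolic inputs passed as a list are expected to carry the matching parameter names, as in this small sketch (the sequence length is arbitrary):

```py
>>> import tensorflow as tf
>>> input_ids = tf.keras.Input(shape=(128,), dtype="int32", name="input_ids")
>>> attention_mask = tf.keras.Input(shape=(128,), dtype="int32", name="attention_mask")
```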
- """ - signature = dict(inspect.signature(func).parameters) - has_kwargs = bool(signature.pop("kwargs", None)) - signature.pop("self", None) - parameter_names = list(signature.keys()) - main_input_name = parameter_names[0] - main_input = kwargs.pop(main_input_name, None) - output = {} - allowed_types = (tf.Tensor, bool, int, ModelOutput, tuple, list, dict, np.ndarray) - - if "inputs" in kwargs["kwargs_call"]: - warnings.warn( - "The `inputs` argument is deprecated and will be removed in a future version, use `input_ids` instead.", - FutureWarning, - ) - - output["input_ids"] = kwargs["kwargs_call"].pop("inputs") - - if "decoder_cached_states" in kwargs["kwargs_call"]: - warnings.warn( - "The `decoder_cached_states` argument is deprecated and will be removed in a future version, use" - " `past_key_values` instead.", - FutureWarning, - ) - output["past_key_values"] = kwargs["kwargs_call"].pop("decoder_cached_states") - - if "past" in kwargs["kwargs_call"] and "past_key_values" in parameter_names: - warnings.warn( - "The `past` argument is deprecated and will be removed in a future version, use `past_key_values`" - " instead.", - FutureWarning, - ) - kwargs["past_key_values"] = kwargs["kwargs_call"].pop("past") - elif "past_key_values" in kwargs["kwargs_call"] and "past" in parameter_names: - kwargs["past"] = kwargs["kwargs_call"].pop("past_key_values") - - if has_kwargs: - output["kwargs"] = kwargs.pop("kwargs_call", {}) - else: - if len(kwargs["kwargs_call"]) > 0: - raise ValueError( - "The following keyword arguments are not supported by this model:" - f" {list(kwargs['kwargs_call'].keys())}." - ) - kwargs.pop("kwargs_call") - - for k, v in kwargs.items(): - if isinstance(v, allowed_types) or tf.is_tensor(v) or v is None: - output[k] = v - else: - raise ValueError(f"Data of type {type(v)} is not allowed only {allowed_types} is accepted for {k}.") - - if isinstance(main_input, (tuple, list)): - for i, input in enumerate(main_input): - # EagerTensors don't allow to use the .name property so we check for a real Tensor - if is_tf_symbolic_tensor(input): - # Tensor names have always the pattern `name:id` then we check only the - # `name` part - tensor_name = input.name.split(":")[0] - - if tensor_name in parameter_names: - output[tensor_name] = input - else: - output[parameter_names[i]] = input - elif isinstance(input, allowed_types) or input is None: - output[parameter_names[i]] = input - else: - raise ValueError( - f"Data of type {type(input)} is not allowed only {allowed_types} is accepted for" - f" {parameter_names[i]}." - ) - elif isinstance(main_input, Mapping): - if "inputs" in main_input: - warnings.warn( - "The `inputs` argument is deprecated and will be removed in a future version, use `input_ids`" - " instead.", - FutureWarning, - ) - - output["input_ids"] = main_input.pop("inputs") - - if "decoder_cached_states" in main_input: - warnings.warn( - "The `decoder_cached_states` argument is deprecated and will be removed in a future version, use" - " `past_key_values` instead.", - FutureWarning, - ) - output["past_key_values"] = main_input.pop("decoder_cached_states") - - for k, v in dict(main_input).items(): - if isinstance(v, allowed_types) or v is None: - output[k] = v - elif k not in parameter_names and "args" not in parameter_names: - logger.warning( - f"The parameter {k} does not belongs to the parameter list {parameter_names} and will be ignored." 
- ) - continue - else: - raise ValueError(f"Data of type {type(v)} is not allowed only {allowed_types} is accepted for {k}.") - else: - if tf.is_tensor(main_input) or main_input is None: - output[main_input_name] = main_input - else: - raise ValueError( - f"Data of type {type(main_input)} is not allowed only {allowed_types} is accepted for" - f" {main_input_name}." - ) - - # Populates any unspecified argument with their default value, according to the signature. - for name in parameter_names: - if name not in list(output.keys()) and name != "args": - output[name] = kwargs.pop(name, signature[name].default) - - # When creating a SavedModel TF calls the method with LayerCall.__call__(args, **kwargs) - # So to respect the proper output we have to add this exception - if "args" in output: - if output["args"] is not None and is_tf_symbolic_tensor(output["args"]): - tensor_name = output["args"].name.split(":")[0] - output[tensor_name] = output["args"] - else: - # `args` in this case is always the first parameter, then `input_ids` - output["input_ids"] = output["args"] - - del output["args"] - - if "kwargs" in output: - del output["kwargs"] - - cast_output = {} - for key, val in output.items(): - if isinstance(val, tf.Tensor) and val.dtype == tf.int64: - cast_output[key] = tf.cast(val, tf.int32) - elif isinstance(val, np.ndarray) and val.dtype == np.int64: - cast_output[key] = val.astype(np.int32) - else: - cast_output[key] = val - - output = cast_output - del cast_output - - if config is not None: - boolean_dict = { - k: v - for k, v in output.items() - if k in ["return_dict", "output_attentions", "output_hidden_states", "use_cache"] - } - - output.update( - booleans_processing( - config=config, - **boolean_dict, - ) - ) - - return output - - -def dtype_byte_size(dtype): - """ - Returns the size (in bytes) occupied by one parameter of type `dtype`. - - Example: - - ```py - >>> dtype_byte_size(tf.float32) - 4 - ``` - """ - if dtype == tf.bool: - return 1 / 8 - bit_search = re.search(r"[^\d](\d+)$", dtype.name) - if bit_search is None: - raise ValueError(f"`dtype` is not a valid dtype: {dtype}.") - bit_size = int(bit_search.groups()[0]) - return bit_size // 8 - - -def format_weight_name(name, _prefix=None): - if "model." not in name and len(name.split("/")) > 1: - name = "/".join(name.split("/")[1:]) - if _prefix is not None: - name = _prefix + "/" + name - return name - - -def tf_shard_checkpoint(weights, max_shard_size="10GB"): - """ - Splits a model state dictionary in sub-checkpoints so that the final size of each sub-checkpoint does not exceed a - given size. - - The sub-checkpoints are determined by iterating through the `state_dict` in the order of its keys, so there is no - optimization made to make each sub-checkpoint as close as possible to the maximum size passed. For example, if the - limit is 10GB and we have weights of sizes [6GB, 6GB, 2GB, 6GB, 2GB, 2GB] they will get sharded as [6GB], [6+2GB], - [6+2+2GB] and not [6+2+2GB], [6+2GB], [6GB]. - - - - If one of the model's weight is bigger that `max_shard_size`, it will end up in its own sub-checkpoint which will - have a size greater than `max_shard_size`. - - - - Args: - weights (`Dict[str, tf.RessourceVariable]`): The list of tf.RessourceVariable of a model to save. - max_shard_size (`int` or `str`, *optional*, defaults to `"10GB"`): - The maximum size of each sub-checkpoint. If expressed as a string, needs to be digits followed by a unit - (like `"5MB"`). 
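The greedy split described above can be sketched in plain Python (sizes in GB, illustrative only; this is not the exact implementation below):

```py
sizes, limit = [6, 6, 2, 6, 2, 2], 10    # weight sizes in GB from the example above
shards, current = [], []
for size in sizes:
    if current and sum(current) + size > limit:  # adding this weight would tip over the limit
        shards.append(current)                   # close the current shard and start a new one
        current = []
    current.append(size)
shards.append(current)
# shards == [[6], [6, 2], [6, 2, 2]] -- greedy split, not an optimal packing
```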
- """ - max_shard_size = convert_file_size_to_int(max_shard_size) - - sharded_state_dicts = [] - current_block = [] - current_block_size = 0 - total_size = 0 - - for item in weights: - weight_size = item.numpy().size * dtype_byte_size(item.dtype) - - # If this weight is going to tip up over the maximal size, we split. - if current_block_size + weight_size > max_shard_size: - sharded_state_dicts.append(current_block) - current_block = [] - current_block_size = 0 - - current_block.append(item) - current_block_size += weight_size - total_size += weight_size - - # Add the last block - sharded_state_dicts.append(current_block) - - # If we only have one shard, we return it - if len(sharded_state_dicts) == 1: - return {TF2_WEIGHTS_NAME: sharded_state_dicts[0]}, None - - # Otherwise, let's build the index - weight_map = {} - shards = {} - for idx, shard in enumerate(sharded_state_dicts): - shard_file = TF2_WEIGHTS_NAME.replace(".h5", f"-{idx+1:05d}-of-{len(sharded_state_dicts):05d}.h5") - shards[shard_file] = shard - for weight in shard: - weight_name = weight.name - weight_map[weight_name] = shard_file - - # Add the metadata - metadata = {"total_size": total_size} - index = {"metadata": metadata, "weight_map": weight_map} - return shards, index - - -def load_tf_sharded_weights(model, shard_files, ignore_mismatched_sizes=False, strict=False, _prefix=None): - """ - This is the same as `load_tf_weights` but for a sharded checkpoint. Detect missing and unexpected layers and load - the TF weights from the shard file accordingly to their names and shapes. - - This load is performed efficiently: each checkpoint shard is loaded one by one in RAM and deleted after being - loaded in the model. - - Args: - model (`tf.keras.models.Model`): The model in which to load the checkpoint. - shard_files (`str` or `os.PathLike`): A list containing the sharded checkpoint names. - ignore_mismatched_sizes`bool`, *optional`, defaults to `True`): - Whether or not to ignore the mismatch between the sizes - strict (`bool`, *optional*, defaults to `True`): - Whether to strictly enforce that the keys in the model state dict match the keys in the sharded checkpoint. - - Returns: - Three lists, one for the missing layers, another one for the unexpected layers, and a last one for the - mismatched layers. - """ - - # Load the index - unexpected_keys = set() - saved_keys = set() - mismatched_keys = set() - - # Since TF adds the name of the class to its weights, and uses the index and not the name of the layer to load - # the weight, we have to get rid of the first prefix of the name of the layer. - model_keys = set() - model_layer_map = {} - for i, k in enumerate(model.weights): - layer_name = k.name - if _prefix is not None and layer_name.startswith(_prefix): - layer_name = layer_name[len(_prefix) :] - layer_name = layer_name.lstrip("/") - if not ("model." 
in layer_name or len(layer_name.split("/")) == 1): - layer_name = "/".join(layer_name.split("/")[1:]) - model_keys.add(layer_name) - model_layer_map[layer_name] = i - - for shard_file in shard_files: - saved_weight_names_set, unexpected_keys_set, mismatched_keys_set = load_tf_shard( - model, - model_layer_map, - shard_file, - ignore_mismatched_sizes=ignore_mismatched_sizes, - _prefix=_prefix, - ) - saved_keys.update(saved_weight_names_set) - unexpected_keys.update(unexpected_keys_set) - mismatched_keys.update(mismatched_keys_set) - gc.collect() - - missing_keys = model_keys - saved_keys - if strict and (len(missing_keys) > 0 or len(unexpected_keys) > 0): - error_message = f"Error(s) in loading state_dict for {model.__class__.__name__}" - if len(missing_keys) > 0: - str_missing_keys = ",".join([f'"{k}"' for k in missing_keys]) - error_message += f"\nMissing key(s): {str_missing_keys}." - if len(unexpected_keys) > 0: - str_unexpected_keys = ",".join([f'"{k}"' for k in unexpected_keys]) - error_message += f"\nMissing key(s): {str_unexpected_keys}." - raise RuntimeError(error_message) - - return missing_keys, unexpected_keys, mismatched_keys - - -def load_tf_shard(model, model_layer_map, resolved_archive_file, ignore_mismatched_sizes=False, _prefix=None): - """ - Loads a shard from a sharded checkpoint file. Handles the missing keys and unexpected keys. - - Args: - model (`tf.keras.models.Model`): Model in which the weights are loaded - model_layer_map (`Dict`): A dictionary mapping the layer name to the index of the layer in the model. - resolved_archive_file (`str`): Path to the checkpoint file from which the weights will be loaded - ignore_mismatched_sizes (`bool`, *optional*, defaults to `False`): Whether to ignore the mismatched keys - - Returns: - `tf.keras.models.Model`: Three lists, one for the layers that were found and succesfully restored (from the - shard file), one for the mismatched layers, and another one for the unexpected layers. - """ - saved_weight_names_set = set() - saved_weights = {} - mismatched_keys = set() - unexpected_keys = set() - # Read the H5 file - try: - with h5py.File(resolved_archive_file, "r") as sharded_checkpoint_file: - # Retrieve the name of each layer from the H5 file - saved_h5_model_layers_name = set(load_attributes_from_hdf5_group(sharded_checkpoint_file, "layer_names")) - weight_value_tuples = [] - - # Compute missing and unexpected sub layers - # Store the weights in list of tuples that looks like [(weight_object, value_of_weight),...] 
- for layer_name in saved_h5_model_layers_name: - h5_layer_object = sharded_checkpoint_file[layer_name] - saved_weights[layer_name] = np.asarray(h5_layer_object) - - saved_weight_names_set.add(layer_name) - - if layer_name not in model_layer_map: - unexpected_keys.add(layer_name) - else: - symbolic_weight = model.weights[model_layer_map[layer_name]] - - saved_weight_value = saved_weights[layer_name] - # If the current weight is found - if saved_weight_value is not None: - # Check if the shape of the current weight and the one from the H5 file are different - if K.int_shape(symbolic_weight) != saved_weight_value.shape: - # If yes we reshape the weight from the H5 file accordingly to the current weight - # If the two shapes are not compatible we raise an issue - try: - array = np.reshape(saved_weight_value, K.int_shape(symbolic_weight)) - except ValueError as e: - if ignore_mismatched_sizes: - mismatched_keys.add( - (layer_name, saved_weight_value.shape, K.int_shape(symbolic_weight)) - ) - continue - else: - raise e - else: - array = saved_weight_value - - # We create the tuple that will be loaded and add it to the final list - weight_value_tuples.append((symbolic_weight, array)) - - K.batch_set_value(weight_value_tuples) - - return saved_weight_names_set, unexpected_keys, mismatched_keys - - except Exception as e: - try: - with open(resolved_archive_file) as f: - if f.read().startswith("version"): - raise OSError( - "You seem to have cloned a repository without having git-lfs installed. Please install " - "git-lfs and run `git lfs install` followed by `git lfs pull` in the folder " - "you cloned." - ) - else: - raise ValueError( - f"Unable to locate the file {resolved_archive_file} which is necessary to load this pretrained" - " model. Make sure you have saved the model properly." - ) from e - except (UnicodeDecodeError, ValueError): - raise OSError( - f"Unable to load weights from TF checkpoint file for '{resolved_archive_file}' " - f"at '{resolved_archive_file}'. " - "If you tried to load a TF model from a sharded checkpoint, you should try converting the model" - "by loading it in pytorch and saving it localy. A convertion script should be realeased soon." - ) - - -def load_tf_weights(model, resolved_archive_file, ignore_mismatched_sizes=False, _prefix=None): - """ - Detect missing and unexpected layers and load the TF weights from the shard file accordingly to their names and - shapes. - - Args: - model (`tf.keras.models.Model`): - The model to load the weights into. - resolved_archive_file (`str`): - The location of the H5 file. - ignore_mismatched_sizes (`bool`, *optional*, defaults to `False`): - Whether or not to ignore weights with shapes that don't match between the checkpoint of the model. - - Returns: - Three lists, one for the missing layers, another one for the unexpected layers, and a last one for the - mismatched layers. 
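A hedged usage sketch; `model` is assumed to be an already-built Keras model and the H5 path is a placeholder:

```py
>>> missing, unexpected, mismatched = load_tf_weights(model, "tf_model.h5")
>>> if missing:
...     print("Layers initialized from scratch:", missing)
```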
- """ - if resolved_archive_file.endswith(".safetensors"): - load_function = load_tf_weights_from_safetensors - else: - load_function = load_tf_weights_from_h5 - - return load_function( - model, resolved_archive_file, ignore_mismatched_sizes=ignore_mismatched_sizes, _prefix=_prefix - ) - - -def load_tf_weights_from_h5(model, resolved_archive_file, ignore_mismatched_sizes=False, _prefix=None): - mismatched_layers = [] - - # Read the H5 file - with h5py.File(resolved_archive_file, "r") as sharded_checkpoint_file: - # Retrieve the name of each layer from the H5 file - saved_h5_model_layers_name = set(load_attributes_from_hdf5_group(sharded_checkpoint_file, "layer_names")) - - # Find the missing layers from the high level list of layers - missing_layers = list({layer.name for layer in model.layers} - saved_h5_model_layers_name) - - # Find the unexpected layers from the high level list of layers - unexpected_layers = list(saved_h5_model_layers_name - {layer.name for layer in model.layers}) - saved_weight_names_set = set() - symbolic_weights_names = set() - weight_value_tuples = [] - - # Compute missing and unexpected sub layers - # Store the weights in list of tuples that looks like [(weight_object, value_of_weight),...] - for layer in model.layers: - # if layer_name from the H5 file belongs to the layers from the instantiated model - if layer.name in saved_h5_model_layers_name: - # Get the H5 layer object from its name - h5_layer_object = sharded_checkpoint_file[layer.name] - # Get all the weights as a list from the layer object - symbolic_weights = layer.trainable_weights + layer.non_trainable_weights - saved_weights = {} - - # Create a dict from the H5 saved model that looks like {"weight_name": weight_value} - # And a set with only the names - for weight_name in load_attributes_from_hdf5_group(h5_layer_object, "weight_names"): - # TF names always start with the model name so we ignore it - name = "/".join(weight_name.split("/")[1:]) - - if _prefix is not None: - name = _prefix + "/" + name - - saved_weights[name] = np.asarray(h5_layer_object[weight_name]) - - # Add the updated name to the final list for computing missing/unexpected values - saved_weight_names_set.add(name) - - # Loop over each weights from the instantiated model and compare with the weights from the H5 file - for symbolic_weight in symbolic_weights: - # TF names always start with the model name so we ignore it - if _prefix is not None: - delimeter = len(_prefix.split("/")) - symbolic_weight_name = "/".join( - symbolic_weight.name.split("/")[:delimeter] - + symbolic_weight.name.split("/")[delimeter + 1 :] - ) - else: - symbolic_weight_name = "/".join(symbolic_weight.name.split("/")[1:]) - - # here we check if the current weight is among the weights from the H5 file - # If yes, get the weight_value of the corresponding weight from the H5 file - # If not, make the value to None - saved_weight_value = saved_weights.get(symbolic_weight_name, None) - - # Retrocompatibility patch: some embeddings are stored with the weights name (e.g. 
Bart's - # `model.shared/embeddings:0` are stored as `model.shared/weights:0`) - if saved_weight_value is None and symbolic_weight_name.endswith("embeddings:0"): - symbolic_weight_name = symbolic_weight_name[:-12] + "weight:0" - saved_weight_value = saved_weights.get(symbolic_weight_name, None) - - # Add the updated name to the final list for computing missing/unexpected values - symbolic_weights_names.add(symbolic_weight_name) - - # If the current weight is found - if saved_weight_value is not None: - # Check if the shape of the current weight and the one from the H5 file are different - if K.int_shape(symbolic_weight) != saved_weight_value.shape: - # If yes we reshape the weight from the H5 file accordingly to the current weight - # If the two shapes are not compatible we raise an issue - try: - array = np.reshape(saved_weight_value, K.int_shape(symbolic_weight)) - except ValueError as e: - if ignore_mismatched_sizes: - mismatched_layers.append( - (symbolic_weight_name, saved_weight_value.shape, K.int_shape(symbolic_weight)) - ) - continue - else: - raise e - else: - array = saved_weight_value - - # We create the tuple that will be loaded and add it to the final list - weight_value_tuples.append((symbolic_weight, array)) - - # Load all the weights - K.batch_set_value(weight_value_tuples) - - # Compute the missing and unexpected layers - missing_layers.extend(list(symbolic_weights_names - saved_weight_names_set)) - unexpected_layers.extend(list(saved_weight_names_set - symbolic_weights_names)) - - return missing_layers, unexpected_layers, mismatched_layers - - -def load_tf_weights_from_safetensors(model, resolved_archive_file, ignore_mismatched_sizes=False, _prefix=None): - # Read the safetensors file - with safe_open(resolved_archive_file, framework="tf") as safetensors_archive: - mismatched_layers = [] - weight_names = [format_weight_name(w.name, _prefix=_prefix) for w in model.weights] - loaded_weight_names = list(safetensors_archive.keys()) - # Find the missing layers from the high level list of layers - missing_layers = list(set(weight_names) - set(loaded_weight_names)) - # Find the unexpected layers from the high level list of layers - unexpected_layers = list(set(loaded_weight_names) - set(weight_names)) - - for weight in model.weights: - weight_name = format_weight_name(weight.name, _prefix=_prefix) - if weight_name in loaded_weight_names: - weight_value = safetensors_archive.get_tensor(weight_name) - # Check if the shape of the current weight and the one from the H5 file are different - if K.int_shape(weight) != weight_value.shape: - # If yes we reshape the weight from the H5 file accordingly to the current weight - # If the two shapes are not compatible we raise an issue - try: - weight_value = tf.reshape(weight_value, K.int_shape(weight)) - except ValueError as e: - if ignore_mismatched_sizes: - mismatched_layers.append((weight_name, weight_value.shape, K.int_shape(weight))) - continue - else: - raise e - - K.set_value(weight, weight_value) # weight.assign() might break if weight is a DTensor - return missing_layers, unexpected_layers, mismatched_layers - - -def init_copy_embeddings(old_embeddings, new_num_tokens): - r""" - This function aims to reduce the embeddings in case new_num_tokens < old_num_tokens or to pad with -1 in case - new_num_tokens > old_num_tokens. A mask is also computed in order to know which weight in the embeddings should be - kept or not. 
Example: - - - if new_num_tokens=5 and old_num_tokens=4 and old_embeddings=[w1,w2,w3,w4] - - - mask=[True,True,True,True,False] and current_weights=[w1,w2,w3,w4,-1] - - if new_num_tokens=4 and old_num_tokens=5 and old_embeddings=[w1,w2,w3,w4,w5] - - - mask=[True,True,True,True] and current_weights=[w1,w2,w3,w4] - """ - old_num_tokens, old_embedding_dim = shape_list(old_embeddings) - size_diff = new_num_tokens - old_num_tokens - - # initialize new embeddings - # Copy token embeddings from the previous ones - if tf.math.greater(size_diff, 0): - # if the new size is greater than the old one, we extend the current embeddings with a padding until getting new size - # and we create a mask to properly identify the padded values and be replaced by the values of the newly created - # embeddings - current_weights = tf.pad( - old_embeddings.value(), tf.convert_to_tensor([[0, size_diff], [0, 0]]), constant_values=-1 - ) - num_tokens_to_copy = min(old_num_tokens, new_num_tokens) - mask = tf.fill(tf.convert_to_tensor([num_tokens_to_copy, 1]), True) - mask = tf.pad(mask, tf.convert_to_tensor([[0, size_diff], [0, 0]]), constant_values=False) - else: - # if the new size if lower than the old one, we take the current embeddings until the new size - current_weights = tf.slice( - old_embeddings.value(), - tf.convert_to_tensor([0, 0]), - tf.convert_to_tensor([new_num_tokens, old_embedding_dim]), - ) - mask = tf.fill(tf.convert_to_tensor([new_num_tokens, 1]), True) - - return mask, current_weights - - -class TFPreTrainedModel(tf.keras.Model, TFModelUtilsMixin, TFGenerationMixin, PushToHubMixin): - r""" - Base class for all TF models. - - [`TFPreTrainedModel`] takes care of storing the configuration of the models and handles methods for loading, - downloading and saving models as well as a few methods common to all models to: - - - resize the input embeddings, - - prune heads in the self-attention heads. - - Class attributes (overridden by derived classes): - - - **config_class** ([`PretrainedConfig`]) -- A subclass of [`PretrainedConfig`] to use as configuration class - for this model architecture. - - **base_model_prefix** (`str`) -- A string indicating the attribute associated to the base model in derived - classes of the same architecture adding modules on top of the base model. - - **main_input_name** (`str`) -- The name of the principal input to the model (often `input_ids` for NLP - models, `pixel_values` for vision models and `input_values` for speech models). - """ - config_class = None - base_model_prefix = "" - main_input_name = "input_ids" - _auto_class = None - _using_dummy_loss = None - _label_to_output_map = None - - # a list of re pattern of tensor names to ignore from the model when loading the model weights - # (and avoid unnecessary warnings). - _keys_to_ignore_on_load_missing = None - # a list of re pattern of tensor names to ignore from the weights when loading the model weights - # (and avoid unnecessary warnings). - _keys_to_ignore_on_load_unexpected = None - _requires_load_weight_prefix = False - - @property - def dummy_inputs(self) -> Dict[str, tf.Tensor]: - """ - Dummy inputs to build the network. - - Returns: - `Dict[str, tf.Tensor]`: The dummy inputs. - """ - dummies = {} - for key, spec in self.input_signature.items(): - # 2 is the most correct arbitrary size. 
I will not be taking questions - dummy_shape = [dim if dim is not None else 2 for dim in spec.shape] - if spec.shape[0] is None: - # But let's make the batch size 1 to save memory anyway - dummy_shape[0] = 1 - dummies[key] = tf.ones(shape=dummy_shape, dtype=spec.dtype) - if key == "token_type_ids": - # Some models have token_type_ids but with a vocab_size of 1 - dummies[key] = tf.zeros_like(dummies[key]) - if self.config.add_cross_attention and "encoder_hidden_states" in inspect.signature(self.call).parameters: - if "encoder_hidden_states" not in dummies: - if self.main_input_name == "input_ids": - dummies["encoder_hidden_states"] = tf.ones( - shape=(1, 2, self.config.hidden_size), dtype=tf.float32, name="encoder_hidden_states" - ) - else: - raise NotImplementedError( - "Model has cross-attention but we couldn't infer the shape for the encoder hidden states. Please manually override dummy_inputs!" - ) - return dummies - - @property - def framework(self) -> str: - """ - :str: Identifies that this is a TensorFlow model. - """ - return "tf" - - def build(self, input_shape=None): - call_context = get_call_context_function() - if self.built or call_context().in_call: - self.built = True - else: - self.built = True - # Set the serving spec quickly to ensure that Keras doesn't use the specific dummy input shapes as the spec - # Setting it in build() allows users to override the shape when loading a non-pretrained model from config - self._set_save_spec(self.input_signature) - self(self.dummy_inputs, training=False) - - def __init__(self, config, *inputs, **kwargs): - super().__init__(*inputs, **kwargs) - if not isinstance(config, PretrainedConfig): - raise ValueError( - f"Parameter config in `{self.__class__.__name__}(config)` should be an instance of class " - "`PretrainedConfig`. To create a model from a pretrained model use " - f"`model = {self.__class__.__name__}.from_pretrained(PRETRAINED_MODEL_NAME)`" - ) - # Save config and origin of the pretrained weights if given in model - self.config = config - self.name_or_path = config.name_or_path - self.generation_config = GenerationConfig.from_model_config(config) if self.can_generate() else None - - def get_config(self): - return self.config.to_dict() - - @classmethod - def from_config(cls, config, **kwargs): - if isinstance(config, PretrainedConfig): - return cls._from_config(config, **kwargs) - return cls._from_config(cls.config_class.from_dict(config, **kwargs)) - - @classmethod - def _from_config(cls, config, **kwargs): - """ - All context managers that the model should be initialized under go here. - """ - return cls(config, **kwargs) - - def get_head_mask(self, head_mask: tf.Tensor | None, num_hidden_layers: int) -> tf.Tensor: - """ - Prepare the head mask if needed. - - Args: - head_mask (`tf.Tensor` with shape `[num_heads]` or `[num_hidden_layers x num_heads]`, *optional*): - The mask indicating if we should keep the heads or not (1.0 for keep, 0.0 for discard). - num_hidden_layers (`int`): - The number of hidden layers in the model. - - Returns: - `tf.Tensor` with shape `[num_hidden_layers x batch x num_heads x seq_length x seq_length]` or list with - `[None]` for each layer. 
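A small sketch of the rank-1 branch of `_convert_head_mask_to_5d` below; the head and layer counts are arbitrary:

```py
>>> import tensorflow as tf
>>> head_mask = tf.constant([1.0, 0.0, 1.0, 1.0])   # one entry per attention head (keep / drop)
>>> five_d = head_mask[None, None, :, None, None]   # -> shape [1, 1, num_heads, 1, 1]
>>> five_d = tf.repeat(five_d, repeats=12, axis=0)  # repeat for 12 hidden layers
>>> five_d.shape
TensorShape([12, 1, 4, 1, 1])
```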
- """ - if head_mask is not None: - head_mask = self._convert_head_mask_to_5d(head_mask, num_hidden_layers) - else: - head_mask = [None] * num_hidden_layers - - return head_mask - - def _convert_head_mask_to_5d(self, head_mask, num_hidden_layers): - """-> [num_hidden_layers x batch x num_heads x seq_length x seq_length]""" - if head_mask.shape.rank == 1: - head_mask = head_mask[None, None, :, None, None] - head_mask = tf.repeat(head_mask, repeats=num_hidden_layers, axis=0) - elif head_mask.shape.rank == 2: - head_mask = head_mask[:, None, :, None, None] - assert head_mask.shape.rank == 5, f"head_mask.dim != 5, instead {head_mask.dim()}" - head_mask = tf.cast(head_mask, tf.float32) # switch to float if need + fp16 compatibility - return head_mask - - @tf.function - def serving(self, inputs): - """ - Args: - Method used for serving the model. Does not have a specific signature, but will be specialized as concrete - functions when saving with `save_pretrained`. - inputs (`Dict[str, tf.Tensor]`): - The input of the saved model as a dictionary of tensors. - """ - output = self.call(inputs) - - return self.serving_output(output) - - def eager_serving(self, inputs): - """ - Method used for serving the model. This method is deprecated, and will be removed. - - Args: - inputs (`Dict[str, tf.Tensor]`): - The input of the saved model as a dictionary of tensors. - """ - warnings.warn( - "The function `eager_serving` is deprecated and will be removed in version 4.32.0 of Transformers", - FutureWarning, - ) - output = self.call(inputs) - - return self.serving_output(output) - - @property - def input_signature(self) -> Dict[str, tf.TensorSpec]: - """ - This property should return a dict mapping input names to tf.TensorSpec objects, representing the expected - shape and dtype for model inputs. It is used for both serving and for generating the dummy inputs used to build - the model. - """ - model_inputs = list(inspect.signature(self.call).parameters) - sig = {} - if "input_ids" in model_inputs: - if self.__class__.__name__.endswith("ForMultipleChoice"): - text_dims = 3 - else: - text_dims = 2 - for input_name in ( - "input_ids", - "attention_mask", - "token_type_ids", - "decoder_input_ids", - "decoder_attention_mask", - ): - if input_name in model_inputs: - sig[input_name] = tf.TensorSpec([None] * text_dims, tf.int32, name=input_name) - if "pixel_values" in model_inputs: - pixel_values_shape = [None, None, None, None] - if hasattr(self.config, "vision_config"): - vision_config = self.config.vision_config - else: - vision_config = self.config - if hasattr(vision_config, "num_channels"): - pixel_values_shape[1] = vision_config.num_channels - else: - raise NotImplementedError( - "Could not infer number of channels from config, please override input_signature to specify input shapes." - ) - if hasattr(vision_config, "image_size"): - pixel_values_shape[2] = pixel_values_shape[3] = vision_config.image_size - elif hasattr(vision_config, "input_size"): - pixel_values_shape[2] = pixel_values_shape[3] = vision_config.input_size - else: - raise NotImplementedError( - "Could not infer input image shape from config, please override input_signature to specify input shapes." - ) - sig["pixel_values"] = tf.TensorSpec(pixel_values_shape, tf.float32, name="pixel_values") - if "input_features" in model_inputs: - raise NotImplementedError("Audio models need a manually defined input_signature") - return sig - - def serving_output(self, output): - """ - Prepare the output of the saved model. 
Can be overridden if specific serving modifications are required. - """ - if not isinstance(output, ModelOutput): - return output - for key in output: - if key.endswith("hidden_states") and not getattr(self.config, "output_hidden_states", False): - output[key] = None - elif key.endswith("attentions") and not getattr(self.config, "output_attentions", False): - output[key] = None - elif key == "past_key_values" and not getattr(self.config, "use_cache", False): - output[key] = None - elif key == "cross_attentions" and not ( - getattr(self.config, "output_attentions", False) and getattr(self.config, "add_cross_attention", False) - ): - output[key] = None - if isinstance(output[key], (tuple, list)): - try: - output[key] = tf.convert_to_tensor(output[key]) - except (ValueError, tf.errors.InvalidArgumentError): - pass # Layers may not have the same dimensions - return output - - @classmethod - def can_generate(cls) -> bool: - """ - Returns whether this model can generate sequences with `.generate()`. - - Returns: - `bool`: Whether this model can generate sequences with `.generate()`. - """ - # Detects whether `prepare_inputs_for_generation` has been overwritten, which is a requirement for generation. - # Alternativelly, the model can also have a custom `generate` function. - if "GenerationMixin" in str(cls.prepare_inputs_for_generation) and "GenerationMixin" in str(cls.generate): - return False - return True - - def get_input_embeddings(self) -> tf.keras.layers.Layer: - """ - Returns the model's input embeddings layer. - - Returns: - `tf.Variable`: The embeddings layer mapping vocabulary to hidden states. - """ - main_layer = getattr(self, self.base_model_prefix, self) - - if main_layer is not self: - return main_layer.get_input_embeddings() - else: - raise NotImplementedError - - def _save_checkpoint(self, checkpoint_dir, epoch): - if not os.path.isdir(checkpoint_dir): - os.mkdir(checkpoint_dir) - # We avoid tf.train.checkpoint or saving weights in TF format, even though that includes optimizer - # state for us, because it requires special handling for objects like custom losses, which we use - # internally and which users are likely to use too - weights_path = os.path.join(checkpoint_dir, "weights.h5") - self.save_weights(weights_path) - extra_data = {"epoch": epoch, "optimizer_state": self.optimizer.get_weights()} - extra_data_path = os.path.join(checkpoint_dir, "extra_data.pickle") - with open(extra_data_path, "wb") as f: - pickle.dump(extra_data, f) - - def load_repo_checkpoint(self, repo_path_or_name): - """ - Loads a saved checkpoint (model weights and optimizer state) from a repo. Returns the current epoch count when - the checkpoint was made. - - Args: - repo_path_or_name (`str`): - Can either be a repository name for your {object} in the Hub or a path to a local folder (in which case - the repository will have the name of that local folder). - - Returns: - `dict`: A dictionary of extra metadata from the checkpoint, most commonly an "epoch" count. - """ - if getattr(self, "optimizer", None) is None: - raise RuntimeError( - "Checkpoint loading failed as no optimizer is attached to the model. " - "This is most likely caused by the model not being compiled." 
- ) - if os.path.isdir(repo_path_or_name): - local_dir = repo_path_or_name - else: - # If this isn't a local path, check that the remote repo exists and has a checkpoint in it - repo_files = list_repo_files(repo_path_or_name) - for file in ("checkpoint/weights.h5", "checkpoint/extra_data.pickle"): - if file not in repo_files: - raise FileNotFoundError(f"Repo {repo_path_or_name} does not contain checkpoint file {file}!") - repo = Repository(repo_path_or_name.split("/")[-1], clone_from=repo_path_or_name) - local_dir = repo.local_dir - - # Now make sure the repo actually has a checkpoint in it. - checkpoint_dir = os.path.join(local_dir, "checkpoint") - weights_file = os.path.join(checkpoint_dir, "weights.h5") - if not os.path.isfile(weights_file): - raise FileNotFoundError(f"Could not find checkpoint file weights.h5 in repo {repo_path_or_name}!") - extra_data_file = os.path.join(checkpoint_dir, "extra_data.pickle") - if not os.path.isfile(extra_data_file): - raise FileNotFoundError(f"Could not find checkpoint file extra_data.pickle in repo {repo_path_or_name}!") - - # Assuming the repo is real and we got a checkpoint, load the weights and the optimizer state into the model. - # The optimizer state includes the iteration count, so learning rate schedules should resume as normal too. - self.load_weights(weights_file) - with open(extra_data_file, "rb") as f: - extra_data = pickle.load(f) - self.optimizer.set_weights(extra_data["optimizer_state"]) - - # Finally, return the epoch number from the checkpoint. This isn't a property of the model, so we can't - # set it directly, but the user can pass it to fit(). - return {"epoch": extra_data["epoch"]} - - def prepare_tf_dataset( - self, - dataset: "datasets.Dataset", # noqa:F821 - batch_size: int = 8, - shuffle: bool = True, - tokenizer: Optional["PreTrainedTokenizerBase"] = None, - collate_fn: Optional[Callable] = None, - collate_fn_args: Optional[Dict[str, Any]] = None, - drop_remainder: Optional[bool] = None, - prefetch: bool = True, - ): - """ - Wraps a HuggingFace [`~datasets.Dataset`] as a `tf.data.Dataset` with collation and batching. This method is - designed to create a "ready-to-use" dataset that can be passed directly to Keras methods like `fit()` without - further modification. The method will drop columns from the dataset if they don't match input names for the - model. If you want to specify the column names to return rather than using the names that match this model, we - recommend using `Dataset.to_tf_dataset()` instead. - - Args: - dataset (`Any`): - A [~`datasets.Dataset`] to be wrapped as a `tf.data.Dataset`. - batch_size (`int`, defaults to 8): - The size of batches to return. - shuffle (`bool`, defaults to `True`): - Whether to return samples from the dataset in random order. Usually `True` for training datasets and - `False` for validation/test datasets. - tokenizer ([`PreTrainedTokenizerBase`], *optional*): - A `PreTrainedTokenizer` that will be used to pad samples to create batches. Has no effect if a specific - `collate_fn` is passed instead. - collate_fn (`Callable`, *optional*): - A function that collates samples from the dataset into a single batch. Defaults to - `DefaultDataCollator` if no `tokenizer` is supplied or `DataCollatorWithPadding` if a `tokenizer` is - passed. - collate_fn_args (`Dict[str, Any]`, *optional*): - A dict of arguments to pass to the `collate_fn` alongside the list of samples. 
- drop_remainder (`bool`, *optional*): - Whether to drop the final batch, if the batch_size does not evenly divide the dataset length. Defaults - to the same setting as `shuffle`. - prefetch (`bool`, defaults to `True`): - Whether to add prefetching to the end of the `tf.data` pipeline. This is almost always beneficial for - performance, but can be disabled in edge cases. - - - Returns: - `Dataset`: A `tf.data.Dataset` which is ready to pass to the Keras API. - """ - requires_backends(self, ["datasets"]) - import datasets - - if collate_fn is None: - if tokenizer is None: - collate_fn = DefaultDataCollator(return_tensors="np") - else: - collate_fn = DataCollatorWithPadding(tokenizer=tokenizer, return_tensors="np") - if collate_fn_args is None: - collate_fn_args = {} - - if not isinstance(dataset, datasets.Dataset): - raise TypeError("Dataset argument should be a datasets.Dataset!") - model_inputs = list(inspect.signature(self.call).parameters) - model_labels = find_labels(self.__class__) - if "cols_to_retain" in list(inspect.signature(dataset._get_output_signature).parameters.keys()): - output_signature, _ = dataset._get_output_signature( - dataset, - batch_size=None, - collate_fn=collate_fn, - collate_fn_args=collate_fn_args, - cols_to_retain=model_inputs, - ) - else: - # TODO Matt: This is a workaround for older versions of datasets that are missing the `cols_to_retain` - # argument. We should remove this once the minimum supported version of datasets is > 2.3.2 - unwanted_columns = [ - feature - for feature in dataset.features - if feature not in model_inputs and feature not in ("label_ids", "label") - ] - dataset = dataset.remove_columns(unwanted_columns) - output_signature, _ = dataset._get_output_signature( - dataset, batch_size=None, collate_fn=collate_fn, collate_fn_args=collate_fn_args - ) - output_columns = list(output_signature.keys()) - feature_cols = [col for col in output_columns if col in model_inputs and col not in model_labels] - label_cols = [col for col in output_columns if col in model_labels] - - # Backwards compatibility for older versions of datasets. Previously, if `columns` or `label_cols` - # were a single element list, the returned element spec would be a single element. Now, passing [feature] - # will return a dict structure {"feature": feature}, and passing a single string will return a single element. - feature_cols = feature_cols[0] if len(feature_cols) == 1 else feature_cols - label_cols = label_cols[0] if len(label_cols) == 1 else label_cols - - if drop_remainder is None: - drop_remainder = shuffle - tf_dataset = dataset.to_tf_dataset( - columns=feature_cols, - label_cols=label_cols, - batch_size=batch_size, - shuffle=shuffle, - drop_remainder=drop_remainder, - collate_fn=collate_fn, - collate_fn_args=collate_fn_args, - prefetch=prefetch, - ) - return tf_dataset - - def compile( - self, - optimizer="rmsprop", - loss="auto_with_warning", - metrics=None, - loss_weights=None, - weighted_metrics=None, - run_eagerly=None, - steps_per_execution=None, - **kwargs, - ): - """ - This is a thin wrapper that sets the model's loss output head as the loss if the user does not specify a loss - function themselves. - """ - if loss in ("auto_with_warning", "passthrough"): # "passthrough" for workflow backward compatibility - logger.info( - "No loss specified in compile() - the model's internal loss computation will be used as the " - "loss. Don't panic - this is a common way to train TensorFlow models in Transformers! 
" - "To disable this behaviour please pass a loss argument, or explicitly pass " - "`loss=None` if you do not want your model to compute a loss. You can also specify `loss='auto'` to " - "get the internal loss without printing this info string." - ) - loss = "auto" - if loss == "auto": - loss = dummy_loss - self._using_dummy_loss = True - else: - self._using_dummy_loss = False - parent_args = list(inspect.signature(tf.keras.Model.compile).parameters.keys()) - # This argument got renamed, we need to support both versions - if "steps_per_execution" in parent_args: - super().compile( - optimizer=optimizer, - loss=loss, - metrics=metrics, - loss_weights=loss_weights, - weighted_metrics=weighted_metrics, - run_eagerly=run_eagerly, - steps_per_execution=steps_per_execution, - **kwargs, - ) - else: - super().compile( - optimizer=optimizer, - loss=loss, - metrics=metrics, - loss_weights=loss_weights, - weighted_metrics=weighted_metrics, - run_eagerly=run_eagerly, - experimental_steps_per_execution=steps_per_execution, - **kwargs, - ) - - def compute_loss(self, *args, **kwargs): - if hasattr(tf.keras.Model, "compute_loss"): - # This will be true in TF 2.8 or greater - return super().compute_loss(*args, **kwargs) - else: - warnings.warn( - "The old compute_loss method is deprecated as it conflicts with the Keras compute_loss " - "method added in TF 2.8. If you want the original HF compute_loss, please call " - "hf_compute_loss() instead. From TF versions >= 2.8, or Transformers versions >= 5, " - "calling compute_loss() will get the Keras method instead.", - FutureWarning, - ) - return self.hf_compute_loss(*args, **kwargs) - - def get_label_to_output_name_mapping(self): - arg_names = list(inspect.signature(self.call).parameters) - if self._label_to_output_map is not None: - return self._label_to_output_map - elif "start_positions" in arg_names: - return {"start_positions": "start_logits", "end_positions": "end_logits"} - elif "sentence_order_label" in arg_names: - return {"labels": "prediction_logits", "sentence_order_label": "sop_logits"} - elif "next_sentence_label" in arg_names: - return {"labels": "prediction_logits", "next_sentence_label": "seq_relationship_logits"} - elif "mc_labels" in arg_names: - return {"labels": "logits", "mc_labels": "mc_logits"} - else: - return {} - - def train_step(self, data): - """ - A modification of Keras's default `train_step` that correctly handles matching outputs to labels for our models - and supports directly training on the loss output head. In addition, it ensures input keys are copied to the - labels where appropriate. It will also copy label keys into the input dict when using the dummy loss, to ensure - that they are available to the model during the forward pass. - """ - - # We hardcode the most common renamings; models with weirder names can set `self._label_to_output_map` - arg_names = list(inspect.signature(self.call).parameters) - label_kwargs = find_labels(self.__class__) - label_to_output = self.get_label_to_output_name_mapping() - output_to_label = {val: key for key, val in label_to_output.items()} - if not self._using_dummy_loss and parse(tf.__version__) < parse("2.11.0"): - # Newer TF train steps leave this out - data = expand_1d(data) - x, y, sample_weight = tf.keras.utils.unpack_x_y_sample_weight(data) - # If the inputs are mutable dictionaries, make a shallow copy of them because we will modify - # them during input/label pre-processing. This avoids surprising the user by wrecking their data. 
- # In addition, modifying mutable Python inputs makes XLA compilation impossible. - if isinstance(x, dict): - x = x.copy() - if isinstance(y, dict): - y = y.copy() - - # When using a dummy loss, we ensure that separate labels are copied to the correct model arguments, - # if those keys are not already present in the input dict - if self._using_dummy_loss and y is not None: - # If y is a tensor and the model only has one label-like input, map y to that input - if len(label_kwargs) == 1 and isinstance(y, tf.Tensor): - if isinstance(x, tf.Tensor): - x = {arg_names[0]: x} - label_kwarg = next(iter(label_kwargs)) - if label_kwarg not in x: - x[label_kwarg] = y - # Otherwise, copy keys from y to x as long as they weren't already present in x - elif isinstance(y, dict): - if isinstance(x, tf.Tensor): - x = {arg_names[0]: x} - for key, val in y.items(): - if key in arg_names and key not in x: - x[key] = val - elif output_to_label.get(key, None) in arg_names and key not in x: - x[output_to_label[key]] = val - if y is None: - y = {key: val for key, val in x.items() if key in label_kwargs} - if not y and not self._using_dummy_loss: - raise ValueError("Could not find label column(s) in input dict and no separate labels were provided!") - - if isinstance(y, dict): - # Rename labels at this point to match output heads - y = {label_to_output.get(key, key): val for key, val in y.items()} - - # Run forward pass. - with tf.GradientTape() as tape: - if self._using_dummy_loss and "return_loss" in arg_names: - y_pred = self(x, training=True, return_loss=True) - else: - y_pred = self(x, training=True) - if self._using_dummy_loss: - loss = self.compiled_loss(y_pred.loss, y_pred.loss, sample_weight, regularization_losses=self.losses) - else: - loss = None - - # This next block matches outputs to label keys. Tensorflow's standard method for doing this - # can get very confused if any of the keys contain nested values (e.g. lists/tuples of Tensors) - if isinstance(y, dict) and len(y) == 1: - if list(y.keys())[0] in y_pred.keys(): - y_pred = y_pred[list(y.keys())[0]] - elif list(y_pred.keys())[0] == "loss": - y_pred = y_pred[1] - else: - y_pred = y_pred[0] - _, y = y.popitem() - elif isinstance(y, dict): - # If the labels are a dict, match keys from the output by name - y_pred = {key: val for key, val in y_pred.items() if key in y} - elif isinstance(y, tuple) or isinstance(y, list): - # If the labels are a tuple/list, match keys to the output by order, skipping the loss. - if list(y_pred.keys())[0] == "loss": - y_pred = y_pred.to_tuple()[1:] - else: - y_pred = y_pred.to_tuple() - y_pred = y_pred[: len(y)] # Remove unused fields in case those cause problems - else: - # If the labels are a single tensor, match them to the first non-loss tensor in the output - if list(y_pred.keys())[0] == "loss": - y_pred = y_pred[1] - else: - y_pred = y_pred[0] - - if loss is None: - loss = self.compiled_loss(y, y_pred, sample_weight, regularization_losses=self.losses) - - # Run backwards pass. 
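
Because `train_step` copies label keys between the input dict and `y` when the internal loss is used, the two calling conventions below end up equivalent. A small sketch with made-up token ids; the tensors are purely illustrative:

```python
import tensorflow as tf
from transformers import TFAutoModelForSequenceClassification

model = TFAutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)
model.compile(optimizer="adam")  # internal loss ("auto")

input_ids = tf.constant([[101, 2023, 2003, 102], [101, 2986, 2146, 102]])
attention_mask = tf.ones_like(input_ids)
labels = tf.constant([0, 1])

# 1) Labels inside the input dict: the forward pass computes the loss directly.
model.fit({"input_ids": input_ids, "attention_mask": attention_mask, "labels": labels}, epochs=1)

# 2) Labels passed separately as `y`: train_step copies them into the input dict for us.
model.fit({"input_ids": input_ids, "attention_mask": attention_mask}, labels, epochs=1)
```
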
- self.optimizer.minimize(loss, self.trainable_variables, tape=tape) - - self.compiled_metrics.update_state(y, y_pred, sample_weight) - # Collect metrics to return - return_metrics = {} - for metric in self.metrics: - result = metric.result() - if isinstance(result, dict): - return_metrics.update(result) - else: - return_metrics[metric.name] = result - return return_metrics - - def test_step(self, data): - """ - A modification of Keras's default `train_step` that correctly handles matching outputs to labels for our models - and supports directly training on the loss output head. In addition, it ensures input keys are copied to the - labels where appropriate. It will also copy label keys into the input dict when using the dummy loss, to ensure - that they are available to the model during the forward pass. - """ - # We hardcode the most common renamings; models with weirder names can set `self._label_to_output_map` - arg_names = list(inspect.signature(self.call).parameters) - label_kwargs = find_labels(self.__class__) - label_to_output = self.get_label_to_output_name_mapping() - output_to_label = {val: key for key, val in label_to_output.items()} - if not self._using_dummy_loss and parse(tf.__version__) < parse("2.11.0"): - # Newer versions leave this out - data = expand_1d(data) - x, y, sample_weight = tf.keras.utils.unpack_x_y_sample_weight(data) - # If the inputs are mutable dictionaries, make a shallow copy of them because we will modify - # them during input/label pre-processing. This avoids surprising the user by wrecking their data. - # In addition, modifying mutable Python inputs makes XLA compilation impossible. - if isinstance(x, dict): - x = x.copy() - if isinstance(y, dict): - y = y.copy() - - # When using a dummy loss, we ensure that separate labels are copied to the correct model arguments, - # if those keys are not already present in the input dict - if self._using_dummy_loss and y is not None: - arg_names = list(inspect.signature(self.call).parameters) - # If y is a tensor and the model only has one label-like input, map y to that input - if len(label_kwargs) == 1 and isinstance(y, tf.Tensor): - if isinstance(x, tf.Tensor): - x = {arg_names[0]: x} - label_kwarg = next(iter(label_kwargs)) - if label_kwarg not in x: - x[label_kwarg] = y - # Otherwise, copy keys from y to x as long as they weren't already present in x - elif isinstance(y, dict): - if isinstance(x, tf.Tensor): - x = {arg_names[0]: x} - for key, val in y.items(): - if key in arg_names and key not in x: - x[key] = val - elif output_to_label.get(key, None) in arg_names and key not in x: - x[output_to_label[key]] = val - if y is None: - y = {key: val for key, val in x.items() if key in label_kwargs} - if not y and not self._using_dummy_loss: - raise ValueError("Could not find label column(s) in input dict and no separate labels were provided!") - - if isinstance(y, dict): - # Rename labels at this point to match output heads - y = {label_to_output.get(key, key): val for key, val in y.items()} - - # Run forward pass. - if self._using_dummy_loss and "return_loss" in arg_names: - y_pred = self(x, return_loss=True, training=False) - else: - y_pred = self(x, training=False) - if self._using_dummy_loss: - loss = self.compiled_loss(y_pred.loss, y_pred.loss, sample_weight, regularization_losses=self.losses) - else: - loss = None - - # This next block matches outputs to label keys. Tensorflow's standard method for doing this - # can get very confused if any of the keys contain nested values (e.g. 
lists/tuples of Tensors) - if isinstance(y, dict) and len(y) == 1: - if list(y.keys())[0] in y_pred.keys(): - y_pred = y_pred[list(y.keys())[0]] - elif list(y_pred.keys())[0] == "loss": - y_pred = y_pred[1] - else: - y_pred = y_pred[0] - _, y = y.popitem() - elif isinstance(y, dict): - # If the labels are a dict, match keys from the output by name - y_pred = {key: val for key, val in y_pred.items() if key in y} - elif isinstance(y, tuple) or isinstance(y, list): - # If the labels are a tuple/list, match keys to the output by order, skipping the loss. - if list(y_pred.keys())[0] == "loss": - y_pred = y_pred.to_tuple()[1:] - else: - y_pred = y_pred.to_tuple() - y_pred = y_pred[: len(y)] # Remove unused fields in case those cause problems - else: - # If the labels are a single tensor, match them to the first non-loss tensor in the output - if list(y_pred.keys())[0] == "loss": - y_pred = y_pred[1] - else: - y_pred = y_pred[0] - - if loss is None: - loss = self.compiled_loss(y, y_pred, sample_weight, regularization_losses=self.losses) - - self.compiled_metrics.update_state(y, y_pred, sample_weight) - # Collect metrics to return - return_metrics = {} - for metric in self.metrics: - result = metric.result() - if isinstance(result, dict): - return_metrics.update(result) - else: - return_metrics[metric.name] = result - return return_metrics - - def create_model_card( - self, - output_dir, - model_name: str, - language: Optional[str] = None, - license: Optional[str] = None, - tags: Optional[str] = None, - finetuned_from: Optional[str] = None, - tasks: Optional[str] = None, - dataset_tags: Optional[Union[str, List[str]]] = None, - dataset: Optional[Union[str, List[str]]] = None, - dataset_args: Optional[Union[str, List[str]]] = None, - ): - """ - Creates a draft of a model card using the information available to the `Trainer`. - - Args: - output_dir (`str` or `os.PathLike`): - The folder in which to create the model card. - model_name (`str`, *optional*): - The name of the model. - language (`str`, *optional*): - The language of the model (if applicable) - license (`str`, *optional*): - The license of the model. Will default to the license of the pretrained model used, if the original - model given to the `Trainer` comes from a repo on the Hub. - tags (`str` or `List[str]`, *optional*): - Some tags to be included in the metadata of the model card. - finetuned_from (`str`, *optional*): - The name of the model used to fine-tune this one (if applicable). Will default to the name of the repo - of the original model given to the `Trainer` (if it comes from the Hub). - tasks (`str` or `List[str]`, *optional*): - One or several task identifiers, to be included in the metadata of the model card. - dataset_tags (`str` or `List[str]`, *optional*): - One or several dataset tags, to be included in the metadata of the model card. - dataset (`str` or `List[str]`, *optional*): - One or several dataset identifiers, to be included in the metadata of the model card. - dataset_args (`str` or `List[str]`, *optional*): - One or several dataset arguments, to be included in the metadata of the model card. - """ - # Avoids a circular import by doing this when necessary. 
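
Since Keras attaches a `History` object to the model after `fit()`, `create_model_card` can turn a finished training run into a draft README. A short usage sketch, assuming `model` has already been trained and that the directory and names below are placeholders:

```python
import os

output_dir = "./my_model"  # placeholder output directory
os.makedirs(output_dir, exist_ok=True)

# Assumes model.fit(...) has already run, which populates model.history.
model.create_model_card(
    output_dir=output_dir,
    model_name="my-finetuned-model",     # placeholder name
    finetuned_from="bert-base-uncased",  # placeholder base checkpoint
    tasks="text-classification",
    dataset="glue",
)
print(open(os.path.join(output_dir, "README.md")).read()[:500])
```
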
- from .modelcard import TrainingSummary # tests_ignore - - training_summary = TrainingSummary.from_keras( - self, - keras_history=self.history, - language=language, - license=license, - tags=tags, - model_name=model_name, - finetuned_from=finetuned_from, - tasks=tasks, - dataset_tags=dataset_tags, - dataset=dataset, - dataset_args=dataset_args, - ) - model_card = training_summary.to_model_card() - with open(os.path.join(output_dir, "README.md"), "w") as f: - f.write(model_card) - - def set_input_embeddings(self, value): - """ - Set model's input embeddings - - Args: - value (`tf.Variable`): - The new weights mapping hidden states to vocabulary. - """ - main_layer = getattr(self, self.base_model_prefix) - - if main_layer is None: - raise NotImplementedError("The model does not implements the base_model_prefix attribute.") - - try: - main_layer.set_input_embeddings(value) - except AttributeError: - logger.info("Building the model") - self.build() - main_layer.set_input_embeddings(value) - - def get_output_embeddings(self) -> Union[None, tf.keras.layers.Layer]: - """ - Returns the model's output embeddings - - Returns: - `tf.Variable`: The new weights mapping vocabulary to hidden states. - """ - if self.get_lm_head() is not None: - lm_head = self.get_lm_head() - - try: - return lm_head.get_output_embeddings() - except AttributeError: - logger.info("Building the model") - self.build() - - return lm_head().get_output_embeddings() - - return None # Overwrite for models with output embeddings - - def set_output_embeddings(self, value): - """ - Set model's output embeddings - - Args: - value (`tf.Variable`): - The new weights mapping hidden states to vocabulary. - """ - if self.get_lm_head() is not None: - lm_head = self.get_lm_head() - try: - lm_head.set_output_embeddings(value) - except AttributeError: - logger.info("Building the model") - self.build() - lm_head.set_output_embeddings(value) - - def get_output_layer_with_bias(self) -> Union[None, tf.keras.layers.Layer]: - """ - Get the layer that handles a bias attribute in case the model has an LM head with weights tied to the - embeddings - - Return: - `tf.keras.layers.Layer`: The layer that handles the bias, None if not an LM model. - """ - warnings.warn( - "The method get_output_layer_with_bias is deprecated. Please use `get_lm_head` instead.", FutureWarning - ) - return self.get_lm_head() - - def get_prefix_bias_name(self) -> Union[None, str]: - """ - Get the concatenated _prefix name of the bias from the model name to the parent layer - - Return: - `str`: The _prefix name of the bias. - """ - warnings.warn("The method get_prefix_bias_name is deprecated. Please use `get_bias` instead.", FutureWarning) - return None - - def get_bias(self) -> Union[None, Dict[str, tf.Variable]]: - """ - Dict of bias attached to an LM head. The key represents the name of the bias attribute. - - Return: - `tf.Variable`: The weights representing the bias, None if not an LM model. - """ - if self.get_lm_head() is not None: - lm_head = self.get_lm_head() - try: - return lm_head.get_bias() - except AttributeError: - self.build() - - return lm_head.get_bias() - return None - - def set_bias(self, value): - """ - Set all the bias in the LM head. - - Args: - value (`Dict[tf.Variable]`): - All the new bias attached to an LM head. 
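
The embedding and LM-head accessors above are what `resize_token_embeddings` (defined just below) relies on when the vocabulary grows. A usage sketch under the assumption of a masked-LM checkpoint; the added token strings are made up:

```python
from transformers import AutoTokenizer, TFAutoModelForMaskedLM

checkpoint = "bert-base-uncased"  # placeholder checkpoint
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = TFAutoModelForMaskedLM.from_pretrained(checkpoint)

# Add domain-specific tokens, then grow the input embeddings (and tied LM head) to match.
num_added = tokenizer.add_tokens(["<chem_entity>", "<gene_name>"])
model.resize_token_embeddings(len(tokenizer))
print(num_added, model.config.vocab_size == len(tokenizer))  # 2 True

# The accessors above expose the resized pieces directly:
embeddings = model.get_input_embeddings()  # the model's token-embedding layer
bias = model.get_bias()                    # dict of LM-head bias variables, or None
```
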
- """ - if self.get_lm_head() is not None: - lm_head = self.get_lm_head() - try: - lm_head.set_bias(value) - except AttributeError: - self.build() - lm_head.set_bias(value) - - def get_lm_head(self) -> tf.keras.layers.Layer: - """ - The LM Head layer. This method must be overwritten by all the models that have a lm head. - - Return: - `tf.keras.layers.Layer`: The LM head layer if the model has one, None if not. - """ - return None - - def resize_token_embeddings( - self, new_num_tokens: Optional[int] = None - ) -> Union[tf.keras.layers.Embedding, tf.Variable]: - """ - Resizes input token embeddings matrix of the model if `new_num_tokens != config.vocab_size`. - - Takes care of tying weights embeddings afterwards if the model class has a `tie_weights()` method. - - Arguments: - new_num_tokens (`int`, *optional*): - The number of new tokens in the embedding matrix. Increasing the size will add newly initialized - vectors at the end. Reducing the size will remove vectors from the end. If not provided or `None`, just - returns a pointer to the input tokens without doing anything. - - Return: - `tf.Variable` or `tf.keras.layers.Embedding`: Pointer to the input tokens of the model. - """ - # TODO (joao): flagged for replacement (by `_v2_resized_token_embeddings`) due to embeddings refactor - - # Run the new code path if the model has a keras embeddings layer - if isinstance(self.get_input_embeddings(), tf.keras.layers.Embedding): - return self._v2_resized_token_embeddings(new_num_tokens) - - if new_num_tokens is None or new_num_tokens == self.config.vocab_size: - return self._get_word_embedding_weight(self.get_input_embeddings()) - - model_embeds = self._resize_token_embeddings(new_num_tokens) - - # Update base model and current model config - self.config.vocab_size = new_num_tokens - - return model_embeds - - def _v2_resized_token_embeddings(self, new_num_tokens: Optional[int] = None) -> tf.keras.layers.Embedding: - """ - Resizes input token embeddings matrix of the model if `new_num_tokens != config.vocab_size`. - - Arguments: - new_num_tokens (`int`, *optional*): - The number of new tokens in the embedding matrix. Increasing the size will add newly initialized - vectors at the end. Reducing the size will remove vectors from the end. If not provided or `None`, just - returns a pointer to the input tokens without doing anything. - - Return: - `tf.keras.layers.Embedding`: Pointer to the input tokens of the model. 
- """ - if new_num_tokens is None or new_num_tokens == self.config.vocab_size: - return self.get_input_embeddings() - - model_embeds = self._v2_resize_token_embeddings(new_num_tokens) - - # Update base model and current model config - self.config.vocab_size = new_num_tokens - - return model_embeds - - def _get_word_embedding_weight(model, embedding_layer): - # TODO (joao): flagged for delection due to embeddings refactor - - # If the variable holds the weights themselves, return them - if isinstance(embedding_layer, tf.Tensor): - return embedding_layer - # Otherwise, try to get them from the layer's attributes - - embeds = getattr(embedding_layer, "weight", None) - if embeds is not None: - return embeds - - embeds = getattr(embedding_layer, "decoder", None) - if embeds is not None: - return embeds - - # The reason why the attributes don't exist might be - # because the model is not built, so retry getting - # the argument after building the model - model.build() - - embeds = getattr(embedding_layer, "weight", None) - if embeds is not None: - return embeds - - embeds = getattr(embedding_layer, "decoder", None) - if embeds is not None: - return embeds - - return None - - def _resize_token_embeddings(self, new_num_tokens): - # TODO (joao): flagged for replacement (by `_v2_resize_token_embeddings`) due to embeddings refactor - old_embeddings = self._get_word_embedding_weight(self.get_input_embeddings()) - new_embeddings = self._get_resized_embeddings(old_embeddings, new_num_tokens) - - # if word embeddings are not tied, make sure that lm head bias is resized as well - if self.get_bias() is not None: - old_lm_head_bias = self.get_bias() - new_lm_head_bias = self._get_resized_lm_head_bias(old_lm_head_bias, new_num_tokens) - - self.set_bias(new_lm_head_bias) - - # if word embeddings are not tied, make sure that lm head decoder is resized as well - if self.get_output_embeddings() is not None: - old_lm_head_decoder = self._get_word_embedding_weight(self.get_output_embeddings()) - new_lm_head_decoder = self._get_resized_lm_head_decoder(old_lm_head_decoder, new_num_tokens) - - self.set_output_embeddings(new_lm_head_decoder) - - self.set_input_embeddings(new_embeddings) - - return self.get_input_embeddings() - - def _v2_resize_token_embeddings(self, new_num_tokens): - old_embeddings = self.get_input_embeddings() - new_embeddings = self._v2_get_resized_embeddings(old_embeddings, new_num_tokens) - self.set_input_embeddings(new_embeddings) - - # If word embeddings are not tied, make sure that lm head bias is resized as well - if self.get_bias() is not None: - old_lm_head_bias = self.get_bias() - new_lm_head_bias = self._v2_get_resized_lm_head_bias(old_lm_head_bias, new_num_tokens) - self.set_bias(new_lm_head_bias) - - # If word embeddings are not tied, make sure that lm head decoder is resized as well. - tied_weights = self.get_input_embeddings() == self.get_output_embeddings() - if self.get_output_embeddings() is not None and not tied_weights: - old_lm_head_decoder = self._get_word_embedding_weight(self.get_output_embeddings()) - # TODO (joao): this one probably needs a v2 version with other models - new_lm_head_decoder = self._get_resized_lm_head_decoder(old_lm_head_decoder, new_num_tokens) - self.set_output_embeddings(new_lm_head_decoder) - - return self.get_input_embeddings() - - def _get_resized_lm_head_bias(self, old_lm_head_bias, new_num_tokens): - """ - Build a resized bias from the old ones. Increasing the size will add newly initialized vectors at the end. 
- Reducing the size will remove vectors from the end - - Args: - old_lm_head_bias (`tf.Variable`): - Old lm head bias to be resized. - new_num_tokens (`int`, *optional*): - New number of tokens in the linear matrix. - - Increasing the size will add newly initialized vectors at the end. Reducing the size will remove - vectors from the end. If not provided or `None`, just returns None - - Return: - `tf.Variable`: Pointer to the resized bias. - """ - # TODO (joao): flagged for replacement (by `_v2_get_resized_lm_head_bias`) due to embeddings refactor - new_lm_head_bias = {} - - for attr, weight in old_lm_head_bias.items(): - first_dim, old_num_tokens = (None, shape_list(weight)[0]) if tf.rank(weight) == 1 else shape_list(weight) - size_diff = new_num_tokens - old_num_tokens - final_shape = [new_num_tokens] if first_dim is None else [first_dim, new_num_tokens] - - # initialize new bias - if tf.math.greater(size_diff, 0): - padding_shape = [[0, size_diff]] if first_dim is None else [[0, 0], [0, size_diff]] - current_bias = tf.pad(weight.value(), tf.convert_to_tensor(padding_shape), constant_values=-1) - num_tokens_to_copy = min(old_num_tokens, new_num_tokens) - mask_shape = [num_tokens_to_copy] if first_dim is None else [1, num_tokens_to_copy] - bias_mask = tf.fill(tf.convert_to_tensor(mask_shape), True) - bias_mask = tf.pad(bias_mask, tf.convert_to_tensor(padding_shape), constant_values=False) - else: - slice_from = [0] if first_dim is None else [0, 0] - current_bias = tf.slice( - weight.value(), tf.convert_to_tensor(slice_from), tf.convert_to_tensor(final_shape) - ) - bias_mask = tf.fill(tf.convert_to_tensor(final_shape), True) - - new_bias = self.add_weight( - shape=final_shape, - initializer="zeros", - trainable=True, - name=weight.name.split(":")[0], - ) - init_bias = tf.where(bias_mask, current_bias, new_bias.value()) - - new_bias.assign(init_bias) - new_lm_head_bias[attr] = new_bias - - return new_lm_head_bias - - def _v2_get_resized_lm_head_bias( - self, old_lm_head_bias: Dict[str, tf.Variable], new_num_tokens: int - ) -> Dict[str, tf.Tensor]: - """ - Build a resized bias from the old ones. Increasing the size will add newly initialized vectors at the end. - Reducing the size will remove vectors from the end - - Args: - old_lm_head_bias (`Dict[str, tf.Variable]`): - Old lm head bias to be resized. - new_num_tokens (`int`): - New number of tokens in the linear matrix. Increasing the size will add newly initialized vectors at - the end. Reducing the size will remove vectors from the end. - - Return: - `tf.Tensor`: Values for the resized bias. - """ - new_lm_head_bias = {} - - for attr, weight in old_lm_head_bias.items(): - # Determine the size difference (depending on the shape) - first_dim, old_num_tokens = (None, shape_list(weight)[0]) if tf.rank(weight) == 1 else shape_list(weight) - size_diff = new_num_tokens - old_num_tokens - - # Copy the old bias values to the new bias - if old_num_tokens > new_num_tokens: - new_bias = weight.value()[..., :new_num_tokens] - else: - padding_shape = [[0, size_diff]] if first_dim is None else [[0, 0], [0, size_diff]] - new_bias = tf.pad(weight.value(), tf.convert_to_tensor(padding_shape)) - - new_lm_head_bias[attr] = new_bias - return new_lm_head_bias - - def _get_resized_lm_head_decoder(self, old_lm_head_decoder, new_num_tokens): - """ - Build a resized decoder from the old ones. Increasing the size will add newly initialized vectors at the end. 
- Reducing the size will remove vectors from the end - - Args: - old_lm_head_decoder (`tf.Variable`): - Old lm head decoder to be resized. - new_num_tokens (`int`, *optional*): - New number of tokens in the linear matrix. - - Increasing the size will add newly initialized vectors at the end. Reducing the size will remove - vectors from the end. If not provided or `None`, just returns None - - Return: - `tf.Variable`: Pointer to the resized decoder or None if the output embeddings are different from the input - ones. - """ - new_lm_head_decoder = old_lm_head_decoder - is_input_output_equals = tf.reduce_any( - self._get_word_embedding_weight(self.get_input_embeddings()) == old_lm_head_decoder - ) - - if old_lm_head_decoder is not None and not is_input_output_equals: - old_embedding_dim = shape_list(old_lm_head_decoder)[1] - decoder_mask, current_decoder = init_copy_embeddings(old_lm_head_decoder, new_num_tokens) - new_lm_head_decoder = self.add_weight( - shape=(new_num_tokens, old_embedding_dim), - initializer="zeros", - trainable=True, - name=old_lm_head_decoder.name.split(":")[0], - ) - init_decoder = tf.where(decoder_mask, current_decoder, new_lm_head_decoder.value()) - - new_lm_head_decoder.assign(init_decoder) - - return new_lm_head_decoder - - def _get_resized_embeddings(self, old_embeddings, new_num_tokens=None) -> tf.Variable: - """ - Build a resized Embedding weights from a provided token Embedding weights. Increasing the size will add newly - initialized vectors at the end. Reducing the size will remove vectors from the end - - Args: - old_embeddings (`tf.Variable`): - Old embeddings to be resized. - new_num_tokens (`int`, *optional*): - New number of tokens in the embedding matrix. - - Increasing the size will add newly initialized vectors at the end. Reducing the size will remove - vectors from the end. If not provided or `None`, just returns a pointer to the input tokens - `tf.Variable` module of the model without doing anything. - - Return: - `tf.Variable`: Pointer to the resized Embedding Module or the old Embedding Module if `new_num_tokens` is - `None` - """ - # TODO (joao): flagged for replacement (by `_v2_get_resized_embeddings`) due to embeddings refactor - old_embedding_dim = shape_list(old_embeddings)[1] - init_range = getattr(self.config, "initializer_range", 0.02) - embeddings_mask, current_embeddings = init_copy_embeddings(old_embeddings, new_num_tokens) - new_embeddings = self.add_weight( - name=old_embeddings.name.split(":")[0], - shape=[new_num_tokens, old_embedding_dim], - initializer=get_initializer(init_range), - dtype=tf.float32, - ) - init_embeddings = tf.where(embeddings_mask, current_embeddings, new_embeddings.value()) - - new_embeddings.assign(init_embeddings) - - return new_embeddings - - def _v2_get_resized_embeddings( - self, old_embeddings: tf.keras.layers.Embedding, new_num_tokens: int - ) -> tf.keras.layers.Embedding: - """ - Build a resized Embedding layer from a provided Embedding layer. Increasing the size will add newly initialized - vectors at the end. Reducing the size will remove vectors from the end. - - Args: - old_embeddings (`tf.keras.layers.Embedding`): - Old embeddings to be resized. - new_num_tokens (`int`, *optional*): - New number of tokens in the embedding matrix. - - Return: - `tf.keras.layers.Embedding`: Resized Embedding layer. 
- """ - - # Get the initialization range for the embeddings - init_range = 0.02 # default value - potential_initialization_variable_names = [ - "initializer_range", # most common - "initializer_factor", # e.g. T5 - "init_std", # e.g BART - ] - for var_name in potential_initialization_variable_names: - if hasattr(self.config, var_name): - init_range = getattr(self.config, var_name) - - # Get a new (initialized) embeddings layer - new_embeddings = tf.keras.layers.Embedding( - input_dim=new_num_tokens, - output_dim=old_embeddings.output_dim, - embeddings_initializer=tf.keras.initializers.TruncatedNormal(stddev=init_range), - name=old_embeddings.embeddings.name[:-13], # exact same scoped name except "/embeddings:0" - ) - new_embeddings(tf.constant([[0]])) - - # Copy the old embeddings to the new embeddings - if old_embeddings.input_dim >= new_num_tokens: - init_embeddings = old_embeddings.embeddings[:new_num_tokens] - else: - init_embeddings = tf.concat( - [old_embeddings.embeddings, new_embeddings.embeddings[old_embeddings.input_dim :]], axis=0 - ) - new_embeddings.embeddings.assign(init_embeddings) - return new_embeddings - - def prune_heads(self, heads_to_prune): - """ - Prunes heads of the base model. - - Arguments: - heads_to_prune (`Dict[int, List[int]]`): - Dictionary with keys being selected layer indices (`int`) and associated values being the list of heads - to prune in said layer (list of `int`). For instance {1: [0, 2], 2: [2, 3]} will prune heads 0 and 2 on - layer 1 and heads 2 and 3 on layer 2. - """ - raise NotImplementedError - - def save_pretrained( - self, - save_directory, - saved_model=False, - version=1, - push_to_hub=False, - signatures=None, - max_shard_size: Union[int, str] = "10GB", - create_pr: bool = False, - safe_serialization: bool = False, - token: Optional[Union[str, bool]] = None, - **kwargs, - ): - """ - Save a model and its configuration file to a directory, so that it can be re-loaded using the - [`~TFPreTrainedModel.from_pretrained`] class method. - - Arguments: - save_directory (`str`): - Directory to which to save. Will be created if it doesn't exist. - saved_model (`bool`, *optional*, defaults to `False`): - If the model has to be saved in saved model format as well or not. - version (`int`, *optional*, defaults to 1): - The version of the saved model. A saved model needs to be versioned in order to be properly loaded by - TensorFlow Serving as detailed in the official documentation - https://www.tensorflow.org/tfx/serving/serving_basic - push_to_hub (`bool`, *optional*, defaults to `False`): - Whether or not to push your model to the Hugging Face model hub after saving it. You can specify the - repository you want to push to with `repo_id` (will default to the name of `save_directory` in your - namespace). - signatures (`dict` or `tf.function`, *optional*): - Model's signature used for serving. This will be passed to the `signatures` argument of model.save(). - max_shard_size (`int` or `str`, *optional*, defaults to `"10GB"`): - The maximum size for a checkpoint before being sharded. Checkpoints shard will then be each of size - lower than this size. If expressed as a string, needs to be digits followed by a unit (like `"5MB"`). - - - - If a single weight of the model is bigger than `max_shard_size`, it will be in its own checkpoint shard - which will be bigger than `max_shard_size`. - - - - create_pr (`bool`, *optional*, defaults to `False`): - Whether or not to create a PR with the uploaded files or directly commit. 
- safe_serialization (`bool`, *optional*, defaults to `False`): - Whether to save the model using `safetensors` or the traditional PyTorch way (that uses `pickle`). - token (`str` or `bool`, *optional*): - The token to use as HTTP bearer authorization for remote files. If `True`, or not specified, will use - the token generated when running `huggingface-cli login` (stored in `~/.huggingface`). - kwargs (`Dict[str, Any]`, *optional*): - Additional key word arguments passed along to the [`~utils.PushToHubMixin.push_to_hub`] method. - """ - use_auth_token = kwargs.pop("use_auth_token", None) - - if use_auth_token is not None: - warnings.warn( - "The `use_auth_token` argument is deprecated and will be removed in v5 of Transformers.", FutureWarning - ) - if token is not None: - raise ValueError( - "`token` and `use_auth_token` are both specified. Please set only the argument `token`." - ) - token = use_auth_token - - if token is not None: - kwargs["token"] = token - - if os.path.isfile(save_directory): - logger.error(f"Provided path ({save_directory}) should be a directory, not a file") - return - - os.makedirs(save_directory, exist_ok=True) - - if push_to_hub: - commit_message = kwargs.pop("commit_message", None) - repo_id = kwargs.pop("repo_id", save_directory.split(os.path.sep)[-1]) - repo_id = self._create_repo(repo_id, **kwargs) - files_timestamps = self._get_files_timestamps(save_directory) - - if saved_model: - # If `torch_dtype` is in the config with a torch dtype class as the value, we need to change it to string. - # (Although TF doesn't care about this attribute, we can't just remove it or set it to `None`.) - if getattr(self.config, "torch_dtype", None) is not None and not isinstance(self.config.torch_dtype, str): - self.config.torch_dtype = str(self.config.torch_dtype).split(".")[1] - if signatures is None: - serving_default = self.serving.get_concrete_function(self.input_signature) - if any(spec.dtype == tf.int32 for spec in self.input_signature.values()): - int64_spec = { - key: tf.TensorSpec( - shape=spec.shape, dtype=tf.int64 if spec.dtype == tf.int32 else spec.dtype, name=spec.name - ) - for key, spec in self.input_signature.items() - } - int64_serving = self.serving.get_concrete_function(int64_spec) - signatures = {"serving_default": serving_default, "int64_serving": int64_serving} - else: - signatures = serving_default - saved_model_dir = os.path.join(save_directory, "saved_model", str(version)) - self.save(saved_model_dir, include_optimizer=False, signatures=signatures) - logger.info(f"Saved model created in {saved_model_dir}") - - # Save configuration file - self.config.architectures = [self.__class__.__name__[2:]] - - # If we have a custom model, we copy the file defining it in the folder and set the attributes so it can be - # loaded from the Hub. 
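
When `saved_model=True`, `save_pretrained` additionally writes a versioned TensorFlow SavedModel using the serving signatures built above. A sketch of exporting and reloading it with plain TensorFlow; the paths are placeholders, and the exact signature inputs depend on the model's `input_signature`:

```python
import tensorflow as tf
from transformers import TFAutoModel

model = TFAutoModel.from_pretrained("bert-base-uncased")  # placeholder checkpoint

# Writes config + weights, plus ./export_dir/saved_model/1/ for TensorFlow Serving.
model.save_pretrained("./export_dir", saved_model=True, version=1)

# The SavedModel can be reloaded outside Transformers and called through its signature.
reloaded = tf.saved_model.load("./export_dir/saved_model/1")
serving_fn = reloaded.signatures["serving_default"]
out = serving_fn(
    input_ids=tf.constant([[101, 7592, 102]], dtype=tf.int32),
    attention_mask=tf.constant([[1, 1, 1]], dtype=tf.int32),
    token_type_ids=tf.constant([[0, 0, 0]], dtype=tf.int32),  # present for BERT-like signatures
)
print(list(out.keys()))
```
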
- if self._auto_class is not None: - custom_object_save(self, save_directory, config=self.config) - - self.config.save_pretrained(save_directory) - if self.can_generate(): - self.generation_config.save_pretrained(save_directory) - - # If we save using the predefined names, we can load using `from_pretrained` - weights_name = SAFE_WEIGHTS_NAME if safe_serialization else TF2_WEIGHTS_NAME - output_model_file = os.path.join(save_directory, weights_name) - - shards, index = tf_shard_checkpoint(self.weights, max_shard_size) - - # Clean the folder from a previous save - for filename in os.listdir(save_directory): - full_filename = os.path.join(save_directory, filename) - # If we have a shard file that is not going to be replaced, we delete it, but only from the main process - # in distributed settings to avoid race conditions. - weights_no_suffix = weights_name.replace(".bin", "").replace(".safetensors", "") - if ( - filename.startswith(weights_no_suffix) - and os.path.isfile(full_filename) - and filename not in shards.keys() - ): - os.remove(full_filename) - - if index is None: - if safe_serialization: - state_dict = {format_weight_name(w.name): w.value() for w in self.weights} - safe_save_file(state_dict, output_model_file, metadata={"format": "tf"}) - else: - self.save_weights(output_model_file) - logger.info(f"Model weights saved in {output_model_file}") - else: - save_index_file = os.path.join(save_directory, TF2_WEIGHTS_INDEX_NAME) - # Save the index as well - with open(save_index_file, "w", encoding="utf-8") as index_file: - content = json.dumps(index, indent=2, sort_keys=True) + "\n" - index_file.write(content) - logger.info( - f"The model is bigger than the maximum size per checkpoint ({max_shard_size}) and is going to be " - f"split in {len(shards)} checkpoint shards. You can find where each parameters has been saved in the " - f"index located at {save_index_file}." - ) - for shard_file, shard in shards.items(): - with h5py.File(os.path.join(save_directory, shard_file), mode="w") as shard_file: - layers = [] - for layer in sorted(shard, key=lambda x: x.name): - if "model." in layer.name or len(layer.name.split("/")) == 1: - layer_name = layer.name - else: - layer_name = "/".join(layer.name.split("/")[1:]) - param_dset = shard_file.create_dataset( - layer_name, layer.numpy().shape, dtype=layer.numpy().dtype - ) - param_dset[:] = layer.numpy() - layers.append(layer_name.encode("utf8")) - save_attributes_to_hdf5_group(shard_file, "layer_names", layers) - - if push_to_hub: - self._upload_modified_files( - save_directory, - repo_id, - files_timestamps, - commit_message=commit_message, - token=token, - ) - - @classmethod - def from_pretrained( - cls, - pretrained_model_name_or_path: Optional[Union[str, os.PathLike]], - *model_args, - config: Optional[Union[PretrainedConfig, str, os.PathLike]] = None, - cache_dir: Optional[Union[str, os.PathLike]] = None, - ignore_mismatched_sizes: bool = False, - force_download: bool = False, - local_files_only: bool = False, - token: Optional[Union[str, bool]] = None, - revision: str = "main", - **kwargs, - ): - r""" - Instantiate a pretrained TF 2.0 model from a pre-trained model configuration. - - The warning *Weights from XXX not initialized from pretrained model* means that the weights of XXX do not come - pretrained with the rest of the model. It is up to you to train those weights with a downstream fine-tuning - task. - - The warning *Weights from XXX not used in YYY* means that the layer XXX is not used by YYY, therefore those - weights are discarded. 
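
Sharding is controlled entirely at save time through `max_shard_size`, and the resulting directory round-trips through `from_pretrained`, whose options are documented below. A sketch; the directory name and shard size are arbitrary:

```python
from transformers import TFAutoModelForSequenceClassification

model = TFAutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

# Force several small shards instead of a single tf_model.h5; an index JSON is written alongside them.
model.save_pretrained("./sharded_ckpt", max_shard_size="200MB")
# model.save_pretrained("./safetensors_ckpt", safe_serialization=True)  # single-file safetensors instead

# Reloading resolves the index and opens every shard transparently.
reloaded = TFAutoModelForSequenceClassification.from_pretrained("./sharded_ckpt")
```
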
- - Parameters: - pretrained_model_name_or_path (`str`, *optional*): - Can be either: - - - A string, the *model id* of a pretrained model hosted inside a model repo on huggingface.co. - Valid model ids can be located at the root-level, like `bert-base-uncased`, or namespaced under a - user or organization name, like `dbmdz/bert-base-german-cased`. - - A path to a *directory* containing model weights saved using - [`~TFPreTrainedModel.save_pretrained`], e.g., `./my_model_directory/`. - - A path or url to a *PyTorch state_dict save file* (e.g, `./pt_model/pytorch_model.bin`). In this - case, `from_pt` should be set to `True` and a configuration object should be provided as `config` - argument. This loading path is slower than converting the PyTorch model in a TensorFlow model - using the provided conversion scripts and loading the TensorFlow model afterwards. - - `None` if you are both providing the configuration and state dictionary (resp. with keyword - arguments `config` and `state_dict`). - model_args (sequence of positional arguments, *optional*): - All remaining positional arguments will be passed to the underlying model's `__init__` method. - config (`Union[PretrainedConfig, str]`, *optional*): - Can be either: - - - an instance of a class derived from [`PretrainedConfig`], - - a string valid as input to [`~PretrainedConfig.from_pretrained`]. - - Configuration for the model to use instead of an automatically loaded configuration. Configuration can - be automatically loaded when: - - - The model is a model provided by the library (loaded with the *model id* string of a pretrained - model). - - The model was saved using [`~TFPreTrainedModel.save_pretrained`] and is reloaded by supplying the - save directory. - - The model is loaded by supplying a local directory as `pretrained_model_name_or_path` and a - configuration JSON file named *config.json* is found in the directory. - from_pt (`bool`, *optional*, defaults to `False`): - Load the model weights from a PyTorch state_dict save file (see docstring of - `pretrained_model_name_or_path` argument). - ignore_mismatched_sizes (`bool`, *optional*, defaults to `False`): - Whether or not to raise an error if some of the weights from the checkpoint do not have the same size - as the weights of the model (if for instance, you are instantiating a model with 10 labels from a - checkpoint with 3 labels). - cache_dir (`str`, *optional*): - Path to a directory in which a downloaded pretrained model configuration should be cached if the - standard cache should not be used. - force_download (`bool`, *optional*, defaults to `False`): - Whether or not to force the (re-)download of the model weights and configuration files, overriding the - cached versions if they exist. - resume_download (`bool`, *optional*, defaults to `False`): - Whether or not to delete incompletely received files. Will attempt to resume the download if such a - file exists. - proxies: - (`Dict[str, str], `optional`): A dictionary of proxy servers to use by protocol or endpoint, e.g., - `{'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}`. The proxies are used on each request. - output_loading_info(`bool`, *optional*, defaults to `False`): Whether ot not to also return a - dictionary containing missing keys, unexpected keys and error messages. - local_files_only(`bool`, *optional*, defaults to `False`): - Whether or not to only look at local files (e.g., not try downloading the model). 
- token (`str` or `bool`, *optional*): - The token to use as HTTP bearer authorization for remote files. If `True`, or not specified, will use - the token generated when running `huggingface-cli login` (stored in `~/.huggingface`). - revision (`str`, *optional*, defaults to `"main"`): - The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a - git-based system for storing models and other artifacts on huggingface.co, so `revision` can be any - identifier allowed by git. - - - - - To test a pull request you made on the Hub, you can pass `revision="refs/pr/". - - - - mirror (`str`, *optional*): - Mirror source to accelerate downloads in China. If you are from China and have an accessibility - problem, you can set this option to resolve it. Note that we do not guarantee the timeliness or safety. - Please refer to the mirror site for more information. - subfolder (`str`, *optional*, defaults to `""`): - In case the relevant files are located inside a subfolder of the model repo on huggingface.co, you can - specify the folder name here. - tf_to_pt_weight_rename (`Callable`, *optional*): - A function that is called to transform the names of weights during the PyTorch to TensorFlow - crossloading process. This is not necessary for most models, but is useful to allow composite models to - be crossloaded correctly. - kwargs (remaining dictionary of keyword arguments, *optional*): - Can be used to update the configuration object (after it being loaded) and initiate the model (e.g., - `output_attentions=True`). Behaves differently depending on whether a `config` is provided or - automatically loaded: - - - If a configuration is provided with `config`, `**kwargs` will be directly passed to the - underlying model's `__init__` method (we assume all relevant updates to the configuration have - already been done) - - If a configuration is not provided, `kwargs` will be first passed to the configuration class - initialization function ([`~PretrainedConfig.from_pretrained`]). Each key of `kwargs` that - corresponds to a configuration attribute will be used to override said attribute with the - supplied `kwargs` value. Remaining keys that do not correspond to any configuration attribute - will be passed to the underlying model's `__init__` function. - - Examples: - - ```python - >>> from transformers import BertConfig, TFBertModel - - >>> # Download model and configuration from huggingface.co and cache. - >>> model = TFBertModel.from_pretrained("bert-base-uncased") - >>> # Model was saved using *save_pretrained('./test/saved_model/')* (for example purposes, not runnable). - >>> model = TFBertModel.from_pretrained("./test/saved_model/") - >>> # Update configuration during loading. - >>> model = TFBertModel.from_pretrained("bert-base-uncased", output_attentions=True) - >>> assert model.config.output_attentions == True - >>> # Loading from a Pytorch model file instead of a TensorFlow checkpoint (slower, for example purposes, not runnable). 
- >>> config = BertConfig.from_json_file("./pt_model/my_pt_model_config.json") - >>> model = TFBertModel.from_pretrained("./pt_model/my_pytorch_model.bin", from_pt=True, config=config) - ```""" - from_pt = kwargs.pop("from_pt", False) - resume_download = kwargs.pop("resume_download", False) - proxies = kwargs.pop("proxies", None) - output_loading_info = kwargs.pop("output_loading_info", False) - use_auth_token = kwargs.pop("use_auth_token", None) - trust_remote_code = kwargs.pop("trust_remote_code", None) - _ = kwargs.pop("mirror", None) - load_weight_prefix = kwargs.pop("load_weight_prefix", None) - from_pipeline = kwargs.pop("_from_pipeline", None) - from_auto_class = kwargs.pop("_from_auto", False) - subfolder = kwargs.pop("subfolder", "") - commit_hash = kwargs.pop("_commit_hash", None) - tf_to_pt_weight_rename = kwargs.pop("tf_to_pt_weight_rename", None) - - # Not relevant for TF models - _ = kwargs.pop("adapter_kwargs", None) - - if use_auth_token is not None: - warnings.warn( - "The `use_auth_token` argument is deprecated and will be removed in v5 of Transformers.", FutureWarning - ) - if token is not None: - raise ValueError( - "`token` and `use_auth_token` are both specified. Please set only the argument `token`." - ) - token = use_auth_token - - if trust_remote_code is True: - logger.warning( - "The argument `trust_remote_code` is to be used with Auto classes. It has no effect here and is" - " ignored." - ) - - user_agent = {"file_type": "model", "framework": "tensorflow", "from_auto_class": from_auto_class} - if from_pipeline is not None: - user_agent["using_pipeline"] = from_pipeline - - if is_offline_mode() and not local_files_only: - logger.info("Offline mode: forcing local_files_only=True") - local_files_only = True - - # Load config if we don't provide a configuration - if not isinstance(config, PretrainedConfig): - config_path = config if config is not None else pretrained_model_name_or_path - config, model_kwargs = cls.config_class.from_pretrained( - config_path, - cache_dir=cache_dir, - return_unused_kwargs=True, - force_download=force_download, - resume_download=resume_download, - proxies=proxies, - local_files_only=local_files_only, - token=token, - revision=revision, - _from_auto=from_auto_class, - _from_pipeline=from_pipeline, - _commit_hash=commit_hash, - **kwargs, - ) - else: - model_kwargs = kwargs - - if commit_hash is None: - commit_hash = getattr(config, "_commit_hash", None) - - # This variable will flag if we're loading a sharded checkpoint. In this case the archive file is just the - # index of the files. 
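
For a local directory, the resolution logic that follows picks a weights file in a fixed order. The helper below is only a simplified restatement of that precedence for illustration; it is not a function from the library, and the filename constants are the standard Transformers ones:

```python
import os

def pick_local_weights(model_dir: str, from_pt: bool = False, safetensors_ok: bool = True) -> str:
    """Simplified restatement of the local-directory lookup order used by from_pretrained."""
    candidates = []
    if from_pt:
        candidates += ["pytorch_model.bin", "pytorch_model.bin.index.json"]
    if safetensors_ok:
        candidates += ["model.safetensors", "model.safetensors.index.json"]
    candidates += ["tf_model.h5", "tf_model.h5.index.json"]

    for name in candidates:
        path = os.path.join(model_dir, name)
        if os.path.isfile(path):
            return path
    if os.path.isfile(os.path.join(model_dir, "pytorch_model.bin")):
        raise OSError("Only PyTorch weights found - pass from_pt=True to load them.")
    raise OSError(f"No TF 2.0 or safetensors weights found in {model_dir}.")
```
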
- is_sharded = False - # Load model - if pretrained_model_name_or_path is not None: - pretrained_model_name_or_path = str(pretrained_model_name_or_path) - is_local = os.path.isdir(pretrained_model_name_or_path) - if is_local: - if from_pt and os.path.isfile(os.path.join(pretrained_model_name_or_path, WEIGHTS_NAME)): - # Load from a PyTorch checkpoint in priority if from_pt - archive_file = os.path.join(pretrained_model_name_or_path, WEIGHTS_NAME) - elif from_pt and os.path.isfile(os.path.join(pretrained_model_name_or_path, WEIGHTS_INDEX_NAME)): - # Load from a sharded PyTorch checkpoint - archive_file = os.path.join(pretrained_model_name_or_path, WEIGHTS_INDEX_NAME) - is_sharded = True - elif is_safetensors_available() and os.path.isfile( - os.path.join(pretrained_model_name_or_path, SAFE_WEIGHTS_NAME) - ): - # Load from a safetensors checkpoint - archive_file = os.path.join(pretrained_model_name_or_path, SAFE_WEIGHTS_NAME) - elif is_safetensors_available() and os.path.isfile( - os.path.join(pretrained_model_name_or_path, SAFE_WEIGHTS_INDEX_NAME) - ): - # Load from a sharded safetensors checkpoint - archive_file = os.path.join(pretrained_model_name_or_path, SAFE_WEIGHTS_INDEX_NAME) - is_sharded = True - raise NotImplementedError("Support for sharded checkpoints using safetensors is coming soon!") - elif os.path.isfile(os.path.join(pretrained_model_name_or_path, TF2_WEIGHTS_NAME)): - # Load from a TF 2.0 checkpoint - archive_file = os.path.join(pretrained_model_name_or_path, TF2_WEIGHTS_NAME) - elif os.path.isfile(os.path.join(pretrained_model_name_or_path, TF2_WEIGHTS_INDEX_NAME)): - # Load from a sharded TF 2.0 checkpoint - archive_file = os.path.join(pretrained_model_name_or_path, TF2_WEIGHTS_INDEX_NAME) - is_sharded = True - # At this stage we don't have a weight file so we will raise an error. - elif os.path.isfile(os.path.join(pretrained_model_name_or_path, WEIGHTS_NAME)) or os.path.isfile( - os.path.join(pretrained_model_name_or_path, WEIGHTS_INDEX_NAME) - ): - raise EnvironmentError( - f"Error no file named {TF2_WEIGHTS_NAME} found in directory {pretrained_model_name_or_path} " - "but there is a file for PyTorch weights. Use `from_pt=True` to load this model from those " - "weights." - ) - else: - raise EnvironmentError( - f"Error no file named {TF2_WEIGHTS_NAME} or {WEIGHTS_NAME} found in directory " - f"{pretrained_model_name_or_path}." 
- ) - elif os.path.isfile(pretrained_model_name_or_path): - archive_file = pretrained_model_name_or_path - is_local = True - elif os.path.isfile(pretrained_model_name_or_path + ".index"): - archive_file = pretrained_model_name_or_path + ".index" - is_local = True - elif is_remote_url(pretrained_model_name_or_path): - filename = pretrained_model_name_or_path - resolved_archive_file = download_url(pretrained_model_name_or_path) - else: - # set correct filename - if from_pt: - filename = WEIGHTS_NAME - elif is_safetensors_available(): - filename = SAFE_WEIGHTS_NAME - else: - filename = TF2_WEIGHTS_NAME - - try: - # Load from URL or cache if already cached - cached_file_kwargs = { - "cache_dir": cache_dir, - "force_download": force_download, - "proxies": proxies, - "resume_download": resume_download, - "local_files_only": local_files_only, - "token": token, - "user_agent": user_agent, - "revision": revision, - "subfolder": subfolder, - "_raise_exceptions_for_missing_entries": False, - "_commit_hash": commit_hash, - } - resolved_archive_file = cached_file(pretrained_model_name_or_path, filename, **cached_file_kwargs) - - # Since we set _raise_exceptions_for_missing_entries=False, we don't get an exception but a None - # result when internet is up, the repo and revision exist, but the file does not. - if resolved_archive_file is None and filename == SAFE_WEIGHTS_NAME: - # Maybe the checkpoint is sharded, we try to grab the index name in this case. - resolved_archive_file = cached_file( - pretrained_model_name_or_path, SAFE_WEIGHTS_INDEX_NAME, **cached_file_kwargs - ) - if resolved_archive_file is not None: - is_sharded = True - raise NotImplementedError( - "Support for sharded checkpoints using safetensors is coming soon!" - ) - else: - # This repo has no safetensors file of any kind, we switch to TensorFlow. - filename = TF2_WEIGHTS_NAME - resolved_archive_file = cached_file( - pretrained_model_name_or_path, TF2_WEIGHTS_NAME, **cached_file_kwargs - ) - if resolved_archive_file is None and filename == TF2_WEIGHTS_NAME: - # Maybe the checkpoint is sharded, we try to grab the index name in this case. - resolved_archive_file = cached_file( - pretrained_model_name_or_path, TF2_WEIGHTS_INDEX_NAME, **cached_file_kwargs - ) - if resolved_archive_file is not None: - is_sharded = True - if resolved_archive_file is None and filename == WEIGHTS_NAME: - # Maybe the checkpoint is sharded, we try to grab the index name in this case. - resolved_archive_file = cached_file( - pretrained_model_name_or_path, WEIGHTS_INDEX_NAME, **cached_file_kwargs - ) - if resolved_archive_file is not None: - is_sharded = True - if resolved_archive_file is None: - # Otherwise, maybe there is a PyTorch or Flax model file. We try those to give a helpful error - # message. - has_file_kwargs = { - "revision": revision, - "proxies": proxies, - "token": token, - } - if has_file(pretrained_model_name_or_path, WEIGHTS_NAME, **has_file_kwargs): - raise EnvironmentError( - f"{pretrained_model_name_or_path} does not appear to have a file named" - f" {TF2_WEIGHTS_NAME} but there is a file for PyTorch weights. Use `from_pt=True` to" - " load this model from those weights." - ) - else: - raise EnvironmentError( - f"{pretrained_model_name_or_path} does not appear to have a file named {WEIGHTS_NAME}," - f" {TF2_WEIGHTS_NAME} or {TF_WEIGHTS_NAME}" - ) - - except EnvironmentError: - # Raise any environment error raise by `cached_file`. It will have a helpful error message adapted - # to the original exception. 
- raise - except Exception: - # For any other exception, we throw a generic error. - - raise EnvironmentError( - f"Can't load the model for '{pretrained_model_name_or_path}'. If you were trying to load it" - " from 'https://huggingface.co/models', make sure you don't have a local directory with the" - f" same name. Otherwise, make sure '{pretrained_model_name_or_path}' is the correct path to a" - f" directory containing a file named {WEIGHTS_NAME}, {TF2_WEIGHTS_NAME} or {TF_WEIGHTS_NAME}" - ) - if is_local: - logger.info(f"loading weights file {archive_file}") - resolved_archive_file = archive_file - filename = resolved_archive_file.split(os.path.sep)[-1] - else: - logger.info(f"loading weights file {filename} from cache at {resolved_archive_file}") - else: - resolved_archive_file = None - - # We'll need to download and cache each checkpoint shard if the checkpoint is sharded. - if is_sharded: - # resolved_archive_file becomes a list of files that point to the different checkpoint shards in this case. - resolved_archive_file, _ = get_checkpoint_shard_files( - pretrained_model_name_or_path, - resolved_archive_file, - cache_dir=cache_dir, - force_download=force_download, - proxies=proxies, - resume_download=resume_download, - local_files_only=local_files_only, - token=token, - user_agent=user_agent, - revision=revision, - _commit_hash=commit_hash, - ) - - safetensors_from_pt = False - if filename == SAFE_WEIGHTS_NAME: - with safe_open(resolved_archive_file, framework="tf") as f: - safetensors_metadata = f.metadata() - if safetensors_metadata is None or safetensors_metadata.get("format") not in ["pt", "tf", "flax"]: - raise OSError( - f"The safetensors archive passed at {resolved_archive_file} does not contain the valid metadata." - " Make sure you save your model with the `save_pretrained` method." - ) - safetensors_from_pt = safetensors_metadata.get("format") == "pt" - - config.name_or_path = pretrained_model_name_or_path - - # composed models, *e.g.* TFRag, require special treatment when it comes to loading - # pre-trained weights. - if cls._requires_load_weight_prefix and model_kwargs.get("name") is not None: - model_kwargs["load_weight_prefix"] = load_weight_prefix + "/" + model_kwargs.get("name") - - # Instantiate model. - model = cls(config, *model_args, **model_kwargs) - - if from_pt: - from .modeling_tf_pytorch_utils import load_pytorch_checkpoint_in_tf2_model - - # Load from a PyTorch checkpoint - return load_pytorch_checkpoint_in_tf2_model( - model, - resolved_archive_file, - allow_missing_keys=True, - output_loading_info=output_loading_info, - _prefix=load_weight_prefix, - tf_to_pt_weight_rename=tf_to_pt_weight_rename, - ) - - # we might need to extend the variable scope for composite models - if load_weight_prefix is not None: - with tf.compat.v1.variable_scope(load_weight_prefix): - model.build() # build the network with dummy inputs - else: - model.build() # build the network with dummy inputs - - if safetensors_from_pt: - from .modeling_tf_pytorch_utils import load_pytorch_state_dict_in_tf2_model - - with safe_open(resolved_archive_file, framework="tf") as safetensors_archive: - # Load from a PyTorch checkpoint - # We load in TF format here because PT weights often need to be transposed, and this is much - # faster on GPU. Loading as numpy and transposing on CPU adds several seconds to load times. 
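
The `safe_open` metadata check above is how PyTorch-exported safetensors files (whose weights need transposing) are told apart from native TF ones. A small sketch of inspecting that metadata on an existing file; the path is a placeholder:

```python
from safetensors import safe_open

path = "./some_checkpoint/model.safetensors"  # placeholder path to an existing safetensors file

with safe_open(path, framework="tf") as f:
    metadata = f.metadata() or {}
    print("format:", metadata.get("format"))  # "pt", "tf" or "flax" for Transformers checkpoints
    print("tensors:", list(f.keys())[:5])     # first few tensor names stored in the archive
```
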
- return load_pytorch_state_dict_in_tf2_model( - model, - safetensors_archive, - tf_inputs=False, # No need to build the model again - allow_missing_keys=True, - output_loading_info=output_loading_info, - _prefix=load_weight_prefix, - ignore_mismatched_sizes=ignore_mismatched_sizes, - ) - - # 'by_name' allow us to do transfer learning by skipping/adding layers - # see https://github.com/tensorflow/tensorflow/blob/00fad90125b18b80fe054de1055770cfb8fe4ba3/tensorflow/python/keras/engine/network.py#L1339-L1357 - try: - if is_sharded: - for file in resolved_archive_file: - os.path.isfile(file), f"Error retrieving files {file}" - - missing_keys, unexpected_keys, mismatched_keys = load_tf_sharded_weights( - model, - resolved_archive_file, - ignore_mismatched_sizes=ignore_mismatched_sizes, - _prefix=load_weight_prefix, - ) - else: - missing_keys, unexpected_keys, mismatched_keys = load_tf_weights( - model, - resolved_archive_file, - ignore_mismatched_sizes=ignore_mismatched_sizes, - _prefix=load_weight_prefix, - ) - except OSError as e: - try: - with open(resolved_archive_file) as f: - if f.read().startswith("version"): - raise OSError( - "You seem to have cloned a repository without having git-lfs installed. Please install " - "git-lfs and run `git lfs install` followed by `git lfs pull` in the folder " - "you cloned." - ) - else: - raise ValueError from e - except (UnicodeDecodeError, ValueError): - raise OSError( - "Unable to load weights from h5 file. " - "If you tried to load a TF 2.0 model from a PyTorch checkpoint, please set from_pt=True. " - ) - - if cls._keys_to_ignore_on_load_missing is not None: - for pat in cls._keys_to_ignore_on_load_missing: - missing_keys = [k for k in missing_keys if re.search(pat, k) is None] - - if cls._keys_to_ignore_on_load_unexpected is not None: - for pat in cls._keys_to_ignore_on_load_unexpected: - unexpected_keys = [k for k in unexpected_keys if re.search(pat, k) is None] - - if len(unexpected_keys) > 0: - logger.warning( - f"Some layers from the model checkpoint at {pretrained_model_name_or_path} were not used when" - f" initializing {model.__class__.__name__}: {unexpected_keys}\n- This IS expected if you are" - f" initializing {model.__class__.__name__} from the checkpoint of a model trained on another task or" - " with another architecture (e.g. initializing a BertForSequenceClassification model from a" - " BertForPreTraining model).\n- This IS NOT expected if you are initializing" - f" {model.__class__.__name__} from the checkpoint of a model that you expect to be exactly identical" - " (initializing a BertForSequenceClassification model from a BertForSequenceClassification model)." - ) - else: - logger.warning(f"All model checkpoint layers were used when initializing {model.__class__.__name__}.\n") - - if len(missing_keys) > 0: - logger.warning( - f"Some layers of {model.__class__.__name__} were not initialized from the model checkpoint at" - f" {pretrained_model_name_or_path} and are newly initialized: {missing_keys}\nYou should probably" - " TRAIN this model on a down-stream task to be able to use it for predictions and inference." - ) - elif len(mismatched_keys) == 0: - logger.warning( - f"All the layers of {model.__class__.__name__} were initialized from the model checkpoint at" - f" {pretrained_model_name_or_path}.\nIf your task is similar to the task the model of the checkpoint" - f" was trained on, you can already use {model.__class__.__name__} for predictions without further" - " training." 
- ) - if len(mismatched_keys) > 0: - mismatched_warning = "\n".join( - [ - f"- {key}: found shape {shape1} in the checkpoint and {shape2} in the model instantiated" - for key, shape1, shape2 in mismatched_keys - ] - ) - logger.warning( - f"Some weights of {model.__class__.__name__} were not initialized from the model checkpoint at" - f" {pretrained_model_name_or_path} and are newly initialized because the shapes did not" - f" match:\n{mismatched_warning}\nYou should probably TRAIN this model on a down-stream task to be able" - " to use it for predictions and inference." - ) - - # If it is a model with generation capabilities, attempt to load the generation config - if model.can_generate(): - try: - model.generation_config = GenerationConfig.from_pretrained( - pretrained_model_name_or_path, - cache_dir=cache_dir, - force_download=force_download, - resume_download=resume_download, - proxies=proxies, - local_files_only=local_files_only, - token=token, - revision=revision, - subfolder=subfolder, - _from_auto=from_auto_class, - _from_pipeline=from_pipeline, - **kwargs, - ) - except OSError: - logger.info( - "Generation config file not found, using a generation config created from the model config." - ) - pass - - if output_loading_info: - loading_info = { - "missing_keys": missing_keys, - "unexpected_keys": unexpected_keys, - "mismatched_keys": mismatched_keys, - } - - return model, loading_info - - return model - - def push_to_hub( - self, - repo_id: str, - use_temp_dir: Optional[bool] = None, - commit_message: Optional[str] = None, - private: Optional[bool] = None, - max_shard_size: Optional[Union[int, str]] = "10GB", - token: Optional[Union[bool, str]] = None, - # (`use_auth_token` is deprecated: we have to keep it here as we don't have **kwargs) - use_auth_token: Optional[Union[bool, str]] = None, - create_pr: bool = False, - **base_model_card_args, - ) -> str: - """ - Upload the model files to the 🤗 Model Hub while synchronizing a local clone of the repo in `repo_path_or_name`. - - Parameters: - repo_id (`str`): - The name of the repository you want to push your model to. It should contain your organization name - when pushing to a given organization. - use_temp_dir (`bool`, *optional*): - Whether or not to use a temporary directory to store the files saved before they are pushed to the Hub. - Will default to `True` if there is no directory named like `repo_id`, `False` otherwise. - commit_message (`str`, *optional*): - Message to commit while pushing. Will default to `"Upload model"`. - private (`bool`, *optional*): - Whether or not the repository created should be private. - token (`bool` or `str`, *optional*): - The token to use as HTTP bearer authorization for remote files. If `True`, will use the token generated - when running `huggingface-cli login` (stored in `~/.huggingface`). Will default to `True` if `repo_url` - is not specified. - max_shard_size (`int` or `str`, *optional*, defaults to `"10GB"`): - Only applicable for models. The maximum size for a checkpoint before being sharded. Checkpoints shard - will then be each of size lower than this size. If expressed as a string, needs to be digits followed - by a unit (like `"5MB"`). - create_pr (`bool`, *optional*, defaults to `False`): - Whether or not to create a PR with the uploaded files or directly commit. - - Examples: - - ```python - from transformers import TFAutoModel - - model = TFAutoModel.from_pretrained("bert-base-cased") - - # Push the model to your namespace with the name "my-finetuned-bert". 
- model.push_to_hub("my-finetuned-bert") - - # Push the model to an organization with the name "my-finetuned-bert". - model.push_to_hub("huggingface/my-finetuned-bert") - ``` - """ - if use_auth_token is not None: - warnings.warn( - "The `use_auth_token` argument is deprecated and will be removed in v5 of Transformers.", FutureWarning - ) - if token is not None: - raise ValueError( - "`token` and `use_auth_token` are both specified. Please set only the argument `token`." - ) - token = use_auth_token - - if "repo_path_or_name" in base_model_card_args: - warnings.warn( - "The `repo_path_or_name` argument is deprecated and will be removed in v5 of Transformers. Use " - "`repo_id` instead." - ) - repo_id = base_model_card_args.pop("repo_path_or_name") - # Deprecation warning will be sent after for repo_url and organization - repo_url = base_model_card_args.pop("repo_url", None) - organization = base_model_card_args.pop("organization", None) - - if os.path.isdir(repo_id): - working_dir = repo_id - repo_id = repo_id.split(os.path.sep)[-1] - else: - working_dir = repo_id.split("/")[-1] - - repo_id = self._create_repo( - repo_id, private=private, token=token, repo_url=repo_url, organization=organization - ) - - if use_temp_dir is None: - use_temp_dir = not os.path.isdir(working_dir) - - with working_or_temp_dir(working_dir=working_dir, use_temp_dir=use_temp_dir) as work_dir: - files_timestamps = self._get_files_timestamps(work_dir) - - # Save all files. - self.save_pretrained(work_dir, max_shard_size=max_shard_size) - if hasattr(self, "history") and hasattr(self, "create_model_card"): - # This is a Keras model and we might be able to fish out its History and make a model card out of it - base_model_card_args = { - "output_dir": work_dir, - "model_name": Path(repo_id).name, - } - base_model_card_args.update(base_model_card_args) - self.create_model_card(**base_model_card_args) - - self._upload_modified_files( - work_dir, - repo_id, - files_timestamps, - commit_message=commit_message, - token=token, - create_pr=create_pr, - ) - - @classmethod - def register_for_auto_class(cls, auto_class="TFAutoModel"): - """ - Register this class with a given auto class. This should only be used for custom models as the ones in the - library are already mapped with an auto class. - - - - This API is experimental and may have some slight breaking changes in the next releases. - - - - Args: - auto_class (`str` or `type`, *optional*, defaults to `"TFAutoModel"`): - The auto class to register this new model with. - """ - if not isinstance(auto_class, str): - auto_class = auto_class.__name__ - - import transformers.models.auto as auto_module - - if not hasattr(auto_module, auto_class): - raise ValueError(f"{auto_class} is not a valid auto class.") - - cls._auto_class = auto_class - - -class TFConv1D(tf.keras.layers.Layer): - """ - 1D-convolutional layer as defined by Radford et al. for OpenAI GPT (and also used in GPT-2). - - Basically works like a linear layer but the weights are transposed. - - Args: - nf (`int`): - The number of output features. - nx (`int`): - The number of input features. - initializer_range (`float`, *optional*, defaults to 0.02): - The standard deviation to use to initialize the weights. - kwargs (`Dict[str, Any]`, *optional*): - Additional keyword arguments passed along to the `__init__` of `tf.keras.layers.Layer`. 
- """ - - def __init__(self, nf, nx, initializer_range=0.02, **kwargs): - super().__init__(**kwargs) - self.nf = nf - self.nx = nx - self.initializer_range = initializer_range - - def build(self, input_shape): - self.weight = self.add_weight( - "weight", shape=[self.nx, self.nf], initializer=get_initializer(self.initializer_range) - ) - self.bias = self.add_weight("bias", shape=[1, self.nf], initializer=tf.zeros_initializer()) - - def call(self, x): - bz, sl = shape_list(x)[:2] - - x = tf.reshape(x, [-1, self.nx]) - x = tf.matmul(x, self.weight) + self.bias - - x = tf.reshape(x, [bz, sl, self.nf]) - - return x - - -class TFSharedEmbeddings(tf.keras.layers.Layer): - r""" - Construct shared token embeddings. - - The weights of the embedding layer is usually shared with the weights of the linear decoder when doing language - modeling. - - Args: - vocab_size (`int`): - The size of the vocabulary, e.g., the number of unique tokens. - hidden_size (`int`): - The size of the embedding vectors. - initializer_range (`float`, *optional*): - The standard deviation to use when initializing the weights. If no value is provided, it will default to - \\(1/\sqrt{hidden\_size}\\). - kwargs (`Dict[str, Any]`, *optional*): - Additional keyword arguments passed along to the `__init__` of `tf.keras.layers.Layer`. - """ - # TODO (joao): flagged for delection due to embeddings refactor - - def __init__(self, vocab_size: int, hidden_size: int, initializer_range: Optional[float] = None, **kwargs): - super().__init__(**kwargs) - self.vocab_size = vocab_size - self.hidden_size = hidden_size - self.initializer_range = hidden_size**-0.5 if initializer_range is None else initializer_range - warnings.warn( - "`TFSharedEmbeddings` is scheduled for deletion in v4.32, use `tf.keras.layers.Embedding` instead.", - DeprecationWarning, - ) - - def build(self, input_shape): - """ - Build shared token embedding layer Shared weights logic adapted from - https://github.com/tensorflow/models/blob/a009f4fb9d2fc4949e32192a944688925ef78659/official/transformer/v2/embedding_layer.py#L24 - """ - self.weight = self.add_weight( - "weight", shape=[self.vocab_size, self.hidden_size], initializer=get_initializer(self.initializer_range) - ) - super().build(input_shape) - - def get_config(self): - config = { - "vocab_size": self.vocab_size, - "hidden_size": self.hidden_size, - "initializer_range": self.initializer_range, - } - base_config = super().get_config() - - return dict(list(base_config.items()) + list(config.items())) - - def call(self, inputs: tf.Tensor, mode: str = "embedding") -> tf.Tensor: - """ - Get token embeddings of inputs or decode final hidden state. - - Args: - inputs (`tf.Tensor`): - In embedding mode, should be an int64 tensor with shape `[batch_size, length]`. - - In linear mode, should be a float tensor with shape `[batch_size, length, hidden_size]`. - mode (`str`, defaults to `"embedding"`): - A valid value is either `"embedding"` or `"linear"`, the first one indicates that the layer should be - used as an embedding layer, the second one that the layer should be used as a linear decoder. - - Returns: - `tf.Tensor`: In embedding mode, the output is a float32 embedding tensor, with shape `[batch_size, length, - embedding_size]`. - - In linear mode, the output is a float32 with shape `[batch_size, length, vocab_size]`. - - Raises: - ValueError: if `mode` is not valid. 
- - Shared weights logic is adapted from - [here](https://github.com/tensorflow/models/blob/a009f4fb9d2fc4949e32192a944688925ef78659/official/transformer/v2/embedding_layer.py#L24). - """ - if mode == "embedding": - return self._embedding(inputs) - elif mode == "linear": - return self._linear(inputs) - else: - raise ValueError(f"mode {mode} is not valid.") - - def _embedding(self, input_ids): - """Applies embedding based on inputs tensor.""" - return tf.gather(self.weight, input_ids) - - def _linear(self, inputs): - """ - Computes logits by running inputs through a linear layer. - - Args: - inputs: A float32 tensor with shape [..., hidden_size] - - Returns: - float32 tensor with shape [..., vocab_size]. - """ - first_dims = shape_list(inputs)[:-1] - x = tf.reshape(inputs, [-1, self.hidden_size]) - logits = tf.matmul(x, self.weight, transpose_b=True) - - return tf.reshape(logits, first_dims + [self.vocab_size]) - - -class TFSequenceSummary(tf.keras.layers.Layer): - """ - Compute a single vector summary of a sequence hidden states. - - Args: - config ([`PretrainedConfig`]): - The config used by the model. Relevant arguments in the config class of the model are (refer to the actual - config class of your model for the default values it uses): - - - **summary_type** (`str`) -- The method to use to make this summary. Accepted values are: - - - `"last"` -- Take the last token hidden state (like XLNet) - - `"first"` -- Take the first token hidden state (like Bert) - - `"mean"` -- Take the mean of all tokens hidden states - - `"cls_index"` -- Supply a Tensor of classification token position (GPT/GPT-2) - - `"attn"` -- Not implemented now, use multi-head attention - - - **summary_use_proj** (`bool`) -- Add a projection after the vector extraction. - - **summary_proj_to_labels** (`bool`) -- If `True`, the projection outputs to `config.num_labels` classes - (otherwise to `config.hidden_size`). - - **summary_activation** (`Optional[str]`) -- Set to `"tanh"` to add a tanh activation to the output, - another string or `None` will add no activation. - - **summary_first_dropout** (`float`) -- Optional dropout probability before the projection and activation. - - **summary_last_dropout** (`float`)-- Optional dropout probability after the projection and activation. - - initializer_range (`float`, defaults to 0.02): The standard deviation to use to initialize the weights. - kwargs (`Dict[str, Any]`, *optional*): - Additional keyword arguments passed along to the `__init__` of `tf.keras.layers.Layer`. - """ - - def __init__(self, config: PretrainedConfig, initializer_range: float = 0.02, **kwargs): - super().__init__(**kwargs) - - self.summary_type = config.summary_type if hasattr(config, "summary_use_proj") else "last" - if self.summary_type == "attn": - # We should use a standard multi-head attention module with absolute positional embedding for that. - # Cf. 
https://github.com/zihangdai/xlnet/blob/master/modeling.py#L253-L276 - # We can probably just use the multi-head attention module of PyTorch >=1.1.0 - raise NotImplementedError - - self.has_summary = hasattr(config, "summary_use_proj") and config.summary_use_proj - if self.has_summary: - if hasattr(config, "summary_proj_to_labels") and config.summary_proj_to_labels and config.num_labels > 0: - num_classes = config.num_labels - else: - num_classes = config.hidden_size - self.summary = tf.keras.layers.Dense( - num_classes, kernel_initializer=get_initializer(initializer_range), name="summary" - ) - - self.has_activation = False - activation_string = getattr(config, "summary_activation", None) - if activation_string is not None: - self.has_activation = True - self.activation = get_tf_activation(activation_string) - - self.has_first_dropout = hasattr(config, "summary_first_dropout") and config.summary_first_dropout > 0 - if self.has_first_dropout: - self.first_dropout = tf.keras.layers.Dropout(config.summary_first_dropout) - - self.has_last_dropout = hasattr(config, "summary_last_dropout") and config.summary_last_dropout > 0 - if self.has_last_dropout: - self.last_dropout = tf.keras.layers.Dropout(config.summary_last_dropout) - - def call(self, inputs, cls_index=None, training=False): - if not isinstance(inputs, (dict, tuple, list)): - hidden_states = inputs - elif isinstance(inputs, (tuple, list)): - hidden_states = inputs[0] - cls_index = inputs[1] if len(inputs) > 1 else None - assert len(inputs) <= 2, "Too many inputs." - else: - hidden_states = inputs.get("hidden_states") - cls_index = inputs.get("cls_index", None) - - if self.summary_type == "last": - output = hidden_states[:, -1] - elif self.summary_type == "first": - output = hidden_states[:, 0] - elif self.summary_type == "mean": - output = tf.reduce_mean(hidden_states, axis=1) - elif self.summary_type == "cls_index": - hidden_shape = shape_list(hidden_states) # e.g. [batch, num choices, seq length, hidden dims] - if cls_index is None: - cls_index = tf.fill( - hidden_shape[:-2], hidden_shape[-2] - 1 - ) # A tensor full of shape [batch] or [batch, num choices] full of sequence length - cls_shape = shape_list(cls_index) - if len(cls_shape) <= len(hidden_shape) - 2: - cls_index = tf.expand_dims(cls_index, axis=-1) - # else: - # cls_index = cls_index[..., tf.newaxis] - # cls_index = cls_index.expand((-1,) * (cls_index.dim()-1) + (hidden_states.size(-1),)) - # shape of cls_index: (bsz, XX, 1, hidden_size) where XX are optional leading dim of hidden_states - output = tf.gather(hidden_states, cls_index, batch_dims=len(hidden_shape) - 2) - output = tf.squeeze( - output, axis=len(hidden_shape) - 2 - ) # shape of output: (batch, num choices, hidden_size) - elif self.summary_type == "attn": - raise NotImplementedError - - if self.has_first_dropout: - output = self.first_dropout(output, training=training) - - if self.has_summary: - output = self.summary(output) - - if self.has_activation: - output = self.activation(output) - - if self.has_last_dropout: - output = self.last_dropout(output, training=training) - - return output - - -def get_initializer(initializer_range: float = 0.02) -> tf.keras.initializers.TruncatedNormal: - """ - Creates a `tf.keras.initializers.TruncatedNormal` with the given range. - - Args: - initializer_range (*float*, defaults to 0.02): Standard deviation of the initializer range. - - Returns: - `tf.keras.initializers.TruncatedNormal`: The truncated normal initializer. 
- """ - return tf.keras.initializers.TruncatedNormal(stddev=initializer_range) diff --git a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/deprecated/bort/__init__.py b/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/deprecated/bort/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/mobilevitv2/convert_mlcvnets_to_pytorch.py b/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/mobilevitv2/convert_mlcvnets_to_pytorch.py deleted file mode 100644 index 2e2d31295d7c58fa7c75cff883cfc0815ffa6cb5..0000000000000000000000000000000000000000 --- a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/mobilevitv2/convert_mlcvnets_to_pytorch.py +++ /dev/null @@ -1,326 +0,0 @@ -# coding=utf-8 -# Copyright 2023 The HuggingFace Inc. team. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -"""Convert MobileViTV2 checkpoints from the ml-cvnets library.""" - - -import argparse -import collections -import json -from pathlib import Path - -import requests -import torch -import yaml -from huggingface_hub import hf_hub_download -from PIL import Image - -from transformers import ( - MobileViTImageProcessor, - MobileViTV2Config, - MobileViTV2ForImageClassification, - MobileViTV2ForSemanticSegmentation, -) -from transformers.utils import logging - - -logging.set_verbosity_info() -logger = logging.get_logger(__name__) - - -def load_orig_config_file(orig_cfg_file): - print("Loading config file...") - - def flatten_yaml_as_dict(d, parent_key="", sep="."): - items = [] - for k, v in d.items(): - new_key = parent_key + sep + k if parent_key else k - if isinstance(v, collections.abc.MutableMapping): - items.extend(flatten_yaml_as_dict(v, new_key, sep=sep).items()) - else: - items.append((new_key, v)) - return dict(items) - - config = argparse.Namespace() - with open(orig_cfg_file, "r") as yaml_file: - try: - cfg = yaml.load(yaml_file, Loader=yaml.FullLoader) - - flat_cfg = flatten_yaml_as_dict(cfg) - for k, v in flat_cfg.items(): - setattr(config, k, v) - except yaml.YAMLError as exc: - logger.error("Error while loading config file: {}. 
Error message: {}".format(orig_cfg_file, str(exc))) - return config - - -def get_mobilevitv2_config(task_name, orig_cfg_file): - config = MobileViTV2Config() - - is_segmentation_model = False - - # dataset - if task_name.startswith("imagenet1k_"): - config.num_labels = 1000 - if int(task_name.strip().split("_")[-1]) == 384: - config.image_size = 384 - else: - config.image_size = 256 - filename = "imagenet-1k-id2label.json" - elif task_name.startswith("imagenet21k_to_1k_"): - config.num_labels = 21000 - if int(task_name.strip().split("_")[-1]) == 384: - config.image_size = 384 - else: - config.image_size = 256 - filename = "imagenet-22k-id2label.json" - elif task_name.startswith("ade20k_"): - config.num_labels = 151 - config.image_size = 512 - filename = "ade20k-id2label.json" - is_segmentation_model = True - elif task_name.startswith("voc_"): - config.num_labels = 21 - config.image_size = 512 - filename = "pascal-voc-id2label.json" - is_segmentation_model = True - - # orig_config - orig_config = load_orig_config_file(orig_cfg_file) - assert getattr(orig_config, "model.classification.name", -1) == "mobilevit_v2", "Invalid model" - config.width_multiplier = getattr(orig_config, "model.classification.mitv2.width_multiplier", 1.0) - assert ( - getattr(orig_config, "model.classification.mitv2.attn_norm_layer", -1) == "layer_norm_2d" - ), "Norm layers other than layer_norm_2d is not supported" - config.hidden_act = getattr(orig_config, "model.classification.activation.name", "swish") - # config.image_size == getattr(orig_config, 'sampler.bs.crop_size_width', 256) - - if is_segmentation_model: - config.output_stride = getattr(orig_config, "model.segmentation.output_stride", 16) - if "_deeplabv3" in task_name: - config.atrous_rates = getattr(orig_config, "model.segmentation.deeplabv3.aspp_rates", [12, 24, 36]) - config.aspp_out_channels = getattr(orig_config, "model.segmentation.deeplabv3.aspp_out_channels", 512) - config.aspp_dropout_prob = getattr(orig_config, "model.segmentation.deeplabv3.aspp_dropout", 0.1) - - # id2label - repo_id = "huggingface/label-files" - id2label = json.load(open(hf_hub_download(repo_id, filename, repo_type="dataset"), "r")) - id2label = {int(k): v for k, v in id2label.items()} - config.id2label = id2label - config.label2id = {v: k for k, v in id2label.items()} - - return config - - -def rename_key(dct, old, new): - val = dct.pop(old) - dct[new] = val - - -def create_rename_keys(state_dict, base_model=False): - if base_model: - model_prefix = "" - else: - model_prefix = "mobilevitv2." - - rename_keys = [] - for k in state_dict.keys(): - if k[:8] == "encoder.": - k_new = k[8:] - else: - k_new = k - - if ".block." in k: - k_new = k_new.replace(".block.", ".") - if ".conv." in k: - k_new = k_new.replace(".conv.", ".convolution.") - if ".norm." in k: - k_new = k_new.replace(".norm.", ".normalization.") - - if "conv_1." in k: - k_new = k_new.replace("conv_1.", f"{model_prefix}conv_stem.") - for i in [1, 2]: - if f"layer_{i}." in k: - k_new = k_new.replace(f"layer_{i}.", f"{model_prefix}encoder.layer.{i-1}.layer.") - if ".exp_1x1." in k: - k_new = k_new.replace(".exp_1x1.", ".expand_1x1.") - if ".red_1x1." in k: - k_new = k_new.replace(".red_1x1.", ".reduce_1x1.") - - for i in [3, 4, 5]: - if f"layer_{i}.0." in k: - k_new = k_new.replace(f"layer_{i}.0.", f"{model_prefix}encoder.layer.{i-1}.downsampling_layer.") - if f"layer_{i}.1.local_rep.0." 
in k: - k_new = k_new.replace(f"layer_{i}.1.local_rep.0.", f"{model_prefix}encoder.layer.{i-1}.conv_kxk.") - if f"layer_{i}.1.local_rep.1." in k: - k_new = k_new.replace(f"layer_{i}.1.local_rep.1.", f"{model_prefix}encoder.layer.{i-1}.conv_1x1.") - - for i in [3, 4, 5]: - if i == 3: - j_in = [0, 1] - elif i == 4: - j_in = [0, 1, 2, 3] - elif i == 5: - j_in = [0, 1, 2] - - for j in j_in: - if f"layer_{i}.1.global_rep.{j}." in k: - k_new = k_new.replace( - f"layer_{i}.1.global_rep.{j}.", f"{model_prefix}encoder.layer.{i-1}.transformer.layer.{j}." - ) - if f"layer_{i}.1.global_rep.{j+1}." in k: - k_new = k_new.replace( - f"layer_{i}.1.global_rep.{j+1}.", f"{model_prefix}encoder.layer.{i-1}.layernorm." - ) - - if f"layer_{i}.1.conv_proj." in k: - k_new = k_new.replace(f"layer_{i}.1.conv_proj.", f"{model_prefix}encoder.layer.{i-1}.conv_projection.") - - if "pre_norm_attn.0." in k: - k_new = k_new.replace("pre_norm_attn.0.", "layernorm_before.") - if "pre_norm_attn.1." in k: - k_new = k_new.replace("pre_norm_attn.1.", "attention.") - if "pre_norm_ffn.0." in k: - k_new = k_new.replace("pre_norm_ffn.0.", "layernorm_after.") - if "pre_norm_ffn.1." in k: - k_new = k_new.replace("pre_norm_ffn.1.", "ffn.conv1.") - if "pre_norm_ffn.3." in k: - k_new = k_new.replace("pre_norm_ffn.3.", "ffn.conv2.") - - if "classifier.1." in k: - k_new = k_new.replace("classifier.1.", "classifier.") - - if "seg_head." in k: - k_new = k_new.replace("seg_head.", "segmentation_head.") - if ".aspp_layer." in k: - k_new = k_new.replace(".aspp_layer.", ".") - if ".aspp_pool." in k: - k_new = k_new.replace(".aspp_pool.", ".") - - rename_keys.append((k, k_new)) - return rename_keys - - -def remove_unused_keys(state_dict): - """remove unused keys (e.g.: seg_head.aux_head)""" - keys_to_ignore = [] - for k in state_dict.keys(): - if k.startswith("seg_head.aux_head."): - keys_to_ignore.append(k) - for k in keys_to_ignore: - state_dict.pop(k, None) - - -# We will verify our results on an image of cute cats -def prepare_img(): - url = "http://images.cocodataset.org/val2017/000000039769.jpg" - # url = "https://cdn.britannica.com/86/141086-050-9D7C75EE/Gulfstream-G450-business-jet-passengers.jpg" - im = Image.open(requests.get(url, stream=True).raw) - return im - - -@torch.no_grad() -def convert_mobilevitv2_checkpoint(task_name, checkpoint_path, orig_config_path, pytorch_dump_folder_path): - """ - Copy/paste/tweak model's weights to our MobileViTV2 structure. 
- """ - config = get_mobilevitv2_config(task_name, orig_config_path) - - # load original state_dict - checkpoint = torch.load(checkpoint_path, map_location="cpu") - - # load huggingface model - if task_name.startswith("ade20k_") or task_name.startswith("voc_"): - model = MobileViTV2ForSemanticSegmentation(config).eval() - base_model = False - else: - model = MobileViTV2ForImageClassification(config).eval() - base_model = False - - # remove and rename some keys of load the original model - state_dict = checkpoint - remove_unused_keys(state_dict) - rename_keys = create_rename_keys(state_dict, base_model=base_model) - for rename_key_src, rename_key_dest in rename_keys: - rename_key(state_dict, rename_key_src, rename_key_dest) - - # load modified state_dict - model.load_state_dict(state_dict) - - # Check outputs on an image, prepared by MobileViTImageProcessor - image_processor = MobileViTImageProcessor(crop_size=config.image_size, size=config.image_size + 32) - encoding = image_processor(images=prepare_img(), return_tensors="pt") - outputs = model(**encoding) - - # verify classification model - if task_name.startswith("imagenet"): - logits = outputs.logits - predicted_class_idx = logits.argmax(-1).item() - print("Predicted class:", model.config.id2label[predicted_class_idx]) - if task_name.startswith("imagenet1k_256") and config.width_multiplier == 1.0: - # expected_logits for base variant - expected_logits = torch.tensor([-1.6336e00, -7.3204e-02, -5.1883e-01]) - assert torch.allclose(logits[0, :3], expected_logits, atol=1e-4) - - Path(pytorch_dump_folder_path).mkdir(exist_ok=True) - print(f"Saving model {task_name} to {pytorch_dump_folder_path}") - model.save_pretrained(pytorch_dump_folder_path) - print(f"Saving image processor to {pytorch_dump_folder_path}") - image_processor.save_pretrained(pytorch_dump_folder_path) - - -if __name__ == "__main__": - parser = argparse.ArgumentParser() - # Required parameters - parser.add_argument( - "--task", - default="imagenet1k_256", - type=str, - help=( - "Name of the task for which the MobileViTV2 model you'd like to convert is trained on . " - """ - Classification (ImageNet-1k) - - MobileViTV2 (256x256) : imagenet1k_256 - - MobileViTV2 (Trained on 256x256 and Finetuned on 384x384) : imagenet1k_384 - - MobileViTV2 (Trained on ImageNet-21k and Finetuned on ImageNet-1k 256x256) : - imagenet21k_to_1k_256 - - MobileViTV2 (Trained on ImageNet-21k, Finetuned on ImageNet-1k 256x256, and Finetuned on - ImageNet-1k 384x384) : imagenet21k_to_1k_384 - Segmentation - - ADE20K Dataset : ade20k_deeplabv3 - - Pascal VOC 2012 Dataset: voc_deeplabv3 - """ - ), - choices=[ - "imagenet1k_256", - "imagenet1k_384", - "imagenet21k_to_1k_256", - "imagenet21k_to_1k_384", - "ade20k_deeplabv3", - "voc_deeplabv3", - ], - ) - - parser.add_argument( - "--orig_checkpoint_path", required=True, type=str, help="Path to the original state dict (.pt file)." - ) - parser.add_argument("--orig_config_path", required=True, type=str, help="Path to the original config file.") - parser.add_argument( - "--pytorch_dump_folder_path", required=True, type=str, help="Path to the output PyTorch model directory." 
- ) - - args = parser.parse_args() - convert_mobilevitv2_checkpoint( - args.task, args.orig_checkpoint_path, args.orig_config_path, args.pytorch_dump_folder_path - ) diff --git a/spaces/ykilcher/apes/metrics/perceptual_path_length.py b/spaces/ykilcher/apes/metrics/perceptual_path_length.py deleted file mode 100644 index d070f45a04efed7e9492fddb85078be306753282..0000000000000000000000000000000000000000 --- a/spaces/ykilcher/apes/metrics/perceptual_path_length.py +++ /dev/null @@ -1,131 +0,0 @@ -# Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved. -# -# NVIDIA CORPORATION and its licensors retain all intellectual property -# and proprietary rights in and to this software, related documentation -# and any modifications thereto. Any use, reproduction, disclosure or -# distribution of this software and related documentation without an express -# license agreement from NVIDIA CORPORATION is strictly prohibited. - -"""Perceptual Path Length (PPL) from the paper "A Style-Based Generator -Architecture for Generative Adversarial Networks". Matches the original -implementation by Karras et al. at -https://github.com/NVlabs/stylegan/blob/master/metrics/perceptual_path_length.py""" - -import copy -import numpy as np -import torch -import dnnlib -from . import metric_utils - -#---------------------------------------------------------------------------- - -# Spherical interpolation of a batch of vectors. -def slerp(a, b, t): - a = a / a.norm(dim=-1, keepdim=True) - b = b / b.norm(dim=-1, keepdim=True) - d = (a * b).sum(dim=-1, keepdim=True) - p = t * torch.acos(d) - c = b - d * a - c = c / c.norm(dim=-1, keepdim=True) - d = a * torch.cos(p) + c * torch.sin(p) - d = d / d.norm(dim=-1, keepdim=True) - return d - -#---------------------------------------------------------------------------- - -class PPLSampler(torch.nn.Module): - def __init__(self, G, G_kwargs, epsilon, space, sampling, crop, vgg16): - assert space in ['z', 'w'] - assert sampling in ['full', 'end'] - super().__init__() - self.G = copy.deepcopy(G) - self.G_kwargs = G_kwargs - self.epsilon = epsilon - self.space = space - self.sampling = sampling - self.crop = crop - self.vgg16 = copy.deepcopy(vgg16) - - def forward(self, c): - # Generate random latents and interpolation t-values. - t = torch.rand([c.shape[0]], device=c.device) * (1 if self.sampling == 'full' else 0) - z0, z1 = torch.randn([c.shape[0] * 2, self.G.z_dim], device=c.device).chunk(2) - - # Interpolate in W or Z. - if self.space == 'w': - w0, w1 = self.G.mapping(z=torch.cat([z0,z1]), c=torch.cat([c,c])).chunk(2) - wt0 = w0.lerp(w1, t.unsqueeze(1).unsqueeze(2)) - wt1 = w0.lerp(w1, t.unsqueeze(1).unsqueeze(2) + self.epsilon) - else: # space == 'z' - zt0 = slerp(z0, z1, t.unsqueeze(1)) - zt1 = slerp(z0, z1, t.unsqueeze(1) + self.epsilon) - wt0, wt1 = self.G.mapping(z=torch.cat([zt0,zt1]), c=torch.cat([c,c])).chunk(2) - - # Randomize noise buffers. - for name, buf in self.G.named_buffers(): - if name.endswith('.noise_const'): - buf.copy_(torch.randn_like(buf)) - - # Generate images. - img = self.G.synthesis(ws=torch.cat([wt0,wt1]), noise_mode='const', force_fp32=True, **self.G_kwargs) - - # Center crop. - if self.crop: - assert img.shape[2] == img.shape[3] - c = img.shape[2] // 8 - img = img[:, :, c*3 : c*7, c*2 : c*6] - - # Downsample to 256x256. - factor = self.G.img_resolution // 256 - if factor > 1: - img = img.reshape([-1, img.shape[1], img.shape[2] // factor, factor, img.shape[3] // factor, factor]).mean([3, 5]) - - # Scale dynamic range from [-1,1] to [0,255]. 
- img = (img + 1) * (255 / 2) - if self.G.img_channels == 1: - img = img.repeat([1, 3, 1, 1]) - - # Evaluate differential LPIPS. - lpips_t0, lpips_t1 = self.vgg16(img, resize_images=False, return_lpips=True).chunk(2) - dist = (lpips_t0 - lpips_t1).square().sum(1) / self.epsilon ** 2 - return dist - -#---------------------------------------------------------------------------- - -def compute_ppl(opts, num_samples, epsilon, space, sampling, crop, batch_size, jit=False): - dataset = dnnlib.util.construct_class_by_name(**opts.dataset_kwargs) - vgg16_url = 'https://nvlabs-fi-cdn.nvidia.com/stylegan2-ada-pytorch/pretrained/metrics/vgg16.pt' - vgg16 = metric_utils.get_feature_detector(vgg16_url, num_gpus=opts.num_gpus, rank=opts.rank, verbose=opts.progress.verbose) - - # Setup sampler. - sampler = PPLSampler(G=opts.G, G_kwargs=opts.G_kwargs, epsilon=epsilon, space=space, sampling=sampling, crop=crop, vgg16=vgg16) - sampler.eval().requires_grad_(False).to(opts.device) - if jit: - c = torch.zeros([batch_size, opts.G.c_dim], device=opts.device) - sampler = torch.jit.trace(sampler, [c], check_trace=False) - - # Sampling loop. - dist = [] - progress = opts.progress.sub(tag='ppl sampling', num_items=num_samples) - for batch_start in range(0, num_samples, batch_size * opts.num_gpus): - progress.update(batch_start) - c = [dataset.get_label(np.random.randint(len(dataset))) for _i in range(batch_size)] - c = torch.from_numpy(np.stack(c)).pin_memory().to(opts.device) - x = sampler(c) - for src in range(opts.num_gpus): - y = x.clone() - if opts.num_gpus > 1: - torch.distributed.broadcast(y, src=src) - dist.append(y) - progress.update(num_samples) - - # Compute PPL. - if opts.rank != 0: - return float('nan') - dist = torch.cat(dist)[:num_samples].cpu().numpy() - lo = np.percentile(dist, 1, interpolation='lower') - hi = np.percentile(dist, 99, interpolation='higher') - ppl = np.extract(np.logical_and(dist >= lo, dist <= hi), dist).mean() - return float(ppl) - -#---------------------------------------------------------------------------- diff --git a/spaces/ynhe/AskAnything/models/grit_src/third_party/CenterNet2/configs/COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_1x.py b/spaces/ynhe/AskAnything/models/grit_src/third_party/CenterNet2/configs/COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_1x.py deleted file mode 100644 index 40844ddeb8d47ff58a6af49ab35bad84e14f5721..0000000000000000000000000000000000000000 --- a/spaces/ynhe/AskAnything/models/grit_src/third_party/CenterNet2/configs/COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_1x.py +++ /dev/null @@ -1,8 +0,0 @@ -from ..common.optim import SGD as optimizer -from ..common.coco_schedule import lr_multiplier_1x as lr_multiplier -from ..common.data.coco import dataloader -from ..common.models.mask_rcnn_fpn import model -from ..common.train import train - -model.backbone.bottom_up.freeze_at = 2 -train.init_checkpoint = "detectron2://ImageNetPretrained/MSRA/R-50.pkl" diff --git a/spaces/zhang-wei-jian/docker/node_modules/nodemon/lib/config/index.js b/spaces/zhang-wei-jian/docker/node_modules/nodemon/lib/config/index.js deleted file mode 100644 index c78c435cdda56d7f231dbaea7ca13b32459f58d0..0000000000000000000000000000000000000000 --- a/spaces/zhang-wei-jian/docker/node_modules/nodemon/lib/config/index.js +++ /dev/null @@ -1,93 +0,0 @@ -/** - * Manages the internal config of nodemon, checking for the state of support - * with fs.watch, how nodemon can watch files (using find or fs methods). - * - * This is *not* the user's config. 
- */ -var debug = require('debug')('nodemon'); -var load = require('./load'); -var rules = require('../rules'); -var utils = require('../utils'); -var pinVersion = require('../version').pin; -var command = require('./command'); -var rulesToMonitor = require('../monitor/match').rulesToMonitor; -var bus = utils.bus; - -function reset() { - rules.reset(); - - config.dirs = []; - config.options = { ignore: [], watch: [], monitor: [] }; - config.lastStarted = 0; - config.loaded = []; -} - -var config = { - run: false, - system: { - cwd: process.cwd(), - }, - required: false, - dirs: [], - timeout: 1000, - options: {}, -}; - -/** - * Take user defined settings, then detect the local machine capability, then - * look for local and global nodemon.json files and merge together the final - * settings with the config for nodemon. - * - * @param {Object} settings user defined settings for nodemon (typically on - * the cli) - * @param {Function} ready callback fired once the config is loaded - */ -config.load = function (settings, ready) { - reset(); - var config = this; - load(settings, config.options, config, function (options) { - config.options = options; - - if (options.watch.length === 0) { - // this is to catch when the watch is left blank - options.watch.push('*.*'); - } - - if (options['watch_interval']) { // jshint ignore:line - options.watchInterval = options['watch_interval']; // jshint ignore:line - } - - config.watchInterval = options.watchInterval || null; - if (options.signal) { - config.signal = options.signal; - } - - var cmd = command(config.options); - config.command = { - raw: cmd, - string: utils.stringify(cmd.executable, cmd.args), - }; - - // now run automatic checks on system adding to the config object - options.monitor = rulesToMonitor(options.watch, options.ignore, config); - - var cwd = process.cwd(); - debug('config: dirs', config.dirs); - if (config.dirs.length === 0) { - config.dirs.unshift(cwd); - } - - bus.emit('config:update', config); - pinVersion().then(function () { - ready(config); - }).catch(e => { - // this doesn't help testing, but does give exposure on syntax errors - console.error(e.stack); - setTimeout(() => { throw e; }, 0); - }); - }); -}; - -config.reset = reset; - -module.exports = config; diff --git a/spaces/zhang-wei-jian/docker/node_modules/nodemon/lib/monitor/run.js b/spaces/zhang-wei-jian/docker/node_modules/nodemon/lib/monitor/run.js deleted file mode 100644 index cbd905c04487a9df952cee9c2115eaf22a70f8b6..0000000000000000000000000000000000000000 --- a/spaces/zhang-wei-jian/docker/node_modules/nodemon/lib/monitor/run.js +++ /dev/null @@ -1,546 +0,0 @@ -var debug = require('debug')('nodemon:run'); -const statSync = require('fs').statSync; -var utils = require('../utils'); -var bus = utils.bus; -var childProcess = require('child_process'); -var spawn = childProcess.spawn; -var exec = childProcess.exec; -var execSync = childProcess.execSync; -var fork = childProcess.fork; -var watch = require('./watch').watch; -var config = require('../config'); -var child = null; // the actual child process we spawn -var killedAfterChange = false; -var noop = () => {}; -var restart = null; -var psTree = require('pstree.remy'); -var path = require('path'); -var signals = require('./signals'); -const osRelease = parseInt(require('os').release().split('.')[0], 10); - -function run(options) { - var cmd = config.command.raw; - // moved up - // we need restart function below in the global scope for run.kill - /*jshint validthis:true*/ - restart = run.bind(this, options); - 
run.restart = restart; - - // binding options with instance of run - // so that we can use it in run.kill - run.options = options; - - var runCmd = !options.runOnChangeOnly || config.lastStarted !== 0; - if (runCmd) { - utils.log.status('starting `' + config.command.string + '`'); - } else { - // should just watch file if command is not to be run - // had another alternate approach - // to stop process being forked/spawned in the below code - // but this approach does early exit and makes code cleaner - debug('start watch on: %s', config.options.watch); - if (config.options.watch !== false) { - watch(); - return; - } - } - - config.lastStarted = Date.now(); - - var stdio = ['pipe', 'pipe', 'pipe']; - - if (config.options.stdout) { - stdio = ['pipe', process.stdout, process.stderr]; - } - - if (config.options.stdin === false) { - stdio = [process.stdin, process.stdout, process.stderr]; - } - - var sh = 'sh'; - var shFlag = '-c'; - - const binPath = process.cwd() + '/node_modules/.bin'; - - const spawnOptions = { - env: Object.assign({}, process.env, options.execOptions.env, { - PATH: binPath + path.delimiter + process.env.PATH, - }), - stdio: stdio, - }; - - var executable = cmd.executable; - - if (utils.isWindows) { - // if the exec includes a forward slash, reverse it for windows compat - // but *only* apply to the first command, and none of the arguments. - // ref #1251 and #1236 - if (executable.indexOf('/') !== -1) { - executable = executable - .split(' ') - .map((e, i) => { - if (i === 0) { - return path.normalize(e); - } - return e; - }) - .join(' '); - } - // taken from npm's cli: https://git.io/vNFD4 - sh = process.env.comspec || 'cmd'; - shFlag = '/d /s /c'; - spawnOptions.windowsVerbatimArguments = true; - spawnOptions.windowsHide = true; - } - - var args = runCmd ? utils.stringify(executable, cmd.args) : ':'; - var spawnArgs = [sh, [shFlag, args], spawnOptions]; - - const firstArg = cmd.args[0] || ''; - - var inBinPath = false; - try { - inBinPath = statSync(`${binPath}/${executable}`).isFile(); - } catch (e) {} - - // hasStdio allows us to correctly handle stdin piping - // see: https://git.io/vNtX3 - const hasStdio = utils.satisfies('>= 6.4.0 || < 5'); - - // forking helps with sub-process handling and tends to clean up better - // than spawning, but it should only be used under specific conditions - const shouldFork = - !config.options.spawn && - !inBinPath && - !(firstArg.indexOf('-') === 0) && // don't fork if there's a node exec arg - firstArg !== 'inspect' && // don't fork it's `inspect` debugger - executable === 'node' && // only fork if node - utils.version.major > 4; // only fork if node version > 4 - - if (shouldFork) { - // this assumes the first argument is the script and slices it out, since - // we're forking - var forkArgs = cmd.args.slice(1); - var env = utils.merge(options.execOptions.env, process.env); - stdio.push('ipc'); - const forkOptions = { - env: env, - stdio: stdio, - silent: !hasStdio, - }; - if (utils.isWindows) { - forkOptions.windowsHide = true; - } - child = fork(options.execOptions.script, forkArgs, forkOptions); - utils.log.detail('forking'); - debug('fork', sh, shFlag, args); - } else { - utils.log.detail('spawning'); - child = spawn.apply(null, spawnArgs); - debug('spawn', sh, shFlag, args); - } - - if (config.required) { - var emit = { - stdout: function (data) { - bus.emit('stdout', data); - }, - stderr: function (data) { - bus.emit('stderr', data); - }, - }; - - // now work out what to bind to... 
- if (config.options.stdout) { - child.on('stdout', emit.stdout).on('stderr', emit.stderr); - } else { - child.stdout.on('data', emit.stdout); - child.stderr.on('data', emit.stderr); - - bus.stdout = child.stdout; - bus.stderr = child.stderr; - } - - if (shouldFork) { - child.on('message', function (message, sendHandle) { - bus.emit('message', message, sendHandle); - }); - } - } - - bus.emit('start'); - - utils.log.detail('child pid: ' + child.pid); - - child.on('error', function (error) { - bus.emit('error', error); - if (error.code === 'ENOENT') { - utils.log.error('unable to run executable: "' + cmd.executable + '"'); - process.exit(1); - } else { - utils.log.error('failed to start child process: ' + error.code); - throw error; - } - }); - - child.on('exit', function (code, signal) { - if (child && child.stdin) { - process.stdin.unpipe(child.stdin); - } - - if (code === 127) { - utils.log.error( - 'failed to start process, "' + cmd.executable + '" exec not found' - ); - bus.emit('error', code); - process.exit(); - } - - // If the command failed with code 2, it may or may not be a syntax error - // See: http://git.io/fNOAR - // We will only assume a parse error, if the child failed quickly - if (code === 2 && Date.now() < config.lastStarted + 500) { - utils.log.error('process failed, unhandled exit code (2)'); - utils.log.error(''); - utils.log.error('Either the command has a syntax error,'); - utils.log.error('or it is exiting with reserved code 2.'); - utils.log.error(''); - utils.log.error('To keep nodemon running even after a code 2,'); - utils.log.error('add this to the end of your command: || exit 1'); - utils.log.error(''); - utils.log.error('Read more here: https://git.io/fNOAG'); - utils.log.error(''); - utils.log.error('nodemon will stop now so that you can fix the command.'); - utils.log.error(''); - bus.emit('error', code); - process.exit(); - } - - // In case we killed the app ourselves, set the signal thusly - if (killedAfterChange) { - killedAfterChange = false; - signal = config.signal; - } - // this is nasty, but it gives it windows support - if (utils.isWindows && signal === 'SIGTERM') { - signal = config.signal; - } - - if (signal === config.signal || code === 0) { - // this was a clean exit, so emit exit, rather than crash - debug('bus.emit(exit) via ' + config.signal); - bus.emit('exit', signal); - - // exit the monitor, but do it gracefully - if (signal === config.signal) { - return restart(); - } - - if (code === 0) { - // clean exit - wait until file change to restart - if (runCmd) { - utils.log.status('clean exit - waiting for changes before restart'); - } - child = null; - } - } else { - bus.emit('crash'); - if (options.exitcrash) { - utils.log.fail('app crashed'); - if (!config.required) { - process.exit(1); - } - } else { - utils.log.fail( - 'app crashed - waiting for file changes before' + ' starting...' 
- ); - child = null; - } - } - - if (config.options.restartable) { - // stdin needs to kick in again to be able to listen to the - // restart command - process.stdin.resume(); - } - }); - - // moved the run.kill outside to handle both the cases - // intial start - // no start - - // connect stdin to the child process (options.stdin is on by default) - if (options.stdin) { - process.stdin.resume(); - // FIXME decide whether or not we need to decide the encoding - // process.stdin.setEncoding('utf8'); - - // swallow the stdin error if it happens - // ref: https://github.com/remy/nodemon/issues/1195 - if (hasStdio) { - child.stdin.on('error', () => {}); - process.stdin.pipe(child.stdin); - } else { - if (child.stdout) { - child.stdout.pipe(process.stdout); - } else { - utils.log.error( - 'running an unsupported version of node ' + process.version - ); - utils.log.error( - 'nodemon may not work as expected - ' + - 'please consider upgrading to LTS' - ); - } - } - - bus.once('exit', function () { - if (child && process.stdin.unpipe) { - // node > 0.8 - process.stdin.unpipe(child.stdin); - } - }); - } - - debug('start watch on: %s', config.options.watch); - if (config.options.watch !== false) { - watch(); - } -} - -function waitForSubProcesses(pid, callback) { - debug('checking ps tree for pids of ' + pid); - psTree(pid, (err, pids) => { - if (!pids.length) { - return callback(); - } - - utils.log.status( - `still waiting for ${pids.length} sub-process${ - pids.length > 2 ? 'es' : '' - } to finish...` - ); - setTimeout(() => waitForSubProcesses(pid, callback), 1000); - }); -} - -function kill(child, signal, callback) { - if (!callback) { - callback = noop; - } - - if (utils.isWindows) { - const taskKill = () => { - try { - exec('taskkill /pid ' + child.pid + ' /T /F'); - } catch (e) { - utils.log.error('Could not shutdown sub process cleanly'); - } - }; - - // We are handling a 'SIGKILL' , 'SIGUSR2' and 'SIGUSR1' POSIX signal under Windows the - // same way it is handled on a UNIX system: We are performing - // a hard shutdown without waiting for the process to clean-up. - if (signal === 'SIGKILL' || osRelease < 10 || signal === 'SIGUSR2' || signal==="SIGUSR1" ) { - debug('terminating process group by force: %s', child.pid); - - // We are using the taskkill utility to terminate the whole - // process group ('/t') of the child ('/pid') by force ('/f'). - // We need to end all sub processes, because the 'child' - // process in this context is actually a cmd.exe wrapper. - taskKill(); - callback(); - return; - } - - try { - // We are using the Windows Management Instrumentation Command-line - // (wmic.exe) to resolve the sub-child process identifier, because the - // 'child' process in this context is actually a cmd.exe wrapper. - // We want to send the termination signal directly to the node process. - // The '2> nul' silences the no process found error message. - const resultBuffer = execSync( - `wmic process where (ParentProcessId=${child.pid}) get ProcessId 2> nul` - ); - const result = resultBuffer.toString().match(/^[0-9]+/m); - - // If there is no sub-child process we fall back to the child process. - const processId = Array.isArray(result) ? result[0] : child.pid; - - debug('sending kill signal SIGINT to process: %s', processId); - - // We are using the standalone 'windows-kill' executable to send the - // standard POSIX signal 'SIGINT' to the node process. This fixes #1720. 
- const windowsKill = path.normalize( - `${__dirname}/../../bin/windows-kill.exe` - ); - - // We have to detach the 'windows-kill' execution completely from this - // process group to avoid terminating the nodemon process itself. - // See: https://github.com/alirdn/windows-kill#how-it-works--limitations - // - // Therefore we are using 'start' to create a new cmd.exe context. - // The '/min' option hides the new terminal window and the '/wait' - // option lets the process wait for the command to finish. - - execSync( - `start "windows-kill" /min /wait "${windowsKill}" -SIGINT ${processId}` - ); - } catch (e) { - taskKill(); - } - callback(); - } else { - // we use psTree to kill the full subtree of nodemon, because when - // spawning processes like `coffee` under the `--debug` flag, it'll spawn - // it's own child, and that can't be killed by nodemon, so psTree gives us - // an array of PIDs that have spawned under nodemon, and we send each the - // configured signal (default: SIGUSR2) signal, which fixes #335 - // note that psTree also works if `ps` is missing by looking in /proc - let sig = signal.replace('SIG', ''); - - psTree(child.pid, function (err, pids) { - // if ps isn't native to the OS, then we need to send the numeric value - // for the signal during the kill, `signals` is a lookup table for that. - if (!psTree.hasPS) { - sig = signals[signal]; - } - - // the sub processes need to be killed from smallest to largest - debug('sending kill signal to ' + pids.join(', ')); - - child.kill(signal); - - pids.sort().forEach((pid) => exec(`kill -${sig} ${pid}`, noop)); - - waitForSubProcesses(child.pid, () => { - // finally kill the main user process - exec(`kill -${sig} ${child.pid}`, callback); - }); - }); - } -} - -run.kill = function (noRestart, callback) { - // I hate code like this :( - Remy (author of said code) - if (typeof noRestart === 'function') { - callback = noRestart; - noRestart = false; - } - - if (!callback) { - callback = noop; - } - - if (child !== null) { - // if the stdin piping is on, we need to unpipe, but also close stdin on - // the child, otherwise linux can throw EPIPE or ECONNRESET errors. - if (run.options.stdin) { - process.stdin.unpipe(child.stdin); - } - - // For the on('exit', ...) handler above the following looks like a - // crash, so we set the killedAfterChange flag if a restart is planned - if (!noRestart) { - killedAfterChange = true; - } - - /* Now kill the entire subtree of processes belonging to nodemon */ - var oldPid = child.pid; - if (child) { - kill(child, config.signal, function () { - // this seems to fix the 0.11.x issue with the "rs" restart command, - // though I'm unsure why. it seems like more data is streamed in to - // stdin after we close. - if (child && run.options.stdin && child.stdin && oldPid === child.pid) { - child.stdin.end(); - } - callback(); - }); - } - } else if (!noRestart) { - // if there's no child, then we need to manually start the process - // this is because as there was no child, the child.on('exit') event - // handler doesn't exist which would normally trigger the restart. - bus.once('start', callback); - run.restart(); - } else { - callback(); - } -}; - -run.restart = noop; - -bus.on('quit', function onQuit(code) { - if (code === undefined) { - code = 0; - } - - // remove event listener - var exitTimer = null; - var exit = function () { - clearTimeout(exitTimer); - exit = noop; // null out in case of race condition - child = null; - if (!config.required) { - // Execute all other quit listeners. 
- bus.listeners('quit').forEach(function (listener) { - if (listener !== onQuit) { - listener(); - } - }); - process.exit(code); - } else { - bus.emit('exit'); - } - }; - - // if we're not running already, don't bother with trying to kill - if (config.run === false) { - return exit(); - } - - // immediately try to stop any polling - config.run = false; - - if (child) { - // give up waiting for the kids after 10 seconds - exitTimer = setTimeout(exit, 10 * 1000); - child.removeAllListeners('exit'); - child.once('exit', exit); - - kill(child, 'SIGINT'); - } else { - exit(); - } -}); - -bus.on('restart', function () { - // run.kill will send a SIGINT to the child process, which will cause it - // to terminate, which in turn uses the 'exit' event handler to restart - run.kill(); -}); - -// remove the child file on exit -process.on('exit', function () { - utils.log.detail('exiting'); - if (child) { - child.kill(); - } -}); - -// because windows borks when listening for the SIG* events -if (!utils.isWindows) { - bus.once('boot', () => { - // usual suspect: ctrl+c exit - process.once('SIGINT', () => bus.emit('quit', 130)); - process.once('SIGTERM', () => { - bus.emit('quit', 143); - if (child) { - child.kill('SIGTERM'); - } - }); - }); -} - -module.exports = run; diff --git a/spaces/zhangyd/bingo/src/components/chat-attachments.tsx b/spaces/zhangyd/bingo/src/components/chat-attachments.tsx deleted file mode 100644 index ef43d4e262935d263b6099138c56f7daade5299d..0000000000000000000000000000000000000000 --- a/spaces/zhangyd/bingo/src/components/chat-attachments.tsx +++ /dev/null @@ -1,37 +0,0 @@ -import Image from 'next/image' -import ClearIcon from '@/assets/images/clear.svg' -import RefreshIcon from '@/assets/images/refresh.svg' -import { FileItem } from '@/lib/bots/bing/types' -import { cn } from '@/lib/utils' -import { useBing } from '@/lib/hooks/use-bing' - -type ChatAttachmentsProps = Pick, 'attachmentList' | 'setAttachmentList' | 'uploadImage'> - -export function ChatAttachments({ attachmentList = [], setAttachmentList, uploadImage }: ChatAttachmentsProps) { - return attachmentList.length ? ( -
-    <div className="attachment-list">
-      {attachmentList.map(file => (
-        <div className="file-item" key={file.url}>
-          {file.status === 'loading' && (
-            <div className="loading">
-              <div className="bar"/>
-            </div>
-            )
-          }
-          {file.status !== 'error' && (
-            <div className="thumbnail-inner">
-              <img className="thumbnail" src={file.url} alt=""/>
-            </div>
-            )
-          }
-          {file.status === 'error' && (
-            <div className="error">
-              <Image alt="refresh" src={RefreshIcon} onClick={() => uploadImage(file.url)} />
-            </div>
-          )}
-          <Image className="dismiss" alt="" src={ClearIcon} onClick={() => setAttachmentList(attachmentList.filter(item => item.url !== file.url))} />
-        </div>
-      ))}
-    </div>
                - ) : null -} diff --git a/spaces/zhuolisam/resume-ranker/README.md b/spaces/zhuolisam/resume-ranker/README.md deleted file mode 100644 index 21e81f2d493a69bfcbbef852cba8379583d827d4..0000000000000000000000000000000000000000 --- a/spaces/zhuolisam/resume-ranker/README.md +++ /dev/null @@ -1,33 +0,0 @@ ---- -title: Resume Ranking -emoji: 📊 -colorFrom: yellow -colorTo: indigo -sdk: streamlit -sdk_version: 1.21.0 -app_file: app.py -pinned: false ---- - -# resume-ranker -
-
-## How to Use?
-
-Install all the dependencies with:
-
-```bash
-./install.sh
-```
-
-Run the Streamlit app with:
-
-```bash
-streamlit run app.py
-```
-
-Or run it with:
-
-```bash
-python demo.py
-```
diff --git a/spaces/zomehwh/vits-models-pcr/monotonic_align/__init__.py b/spaces/zomehwh/vits-models-pcr/monotonic_align/__init__.py
deleted file mode 100644
index e97eecc595dd3bd97d0104ec62799e2e5efea57c..0000000000000000000000000000000000000000
--- a/spaces/zomehwh/vits-models-pcr/monotonic_align/__init__.py
+++ /dev/null
@@ -1,20 +0,0 @@
-from numpy import zeros, int32, float32
-from torch import from_numpy
-
-from .core import maximum_path_jit
-
-
-def maximum_path(neg_cent, mask):
-  """ numba optimized version.
-  neg_cent: [b, t_t, t_s]
-  mask: [b, t_t, t_s]
-  """
-  device = neg_cent.device
-  dtype = neg_cent.dtype
-  neg_cent = neg_cent.data.cpu().numpy().astype(float32)
-  path = zeros(neg_cent.shape, dtype=int32)
-
-  t_t_max = mask.sum(1)[:, 0].data.cpu().numpy().astype(int32)
-  t_s_max = mask.sum(2)[:, 0].data.cpu().numpy().astype(int32)
-  maximum_path_jit(path, neg_cent, t_t_max, t_s_max)
-  return from_numpy(path).to(device=device, dtype=dtype)
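For context, the deleted `monotonic_align/__init__.py` above is the usual VITS-style alignment search: it takes a batch of score and mask tensors and returns a hard monotonic path. The snippet below is a minimal usage sketch, not part of any of the deleted files, and it assumes the numba-compiled `core.maximum_path_jit` extension is built and importable, as that `__init__.py` expects.

```python
import torch
from monotonic_align import maximum_path  # the package whose __init__.py is shown above

# Toy example: batch of 1, t_t = 4 decoder frames, t_s = 6 text tokens.
neg_cent = torch.randn(1, 4, 6)  # per-position alignment scores (higher = better)
mask = torch.ones(1, 4, 6)       # 1 where a (frame, token) pair is valid

# Returns a 0/1 tensor of shape [1, 4, 6] marking one monotonic path per batch item,
# cast back to the device and dtype of neg_cent.
path = maximum_path(neg_cent, mask)
print(path.shape, path.dtype)  # torch.Size([1, 4, 6]) torch.float32
```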